vRealize Operations 6 (vROps) has different SSL certificate generation requirements than the older version. I have not found them publicly documented anywhere, so here they are:
- Generate private key:
openssl genrsa -out vrops.key 2048
- Create certificate signing request:
openssl req -new -key vrops.key -out vrops.csr -sha256
- Sign vrops.csr with your Certificate Authority
- Create a PEM text file which contains the signed certificate from #3, the private key from #1 and the CA certificate (optionally intermediate certificates as well)
- Go to the vROps admin portal and click the certificate icon in the top right corner (next to admin). Install the new certificate by uploading the PEM file from #4.
Wait a little bit and then re-login. Reboot is not necessary.
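The PEM assembly in step 4 is a plain concatenation. A minimal sketch follows; it generates throwaway self-signed material so it runs standalone, but in practice you would use the key from step 1, the CA-signed certificate from step 3, and your real CA certificate (all filenames here are examples):

```shell
# Generate throwaway material standing in for steps 1-3 (illustration only).
openssl genrsa -out vrops.key 2048 2>/dev/null
openssl req -new -x509 -key vrops.key -out vrops.crt -days 1 -subj "/CN=vrops.example.com"
cp vrops.crt ca.crt   # stand-in for the CA certificate

# Step 4: one PEM file with the signed cert, the private key and the CA chain.
cat vrops.crt vrops.key ca.crt > vrops.pem

grep -c "BEGIN" vrops.pem   # expect 3 blocks: cert, key, CA cert
```

The order shown (certificate, key, CA chain) is what worked for me; vROps rejects the upload if any of the three pieces is missing.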
One of the deployment options for vCenter Single Sign-On 5.5 (SSO) is high availability mode. It usually consists of two load-balanced SSO nodes deployed in a single-site configuration. It is quite complex to set up and manage, so I usually advise customers to avoid such a configuration and instead co-deploy SSO together with vCenter Server on the same virtual machine – this results in the same availability for both the vCenter service and SSO.
However, there are cases when you cannot do this and need to deploy a highly available SSO instance. One is when you want multiple vCenter Servers in the same SSO domain with single-pane-of-glass Web Client management. Another is a vRealize Automation (vRA) deployment, which also requires SSO.
VMware has published two whitepapers about the topic. The first, VMware vCenter Server 5.5 Deploying a Centralized VMware vCenter Single Sign-On Server with a Network Load Balancer, unfortunately adds unnecessary complexity to the whole process. The paper also describes active – active load balancing of the nodes, which is however an unsupported configuration (see here). While active – active load balancing might work with vCenter Server services, it does not work with vRealize Automation (vCAC). This is due to the tokens used for solution authentication – WS-Trust tokens are stateless but WebSSO tokens are not. Also, from what I have heard, vSphere 6 will not work in active – active configuration at all.
The second whitepaper, Using VMware vCenter SSO 5.5 with VMware vCloud Automation Center 6.1, is more recent, and while you see vCAC/vRA in its title, it very much applies to pure vSphere environments as well (skip the vRA-specific chapters). It is the one I would recommend. It also describes an Active – Passive configuration of the F5 load balancer.
The topic of this article is, however, the use of the NSX load balancer instead of F5. Contrary to the vCNS load balancer, NSX can be configured in Active – Passive mode, and thus you can create a supported HA SSO configuration with pure VMware solutions.
I will not go too deep into the SSO-specific configurations in the HA setup (did I mention it is complex?) as they are very well described in the second whitepaper mentioned above – instead I will focus on the NSX part of the configuration.
The architecture is as follows: two SSO nodes with a dedicated NSX load balancer in proxy (on a stick) mode. This means the LB is not inline with the traffic; instead it has only one interface and SNATs and DNATs the traffic to the nodes. While an inline transparent-mode configuration is also possible, I believe the on-a-stick config is simpler and provides better resiliency (a dedicated LB appliance for each application).
Here are the steps for NSX load balancer configuration:
- Deploy an Edge Services Gateway for the load balancer with one interface, preferably in the same subnet as the SSO nodes.
- Enable the Load Balancer feature.
- Upload the CA certificate and the SSO certificate. See the second whitepaper on how to create the SSO certificate.
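Before uploading, it is worth confirming that the certificate and the private key actually belong together. A quick sketch using the RSA modulus trick (filenames are examples; a throwaway pair is generated here so the snippet runs standalone):

```shell
# Generate a throwaway key and self-signed cert just for demonstration;
# in practice, point the checks below at your real SSO cert and key.
openssl genrsa -out sso.key 2048 2>/dev/null
openssl req -new -x509 -key sso.key -out sso.crt -days 1 -subj "/CN=sso.example.com"

# A cert and key match when their RSA moduli are identical.
cert_mod=$(openssl x509 -noout -modulus -in sso.crt)
key_mod=$(openssl rsa -noout -modulus -in sso.key 2>/dev/null)
[ "$cert_mod" = "$key_mod" ] && echo "cert and key match"
```

A mismatch here is a common cause of the NSX certificate import silently misbehaving later.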
- Configure service monitoring. While you could use the default TCP health check, I prefer a custom HTTPS-type health check that monitors the /lookupservice URL.
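The same endpoint the monitor probes can be spot-checked by hand. A small sketch, assuming the plain /lookupservice path on port 7444 (the hostname and helper name are mine):

```shell
# Build the URL the custom HTTPS health check probes on a given SSO node.
sso_health_url() {
  printf 'https://%s:7444/lookupservice' "$1"
}

# Manual probe; -k skips validation of the node's self-signed certificate:
# curl -k -s -o /dev/null -w '%{http_code}\n' "$(sso_health_url sso-node1.example.com)"
```

If the manual probe does not return an HTTP status at all, the NSX monitor will mark the node down as well, so this is a useful first troubleshooting step.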
- Create Application Profile. During SSO node configuration, before the custom certificates are exchanged on each node, you would use a simple TCP profile or perhaps an SSL passthrough profile (as the SSL certificate configured in NSX would not match the self-signed certificates on the nodes). Another alternative is to edit /etc/hosts on each SSO node so the VIP hostname points to the node itself (this is described in the first whitepaper). Once you replace the certificates on the nodes, you can use SSL termination on the load balancer: configure the VIP certificate and the Pool Side certificate, and also enable the Insert X-Forwarded-For HTTP header option so in theory we could see where the authentication request is coming from (unfortunately the SSO access log does not display this information).
- Create Application Rule. Here we define the logic that performs the active – passive load balancing. Each SSO node will be in a separate pool, with the primary node's pool set as default. An ACL rule checks whether the primary node is up; if not, we switch the backend to the secondary node's pool. The pool names must match the ones we will create in the next step.
# detect if pool "SSO_primary" is still UP
acl SSO_primary_down nbsrv(SSO_primary) eq 0
# use pool "SSO_secondary" if "SSO_primary" is dead
use_backend SSO_secondary if SSO_primary_down
- Create the SSO_primary and SSO_secondary pools. Each will have one SSO node with the health check from step 4 and port 7444. Notice that I have defined the pool member as the vCenter VM container object so NSX will retrieve its IP address dynamically via VMware Tools. While I could hardcode the node IP address, this is a nice showcase of NSX – vCenter integration. In inline mode you would check the Transparent checkbox for each pool.
- Now we can create the virtual server. Select the Application Profile from step 5, the Default Pool from step 7 and, in the Advanced tab, the Application Rule from step 6. For the VIP I used the LB's default IP (from step 1) and HTTPS port 7444.
- As a last step, do not forget to disable the firewall or create a firewall rule for the IP and port defined in the previous step.
One of the problems with testing NSX in a homelab environment is that it is really resource hungry. For example, the NSX Manager VM deploys with 12 GB RAM. While it is simple to edit its settings and lower the memory to about 8 GB without any major impact, the VMs that NSX deploys automatically (Controllers and Edges) cannot be edited in the vSphere Client, as their Edit Settings menu option is disabled. Each NSX Controller requires 4 vCPUs and a 2.05 GHz CPU reservation. If you go by the book and deploy three of them, it creates quite a resource impact.
vCPUs can be changed by editing the VM's VMX file, or by hacking the controller OVF file on the NSX Manager from which it is deployed (located in common/em/components/vdn/controller/ovf/nsx-controller-<version>.vxlan.ovf) – if you know how to get to support engineer mode or are not afraid of mounting Linux partitions. The CPU reservation cannot be changed this way.
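The VMX edit itself is a one-liner. A sketch, assuming the VM is powered off first and uses the standard numvcpus key (the helper name and the example path are mine; ESXi's busybox sed may need a slightly different -i syntax than the GNU sed shown here):

```shell
# Hypothetical helper: rewrite the numvcpus entry in a .vmx file.
# The VM must be powered off, and may need to be re-registered afterwards.
set_vcpus() {
  vmx="$1"; count="$2"
  sed -i.bak "s/^numvcpus = .*/numvcpus = \"$count\"/" "$vmx"
}

# e.g. set_vcpus /vmfs/volumes/datastore1/nsx-controller-1/nsx-controller-1.vmx 2
```

The -i.bak keeps a backup of the original file, which is worth having since NSX Manager will not expect a hand-edited controller VMX.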
The approach I use is to disable vCenter Server's protection of the NSX Controller VMs: find their MoRef IDs (for example with the vCenter Managed Object Browser) and then delete the respective records from the vCenter Server VPX_DISABLED_METHODS table. After a restart of the vCenter service, the VMs are no longer protected and you can simply edit their settings in the vSphere Client.
Disclaimer: this is unsupported and should not be done in production environments.
After passing my VCP-NV – the entry-level certification for VMware NSX network virtualization technology – as soon as it was available, I scheduled its advanced exam, called VMware Certified Implementation Expert. While there is "Expert" in the name, it should not be confused with VCDX (VMware Certified Design Expert), which is the next-level certification. The VCIX is actually more similar to the vSphere or Cloud VCAP Administration exams (VCAP-DCA or VCAP-CIA). Although the exam was announced in October, I had to wait till the end of November to take it, with one test center asking me to reschedule it elsewhere.
The exam concept is very similar to other VCAP Administration exams – about 4 hours of lab tasks that are usually related to each other. The lab is a nested virtual environment, hosted most likely somewhere on the US west coast, which you access remotely on a small 4:3 low-resolution test-center monitor through a test app that switches screens between the questions and an RDP jump box. The awkwardness of switching between the questions and the environment (copy-paste does not work), together with high latency that makes VM console screens redraw line by line (I had to make them smaller to actually see ping results in real time) and PDF manual reading impossible – all that makes the test experience very hard. vSphere Web Client slowness and browser crashes did not help either.
I also had issues understanding some of the questions – I provided feedback through the test tool (bad idea – you will lose precious time) and internally at VMware, so I hope that will get fixed in the future.
On the other hand, the tasks you have to implement or troubleshoot were actually a lot of fun. If I did them in my own lab they would not be that hard, but I must admit that I ran out of time, skipped 2 tasks and did not solve 2 troubleshooting ones. As there are altogether about 17 questions, I was not sure if I had passed, but today (after 1 week of waiting) I received an email with a passing score.
So what about preparation? I would recommend going through the Hands-on Labs or the NSX Install, Configure, Manage (ICM) course labs, as those are very similar to the testing lab. For troubleshooting, I would propose building your own lab from scratch and redoing all the lab tasks from the ICM course – this provides experience in quickly finding the touch points to check when something is not working (emphasis on quickly, as there will not be enough time to read the manual and experiment in the actual exam). Work through all the chapters mentioned in the blueprint; the exam covers most of them.
Just a short post.
vCloud Director 5.6.3 – the first release solely for service providers – was released on Tuesday. There is, however, no upgrade path from vCloud Director 5.5.2, which was released a month ago. So if you are a service provider, do not upgrade to 5.5.2 unless you want to wait for the next VCD-SP release which will support the upgrade.