vRealize Orchestrator Client with 4K Screen

This is a quick tip for those who want to run the vRealize Orchestrator client on a 4K screen in Windows 10 and cannot see anything because the font is tiny and does not scale. Full credit goes to @joerglew, who published it on our internal Socialcast, but I have not seen it on the public internet.

Download the client.jnlp file from https://vro-address:8281/vco/client/client.jnlp (the Start Orchestrator Client link).

Edit the file in Notepad and add the line <property name="sun.java2d.dpiaware" value="false"/> into the resources section of the XML.
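
A minimal sketch of where the line goes (the comment stands in for whatever j2se, jar and other elements your downloaded file already contains; only the added property matters):

<resources>
    <!-- keep the existing j2se, jar and property elements as they are -->
    <property name="sun.java2d.dpiaware" value="false"/>
</resources>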

That’s it. Enjoy nice large font on your 4K screen.

Layer 2 VPN to the Cloud – Part II

Almost three years ago I published an article on how to set up a layer 2 VPN between an on-prem vSphere environment and a vCloud Director Org VDC.

Both vCloud Director and NSX have evolved quite a bit since then and simplified the whole setup, so here comes part II.

Let me first summarize the use case:

The tenant has an application that resides on three different VLAN-based networks running in its own (vSphere) datacenter. The networks are routed with an existing physical router. The tenant wants to extend two of these networks to the cloud for cloud bursting or DR purposes, but not the third one (for example, because a physical database server runs there).

The following diagram shows the setup.

The main advancements are:

  • vCloud Director natively supports NSX L2 VPN (VCD 8.20 or newer needed).
  • NSX (since 6.2) supports direct configuration of unstretched networks (static routes are no longer necessary).
  • This means the full setup can be done by the tenant in a self-service fashion.

Here are the steps:

  • The tenant deploys the freely available NSX Standalone Edge in its datacenter, connected to a trunk port with the two VLANs mapped (10 and 11). Additional network configuration is necessary (forged transmits and promiscuous mode, or sink port creation – see the link).
  • In the cloud Org VDC the tenant deploys two routed Org VDC networks with the same subnets and gateways as networks A and B. These networks must be connected to the Org VDC Edge GW via subinterfaces (there can be up to 200 such networks on a single Edge). The Org VDC Edge GW must have advanced networking enabled.
  • The tenant enables and configures the L2VPN server on its Org VDC Edge GW. Note that this is a premium feature that the service provider must first enable for the Organization (see this blog post).
  • Before the L2VPN tunnel is established, the following must be taken into account:
    • The Org VDC Edge GW IP addresses are identical to those of the existing on-prem physical router. Therefore, Egress Optimization Gateway addresses must be entered in the Peer Site configuration; this prevents the Org VDC Edge GW from sending ARP replies over the tunnel.
    • The same must be done on the Standalone NSX Edge via the CLI (see the egress-optimize command here).
    • The non-stretched network (subnet C) must be configured on the Org VDC Edge GW so that it knows the subnet is reachable through the tunnel and not via its upstream interface(s). This option, however, is not available in the vCloud UI; instead, the vCloud networking API must be used (see the sketch after this list).
      Alternatively, the provider could configure the non-stretched network directly in the NSX UI:
    • Finally, the tunnel can be established by configuring the L2VPN server details on the on-prem Standalone NSX Edge L2VPN client (endpoint IP, port, credentials, encryption) and providing the VLAN-to-tunnel mappings.
    • Note: to find the Org VDC network subinterface tunnel mapping, the vCloud API must be used again, as sketched below:
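
The following is only a rough sketch: the endpoint is the NSX proxy part of the vCloud networking API, and the exact path and headers are assumptions that should be verified against the documentation for your vCloud Director release (the edge gateway ID is a placeholder):

GET /network/edges/{org-vdc-edge-gateway-id}

x-vcloud-authorization:{{x-vcloud-authorization}}

The returned Edge configuration lists the subinterface-connected Org VDC networks with their tunnel IDs, which are the values to map against the on-prem VLANs on the standalone Edge; the L2VPN configuration itself (including egress optimization and non-stretched subnet settings) should sit under the same edge at an l2vpn/config sub-resource.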

vCloud Director vApp Runtime Lease Expiration Action

In vCloud Director it is possible to configure vApp leases. The maximums are set by the system admin at the Organization level (in Policies), can be lowered by the Org Admin (at the org level), and are set by the vApp owner at the vApp level. A vApp has a runtime lease (how long it can stay in the running state) and a storage lease (how long it can consume storage once it is not running).

vApp leases are very useful in test & dev or lab environments to make sure abandoned, unused VMs are not running and taking resources.

When a vApp lease is coming to an end, its owner gets a reminder via email (how many days before expiration is configurable in User Preferences) and can optionally reset the vApp lease to avoid the vApp being stopped or deleted.

By default, an expired running vApp is put into the suspended state, which means its memory content is saved to the datastore. This ensures a fully consistent state upon subsequent power on of the vApp. However, this may not always be needed, especially in dev/lab situations: the memory content can take a lot of storage space, and saving e.g. a VM with 16 GB RAM to the datastore can also have an I/O performance impact. As of vCloud Director 8.20 the Organization Administrator can instead change the default runtime expiry action to power off. The setting is done at the Org level and must be done via the API by setting the element <PowerOffOnRuntimeLeaseExpiration> of OrgLeaseSettingsType to true. The API version must be at least 25.0.

PUT /api/admin/org/eea1f10c-3fee-43d7-bd8e-be63453d6e34/settings/vAppLeaseSettings

Accept:application/*+xml;version=29.0
x-vcloud-authorization:{{x-vcloud-authorization}}
Content-Type:application/vnd.vmware.admin.vAppLeaseSettings+xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<VAppLeaseSettings xmlns="http://www.vmware.com/vcloud/v1.5">
    <DeleteOnStorageLeaseExpiration>true</DeleteOnStorageLeaseExpiration>
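    <!-- 1209600 s = 14 days runtime lease, 15552000 s = 180 days storage lease; example values, adjust as needed -->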
    <DeploymentLeaseSeconds>1209600</DeploymentLeaseSeconds>
    <StorageLeaseSeconds>15552000</StorageLeaseSeconds>
    <PowerOffOnRuntimeLeaseExpiration>true</PowerOffOnRuntimeLeaseExpiration>
</VAppLeaseSettings>

Note:

When the vApp expiry action is set to power off, the actual VM stop procedure, power off (hard) versus shutdown (graceful), depends on the vApp's configuration for each VM (the Starting and Stopping VMs tab).

Also note that a subsequent edit of the Org policies in the UI will reset the Org PowerOffOnRuntimeLeaseExpiration setting back to its default (false).
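
To check the current value (for example after someone edits the policies in the UI), the same settings resource used in the example above can be read back:

GET /api/admin/org/eea1f10c-3fee-43d7-bd8e-be63453d6e34/settings/vAppLeaseSettings

Accept:application/*+xml;version=29.0
x-vcloud-authorization:{{x-vcloud-authorization}}

The response should still contain <PowerOffOnRuntimeLeaseExpiration>true</PowerOffOnRuntimeLeaseExpiration> if the setting has not been reset.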

vSphere Replication Issue with ESXi 6.5U1

This is a quick post to highlight an issue vSphere Replication has with ESXi 6.5U1 for to-the-cloud replication.

Only customers that use vSphere Replication for DR or migrations to cloud endpoints (e.g. vCloud Availability for vCloud Director) with ESXi 6.5U1 hosts are affected (ESXi 6.5 and older work fine). Host-to-host replication is also not affected.

The root cause is that ESXi 6.5U1 hosts are unable to retrieve from the vSphere Replication Appliance the vr2c-firewall.vib that is responsible for opening outgoing communication ports for replication traffic in the ESXi host firewall.

This results in the inability to perform any to-the-cloud replications. To see the issue, look at the host Firewall configuration in the Security Profile section. If you do not see the Replication-to-Cloud Traffic section, you are affected.

The picture below shows which traffic it relates to (red rectangle on the left):

If you look into esxupdate.log on the host, you will see the error: [Errno 14] curl#56 – "Content-Length: in 200 response".

Until a fix is released, here is a workaround:

  1. Download the vr2c-firewall.vib from the vSphere Replication Appliance: https://vSphere-Replication-Appliance-ip-or-fqdn:8043/vib/vr2c-firewall.vib.
  2. Upload the vib to a shared location (a datastore).
  3. Install the vib to every host with the following command: esxcli software vib install -v /vmfs/volumes/<datastore>/vr2c-firewall.vib
  4. Verify the fix was installed properly with: esxcli software vib list | grep vr2c
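
As an additional sanity check (a sketch only; the exact ruleset names may differ between versions, so adjust the grep filter as needed), the host firewall rulesets can be listed from the same shell to confirm the replication-to-cloud rules appeared after the install:

esxcli network firewall ruleset list | grep -iE "repl|vr2c"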

Missing Licensing Metrics in vCloud Director 9.0

You might have noticed that vCloud Director 9.0 no longer displays licensing metrics.

This was a conscious decision: licensing has for some time been handled externally by vCloud Usage Meter, and the metering in vCloud Director caused vCloud database bloat.

If for some reason you still need these metrics available in the UI, the vCloud API or the vROps Management Pack, you can enable license metering with the following command on a cell. There is no need for a reboot; just wait a few minutes for the next data collection.

$VCLOUD_HOME/bin/cell-management-tool manage-config -n licensing.metrics.vm.enabled -v true