vRealize Automation 7: Adding NSX Integration

vRealize Automation 7 introduces several major new networking and security integration features, including support for on-demand security groups, on-demand load balancers, and security tags directly from the blueprint canvas.

To use these features, you'll need a functioning NSX installation, and then complete the following configuration tasks to get it all working:

1. Add an NSX Manager in your vRealize Orchestrator client

vRealize Automation 7 uses vRealize Orchestrator to execute operations against NSX. Navigate to the vRA 7 landing page and click the Orchestrator Client link (if you are using the embedded vRO; otherwise, navigate to the landing page for your external vRO appliance).

vRealize Automation Landing Page

Use your administrator@vsphere.local credentials (or vRO admin credentials) to log in, and navigate to /Library/NSX/Configuration/Create NSX Endpoint to start the endpoint creation:

vRealize Orchestrator: Add a NSX Endpoint

After the endpoint is added, verify you are able to browse the NSX inventory on the inventory tab:

Browse NSX Inventory from vRO

This completes the addition of NSX to vRO.
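If you want to sanity-check the NSX Manager itself before pointing vRO at it, a quick REST call works well. The sketch below uses only the Python standard library; the hostname and credentials are placeholders, and the /api/2.0/services/vcconfig path assumes NSX-v:

```python
import base64
import urllib.request

def nsx_request(host, path, username, password):
    """Build an authenticated GET request for the NSX Manager REST API."""
    url = "https://{}{}".format(host, path)
    request = urllib.request.Request(url)
    creds = "{}:{}".format(username, password).encode()
    request.add_header("Authorization", "Basic " + base64.b64encode(creds).decode())
    return request

# GET /api/2.0/services/vcconfig returns the vCenter registration -- a quick
# check that the manager is reachable and the credentials are valid.
# Hostname and credentials below are placeholders, not from my lab.
req = nsx_request("nsx-manager.lab.local", "/api/2.0/services/vcconfig",
                  "admin", "changeme")
# urllib.request.urlopen(req)  # uncomment to run against a live manager
print(req.full_url)
```

If this call returns the vCenter registration XML, the manager is up and the same credentials should work in the vRO endpoint workflow.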

2. Add Orchestrator as an endpoint to vRealize Automation

To use Orchestrator to manipulate NSX objects, it must be added as a new endpoint. Do not confuse this with the Orchestrator configuration options under the Administration tab: those are used by XaaS blueprints rather than VM provisioning. The endpoint configuration options you need are located under Infrastructure / Endpoints.

Add Orchestrator Endpoint

One important change to note for the embedded vRealize Orchestrator 7: the API interface is NOT running on port 8281 anymore. I've posted my example URL below:


Make sure to use credentials you have tested by logging in to the vRO instance with them. You will also need to add a custom property for vRO priority: VMware.VCenterOrchestrator.Priority = 1 (or another number if you are ordering multiple vRO instances).
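To illustrate how the priority property behaves when more than one vRO endpoint exists, here's a small sketch. The endpoint names are made up, and the ordering rule (lower number wins) reflects my understanding of how vRA selects between instances:

```python
# Sketch: ordering vRO endpoints by the VMware.VCenterOrchestrator.Priority
# custom property. Endpoint names are hypothetical; lower numbers are
# preferred first.
def order_vro_endpoints(endpoints):
    """Sort endpoints ascending by their vRO priority custom property."""
    prop = "VMware.VCenterOrchestrator.Priority"
    return sorted(endpoints, key=lambda e: int(e["properties"][prop]))

endpoints = [
    {"name": "vro-external", "properties": {"VMware.VCenterOrchestrator.Priority": "2"}},
    {"name": "vro-embedded", "properties": {"VMware.VCenterOrchestrator.Priority": "1"}},
]
print([e["name"] for e in order_vro_endpoints(endpoints)])
```

With a single vRO instance, a priority of 1 is all you need; the ordering only matters once you register multiple instances.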

vRO Endpoint Config

Once you've added the endpoint, make sure data collection completes successfully. If data collection fails, go back through your endpoint configuration and verify it is correct.

vRO Data Collection


3. Specify manager for network and security platform

Once Orchestrator has been configured to connect to the NSX Manager, you need to bind the NSX Manager to the vSphere endpoint. This is done under Infrastructure / Endpoints (if you do not see this tab, try logging in as configurationadmin@vsphere.local or administrator@vsphere.local):

Networking and Security Config

Once configured, you should be able to initiate data collection on the compute resource (not located on the properties of the endpoint). The compute resources are available at Infrastructure / Compute Resources / Compute Resources; you should see your relevant compute resources listed by cluster name:

Compute Resources Data Collection

When in the data collection screen, scroll to the bottom and check Networking and Security Data Collection. If everything is configured correctly, you should see a successful data collection after it completes:



Once data collection completes successfully, you should be able to include NSX constructs in your blueprint designs. Below, you can see an on-demand security group being added to a blueprint:

Networking and Security Blueprint Components


The on-demand and blueprint layout features make configuring and deploying complex multi-tier applications and custom firewall rulesets in NSX significantly easier. Less than six months ago, this sort of feature set would have required extensive custom vRO code; it is great to see it in the core product now.

Another Home Lab

Hello and welcome to automatevi.com. My name is Justin Jones, and I'm currently employed as a Senior Consultant in Integration and Automation at VMware. This post documents my home lab build, with a little twist on the standard write-up: as a remote employee, I work from home a great deal of the time, so I've also included tips and tricks I've found helpful for making my work life as a home office worker easier and more productive.

Priorities (Use Case)

  1. Low Power (includes bonuses such as quiet, low heat emission, and lower electricity bill)
  2. High Capacity (Specifically Memory and Storage)
  3. High Performance

My home lab resides in my home office, where I spend a good deal of my workday on the phone. Standard rackmount servers that sound like a hair dryer on full blast (and put out an equivalent amount of heat) were out of the question. One popular solution I've seen is Mac Minis. I believe these are a pretty solid choice, but with a maximum of 16GB of RAM each (from my observations, home labs are typically memory constrained), I would need six Mac Minis to reach ~96GB of RAM capacity.

The ESXi Hosts

  • x3 Shuttle SH67H3 ($250 each)
    • Low Power i5 2400S CPUs ($200)
    • 32GB Memory ($325)
    • Dual Port PCI-E GB NIC (HP NC360T) ($40)
    • Cost per unit: about $800, mostly in RAM


  • Synology DS1813+ (8 bay, $1,000)
    • x4 Crucial M500 CT 960 SSDs – 960 GB -($500 each)
    • x4 Samsung HD204UI – 2 TB – ($125 each)

My lab predates the availability of vSAN. If I were building a three-node lab today, I'd give vSAN serious consideration. For those who don't have licenses available, or who want a storage system for more than just VMs, a nice NAS like the Synology is worth a try. I'm planning a separate post detailing its use.

During my day to day job, I provision probably 5 VMs a day on average,  and as many as 20-30+ on a heavy day.  This is because I write and test software integrations that modify VM pre and post build processes,  so part of debugging my code is frequently building a VM.  Yes, I do use linked clones in some cases, but sometimes code needs to be tested in ways that exactly reproduce client configurations, and linked clones cannot be used.  If you do the math, an average of 5 VMs per day is 25 a week, or 1300 per year.

That means shaving one minute off provisioning time equates to over 21 hours per year that I get back instead of waiting to see if a code change fixed a bug. With that kind of time at stake, four 1TB SSDs in RAID 5 make a lot of sense 🙂
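The back-of-the-envelope math above, spelled out:

```python
# Sanity check on the provisioning-time math: 5 VMs/day over a 5-day week,
# 52 weeks/year, saving 1 minute per VM build.
vms_per_day = 5
work_days_per_week = 5
weeks_per_year = 52

vms_per_week = vms_per_day * work_days_per_week   # 25
vms_per_year = vms_per_week * weeks_per_year      # 1300

minutes_saved = vms_per_year * 1                  # 1 minute saved per VM
hours_saved = minutes_saved / 60                  # ~21.7 hours per year

print(vms_per_year, round(hours_saved, 1))
```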


Low-end commercial switches like the PowerConnect and ProCurve lines can be had for less than $200 each. Given my goals of low noise and low power consumption, I chose the PowerConnect 2816 and 2808, which are fanless, low power, and compact.

  • Dell PowerConnect 2816
  • Dell PowerConnect 2808
  • Asus RT-N66U Router
  • Arris Cable Modem

Host Utilization


Think 96GB of RAM for a home lab is a lot?  It goes pretty quick:

Home Lab Host Utilization

And the Virtual Machines:

Home Lab VMs

So, revisiting the originally stated goals, let’s take a look at power consumption.  I’m using a Belkin Conserve Insight to measure my power consumption.


Belkin Insight


261W Total Power Consumption, for a home lab with:

  • 29 GHz of CPU
  • 96 GB of RAM
  • 8TB of NAS Storage (2.5 TB of which is SSD)
  • 3 ESXi Hosts with x3 1Gb links each
  • 2 Gigabit switches (16-port, 8-port)
  • Router
  • Cable Modem

All using less power than a 27″ iMac under full load.


Home Lab Photo

Hope this post gives you some ideas, feel free to contact me if you have any questions!