Transform Cloud Automation and Availability

We are constantly working to share cool solutions with customers and have them available for demonstration. During CiscoLive London we unveiled our Transform Cloud Automation demo. It expands upon last year’s Workload Mobility demo (linked here), where we leveraged Vblock, VPLEX and OTV to protect mission-critical workloads. Many customers rely on management and orchestration tools and are demanding they get the same protection as clustered databases or e-mail environments. These administrative services are not traditionally designed to be highly available. Running them on a Vblock protected by VPLEX enables rapid recovery from a hardware outage and also lets applications provisioned in the same environment be protected.

This video highlights the benefits of the solution and how an administrator would interact with it…

Transform Cloud Automation

The video begins with Cisco’s Intelligent Automation for Cloud (CIAC) portal. We build a custom workflow to provision VMs and let the user choose which data center the virtual application (vApp) will be provisioned in. The vApp is provisioned in vCloud Director, adding another layer of abstraction for multi-tenancy. vCloud provisions vApps onto a stretched cluster in a “gold tier” of service (selected by the requester in the CIAC provisioning window). We also have a silver-tier cluster which is not protected by VPLEX (not shown). Our stretched cluster is enabled by EMC VPLEX Metro (which VCE integrates as part of the Vblock solution) and Cisco OTV. EMC VPLEX is a hardware solution that takes a storage LUN, replicates it over distance and allows read/write access from two locations simultaneously. As shown in the diagram, this requires two Vblocks with VPLEX clusters on each side holding the same resources. Our demo has three ESXi hosts at each site with three 500GB datastore LUNs. VPLEX sits between the hosts and the storage array, presenting and protecting these LUNs. Write operations from each host are mirrored synchronously to both sites, while read operations are cached locally. VPLEX maintains data consistency during an outage, and rules can be defined so the system reacts appropriately when one occurs.
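The write/read behavior described above can be sketched in a few lines of Python. This is a toy model of my own, not EMC code: the point is simply that a distributed volume acknowledges a write only after both sites hold the data, while reads are served locally.

```python
# Toy model (my own sketch, not EMC code) of the distributed-volume
# behavior described above: a write is acknowledged only after both
# sites hold the data; a read is served from the local site.
class SiteLeg:
    """One side of the distributed volume (one Vblock/VPLEX cluster)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # block address -> data

class DistributedVolume:
    def __init__(self, leg_a, leg_b):
        self.legs = (leg_a, leg_b)

    def write(self, addr, data):
        # Synchronous mirroring: commit to BOTH legs before the ack.
        for leg in self.legs:
            leg.blocks[addr] = data
        return "ack"

    def read(self, addr, local_leg):
        # Reads are satisfied from the local leg's copy (no WAN trip).
        return local_leg.blocks[addr]

a, b = SiteLeg("Site-A"), SiteLeg("Site-B")
vol = DistributedVolume(a, b)
vol.write(0, "payload")
print(vol.read(0, b))  # prints: payload
```

The synchronous step is what keeps both sites crash-consistent; it is also why WAN latency matters so much, as discussed later.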

That covers the datastore, but not the network. Traffic such as vMotion, and the virtual machines themselves, are tied to specific IP addresses, and traditional data center design dictates unique IP address space per location. Cisco OTV on the Nexus 7000 series switch can extend Layer 2 VLANs between multiple locations, based upon your needs and configuration. OTV is relatively simple to set up and has been designed to limit WAN bandwidth consumption by sending only the Layer 2 traffic required. Details on OTV can be found here.

Diagram: Transform Cloud VPLEX Vblock solution

There is a lot of cool stuff going on in this demo, but our focus is on the ease of use and management of the environment once it’s in production. We plan on enhancing the demo throughout 2013 and would be happy to share.

VCE Vision Intelligent Operations

Vision Intelligent Operations Software for Vblock Systems

Vision enables simple management of Vblock integrated infrastructure systems from VMware vCenter. It performs discovery, identification, health monitoring, logging, and validation of Vblock hardware, plus offers an open API.

VCE Vision Intelligent Operations software capabilities for Vblock systems:

      • Discovery:  Ensuring management tools constantly reflect the most current state of Vblock Systems. As hardware changes the inventory auto-updates.
      • Identification:  Enabling the converged system view. All component details are fed up to the user interface. Hardware model, status, serial numbers, etc.
      • Validation:  Providing system assurance and ensuring compliance with the interoperability matrix. Ensuring running firmware is compatible across components and across Vblocks.
      • Health Monitoring:  Expediting diagnosis and remediation. Correlating hardware warnings, errors and failures with the virtual environment.
      • Logging:  Promoting rapid troubleshooting. The Vblock consists of many hardware devices. Vision assembles all log activity into one location, for review during issue resolution.
      • Open API:  Simplifying integration, backed by VCE and an open community of developers.
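As a toy illustration of the validation capability: the core idea is comparing each component's running firmware against the interoperability matrix. The matrix entries and version strings below are invented, not VCE's actual matrix.

```python
# Toy illustration of firmware validation: compare each component's
# running firmware against a release-matrix entry. Matrix contents and
# version strings are invented, not VCE's actual interoperability matrix.
MATRIX = {"UCS-FI-6248": "2.0(4a)", "VNX-5300": "05.32"}

def validate(inventory):
    """Map each component name to True/False: does it match the matrix?"""
    return {name: ver == MATRIX.get(name)
            for name, ver in inventory.items()}

result = validate({"UCS-FI-6248": "2.0(4a)", "VNX-5300": "05.31"})
print(result)  # prints: {'UCS-FI-6248': True, 'VNX-5300': False}
```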

The software is delivered pre-installed on all new Vblock Systems, and includes:

    • The VCE Vision Intelligent Operations System Library, which runs on all Vblock Systems to implement core functionality including discovery, compliance, and the object model.
    • A plug-in for vCenter and an adapter for vCenter Operations Manager that enable native integration with these two key components of the VMware management toolset.
    • A software development kit, which includes the API documentation, sample code, Java bindings and a simulator, as well as an online developer community to help foster innovation.

It is important to mention where Vision fits: it is not a replacement for your current M&O solution. It complements an existing investment in almost any orchestration product that can take advantage of the API. For example, Cisco Intelligent Automation for Cloud (CIAC) could kick off a workflow to request system health from Vision before beginning a service provisioning activity. CIAC would receive the status back and determine whether to proceed or flag a hardware issue for follow-up. The same goes for DynamicOps, Cloupia, BMC or CA products.
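To make that integration idea concrete, here is a hypothetical sketch of such a health gate. The response shape and field names are invented for illustration; the real Vision API and its bindings are documented in the SDK.

```python
# Hypothetical sketch of a pre-provisioning health gate like the one
# described above. The report structure and field names are invented
# for illustration, not the documented Vision API.
def should_provision(health_report):
    """Proceed only when no component reports an error or failure."""
    bad = [c for c in health_report["components"]
           if c["status"] in ("error", "failed")]
    return (len(bad) == 0, bad)

# Simulated health response for one workflow run:
report = {"components": [
    {"name": "UCS-chassis-1", "status": "ok"},
    {"name": "VNX-5300",      "status": "error"},
]}
proceed, issues = should_provision(report)
print(proceed)                      # prints: False
print([c["name"] for c in issues])  # prints: ['VNX-5300']
```

In practice the orchestrator would fetch the report over the API rather than hard-code it, then branch the workflow on the result.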

vCenter Integration: Vblock right next to your virtual data center!




vCenter Operations Manager – Health Monitoring



This raises the bar in the converged infrastructure market and there’s much more to come.

VCE Product Launch – Unleash Simplicity

VCE is launching a number of new products and updates today, some of which I have been closely involved with over the past year: Vblock System 100, System 200, a new modular design for the 300 and 700, plus Vision Intelligent Operations.

Vblock 100

A long time in the works, the Vblock 100 is an affordable integrated infrastructure product that can support approximately 50 to 100 VMs (or 200 VDI users). It is meant for branch offices, small data centers or the lab in my garage. Every Vblock, including the 100, is built in the factory, preconfigured, validated and supported as a single product.

I’ve beta-tested and deployed the 100, and it’s a great little product. We have been excited to release it; the release date was held back until Vision was ready and then timed with our launch event.

Vblock 100s in the factory

The 100 comes in two preset models: 100BX and 100DX. The BX comes in a 24U rack with 3-4 C220 servers; the DX comes in a 42U rack with 3-8 C220 servers. The BX has an EMC VNXe 3150, while the DX has a VNXe 3300. Both have a pair of Cisco Catalyst 3750-X switches. The factory deploys a Logical Advanced Management Pod (LAMP) to host all of the software used to manage the system, including vSphere, SQL, Update Manager and Vision Intelligent Operations. I’ll post a write-up on Vision next.

The main differences between the 100 and 300 (aside from size) are that the network is 1Gb versus 10Gb and storage is NFS or iSCSI only; there is no SAN in the Vblock 100. The Vblock 100 is pretty simple, and I expect to see many deployed this year.

Vblock 100DX

Vblock 200

The Vblock 200 fits in between the 100 and 300, targeted for mixed workloads and mid-size data center needs. The 200 adds SAN connected storage with the VNX 5300 and 10Gb Ethernet with a pair of Nexus 5548 switches. It scales from 4 to 12 C220 servers and is sized to fit into one 42U rack. It also leverages the LAMP and Vision Intelligent Operations tool.

The 200 will be a workhorse due to its design and price point. It can serve as a management environment for large data center deployments, or host Exchange, SAP, VDI, Unified Communications, etc.

Vblock 300

The Vblock System 300 has been updated with a new modular design. Previously, when a 300 was sold the component layout was cookie-cutter: two UCS chassis at the bottom of the rack, then fabric interconnects, MDS and Nexus switches, and finally VNX storage on top. With the modular design, each Vblock build depends on the equipment ordered. We now support 2 to 16 UCS chassis (up from 2 to 8), Nexus 5548 and 5596 Ethernet switches, 6248 or 6296 fabric interconnects, 2204 or 2208 IOMs, and a choice of four EMC VNX arrays (5300, 5500, 5700 & 7500).

Vblock 700

Similar to the 300 update, the Vblock System 700 now uses a new modular design; that is about where the similarities end. The Vblock 700 scales from 2 to 28 racks, 4 to 128 blades and 24 to 3,200 drives. The 700 can be equipped with an EMC VMAX 10K, 20K or 40K storage array. Network hardware includes the Nexus 5548, 5596 or even the 7010, and SAN hardware is either the MDS 9148 or 9513. 128 blades not enough? No problem: the 700 can scale up to 384 blades by integrating additional fabric interconnects.


So no matter what the workload requirements are, VCE has a Vblock for you.

** Next blog post will cover Vision Intelligent Operations

VCE Customer Technology Center

Workload Mobility over distance with VMware, VPLEX & OTV

I have had the pleasure of configuring VCE’s Workload Mobility demo for EMCworld, CiscoLive! and VMworld. We have a number of Vblock™ Systems at each conference in the Cisco, EMC and VCE booths. It made perfect sense to connect them together and show off our Workload Mobility Solution. Besides, isn’t cloud all about the ability to offer services from anywhere? Sure, it’s just across the trade show floor in our demo, but it could easily be miles apart.


We have three Vblock 300 systems located in the VCE, EMC and Cisco booths. An additional network aggregation rack has been added to each Vblock system to house Nexus 7010 switches, EMC RecoverPoint appliances and EMC VPLEX engines. Panduit provided 1,000 feet of fiber trunk cable, containing six pairs of fiber, hung from the ceiling between the booths.


The Nexus 7010 switches provide our core network services, making each booth its own data center. RecoverPoint and VMware Site Recovery Manager handle traditional long-haul disaster recovery, while VPLEX Metro provides active-active storage clustering: the ability to stretch a VMware vSphere cluster between two sites today, and up to four in the future. VPLEX Metro provides block-level LUN consistency and data availability, while OTV on the Nexus 7000 series switches provides Layer 2 network services.


Diagram: VCE Vblock WLM plan


Let’s take a step back for a moment and look at what makes this “cool”. Traditionally, migrating data and applications in or between data centers involves manual steps and data copies: IT would either make physical backups or use data replication services to get the data from site A to site B.


VMware clusters operate local to one data center, so moving VMs between data centers typically requires an outage, an IP address change and time to bring the services back up. To enable elastic cloud computing, IT must find new ways to deploy applications across distance, including maintenance with little or no downtime, disaster avoidance, workload migration, consolidation and growth. IT also wants to actively run workloads across multiple locations using vSphere stretched clusters.


In comes the VCE Workload Mobility Solution

Excerpt from our solution document: “The VCE Workload Mobility Solution (WLM Solution), an integrated solution that enables fast and simple application and infrastructure migration in real time with no downtime or disruption, can help overcome these migration challenges.


Utilizing the Vblock Infrastructure Platforms, EMC VPLEX Metro, and VMware vMotion, the WLM Solution removes the physical barriers within, across, and between data centers to facilitate data center resizing, consolidation and relocation, technology refresh, and workload balancing. The WLM Solution provides mobility to virtual applications, virtual machines, virtual networks, and virtual storage.”


Diagram: WLM High level drawing


Our live demonstrations at EMCworld, CiscoLive! & VMworld show how this works and what to expect in a production environment. We have three presentations covering what happens when a Vblock system protected by WLM experiences a failure (complete power off), and how to recover from that failure. We have set up two independent VPLEX clusters: Cluster 1, VCE site A to EMC site B, and Cluster 2, VCE site A to Cisco site B. Using storage vMotion you can move VMs between all three sites, but they are only protected between two sites at a time.


Workload Mobility

We are featuring Workload Mobility in the Cisco booth. It comes into play when you need to proactively vMotion VMs between sites, letting you utilize precious memory, CPU and storage IOPS at both sites for the same workload at the same time. Maybe you want to update firmware, make changes or refresh hardware at one site: move all the VMs to the other side and perform maintenance with no outage. VPLEX keeps the VMware datastore in sync at both sites, and the Nexus switches make sure network traffic to your VMs keeps flowing.


Outage / EPO – Emergency Power Off

In the VCE booth presentation we show what happens when we kill the power on the Vblock system in the EMC booth: VMs running on that system use VMware HA to restart on the surviving Vblock system and recover. VPLEX Metro has been keeping the datastore in sync, and the Nexus switches now send network traffic to these VMs in the VCE booth. It takes about a minute for vCenter to register them as down and bring the VMs back up. Not a bad recovery time for a complete outage. VMware Fault Tolerance can take this a step further and keep a VM up during a host failure.



In the EMC booth we demonstrate Vblock system restoration, highlighting how VPLEX resumes distributed datastore replication and how DRS, affinity rules and vMotion can automatically move VMs back to a “recovered” Vblock system. VPLEX maintains a journal of writes performed while one site was down; this delta set of I/O transactions is sent over the WAN once the VPLEX link is restored, and VPLEX shows how large the delta set is and the time to completion. Even if your data center experienced a long outage, you can still move VMs back to the Vblock system!


Demo Environment Detail

Each Vblock system has up to ten B200 M2 blades installed in the UCS chassis, broken up into clusters to support demo workloads for each booth. Our VMware environment has one vCenter Server running on the management pod in the VCE booth, which manages three clusters of four hosts each: two hosts come from the primary Vblock system and two from the secondary, forming stretched VMware clusters between the booths. HA and DRS are enabled on the clusters. Each booth has demo workstations connected to 1Gb Ethernet ports on the Nexus 7Ks, with access to vCenter in the VCE booth.


On the Vblock systems themselves we have a number of “demo pods”: a CRM application simulator using Swingbench and an Oracle 11g database, as well as Microsoft Exchange, VMware vCloud Director, Cisco Unified Communications and VMware View environments running live.


Each cluster has storage protected by VPLEX Metro, which manages the volume. This is called a distributed volume and leverages EMC’s AccessAnywhere technology. Multiple VPLEX engines can be installed to protect and distribute these volumes. The engines are based on the same hardware used in the VMAX, with the same cache, modules and reliability numbers for uptime.


Two directors exist in each VPLEX engine; they contain four back-end Fibre Channel (FC) ports zoned to the VNX storage arrays and four front-end FC ports zoned to the hosts. Two 10Gb Ethernet ports or four FC ports are used for WAN communication to the remote VPLEX. Note that using FC for WAN/COM requires external FCIP switches such as the Cisco MDS 9200 series. The VPLEX cluster has a management server which maintains a VPN connection to the management server at the remote data center. Up to 8,000 volumes can be protected by a single VPLEX. Your data centers will need WAN latency under 10ms RTT, which limits distance to about 100km, though it really depends on the quality of the link (could be shorter or longer).
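As a rough sanity check on that 10ms RTT budget (my own back-of-the-envelope numbers, not an EMC sizing rule): light in fiber propagates at roughly 200 km per millisecond one way, so pure propagation would allow far more than 100km, but synchronous writes make multiple round trips and switches, arrays and protocol processing all eat into the budget, which is why practical guidance lands much closer to 100km.

```python
# Back-of-the-envelope distance check for a synchronous-latency budget.
# ~200 km/ms is the approximate one-way propagation speed of light in
# fiber; the overhead parameter stands in for switch, array and protocol
# latency (my own illustrative numbers, not vendor guidance).
C_FIBER_KM_PER_MS = 200.0

def max_distance_km(rtt_budget_ms, overhead_ms=0.0):
    one_way_ms = (rtt_budget_ms - overhead_ms) / 2.0
    return one_way_ms * C_FIBER_KM_PER_MS

print(max_distance_km(10))     # prints: 1000.0 (propagation-only upper bound)
print(max_distance_km(10, 9))  # prints: 100.0 (once overhead consumes the budget)
```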


A number of VLANs are set up on the switches to support the demos: management, vMotion, ESXi host, NFS and workloads. These VLANs are all extended between the Nexus 7Ks via OTV overlay interfaces.


OTV runs over a Layer 3 network, which can be provided by the Nexus 7Ks or another Layer 3 router. Multiple VDCs (Virtual Device Contexts) are required to run OTV and Layer 3 on the same Nexus switch; a VDC separates the switch into multiple logical switches, with physical ports allocated to each. To pass traffic between VDCs on the same Nexus 7K, we cross-connected ports from line cards 2 and 3. For redundancy, port channels bind multiple ports together for Layer 2 trunks and Layer 3 interfaces. Switched virtual interfaces (SVIs) have been created on each switch for each VLAN using HSRP, and we enabled EIGRP (Enhanced Interior Gateway Routing Protocol), which automatically maintains the routing topology.
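For readers who want the shape of the configuration, here is an illustrative NX-OS-style fragment of the OTV overlay pattern described above. The interface names, VLAN IDs, site identifier and multicast groups are placeholders, not our actual demo configuration, and the "!" annotations are just for readability.

```
feature otv

otv site-vlan 99                     ! VLAN used for local OTV site adjacency
otv site-identifier 0x1

interface Overlay1
  otv join-interface Ethernet2/1     ! Layer 3 uplink in the OTV VDC
  otv control-group 239.1.1.1        ! multicast group for the control plane
  otv data-group 232.1.1.0/28
  otv extend-vlan 10, 20, 30         ! management, vMotion, workload VLANs
  no shutdown
```

A matching overlay is configured at the remote site, and only the extended VLANs' traffic crosses the WAN.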


Diagram: Nexus 7010 switches, trunk connections and VDCs


A team of five engineers built all three Vblock systems from the ground up and configured VPLEX and the Nexus 7K OTV/L3 network in about 15 days. We will have this demo running at each show this year, then move it to our RTP lab.



Photo: Vblock racks at the Staging lab in San Jose, CA


**Originally posted on CiscoDC Blog for CiscoLive! 2012

My VMware ESXi 4.1 home lab

With my new role at VCE as a Solutions vArchitect and virtualization evangelist I get to work with a lot of cool stuff, and it makes a lot of sense to have a home lab for testing and professional development. Sure, there are lab resources at work, but that environment is dynamic. A home lab also has uses beyond learning VMware and testing: for example, a television personal video recorder, or secure wireless connections when using public WiFi. Each of these can be a VM running in the lab.

I have read a number of articles on home labs and wanted to add my experience with building one. Technology changes quickly, so it may be helpful to share what I’m finding success with as of March 2011. Many blog posts out there cover hardware from 2009, when desktops were using DDR2 memory and Intel Core 2 Duo processors, limiting those systems to 8GB of RAM. This year, second-generation Intel Core i3, i5 and i7 processors are available, with prices on the first generation dropping. These processors and the P55/H55 chipsets use DDR3 memory, which allows for 16GB of RAM (double that of a DDR2 system). Intel Core i3/i5/i7 systems are more expensive, but worth it for the amount of RAM you can pack in. I also priced AMD processors, but the price gap is small when comparing current technology.

I spent under $375 per PC running bare bones (no case, monitor, keyboard or HDD), and about $450 per PC with case and HDD. Frys and other online retailers seem to offer the best prices on computer parts (please leave comments with other good sites). Prices on PC parts change with the weather depending on weekly sales, mail-in rebates and changes in current hardware.

The setup


  • Processor: Intel Core i3 540 $119
  • Memory: 16GB DDR3 Corsair (2 x 4GB) $75 x 2 packs
  • Motherboard: Intel DH55TC $89 – Onboard video, Gigabit NIC, 4 memory slots, 12 USB ports, PCI slot.
  • Hard Drive: 500GB Seagate Barracuda 7200RPM $39
  • Network Card: Intel 10/100/1000 GT PCI NIC $29
  • MicroATX case & 350W power supply $39
  • Shared storage: Iomega ix2 Storage device $189
  • Network Switch: Cisco Small Business 10/100/1000 5-port $47
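Tallying the list above (my arithmetic, using the quoted prices; weekly sales and mail-in rebates move these numbers, which is how a bare-bones build can land near the figures mentioned earlier):

```python
# My arithmetic on the per-PC parts list above, using the quoted prices.
# Shared storage and the network switch are one-time purchases, so they
# are left out of the per-PC totals.
parts_bare = {
    "Intel Core i3 540":           119,
    "16GB DDR3 Corsair (2 packs)": 150,  # $75 x 2
    "Intel DH55TC motherboard":     89,
    "Intel GT PCI NIC":             29,
}
per_pc_bare = sum(parts_bare.values())  # no case, HDD, monitor, keyboard
per_pc_full = per_pc_bare + 39 + 39     # add HDD and case/PSU
print(per_pc_bare, per_pc_full)         # prints: 387 465
```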

Alternate Hardware:

  • Processor: AMD Phenom Quad Core 9750 CPU $100
  • Memory: 8GB DDR2 (4 x 2048MB) $112
  • Motherboard: Asus M4A785-M $50

Software:

  • VMware ESXi 4.1 Update 1 Installable
  • VMware ESXi 4.1 Update 1 Installable on USB stick (Instructions:
  • VMware vSphere Client
  • VMware vCenter Server installed onto a VM
  • Microsoft Windows 2003 Enterprise Edition (less overhead / smaller install than server 2008)
  • Microsoft Windows 7 Professional 32-bit (choose any client OS you prefer)
  • Celerra NAS VSA

The process for registering and downloading VMware software remains unchanged: go to VMware’s website and you will find numerous links to 60-day trials for all of their software. ESXi Installable is free to use, which means your VMs will always run even if some licenses expire. I was interested in testing VMware View, which included trial license keys and download links for vCenter, vShield Endpoint, View and clients.

There are two inexpensive routes to shared storage that provide NFS and iSCSI mounts: the Celerra VSA (free to use from EMC) or a NAS device such as the Iomega iX2 or iX4. Other options exist, but since I work with Celerra NAS I think it’s cool to have the software running in my lab. The Celerra VSA runs as a VM on one of your hosts; a better solution may be the Iomega iX2, which runs external to your ESX hosts. Shared storage is required for clustering ESX hosts.

How did VMware install?

Simple! Insert the CD media, boot, install, repeat on all hosts. No issues with the Intel motherboard: the H55/P55 chipsets work with ESXi, and USB, NIC, SATA and video all worked fine. You have to go into the BIOS to enable VT and disable Execution Prevention Code. Clustering, HA, DRS and FT all seem to be working. You also have the option of running the hosts without a hard drive, booting ESXi from USB sticks; each host would then need to be configured with an NFS or iSCSI mount.

How to setup VMware

  • Install VMware ESXi onto each of the hosts.
  • Configure the root user password and host IP information.
  • Install the VMware vSphere Client onto a Windows machine and point it to the IP of an ESX host.
  • Create a VM on the ESX host and install Windows 2003 or 2008 Server.
  • Install VMware Tools, OS updates, a static IP, etc.
  • Install vCenter Server and let it install SQL Express (unless you want to first set up MS SQL Server).
  • Once vCenter Server is installed, restart the client and connect to the IP of the Windows server.
  • Optional: install Update Manager.
  • Install complete!
  • Consider deploying the Celerra VSA virtual machine.

Example VMware Lab

The following image is an example of VMware ESX, VMs, vCenter & the Celerra VSA set up on one PC. You can learn a lot about VMware with one PC, but additional PCs are required for testing vMotion, HA, DRS, FT, SRM and other features.


Two example setups using a second PC for vMotion or HA.


Two more PCs are required to properly test SRM.


Contact me with any questions or comments!