Transform Cloud Automation and Availability

We are constantly working to share cool solutions with customers and have them available for demonstration. During Cisco Live London we unveiled our Transform Cloud Automation demo. It expands upon last year’s Workload Mobility demo (linked here), where we leveraged Vblock, VPLEX, and OTV to protect mission-critical workloads. Many customers rely on management and orchestration (M&O) tools and are demanding the same protection for them as for clustered databases or e-mail environments, even though these administrative services are not traditionally designed to be highly available. Running them on a Vblock protected by VPLEX enables rapid recovery from a hardware outage and also allows applications provisioned in the same environment to be protected.

This video highlights the benefits of the solution and how an administrator would interact with it.

Transform Cloud Automation

The video begins with Cisco’s Intelligent Automation for Cloud (CIAC) portal. We built a custom workflow to provision VMs and allow the user to choose which data center the virtual application (vApp) will be provisioned in. The vApp is provisioned in vCloud Director, which adds another layer of abstraction for multi-tenancy. vCloud provisions vApps onto a stretched cluster in a “gold tier” of service (selected by the requester in the CIAC provisioning window). We also have a silver tier cluster which is not protected by VPLEX (not shown). Our stretched cluster is enabled by EMC VPLEX Metro (which VCE integrates as part of the Vblock solution) and Cisco OTV.

EMC VPLEX is a hardware solution that takes a storage LUN, replicates it over distance, and allows read/write access from two locations simultaneously. As shown in the diagram, this requires two Vblocks, with a VPLEX cluster and matching resources at each site. Our demo has three ESXi hosts at each site and three 500GB datastore LUNs. VPLEX sits between the hosts and the storage array, presenting and protecting these LUNs. Write operations from each host are synchronous across sites, while read operations are served from the local cache. VPLEX maintains data consistency, and rules can be defined so the system reacts appropriately during an outage.
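To make the gold/silver placement concrete, here is a minimal Python sketch of the sort of decision the provisioning workflow makes when a requester picks a data center and service tier. The data center, tier, and provider VDC names below are purely illustrative and are not taken from the actual CIAC workflow.

    # Illustrative only: map a requester's data center and tier choice to a
    # vCloud Director provider VDC. All names are hypothetical placeholders.
    PLACEMENT = {
        # Gold tier lands on the VPLEX-protected stretched cluster, so either
        # data center choice resolves to the same provider VDC.
        ("DC1", "gold"): "PVDC-Gold-Stretched",
        ("DC2", "gold"): "PVDC-Gold-Stretched",
        # Silver tier is local to the chosen site and not VPLEX protected.
        ("DC1", "silver"): "PVDC-Silver-DC1",
        ("DC2", "silver"): "PVDC-Silver-DC2",
    }

    def choose_provider_vdc(datacenter: str, tier: str) -> str:
        """Return the provider VDC a vApp request should be provisioned into."""
        try:
            return PLACEMENT[(datacenter, tier.lower())]
        except KeyError:
            raise ValueError(f"No placement rule for {datacenter}/{tier}")

    if __name__ == "__main__":
        # A gold-tier request from either site resolves to the stretched cluster.
        print(choose_provider_vdc("DC1", "gold"))    # PVDC-Gold-Stretched
        print(choose_provider_vdc("DC2", "silver"))  # PVDC-Silver-DC2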

That covers the datastore, but not the network. Traffic such as vMotion, along with the virtual machines themselves, is tied to specific IP addresses, and traditional data center design dictates unique IP address space at each location. Cisco OTV on the Nexus 7000 series switch can extend Layer 2 VLANs between multiple locations, based upon your needs and configuration. OTV is relatively simple to set up and is designed to limit the WAN bandwidth consumed by sending only the Layer 2 traffic that is required. Details on OTV can be found here.
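To show how simple a basic setup can be, below is a minimal sketch that pushes an illustrative OTV overlay configuration to a Nexus 7000 using the netmiko Python library (not part of the demo). The host, join interface, VLAN ranges, multicast groups, and site identifier are placeholders, and the example assumes a multicast-enabled transport between sites.

    # Minimal sketch: push a basic OTV overlay config to a Nexus 7000 via netmiko.
    # Host, credentials, interface, VLANs, multicast groups, and site ID are
    # placeholders; adjust to your environment.
    from netmiko import ConnectHandler

    otv_config = [
        "feature otv",
        "otv site-vlan 99",                # VLAN used for adjacency within the site
        "otv site-identifier 0x1",         # unique per physical site
        "interface Overlay1",
        "otv join-interface Ethernet1/1",  # uplink facing the DCI/WAN transport
        "otv control-group 239.1.1.1",     # assumes multicast-enabled transport
        "otv data-group 232.1.1.0/26",
        "otv extend-vlan 100-110",         # only these VLANs cross the overlay
        "no shutdown",
    ]

    nexus = {
        "device_type": "cisco_nxos",
        "host": "n7k-site-a.example.com",  # placeholder management address
        "username": "admin",
        "password": "changeme",
    }

    with ConnectHandler(**nexus) as conn:
        print(conn.send_config_set(otv_config))

The same overlay configuration, with a different site identifier, would be applied to the Nexus 7000 at the second data center so the extended VLANs are reachable from both sites.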

Diagram of the solution

There is a lot of cool stuff going on in this demo, but our focus is on the ease of use and management of the environment once it is in production. We plan to enhance the demo throughout 2013 and would be happy to share.

VCE Vision Intelligent Operations

Vision Intelligent Operations Software for Vblock Systems

Vision enables simple management of Vblock Integrated Infrastructure Systems from VMware vCenter. It performs Discovery, Identification, Health Monitoring, Logging, and Validation of Vblock hardware, plus offers an Open API.

VCE Vision Intelligent Operations software capabilities for Vblock systems:

      • Discovery:  Ensuring management tools constantly reflect the most current state of Vblock Systems. As hardware changes, the inventory auto-updates.
      • Identification:  Enabling the converged system view. All component details are surfaced in the user interface: hardware model, status, serial numbers, and so on.
      • Validation:  Providing system assurance and ensuring compliance with the interoperability matrix, confirming that the running firmware is compatible across components and across Vblocks.
      • Health Monitoring:  Expediting diagnosis and remediation by correlating hardware warnings, errors, and failures with the virtual environment.
      • Logging:  Promoting rapid troubleshooting. The Vblock consists of many hardware devices, and Vision assembles all of their log activity in one location for review during issue resolution.
      • Open API:  Simplifying integration, backed by VCE and an open community of developers.

The software is delivered pre-installed on all new Vblock Systems, and includes:

    • The VCE Vision Intelligent Operations System Library, which runs on all Vblock Systems to implement core functionality including discovery, compliance, and the object model.
    • A plug-in for vCenter and an adapter for vCenter Operations Manager that enable native integration with these two key components of the VMware management toolset.
    • A software development kit, which includes the API documentation, sample code, Java bindings, and a simulator, as well as an online developer community to help foster innovation.

It is important to mention where Vision fits. It is not a replacement for your current M&O solution; it complements an existing investment in almost any orchestration product that can take advantage of the API. For example, Cisco Intelligent Automation for Cloud (CIAC) could kick off a workflow that requests system health from Vision before it begins a service provisioning activity. CIAC would receive the status back and determine whether to proceed or flag a hardware issue for follow-up. The same goes for DynamicOps, Cloupia, BMC, or CA products.
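To illustrate that kind of integration, here is a minimal Python sketch of a pre-provisioning health check against Vision’s Open API. The endpoint path, host, credentials, and response fields are assumptions for illustration only; consult the Vision SDK and API documentation for the real resource model.

    # Minimal sketch of a pre-provisioning health check. The Vision endpoint,
    # credentials, and payload fields below are hypothetical placeholders.
    import requests

    VISION_HOST = "https://vision.example.com"   # placeholder address
    HEALTH_PATH = "/api/system/health"           # hypothetical resource path

    def vblock_is_healthy(session: requests.Session) -> bool:
        """Ask Vision for overall system health before provisioning proceeds."""
        resp = session.get(VISION_HOST + HEALTH_PATH, timeout=10, verify=False)
        resp.raise_for_status()
        # Hypothetical payload: {"overallHealth": "OK" | "DEGRADED" | "CRITICAL"}
        return resp.json().get("overallHealth") == "OK"

    if __name__ == "__main__":
        with requests.Session() as s:
            s.auth = ("admin", "changeme")       # placeholder credentials
            if vblock_is_healthy(s):
                print("Proceed with service provisioning")
            else:
                print("Flag hardware issue for follow-up before provisioning")

An orchestrator such as CIAC could run a check like this as the first step of a provisioning workflow and branch on the result.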

vCenter Integration: Vblock next to your virtual data center!!


vCenter Operations Manager – Health Monitoring


This raises the bar in the converged infrastructure market and there’s much more to come.

VCE Product Launch – Unleash Simplicity

VCE is launching a number of new products and updates today, some of which I have been very involved with over the past year: the Vblock System 100, the System 200, a new modular design for the 300 and 700, plus Vision Intelligent Operations.

Vblock 100

A long time in the works, the Vblock 100 is an affordable integrated infrastructure product that can support approximately 50 to 100 VMs (or 200 VDI users). It is meant for branch offices, small data centers, or the lab in my garage. Every Vblock, including the 100, is built in the factory, preconfigured, validated, and supported as a single product.

I’ve beta tested and deployed the 100, and it’s a great little product. We have been excited to release it; the release date was held back until Vision was ready and then timed with our launch event.

Vblock 100s in the Factory

The 100 comes in two preset models: the 100BX and the 100DX. The BX comes in a 24U rack with 3-4 C220 servers, and the DX comes in a 42U rack with 3-8 C220 servers. The BX has an EMC VNXe 3150, while the DX has a VNXe 3300. Both have a pair of Cisco Catalyst 3750-X switches. The factory deploys a Logical Advanced Management Pod (LAMP) to host all of the software used to manage the system, including vSphere, SQL, Update Manager, and Vision Intelligent Operations. I’ll post a write-up on Vision next.

The main differences between the 100 and the 300 (aside from size) are that the network is 1Gb rather than 10Gb and storage is NFS or iSCSI only; there is no SAN in the Vblock 100. The Vblock 100 is pretty simple, and I expect to see many deployed this year.

Vblock 100DX

Vblock 200

The Vblock 200 fits in between the 100 and the 300, targeted at mixed workloads and mid-size data center needs. The 200 adds SAN-connected storage with the VNX 5300 and 10Gb Ethernet with a pair of Nexus 5548 switches. It scales from 4 to 12 C220 servers and is sized to fit into one 42U rack. It also leverages the LAMP and the Vision Intelligent Operations tool.

The 200 will be a workhorse due to its design and price point. It can host a management environment for large data center deployments, an Exchange environment, SAP, VDI, Unified Communications, and more.

Vblock 300

The Vblock System 300 has been updated with a new modular design. Previously, when a 300 was sold, the component layout was cookie cutter: two UCS chassis at the bottom of the rack, fabric interconnects, MDS, then Nexus switches, and finally VNX storage on top. With the modular design, each Vblock build depends on the equipment ordered. We now support 2 to 16 UCS chassis (up from 2 to 8), Nexus 5548 and 5596 Ethernet switches, 6248 or 6296 fabric interconnects, 2204 or 2208 IOMs, and a choice of four EMC VNX arrays (5300, 5500, 5700, and 7500).

Vblock 700

Similar to the 300 update, the Vblock System 700 now uses a new modular design, but that is about where the similarities end. The Vblock 700 scales from 2 to 28 racks, 4 to 128 blades, and 24 to 3,200 drives. The 700 can be equipped with an EMC VMAX 10K, 20K, or 40K storage array. Network hardware includes the Nexus 5548, 5596, or even the 7010, and SAN hardware is either the MDS 9148 or 9513. 128 blades not enough? No problem: the 700 can scale up to 384 blades by integrating additional fabric interconnects.

 

So no matter what the workload requirements are, VCE has a Vblock for you.

** Next blog post will cover Vision Intelligent Operations

VCE Customer Technology Center