Transform Cloud Automation and Availability

We are constantly working to share cool solutions with customers and have them available for demonstration. During Cisco Live London we unveiled our Transform Cloud Automation demo. It expands upon last year’s Workload Mobility demo (linked here), where we leveraged Vblock, VPLEX and OTV to protect mission-critical workloads. Many customers rely on management and orchestration (M&O) tools and are demanding that those tools receive the same protection as clustered databases or e-mail environments. These administrative services are not traditionally designed to be highly available. Running them on a Vblock protected by VPLEX enables rapid recovery from a hardware outage and also lets applications provisioned in the same environment be protected.

This video highlights the benefits of the solution and how an administrator would interact with it…

Transform Cloud Automation

The video begins with Cisco’s Intelligent Automation for Cloud (CIAC) portal. We build a custom workflow to provision VMs and allow the user to choose which data center the virtual application (vApp) will be provisioned in. The vApp is provisioned in vCloud Director, which adds another layer of abstraction for multi-tenancy. vCloud provisions vApps onto a stretched cluster in a “gold” tier of service (indicated by the requester in the CIAC provisioning window). We also have a silver-tier cluster which is not protected by VPLEX (not shown).

Our stretched cluster is enabled by EMC VPLEX Metro (which VCE integrates as part of the Vblock solution) and Cisco OTV. EMC VPLEX is a hardware solution that takes a storage LUN, replicates it over distance, and allows read/write access from both locations simultaneously. As shown in the diagram, this requires two Vblocks, with a VPLEX cluster and matching resources on each side. Our demo has three ESXi hosts at each site with three 500GB datastore LUNs. VPLEX sits between the hosts and the storage array, presenting and protecting these LUNs. Write operations from each host are replicated synchronously, while read operations are served from local cache. VPLEX maintains data consistency during an outage, and rules can be defined so the system reacts appropriately when one occurs.
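To make that provisioning flow concrete, here is a minimal sketch of the kind of call a workflow could make against the vCloud Director REST API to instantiate a vApp in the Org VDC backing the data center and tier the requester selected. The host name, VDC IDs, template reference and the gold/silver mapping are illustrative assumptions, not the exact workflow used in the demo.

```python
import requests

VCD = "https://vcloud.example.com"          # vCloud Director cell (hypothetical)
API_VERSION = "application/*+xml;version=5.1"

# The requester's choice in the CIAC portal maps to an Org VDC backed by the
# VPLEX-protected stretched cluster (gold) or a single-site cluster (silver).
# The IDs below are placeholders.
VDC_BY_SITE_AND_TIER = {
    ("DC-A", "gold"): "urn:vcloud:vdc:1111",
    ("DC-B", "gold"): "urn:vcloud:vdc:2222",
}

def login(user, password):
    """Authenticate (user is typically 'name@org') and return the session token."""
    r = requests.post(f"{VCD}/api/sessions", auth=(user, password),
                      headers={"Accept": API_VERSION}, verify=False)
    r.raise_for_status()
    return r.headers["x-vcloud-authorization"]

def provision_vapp(token, site, tier, vapp_name, template_href):
    """Instantiate a vApp from a catalog template into the chosen Org VDC."""
    vdc_id = VDC_BY_SITE_AND_TIER[(site, tier)].split(":")[-1]
    # Simplified request body; the full schema allows descriptions,
    # instantiation parameters, network mappings, and so on.
    body = (f'<InstantiateVAppTemplateParams xmlns="http://www.vmware.com/vcloud/v1.5" '
            f'name="{vapp_name}" deploy="true" powerOn="true">'
            f'<Source href="{template_href}"/>'
            f'</InstantiateVAppTemplateParams>')
    r = requests.post(
        f"{VCD}/api/vdc/{vdc_id}/action/instantiateVAppTemplate",
        data=body,
        headers={
            "Accept": API_VERSION,
            "x-vcloud-authorization": token,
            "Content-Type": "application/vnd.vmware.vcloud.instantiateVAppTemplateParams+xml",
        },
        verify=False)
    r.raise_for_status()
    return r  # the response describes the new vApp and its provisioning task
```

In the demo CIAC drives an equivalent operation from its own workflow engine; the sketch just shows the shape of the underlying vCloud request.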

That covers the datastore, but not the network. Network traffic such as vMotion is tied to specific IP addresses, and so are the virtual machines themselves. Traditional data center design dictates unique IP address space based upon location. Cisco OTV (Overlay Transport Virtualization) on the Nexus 7000 series switch can extend Layer 2 VLANs between multiple locations, based upon your needs and configuration. OTV is relatively simple to set up and is designed to limit the WAN bandwidth consumed by sending only the Layer 2 traffic required. Details on OTV can be found here.
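As a rough illustration of how small the OTV footprint is, here is a sketch that pushes a basic overlay configuration to a Nexus 7000 using the netmiko library. The device address, credentials, join interface, multicast groups and VLAN range are placeholders rather than the demo’s actual values; a real deployment should follow Cisco’s OTV design guidance for the sites involved.

```python
from netmiko import ConnectHandler

# Placeholder connection details for the Nexus 7000 (or its OTV VDC) at one site.
nexus_site_a = {
    "device_type": "cisco_nxos",
    "host": "n7k-site-a.example.com",
    "username": "admin",
    "password": "secret",
}

# A minimal OTV overlay: enable the feature, define the site VLAN and identifier,
# then build the overlay interface with its join interface, multicast control and
# data groups, and the VLANs to extend between data centers.
otv_commands = [
    "feature otv",
    "otv site-vlan 99",
    "otv site-identifier 0x1",
    "interface Overlay1",
    "otv join-interface Ethernet1/1",
    "otv control-group 239.1.1.1",
    "otv data-group 232.1.1.0/28",
    "otv extend-vlan 100-110",
    "no shutdown",
]

conn = ConnectHandler(**nexus_site_a)
print(conn.send_config_set(otv_commands))   # review the echoed config before saving
conn.disconnect()
```

The second site mirrors this configuration with its own site identifier and join interface, while the control group and extended VLAN list stay the same so the overlays peer with each other.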

Diagram of the solution (Transform Cloud with VPLEX and Vblock)

There is a lot of cool stuff going on in this demo, but our focus is on the ease of use and management of the environment once it’s in production. We plan on enhancing the demo throughout 2013 and would be happy to share.

VCE Vision Intelligent Operations

Vision Intelligent Operations Software for Vblock Systems

Vision enables simple management of Vblock integrated infrastructure systems from VMware vCenter. It performs discovery, identification, health monitoring, logging, and validation of Vblock hardware, and also offers an open API.

VCE Vision Intelligent Operations software capabilities for Vblock systems:

      • Discovery:  Ensuring management tools constantly reflect the most current state of Vblock Systems. As hardware changes, the inventory auto-updates.
      • Identification:  Enabling the converged system view. All component details (hardware model, status, serial numbers, etc.) are fed up to the user interface.
      • Validation:  Providing system assurance and ensuring compliance with the interoperability matrix, verifying that running firmware is compatible across components and across Vblocks.
      • Health Monitoring:  Expediting diagnosis and remediation by correlating hardware warnings, errors and failures with the virtual environment.
      • Logging:  Promoting rapid troubleshooting. The Vblock consists of many hardware devices, and Vision assembles all log activity into one location for review during issue resolution.
      • Open API:  Simplifying integration, backed by VCE and an open community of developers.

The software is delivered pre-installed on all new Vblock Systems, and includes:

    • The VCE Vision Intelligent Operations System Library, which runs on all Vblock Systems to implement core functionality including discovery, compliance, and the object model.
    • A plug-in for vCenter and an adapter for vCenter Operations Manager that enable native integration with these two key components of the VMware management toolset.
    • A software development kit, which includes the API documentation, sample code, Java bindings and a simulator, as well as an online developer community to help foster innovation.

It is important to mention where Vision fits. It is not a replacement for your current M&O solution; it complements an existing investment in almost any orchestration product that can take advantage of the API. For example, Cisco Intelligent Automation for Cloud (CIAC) could kick off a workflow to request system health from Vision before it begins a service provisioning activity. CIAC would receive the status back and determine whether to proceed or flag a hardware issue for follow-up. The same goes for DynamicOps, Cloupia, BMC or CA products.
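As an illustration of that pattern, here is a minimal sketch of a pre-provisioning gate that asks Vision for overall system health over REST before the orchestrator continues. The host, endpoint path and field names are hypothetical placeholders; the actual calls and schemas are covered in the Vision SDK documentation mentioned above.

```python
import requests

VISION_HOST = "https://vision.example.com"   # hypothetical Vision endpoint

def vblock_is_healthy(session):
    """Query Vision for overall Vblock health before provisioning proceeds.

    The '/api/system/health' path and 'status' field are illustrative
    placeholders, not the documented Vision API.
    """
    resp = session.get(f"{VISION_HOST}/api/system/health", timeout=10)
    resp.raise_for_status()
    return resp.json().get("status", "").lower() == "healthy"

def provision_service(session, request_name):
    """Gate a provisioning request on Vblock health, as a CIAC workflow might."""
    if not vblock_is_healthy(session):
        # Flag the hardware issue for follow-up instead of provisioning
        # onto a degraded system.
        raise RuntimeError("Vblock health check failed; provisioning deferred")
    print(f"Proceeding with provisioning of {request_name}")

if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = ("api-user", "api-password")   # placeholder credentials
        provision_service(s, "gold-tier vApp request")
```

The same gate could just as easily be expressed as a workflow task in CIAC or another orchestrator; the point is that the tool consults Vision before touching the hardware.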

vCenter Integration: Vblock next to your virtual data center!!


vCenter Operations Manager – Health Monitoring


This raises the bar in the converged infrastructure market and there’s much more to come.

VCE Product Launch – Unleash Simplicity

VCE is launching a number of new products and updates today, some of which I have been very involved with over the past year: the Vblock System 100, System 200, a new modular design for the 300 and 700, plus Vision Intelligent Operations.

Vblock 100

A long time in the works, the Vblock 100 is an affordable integrated infrastructure product that can support approximately 50 to 100 VMs (or 200 VDI users). It is meant for branch offices, small data centers or the lab in my garage. Every Vblock, including the 100, is built in the factory, preconfigured, validated and supported as a single product.

I’ve beta tested and deployed the 100; it’s a great little product, and we have been excited to release it. The release date was held back until Vision was ready and then timed with our launch event.

Vblock 100s in the factory

The 100 comes in two preset models: the 100BX and 100DX. The BX comes in a 24U rack with 3-4 C220 servers, while the DX comes in a 42U rack with 3-8 C220 servers. The BX has an EMC VNXe 3150, while the DX has a VNXe 3300. Both have a pair of Cisco Catalyst 3750-X switches. The factory deploys a Logical Advanced Management Pod (LAMP) to host all of the software used to manage the system, including vSphere, SQL, Update Manager and Vision Intelligent Operations. I’ll post a write-up on Vision next.

The main difference between the 100 and 300 (aside from size) is that the network is 1Gb rather than 10Gb and storage is NFS or iSCSI only. There is no SAN in the Vblock 100. The Vblock 100 is pretty simple, and I expect to see many deployed this year.

Vblock 100DX

Vblock 200

The Vblock 200 fits between the 100 and 300, targeted at mixed workloads and mid-size data center needs. The 200 adds SAN-connected storage with the VNX 5300 and 10Gb Ethernet with a pair of Nexus 5548 switches. It scales from 4 to 12 C220 servers and is sized to fit into one 42U rack. It also leverages the LAMP and the Vision Intelligent Operations tool.

The 200 will be a workhorse due to its design and price point. It can serve as a management environment for large data center deployments, or host Exchange, SAP, VDI, Unified Communications and similar workloads.

Vblock 300

The Vblock System 300 has been updated with a new modular design. Previously, when a 300 was sold the component layout was cookie-cutter: two UCS chassis at the bottom of the rack, then fabric interconnects, MDS and Nexus switches, and finally VNX storage on top. With the modular design, each Vblock build depends on the equipment ordered. We now support 2 to 16 UCS chassis (up from 2 to 8), Nexus 5548 and 5596 Ethernet switches, 6248 or 6296 fabric interconnects, 2204 or 2208 IOMs, and four EMC VNX array options (5300, 5500, 5700 and 7500).

Vblock 700

Similar to the 300 update, the Vblock System 700 now uses a new modular design, but that is about where the similarities end. The Vblock 700 scales from 2 to 28 racks, 4 to 128 blades and 24 to 3,200 drives. The 700 can be equipped with an EMC VMAX 10K, 20K or 40K storage array. Network hardware includes the Nexus 5548, 5596 or even the 7010. SAN hardware is either the MDS 9148 or 9513. 128 blades not enough? No problem: the 700 can scale up to 384 blades by integrating additional fabric interconnects.

 

So no matter what the workload requirements are, VCE has a Vblock for you.

** Next blog post will cover Vision Intelligent Operations

VCE Customer Technology Center

Cisco Cloud Mega Test

Posted on Cisco’s Data Center blog http://blogs.cisco.com/datacenter/ciscos-cloud-verse-vce-and-eantc-cloud-mega-test/#more-69949 May 22nd, 2012

“Over the past four to five months, there has been significant buzz about VCE’s role in the EANTC Cloud Mega Test.  I was lucky enough to be a part of the test team, and I wanted to share some of my experiences in working on this fantastic project with EANTC and Cisco.

It started with a bang, of course.  Back in late January, Light Reading published their first report on the testing EANTC had done of Cisco’s CloudVerse architecture. I was at Cisco Live London where details of the test were first shared and members of the CloudVerse team were in attendance to share the results. Over the next couple of months, EANTC followed that up with other reports in the series.  All in all, they covered the Cisco Unified Data Center that is the foundation for cloud services, Cloud Intelligent Networks, Cloud Applications & Services, and Long-haul Optical Transport used in delivering cloud-based services.  Of course, I wasn’t involved in all of that.

As with all of the Mega Test programs (the Mobile Mega Test and Medianet Mega Test being the ones that Light Reading conducted previously), these programs are a big deal.  Cisco spends millions of dollars — literally — on lab infrastructure, engineers and communications for each one of these tests.  Light Reading has EANTC come in to provide independent, objective oversight and testing.  And when the report comes out, there is a lot of buzz in the industry on exactly what went on.  It’s not every day we get to play in a multi-million dollar sandbox!  I was one of several dozen people from Cisco, VMware, VCE, EMC and Ixia working on this project.

As the buzz about the test bounced around in the industry, a sidebar conversation emerged about VCE’s involvement in the test. As you may know from social media, I’m a Principal vArchitect with VCE Corporate Engineering.  Essentially, my job is to make sure that customers get the most out of VCE’s technology – Vblock™ Systems.  The Vblock system is pre-engineered, pre-tested converged infrastructure that combines Cisco’s computing and networking equipment, EMC’s storage equipment, and virtualization from VMware.  VCE itself operates as a joint venture between Cisco and EMC with investments from VMware and Intel.

One of the things that was missed in the excitement over the test results themselves was the fact that the Vblock system played a big part in the Cloud Mega Test.

 

Of course, the test wasn’t intended to examine just the Vblock system.  The Mega Tests have always been about putting together all of the moving parts necessary for Service Providers to deliver a specific service to their subscriber base.  This Mega Test focused on the cloud, so the data center, the network, the applications, and the transport were all part of the test.  Still, Cisco needed *something* to act as the data center infrastructure… and what easier way to quickly implement a data center than rolling in a big Vblock Series 700?

I was fortunate enough to be one of the people who got to help Cisco with this.  With the timeline for the tests being as tight as they were, everyone at Cisco wanted to bring in large chunks of technology that they knew would “just work.”  They didn’t want to spend time piecing together all of the components and worrying if a firmware rev. on some element somewhere would unexpectedly wreak havoc at the worst time.  This is exactly the problem that the Vblock system solves.

The system we used for the Cloud Mega Test was a Vblock Series 700 MX, which was an ideal choice because in any test setup there is a massive amount of work to do in a short period of time. Cisco, VCE and EMC lent experts in each of the technologies being tested. Being able to roll in an operational computing platform, already running ESXi, with storage presented and the Nexus 1000v configured, greatly reduced initial setup time. In fact, a lot of the Light Reading content focuses on the solution and not on the Vblock system, since it has become a foundational element.  Choosing a Vblock Series 700 MX (VMAX based) instead of a 300 (VNX based) came down to the fact that VMAX provides a scalability factor that VNX can’t match. In this case, the VMAX was a better choice because it offers up to eight engines with more cache, more Fibre Channel connectivity, and flat-out more I/O paths than the VNX. VCE has service providers that use both VMAX and VNX depending on the business model they are trying to support. VMAX typically fits most of their requirements, especially with the recently announced VMAX SP (VMAX Service Provider).  VMAX SP will provide APIs and a pre-defined architecture that fits into an SP’s business model.

VCE has spent a lot of time developing a multi-tenant solution architecture for Vblock systems; I was close to this effort last year and continue to support it from time to time. Cisco’s Cloud Mega Test and VCE’s secure multi-tenant solution provide guidelines for tenant isolation, compliance, governance, auditing, security, logging and QoS, along with a framework that lets the customer choose which pieces they want to implement. One of VCE’s core values is allowing partners to integrate with our platform, giving our customers choice over which products they want to use.

The best part about using Vblock systems for this effort is that we build them in the factory. Our professional services organization is a well-oiled machine, able to take the logical configuration document we created during our planning discussions and apply it to the hardware components. With vCenter and ESXi installed on all the blades, and storage and the Nexus 1000v fully configured, all we do is roll the cabinets into the data center and connect power and network. Integration into the Cloud Mega Test was a breeze. This allowed our team to focus on building the test virtual machines, the vCloud Director environment and the vCloud Connector implementation. When marketing asked me to comment on the Vblock system setup for the Cloud Mega Test, I initially thought that I didn’t have to do much with the hardware, since everything I did was with the VMware stack. This is clearly why VCE is succeeding in the market. We have architected the platform to serve up virtualized workloads, and it excels at that. I know this sounds like fluff, but many of us have spent years as partners delivering solutions where the vendor is trying to solve business problems with a bill of materials. Having to be the person to take a truckload of goods and turn it into a functional solution is time-consuming and nerve-racking. It has been nice to be involved with projects such as the Cloud Mega Test where we get to focus further up the stack. I have to admit that I didn’t modify the VMAX configuration during the setup; I did, however, fix a problem with another vendor’s storage system.

The VMAX storage array performed exactly as expected, which was a good thing but not that exciting a story for readers. I think the takeaway here is that EMC VMAX offers the most horsepower when it comes to storage: up to 2,400 drives, 128 Fibre Channel ports and 1TB of cache across eight engines. Most storage arrays sold have two storage processors, less cache and a handful of Fibre Channel ports. There is no better way to provide storage to a mixed workload in a multi-tenant environment. With respect to servers, we’ve moved from rack-mount and old-school blade chassis to fabric interconnects, converged 10Gb Ethernet, and converged network adapters (Ethernet/Fibre Channel) configurable as 2 to 128 virtual interfaces. VCE employs a structured architecture and a validated support matrix with a dedicated support organization. Today, up to eight UCS chassis are supported in a Vblock system, which means up to 64 blades can be managed through one interface.

Many of the Cisco engineers on the Mega Test project were involved with the Vblock system before VCE was formed and Vblock became a product. I have been with VCE for a year and a half, and roughly six months before that, Cisco built one of the very first Vblock systems in RTP. A lot of fantastic stuff comes from these marketing efforts. Watch closely as much of the CloudVerse Mega Test turns into products from Cisco, VCE or your favorite Service Provider.

Being part of the EANTC Cloud Mega Test was a great experience.  Not only did I get to work with a lot of high-end technology, but I worked with a bunch of amazing people too.  The folks at EANTC are incredibly bright, and they didn’t cut Cisco any slack on the test plan.  The Cisco folks were super, too.  It’s amazing just how broad a solution portfolio they have at Cisco.  They have literally everything that they need to build a public cloud infrastructure.

The best part was that this whole experience confirmed for me just how valuable the Vblock system can be for our customers.  Without the pre-testing and validation that we do here at VCE, there is no way we would have been able to pull off this test plan.  I would probably still be in the Cisco labs, tweaking some setting somewhere to work out just one more kink.  But the Vblock system came through with flying colors in a really rigorous environment.

I’m really hoping that Cisco does more of these Mega Test programs, because I’d love to do it again.  If you happen to be at EMC World this week, swing by booth 410 or 515. I’d be happy to chat about the test further. Leave comments below and suggest what you might want to see tested next.  Perhaps they’ll take our suggestions… who knows!

BTW, if you want to read the official reports, you can find them here:

http://www.lightreading.com/ciscoseries

http://www.cisco.com/go/cloudmegatest

http://www.vce.com/cloudmegatest