Transform Cloud Automation and Availability

We are constantly working to share cool solutions with customers and have them available for demonstration. During CiscoLive London we unveiled our Transform Cloud Automation demo. It expands upon last year’s Workload Mobility demo (linked here), where we leveraged Vblock, VPLEX and OTV to protect mission-critical workloads. Many customers rely on management and orchestration (M&O) tools and are demanding the same protection for them that they give clustered databases or e-mail environments. These administrative services are not traditionally designed to be highly available. Running them on a Vblock protected by VPLEX enables rapid recovery from a hardware outage and also lets applications provisioned in the same environment be protected.

This video highlights the benefits of the solution and how an administrator would interact with it…

Transform Cloud Automation

The video begins with Cisco’s Intelligent Automation for Cloud portal. We build a custom workflow to provision VMs and allow the user to choose which data center the virtual application (vApp) will be provisioned in. The vApp is provisioned in vCloud Director, which adds another layer of abstraction for multi-tenancy. vCloud provisions vApps onto a stretched cluster in a “gold tier” of service (selected by the requester in the CIAC provisioning window). We also have a silver tier cluster which is not protected by VPLEX (not shown).

Our stretched cluster is enabled by EMC VPLEX Metro (which VCE integrates as part of the Vblock solution) and Cisco OTV. EMC VPLEX is a hardware solution that takes a storage LUN and replicates it over distance, allowing read/write access from two locations simultaneously. As shown in the diagram, this requires two Vblocks, with VPLEX clusters and matching resources at each site. Our demo has three ESXi hosts at each site with three 500GB datastore LUNs. VPLEX sits between the hosts and the storage array, presenting and protecting these LUNs. Write operations from each host are synchronous; read operations are served from local cache. VPLEX maintains data consistency during an outage, and rules can be defined so the system reacts appropriately when one occurs.
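A stretched cluster like this only works if every host at both sites sees the same distributed datastores. Below is a minimal pyVmomi sketch of that check; the vCenter address, credentials and the “GOLD_” datastore naming convention are placeholders, not the demo’s actual values.

```python
# Sketch: confirm each "gold tier" datastore is visible from hosts at both sites.
# vCenter address, credentials and the datastore name prefix are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab-only: skip cert validation
si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], recursive=True)
for ds in view.view:
    if ds.name.startswith("GOLD_"):              # assumed naming convention
        hosts = sorted(h.key.name for h in ds.host)
        print(f"{ds.name}: visible from {len(hosts)} hosts -> {hosts}")
view.Destroy()
Disconnect(si)
```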

That covers the datastore, but not the network. Network traffic such as vMotion and virtual machine connectivity is tied to specific IP addresses and subnets. Traditional data center design dictates unique IP address space based upon location. Cisco OTV on the Nexus 7000 series switch can extend layer 2 VLANs between multiple locations, based upon your needs and configuration. OTV is relatively simple to set up and has been designed to limit WAN bandwidth consumption by sending only the Layer 2 traffic that is required. Details on OTV can be found here.

Diagram of the solution

There is a lot of cool stuff going on in this demo; our focus is on ease of use and management of the environment once it’s in production. We plan on enhancing the demo throughout 2013 and would be happy to share.

VCE Vision Intelligent Operations

Vision Intelligent Operations Software for Vblock Systems

Enabling simple management of Vblock Integrated Infrastructure Systems from VMware vCenter. Vision performs Discovery, Identification, Health Monitoring, Logging, and Validation of Vblock hardware, and offers an Open API.

VCE Vision Intelligent Operations software capabilities for Vblock systems:

      • Discovery:  Ensuring management tools constantly reflect the most current state of Vblock Systems. As hardware changes, the inventory auto-updates.
      • Identification:  Enabling the converged system view. All component details (hardware model, status, serial numbers, etc.) are fed up to the user interface.
      • Validation:  Providing system assurance and ensuring compliance with the interoperability matrix. Ensuring running firmware is compatible across components and across Vblocks.
      • Health Monitoring:  Expediting diagnosis and remediation. Correlating hardware warnings, errors and failures with the virtual environment.
      • Logging:  Promoting rapid troubleshooting. The Vblock consists of many hardware devices. Vision assembles all log activity into one location, for review during issue resolution.
      • Open API:  Simplifying integration, backed by VCE and an open community of developers.

The software is delivered pre-installed on all new Vblock Systems, and includes:

    • The VCE Vision Intelligent Operations System Library that runs on all Vblock Systems to implement core functionality, including discovery, compliance, and the object model.
    • A plug-in for vCenter and an adapter for vCenter Operations Manager that enable native integration with these two key components of the VMware management toolset.
    • A software development kit, which includes the API documentation, sample code, Java bindings and a simulator, as well as an online developer community to help foster innovation.

It is important to mention where Vision fits. It is not a replacement for your current M&O solution. It complements an existing investment in almost any orchestration product that can take advantage of the API. For example, Cisco Intelligent Automation for Cloud (CIAC) could kick off a workflow to request system health from Vision before it begins a service provisioning activity. CIAC would receive the status back and determine whether to proceed or flag a hardware issue for follow-up. The same goes for DynamicOps, Cloupia, BMC or CA products.
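To make that pre-check concrete, here is a minimal sketch of what an orchestrator-side health check could look like. The base URL, endpoint path and response field are illustrative assumptions rather than the documented Vision API; the SDK documentation defines the real resources.

```python
# Sketch of an orchestration pre-check against the Vision Open API.
# The base URL, endpoint path and "overallStatus" field are illustrative
# placeholders -- consult the Vision SDK documentation for the real resources.
import requests

VISION = "https://vision.example.local/api"   # hypothetical base URL

def vblock_is_healthy(system_id: str) -> bool:
    resp = requests.get(f"{VISION}/systems/{system_id}/health",
                        auth=("svc_orchestrator", "changeme"), verify=False)
    resp.raise_for_status()
    return resp.json().get("overallStatus", "unknown").lower() == "ok"

if vblock_is_healthy("vblock-300-01"):
    print("Health check passed - proceed with the provisioning workflow")
else:
    print("Hardware issue detected - flag for follow-up and halt provisioning")
```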

vCenter Integration: Vblock next to your virtual data center!!


vCenter Operations Manager – Health Monitoring


This raises the bar in the converged infrastructure market and there’s much more to come.

VCE Product Launch – Unleash Simplicity

VCE is launching a number of new products and updates today, some of which I have been very involved with over the past year: the Vblock System 100 and System 200, a new modular design for the 300 and 700, plus Vision Intelligent Operations.

Vblock 100

A long time in the works, the Vblock 100 is an affordable integrated infrastructure product which can support approximately 50 to 100 VMs (or 200 VDI users). It is meant for branch offices, small data centers or the lab in my garage. Every Vblock, including the 100, is built in the factory, preconfigured, validated and supported as a single product.

I’ve beta tested and deployed the 100; it’s a great little product. We have been excited to release it. The release date was held back until Vision was ready and then timed with our launch event.

Vblock 100s in the Factory

The 100 comes in two preset models: the 100BX and 100DX. The BX comes in a 24U rack with 3-4 C220 servers and the DX comes in a 42U rack with 3-8 C220 servers. The BX has an EMC VNXe 3150 while the DX has a VNXe 3300. Both have a pair of Cisco Catalyst 3750-X switches. The factory deploys a Logical Advanced Management Pod (LAMP) to host all of the software used to manage the system. This includes vSphere, SQL, Update Manager and Vision Intelligent Operations. I’ll post a write-up on Vision next.

The main differences between the 100 and 300 (aside from size) are that the network is 1Gb versus 10Gb and storage is NFS or iSCSI only. There is no SAN in the Vblock 100. The Vblock 100 is pretty simple and I expect to see many deployed this year.

Vblock 100DX

Vblock 200

The Vblock 200 fits in between the 100 and 300, targeted for mixed workloads and mid-size data center needs. The 200 adds SAN connected storage with the VNX 5300 and 10Gb Ethernet with a pair of Nexus 5548 switches. It scales from 4 to 12 C220 servers and is sized to fit into one 42U rack. It also leverages the LAMP and Vision Intelligent Operations tool.

The 200 will be a workhorse due to the product design and price point. It can serve as a management environment for large data center deployments, an Exchange environment, SAP, VDI, Unified Communications, etc.

Vblock 300

The Vblock System 300 has been updated with a new modular design. Previously, when a 300 was sold the component layout was cookie cutter: two UCS chassis at the bottom of the rack, fabric interconnects, MDS, then Nexus switches and finally VNX storage on top. With the modular design, each Vblock build depends on the equipment ordered. We now support 2 to 16 UCS chassis (up from 2 to 8), Nexus 5548 and 5596 Ethernet switches, 6248 or 6296 fabric interconnects, 2204 or 2208 IOMs and four EMC VNX arrays (5300, 5500, 5700 & 7500).

Vblock 700

Similar to the 300 update, the Vblock System 700 now uses a new modular design. That is kind of where the similarities end. The Vblock 700 scales from 2 to 28 racks, 4 to 128 blades and 24 to 3200 drives. The 700 can be equipped with an EMC VMAX 10K, 20K or 40K storage array. Network hardware includes the Nexus 5548 or 5596 or even the 7010. SAN hardware is either the MDS 9148 or 9513. 128 blades not enough? No problem, the 700 can scale up to 384 blades by integrating additional Fabric interconnects.

 

So no matter what the workload requirements are, VCE has a Vblock for you.

** Next blog post will cover Vision Intelligent Operations

VCE Customer Technology Center

Workload Mobility over distance with VMware, VPLEX & OTV

I have had the pleasure of configuring VCE’s Workload Mobility demo for EMCworld, CiscoLive! and VMworld. We have a number of Vblock Systems at each conference in the Cisco, EMC and VCE booths. It made perfect sense to connect them together and show off our Workload Mobility Solution. Besides, isn’t cloud all about the ability to offer services from anywhere? Sure, in our demo it’s just across the trade show floor, but it could easily be miles apart.

 

We have three Vblock 300 systems located in the VCE, EMC and Cisco booths. An additional network aggregation rack has been added to each Vblock system to house Nexus 7010 switches, EMC RecoverPoint appliances and EMC VPLEX engines. Panduit provided 1000 feet of fiber trunk cable containing 6 pairs of fiber, which has been hung from the ceiling between booths.

 

The Nexus 7010 switches are providing our core network services, making each booth its own data center. RecoverPoint and VMware Site Recovery Manager are handling traditional long-haul disaster recovery. VPLEX Metro is providing active-active storage clustering capabilities: the ability to stretch a VMware vSphere cluster between two sites today, and up to four in the future. VPLEX Metro provides storage array block-level LUN consistency and data availability, while OTV on the Nexus 7000 series switches provides layer 2 network services.

 

Diagram: VCE Vblock WLM plan

 

Let’s take a step back for a moment and look at what makes this “cool”. Traditionally, migrating data and applications within or between data centers involves manual steps and data copies, where IT would either make physical backups or use data replication services to get the data from site A to site B.

 

VMware clusters operate local to one data center, so moving VMs between data centers typically requires an outage, an IP address change and time to bring up the services. To enable elastic cloud computing, IT must find new ways to address deployment of applications across distance, including maintenance with little or no downtime, disaster avoidance, workload migration, consolidation and growth. IT also wants to be able to actively run workloads across multiple locations using vSphere stretched clusters.

 

In comes the VCE Workload Mobility Solution

Excerpt from our solution document (http://www.vce.com/pdf/solutions/vce-workload-mobility.pdf): “The VCE Workload Mobility Solution (WLM Solution), an integrated solution that enables fast and simple application and infrastructure migration in real time with no downtime or disruption, can help overcome these migration challenges.

 

Utilizing the Vblock Infrastructure Platforms, EMC VPLEX Metro, and VMware vMotion, the WLM Solution removes the physical barriers within, across, and between data centers to facilitate data center resizing, consolidation and relocation, technology refresh, and workload balancing. The WLM Solution provides mobility to virtual applications, virtual machines, virtual networks, and virtual storage.”

 

Diagram: WLM High level drawing

 

Our live demonstrations at EMCWorld, CiscoLive! & VMworld show how this works and what to expect in a production environment. We have three presentations that show what happens when a Vblock system protected by WLM experiences a failure (complete power off), and how to recover from that failure. We have set up two independent VPLEX clusters: Cluster 1 (VCE site A to EMC site B) and Cluster 2 (VCE site A to Cisco site B). Using Storage vMotion you can move VMs between all three sites, but they will only be protected between two sites.

 

Workload Mobility

We are featuring Workload Mobility in the Cisco booth. It comes into play when you need to proactively vMotion VMs between sites, letting you utilize precious memory, CPU and storage IOPS at both sites for the same workload at the same time. Maybe you want to update firmware, make changes or refresh hardware at one site: move all the VMs to the other side and perform maintenance with no outage. VPLEX keeps the VMware datastore in sync at both sites and the Nexus switches make sure network traffic to your VMs keeps flowing.
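Because the VPLEX distributed datastore is already presented at both sites, the evacuation itself is just a compute vMotion. Here is a minimal pyVmomi sketch of that idea; the vCenter address, credentials and host names are placeholders for a lab like this, not the demo’s actual configuration.

```python
# Sketch: proactively vMotion every powered-on VM off the site A hosts
# before maintenance. Host names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab-only: skip cert checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_host(name):
    """Look up an ESXi host by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        return next(h for h in view.view if h.name == name)
    finally:
        view.Destroy()

target = find_host("esx-siteb-01.example.local")
for src in ("esx-sitea-01.example.local", "esx-sitea-02.example.local"):
    for vm in find_host(src).vm:
        if vm.runtime.powerState == vim.VirtualMachine.PowerState.poweredOn:
            # The datastore is shared via VPLEX, so only compute moves.
            vm.Migrate(host=target,
                       priority=vim.VirtualMachine.MovePriority.highPriority)
            print(f"Migrating {vm.name} to {target.name}")

Disconnect(si)
```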

 

Outage / EPO – Emergency Power Off

In the VCE booth presentation we are showing what happens when we kill the power on the Vblock system in the EMC booth and how VMs running on that system utilize VMware HA to restart on the surviving Vblock system and recover. VPLEX Metro has been keeping the datastore in sync and the Nexus switches now send network traffic to these VMs in the VCE booth. It takes about a minute for vCenter to register them as down and bring the VMs back up. Not a bad recovery time for a complete outage. VMware Fault Tolerance can take this a step further to keep the VM up during a host failure.

 

Restoration

In the EMC booth we are demonstrating Vblock system restoration, highlighting how VPLEX resumes distributed datastore replication and how DRS, affinity rules and vMotion can be used to automatically move VMs back to a “recovered” Vblock system. VPLEX maintains a journal of writes performed while one site was down. This delta set of I/O transactions is sent over the WAN once the VPLEX link is restored. VPLEX shows how large this delta set is and the estimated time to completion. If your data center experienced a long outage you can still move VMs back to this Vblock system!

 

Demo Environment Detail

Each Vblock system has up to ten B200 M2 blades installed in the UCS chassis, broken up into clusters to support demo workloads for each booth. Our VMware environment has one vCenter Server running on the management pod in the VCE booth, which manages three clusters with four hosts each. Two hosts come from the primary Vblock system and the other two come from the secondary, forming stretched VMware clusters between the booths. HA and DRS are enabled on the clusters. Each booth has demo workstations connected to 1Gb Ethernet ports on the Nexus 7Ks and has access to vCenter in the VCE booth.
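In a stretched cluster like this, DRS VM-host “should run on” rules are the usual way to keep each booth’s demo VMs on their home site during normal operation while still letting HA restart them on the surviving site. The sketch below shows the idea with pyVmomi; the cluster name, host naming convention and VM name prefix are assumptions for illustration, not the demo’s actual names.

```python
# Sketch: create a DRS "should run on hosts in" rule so the CRM demo VMs
# prefer the site A hosts but can still restart at site B after a failure.
# Cluster, host and VM names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Stretched-Gold")
view.Destroy()

site_a_hosts = [h for h in cluster.host if "sitea" in h.name]
vm_view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
crm_vms = [v for v in vm_view.view if v.name.startswith("crm-")]
vm_view.Destroy()

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add", info=vim.cluster.HostGroup(
            name="SiteA-Hosts", host=site_a_hosts)),
        vim.cluster.GroupSpec(operation="add", info=vim.cluster.VmGroup(
            name="CRM-VMs", vm=crm_vms)),
    ],
    rulesSpec=[
        vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
            name="CRM-prefers-SiteA", enabled=True, mandatory=False,
            vmGroupName="CRM-VMs", affineHostGroupName="SiteA-Hosts")),
    ])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```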

 

On the Vblock systems themselves we have a number of “demo pods”. We have a CRM application simulator using Swingbench and an Oracle 12g database, as well as Microsoft Exchange, VMware vCloud Director, Cisco Unified Communication and VMware View environments running live.

 

Each cluster has storage protected by VPLEX Metro which manages the volume. This is called a distributed volume and leverages EMC’s AccessAnywhere technology. Multiple VPLEX engines can be installed to protect and distribute these volumes. These engines are based on the same hardware as used in the VMAX, and they have the same cache, modules and reliability numbers for uptime.

 

Two directors exist in each VPLEX engine and they contain four back-end Fibre Channel (FC) ports, which are zoned to the VNX storage arrays, and four front-end FC ports, which are zoned to the hosts. Two 10Gb Ethernet ports or four FC ports are used for WAN communication to the remote VPLEX. Note that using FC WAN/COM requires external FCIP switches such as the Cisco MDS 9200 series. The VPLEX cluster has a management server which maintains a VPN connection to the management server at the remote data center. Up to 8,000 volumes can be protected by a single VPLEX. Your data centers will need to provide WAN latency of less than 10ms RTT, which limits distance to about 100km, though it really depends on the quality of the link (it could be shorter or longer).
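For a rough sense of where that 10ms goes: light in fiber travels at roughly 200,000 km/s, so propagation alone costs about 1ms of round-trip time per 100km, and the rest of the budget is consumed by the VPLEX engines, switches, any FCIP gear and overall link quality. A quick back-of-envelope sketch (the figures are approximations):

```python
# Rough round-trip propagation delay for a given fiber path length.
# Assumes ~200,000 km/s in fiber and ignores device, serialization and
# FCIP latency, which is why practical guidance lands near 100 km.
def fiber_rtt_ms(distance_km: float, speed_km_per_s: float = 200_000.0) -> float:
    return 2 * distance_km / speed_km_per_s * 1000.0

for km in (50, 100, 300, 1000):
    print(f"{km:>5} km path -> ~{fiber_rtt_ms(km):.1f} ms RTT from propagation alone")
```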

 

A number of VLANs are set up on the switches to support the demos: management, vMotion, ESXi host, NFS, Workloads. These VLANs are all extended via OTV overlay interfaces between the Nexus 7Ks.

 

OTV runs over a Layer 3 network, which can be provided by the Nexus 7Ks or another Layer 3 router. Multiple Virtual Device Contexts (VDCs) are required to run OTV and Layer 3 on the same Nexus switch. A VDC separates the switch into multiple logical switches with physical ports allocated to each VDC. In order to pass traffic between VDCs on the same Nexus 7K, we cross-connected ports from line cards 2 and 3. For redundancy, port channels are used to bind multiple ports together for Layer 2 trunks and Layer 3 interfaces. Switched virtual interfaces (SVIs) have been created on each switch for each VLAN using HSRP. We enabled EIGRP (Enhanced Interior Gateway Routing Protocol), which automatically maintains the routing topology.
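To give a feel for what the OTV side of that looks like, here is a small sketch that renders an illustrative NX-OS OTV overlay configuration per site. The VLAN ranges, interfaces, site identifiers and multicast groups are placeholder values, not the demo’s actual configuration.

```python
# Sketch: render an illustrative NX-OS OTV overlay configuration per site.
# All values below (VLANs, interfaces, site IDs, multicast groups) are
# placeholders, not the demo's actual configuration.
OTV_TEMPLATE = """\
feature otv
otv site-vlan {site_vlan}
otv site-identifier {site_id}
interface Overlay1
  otv join-interface {join_if}
  otv control-group {control_group}
  otv data-group {data_group}
  otv extend-vlan {extend_vlans}
  no shutdown
"""

sites = {
    "VCE booth":   {"site_vlan": 99, "site_id": "0x1", "join_if": "Ethernet2/1"},
    "EMC booth":   {"site_vlan": 99, "site_id": "0x2", "join_if": "Ethernet2/1"},
    "Cisco booth": {"site_vlan": 99, "site_id": "0x3", "join_if": "Ethernet2/1"},
}
common = {"control_group": "239.1.1.1", "data_group": "232.1.1.0/28",
          "extend_vlans": "100-104,200"}

for name, values in sites.items():
    print(f"! --- {name} ---")
    print(OTV_TEMPLATE.format(**values, **common))
```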

 

Diagram: Nexus 7010 switches, trunk connections and VDCs

 

A team of five engineers built all three Vblock systems from the ground up and configured VPLEX and the Nexus 7K OTV/L3 network in about 15 days. We will have this demo running at each show this year, then move it to our RTP lab.

 

 

Photo: Vblock racks at the Staging lab in San Jose, CA

 

**Originally posted on CiscoDC Blog for CiscoLive! 2012

Cisco Cloud Mega Test

Posted on Cisco’s Data Center blog http://blogs.cisco.com/datacenter/ciscos-cloud-verse-vce-and-eantc-cloud-mega-test/#more-69949 May 22nd, 2012

“Over the past four to five months, there has been significant buzz about VCE’s role in the EANTC Cloud Mega Test.  I was lucky enough to be a part of the test team, and I wanted to share some of my experiences in working on this fantastic project with EANTC and Cisco.

It started with a bang, of course.  Back in late January, Light Reading published their first report on the testing EANTC had done of Cisco’s CloudVerse architecture. I was at Cisco Live London where details of the test were first shared and members of the CloudVerse team were in attendance to share the results. Over the next couple of months, EANTC followed that up with other reports in the series.  All in all, they covered the Cisco Unified Data Center that is the foundation for cloud services, Cloud Intelligent Networks, Cloud Applications & Services, and Long-haul Optical Transport used in delivering cloud-based services.  Of course, I wasn’t involved in all of that.

As with all of the Mega Test programs (the Mobile Mega Test and Medianet Mega Test being the ones that Light Reading conducted previously), these programs are a big deal.  Cisco spends millions of dollars — literally — on lab infrastructure, engineers and communications for each one of these tests.  Light Reading has EANTC come in to provide independent, objective oversight and testing.  And when the report comes out, there is a lot of buzz in the industry on exactly what went on.  It’s not every day we get to play in a multi-million dollar sandbox!  I was one of several dozen people from Cisco, VMware, VCE, EMC and Ixia working on this project.

As the buzz about the test bounced around in the industry, a sidebar conversation emerged about VCE’s involvement in the test. As you may know from social media, I’m a Principal vArchitect with VCE Corporate Engineering.  Essentially, my job is to make sure that customers get the most out of VCE’s technology: Vblock Systems.  The Vblock system is pre-engineered, pre-tested converged infrastructure that combines Cisco’s computing and networking equipment, EMC’s storage equipment, and virtualization from VMware.  VCE itself operates as a joint venture between Cisco and EMC with investments from VMware and Intel.

One of the things that was missed in the excitement over the test results themselves was the fact that the Vblock system played a big part in the Cloud Mega Test.

 

Of course, the test wasn’t intended to examine just the Vblock system.  The Mega Tests have always been about putting together all of the moving parts necessary for Service Providers to deliver a specific service to their subscriber base.  This Mega Test focused on the cloud, so the data center, the network, the applications, and the transport were all part of the test.  Still, Cisco needed *something* to act as the data center infrastructure… and what easier way to quickly implement a data center than rolling in a big Vblock Series 700?

I was fortunate enough to be one of the people who got to help Cisco with this.  With the timeline for the tests being as tight as they were, everyone at Cisco wanted to bring in large chunks of technology that they knew would “just work.”  They didn’t want to spend time piecing together all of the components and worrying if a firmware rev. on some element somewhere would unexpectedly wreak havoc at the worst time.  This is exactly the problem that the Vblock system solves.

The system we used for the Cloud Mega Test was a Vblock Series 700 MX, which was an ideal choice because in any test setup there is a massive amount of work to do in a short period of time. Cisco, VCE and EMC lent experts in each of the technologies being tested. Being able to roll in an operational computing platform, already running ESXi, with storage presented and the Nexus 1000V configured, greatly reduced initial setup time. In fact, a lot of the Light Reading content focuses on the solution and not on the Vblock system, since it has become a foundational element.  Choosing a Vblock Series 700 MX (VMAX based) instead of a 300 (VNX based) came down to the fact that VMAX provides a scalability factor that VNX can’t match. In this case, the VMAX was a better choice because it offers up to eight engines with more cache, more Fibre Channel connectivity and simply more I/O paths than the VNX. VCE has service providers that use both VMAX and VNX depending on the business model they are trying to support. VMAX typically fits most of their requirements, especially with the recently announced VMAX SP (VMAX Service Provider).  VMAX SP will provide APIs and a pre-defined architecture that fits into an SP’s business model.

VCE has spent a lot of time developing a multi-tenant solution architecture for Vblock systems; I was close to this effort last year and continue to support it from time to time. Cisco’s Cloud Mega Test and VCE’s secure multi-tenancy solution provide guidelines for tenant isolation, compliance, governance, auditing, security, logging and QoS, plus a framework allowing the customer to choose which pieces they want to implement. One of VCE’s core values is allowing partners to integrate with our platform, giving our customers choice over which products they want to use.

The best part about using Vblock systems for this effort is that we build them in the factory. Our professional services organization is a well-oiled machine, able to take the logical configuration document we created during our planning discussions and apply it to the hardware components. With vCenter and ESXi installed on all the blades, and storage and the Nexus 1000V fully configured, all we do is roll the cabinets into the data center and connect power and network. Integration into the Cloud Mega Test was a breeze. This allowed our team to focus on building the test virtual machines, the vCloud Director environment and the vCloud Connector implementation.

When marketing asked me to comment on the Vblock system setup for the Cloud Mega Test, I initially thought I wouldn’t have much to add, since I didn’t do much with the hardware; everything I did was with the VMware stack. This is clearly why VCE is succeeding in the market. We have architected the platform to serve up virtualized workloads and it excels. I know this sounds like fluff, but many of us have spent years as partners delivering solutions where the vendor is trying to solve business problems with a bill of materials. Having to be the person who takes a truck load of goods and turns it into a functional solution is time-consuming and nerve-wracking. It has been nice to be involved with projects such as the Cloud Mega Test where we get to focus further up the stack. I have to admit that I didn’t modify the VMAX configuration during the setup; I did, however, fix a problem with another vendor’s storage system.

The VMAX storage array performed exactly as expected, which was a good thing but not that exciting of a story for readers. I think the takeaway here is that EMC VMAX offers the most horsepower when it comes to storage: up to 2,400 drives, 128 Fibre Channel ports and 1TB of cache across eight engines. Most storage arrays sold have two storage processors, less cache and a handful of Fibre Channel ports. There is no better way to provide storage to a mixed workload in a multi-tenant environment. With respect to servers, we’ve moved from rack mount and old school blade chassis to fabric interconnects, converged 10Gb Ethernet, and converged network adapters (Ethernet/Fibre Channel) configurable as 2 to 128 virtual interfaces. VCE employs a structured architecture and a validated support matrix, backed by a dedicated support organization. Today, up to eight UCS chassis are supported in a Vblock system, which means up to 64 blades can be managed through one interface.

Many of the Cisco engineers on the Mega Test project were involved with the Vblock system before VCE was formed and Vblock became a product. I have been with VCE for a year and a half, and about six months before that, Cisco built one of the very first Vblock systems in RTP. A lot of fantastic stuff comes from these marketing efforts. Watch closely as much of the CloudVerse Mega Test turns into products from Cisco, VCE or your favorite Service Provider.

Being part of the EANTC Cloud Mega Test was a great experience.  Not only did I get to work with a lot of high-end technology, but I worked with a bunch of amazing people too.  The folks at EANTC are incredibly bright, and they didn’t cut Cisco any slack on the test plan.  The Cisco folks were super, too.  It’s amazing just how broad a solution portfolio they have at Cisco.  They have literally everything that they need to build a public cloud infrastructure.

The best part was that this whole experience confirmed for me just how valuable the Vblock system can be for our customers.  Without the pre-testing and validation that we do here at VCE, there is no way we would have been able to pull off this test plan.  I would probably still be in the Cisco labs, tweaking some setting somewhere to work out just one more kink.  But the Vblock system came through with flying colors in a really rigorous environment.

I’m really hoping that Cisco does more of these Mega Test programs, because I’d love to do it again.  If you happen to be at EMC World this week, swing by booth 410 or 515. I’d be happy to chat about the test further. Leave comments below and suggest what you might want to see tested next.  Perhaps they’ll take our suggestions… who knows!

BTW, if you want to read the official reports, you can find them here:

http://www.lightreading.com/ciscoseries

http://www.cisco.com/go/cloudmegatest

http://www.vce.com/cloudmegatest

My VMware ESXi 4.1 home lab

With my new role at VCE as a Solutions vArchitect and virtualization evangelist, I get to work with a lot of cool stuff, tech I want to test myself, and it makes a lot of sense to have a home lab for testing and professional development. Sure, there are lab resources at work and we get to work with all that cool tech, but the environment is dynamic. A home lab has other uses besides learning VMware and testing. There are some examples on lifehacker.com: a television personal video recorder, or secure wireless connections when using public WiFi. Each of these can be a VM running in the lab.

I have read a number of articles on home labs and wanted to add my experience with building one. As we all know technology changes quickly, so it may be helpful to share what I’m finding success with as of March 2011. Many blog posts out there cover hardware from 2009, when desktops were using DDR2 memory and Intel Core 2 Duo processors. Those systems were limited to 8GB of RAM. This year, second-generation Intel Core i3, i5 and i7 processors are available, with prices on the first generation dropping. These processors and the P55/H55 chipsets use DDR3 memory, which allows for 16GB of RAM (double the amount of a DDR2 memory system). Intel Core i3/i5/i7 systems are more expensive but worth it for the amount of RAM you can pack in. I also priced AMD processors, but the price gap is small when comparing current technology.

I spent under $375 per PC running bare bones (no case, monitor, keyboard or HDD), and about $450 per PC with a case and HDD. Newegg.com, buy.com, tigerdirect.com and Fry’s seem to offer the best prices on computer parts (please leave comments with other good sites). Prices on PC parts change with the weather depending on weekly sales, mail-in rebates and changes in current hardware.

The setup

Hardware:

  • Processor: Intel Core i3 540 $119
  • Memory: 16GB DDR3 Corsair (2 x 4GB) $75 x 2 packs
  • Motherboard: Intel DH55TC $89 – Onboard video, Gigabit NIC, 4 memory slots, 12 USB ports, PCI slot.
  • Hard Drive: 500GB Seagate Barracuda 7200RPM $39
  • Network Card: Intel 10/100/1000 GT PCI NIC $29
  • MicroAtx Case & 350Watt Power Supply $39
  • Shared storage: Iomega ix2 Storage device $189
  • Network Switch: Cisco Small Business 10/100/1000 5-port $47

Alternate Hardware:

  • Processor: AMD Phenom Quad Core 9750 CPU $100
  • Memory: 8GB DDR2 (4 x 2048MB) $112
  • Motherboard: Asus M4A785-M $50

Software:

  • VMware ESXi 4.1 Update 1 Installable
  • VMware ESXi 4.1 Update 1 Installable on USB stick (Instructions: http://www.vladan.fr/how-to-install-esxi-40-on-usb-memory-key/)
  • VMware vSphere Client
  • VMware vCenter Server installed onto a VM
  • Microsoft Windows 2003 Enterprise Edition (less overhead / smaller install than server 2008)
  • Microsoft Windows 7 Professional 32-bit (choose any client OS you prefer)
  • Celerra NAS VSA

The process for registering and downloading VMware software remains unchanged. You can go to http://www.vmware.com and find numerous links to 60-day trials for all their software. ESXi installable is free to use which means your VMs will always run even if some licenses expire. I was interested in testing VMware View which included trial license keys and download links for vCenter, vShield Endpoint, View and clients.

There are two inexpensive routes for shared storage that provide NFS and iSCSI mounts. Celerra VSA (free from EMC to use) or a NAS device such as the Iomega iX2 or iX4. Other options exist but since I work with Celerra NAS I think it’s cool to have the software running in my lab. The Celerra VSA runs as a VM on one of your hosts. A better solution may be the Iomega iX2 which runs external to your ESX hosts. Shared storage is required for clustering ESX hosts.

How did VMware install?

Simple! Insert CD media, boot, install, repeat on all hosts. No issues with the Intel motherboard. The H55/P55 chipsets work with ESX. USB, NIC, SATA and video all worked fine. You have to go into the BIOS to enable VT and disable Execution Prevention Code. Clustering, HA, DRS and FT all seem to be working. You also have the option of running the hosts without a hard drive, booting ESXi from USB sticks. Each host would then need to be configured with an NFS or iSCSI mount.
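Mounting the same NFS export on every host is what turns the Iomega or Celerra VSA into usable shared storage. A minimal pyVmomi sketch of that step is below; the host names, credentials, NFS server address and export path are placeholders for my lab, so adjust them for yours.

```python
# Sketch: mount the same NFS export on each lab host so they share a datastore.
# Host names, credentials, the NFS server address and export path are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NFS_SERVER, NFS_PATH, DS_NAME = "192.168.1.50", "/nfs/lab", "lab-shared"

for esx in ("esx01.lab.local", "esx02.lab.local"):
    ctx = ssl._create_unverified_context()        # home lab: self-signed certs
    si = SmartConnect(host=esx, user="root", pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                           # connected directly to the host
    view.Destroy()
    spec = vim.host.NasVolume.Specification(
        remoteHost=NFS_SERVER, remotePath=NFS_PATH,
        localPath=DS_NAME, accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted {NFS_SERVER}:{NFS_PATH} on {esx} as {DS_NAME}")
    Disconnect(si)
```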

How to setup VMware

  • Install VMware ESXi onto each of the hosts.
  • Configure the root user password and host IP information.
  • Install the VMware vSphere Client onto a Windows machine and point it to the IP of an ESX host.
  • Create a VM on the ESX host and install Windows 2003 or 2008 Server.
  • Install VMware Tools, OS updates, a static IP, etc.
  • Install vCenter Server and let it install SQL Express (unless you want to set up MS SQL Server first).
  • Once vCenter Server is installed, restart the client and connect to the IP of the Windows server.
  • Optional: Install Update Manager.
  • Install complete (a quick verification sketch follows below).
  • Consider deploying the Celerra VSA virtual machine.
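As a quick sanity check once everything is installed, something like the following pyVmomi sketch can confirm that vCenter sees each lab host and its VMs. The vCenter address and credentials are placeholders.

```python
# Sketch: verify the lab from vCenter -- list each host, its connection state
# and the VMs registered on it. Address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    vms = [vm.name for vm in host.vm]
    print(f"{host.name}: {host.runtime.connectionState}, VMs: {vms}")
view.Destroy()
Disconnect(si)
```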

Example VMware Lab

The following image is an example of VMware ESX, VMs, vCenter & Celerra VSA setup on one PC. You can learn a lot about VMware with one PC but additional PCs are required for testing vMotion, HA, DRS, FT, SRM and other features.

Diagram: ESX, VMs, vCenter and Celerra VSA on one PC

Two example setups using a second PC for vMotion or HA.

Diagram: two-PC lab setups for vMotion or HA

Two more PCs are required to properly test SRM.

Diagram: four-PC setup for SRM testing

Contact me with any questions or comments!