Archive for June 2012


Abiquo: More than a cloud veneer.

 

It appears Abiquo has all the prerequisite features one would expect from a cloud veneer. It sits in front of a virtual environment and provides self-provisioning cloud features to customers. As you would expect from such a product, it supports many hypervisors: VMware, Xen, XenServer, KVM, Hyper-V, and VirtualBox. It has a service catalog with standard templates, shared/community templates, and customer-owned templates. It has resource limits controlling how much any one customer is allowed to consume. It has LDAP integration for authentication. It includes the ability to create private networks and allows customer branding. It seems to have nailed the basics. But what makes Abiquo stand out?

 

The ease of implementation provides an interesting argument for Abiquo. During installation, Abiquo scans an address range looking for hosts and adds the hosts it finds to its console. It then scans those hosts for virtual machines and lets the installer easily add those virtual machines to the console too. This allows Abiquo to be installed quickly into an existing virtual environment.
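The discovery step above can be sketched in a few lines. This is a hypothetical illustration of the general idea (sweep a range, probe each address, collect responders), not Abiquo's actual installer logic; the `probe_host` stand-in and all names are assumptions.

```python
import ipaddress

def probe_host(ip):
    """Stand-in for a hypervisor management-port probe.
    In this mock, hosts whose final octet is even 'respond'."""
    return int(ip.split(".")[-1]) % 2 == 0

def discover(cidr):
    """Sweep a CIDR range and return the addresses that answered."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()
            if probe_host(str(ip))]

hosts = discover("192.168.1.0/29")
print(hosts)  # the even-numbered addresses in the range, in this mock
```

A real implementation would replace `probe_host` with a connection attempt to each hypervisor's management API, then enumerate the VMs on each responding host.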

 

Abiquo has detailed resource limits that can implement controls down to the number of CPUs, amount of memory, number of public IP addresses, and more. In addition to providing storage from the virtual environment for cloud consumption, Abiquo can use NetApp APIs to provide storage-as-a-service and tiered storage (with different pricing per tier). Abiquo uses rules and algorithms to determine virtual machine placement. And finally, Abiquo comes with built-in V2V conversion so that virtual machines may be moved between competing hypervisors.
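As a minimal sketch of how per-customer resource limits work in practice (the caps, field names, and check below are assumptions for illustration, not Abiquo's data model): a provisioning request is accepted only if current usage plus the request stays within every configured limit.

```python
# Assumed per-customer caps; a real product would store these per tenant.
LIMITS = {"cpus": 16, "memory_gb": 64, "public_ips": 4}

def can_provision(usage, request):
    """Return True if usage + request stays within every limit."""
    return all(usage.get(k, 0) + request.get(k, 0) <= cap
               for k, cap in LIMITS.items())

usage = {"cpus": 12, "memory_gb": 48, "public_ips": 3}
print(can_provision(usage, {"cpus": 2, "memory_gb": 8, "public_ips": 1}))  # True
print(can_provision(usage, {"cpus": 8, "memory_gb": 8, "public_ips": 0}))  # False: CPU cap exceeded
```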

 

In the end, Abiquo is a step ahead of other products in its class (such as CloudStack and OpenStack), and can serve as a low-cost replacement for vCloud Director.

Posted June 24, 2012 by cloudbusterspodcast in Uncategorized

Tintri VMstore Flash-Based Storage Array

Flash storage, deployed in solid-state drives (SSDs), can be 400 times faster than disk. However, SSD is expensive. A cottage industry of companies like Tintri is trying different approaches to using SSD in new ways.
Traditional storage arrays implement SSD as a read/write cache: they attempt to predict what will be needed based upon current activity and move items onto SSD to improve performance. Tintri VMstore takes a different approach. It uses SSD as primary storage for "hot" data, and offloads "cold" data to SATA. It determines which data is hot based upon performance information gathered at the level of individual VMware VM disks. Tintri believes that gathering information at this level is more efficient than gathering it at the LUN level, because caching often guesses wrong and leaves idle data sitting in the cache. And because Tintri looks at VM disks directly, it is easier to identify I/O bottlenecks.
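The hot/cold placement idea can be sketched as follows. This is a rough illustration of the concept described above, ranking VM disks by recent I/O activity and filling flash with the busiest data first; the thresholds, field names, and greedy strategy are assumptions, not Tintri internals.

```python
def place(disks, flash_capacity_gb):
    """disks: list of (vm_disk_name, size_gb, recent_iops).
    Greedily assign the busiest disks to flash; spill the rest to SATA."""
    placement = {}
    free = flash_capacity_gb
    for name, size, _ in sorted(disks, key=lambda d: d[2], reverse=True):
        if size <= free:
            placement[name] = "flash"   # hot data stays on SSD
            free -= size
        else:
            placement[name] = "sata"    # cold (or oversized) data is offloaded
    return placement

disks = [("db-vm", 100, 5000), ("web-vm", 50, 800), ("backup-vm", 400, 20)]
print(place(disks, 150))  # {'db-vm': 'flash', 'web-vm': 'flash', 'backup-vm': 'sata'}
```

The key point the sketch captures is the unit of decision: placement is per VM disk, using stats observed at that level, rather than per LUN.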
Since Tintri determines which data lives on SSD versus SATA, they argue that administrators no longer need to be concerned about storage tiers. I argue that unless you have enough Tintri storage to meet all of your needs, you'll still contend with storage tiers.
Since SSD is expensive, it must be used as efficiently as possible. Tintri accomplishes this by first de-duplicating data, and then compressing it. Wouldn't this take a performance hit? Remember, SSD can be 400 times faster than traditional disk, so the cost of dedup and compression is insignificant compared to the speed of the drive.
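The dedupe-then-compress pipeline can be sketched like this. It is an illustration of the general technique (identical blocks stored once, keyed by content hash; each unique block compressed before landing on flash), with the block size, hash choice, and compressor as assumptions rather than Tintri's actual design.

```python
import hashlib
import zlib

def store(blocks, block_store):
    """Store raw blocks, deduplicating via SHA-256 and compressing with zlib.
    Returns one reference (content hash) per logical block."""
    refs = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:              # new data: compress and keep it
            block_store[digest] = zlib.compress(block)
        refs.append(digest)                        # duplicates just reuse the reference
    return refs

store_map = {}
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # third block duplicates the first
refs = store(blocks, store_map)
print(len(store_map))       # 2 unique blocks physically stored
print(refs[0] == refs[2])   # True: duplicate resolved to the same reference
```

Note the ordering matters: deduplicating first means identical blocks are never compressed (or stored) twice.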
SSDs use multi-level cells to hold data, and these cells can be overwritten only 5,000 to 10,000 times before they wear out. If your cache wears out, who cares? Just replace it. But if your primary storage wears out, you have a big problem! Tintri combats this by using SSD efficiently via dedup and compression, and by adding error checking and RAID 6 for when an SSD does wear out.
Since SSD can be 400 times faster than disk, you can do some creatively inefficient things, such as dedup, compression, and data-integrity checking, and still come out faster. With error checking and parity, the system will 'heal itself' from disk problems on the fly. It also uses NVRAM as a write buffer so that if the Tintri fails mid-stream, the last write can still be reconstructed: the hypervisor decides to write data; the data goes to the primary NVRAM; the data is copied to a secondary NVRAM; then the hypervisor is told the data was written; Tintri confirms the NVRAM data is complete; finally, the NVRAM contents are written to SSD. Again, you would think this might slow things down, but not when you are working 400 times faster than traditional disk.
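The mirrored-NVRAM write path above can be sketched step for step. The class and method names are hypothetical; only the ordering (mirror to both NVRAMs before acknowledging the hypervisor, flush to SSD afterwards) follows the description.

```python
class WritePath:
    """Illustrative model of a write buffered through mirrored NVRAM."""
    def __init__(self):
        self.primary_nvram = []
        self.secondary_nvram = []
        self.ssd = []
        self.log = []

    def write(self, data):
        self.primary_nvram.append(data)        # 1. data lands in primary NVRAM
        self.secondary_nvram.append(data)      # 2. data is mirrored to secondary NVRAM
        self.log.append("ack")                 # 3. only now is the hypervisor acked
        assert self.primary_nvram == self.secondary_nvram  # 4. confirm the mirror is complete
        self.ssd.append(self.primary_nvram.pop())          # 5. flush to SSD
        self.secondary_nvram.pop()             # 6. retire the mirrored copy

wp = WritePath()
wp.write(b"block-1")
print(wp.ssd, wp.log)  # [b'block-1'] ['ack']
```

The point of the ordering is durability: once the hypervisor sees the ack (step 3), the write exists in two NVRAMs, so a failure before the SSD flush still leaves a recoverable copy.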
Another nice item: Tintri does automatic disk alignment for virtual machines.
Much of the Tintri literature compares how much faster SSD is than disk, how much more efficient their VMware-aware performance monitoring is compared to caching, and how Tintri takes advantage of the faster SSD to add efficiency and redundancy for high availability.

Posted June 1, 2012 by cloudbusterspodcast in Uncategorized