
Abiquo: More than a cloud veneer.


It appears Abiquo has all the prerequisite features one would expect from a cloud veneer. It sits in front of a virtual environment and provides self-provisioning cloud features to customers. As you would expect from such a product, it supports many hypervisors: VMware, Xen, XenServer, KVM, Hyper-V, and VirtualBox. It has a service catalog with standard templates, shared/community templates, and customer-owned templates. It has resource limits controlling how much any one customer is allowed to consume. It has LDAP integration for authentication. It includes the ability to create private networks and allows customer branding. It seems to have nailed the basics. But what makes Abiquo stand out?


The ease of implementation provides an interesting argument for Abiquo.  During installation, Abiquo will scan an address range and look for hosts. It will add the hosts it finds to its console. It will then scan the hosts for virtual machines and allow the installer to easily add those virtual machines to the console too. This allows Abiquo to be installed quickly into an existing virtual environment.


Abiquo has detailed resource limits that can implement controls down to the number of CPUs, amount of memory, number of public IP addresses, and more. In addition to providing storage from the virtual environment for cloud consumption, Abiquo can use NetApp APIs to provide storage as a service and tiered storage (with different pricing per tier). Abiquo uses rules and algorithms to determine virtual machine placement. And finally, Abiquo comes with built-in V2V conversion so that virtual machines may be moved between competing hypervisors.


In the end, Abiquo is a step ahead of other products in its class (such as CloudStack and OpenStack), and can serve as a low-cost replacement for vCloud Director.


Posted June 24, 2012 by cloudbusterspodcast in Uncategorized

Tintri VMstore Flash Based Storage Array

Flash storage, deployed in solid-state drives (SSDs), can be 400 times faster than spinning disk. However, SSD is expensive, and a cottage industry of companies like Tintri is trying different approaches to using it in new ways.
Traditional storage arrays implement SSD as read/write cache: they attempt to predict what will be needed based upon current activity and move items onto SSD to improve performance. Tintri VMstore takes a different approach. It uses SSD as primary storage for “hot” data and offloads “cold” data to SATA. It determines which data is hot from performance information gathered at the level of each VMware VM’s virtual disks. Tintri believes that gathering information at this level is more efficient than gathering it at the LUN level, because caching often guesses wrong and leaves idle data sitting in the cache. And because Tintri looks at VMs’ disks, it is easier to identify I/O bottlenecks.
Since Tintri is determining which data is on SSD versus SATA, they argue that administrators no longer need to be concerned about storage tiers. I argue that unless you have enough Tintri storage to meet your needs, you’ll still contend with storage tiers.
Since SSD is expensive, it must be used as efficiently as possible. Tintri accomplishes this by first de-duplicating data and then compressing it. Wouldn’t this take a performance hit? Remember, SSD can be 400 times faster than traditional disk; the hit is insignificant compared to the speed of the drive.
SSDs use multi-level cells (MLC) to hold data, and these cells can be overwritten only 5,000 to 10,000 times before they wear out. If your cache wears out, who cares? Just replace it. But if your primary storage wears out, you have a big problem. Tintri combats this by using SSD efficiently via dedup and compression, and by adding error checking and RAID 6 for when cells do wear out.
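A back-of-the-envelope calculation (with illustrative numbers of my own, not Tintri's specifications) shows how dedup and compression stretch that limited write endurance:

```python
# Rough SSD endurance estimate. All figures here are illustrative
# assumptions, not vendor specifications.
def years_until_worn(capacity_gb, write_cycles, daily_writes_gb, data_reduction=1.0):
    """With ideal wear leveling, total writable bytes = capacity * cycles.
    Dedup/compression divide the bytes that actually hit the flash."""
    total_writable_gb = capacity_gb * write_cycles
    effective_daily_gb = daily_writes_gb / data_reduction
    return total_writable_gb / effective_daily_gb / 365

# 1 TB of MLC flash rated at 5,000 cycles, 500 GB of writes per day:
baseline = years_until_worn(1000, 5000, 500)           # roughly 27 years
with_reduction = years_until_worn(1000, 5000, 500, 4)  # 4:1 dedup+compression
```

Even at the pessimistic 5,000-cycle end, efficient use of the flash pushes wear-out well past the service life of the array, which is why Tintri leans on data reduction first and RAID 6 as the backstop.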
Since SSD can be 400 times faster than disk, you can do some creatively inefficient things, such as dedup, compression, and data integrity checking, and still be faster. With error checking and parity, the system will ‘heal itself’ from disk problems on the fly. It also uses NVRAM (non-volatile RAM) as a write buffer so that if the Tintri fails mid-stream, the last write can still be reconstructed: the hypervisor decides to write data; the data goes to the primary NVRAM; the data is copied to secondary NVRAM; then the hypervisor is told the data was written; Tintri confirms the NVRAM data is complete; finally, the NVRAM is written to SSD. Again, you would think this might slow things down, but not when you are working 400 times faster than traditional disk.
Another nice item: Tintri does automatic disk alignment for virtual machines.
Much of the Tintri literature compares how much faster SSD is than disk, how much more efficient their VMware-aware performance monitoring is than caching, and how Tintri takes advantage of the faster SSD to add efficiency and redundancy for high availability.

Posted June 1, 2012 by cloudbusterspodcast in Uncategorized

AWS Console Tags

When there are only a dozen or so instances in the console it can be easy to manage. As soon as the quantity approaches fifty you’ll find the AWS console becomes difficult to manage. That’s partially because the EC2 console only displays fifty objects per page and partially because environmental complexity seems to reach a threshold around that number.  This is where tags can come in handy. Tags are additional fields that can be added to each object to help you manage information about the object. Like everything in AWS, you’ll want to develop a good strategy for how you’ll use tags before you get started.

Each object in the EC2 console can have a limited number of tags. If you create the same tags for each instance, you’ll have a common framework for describing the instances. For example, suppose you need to track which instances belong to which departments: you can create a “Departments” tag on each instance to hold that information. The trick is to keep tag names consistent, because they are case sensitive. If the tag names aren’t identical across instances, AWS will treat them as different tags.
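Because keys are case sensitive, “Departments” and “departments” silently become two different tags. A small sketch of a sanity check (the data shape is a simplified stand-in for tag listings, not the real EC2 API) that flags case-inconsistent keys before they spread:

```python
from collections import defaultdict

def find_case_conflicts(instance_tags):
    """instance_tags: dict of instance id -> list of tag key strings.
    Returns keys that appear with more than one capitalization."""
    variants = defaultdict(set)
    for keys in instance_tags.values():
        for key in keys:
            variants[key.lower()].add(key)
    return {low: sorted(forms) for low, forms in variants.items() if len(forms) > 1}

tags = {
    "i-1111": ["Departments", "Owner"],
    "i-2222": ["departments", "Owner"],  # lowercase "d" creates a second tag
}
conflicts = find_case_conflicts(tags)
# conflicts == {"departments": ["Departments", "departments"]}
```

Running a check like this occasionally is much cheaper than discovering, fifty instances later, that half your fleet reports under a misspelled tag.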

When you look at the console you can add tags to the column view by clicking on the Hide Columns button. This will allow you to select which tags will appear as columns on the console. Unfortunately, your selections won’t stick between sessions. Every time you log into the console you will need to reselect your view.

I always suggest creating a tag on your volumes that records which instance owns each volume. This becomes important should you ever terminate the instance: when an instance is terminated, its data volumes detach and remain available to reattach to another instance. Without ownership tags you could end up with several orphaned volumes and no idea what they belonged to. This is avoided by simply tagging every volume you make.
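A sketch of the idea: given volume attachment data (the dict shape here is a simplified stand-in for what a describe-volumes call returns, not the real API), build an “Owner” tag for every attached volume so nothing is orphaned anonymously:

```python
def owner_tags(volumes):
    """volumes: list of {"VolumeId": ..., "InstanceId": ... or None}.
    Returns (volume id, tags) pairs to create, plus already-detached volumes."""
    to_tag, orphans = [], []
    for vol in volumes:
        if vol.get("InstanceId"):
            to_tag.append((vol["VolumeId"], {"Owner": vol["InstanceId"]}))
        else:
            orphans.append(vol["VolumeId"])  # already detached: too late to tag an owner
    return to_tag, orphans

vols = [
    {"VolumeId": "vol-aaa", "InstanceId": "i-1111"},
    {"VolumeId": "vol-bbb", "InstanceId": None},
]
to_tag, orphans = owner_tags(vols)
# to_tag == [("vol-aaa", {"Owner": "i-1111"})]; orphans == ["vol-bbb"]
```

The orphan list is exactly the problem the tag prevents: once a volume is detached, nothing in the console tells you where it came from.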

The Cloud Busters are interested in learning how you use tags to organize your console. Add a comment to this blog or email us at cloudbusterspodcast@yahoo.com.

Posted December 21, 2011 by cloudbusterspodcast in Uncategorized

Foursquare, and Why You Care About Location-Based Social Networks

By Tony D’Orazio

By now, most people with a smartphone are familiar with Foursquare.  For those of you who are not, Foursquare is a location-based social network.  Using this fun and informative tool, you can “check in” to places you visit, and even become “mayor” if you are the Foursquare player who has checked in the most over the previous two months.  “Badges” are available for completing various feats, such as checking in after 3 a.m. on a school night or stopping at far too many Starbucks locations.

Sound really stupid and like a gigantic waste of time?  It’s not.

From a personal standpoint, it can be useful for keeping track of friends and places in the community.  Knowing that three of your friends have checked in at your local bar might prompt you to go there and share a beverage. Knowing that there are 120 people at Frontier Field right now, on the other hand, might be a flag that downtown should be avoided.  This second use of Foursquare, traffic avoidance, is particularly handy when you're out of town and don't know a city as well.

In fact, Foursquare is a terrific way to learn about a city, be it Rochester or wherever you happen to be visiting.  When people check in at venues in Foursquare, they have the option of leaving a tip behind.  For example, a recent tip at a local Wegmans states that it has 1 hour free child care.  That would be particularly useful to someone needing that service.  A quick perusal of other local tips will give clues on what is good to order at a particular restaurant, or special unpublished deals you might not find elsewhere.

Businesses have also begun to notice Foursquare as a marketing tool.  Several local and national chains have begun to offer discounts based upon Foursquare.  For example, the aforementioned Starbucks offered the mayor of each of their locations a 10% discount on drinks last summer.  This increased traffic to individual stores.  Furthermore, according to Roostercom.com, the users of Foursquare are largely women, and largely those in the appealing 35-49 age group – in other words, people who have money.  

Those that download and use the Foursquare client on their smartphones will also be the targets for such discounts and specials.  Using the phone’s GPS, Foursquare will alert the user to specials nearby; literally, businesses can and do use Foursquare to invite people in. Clients are available for all major smartphone platforms; if yours is one of the few that doesn’t have a native client, Foursquare also works in the mobile web browser on your phone.  

We here at examiner.com want to give you another reason to use Foursquare.  According to the official press release, “Foursquare, the premier social city guide, will now begin displaying tips from Examiner.com, the insider source for local, on its website and its mobile applications.”  This means that all local Examiners, here in Rochester and elsewhere, will provide local tips and insight, available both here and on Foursquare.  You can follow examiner.com on Foursquare here, and see the tips provided so far.

So give Foursquare a try.  Follow examiner.com on Foursquare, as well as your local Examiners, including this one.

Posted November 20, 2011 by cloudbusterspodcast in Uncategorized

Installing Microsoft Forefront Threat Management Gateway (TMG) into Amazon AWS

By: Kevin Gilbert

To secure a website deployment in AWS, I wanted what every security-conscious administrator wants: a firewall I can monitor, intrusion protection, and a reverse proxy that does web publishing. These requirements can be a challenge in a public cloud like AWS. Forefront’s Unified Access Gateway (UAG) can be a great solution, but it is too expensive and far more than I needed.  TMG offers the required features in a simpler package.

The challenge with installing TMG is that the installer locks down the network interfaces on the instance during installation. This lockdown breaks the remote desktop (RDP) connection and makes the instance unreachable.

I tried everything to get around this problem: the installation inside AWS and then inside a VPC; Team Viewer instead of Microsoft’s RDP; building TMG locally in VMware and then uploading the virtual machine into AWS.  I probably ran the installer fifty times with no luck.

Just as I was about to give up, a colleague found the solution in a forum that talked about doing a remote installation of TMG. I gave it a try and it worked!


1. Create two instances: one named TMG-Installer and one named TMG. Set up their security groups so that you can RDP to them and they can RDP to each other.

2. Use TMG-Installer as an RDP man-in-the-middle. In other words, remote into TMG-Installer and, from there, remote into TMG via TMG’s private IP address.


3. Using the RDP man-in-the-middle connection, run the TMG installer.

4. Here’s the magic. During installation, TMG locks down the instance network. If the installer is being run through RDP, the RDP client’s private IP address will be written into the TMG firewall and allowed. If you are connected via an elastic IP, the elastic IP won’t be written into the firewall because it is a public IP address. If you run the installer locally in VMware and then upload the virtual machine to AWS, it won’t work because your local address is the one used. You must install TMG in AWS using the RDP man-in-the-middle so that TMG-Installer’s private IP address is written into TMG’s firewall and allowed.

5. After TMG is installed, you need to open up RDP on the TMG instance to all networks. (You’ll control who can actually RDP via the instance’s security group.) Do this by opening the TMG console, clicking on the Firewall Policies branch, and selecting Edit System Policy on the right side of the screen. Within the system policies you will find a terminal services policy that you can open to all networks. The last step is to click on the Firewall Policies branch and add a new firewall policy allowing all networks to RDP. Click Apply.


6. You can terminate the TMG-Installer instance because it is no longer needed. You’ll be able to RDP from anywhere the TMG instance’s security group allows.

Posted October 22, 2011 by cloudbusterspodcast in Uncategorized

Paetec GRC at 2011 Rochester Security Summit

Jim Gran from Paetec talked at the October 2011 Rochester Security Summit about automation for IT governance, risk, and compliance (GRC). Jim offered two underlying themes: first, being compliant does not necessarily make you secure; second, to get corporate buy-in, security must generate value or reduce cost rather than be crammed down people’s throats.

In 2007 Paetec merged with US LEC and became a public company. As a result of going public, they needed to get compliance in place, starting with SOX. To succeed, they made self-assessment an intuitive part of the culture.

Paetec targeted several areas: logical access, so people can only access what they need (the concept of least privilege); change management, so managers know what changes are put into production; privileged access management, so that everyone isn’t an administrator all the time; policy and standard development, to explain why things are done certain ways; foundational security items like antivirus and firewalls; and an SDLC that manages software development in a way that includes a security review.

Paetec uses Oracle Identity Management, which allows regular recertification of access to make sure people still have only the access they need.  This saved money and lowered complexity because access didn’t need to be managed individually on every system. The tool also does separation-of-duties monitoring, and it is linked to HR so it sees when someone is hired, changes jobs, or is let go.

Paetec uses BMC’s Remedy for change management, with Tripwire hooked in to watch for unauthorized changes. When Tripwire sees a change, it checks for a change-control document in Remedy; management is notified if a change is made without one.

Paetec uses Centrify and Oracle for single sign-on across all systems. They also use this on their customer portal, so employees can log into it with their normal credentials.

Paetec uses CyberArk for password vaulting. An individual normally doesn’t have escalated privileges on production systems; however, if they have been approved to make a change, the vault will give them a temporary password and then change the password once the change period has ended. This automates the temporary privilege-escalation process while providing auditable logs of activity.
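As a sketch, the check-out flow can be modeled like this (a toy model in Python; the class and method names are invented, and CyberArk's real product works through its own interfaces):

```python
import secrets

class PasswordVault:
    """Toy model of check-out/rotate password vaulting, for illustration only."""
    def __init__(self):
        self.passwords = {}   # system -> current password
        self.audit_log = []   # (who, system, action) tuples for auditors

    def check_out(self, system, user, change_approved):
        if not change_approved:
            self.audit_log.append((user, system, "denied"))
            raise PermissionError("no approved change window")
        temp = secrets.token_urlsafe(16)
        self.passwords[system] = temp
        self.audit_log.append((user, system, "checked out"))
        return temp

    def end_change_window(self, system):
        # Rotate the credential so the temporary password stops working.
        self.passwords[system] = secrets.token_urlsafe(16)
        self.audit_log.append(("vault", system, "rotated"))

vault = PasswordVault()
pw = vault.check_out("prod-db", "kgilbert", change_approved=True)
vault.end_change_window("prod-db")
assert vault.passwords["prod-db"] != pw  # old password is no longer valid
```

The point of the design is that privilege is tied to an approved window, and every grant, rotation, and denial leaves an audit record.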

Paetec uses Symantec Control Compliance Suite (CCS) to provide dashboards that automate compliance assessments against standards. Need to see how you are doing on PCI compliance? There is a dashboard for it. Paetec has added additional rules for internal audit metrics it wants to track. Administrators like this because they get a dashboard showing risk scores for each of their systems, so they know where to focus their efforts.

Paetec uses RSA enVision and Archer eGRC for monitoring security devices such as firewalls, antivirus, and routers. These products collect logs, aggregate events across multiple devices, and can apply business logic.

Paetec earned employee buy-in through newsletters, training, placards, and giveaways. Jim’s belief is to let people know why you are doing the various security efforts and to focus the message on serving the employees. The employees own important aspects of security, and you can’t do security without them.


Posted October 5, 2011 by cloudbusterspodcast in Uncategorized

Amazon AWS Storage Options

By: Kevin Gilbert 

Amazon provides several AWS storage options. This can be confusing if you are new to AWS.

Instance Store – When you start a new instance from an Amazon AMI template, it launches on instance store storage, which can be used for an instance’s root device. This is a very cheap option because you pay only for the instance, not the root instance store, and you can get better performance than with the other options. However, the storage is not persistent: the drives are ephemeral, so if you terminate your instance the volumes are destroyed and your data is lost. Also, root devices are capped at only 10 GB, though you can have a 1.6 TB data volume.

Elastic Block Storage – Elastic Block Storage (EBS) is an option for instances that provides faster boot times. If you terminate an instance, the EBS volume detaches and remains available to reattach to another instance. Volumes max out at 1 TB, but you can use software RAID or span drive letters across multiple volumes, so that’s not a big deal. EBS holds many advantages over instance store; however, it is more expensive. Also, some people are wary of EBS because they were burned by the April 2011 outage, which took EBS offline for some customers.

S3 – Simple Storage Service is large amounts of cheap and relatively slower disk (though probably not slow enough that you’ll notice). The storage is normally accessed through a web browser but can also be accessed from command lines. S3 doesn’t need to be mounted or connected to an instance in order to be viewed, and some third-party tools will mount S3 as a drive letter and perform the command-line operations in the background. Individual objects can be up to 5 terabytes in size. For high reliability, data is stored in multiple data centers within your region. Unlike instance store and EBS, an instance’s root drive cannot run from S3. And while an instance’s data drives can live in S3 via those third-party mounting tools, that really isn’t the intention of S3; the intention is static data such as backups and web pages.

Reduced Redundancy Storage – RRS is S3 storage with a lower durability guarantee: 99.99% instead of S3’s 99.999999999%. While S3 can survive the loss of two data centers, RRS can only survive the loss of one. Selecting RRS is a simple checkbox when uploading files, or a checkbox within the file’s properties. RRS is significantly cheaper than normal S3 storage.
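To make those durability numbers concrete, a quick calculation (the durability figures are AWS's published annual numbers; the object count is an arbitrary example of my own):

```python
def expected_annual_loss(num_objects, durability):
    """Expected number of objects lost per year at a given annual durability."""
    return num_objects * (1 - durability)

objects = 10_000_000  # an illustrative bucket of ten million objects
s3_loss = expected_annual_loss(objects, 0.99999999999)  # standard S3
rrs_loss = expected_annual_loss(objects, 0.9999)        # Reduced Redundancy
# s3_loss is about 0.0001 objects per year; rrs_loss is about 1,000.
```

That gap is the real trade-off behind the checkbox: RRS is fine for data you can regenerate (thumbnails, transcodes), and a poor fit for the only copy of anything.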

Now that you’ve seen the various storage available in AWS, you can get creative in how you use it. For example, some people boot an instance built on instance store (because it is cheaper) and, as part of the boot process, copy data they’ve backed up in S3 to the instance store volume. They don’t care that instance store is not persistent, because their current data lives in S3. Others choose to run databases on EBS because they don’t want to lose their data should their instance get terminated.

Posted October 1, 2011 by cloudbusterspodcast in Uncategorized

Trend Addresses Cloud Security Concerns

John Lister from Trend Micro spoke at the September 2011 Rochester VMware User Group meeting. IDC says the number one concern for cloud is security; ESG Research found security was the fourth biggest blocker to cloud adoption; Gartner rates security as the number one concern.

Inter-VM attacks use named pipes to attack VMs on the same host. This has been going on for over two years with Conficker, which gains access to one VM through a vulnerability and then uses it to hit the other VMs.

To answer these concerns, VMware made vShield available in August 2010. vShield sees all inbound and outbound traffic going to each VM, so it can be monitored and controlled. Even an unpatched VM that was just powered on is covered, because vShield protects it.

The Trend Deep Security add-on additionally looks at each VM up through the application layer, making remediation recommendations and providing protection. It also does pattern-file distribution for each agentless VM. In the lab they run 250 VMs per host, because agentless protection is better for performance.

Trend Deep Security holds Common Criteria EAL 3+ certification, soon to be EAL 4, which marks it as a high-security product.

Trend Deep Security performs integrity monitoring to detect unauthorized changes, emailing an alert when something changes, all agentless (vSphere 5). It also provides deep packet inspection for IDS/IPS, and log inspection.

Trend presented a case study of an airline that ran out of data center space and moved apps to Amazon AWS using the Trend SecureCloud product. As a result, all the data is encrypted all the time; the encryption keys continually change, and decryption is done on the fly.

I am going to find out more about the Trend SecureCloud product for a future blog entry.


About the Author:

Kevin Gilbert is the Technology Manager at SIGMA Marketing and holds several certifications, including CISSP, SSCP, Security+, NISMA, VCP, MCSE, and more.

Posted September 23, 2011 by cloudbusterspodcast in Uncategorized

VMware and F5

F5 spoke at the September Rochester VMware User Group meeting about F5’s virtual application accelerator that runs inside VMware vFabric. F5 does more than just load balancing; they are the Gartner Magic Quadrant leader for application acceleration.

If your web servers are overloaded, vCenter can be made to launch additional web servers. Nothing new there. But then vCenter automatically adds those servers to the F5 pool, completely hands-off. When activity slows down, F5 automatically tears down the extra capacity.

With VMware’s Site Recovery Manager (SRM), F5’s Global Traffic Manager (GTM) will do a health check on the web servers. If a server fails the health check, GTM will redirect traffic to the failover site automatically. It will even take care of DNS for you.

F5’s LTM virtual edition can be downloaded for a 90-day trial; the lab version is restricted to 10 Mbps of aggregate traffic. There is a free vCenter plugin that works with both physical and virtual versions of LTM.

F5 has a hot-plug chassis solution for cloud providers. It works in multi-tenancy scenarios where each client can have identical IP addresses without conflict.

Cloud bursting is the ability to ramp up to the cloud when busy. Content is served from your data center, and when things get busy, Global Traffic Manager pulls in identical web instances to help with the load. It can power cloud resources up and down as needed.

F5 supports long-distance vMotion via WAN optimization. The optimized traffic is encrypted for security, and the acceleration can make vMotion three to four times faster. After a long-distance vMotion you have to deal with a DNS change, right? Not with GTM: GTM handles the DNS change to the second site for you. F5 uses a vMotion iSession tunnel to move your VM from one site to the other. SAN replication makes vMotion faster but is not necessary, because F5 will do a Storage vMotion instead. There is no distance limitation; one client has a VM follow the sun around the world every day.

Whether you are balancing multiple data centers or balancing cloud and your own data center, F5 can make the connections easier and better.

About the Author:

Kevin Gilbert is the Technology Manager at SIGMA Marketing and holds several certifications including CISSP, SSCP, Security+, and NISM.

Posted September 21, 2011 by cloudbusterspodcast in Uncategorized

Going Viral AND Staying Viral

Friday, March 11, 2011 is a day that will live forever in Internet history.  On that day, the Comedy Central blog dedicated to the show Tosh.0 posted a video for the song “Friday”, by 13-year-old Rebecca Black, as part of a post called “Songwriting Isn’t For Everyone”.  I won’t be taking shots at young Ms. Black’s performance – many on the Internet have – but the song, as written, is awful.  At one point, the lyrics slowly explain the order of the days of the week.  A big conflict in the song is whether the singer will be sitting in the front or the back seat of the car.
In the four days following the Comedy Central blog post, more than six million people viewed the Rebecca Black video on YouTube. As of this writing, there are almost 120 million views. Several cover versions, remixes, parodies, and copycat performances have been released. The song has been released to iTunes, to great success.
“Friday” has become the latest in a long line of things on the Internet to “go viral”. That term for rapidly-growing, short-term, word-of-mouth marketing, coined in the mid-90s and popularized by a Fast Company magazine article in 1996, has been used to describe several quick-flashing fads, from  the infamous Double Rainbow videos, to Charlie Sheen’s Twitter account,  to Larry Platt’s famous “Pants On The Ground” American Idol audition. Most of these have been very short-lived and quickly forgotten.
I have personal experience with going viral.  I created a Facebook page in February 2010, during the Winter Olympics, in tribute to the colorful trousers worn by the Norwegian Olympic curling team. Within a week, over half a million “Likes” were registered on the page, which peaked at about 660,000 fans during the Olympic closing ceremonies.  Click-through traffic from my page crushed the manufacturer of the pants.  Curling clubs around the world were packed with people wanting to learn how to curl, many of whom were previously fans only of the pants.
After that, the fans started to go away, and I struggled with retaining as many fans as possible and sustaining the site.  I was successful.  As of 15 March 2011, I still have 598,896 “Likes”. I know by statistics that people are still interacting with my new posts, and are still following links.  I have a link on the page, provided by Loudmouth Golf (the makers of the pants), which drives traffic to their site (and generates quite modest revenue for USA Curling’s Katie Beck Memorial Fund for junior curling).  A 90% retention rate over a year is good for a business, and fantastic for something viral.
How can you retain your viral customers, as I have?
Stay As Close To On-Topic As Possible, Without Sounding Like a Broken Record. Your customers came to you because you offered something that was entertaining in a different way, or because something you offered was attractive to them.  Don’t abandon that attractive component; rather, expand on the subject, while offering something new.  In my case, I followed Loudmouth Golf’s pants and the Norwegian curling team post-Olympics. 
Be Persistent. If you aren’t offering new content, people will move on.  This rule applies not only for viral content, but any published content in general. My times of greatest loss have come when I have not posted for more than a week.
Don’t Try To Duplicate Your Success. You got lucky once by being yourself.  Don’t try to catch the same lightning in a different bottle.  You will miss and it will tarnish what you did with your original content.
Don’t Sweat the Haters and the Bandwagoners. If you have gone viral, there will be a backlash.  There are people who are going to get sick of you.  And they will tell you this, in no uncertain terms – in some cases, in truly vile terms.  And there are people who will just leave and not come back.  Those are not your customers.  Put a positive spin on their comments if you possibly can, but do not chase them down to recover them.
Broaden Your Audience By Using Other Avenues. I am amazed to still find people who never had any idea about my little Facebook page.  I’ve used a second Facebook page to drive some traffic to this one; I have also used my Twitter account and my curling blog to bring new people into the conversation.  If you’ve got something viral you are trying to sustain, don’t be afraid to reach out and tell people about it. Use other methods to reach this audience; it will help drive the conversation further, in directions you never saw possible.
Going viral can be a very good thing for your brand.  How you react in the aftermath will determine whether it remains an asset or becomes a liability.
Tony D’Orazio is a Systems Administrator and member of the Cloud Busters Pod Cast.

Posted September 16, 2011 by cloudbusterspodcast in Uncategorized