
Abiquo: More than a cloud veneer.

 

It appears Abiquo has all the prerequisite features one would expect from a cloud veneer. It sits in front of a virtual environment and provides self-provisioning cloud features to customers. As you would expect from such a product, it supports many hypervisors: VMware, Xen, XenServer, KVM, Hyper-V, and VirtualBox. It has a service catalog with standard templates, shared/community templates, and customer-owned templates. It has resource limits controlling how much any one customer is allowed to consume. It has LDAP integration for authentication. It includes the ability to create private networks and allows customer branding. It seems to have nailed the basics. But what makes Abiquo stand out?

 

The ease of implementation provides an interesting argument for Abiquo.  During installation, Abiquo will scan an address range and look for hosts. It will add the hosts it finds to its console. It will then scan the hosts for virtual machines and allow the installer to easily add those virtual machines to the console too. This allows Abiquo to be installed quickly into an existing virtual environment.

 

Abiquo has detailed resource limits that can implement controls down to the number of CPUs, amount of memory, number of public IP addresses, and more. In addition to providing storage from the virtual environment for cloud consumption, Abiquo can use NetApp APIs to provide Storage as a Service and tiered storage (with different pricing per tier). Abiquo uses rules and algorithms to determine virtual machine placement. And finally, Abiquo comes with built-in V2V conversion so that virtual machines may be moved between competing hypervisors.

 

In the end, Abiquo is a step ahead of other products in its class (such as CloudStack and OpenStack), and can serve as a low-cost replacement for vCloud Director.


Posted June 24, 2012 by cloudbusterspodcast in Uncategorized

Tintri VMstore Flash-Based Storage Array

Flash storage, deployed in solid state drives (SSD), can be 400 times faster than disk. However, SSD is expensive. A cottage industry of companies like Tintri is trying different approaches to putting SSD to use in new ways.
Traditional storage arrays implement SSD as read/write cache: they attempt to predict what will be needed based upon current activity and move items onto SSD to improve performance. Tintri VMstore takes a different approach. It uses SSD as primary storage for “hot” data and offloads “cold” data to SATA. It determines which data is hot based upon performance information gathered at the level of the individual VMware VM's disks. Tintri believes that gathering information at this level is more efficient than gathering it at the LUN level, because caching often guesses wrong, leaving idle data sitting in the cache. And because Tintri looks at VMs' disks, it is easier to identify I/O bottlenecks.
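To make the idea concrete, here is a toy sketch in Python (my own illustration, not Tintri's actual algorithm) of tiering by observed per-disk I/O: rank the VM disks by heat and fill SSD from the top, pushing the rest down to SATA.

```python
def place_by_heat(io_counts, ssd_capacity):
    """Toy hot/cold tiering: rank VM disks by observed I/O, keep the
    hottest on SSD until capacity is used up, and send the rest to SATA."""
    placement = {}
    used = 0
    for disk, (ios, size_gb) in sorted(io_counts.items(),
                                       key=lambda kv: kv[1][0], reverse=True):
        if used + size_gb <= ssd_capacity:
            placement[disk] = "SSD"
            used += size_gb
        else:
            placement[disk] = "SATA"
    return placement

stats = {                      # per-VM-disk I/O counts and sizes in GB (made up)
    "vm1-disk0": (90_000, 40),
    "vm2-disk0": (70_000, 40),
    "vm3-disk0": (500, 40),    # mostly idle: no reason to burn SSD on it
}
tiers = place_by_heat(stats, ssd_capacity=80)
assert tiers == {"vm1-disk0": "SSD", "vm2-disk0": "SSD", "vm3-disk0": "SATA"}
```

The point of measuring at the VM-disk level is visible even in this sketch: the idle disk never reaches SSD, whereas a LUN-level cache might have pulled its data in on a bad guess.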
Since Tintri is determining which data is on SSD versus SATA, they argue that administrators no longer need to be concerned about storage tiers. I argue that unless you have enough Tintri storage to meet your needs, you’ll still contend with storage tiers.
Since SSD is expensive, it must be used as efficiently as possible. Tintri accomplishes this by first de-duping data and then compressing it. Wouldn't this take a performance hit? Remember, SSD can be 400 times faster than traditional disk; the hit is insignificant compared to the gain from the faster drive.
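As a rough illustration of the dedup-then-compress order (a sketch of the general technique, not Tintri's implementation):

```python
import hashlib
import zlib

class DedupeStore:
    """Toy content store: de-duplicate blocks by content hash first,
    then compress only the unique blocks."""

    def __init__(self):
        self.blocks = {}  # sha256 digest -> compressed block

    def write(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:          # duplicate blocks cost nothing
            self.blocks[key] = zlib.compress(data)
        return key

    def read(self, key: str) -> bytes:
        return zlib.decompress(self.blocks[key])

store = DedupeStore()
k1 = store.write(b"A" * 4096)   # first copy is stored (compressed)
k2 = store.write(b"A" * 4096)   # identical block: no new storage used
assert k1 == k2 and len(store.blocks) == 1
assert store.read(k1) == b"A" * 4096
```

Doing dedup before compression matters: identical blocks hash identically and are discarded for free, while compressing first would make duplicates harder to spot.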
SSDs use multi-level cells (MLC) to hold data, and these cells can be overwritten only 5,000 to 10,000 times before they wear out. If your cache wears out, who cares? Just replace it. But if your primary storage wears out, you have a big problem. Tintri combats this by using SSD efficiently via dedup and compression, and by adding error checking and RAID 6 for when an SSD does wear out.
Since SSD can be 400 times faster than disk, you can do some creatively inefficient things and still come out faster, such as dedup, compression, and data integrity checking. With error checking and parity, the system will ‘heal itself’ from disk problems on the fly. It also uses non-volatile RAM (NVRAM) as a write buffer so that if the Tintri fails mid-stream, the last write can still be reconstructed: the hypervisor decides to write data; the data goes to the primary NVRAM; the data is copied to secondary NVRAM; then the hypervisor is told the data was written; Tintri confirms the NVRAM data is complete; finally the NVRAM is written to SSD. Again, you would think this might slow things down, but not when you are working 400 times faster than traditional disk.
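That write path can be modeled in a few lines of Python. This is my own toy model of the mirrored-buffer idea, not Tintri's code:

```python
class NVRAMBuffer:
    """Toy model of a mirrored write buffer: a write is acknowledged only
    after it lands in two NVRAM copies, and is destaged to SSD later."""

    def __init__(self):
        self.primary = []
        self.secondary = []
        self.ssd = []

    def write(self, data):
        self.primary.append(data)        # 1. data lands in primary NVRAM
        self.secondary.append(data)      # 2. mirrored to secondary NVRAM
        return True                      # 3. only now is the hypervisor ack'd

    def recover(self):
        # After a mid-stream failure, the mirror reconstructs the last write.
        return self.secondary[-1] if self.secondary else None

    def flush(self):
        # 4/5. confirm the NVRAM copies match, then destage to SSD
        assert self.primary == self.secondary
        self.ssd.extend(self.primary)
        self.primary.clear()
        self.secondary.clear()

buf = NVRAMBuffer()
buf.write(b"block-1")
assert buf.recover() == b"block-1"   # recoverable even before the flush
buf.flush()
assert buf.ssd == [b"block-1"] and buf.recover() is None
```

The design choice being illustrated: the acknowledgment to the hypervisor happens only after the mirror copy exists, so a failure at any single point leaves a complete copy of the last write somewhere.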
Another nice item: Tintri does automatic disk alignment for virtual machines.
Much of the Tintri literature compares how much faster SSD is than disk, how much more efficient their VMware-aware performance monitoring is compared to caching, and how Tintri takes advantage of the faster SSD to add efficiency and redundancy for high availability.

Posted June 1, 2012 by cloudbusterspodcast in Uncategorized

AWS Console Tags


When there are only a dozen or so instances in the console it can be easy to manage. As soon as the quantity approaches fifty you’ll find the AWS console becomes difficult to manage. That’s partially because the EC2 console only displays fifty objects per page and partially because environmental complexity seems to reach a threshold around that number.  This is where tags can come in handy. Tags are additional fields that can be added to each object to help you manage information about the object. Like everything in AWS, you’ll want to develop a good strategy for how you’ll use tags before you get started.

Each object in the EC2 console can have a limited number of tags. If you create the same tags for each instance, you'll have a common framework for describing the instances. For example, suppose you need to track which instances belong to which departments. You can create a “Departments” tag on each instance to hold that information. The trick, however, is to keep the tag names consistent, because they are case sensitive. If the tag names aren't identical across instances, AWS will assume you have created a new tag.
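Here is a quick way to catch those case-sensitivity slips. This is an illustrative Python check run over a hypothetical inventory of instances and their tags (not an AWS API call; you could feed it the output of `describe-instances`):

```python
def inconsistent_tag_keys(instances):
    """Find tag keys that differ only by case across instances -- AWS would
    treat 'Departments' and 'departments' as two different tags."""
    seen = {}   # lowercased key -> set of exact spellings in use
    for tags in instances.values():
        for key in tags:
            seen.setdefault(key.lower(), set()).add(key)
    return {k: v for k, v in seen.items() if len(v) > 1}

# hypothetical fleet: instance id -> its tags
fleet = {
    "i-0aaa": {"Departments": "Finance"},
    "i-0bbb": {"departments": "HR"},        # oops: lowercase d
    "i-0ccc": {"Departments": "Sales"},
}
clashes = inconsistent_tag_keys(fleet)
assert clashes == {"departments": {"Departments", "departments"}}
```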

When you look at the console you can add tags to the column view by clicking on the Hide Columns button. This will allow you to select which tags will appear as columns on the console. Unfortunately, your selections won’t stick between sessions. Every time you log into the console you will need to reselect your view.

I always suggest creating a tag for your volumes that describes which instance owns the volume. This will be important should you ever terminate the instance: when an instance is terminated, its data volumes detach and remain available to reattach to another instance. If this isn't maintained, you could end up with several orphaned volumes and no idea which instances they belonged to. This can be avoided by simply tagging every volume you make.
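As a sketch, here is one way to build owner tags for a set of volumes. The tag names `OwnerInstanceId` and `OwnerInstanceName` are my own convention, and in practice you would apply the result through the EC2 console or API:

```python
def volume_owner_tags(attachments):
    """Given volume -> (instance id, instance name) attachments, build the
    tag payload to apply to each volume so orphans stay identifiable."""
    return {
        vol_id: {"OwnerInstanceId": inst_id, "OwnerInstanceName": name}
        for vol_id, (inst_id, name) in attachments.items()
    }

# hypothetical attachments snapshot taken while the instances are running
tags = volume_owner_tags({
    "vol-111": ("i-0aaa", "web-server-1"),
    "vol-222": ("i-0aaa", "web-server-1"),
    "vol-333": ("i-0bbb", "db-server-1"),
})
assert tags["vol-333"]["OwnerInstanceName"] == "db-server-1"
```

The value of recording the name as well as the id: after a termination the instance id no longer resolves to anything, but the name still tells you what the orphaned volume was for.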

The Cloud Busters are interested in learning how you use tags to help organize your console. Add a comment to this blog or email us at cloudbusterspodcast@yahoo.com.

Posted December 21, 2011 by cloudbusterspodcast in Uncategorized

Foursquare, and Why You Care About Location-Based Social Networks

By Tony D’Orazio

By now, most people with a smartphone are familiar with Foursquare.  For those of you who are not, Foursquare is a location-based social network.  Using this fun and informative tool, you can “check in” to places you visit, and even become “mayor” if you are the Foursquare player who has checked in most over the last two months.  “Badges” are available for completing various tasks, such as checking in after 3am on a school night or stopping at far too many Starbucks locations.

Sound really stupid and like a gigantic waste of time?  It’s not.

From a personal standpoint, it can be useful for some people to keep track of friends and places in the community.  Knowing that three of your friends have checked in at your local bar might alert you to go there and share a beverage. Knowing that there are 120 people at Frontier Field right now, on the other hand, might be a flag that downtown should be avoided.  This second use of Foursquare, traffic avoidance, is particularly useful when you are out of town and might not know a city as well.

In fact, Foursquare is a terrific way to learn about a city, be it Rochester or wherever you happen to be visiting.  When people check in at venues in Foursquare, they have the option of leaving a tip behind.  For example, a recent tip at a local Wegmans states that it has 1 hour free child care.  That would be particularly useful to someone needing that service.  A quick perusal of other local tips will give clues on what is good to order at a particular restaurant, or special unpublished deals you might not find elsewhere.

Businesses have also begun to notice Foursquare as a marketing tool.  Several local and national chains have begun to offer discounts based upon Foursquare.  For example, the aforementioned Starbucks offered the mayor of each of their locations a 10% discount on drinks last summer.  This increased traffic to individual stores.  Furthermore, according to Roostercom.com, the users of Foursquare are largely women, and largely those in the appealing 35-49 age group – in other words, people who have money.  

Those that download and use the Foursquare client on their smartphones will also be the targets for such discounts and specials.  Using the phone’s GPS, Foursquare will alert the user to specials nearby; literally, businesses can and do use Foursquare to invite people in. Clients are available for all major smartphone platforms; if yours is one of the few that doesn’t have a native client, Foursquare also works in the mobile web browser on your phone.  

We here at examiner.com want to give you another reason to use Foursquare.  According to the official press release, “Foursquare, the premier social city guide, will now begin displaying tips from Examiner.com, the insider source for local, on its website and its mobile applications.”  This means that all local Examiners, here in Rochester and elsewhere, will provide local tips and insight, available both here and on Foursquare.  You can follow examiner.com on Foursquare here, and see the tips provided so far.

So give Foursquare a try.  Follow examiner.com on Foursquare, as well as your local Examiners, including this one.

Posted November 20, 2011 by cloudbusterspodcast in Uncategorized

Installing Microsoft Forefront Threat Management Gateway (TMG) into Amazon AWS

By: Kevin Gilbert

To secure a website deployment in AWS, I wanted what every security-conscious administrator wants: a firewall I can monitor, intrusion protection, and a reverse proxy that does web publishing. These requirements can be a challenge in a public cloud like AWS. Forefront's Unified Access Gateway (UAG) can be a great solution, but it is too expensive and overkill for what I needed.  TMG offers the required features in a simpler package.

The challenge with installing TMG is that the installer locks down the network interfaces on the instance during installation. This breaks the Remote Desktop (RDP) connection and makes the instance unreachable.

I tried everything to get around this problem. I tried the installation inside AWS and then inside a VPC. I tried using TeamViewer instead of Microsoft's RDP. I tried building TMG locally using VMware and then uploading the virtual machine into AWS.  I probably ran the installer fifty times with no luck.

Just as I was about to give up, a colleague found the solution in a forum that talked about doing a remote installation of TMG. I gave it a try and it worked!

HOW TO INSTALL TMG

1. Make two instances: one named TMG-Installer and one named TMG. Set up their security groups to allow you to RDP into them and to allow them to RDP into each other.

2. Use TMG-Installer as an RDP man-in-the-middle. In other words, remote into TMG-Installer and use TMG-Installer to remote into TMG via TMG's private IP address.

 

3. Using the RDP man-in-the-middle connection, run the TMG installer.

4. Here’s the magic. During installation, TMG locks down the instance's network. If the installer is being run through RDP, the RDP session's private IP address will be written into the TMG firewall and allowed. If you are connected via an Elastic IP, the Elastic IP won't be written into the firewall because it is a public IP address. If you run the installer locally in VMware and then upload the virtual machine to AWS, it won't work because your local address is used. You must install TMG in AWS using the RDP man-in-the-middle so that the TMG-Installer's private IP address is written into TMG's firewall and allowed.

5. After TMG is installed, you need to open up RDP on the TMG instance to all networks. You'll control who can actually RDP via the instance's security group. You do this by opening the TMG console, clicking on the Firewall Policies branch, and on the right side of the screen selecting Edit System Policy. Within the system policies you will find a terminal services policy that you can open to all networks. The last step is to click on the Firewall Policies branch and add a new firewall policy allowing all networks to RDP. Click Apply.

 

6. You can terminate the TMG-Installer instance because it is no longer needed. You'll be able to RDP from anywhere that the TMG instance's security group allows.

Posted October 22, 2011 by cloudbusterspodcast in Uncategorized

Paetec GRC at 2011 Rochester Security Summit

Jim Gran from Paetec talked at the October 2011 Rochester Security Summit about automation for IT governance, risk and compliance (GRC). Jim offered two underlying themes in his speech: first, being compliant does not necessarily mean you are secure; and second, to get corporate buy-in for security, one must generate value or reduce cost rather than cram security down people's throats.

In 2007 Paetec merged with US LEC and became a public company. As a result of going public, they needed to get compliance in place, starting with SOX. To be successful, they built self-assessment into the culture.

Paetec targeted several areas: logical access, so people can only access what they need (the concept of least privilege); change management, so managers know what changes are put into production; privileged access management, so that not everyone is an administrator all the time; policy and standards development that explains why things are done certain ways; foundational security items like antivirus and firewalls; and an SDLC that manages software development in a way that includes a security review.

Paetec uses Oracle Identity Management, which allows for regular recertification of access to make sure people still have only the access they need.  This saved money and lowered complexity because access didn't need to be managed individually on every system. The tool also does separation-of-duties monitoring, and it is linked to HR so it sees when someone is hired, changes jobs, or is let go.

Paetec uses BMC's Remedy for change management. Tripwire hooks into the system and watches for unauthorized changes. When Tripwire sees a change, it checks for a change control document in Remedy. Management is notified if a change is made that doesn't have a change control document.
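That cross-check boils down to a set difference. Here is a toy Python version of the idea (my illustration, not how Tripwire or Remedy actually implement it; the paths and ticket fields are made up):

```python
def unauthorized_changes(detected_changes, change_tickets):
    """Cross-reference detected file changes against change-control tickets;
    anything without a matching ticket gets escalated to management."""
    approved = {t["target"] for t in change_tickets}
    return [c for c in detected_changes if c not in approved]

# hypothetical data: what the monitor saw vs. what was approved
detected = ["/etc/httpd/conf/httpd.conf", "/etc/passwd"]
tickets = [{"id": "CHG-1042", "target": "/etc/httpd/conf/httpd.conf"}]

flagged = unauthorized_changes(detected, tickets)
assert flagged == ["/etc/passwd"]   # no ticket: notify management
```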

Paetec uses Centrify and Oracle for single sign-on across all systems. They also use this on their customer portal so that employees can log into the customer portal using their normal credentials.

Paetec uses CyberArk for password vaulting. An individual normally doesn't have escalated privileges to production systems. However, if they have been approved to make a change, the vault will give them a temporary password and then change the password once the change period has ended. This automates the temporary privilege escalation process while providing auditable logs of activity.
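The check-out flow can be sketched like this (a toy model of the general vaulting idea, not CyberArk's actual API; the user and system names are made up):

```python
import secrets
import time

class PasswordVault:
    """Toy password vault: an approved change gets a temporary credential,
    which is rotated when the change window closes; every action is logged."""

    def __init__(self):
        self.passwords = {}   # system -> current password
        self.audit_log = []

    def check_out(self, user, system, window_seconds):
        temp = secrets.token_urlsafe(16)
        self.passwords[system] = temp
        self.audit_log.append((time.time(), user, system, "check_out"))
        return temp, time.time() + window_seconds

    def close_window(self, user, system):
        # Rotate the password so the temporary credential stops working.
        self.passwords[system] = secrets.token_urlsafe(16)
        self.audit_log.append((time.time(), user, system, "rotate"))

vault = PasswordVault()
temp, expires = vault.check_out("kgilbert", "prod-db-01", window_seconds=3600)
assert vault.passwords["prod-db-01"] == temp
vault.close_window("kgilbert", "prod-db-01")
assert vault.passwords["prod-db-01"] != temp
assert [e[3] for e in vault.audit_log] == ["check_out", "rotate"]
```

The key property is the rotation on close: the human never learns a permanent password, so there is nothing to revoke manually when the change window ends.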

Paetec uses Symantec Control Compliance Suite (CCS) to provide dashboards that automate compliance assessments against standards. Need to see how you are doing on PCI compliance? There is a dashboard for it. Paetec has added additional rules for internal audit metrics they are interested in tracking too. Administrators like this because they get a dashboard showing risk scores for each of their systems, so they know where to focus their efforts.

Paetec uses RSA enVision and Archer eGRC for monitoring security devices such as firewalls, antivirus, and routers. These products collect the logs, aggregate events across multiple devices, and can apply business logic.

Paetec received employee buy-in through newsletters, training, placards, and giveaways. Jim's belief is to let people know why you are doing the various security efforts and to focus the message on serving the employees. The employees own important aspects of security, and you can't do security without them.

 

Posted October 5, 2011 by cloudbusterspodcast in Uncategorized

Amazon AWS Storage Options

By: Kevin Gilbert 

Amazon provides several AWS storage options. This can be confusing if you are new to AWS.

Instance Store – When you start a new instance from an Amazon AMI template, it will launch on Instance Store storage. Instance Store storage can be used for an instance's root device. This is a very cheap option because you only pay for the instance, not the root instance store. You can also get better performance than with the other options. However, the storage is not persistent: it uses ephemeral drives, so if you terminate your instance the volumes are destroyed and your data is lost. Also, root devices are capped at only 10 GB, though you can have a 1.6 TB data volume.

Elastic Block Storage – Elastic Block Storage is an option for instances. It provides faster boot times. If you terminate an instance, the EBS volume detaches and remains available to reattach to another instance. Volume sizes are capped at 1 TB, but you can use software RAID or span drive letters across multiple volumes, so that's not a big deal. EBS has many advantages over Instance Store; however, EBS is more expensive. Also, some people are wary of EBS because they were burned during the April 2011 EBS outage, which took EBS offline for some customers.

S3 – Simple Storage Service is large amounts of cheap and relatively slower disk (though not slow enough that you'll probably notice). The storage is normally accessed through a web browser but can also be accessed through the command line. S3 doesn't need to be mounted or connected to an instance in order to be viewed. There are some third-party tools that will mount S3 as a drive letter and then do the command line operations in the background. Individual objects can be up to 5 terabytes in size. For high reliability, data is stored in multiple data centers within your region. Unlike Instance Store and EBS, an instance's root drive cannot run from S3. While an instance's data drives can live in S3 by using third-party tools that mount S3 storage as drive letters, that really isn't the intention of S3. The intention of S3 is static data such as backups and web pages.

Reduced Redundancy Storage – RRS is S3 storage with a lower SLA. It provides 99.99% durability instead of 99.999999999%. While S3 can survive the outage of two data centers, RRS can only survive the outage of a single data center. Selecting RRS is a simple checkbox when uploading files, or a checkbox within the file's properties. RRS is significantly cheaper than normal S3 storage.

Now that you've seen the various storage that is available in AWS, you can get creative in how you use it. For example, some will boot an instance built on Instance Store (because it is cheaper) and, as part of the boot process, copy the data they've backed up in S3 to the Instance Store volume. They don't care that Instance Store is not persistent because their current data is available in S3. Others choose to run databases on EBS because they don't want to lose their data should their instance get terminated.

Posted October 1, 2011 by cloudbusterspodcast in Uncategorized