Many of you may have heard about Amazon’s latest product, Lightsail. This service is designed to compete with shared hosting providers. A few years back I moved my websites from shared hosting to self-hosting, and ultimately to a dedicated server with 1&1 hosting. For $85 a month it wasn’t a bad deal. Although the server seemed sluggish at times and the network wasn’t very fast, it was still better than shared hosting.
Over the course of the last two weeks I moved all my web assets to Amazon Lightsail, and I couldn’t be happier. The Amazon network is lightning fast, and the $5 per month instance is outperforming the dedicated server I had with 1&1. Lightsail offers many plans, starting at $5 a month all the way up to $80 a month.
I’m sure you have all heard the news about the VMware and Amazon partnership. I’ve been getting loads of questions, and it seems there are misconceptions about what exactly this means in the short term. Here is some of what I have heard, along with some clarification as to what it really is.
The offering will be VMware’s hypervisor running nested on top of AWS. – False
The offering is actually the vSphere hypervisor running on bare metal inside Amazon’s data centers.
I want AWS features, not just vSphere in another datacenter. I don’t see any AWS value or features with this offering – False
The machines running on vSphere in the AWS data center can take advantage of many AWS offerings such as storage, databases, security, analytics, and, from what I understand, roughly 70 other services. While this isn’t the ability to use the AWS API to provision workloads, it is still huge. Think of projects you may have where AWS services interact with workloads running in your own physical data center, and what you have to do to secure those interactions. Now you have the ability to run those workloads inside the same data center as those services, greatly reducing the complexity of securing that communication.
It’s great but what about NSX?
In the offering vSphere, NSX, and vSAN are all available. I can’t speak to how the cost and licensing works with regards to these, but they are all available.
When will this be generally available?
It is expected to be generally available sometime in the second half of 2017.
As more information becomes available it will become even more apparent how much value this adds to the enterprise data center. Most organizations today have a disconnect between their on-prem and off-prem workloads. Having standardized infrastructure, standardized processes, and standardized integrations can only lead to a less complex and more manageable environment. As more shareable information becomes available I will certainly be focusing on this area, and once possible I will provide some insight and sneak peeks into this great new partnership.
Many of you are at VMworld 2016 and had the opportunity to watch the keynote live this morning. For those of us who are not at VMworld this year, I decided to put together some highlights from this morning’s keynote.
The big theme for the keynote this year was the announcement of VMware Cloud Foundation and Cross-Cloud Services. Although I can’t say much about Cloud Foundation beyond what was discussed in this morning’s keynote, I think the slide below really helps shed some light. Although you will hear Cloud Foundation compared to Nutanix, I see it as more than just converged infrastructure; I see it as a converged cloud. If you look at the left side of the image below, you can see that VMware Cloud Foundation includes private cloud as well as VMware vCloud Air and the IBM Cloud. The benefit here is that all of these environments are built on top of VMware technology. To the right you see the non-VMware-based clouds, which include Amazon, Azure, and Google Cloud Platform. These are what make up the VMware Cross-Cloud Services.
Last week we had our “TechSummit” at VMware, and as part of the event there was a hackathon where teams or individuals could sign up and enter a cool integration into the competition. In the true spirit of a hackathon, Tom Bonanno and I decided to do something cool. That something we named vRealize Voice Automation.
To be able to use the Amazon Echo to create, destroy, and power on and off workloads in vRealize Automation
Using the Amazon Alexa skills API we were able to create a new Alexa skill with three intents:
These intents, combined with what Amazon calls utterances, allow us to take the speech input and extract variables from it, such as “blueprint” or “hostname”, that we can then use. The input captured by the Alexa API is sent to some Node.js code hosted on AWS Lambda, where we look at the intent that was called and its associated variable values, and then make a REST API call to VMware vRealize Orchestrator to invoke a workflow, passing the variables to it as inputs. From there vRO talks to vRA, and success.
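To make the flow above more concrete, here is a minimal sketch of what that Lambda handler might look like. Our actual hackathon code was Node.js; this is a Python equivalent, and the vRO URL, workflow-per-intent mapping, and slot names (“Blueprint”, “Hostname”) are assumptions for illustration, not the real deployment details.

```python
import json
import urllib.request

# Hypothetical vRO endpoint -- the real host, port, workflow IDs, and auth
# depend on your vRealize Orchestrator deployment.
VRO_URL = "https://vro.example.com:8281/vco/api/workflows/{wf}/executions"


def build_vro_payload(event):
    """Map an Alexa request's intent slots onto vRO workflow input parameters."""
    intent = event["request"]["intent"]
    slots = intent.get("slots", {})
    return {
        "parameters": [
            {"name": name.lower(), "type": "string",
             "value": {"string": {"value": slot.get("value", "")}}}
            for name, slot in slots.items()
        ]
    }


def lambda_handler(event, context):
    """Entry point AWS Lambda invokes for each Alexa request."""
    intent_name = event["request"]["intent"]["name"]
    payload = build_vro_payload(event)
    req = urllib.request.Request(
        VRO_URL.format(wf=intent_name),  # assume one workflow per intent
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # fire the workflow; vRO then talks to vRA
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText",
                                          "text": "Request sent to vRealize."}}}
```

The useful design point is separating the payload mapping from the HTTP call: the slot-to-parameter translation is the only part that changes per skill, and it can be tested without touching Lambda or vRO.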
It is certainly a cool solution, but remember that Alexa doesn’t always hear what you want it to hear, and that can be catastrophic if you’re performing a destroy operation, as you will see in the following videos.
Below are two videos. One is a commercial that was made for our hackathon entry and the other is a demonstration of the integration in action and a bit more on how we did it.
Creating an Amazon AWS endpoint is really just assigning the credentials you would like to use to communicate with Amazon. vCAC already knows how to communicate with Amazon; it just doesn’t know what it needs to authenticate. To create the AWS endpoint, perform the following steps:
Creating an Amazon AWS credential has a few extra steps compared to a general set of credentials. You will need to log in to your AWS account and retrieve your Access Key ID as well as your Secret Access Key to use in the creation. The steps below outline the process to create an Amazon AWS set of credentials.
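A truncated copy/paste of either key is the most common reason the endpoint later fails to authenticate, so it can be worth sanity-checking the shape of the keys before saving the credential in vCAC. The sketch below encodes the usual format for long-term IAM user keys (a 20-character Access Key ID starting with “AKIA”, and a 40-character secret); treat these rules as a rough guard against paste errors, not an official validator.

```python
import re

# Long-term IAM user access key IDs are 20 uppercase alphanumeric characters
# beginning with "AKIA"; secret access keys are 40 characters long.
# These checks only catch copy/paste truncation, not invalid credentials.
ACCESS_KEY_RE = re.compile(r"^AKIA[0-9A-Z]{16}$")


def looks_like_access_key_id(key_id: str) -> bool:
    """Rough format check for an AWS Access Key ID."""
    return bool(ACCESS_KEY_RE.match(key_id))


def looks_like_secret_key(secret: str) -> bool:
    """Rough length check for an AWS Secret Access Key."""
    return len(secret) == 40
```

For example, AWS’s documentation sample key `AKIAIOSFODNN7EXAMPLE` passes the first check, while a key that lost its last few characters in the clipboard would not.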
Usually most people go straight for connecting vCAC to vCenter, but I have decided to connect to Amazon EC2 first. I’m doing this for a few reasons, but mainly because anyone reading this has access to EC2. All you really need is any computer with a desktop virtualization tool like VMware Workstation and you can test vCAC with Amazon EC2. If you don’t have an Amazon AWS account, go to http://aws.amazon.com and sign up.
Signing up for Amazon AWS is free, and what’s even better is that you can also provision micro instances for free for an entire year as long as you stay within these guidelines. The basics are:
750 hours of Linux/Windows micro instance usage per month (613 MB memory). This is enough to run a single micro instance for the whole month.
750 Hours of Elastic Load Balancing plus 15GB of data processing
30GB of Elastic Block Storage
5GB of S3 Storage with 20,000 Get requests and 2,000 Put requests
And some other goodies…
You can run more than one micro instance at a time as long as the combined run time of your machines doesn’t go over 750 hours a month. As soon as you provision an instance, it counts as at least 15 minutes used. I don’t bother trying to calculate down to the 15-minute increments, so the way I look at it, I can perform 750 provisioning tests per month as long as each test takes less than an hour.