Posts categorized “rackspace-private-cloud”
Since the initial launch of the OpenStack Innovation Center back in July of 2015, much work has been done. I wanted to take a moment to share the current status and some details about its next phases. If you are unfamiliar with OSIC, let me start with some quick background information.
Rackspace Private Cloud (RPC) powered by OpenStack does a great job of incorporating and enabling many of the capabilities natively found within Cinder. With RPC, you can either use Cinder nodes (commodity hardware that exposes its ephemeral storage to your cloud as block storage) or connect your OpenStack cloud directly to a shared storage solution via Cinder integration drivers. This is where our friends at NetApp come into play. Rackspace and NetApp have formed a unique relationship to improve Cinder's shared storage capability within OpenStack. The two teams worked together to create a repeatable, approved, and tested process for integrating NetApp storage solutions into Rackspace Private Cloud footprints, whether in a Rackspace datacenter or in the customer's own datacenter.
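To give a sense of what such a Cinder integration looks like in practice, here is a sketch of a cinder.conf backend stanza using the NetApp unified driver. The hostname and credentials are placeholders, and the exact options you need will depend on your storage family and protocol, so treat this as illustrative rather than a tested configuration:

```ini
[DEFAULT]
enabled_backends = netapp-backend

[netapp-backend]
# Illustrative values only; consult the NetApp driver docs for your release.
volume_backend_name = netapp
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = netapp.example.com
netapp_login = admin
netapp_password = secret
```

With a stanza like this in place, the backend shows up as just another Cinder volume type, so tenants consume NetApp-backed volumes the same way they would consume any other block storage.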
So you have spent months convincing your leadership to go with OpenStack. Finally the keys to the cloud are turned over to you as the Cloud Operator, and you look over at your co-workers and say, "now what?" The next questions are usually something like: How do we best administer this cloud? Cloud is supposed to be easier, right?
Over the course of technology's evolution, infrastructure and application monitoring have changed positions. Not so long ago, monitoring was an afterthought when rolling out a new application or standing up a new rack of servers. More recently, I have seen monitoring become one of the first considerations, to the point where it actually appears in the initial project plan.
This evolution, while overdue in my mind, is a move in the right direction…and not just for the system admin who gets the 2 a.m. email alert or the application owner who sadly reports a 97% SLA to leadership every month. Truly knowing how your application affects your infrastructure is one of the keys to a successful cloud.
With monitoring now elevated in priority, you're left to wonder: what should I use for monitoring? There are plenty of software solutions on the market, many of which solve different problems.
As someone who works daily on libraries that power OpenStack, OpenStack itself, and deployment strategies for OpenStack, it's nice to be able to combine all of these roles when possible. As a core reviewer for the OpenStack Image Service (Glance), it's crucial that I not simply review the code in a given change by eye but also test it to ensure:
- It doesn't introduce new bugs
- It fixes the bug, or provides the functionality, it claims to
Lately, I have been using os-ansible-deployment (a.k.a. OS-A-D) to test changes that I review in OpenStack. This provides me with several benefits:
- Working on OS-A-D is my primary responsibility, so the more I use it, the more familiar I become with it.
- It deploys all of the OpenStack services similarly to how Rackspace Private Cloud deploys them, but on a single server instead of on hundreds.
- It runs all of the services inside containers, so if OpenStack is (at that point in time) not co-installable, it isn't a problem for my testing, since I only care about how the patch works with the given service and other affected services, not whether the dependencies of different services are incompatible.
- It's easy to use Ansible to continuously redeploy a service inside a container (assuming you're already familiar with Ansible).

All of that said, it isn't a complete replacement for DevStack. DevStack, used by the Jenkins jobs in OpenStack to ensure a patch will pass the gate, is still necessary. And if you're developing a patch for OpenStack, OS-A-D becomes a bit more cumbersome than DevStack to use, especially since you still need to verify how your patch interacts with DevStack.
That aside, let's walk through how you might review a change with OS-A-D. I do all of my development on servers in Rackspace's public cloud, so let's start by creating a new server:
$ nova boot --key-name my-key \
    --flavor performance2-15 \
    --image 8226139f-3804-4ad6-a461-97ee034b2005 \
    --poll osad-glance_store-review
We'll need a few things before we can start the OS-A-D playbooks:
# apt-get update
# apt-get install -y fail2ban tmux vim git
tmux is optional, but I tend to use it heavily in my development environment. After everything finishes installing, I usually start up a tmux session and do the following:
# git clone https://github.com/stackforge/os-ansible-deployment /opt/os-ansible-deployment
# cd /opt/os-ansible-deployment
In here, we have a directory with vars for the playbooks and a sub-directory with vars that determine which versions of certain OpenStack services and dependencies OS-A-D will install. In the example I'm going to walk through with you, we will be editing playbooks/vars/repo_packages/openstack_other.yml. We'll want to update it to:
glancestore_git_repo: https://github.com/sigmavirus24/glance_store
glancestore_git_install_branch: bug/1263067
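If you make this kind of edit often, you can script it instead of opening an editor each time. The following sketch assumes the file layout described in this post; the scratch-file fallback exists only so the commands can be tried outside an OS-A-D checkout:

```shell
# Point at the vars file described above; fall back to a scratch copy
# (with placeholder upstream values) if we're not in an OS-A-D checkout.
VARS_FILE=playbooks/vars/repo_packages/openstack_other.yml
if [ ! -f "$VARS_FILE" ]; then
    VARS_FILE=$(mktemp)
    printf '%s\n' \
        'glancestore_git_repo: https://github.com/openstack/glance_store' \
        'glancestore_git_install_branch: master' > "$VARS_FILE"
fi

# Rewrite both variables in place to point at the fork and branch.
sed -i \
    -e 's|^glancestore_git_repo:.*|glancestore_git_repo: https://github.com/sigmavirus24/glance_store|' \
    -e 's|^glancestore_git_install_branch:.*|glancestore_git_install_branch: bug/1263067|' \
    "$VARS_FILE"

# Show the result.
grep '^glancestore_git' "$VARS_FILE"
```

Either way, the end state is the same: OS-A-D will clone the fork and check out the branch under review when it builds the glance_store wheel.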
In our example, we'll be testing out https://review.openstack.org/168507 from glance_store. You'll notice we're pulling from my fork of glance_store rather than from Gerrit. This is largely because OS-A-D is meant for deployments, and it's unlikely anyone would deploy from Gerrit. So to test a change, I happily push it to my fork, since I usually check it out locally to review it anyway.
Once we've edited that, all we need to do is run ./scripts/gate-check-commit.sh and wait for it to build an all-in-one version of OS-A-D. This will also run a subset of tempest's tests against the AIO, as well as defcore's tests. These should provide a good indication that the AIO is functional.
If you want to check what version of glance_store we have installed, you can run:
# ansible glance_all -m shell -a "pip freeze | grep store"
You can also do
# pip install -d . glance_store
This downloads the wheel that was built from the local package index. Now that we have a functioning cloud, we can test the actual patch. But that's something I'll leave as an exercise for the reader.
When you want to test your patch at scale, the best way to do it is with os-ansible-deployment. You can easily scale your test environment from one machine to tens (or hundreds) of machines and ensure that everything works in a production environment. It manages your dependencies for you, builds them from source, and provides a reproducible set of build artifacts via a private package index that you can reuse and clone for future use.
This is the fourth and last article in my series on OpenStack orchestration with Heat. In the previous articles, I gave you a gentle introduction to Heat, and then I showed you some techniques to orchestrate the deployment of single and multiple instance applications on the cloud, all done with generic and reusable components.
Today I'm going to discuss how Heat can help with one of the most important topics in cloud computing: scalability. Like in my previous articles, I'm going to give you actual examples that you can play with on your OpenStack cloud, so make sure you have an environment where you can run tests, whether it's a Rackspace Private Cloud, DevStack or any other OpenStack distribution that includes Heat.
This is the third article in my series on OpenStack orchestration with Heat. In Part 1, I introduced the HOT template syntax, and then in Part 2, I showed you some of the techniques Heat offers to orchestrate the deployment of applications that run entirely within a single compute instance.
Today, building on the same ideas exposed in my previous article, I'm going to show you how to design deployments across more than one instance, and I'm going to demonstrate these concepts by deploying an application that runs on one server and connects to a MySQL database on another. You have seen how to deploy a Python application in my previous examples, so, to add some variety, I'm now going to switch to a PHP application as the guinea pig. That application is none other than the venerable WordPress.
In the newest release of the Rackspace Private Cloud (RPC v9.0), we made changes to the reference architecture for improved stability. These changes included a different approach for deploying the cloud internally, which may also interest anyone looking into running the Rackspace private cloud. The decision to use Ansible going forward was based on two major considerations: ease of deployment and flexible configuration. Ansible made it very easy for Rackspace to simplify the overall deployment and give users the ability to reconfigure the deployment as needed to fit their environments. Are you familiar with Ansible? If yes, skip the next paragraph; if not, please read on.
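For readers who haven't seen Ansible before, a playbook is just a YAML file describing the desired state of a group of hosts. The following minimal sketch (the `cloud-hosts` group name and the choice of package are made up for illustration) shows the shape of one:

```yaml
# Illustrative playbook: make sure ntp is installed and running on
# every host in the (hypothetical) "cloud-hosts" inventory group.
- hosts: cloud-hosts
  tasks:
    - name: Install ntp
      apt:
        name: ntp
        state: present

    - name: Ensure ntp is running
      service:
        name: ntp
        state: started
```

Because playbooks are declarative and idempotent, re-running one against an environment converges it to the described state, which is exactly the property that makes reconfiguring a deployment to fit your environment straightforward.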
Welcome to the second part of my series on OpenStack orchestration with Heat. In the previous article I gave you an introduction to Heat orchestration. All the examples I showed you were simple and not terribly useful, as they were only intended to introduce the structure of the HOT (Heat Orchestration Template) syntax.
In today's article, I'm going to elevate the complexity quite a bit, demonstrating some of the tricks you can use with Heat to perform deployments of single instance applications. As with the introductory examples, you are encouraged to try my examples on a Rackspace Private Cloud, DevStack or any other OpenStack installation that includes Heat.
With this article I begin a series of hands-on developer oriented blog posts that explore OpenStack orchestration using Heat.
To make the most of this article, I recommend that you have an OpenStack installation where you can run the examples I present below. You can use our Rackspace Private Cloud distribution, DevStack, or any other OpenStack distribution that includes Heat.
One of the HOTest new projects in the previous release of OpenStack is Heat. It is described as a mainline project within the OpenStack Orchestration program, because Heat alone is not the complete orchestration capability being developed by the community; my gut tells me more orchestration-focused projects are coming soon. Before diving in, it's important to set some groundwork on what Heat provides, which I'll cover in two quick topics: what is orchestration, and what is a stack?
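To make the idea of a stack concrete, here is a minimal HOT (Heat Orchestration Template) example describing a stack that contains a single compute instance. The flavor and image names are placeholders and would need to match what exists in your cloud:

```yaml
heat_template_version: 2013-05-23

description: Minimal example stack containing a single server

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      # Placeholder values; substitute a flavor and image from your cloud.
      flavor: m1.small
      image: ubuntu-14.04
```

Handing this template to Heat creates a stack: Heat records the desired set of resources, creates them in dependency order, and can later update or delete them as a single unit. That lifecycle management is the essence of what orchestration adds on top of the individual OpenStack service APIs.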