As a PhD student at UC Berkeley, my duties include some teaching, so this semester (Spring 2015), as well as last spring, I have been a teaching assistant for a class taught by my advisor, Tom Griffiths. The class, called Computational Models of Cognition (COGSCI 131), aims to introduce students to computational models of human behavior. The problem sets are a mixture of simple programming assignments—usually requiring students to implement pieces of different models—and written answers, in which students report and interpret the results of their code.
In the past, the problem sets were written in MATLAB. This year, however, we decided to make the switch to Python. In particular, we decided that the IPython/Jupyter notebook would be an ideal format for the assignments. The notebook is a cross-platform, browser-based application that seamlessly interleaves code, text, and images. With the notebook, we can write instructions, include a coding exercise immediately after them, and then ask students for their interpretation of the results right after that. For an example of what the notebook looks like, check out try.jupyter.org for a demo.
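Part of what makes the notebook so convenient for assignments is that under the hood it is just a JSON document whose cells interleave text and code. The sketch below shows a minimal notebook in the nbformat v4 layout; the cell contents are invented purely for illustration:

```python
import json

# Minimal sketch of a notebook: a JSON document whose "cells" list
# interleaves markdown (instructions) and code (exercises).
notebook = {
    "nbformat": 4,
    "nbformat_minor": 0,
    "metadata": {},
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": ["## Exercise 1\n", "Implement the model described above."],
        },
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": ["def run_model():\n", "    pass  # students fill this in"],
        },
    ],
}

# Notebooks round-trip cleanly through JSON, which is what makes them
# easy to distribute as problem sets and collect as submissions.
serialized = json.dumps(notebook)
cell_types = [c["cell_type"] for c in json.loads(serialized)["cells"]]
print(cell_types)
```

Because the whole assignment lives in one file, distributing it to students and collecting their completed work is no harder than copying a single document around.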
ClojureBridge aims to increase diversity within the Clojure community by offering free, beginner-friendly Clojure programming workshops for women. On March 13-14, 2015 we held a ClojureBridge event at the Rackspace office in Austin, TX. It was put on by an amazing group of organizers to foster the adoption of Clojure by women in technology.
MongoDB, Inc. just released what is arguably the most important change to the MongoDB database in its short history.
MongoDB version 3.0
MongoDB 3.0 brings with it a wealth of new features, but most notably a new pluggable storage engine API. We wanted to help customers get familiar with the new storage engine and features quickly and easily.
Because of the new pluggable storage engine API, MongoDB 3.0 promises a massive leap forward in functionality and usability. Developers, DevOps engineers and DBAs should start getting acquainted with MongoDB 3.0. In particular:
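To make the storage engine discussion concrete, here is a minimal sketch of a mongod.conf (in the YAML format MongoDB uses) selecting the new WiredTiger engine. The paths and values are placeholders for illustration, not a recommended production configuration:

```yaml
# Minimal mongod.conf sketch: select a storage engine via the
# pluggable storage engine API introduced in MongoDB 3.0.
storage:
  engine: wiredTiger      # the default in 3.0 remains mmapv1
  dbPath: /var/lib/mongo  # placeholder data directory
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
```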
Full Release Notes
From a community standpoint, the more people using 3.0 and filing bug reports, the better. We wanted a quick and easy way for folks to experiment. We needed tooling. A couple of attributes of the tooling we thought were really important are:
We created an Ansible playbook that installs and configures a simple MongoDB 3.0 configuration. It takes just a few minutes to set up and is completely customizable.
Installation takes four simple steps:
Complete and up-to-date installation and configuration instructions.
In a nutshell:
For this, you need to have git and Ansible installed. Installation is pretty easy. For most systems you simply need to:
# CentOS/RHEL
# Ansible
sudo yum install ansible
# git
sudo yum install git
Simply clone the repo to the box where you installed Ansible:
git clone https://github.com/rackerlabs/ansible-mongodb.git
We need to tell Ansible to use the host(s) where we want MongoDB to be installed. We need to ensure we tell Ansible the correct configuration for our host(s), as well as set any startup parameters we want.
# edit hosts file, and change <MYIP> to the IP address of the host to provision
vi hosts.txt
# install the required roles
./mongodb_roles.sh
# alter the default config (or at least inspect it for being correct)
vi roles/ansible-roles_mongodb-install/defaults/main.yml
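For reference, an Ansible inventory like hosts.txt is just an INI-style list of addresses. The group name and connection user below are hypothetical placeholders, so match them to whatever the playbook actually expects:

```
# hypothetical inventory sketch -- substitute your host's real IP
[mongodb]
192.0.2.10 ansible_ssh_user=root
```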
Simply launch the helper shell script:
cd ansible-mongodb
./setup-mongodb.sh
For a fully managed solution with replica sets and sharding, hit up email@example.com and the support folks will install and configure a MongoDB 3.0 instance in the ObjectRocket fully managed environment.
Recently I had the pleasure of hosting a webinar covering the Evolution of OpenStack. No matter how many times I review the history of OpenStack, I manage to learn something new. Just the idea that multiple companies, each with distinct ideas, can come together to make what I consider to be a super platform is amazing. Whether you think OpenStack is ready for prime time or not, it is hard to deny the power and disruptive nature it has in the current cloud market.
In the wake of recently announced vulnerabilities to the Xen hypervisor that our Cloud Servers platform is built on top of, a reboot will be necessary in some instances on both our First Generation and Next Generation Cloud Servers. The details of our announcement are available at https://community.rackspace.com/general/f/53/t/4978 and via https://status.rackspace.com/.
In order to complete the patching of our systems, we have scheduled reboot windows on a per-region basis beginning Monday, March 2 and running through Monday, March 9. To discover the time ranges during which your affected servers will be rebooted, check your Cloud Control Panel, which shows the information for whichever region is currently visible (note: you can change this via the region selector on the left side of the control panel). Alternatively, you can run our tool to discover the reboot windows of servers across all regions at once; binary downloads of the tool are available for many platforms.
While this blog post may seem trivial on the surface, it packs some very interesting information on how flexible the Rackspace Cloud Files product can be. While executing another customer project, the age-old question of "Where are we going to put the database backups?" was raised. Back in the day, this question really had only one solution. In the current age of the cloud, you have a few options. Since I like to live life on the edge, I raised my hand and said Cloud Files.
For those of you not familiar with Cloud Files, the easiest way to describe it is shared object storage. In OpenStack lingo, you could also call it shared Swift. Cloud Files is an API-enabled object storage capability found on the Rackspace Public Cloud platform. In this post, we will walk you through how easy it is to store something as simple as database backups in Cloud Files using simple automation, fronted by Ansible of course (my orchestration drug of choice). I promise this post will be short and sweet.
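As a taste of what that automation can look like, the playbook fragment below is a hypothetical sketch (the container name, file paths, and credential variables are all invented for illustration) using Ansible's rax_files_objects module to push a database dump into a Cloud Files container:

```yaml
# Hypothetical sketch: dump a database locally, then push the dump to
# Cloud Files. Container name, paths, and variables are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Dump all databases to a local file
      command: mysqldump --all-databases --result-file=/tmp/db-backup.sql

    - name: Upload the backup to a Cloud Files container
      rax_files_objects:
        container: db-backups
        method: put
        src: /tmp/db-backup.sql
        username: "{{ rax_username }}"
        api_key: "{{ rax_api_key }}"
        region: DFW
```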
Hadoop is constructed from a large set of servers (or nodes), so to manage it properly you need a good overall view of the system. It is OK if some data nodes are out of service, up to a certain number; this is known as partial failure. The important thing is to be able to see how many nodes are up or down, so you know the overall health of the cluster.
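The idea of tolerating partial failure can be illustrated with a toy sketch. The function below (a made-up example, not part of any Hadoop tooling) summarizes a list of per-node states and flags the cluster as unhealthy only once too many data nodes are down:

```python
from collections import Counter

def cluster_health(node_states, max_down=2):
    """Summarize cluster health from (node, state) pairs.

    A few down nodes (partial failure) are tolerable; past
    max_down, the cluster as a whole is considered unhealthy.
    """
    counts = Counter(state for _, state in node_states)
    down = counts.get("down", 0)
    if down == 0:
        status = "healthy"
    elif down <= max_down:
        status = "degraded"
    else:
        status = "unhealthy"
    return {"up": counts.get("up", 0), "down": down, "status": status}

# One down node out of four: partial failure, but the cluster still works.
nodes = [("dn1", "up"), ("dn2", "up"), ("dn3", "down"), ("dn4", "up")]
print(cluster_health(nodes))
```

In a real deployment the node states would come from a monitoring feed rather than a hard-coded list, but the aggregation step is the same: the up/down counts, not any single node, determine overall health.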
This article discusses how the Rackspace Global Data team created a well-monitored Hadoop cluster by taking advantage of cloud services.
PyTennessee was a wonderfully put-together conference with a great variety of speakers.
OpenStack SDKs exist for several programming languages, including Python, Go, Ruby, and many more. For those who don't wish to write code, users in the *nix world can use curl at the command line to perform operations.
What about Microsoft Windows administrators? Are they required to learn Linux, Bash, and curl? What if they could use the skills they already have, or learn new skills that are native to the Windows environment, for OpenStack administration? Is there a command line or scripting tool that suits the Windows DevOps world?
This is the fourth and last article in my series on OpenStack orchestration with Heat. In the previous articles, I gave you a gentle introduction to Heat, and then I showed you some techniques to orchestrate the deployment of single and multiple instance applications on the cloud, all done with generic and reusable components.
Today I'm going to discuss how Heat can help with one of the most important topics in cloud computing: scalability. As in my previous articles, I'm going to give you actual examples that you can play with on your OpenStack cloud, so make sure you have an environment where you can run tests, whether it's a Rackspace Private Cloud, DevStack or any other OpenStack distribution that includes Heat.
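As a preview of the kind of resource we'll be working with, the fragment below is a minimal, hypothetical sketch of a HOT template declaring an autoscaling group with a scale-up policy. The flavor, image, and size values are placeholders, not tested settings:

```yaml
heat_template_version: 2013-05-23
description: Hypothetical sketch of an autoscaling group of servers

resources:
  scaling_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          flavor: m1.small          # placeholder flavor
          image: my-server-image    # placeholder image name

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: scaling_group }
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
      cooldown: 60
```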