Posts categorized “devops”
Ansible development moves quickly, and anyone using Ansible extensively has most likely come across an instance where a playbook that used to work fails on a later Ansible version. Or a system that wasn't supported initially is later added, and an existing role requires modification to work on the new system. See Molecule for Ansible role creation for more details on using and debugging Molecule. Creating a Molecule scenario to test an existing role allows for easy testing and modification of that role, with all the benefits that Molecule provides.
In our Quality Engineering organization, we create, configure, and destroy a lot of servers via automation. Ansible is a great method for handling server configuration, but creating Ansible roles and playbooks can be trial and error even for experienced operations engineers. Molecule speeds up development of Ansible roles and playbooks, and increases confidence in them, by wrapping a virtualization driver with tools for testing and linting.
Using Terraform with Rackspace Public Cloud
Handling a huge scale of infrastructure requires automation and infrastructure as code. Terraform is a tool that helps to manage a wide variety of systems including dynamic server lifecycle, configuration of source code repositories, databases, and even monitoring services. Terraform uses text configuration files to define the desired state of infrastructure. From those files, Terraform provides information on the changes to be made based on the current state of that infrastructure, and can make those changes.
Modern application environments can be complex and include many discrete elements that can all affect the end user's experience. Because of this, it can be challenging to develop an effective monitoring strategy that alerts you to potential performance problems and also uses metrics from a variety of systems to proactively address bottlenecks and slow points before they impact end users. In this article, we'll be discussing several best practices for ensuring that your environment is effectively monitored.
Long-running threads, application locks, thread contention, and other problems can all cause significant performance problems in Java applications, up to and including a complete lock-up of the Java Virtual Machine (JVM). Thread dumps are a vital tool in analyzing and troubleshooting these problems. A thread dump is a point-in-time snapshot of the stack traces of all active threads in the JVM. Typically, to get to the root cause, an engineer takes several thread dumps approximately 5-15 seconds apart and compares the state of all threads to find commonalities: threads that are long running, blocking other threads, caught in circular deadlocks, and so on. In large applications, you may have thousands of threads, which can make this analysis challenging. In this article, we'll discuss how a tool called fastthread.io can offload most of the heavy lifting and give us immediate insight into the state of the application threads.
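As a rough illustration of the kind of comparison fastthread.io automates, a short script can tally threads by state within one dump; a high BLOCKED count that persists across successive dumps hints at contention. This is a hypothetical sketch, assuming the standard HotSpot dump format:

```python
import re
from collections import Counter

# Matches the state line HotSpot prints for each thread, e.g.
#   java.lang.Thread.State: BLOCKED (on object monitor)
STATE_RE = re.compile(r"java\.lang\.Thread\.State: (\w+)")

def count_thread_states(dump_text):
    """Tally threads by state in a single thread dump."""
    return Counter(STATE_RE.findall(dump_text))

# Abbreviated, made-up dump excerpt for demonstration:
sample = """
"http-worker-1" #12 daemon prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"http-worker-2" #13 daemon prio=5
   java.lang.Thread.State: RUNNABLE
"scheduler-1" #14 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"""
print(count_thread_states(sample))
```

Running this over each of the dumps you captured, and diffing the counts, is the manual version of what the tool does for you.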
Rackspace Application Services provides application support and management to a wide variety of customers ranging in size from small environments with only a few application servers to customers that run thousands of Java Virtual Machines (or JVMs) across their environment. To help facilitate this, we heavily rely on Ansible to help us automate implementation, troubleshooting, and maintenance tasks. While Ansible is quite powerful and easy to use, many organizations do not take full advantage of some of the features that it provides. In this article, we'll be discussing how you can extend Jinja2 and Ansible's built-in filter plugins and how you can craft a completely new filter plugin to make specific tasks easier.
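To give a flavor of what the article covers: a custom filter plugin is just a Python file dropped in a `filter_plugins/` directory next to your playbook, exposing a `FilterModule` class whose `filters()` method maps filter names to callables. A minimal sketch (the `human_bytes` filter and its file name are our own invention, not from the article):

```python
# filter_plugins/human_filters.py (hypothetical file)
class FilterModule(object):
    """Expose custom Jinja2 filters to Ansible playbooks."""

    def filters(self):
        return {"human_bytes": self.human_bytes}

    @staticmethod
    def human_bytes(size, precision=1):
        """Render a raw byte count as a human-readable string."""
        units = ["B", "KB", "MB", "GB", "TB"]
        size = float(size)
        i = 0
        while size >= 1024 and i < len(units) - 1:
            size /= 1024
            i += 1
        return f"{size:.{precision}f} {units[i]}"
```

In a playbook it could then be used like any built-in filter, e.g. `{{ file_size | human_bytes }}`.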
Last year, we shared the foundation Rackspace uses for Sitecore security hardening in a blog on this site. We're due for an update now that Sitecore has published additional best practices, and, here at Rackspace, we've folded those recommendations into our PowerShell process for securing environments. The Rackspace Managed Services for Sitecore team incorporates this into our provisioning work program for enterprise Sitecore projects.
As more web application workloads move to the cloud, organizations need to be concerned about attacks from the internet. External threats are scanning public IP ranges to find known vulnerabilities and exploit businesses. Let's take a look at the Azure Application Gateway (WAF), and see how it can be a part of our toolset for protecting our web applications.
Azure SQL is Microsoft's answer to Platform as a Service for SQL Server. It abstracts away a lot of the day-to-day administrative tasks of managing an installation. Let's take a look at how a consumer of Azure SQL can export data to restore to a local, on-premises installation.
With Azure App Service, backing up your web app is available depending on which App Service plan is chosen. As larger applications move to the cloud, certain files or folders do not need to be backed up. This is not something an end user can do in the Azure portal, so let's investigate how we can filter out files or folders during the backup process.
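For context, the mechanism App Service provides is a plain-text `_backup.filter` file at the site root listing site-relative paths to exclude, one per line. A sketch that generates such a file (the helper name and paths are illustrative; on App Service the file lives under `D:\home\site\wwwroot`):

```python
import tempfile
from pathlib import Path

def write_backup_filter(site_root, excluded_paths):
    """Write an App Service _backup.filter file: one site-relative path
    per line; the listed files and folders are skipped during backup."""
    filter_file = Path(site_root) / "_backup.filter"
    filter_file.write_text("\n".join(excluded_paths) + "\n")
    return filter_file

# Example: exclude a logs folder and a large static asset (paths made up).
excluded = [r"\site\wwwroot\logs", r"\site\wwwroot\images\brand.png"]
demo_root = tempfile.mkdtemp()  # stand-in for the real site root
print(write_backup_filter(demo_root, excluded).read_text())
```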
As we've discussed in previous posts, AppDynamics is a powerful Application Performance Management (APM) tool that can be used to help tune performance in your application. However, with many organizations adopting a CI/CD approach to their application development lifecycle, it can be difficult to determine how these frequent deployments are affecting application performance and end-user experience.
Application Performance Management (APM) tools can provide incredibly valuable insight into the performance of your applications and ultimately your end users' experience. This insight, however, does not come without its cost. Because APM tools instrument code at runtime, there is always some level of performance overhead. In contrast to some older APM tools, modern APM tools are designed to minimize the negative performance impact as much as possible to allow you to safely run them in production without your end users' experience suffering. In this post, we'll evaluate the overhead introduced on an Adobe Experience Manager (AEM) environment by two popular APM tools: AppDynamics and New Relic.
You may have found the extensions tab when browsing in an Azure Web App. Selecting extensions to add to an application is as easy as just pointing and clicking. Moving outside of the portal to an ARM template, things get a little bit tricky because documentation is lacking.
Information Design and Documentation Presents RPCO v13.1!
You can use the MyCloud Control Panel and the Orchestration service to create a new Rackspace server and install Jenkins in one step.
If you are using Azure Blob Storage and have a heavy workload, here's a performance improvement the majority of people overlook: pay attention to the name you use for an Azure storage account.
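The idea behind this tip is that storage is partitioned by name, so a run of lexically adjacent account names can land on the same partition server. One commonly suggested mitigation is to prepend a short hash so related names no longer sort together; a sketch under that assumption (the helper name is hypothetical):

```python
import hashlib

def distributed_account_name(base_name):
    """Prepend a short hash so related storage account names do not
    sort adjacently, avoiding a run of sequential names landing on
    the same partition server (hypothetical helper)."""
    prefix = hashlib.sha1(base_name.encode()).hexdigest()[:4]
    # Account names must be 3-24 lowercase alphanumeric characters.
    return (prefix + base_name)[:24].lower()

# Sequential names like appdata01, appdata02, ... become scattered:
for base in ("appdata01", "appdata02", "appdata03"):
    print(distributed_account_name(base))
```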
Using Sitecore with the experience database requires a connection to MongoDB, which can add quite a bit of complexity to your Sitecore installation. Here are some frequently asked questions about using ObjectRocket to host MongoDB for Sitecore.
One of the great mysteries in life is predicting the future needs of your collection database as it stores interactions over the entire life of your application. It’s a tricky thing to predict accurately, because user behavior and site content change over time. However, we can make some estimations to provide some guidance on our database requirements.
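A back-of-the-envelope model of such an estimate: multiply expected visits by the average interaction document size over the retention window, then pad for indexes and growth. All numbers here are illustrative assumptions, not Sitecore guidance:

```python
def estimate_collection_gb(visits_per_day, avg_interaction_kb, retention_days,
                           overhead_factor=1.2):
    """Rough sizing: raw interaction volume over the retention window,
    padded ~20% for indexes and growth. Inputs are guesses that should
    be revisited as real traffic data accumulates."""
    raw_kb = visits_per_day * avg_interaction_kb * retention_days
    return raw_kb * overhead_factor / (1024 * 1024)  # KB -> GB

# e.g. 10,000 visits/day, ~5 KB stored per visit, kept for a year:
print(f"{estimate_collection_gb(10_000, 5, 365):.1f} GB")
```

Because user behavior and site content change over time, the useful part of a model like this is re-running it with updated inputs, not the first answer it gives.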
Sitecore can make use of tempdb in SQL Server to speed up session-state operations. What catches people off guard is that tempdb is recreated every time the SQL Server service restarts. This becomes a problem because you then have to recreate the table structure and user permissions inside tempdb.
Security Hardening for Sitecore Environments
We in the Rackspace Managed Services for Sitecore team work with a variety of enterprise Sitecore projects. Part of our implementation routine is to complete "security hardening" for Sitecore, which means applying the set of published security best practices from Sitecore.
Out of the box, Sitecore installs a demo-friendly and developer-ready solution. This is not a configuration suitable for running in production on many grounds, but our focus here is on security, so let's examine the specific Sitecore security hardening recommendations (Sitecore Security Hardening documentation) each in turn and share how we at Rackspace apply them.
Where do you conduct your User Acceptance Testing (UAT) activities? It's a loaded question that many organizations have challenges addressing, as they first need to obtain a clear definition of what UAT is (and what it isn't) before they even consider where UAT activities should occur. The benefits of a properly instituted UAT environment far outweigh the challenges, and the danger of not having one, but success requires a thoughtful and purposeful approach.
One day I was testing this neat new API feature and was really struggling with those
"I'm not a browser!" I thought. "Can I have this in a proper scripting or dev language?"
Since I couldn't find it anywhere, I decided to write this tutorial myself.
Everybody talks about security, and, in the cloud, sometimes the tools and options available seem confusing or inefficient because they require a lot of repetitive actions. Plus, it's all done with Linux tools. And I want to use PowerShell.
Most companies just need a simple means to filter traffic to their Cloud Servers, so around 2015 Rackspace launched, in limited availability, its own implementation of a very useful feature called 'Security Groups'.
AppDynamics is a powerful Application Performance Management tool that, properly configured, can provide tremendous insight into application and infrastructure performance bottlenecks and enable operations and development teams to rapidly identify and resolve issues. Although AppDynamics collects and measures application performance data out of the box, some configuration and customization is necessary in order to reach its full capabilities. This guide explains best practices around how to identify your application's critical business transactions in order to get the most out of AppDynamics and, ultimately, the most out of your application and infrastructure.
Using the Azure diagnostics extension lets you capture a good set of metrics to help you trend and diagnose issues on your virtual machine. What a lot of people don't know is that you can also configure it to capture custom log files.
Azure file storage is a great storage offering for a simple centralized file storage share that I often see go unused. A super feature is the ability to mount the share as a mapped network drive on your local machine.
Automation in Windows has historically been a challenge due to the lack of built-in tools for remote management. In the past few years, enhancements to PowerShell and WinRM (Windows Remote Management) have forged a path that puts Windows more on par with other operating systems for remote access.
In a previous blog post, I described how to set up Sitecore in a Docker container. A reader asked why pulling Docker images on an Azure Docker host wasn't working. It turns out there is an open issue tracking this exact problem. I was doing some testing in Azure today and noticed you still cannot do a Docker pull while your host is running in Azure, so let's look at the workaround.
Sitecore implementations with Content Delivery nodes in multiple locations must keep their databases and content in sync. The Sitecore Scaling Guide summarizes areas of concern, such as isolating CM and CD servers, enabling the Sitecore scalability settings, maintaining search indexes, etc. Sitecore runs on top of SQL Server, and one topic touched on in the Scaling Guide is SQL Server replication, and conveniently there is a Sitecore guide just for that specific subject. This guide explains how, with SQL Server Merge Replication, one can coordinate the content of Sitecore databases that are not in the same location. This is the starting point for what we at Rackspace have found to be a global publishing architecture that meets the needs of enterprise Sitecore customers.
I previously made a blog post on how to manually set up Sitecore running in a Docker container. I would like to take it one step further and build a Docker image using an automated install of Sitecore during the build process. We can then build Sitecore development environments on demand using our Docker Sitecore image.
In my previous blog post on running Sitecore in a Docker container, I used Azure SQL to host my Sitecore databases. Wanting a clean environment each time I develop, I needed a quick way to provision to Azure. With that requirement in mind, I wrote a PowerShell script that makes this task repeatable for development and testing.
OpenStack SDKs exist for several programming languages, including Python, Go, Ruby, and many more. For those who don't wish to write code, users in the *nix world can use curl at the command line to perform operations.
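Whichever language or tool you pick, the first step against an OpenStack API is requesting a token from the identity service. As a sketch in Python, here is just the Rackspace-style auth payload being built (the network call is omitted, and the credentials are placeholders):

```python
import json

def rackspace_auth_body(username, api_key):
    """Build the JSON body for a Rackspace identity token request,
    using the RAX-KSKEY API-key credential extension to Keystone v2."""
    return {
        "auth": {
            "RAX-KSKEY:apiKeyCredentials": {
                "username": username,
                "apiKey": api_key,
            }
        }
    }

# This payload would be POSTed to the identity endpoint's /tokens path;
# the response contains the token used on subsequent API calls.
print(json.dumps(rackspace_auth_body("demo-user", "0000-placeholder"), indent=2))
```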
What about Microsoft Windows administrators? Are they required to learn Linux, Bash, and curl? What if they could use the skills they already have, or learn new skills that are native to the Windows environment, for OpenStack administration? Is there a command line or scripting tool that suits the Windows DevOps world?
Elasticsearch is a powerful distributed schema-less datastore and its main focus is indexing/search functionality. One benefit of Elasticsearch is simple cluster management via multicast, which is provided out of the box.
Unfortunately, multicast is often blocked by cloud vendors due to the security concerns of a multi-tenant network (imagine exposing your software to the rest of the cloud via multicast). This is where Rackspace Cloud Networks can help out. One of the primary goals of Cloud Networks is to allow "personal" L2 networks in a multi-tenant environment. This means we get multicast!
This is a guest post from Topher Marie, VP of Engineering at JumpCloud.
I’ve got a new Unix server up at Rackspace and I need to get my users some accounts on it. How do I go about doing that? I could copy and paste each of their passwords… well, I guess really I should have them come type their passwords, shouldn’t I? Actually, I know that public key authentication is more secure and robust than password-based authentication, so I’m going to go with that. Let’s assume that each of the users I want to give access to this machine already has a public/private key pair set up. Here are some basic notes on that procedure to get you started.
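As one way to script the key-installation step, here is a hedged Python sketch (the helper name and demo key are ours) that appends a public key to a user's authorized_keys with the strict permissions sshd insists on:

```python
import os
import stat
import tempfile
from pathlib import Path

def install_public_key(home_dir, public_key):
    """Append a public key to <home>/.ssh/authorized_keys, creating the
    directory and file with the permissions sshd requires
    (700 on .ssh, 600 on authorized_keys)."""
    ssh_dir = Path(home_dir) / ".ssh"
    ssh_dir.mkdir(exist_ok=True)
    os.chmod(ssh_dir, 0o700)
    auth_keys = ssh_dir / "authorized_keys"
    with open(auth_keys, "a") as f:
        f.write(public_key.rstrip() + "\n")
    os.chmod(auth_keys, 0o600)
    return auth_keys

# Demo against a throwaway directory standing in for a user's home:
demo_home = tempfile.mkdtemp()
key_file = install_public_key(demo_home, "ssh-ed25519 AAAA_example alice@laptop")
mode = stat.S_IMODE(os.stat(key_file).st_mode)
```

Run once per user on the server (pointing at the real home directory and the key the user sent you), and they can then log in without a password prompt.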
A question that often comes up is "why should I use config management when I can just use images?" In this article, we’ll explore the differences between images and configuration management, and talk about the benefits and drawbacks of each.
Understanding the Chef Environment File in Rackspace Private Cloud v4.2.x powered by OpenStack Havana
In a previous post I went through two typical Chef Environment files specific to Rackspace Private Cloud v4.1.x powered by OpenStack Grizzly with nova-network and Quantum Networking. However, with Rackspace Private Cloud v4.2.x powered by OpenStack Havana some things have changed, in particular Quantum has been renamed to Neutron.
In the following post, I am going to break down each part of the Chef Environment file, including the Highly Available pieces, specific to Rackspace Private Cloud 4.2.x powered by OpenStack Havana.
This is a guest post written by Michael DeHaan, CTO at AnsibleWorks. AnsibleWorks provides IT orchestration solutions that simplify the way IT manages systems, applications, and infrastructure.
A while back I wrote about Ansible as a way to simply automate IT infrastructure, and showed how to achieve some interesting zero-downtime rolling update capabilities.
Rackspace Private Cloud uses Chef to deploy an OpenStack environment. Chef provides the ability to quickly configure and deploy an OpenStack environment on one to many nodes. An integral part of deployment is the Chef Environment file. This file can be difficult to understand as a newcomer to Chef.
In the following post, I am going to break down each part of two typical Chef Environment files specific to Rackspace Private Cloud v4.1.x powered by OpenStack Grizzly.
A new post covering the Chef Environment file for Rackspace Private Cloud v4.2.x, including the highly available bits, can be found here.
Here are the latest Ruby treats from the Developer Relations Group.
After two months, the fog community has released 1.16.0. With all this extra time, we sure managed to pack a lot of goodies into it!
- Support for Rackspace Auto Scaling.
- The Rackspace Compute provider now defaults to Next Gen Servers.
- Cloud Block Storage now supports creating volumes from snapshots.
- Cloud Servers now retrieves full details for flavor and image calls.
When building a non-trivial application, you will need to manage assets in the cloud. Servers, files, containers, load balancers, databases - setting these up and maintaining them is a part of your day-to-day work. You can use the Rackspace control panel to spin up a server. Or you can use the Rackspace API, and write a quick script to do what you need. Each of these tools has its ups and its downs, depending on your point of view and how you like to work.
As Ruby developers, we've become accustomed to doing a lot from the command line. In fact, there is so little that isn't done with a CLI (or editor), jumping over to the GUI of the control panel feels both jarring and limiting. So we decided to build rumm - a command line tool for working with the Rackspace cloud.
London Unlocked Update
Last week we wrapped the second stop of Unlocked: The Hybrid Cloud in beautiful London, and I want to give you a quick recap of the event. In our best turnout so far, we had over 100 developers, engineers, and business executives join us, all eager to learn about all things cloud. The London event was a little different than the first in NYC. Based on feedback from NYC attendees, we split the London Unlocked event into dual tracks: business and technical. We wanted to test the waters and see if there was an appetite for more focused tracks, so we decided to offer it up and see what the feedback was like. It was a super successful event in all regards - minus the fire alarm that caused us to evacuate for approximately 30 minutes. I'm absolutely looking forward to returning to London for the next Unlocked event (date TBD).
Portland is known for quite a few things: great food, street artists, and of course plenty of breweries. This week Portland will be home to OSCON, a gathering of all things open source. Come to OSCON to learn about the latest open source technologies and how best to utilize them.
Rackspace has been an advocate of open source technology for many years. We worked with NASA three years ago to create an open source cloud, OpenStack, which has taken the world by storm. The impact that OpenStack has had on the world is massive. Well-known companies including Best Buy, Comcast, and Bloomberg are all using OpenStack due to its open nature and the community around it, and these companies spoke at the OpenStack Summit.
This week, Rackspace embarks on a journey across the globe called Unlocked: The Hybrid Cloud. Unlocked is a free one-day cloud workshop, sponsored and hosted by Rackspace, that we’ll hold in several major cities to help you determine which cloud environment – public, private or hybrid cloud – is the best fit for your application.
But don't expect to get blasted with Rackspace marketing the entire day. In fact, you should expect the opposite. We’re a world leader in cloud computing and over the years we have seen thousands of cloud applications and infrastructure designs, and we have spoken with hundreds of engineering teams. What we have gained from that is a solid understanding of what building in the open cloud looks like; from single-server WordPress installs to complex adaptive applications that span clusters of cloud servers and everything in between.
The second day of Gluecon 2013 started off with a bang with a great keynote presentation from Lew Cirne, CEO of New Relic and Stephen O’Grady of Redmonk. Day two was packed with compelling topics including enterprise software history, CEOs who love to code, why to API and a further introduction to Google Compute Engine.
Just a quick scan of the two-day conference agenda made it apparent that this was going to be a hands-on conference where developers could speak their minds about APIs, tools, distributed systems and a bunch of other topics. There were more than 20 sessions chock full of great discussion. The developer vibe was strong and the standard developer “uniform” of jeans and t-shirts dominated the expo floor and sessions.