Security Hardening for Sitecore Environments
We in the Rackspace Managed Services for Sitecore team work with a variety of enterprise Sitecore projects. Part of our implementation routine is to complete "security hardening" for Sitecore, which means applying the set of security best practices published by Sitecore.
Out of the box, Sitecore installs a demo-friendly, developer-ready solution. That configuration is not suitable for running in production on many grounds, but our focus here is on security, so let's examine each of the published Sitecore security hardening recommendations (see the Sitecore Security Hardening documentation) in turn and share how we, at Rackspace, apply them.
In a previous post, I talked about crypto API tradeoffs. In this post, I'll go into a specific API design case in caesium, a cryptographic library for Clojure, a language that runs on the Java Virtual Machine (JVM).
A Performance Comparison: Apache vs. NGINX for OpenStack Keystone
In a previous article, I showed how to configure Keystone to run behind NGINX instead of the currently recommended configuration using Apache. Since its inception, NGINX has enjoyed significant growth in the web server space, and Netcraft's monthly web server survey shows its market share continuing to grow. The data for May 2016 can be found at: http://news.netcraft.com/archives/2016/05/26/may-2016-web-server-survey.html
Producing cryptographic software is a difficult and specialized endeavor. One of the pitfalls is that getting it wrong looks exactly like getting it right. Much like a latent memory corruption bug or a broken distributed consensus algorithm, a piece of cryptographic software can appear to be functioning perfectly, while being subtly broken in a way that only comes to light years later. As the adage goes, attacks never get worse; they only get better. Implementation concerns like timing attacks can be fiendishly complicated to solve, involving problems like division instructions on modern Intel CPUs taking a variable number of cycles depending on the size of the input. Implementation concerns aren't the only problem; just designing the APIs themselves is a complex task as well.
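To make the timing concern concrete, here's a minimal Python sketch (Python chosen purely for brevity; the byte strings are made up for illustration) contrasting a naive comparison, whose running time depends on where the first mismatch occurs, with the standard library's constant-time alternative:

```python
import hmac

def naive_eq(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time reveals
    # how much of the secret an attacker has guessed correctly: a timing leak.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"s3cret-tag"
# hmac.compare_digest examines every byte regardless of where a mismatch is.
print(hmac.compare_digest(b"s3cret-tag", secret))  # True
print(hmac.compare_digest(b"guess-tag!", secret))  # False
```

In practice, reach for a vetted constant-time primitive like `hmac.compare_digest` rather than writing your own comparison.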
Like all API design, cryptographic API design is a user experience exercise. It doesn't matter how strong or fast your cryptographic software is if no one uses it. The people who end up with ECB mode didn't end up with it because they understood what that meant. They got stuck with it because it was the default and it didn't require thinking about scary parameters like IVs, nonces, salts and tweaks. Even if someone ended up with CTR or CBC, those APIs are still precarious; they're still vulnerable to issues like nonce reuse, fixed IVs, key-as-IV, and unauthenticated encryption.
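To illustrate why ECB mode is such an unfortunate default, here's a toy Python sketch. The "block cipher" below is a made-up deterministic function (not invertible and not secure; it exists only to demonstrate the structure of the mode): ECB encrypts each block independently, so identical plaintext blocks always produce identical ciphertext blocks, leaking patterns in your data.

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy stand-in for a block cipher: a deterministic function of (key, block).
    # NOT a real cipher -- it only serves to show ECB's block-wise determinism.
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: encrypt each 16-byte block independently, with no IV or nonce.
    assert len(plaintext) % 16 == 0
    blocks = [plaintext[i:i + 16] for i in range(0, len(plaintext), 16)]
    return b"".join(toy_block_encrypt(key, b) for b in blocks)

key = b"0" * 16
msg = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks
ct = ecb_encrypt(key, msg)
# Identical plaintext blocks produce identical ciphertext blocks:
print(ct[:16] == ct[16:32])  # True
```

A mode with a nonce or IV would make those two ciphertext blocks differ; with ECB, equal plaintext blocks are visible to anyone holding the ciphertext.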
User experience design always means deep consideration of who your users are. A particular API might be necessary for a cryptographic engineer to build new protocols, but that API is probably not a reasonable default encryption API. An explicit-nonce encryption scheme is great for a record layer protocol between two peers, like TLS, but it's awful for someone trying to encrypt a session cookie. We can't keep complaining about people getting it wrong while giving them no chance of getting it right. This is why I'm building educational material like Crypto 101, and why I care about cryptographic constructions, like nonce-misuse-resistant schemes, that are easier to use correctly. (The blog post on my new nonce-misuse resistant schemes for libsodium is coming soon, I promise!)
Before you can make your API easy to use, you have to worry about getting it to work at all.
An underlying cryptographic library might expose an unfortunate API. It might be unwieldy because of historical reasons, backwards compatibility, language limitations, or even simple oversight. Regardless of why the API is the way it is, even minute changes to it—a nicer type, an implied parameter—might have subtle but catastrophic consequences for the security of the final product. Figuring out if an arbitrary-length integer in your programming language is interchangeable with other representations, like the implementation in your crypto library or a char *, has many complex facets. It doesn't just have to be true under some conditions; ideally, it's true for every platform your users will run your software on, in perpetuity.
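As a concrete illustration of how many facets hide in that question, here is a small Python sketch (Python chosen for brevity; the values are arbitrary) showing that even converting an arbitrary-length integer to bytes forces choices about byte order and width:

```python
n = 0x0102

# The "same" integer has many byte representations, depending on byte order...
big = n.to_bytes(2, "big")        # b'\x01\x02'
little = n.to_bytes(2, "little")  # b'\x02\x01'
assert big != little

# ...and on width: a crypto library may expect fixed-size, zero-padded buffers.
assert (1).to_bytes(4, "big") == b"\x00\x00\x00\x01"

# Round-tripping only works if both sides agree on every one of those choices.
assert int.from_bytes(big, "big") == n
assert int.from_bytes(big, "little") != n
```

Each of those choices has to match what the C library underneath expects, on every platform, forever.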
There might be an easy workaround to an annoying API. C APIs often take a char * together with a length parameter, because C doesn't have a standard way of passing a byte sequence together with its length. Most higher level languages, including Java and Python, have byte sequence types that know their own length. Therefore, you can specify the char * and its associated length in a single parameter on the high-level side. That's just the moral equivalent of building a small C struct that holds both. (Whether or not you can trust C compilers to get anything right at all is a point of contention.)
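Here's a minimal sketch of that idea in Python, using ctypes and libc's memcmp (the choice of memcmp and loading libc from the current process are illustrative assumptions; this works on Linux and macOS): the C side takes a pointer and a length as two parameters, while the high-level caller passes a single bytes object that already knows its own length.

```python
import ctypes

# Load symbols from the current process; on Linux/macOS this exposes libc's
# memcmp(const void *, const void *, size_t).
libc = ctypes.CDLL(None)
libc.memcmp.restype = ctypes.c_int

def buf_eq(a: bytes, b: bytes) -> bool:
    # One high-level parameter per buffer; the wrapper derives the C length
    # argument from the object itself, so callers can't get it wrong.
    return len(a) == len(b) and libc.memcmp(a, b, len(a)) == 0

print(buf_eq(b"hello", b"hello"))  # True
print(buf_eq(b"hello", b"hellp"))  # False
```

(Note that memcmp is not constant-time; it's used here only to show the pointer-plus-length calling convention.)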
These problems compound when you are binding libraries in languages and environments with wildly different semantics. For example, your runtime might have a relocating garbage collector. Pointers in C and objects in CPython stay put, but objects move around all the time in environments like the JVM (HotSpot) or PyPy. That implies copying to or from a buffer whenever you call C code, unless the underlying virtual machine supports "memory pinning": forcing the object to stay put for the duration of the call.
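A small Python sketch of the difference (illustrative only; the data is made up): CPython's objects never move, so a foreign-function layer can hand C a pointer directly into an object's storage, while a relocating runtime must either copy or pin.

```python
import ctypes

data = bytearray(b"secret")

# CPython never relocates objects, so ctypes can expose a direct view into the
# bytearray's storage -- no copy is made:
view = (ctypes.c_char * len(data)).from_buffer(data)
view[0] = b"S"
print(data)  # bytearray(b'Secret') -- the write is visible: no copy happened

# On a relocating runtime (HotSpot, PyPy) the safe default is a copy instead:
copy = ctypes.create_string_buffer(bytes(data), len(data))
copy[0] = b"X"
print(data)  # still bytearray(b'Secret') -- the copy is independent
```

Pinning gives you the first behavior (direct access) even on a moving collector, at the cost of constraining the garbage collector for the duration of the call.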
Programmers normally operate in a drastically simplified model of the world. We praise programming designs for their ability to separate concerns, so that programmers can deal with one problem at a time. The modern CPU your code runs on is an intricate beast, but you don't worry about cache lines when you're writing a Python program. Only a fraction of programmers ever have to worry about them at all, and those who do typically only do so after the program already works, so they can still focus on one part of the problem at a time.
When designing cryptographic software, these simplified models we normally program in don't generally work. A cryptographic engineer often needs to worry about concerns all the way up and down the stack simultaneously: from application layer concerns, to runtime semantics like the Java Language Specification, to FFI semantics and the C ABI on all relevant platforms, to the underlying CPU, to the mathematical underpinnings themselves. The engineer has to manage all of those, often while being hamstrung by flawed designs like TLS' MAC-then-pad-then-encrypt mess.
In future blog posts, I'll go into more detail about particular cryptographic API design concerns, starting with JVM byte types. If you're interested, you should follow me on Twitter.
Footnote: I'm happy to note that cffi now also has support for memory pinning since PyPy will support it in the upcoming 5.2 release, although that means I'll no longer be able to make Paul Kehrer of PyCA fame jealous with the pinning support in caesium.
Run OpenStack Keystone and Horizon using NGINX on Ubuntu 16.04
I previously wrote an article showing how to convert OpenStack from using an Apache server to NGINX for both Keystone and the Horizon interface. Since that article was written, OpenStack has moved to the Mitaka release, and Ubuntu has moved to a new long-term support release, Ubuntu 16.04 "xenial". These two releases bring a number of changes to the configuration. In this article, I show you how to make the transition to NGINX running these newer releases.
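To give a feel for the shape of the change, here is a minimal sketch of the kind of NGINX server block involved, proxying Keystone's public endpoint to a uWSGI socket (the socket path is an illustrative assumption; the port is Keystone's conventional public port):

```nginx
# Keystone public endpoint served by NGINX via uWSGI
# (socket path below is illustrative -- adjust to your deployment).
server {
    listen 5000;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/keystone-public.socket;
    }
}
```

The rest of the article covers the matching uWSGI service definitions and the Horizon configuration.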
Where do you conduct your User Acceptance Testing (UAT) activities? It's a loaded question that many organizations have trouble addressing, as they first need a clear definition of what UAT is (and what it isn't) before they can even consider where UAT activities should occur. The benefits of a properly instituted UAT environment far outweigh the challenges, and the dangers of not having one are real, but success requires a thoughtful and purposeful approach.
Craig Costello, Patrick Longa and Michael Naehrig, three cryptographers at Microsoft Research, recently published a paper on supersingular isogeny Diffie-Hellman. This paper garnered a lot of interest in the security community and even made it to the front page of Hacker News. Most of the discussion around it seemed to be about how no one understands isogenies, even within cryptography-literate communities. This article aims to give you a high-level understanding of what this cryptosystem is and why it works.
One day I was testing a neat new API feature and really struggling with the examples.
"I'm not a browser!" I thought. "Can I have this in a proper scripting or dev language?"
Since I couldn't find it anywhere, I decided to write this tutorial myself.
Everybody talks about security, and in the cloud the available tools and options sometimes seem confusing or inefficient because they require a lot of repetitive actions. Plus, it's all done with Linux tools, and I want to use PowerShell.
Most companies just need a simple means of filtering traffic to their Cloud Servers, so around 2015 Rackspace launched, in limited availability, our own implementation of a very useful feature called 'Security Groups'.
AppDynamics is a powerful Application Performance Management tool that, properly configured, can provide tremendous insight into application and infrastructure performance bottlenecks and enable operations and development teams to rapidly identify and resolve issues. Although AppDynamics collects and measures application performance data out of the box, some configuration and customization is necessary to reach its full capabilities. This guide explains best practices for identifying your application's critical business transactions so you can get the most out of AppDynamics and, ultimately, the most out of your application and infrastructure.
Using the Azure diagnostics extension lets you capture a good set of metrics to help you trend and diagnose your virtual machine's behavior. What a lot of people don't know is that you can also configure it to capture custom log files.