Posts categorized “architecture”
Businesses compete to transform digitally, but most are restricted in some way from moving to the cloud or to a new data center by existing applications or infrastructure. Docker® comes to the rescue by decoupling applications from infrastructure. It is the only container platform that addresses every application across the hybrid cloud.
This blog provides insights into the Docker architecture and its key features, explains why you might want to use Docker, and helps you get started with these migration activities.
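To make that independence concrete, here is a minimal sketch of the container workflow; the image name, registry host, and ports are placeholders, not values from any particular migration:

```shell
# Build an image from the Dockerfile in the current directory.
docker build -t myapp .

# Run it locally, mapping host port 8080 to the container's port 80.
docker run -d --name myapp -p 8080:80 myapp

# Tag and push the same image to a registry; it then runs unchanged
# on any Docker host, in a data center or in the cloud.
docker tag myapp registry.example.com/myapp
docker push registry.example.com/myapp
```

Because the image carries the application and its dependencies, the host needs only the Docker Engine, and that is what decouples the application from the infrastructure beneath it.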
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. Part 1 of this two-part blog post series discussed the introduction of multitenant architecture in Oracle Database 12c and how to use Data Pump to export and import data. Part 2 covers how to take an export of only pluggable databases (PDBs) and the restrictions that Data Pump places on PDBs.
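As a taste of the PDB-specific workflow, exporting from a single pluggable database is a matter of connecting to that PDB's service name rather than to the container root; the schema, service, and file names below are placeholders:

```shell
# Export the HR schema from PDB1 by connecting to its service name.
# DATA_PUMP_DIR must exist as a directory object inside that PDB.
expdp system@pdb1 schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr_pdb1.dmp logfile=hr_pdb1_exp.log

# Import the same dump into another PDB, on the same or a different CDB.
impdp system@pdb2 schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr_pdb1.dmp logfile=hr_pdb1_imp.log
```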
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. While most database administrators are aware of Data Pump, support for multitenant architecture in Oracle Database 12c introduced changes to how Data Pump exports and imports data.
The Threat and Vulnerability Analysis team at Rackspace is charged with providing internal vulnerability scanning, penetration testing, and red/purple teaming capabilities to reduce cyber-based threats, risk, and exposure for the company. One of our tasks, as part of meeting certain compliance objectives, is to ensure systems are not exposed from various networking "perspectives" without going through a bastion first.
This blog post explores the basics of Oracle® GoldenGate® and its functions. Because it's decoupled from the database architecture, GoldenGate facilitates real-time change data capture and integration across both heterogeneous and homogeneous environments.
This post describes the Oracle® In-Memory Advisor (IMA), a feature of Database 12c, and its benefits. This feature is available in Oracle Database version 12.1.0.2 and later.
Originally published by TriCore: July 11, 2017
In Part 1 of this two-part series on Apache™ Hadoop®, we introduced the Hadoop ecosystem and the Hadoop framework. In Part 2, we cover more core components of the Hadoop framework, including those for querying, external integration, data exchange, coordination, and management. We also introduce a module that monitors Hadoop clusters.
Originally published by TriCore: July 10, 2017
Apache™ Hadoop® is an open source, Java-based framework that's designed to process huge amounts of data in a distributed computing environment. Doug Cutting and Mike Cafarella developed Hadoop, which was released in 2005.
Built on commodity hardware, Hadoop works on the basic assumption that hardware failures are common. The Hadoop framework addresses these failures.
In Part 1 of this two-part blog series, we'll cover big data, the Hadoop ecosystem, and some key components of the Hadoop framework.
This blog gives an overview of the non-relational database, Apache Cassandra™. It discusses its components and provides an understanding of how the database operates and manages data.
Parallel Replicat is one of the new features introduced in Oracle® GoldenGate 12c Release 3 (12.3.0.1). Parallel Replicat is designed to help users quickly load data into their environments by using multiple parallel mappers and threads.
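As a rough sketch (the group, trail, alias, and schema names are invented for illustration), a parallel Replicat is registered in GGSCI with the PARALLEL keyword, and its degree of parallelism is tuned in the parameter file:

```
-- In GGSCI: create the Replicat in parallel mode, reading trail ./dirdat/aa
ADD REPLICAT rp1, PARALLEL, EXTTRAIL ./dirdat/aa

-- Parameter file (rp1.prm): set mapper/applier parallelism, then map data
REPLICAT rp1
USERIDALIAS ggtarget
MAP_PARALLELISM 4
APPLY_PARALLELISM 4
MAP src.*, TARGET tgt.*;
```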
This blog discusses the Oracle Exadata Smart Flash Cache feature and its architecture, including the write-back flash cache feature.
Using Sitecore with the experience database requires a connection to MongoDB, which can add quite a bit of complexity to your Sitecore installation. Here are some frequently asked questions about using ObjectRocket to host MongoDB for Sitecore.
Where do you conduct your User Acceptance Testing (UAT) activities? It's a loaded question that many organizations struggle to address, as they first need a clear definition of what UAT is (and what it isn't) before they can even consider where UAT activities should occur. The benefits of a properly instituted UAT environment far outweigh both the challenges of building one and the dangers of going without, but success requires a thoughtful and purposeful approach.
At our annual rax.io internal technical conference in San Antonio this week, I had a blast hacking on a reporting tool for our new content engine behind developer.rackspace.com and support.rackspace.com.
Sitecore implementations with Content Delivery nodes in multiple locations must keep their databases and content in sync. The Sitecore Scaling Guide summarizes areas of concern, such as isolating CM and CD servers, enabling the Sitecore scalability settings, and maintaining search indexes. Sitecore runs on top of SQL Server, and one topic the Scaling Guide touches on is SQL Server replication; conveniently, there is a Sitecore guide devoted to that specific subject. This guide explains how, with SQL Server Merge Replication, one can coordinate the content of Sitecore databases that are not in the same location. This is the starting point for what we at Rackspace have found to be a global publishing architecture that meets the needs of enterprise Sitecore customers.
Since the initial launch of the OpenStack Innovation Center back in July of 2015, much work has been done. I wanted to take a moment to share the current status and some details about its next phases. If you are unfamiliar with OSIC, let me start with some very quick background information.
Before getting into the nuts and bolts of the load balancing architecture itself, it's important to understand the (typical) multiple tiers of an E-Commerce application framework:
- Firewall (edge)
- Physical local traffic manager (LTM)
- Web Server
- Application Server
- Database Server (cluster)
Keep in mind that, from top to bottom, the environment will be asymmetrical from a load perspective. For example, a single web server will typically handle 2-3x the number of concurrent connections of a single application server, though this is heavily dependent on cache density: higher density shifts more load up into the web tier. Caching is a subject for a later discussion, but at a glance it should account for 80+ percent of content served. With room for variance, the majority of successful architectures achieve this metric, while those that struggle tend to miss it. This is not to say, of course, that a lower density will necessarily cause difficulties. In addition to relocating load away from application servers, a higher cache density opens an opportunity for external services, such as the Akamai CDN, to absorb load before it ever reaches the environment.
What is MongoDB?
MongoDB is, among other things, a document-oriented NoSQL database. This means that it deviates from the traditional, relational model to present a flexible, horizontally scaling model for data management and organization.
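For example, where a relational schema would normalize a record across several tables, a document database stores it as one self-contained document whose shape can vary from record to record (the fields below are invented for illustration):

```
{
  "_id": "u1001",
  "name": "Ada",
  "roles": ["author", "admin"],
  "addresses": [
    { "city": "San Antonio", "primary": true }
  ]
}
```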
How does MongoDB work with AEM?
MongoDB integrates with Adobe Experience Manager (AEM) by means of the crx3mongo runmode and two JVM options: -Doak.mongo.uri and -Doak.mongo.db.
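A sketch of what that startup looks like, with the Mongo host, database name, and jar file name as placeholders:

```shell
# Start AEM against a MongoDB-backed Oak node store by enabling the
# crx3mongo runmode and pointing Oak at the MongoDB instance.
java -Doak.mongo.uri=mongodb://mongo1.example.com:27017 \
     -Doak.mongo.db=aem-author \
     -jar aem-quickstart.jar -r crx3,crx3mongo
```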
Why would I use MongoDB?
Primarily, MongoDB provides an alternative HA configuration to the older CRX cluster configuration. In reality, the architecture is more similar to a shared catalog on NFS or NetApp than to true clustering. The authors and publishers using MongoDB are not necessarily aware of each other.
When it comes to the battle cry of E-Commerce, "we're losing $1M per minute" is the clear winner, but a strong second is certainly "we want a disaster recovery solution." There are numerous benefits to disaster recovery and business continuity planning, especially speaking as the recipient of those 4 a.m. emergency calls. Traditional DR, with routing changes, cutover plans, scaled-down performance, and questionable technical tasks, is a well-traveled path in the industry, and it is very much in line with the expectations of most organizations even today. In the rapid-fire world of E-Commerce, however, this approach presents several challenges and misses a few key opportunities to take advantage of warm-side management.
If you are an OpenStack contributor, you likely rely on DevStack for most of your work. DevStack is, and has been for a long time, the de-facto platform that contributors use for development, testing, and reviews. In this article, I want to introduce you to a project I'm a contributor to, called openstack-ansible. For the last few months, I have been using this project as an alternative to DevStack for OpenStack upstream development, and the experience has been very positive.
I - Introduction
This is the first of a two-part series that demonstrates a pain-free solution a developer could use to transition code from laptop to production. The fictional deployment scenario depicted in this post is one method that can significantly reduce operational overhead on the developer. This series will make use of technologies such as Git, Docker, Elastic Beanstalk, and other standard tools.
The business of managing infrastructure and database technology has grown at Rackspace, and our list of supported technologies under the data umbrella has grown tremendously.
My name is David Grier and I am a product engineer at Rackspace. I concentrate most of my time on Cassandra, Hadoop and related components in the Big Data ecosystem.
We are proud to announce our partnership with DataStax, with whom we are providing a managed DataStax Enterprise (DSE) solution. This article is a high-level view of that managed solution and how we are providing it to our customers.
Container technology is evolving at a very rapid pace. The purpose of the webinar talk in this post is to describe the current state of container technologies within the OpenStack Ecosystem. Topics we will cover include:
- How OpenStack vendors and operators are using containers to create efficiencies in deployment of the control plane services
- Approaches OpenStack consumers are taking to deploy container-based applications on OpenStack clouds
Last week I had the privilege to attend the OpenStack Super Bowl, aka the OpenStack Summit, in Vancouver. It was incredible just to be around so many other folks who also believe strongly in OpenStack.
So in between sessions, I stumbled across a friendly competition sponsored by Intel called Rule the Stack. It was a competition to see who could build a fully functioning OpenStack cloud the fastest on (6) six physical servers. My coworker had mentioned it to me a week earlier, but, frankly, I forgot about it. I was focused on my two workshops and did not have extra time to plan. Anyone who knows me knows I love a challenge and never turn one down. Yes, of course you know I had to sign up and give it a go.
Before going much further, I want to fully disclose that I did not win the main prize in any way :D. I watched the SUSE guys do it in 6 minutes (which is a whole other discussion). Despite knowing I would not 'win' the competition, I went for it anyway. For me personally, it was not about winning but about solving this real-life puzzle in a real-life, repeatable way. The Intel guys appreciated my determined nature and named me the 'Most Determined' participant.
When dealing with OpenStack, one of the challenges is designing an architecture that can scale horizontally and make decisions based on the commodity hardware presented to you. Holding true to the foundation OpenStack was originally built on, an open cloud platform can run on any hardware (OEM, commodity, or Open Compute). This competition pushes you to make all those decisions.
Again, this struck a chord in my heart because this is what I do for a living and because I believe the approach we take with RPC (Rackspace Private Cloud) makes those decisions much easier.
The quick breakdown of the competition is:
- You were provided with (6) six physical nodes consisting of (3) three different configurations. Two node types had the same processor and memory but a different number of drives. The third node type had a different processor, more memory, and a TPM module (more details can be found on the Intel site above).
- You had to build using the Kilo release of OpenStack.
- The process for building out your configuration was yours to decide. You could connect to the local network where the servers were located using your own laptop or one of the laptops provided.
- There were opportunities for bonuses that shaved time from your final clock time, and penalties could be given for unconfigured nodes or nodes that were not optimized for use.
As soon as I saw the node configurations, I knew exactly how I wanted it to be set up. Keep in mind, I was not aiming for the fastest build but, rather, the most complete, flexible, real-life design. Despite HA not being a requirement (although I am attempting to have that rule changed for Tokyo (wink wink)), my reference architecture did include a dual-server control plane. I also decided to include dual Cinder nodes and, of course, dual compute nodes. My complete reference architecture is outlined below.
The next step was to determine how to utilize the (4) four VLAN networks that were part of the provided specs. RPC asked for three individual network bridges and a management network. Each node had two NICs, and the first NIC was bound to VLAN 11. I went back and forth with this decision for a while but finally settled on one that worked.
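As an illustration of that layout, each VLAN can be mapped to a Linux bridge in /etc/network/interfaces; the interface name, VLAN ID, and addressing below are placeholders rather than the competition's actual values:

```
# Management network: VLAN 11 tagged on the first NIC, attached to br-mgmt
auto eth0.11
iface eth0.11 inet manual
    vlan-raw-device eth0

auto br-mgmt
iface br-mgmt inet static
    bridge_ports eth0.11
    bridge_stp off
    address 172.29.236.11
    netmask 255.255.252.0
```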
At this point, I am all ready to go, but there is still one last decision to make. How do I lay down the base OS on these servers? Again, not totally concerned with speed, I wanted an approach that was repeatable and flexible and covered the most ground possible without requiring post-install configurations. After a quick poll of my team, two contenders came to the forefront: Cobbler or MaaS. I'm not going to say which one turned out to be the most complete option or how I did it, as it could be my secret weapon for Tokyo. I will say that you would be shocked as to which one turned out to be the best option.
So everything is prepped and the clock starts. Let’s just say the first time around was not pretty at all. Did I give up then? Of course not! I just signed up to try again. The second attempt was a bit better, but I literally ran out of time before the next participants arrived (at that point I was still building at the 2-hour mark). Yes, the third attempt was running perfectly, and, yet again, I was stopped because the previous participants had cut into my time slot a bit. My fourth, and final, attempt did the trick. This last attempt would have come in right under 1 hour and 30 minutes, but, unfortunately, I was literally being kicked out by security at the end of the session day on Thursday.
Pro tip: Sign up early and do not wait until the end of the Summit, as you will not be allowed a big enough time slot to finish.
All in all, it was a great experience and one that I plan to repeat in Tokyo. Special thanks go out to the Intel staff on hand in Vancouver - they were the best and very supportive and accommodating. Just a great set of guys! Congrats to the SUSE team, who I have to assume were the winners in Vancouver. The best thing about all this is that you get another chance to step up and show off your stuff in order to be crowned “Ruler of the Stack”.
Tsuki ni anata o sanshō shite kudasai! (Translation: "See you in November!” per Google :D )
Hello! This is the first post in a series that will bring you new and interesting links every week from the perspective of a Rackspace Security Engineer. I try to include links that are useful/interesting to a general audience, so you don't have to be an "uber 1337 h4x0r" to enjoy them. If you have any comments, or if you want to submit a link, feel free to leave a comment or catch me on Twitter.
As the look and feel of the cloud evolves, matures, and edges toward mainstream adoption, the Solution Architects, Developers, and Infrastructure Engineers of enterprises face the challenge of determining which technologies to consume. Should I go with something that requires vendor licensing? Or should I look to open source technologies, such as OpenStack? And if you do decide that OpenStack meets your technology needs, how do you best lay out its pros and cons to your senior leadership?
Those of us who have ever had to stand in front of their Director/CTO/CIO and figuratively 'fight' for a particular technology or product completely understand that this task is not for the meek of heart. I can remember very vividly holding index cards in my hands with bullet points as I attempted to lay out all the reasons why OpenStack should be the company's next major infrastructure shift. Being prepared for this conversation is critical to the overall enterprise architecture, so you need to articulate clearly why OpenStack is the best choice. You can never be too prepared. There will always be questions that you, as a technology advocate, will not even think of. In my opinion, being prepared is key. So let’s start on our technology layer cake.
Architecting applications for a cloud environment usually means treating each cloud server as ephemeral. If you destroy the cloud server, the data is destroyed with it. But, you still need a way to persist data. Cloud block storage has typically been that solution. Attach cloud block storage to a cloud server, save your data within that cloud block device, and when/if the cloud server is destroyed, your data persists and can be re-attached to another cloud server.
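The typical lifecycle looks something like the following; the volume and server names, size, and device path are placeholders and depend on your cloud:

```shell
# Create a 100 GB volume and attach it to a running server.
openstack volume create --size 100 app-data
openstack server add volume web01 app-data

# Inside the server: format the new device once, then mount it.
mkfs.ext4 /dev/xvdb
mount /dev/xvdb /var/lib/app-data

# If web01 is destroyed, the volume survives and can be re-attached:
openstack server add volume web02 app-data
```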
The IPython/Jupyter notebook is a wonderful environment for computations, prose, plots, and interactive widgets that you can share with collaborators. People use the notebook all over the place across many varied languages. It gets used by data scientists, researchers, analysts, developers, and people in between.
In the newest release of the Rackspace Private Cloud (RPC v9.0), we made changes to the reference architecture for improved stability. These changes included a different approach for deploying the cloud internally, which may also interest anyone looking into running the Rackspace Private Cloud. The decision to use Ansible going forward was based on two major considerations: ease of deployment and flexible configuration. Ansible made it very easy for Rackspace to simplify the overall deployment and to give users the ability to reconfigure the deployment as needed to fit their environments. Are you familiar with Ansible? If yes, skip the next paragraph; if not, please read on.
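For readers new to it, Ansible drives hosts over plain SSH from an inventory file; a minimal sanity check and playbook run might look like this (the inventory and playbook names are placeholders, not the RPC v9.0 playbooks themselves):

```shell
# Confirm Ansible can reach every host listed in the inventory.
ansible all -i inventory.ini -m ping

# Apply a playbook to those hosts; re-running it is safe because
# well-written Ansible tasks are idempotent.
ansible-playbook -i inventory.ini site.yml
```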