Posts categorized “architecture”
Businesses compete to transform digitally, but most are restricted in some way from moving to the cloud or to a new data center by existing applications or infrastructure. Docker® comes to the rescue by decoupling applications from the underlying infrastructure. It is the only container platform that addresses every application across the hybrid cloud.
This blog provides insights into the Docker architecture and its key features, explains why you might want to use Docker, and helps you get started with these migration activities.
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. Part 1 of this two-part blog post series discussed the introduction of multitenant architecture in Oracle Database 12c and how to use Data Pump to export and import data. Part 2 covers how to take an export of only pluggable databases (PDBs) and the restrictions that Data Pump places on PDBs.
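As a quick illustration of the PDB-level export the post covers (the connect strings, directory object, and file names here are hypothetical), exporting a single PDB is a matter of pointing expdp at the PDB's service name rather than at the CDB root:

```shell
# Export everything in pdb1 by connecting to its service name
expdp system@pdb1 directory=DATA_PUMP_DIR dumpfile=pdb1_full_%U.dmp \
      logfile=pdb1_full_exp.log full=y

# Import the dump into another PDB the same way
impdp system@pdb2 directory=DATA_PUMP_DIR dumpfile=pdb1_full_%U.dmp \
      logfile=pdb1_full_imp.log full=y
```

Because the connection lands inside the PDB, Data Pump scopes the "full" export to that container only.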
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. While most database administrators are aware of Data Pump, support for multitenant architecture in Oracle Database 12c introduced changes to how Data Pump exports and imports data.
The Threat and Vulnerability Analysis team at Rackspace is charged with providing internal vulnerability scanning, penetration testing, and red/purple teaming capabilities to reduce cyber-based threats, risk, and exposure for the company. One of our tasks, as part of meeting certain compliance objectives, is to ensure systems are not exposed from various networking "perspectives" without going through a bastion first.
This blog post explores the basics of Oracle® GoldenGate® and its functions. Because it's decoupled from the database architecture, GoldenGate facilitates heterogeneous and homogeneous real-time change data capture and integration of transactional data.
This post describes the Oracle® In-Memory Advisor (IMA), a feature introduced with Oracle Database 12c, and its benefits.
Originally published by Tricore: July 11, 2017
In Part 1 of this two-part series on Apache™ Hadoop®, we introduced the Hadoop ecosystem and the Hadoop framework. In Part 2, we cover more core components of the Hadoop framework, including those for querying, external integration, data exchange, coordination, and management. We also introduce a module that monitors Hadoop clusters.
Originally published by Tricore: July 10, 2017
Apache™ Hadoop® is an open source, Java-based framework that's designed to process huge amounts of data in a distributed computing environment. Doug Cutting and Mike Cafarella developed Hadoop, which was released in 2005.
Built on commodity hardware, Hadoop works on the basic assumption that hardware failures are common. The Hadoop framework addresses these failures.
In Part 1 of this two-part blog series, we'll cover big data, the Hadoop ecosystem, and some key components of the Hadoop framework.
This blog gives an overview of the non-relational database, Apache Cassandra™. It discusses its components and provides an understanding of how the database operates and manages data.
Parallel Replicat is one of the new features introduced in Oracle® GoldenGate 12c Release 3 (12.3.0.1). Parallel Replicat is designed to help users load data into their environments quickly by using multiple parallel mappers and apply threads.
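As a sketch (the names and values are illustrative, not from the post), a Parallel Replicat parameter file looks much like a classic Replicat's, with added parallelism controls:

```
-- Hypothetical Parallel Replicat parameter file
REPLICAT rpar1
USERIDALIAS ggtarget
-- Mappers read and map trail data; appliers apply it in parallel
MAP_PARALLELISM 4
APPLY_PARALLELISM 8
MAP hr.*, TARGET hr.*;
```

Raising the two parallelism parameters lets GoldenGate split the mapping and apply work across more threads when the target database can absorb it.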
This blog discusses the Oracle Exadata Smart Flash Cache feature and its architecture, including the write-back flash cache feature.
Using Sitecore with the experience database requires a connection to MongoDB, which can add quite a bit of complexity to your Sitecore installation. Here are some frequently asked questions about using ObjectRocket to host MongoDB for Sitecore.
Where do you conduct your User Acceptance Testing (UAT) activities? It's a loaded question that many organizations struggle to address, because they first need a clear definition of what UAT is (and what it isn't) before they can even consider where UAT activities should occur. The benefits of a properly instituted UAT environment far outweigh the challenges, and the dangers of not having one are real, but success requires a thoughtful and purposeful approach.
At our annual rax.io internal technical conference in San Antonio this week, I had a blast hacking on a reporting tool for our new content engine behind developer.rackspace.com and support.rackspace.com.
Sitecore implementations with Content Delivery nodes in multiple locations must keep their databases and content in sync. The Sitecore Scaling Guide summarizes the areas of concern, such as isolating CM and CD servers, enabling the Sitecore scalability settings, and maintaining search indexes. Sitecore runs on top of SQL Server, and one topic the Scaling Guide touches on is SQL Server replication; conveniently, there is a Sitecore guide dedicated to that specific subject. That guide explains how, with SQL Server Merge Replication, one can coordinate the content of Sitecore databases that are not in the same location. This is the starting point for what we at Rackspace have found to be a global publishing architecture that meets the needs of enterprise Sitecore customers.
Since the initial launch of the OpenStack Innovation Center (OSIC) back in July of 2015, much work has been done. I wanted to take a moment to share its current status and some details about its next phases. If you are unfamiliar with OSIC, let me start off with some very quick background information.
Before getting into the nuts and bolts of the load balancing architecture itself, it's important to understand the (typical) multiple tiers of an E-Commerce application framework:
- Firewall (edge)
- Physical local traffic manager (LTM)
- Web Server
- Application Server
- Database Server (cluster)
Keep in mind that, from top to bottom, the environment will be asymmetrical from a load perspective. For example, a single web server will typically handle two to three times as many concurrent connections as a single application server, depending heavily on cache density: a higher density shifts more load up into the web tier. Caching will be a subject for a later discussion, but at a glance it should account for 80 percent or more of the content served. Allowing for some variance, most successful architectures achieve this metric, while those that struggle tend to miss it. This is not to say, of course, that a lower density will necessarily cause difficulties. In addition to moving load away from the application servers, a higher cache density opens an opportunity for external services, such as the Akamai CDN, to absorb load before it ever reaches the environment.
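As a rough sketch of the arithmetic above (the request rate and cache density are illustrative assumptions, not measurements), the effect of cache density on the tiers below the web servers can be estimated like this:

```python
def tier_load(requests_per_sec, cache_hit_pct):
    """Estimate requests per second reaching the web and application tiers,
    assuming cache hits are served entirely from the web tier."""
    web = requests_per_sec                                 # every request hits the web tier
    app = requests_per_sec * (100 - cache_hit_pct) // 100  # only cache misses go deeper
    return web, app

# At the 80 percent cache density mentioned above, only a fifth of
# the traffic ever reaches the application tier:
web, app = tier_load(1000, 80)
print(web, app)  # 1000 200
```

Pushing the hit percentage higher shrinks the application-tier load directly, which is why cache density dominates the sizing of the lower tiers.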
What is MongoDB?
MongoDB is, among other things, a document-oriented NoSQL database. This means that it deviates from the traditional, relational model to present a flexible, horizontally scaling model for data management and organization.
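To make the contrast with the relational model concrete, here is a minimal sketch in plain Python (no MongoDB server needed; the collection and field names are invented for illustration). Two documents in the same collection can carry entirely different fields, and queries simply match on whatever fields are present:

```python
# Two documents in the same hypothetical "products" collection. Unlike rows
# in a relational table, they do not share a fixed schema.
book = {"_id": 1, "type": "book", "title": "Moby-Dick", "pages": 635}
shirt = {"_id": 2, "type": "shirt", "sizes": ["S", "M", "L"], "color": "blue"}
products = [book, shirt]

def find(collection, **criteria):
    """Mimic a MongoDB-style query: return documents whose fields match."""
    return [doc for doc in collection
            if all(doc.get(field) == value for field, value in criteria.items())]

print(find(products, type="book"))   # matches only the book document
```

In a real deployment you would issue the equivalent query through a driver such as PyMongo, but the flexible-schema idea is the same.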
How does MongoDB work with AEM?
MongoDB integrates with Adobe Experience Manager (AEM) by means of the crx3mongo runmode and the JVM options -Doak.mongo.uri and -Doak.mongo.db.
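For example, an author instance might be started like this (the jar name, MongoDB hosts, and database name are illustrative):

```shell
java -Doak.mongo.uri=mongodb://mongo1:27017,mongo2:27017 \
     -Doak.mongo.db=aem-author \
     -jar cq-quickstart-6.1.0.jar -r crx3,crx3mongo
```

The -r flag selects the runmodes, and the two -D options tell the Oak repository which MongoDB replica set and database to use.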
Why would I use MongoDB?
Primarily, MongoDB provides an alternative high-availability configuration to the older CRX cluster configuration. In reality, the architecture is more similar to a shared catalog on NFS or NetApp than to true clustering; the authors and publishers using MongoDB are not necessarily aware of each other.
When it comes to the battle cry of E-Commerce, "we're losing $1m per minute" is the clear winner, but a strong second is certainly "we want a disaster recovery solution". There are numerous benefits to disaster recovery and business continuity planning, especially speaking as the recipient of those 4am emergency calls. Traditional DR, with routing changes, cutover plans, scaled-down performance, and questionable technical tasks, is a well-traveled path in the industry, and it is very much in line with the expectations of most organizations even today. In the rapid-fire world of E-Commerce, however, this approach presents several challenges and misses a few key opportunities to take advantage of warm-side management.
If you are an OpenStack contributor, you likely rely on DevStack for most of your work. DevStack is, and has long been, the de facto platform that contributors use for development, testing, and reviews. In this article, I want to introduce you to a project I contribute to, called openstack-ansible. For the last few months, I have been using this project as an alternative to DevStack for OpenStack upstream development, and the experience has been very positive.
I - Introduction
This is the first of a two-part series that demonstrates a pain-free solution a developer could use to transition code from laptop to production. The fictional deployment scenario depicted in this post is one method that can significantly reduce operational overhead on the developer. This series will make use of technologies such as Git, Docker, Elastic Beanstalk, and other standard tools.
Managed infrastructure and database technology has grown at Rackspace, and our list of supported technologies under the data umbrella has expanded tremendously.
My name is David Grier and I am a product engineer at Rackspace. I concentrate most of my time on Cassandra, Hadoop and related components in the Big Data ecosystem.
We are proud to announce our partnership with DataStax, with whom we are providing a managed DataStax Enterprise (DSE) solution. This article is a high-level view of that managed solution and how we provide it to our customers.
Container technology is evolving at a very rapid pace. The purpose of the webinar in this post is to describe the current state of container technologies within the OpenStack ecosystem. Topics we cover include:
- How OpenStack vendors and operators are using containers to create efficiencies in deployment of the control plane services
- Approaches OpenStack consumers are taking to deploy container-based applications on OpenStack clouds
Last week I had the privilege to attend the OpenStack Super Bowl, aka the OpenStack Summit, in Vancouver. It was incredible just to be around so many other folks who also believe strongly in OpenStack.
Hello! This is the first post in a series that will bring you new and interesting links every week from the perspective of a Rackspace Security Engineer. I try to include links that are useful/interesting to a general audience, so you don't have to be an "uber 1337 h4x0r" to enjoy them. If you have any comments, or if you want to submit a link, feel free to leave a comment or catch me on Twitter.
As the look and feel of the cloud evolves, matures, and edges toward mainstream adoption, enterprise solution architects, developers, and infrastructure engineers face the challenge of determining which technologies to consume. Should I go with something that requires vendor licensing? Or should I look to open source technologies, such as OpenStack? And if you do decide that OpenStack solves your technology needs, how do you best lay out its pros and cons to your senior leadership?
Those of us who have ever had to stand in front of a Director, CTO, or CIO and figuratively 'fight' for a particular technology or product completely understand that this task is not for the faint of heart. I can remember very vividly holding index cards with bullet points in my hands as I attempted to lay out all the reasons why OpenStack should be the company's next major infrastructure shift. Being prepared for this conversation is critical to the overall enterprise architecture, so you need to articulate clearly why OpenStack is the best choice. You can never be too prepared; there will always be questions that you, as a technology advocate, will not even think of. In my opinion, being prepared is key. So let's start on our technology layer cake.
Architecting applications for a cloud environment usually means treating each cloud server as ephemeral. If you destroy the cloud server, the data is destroyed with it. But, you still need a way to persist data. Cloud block storage has typically been that solution. Attach cloud block storage to a cloud server, save your data within that cloud block device, and when/if the cloud server is destroyed, your data persists and can be re-attached to another cloud server.
The IPython/Jupyter notebook is a wonderful environment for computations, prose, plots, and interactive widgets that you can share with collaborators. People use the notebook all over the place across many varied languages. It gets used by data scientists, researchers, analysts, developers, and people in between.
In the newest release of the Rackspace Private Cloud (RPC v9.0), we made changes to the reference architecture to improve stability. These changes included a different approach to deploying the cloud internally, which may also interest anyone looking into running the Rackspace Private Cloud. The decision to use Ansible going forward was based on two major considerations: ease of deployment and flexible configuration. Ansible made it very easy for Rackspace to simplify the overall deployment and give users the ability to reconfigure it as needed to fit their environments. Are you familiar with Ansible? If so, skip the next paragraph; if not, please read on.