Posts categorized “aws”
When you migrate resources from one Amazon Web Services® (AWS) account to another, you might be asked to migrate the Amazon Route 53™ Domain Name System (DNS) records as well. To do this, use cli53, a command line tool for Amazon Route 53, to migrate the Route 53 DNS records from the source account to the target account. cli53 exports all the DNS records to a file; you then need to make some complex changes to that file before finally importing it into the target. This blog explains how to simplify the process of migrating all Route 53 DNS records from the source to the target.
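The post's tool of choice is cli53, but the same export, edit, and import flow can be sketched with boto3 as a rough illustration. The hosted zone IDs and profile names below are placeholders, and a real migration would also need to handle pagination, alias records, and batch limits:

```python
import json
import boto3

# Placeholder profiles and hosted zone IDs; substitute your own values.
source = boto3.Session(profile_name="source-account").client("route53")
target = boto3.Session(profile_name="target-account").client("route53")

# Export: dump all record sets from the source hosted zone to a file.
records = source.list_resource_record_sets(HostedZoneId="ZSOURCE123")["ResourceRecordSets"]
with open("records.json", "w") as f:
    json.dump(records, f, indent=2)

# ... edit records.json here (drop the apex NS/SOA records, adjust TTLs, and so on) ...

# Import: replay the remaining record sets into the target hosted zone.
with open("records.json") as f:
    changes = [{"Action": "UPSERT", "ResourceRecordSet": r} for r in json.load(f)]

target.change_resource_record_sets(
    HostedZoneId="ZTARGET456",
    ChangeBatch={"Changes": changes},
)
```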
In recent years, Data Lakes have moved from the technology boondocks to the prime beachfront real estate of the data sciences. Why is this happening, and why are they important? The short answer ... there's value in there.
One of the things I love about working with the cloud is the various ways you can fit together different services to perform complex business functions in a relatively straightforward manner.
On-demand infrastructure, with its speed, agility, efficient use of resources, and lower costs, drives many organizations toward cloud adoption.
When used in conjunction with tools like CloudFormation or Terraform, users are able to provision and remove cloud infrastructure programmatically. This is typically referred to as Infrastructure as Code, or IaC, and is great for stateless resources.
However, what if some servers cannot be regularly re-provisioned from scratch? Does it mean they need to be up and running 24/7, even if only used during limited hours?
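For servers that can't simply be torn down and re-created, one common compromise is to stop and start them on a schedule instead of running them around the clock. Below is a minimal sketch of that idea, assuming a tag convention of my own (Schedule=office-hours) rather than anything specific from the post:

```python
import boto3

ec2 = boto3.client("ec2")

def set_state(start: bool) -> None:
    """Start or stop every instance tagged Schedule=office-hours."""
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not ids:
        return
    if start:
        ec2.start_instances(InstanceIds=ids)
    else:
        ec2.stop_instances(InstanceIds=ids)

# Typically invoked from a scheduled Lambda function or cron job
# at the start and end of the working day.
```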
Terraform has gained a lot of popularity in the last couple of years. Rackspace prefers to use Terraform to quickly spin up new architecture in AWS and Azure. However, with Amazon's lightning-fast deployment of new features, it has become harder for the provider maintainers to keep up. Developers are left waiting for new features to be developed and merged into the master branch before they become available for general consumption.
Windows file servers have been a mainstay of file sharing in corporate America for decades. Amazon brings its expertise in serverless to provide a solution to the large total cost of ownership that on-premises options have carried in the past.
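Assuming the service in question is Amazon FSx for Windows File Server (the excerpt doesn't name it explicitly), standing up a managed Windows file system is roughly one API call. The subnet, directory, and sizing values below are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Placeholder subnet, directory, and sizing values.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                      # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD
        "ThroughputCapacity": 16,             # MB/s
    },
)

# The DNS name is what clients map as a drive, just like an on-premises share.
print(response["FileSystem"]["DNSName"])
```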
User authentication is a common application requirement that has been solved numerous times in the past - why trouble yourself with implementing and managing it yet again, when you could be working on exciting new features in your application instead? The AWS Application Load Balancer (ALB) can greatly simplify user authentication with several different social media, SAML 2.0, and OpenID Connect identity providers (IdPs).
In this post, we'll walk through the entire process of setting up ALB authentication using Amazon Cognito against a Microsoft Active Directory Federation Services SAML IdP.
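Once the Cognito user pool is federated with the ADFS IdP, the ALB side boils down to an authenticate-cognito action placed in front of the forward action on an HTTPS listener. A hedged boto3 sketch of that wiring; the ARNs and user pool domain are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder listener, user pool, and target group values.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    DefaultActions=[
        {
            # Redirect unauthenticated users to the Cognito hosted UI / SAML IdP.
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE",
                "UserPoolClientId": "example-app-client-id",
                "UserPoolDomain": "my-auth-domain",
            },
        },
        {
            # Only authenticated requests reach the application targets.
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-app/xyz",
        },
    ],
)
```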
If you've ever used Aurora Read Replicas, you may have noticed that there are several different endpoints available: the Cluster Endpoint, the Reader Endpoint, and Instance Endpoints. With all of these options, how do you know which one to use and when? As with any non-trivial system, the answer is... it depends. In this blog post, we'll look at the different endpoints, the use cases for them, and the trade-offs that come with those design decisions.
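The different endpoints are all visible on the cluster description itself; a quick boto3 sketch (the cluster identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

cluster = rds.describe_db_clusters(DBClusterIdentifier="my-aurora-cluster")["DBClusters"][0]

# The cluster (writer) endpoint always follows the current primary instance.
print("Cluster endpoint:", cluster["Endpoint"])

# The reader endpoint spreads connections across the available read replicas.
print("Reader endpoint: ", cluster["ReaderEndpoint"])

# Instance endpoints address each cluster member directly.
for instance in rds.describe_db_instances(
    Filters=[{"Name": "db-cluster-id", "Values": ["my-aurora-cluster"]}]
)["DBInstances"]:
    print("Instance endpoint:", instance["Endpoint"]["Address"])
```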
If you are reading this, you have probably heard of Amazon Aurora. As you know, Amazon Aurora is a PaaS offering from AWS, part of the RDS suite of services. It provides a fully managed relational database management system (RDBMS) that comes in two flavors, MySQL and PostgreSQL, while maintaining wire compatibility with both. But how does this impact your high availability strategies and options?
AWS Security Hub was announced in Andy Jassy's re:Invent 2018 keynote (46:23) and pitched as "a place to centrally manage security and compliance across your whole AWS environment" (applause), before Jassy went on to announce an array of partners who were part of the initial integration effort (muted applause). While the announcement enjoyed just three minutes on centre stage, this is a significant development.
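As a hedged sketch of what "centrally manage" looks like in practice, enabling Security Hub and pulling findings is only a couple of boto3 calls; the severity filter below is just an example:

```python
import boto3

securityhub = boto3.client("securityhub")

# Turn on Security Hub in the current account and region.
securityhub.enable_security_hub()

# Pull a handful of high-severity findings, including those imported
# from the partner integrations announced at re:Invent.
findings = securityhub.get_findings(
    Filters={"SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}]},
    MaxResults=10,
)
for finding in findings["Findings"]:
    print(finding["Title"], "-", finding["ProductArn"])
```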
AWS App Mesh is the latest addition to the AWS product portfolio. To quote AWS: "AWS App Mesh makes it easy to monitor and control microservices running on AWS." AWS App Mesh is in public preview as of this post, and we will take a brief look at it.
Why do you need a service mesh?
With the increased adoption of microservices, some challenges have surfaced for which a mesh is a solution. A microservice architecture consisting of 50 components may help agility with respect to development, rate of change, and overall flexibility, but it also brings a lack of observability that makes troubleshooting harder. The increased complexity of several hundred services talking to each other also makes management harder. A service mesh helps manage the complexity of these deployments. It improves visibility into the connections between services and, in doing so, helps with troubleshooting, management, and security. A service mesh uses network proxies to govern the flow of traffic. By placing itself in the path of every connection in your microservices architecture, it can apply control policies and collect metrics. Service meshes also decouple the application logic from the operational logic by adding a layer that is responsible for how services connect to each other.
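To make that concrete, a minimal App Mesh setup with boto3 might look like the sketch below. The mesh, node, and hostname values are placeholders, and a real deployment would also wire the Envoy proxy into each task or pod:

```python
import boto3

appmesh = boto3.client("appmesh")

# Create the mesh that will hold the service-to-service routing configuration.
appmesh.create_mesh(meshName="demo-mesh")

# Register one microservice as a virtual node behind DNS-based service discovery.
appmesh.create_virtual_node(
    meshName="demo-mesh",
    virtualNodeName="orders-v1",
    spec={
        "listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}],
        "serviceDiscovery": {"dns": {"hostname": "orders.demo.local"}},
    },
)
```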
This year's AWS re:Invent was a nonstop, high-powered firehose of exciting new features and products. Native PHP support on Lambda wasn't one of those features, but the new AWS Lambda runtime API and layers capabilities give us the ability to build a clean, supportable implementation of PHP on Lambda of our own. In this post, we'll take a brief look at the overall workflow and runtime lifecycle, and then I will show you one way to build a PHP runtime to start powering your PHP applications on AWS Lambda.
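The moving parts are a layer containing the PHP binary plus a bootstrap script, and a function that uses the custom "provided" runtime. A rough boto3 sketch of the plumbing; the names, role ARN, and zip files are placeholders, and building the PHP binary itself is the bulk of the real work:

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the custom runtime (php binary + bootstrap script) as a layer.
with open("php-runtime-layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="php-runtime",
        Content={"ZipFile": f.read()},
    )

# Create a function on the custom 'provided' runtime that uses the layer.
with open("handler.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="hello-php",
        Runtime="provided",
        Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder role
        Handler="index.handler",  # passed to the bootstrap via the _HANDLER env var
        Code={"ZipFile": f.read()},
        Layers=[layer["LayerVersionArn"]],
    )
```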
This blog post reviews how to use Amazon Simple Storage Service (S3), as storage for an Oracle® Database backup. Amazon Web Services (AWS) was the first cloud vendor that Oracle partnered with to enable database backup in the cloud. S3 is the main storage offering of AWS.
One of the benefits of containers is the promise of portability. The Docker® mantra is to build, ship, and run. Containers also promise the ability to, with few changes, move from a developer’s laptop to a production environment and, in the same vein, the ability to move from a data center to the cloud or to many clouds. However, adopting containers alone does not guarantee this. At the core, containers are just a better way of packaging your applications. While they ensure a degree of technical compatibility across many clouds, they don’t ensure complete portability by themselves. In this post, we will look at some of the many considerations through the portability lens.
Modern application environments can be complex and include many discrete elements that can all affect the end user's experience. Because of this, it can be challenging to develop an effective monitoring strategy, one that alerts you to potential performance problems and uses metrics from a variety of systems to proactively address bottlenecks and slow points before they affect end users. In this article, we'll be discussing several best practices for ensuring that your environment is effectively monitored.
Information Design and Documentation presents RPCO v13.1!
Using Sitecore with the Experience Database requires a connection to MongoDB, which can add quite a bit of complexity to your Sitecore installation. Here are some frequently asked questions about using ObjectRocket to host MongoDB for Sitecore.
One day I was testing this neat new API feature and was really struggling with those
"I'm not a browser!" I thought. "Can I have this in a proper scripting or dev language?"
Since I couldn't find it anywhere, I decided to write this tutorial myself.
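In that spirit, here is the kind of minimal example I was looking for: a token-authenticated GET in Python with requests. The endpoint and header names are placeholders, not the specific API from the post:

```python
import requests

# Placeholder endpoint and token; substitute the API you are testing.
API_URL = "https://api.example.com/v1/widgets"
TOKEN = "your-auth-token"

response = requests.get(
    API_URL,
    headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
    timeout=10,
)
response.raise_for_status()

for widget in response.json():
    print(widget)
```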
Everybody talks about security, and in the cloud the tools and options available can seem confusing or inefficient because they require a lot of repetitive actions. Plus, it's all done with Linux tools, and I want to use PowerShell.
Most companies just need a simple way to filter traffic to their Cloud Servers, so around 2015 Rackspace launched, in limited availability, our own implementation of a very useful feature called 'Security Groups'.
MongoDB has made some noticeable improvements with the 3.0 release and its new storage engine, WiredTiger. This post shows how those improvements translate into real performance gains for your application.
I have spent the majority of my career as a Java developer. As a result, I learned to be more productive using an IDE instead of an editor like Vi. Even though Vi is still my editor of choice when I’m in a Linux shell, I don’t believe it’s practical when managing large Java projects.
I - Introduction
In Part I of this series, we depicted a fictional scenario for agile development using a simple "Hello World" application composed of just a single UI layer. During this fanciful (albeit contrived) exposition, we glossed over many of the underlying details for the sake of brevity. In this article, we will take a little peek under the covers and explain in more depth how we achieved rapid, automated deployments of immutable application containers to remote test environments.
With this article, I begin a series of hands-on, developer-oriented blog posts that explore OpenStack orchestration using Heat.
To make the most of this article, I recommend that you have an OpenStack installation where you can run the examples I present below. You can use our Rackspace Private Cloud distribution, DevStack, or any other OpenStack distribution that includes Heat.
Pentago is a board game designed by Tomas Flodén and developed and sold by Mindtwister. Like chess and go, pentago is a two-player game with no hidden information or chance. Unlike chess and go, pentago is small enough for a computer to play perfectly: with symmetries removed, there are a mere 3,009,081,623,421,558 (3e15) possible positions.