Posts categorized “general”
A proxy server is a computer system that sits between the client that requests a web document and the target server (another computer system) that serves the document. In its simplest form, a proxy server facilitates communication between the client and the target server without modifying requests or replies.
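That pass-through behavior can be sketched in a few lines of Python. This is an illustrative toy, not production code; the helper name `run_proxy` and the use of loopback sockets are my own choices for the sketch. It relays bytes between a client and a target server without inspecting or modifying them:

```python
import socket
import threading

def run_proxy(target_host: str, target_port: int) -> socket.socket:
    """Listen on an ephemeral local port and relay one client connection
    to the target server, byte for byte, in both directions."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
    server.listen(1)

    def pump(src: socket.socket, dst: socket.socket) -> None:
        # Forward bytes unchanged -- the proxy neither interprets nor
        # rewrites the traffic, matching the "simplest form" above.
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
            dst.close()
        except OSError:
            pass

    def handle() -> None:
        client, _ = server.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        pump(upstream, client)

    threading.Thread(target=handle, daemon=True).start()
    return server  # caller reads the chosen port via server.getsockname()
```

A client that connects to the proxy's port sees exactly the bytes the target server sends back; the proxy only moves data between the two sockets.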
The NetScaler Application Delivery Controller (ADC) is a Citrix® Systems core networking product. ADC improves the delivery speed and quality of applications for an end user. The product helps business customers perform tasks such as traffic optimization, L4-L7 load balancing, and web app acceleration while maintaining data security.
Originally published by TriCore: November 7, 2017
Microsoft® introduced the idea of self-service business intelligence (BI) back in 2009, announcing Power Pivot for Microsoft Excel® 2010. After several years, Microsoft released version 1 of Power BI®, but the user experience wasn't great. Microsoft collected feedback from end users and crafted a newer version of Power BI that became popular. This blog provides an introduction to this tool.
NetScaler® Application Delivery Controller (ADC), Citrix® Systems' core networking product, is a tool that improves the delivery speed and quality of applications to an end user. This blog describes how to use the command line interface (CLI) to upgrade the software on NetScaler appliances that are configured in a high-availability setup.
This blog explains how the Google® Distance Matrix API can be integrated with Oracle® E-Business Suite (EBS) to determine the distance between two physical locations.
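As a minimal sketch of what such an integration calls (outside EBS itself), the public Distance Matrix web service takes origins, destinations, and an API key as query parameters and returns JSON. The helper names below are illustrative, not from the original post, and a real EBS integration would typically invoke this from PL/SQL:

```python
from urllib.parse import urlencode

BASE_URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def distance_matrix_url(origin: str, destination: str, api_key: str) -> str:
    """Build the GET URL for one origin/destination pair."""
    params = {
        "origins": origin,
        "destinations": destination,
        "units": "metric",
        "key": api_key,
    }
    return BASE_URL + "?" + urlencode(params)

def extract_distance_meters(response: dict) -> int:
    """Pull the distance (in meters) out of the service's JSON reply."""
    return response["rows"][0]["elements"][0]["distance"]["value"]
```

Fetching the URL with any HTTP client and passing the decoded JSON to `extract_distance_meters` yields the distance between the two locations.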
The GNU Privacy Guard (GPG) is a complete and free implementation of the OpenPGP standard defined by RFC 4880; OpenPGP is also known as PGP (Pretty Good Privacy). GPG, also known as GnuPG, is a command-line tool with features for easy integration with other applications.
Most companies that exchange sensitive data, such as payment details and employee information, over the internet use PGP encryption to transfer files securely between two systems. This blog introduces GPG, explains why you should use file encryption, and outlines the steps involved in both file encryption and decryption.
Originally published by TriCore: October 18, 2017
In Part 1 of this two-part series, we covered some strategies for resolving common issues with Oracle® Business Intelligence Enterprise Edition (OBIEE). In Part 2, we share two additional tips about how to customize the logo and banner text in OBIEE.
Originally published by TriCore: May 11, 2017
There are many Lightweight Directory Access Protocol (LDAP) solutions available for organizational single sign-on (SSO) and user management, including Oracle® Internet Directory (OID), Microsoft® Active Directory (AD), and many other systems. When you have multiple implementations, it can be difficult to manage and use them all. In this blog post, you'll learn how to create a view that you can use to manage all of your enterprise's LDAP implementations.
One of the keys to supporting our customers is making sure they have a smooth month-end closing process. Many people have issues that lead to a delay in their monthly close. Often, the issues causing these delays are hard to identify and hidden behind the scenes. To overcome this, we have an application monitoring process that helps unearth these hidden problems and allows us to address them proactively, before our customers even notice.
Originally published by TriCore: August 2, 2016
In Part 1 of this two-part blog post series, we cover two issues that you might run into while working with Oracle® Business Intelligence Enterprise Edition (OBIEE) and how to resolve them.
Originally published by TriCore: April 20, 2017
SQL Server 2016 introduced three new principal security features: Always Encrypted, dynamic data masking, and row-level security.
In this blog, I'm going to introduce the dynamic data masking (DDM) feature.
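In T-SQL, masks are declared in DDL (for example, `ALTER TABLE Customers ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()')`), and the engine rewrites query results for non-privileged readers. As a conceptual illustration only, not SQL Server code, the built-in `email()` and `partial()` masking behaviors look roughly like this in Python:

```python
def mask_email(value: str) -> str:
    """Roughly what DDM's email() function shows unprivileged users:
    the first character, then a constant 'XXX@XXXX.com' pattern."""
    return value[:1] + "XXX@XXXX.com"

def mask_partial(value: str, prefix: int, padding: str, suffix: int) -> str:
    """Roughly what DDM's partial(prefix, padding, suffix) shows:
    the first `prefix` and last `suffix` characters, with a custom
    padding string in between."""
    tail = value[len(value) - suffix:] if suffix else ""
    return value[:prefix] + padding + tail
```

The point of the feature is that the underlying data is unchanged; only what a non-privileged query returns is masked.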
Are you considering an upgrade to a more modern version of SQL Server? Are you choosing between SQL Server 2016 and SQL Server 2017? If so, then my advice is to upgrade to SQL Server 2017, as I explain in this post.
Each release of SQL Server brings lots of interesting new features for SQL administrators and developers to ponder. The Community Technology Preview (CTP) 2.0 for SQL Server vNext (generally called SQL Server 2017) is no exception. Many of the application's existing features and services have been updated. In this blog post, I discuss what is new in the database engine of SQL Server 2017 from a database administrator (DBA) perspective.
Originally published by TriCore: January 17, 2017
This blog post discusses why Oracle® Enterprise Manager 13c (OEM 13c) users might want to consider rolling back to version 12c, and how to make the transition successful.
Originally published by TriCore: April 4, 2017
This blog post aims to help Oracle® Data Integrator (ODI) designers, administrators, and system teams address performance bottlenecks in ODI execution plans. Following the steps outlined here will result in a speedier experience for your end users.
Originally published by TriCore: November 22, 2017
A virtual tape library (VTL) is a data storage virtualization technology that's typically used for backups and recovery. A VTL presents a storage component, which is usually hard disk storage, as tape libraries or tape drives for use with existing backup software.
This blog describes best practices for the implementation architecture of the web client authentication solutions for QlikView®.
Originally published by TriCore: June 26, 2017
In this blog post, we review common issues that database administrators (DBAs) might run into when working with Oracle® Enterprise Manager (OEM) 12c Management Agents. We hope this information helps you fix these problems quickly and keep your Oracle targets well-monitored.
You may be asked to migrate multiple Internet Information Services (IIS) sites from on-premises servers to the cloud, but migrating individual sites is a long and daunting task. This blog discusses how to simplify the process.
Originally published by TriCore: March 27, 2017
If you're considering migrating your email systems to Microsoft® Office 365®, there are a variety of migration methods. The best fit depends on your requirements. This blog post covers factors you should consider when choosing an email migration method.
Popularity Trends is a feature in SharePoint Server 2013 that enhances search and web analytics services by offering a ready-to-use solution.
This blog post shows you how to generate an Array Diagnostic Utility (ADU) report on a Hewlett Packard Enterprise (HPE) server that is running VMware® ESXi™ version 5.x or 6.x. The information in this blog helps you determine whether the ADU is installed. If it is not installed, you can use the steps provided in this blog to install it.
Originally published by TriCore: September 21, 2017
Planning Analytics integrates business planning, performance measurement, and operational data to enable companies to optimize business effectiveness and customer interaction regardless of geography or structure. Planning Analytics provides immediate visibility into data, accountability within a collaborative process, and a consistent view of information.
Traditional supply-chain management processes that run on on-site IT applications are rapidly moving to modern cloud infrastructure. This blog covers the characteristics and benefits of modern supply-chain management in detail.
You may have noticed that our Developer Portal looks different: we have redesigned Developer.Rackspace.com. This portal provides you with everything you need to build powerful, scalable apps using our Software Developer Kits (SDKs), which are built on open-source platforms and take advantage of our extensive APIs.
Modern browsers have APIs called querySelector and querySelectorAll that find one or more elements matching a CSS selector. I'm assuming basic familiarity with CSS selectors: how you select elements, classes, and IDs. If you haven't used them, the Mozilla Developer Network has an excellent introduction.
Running a successful developer workshop (aka tutorial) is really difficult. I've attended enough workshops that have gone poorly to know that for a fact. Participating in such a workshop can be very frustrating and a huge turn-off for whatever technology is being presented. That translates directly into losing developer mindshare. I think we, as an industry, can do a better job of running developer workshops.
Next week kicks off the 16th OSCON, an annual conference bringing together the free and open source software world, and Rackspace is a proud Silver Sponsor. Starting July 20 and running through July 24, technologists from around the globe descend on Portland, Oregon for a week of tutorials, talks, keynotes, an expo hall, and more, with Rackers taking part in all of it.
From October 9-12, 2014, I attended the seventh annual Ruby DCamp at Prince William Forest Park in Triangle, Virginia. Rackspace was a Platinum sponsor of the "non-conference", which is a collision between Ruby developers and communal living. It is, without a doubt, a unique combination.
My love for the Ruby community goes back to 2006, when it saved me from my Java- and government-contracting-induced burnout by making programming fun again. As Matz (Yukihiro Matsumoto) so often says, he wrote Ruby to make programmers happy.
When we at Rackspace were working on a data visualization dashboard that uses the AngularJS framework, we needed to abort requests. Fortunately, AngularJS has amazing built-in services; $http and $resource helped us make these XHR (Ajax) requests much simpler. There are many resources to help you figure out which might be better for your use case. I'm going to describe how I implemented aborts in $resource and $http in a unified way, which improved performance and ensured that correct data was shown.
Recently, four members of our Cloud DNS Team got the chance to help a San Antonio non-profit (the Food Policy Council of SA) while participating in the annual OpenAIR hackathon using some of our Rackspace volunteer hours.
Today our Control Panel team announced support for Cloud Block Storage volume cloning. Some of our savvier users may have noticed that volume cloning was silently released as an API-only feature back in early November. Volume cloning (that is, volume copy) allows for the creation of a new volume from an existing one. While this is a pretty big feature, it would have been easy to miss, as it's simply the addition of a source_volid parameter to the existing create volume call.
"What's the difference between data and a map?" This question came from an inquisitive fifth grader at the University of Texas on a gorgeous February Saturday in Austin. We brought a group of Rackers, some kites, and a huge red balloon to "Introduce a Girl to Engineering Day," presented by the UT Women in Engineering Program (WEP). We inflated the balloon to about five feet in diameter, attached a digital camera, and launched our mapping rig 100 feet in the air above campus.
If you're a big PostgreSQL fan like I am, you may have heard of a tool called WAL-E. Originally developed by Heroku, WAL-E is a tool for efficiently sending PostgreSQL's WAL (Write Ahead Log) to the cloud. In addition to Heroku, WAL-E is now used by many companies with large PostgreSQL deployments, including Instagram.
Let's unpack what that means. If you've ever set up replication with PostgreSQL, you're probably familiar with the WAL. Essentially, there are two parts to replication and backup in PostgreSQL: the "base backup" and the WAL. Base backups are a copy of your database files that can be taken while the database is running. You might create base backups every night, for example. The WAL is where PostgreSQL writes each and every transaction as it happens. When you run normal replication, the leader sends its log file to the followers as it writes it.
Instead of using a simple socket to communicate, WAL-E sends these base backups and WAL files across the internet with the help of a cloud object store, like Cloud Files (or any OpenStack Swift deployment). In addition to providing replication, this gives you a durable backup of your database for disaster recovery. Further, you get effectively infinite read scalability from the archives: you can keep adding followers without putting more stress on the leader.
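The base-backup-plus-WAL idea can be sketched with a toy model (illustrative Python only, not PostgreSQL internals): a backup captures the data files plus a position in the log, and replaying the log from that position reproduces the current state:

```python
class MiniDB:
    """A toy key-value store that, like PostgreSQL, logs every write."""

    def __init__(self):
        self.data = {}
        self.wal = []  # the write-ahead log: every change, in order

    def put(self, key, value):
        self.wal.append((key, value))  # log first...
        self.data[key] = value         # ...then apply

    def base_backup(self):
        """Snapshot the data files plus the current log position."""
        return dict(self.data), len(self.wal)

def restore(backup, wal):
    """Recover by loading a base backup and replaying later WAL entries."""
    data, position = backup
    data = dict(data)
    for key, value in wal[position:]:
        data[key] = value
    return data
```

A follower (or a disaster-recovery restore) does essentially this: it starts from the most recent base backup and replays every WAL segment archived since.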
With the help of WAL-E's primary author, Daniel Farina, we recently added support for OpenStack Swift to it. It's not yet in a final release, but if you're interested in checking it out, read on!
We launched Cloud Block Storage into unlimited availability a year ago and we now have thousands of customers using the product. The team that designed Cloud Block Storage wanted to create a different kind of block storage in the cloud. When we spoke with our customers about what they wanted in a cloud block storage solution, the feedback focused on three areas:
- Reliable and more consistent performance
- Simpler experience and pricing
- Choice between standard volumes for more disk space or SSD volumes for higher disk I/O performance
Diane Fleming works heavily on the OpenStack API documentation - in this post she covers profiling for DocBook.
I have personally been following this project from the beginning because of the principles Ghost was born out of: simple, elegant, open-source design. The Node.js-based blogging platform touts itself as "just a simple blogging platform", but everything about its design is focused on writing - nothing more, nothing less. Posts are written in Markdown with a split screen showing a live preview on the right. Adding an image? Just add the Markdown syntax for an image and suddenly a drag-and-drop image box appears in the preview screen, ready for you to upload your image! The whole system makes writing blog posts fast and easy and lets you quickly get back to doing something other than yelling at the internet.
One of the more pressing needs in the world of open source software is the need for decent documentation. That covers a lot of territory, from well-commented code (yeah… right) to API guides to the rare – some say mythical – user guide.
The hardest part about using new technology is knowing where to begin. Spending hours sifting through blog posts and documentation to set up a server environment is often enough to extinguish the excitement about a new framework or application. By the same token, it is frustrating to deploy hosting architecture when you are excited to see how your latest code will perform. And no one likes having to do the same things over and over to create consistent testing, staging and production environments.
Deploying your application to the cloud requires expertise in both system administration and development; often, people have expertise in some areas while having gaps in others. The Rackspace Deployments service was created so we could collect this information and expertise in a machine- and human-readable format and use it to automatically and consistently deploy hosting configurations, frameworks, and applications, alleviating some of this stress.
Web application performance is a moving target. During design and implementation, a lot of big and small decisions are made that affect application performance - for good and for bad. You've heard it before. But since performance can be ruined many times throughout a project, good application performance simply cannot be added as an extra feature at the end of a project.
The modern solution to mitigate quality problems throughout the application development life cycle is called continuous integration, or just CI. The benefits of using CI are many, but for me the most important factor for embracing CI is the ability to run automated tests frequently and to trace application performance, because load tests can be added to the stack of automated tests already being run. If you carry out load tests throughout your project's development, you can proactively trace how performance is affected.
I am part of a team (along with Bruce Stringer and Jason Swindle) working on an internal application called Graffiti, which provides an easy way for front-line Rackers to record interactions with customers and provide valuable data with each interaction. It also creates a much simpler way for leads to be passed from the front line to the IB sales team here in SMB cloud.
We knew from the start that we wanted this to be an agile cloud application that utilizes all the things a cloud should be. We had some limitations that meant we had to build a few solutions ourselves. Since this is an internal application, we could not rely on third-party solutions for offloading workloads. We also had to be careful with our security practices, of course. We ended up going with a configuration that utilizes our internal Nova solution for hosting in Cloud Servers. The downside to this is that we have no automated backups, no load balancers and no Cloud Files - much to my dismay.
Continuous delivery — the ability to ship new and awesome features, updates and patches to your customers more frequently — is key to getting ahead of the competition, and staying there.
Getting to continuous delivery of quality code that actually works in production relies on continuous integration: a system for testing code incrementally and frequently.
Continuous integration is both a toolchain and a discipline. It’s less about the specific tooling, though, and more about the practice of continually integrating changes so the system can catch errors and failures while they’re still small and manageable. Your continuous integration system is what gives your team enough confidence in its code to ship frequently.
It’s an established pattern to use message queues when building scalable, extensible, and resilient systems, but a lot of developers are still unsure how to go about actually implementing message queues in their architectures. Worse, the number of queuing solutions makes it hard for developers to get a grasp on exactly what a queue is, what it does, and what each solution brings to the table.
At Iron.io, we’re building IronMQ, a queuing solution we’ve developed specifically to meet the needs of today’s cloud architectures. In this post, we wanted to detail how to use queues in your applications and highlight a couple of unique capabilities that IronMQ provides (and which are not found in RabbitMQ and other non-cloud-native queues).
One of the things that queuing does really, really well is getting work out of the way. Queues are built to be fast ways to make data available for other processes. That means that you can do more with your data without making your customer wait. When it comes to response times, every second matters, so only critical processing should take place within the immediate response loop. Queues let you do processing on data and perform non-immediate tasks without adding to your response time.
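A minimal sketch of that pattern (illustrative Python using the standard library, not IronMQ's API): the request handler enqueues the job and returns immediately, while a worker drains the queue outside the response loop:

```python
import queue
import threading

tasks: "queue.Queue" = queue.Queue()
results = []

def worker() -> None:
    """Drain the queue in the background, outside the response loop."""
    while True:
        job = tasks.get()
        if job is None:          # sentinel: shut the worker down
            tasks.task_done()
            break
        results.append(job * 2)  # stand-in for slow, non-critical work
        tasks.task_done()

def handle_request(n: int) -> str:
    """The 'web' handler: enqueue and respond without waiting for the work."""
    tasks.put(n)
    return "accepted"

threading.Thread(target=worker, daemon=True).start()
```

The handler's response time is just the cost of a `put`; the doubled values show up in `results` whenever the worker gets to them.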
At Rackspace, our goal is to support our customers in any way we can. To move towards that goal, we decided to create a couple of tutorials aimed at beginning to intermediate Ruby on Rails developers. We hope that in sharing some knowledge, we are able to help you better achieve your own goals.
The first tutorial deals with using the Rackspace Cloud API (via the fog gem) to recreate a very basic server control panel with Ruby on Rails. We'll explore some intermediate concepts like databaseless models, and introduce you to the basics of interacting with Rackspace with fog. At the end, you'll have the beginnings of a custom control panel. You should have a grasp of some of the gotchas, tradeoffs, and techniques that can be used to help you refine your cloud infrastructure management workflow.
Agile. Scrum. Extreme programming. RAD. Lean. These terms all represent a departure from the traditional Waterfall development process in favor of a more rapid, iterative approach to application development. For companies with large-scale web applications, there are significant benefits to the agile methodology, but it also presents significant challenges.
When every minute of downtime represents significant lost revenue and increased support costs, it stands to reason that application support should be just as agile as application development. Among other things, this means tearing down long-standing walls between teams and getting all business units working together toward common goals.
Early last year, a project code-named Checkmate was created by Rackers to make it easy for them to deploy complex cloud configurations, such as scalable WordPress with Cloud Servers, Cloud Databases and a Cloud Load Balancer, with one click for our customers. The goal of the project was to provide a way for Rackers to share and collaborate on these best practices using common collaboration tools like GitHub.
Rackers across the company have a lot of experience running real-world applications in the cloud. We wanted to take this knowledge and not only crowd-source that information into best-practice "blueprints," but also expose that information publicly, enabling customers to easily deploy a configuration built on these best practices. We knew that it would be a win-win: customers would have access to this expertise without having to search for it on the web, and Rackers would get to contribute their knowledge to a broader audience of customers.
Checkmate now powers a new feature we are exposing for preview in the Cloud Control Panel today called Rackspace Deployment Services.
As part of the Google I/O keynote yesterday, several new features for Google Compute Engine were announced. First, GCE is now available to everyone as a preview and open for signups. They also announced Cloud Datastore, a NoSQL database solution, and several other features.
Google Compute Engine looks great, but it's the same old thing from a cloud standpoint. AWS and GCE are both single-vendor, lock-in prone providers. You can't run GCE in your own datacenter. You can't customize and install GCE on both a $200 One-laptop-per-child notebook and a $4,000 MacBook Pro. These platforms are not open.
This year at SXSW, I spoke on why open matters. I used examples, like the open Internet vs. AOL and Linux vs. Windows, to explain how open can triumph over closed. One of my favorite examples is the Betamax.
Have you ever set out to do something new, only to find yourself encumbered by a list of prerequisites that must be figured out first? For example, you would like to implement Awesome Feature X in your application. But before doing that you have to figure out how to use a new library. Except the documentation for that library is not very good, or the examples are out of date, or... the list goes on.
Rackspace Service Registry status update: performance and reliability improvements, new features, and more
Back in November, we announced the Rackspace Service Registry preview. Since then, we have been busy listening to user feedback, using that data along with other metrics and inputs to improve our service in different ways.
This blog post provides a high-level overview of some of those changes and improvements. It describes some of the new features we have added and things we have changed, improved, and removed to make the whole API faster, better, more reliable, and more user-friendly.
When we started investigating the hosted MongoDB space, we quickly found that most of the companies involved were just hosting MongoDB on top of AWS instances. We were intrigued by the different approach taken by ObjectRocket. Instead of using AWS primitives, they built their service on their own hardware in neighboring data centers and utilized AWS Direct Connect to provide low-latency connectivity.
In order to validate that ObjectRocket’s architectural choices made a difference, Rackspace conducted tests comparing ObjectRocket with two providers that offer MongoDB on generic cloud environments. We chose to compare ObjectRocket’s performance to the hosted providers on AWS. Further, we chose a $150 price point per month for comparison’s sake. SoftLayer’s offering was not included in the comparison because their least expensive MongoDB option costs around $650.
This is a guest post from Tomaz Muraus. Tomaz is a Racker and the project lead for the Rackspace Service Registry product. He is also a project chair of Apache Libcloud, an open-source project that deals with cloud interoperability. Before working on Service Registry, he worked on the Cloud Monitoring product, and before joining Rackspace, he worked at Cloudkick helping customers manage and monitor their infrastructure. In his free time, he loves writing code, contributing to open-source projects, advocating for software freedom, going to the gym, and cycling. Be sure to check out his GitHub page.
In November, we launched Service Registry into preview. You can read all about it in the blog post titled Keep Track Of Your Services And Applications With The New Rackspace Service Registry.
That post describes some common use cases for Service Registry and contains information on how you can use it to make your application more highly available and responsive to changes. In this series of posts, we take a deep look at some common use cases and illustrate them with code samples.
Rackspace will be at the 11th Annual Southern California Linux Expo! SCALE 11X takes place on Feb. 22-24, 2013, at the Hilton Los Angeles Airport hotel. As the first-of-the-year Linux/Open Source software expo in North America, SCALE 11X expects to host more than 100 exhibitors this year, along with presenting more than 70 speakers.
Rackspace and AppFog are throwing a Happy Hour on Thursday, February 7 at AppsWorld in San Francisco, and we’re asking developers to create an app that helps us choose the beers that will be served at the party. You could win an iPad! All you need to do is RSVP to the party, register for the contest, and then create an app (deployed on AppFog and Rackspace) that allows people to vote for which beers will be served at the happy hour.
That’s right, your votes determine which beers we drink! We’ve selected six beers for the contest: Widmer Hefeweizen, Sierra Nevada Pale Ale, Stella Artois, Lagunitas IPA, Newcastle Brown Ale, and Guinness Stout.
We've moved! The Rackspace DevOps Blog is now hosted on Rackspace Cloud Files (powered by OpenStack Swift) using Octopress. Previously, this blog was hosted on WordPress, using a mix of various Rackspace Open Cloud products:
- Cloud Load Balancers
- Cloud Servers
- Cloud Databases
Wayne and I loved this setup and were pleased with performance and security. WordPress on this infrastructure was secure for our purposes (it's a simple blog, not hosting medical data or taking credit cards) so we were happy. So why move?
Rackspace recently announced the Rackspace Service Registry, a platform that allows developers to build highly available and responsive applications using a simple but powerful REST API. Currently it provides three main functions:
- Service discovery - Find which services are currently online/active and find services based on different criteria. You can organize your services however best fits your application deployment.
- A platform for automation – Service Registry exposes an events feed, which includes all of the events that have happened during the lifecycle of your account (such as a service coming online, a configuration value getting updated, and so on).
- Configuration storage – This enables users to store arbitrary configuration values in our system and get notified via the events feed when a value gets updated or deleted.
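Those three functions can be sketched with a toy in-memory model (illustrative Python only; the real product is a REST API, and the event names below are my own stand-ins): services heartbeat with a TTL, discovery filters down to active services, and lifecycle changes land in an events feed:

```python
class ToyRegistry:
    """In-memory stand-in for a service registry with heartbeats and events."""

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self.last_seen = {}   # service name -> last heartbeat timestamp
        self.events = []      # lifecycle feed, oldest first
        self.config = {}      # arbitrary configuration values

    def heartbeat(self, name: str, now: float) -> None:
        """A service announces it is alive; first contact is a join event."""
        if name not in self.last_seen:
            self.events.append(("service.join", name))
        self.last_seen[name] = now

    def active(self, now: float) -> list:
        """Service discovery: names whose last heartbeat is within the TTL."""
        return sorted(n for n, t in self.last_seen.items() if now - t < self.ttl)

    def set_config(self, key: str, value: str) -> None:
        """Configuration storage: updates are also recorded in the feed."""
        self.config[key] = value
        self.events.append(("configuration_value.update", key))
```

A service that stops heartbeating simply ages out of `active()`, while the events feed gives automation a single ordered stream to react to.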
Rackspace will be present at Cloud Expo in Santa Clara this week! If you’re a Rackspace customer we would love for you to stop by our booth and talk to us about how your company uses the Open Cloud. We also have several Rackers speaking during the conference:
Chad Lung is a software engineer on the Rackspace Cloud Integration team and is the maintainer of Atom Hopper. Be sure to check out his personal blog at http://www.giantflyingsaucer.com/blog/ and follow @chadlung on Twitter.
I recently wrote an article introducing Repose, a sponsored open-source project that is built to scale for the cloud. Repose is used within Rackspace as a key element of our internal OpenStack deployment.
Rackspace will be present at Cloud Connect in Chicago this week! If you're a Rackspace customer we would love for you to stop by our booth and talk to us about how your company uses the Open Cloud. We also have several Rackers speaking during the conference:
Our cloud is open. We believe our company should be too. One of our Core Values at Rackspace is full disclosure and transparency, and we want to build a community for our developers that is open, transparent and helps them build amazing applications on the open cloud.