Posts categorized “database”
This post discusses the Oracle® Real Application Clusters (RAC) One Node feature of Database Enterprise Edition. Introduced with 11g Release 2, this feature provides enhanced high availability for single-instance Oracle databases, protecting them from both planned and unplanned downtime. The post also provides instructions for installing the Oracle Grid Infrastructure, which is required to use One Node.
Originally published by TriCore: August 29, 2017
Oracle® version 12c offers a multitenant option for hosting multiple pluggable databases (PDBs) within a single container database (CDB). Sometimes, you need to convert a non-CDB database into a CDB pluggable database. This blog describes methods that you can use to perform that conversion.
There are many approaches to upgrading a multi-node Couchbase® Server cluster. This post describes detailed steps for the rolling online upgrade by using the graceful failover and delta recovery method.
Originally published by TriCore: February 12, 2017
This blog takes you through Subledger Accounting (SLA), which is one of the most important features of Oracle® version R12. SLA is the most robust feature in R12, providing the power to modify accounting according to business needs. This blog outlines the differences between the Subledger in R12 and in earlier versions, along with some of SLA's key features and components.
Sometimes businesses require a requisition in order to raise a purchase order (PO) and restrict manual PO creation. This blog shows you how to restrict a user from manually creating a PO.
This blog outlines the steps to change the password for Oracle® E-Business Suite (EBS) APPS schemas and WebLogic®, which is a routine activity for an Oracle Applications database administrator (DBA). In EBS version R12.2, you can change passwords by using the AFPASSWD utility or by following some manual steps.
Getting started with MongoDB® is easy. However, you can run into several hiccups with its new features that emerge on an ongoing basis. One such area of concern is security, which is the focus of this blog.
This blog discusses detaching one pluggable database (PDB) from the source container database (CDB) and attaching it to a target CDB. For the purposes of this blog, the PDB is the database in which we store all our application-related data.
This blog explores Couchbase®, which is an open-source distributed NoSQL document and key-value database, released under the Apache® 2.0 license.
Originally published by TriCore: April 12, 2017
This two-part blog post series covers new performance-tuning features in Oracle® Database. Part 1 discussed Oracle Database version 184.108.40.206. This follow-up post covers version 220.127.116.11.
Originally published by TriCore: April 11, 2017
This two-part blog post series covers new performance-tuning features of Oracle® Database versions 18.104.22.168 and 22.214.171.124. Part 1 discusses the earlier version.
This blog post reviews how to use Amazon Simple Storage Service (S3) as storage for an Oracle® Database backup. Amazon Web Services (AWS) was the first cloud vendor that Oracle partnered with to enable database backup in the cloud. S3 is the main storage offering of AWS.
This post shows you how to integrate Oracle® Discoverer 11g with the single sign-on (SSO) solution delivered by Oracle Access Manager (OAM) 11g. It helps anyone who is looking for a one-stop login solution across different applications.
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. Part 1 of this two-part blog post series discussed the introduction of multitenant architecture in Oracle Database 12c and how to use Data Pump to export and import data. Part 2 covers how to take an export of only pluggable databases (PDBs) and the restrictions that Data Pump places on PDBs.
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. While most database administrators are aware of Data Pump, support for multitenant architecture in Oracle Database 12c introduced changes to how Data Pump exports and imports data.
Originally published by TriCore: May 17, 2017
Oracle® Business Intelligence Discoverer is a tool for ad hoc querying, reporting, data analysis, and web publishing for the Oracle database environment.
Oracle® Enterprise Manager (OEM) 12c and 13c include many performance analysis tools, among them a support tool called Real-Time Automatic Database Diagnostic Monitor (Real-Time ADDM), which Oracle DBAs can use to troubleshoot or tune ongoing, real-time performance issues. This blog shares practical ways to use Real-Time ADDM to identify and survive an emergency caused by database health problems, such as 100% session or process utilization, or exceeding the predefined critical limits set up for input/output (I/O), memory, or the interconnect. In such cases, Real-Time ADDM is a very handy tool that provides deeper, realistic, real-time analysis of database health, so let's compare Real-Time ADDM with regular ADDM.
This blog post explores the basics of Oracle® GoldenGate® and its functions. Because it's decoupled from the database architecture, GoldenGate facilitates real-time transactional change data capture and integration across both heterogeneous and homogeneous environments.
Online table redefinition allows you to restructure your Oracle® table in production without making the data unavailable. You might be comfortable using temp tables to move data around, but there is a better solution.
Originally published by TriCore: August 24, 2017
In Part 1 of this series, we shared some tips for using MongoDB. In Part 2, we cover several more MongoDB topics, including optimization, performance, speed, indexing, schema design, and data safety.
Originally published by TriCore: August 2, 2017
While it's easy to get started with MongoDB, more complex issues emerge when you're building applications. You may find yourself wondering things like:
- How do I re-sync a replica member in a replica set?
- How can I recover MongoDB after a crash?
- When should I use MongoDB's GridFS specification to store and retrieve files?
- How do I fix corrupted data?
This blog post shares a few tips for handling these situations when you're using MongoDB.
How do you read execution plans? From right to left, left to right, or by checking out costs? Or what about objects like index scans, table scans, and lookups? This blog discusses how to read a Microsoft® SQL Server execution plan.
This blog covers the process for converting a version 11i database to the Oracle® Applications Tablespace Model (OATM), which uses 12 locally managed tablespaces for all products, by using an OATM migration utility.
Starting with Oracle® 10g, you can partition tables online without any application downtime by using the DBMS_REDEFINITION package.
Use the following steps to convert a non-partitioned table to a partitioned table by using DBMS_REDEFINITION. This example converts the non-partitioned table TABLEA to a range-interval partitioned table.
Originally published by TriCore: June 14, 2017
This blog identifies the deprecated Microsoft® SQL Server® Database Engine features that are available in SQL Server 2016 and that will be removed in future releases of SQL Server.
This post describes the Oracle® In-Memory Advisor (IMA), a feature of Database 12c, and its benefits. This feature is available in Oracle Database version 126.96.36.199 and later.
Originally published by TriCore: July 11, 2017
In Part 1 of this two-part series on Apache™ Hadoop®, we introduced the Hadoop ecosystem and the Hadoop framework. In Part 2, we cover more core components of the Hadoop framework, including those for querying, external integration, data exchange, coordination, and management. We also introduce a module that monitors Hadoop clusters.
Originally published by TriCore: July 10, 2017
Apache™ Hadoop® is an open source, Java-based framework that's designed to process huge amounts of data in a distributed computing environment. Doug Cutting and Mike Cafarella developed Hadoop, which was released in 2005.
Built on commodity hardware, Hadoop works on the basic assumption that hardware failures are common. The Hadoop framework addresses these failures.
In Part 1 of this two-part blog series, we'll cover big data, the Hadoop ecosystem, and some key components of the Hadoop framework.
This blog gives an overview of the non-relational database, Apache Cassandra™. It discusses its components and provides an understanding of how the database operates and manages data.
Minimizing downtime and increasing database availability are essential objectives that every business aspires to achieve. Database Administrators (DBAs) are always looking for new ways to provide a faster recovery solution in the event of data file corruption or complete database failure. Starting with version 10g, Oracle® Recovery Manager (RMAN) offers a feature called Incremental Merge Backups (IMB), which provides a solution to minimize recovery time, especially for very large databases (VLDBs).
This blog describes the Oracle® AD Online Patching (adop) utility phases, the patch process cycle steps, and some useful adop commands and tips.
Originally published by TriCore: August 14, 2017
This blog describes the following common issues and solutions for the Oracle® AD Online Patching (adop) utility:
- Data dictionary corruption error
- adop prepare failure
- Forms object generation failure
- adop cutover hang-up
- Patch abort
Every time an Oracle® Database reads or writes data to a disk, the database generates disk input and output (I/O) operations. The performance of many software applications is limited by disk I/O, and applications that spend the majority of central processing unit (CPU) time waiting for I/O activity to complete are I/O bound. I/O calibration helps to address this issue.
Parallel Replicat is one of the new features introduced in Oracle® GoldenGate 12c Release 3 (188.8.131.52). Parallel Replicat is designed to help users quickly load data into their environments by using multiple parallel mappers and threads.
This blog discusses the Oracle Exadata Smart Flash Cache feature and its architecture, including the write-back flash cache feature.
In an Oracle Real Application Clusters (RAC) environment, all the instances or servers communicate with each other using high-speed interconnects on the private network. If the instance members in a RAC fail to ping or to connect to one another via this private interconnect, all the servers that are physically up and running (and the database instances on those servers) might end up in a condition known as split-brain.
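The split-brain condition described above is what quorum (majority voting) is designed to prevent. Here's a minimal Python sketch of the general majority rule, not Oracle Clusterware's actual voting-disk algorithm; the node names are invented for illustration:

```python
def surviving_partition(partitions):
    """Given network partitions (each a list of node names), return the
    partition holding a strict majority of all votes, or None when no
    side has a majority and the cluster cannot safely continue."""
    total = sum(len(p) for p in partitions)
    for p in partitions:
        if len(p) > total / 2:
            return p
    return None

# A 3-node cluster split 2/1: the 2-node side keeps the quorum.
print(surviving_partition([["node1", "node2"], ["node3"]]))
# An even 2/2 split: neither side has a majority, so neither may proceed.
print(surviving_partition([["n1", "n2"], ["n3", "n4"]]))
```

The even split is the instructive case: no side can claim a majority, which is why clusters prefer an odd number of voting members (or a tiebreaker such as a voting disk) to resolve exactly this situation.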
Using Sitecore with the Experience Database requires a connection to MongoDB, which can add quite a bit of complexity to your Sitecore installation. Here are some frequently asked questions about using ObjectRocket to host MongoDB for Sitecore.
Sitecore can make use of tempdb in SQL Server to speed up your session state operations. What catches people off guard is the fact that tempdb is re-created whenever the SQL Server service restarts. This becomes a problem because you then have to re-create the table structure and user permissions inside tempdb.
The number of Thanksgiving evenings that have been ruined by the phrase "we didn't load test for this" is incalculable.
The real challenge of being prepared for Cyber Monday stems from a misconception: that load testing is designed to generate hits, views, or raw load. What this strategy misunderstands is that 1,000 concurrent connections are not the same as 1,000 concurrent page views. Instead, load is an amalgamation of multifaceted behaviors that drive it in specific, often non-overlapping, directions. Properly load testing for an e-commerce flood requires accurate metrics of normal traffic and a multiplier, informed by data points like the following:
- How many concurrent visitors are expected?
- What is the daily conversion rate?
- Are there hot-spots, such as new or sale items?
- Does order management (OMS) share servers with other functional components?
- How quickly can resources be added into the environment?
- Probably the most difficult question of all: what is acceptable loss?
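To illustrate how the questions above turn into a load-test target, here's a hypothetical back-of-the-envelope estimate in Python; every input number is a made-up assumption, not measured data:

```python
def peak_concurrent_sessions(visitors_per_hour, avg_session_minutes, peak_multiplier):
    """Estimate peak concurrent sessions from normal-traffic metrics and
    an event multiplier (e.g. Cyber Monday vs. an average day).
    Little's law: concurrency = arrival rate x average time in system."""
    baseline = visitors_per_hour * avg_session_minutes / 60.0
    return baseline * peak_multiplier

# Assumed example: 12,000 visitors/hour, 5-minute sessions, 4x holiday spike
print(peak_concurrent_sessions(12_000, 5, 4))  # -> 4000.0
```

This is exactly why raw "hits" are the wrong target: the same visitor count produces very different concurrency depending on session length and behavior mix, so the multiplier has to come from your own measured traffic, not a guess.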
What is MongoDB?
MongoDB is, among other things, a document-oriented NoSQL database. This means that it deviates from the traditional, relational model to present a flexible, horizontally scaling model for data management and organization.
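To make "document-oriented" concrete, here's a minimal Python sketch in which plain dicts stand in for MongoDB documents; the collection and field names are invented for illustration, and the `find` helper is a toy analogue, not the real MongoDB API:

```python
# Two documents in the same hypothetical "users" collection; the second
# carries a nested field the first doesn't have -- no schema change needed.
users = [
    {"_id": 1, "name": "Ada"},
    {"_id": 2, "name": "Grace", "address": {"city": "Arlington", "zip": "22201"}},
]

def find(collection, **criteria):
    """Very simplified analogue of a MongoDB find() on top-level fields."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in criteria.items())]

print(find(users, name="Grace"))
```

In a relational model, adding the address data would mean altering a table or joining to a new one; in the document model, each document simply carries whatever structure it needs, which is the flexibility described above.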
How does MongoDB work with AEM?
MongoDB integrates with Adobe Experience Manager (AEM) by means of the crx3mongo runmode and the JVM options -Doak.mongo.uri and -Doak.mongo.db.
Why would I use MongoDB?
Primarily, MongoDB provides an alternative high-availability (HA) configuration to the older CRX cluster configuration. In reality, the architecture is more similar to a shared catalog on NFS or NetApp than to true clustering. The authors and publishers using MongoDB are not necessarily aware of each other.
When it comes to the battle cry of E-Commerce, "we're losing $1m per minute" is the clear winner, but a strong second is certainly "we want a disaster recovery solution". There are numerous benefits to disaster recovery and business continuity planning, especially speaking as the recipient of those 4am emergency calls. Traditional DR, with routing changes, cutover plans, scaled-down performance, and questionable technical tasks, is a well-traveled path in the industry, and it is very much in line with the expectations of most organizations even today. In the rapid-fire world of E-Commerce, this approach poses several challenges and misses a few key opportunities to take advantage of warm-side management.
If you are an OpenStack contributor, you likely rely on DevStack for most of your work. DevStack is, and has been for a long time, the de-facto platform that contributors use for development, testing, and reviews. In this article, I want to introduce you to a project I'm a contributor to, called openstack-ansible. For the last few months, I have been using this project as an alternative to DevStack for OpenStack upstream development, and the experience has been very positive.
In the first article of this series, we started installing OpenStack from source. We installed keystone and populated it with some basic information, including a Services project and an admin user for our new OpenStack install. Additionally, in an initial script, we set up users and directories for the upcoming installs of the Image service (glance), the Networking service (neutron), the Compute service (nova), and the Volume service (cinder). Now, let's continue and install and start the glance process on the controller node.
The goal of this post is to develop an application in an environment that's as close to your remote deployment environment as possible. Let's do this using Docker Machine and Compose to move an app from local development to remote deployment.
Last week I went to QCon NY 2015 to be both a student and a teacher in their tutorial track. They follow the standard pattern of having 2 days of tutorials prior to the conference proper. To understand QCon a bit better, here's their mission statement.
"QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community.
A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams."
MongoDB Inc just released what is arguably the most important change to the MongoDB database in its short history.
MongoDB version 3.0
MongoDB 3.0 brings with it a wealth of new features, but most notably a new pluggable storage engine API. We wanted to help customers get familiar with the new storage engine and features quickly and easily.
Because of the new pluggable storage engine API, MongoDB 3.0 promises a massive leap forward in functionality, usability, and features. Developers, DevOps engineers, and DBAs should start getting acquainted with MongoDB 3.0. In particular:
- WiredTiger storage engine
- Concurrency testing
- Journaling, durability, and crash recovery
- General compatibility
- SCRAM-SHA-1 authentication compatibility
Full Release Notes
From a community standpoint, the more people using 3.0 and filing bug reports, the better. We wanted a quick and easy way for folks to experiment, so we needed tooling. A couple of attributes of the tooling we thought were really important:
- Easy to use
- Configurable by end user(s)
- Uses Rackspace cloud (or any other IP, including localhost)
- Easily repeatable provisioning, so users can break it, tweak it, and rebuild it easily
We created an Ansible playbook that installs and configures a simple MongoDB 3.0 configuration. It takes just a few minutes to set up and is completely customizable.
- CentOS/RHEL (for now)
Installation consists of four simple steps:
- Step 1: Set up Ansible and Git
- Step 2: Clone the repo
- Step 3: Add roles and change some config files
- Step 4: Provision some MongoDB
Complete and up-to-date installation and configuration instructions are available in the repo.
In a nutshell:
Step 1: Install Ansible and Git
For this, you need to have Git and Ansible installed. Installation is pretty easy. For most systems, you simply need to run:
# CentOS/RHEL
# Ansible
sudo yum install ansible
# git
sudo yum install git
Step 2: Clone the repo
Simply clone the repo to the box where you installed Ansible:
git clone https://github.com/rackerlabs/ansible-mongodb.git
Step 3: Add roles, and change some config files
We need to tell Ansible to use the host(s) where we want MongoDB to be installed. We need to ensure we tell Ansible the correct configuration for our host(s), as well as set any startup parameters we want.
# edit the hosts file, and change <MYIP> to the IP address of the host to provision
vi hosts.txt
# install the required roles
./mongodb_roles.sh
# alter the default config (or at least inspect it for correctness)
vi roles/ansible-roles_mongodb-install/defaults/main.yml
Step 4: Provision some MongoDB
Simply launch the helper shell scripts:
cd ansible-mongodb
./setup-mongodb.sh
For a fully managed solution with replica sets and sharding, hit up firstname.lastname@example.org and the support folks will install and configure a MongoDB 3.0 instance in the ObjectRocket fully managed environment.
While this blog post may seem trivial on the surface, it packs some very interesting information about how flexible the Rackspace Cloud Files product can be. While executing another customer project, we ran into the age-old question: "Where are we going to put the database backups?" Back in the day, this question only really had one solution. In the current age of the cloud, you have a few options. Since I like to live life on the edge…I raised my hand and said Cloud Files.
For those of you not familiar with Cloud Files, the easiest way to describe it is shared object storage. In OpenStack lingo, you could also call it shared Swift. Cloud Files is an API-enabled object storage capability found on the Rackspace public cloud platform. In this post, we will walk you through how easy it is to store something as simple as database backups in Cloud Files using simple automation, fronted by Ansible of course (my orchestration drug of choice). I promise this post will be short and sweet.