In my previous blog post on running Sitecore in a Docker container, I used Azure SQL to host my Sitecore databases. Wanting a clean environment each time I develop, I needed a quick way to provision databases in Azure. With that requirement in mind, I wrote a PowerShell script that makes the task repeatable for development and testing.
At SUGCON 2015, Rackspace and Hedgehog presented on how Docker will shape the way we work with Sitecore, an ASP.NET web content management system. With the release of Windows Server 2016 Technical Preview 4, we can now run Sitecore in a Docker container.
I have spent the majority of my career as a Java developer. As a result, I learned to be more productive using an IDE instead of an editor like Vi. Even though Vi is still my editor of choice when I’m in a Linux shell, I don’t believe it’s practical when managing large Java projects.
AEM 6.1 With MongoDB 3.0 and WiredTiger
Adobe, with the release of AEM 6.1, [officially supports][adobe1] MongoDB 3.0 and its pluggable storage engine WiredTiger. This post will take you through installing and configuring AEM 6.1 and MongoDB to take advantage of [performance improvements in MongoDB 3.0.][or1]
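On the MongoDB side, WiredTiger is selected in the `storage` section of mongod's YAML configuration file. A minimal sketch follows; the data path and cache size are illustrative values, not recommendations:

```yaml
# mongod.conf — minimal sketch enabling the WiredTiger storage engine
storage:
  dbPath: /var/lib/mongodb        # illustrative; use your own data directory
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4              # illustrative; size to your host's RAM
    collectionConfig:
      blockCompressor: snappy     # WiredTiger's default block compression
```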
Virtual Machine Scale Sets, recently released in preview by Microsoft, let you manage a set of virtual machines as a single unit.
Over the last couple of years, we've seen OpenStack deployments shift from a public cloud model, where no one is trusted, to a private cloud model, where collaboration and shared resources between projects are required. As enterprises adopt OpenStack and integrate it into their infrastructure, new use cases continue to multiply, and existing limitations in APIs and data models have been brought to the forefront. One of the more exciting features to come out of Neutron development in the Liberty cycle that addresses a shortcoming is a framework for Role Based Access Control (RBAC).
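To make the idea concrete, the sketch below builds the JSON body one would POST to Neutron's `/v2.0/rbac-policies` endpoint to share a network with a single project rather than with everyone. The field names follow the Neutron RBAC API; the UUIDs are placeholders for illustration only.

```python
import json


def rbac_share_network_payload(network_id, target_project_id):
    """Build the request body for POST /v2.0/rbac-policies, granting
    exactly one project shared access to a network."""
    return {
        "rbac_policy": {
            "object_type": "network",       # resource type being shared
            "object_id": network_id,        # UUID of the network
            "action": "access_as_shared",   # share with the target project
            "target_tenant": target_project_id,
        }
    }


# Placeholder UUIDs for illustration only.
body = rbac_share_network_payload(
    "a87cc70a-3e15-4acf-8205-9b711a3531b7",
    "b87cc70a-0000-4acf-8205-9b711a3531b7",
)
print(json.dumps(body, indent=2))
```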
In the good old days the measure of a programmer was efficiency - how much functionality could be packed into how much space. Languages like C keep code close to the machine and require close attention to, and strong understanding of, machine operation for performance and code execution.
Much like with the catapult, new methods have come along for launching higher-delivery projectiles at high speeds, but, when it comes to hurling a VW beetle the length of a football field, sometimes the old ways are still the best. The importance of efficiency in code has been maligned, and largely obfuscated, by modern delivery mechanisms; however, its effect remains critical to the performance of complex large-scale applications.
Before getting into the nuts and bolts of the load balancing architecture itself, it's important to understand the (typical) multiple tiers of an E-Commerce application framework:
- Firewall (edge)
- Physical local traffic manager (LTM)
- Web Server
- Application Server
- Database Server (cluster)
Keep in mind that, from top to bottom, the environment will be asymmetrical from a load perspective. For example, a single web server will typically be capable of 2-3x the number of concurrent connections of a single application server, though this depends heavily on cache density: the higher the density, the more load shifts up into the web tier. Caching will be a subject for a later discussion, but at a glance it should account for 80+ percent of content served. With room for variance, the majority of successful architectures achieve this metric, and those that struggle tend to miss it. This is not to say, of course, that a lower density will necessarily cause difficulties. In addition to relocating load away from application servers, a higher cache density opens an opportunity for external services, such as the Akamai CDN, to absorb load before it ever reaches the environment.
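The asymmetry between tiers lends itself to a back-of-the-envelope calculation: the web tier absorbs every request, while only cache misses fall through to the application tier. A minimal sketch, with purely illustrative per-server throughput numbers:

```python
import math


def tier_server_counts(peak_rps, cache_hit_ratio,
                       web_rps_per_server, app_rps_per_server):
    """Rough server counts per tier: the web tier sees every request,
    while only cache misses reach the application tier."""
    app_rps = peak_rps * (1.0 - cache_hit_ratio)  # cache misses only
    return {
        "web": math.ceil(peak_rps / web_rps_per_server),
        "app": math.ceil(app_rps / app_rps_per_server),
    }


# Illustrative numbers: 2,000 req/s at peak, 80% served from cache,
# and a web server handling ~3x the throughput of an app server.
print(tier_server_counts(2000, 0.80, 600, 200))  # → {'web': 4, 'app': 2}
```

Dropping the cache hit ratio in this model immediately inflates the application tier, which is exactly the load-shifting effect described above.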
The number of Thanksgiving evenings that have been ruined by the phrase "we didn't load test for this" is incalculable.
The real challenge of being prepared for a Cyber Monday stems from a misconception: that load testing is simply about generating hits, views, or raw load. What this strategy misunderstands is that 1,000 concurrent connections is not the same as 1,000 concurrent page views. Real traffic is an amalgamation of multifaceted behaviors that drive load in specific, often non-overlapping, directions. Properly load testing for an e-commerce flood requires accurate metrics from normal traffic and a multiplier informed by data points like the following:
- How many concurrent visitors are expected?
- What is the daily conversion rate?
- Are there hot-spots, such as new or sale items?
- Does order management (OMS) share servers with other functional components?
- How quickly can resources be added into the environment?
- And probably the most difficult question of all - what is acceptable loss?
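Answers to the questions above can be turned into concrete load-test targets. A minimal sketch, assuming hypothetical inputs (peak multiplier, pages per session, conversion rate) that would come from your own analytics:

```python
import math


def load_test_targets(expected_concurrent, peak_multiplier,
                      pages_per_session, conversion_rate):
    """Convert business metrics into load-test targets: a peak
    concurrency goal, a page-view budget (not 1:1 with users), and
    the order volume the checkout/OMS path must sustain at peak."""
    peak_users = math.ceil(expected_concurrent * peak_multiplier)
    return {
        "peak_concurrent_users": peak_users,
        "page_views_at_peak": peak_users * pages_per_session,
        "orders_at_peak": math.ceil(peak_users * conversion_rate),
    }


# Illustrative: 5,000 typical concurrent visitors, a 4x holiday spike,
# 6 pages per session, 2% conversion.
print(load_test_targets(5000, 4, 6, 0.02))
```

Note how the page-view figure dwarfs the concurrency figure; driving the test plan from the wrong one of these two numbers is the misconception described above.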
For most businesses, "success" can be succinctly defined as "delivering a store and processing customer orders". From a business perspective, that's the exact scope: it's how an e-commerce business makes its money.
From Online to Offline
Although demonstrably enjoyable at holidays, surprises when it comes to product purchases are generally frowned upon. Imagine if, after swiping a credit card at the grocery store, all of the bags disappeared and were probably, but not always, teleported to their destination, with no indication of which outcome it would be! In essence: Schrödinger's groceries. To say the least, the novelty would wear thin quickly. This scenario (copyrighted, if technologically feasible at a future date) is analogous to accepting an order in an e-commerce application but providing no feedback loop on its status afterward. Those bad old days are long gone, but in their place stands a new interaction model: real-time feedback.