
Enterprise-Class SaaS: Stability, Security, Reliability


Large enterprise customers have different needs and requirements than smaller companies.

Small companies tend to be very scrappy and are able to change their decisions on technology quickly. In general, they tend to prioritize features and speed to business impact. And they skew towards single-office locations, in one time zone and geography. Oversight is also much easier with a smaller team.

In contrast, enterprises often span time zones and geographies and tend to prioritize security requirements. Up-time becomes more important as the consequences of an outage ripple across a larger organization. Oversight and reporting are also much more significant requirements.

As we built our predictive Playbooks™ product, we focused on this objective:

Bring the product innovation expected of an SMB startup while ensuring the enterprise-class capabilities already delivered by InsideSales.com.

There are a number of key decisions we made with our predictive Playbooks product from day one to deliver on this goal, including:

  1. Build on a microservices architecture.
  2. Horizontally scale and isolate through a pod architecture.
  3. Design a global platform with a distributed, rather than imperialist, worldview.

Microservices are (good) cheating

Software systems increase in complexity over time. As a result, we learn phrases such as ‘bug introduced by regression’. This phrase really means “the system became too complex for a human to make a change without introducing a side effect.” Good products degrade over time due to the complexity introduced by new features. Put another way, success inevitably evolves into failure, unless we manage it.

Enter microservices. Microservices allow true isolation of functionality, so discrete building blocks can be developed and tested independently, then composed into a larger system.

This all sounds simple, and conceptually it is. However, the notion of discrete building blocks is not new; it is as old as computer science itself. For example, in 1971 Niklaus Wirth wrote about “step-wise refinement,” which involves decomposing a problem into smaller, easier parts.

The term ‘microservice’ was popularized by Martin Fowler, with whom I worked in 1993 on building an FX options system for Citibank. At that time, the focus was reusable “objects” that could be used as the key building blocks.

The key here is not necessarily the notion of microservices, but microservices done well. A formative proving ground for microservices has been Amazon.com. I spent nearly seven years at Amazon, working on decomposing different core parts of their software stack, where I learned some valuable lessons about how to build enterprise-class software that I’m now applying to InsideSales’ innovative products.

Horizontally scale through a pod architecture

The notion here is simple: if you have 1,000 customers and your system fails on one day out of every 100 due to a hardware or software defect, how do you reduce the impact? One way is to break your customer base into ten groups of 100 customers each. Assuming each failure affects only one group, the failures are spread across the ten groups, so any given customer sees a failure only one day out of every 1,000.
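The arithmetic above can be sketched as a back-of-envelope model (illustrative only, not InsideSales code; all names and numbers are from the example in the text):

```python
# Back-of-envelope blast-radius model: if the whole fleet fails on 1 day
# out of 100, splitting 1,000 customers into 10 independent pods spreads
# those failures, so each customer sees a pod-level failure on roughly
# 1 day out of 1,000 (assuming each failure affects only one pod).
TOTAL_CUSTOMERS = 1_000
FLEET_FAILURE_RATE = 1 / 100   # chance the monolithic system fails on a given day
NUM_PODS = 10

customers_per_pod = TOTAL_CUSTOMERS // NUM_PODS

# With failures distributed evenly, a single pod's share of failure days is:
per_pod_failure_rate = FLEET_FAILURE_RATE / NUM_PODS

print(customers_per_pod)        # 100 customers per pod
print(per_pod_failure_rate)     # roughly 0.001, i.e. one day out of 1,000
```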

This type of architecture is especially good for dealing with software defects. New software releases always run the risk of new bugs, no matter how much QA is done. However, if the cohort receiving a new release first changes on a round-robin basis, then the initial exposure to release-related defects is reduced by a factor of ten when there are ten pods.
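A minimal sketch of the round-robin cohort idea (the function name and pod count are illustrative, not the actual release tooling):

```python
# Round-robin release cohorts: each new release lands first on the next
# pod in rotation, so a release-related defect initially touches only
# 1/NUM_PODS of the customer base before wider rollout.
NUM_PODS = 10

def first_cohort(release_number: int) -> int:
    """Pod that receives a given release first, rotating round-robin."""
    return release_number % NUM_PODS

# Release 0 lands on pod 0 first, release 3 on pod 3,
# and release 10 wraps back around to pod 0.
print(first_cohort(0))    # 0
print(first_cohort(3))    # 3
print(first_cohort(10))   # 0
```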

The InsideSales.com Playbooks architecture is designed to allow for independent pods, each corralled in its own virtual private cloud (VPC). The VPC enforces isolation between the pods. Rather than relying on best intentions, we have a reliable mechanism that ensures there are never unforeseen dependencies or single points of failure.

To accomplish this, we provide a single edge layer of identity management. Users log in, and from that we establish which pod their company is in. This layer is critical because it controls both security and pod mapping.

From this initial login, we establish the notion of a session, which is used on subsequent interactions between the individual user and the pod. This session is used to control routing of all information from the user’s browser to the correct pod.

The traffic to and from the pod is fed through a proprietary edge router that is fully pod aware. This allows for pod failover between data centers with limited impact on end users.
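The login-then-route flow described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Playbooks implementation; the company names, endpoints, and function names are all invented:

```python
# Sketch of a pod-aware edge layer: login pins a session to the pod that
# hosts the user's company, and every subsequent request is routed to
# that pod's isolated endpoint.
from dataclasses import dataclass

COMPANY_TO_POD = {"acme-corp": "pod-3", "globex": "pod-7"}  # set at provisioning
POD_ENDPOINTS = {
    "pod-3": "https://pod-3.example.internal",
    "pod-7": "https://pod-7.example.internal",
}

@dataclass
class Session:
    user: str
    company: str
    pod: str  # pinned once at login

def login(user: str, company: str) -> Session:
    """Edge identity layer: authenticate, then pin the session to a pod."""
    pod = COMPANY_TO_POD[company]
    return Session(user=user, company=company, pod=pod)

def route(session: Session, path: str) -> str:
    """Pod-aware edge router: every request follows the session's pod."""
    return f"{POD_ENDPOINTS[session.pod]}{path}"

session = login("alice", "acme-corp")
print(route(session, "/api/plays"))  # https://pod-3.example.internal/api/plays
```

Because the pod mapping lives only in this edge layer, pods never need to know about each other, which is what keeps the isolation guarantee enforceable.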

Global platforms require a distributed worldview

Most enterprise systems have a central hub. For example, although CRMs like Salesforce and Microsoft Dynamics have multiple data centers around the world, any given enterprise is expected to pick a single central location. A West Coast technology company would pick a West Coast location, so the majority of their users get fast access. Sales reps at that company who are based in Europe or Asia-Pac might feel a lag. Similarly, a German manufacturing company would likely pick Frankfurt as the location of their data center, but sales reps in other regions might suffer from slower performance.

There is a better way of building these architectures, and email offers one key insight. Email servers tend to operate locally, then store and forward messages to the appropriate recipient. I architected a similar approach at Instinet in 2000, with one of the first equity trading SaaS applications. We supported large portfolios of equities being uploaded and then traded globally. One key notion was that end users should get fast local performance, with the system sending the data around the world asynchronously.

We have leveraged a similar architecture with Playbooks. Rather than an imperialist worldview, where there is one central privileged location, we have built a federated architecture where sales reps in regional locations get identical performance. Sales reps in Dublin will interact with a Playbooks server within their region. Sales reps in San Francisco will interact with a Playbooks server within their region. All information is fed behind the scenes back into the core CRM for record keeping, but reps will be able to operate in Playbooks independently of CRM performance.
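The store-and-forward pattern behind this federated design can be sketched as follows. This is a hedged illustration under assumed names; the actual Playbooks replication mechanism is not described in detail here:

```python
# Store-and-forward sketch: the regional pod serves the sales rep from
# local state immediately, while a queue replicates each change back to
# the central CRM asynchronously in the background.
from collections import deque

local_store: dict[str, str] = {}              # regional pod's own datastore
crm_queue: deque[tuple[str, str]] = deque()   # outbound changes awaiting sync
crm_records: dict[str, str] = {}              # stand-in for the central CRM

def record_activity(key: str, value: str) -> None:
    """Write locally for fast regional reads; enqueue for CRM replication."""
    local_store[key] = value
    crm_queue.append((key, value))

def drain_to_crm() -> None:
    """Background worker: forward queued changes to the central CRM."""
    while crm_queue:
        key, value = crm_queue.popleft()
        crm_records[key] = value

record_activity("call-123", "completed")
print(local_store["call-123"])  # available immediately in-region
drain_to_crm()
print(crm_records["call-123"])  # eventually consistent in the CRM
```

The design choice is the same one the email analogy suggests: the rep's experience depends only on the regional server, and the CRM becomes an eventually consistent system of record rather than a synchronous dependency.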

Getting there

The transformation to a microservices distributed architecture requires a dramatic mind shift. We are very excited about the benefits it will bring InsideSales.com customers, and have been impressed with how our engineering team has embraced the learnings. However, like all big changes, the transition has to be managed. Our evolution looked like this:

  1. We started with a small new team in one location (outside of HQ to ensure focus) building the Playbooks product. They were cohesive and able to establish the initial architecture. This took us to our alpha version and demonstrated credibility with our executive team and our customers.
  2. We expanded to a second location, our Provo HQ. We took some of our existing core competencies, such as international telephony and email tracking, and built these into microservices that could be integrated into Playbooks.
  3. We pivoted the majority of our engineering organization onto the Playbooks product. By this stage, eight months into development, we had a stable base for them to accelerate the effort.

Pivoting a technology strategy is not for the faint of heart. However, we are very excited about our journey and the feedback we have received from customers and our own team.

About Steve Brain, CTO InsideSales.com

Steve Brain brings more than 25 years of hands-on experience in software development and a proven track record of engineering leadership, delivering enterprise-grade platforms and solutions. Today, Steve leads a team of engineers who are enhancing InsideSales.com’s industry-leading sales acceleration platform for its growing pool of enterprise customers and businesses of all sizes. Before joining InsideSales.com, Steve led engineering and professional services at Qualtrics, spent seven years running large engineering teams at Amazon.com, and spent 12 years building high-frequency and global portfolio trading systems, including the first SaaS portfolio trading platform, at financial services leaders Citibank, Fidelity, Merrill Lynch and Instinet. Some of Steve’s work at Amazon on decoupling into microservices has since been used as an example by the Amazon CTO in conference presentations. Steve has a bachelor’s degree in computer science engineering, with honors, from the University of Warwick. While in high school Steve co-authored three books on home computer programming, which initially funded a lifelong passion for skiing.
