How These Engineers Include Scalability in Every Step of Development

Top engineers from Rue Gilt Groupe, Starburst and WHOOP share their tips for developing with scalability in mind.

Written by Dana Cassell
Published on Oct. 26, 2023

In some industries, scalability is an afterthought, a good problem to have after initial success opens up possibilities for growth. In the tech world, scalability is an essential and early part of any product build.

Domenico Fioravanti, director of engineering at Rue Gilt Groupe, defines scalability as “the ability of a technology or system to gracefully and efficiently adapt and expand in response to increased demand or growth without sacrificing performance, reliability or cost-effectiveness.” Since technologies are built to expect increasing demand, this ability to expand with grace is essential.

Scalability encompasses initial design and consistent monitoring within a product, but it also requires companies to consider their human resources and organizational capacity. Built In sat down with three engineering leaders from Boston tech companies to learn how they approach scalability in holistic ways. 

 

Domenico Fioravanti
Director, Engineering • Rue Gilt Groupe

Rue Gilt Groupe is a premier off-price, e-commerce portfolio company — connecting more than 35 million shoppers with coveted designers and in-demand labels at an exceptional value. 

 

Describe what scalability means to you. Why is scalability important for the technology you're building?

Scalability is the ability of a technology or system to gracefully and efficiently adapt and expand in response to increased demand or growth without sacrificing performance, reliability or cost-effectiveness.

For Rue Gilt Groupe, scalability is of paramount importance. It ensures that as our member base and demand grow, our platform remains accessible and responsive, providing a seamless shopping experience. 

Scalability also enables us to handle surges in traffic, varying workloads and changing requirements without service disruptions or the need for costly overhauls. It allows for optimal resource allocation, cost management and sustainability in a competitive digital landscape, ultimately supporting our goal of delivering a dependable and innovative solution to our members.

Any discussion of scalability inevitably turns to the expansion of teams. It’s essential to bring up Conway’s law, which implies that ‘to achieve a scalable architecture, you must also cultivate a technology organization capable of scaling.’ At RGG, we've grown our engineering talent presence in Dublin, augmenting the capabilities of our U.S. team to accelerate development and execution.

 

Rue Gilt Groupe

 

How do you build this tech with scalability in mind?

Building technology with scalability in mind involves careful planning, architectural decisions and coding practices that enable a system to grow and adapt seamlessly as demands increase. 

 

Building technology with scalability in mind involves careful planning, architectural decisions, and coding practices that enable a system to grow and adapt seamlessly as demands increase.”

 

Here's how the tech team at RGG builds technology with scalability as a core consideration:

  • We design our system with a modular and component-based architecture.
  • We implement load balancing to distribute incoming requests and prevent any single point of failure.
  • We adopt horizontal scaling by adding more containers or instances to handle increased load.
  • We adopt database scaling, indexing and query optimization.
  • We use caching mechanisms to reduce the load on backend systems.
  • We use high-throughput, low-latency message queues to decouple components and enable asynchronous processing (see the sketch after this list).
  • We leverage content delivery networks (CDNs) to distribute static assets.
  • We choose scalable cloud-based infrastructure and use infrastructure-as-code (IaC) tools.
  • We implement robust monitoring and logging solutions to track system performance and identify bottlenecks.
  • We monitor our system's performance with regular load testing and benchmarking.
  • We regularly review and adjust capacity based on usage trends.
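
For readers less familiar with the message-queue pattern in that list, the idea is that the request path enqueues work and returns quickly while separate workers process it asynchronously. The sketch below illustrates the concept in plain Python, with an in-process queue standing in for a managed broker such as SQS or Kafka; the handler and event names are hypothetical, not RGG's code.

```python
import queue
import threading
import time

# Minimal illustration of decoupling via a message queue: the web-facing
# producer enqueues work and returns immediately, while worker threads
# process messages asynchronously. A production system would use a managed
# broker (e.g. SQS or Kafka) rather than an in-process queue.

order_events = queue.Queue()

def handle_checkout(order_id: int) -> str:
    """Fast request path: enqueue the event and respond right away."""
    order_events.put({"order_id": order_id, "ts": time.time()})
    return "accepted"

def worker(worker_id: int) -> None:
    """Background consumer: drains the queue independently of request traffic."""
    while True:
        event = order_events.get()
        if event is None:  # sentinel to shut down
            break
        time.sleep(0.1)  # placeholder for slow work (fulfillment, email, ...)
        print(f"worker {worker_id} processed order {event['order_id']}")
        order_events.task_done()

if __name__ == "__main__":
    workers = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
    for w in workers:
        w.start()
    for order_id in range(5):
        handle_checkout(order_id)   # request path stays fast under load
    order_events.join()             # wait for async processing to finish
    for _ in workers:
        order_events.put(None)
    for w in workers:
        w.join()
```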

 

What tools or technologies does your team use to support scalability, and why?

As a leading off-price e-commerce portfolio company, connecting the next-generation shopper to world-class brands and making shopping an occasion to celebrate, our 'why' is always centered around our members.

To ensure a seamless member experience across our portfolio, RGG employs a robust suite of tools for scalability. Docker streamlines application deployment and management, while AWS serves as our cloud provider for scalable infrastructure and services. NGINX and ALBs efficiently distribute incoming traffic across multiple servers or instances, and Redis lessens database load by caching frequently accessed data, enhancing application performance and scalability. Elasticsearch handles large dataset indexing and querying, contributing to scalable search capabilities. Akamai facilitates global content distribution, reducing server load and latency. Datadog offers valuable insights into system performance, aiding in the identification of scalability bottlenecks and issues. Lastly, DynamoDB provides flexible data models and horizontal scaling, all contributing to an exceptional member experience.
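
As an illustration of how the Redis caching Fioravanti mentions typically works, here is a minimal cache-aside sketch in Python. It assumes a locally reachable Redis instance, and the `fetch_product_from_db` helper and key names are hypothetical placeholders rather than RGG's implementation.

```python
import json
import redis  # pip install redis

# Cache-aside sketch: check Redis first, fall back to the database on a miss,
# then populate the cache with a TTL so hot data stays close to the app.
# The host/port and fetch_product_from_db helper are illustrative placeholders.

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300

def fetch_product_from_db(product_id: str) -> dict:
    # Stand-in for a real database query.
    return {"id": product_id, "name": "example item", "price_cents": 4999}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no database round trip
    product = fetch_product_from_db(product_id)
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(product))  # populate cache
    return product

if __name__ == "__main__":
    print(get_product("12345"))  # first call misses and fills the cache
    print(get_product("12345"))  # second call is served from Redis
```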

 

 

Mukesh Baphna
Senior Director Engineering • Starburst

Starburst powers a data analytics engine for businesses. 

 

Describe what scalability means to you. Why is scalability important for the technology you're building?

Scalability is a critical attribute for any SaaS platform, especially a data analytics platform like Galaxy. It refers to our ability to grow and handle increased workloads, users, and data while maintaining or enhancing performance, reliability, and responsiveness. 

 

Scalability is a critical attribute for any SaaS platform, especially a data analytics platform like Galaxy.”

 

Scalability is vital for several reasons:

  • Meeting user demand: Scalability empowers us to meet increased traffic without compromising performance. 
  • Enhancing user experience: Ensures Galaxy delivers fast responses and uninterrupted service, improving the overall user experience. 
  • Handling expanding data volumes: Scalability allows us to seamlessly grow with data volumes, maintaining swift and reliable data processing, querying and analysis. 
  • Adapting to varied workloads: Enables dynamic resource allocation based on varying data analytics workloads, reducing costs. 
  • Optimizing query performance: Efficient scaling with concurrency leads to faster execution of complex analytical queries. 
  • Data ingestion: Scalability enables rapid ingestion, allowing customers to leverage analytics over large datasets.

 

How do you build this tech with scalability in mind?

Our approach is guided by a set of core principles and actions. We begin by designing a modular system architecture, allowing for the independent scaling of interconnected components, promoting flexibility and efficient expansion. Leveraging cloud services, such as managed Kubernetes and cloud-native SaaS for databases and streaming, we access features like auto-scaling and load balancing, enhancing system scalability. Elasticity is woven into our services through auto-scaling mechanisms that adapt resource allocation based on demand and capacity, ensuring optimal performance and efficiency. Distributed caching solutions enable horizontal scaling and efficient parallel processing. Robust monitoring provides insights into system performance, facilitating both troubleshooting and proactive scaling. High availability is ensured with failover mechanisms and component redundancy, reducing downtime. Security measures, access controls, and comprehensive documentation are prioritized. Lastly, we actively seek user feedback and continuously monitor system performance to drive ongoing improvement.
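
The auto-scaling elasticity Baphna describes generally reduces to comparing observed utilization against a target and adjusting replica counts within bounds. The sketch below shows that decision in isolation; it is an illustrative approximation in the spirit of a horizontal autoscaler, not Starburst's implementation, and every threshold is hypothetical.

```python
import math
from dataclasses import dataclass

# Illustrative autoscaling decision: scale replicas so observed utilization
# approaches a target, clamped between configured bounds. All numbers here
# are hypothetical placeholders.

@dataclass
class ScalingPolicy:
    target_utilization: float = 0.6   # aim for 60% average CPU
    min_replicas: int = 2
    max_replicas: int = 20

def desired_replicas(current_replicas: int, observed_utilization: float,
                     policy: ScalingPolicy) -> int:
    """Return the replica count that brings utilization toward the target."""
    raw = current_replicas * (observed_utilization / policy.target_utilization)
    desired = math.ceil(raw)
    return max(policy.min_replicas, min(policy.max_replicas, desired))

if __name__ == "__main__":
    policy = ScalingPolicy()
    print(desired_replicas(4, 0.90, policy))  # overloaded -> scale out to 6
    print(desired_replicas(4, 0.30, policy))  # underused  -> scale in to 2
```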

 

What tools or technologies does your team use to support scalability, and why?

We’ve built our product stack in such a way that each component or service follows most of the scalability principles I mentioned above. Our services are loosely coupled, which allows us to manage their lifecycles independently and also eases operations for internal teams.

Additionally, we leverage other SaaS vendors that allow us to keep separation of concerns and help us build an elastic, scalable service. To name a few, we use cloud service providers’ managed Kubernetes services for compute, cloud storage services for elastic storage and vendors like Confluent for their cloud-based streaming platform, Cockroach Labs’ globally synced database for our control plane, Cloudflare for web application security and edge networking, Metronome for billing, and Datadog and Chronosphere for monitoring and observability.

We also take advantage of the caching, indexing and parallelism that come with running distributed systems, and we run most of our services on Kubernetes infrastructure, which provides additional resilience and scaling guarantees.

Lastly, we’ve built a strong observability stack to track the health of the system proactively and an incident management system that allows us to meet our SLAs.
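
Tracking whether a service is on pace to meet its SLAs often comes down to error-budget arithmetic. The sketch below shows that calculation for an availability SLO; the target and request counts are hypothetical and not drawn from Starburst's systems.

```python
# Illustrative error-budget math for an availability SLO: given a target
# (e.g. 99.9% successful requests over a window), how much of the failure
# budget has been consumed so far? All numbers are hypothetical.

def error_budget_consumed(total_requests: int, failed_requests: int,
                          slo_target: float = 0.999) -> float:
    """Fraction of the error budget used in the current window (1.0 = exhausted)."""
    allowed_failures = total_requests * (1.0 - slo_target)
    if allowed_failures == 0:
        return 0.0 if failed_requests == 0 else float("inf")
    return failed_requests / allowed_failures

if __name__ == "__main__":
    # 120M requests this window, 45,000 failures against a 99.9% target:
    consumed = error_budget_consumed(120_000_000, 45_000)
    print(f"error budget consumed: {consumed:.0%}")  # ~38% -> still within SLA
```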

 

 

Ryan Aubrey
Senior Software Engineer • WHOOP

WHOOP’s wearable fitness tracking device and performance optimization platform empower users to perform at a higher level. 

 

Describe what scalability means to you. Why is scalability important for the technology you're building?

Within the Platform team at WHOOP, we are always thinking about, planning for and measuring scale with respect to two essential tenets: availability and cost. Availability concerns whether we consistently provide our members with the experience they expect. Simultaneously, we closely scrutinize the cost element, specifically evaluating whether the capital needed to build and maintain the platform aligns with the overall business goals.

Scalability isn’t a measurement taken at a single point in time. Scalability measures how well your processes, systems, and teams can balance availability and cost over time.

 

Scalability isn’t a measurement taken at a single point in time. Scalability measures how well your processes, systems, and teams can balance availability and cost over time.”

 

For a technology or system that faces little change, achieving scalability may only require considerable effort or thoughtful design at the outset. WHOOP, however, is a very dynamic system; each day, we push the boundaries of human performance with our 24/7 wearable technology. As we release new features like Stress Monitor, Strength Trainer or WHOOP Coach, we must maintain high availability with a sustainable cost profile.

 

How do you build this tech with scalability in mind?

To foster scalable design at WHOOP, we’ve introduced two critical steps into our technical designs: cost calculations and load tests. As scalability is a measurement across time, these steps are meant to forecast when the system will tip into an imbalance of availability and cost.
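
A cost calculation of the kind Aubrey describes can be as simple as projecting per-member usage against expected growth and flagging when a budget threshold is crossed. The sketch below is a hypothetical illustration; every rate and growth figure is made up rather than taken from WHOOP.

```python
from typing import Optional

# Illustrative cost forecast: project monthly infrastructure cost for a new
# feature as the member base grows, and flag the month the cost crosses a
# budget threshold. Every number here is a made-up placeholder, not WHOOP data.

def monthly_cost(members: int, requests_per_member_per_day: int = 50,
                 cost_per_million_requests: float = 3.50,
                 fixed_cost: float = 2_000.0) -> float:
    """Rough monthly cost: fixed overhead plus a per-request charge."""
    requests_per_month = members * requests_per_member_per_day * 30
    return fixed_cost + (requests_per_month / 1_000_000) * cost_per_million_requests

def months_until_budget_exceeded(members: int, monthly_growth: float,
                                 budget: float,
                                 horizon_months: int = 36) -> Optional[int]:
    """Return the first month the projected cost exceeds the budget, if any."""
    for month in range(1, horizon_months + 1):
        members = int(members * (1 + monthly_growth))
        if monthly_cost(members) > budget:
            return month
    return None  # stays within budget over the forecast horizon

if __name__ == "__main__":
    print(f"cost at 1M members: ${monthly_cost(1_000_000):,.0f}/month")
    tipping_month = months_until_budget_exceeded(1_000_000, monthly_growth=0.05,
                                                 budget=20_000.0)
    print(f"budget of $20,000/month exceeded in month: {tipping_month}")
```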

 

In addition to these steps, engineers at WHOOP design new systems with a few key questions in mind:

  • Will this system maintain availability if our member base grows tenfold? If not, at what point will this design’s availability degrade? (See the sketch after this list.)
  • What specific bottleneck did our load test reveal, and how might we mitigate it in the future? 
  • If we move forward with this design, what metrics will we track to know when we need to redesign?
  • How much will this feature cost with our current member base? How will this cost scale with our member base? 
  • What assumptions or key product requirements are the core drivers of cost? If those constraints were relaxed or removed, how would that impact the system’s cost profile over time?
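
To make the first question above concrete, one common forecasting exercise compares load-test capacity against today's peak traffic to estimate how much growth a design can absorb. The figures in the sketch below are hypothetical, not WHOOP's.

```python
# Illustrative headroom estimate: compare the throughput at which a load test
# showed degradation with today's peak traffic to see how much growth the
# current design can absorb. All figures are hypothetical placeholders.

def growth_headroom(peak_rps_today: float, degradation_rps_from_load_test: float,
                    safety_margin: float = 0.8) -> float:
    """Return the traffic multiple the design can absorb before degrading."""
    usable_capacity = degradation_rps_from_load_test * safety_margin
    return usable_capacity / peak_rps_today

if __name__ == "__main__":
    headroom = growth_headroom(peak_rps_today=1_200,
                               degradation_rps_from_load_test=9_000)
    print(f"current design absorbs ~{headroom:.1f}x today's peak traffic")
    if headroom < 10:
        print("would not survive 10x member growth without a redesign")
```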

 

Not only do these forecasting exercises help engineering teams avoid painful system re-architecture, they also help bring quantifiable tradeoffs to our product counterparts.

 

What tools or technologies does your team use to support scalability, and why?

While the thoughtful design of a system can go a long way, WHOOP has invested in tooling to support availability and cost observability throughout a system’s lifetime.

WHOOP’s observability tooling instruments CPU usage, API latency, and other critical service health metrics for every new application, forming the cornerstone of our availability analysis. Teams then execute load tests against these applications using Locust, an open-source load-testing tool. Locust provides an easy way to flood an application with traffic, simulating production-level conditions and pinpointing potential failure thresholds.
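
For context, a Locust test is defined by a Python class that describes simulated user behavior. A minimal example looks like the following; the endpoints, task weights and wait times are illustrative placeholders, not WHOOP's actual API.

```python
# locustfile.py -- a minimal Locust load-test sketch (pip install locust).
# Run with: locust -f locustfile.py --host https://staging.example.com
# The endpoints and weights below are illustrative placeholders.
from locust import HttpUser, task, between

class SimulatedMember(HttpUser):
    # Each simulated member pauses 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task(3)
    def get_daily_summary(self):
        # Weighted 3x to mimic a read-heavy traffic mix.
        self.client.get("/api/v1/daily-summary")

    @task(1)
    def post_activity(self):
        self.client.post("/api/v1/activities",
                         json={"type": "run", "duration_seconds": 1800})
```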

Finally, WHOOP has built cost observability directly into our in-house deployment platform. This feature enables teams to promptly recognize shifts in a service’s cost profile, all within their daily workflow.

These tools provide continuous feedback at each development stage, empowering WHOOP’s engineering teams to navigate the ongoing trade-offs between availability and cost.

 

 

Responses have been edited for length and clarity. Images provided by Shutterstock and listed companies.