4 Key Factors to Consider When Scaling Your Software Application
When you’re building your new software application, you might focus on the features and functionality that will dazzle your end-users.
You may also spend a lot of development time and resources on improving the software’s performance.
What you might overlook, however, is scalability.
Scalability is critical to growth. As the number of users grows, so does the volume of data. A lack of scalability can cause your software to slow down or even crash.
Scalability issues negatively impact software applications all the time. Some apps are victims of their own success, attracting many more new users than anticipated. Outside forces impact others, like when the COVID-19 pandemic caused sudden, massive unemployment that overwhelmed state unemployment bureau websites.
To design against these kinds of failures, it’s important to consider four key factors for scaling your software application.
1. Scalability Can Cost You
More businesses are relying on cloud platforms like Azure and Amazon Web Services to provide scalability. It is a good strategy, but it can result in an unexpected cost increase if the demand for your application spikes.
These surprise costs occur because cloud platforms often charge a per-server fee for virtual machine deployments, as well as load balancing fees.
The good news is that this setup will prevent lag and application crashes if you suddenly get a huge influx of new users.
The bad news is that load balancing costs are determined by the number of new and active requests, as well as the volume of data processed. In addition, costs will increase for the additional processing power (i.e., virtual machines).
In other words, a spike in demand means a spike in cost.
By keeping these costs in mind during the design phase, you can reduce them somewhat—for example, by using an elastic load balancer or tuning the server parameters to increase capacity.
2. Scaling Shared Resources Is Critical
Surprise costs aren’t fun, but cascade failures are even worse. That’s why you need to make sure your shared resources will scale.
Increasing the capacity of one part of your application often overwhelms shared resources such as databases, message queues, and microservices. When this happens, the system slows down significantly and may even crash altogether.
For example, if you anticipate high server demands, you may increase the number of servers. However, because they all still draw from a single database, data access will start to slow down during periods of high demand. If demand exceeds the database’s ability to return results to the server, the server can’t pass that result back to the user or another module in the system.
When a slow shared resource or microservice causes requests to build up along the request processing path, the whole system stops working. This is called a cascade failure.
You can protect against cascade failures by including resilient patterns in your software architecture, such as circuit breakers, bulkheads, fallbacks, retries, and timeouts.
A circuit breaker pattern checks the availability of the shared resource. If the resource is unavailable, the circuit breaker stops the application from attempting the call and fails fast with an error until the resource becomes available again.
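Here's a minimal sketch of the idea in Python. The class name, failure threshold, and cooldown are illustrative assumptions; in practice, you'd likely reach for an existing resilience library rather than rolling your own.

```python
import time

class CircuitBreaker:
    """Fails fast once a shared resource has failed repeatedly, then retries after a cooldown."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failure_count = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, func, *args, **kwargs):
        # While the circuit is open, fail fast until the cooldown has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("Circuit open: shared resource presumed unavailable")
            self.opened_at = None  # cooldown over, allow a trial request

        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failure_count = 0  # a success resets the failure count
            return result
```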
A bulkhead pattern splits your application into multiple components and resources that are isolated in such a way that if one fails, the others will continue to function.
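A rough Python sketch of the same idea: give each downstream dependency its own bounded worker pool, so exhausting one cannot starve the others. The service names and pool sizes below are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder calls to two hypothetical downstream services.
def call_payments_service(order):
    return f"charged {order}"

def call_reporting_service(query):
    return f"report for {query}"

# Bulkheads: each dependency gets its own small, bounded pool, so a hung or
# overloaded payments service can never starve reporting calls, and vice versa.
payments_pool = ThreadPoolExecutor(max_workers=5)
reporting_pool = ThreadPoolExecutor(max_workers=2)

def charge_customer(order):
    return payments_pool.submit(call_payments_service, order)

def generate_report(query):
    return reporting_pool.submit(call_reporting_service, query)
```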
The fallback pattern enables your application to continue to function in case of a failed request to the shared resource. Instead of throwing an error because of a missing response, a fallback value is returned.
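A small Python sketch of the fallback pattern, using a hypothetical recommendation service as the shared resource:

```python
class RecommendationService:
    """Stand-in for a shared recommendation microservice (hypothetical)."""
    def fetch(self, user_id):
        raise ConnectionError("service unavailable")

recommendation_service = RecommendationService()

def get_recommendations(user_id):
    try:
        return recommendation_service.fetch(user_id)
    except ConnectionError:
        # Fallback: return a safe default instead of propagating the failure.
        return ["bestsellers", "new-arrivals"]
```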
The retry pattern is simple: if a request fails, it is retried a defined number of times before an error is thrown. This works well for temporary bottlenecks, but retrying can make the problem worse if the shared resource is already overloaded.
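One common way to soften that risk is to back off between attempts. Here's a brief Python sketch with exponential backoff and jitter; the attempt count and delays are illustrative.

```python
import random
import time

def call_with_retries(func, attempts=3, base_delay=0.5):
    """Retry a failing call a few times with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error
            # Backing off (with a little jitter) avoids hammering an already overloaded resource.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```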
The timeout pattern sets a time limit on responses to prevent indefinite wait times. If a response isn’t received from the resource within the defined time limit, the request is treated as failed and a timeout error is thrown.
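A short Python sketch of the timeout pattern, assuming the requests library and a hypothetical profile endpoint:

```python
import requests  # assumes the requests library

def fetch_profile(user_id):
    try:
        # Give up after 2 seconds instead of waiting indefinitely for a response.
        response = requests.get(f"https://api.example.com/profiles/{user_id}", timeout=2.0)
        response.raise_for_status()
        return response.json()
    except requests.Timeout:
        raise TimeoutError("Profile service did not respond within 2 seconds")
```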
3. Scaling the Data Tier Is Painful
Many developers would rather get a root canal than have to change the data tier of an application. That’s because changing data models to scale your query processing capability is rarely easy.
Scaling your application’s request processing increases the load on your shared transactional databases, which hold customer profiles, account information, and other essential data your business needs to function. The increased load then slows down the system.
A few workarounds can make the load more manageable in the short term, but you’re going to need to address it properly in the long run.
And that’s where it can get painful.
If you’re using a relational database, changing the schema means running scripts to reload your data. While the script is running, the system is essentially unavailable. And the bigger the database, the longer the script takes to run.
How long would it take to reload your data? Most likely, several hours—during which your application will be unavailable to your users.
You can also switch to a NoSQL (schemaless) database, a distributed database, or a managed cloud-based database, but each of those has its own challenges and trade-offs.
Software architect, author, and educator Ian Gorton suggests a simpler solution: using caching as a way to reduce the load on your database. In Gorton’s deep dive on scaling, he says, “Your friendly database engine should be able to utilize as much on node cache as you care to give it.”
While it’s a simple and useful solution, Gorton cautions that it could also be expensive.
“For data that is frequently read and changes rarely, your processing logic can be modified to first check a distributed cache, such as a memcached server. This requires a remote call, but if the data you need is in cache, on a fast network this is far less expensive than querying the database instance.”
To introduce a caching layer, your processing logic must be modified to check for cached data.
“If what you want is not in the cache, your code must still query the database and then load the results in the cache, as well as return it to the caller. You also need to decide when to remove or invalidate cached results,” he says, which depends on how fresh you need the results to be.
Gorton believes that a well-designed caching scheme can be “absolutely invaluable” for scaling a system.
“If you can handle a large percentage of read requests from your cache, then you buy extra capacity at your databases, as they are not involved in handling most requests,” he says. “This means you avoid complex and painful data tier modifications while creating capacity for more and more requests.”
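Here's a minimal cache-aside sketch in Python along those lines, assuming a local memcached server and the pymemcache client; the product lookup is a hypothetical stand-in for your real database query.

```python
from pymemcache.client.base import Client  # assumes the pymemcache package and a memcached server

cache = Client(("localhost", 11211))

def load_product_from_db(product_id):
    """Stand-in for the real (expensive) database query."""
    return f"product-record-{product_id}"

def get_product(product_id):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode()  # cache hit: the database is never touched
    record = load_product_from_db(product_id)
    # Cache miss: query the database, store the result, and expire it after
    # 5 minutes so stale data is eventually invalidated.
    cache.set(key, record.encode(), expire=300)
    return record
```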
4. Monitoring Is Essential
Imagine your app is performing well and the number of users is slowly growing—and then a celebrity praises it. Suddenly, their fans want to use your app, too, and now you have 2 million new users.
Can your app handle that kind of load? How do you even test for that?
You’d have to generate over 2 million test records and a realistic workload. You’d also have to load and deploy your data set and run load tests, probably using a load testing tool.
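For a sense of what that involves, here's a rough Python sketch that generates synthetic user records to seed such a test. The fields and plan names are placeholders; a realistic workload would mirror your actual schema and traffic mix.

```python
import csv
import random
import uuid

# Write 2 million synthetic user records for a load test
# (the columns and plan names are placeholders for your real schema).
with open("test_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user_id", "email", "plan"])
    for _ in range(2_000_000):
        user_id = uuid.uuid4().hex
        writer.writerow([user_id,
                         f"user_{user_id[:8]}@example.com",
                         random.choice(["free", "pro", "enterprise"])])
```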
Even with a tool, it’s a ton of work that probably won’t yield an accurate result.
Most businesses rely on system monitoring, instead. At a base level, this involves setting up alerts for when memory or disk space runs low, remote calls fail, or other infrastructure problems arise.
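As a rough illustration, a basic threshold check might look like the Python sketch below, assuming the psutil package for host metrics. Real deployments would rely on the monitoring agent and alerting service their platform provides.

```python
import psutil  # assumes the psutil package for host metrics

MEMORY_THRESHOLD = 90  # percent of RAM in use
DISK_THRESHOLD = 85    # percent of disk in use

def check_host_health():
    """Return alert messages for basic infrastructure problems."""
    alerts = []
    if psutil.virtual_memory().percent > MEMORY_THRESHOLD:
        alerts.append("Memory usage is critically high")
    if psutil.disk_usage("/").percent > DISK_THRESHOLD:
        alerts.append("Disk space is running low")
    return alerts  # in practice these would be pushed to an alerting or paging service
```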
As your system scales, you’ll need to adopt a monitoring solution that provides actionable data and insights, allowing you to respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
The good news is that there are several monitoring solutions that provide this kind of observability, and cloud providers have their own monitoring services.
We Can Help
Few companies prioritize scalability when first building an application or software system—and for good reason. It’s difficult, time-consuming work that you don’t need to do right away, if ever.
When scalability does become a priority, it’s often because unexpected demands on the system have caused it to crash or become so slow that it’s unusable.
If you need help scaling your system or application, a software development company like Taazaa may be able to help. Our software architects can guide your development team and expand their bandwidth. We’ll help you get the work done quickly and have your application operating smoothly again.
Contact us today and let’s get started.