Presented by Code Wizards
Code Wizards has completed the largest and most successful public scale test yet of a commercially available backend in the games industry. This news follows the recent release of scale test results for Nakama running on Heroic Cloud. Across three different workload scenarios, the tests reached 2,000,000 concurrently connected users (CCU) each time without issue, and Martin Thomas, CTO of Code Wizards Group, noted that the team could have pushed the limits further if needed.
“We’re incredibly pleased with the outcome of the tests. Reaching 2 million CCU flawlessly is a significant achievement. What’s even more promising is the knowledge that we had the capability to go beyond that number. This isn’t just a technical victory; it’s a game-changer for the gaming community at large. Developers can confidently scale their games using Nakama, an off-the-shelf product, unlocking new possibilities for immersive and seamless multiplayer experiences,” said Thomas.
Code Wizards is committed to helping game companies build top-quality games on strong backend infrastructure. Through their partnership with Heroic Labs, they help clients transition from unreliable or costly backend solutions, integrate social and competitive features into their games, and implement live operations strategies to grow them. Heroic Labs developed Nakama, an open-source game server for online multiplayer games built with Unity, Unreal Engine, Godot, C++, and more.
“Code Wizards has extensive experience stress-testing AAA games with both internal and external backends,” added Thomas.
These tests were conducted using Artillery in conjunction with Amazon Web Services (AWS), utilizing various AWS offerings including AWS Fargate and Amazon Aurora. Nakama on Heroic Cloud underwent similar testing, using AWS services such as Amazon EC2, Amazon EKS, and Amazon RDS, aligning with AWS’s scalable hardware model.
Mimicking real-life usage
To ensure a thorough test of the platform, three distinct scenarios were crafted, each with increasing complexity to simulate real-life usage under load. The first scenario demonstrated the platform's ability to scale to the target CCU. The second simulated varying payload sizes across the ecosystem to mirror real-time user interactions. The third replicated user interactions with metagame features within the platform itself. Each scenario ran for four hours, with the database reset between tests to ensure fairness and consistency.
A closer look at testing and results
Scenario 1: Basic stability at scale
Target
To conduct basic soak testing of the platform and demonstrate the feasibility of sustaining 2M CCU while providing a reference point for subsequent scenarios.
Setup
- 82 AWS Fargate nodes, each with 4 CPUs
- 25,000 clients on each worker node
- 2M CCU ramp achieved over 50 minutes
Each client carried out actions common to all scenarios, such as establishing a realtime socket, alongside actions specific to this scenario, namely heartbeat "keep alive" checks using standard socket ping/pong messaging, as in the sketch below.
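As an illustration of what such a worker client might look like, here is a minimal sketch using the open-source nakama-js client; the server address, key, and device ID scheme are placeholder assumptions, not details of the actual test harness.

```typescript
import { Client } from "@heroiclabs/nakama-js";

// Placeholder connection details; the real test targeted Nakama on Heroic Cloud.
const client = new Client("defaultkey", "nakama.example.com", "7350", true);

async function runSoakClient(deviceId: string): Promise<void> {
  // Common action: authenticate and establish a realtime socket.
  const session = await client.authenticateDevice(deviceId, true);
  const socket = client.createSocket(true, false);
  await socket.connect(session, true);

  // Scenario-specific action: stay connected for the duration of the soak test
  // while the socket exchanges standard ping/pong "keep alive" messages,
  // which the nakama-js socket handles automatically.
}

// Example: one of the 25,000 simulated clients on a worker node.
runSoakClient(`soak-test-device-${Math.floor(Math.random() * 25000)}`);
```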
Result
The baseline for future scenarios was established successfully. Key results include:
- 2,050,000 worker clients successfully connected
- 683 new accounts created per second, simulating a large-scale game launch
- 0% error rate across client workers and server processes, including no authentication errors or dropped connections
CCU for the test duration (from the Grafana dashboard)
Scenario 2: Realtime throughput
Target
The goal was to demonstrate that the Nakama ecosystem can scale appropriately under variable loads. The scenario extended the baseline from Scenario 1 by introducing a more intensive real-time messaging workload.
Setup
- 101 AWS Fargate nodes, each with 8 CPUs
- 20,000 clients on each worker node
- 2M CCU ramp achieved over 50 minutes
The clients performed common actions and additional actions specific to this scenario, such as joining chat channels and sending chat messages at randomized intervals.
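A rough sketch of that additional chat workload, again using nakama-js, could look like the following; the channel name, message content, and interval range are illustrative assumptions rather than the test's actual parameters.

```typescript
import { Socket } from "@heroiclabs/nakama-js";

// Assumes "socket" is an already-connected realtime socket, as in the
// Scenario 1 sketch above.
async function chatWorkload(socket: Socket): Promise<void> {
  // Join a shared chat room (type 1 = room channel).
  const channel = await socket.joinChat("load-test-room", 1, true, false);

  // Send chat messages at randomized intervals to vary the realtime load.
  setInterval(() => {
    socket.writeChatMessage(channel.id, {
      text: `hello from a simulated player at ${Date.now()}`,
    });
  }, 5000 + Math.random() * 10000);
}
```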
Result
The successful run showcased the capacity to scale under load, with impressive metrics:
- 2,020,000 worker clients successfully connected
- 1.93 billion messages sent, peaking at 44,700 messages per second
- 11.33 billion messages received, with a peak average rate of 270,335 messages per second
Chat messages sent and received for the test duration (from the Artillery dashboard)
Note
A data recording issue with Artillery metrics (as reported on GitHub) caused the loss of some data points towards the end of the ramp-up phase but did not affect the remainder of the scenario.
Scenario 3: Combined workload
Target
The objective was to demonstrate Nakama’s performance at scale under primarily database-bound workloads. Every client interaction in this scenario involved a database write operation.
Setup
- 67 AWS Fargate nodes, each with 16 CPUs
- 30,000 clients on each worker node
- 2M CCU ramp achieved over 50 minutes
- The authentication process involved the server setting up a new wallet and inventory for each user, containing 1,000,000 coins and 1,000,000 items
Clients performed the common actions and, at random intervals, invoked scenario-specific server functions such as spending coins from their wallet or granting items to their inventory; a sketch of one such function follows.
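To make concrete why every interaction in this scenario writes to the database, here is a sketch of what one such server function could look like in Nakama's TypeScript server runtime; the RPC name, currency key, and payload shape are hypothetical, not the functions used in the test.

```typescript
// Hypothetical RPC: spend coins from the calling user's wallet.
// Each invocation performs a wallet update, i.e. a database write.
const rpcSpendCoins: nkruntime.RpcFunction = function (
  ctx: nkruntime.Context,
  logger: nkruntime.Logger,
  nk: nkruntime.Nakama,
  payload: string
): string {
  const input = JSON.parse(payload || "{}");
  const amount = Math.abs(input.amount || 1);

  // A negative changeset value debits the wallet; the ledger entry is recorded.
  nk.walletUpdate(ctx.userId, { coins: -amount }, undefined, true);
  return JSON.stringify({ spent: amount });
};

const InitModule: nkruntime.InitModule = function (ctx, logger, nk, initializer) {
  initializer.registerRpc("spend_coins", rpcSpendCoins);
};
```

A worker client would then call such a function at random intervals, for example with `client.rpc(session, "spend_coins", { amount: 10 })` in nakama-js.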
Result
The transition to a database-bound workload did not affect Nakama’s performance, with impressive 95th percentile results:
- Clients sustained a workload of 22,300 requests per second once fully ramped up, with minimal variation
- Server request processing times remained below 26.7ms for 95% of the scenario duration, with no unexpected spikes
Nakama overall latency, 95th percentile processing times (from the Grafana dashboard)
For more details on the testing methods, results, and additional graphs, contact Heroic Labs at contact@heroiclabs.com.
Supporting great games of every size
Heroic Cloud is trusted by thousands of studios globally and serves over 350M monthly active users (MAU) across various games. To discover more about reliable game backends that power some of the best games in the industry, check out Heroic Labs case studies or visit the Heroic Labs section on