Since releasing phpSAS into production, we wanted to explain a bit more about how it works, specifically in terms of the performance differences.
To start off, phpSAS is much like SRAS in that it stores files directly to disk, then organizes the data using SQL. It was built to support both PostgreSQL and MySQL/MariaDB, so an existing SRAS database can be dropped in without any issues. Furthermore, we have noticed that PHP 7 queries the database far faster than Ruby 1.9.3, which was the last working version that could run SRAS. In simple terms: a request comes into phpSAS, which queries the SQL database for the information and location of the asset, then serves the asset back to the grid/simulator/user.
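The lookup-and-serve flow above can be sketched as follows. This is a minimal illustration in Python (phpSAS itself is written in PHP), and the table and column names (`assets`, `id`, `path`, `content_type`) are hypothetical placeholders, not the actual SRAS/phpSAS schema:

```python
import os
import sqlite3
import tempfile

# Hypothetical schema: the SQL database stores only asset metadata and the
# on-disk location; the asset payload itself lives in a file on disk.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assets (id TEXT PRIMARY KEY, path TEXT, content_type TEXT)")

# Store one fake asset on disk and register it in the database.
asset_dir = tempfile.mkdtemp()
asset_path = os.path.join(asset_dir, "texture-0001")
with open(asset_path, "wb") as f:
    f.write(b"fake-texture-bytes")
db.execute("INSERT INTO assets VALUES (?, ?, ?)",
           ("texture-0001", asset_path, "image/x-j2c"))

def serve_asset(asset_id):
    """Look up the asset's location in SQL, then read the file from disk."""
    row = db.execute(
        "SELECT path, content_type FROM assets WHERE id = ?", (asset_id,)
    ).fetchone()
    if row is None:
        return 404, None, None  # unknown asset
    path, content_type = row
    with open(path, "rb") as f:
        return 200, content_type, f.read()

status, ctype, body = serve_asset("texture-0001")
print(status, ctype, len(body))  # 200 image/x-j2c 18
```

Because the database only answers a single indexed lookup per request and the heavy lifting is reading a file, the CPU cost per request stays very low, which matches the load figures reported below.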
Having run it in production on several different types of grids, from low- to high-load environments, we want to report exactly what we are seeing on our end in terms of performance. First, let's look at the raw statistics.
| Grid Active Users | Number of Assets | Memory Usage (phpSAS) | Memory Usage (SRAS) |
|---|---|---|---|
| | 902,113 (84.2 GB) | | |
| | 534,052 (48.7 GB) | | |
| | 3,308,648 (148.9 GB) | | |
As you can see from the data above, the statistics speak for themselves. The new asset server scales significantly higher, and the ability to put it behind a load balancer allows it to handle even higher spikes of traffic. We have not included CPU load because there is really no significant CPU load to speak of: neither SRAS nor phpSAS ever spiked above 2 percent, even during the heavy traffic of loading IARs or OARs from multiple simulators.
In terms of end-point performance for the client, the following chart should give you some idea of the performance compared to the other asset servers. Please note that your mileage may vary depending on hardware and the grid's connection. We tested this with fairly simple mesh, textures, and objects. The viewer was run on ultra settings on a machine with 8 GB of RAM and an NVIDIA GRID card, on a 1 Gbps connection provided by LiquidSky.
If you are interested in seeing the difference for yourself, head over to ZetaWorlds and create a local account. While you could use the hypergrid, you will most likely not see a great difference in performance, as load times depend heavily on the performance of the grid you are teleporting in from.
Zetamex Network has been a big user of SRAS for a long time. We didn't make the switch to FSAssets because its system resource usage was still much higher than that of SRAS. With the most recent releases of many mainstream Linux distributions, it has become more and more noticeable that SRAS is showing its age; we were already using software to run SRAS against older versions of its dependencies just to get it to work at all.
We realized this was just not cutting it anymore, and we really needed something new to replace the outdated SRAS and to make sure we could support it into the future. We reached out to freelancers to assist us in converting or rewriting something backwards compatible with SRAS, since FSAssets is not directly backwards compatible but could only be migrated to with some effort. After shopping around, we found a freelancer already deep inside the OpenSimulator community who has done custom work for other grids, and with whom we have worked in the past on a few smaller projects for some of our clients.
We decided to use PHP for the replacement, as we have already built almost the entire Zetamex Network back-end on it. Furthermore, we made sure it was built without relying on any dependencies other than PHP itself. This keeps it slim, lightweight, and easy to future-proof: having it all written from scratch makes future upgrades easy. The development took a while, but this past week it was rolled out and is now running on all hosted and managed Zetamex Network grids. We named it phpSAS (PHP Simple Asset Service), echoing the naming of SRAS (Simple Ruby Asset Service).
What makes it different is that we run it on the latest version of PHP, PHP 7, which is the fastest version of PHP to date. The proxy layer for serving assets and handling the load is NGINX, making it even faster at storing and serving assets and allowing it to handle thousands of requests without causing any major load spikes. Best of all, we are now able to load balance the asset server like never before, because assets are served just like a website would be. This means we can put it behind large CDN providers with minimal effort, which is our next project.
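As a rough illustration of that setup, an NGINX front end can proxy asset requests to the PHP backend and cache the responses so repeat requests never touch PHP at all. This is a sketch, not our production configuration; the hostname, paths, ports, and cache sizes below are placeholder values:

```nginx
# Cache served assets on disk; sizes and paths are illustrative only.
proxy_cache_path /var/cache/nginx/assets levels=1:2 keys_zone=assets:10m
                 max_size=10g inactive=7d use_temp_path=off;

server {
    listen 80;
    server_name assets.example.com;        # placeholder hostname

    location /assets/ {
        proxy_cache assets;
        proxy_cache_valid 200 7d;          # assets are immutable, cache aggressively
        proxy_pass http://127.0.0.1:8080;  # placeholder address for the PHP backend
    }
}
```

Because assets are plain HTTP responses at this layer, the same `location` block can sit behind any HTTP load balancer or a CDN with no changes to the asset server itself.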
Constant innovation and planning for the future is nothing new for us, but while we forge ahead our equipment has always struggled to keep up. In the last year we have drastically changed that, and we now have plenty of computational resources to test and develop new features with. However, since we are not a 24/7 company, some of that new equipment sits idle for a couple of hours each day while we recharge our own batteries. While shutting down and rebooting all these servers and systems each day would be the best option for the environment, it would also mean even more time spent not developing.
So the logical conclusion is to use the downtime for something useful, and so we have set up a Folding@Home machine within our test system. Using the spare computational power of the cluster during the night, it helps the search for a cure to cancer. You can see the statistics in the blog's sidebar, which also has a direct link to the first of the machines we set up this way, along with our team number in case you want to join us. In addition to helping cancer research, the software also lets us simulate synthetic loads during test runs, which gives us valuable data on the stability and reliability of whatever we are testing. Finding a cure for cancer one feature at a time, if you will. We hope to have sparked some interest in Folding@Home, and maybe we will see you on our Folding team in the future.