Applying The Wrong Concept The Right Way

In the ongoing, albeit somewhat irregular, series of technical posts we once again bring you a deep dive into a topic that is not often touched upon. This time the focus is on applying microservice and clustering concepts to a piece of software that does its best toddler-temper-tantrum impression of not wanting to do its homework.

Microservices

In the world of “web applications” the containerization and clustering of applications through various concepts, layers and confusing config files is a landscape full of wonder and pretty explosions. For most of them, let’s call them websites from now on, since that is what they are, these setups are not all that useful, since they mostly apply to projects of vast scale. Nonetheless some still fall into the trap of pretty buzzwords and promised gains. Supposedly that is easier than blaming oneself for the code not being optimized or the hardware being overloaded as is. Microservices, in most cases, describe the concept of splitting a large application into smaller parts, each handling a specific task: given the input, producing output. This goes along with clustering these parts across vast networks to position them closer to the user and scaling them as markets grow or shrink. For large platforms with thousands of users this makes the most economical sense, since the solution in the past was to simply slap the entire app onto ever-growing hardware, which did not scale performance and cost all that evenly. Thus the concept of distributing load and splitting things into the smallest parts to make them more efficient has helped the internet grow and allowed certain companies and platforms to make billions while slashing their IT budgets.

Load Balancing

Not a new concept by any means, but an ever more important one these days. A single point of ingress for data into an application serving a wide range of potential sources means potential bottlenecks on the horizon. Equally, producing the output from that generally results in a cascade of ever slower processing until you hit the inevitable timeouts. Balancing this load by means of microservices or caching mechanisms is common practice, and not just in the world of websites. Any type of application, down to the very browser you are reading this through, subscribes to the concept of load balancing in one way or another. At the core of the solution is spreading the load across any sort of multiplication that does not rely on other parts to process the data. In most programming languages this is known as asynchronous processing and generally tags along with the object-oriented programming style that allows it to work in the first place. The idea, as a concept, is to allow all parts of an application to run and finish on their own time without causing the whole thing to grind to a halt, even if that, in the name of keeping the end results in sync, sometimes cannot be avoided either.

Where does OpenSim come into this though?

This is where it gets really interesting, because OpenSim has been built from the ground up to split individual processing into parts that can run on their own. These individual services are often asynchronous as well and can even be split out and distributed. This design allows for applying both the concept of microservices and the load balancing that comes with it. However, that is easier said than done. As it turns out, the interconnection between the services, for the purpose of once in a while making sure all that asynchronous data actually makes any sense at all, is not a straightforward affair. More so since changes and new features demand direct connections to other services that absolutely cannot wait for anything else to go on.

In the past there were attempts to resolve this by simply creating another OpenSim process running as a sort-of backup, receiving the same data and processing it independently. Should its result arrive faster than that of the main process, it would be used instead. This went along with splitting services out into their own instances as well, but the resulting complexity and the requirement to test each new change so as not to severely break the chain of data processing meant this project never really went anywhere beyond a working prototype.

That’s not to say the attempt itself did not emphasize the need to maintain the service-based setup of OpenSim. Thankfully, for the less complex part of providing the main services that connect the assortment of simulators into a cohesive world, this has been maintained. What is commonly referred to as the Robust services generally still has the ability to be split and even run as copies of each other. This leaves the door open for applying both the concept of microservices and load balancing to it. Though, as already mentioned, there are a few things that managed to become rather large pitfalls for anyone looking to attempt it.

Robust, a simpleton with an attitude

To begin let’s go over the goals and requirements.

  • Split as many services contained in Robust into their own instances
  • For services with a potential to overload from data ingress or processing spawn multiple instances and distribute the load between them
  • Set up connections to each instance in a manner that allows for effective load balancing and reduces the complexity of setup for simulators connecting to them

To achieve these goals we can use a few methods already available, some of which require a bit of tinkering, plus some external systems without which nothing would work. Let’s go over each part.

Robust

With the aforementioned splitting in mind, the basic configuration file for a single Robust instance already has a list of the services it contains as well as their definitions further down below. All we thus have to do here is select the services we want to run in each instance and make sure in the end we have instances for all of them. However, this idea rather quickly gets thrown out the window when looking at the actual service definitions. The problem sits in the connections services have with each other. While a lot of them point to one another via either a local service definition or an external connector, there still exist some that flat out assume a copy of the service is running in the same instance. So the difficulty goes up a notch in trying to find the services that have to go together to share data.

Connectors

Most connected services refer to other services via the direct connection established over the addins present as part of the Robust system. We can see these as DLL files describing each service. However, in order to allow multiple instances, or indeed other parts of the entire software, to communicate, there exist Connectors. These are also DLLs, but their setup is somewhat different in that they provide a remote-bound connection to a service defined not by the addin, but by a URL. This means we can change our service definitions to these Connectors to allow them to connect to a service running in a different instance.
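As a hedged sketch (exact section, DLL and connector names vary between OpenSim versions, and the hostname and port here are placeholders), the difference between a local service definition and a Connector pointed at another instance looks roughly like this:

```ini
; Instance A: runs the asset service itself (local definition)
[AssetService]
    LocalServiceModule = "OpenSim.Services.AssetService.dll:AssetService"

; Instance B: reaches the asset service on instance A via a Connector
[AssetService]
    LocalServiceModule = "OpenSim.Services.Connectors.dll:AssetServicesConnector"
    AssetServerURI = "http://assets.example.com:8003"
```

The section name stays the same; only the module behind it changes from the service addin to the remote-bound Connector DLL.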

Siblings

Splitting everything up into pieces is one part of resolving issues created by overloaded services, but eventually even that is no longer enough to handle the influx of data. As such, applying the idea of load balancing by way of creating copies of an application becomes a requirement. Unfortunately this presents an issue when we want to make sure the individual copies are still able to share data with other parts, whether in the form of connecting multiple services to a single dependency or the other way round. This is where we have to resort to external software to provide a way to group multiple instances of the same service under a common umbrella through which we can establish connections with them.

Includes

When attempting to set up a vast array of instances, each requiring its own little configuration changes to interconnect properly, we quickly run into less of a technical issue and more a case of trying not to get brain freeze in the process. Writing a full configuration for each node, requiring hundreds of lines each time to provide all the necessary information for it to run, is tedious and can easily produce mistakes. Thankfully this is something that has already annoyed at least one person before, and to our advantage this person has done something about it. Configurations are capable of loading data from other files and combining them into a fully qualified instance configuration. This means we can configure each service once for connecting locally and remotely and simply mix and match the required parts via the architecture includes. We can now simply select what an instance is meant to run as a local service and what it should connect to remotely.
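To illustrate, OpenSim-style ini files support `Include-` directives that pull other files in, so a per-instance configuration can shrink to a handful of lines. The file names below are hypothetical:

```ini
; robust-asset-1.ini - a minimal per-instance file built from shared fragments
[Startup]

; constants and settings shared by every instance
Include-Common = "config/common.ini"

; services this instance runs locally
Include-Local  = "config/local/asset.ini"

; Connectors to services hosted by other instances
Include-Remote = "config/remote/grid.ini"
```

Any key starting with `Include-` is treated as a file to merge in, so the split between “common”, “local” and “remote” fragments is purely an organizational choice.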

Hacking

This is where it gets complex. In order to reduce the load created by asking the same questions over and over again, some services rely on caches. These will cache a request for certain data, allowing it to be delivered without needing to retrieve it from data storage. Unfortunately these caches are localized to the specific service; if we then attempt to multiply this service, there is a chance of cached data corrupting actual data entered on a sibling. To combat this issue we have to go deep into OpenSim, find the caches and either remove them entirely or change their behavior to not be in use when multiple instances of a service are being run. In this case the better and more compatible option is to look for each part of the code that either requests or enters data into the caches and make these actions dependent on a flag that either allows them or not, with the latter falling back to retrieving or storing data in the database directly, as if the cache had no entry for it.

Long Term

As changes to main parts of OpenSim are still being made to update some of the ancient standards used when it was originally conceived, and new features keep requiring additional code, the long-term stability of all this is still in question. Changes already made to some parts do cause some instability and require long-term testing as well as further changes to mitigate. As such, this setup will likely require further “hacking” and even changes to the setup itself to account for changing service relations. As of yet it is unclear whether changes to the service interrelations will be made to retain or even enhance the ability to split each service, but we certainly hope so. Increasing data sizes and ever more growth will test the infrastructure, and the more a setup can be spread out and its load distributed among the parts, the more solid it will be in the future. As with everything, it requires testing and more testing and ever more testing to identify issues, but as OpenSim is still in development that is frankly a given constant already.

The gritty bits

Having completed the crash course in Robust setup let’s create a hypothetical situation realistic enough to warrant creating a solution for.

Say we have to deal with over 10,000 users logging in throughout the day, each having thousands of items in their inventory and being an overly active member of the community, chatting and roaming the world with vigor. How do we handle the resulting influx of hundreds of requests per second?

Let’s go over each part.

1. Nginx

Nginx is a webserver with load balancing capabilities through the use of a proxy setup. This sounds complicated, but is actually relatively easy. What we need to do is set up a hostname for each individual type of service we want to run instances of. Then we pass requests arriving at these hostnames on to a set of instances via the ports those instances use. This takes the form of server definitions with a proxy pass to the upstream ports used by the instances.

An example:
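A minimal sketch of such a setup (hostnames and ports are placeholders, and the exact proxy headers depend on your environment):

```nginx
# Pool of three copies of the same Robust instance type
upstream asset_pool {
    server 127.0.0.1:8003;
    server 127.0.0.1:8013;
    server 127.0.0.1:8023;
}

server {
    listen 80;
    server_name assets.example.com;

    location / {
        # nginx distributes requests round-robin across the pool by default
        proxy_pass http://asset_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

One such server block per service hostname is enough; adding another copy of a service is then a one-line change to the upstream.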

We can do this for all instances, multiple or singular, passing everything over a central port, thus making the configuration of simulator connections relatively easy. Nginx handles routing the requests in a somewhat round-robin style. This means it is not directly aware of the load placed on each copy, but since we are changing the receiver of each request to a different copy, that is likely enough. If necessary we can always add more copies.

2. Robust

In order to make it easier to run a large number of copies, instead of multiplying the binary as a whole we simply treat it as a template to spawn copies from. This requires providing each instance with the information of where its configuration should be loaded from. We do this by adding the inifile parameter to the execution command, pointing it at a single file containing the aforementioned definitions and includes.

An example:
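A sketch of what spawning two copies from one template binary could look like (paths and file names are hypothetical):

```shell
# Both instances run from the same template binary...
cd /opt/robust-template

# ...but each one loads its own combined configuration
mono Robust.exe -inifile=/etc/opensim/robust-asset-1.ini
mono Robust.exe -inifile=/etc/opensim/robust-asset-2.ini
```

The only difference between the copies is the file passed via -inifile, which in turn sets the port and the service selection for that instance.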

We configure each service as normal, making sure to use the Connectors for the remote counterparts. As mentioned above, this structure looks confusing at first, but is actually a lot less work, as we simply combine what we need rather than writing the config sections out in each file. Organizing the local connectors for services included in the specific Robust instance we configure, and the remote ones connecting to other Robust instances, into folders makes it easier to see what’s what.

3. Simulators

Connecting a simulator to this setup is remarkably easy given the complexity of what it is connected to. For the most part we can use the hostnames to connect the simulator services to their Robust providers. Only for select services, GridInfo in particular, is a more direct connection required. This also goes for external asset servers, which we hope will become more common, as assets are the second biggest bottleneck in OpenSim.

A rough example of the setup:
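Sketching the simulator side (section and key names follow the usual GridCommon.ini layout; the hostnames are placeholders):

```ini
; GridCommon.ini fragment - services addressed by hostname, not port
[AssetService]
    AssetServerURI = "http://assets.example.com"

[InventoryService]
    InventoryServerURI = "http://inventory.example.com"

[GridService]
    GridServerURI = "http://grid.example.com"

[GridInfoService]
    ; GridInfo is one of the services that still wants a direct connection
    GridInfoURI = "http://grid.example.com:8002"
```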

Configuration depends on how you set things up and what type of service and instance split is done. As mentioned, we don’t need to worry about setting up specific ports for each service, as the individual parts are proxied through to their respective endpoints already, which also handles balancing the load. The identification is no longer the port, but the hostname itself.

4. Runtime Environment

This section is somewhat optional, but may be of value in the future. A big issue with setting up so many individual services is handling them when restarts and changes are required. As we are dealing with a program that runs independently, we can simply push it to the background and nuke it whenever a restart is desired, but this might incur data loss. A better solution is providing a separate runtime environment for each instance. This can easily be accomplished under Windows by simply stuffing the window into a corner and forgetting about it, but as Windows is not a recommended platform to run services such as OpenSim, on Linux this is a bit more difficult. It is possible to simply send the process away as mentioned, but then there is no way to interact with it or get it back other than sending data to it, which gives us no feedback. The better option is to use runtime environments, which are plentiful on Linux, such as Docker or LXC for containers, or more simply things like “screen”. The latter provides “windows” we can select at will to interact with each instance, both to send commands and to view the process working. Which one of these works best depends on familiarity and what level of separation you want for each service.
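As a sketch, using “screen” this could look like the following (session names and paths are made up):

```shell
# Start each instance detached in its own named session
screen -dmS robust-asset-1 mono Robust.exe -inifile=/etc/opensim/robust-asset-1.ini
screen -dmS robust-grid    mono Robust.exe -inifile=/etc/opensim/robust-grid.ini

# List the running sessions
screen -ls

# Attach to one instance to send commands or watch it work,
# then detach again with Ctrl-A d without stopping the process
screen -r robust-asset-1
```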

The Grand Solution

To test this setup we have created a testing environment running this setup:

  • As previously mentioned all routed through Nginx to reduce the complexity of connecting simulators.
  • Each instance having only minor changes to its configuration in the realm of setting the port to use.
  • External and internal routing for service connections is also done with the proxy to reduce complexity of service interconnection and take advantage of full load balancing of all requests.
  • Configurations based primarily on includes rather than full configuration files reducing complexity and clutter
  • Spawning instances of a template binary to reduce complexity of upgrades
  • Retaining simple configuration for simulators to services without the need to specify individual ports
  • Splitting services logically based around encountered load and retaining services that have no ability to remotely connect to integrated services
  • Minimal changes to OpenSim itself

This is obviously not the solution to all potential problems and there is no guarantee future changes won’t break a setup as complex as this. Certainly we hope for the opposite since there is only so much a single fully qualified instance of Robust can do on its own and hitting that limit is not a pretty sight.

Applying the concepts of microservices and load balancing to OpenSim may seem wrong, and there are certainly many obstacles in the way of doing so, but the core of it was always part of the idea behind the service-based setup of Robust, or even OpenSim as a whole. Thus these concepts can work for it as well, despite the issues that exist due to inter-service dependencies and caching setups. It most certainly has a ways to go to truly embrace them, but it is already possible to observe the positive aspects. Whether it is a setup as complex and distributed as shown here or simply splitting out one or two services, the future undoubtedly lies in utilizing them.

Next to the brief mention of this capability on the official OpenSim wiki and some snippets of configuration options that can be found on the web, this marks the first time it has been fully documented and tested. In the pursuit of fully dissecting and testing it we share the interest with a number of people, who have provided information, time and effort in testing as well. It shows the strong community spirit often associated with opensource projects, and we hope to propagate this to anyone reading this article. We want to thank Gimisa Cerise, who was instrumental in kicking this project off by providing the initial basis of configuration options and pointers to information, and who has been pulling apart the hidden and complex inner workings of OpenSim for a long time now. Equally we have to extend thanks to the OpenSim team for providing assistance in tracking down interconnected modules. The continuous effort put into the OpenSim project by everyone involved makes things like this possible in the first place; their ongoing support and work toward the project drives it forward and we are happy to be a part of it, contributing where possible.

We will certainly continue testing and pushing the boundaries of OpenSim to make sure it is prepared for the future and hope this insight into the capabilities it has will provide some positive impact on the metaverse as a whole.

10 Years Of Performance

From the perspective of the average user, the intricacies of software development may seem akin to magic or some form of science. The reality is often much less glamorous and filled with frustrations. OpenSim as software is now over 10 years old and in that time a lot has changed. Through new features, improvements and, most importantly, bugfixes it has evolved to now support the continuous growth of so many communities. What was once a large team of enthusiasts has largely turned into a handful of individuals still actively working to enhance the software. Naturally, development has slowed down and the focus has shifted somewhat to improving the existing features.

While squashing bugs is an important part of development, another is probably even more important: performance. As the metaverse as a whole grows and features demand ever more performance to deal with new shiny things such as mesh, whether you are greeted by a slideshow or a fluid movie matters ever more. Especially in times when culture shifts towards demanding engagement in video games, not just pretty pictures.

This is generally an incremental process, but OpenSim is more than the parts it is made up of. As software, it relies on a framework to avoid re-inventing the wheel and to handle common things without creating even more programming code. As this framework is still in active development, it naturally improves as well, bringing new features, performance enhancements and new concepts to the table. The framework underneath is .NET/Mono, which is now entirely owned by Microsoft. Previously the Mono part was in the hands of Xamarin and independent. As a project it aimed to provide a .NET environment for the Linux kernel. Not that long ago, however, Microsoft bought Xamarin. What you take away from that acquisition is still up for debate, but it has so far brought some advancements to Mono.

OpenSim has somewhat embraced this change and eventually switched to newer versions of the Mono framework as its basis. Driven mainly by the new features and performance of Mono, this change has brought many things with it. From the initially bumpy start, most of the teething issues have been resolved, to the joy of anyone now working with OpenSim. Adjustments to the code and the application of new concepts have meant that OpenSim performance has improved significantly. Especially in the active development branch this improvement is quite striking to see. We recently saw this first-hand.

ZetaWorlds, our in-house grid, recently turned 8 years old, and to celebrate that occasion a large party was held on one of its regions. This saw nearly 40 people attending the celebrations at some point, which puts a not insignificant load on the various aspects of OpenSim. It was thus rather reassuring to see that the performance was not only stable but, in comparison to what it would have been 10 years ago, a lot better than expected. Short of staging an actual test to find the breaking point of it all, it did serve to illustrate how far the improvements have come in a decade.

A measure of performance in OpenSim is generally bound to the frames per second the simulator can produce running the region itself. In an ideal scenario this would be 55 frames per second, which serves as the maximum and stable point. Any number lower than this constitutes a backlog situation, where there are more things to process between each frame than can reasonably be achieved in the allocated time. Situations that produce a lower number can often cascade further, essentially grinding everything to a halt. On the side of the actual hardware running OpenSim there is no real measure of performance; instead we are looking at the amount of resources consumed by the OpenSim instance. This is the same for any process running on a computer, with memory and processor time consumption being the important metrics to monitor.

Not too long ago the general consensus was that each avatar on a region would consume 150-250 megabytes of memory and a good 10% of processor load. As resource usage increased, the point at which OpenSim became unable to keep up handling all those resources, and thus had to reduce the amount of frames it could produce per second, generally arrived around 30-40 avatars. This would often mean many gigabytes of memory usage and nearly filling most consumer-grade processors of the time. So the following performance improvements are rather striking to see.

It goes without saying that this was an almost ideal situation, with the avatars not engaging in a contest of who can throw the most physical objects at each other, which would likely not have had the same results. It is also important to mention that, especially when loading a new avatar onto a region, much like with any significant change, there are times OpenSim chokes on the amount of data to process. This often results in temporary freezes, which resolve themselves, but have a negative impact on the average frame time, as is clearly visible.

While there is still a ways to go, as is evident from the processor usage being pretty high, an event like this being possible without causing anywhere near the expected resource usage is a massive improvement. From these metrics it seems reasonable to assume that even twice the number of avatars should, given they are not trying to have a bumper-car session, be within the realm of possibility.

Much like ZetaWorlds, we are looking forward to what the future may hold and hope the quest to improve performance and reduce resource utilization will make even larger events possible in the future. We thank the ZetaWorlds community for making this, in a way unscheduled, performance test possible and, obviously, for celebrating 8 years of success with us.

 

PayPal Automatic Payment [Updated]

PayPal is ever evolving, in both good and bad directions. As you may have noticed, if you selected automatic payments via PayPal, all of these automatic payment plans have been cancelled by PayPal. The information given is sparse, but appears to point at a mismatch with the prorata billing cycle we use, creating invoices on the first of the month rather than on the same day the service order was received. PayPal attempts to collect the automatic payments from the first of the month to the date of the order, which results in numerous failed attempts, as funds are only released close to the order creation date. We believe PayPal has thus elected to cancel all automatic payments that received too many failures in the past. We have tried to reach out to PayPal for more information as to how to prevent this in the future, but are still awaiting a response. For the time being we have disabled automatic payments through PayPal to avoid potential further issues from the failed attempts and to stop the creation of new automatic payments that would be cancelled immediately. We apologize for the inconvenience this may cause; we are working on a solution.

 

[Update 21.10.20]

We have heard back from PayPal with the information that the requests sent to their API contain the correct field for changing the prorata dates, but their API has bundled these calls into a new section. This means we have to wait for our billing system provider to adjust the PayPal gateway to send the data in the correct format. As the changes are fairly recent, it seems the update to the gateway is still in progress and the current version of the system does not yet contain the changes needed for prorata to work properly on subscriptions. We will continue to monitor this and have expressed to the provider our requirement for this system to work, alongside many others expressing the same need. We expect this to take a few more weeks to be fully resolved and will re-enable subscriptions as soon as we can verify the gateway is working properly for prorata subscriptions. We apologize for the inconvenience.

Service Provider Regulation For Enhanced Security

As an EU company we are bound to abide by the regulations set forward by the EU as implemented by our country of residence. In some cases these regulations are already reflected by local law and can even exceed the EU requirements. In the latest round of regulations, passed at the beginning of 2020 to enhance the security of online banking and money transfers in general, the EU has set forward rules that are supposed to make it more difficult to gain unauthorized access to banking accounts, through a secondary form of verification.

In reality the implementation of these regulations falls short in a lot of areas, and a lot of banking institutions and others handling money transactions are only partially following them. This is despite the clear outlines for what such an implementation is required to provide. Even more concerning is that these implementations often disregard the specifics of allowing the willful dismissal of participation in the additional verification step. In turn this results in pages and pages worth of complaints against such institutions, both for making the steps mandatory despite their optional status and for the lack of complete adherence to the regulation in general, leaving, in the worst of cases, anyone unable to procure or provide verification without access to their funds. The latter is a downright case of discrimination against disabilities in a lot of cases.

We have elected not to implement these regulations ourselves, not that we need to, as we do not handle sensitive banking information on our end. We do, however, see changes in how our customers pay for their services, which are undoubtedly the result of how the regulations are implemented by the various payment providers we work with. While not strictly our responsibility, and far out of our reach to even do something about, we still want to apologize for any inconvenience you may experience paying for your services. We have lodged formal complaints with at least one of our payment providers in regards to their failure to properly implement these new EU regulations in a manner that both satisfies the guidelines set forth in the regulations themselves and abides by disability legislation.

If you are still having issues with processing your payment with one of our payment processors please do not hesitate to contact us via ticket.

OpenSim Archive – A call for contributions

Zetamex Network is proud to be the sponsor of the OpenSim Archive, a project that aims to centralize a library of resources for and around OpenSimulator, for the benefit of creators and those who want to become one. Based around the idea of sharing resources and knowledge, one of the core principles behind open source software and the entire FOSS movement, and of not re-inventing the wheel for every car that’s made, this library aims to aid content creators and everyone else alike. We hope the project will grow and creators will contribute to the enhancement of the worlds we spend our virtual life in. The project is made available to everyone free of charge and even includes things such as old software, compiled OpenSimulator binaries and other useful items. Zetamex Network provides the storage and bandwidth for this project and will handle submissions.

If you are a creator or know of publicly available resources with open licenses, we ask you to get in contact with us to contribute to the library. You can find a contact point on the right, just click the big yellow button or send us an email!

 

Expanding Reach – New Payment Gateways & Updated Transaction Fees

The world is a very diverse place with lots of different attitudes to monetary transactions. Philosophical aspects aside, the main interest for us as a company in that is the diversity of currencies and ways to turn said currencies into goods and services. So in the pursuit of bringing not just more harmony to the world, but also providing ease of access to our services, we have added new payment gateways. Additionally we have re-enabled an option on an already existing gateway and upgraded another. Without further ado, here they are, with the shiny new one leading the front:

Paysafe

Paysafe offers debit-like pins that can be purchased with a certain monetary value attached. These pins can then be used to pay for goods and services online if other payment methods are not available. They have been around for quite some years now and are often the only way for people without access to a bank account to pay for goods and services on the internet. Given they can be purchased with cash they are also quite popular with young people who may just get an allowance handed out in cash. We have partnered with Paysafe to bring this payment method to our brands so that more people can enjoy them.

Stripe SEPA

SEPA is a direct debit system that pulls money directly from the associated bank account. It is quite popular in Europe because, once set up, it is almost hassle-free, and as long as the bank account has the appropriate funds on it, customers don’t have to worry about their bills not being paid. With the worry of paying bills on time out of the way, there is more time to focus on the important things in life.

PayPal Subscriptions

We had previously disabled this functionality due to problems with overcharges and the lack of flexibility in adjusting the subscription when services were changed. These problems have not completely gone away, but recent changes to our billing system have advanced the detection of problems in this regard, so we feel confident enabling this function once more. With PayPal Subscriptions customers can subscribe to their monthly bill and have it automatically paid without re-authorization. This, much like direct debit, reduces the complexity of monthly payments for our customers.

 

Transaction Fees

A very important change that comes along with adding new payment gateways is that we are updating our transaction fees. Previously a flat 5% rate was applied to every invoice customers received. This is now changing to a structure that applies a different fee depending on the gateway used. It reflects the contractual conditions we have with the gateway providers and more accurately covers the additional costs we incur for each gateway. Invoices are automatically updated when a different payment method is selected, so customers can see directly which one is better for them. We understand that, due to many different circumstances, some payment methods are less appealing than others, so we have tried to reduce the transaction fees where possible. The new transaction fees are as follows:

 

Gateway            Fee
PayPal             5%
Stripe             3%
Paysafe (new)      15%
Stripe SEPA (new)  3%
Bank Transfer      7%
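To illustrate how the per-gateway fee changes an invoice total, here is a minimal sketch in Python using the rates from the table above. The dictionary and function names are purely illustrative and not part of our actual billing system:

```python
# Illustrative only: per-gateway transaction fee rates from the table above.
GATEWAY_FEES = {
    "PayPal": 0.05,
    "Stripe": 0.03,
    "Paysafe": 0.15,
    "Stripe SEPA": 0.03,
    "Bank Transfer": 0.07,
}

def invoice_total(subtotal: float, gateway: str) -> float:
    """Return the invoice total after adding the selected gateway's fee."""
    fee_rate = GATEWAY_FEES[gateway]
    return round(subtotal * (1 + fee_rate), 2)

print(invoice_total(20.00, "Stripe"))  # a 20.00 invoice paid via Stripe costs 20.6
```

Switching the same 20.00 invoice from Stripe to Paysafe would add 3.00 instead of 0.60, which is exactly the difference the automatically updated invoice shows.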

 

Additional payment methods may come in the future; customers with special payment requirements can of course contact us in this regard as always. We hope these new payment methods will reduce the hassle of monthly bills and provide alternatives for those without access to the previously available methods.

The Informal Checklist For Grid Owners

Starting your own grid can be a dream, but like any dream it has the potential to turn into a nightmare. We often compare grids with restaurants or other customer-centered businesses, with the reasoning that both share a similar requirement – that a specific culture needs to exist for them to succeed. There are many things to consider and even more things that may not be on everyone’s radar.

It’s a business

“But I am just doing this for fun?!” That may be so, and there is inherently nothing wrong with that. The problem comes when you let people sign up with you. Here in Germany, anyone offering signup to a service, free of charge or paid, makes themselves at least partially liable for what goes on within that service. That is, until you set up a legal agreement with the user making them aware of what you are liable for and what you cannot possibly control. Treating a grid as a business, even if you are only running it as a hobby, is vital when you allow people to sign up.

This means first and foremost that a proper Terms of Service agreement and a set of rules for your users to follow are key to keeping some sort of order (and your sanity) intact. You and most of your users may tend to disregard such legal matters (after all, what really is in there apart from lots of legal-speak and the odd joke about monkeys?), but when things go wrong this can quickly get dicey. These agreements regulate why, how and who is liable for loss of data or information and for breaches of privacy or law. Without them you, the grid owner, are fully liable for it all, because it happened in your “house”. Jurisdictions vary greatly on this, but it is important to consider the stricter laws of countries you may gain users from, as well as the laws of your own place of dwelling, the locations of your hardware and the software licenses you use. Not being in the clear in this department can cost dearly and may even have an impact on your personal life.

Time is money

While you may not value your hobby as much as others whose job it is to keep the world turning, those others may have a different outlook on that. Most pieces of hardware out there doing the dirty work of keeping your grid up and running come with monthly costs that need to be handled. Add to this the costs for licenses and software (not to mention actual employees) and you get into a jungle of numbers rather quickly. Having a grasp of what it means to set up a budget and balance costs is important for your success. Managing which costs can be reduced and which ultimately need to cost more to get the value you need is equally important. This, again, takes extra time, which you may not value as hobby-time, given its very close proximity to actual work (if you are an accountant, that is). You end up sacrificing your free time for something you may not be all too keen on handling, and we all know what level of enthusiasm we have for things we don’t like doing and don’t even get a monetary return from.

Of course, this is not to say that hiring an accountant and setting up a full business plan is the only way to go, but it is important to keep an overview of what goes on and where funds go. Others make errors, even the big guys, so spotting when someone overcharges you, or when a lost bill threatens to cause a service termination, is quite important. It not only keeps service interruptions from happening, but also keeps the people and companies you work with on your side. Not getting paid is certainly not going to make them want to help you with problems or other support requests should they arise. Valuing others’ work for what it is, while making sure you are not being ripped off, is an important consideration.

Diversity

We have a list of over 400 known grids out there, most of which allow users to sign up with them, thus creating competition. You may not see it as such, but a community needs to be created around something. A gimmick, a niche idea or loads of PR are often not enough to keep momentum in that endeavor. Standing out is not easy and requires more than just “this has not been done before”. Even if your aim is not to become the next big thing, having a strategy to build a community is vital if you don’t want to be all alone.

This is one of the big reasons we advise anyone looking to start their own grid to first establish a community before breaking off into the ocean that is the metaverse. Having roots and ties somewhere else can greatly help to bring new users in and create an association that makes it easier for people to find you. It also helps to establish what aim you have for your community, because as a breakaway there is a clear indication of where you came from and of what you may be doing differently that even warrants a closer look.

More than meets the eye

Okay, those titles are getting a bit over the top, but this is kind of an important one. A grid is never just the grid and maybe a website; it requires a whole network of additional systems to truly make it work well. This includes, but is not limited to, monitoring systems, backup storage, staff communications, document sharing, test systems, shared calendars, organization tools, wikis and completely custom OpenSim support and management systems. The infrastructure needs of a grid can quickly become a bigger maintenance burden than the grid itself, so selecting the proper systems and software not only provides the necessary force behind a growing grid, but also reduces the overall time you spend dealing with that stuff in the first place. This in turn leaves you more time to actually take care of the grid, its users and the needs and problems that come your way.

Selecting the appropriate tools for this is not rocket science, but it’s not trivial either. You may end up falling flat on your face a number of times with stuff that just refuses to work, and for that you need alternative plans and solutions. It’s obviously easier to know from the start what to use and what to avoid, which is something we have quite a lot of experience with given how long we have been doing exactly that, but even so there is never a standard solution that fits everyone’s needs equally well. While many solutions offer ways to bend them to specific needs, that is not a property you can expect everywhere; some things just can’t be done in a reasonable manner, so compromises have to be made, priorities set, and all requirements, present and future, considered. The last thing you want is to be permanently stuck with something that just will not work out as planned.

A Plan B

Not just Primitives can be flexible; most humans are too, though that depends on how much circus blood is in you. Joking aside, keeping a clear head is very important in the day-to-day operation of a grid. Things change all the time and even the best plans can fall flat when new information or problems come flying your way. Building some flexibility into your approach will not only help to keep things on track, but will also prevent the level of stress that leads to disastrous snap decisions. Having a backup plan, both in the sense of planning a secondary option and of actually planning how you want to ensure data security, is not an easy task. The changing nature of the underlying software, along with the demands that growing user numbers and advances in technology place on the systems running it all, means you are dealing with a rapidly increasing amount of data. Moving that data around and making sure it is secure and in a state that actually allows recovery should something go belly up may seem like a simple matter of copy and paste, but making a full backup every day is not exactly efficient, in terms of both storage cost and time.
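The usual answer to the “full backup every day” inefficiency is an incremental backup: copy only what changed since the last run. As a minimal sketch of the idea, assuming a plain directory tree and using only the Python standard library (all names here are illustrative, not an actual grid-management tool):

```python
import os
import shutil

def incremental_backup(src: str, dest: str, since: float) -> list[str]:
    """Copy files from src to dest that were modified after `since`
    (a Unix timestamp, e.g. the time of the previous backup run)."""
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > since:
                rel = os.path.relpath(path, src)
                target = os.path.join(dest, rel)
                # Recreate the directory layout on the backup side.
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(path, target)  # copy2 preserves timestamps
                copied.append(rel)
    return copied
```

Real-world tools (rsync, restic and the like) add deduplication, verification and retention policies on top of this basic principle, which is exactly the “state that actually allows recovery” part that a naive copy-and-paste approach misses.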

Speaking of security

We all know the fearmongering of your favorite VPN provider, and while their “what ifs” are sometimes over the top, they raise points similar to those one has to consider when dealing with user data. You may have heard about the GDPR and other similar laws such as COPPA. These laws also apply to grids. After all, as previously mentioned, when you let people sign up with you, you become a service provider to them. While both of these laws make specific exceptions for small businesses and hobby projects, complying with them is good practice nonetheless. Unfortunately there is a lot of misinformation and overly literal interpretation of not just these laws, but most laws governing digital exchanges of data. Thankfully, when you take the time to read and understand what the aim is, not so much the literal meaning but the “good conscience” approach they want you to take, it quickly becomes apparent that complying with these laws is not all that difficult.

It obviously helps to have someone on hand who is well-versed in such matters and deals with them for a living, but not many have a free attorney on speed-dial, so learning a bit of legalese and employing a bit of the good old “copy others’ work” is the minimum. Without diving too deep into the matter, a firm understanding of what you can and cannot do is a very helpful tool to have, especially when dealing with user concerns and problems. The last thing anyone wants is a fun hobby turning into a court case over some stolen shoes, and yes, that has happened.

Data security is also a matter of technological understanding. In short, that means setting up hardware and software to ensure your neighbor cannot walk around in your yard and take things for himself. This is a very time-consuming effort, as every information security guru you talk to has different ideas of what is and isn’t a secure way of handling cat gifs and social security numbers. It is no less important though. Providing an angle of attack that leads to the loss of not just your, but your users’ data makes you liable for the damages in most jurisdictions. That is another reason to properly set up Terms of Service agreements, but simply waiving all your responsibility and hoping nothing bad happens is not exactly a strategy most users will want to trust with their data.

The green paper

Money is a complicated subject at the best of times. Having a budget is a good start, sure, but there is more to it than that. Money offers a way to exchange things and services for something that does not require the other party to think of something to return the favor with. It thus creates a big decision to make: internal money across the grid, or external gateways? Which gives you the least headaches, and which is best for commerce? Even if your aim is not to create a giant shopping mall, with the way money is handled across grids these days, users may find themselves wanting support for such a system at their own homes.

Across the spectrum the same question is something you have to ask yourself. When the aim is to break even, or even to turn the little hobby into something that pays for your next holiday, then deciding how to set up the money-service exchange can make or break it all. Handling invoices, rendering payments on time and dealing with credit cards are quite daunting tasks, and the best approach is not always do-it-yourself. Billing systems are a dime a dozen, but selecting one that works is still not as easy as picking the one with the nicest logo. By the same token, the way you present all that to your user-turned-customer is equally important.

Fake it until you make it

This is definitely not the policy you should employ when creating an online presence for your grid or project. While that may work on Instagram, it certainly does not work on someone who can tell when a name has just been slapped on something found while searching for “cool website templates”. While there are systems available that come nicely packaged and ready to go, they only provide the basics and, to the keen-eyed user, will make you look like “just another” grid. Taking a bit of time to actually customize and present yourself to your audience can make the difference between a registration and a closed tab.

When you have an idea of what you want to convey to the world out there, it pays back exponentially to invest some effort into creating a look and feel that matches the emotion you want to evoke. That sounds like a lot of “designer” talk, and more work as well, but hoping to wow someone in this day and age with just a logo and some buzzwords in the introduction paragraph is not going to work all too well. In the same sense that the world keeps turning, creating a constant flow of information and engagement with your community, through something other than throwing parties and posting ads everywhere, is going to ensure that those who stumble upon your little dwelling stick around for more.

Long term

The vast majority of grids don’t even make it to their first anniversary, and not many more stay around for more than three years. Those that manage to maintain an active community usually have the aforementioned diversity and other factors in order, alongside a set of dedicated, almost stoic, individuals keeping them up and running. While that is the hard requirement for keeping it all going, there are softer requirements for creating something that lasts. Biggest of all is constant feedback between users and operators, especially in regard to the services you provide. What people need and what they get must align, and grow and evolve along with them. This usually comes naturally, but an unwillingness to let it happen and accept the changing nature of things can quickly lead to tension.

Equally important for long-term success is having an actual plan for what the long-term goal should be. Ideas are easy to come by, but whether they work and catch on is not easily determined. A community may shape this goal with its own wishes and wants, but letting everyone pitch in is not always good for keeping everyone on the same page. Engaging with users over what you want the grid to look like in five years, and what they may expect to happen, is very important. That engagement will secure and manifest their feeling of being “at home” and of having control over their own destiny. Staying true to your goal, while keeping an open mind about the path to it, is thus necessary to even make it to the first anniversary party.

On the backend, as mentioned before, constant improvement is also necessary to adjust to changing times. One of the most important long-term aspects of this is dealing with automation and control. A handful of regions and users are easily dealt with, but as things grow they can easily grow out of control. Creating and providing tools to handle certain tasks automatically, giving users the ability to handle their day-to-day business without constant staff interaction, or simply making sure bills are paid and servers are up to date, is vital. The daily hands-on workload should ideally stay the same even as the grid’s actual workload increases. The only way to achieve this is through automation and clear-cut, standardized procedures, well documented for everyone involved. There is an age-old saying, “If you do it thrice, automate”, which holds true for most things, because it ends up being a more effective use of your time.
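The “if you do it thrice, automate” rule usually starts small: wrap a repetitive chore in a function with logging and retries, then let a scheduler (cron, systemd timers or similar) run it. A toy illustration in Python; everything here is hypothetical, not an actual grid-management tool:

```python
# Toy example of automating a repetitive chore: run it, log the outcome,
# and retry on failure instead of having a human babysit it.
import logging
import time

logging.basicConfig(level=logging.INFO)

def run_with_retries(task, attempts: int = 3, delay: float = 0.1):
    """Run `task` (a zero-argument callable), retrying on failure.

    Raises RuntimeError once all attempts are exhausted, so the
    scheduler (or a monitoring hook) can flag the job for a human."""
    for attempt in range(1, attempts + 1):
        try:
            result = task()
            logging.info("task succeeded on attempt %d", attempt)
            return result
        except Exception as exc:
            logging.warning("attempt %d failed: %s", attempt, exc)
            time.sleep(delay)
    raise RuntimeError(f"task failed after {attempts} attempts")
```

The point is not the retry loop itself but the shape of the procedure: one documented entry point per chore, predictable logging, and a clear failure signal, which keeps the hands-on workload flat as the number of chores grows.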

But wait, there is more

So much more. It is difficult to touch on every topic that ends up on the table of a grid owner, and there is only so much that can be put into a single article before everyone stops reading and returns to cat videos, so this has to do. Rest assured though, should you elect to kickstart your grid, we will be more than happy to assist you all the way from concept to reality. That is, after all, what we do, have done and will keep on doing; it’s our hobby turned business.

There is a contact link somewhere on the right here; while you are there, feed the hamster, he has been getting lonely and quite skinny as of late.