Service Provider Regulation For Enhanced Security

As an EU company we are bound by the regulations the EU sets forward, as implemented by our country of residence. In some cases these regulations are already reflected in local law, which can even exceed the EU requirements. In the latest round of regulations, passed at the beginning of 2020 to enhance the security of online banking and money transfers in general, the EU requires a secondary form of verification intended to make it more difficult to gain unauthorized access to bank accounts.

In reality the implementation of these regulations falls short in many areas, and many banks and other institutions handling money transactions only partially follow them, despite clear outlines of what such an implementation must provide. Even more concerning, these implementations often disregard the provision that allows customers to willfully decline participation in the additional verification step. The result is pages upon pages of complaints against such institutions, both for making the step mandatory despite its optional status and for incomplete adherence to the regulation in general. In the worst cases this leaves anyone unable to procure or provide verification without access to their funds, which often amounts to outright discrimination against people with disabilities.

We have elected not to implement these regulations ourselves; nor do we need to, as we do not handle sensitive banking information on our end. We do, however, see changes in how our customers pay for their services, which are undoubtedly the result of how the regulations have been implemented by the various payment providers we work with. While this is not strictly our responsibility and far out of our reach to change, we still want to apologize for any inconvenience you may experience paying for your services. We have lodged formal complaints with at least one of our payment providers regarding their failure to implement these new EU regulations in a manner that both satisfies the guidelines set forth in the regulations themselves and complies with disability legislation.

If you are still having issues processing your payment with one of our payment processors, please do not hesitate to contact us via ticket.

OpenSim Archive – A call for contributions

Zetamex Network is proud to be the sponsor of the OpenSim Archive, a project that aims to centralize a library of resources for and around OpenSimulator, for the benefit of creators and those who want to become one. It is based on the idea of sharing resources and knowledge, one of the core principles behind open source software and the entire FOSS movement, and of not re-inventing the wheel for every car that’s made. We hope the project will grow and creators will contribute to the enhancement of the worlds we spend our virtual life in. The project is available to everyone free of charge and even includes items such as old software and compiled OpenSimulator binaries. Zetamex Network provides the storage and bandwidth for the project and will handle submissions.

If you are a creator, or know of publicly available resources with open licenses, please get in contact with us to contribute to the library. You can find a contact point on the right; just click the big yellow button or send us an email!


Expanding Reach – New Payment Gateways and Updated Transaction Fees

The world is a very diverse place with many different attitudes to monetary transactions. Philosophical aspects aside, our main interest as a company in that diversity is the variety of currencies and of ways to turn said currencies into goods and services. So in the pursuit of not just bringing more harmony to the world, but also providing ease of access to our services, we have added new payment gateways. Additionally we have re-enabled an option on an existing gateway and upgraded another. Without further ado, here they are, with the shiny new one leading the front:


Paysafe

Paysafe offers debit-like PINs that can be purchased with a certain monetary value attached. These PINs can then be used to pay for goods and services online when other payment methods are not available. They have been around for quite some years now and are often the only way for people without access to a bank account to pay for goods and services on the internet. Since they can be purchased with cash, they are also quite popular with young people who may get their allowance handed out in cash. We have partnered with Paysafe to bring this payment method to our brands so that more people can enjoy them.

Stripe SEPA

SEPA is a direct debit system that pulls money directly from the associated bank account. It is quite popular in Europe because, once set up, it is almost hassle-free: as long as the bank account holds the appropriate funds, customers don’t have to worry about their bills not being paid. With the worry of paying bills on time out of the way there is more time to focus on the important things in life.

PayPal Subscriptions

We had previously disabled this functionality due to problems with overcharges and the lack of flexibility in adjusting the subscription when services were changed. These problems have not completely gone away, but recent changes to our billing system have improved the detection of problems in this regard, so we feel confident enabling this function once more. With PayPal Subscriptions customers can subscribe to their monthly bill and have it paid automatically without re-authorization. This, much like direct debit, reduces the complexity of monthly payments for our customers.


Transaction Fees

A very important change that comes along with adding new payment gateways is an update to our transaction fees. Previously a flat 5% rate was applied to every invoice customers received. This is now replaced by a structure that applies different fees depending on the gateway used. It reflects the contractual conditions we have with the gateway providers and more accurately represents the additional costs we incur for each gateway. Invoices are updated automatically when a different payment method is selected, so customers can see directly which one is better for them. We understand that, due to many different circumstances, some payment methods are less appealing than others, so we have tried to reduce the transaction fees where possible. The new transaction fees are as follows:


Gateway            Fee
PayPal             5%
Stripe             3%
Paysafe (new)      15%
Stripe SEPA (new)  3%
Bank Transfer      7%
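To illustrate what the new structure means in practice, here is a small sketch. The percentages are the ones from the table above; the 20.00 example amount and the helper function itself are made up for illustration and are not part of our billing system:

```python
# Transaction fees per gateway, taken from the table above.
FEES = {
    "PayPal": 0.05,
    "Stripe": 0.03,
    "Paysafe": 0.15,
    "Stripe SEPA": 0.03,
    "Bank Transfer": 0.07,
}

def invoice_total(net_amount, gateway):
    """Return the invoice total after applying the gateway's transaction fee."""
    return round(net_amount * (1 + FEES[gateway]), 2)

# The same 20.00 invoice paid through three different gateways:
print(invoice_total(20.00, "Stripe SEPA"))  # 20.6
print(invoice_total(20.00, "PayPal"))       # 21.0
print(invoice_total(20.00, "Paysafe"))      # 23.0
```

Since the invoice updates automatically when a different payment method is selected, this difference is exactly what customers will see when comparing gateways.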


Additional payment methods may come in the future. If customers have special requirements for payments, they can of course contact us, as always. We hope these new payment methods will reduce the hassle of monthly bills and provide alternatives to those without access to the previously available methods.

The Informal Checklist For Grid Owners

Starting your own grid can be a dream, but like any dream it has the potential to turn into a nightmare. We often compare grids with restaurants and other customer-centered businesses, reasoning that both share a similar requirement: a specific culture needs to exist for them to succeed. There are many things to consider and even more that may not be on everyone’s radar.

It’s a business

“But I am just doing this for fun?!” That may be so, and there is inherently nothing wrong with that. The problem comes when you let people sign up with you. Here in Germany, anyone offering signups to a service, free of charge or paid, makes themselves at least partially liable for what goes on within that service, until a legal agreement with the user establishes what you are liable for and what you cannot possibly control. Treating a grid as a business, even if you are only running it as a hobby, is therefore vital as soon as you allow people to sign up.

This means first and foremost that a proper Terms of Service agreement, plus a set of rules for your users to follow, is key to keeping some sort of order (and your sanity) intact. You and most of your users may tend to disregard such legal matters, after all what really is in there (apart from lots of legal-speak and the odd joke about monkeys), but when things go wrong this can quickly get dicey. These agreements regulate why, how, and who is liable for loss of data or information and for breaches of privacy or law. Without them you, the grid owner, are fully liable for it all, because it happened in your “house”. Jurisdictions vary greatly on this, but you should consider the stricter laws of countries you may gain users from, as well as the laws of your own place of dwelling and the locations of your hardware and software licenses. Not being in the clear in this department can cost dearly and may even have an impact on your personal life.

Time is money

While you may not value your hobby as much as others whose job it is to keep the world turning, those others may have a different outlook on that. Most pieces of hardware out there doing the dirty work of keeping your grid up and running come with monthly costs that need to be handled. Add the costs for licenses and software (not to mention actual employees) and you get into a jungle of numbers rather quickly. Knowing how to set up a budget and balance costs is important for your success, as is managing which costs can be reduced and which ultimately need to cost more to get the value you need. This, again, takes extra time, which you may not value as hobby time, given its very close proximity to doing actual work (if you are an accountant, that is). So you end up sacrificing free time for something you may not be too keen on handling, and we all know what level of enthusiasm we have for things we don’t like doing and don’t even get a monetary return from.

Of course, this is not to say that hiring an accountant and setting up a full business plan is the only way to go, but it is important to keep an overview of what goes on and where funds go. Others make errors, even the big guys, so spotting that someone may be overcharging you, or that a lost bill is about to cause a service termination, is quite important. It not only prevents service interruptions, but also keeps the people and companies you work with on your side. Not getting paid is certainly not going to make them want to help you with problems or support requests should they arise. Valuing others’ work for what it is, while making sure you are not being ripped off, is an important balance to strike.


Standing out

We have a list of over 400 known grids out there, most of which allow users to sign up with them and thus create competition. You may not see it as such, but a community needs to be created around something. A gimmick, a niche idea or loads of PR are often not enough to keep momentum in that endeavor. Standing out is not easy and requires more than just “this has not been done before”. Even if your aim is not to become the next big thing, having a strategy to build a community is vital if you don’t want to be all alone.

This is one of the big reasons we advise anyone looking to start their own grid to first establish a community before breaking off into the ocean that is the metaverse. Having roots and ties somewhere else can greatly help to bring new users in and creates an association that makes it easier for people to find you. It also helps to establish what aim you have for your community, because as a breakaway there is a clear indication of where you came from and of what you may be doing differently that warrants a closer look.

More than meets the eye

Okay, these titles are getting a bit over the top, but this is an important one. A grid is never just the grid and maybe a website; it requires a whole network of additional systems to truly work well. This includes, but is not limited to, monitoring systems, backup storage, staff communications, document sharing, test systems, shared calendars, organization tools, wikis and completely custom OpenSim support and management systems. The infrastructure needs of a grid can quickly become a bigger maintenance burden than the grid itself, so selecting the proper systems and software not only provides the necessary force behind a growing grid, but also reduces the overall time you spend dealing with that stuff in the first place. This in turn leaves you more time to actually take care of the grid, its users, and the needs and problems that come your way.

Selecting the appropriate tools for this is not rocket science, but it’s not trivial either. You may end up falling flat on your face a number of times with stuff that just refuses to work, and for that you need alternative plans and solutions. It’s obviously easier to know from the start what to use and what to avoid, which is something we have quite a lot of experience with given how long we have been doing exactly that, but even so there is never a standard solution that fits everyone’s needs equally well. While many solutions offer ways to bend them to specific needs, that is not a given property to expect; some things just can’t be done in a reasonable manner, so compromises have to be made, priorities set, and all requirements, present and future, considered. The last thing you want is to be permanently stuck with something that just will not work out as planned.

A Plan B

Not just Primitives can be flexible, most humans are too, though that depends on how much circus blood is in you. Joking aside, keeping a clear head is very important in the day-to-day operation of a grid. Things change all the time and even the best plans can fall flat when new information or problems come flying your way. Keeping some flexibility in your approach will not only help to keep things on track, but will also prevent the level of stress that leads to disastrous snap decisions. Having a backup plan, both in the sense of planning a secondary option and of actually planning how to ensure data security, is not an easy task. With the underlying software constantly changing, and with growing user numbers and advances in technology putting ever higher demands on the systems that run it all, you are dealing with an exponential increase in the amount of data you handle. Moving that data around, and making sure it is secure and in a state that actually allows recovery when something goes belly up, may seem like an easy matter of copy and paste, but making a full backup every day is not exactly efficient in terms of either storage cost or time.

Speaking of security

We all know the fearmongering of your favorite VPN provider, but while their “what ifs” are sometimes over the top, they raise points similar to those one has to consider when dealing with user data. You may have heard about the GDPR and other similar laws such as COPPA. These laws also apply to grids; after all, as previously mentioned, when you let people sign up with you, you become a service provider to them. While both of these laws make specific exceptions for small businesses and hobby projects, complying with them is good practice nonetheless. Unfortunately there is a lot of misinformation and overly literal interpretation of not just those laws, but most laws governing digital exchanges of data. Thankfully, when you take the time to read and understand what the aim is, not so much the literal meaning but the “good conscience” approach they want you to take, it quickly becomes apparent that complying with these laws is not all that difficult.

It obviously helps to have someone on hand who is well-versed in such matters and deals with them for a living, but not many have a free attorney on speed-dial, so learning a bit of legalese and employing a bit of the good old “copy others’ work” is the minimum. Without diving too deep into the matter, a firm understanding of what you can and cannot do is a very helpful tool to have, especially when dealing with user concerns and problems. The last thing anyone wants is a fun hobby turning into a court case over some stolen shoes, and yes, that has happened.

Data security is also a matter of technological understanding. In short that means setting up hardware and software to ensure your neighbor cannot walk around in your yard and take things for himself. This is a very time-consuming effort, as every information security guru you talk to has different ideas of what is and isn’t a secure way of handling cat gifs and social security numbers. It is no less important though. Leaving an angle for attack, and losing not just your data but your users’, makes you liable for the damages in most jurisdictions. That is another reason to properly set up Terms of Service agreements, but simply waiving all responsibility and hoping nothing bad happens is not a strategy most users will want to trust with their data.

The green paper

Money is a complicated subject at the best of times. Having a budget is a good start, sure, but there is more to it than that. Money offers a way to exchange things and services for something that does not require the other party to think of something to return the favor with. It also poses a big decision: internal money across the grid, or external gateways? What gives you the fewest headaches, and what is best for commerce? Even if your aim is not to create a giant shopping mall, with the way money is handled across grids these days, users may find themselves wanting support for such a system at their own homes.

Across the spectrum you have to ask yourself the same question. When the aim is to break even, or even to turn the little hobby into something that pays for your next holiday, then deciding how to set up the money-for-service exchange can make or break it all. Handling invoices, rendering payments on time and dealing with credit cards are quite daunting tasks, and the best approach is not always do-it-yourself. Billing systems are a dime a dozen, but selecting one that works is still not as easy as picking the one with the nicest logo. By the same token, the way you present all that to your user-turned-customer is equally important.

Fake it until you make it

This is definitely not the policy you should employ when creating an online presence for your grid or project. While that may work on Instagram, it certainly does not work on someone who can tell when a name has just been slapped on something found by searching for “cool website templates”. There are systems available that come nicely packaged and ready to go, but those only provide the basics and, to the keen-eyed user, will make you look like “just another” grid. Taking a bit of time to actually customize and present yourself to your audience can make the difference between a registration and a closed tab.

Once you have an idea of what you want to convey to the world out there, it pays back exponentially to invest some effort into creating a look and feel that is in line with the emotion you want to evoke. That sounds like a lot of “designer” talk, and more work as well, but hoping to wow someone in this day and age with a logo and a few buzzwords in the introduction paragraph is not going to work all too well. In the same sense that the world keeps turning, creating a constant flow of information and engagement with your community, through something other than throwing parties and posting ads everywhere, is what ensures that those who stumble upon your little dwelling stick around for more.

Long term

The vast majority of grids don’t even make it to their first anniversary, and not many more stay around for more than three years. Those that manage to maintain an active community usually have the aforementioned factors in order, alongside a set of dedicated, almost stoic, individuals keeping them up and running. While that is the hard requirement for keeping it all going, there are still softer requirements for creating something that lasts. Biggest of all is constant feedback between users and operators, especially regarding the services you provide. What people need and what they get must align, and grow and evolve along with them. This usually comes naturally, but an unwillingness to let it happen and accept the changing nature of things can quickly lead to tension.

Equally important for long-term success is having an actual plan for what the long-term goal should be. Ideas are easy to come by, but whether they work and catch on is not easily determined. A community may shape this goal with its own wishes and wants, but letting everyone pitch in is not always good for keeping everyone on the same page. Engaging with users over what you want the grid to look like in five years, and what they may expect to happen, is very important. That engagement secures and manifests their feeling of being “at home” and of having control over their own destiny. Staying true to your goal, while keeping an open mind about the path to it, is thus necessary to even make it to the first anniversary party.

On the backend, as mentioned before, constant improvement is also necessary to adjust to changing times. One of the most important long-term aspects of this is automation and control. A handful of regions and users are easily dealt with, but as things grow they can easily grow out of control. Creating and providing tools that handle certain tasks automatically, give users the ability to handle their day-to-day business without constant staff interaction, or simply make sure bills are paid and servers are up to date is vital. The daily operational workload should ideally stay the same even as the actual workload increases. The only way to achieve this is through automation and clear-cut, standardized procedures, well documented for everyone involved. There is an age-old saying, “If you do it thrice, automate”, which holds true for most things, because it ends up being a more effective use of your time.

But wait, there is more

So much more. It is difficult to touch on all the topics that end up on the table of a grid owner, and there is only so much that can be put in a single article before everyone stops reading and returns to cat videos, so this has to do. Rest assured though, should you elect to kickstart your grid, we will be more than happy to assist you all the way from concept to reality. That is, after all, what we do, have done and will keep on doing; it’s our hobby turned business.

There is a contact link somewhere on the right here, while you are there feed the hamster, he has been getting lonely and quite skinny as of late.

Why Dynamic DNS Is Not For Grids

A domain you don’t have to pay for, which lets ever-changing residential networks be reached under the same address. Sounds like a great idea on paper, right?

Normally it is just that: a great way to deal with the annoyingly changing IP addresses of residential internet. It makes for a great way to create local networks of people looking to share information and content among themselves, and to show the world your homelab projects. Unfortunately that is really where the good use cases end. As of late, the increasing use of dynamic DNS across the metaverse is causing all manner of problems, especially when these grids and standalones are the home locations of content creators and social animals. These problems may not be obvious at first, so here are a few examples:


Creator Information

When an asset is created locally it is given identification information, such as the creator of the asset. This creator ID can normally be traced back to the origin location where the asset was created. This is by design and works well for static systems, but introduce dynamic DNS into the mix and the problems resolving this creator information start to add up. Eventually you will simply see errors telling you that creator information could not be retrieved. This can result in quite a bit of spam, and the simulation will grind to a halt if too many of these requests have to be completed without a result.
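The defensive pattern involved can be sketched as follows: when a creator’s home address no longer resolves, the lookup should fail fast and move on rather than stall the simulator. The hostname below is a made-up example, and the function is illustrative, not OpenSimulator’s actual code:

```python
import socket

def resolve_creator_host(hostname):
    """Resolve a creator's home-grid hostname; return None on failure so the
    caller can mark the creator information as unavailable and move on."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        # A stale or expired dynamic-DNS record resolves to nothing;
        # retrying forever here is what grinds a simulation to a halt.
        return None

print(resolve_creator_host("localhost"))           # 127.0.0.1
print(resolve_creator_host("stale-grid.invalid"))  # None (.invalid never resolves)
```

Returning a sentinel instead of raising keeps one dead home grid from poisoning the whole asset pipeline.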

DNS Caching

While not always an issue, depending on the individual system and internet provider, DNS information is usually cached at some point. Should a change then occur, the caches need to be emptied and new, updated information inserted. If that does not happen often enough, you may find your favorite place unreachable for quite some time. Clearing and resetting these caches can be as simple as restarting your computer; at worst your internet provider, who can intercept DNS requests, is caching data for longer. At that point you need to dive into your router or modem, if that is possible at all.
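To see why this bites dynamic DNS users in particular, here is a toy resolver cache. The hostname and addresses are made-up documentation values, and the TTL behavior is simplified: once an address is cached, the real change behind the name goes unnoticed until the entry expires.

```python
import time

class DnsCache:
    """Toy resolver cache: an entry is reused until its TTL expires."""
    def __init__(self):
        self._entries = {}  # hostname -> (address, expiry timestamp)

    def lookup(self, hostname, resolve, ttl=300):
        entry = self._entries.get(hostname)
        if entry and entry[1] > time.time():
            return entry[0]          # served from cache, possibly stale
        address = resolve(hostname)  # fresh lookup
        self._entries[hostname] = (address, time.time() + ttl)
        return address

cache = DnsCache()
addresses = iter(["203.0.113.7", "203.0.113.42"])
resolve = lambda host: next(addresses)  # pretend the IP changed between lookups

print(cache.lookup("grid.example.org", resolve))  # 203.0.113.7
# The home connection has since been assigned a new IP, but for up to `ttl`
# seconds everyone holding a cached entry keeps contacting the old one:
print(cache.lookup("grid.example.org", resolve))  # 203.0.113.7 (stale)
```

Real resolver chains stack several such caches (OS, router, ISP), which is why a changed residential IP can leave a grid unreachable for hours.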

DNS Spoofing

Unfortunately many free dynamic DNS providers are quite liberal with re-assigning addresses should they remain unused. This can lead to just about anyone grabbing a disused address and sending malicious data to those still trying to connect to it. While not technically spoofing, the results are the same, and specifically for a system that has no way of being “smart” about what it connects to, you end up with no way of distrusting a connection until it is too late.


Friend Status Requests

This is probably the worst offender for slowdowns when it comes to DNS: requests for the status of your friends being sent all over the metaverse, waiting for a reply from their homes, only to not get one. These requests are sent in such a manner as to fetch the data as quickly and efficiently as possible, but there really is only one way to fetch them, and that is to contact each and every one of those homes and ask for their status. With no central service handling these requests, it ends up being endpoint-to-endpoint, in a way peer-to-peer. If your friends list is longer than average, you may encounter serious issues that can even prevent login altogether.
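Some back-of-the-envelope math shows how quickly this adds up. The numbers below (a 5-second timeout, a 0.2-second reply, 40 friends of whom 30 are unreachable) are assumptions for illustration, not measured values:

```python
def total_wait_seconds(friends, reachable, timeout=5.0, reply_time=0.2):
    """Worst-case sequential wait: every unreachable home costs a full timeout."""
    return sum(reply_time if f in reachable else timeout for f in friends)

friends = [f"friend{i:02d}" for i in range(40)]
reachable = set(friends[:10])  # only 10 of the 40 home grids answer

# 10 quick replies plus 30 full timeouts:
print(round(total_wait_seconds(friends, reachable), 1))  # 152.0 seconds
```

Two and a half minutes of waiting, dominated entirely by the dead endpoints, which is exactly how a long friends list can stall a login.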


Upload Speeds

Residential connections often have about one tenth of their download speed as upload speed, sometimes even less. These speeds are often not even advertised, because for the average user, who simply sends requests for websites, chats and perhaps some game data, upload does not matter all that much. However, running a service over such a connection can easily overwhelm it. Upload speed is what matters when someone external tries to connect to you: you need to send them all the data they need to see you and your lovely new creation. The lower the upload speed, the more issues you encounter, from slow reaction times all the way to failures reaching your destination. This gets exponentially worse the more data is requested, as things back up quickly.
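The arithmetic behind this is simple. The 100/10 Mbit/s line and the 50 MB of region data below are assumed example figures, chosen only to show the shape of the problem:

```python
def transfer_seconds(size_megabytes, uplink_mbps):
    """Seconds needed to push `size_megabytes` through an uplink of `uplink_mbps` megabits/s."""
    return size_megabytes * 8 / uplink_mbps

# A 100/10 Mbit/s residential line sending 50 MB of region data to one visitor:
print(transfer_seconds(50, 10))             # 40.0 seconds
# Three visitors arriving at once share the same uplink:
print(round(transfer_seconds(50, 10 / 3)))  # 120 seconds each
```

Downloading the same 50 MB over the 100 Mbit/s downlink would take 4 seconds, which is why the asymmetry goes unnoticed until you try to host something.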

Ripple Effect

What may not be obvious: the more dynamic DNS is in use, the worse the service quality gets for everyone else on the metaverse. As mentioned above, the problems multiply the more data is spread around, and this can have negative effects on conventionally hosted grids and standalones as well. They too have to wait for data to arrive or run into timeouts. This can lead to the issues described above even on systems and connections with more than adequate metrics. The result is a metaverse that is slower to respond, sees more connection failures and, in the worst-case scenario, can become impossible to sustain, leaving shutting down hypergrid access completely as the only option to maintain some sort of performance inside a grid.

Breach Of Contract

Many residential internet providers specifically disallow using the service they provide for purposes of running a business, or conducting business with the connection itself as the carrier. This means that should an unusual amount of traffic flow one way, making it appear as though you are actively using your connection to host a paid service for others, a swift shutdown from your provider may come your way. This does not happen often, but if your grid grows you may find them getting a bit wary of what is going on. While some providers offer business connections, these are often not much better in service quality and can cost many times more than a residential connection, negating the savings you may have had over paying for a server in a datacenter.


This, among other smaller reasons, is why we strongly recommend not using such services and instead having your grid or standalone professionally hosted on hardware of adequate proportions. That is really the only way to mitigate these issues and reduce the effect they can have on your experience. In the past we have happily provided consulting to those wishing to move away from dynamic DNS and find a better home for their creations and events, along with providing the professional hosting that enables stress-free use of the metaverse for everyone involved. If you are currently using dynamic DNS and a residential connection for your standalone or grid, get in contact with us; we can offer you something better that will avoid a lot of headaches in the long run!

Common Stability Misconceptions

Stability is a key factor for any grid or region and has been a point of contention among operators for a long time now. Software is never truly free of bugs, especially as it becomes more complex, so stability can be overlooked. Thankfully the current push toward updating some of the foundations and utilizing more current technology has brought a renewed focus on stability, compatibility and performance. That said, let’s take a look at the common stability misconceptions and the data that backs them up or disproves them.

Traffic equals instability

The idea that constant traffic causes the eventual accumulation of more and more garbage that is never properly cleared has been around for a long time. For the most part the core concept is true; some accumulation of abandoned data can never be fully avoided, but clearing out and freeing up resources has massively improved with the change to a more recent version of the .NET framework (see picture). This major shift was accompanied by various fixes to the consumption and, more importantly, the re-allocation of used resources, which granted an overall reduction in resource usage that often sits in the 30% range. Such a change is definitely noticeable and even measurable, and the bigger the resource usage was before these changes, the bigger the gains. Some of our customers are seeing reductions beyond 50% simply from upgrading to more recent versions. That said, there is still work to be done, and given the nature of the framework some resource leakage, as it is commonly called, still occurs. Levels are much lower than a few years ago, but edge cases still exist, and we still recommend refreshing areas with heavy traffic often, especially when the clientele has a tendency to, let’s put it mildly, act less gracefully in their self-accessorization (is that even a word?).

Error means crash

Humans are quite capable of handling errors in their own “programming” or execution of tasks; most programs, however, tend to struggle with that and need a helping hand from their programmers to make sure they don’t Windows98 on their users. Handling is often done by quite literally attempting to execute a task and waiting for a return. Should the return not occur, one option is for the program to simply continue, throwing whatever broke right into the user’s face, essentially for them to fix, should they know how. The other common options are the typical “has stopped working” you are all used to, with the subsequent sending of bug reports (which are actually read, believe it or not), or the “write to log and die” method of simply closing, with the user none the wiser as to what happened. The more complex a program becomes, the more problematic those latter methods get, so proper error handling is vital. When so-called “hard crashes” are reported these days, they can generally be attributed to misuse. The times when a simple error in an item or script caused an irrecoverable shutdown of vital functions are almost gone, and such cases can usually be traced back to something easily fixed or, well, completely out of user control. Either way, crashes are becoming less and less frequent with each piece of code that is reviewed and brought up to current programming specifications.
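The “attempt the task and wait for a return” pattern described above can be sketched like this. The retry count, the flaky task and the fallback value are all invented for illustration; the point is the shape: retry, then log and degrade instead of dying:

```python
import logging

def guarded(task, retries=2, fallback=None):
    """Attempt `task`; on an error retry a couple of times, then log the
    failure and return a fallback instead of letting the whole process die."""
    for attempt in range(retries + 1):
        try:
            return task()
        except Exception as exc:
            logging.warning("attempt %d failed: %s", attempt + 1, exc)
    return fallback  # degrade gracefully: the caller gets a default, not a crash

state = {"calls": 0}
def flaky_task():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(guarded(flaky_task))                          # ok (succeeded on the third try)
print(guarded(lambda: 1 / 0, fallback="degraded"))  # degraded
```

This is the difference between an error message in a log and a region full of users dropping offline.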

Restart every…

An actual configuration option that can be set, yet probably a lot more easily accomplished through external program control systems. Nonetheless refreshing, as we like to call it, does help to maintain a "refreshed" state that is not impacted by long runtimes. Then again, as you can see from this picture

runtimes that exceed days, weeks and months are no rarity anymore. Even areas with greater resource usage can share the same timeframes for uptime. We are all familiar with the method of fixing a program by simply restarting it, resetting everything to the start and clearing out any potentially wrong data, but that expects whatever is wrong to be able to reset itself and not reload the bad data. As such, making sure such bad data cannot even enter, be that at runtime or after restarting, is a key part of creating long-term stability. The common method of achieving this is defining the types of data carefully and thus not giving incorrect or corrupt data the chance to persist in said types. Maintaining good type definitions throughout a program can be tough, and sometimes you just want a catch-all for some random piece of data you don't want to strictly define, but therein lies the issue: it can lead to data corruption you cannot detect until accumulation or constant overwriting causes a problem. As such, running long-term tests that tax how well a program can clean up after itself is important to verify the true stability of a program and its ability to maintain a clean set of working information.
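The idea of defining types so carefully that bad data cannot even enter can be sketched like this; it is a generic Python illustration, and the `Position` type is hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Position:
    """A strictly defined type: invalid values are rejected at creation,
    so they cannot persist at runtime or survive a restart."""
    x: float
    y: float

    def __post_init__(self):
        for v in (self.x, self.y):
            # v != v is only true for NaN, the classic silent corrupter.
            if not isinstance(v, (int, float)) or v != v:
                raise ValueError(f"corrupt coordinate: {v!r}")

print(Position(1.0, 2.0))  # a valid value passes through untouched

try:
    Position(float("nan"), 0.0)  # corrupt data is stopped at the door
except ValueError as err:
    print("rejected:", err)
```

A loose catch-all field would have accepted the NaN silently and let it accumulate; the strict type turns that invisible corruption into an immediate, handleable error.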

I still crash though

And that cannot be fully avoided, as mentioned above. Often the reason comes close to the famous "unfortunate turn of events": several factors come together to bring about a state that cannot be handled, because it was not even expected in the first place. With a client-server type application, the transport of data between the two is subject to various variables of doom, and the results can be unexpected. Even so, recovery from such states is possible and often not as far away as it seems. A good example is the main difference between the two common transport protocols used: UDP vs. TCP; you can look up yourself what those two like about each other and what they hate. The basic concept is that unlike UDP, TCP actually checks whether or not data has reached the other side. You can imagine this return trip may make some things slower, so when a tiny piece of information reaching the other side may not make a difference, UDP is used to send data that can account for small missing pieces, or simply to "fire and forget" because you, the user, are not going to notice the loss. However, in the implementations lie the caveats that can break the camel's back. UDP in itself is just a sender and receiver of data; what you do with the data is up to you. You could, if you wanted to, send data and request verification for it manually, all while still using UDP; that is, at least in some form, what is used to send some of the most requested types of data we encounter. The system is more solid than it sounds, but can still sometimes miss the mark. Normally this is not much of a concern as the data is simply resent, but during that period other tasks may be held up, and eventually you run into being disconnected due to timeout. This issue can be compounded by the amount of data that needs to be sent for certain things, and so the more you use it the worse it can get.
Thankfully, these days methods exist to cache data and to resend only partial amounts when the whole doesn't reach the other end, so stability can be increased. It doesn't solve the elephant's riddle, but it makes sure it can't eat all the peanuts.
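A manual-verification scheme over UDP, as described above, can be sketched roughly like this; it is a toy localhost example in Python, not the actual protocol in use, and all names in it are invented:

```python
import socket
import threading

# One UDP socket stands in for the server, one for the client.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

def serve():
    # The server's entire job here: receive a datagram, confirm it arrived.
    while True:
        data, peer = server.recvfrom(1024)
        server.sendto(b"ACK", peer)  # manual acknowledgment on top of UDP

threading.Thread(target=serve, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(0.5)

def send_reliable(payload, retries=3):
    """Fire the datagram, then wait for the manual ACK; resend on timeout.

    UDP itself never checks arrival, so the verification and the resend
    are entirely our own doing, as the text describes."""
    for _ in range(retries):
        client.sendto(payload, addr)
        try:
            reply, _ = client.recvfrom(64)
            if reply == b"ACK":
                return True
        except socket.timeout:
            continue  # lost in transit: resend instead of giving up
    return False

print(send_reliable(b"hello"))
```

The caveat mentioned above is visible in the sketch: while `send_reliable` waits for its acknowledgment, nothing else on that socket moves, which is exactly how held-up tasks and eventual timeouts come about when the data volume grows.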

Who’s to blame?

The blame has been thrown at everyone under the sun, from users to developers to operators and your uncle Joe, but in the end it's a community effort, specifically in terms of educating toward better usage of resources, adequate methods of use and restraint. Nothing's perfect, but if handled carefully and with a bit of understanding, most of it will work just fine. It really is not the fault of any one group specifically, but of what we all do to spread information about things to avoid, things to practice and simple guidelines to follow that ensure a good experience for all, including the poor programming code underneath it all. With enough effort and care the overall experience can be massively positive, but that requires everyone to work together and realize the limitations of software, hardware and people alike. The future will be a bright one, if that is on everyone's mind.