The Future is Now

I would first like to let everyone know that Zetamex officially turned three years old this past weekend. We are extremely happy to have been here for three years, from our start as SoftPaw Host to Zetamex today. We have been bringing in many new clients, and we are starting to staff up and grow to the next level. This means changes are afoot, but your prices are locked in, and we have dropped the setup fee on standalone plans.

We have noticed that dedicated servers are becoming harder and harder to keep in stock, so we have decided to retire the dedicated server plans and introduce Bundle Offers. These let users who need multiple regions place bulk orders. This will help us keep stock available, and we are already working with three different data centers to ensure success. Growth is important, and we want to make sure we are ready for it.

Out with the old, and in with the new. Over the course of today you may notice us changing billing systems and site designs, and later this week ZetaPanel 2.0 will be released. Many more changes and updates are in the works, and we will be even more active over the next few weeks. Change is good, and change is the future of Zetamex! We are joining the big boys now.

In the midst of all these changes, we have recently entered a partnership with XHostFire and MyBB Hoster. Together we will bring you better and more reliable hosting solutions, including their web hosting offerings. We are also doubling our efforts to speed up support ticket responses. We are very excited about all of these changes.

As always, existing plans and packages that are no longer offered will be grandfathered for current customers. However, please note that those products will not be available for future orders.

Emergency Maintenance (Completed)

[8:30 PM EDT Finished] We have now completed our migration of data. Please feel free to log in and restart your simulators if need be. Thank you for your patience.

[8:25 PM EDT Update] Data is in transit now; the wait time is unknown. Please bear with us, we are moving as fast as possible.

[8:14 PM EDT Starting] We are taking everything into read-only mode; it turns out our database server needs to move now, not later. Please stay tuned to this blog for updates!

Planned Maintenance

We are planning to take ALL simulators into READ ONLY mode later today. This is due to an issue we found with how our data center partitioned our hard drives. We will be moving our database data onto a much larger partition, as we are starting to hit the cap of the partition the files currently live on. The estimated time to move this data is unknown, so we are putting everyone into READ ONLY mode. This means your regions will remain online and people will be able to access them, but your regions will be unable to save any changes you make to them.

Please understand that we MUST do this for service to continue, as we are hitting the maximum size of the partition our database data is currently saved in. We still have tons of hard disk space overall, so we are moving the data to another partition where it can take advantage of our several terabytes of storage.
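For readers curious what a move like this involves, the general pattern is: stop writes, copy the database files onto the larger partition, then point the old path at the new location before restarting. We have not detailed our exact stack here, so the service name (mysql) and paths below are purely illustrative assumptions, a rough sketch rather than our actual procedure:

```shell
# Rough sketch of moving a database data directory to a larger partition.
# Service name and paths are illustrative assumptions, not our real config.
systemctl stop mysql                        # stop writes before copying
cp -a /var/lib/mysql /data/mysql            # copy data onto the larger partition
mv /var/lib/mysql /var/lib/mysql.old        # keep the original until verified
ln -s /data/mysql /var/lib/mysql            # old path now points at the new home
systemctl start mysql
```

The copy step is why the wait time is unknown: it scales with the size of the data, and the simulators must stay read-only for the whole duration so the copy stays consistent.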

Thank You,
Timothy Francis Rogers

Upgrades to 0.7.6 Release (Resolved)

[Resolved 10:43 EDT] All upgrades that were planned for today are officially completed!

[UPDATE 10:34 EDT] All standalone clients have been updated successfully.

[START 9:19 EDT] Zetamex will begin upgrading all clients to the 0.7.6 Release at 10:00 EDT; this will affect all Standalone, Grid, and Custom clients. Please be aware that the downtime expected starting at 10:00 EDT is 2 minutes per simulator. Service will not be down longer than 2 minutes per simulator you have, keeping your uptime as high as possible.

Datacenter Fiber Line Failure (Resolved)

[Resolved 16:56 EDT] Our datacenter’s ISP has informed us that the fiber line has been repaired and that services will start to see the difference over the next hour or so as traffic in the datacenter flows back into its proper channels.

[Update 14:35 EDT] We have received the following update from our datacenter’s ISP: “Estimated time of return 6.00pm (EDT)”

[Update 13:00 EDT] We are somewhat happy to announce that we are back up and running on our datacenter’s backup lines; service will not be as fast until everything is completely restored. Do expect some packet loss, but our datacenter is doing everything they can to mitigate it. They are currently pushing all of their servers through one ISP, where traffic is normally distributed between three ISPs. A single ISP can take the hit, which is exactly why they contract three and not just one, but pushing so much data through one pipe causes major overhead and slows things down for everyone. The good news is that the other ISPs are resolving their issues right now, and everything should return to normal automatically very soon. We also want to thank Google for providing a public DNS resolving service, which is why we are able to continue providing you with service during this time.

[Update 12:06 EDT] We are successfully rerouting traffic. Please be aware that we are restoring service using our datacenter’s backbone network, which is designed to keep data flowing in an incident of exactly this nature. Thanks to Google’s free Public DNS system, we are bypassing our datacenter’s overloaded system for resolving hostnames. The backbone network may have high ping times, but it is operational; the high pings are due to all traffic being rerouted through these backup lines, where it is normally split among three ISPs (Internet Service Providers). Please bear with us as we continue to restore service over the backbone network.

[Update 11:52 EDT] Our datacenter has informed us that their backbone network is still operational but is taking a huge hit. Their name resolution servers are overloaded due to the massive load on the backbone network. We are attempting to bypass our datacenter’s name service and use Google’s public DNS service to restore service at a slightly degraded level until their ISP repairs the ripped line. This way service can still continue. Please stay tuned for more updates!
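For the technically curious, the bypass itself is simple: a Linux host asks whatever resolvers are listed in /etc/resolv.conf, so pointing that file at Google Public DNS (8.8.8.8 and 8.8.4.4) routes name lookups around the overloaded local resolvers. A rough sketch of the idea — exact steps vary by distribution, and this is illustrative rather than our exact procedure:

```shell
# Back up the current resolver configuration, then switch name lookups
# to Google Public DNS (8.8.8.8 / 8.8.4.4) to bypass overloaded resolvers.
cp /etc/resolv.conf /etc/resolv.conf.bak
printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' > /etc/resolv.conf
```

Keeping the backup file means the change is trivially reversible once the datacenter’s own resolvers recover.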

[Update 11:39 EDT] A rip was found in the fiber line at “Boulevard Monseigneur bridge in QB, Canada. A team from the ISP that serves this line has already been dispatched and is in process of repair”

[First report] We have just called and spoken with our datacenter; it turns out the sluggishness is not the result of our systems. Our data center, located in Canada, is suffering from a fiber line outage and is running on backup fiber lines as we speak until the issue is resolved. That being said, many packets are being dropped, causing a major service interruption for all of our clients at the moment, as well as for all of our data center’s other clients. Our servers are up and running, and the datacenter is working very hard with the fiber company to restore service. Speaking with a rep from the datacenter, they hope to have the issue fully resolved shortly.

For more news and updates on this please follow our blog at 

Again, we are very sorry, but there is nothing we can do during this outage other than prepare services for restart as soon as the fiber line is restored.

Thank You all for understanding.