
How is t3n.de hosted?

While a mass-market hosting provider was still enough in 2005, today t3n.de needs a hosting infrastructure to match. In this article we give some insights behind the scenes: How has t3n.de been hosted over the years? Which solutions were used? When and why did the setup change?

The early days at the mass hosting provider

When we launched t3n.de in 2005, our web presence sat alongside many other web projects on a Debian root server at a large hosting provider in southern Germany. At that time the Apache web server that delivered our website and our MySQL server ran on the same machine, both with Debian default settings. We thought that was a fine solution. At least for a while. More precisely: until someone came up with the idea that our great print magazine could be ordered in our shop for free for a limited time. What followed was the first total outage in our web hosting history: within a few minutes, our shop was no longer reachable.

At a moment like that you naturally ask yourself whether the MySQL settings could have been tuned better, whether the Apache web server was configured properly, what options there might have been for better caching or for using a PHP bytecode cache, and so on. One day later we had a dedicated new root server – the largest our mass hosting provider had on offer at the time (though still a self-assembled box in a desktop chassis). Still, one problem remained: despite the new server, I was still the only person responsible for running our website and had to think about what our future setup should look like.

Same host, new server – no solution

Now we had this great new server with an AMD processor and an incredible 2 GB of RAM – figures we would laugh about today. All services were still running on a single machine, but at least there were no other web hosting projects on it anymore. Over time, more and more internal ideas came up at t3n.de. We launched new portals such as the “socialnews” portal (then under the name hype!) and other directories such as our job portal, a marketplace for service providers, an open source directory and a startup directory. All of a sudden we needed features like single sign-on, internal APIs, newsletters, generated sitemaps and more. We published more content, were linked more widely, and were indexed better by the major search engines.

In short: everything became more complex, our traffic kept growing, and at the same time we grew ever more dissatisfied with our root server. The raw hardware performance was not the problem. Rather, the budget concept of the hosting provider, which we had found so “great” in our early days, and the poor overall availability that came with it were causing us trouble – unfortunately, “poor availability” had not been mentioned in the product description when we ordered 😉

When our server was then completely unreachable for two days because of an extensive cable fire in the data center, after which all the wiring had to be replaced, we decided it was a good time to rethink our hosting strategy.

The naive era of our own server cabinet in our own server room

Coincidentally, our need for change coincided with our move to a new office. It had a “server room” in which we could at least put up a 19-inch rack, and with 35 Mbit the connection was quite luxurious for the time.

In addition, a web agency in the same building also operated hosting, and we now had a system administrator trainee on our team, so moving our hosting from a budget provider into our own server room looked like a sensible plan. For a while it even was: we could put together our own servers, and at the same time we were smart enough to think about the most basic redundancy features, such as hard drives in RAID 1 and a backup power supply (at least inside the rack).

Despite all the effort we put into assembling the rack, the servers, the redundant power supply and so on, in the end we were left with the realization that even under reasonably good conditions, our team, which had by now grown to two people (including me), could never keep up with the demands that rose with every increase in traffic.

We had no influence on a construction excavator cutting our fiber-optic cable five kilometers away, which left us offline for five hours. We had no influence on the fact that the building's air conditioning had been completely under-dimensioned for a server room, and it would have made no sense for us to implement our own cooling solution. On top of that, our landlord eventually came up with the idea of no longer charging a flat rate for the server space, as initially agreed, but billing the electricity consumed on top. The electricity meter that was subsequently installed then wiped out our already optimistic cost calculation. We had to come up with something new again. In addition, our traffic grew every day, and we realized …
