Additional Sept 2024 article: Data centre cooling

For this Computer Weekly feature, we are looking into the state of the art for data centre cooling.

What ticks the boxes for effectiveness, efficiency, cost and energy use? Which solutions are easiest to fit into current data centres, and which improvements bring the greatest benefits?

The rise of new workloads such as AI means that data centre operators need to consider how to bring in more power to support large GPU clusters, as well as how to meet the cooling requirements of these chips.

So how will operators tackle those new power and cooling requirements?

The focus will be on technologies currently on the market that can be applied to existing data centres. We can touch on emerging technologies and those suited to new-build locations, but the emphasis of the feature is on practical options CIOs and data centre managers can buy now.

I am keen to hear from data centre owners/operators (including CIOs), industry consultants and analysts. Due to the nature of the feature, we are unlikely to quote equipment vendors directly, but am happy to receive information on their solutions.

To contribute to this piece, please contact me by email in the first instance. The deadline is Friday 23rd August.

Upcoming articles: September 2024

I am writing the following pieces, to appear in Computer Weekly in September.

Please note the earlier than usual deadlines, due to the Bank Holiday weekend.

How to succeed at cloud repatriation 

Deadline for input: 1700hrs, Friday 16th August

In this feature, we will look at how to make cloud repatriation work. This will include:

  • Which data are best suited to cloud repatriation?
  • Which workloads (and applications) benefit most from moving back on premises?
  • How should you prepare your private infrastructure for cloud repatriation? What are the potential pitfalls, and what could you overlook?
  • How do you future-proof your data and infrastructure if you repatriate from the cloud? How do you ensure you remain cloud-native?
  • How do you ensure you can reverse the decision and transition smoothly back to the cloud if the need arises, or use it for burst use cases?

I am looking for views from CIOs, analysts or consultants for this piece.

What you need to know about Kubernetes DR 

Deadline for input: 1700hrs, Wednesday 21st August

With Kubernetes becoming more widely used in enterprises, the question of how to ensure it can survive an outage or systems failure is ever more critical. We'll look at the points below; note this is different to Kubernetes backup, which we have covered before.

  • Why do we need DR for Kubernetes?
  • What are the challenges to doing DR for Kubernetes clusters? 
  • What are the infrastructure requirements of DR for Kubernetes?
  • What are the risks to Kubernetes environments that need to be mitigated by DR, and how do these differ from other environments?
  • What key points would a DR plan for a Kubernetes environment contain?
  • What kind of products can help with Kubernetes DR? 

Again, the preference is for analyst and consultant views, not vendors'.

To contribute to any of these pieces, please contact me by email in the first instance.