Asia-Pacific’s most advanced datacentre coming to Asean

Thailand will have the first Tier IV Gold datacentre, as rated by the Uptime Institute, in Asia in the first quarter of 2017. This will make it the most advanced datacentre in the Asia-Pacific and the largest in Thailand, with capacity for more than 6,000 data server racks.

The Uptime Institute’s datacentre tier certification is awarded at four levels, and Tier IV represents a certification for a fault-tolerant site infrastructure.

Located in Hemmaraj Industrial Estate in Thailand’s eastern province Chonburi, the 11bn THB (US$300m) Supernap datacentre is being developed by Supernap International, together with a group of leading Thai organisations, including CPB Equity, Kasikorn Bank, Siam Commercial Bank and True IDC.

“The Supernap Thailand datacentre is a mirror of Switch Supernap US facilities, which are the first Tier IV Gold carrier-neutral colocation datacentres. This cutting-edge datacentre will meet the global demand for innovation in Asia-Pacific,” said Khaled Bichara, CEO of Supernap International.

The datacentre will generate “intense competition” in the industry for customers in Thailand, said Tuang Cheevatadavirut, senior market analyst at IDC.

“This will create a huge impact in the Thailand ICT/IT industry, as the datacentre supply will grow significantly and an international player such as Supernap will cater mostly to the premium customer segment in the Asian or Asean region. The local operators will need to compete with a new standard for quality, security and innovation to differentiate their services,” added Cheevatadavirut.

IDC expects the existing local operators to initially compete on price. Enterprises are expected to be attracted to their lower price points and may try outsourcing non-core business processes.

“Later, these local operators will realise that enterprises are willing to pay a premium for datacentre services that drive business growth through systems of engagement, insight and action, rather than maintain existing systems of record,” said Cheevatadavirut.

He said the datacentre is a result of a partnership between five different entities, which means some of the datacentre’s capacity will be utilised by the partners, with the remainder of the capacity being rented out to customers.

The Supernap datacentre is likely to be the most technologically advanced datacentre in the region, as there is no record of any Uptime Institute rated Tier IV datacentres in Japan, Singapore, India, South Korea or China, said Cheevatadavirut.

“These countries may use different standards or they may build to match Tier IV standard, but are not certified and registered with Uptime Institute.”

The Supernap datacentre will cover an area of nearly 12 hectares and will be built outside the flood zone, 110 metres above sea level. It sits 27km from the international submarine cable landing station that links the facility to national and international telecoms and IT carriers.


Google calls for datacentre storage disk design overhaul

Google wants the IT industry to join forces with the academic community to create a new generation of datacentre storage disks.

The search giant said this would help web-scale internet companies cope with the petabytes of data users upload to their services each day.

Google published a white paper proposing several design options for datacentre disks, among them scrapping the traditional 3.5in hard disk drive (HDD) form factor on capacity grounds.

“The current 3.5in HDD geometry was adopted for historic reasons – its size inherited from the PC floppy disk. An alternative form factor should yield a better TCO [total cost of ownership],” suggests the white paper.

“Changing the form factor is a long-term process that requires a broad discussion, but we believe it should be considered,” wrote co-authors Eric Brewer and Lawrence Ying.

“Although we could spec our own form factor (with high volume), the underlying issues extend beyond Google, and developing new solutions together will better serve the whole industry, especially once standardised.”

The company said it would like to see the IT industry invest in taller hard disk designs, with fewer points of failure.

“Current disks have a relatively small fixed height: typically 1 inch for 3.5in disks, and 16mm for 2.5in drives. Taller drives allow for more platters per disk, which adds capacity and amortizes the costs of packaging, the printed-circuit board and the drive motor/actuator,” the white paper states.

“Given a fixed total capacity per disk, smaller platters can yield smaller seek distances and higher RPM, and thus higher IOPs, but worse GB/$.”
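The platter trade-off the white paper describes can be sketched with a toy model. The scaling assumptions below (capacity per platter tracks platter area, seek distance tracks radius, throughput tracks linear velocity) are illustrative simplifications, not figures from Google's paper:

```python
# Toy model of the platter-size trade-off: capacity per platter scales
# with platter area (~r^2), average seek distance with radius (~r), and
# sustained throughput with linear velocity (~r * RPM). All values are
# multipliers relative to a baseline drive (baseline = 1.0 throughout).

def relative_disk_metrics(radius_scale, rpm_scale, platters=1.0):
    """Return (capacity, seek_distance, throughput) multipliers
    relative to a baseline single-platter drive."""
    capacity = platters * radius_scale ** 2   # area per platter x count
    seek_distance = radius_scale              # head travel distance
    throughput = radius_scale * rpm_scale     # linear velocity under head
    return capacity, seek_distance, throughput

# Halving platter radius while doubling RPM and platter count:
cap, seek, tput = relative_disk_metrics(radius_scale=0.5,
                                        rpm_scale=2.0,
                                        platters=2.0)
print(cap, seek, tput)  # → 0.5 0.5 1.0
```

Even doubling the platter count, the smaller-platter drive holds half the data, which is the "higher IOPs, but worse GB/$" tension the paper notes.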

The paper also puts forward a case for making the capacity of spinning disk storage more flexible, rather than relying on the use of fixed capacity drives, because of the TCO benefits this could bring.

For similar reasons, the document floats the idea of shifting the read and write cache from the disk to the host, claiming it could work out cheaper and more efficient.

In a similar vein, the company sets out the reasons why it will not use solid state disks (SSD) as a wholesale replacement for traditional spinning storage media for some time to come. 

“The cost per GB remains too high and, more importantly, the growth rates in capacity/$ between disks and SSDs are relatively close, so that cost will not change enough in the coming decade,” the white paper stated.

To emphasise why it’s pushing for these changes, Google revealed that YouTube users upload more than 400 hours of video content a minute, meaning the company has to bring online up to a petabyte of storage every day to meet demand.
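A back-of-envelope check shows how the 400 hours a minute figure maps onto roughly a petabyte a day. The ~4 Mbps average stored bitrate below is an assumption for illustration, not a figure Google has published:

```python
# Rough sanity check of the storage-demand figures quoted above.
UPLOAD_HOURS_PER_MINUTE = 400   # stated by Google for YouTube
ASSUMED_BITRATE_MBPS = 4        # hypothetical average stored bitrate

hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24        # 576,000 hours/day
bytes_per_hour = ASSUMED_BITRATE_MBPS * 1e6 / 8 * 3600   # 1.8 GB per hour
petabytes_per_day = hours_per_day * bytes_per_hour / 1e15

print(round(petabytes_per_day, 2))  # → 1.04
```

At that assumed bitrate, a single stored copy alone approaches the petabyte a day the company says it must bring online.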

With the company forecasting an exponential growth in data volumes over the coming years, the way it and other web-scale companies run their datacentres will need to adapt.

“Disks are a central element of cloud-based storage, whose demand far outpaces the considerable rate of innovation in disks. Exponential growth in demand implies that most future disks will be in datacentres and thus part of a large collection of disks,” the paper continues.

“We hope this is the beginning of a new era of datacentre disks and a new broad and open discussion about how to evolve disks for datacentres. The ideas presented here provide some guidance and some options, but we believe the best solutions will come from the combined efforts of industry, academia and other large customers.”


London overtakes Amsterdam as most in-demand location for datacentre space

London has overtaken Amsterdam as the leader of the colocation market, with adoption of datacentre capacity higher there than in any other European city over the past year.

That’s according to figures released by real estate advisory firm CBRE, which revealed that 25.9MW of colocation capacity was purchased in London during 2015, while 17.9MW changed hands in Amsterdam.

“London continued its recent fine form to end 2015 with 25.9MW of take-up in the year. This figure was substantially above its 2014 figure of 18.4MW and boosted by three consecutive quarters as Europe’s most in-demand market,” CBRE noted.

“Amsterdam was always going to find it tough to reach the levels of 2015, but a strong Q4 performance added gloss to what could have been a particularly poor year otherwise.”

Much of the demand for colocation space in London was driven by the retail sector, which purchased more than 10MW of the total capacity sold in the city last year, CBRE’s research showed.

“We expect London to continue to thrive during the early parts of 2016 as planned new facilities and continued take-up from the IT infrastructure universe continue to keep the market buoyant,” the report stated.

Andrew Jay, head of datacentre solutions for Europe, Middle East and Africa (Emea) at CBRE, said a lot of the increased demand for datacentre capacity was due to the activities of US firms.

“Today, most substantial new requirements are led by IT infrastructure firms coming out of the US West Coast. In essence, these companies – which are few in number – are single-handedly shaping the European markets,” said Jay.

“As we move through 2016, the expectation is we’ll see a surge of activity in Q1 and Q2, again driven by infrastructure companies, given the market is now primed for large-scale deals.

“Finally, concerns remain regarding data protection and privacy exacerbated by the European Court of Justice’s recent ‘invalid’ ruling with respect to Safe Harbour. We think this will have a positive impact on demand in the smaller European markets.”

Across the main European markets – which CBRE counts as Frankfurt, London, Amsterdam and Paris – the total supply of colocation capacity topped 827MW, up from 771MW in the fourth quarter of 2014, a rise of just over 7% overall.

Over the coming year, CBRE predicts demand for datacentre capacity will continue to steadily rise in Europe, as enterprises seek to outsource more of their IT infrastructure to third parties.

“In general 2015 levels of take-up across the European datacentre market were buoyed by a positive economic outlook and general market confidence,” the CBRE report stated.

“Whilst economic growth was not uniform across the region, the underlying drivers of demand for datacentre space appear to have remained solid in traditional and emerging markets.

“The trend to outsource IT infrastructure to third-party providers has remained an important characteristic across Europe and evidence from this final quarter of the year suggests this should continue.” 


Amazon seeks permission to build datacentre on site of former Irish biscuit factory

Amazon Web Services (AWS) has applied for permission to partially demolish a former biscuit factory in Ireland as it prepares to repurpose the site as a datacentre.

The infrastructure-as-a-service (IaaS) giant acquired the Jacob’s Biscuit site in Tallaght, South Dublin, during the summer of 2015, according to a report in the Irish Independent; the site had stood unoccupied for several years.

The company is thought to operate more than half a dozen datacentres in Ireland, bringing its total infrastructure spend to more than €1bn, the newspaper estimates.

The company has applied to South Dublin County Council to begin the preparatory work needed to convert the site into a datacentre, which includes a mix of demolition and removal work.

The application, submitted via its Amazon Data Services Ireland Limited (ADSIL) subsidiary, states that the company is seeking the council’s permission to partially demolish the main factory building and completely flatten various extensions and outbuildings covering 5,480m².

It also sets out the company’s plans to remove other “redundant service installations” including tanks, plant compounds and ancillary structures.

A decision on the application is due by 7 April 2016.

Ireland has emerged as a popular location for datacentre builds in recent years, thanks to its temperate climate, relatively low land costs and proximity to transatlantic undersea network cables.

This has seen the likes of Apple, Facebook and Microsoft embark on Irish datacentre builds, along with some smaller players, including Sungard and – more recently – Interxion.

However, concerns have emerged of late about how well-equipped the country is to meet the growing demand for power that these datacentre builds are likely to fuel.

Speaking to Computer Weekly, Ricky Cooper, vice-president for Europe, the Middle East and Africa (EMEA) at datacentre builder Digital Realty, said Ireland is facing some challenges around the supply of “new power” sources.

An in-depth report in the Irish Times highlighted the issue in early 2015, explaining how the country’s reliance on imported power from the UK could put it at risk of energy shortages in the years to come.

Part of the problem for the datacentre sector, Cooper said, is operators are often forced to overprovision power because of the difficulties they face trying to accurately gauge how much customers are likely to use over the course of their contracts.

“As yet, no-one has been able to gauge how much they’re using really accurately. At the same time, they’re virtualising and consolidating, meaning customers are entering into a contract for 500kW, but they might end up using 100kW,” said Cooper.

“There isn’t a mechanism to help them scale down and – in the meantime – we can’t sell that extra 400kW of power. That’s the problem,” he added.
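The overprovisioning problem Cooper describes is simple arithmetic, sketched here using the figures from his example (500kW contracted, 100kW actually drawn):

```python
# Stranded-capacity sketch: power an operator must reserve for a customer
# but cannot resell, plus the customer's utilisation of its allocation.

def stranded_power(contracted_kw, used_kw):
    """Return (stranded kW, utilisation fraction) for one contract."""
    stranded_kw = contracted_kw - used_kw     # reserved but unsellable
    utilisation = used_kw / contracted_kw     # share actually drawn
    return stranded_kw, utilisation

stranded, utilisation = stranded_power(contracted_kw=500, used_kw=100)
print(stranded, utilisation)  # → 400 0.2
```

At 20% utilisation, four-fifths of the contracted power sits stranded for the life of the contract, which is why operators end up overprovisioning supply.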


Spotify to move datacentre workloads to Google’s cloud

Spotify is to shift its IT infrastructure to Google Cloud Platform and wind down its reliance on private datacentres to keep up with the growing demand for its music streaming services.

The music service is reportedly used by around 75 million active users in 58 countries. Spotify lets users stream and listen to songs through mobile devices, desktop PCs and gaming consoles.

In a blog post outlining the move, Nicholas Harteau, vice president of engineering and infrastructure at Spotify, said the company had previously bought or leased datacentre space to provide users with local access to the 30 million songs in its catalogue.

However, as the Spotify user base has grown, meeting the growing demand for its services through datacentres has proven difficult, prompting the company to rethink its stance on cloud.

“Operating our own datacentres may be a pain, but the core cloud services were not at a level of quality, performance and cost that would make cloud a significantly better option for Spotify in the long run,” said Harteau.

“Recently that balance has shifted. The storage, compute and network services available from cloud providers are as high quality, high performance and low cost as what the traditional approach provides. This makes the move to the cloud a no-brainer for us.”

The company has decided to shift its infrastructure onto Google Cloud Platform, with Harteau talking up the range of services Spotify will have access to from a data processing and analytics perspective.

For instance, having access to Google’s BigQuery and Cloud Dataproc tools will let Spotify run complex data queries in minutes, rather than hours, to support its product development activities.

Spotify stopped short of outlining how long it expects the move off-premise to take, but Guillaume Leygues, lead sales engineer for Google Cloud Platform, explained in a blog post the size of the task at hand and the range of cloud services the company will adopt along the way.

The company intends to make use of Google Cloud Datastore and Google Cloud Bigtable to fulfil its storage requirements, as well as the cloud service’s Direct Peering, Cloud VPN and Cloud Router networking technologies.

“On the data side of things, the company is adopting an entirely new technology stack,” added Leygues. “This includes moving from Hadoop, MapReduce, Hive and a series of home-grown dashboarding tools, and adopting the latest in data-processing tools, including Google Cloud Pub/Sub, Google Cloud Dataflow, Google BigQuery and Google Cloud Dataproc.”

Spotify had previously been lauded by Amazon Web Services (AWS) as a reference customer, with its use of the Amazon Simple Storage Service (S3) the subject of an AWS case study.

Computer Weekly understands that Spotify’s use of AWS for the workloads outlined in the case study is set to continue, while Google Cloud Platform will play host to Spotify’s more “traditional” datacentre infrastructure. 

Computer Weekly contacted Spotify for further comment on this point, but had not received a response at the time of publication.

News of Spotify’s move to the Google cloud comes hot on the heels of Netflix’s announcement earlier this month about its completion of a seven-year push to shift the bulk of its IT infrastructure off-premise and into the AWS public cloud.
