
How just 30 machines beat a warehouse-sized supercomputer to set a new world record


An IBM/Nvidia cluster completed a one-billion cell Echelon petroleum reservoir simulation (shown above) in 92 minutes.


Image: IBM/Stone Ridge Technologies

A high-performance computing record set by a cluster of more than 22,000 compute nodes has been shattered by just 30 machines.

The massive reduction in computing infrastructure needed to set a new record for simulating oil, water, and gas flow was made possible by tapping into the huge parallel processing ability of graphics-processing units (GPUs).


While the original record, set by ExxonMobil just a few months ago, used a cluster of more than 700,000 processors to run the simulation, this new approach by IBM and Stone Ridge Technology relied on 30 IBM OpenPower servers equipped with 120 Nvidia Tesla P100 GPU accelerators.

The IBM/Nvidia cluster completed a one-billion-cell Echelon petroleum reservoir simulation in 92 minutes, faster than ExxonMobil’s approach and using what IBM says is one-tenth of the power and one-hundredth of the space.

The IBM Power System S822LC machines, codenamed Minsky, used in this latest test each pair two IBM POWER8 CPUs with four Tesla P100 GPUs, connected via Nvidia's high-speed NVLink interconnect at 40 GB/s of bidirectional bandwidth.

Sumit Gupta, IBM's VP of high-performance computing and analytics, said the result demonstrates the strengths of this tightly coupled GPU/CPU architecture.

“The bottom line is that by running Echelon on Minsky, users can achieve faster runtimes using a fraction of the hardware,” he said.

“One recent effort used more than 700,000 processors in a server installation that occupies nearly half a football field. Stone Ridge did this calculation on two racks of IBM machines that could fit in the space of half a ping-pong table.”

GPUs excel at tasks that can be broken into subtasks, which can then be handled in parallel by the thousands of power-efficient cores within each GPU. Each P100 GPU has 3,584 cores available for parallel compute, giving the 120-GPU cluster more than 430,000 cores to run the simulation.
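The structure that makes reservoir simulation a good fit for this kind of parallelism can be sketched in a few lines. The snippet below is an illustrative toy, not Echelon's actual code: it performs one relaxation step on a 2-D grid of cells where each cell's update depends only on its immediate neighbours, so every cell can be computed independently, which is exactly the property that lets a GPU assign cells to its thousands of cores.

```python
import numpy as np

def jacobi_step(pressure):
    """One data-parallel relaxation step over a 2-D grid of cells.

    Every interior cell is replaced by the average of its four
    neighbours; no cell's update depends on another cell's *new*
    value, so all updates can run in parallel.
    """
    p = pressure
    new = p.copy()
    # Update all interior cells at once (vectorized here; one
    # GPU thread per cell in a real CUDA implementation).
    new[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                              p[1:-1, :-2] + p[1:-1, 2:])
    return new

# Toy grid with a fixed-pressure boundary along the top edge.
grid = np.zeros((100, 100))
grid[0, :] = 1.0
for _ in range(50):
    grid = jacobi_step(grid)
```

A production simulator solves far more complex flow equations over a billion cells, but the same per-cell independence is what maps the workload onto hundreds of thousands of GPU cores.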

While the range of tasks suited to the massively parallel processing offered by GPUs is still limited, Nvidia says it continues to grow, and today includes computational fluid dynamics, structural mechanics, climate modeling, and other tasks related to manufacturing and scientific discovery.

Nvidia recently began pushing its GPUs as a technology suited to training machine-learning models, particularly the Tesla P100 GPU and its own DGX-1 server.

This month, IBM also added the Nvidia Tesla P100 GPU to the IBM Cloud, providing the option to equip individual IBM Bluemix bare-metal cloud servers with two P100 accelerator cards.


(via PCMag)

