“India’s Most Powerful Super Computer Could Aid In Doubling Farmers’ Income”
- Pratyush is an array of computers that can deliver a peak performance of 6.8 petaflops.
- One petaflop is a million billion (10^15) floating-point operations per second, a measure of a system’s computing capacity.
- The machines will be installed at two government institutes: a 4.0-petaflop HPC facility at the Indian Institute of Tropical Meteorology (IITM), Pune, and a 2.8-petaflop facility at the National Centre for Medium Range Weather Forecasting, Noida.
“MIHIR” – HIGH PERFORMANCE COMPUTER SYSTEM
Ministry of Earth Sciences (MoES)
India on Tuesday commissioned its High Performance Computer (HPC) system – named ‘Mihir’ (meaning Sun) – at the National Centre for Medium Range Weather Forecasting at Noida, Uttar Pradesh.
“Presently, about 24 million farmers receive these advisories, which carry district-level weather forecast information. It is planned to reach about 45 million farmers by July 2018.”
- The facility will improve India’s capacity in weather forecasting and help it produce forecasts down to the block level (for about 6,500 blocks) across the country later this year. At present, such forecasts are available only at the district level.
- The new system will be India’s largest HPC facility in terms of peak capacity and performance and will propel India’s ranking from the 368th position to the 30th in the list of top 500 HPC facilities in the world.
- A petaflop computer can perform one quadrillion (one thousand trillion) operations per second. A power guzzler, it requires one megawatt of electricity at full load.
- A teraflop machine can carry out one trillion (one million million) operations per second, a grade below a petaflop machine.
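To make the tera/peta scale concrete, here is a quick back-of-envelope calculation; the 10^18-operation workload is a made-up illustration, and only the 6.8-petaflop peak figure comes from the notes above:

```python
# Back-of-envelope: how long would 10**18 floating-point operations take
# on a 1-teraflop machine versus a machine running at Pratyush's
# 6.8-petaflop peak? (The workload size is hypothetical.)
TERAFLOP = 10**12  # operations per second
PETAFLOP = 10**15  # = 1000 teraflops

total_ops = 10**18

time_tera = total_ops / TERAFLOP          # seconds on a 1-teraflop machine
time_peak = total_ops / (6.8 * PETAFLOP)  # seconds at 6.8 petaflops peak

print(f"1 teraflop    : {time_tera:,.0f} s (~{time_tera / 3600:.0f} h)")
print(f"6.8 petaflops : {time_peak:.1f} s")
```

The same workload that would keep a teraflop machine busy for days finishes in minutes at petaflop scale, which is why block-level, high-resolution forecasting needs this class of machine.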
- The new facility would be used to generate weather forecast at the subdivision level, which means alerts on temperature, rainfall and extreme events for 6,500 such blocks.
- “The HPC project cost us Rs 450 crore and we intend to start the block level forecast by June 2018,”
- Currently such forecasts are available at the district level through 130 agro-meteorological field units.
- The weather agency has prepared experimental forecasts for 12 km × 12 km patches of land for some of the sub-divisions. Operational forecasts will now be made for 115 sub-divisions to start with, and expanded later.
What is High Performance Computing?
- There is no single clear definition; broadly it covers:
– Computing on high-performance computers
– Solving problems / doing research using computer modeling, simulation and analysis
– Engineering design using computer modeling, simulation and analysis
- The discipline’s main focus is developing parallel-processing algorithms and software, so that a program can be divided into small independent parts that run simultaneously on separate processors
- HPC systems have shifted from monolithic supercomputers to computing clusters
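The core idea above – dividing a program into small independent parts executed simultaneously by separate processors – can be sketched minimally as follows; summing squares is an illustrative stand-in for a real workload:

```python
# Minimal sketch of parallel processing: the problem is divided into
# small independent parts, each part is executed by a separate worker
# process, and the partial results are combined at the end.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # Each chunk is independent: no worker needs data from another.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1, 10_001))
    # Divide the input into 4 roughly equal, independent parts.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:  # 4 separate worker processes
        partials = pool.map(partial_sum_of_squares, chunks)
    total = sum(partials)            # combine the partial results
    print(total)  # equals the serial sum of squares of 1..10000
```

The answer is identical to the serial computation; the gain is that the four chunks can run on four processors at once.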
Who Uses High-Performance Computing?
– Research institutes, universities and government labs
- Weather and climate research, bioscience, energy, military, etc.
– Engineering design: behind more or less every product we use
- Automotive, aerospace, oil and gas exploration, digital media, financial simulation
- Mechanical simulation, package design, silicon manufacturing, etc.
Related terms:
– Parallel computing: computing on parallel computers
– Supercomputing: computing on the world’s 500 fastest supercomputers
When Do We Need High Performance Computing?
- Case1: Complete a time-consuming operation in less time
- Case 2: Complete an operation under a tight deadline
- Case 3: Perform a high number of operations per second
What is a Cluster?
A cluster is a group of machines interconnected so that they work together as a single system. Common cluster types:
o Storage
Storage clusters provide a consistent file-system image, allowing simultaneous reads and writes to a single shared file system
o High-availability (HA)
HA clusters provide continuous availability of services by eliminating single points of failure
o Load balancing
Load-balancing clusters send network service requests to multiple cluster nodes so as to balance the request load among the nodes
o High performance
High-performance clusters use their nodes to perform concurrent calculations, allowing applications to work in parallel for better performance; also referred to as computational clusters or grid computing
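The load-balancing behaviour described above can be sketched in a few lines; round-robin is one common dispatch policy, and the node names and request labels below are made up for illustration:

```python
# Sketch of round-robin load balancing across cluster nodes.
# (Node names are hypothetical; a real balancer would also track node
# health and drop failed nodes, as HA clusters do.)
from itertools import cycle

nodes = ["node-1", "node-2", "node-3"]
next_node = cycle(nodes)  # endless round-robin iterator over the nodes

def dispatch(request):
    """Send the incoming service request to the next node in rotation."""
    target = next(next_node)
    return f"{request} -> {target}"

for req in ["req-A", "req-B", "req-C", "req-D"]:
    print(dispatch(req))
```

Each request goes to the next node in turn, so no single node bears the whole load; after the last node the rotation wraps back to the first.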
Up to now, performance increases have largely come from increasing the density of transistors
- But there are inherent problems
- A little Physics lesson –
– Smaller transistors = faster processors
– Faster processors = increased power consumption
– Increased power consumption = increased heat
– Increased heat = unreliable processors