High End Computing

Friday, May 09, 2008

multicoreinfo.com

Started a portal for multicore news, research papers, and other information at www.multicoreinfo.com.

Follow www.multicoreinfo.com for multicore processor information.

Tuesday, April 10, 2007

HPCS language resources

High Productivity Computing Systems (HPCS) is a DARPA program that is funding Cray, IBM, and Sun Microsystems to develop high-productivity languages for next-generation high-end computing. Here is a link to resources for those languages: http://crd.lbl.gov/~parry/hpcs_resources.html

Wednesday, March 21, 2007

Been a long time

It's been a long time since I wrote a blog post. A lot has changed since my last post, where I had posted my resume. I finished my Ph.D. in December 2006 and am now working as a postdoctoral researcher at IIT and as a resident associate at Argonne National Laboratory. My work involves developing a Server Push model for I/O. More details about this project are on our website: http://www.cs.iit.edu/~scs

I have also decided to use this blog to express my views on technical and non-technical issues I am interested in, instead of just High Performance Computing. I hope to keep blogging more often.

Tuesday, June 13, 2006

Resume

My resume is posted on my website. Please check it out @ http://www.cs.iit.edu/~suren/appl/skb-res.pdf

I am looking for a research position that utilizes my experience in optimizing data access performance for parallel computing applications.

Sunday, May 21, 2006

New Parallel Programming tools

Ease of programming, portability, and superior performance are the vital goals for achieving high productivity in HEC. Although MPI provides portability and high performance, it is widely known to be hard to program. That difficulty fuels the gap between the advances in parallel programming and programmers' knowledge that I mentioned in my May 10th post.
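To make the difficulty concrete, here is a minimal MPI sketch in C (my own illustration, not tied to any of the tools below; the file name and values are placeholders) that sums one integer per process. Even this trivial global operation needs explicit initialization, rank bookkeeping, and communication calls.

/* sum-example.c -- illustrative only: global sum of one value per process */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes are running? */

    int local = rank + 1, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}

Compile with mpicc and launch with mpirun (e.g., mpirun -np 4 ./sum-example).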

Many alternatives have been proposed in recent years to provide higher productivity. I mention some of them in this post.

Global Arrays: this toolkit provides a "shared memory" programming interface for distributed-memory computers, used to develop MIMD parallel programs.
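As a rough sketch of that interface (my own example using the Global Arrays C API; the array name, sizes, and MA allocator limits are just placeholders), a program can create a distributed array, fill it collectively, and then read any block with a one-sided get:

#include <stdio.h>
#include <mpi.h>
#include "ga.h"
#include "macdecls.h"

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    GA_Initialize();                        /* start the Global Arrays runtime */
    MA_init(C_DBL, 1000000, 1000000);       /* size GA's local memory allocator */

    int dims[1] = {1000};
    int chunk[1] = {-1};                    /* let GA choose the distribution */
    int g_a = NGA_Create(C_DBL, 1, dims, "vector", chunk);

    double one = 1.0;
    GA_Fill(g_a, &one);                     /* collective fill of the whole array */
    GA_Sync();

    int lo[1] = {0}, hi[1] = {9}, ld[1] = {10};   /* ld is unused for 1-D arrays */
    double buf[10];
    NGA_Get(g_a, lo, hi, buf, ld);          /* one-sided read, no matching send needed */

    if (GA_Nodeid() == 0)
        printf("first element = %f\n", buf[0]);

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}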

UPC: Unified Parallel C is an extension of the C programming language that provides the user with an explicit model of a distributed shared memory system.

Co-Array Fortran is an extension to Fortran 95 for single-program, multiple-data (SPMD) parallel processing.

Cluster OpenMP is an OpenMP extension for clusters. OpenMP is a popular parallel programming paradigm for shared-memory multiprocessors. Intel has extended this paradigm to clusters and provides it as a component of the Intel 9.1 compilers. The performance seems promising, but I have to test it someday to see how good it is.
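For context, this is the kind of pragma-based shared-memory loop that OpenMP expresses and that Cluster OpenMP extends across the nodes of a cluster (a minimal sketch of my own; compile with any OpenMP-capable compiler, e.g. gcc -fopenmp -std=c99):

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];
    double dot = 0.0;

    #pragma omp parallel for                     /* threads split the iterations */
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    #pragma omp parallel for reduction(+:dot)    /* each thread sums its share */
    for (int i = 0; i < N; i++)
        dot += a[i] * b[i];

    printf("dot = %f (max threads: %d)\n", dot, omp_get_max_threads());
    return 0;
}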

Sunday, May 14, 2006

Revitalization

High-end computing (HEC) is a major strategic tool for science, engineering, and industry. HEC simulations in various areas of science enable us to understand the world around us. They let us study systems that are too small (nanotechnology, biotechnology), too large (astrophysics, hurricanes, tsunamis, aircraft), or too dangerous (nuclear weapons) for direct experimental observation.

Over the past few decades, supercomputers have grown from a few gigaflops of computing power to hundreds of teraflops. New projects are already aiming for tens of petaflops. For anyone who doubts whether that much computing power is needed, there are many applications in the fields mentioned above that require far more than petaflops of computation. Here is a talk by Dr. David Bailey of LBNL that explains 21st-century high-end computing.

In 2003, the High End Computing Revitalization Task Force (HECRTF) was formed to develop a plan for undertaking and sustaining a robust Federal high-end computing program. In 2004, a Federal HEC plan was prepared. Numerous open research issues can be found in that plan.

Wednesday, May 10, 2006

Real Bottleneck

Here's an interesting article arguing that the gap between the advances of parallel computing and the knowledge of parallel application developers (scientists) is a serious one. Even though there are many optimized libraries and rapidly growing supercomputers, making them accessible to scientists in various fields is an important issue. For computer science researchers, the fight to overcome the gap between peak performance and sustained system performance (the divergence problem) is pivotal.