This week, Minnesota startup Silicon Informatics was awarded a Small Business Technology Transfer (STTR) contract by the U.S. Army Research Office to turn scalable parallel random number generation technology into products for HPC applications. Researchers from The University of Texas at San Antonio and Florida State University will participate in the research, which is intended to lead to the development and commercialization of software tools that help software applications realistically mimic complex phenomena.
“The extent to which computer modeling can reflect reality is often limited by the quality and scalability of the random number generation methods. The random number generator and the quality evaluation tool developed in this project will help remove this limitation,” said Boppana. “We feel very privileged to be selected by Silicon Informatics for this research and expect the methods we create to be applicable to a wide range of industries that model complex behaviors, from entertainment and finance to science and engineering.”
Read the Full Story.
In this slidecast, Josh Judd from Warp Mechanics describes the MicroPod HPC initiative. Currently a Kickstarter project, MicroPod HPC will enable users to “stand up” a parallel computer using inexpensive commodity hardware, or even use the images as VMs to run a completely virtual development environment.
“The MicroPod HPC is a parallel computer that you can afford to use at home. You can ‘stand up’ a parallel computer using inexpensive commodity hardware, or even use the images as VMs to run a completely virtual development environment. The intent is to provide a turn-key framework for R&D of parallel software, and to use as a learning tool.”
Read the Full Story * Download the MP3 * Download the Slides * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.
In our second story on crowdsourced HPC this week, Dean Sheaffer describes the Computing for the Advancement of Science Kickstarter project, which aims to leverage the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Donors will help fund five PC platforms designed to run only the BOINC client, 24/7/365. One nuance of the BOINC client is that it can utilize the CUDA processing cores native to modern graphics cards. With four advanced graphics cards (each with 1,500+ CUDA cores) in each of the five PC platforms, a total of 30,000 CUDA cores will be dedicated to advancing the scientific projects — all day, every day.
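The arithmetic behind that core count is easy to check directly, using only the figures stated above:

```python
# Totals for the proposed BOINC farm, using the figures quoted above.
cards_per_pc = 4        # advanced graphics cards per PC platform
cores_per_card = 1500   # CUDA cores per card (quoted as "1,500+")
pc_platforms = 5        # PC platforms the donations would fund

total_cores = cards_per_pc * cores_per_card * pc_platforms
print(total_cores)  # 30000, matching the project's stated total
```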
Read the Full Story.
In this video, Al Wegener from Samplify Systems presents: Numerical Encoding Shatters Exascale’s Memory Wall.
If you develop applications for your own internal research or development purposes, then linking APAX into your software can reduce your time to results when deploying on supercomputing sites or in the cloud. First, determine what encoding rates you can achieve with APAX by uploading your data to the APAX Profiler. The APAX Profiler will send you a report and a link to your decoded data file. You can download the data file and run it through your computing application to verify that your results have not changed.
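That last verification step can be sketched generically. Note this is not the APAX API; `run_simulation`, the toy data, and the tolerance below are all illustrative placeholders for your own application and accuracy requirements:

```python
import numpy as np

def results_unchanged(original_data, decoded_data, run_simulation, rel_tol=1e-6):
    """Generic check: run the same application on the original and the
    decoded inputs and compare the outputs within a relative tolerance.
    `run_simulation` is a hypothetical stand-in for your own code."""
    baseline = run_simulation(original_data)
    candidate = run_simulation(decoded_data)
    return np.allclose(baseline, candidate, rtol=rel_tol)

# Illustrative usage with a toy "application" (a simple sum):
original = np.linspace(0.0, 1.0, 1000)
# Stand-in for an encode/decode round trip introducing tiny errors:
decoded = original + np.random.default_rng(0).normal(0, 1e-9, 1000)
print(results_unchanged(original, decoded, lambda d: d.sum()))  # True
```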
The presentation was recorded at the HPC Advisory Council Stanford Conference 2013. Download the slides (PDF).
A new Kickstarter project is looking to build a parallel computer that you can afford to use at home. With the MicroPod, you will be able to “stand up” a parallel computer using inexpensive commodity hardware, or even use the images as VMs to run a completely virtual development environment.
Completing this project will dramatically expand the number of people who have access to basic parallel computing systems, which in turn will expand the number of people who know how to program and operate these systems. That, in turn, will allow more supercomputers to be built. This is important: the rate of scientific progress worldwide is largely limited by the number and speed of the supercomputers that scientists can access. All of today’s “big science” problems have to be modeled and analyzed on such machines, and there just aren’t enough to go around.
This is a very worthwhile cause and we at inside-Startups are hoping you can help them out. Read the Full Story.
Today Cycle Computing announced that it capped off a record-breaking fiscal year by winning the IDC HPC Innovation Excellence Award. IDC recognized Cycle’s 50,000-core utility supercomputer run in the Amazon Web Services (AWS) cloud for pharmaceutical companies Schrödinger and Nimbus Discovery. In an unprecedented run, the cluster completed 12.5 processor-years of computation in less than three hours. Running at a cost of less than $4,900 per hour, the computational drug discovery job was recognized by IDC for its impressive return on investment.
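The throughput claim is straightforward to sanity-check: 12.5 processor-years spread across 50,000 cores works out to just over two hours of wall-clock time, consistent with the “less than three hours” figure:

```python
# Sanity check of the throughput claim above.
HOURS_PER_YEAR = 365 * 24   # 8,760 hours

processor_years = 12.5
cores = 50_000

core_hours = processor_years * HOURS_PER_YEAR  # 109,500 core-hours
wall_clock_hours = core_hours / cores
print(round(wall_clock_hours, 2))  # 2.19 hours
```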
“In an industry that is evolving as rapidly as HPC, it’s fascinating to be a part of the creativity and innovation we’ve seen in the past year,” said Chirag Dekate, an analyst with IDC. “Cycle Computing’s impressive 50,000-core run for Schrödinger and Nimbus Discovery demonstrated a strong ROI from the use of HPC, and we were pleased to recognize their accomplishment.”
Cycle Computing also reported 85 percent client growth in 2012 and has expanded its sales and support teams. Read the Full Story or check out this interview with Cycle Computing CEO Jason Stowe from SC12.
In this video from SC12, Cycle Computing CEO Jason Stowe demonstrates how easy it is to use the company’s software to provision large compute instances on the AWS cloud.
CycleCloud is the leading software for creating HPC clusters in the cloud, from small clusters to Top500 supercomputer scale. CycleCloud makes it easy to deploy, secure, automate, and manage running calculations dynamically at large scale, up to 50,000 cores or more. Click here to start using CycleCloud. Companies use CycleCloud in production clusters for molecular modeling, risk analysis, bioinformatics/sequencing, semiconductor simulation, and document processing.
Read the Full Story.
In this video from SC12, Jason Stowe from Cycle Computing describes how the company helps customers maximize the utilization of existing supercomputing infrastructure and bridge the gap between traditional data centers and on-demand utility supercomputing.
In this video from SC12, Doug Johnson from Aeon Computing describes the company’s innovative Data Oasis technology powered by the Lustre file system.
“What advantages does Lustre offer as a foundation for a storage system? Bandwidth. Its performance scales out linearly as the file system builds out. The more object servers you have, the more network paths you have, and the faster your potential. It is the opposite of a large-scale monolithic NFS appliance with one spigot.”
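The scale-out claim can be sketched as a simple model: aggregate bandwidth is the sum across Lustre object storage servers (OSS), since each server adds its own network path, whereas a single-spigot appliance stays flat. The per-server figure below is hypothetical, not a measurement:

```python
def aggregate_bandwidth_gbs(num_oss, per_oss_gbs):
    """Scale-out model: aggregate bandwidth is the sum across object
    storage servers, since each OSS adds its own network path."""
    return num_oss * per_oss_gbs

PER_OSS_GBS = 2.0  # hypothetical GB/s delivered by one object storage server
for n in (4, 8, 16):
    print(n, "OSS ->", aggregate_bandwidth_gbs(n, PER_OSS_GBS), "GB/s")
```

Doubling the number of object servers doubles the modeled aggregate bandwidth, which is the linear scaling described in the quote.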
For more information, check out our exclusive interview with Aeon’s co-founder, Jeff Johnson.
In this video from SC12, Solarflare CEO Russell Stern describes the company’s new “bump in the wire” ApplicationOnLoad Engine (AOE). By enabling applications to be processed on the fly right on the server NIC adapter, the company is opening up a new paradigm of computation, transforming the way networks process data and overcoming performance obstacles that cannot be solved by simply adding more processors.
“Leveraging our high-performance 28-nm Stratix V FPGA, Solarflare has created a comprehensive firmware development kit that provides a straightforward integrated application development environment,” said Jeff Waters, senior vice president and general manager of the Military, Industrial and Computing Division of Altera. “With its ApplicationOnLoad Engine, Solarflare is delivering an integrated application on-load solution that enables application processing to be moved directly to the network adapter for lower latency, CPU offload or compliance.”
Read the Full Story.