Over at insideHPC, the Rich Report is featuring a slidecast with Axel Kloth from SSRLabs. The Silicon Valley startup has developed an innovative Big Data coprocessor architecture optimized for bandwidth and power efficiency.
“Scalable Systems Research Labs is a Silicon Valley startup focused on the development and sale of a family of coprocessors that address the ‘Big Data’ problem by accelerating applications for customers who demand higher performance but whose power supply or cooling capacity is limited. These coprocessors work with a variety of standards-based application programming interfaces (APIs). SSRLabs’ family of coprocessors delivers floating-point computation and analysis of multi-dimensional datasets at substantially higher performance and lower power consumption than traditional processors.”
Read the Full Story.
In this video from ISC’13, Ian Lintault from nCore HPC describes the company’s innovative BrownDwarf supercomputer technology.
“The BrownDwarf Y-Class system is an incredibly important milestone in HPC system development,” said Ian Lintault, managing director of nCore HPC. “Working in close collaboration with TI, IDT and our hardware partner Prodrive, we have successfully established a new class of energy efficient supercomputers designed to fulfill the demands of a wide range of scientific, technical and commercial applications.”
Check out more from the show at our ISC’13 Video Gallery.
In a 2009 interview with insideHPC, data scientist Thomas Thurston talked about research he had done predicting that ARM CPUs were on a path to disrupt X86 in HPC. This was the first time most of us had considered the idea of cell phone CPUs someday being relevant for HPC and, frankly, it caused a bit of a fuss. So we asked him to elaborate a year later, which he did in the 2010 article Armed Invasion of HPC?. The fallout from that discussion ranged from constructive to destructive. Some thought it was a provocative idea; others thought it was offensively naïve.
That was then. This is now.
Despite the skeptics at the time, it seems Thurston was onto something. Just today, nCore launched BrownDwarf, an actual ARM- and DSP-based supercomputer. What started in cell phones has moved up into smartphones, tablets, servers, and now even supercomputers.
It’s still early, but things are starting to pop. This year alone, Nvidia came out with its Kayla GPU-ARM development platform. The Barcelona Supercomputing Center announced its Pedraforca cluster, which will combine ARM CPUs, GPUs, and InfiniBand. Even AMD, a bastion of X86, this year announced a server strategy based on ARM CPUs codenamed “Seattle.” The sound of ARM began as a whisper, but it has quickly become thunder in Intel’s ears.
For those who don’t know, Thurston is the world’s leading expert at predicting whether businesses will survive or fail. He does this through predictive modeling and data science, and he has worked with heavyweights like Harvard’s Clayton Christensen and tech investing titan Bill Hambrecht. He’s also a venture capitalist and a hedge fund manager. We caught up with Thurston today to share the news on BrownDwarf and get his thoughts on the burgeoning ARM renaissance in HPC.
“As early as 2007, we had models predicting ARM would become a disruptive threat to X86 in HPC over the following seven to ten years. It’s happening a little faster than our original forecasts, but it is basically playing out note for note. Back then we saw ARM moving up from smartphones into tablets (there was no iPad at the time) and low-end laptops. Next it would move into servers and even HPC. Back then everyone was very dismissive of our predictions and sometimes even rude. They said we clearly didn’t know what we were talking about. It turns out we were right, and several other folks saw this coming too. Now it’s undeniable. I can’t wait to see what happens next. The billion-dollar question is: how will Intel respond?”
In this video from the Lustre User Group 2013, Jeff Johnson from Aeon Computing describes the company’s Hydra storage arrays, which marry the speed of Lustre with the HA capabilities of ZFS.
“The Hydra60 combines a Lustre OSS (object storage server) and OST (object storage target), with two active/active failover nodes and shared storage in a single system chassis built on an ultra-dense, 60-drive, 6Gb SAS storage infrastructure. With a unified, zonable, dual-ported 6Gb SAS backplane and drives, the Hydra60 sustains remarkable performance while providing high availability to volumes or object storage. With external interface options including FDR InfiniBand, 40GbE/10GbE and 1Gb Ethernet, and with support for Linux and Lustre 2.x releases, the Hydra60 makes an excellent storage platform for Lustre performance with HA operation. The Hydra60’s design provides an affordable, redundant, and resilient storage platform by leveraging RAIDZ, thereby eliminating the cost of hardware RAID controller technology.”
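Aeon doesn’t spell out how the 60 drives are carved into RAIDZ groups, but the capacity-versus-parity trade-off behind the “no hardware RAID controller” claim is easy to see with a rough Python sketch. The vdev layouts and drive size below are hypothetical, purely for illustration:

```python
def raidz_usable(drives_total, vdev_width, parity, drive_tb):
    """Approximate usable capacity when a chassis is carved into equal
    RAIDZ vdevs. Ignores ZFS metadata, padding, and hot spares."""
    vdevs = drives_total // vdev_width
    data_drives = vdevs * (vdev_width - parity)
    return vdevs, data_drives * drive_tb

# Hypothetical layouts for a 60-drive, Hydra60-style chassis with 4 TB drives
for width, parity in [(10, 2), (12, 2), (15, 3)]:
    vdevs, usable = raidz_usable(60, width, parity, 4)
    overhead = 1 - usable / (60 * 4)
    print(f"{vdevs} x RAIDZ{parity} ({width} drives wide): "
          f"{usable} TB usable, {overhead:.0%} parity overhead")
```

Whatever the actual layout, the design point stands: parity lives in software, so the dual-ported SAS backplane only has to present raw drives to both failover nodes.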
For more on Lustre, check out our LUG 2013 Video Gallery.
In this video from LUG 2013, Jeff Johnson from startup Aeon Computing presents an overview of the company’s innovative Lustre storage solutions.
“There are many storage solutions available in the market, but not all of them do Lustre well. We set out to design a platform tailored to Lustre data and I/O profiles. Part of that design, in addition to performance, follows Aeon Computing’s business philosophy: there is no unnecessary, extraneous bull___t that gets in the way.”
Read the Full Story.
Over at GigaOm, GigaStacey writes that the solution for better and faster storage may lie in DSSD, a stealthy chip startup backed by Andy Bechtolsheim. Founded in 2010 by Sun alums Jeff Bonwick and Bill Moore, DSSD is trying to build a chip that would improve the performance and reliability of flash memory for high performance computing, newer data analytics, and networking.
“My sources tell me the startup is building a new type of chip — they said it’s really a module, not a chip — that combines a small amount of processing power with a lot of densely packed memory. The module runs a pared-down version of Linux designed for storing information on flash memory, and is aimed at big data and other workloads where reading and writing information to disk bogs down the application. This fits with the expertise of the team, but it is a problem that others are also trying to solve, with faster and cheaper SSDs and with targeted software to optimize the flow of bits to a database. The proposal here, though, appears to be about designing an operating system that exploits the differences between flash memory and hard drives to boost I/O.”
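DSSD hasn’t said how its flash-oriented software actually works, but one well-known way to exploit flash’s characteristics (fast random reads, a strong preference for sequential appends over in-place updates) is a log-structured store: every write is appended to a log, and an in-memory index maps keys to log offsets. The Python sketch below is a toy illustration of that general idea, not DSSD’s design:

```python
import os

class LogStore:
    """Toy log-structured key-value store: all writes are sequential
    appends, reads are random seeks -- a pattern that suits flash far
    better than the in-place update schemes built for spinning disks.
    (Illustrative only; not DSSD's actual design.)"""

    def __init__(self, path):
        self.f = open(path, "ab+")
        self.index = {}  # key -> (offset, length) of latest value in the log

    def put(self, key, value):
        data = value.encode()
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        self.f.write(data)           # append-only: never overwrite in place
        self.f.flush()
        self.index[key] = (offset, len(data))

    def get(self, key):
        offset, length = self.index[key]
        self.f.seek(offset)          # random read: cheap on flash
        return self.f.read(length).decode()

store = LogStore("/tmp/logstore.dat")
store.put("answer", "42")
store.put("answer", "43")            # update = new append; old record becomes garbage
print(store.get("answer"))           # -> 43
```

Real systems add crash-safe index recovery and garbage collection of stale records on top of this, which is where the hard engineering lives.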
Read the Full Story.
In this video from the 2013 GPU Technology Conference, Ivan Dimkovic and Ana Balevic describe the ground-breaking DigiCortex Engine. Recently ported to CUDA, the application has seen huge speedups with GPU computing.
“DigiCortex is my hobby project implementing large-scale simulation and visualization of biologically realistic cortical neurons, synaptic receptor kinetics, axonal action-potential propagation delays, as well as long-term and short-term synaptic plasticity. The current version of DigiCortex is heavily optimized for Intel CPUs (including the Sandy Bridge AVX instruction set). The first CUDA-enabled version with GPU acceleration (CUDA optimizations by Ana Balevic) is available as of v0.95.”
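The clip doesn’t detail DigiCortex’s neuron equations, so the sketch below is not its actual code. But the widely used Izhikevich spiking-neuron model gives a feel for the per-neuron, per-timestep update such simulators run: the same arithmetic applied across every neuron, which is exactly the pattern that maps onto AVX lanes or CUDA threads. A hypothetical NumPy version:

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """One timestep of the Izhikevich spiking-neuron model over a whole
    population at once (v, u, I are per-neuron arrays). Vectorized
    updates like this map directly onto AVX lanes or CUDA threads."""
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    fired = v >= 30.0            # spike threshold
    v[fired] = c                 # reset membrane potential
    u[fired] += d                # reset recovery variable
    return fired

# One million regular-spiking neurons driven by noisy input current
n = 1_000_000
v = np.full(n, -65.0)
u = 0.2 * v
for step in range(100):
    I = 5.0 * np.random.randn(n)
    fired = izhikevich_step(v, u, I)
```

The hard part of a simulator like DigiCortex is the synaptic bookkeeping (propagation delays, plasticity), but even this toy loop shows why the neuron-state update parallelizes so well.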
The simulation footage in this video is really gorgeous, so be sure to watch it in HD mode. Read the Full Story.