The Hydra60 is a combination Lustre OSS (object storage server) and OST (object storage target) with two active/active failover nodes and shared storage in a single chassis, built on an ultra-dense 60-drive 6Gb SAS storage infrastructure. With a unified, zonable, dual-ported 6Gb SAS backplane and drives, the Hydra60 sustains remarkable performance while providing high availability for volumes and object storage. With external interface options including FDR InfiniBand and 40GbE, 10GbE, and 1Gb Ethernet, and support for Linux and Lustre 2.x releases, the Hydra60 makes an excellent storage platform for Lustre performance with HA operation. The design of the Hydra60 provides an affordable, redundant, and resilient storage platform by leveraging RAIDZ, thereby eliminating the cost of hardware RAID controller technology.
In this video from LUG 2013, Jeff Johnson from startup Aeon Computing presents an overview of the company’s innovative Lustre storage solutions.
There are many storage solutions on the market, but not all of them do Lustre well. We set out to design a Lustre platform that excels at Lustre data and I/O profiles. Part of that design, in addition to performance, follows Aeon Computing’s business philosophy: no unnecessary, extraneous bull___t gets in the way.
Over at GigaOm, GigaStacey writes that the solution for better and faster storage may lie in DSSD, a stealthy chip startup backed by Andy Bechtolsheim. Founded in 2010 by Sun alums Jeff Bonwick and Bill Moore, DSSD is trying to build a chip that would improve the performance and reliability of flash memory for high performance computing, newer data analytics, and networking.
My sources tell me the startup is building a new type of chip — they said it’s really a module, not a chip — that combines a small amount of processing power with a lot of densely packed memory. The module runs a pared-down version of Linux designed for storing information on flash memory, and is aimed at big data and other workloads where reading and writing information to disk bogs down the application. This fits with the expertise of the team, but it is a problem that others are trying to solve as well, with faster and cheaper SSDs and targeted software to optimize the flow of bits to a database. The proposal here, however, appears to be about designing an operating system that takes advantage of how flash memory differs from hard drives to boost I/O.
DigiCortex is my hobby project implementing large-scale simulation and visualization of biologically realistic cortical neurons, synaptic receptor kinetics, axonal action potential propagation delays, as well as long-term and short-term synaptic plasticity. The current version of DigiCortex is heavily optimized for Intel CPUs (including the Sandy Bridge AVX instruction set). The first CUDA-enabled version with GPU acceleration (CUDA optimizations done by Ana Balevic) is available as of v0.95.
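To give a flavor of the kind of per-step state update a cortical simulator performs, here is a minimal leaky integrate-and-fire neuron in plain Python. This is a far simpler model than DigiCortex’s biologically realistic neurons, and all parameter values are illustrative choices, not DigiCortex’s.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: the membrane voltage
# leaks toward a resting value, is driven by input current, and emits a
# "spike" (then resets) whenever it crosses a threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Integrate membrane voltage over time; return spike times."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus the input drive.
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_threshold:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset               # reset after the action potential
    return spikes

# Constant drive strong enough to make the neuron fire repeatedly.
spike_times = simulate_lif([1.2] * 200)
print(len(spike_times) > 0)  # True: the neuron spikes several times
```

Real cortical simulators run millions of such updates per time step across large synaptically coupled populations, which is why AVX and CUDA optimizations matter so much.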
The simulation footage in this video is really gorgeous, so be sure to watch it in HD mode. Read the Full Story.
In this slidecast, Minesh Amin from MBA Sciences presents on the latest release of SPM.Python. Amin was recently awarded a patent for the technology, which now includes support for exploiting parallelism using GPUs by way of PyCUDA. With this new feature in place, SPM.Python enables programmers to exploit parallelism in a fault-tolerant manner across all three levels of abstraction: servers, cores, and GPUs.
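The fault-tolerance idea described above can be sketched with only the Python standard library. SPM.Python’s actual (patented) API is different and not modeled here; this only illustrates the general pattern of isolating task failures so one bad task cannot take down a whole parallel run. The `risky_task` function is a hypothetical stand-in.

```python
# Fault-tolerant task farming: run tasks in parallel, collect successes,
# and record failures instead of letting one exception abort the run.
from concurrent.futures import ThreadPoolExecutor, as_completed

def risky_task(n):
    """A hypothetical worker task that fails on one input."""
    if n == 3:
        raise ValueError("simulated failure on task 3")
    return n * n

def run_fault_tolerant(tasks, workers=4):
    results, failures = {}, {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(risky_task, t): t for t in tasks}
        for fut in as_completed(futures):
            task = futures[fut]
            try:
                results[task] = fut.result()  # collect successes
            except Exception as exc:          # record failures, keep going
                failures[task] = str(exc)
    return results, failures

ok, failed = run_fault_tolerant(range(6))
print(sorted(ok), sorted(failed))  # [0, 1, 2, 4, 5] [3]
```

A production system like SPM.Python layers this idea across machines and GPUs, not just threads, but the core contract is the same: a failed task is reported, not fatal.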
This week Nvidia announced that 16 startups using the massive computing power of GPU technology will participate in the Emerging Companies Summit. The event takes place March 20 in San Jose, California.
A highlight of the GPU Technology Conference (GTC), ECS will feature 16 companies from around the world. This year’s conference and summit have expanded to include companies focused on mobile computing, game development and cloud-based technologies. Also participating are companies advancing areas as diverse as visual search, financial services and medical diagnostics. Five top startups will be recognized for their innovation in a competition with more than $75,000 in prizes.
Do you have an idea that will change the world of computing? The SC13 conference is seeking proposals for the Emerging Technologies Track, which is a new element of their Technical Program. Aimed at providing an exhibit showcase for novel projects at a national or international scale, the Emerging Technologies Track differs from other aspects of the technical program in that it will provide a forum for discussing large-scale, long-term efforts in HPC, networking, storage, and analysis.
Emerging Technologies welcomes exhibitions of real hardware prototypes and demonstrations of software as well as project presentations in poster form, animated displays, and scheduled presentations or discussions. Successful projects will display future technologies with the potential to influence computing and society as a whole.
Submissions are due July 31, 2013. Read the Full Story.
This week Minnesota startup Silicon Informatics was awarded a Small Business Technology Transfer (STTR) contract by the U.S. Army Research Office to advance scalable parallel random number generation technology into products for HPC applications. Scholars from The University of Texas at San Antonio and Florida State University will participate in the research, which will ultimately lead to the development and commercialization of software tools that can help software applications realistically mimic complex phenomena.
“The extent to which computer modeling can reflect reality is often limited by the quality and scalability of the random number generation methods. The random number generator and the quality evaluation tool developed in this project will help remove this limitation,” said Boppana. “We feel very privileged to be selected by Silicon Informatics for this research and expect the methods we create to be applicable to a wide range of industries that model complex behaviors, from entertainment and finance to science and engineering.”
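The core problem scalable parallel RNG addresses is giving each worker a statistically independent stream rather than replaying one sequence N times. Silicon Informatics’ tools are not public, so as an unrelated illustration of the same idea, NumPy’s `SeedSequence.spawn()` derives independent child streams from one root seed:

```python
# Derive independent per-worker random streams from a single root seed.
# This uses NumPy's SeedSequence machinery, which is designed so that
# spawned children produce statistically independent streams.
import numpy as np

def make_parallel_streams(root_seed, n_workers):
    """Return n_workers independent generators from one root seed."""
    root = np.random.SeedSequence(root_seed)
    return [np.random.default_rng(child) for child in root.spawn(n_workers)]

streams = make_parallel_streams(root_seed=42, n_workers=4)
draws = [rng.random() for rng in streams]
# Independent streams: the four workers see different values.
print(len(set(draws)) == len(draws))  # True
```

Naive schemes such as seeding every worker with `seed + rank` can produce correlated streams for some generators, which is exactly the quality problem the research above targets.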
In this slidecast, Josh Judd from Warp Mechanics describes the MicroPod HPC initiative. Currently a Kickstarter project, MicroPod HPC will enable users to “stand up” a parallel computer using inexpensive commodity hardware, or even use the images as VMs to run a completely virtual development environment.
“The MicroPod HPC is a parallel computer that you can afford to use at home. You can ‘stand up’ a parallel computer using inexpensive commodity hardware, or even use the images as VMs to run a completely virtual development environment. The intent is to provide a turn-key framework for R&D of parallel software, and to use as a learning tool.”
Donors will help fund five (5) PC platforms designed to run just the BOINC client — 24/7/365. One of the nuances of the BOINC client is that it utilizes the CUDA processing cores native to modern graphics cards. With four advanced graphics cards (each with 1,500+ CUDA cores) multiplied by the five PC platforms, a total of 30,000 CUDA cores will be dedicated to advancing the scientific projects — all day, every day.
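The core count quoted above follows directly from the hardware numbers, as a quick check confirms:

```python
# Sanity-checking the CUDA core total quoted in the campaign.
cards_per_pc = 4
cores_per_card = 1500   # "1,500+" per card; using the lower bound
pcs = 5

total_cores = cards_per_pc * cores_per_card * pcs
print(total_cores)  # 30000, matching the 30,000 figure in the text
```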
In this video, Al Wegener from Samplify Systems presents: Numerical Encoding Shatters Exascale’s Memory Wall.
If you develop applications for your own internal research or development purposes, then by linking APAX into your software, you can reduce your time to results when deploying on supercomputing sites or on the Cloud. First, determine what encoding rates you can achieve with APAX by uploading your data to the APAX Profiler. The APAX Profiler will send you a report and a link to your decoded data file. You can download the data file and run it through your computing application to verify your results have not changed.
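The last step of that workflow — confirming your results have not changed after the encode/decode round trip — can be sketched numerically. APAX’s actual encoder is proprietary and not modeled here; the quantizer below is a hypothetical stand-in for any lossy compression step, and the comparison shows the kind of tolerance check you would run on the decoded file.

```python
# Verify that data surviving a lossy encode/decode round trip still
# matches the original within a chosen tolerance. The quantizer is a
# stand-in for a real encoder such as APAX, which is not modeled here.
import numpy as np

def fake_lossy_roundtrip(data, step=1e-4):
    """Stand-in for encode -> decode: quantize values to a fixed step."""
    return np.round(data / step) * step

rng = np.random.default_rng(0)
original = rng.standard_normal(10_000)
decoded = fake_lossy_roundtrip(original)

# Elementwise check: every decoded value within tolerance of the original.
unchanged = np.allclose(original, decoded, atol=1e-3)
print(unchanged)  # True: quantization error is at most step/2 = 5e-5
```

In practice you would rerun your actual computation on the decoded file, as the text describes, and compare the end results rather than the raw samples.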
A new Kickstarter project is looking to build a parallel computer that you can afford to use at home. With the MicroPod, you will be able to “stand up” a parallel computer using inexpensive commodity hardware, or even use the images as VMs to run a completely virtual development environment.
Completing this project will dramatically expand the number of people who have access to basic parallel computing systems, which in turn will expand the number of people who know how to program and operate these systems. That, in turn, will allow more supercomputers to be built. This is important. The rate of scientific progress worldwide is largely limited by the number and speed of supercomputers that scientists can access. All of the “big science” problems these days have to be modeled and analyzed by such machines, and there just aren’t enough to go around.
This is a very worthwhile cause and we at inside-Startups are hoping you can help them out. Read the Full Story.