FrostByte Prototype Design

Goal: After the success of its Open Compute Project (OCP)-based Tundra server, Penguin Computing decided to invest in a new high-density storage offering for data centers. The result was the FrostByte storage solution.

THE CHALLENGE

CREATING A COST-EFFECTIVE PROTOTYPE

FrostByte, the company's OCP storage solution, was one of the biggest projects I had the privilege and responsibility of working on while at Penguin Computing, and it gave me the chance to work with the engineering and product management teams on its launch. With the end goal of unveiling the system at SuperComputing 2016, we began preparations to launch the storage unit over a year in advance.

Design constraints included:

  • Building a cost-effective tradeshow prototype for display.
  • Demonstrating the scalability of the server rack at a consumer level.
  • Working alongside and coordinating with the PM, engineering, and executive teams in a cross-functional approach.

FrostByte on display at SuperComputing 2016.


MY ROLE

SCALING AND BUILDING

As the company's technical marketer, my primary responsibilities included conceptualizing tradeshow models, working the larger tradeshows, coordinating collateral across different teams, and designing and writing marketing material using an agile methodology. Based on customer and vendor feedback, we held regular cadence meetings in the lead-up to the reveal and iterated on the prototype until it was ready to ship.

For FrostByte, I also served as the principal photographer, shooting the storage server and compositing the photos in Photoshop. Miscellaneous tasks included creating an easy-to-understand presentation loop and installing basic hardware such as computer chips.


DISCOVERY

BUILDING THE PROTOTYPE

The Penguin Computing prototype development process.

Because showing customers the full ecosystem could involve deploying multiple expensive server racks, I was tasked with finding a cost-effective way to display the storage unit at tradeshows and in marketing materials. We began by conceptualizing a tradeshow prototype with our engineering and PM teams through an online spreadsheet, and our development of the prototype closely followed an agile model.

We knew it would cost several thousand dollars just to build, ship, and store a single rack prior to the show, so we needed to keep costs down; building out multiple racks and shipping them across the country several times over the span of a year was not an option. In addition, because we wanted to collect customer feedback along the way before the final build, we regularly released individual storage sleds to gather feedback and gauge interest, ensuring that we would satisfy storage needs in the OCP market.

Using the aforementioned spreadsheet, our team was able to strategize on what needed to go inside the rack and organize what we needed for the digital photos, while keeping the flexibility to change things throughout ideation to showcase the system's capabilities. During our design sprints, we constantly asked ourselves whether what we were building was something a customer would actually want for their data storage needs. If a feature passed that test, we added it to the final prototype; if not, we scrapped it. This cycle typically occurred once every 5-6 weeks leading up to the unveiling.

One downside of planning so early was that some of the final hardware parts we needed were in limited supply and would not be available until several weeks before the show. This hampered our ability to test the storage server and produce key performance numbers, so our marketing team was forced to work from assumptions. I prepared templates ahead of time, knowing full well that the numbers would remain fluid until the public reveal.


KEY DELIVERABLES

CREATING THE BRAND

To improve the experience of viewing the rack at tradeshows, I came up with the idea of spacing the hardware units roughly 2OU (88mm) apart and using the gaps for key features that could be read quickly. Our product management team provided me with basic technical information, which I used to produce collateral giving customers an idea of what each unit was and how the units formed an interconnected system.

FrostByte management switch

FrostByte metadata server
Because the storage system was also meant to be equipped with storage monitoring capabilities, I had our engineering and warehouse teams mount a monitor in the top of the unit to display a rotating PowerPoint presentation demonstrating these critical features.

SCALING THROUGH PHOTOS

For the photos, I coordinated with our warehouse team to set up a single rack inside our warehouse and shot it over the span of multiple days, moving the camera only slightly between shots. Since it was crucial that no one touched the area while we worked, I carefully measured each time we pushed the rack along the ground to ensure that the final composite would blend seamlessly into a single image. The result was a complex server ecosystem shot using only one rack.


FrostByte Datasheet (Front)


FrostByte Datasheet (Back)

CONCLUSION

SECURING THE BID

Given its scope, this was easily one of the most complex undertakings I have worked on to date, and one I am extremely proud to have been a part of. It involved everything from strategic planning down to small details, such as figuring out how to include a monitor inside the rack to demonstrate the software package and painting buttons by hand for branding.

The impact can still be felt today: FrostByte and Tundra, another OCP server I helped market, were both critical in securing Penguin Computing's CTS-1 bid with the NNSA by helping demonstrate the power of scalability at the point of entry with easy-to-understand solutions. Through that contract, Penguin Computing was able to provide over 7 petaFLOPS of computing power to Los Alamos National Laboratory, Sandia National Laboratories, and Lawrence Livermore National Laboratory.