2015-05-29

3 years at Hortonworks!

In 2012 I handed in my notice at HP Laboratories and joined Hortonworks: this May marks the third anniversary of my joining the team.

Bruxelles

I didn't have to leave HP, and in the corporate labs I had reasonable freedom to work on things I found interesting. Yet it was through those interesting things that we'd discovered Hadoop. Paolo Castagna introduced me to it, as he bubbled with enthusiasm for what he felt was the future of server-side computing. At the time I was working on the problem of deploying and managing smaller systems, but doing so in the emergent cloud infrastructures. Hadoop was initially another interesting deployment problem: one designed to scale and cope with failures, yet also built on the assumption of a set of physical hosts, hosts with fixed names and addresses, hosts with persistent storage and whose failures would be independent. Some of the work I did at that time with Julio Guijarro included dynamic Hadoop clusters and Mombasa (the long-haul route to see elephants). The work behind the scenes to give Hadoop services more dynamic deployments, HADOOP-3628, earned me Hadoop committership. While the branch was never merged in, the YARN service model shows its heritage.

While we were doing this, HP customers were also discovering Hadoop —and building their clusters. I remember the first email coming in from a sales team who had been asked to quote the terasort performance of their servers: the sales team hadn't heard of a terasort and didn't know what to do. We helped. Before long we were the back-end team on many of the big Hadoop deals, helping define and review the proposed hardware specs, reviewing and sometimes co-authoring the bid responses. And what bids they were! At first I thought a petabyte was a lot of storage —but soon some of the deals were for 10+, 20+ PB. Projects where issues like rack weight and HDD resonance were as much of a concern as power and the logistics of getting the servers delivered. Production lines which needed to be block-booked for a week or two, but which in doing so allowed server customisation: USB ports surplus? Skip them. How many CPU sockets to fill, and with what SKU? Want 2.5" laptop HDDs for bandwidth over 3.5" capacity-oriented storage? All arrangeable, with even the option of a week-long burn-in and benchmark session as an optional extra. This would show that the system worked as requested, including setting benchmarks for sorting 5+ PB of data that would never be published for fear of scaring people (bear that in mind when you read blog posts showing how technology X out-terasorts Hadoop —the really big Hadoop sort numbers are for 10+ PB on real clusters, not EC2 XXL SSD instances, and they don't get published).

These were big projects and it was really fun to be involved.

At the same time though, I felt that HP was missing the opportunity, the big picture. The server group was happy to sell the systems for x86 system margins, other groups to set them up. But where was the storage group? Giving us HPL folk grief for denying them the multi-PB storage deals —even though they lacked a Hadoop story and didn't seem to appreciate the technology. Networking? Doing great stuff for HFT systems where buffering was anathema; delivering systems for the enterprise capable of handling intermittent VM migration. But not systems optimised for sustained full link rate bandwidth, decent buffering and backbone scalability through technologies like TRILL or Shortest Path Bridging (you can get these now, BTW).

The whole Big Data revolution was taking place in front of HP: OSS software enabling massive scale storage and compute systems, the underlying commodity hardware making PB storeable, and the explosion in data sources giving the data to work with. And while HP was building a significant portion of the clusters, it hadn't recognised that this was a revolution. It was reminiscent of the mid 1990s, when the idea of Web Servers was seen as "just another use of a unix workstation".

I left to be part of that Big Data revolution, joining the team I'd got to know through the OSS development, Hortonworks, and so helping define the future rather than despairing about HP's apparent failure to recognise change. Many of us from that era left: Audrey and I to Hortonworks, Steve and Scott to RedHat, Castagna to Cloudera. Before I get complaints from Julio and Chris: yes, some of the first generation of Hadoop experts are still there, the company is taking Big Data seriously, and there are now many skilled people working on it. Just not me.

What have I done in those three years? Lots of things! Some of the big ones include:
  • Hadoop 1 High Availability. One of the people I worked with at VMware, Jun Ping, is now a valued colleague of mine.
  • OpenStack support: Much of the hadoop-openstack code is mine, particularly the tests.
  • The Hadoop FS Specification: defining a Python-like syntax for Spivey's Z notation, delving through the HDFS and Hadoop source to really define what a Hadoop filesystem is expected to do. From the OpenStack Swift work I'd discovered the unwritten assumptions and set out to define them, then build a test suite to help anyone trying to integrate their FS or object store with Hadoop to get started. This was my little Friday afternoon project; nobody asked me to do it, but now that it is there it's proven invaluable in getting the s3a S3 client working, as well as being one of the first checkpoints for anyone who wants to get Hadoop to work on other filesystems. Arguably that helps filesystem competitors —yet what it is really meant to do is give users a stable underpinning of the filesystem, beyond just the method signatures. (A rough sketch of this precondition/postcondition style follows the list.)
  • The YARN-117 service model. I didn't start that work; I just did my best to get the experience of the SmartFrog and HADOOP-3628 service models in there. I do still need to document it better, and get the workflow and service launcher into the core code base; Slider is built around them.
  • Hoya: a proof-of-concept YARN application to show that HBase was deployable as a dynamic YARN application, and the initial driver for the YARN-896 services-on-YARN work.
  • Apache Slider (incubating). A production-quality successor to Hoya, combining the lessons from it with the Ambari agent experience, producing an engine to make many applications deployable on YARN through a minimal amount of Python code. Slider is integrated with Ambari, but it works standalone against ASF Hadoop 2.6 and the latest CDH 5.4 release (apparently). I've really got a good insight into the problems of placement of work where access to data has to be balanced with failure resilience; enough to write a paper if I felt like it —rather than just a blog post.
  • The YARN Service Registry. Again, something I need to explain more. An HA registry service for Hadoop clusters, where static and dynamic applications can be registered and discovered. Slider depends on it for client applications to find Slider and its deployed services; it is critical for internal binding in the presence of failures. It's also the first bit of core Hadoop with a formal specification in TLA+.
  • Spark on YARN enhancements. SPARK-1537 is my first bit of work there, having the Spark history server use the YARN timeline service. Spark internals in Scala, collaboration with the YARN team on REST API definitions, and reapplying the test experience of Slider to accompany this with quality tests.
  • Recently: some spare-time work mentoring S3a: into a production-ready state.
  • Working with colleagues to help shape our vision of the future of Hadoop. Apache Hadoop is a global OSS project, one which colleagues, competitors and users of the technology collaborate to build. I, like the rest of my colleagues, get a say there, helping define where we think it can go, then building it.
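To give a flavour of what "a Python-like syntax for Z notation" means in the FS specification work mentioned above: each filesystem operation is written up as preconditions and postconditions over a simple model of filesystem state, and the same statements drive contract tests. The sketch below is purely illustrative, under my own invented names (ModelFS, files, dirs); it is not the notation or code of the published specification, just a toy model of what a spec for delete() looks like in that style.

```python
# Illustrative sketch only: a toy in-memory model of filesystem state, with
# delete() expressed as precondition checks, an effect, and postcondition asserts.
# The names here (ModelFS, files, dirs) are invented for this example; they are
# not the classes or notation of the actual Hadoop FS specification.

class ModelFS:
    def __init__(self):
        self.files = {}        # path -> contents
        self.dirs = {"/"}      # set of directory paths

    def exists(self, path):
        return path in self.files or path in self.dirs

    def _children(self, path):
        prefix = path.rstrip("/") + "/"
        return [p for p in list(self.files) + list(self.dirs)
                if p != path and p.startswith(prefix)]

    def delete(self, path, recursive=False):
        # Precondition: a non-empty directory may only be deleted recursively
        if path in self.dirs and self._children(path) and not recursive:
            raise IOError("Directory is not empty: %s" % path)
        # Deleting a missing path is not an error; the call just returns False
        if not self.exists(path):
            return False
        # Effect: remove the path and everything beneath it
        for child in self._children(path):
            self.files.pop(child, None)
            self.dirs.discard(child)
        self.files.pop(path, None)
        self.dirs.discard(path)
        # Postcondition: neither the path nor any child of it remains
        assert not self.exists(path)
        assert not self._children(path)
        return True

# Example: delete a directory tree and check the postconditions hold
fs = ModelFS()
fs.dirs.add("/data")
fs.files["/data/part-0000"] = b"records"
assert fs.delete("/data", recursive=True)
assert not fs.exists("/data/part-0000")
```

A contract test then replays the same kind of assertions against a real filesystem or object store client, which is what makes the specification a practical starting point for new integrations rather than just a document.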

That last point is a key one to call out. At HP an inordinate amount of my time was spent trying to argue the case for things like Hadoop inside the company itself, mostly by way of PowerPoint-over-email. I don't have to do that any more. When we make decisions it's done rapidly, pulling in the relevant people, rather than through the inertial hierarchy of indifference and refusal which I sometimes felt I'd encountered in HP.

Which is why working at Hortonworks is so great: I'm working with great people, on grand projects —yet doing so through a process where my code is in people's hands within weeks to months, and where an agile team keeps the company nimble. And pretty much all my work has shipped.

If you look at how the work has included applied formal methods, distributed testing, models of system failure and dynamic service deployment, I'm combining production software development with higher-level work that is no different from what I was doing in a corporate R&D lab, except with shipping code.

Hortonworks is hiring. If what I've been up to, and how I've been doing it, sounds exciting, then get in touch. That particularly applies to my former HPL colleagues, who have to make their minds up where to go: ink vs enterprise. There is another option: us.
