This morning I attended the first in a series of webinars on Azul Systems’ new Zing offering. The webinar/presentation was conducted by Gil Tene, Azul’s Co-Founder, CTO and VP of Technology. This session focused on the issues surrounding Java virtual machine performance and scalability – particularly in the areas of garbage collection, heap management, memory utilization and the like. There was lots of technical talk about “young objects”, “old dead objects”, the linearity of memory and pause time, garbage collection and CPU quantum scheduling jitter, zero overhead instrumentation, etc. All good “techie” stuff for those of us who like that kind of thing. The series will continue with presentations on Azul’s management and provisioning systems – Zing Resource Controller, Zing Compute Pool Manager and Zing Vision. More good stuff to follow.
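To make the “young objects” versus “old dead objects” talk a little more concrete: generational collectors bet that most objects die young, so they collect the young generation frequently and promote long-lived survivors to the old generation. Here is a minimal toy model of that promotion policy – the class name, the age counter and the threshold are all invented for illustration and have nothing to do with Zing’s actual collector:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model of generational collection: dead objects are reclaimed
// from the young generation, and survivors that live through enough
// collections are promoted to the old generation.
public class GenerationalToy {
    static final int PROMOTION_AGE = 3; // illustrative threshold

    static class Obj {
        int age = 0;
        boolean reachable;
        Obj(boolean reachable) { this.reachable = reachable; }
    }

    // One young-generation collection pass: reclaim dead objects,
    // age the survivors, promote the old-enough ones. Returns the
    // number of objects reclaimed.
    static int collectYoung(List<Obj> young, List<Obj> old) {
        int reclaimed = 0;
        for (Iterator<Obj> it = young.iterator(); it.hasNext(); ) {
            Obj o = it.next();
            if (!o.reachable) {
                it.remove();
                reclaimed++;
            } else if (++o.age >= PROMOTION_AGE) {
                it.remove();
                old.add(o);
            }
        }
        return reclaimed;
    }

    public static void main(String[] args) {
        List<Obj> young = new ArrayList<>();
        List<Obj> old = new ArrayList<>();
        young.add(new Obj(true));   // long-lived object
        young.add(new Obj(false));  // dies young
        for (int i = 0; i < 3; i++) collectYoung(young, old);
        System.out.println("old generation size: " + old.size()); // 1
    }
}
```

The point of the generational bet is visible even in this toy: the short-lived object is reclaimed cheaply in the first young collection, while only the proven survivor ever pays the cost of promotion.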
I’ve been following Azul Systems for a long time now – actually since their stealth mode days. Over the past year I’ve penned a couple of posts about Azul – one about their overall strategy and a second one about their recent Managed Runtime Initiative. Azul is one of those technologies that has intrigued me. I will call them “disruptive” for lack of a better term, and will put them in the same category as Inkranetworks (no longer in existence) and Topspin (acquired by Cisco). Like Inkranetworks and Topspin, Azul is one of those products that has a very elegant architecture and holds the potential to make a significant impact on performance, operations and TCO. Unfortunately they suffer from some of the same problems that are inherent in disruptive technologies – they’re a “different colored box”, and we all know how readily those are accepted inside of most IT shops. Inkranetworks was one of the most elegant virtual networking appliances I have ever seen. Unfortunately it was just “before its time”. Topspin was similar in its elegance, and ended up being acquired by Cisco.
I normally don’t use this forum to endorse vendor products. Over the past couple of months I have been doing a lot of reading and research on the virtualization and cloud computing spaces. What I have found is that both of these spaces still seem to be pretty “fuzzy” in several aspects. Virtualization in the context of how big the market will be over the next few years as well as the number of approaches and players in the market. Cloud computing in the context of… well, it’s just “cloudy”. That’s about all I can say about it. Still too many definitions of what it is and what it’s not. Everybody is in the cloud business these days.
So I decided to step back and look at a company/technology that I have been following for a few years now. The company is Azul Systems in Mountain View, CA. I first became involved with Azul when they were in stealth mode, before they moved into their offices next to the Google campus. What initially struck me about Azul Systems was the elegance of their architecture and the potential strength of their technology. In a nutshell, Azul builds a “network attached processor” that offloads the bulk of virtual machine execution for Java-based applications. The elegance of their architecture is in the simplicity of the network attached processor model, and the strength of their technology is in the robustness of their multi-core Vega chip architecture. I will leave it at that for now – you can go study the particulars for yourself.
What I like about Azul Systems boils down to four essential elements – the leadership team, the product architecture, the market, and their customers.
With the exception of Darryl Ramm, the original founder/leadership team is still intact. Scott Sellers took over as CEO a couple of years ago after Stephen DeWitt departed (now VP of HP’s PC division – previously CEO of Cobalt Systems). If you haven’t taken the time to read up on Scott’s background you should take a few minutes and do so. He is no stranger to the design and development of successful chip technologies. No doubt he has the technical prowess to pull off the design of a very elegant and powerful chip like the Vega processor. What I really like about Scott is that while he is “technical to the nth degree” he can bring the technical stuff down to layman’s terms so even people like me can grasp the elegance of the design. I remember several early sessions where Scott gave me down-to-earth tutorials on cache coherency, garbage collection and heap management – in terms that made their importance to virtual machine processing clear. But don’t get me wrong – Scott has a solid business sense in addition to his technical expertise – something Azul needs while competing against the hyped-up x86 virtualization market.
Shyam Pillalamarri continues as VP of Software Engineering. When I first met Shyam I was impressed by his interest in us (EDS at the time) as a customer. One thing the whole Azul team demonstrated was their ability to listen, and Shyam was probably the most eager since he had the tough job of developing the software (including the management platform) that fuels the Azul network attached processor model. While elegant in its design, the model nonetheless embodies a fair degree of complexity – especially with the degree of workload management and fault tolerance Azul has built into the architecture. Shyam’s background at Nortel, Shasta and Zeitnet/Cabletron has served him well in this role.
And then there’s Gil Tene, VP of Technology and CTO. Besides being a technical guru, Gil is very ebullient and likable. Gil was the original “cheerleader” for the product, and it showed in his intense belief and passion for the design. Obviously as CTO he is no technical slouch, having deep experience from Nortel, Shasta and Check Point Software. And his background in Israeli Navy R&D gives him rich experience in the design of very sophisticated systems. Like Scott, Gil is willing and able to take the technical aspects of the Azul architecture and bring them down to real-world examples for the likes of you and me – something that is very important when trying to internalize the value of a new technology design (especially one that requires committing your mission critical applications to a new platform). Gil is also an accomplished pilot and still owes me a tour of the Bay Area…
Where Azul seems to have struggled is in the area of Marketing and Operations. There have been several iterations in these roles. I don’t know George Gould (VP Marketing and Business Development), but looking at his resume it appears that he has the credentials for this job. Azul recently brought in Pat Conte as VP of Worldwide Field Operations. I knew Pat at Cobalt Systems and TopSpin Communications. He’s a good guy for this role and I think will really help to strengthen the overall delivery capability for Azul.
I won’t even attempt to do justice to the elegance and technical details of the Azul network attached processor architecture – I will leave that for you to explore. What I will comment on is its simplicity when it comes to application design with respect to the architecture. Simply put, there is no real design work required to take advantage of the architecture. You simply replace the JVM on your application servers with the Azul VM proxy and then run your apps on the Azul appliance. The Azul proxy directs transaction traffic to/from any other tiers (e.g., database) and manages the session with the client. That’s pretty much it (obviously there’s a lot more going on behind the scenes). This is all made possible by two key elements: the Vega chipset and the Azul management system. Again, an over-simplification, but these are the guts of the system.
Over the last couple of years the focus has been on multi-core processors in the x86 space. Four cores is now the norm, going to eight within the next few months. When I first engaged with Azul they were moving to the second generation of their Vega multi-core chip. At that time they were delivering a chip with 24 cores on a single processor. Their current generation, the Vega 3, delivers 54 cores on a single processor. A fully populated 7380D configuration will deliver 864 cores in a single 14U box. Pretty impressive. But extracting the aggregate processing power of all those cores is where the beauty of the architecture really shines from my perspective. Scott and team tackled not only the density/power issues associated with building multi-core chips, they addressed the inherent issues of heap management, cache coherency, garbage collection and an array of other nagging problems associated with Java VM processing. These are the things programmers usually spend too much time worrying about when developing high-throughput/high-volume Java applications – the kind that big banks and insurance companies run. Most of those issues have now been absorbed by the Azul architecture, and application systems designers and developers can focus on the business architecture of the application. And yes, the Vega chip just screams when it comes to transaction throughput – but that’s an area we can argue about all day with respect to different system architectures. Choose the benchmark of your choice and I think you will find Azul at, or near, the top.
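For a sense of what that garbage collection overhead looks like on a stock JVM, the standard `java.lang.management` beans expose cumulative collection counts and pause time. This is a small sketch using only the standard JMX API – it measures whatever JVM it happens to run on, not an Azul appliance:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Reads cumulative collection counts and collection time from the
// JVM's standard management beans -- the kind of GC accounting that
// high-throughput Java shops end up watching closely.
public class GcOverhead {

    // Total milliseconds this JVM has spent in garbage collection.
    static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if unsupported
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        // Allocate some short-lived garbage so the counters move.
        for (int i = 0; i < 1_000_000; i++) {
            byte[] scratch = new byte[128];
        }
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": "
                    + gc.getCollectionCount() + " collections, "
                    + gc.getCollectionTime() + " ms");
        }
        System.out.println("total GC time: " + totalGcMillis() + " ms");
    }
}
```

Watching these numbers climb under load is a cheap way to see how much of your transaction budget a conventional collector consumes – exactly the cost the Azul architecture is designed to take off the application’s back.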
The second piece of the puzzle is the management system. The nice thing about the Azul architecture is that you can simply plug another box into the network attached processor array and it’s pretty much ready to take on workload. There’s a little more to it than that, but not much more. The management system (scheduling, workload balancing, etc.) is a master/slave model, with any of the nodes in the array able to play the role of master if the current master were to fail (which is unlikely with the redundant design of the hardware). As I mentioned, from an application perspective there’s really nothing to do except spool up as many instances of the application as you want. You can do this manually or allow the Azul management system to grow or shrink the environment based on transaction demand. The nice thing about the architecture is that you don’t need to worry about moving virtual machines from one server to another to tap into unused processor capacity (something that appears easy with current hypervisor technology, but in reality is not that simple). With the Azul architecture you simply allow capacity consumption to expand and contract as required. And with Azul’s policy management system you can control the levels to which that can occur – so you can actually control departmental or line of business consumption.
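The expand-and-contract-within-a-ceiling idea can be sketched in a few lines of Java. To be clear, this is a hypothetical illustration – the `CapacityPolicy` class, its methods and the core-count units are invented here and are not Azul’s actual policy API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-group capacity policy: consumption may
// expand and contract with demand, but never past a configured cap.
public class CapacityPolicy {
    private final Map<String, Integer> capCores = new HashMap<>();
    private final Map<String, Integer> usedCores = new HashMap<>();

    // Set the ceiling (in cores) for a department or line of business.
    public CapacityPolicy cap(String group, int cores) {
        capCores.put(group, cores);
        return this;
    }

    // Grant as many of the requested cores as the group's cap allows;
    // returns the number actually granted.
    public int expand(String group, int requested) {
        int cap = capCores.getOrDefault(group, 0);
        int used = usedCores.getOrDefault(group, 0);
        int granted = Math.max(0, Math.min(requested, cap - used));
        usedCores.put(group, used + granted);
        return granted;
    }

    // Release cores when transaction demand contracts.
    public void shrink(String group, int released) {
        int used = usedCores.getOrDefault(group, 0);
        usedCores.put(group, Math.max(0, used - released));
    }

    public int inUse(String group) {
        return usedCores.getOrDefault(group, 0);
    }
}
```

So a group capped at 96 cores that asks for 64 twice gets 64 and then 32: demand-driven growth, but bounded by policy rather than by whichever team grabs capacity first.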
Now this all sounds good, and it is. But you still have to deploy sufficient processing capacity to allow expansion and contraction to occur. And that means you need the right amount of Azul capacity. In the x86 world you can simply tap into those unused servers sitting over in the corner, right? In real life it’s not that easy, as we all know. Obviously this is another area that the technical zealots can argue about for hours on end, but in my book, if you are truly serious about running your mission critical applications in a highly virtualized environment, you will carve off a set of resources dedicated solely to their care and feeding. I will argue that point all day long – cloud or no cloud…
While all the elements of the Azul architecture sound good, the $64K question is whether Azul can afford to maintain a proprietary chip architecture. Chip design, development, testing and manufacturing make for a grossly expensive process and a huge drain on cash flow. There is no doubt that Scott Sellers knows everything about that process, and bringing in somebody like Pat Conte, with his Cobalt experience, to help manage the product life cycle aspect of things is a good move. But in the end, people are just wary of proprietary chipsets. Especially when the x86 architecture is now gaining on (but obviously not yet equal to) the multi-core capability of the Vega 3 chipset. There are still some unique/creative extensions of the Vega 3 chip that are missing in the x86 architecture (or at least not optimized as well as they are in the Vega architecture) that make the Azul platform hum – garbage collection, heap management, cache coherency, etc. – but one has to wonder whether these can’t be overcome through creative application of the x86 architecture and smart programming.
This is really a screwed up market right now. We’ve just crested the hype curve on the virtualization market and now we are hanging ten on the cloud computing wave. It’s easy to get caught up in this hype, as so many companies have – reinventing themselves as the “cloud provider” of choice. Azul has stayed true to their business strategy during this hype and has stuck with their value propositions – dramatic Java application performance improvement and unprecedented scalability, along with great tools to manage the environment. But is that a valid market, and if so, how big? I haven’t done the in-depth analysis to answer the second half of that question, but what I have seen suggests that the virtualization market will shortly go into its plateau phase. This is due to several factors, including the penetration of virtualization with respect to mission critical applications. There are just some applications that won’t be virtualized, or if they are, it will be for same-architecture portability instead of consolidation/density. Add to that the introduction and maturation of new and existing virtualization players (including the server manufacturers) and I think this market will continue to be robust, but at the same time disjointed. And then throw the cloud in on top of that for a little more uncertainty.
So how does this apply to Azul? At some point IT managers are going to step back and ask themselves, “has virtualization really solved the scalability and reliability problems of my mission critical applications?” I think the answer will be “no”. Notice that I left out portability. As I mentioned earlier, virtualization will help with same-architecture platform portability, but I think IT managers are no longer worried about cross-architecture/cross-vendor portability. They simply need their mission critical applications to perform better at a lower cost on a given platform architecture. And this is where I believe Azul will continue to present a strong value proposition. You have invested lots of time and money in developing your Java-based mission critical applications. You don’t need to port them. You simply need them to run reliably and scale as needed, without a lot of hands-on intervention. That’s Azul’s sweet spot in my opinion.
From my perspective Azul has been smart in managing their sales and marketing budget and resources. They have gone after a fairly narrow market, focusing on enterprises where mission critical applications get lots of attention. Not that they don’t in other enterprises, but when you look at banks and insurance companies – those that are subject to regulation and compliance – you can see why platform stability, scalability and manageability command an extra level of attention. Azul has landed several key customers in this space including Citistreet, Credit Suisse, Wachovia and Creditex. Additionally they have added the likes of Pegasus Systems in the travel and hotel segment. One could argue that they have been too narrow in their targeting, but I believe they have gained a foothold in the area where they can demonstrate/validate the key elements of their value proposition. The question is whether they can now translate that into a compelling draw for those enterprises that have started, or will soon be looking, to move beyond simple virtualization and leverage their investment in years of Java application development. This is not dissimilar to what we saw with mainframe applications during the client/server hype. Many people predicted the demise of those applications – yet they live on today. I think the same is going to hold true for the “legacy” Java applications – way too much money has been invested in those lines of code to abandon them for the cloud. IT managers will be looking for a way to optimize and extract additional value from their investments. Let’s hope Azul’s current customer base will validate their value proposition.
So is Azul around for the long haul? If you had asked me that question a couple of years ago I wouldn’t have been so sure. Legal troubles with Sun Microsystems and lack of traction with Microsoft (relative to Azul’s initial vision) led me to believe that they had a limited lifespan. But they have stuck to their principles, haven’t panicked and seem to be on a good track. As we come out of this recession I think IT managers will be very focused and careful about how they spend their precious budget dollars, and I believe they will pay a whole lot more attention to their core business applications. I think this bodes well for Azul – if they stay focused…