
December 15, 2014

The disruptive nature of the cloud - The Fallacy of Availability through Reliability

So, as the title of this post suggests, I want to discuss the disruptive nature of the Cloud. This will be a series of 5 posts in total; you're reading the last post.

Read about the cloud and what it means and you're bound to read that the introduction of the Cloud, the real Cloud, the one that meets all criteria for being a Cloud, has been disruptive. But has it?

I like the NIST definition of Cloud; it is quite comprehensive and less vendor-biased than Gartner's definition.

The Cloud has been disruptive when it comes to the IT industry, especially the hosting market. But it has also been a force in how we handle IT within the enterprise. There are a few important aspects of IT in the enterprise that we consider to have changed due to the Cloud:


  • Moving from in-house IT management services to off-site IT management services. 
  • Moving from CAPEX (Capital Expenses) based IT investments to OPEX (Operating Expenses). 
  • Moving from on-premise (business) applications to off-premise (hosted) applications. 
  • Moving from a centralized IT to a democratized IT.


I'm sure you can think of other movements in your IT environment as well, but these are typically considered to be not only happening, but also to be disruptive within the enterprise.
These are in fact not really changes happening due to the Cloud; the Cloud merely gave these movements a boost and fast-tracked the changes in IT. The case in point, as you can read in the articles above, is that the cloud hasn't been that disruptive at all. It's more or less all the same all over again.

But there's an actual disruptive nature to the Cloud, and it's got everything to do with the many 9s you read about in brochures of Cloud providers. The number of 9s relates to the stability, or rather the availability, of something. It's the percentage of up-time of something, preferably a service, but in many cases it's just a server.
The disruptive part is that traditionally the availability of a service, or even a server for that matter, depends on the stability of that service or server. And traditionally the availability of a service, an application, actually depends on the stability of the infrastructure on which the application is running. The more reliable the infrastructure, the more available the application, and therefore the service.
    As enterprises controlled the infrastructure, even in a hosted environment, applications were developed that relied on how the infrastructure was realized.
And that's where the disruptive part comes into play, because in the cloud the enterprise no longer controls the infrastructure, and can by no means depend on its reliability.

The bottom line here is that traditionally applications are designed-for-success. The application relies on the fact that the infrastructure can be depended on and that, in essence, the infrastructure will not fail. In the cloud this is not the case; applications need to be designed-for-failure.
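To make the idea concrete, here's a minimal sketch of design-for-failure in Python. The service call and the error class are hypothetical stand-ins of my own; the point is that the application assumes any call into the infrastructure can fail at any moment, and deals with that instead of trusting the infrastructure not to let it down.

```python
import random
import time

class TransientInfrastructureError(Exception):
    """A failure we cannot predict: node gone, connection reset, etc."""

def fetch_customer(customer_id):
    # Hypothetical remote call; in the cloud it can fail at any moment.
    if random.random() < 0.3:
        raise TransientInfrastructureError("instance vanished mid-request")
    return {"id": customer_id, "name": "ACME"}

def fetch_customer_designed_for_failure(customer_id, retries=5):
    # Design-for-failure: expect the call to break, retry with an
    # exponential backoff, and give up only after a bounded effort.
    for attempt in range(retries):
        try:
            return fetch_customer(customer_id)
        except TransientInfrastructureError:
            time.sleep(2 ** attempt * 0.1)
    raise RuntimeError("service unavailable after %d attempts" % retries)

print(fetch_customer_designed_for_failure(42))
```

A designed-for-success application would just call fetch_customer directly and crash, or worse, hang, the moment the infrastructure hiccups.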

Back to why this is the case. It's quite simple. Clouds consist of massive amounts of infrastructure. This is because the Cloud provider wants to achieve an economy of scale such that it can offer infrastructure, and consequently services on that infrastructure, to as many customers as possible for as low a price as possible. In order to do this, it makes more sense to use cheap hardware, so costs stay low, instead of expensive hardware. Because, well, 100,000 servers at US$2,500 each are more expensive than those same 100,000 servers at US$800 each. The price per server for a customer can be lower in the latter case... you get the economics.
But as things go with cheap stuff, it breaks down all the time. Then again, spare parts are cheap as well. The Cloud provider just needs to make sure that the fixed or new server is up and running again in no time. Oh, and when you run out of spare parts, you just buy more cheap kit.
    With virtualization coming into play, a piece of hardware going belly up means hardly anything as the virtual server can be back up on another piece of cheap kit within literally minutes or even seconds.
Enterprises buy expensive hardware because they want reliable hardware; they're typically too small to achieve actual economies of scale, so the whole Cloud paradigm doesn't work for them internally.

Here's the pretty part: when you control the infrastructure, you can determine what part of the availability you want to handle within the infrastructure and what part in the application. You have access to the complete OSI stack, so you decide where what is handled.
Now forget about being in control of the complete stack; you're only controlling layer 7, the application layer. Or, when you're lucky, which you won't be, you have something to say about the 6th layer, the presentation layer. All of a sudden, all the availability requirements have to be handled in layer 7. That's right, in the application. Because all you know is that 99.999% of the time (more likely 99.8% of the time) the infrastructure will do what you want it to do, but when it doesn't, you have no clue. You just know that it'll be back up in seconds, probably not knowing what it was doing before it crashed. In fact, Cloud providers will not tell you how they reach the many 9s from their brochures; you can merely sue them when they don't deliver on their promises.

What's so disruptive about this? Well, ever since businesses started using computers, applications, and more importantly the design and programming models, could stay the same going from one computing model to another. The paradigm has always been design-for-success. Granted, there was error handling, but this was handling errors that could be predicted: wrong user input, key-uniqueness violations, race conditions. But with the Cloud, all of a sudden, the paradigm must be design-for-failure. Assume it won't work and make the application robust enough to handle failure, handling errors that cannot be predicted: unavailability of resources, session losses at inconvenient times, the state of the infrastructure changing all the time without the application being aware of it.
See? The issue here is that applications can't be migrated from one computing model to another by just adapting the model when you go to the Cloud. All of a sudden the application model will most likely have to change significantly. And with that, your breed of developers needs to change as well. They have to re-learn how to program enterprise-grade software. And no, just creating stateless horizontally scaling applications doesn't cut it. Because unless you design for failure, your application won't scale horizontally across the board. It will always have a bottleneck at the ESB (see my previous post on why the ESB doesn't scale, by design) or the database. In fact, the scaling capabilities of an application have nothing to do with the Cloud; it's a traditional problem that surfaced with the rapid increase of computing utilization. More users, more transactions, more requests, more of everything required different kinds of scaling than the traditional vertical (faster CPU, more memory, more bandwidth) kind. This is not related to the Cloud at all. In fact, the scaling solutions we all know also rely heavily on the reliability of the infrastructure.

Concluding: the Cloud is disruptive. It's disruptive in that it requires a redesign of the applications that migrate from on-premise to the Cloud.

Okay, there's another thing that's different from traditional computing models, and I hinted at that already. You have no clue as to how things are done at the Cloud provider, you have no say about it, and the Cloud provider will never tell you. And that's something that a lot of enterprises have to get used to. You have to trust your computing resources vendor at face value that you get what you're paying for, and you pay for the result and not for how it's done. And that's disruptive for a lot of managers and architects, especially because these are typically the control-freak kind of people.

NB: The Cloud needs economies of scale; most enterprises' IT environments are not big enough to reach these economies of scale, thus an on-premise private Cloud makes no sense from a cost perspective. This is not to say that your own Cloud is a no-go.

    December 1, 2014

    When Scrum needs to mature in the enterprise, architecture comes to the rescue.

This post is the consequence of a comment on my previous post that startled me, as it stated that in Scrum there's no place for the architect. Scrum has no future were it not for the architect.

First of all, I am not a fan of Scrum, mainly because it sounds too much like an acronym and I hate acronyms. The other reason why I don't like Scrum is that it entices too many religious zealots to start a jihad against all non-believers. And then there's the third reason why I don't like Scrum: too many so-called Scrum practitioners use it as an excuse for not going for longevity and quality in their deliverables.

So, with that off my chest, let me tell you that I really love what Scrum stands for. That whole thing about delivering something useful to whoever pays you for building it, as soon as possible. Allowing the customer to change his mind constantly about what is important and what's not. And the very notion that sometimes you do something and then have to redo it, but differently, as part of the deal, is excellent.
Scrum is awesome and it addresses a lot of very significant aspects of old-school project management. Especially when it comes to long analysis phases, even longer design phases and excruciatingly long development phases, Scrum has introduced a lot of benefits.

One of the main reasons why Scrum is such a success from an adoption perspective is the fact that it has been introduced in the limited scope of development project teams. And then in most cases in those teams that had to face a lot of changes in requirements and priorities, namely front-end, user-facing systems. Systems that directly address the needs of an end user.

    As the common enemy called "customer" united all developers, Scrum thrived.

    The rebellious attitude introduced with Scrum appeals to many developers, and I don't mean any disrespect to developers. I am a developer and often I am my own customer, or in Scrum terminology, I am my own product owner and I suffer from changing priorities and requirements all the time. It's part of usable software.

In many organizations these days, development projects are done with the Scrum manifesto in one hand and a sincere lack of documentation in the other. Here lies a big problem already, namely the fact that in these organizations people, even Scrum zealots, still talk about projects. In and of itself it is impossible to do projects using Scrum. The very notion of projects is preposterous, but that'll be covered in another post sometime.
These enterprises have introduced Scrum in their software development projects, and those organizations that are a bit more mature have introduced DevOps, which is the second stage in the Scrum-olution if you ask me.

Havoc is wreaked when a Scrum team is developing software, functionality, that depends on the deliverable of another Scrum team. How are sprints aligned? How are interfaces co-developed? How are release dates, even in continuous delivery situations, coordinated? Neither Scrum nor DevOps can answer these questions. The reason for this is that the scope of Scrum was never intended to go beyond the activities of the Scrum team. Inter-Scrum communication is, well, non-existent.
    Of course it exists, because it happens all the time. Initiatives like meta-scrums, meta-sprints and meta-other-Scrum-buzzwords are popping up all over the place.
The fact is that once you look beyond the scope of sprints and epics and what the Product Owner is looking at from a backlog perspective, there's not a lot you can do without the qualities and talents of an architect. It is always the architect that delivers the bigger picture.

Scrum is, I believe, something that stems from the sport of rugby. And if not, then for the purpose of this post it still does. And where the Scrum teams play the game, it is the architects that define the playing field as well as the rules by which to play. Mind you, Scrum does not equal developer's anarchy. It allows the team the freedom, where suitable, to apply a pragmatic approach to realizing the product owner's products. But that freedom is within the limitations set forth by the architect in terms of policies, principles and standards. Because of the architect, features integrate and systems can communicate. But more importantly, it is architecture that allows for cohesion and consistency between different applications and services, thus ensuring, for example, compliance with laws and regulations. Especially in environments that are predominantly based around SOA (WS* and REST alike), it is imperative that policies and standards are defined and adhered to in a consistent and cohesive manner in order to be and stay compliant.
    This becomes very apparent when principles like:

    • "One Version of the Truth"
    • "Re-use of business logic"
    • "Role based Access Control"

are to be followed. Without an architecture framework, it becomes extremely expensive from a governance point of view for an organization to enforce or even ensure adherence.

Do you need an architect in a Scrum team? Probably not, as long as you've got one or two members in your team that are taking care of the relevant design work, whether or not implicit in the Sprint's deliverables. But in the bigger picture there is a definite need for an architect, especially when you not only want your development teams to be agile and capable of adjusting to the whims of the product owner, but also your whole business to be agile and capable of adjusting to market demands.

Thus, unless you want Scrum to remain Waterfall's baby brother for ever and ever, you need to mature your agility, and that is only possible by applying architecture and involving architects. Not so much within Scrum teams, but across these teams.

    November 25, 2014

    The reason why IT architects fail, almost consistently, is modeling.

Summary: Because every architect will try to fit her solution into her specific model, the architectures within a project are created independently of each other. Thus, within a project, the application architecture is not really considered when creating the infrastructure architecture, nor are the data architecture or the security architecture. Consequently, creating an architecture for any business application is a waste of time. Or is it?

Probably, when you're, like me, an architect, or, like some people I know, a project manager, you already know that involving an architect in your project, or acting as an architect in a project, guarantees exactly two things:

    1. The project will not deliver on time
    2. The project will not stay within budget
Of course I'm just kidding; in fact you should be experiencing the exact opposite. Involving an architect in your project, or acting as an architect on a project, provided the architect is considered to be a project member and not somebody that can get a project out of a dire situation, means that the project is most likely delivered on time, or as close as possible to it, and within budget, or close to it. This is what the architect does, and there's more. The architect will ensure that the deliverable is of a higher quality and therefore of more use once taken into production. The promised or predicted ROI (Return on Investment) from the business case is more realistic.

The reason why the architect helps with this is not because she's architecting the deliverable; although she is, that doesn't really help. The reason is that the architect is the single person in the project that is looking at the problem(s) at hand from a holistic point of view, approaching them from different angles and addressing issues with an open mind. Architects have the talent to think outside of the box; creativity is their key capability.

Still, architects, that is IT architects, because this blog is about IT architects, almost all fail at doing architecture almost all the time. The key reason here is that they are architecting within the boundaries of their own area of expertise.

So what do I mean by all this? It's actually quite simple. Consider yourself, in case you're an architect, or that person in your project that is the architect. What kind of architect is she? Most likely we're dealing with an application architect, an infrastructure architect, a security architect, a data architect, a UI architect, an information architect, a business architect or maybe even an enterprise architect. The point here is that within pretty much any serious business application, you're touching all of the areas discussed above. But hardly ever do you see an architecture that integrates all of these areas into one single model.

Architects work with a model and, in the best of cases, hand their model to other architects to make them aware. The project manager has no clue what's going on, nor dares to make any assumptions as to what is going on. But the case in point is that the application architect creates the application's architecture, hopefully taking input from the business architect to understand where in the various business processes the application is being used.
Once the application architecture is done, the security architect is tasked with making it secure. That is, when the application architect is not suffering from a god complex and doing all that security design himself afterwards. Obviously the data architect has no clue as to what is going on, because he's never involved. The application architect did all that work instead. Because, data is an integral part of the application, so...
This is where the architect is done and the project's developers take over. Best case scenario, the resulting application is close to what the architecture intended it to be. At least the application will adhere to some of the enterprise's standards, values and principles. Once all that development is over, the infrastructure architect is requested to, oh wait, best case scenario, the application architect reviews the application for compliance with the architecture, and this is when the infrastructure architect is to create the infrastructure architecture on which the application is deployed.

Okay, almost the same scenario, but now the story starts with the infrastructure architecture, and the application architecture is subjected to the infrastructure architecture.

    Now do the same with a security architecture to start with and ...

In all these scenarios the different architectures are created as separate products. One is subjected to the other. There's really absolutely nothing holistic about it. There's nothing integral to it either. And it makes no sense when you think of it.
    I will not make an analogy with architects of buildings or other physical structures because there is hardly an analogy.

The issue at hand here is, as I stated, that the different architectures are considered separate products, with hardly any coherence between them. For some weird and actually inconceivable reason, we're led to believe that a security architecture can be created separately from an application architecture, an infrastructure architecture, a data architecture and a business architecture. This is of course complete nonsense. There is no way you can create a generic infrastructure architecture, not even a high-level reference architecture, without understanding the application that will have to run on it. Same for a data architecture. What's the point of having one without knowing the application of the data in a business process? There is none. Don't let anybody fool you, because it's just not there.
An important reason why we work this way is that we are led to believe that we need to work with models. Decent architects will at least not try to fit the world into their models, but every architect will work with models. Which makes sense, because this allows them, us, to simplify the problem's solution. Understand the model and you understand why the solution works. But have you ever tried to come up with a model that covers all of the architectures mentioned before? A single model? Have you tried to draw a diagram that shows it all? And in case you tried, did you succeed? It's not possible, because the perspectives, the points of view, the points of interest are different for each of the architectures. They operate, they concern themselves, on different levels of abstraction and with different levels of detail.

Because all these models consist of different elements, different symbols are used on the diagrams and the levels of abstraction seemingly differ, the architectures are done separately, in an order that seems logical given the project manager, the enterprise's culture and the individual relationships between the architects. The key point is that the consistency between architectures is coincidental.
Hence the creation of any architecture is in the end a waste of time, because none will hold when the project manager actually wants to deliver on time and within budget. He will sacrifice security to deadlines when implementing the infrastructure according to the architecture costs too much. The data architecture is thrown out the window as soon as the application is not performing because of the database queries. Etc.

There is a solution to all this, and that is to create a truly integrated and holistic architecture of the complete solution. Have all these architects work on a single architecture that encompasses all these different aspects; have a single solution architecture that addresses application composition, data structures, business processes, infrastructure and security consistently.
This requires that dogma is thrown out the window and that reference architectures are basically just informed decisions on architecture principles and guidelines, with architecture blueprints emanating from them.
But most importantly, it requires a model that covers every single aspect of an application. And I mean every single aspect. The model must and can only be a meta-model. A definition of a business application defined in data elements and their inter-relationships (including cardinality, direction of traversal, etc.) that allows for the definition of every single aspect of any business application. This meta-model is a data model.
And there's the key to the issue: the model we have to deal with in order to make an architecture worth creating is a data model. The different architectures we're used to having are just different views on that data model. This allows for various levels of abstraction with various levels of detail, whatever is of interest to the reader.
Because the model is capable of capturing every aspect of a business application, the model is capable of capturing every aspect of a set of business applications. Hence it allows us to capture the complete application landscape, and hence consistency across the board, architecturally speaking that is.
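To illustrate, here's a minimal sketch of such a meta-model as a data model, in Python. The element kinds, relationship kinds and cardinality notation are assumptions of mine for the example; the point is that applications, processes, servers and so on are all just typed elements and relationships in one model, and each traditional architecture is merely a view over that same data.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Element:
    name: str
    kind: str  # e.g. "application", "process", "server", "policy"

@dataclass(frozen=True)
class Relationship:
    source: Element
    target: Element
    kind: str         # e.g. "supports", "runs-on", "constrained-by"
    cardinality: str  # e.g. "1..*", read in the direction source -> target

@dataclass
class Model:
    relationships: list = field(default_factory=list)

    def view(self, kinds):
        # An "architecture" is just a view: the subset of relationships
        # whose endpoints are of the kinds this audience cares about.
        return [r for r in self.relationships
                if r.source.kind in kinds and r.target.kind in kinds]

crm = Element("CRM", "application")
order_intake = Element("Order intake", "process")
app_server = Element("Application server", "server")

model = Model()
model.relationships += [
    Relationship(crm, order_intake, "supports", "1..*"),
    Relationship(crm, app_server, "runs-on", "1..2"),
]

# The business architecture and the infrastructure architecture are
# different views on one and the same model, never separate products.
print(model.view({"application", "process"}))
print(model.view({"application", "server"}))
```

Consistency between the views is a given, because there is nothing to keep consistent: they are projections of a single set of data elements.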

Now why is this so important? What is the significance of having a single model for all architectures to be based on? The significance is in the fact that all of a sudden an architecture cannot be subjected to another architecture, because there is only one architecture. Consistency and coherence are a given. It is impossible not to have a concise architecture that is comprehensive and all-encompassing, because that would mean that part(s) of the architecture are not finished. Which is fine, because it will have to be a conscious decision to omit these parts.
In addition, for application developers it will be counter-productive not to stay with the architecture, because that would make deployment in production harder, since every single aspect of the application is taken care of in the architecture. This allows the developer to concentrate on quality, and consequently on an improved ROI.

I hope you enjoyed this blog post; I for one want to thank you for reading it. I am aware that this post was quite long, but I hope it was worth your while. Please leave any comments, agreements or disagreements, as the more views there are, the better sight we all have.

    Iwan

    November 21, 2014

    The not so disruptive nature of the cloud - Centralized vs. Democratized

So, as the title of this post suggests, I want to discuss the disruptive nature of the Cloud, or rather the cloud not being so disruptive at all. This will be a series of 5 posts in total; you're reading the fourth post.

    Read about the cloud and what it means and you're bound to read that the introduction of the Cloud, the real Cloud, the one that meets all criteria for being a Cloud, has been disruptive.

I like the NIST definition of Cloud; it is quite comprehensive and less vendor-biased than Gartner's definition.

    The Cloud has been disruptive when it comes to the IT industry, especially the hosting market. But it has also been a force in how we handle IT within the enterprise. There are a few important aspects of IT in the enterprise that we consider to have changed due to the Cloud:


  • Moving from in-house IT management services to off-site IT management services. 
  • Moving from CAPEX (Capital Expenses) based IT investments to OPEX (Operating Expenses). 
  • Moving from on-premise (business) applications to off-premise (hosted) applications. 
  • Moving from a centralized IT to a democratized IT.

I'm sure you can think of other movements in your IT environment as well, but these are typically considered to be not only happening, but also to be disruptive within the enterprise.
These are in fact not really changes happening due to the Cloud; the Cloud merely gave these movements a boost and fast-tracked the changes in IT.

    Last time, which was a while ago, I wrote about the location of the data center, or rather about where the IT Infrastructure was located. This time around I want to discuss how IT resources find their way to the user, the customer.

Ever since the beginning of (business) usage of IT within the enterprise, there has been a movement from centralized governance towards decentralized or even democratized, and back. But first let me explain what I mean by 'democratized'. It is actually quite simple. Democratized means that everybody and their mother can obtain or have access to IT resources, in this case computers, storage, network access and software applications.
In the era of mainframes, IT was centralized; with the advent of PCs it got democratized; with client/server architectures it moved back to centralized; and with the advent of thin clients, it became even more centralized. But the key here is that we were in a democratized IT situation a long time ago, when PCs were introduced and software was installed locally on the PC by either a support engineer or the user herself. As long as you could get hold of the installation disks (and of course a valid license) you could install whatever you wanted. This was very beneficial for the agility of the user, but very bad for cost control on IT support expenditure. Because with all that software installed, viruses got installed as well, rendering the PCs unusable, and a support engineer had to come over and fix the problem.
Another important problem with a decentralized, and especially a democratized, IT environment is collaboration between users. Apart from the fact that there is in principle no common ground for data, or information exchange if you will, the diversity of applications in use, similar applications at that, means that exchanging information, actually collaborating, is cumbersome to say the least. This resulted in a move towards centralization, where PCs are not much more than computers that allow for a user-friendly interface on top of a centralized application.

With the advent of the cloud, and specifically SaaS offerings, the model became intrinsically more centralized. But at the same time, because of the public nature of SaaS offerings, they also became available to anybody with a means to pay for the service, and thus democratization entered the enterprise again. With IaaS, but even more so with PaaS, the democratization of IT resources extended beyond (business) applications to computing resources and storage. Both Amazon and Microsoft have a ton of additional services on top of their own platforms, provided both by themselves and by third parties.
Everybody with a credit card can get their own enterprise software running in the cloud, create their own development environments or deploy a new marketing website. Typically with better availability promises than their internal IT department can offer.

    Is this a new way of working, is democratization a new way for enterprises to handle IT? Hardly. So once again, there's nothing disruptive here that is related to the cloud.

    So, why is the cloud really taking off? Why hasn't the hype died yet? Why is the cloud causing IT departments such headaches? In the next, fifth and last installment of this series, I will reveal the true disruptive nature of the cloud. Stay tuned.


    July 22, 2014

    The not so disruptive nature of the Cloud - From On-Premise to Off-Premise

So, as the title of this post suggests, I want to discuss the disruptive nature of the Cloud, or rather the cloud not being so disruptive at all. This will be a series of 5 posts in total; you're reading the third post.

    Read about the cloud and what it means and you're bound to read that the introduction of the Cloud, the real Cloud, the one that meets all criteria for being a Cloud, has been disruptive.

I like the NIST definition of Cloud; it is quite comprehensive and less vendor-biased than Gartner's definition.

    The Cloud has been disruptive when it comes to the IT industry, especially the hosting market. But it has also been a force in how we handle IT within the enterprise. There are a few important aspects of IT in the enterprise that we consider to have changed due to the Cloud:


  • Moving from in-house IT management services to off-site IT management services. 
  • Moving from CAPEX (Capital Expenses) based IT investments to OPEX (Operating Expenses). 
  • Moving from on-premise (business) applications to off-premise (hosted) applications. 
  • Moving from a centralized IT to a democratized IT.

I'm sure you can think of other movements in your IT environment as well, but these are typically considered to be not only happening, but also to be disruptive within the enterprise.
These are in fact not really changes happening due to the Cloud; the Cloud merely gave these movements a boost and fast-tracked the changes in IT.

Last time I wrote about the non-disruptiveness of the Cloud in terms of finances, CAPEX vs. OPEX; this time around I will discuss the location of your (business) applications.

Of course, with the advent of the Cloud we're no longer hosting our applications in our own data centers, which may or may not be operated by a third party, the hosting provider. Instead we host our applications with a Cloud provider in the form of Software as a Service (SaaS). The posterboy of SaaS is most likely Salesforce.com. Talk with anybody about SaaS and ask for a typical example and they're likely to mention Salesforce.com. Read any article about SaaS and the article is bound to mention the same.

So what is SaaS? Well, one thing SaaS is not, is disruptive. But there's a lot out there in the world that is not disruptive. SaaS is the situation where you buy the right to use a particular piece of software that is hosted by a third party, the SaaS provider, and you're not the only one using the software. Ideally you're only aware of other customers using the same software because you understand the concept of SaaS, and not because they're messing up your 'copy' of the software.
The NIST has an interesting definition of SaaS, which from a SaaS customer point of view is particularly interesting when it comes to the last part, where it discusses the level of control the customer has: namely no control, other than some limited configuration capabilities.

Of course we all know about email services like GMail, Hotmail, YahooMail, or your ISP's webmail solution. And although this is SaaS, it has only become a real alternative to on-premise email capabilities for enterprises since the introduction of SaaS as terminology.

This is nothing new to those that have been using the numerous ASP (Application Service Provider) solutions out in the market. The difference with SaaS is in the Cloud. Where the ASP solution was hosted on-premise with the ASP vendor, SaaS is hosted in the Cloud, allowing it to be used at massive scale, and because of the sheer unlimited resources, it allows for extremely diverse applications.

The move from on-premise to off-premise as a Cloud aspect for enterprises is hardly disruptive; it's something that had been happening even before there was a Cloud, and it was embraced by the business long before IT embraced or even grasped the concept.

The most evident difference between ASP and SaaS is the extremely standardized contracts of SaaS offerings, if there is a contract at all. Whereas with ASP a separate contract was closed between provider and consumer, allowing for some tailoring, with SaaS this is not so much the case. Contracts are standard and with a mere credit card you can sign up. No long-term contracts, but pay as you go.

Next time, another undisruptive aspect of the Cloud.

    March 6, 2014

    The not so disruptive nature of the Cloud - CAPEX vs. OPEX

So, as the title of this post suggests, I want to discuss the disruptive nature of the Cloud, or rather the cloud not being so disruptive at all. This will be a series of 5 posts in total; you're reading the second post.

    Read about the cloud and what it means and you're bound to read that the introduction of the Cloud, the real Cloud, the one that meets all criteria for being a Cloud, has been disruptive.

I like the NIST definition of Cloud; it is quite comprehensive and less vendor-biased than Gartner's definition.

    The Cloud has been disruptive when it comes to the IT industry, especially the hosting market. But it has also been a force in how we handle IT within the enterprise. There are a few important aspects of IT in the enterprise that we consider to have changed due to the Cloud:

  • Moving from in-house IT management services to off-site IT management services. 
  • Moving from CAPEX (Capital Expenses) based IT investments to OPEX (Operating Expenses). 
  • Moving from on-premise (business) applications to off-premise (hosted) applications. 
  • Moving from a centralized IT to a democratized IT.

I'm sure you can think of other movements in your IT environment as well, but these are typically considered to be not only happening, but also to be disruptive within the enterprise.
These are in fact not really changes happening due to the Cloud; the Cloud merely gave these movements a boost and fast-tracked the changes in IT.

Last time I wrote about Cloudsourcing; this time the more financial side is the topic.

Ask the various vendors as well as analysts about the consequences of moving from the traditional data center model to the Cloud and they're bound to mention the big change that will happen to your costs, namely going from CAPEX- to OPEX-based expenditure. Something that is to many a huge step in corporate financing.
My question here is: "Really?" In the 1990s there was already some form of Cloud, but at the time it was marketed as Utility Computing. The idea was that you would buy computing power just like you buy electricity, as a utility. Sun Microsystems was betting on this, as were many other vendors.

Basically, in its essence, this is what the Cloud is to a large part about. But the moniker 'Utility Computing' was dropped, and after some iterations, and the fact that the Internet is always portrayed as a cloud in diagrams, the new name was born. Fact is that by coining 'Utility Computing' these vendors addressed the accountants in the enterprises they were talking to, explaining the cost model. Why? Because the model of basic utilities was very well known to the people from the more financially inclined departments. They were used to thinking in terms of closing a contract and then paying for whatever utility they were using as they went. Pay-per-use. This definitely was and is true for a lot of IT-related costs, especially in the data center, like power consumption and cooling (air conditioning).

So the disruptive nature of the Cloud on the financials of an enterprise is not that disruptive at all. In fact, at the time of the introduction of the Cloud, in its very early stages, the Cloud vendors tried not to be disruptive at all and instead conformed to what those with the money already knew.

    February 25, 2014

    The not so disruptive nature of the Cloud - Cloudsourcing

    It's that time again. I've been busy lately, very busy indeed. But there've been some events with some of my customers that led me to write up this post.

So, as the title of this post suggests, I want to discuss the disruptive nature of the Cloud, or rather the cloud not being so disruptive at all. This will be a series of 5 posts in total; you're reading the first post.

    Read about the cloud and what it means and you're bound to read that the introduction of the Cloud, the real Cloud, the one that meets all criteria for being a Cloud, has been disruptive.

I like the NIST definition of Cloud; it is quite comprehensive and less vendor-biased than Gartner's definition.

The Cloud has been disruptive when it comes to the IT industry, especially the hosting market. But it has also been a force in how we handle IT within the enterprise. There are a few important aspects of IT in the enterprise that we consider to have changed due to the Cloud:

  • Moving from in-house IT management services to off-site IT management services.
  • Moving from CAPEX (Capital Expenses) based IT investments to OPEX (Operating Expenses).
  • Moving from on-premise (business) applications to off-premise (hosted) applications.
  • Moving from a centralized IT to a democratized IT.

I'm sure you can think of other movements in your IT environment as well, but these are typically considered to be not only happening, but also to be disruptive within the enterprise.
These are in fact not really changes happening due to the Cloud; the Cloud merely gave these movements a boost and fast-tracked the changes in IT.
Let's start with that first one, about in-house versus off-site. I had to restrain myself not to use the term 'outsourcing', because that is in fact what many perceive the Cloud to be doing. I coined the term 'Cloudsourcing' at one of my customers while writing up a major chunk of the hosting parts for a rather large tender. Depending on the service that you contract, IaaS or PaaS (SaaS is a different beast altogether), you're either outsourcing or insourcing. Again, I prefer the NIST definition of XaaS, which can be found here.


The diagram above is not entirely according to the NIST definition, well, arguably. According to the NIST definition, as I read it, a managed OS is also PaaS; hence when you only get an OS, you're still contracting PaaS. But I think you see that when you're contracting IaaS, using the Cloud is merely hosting. You still need to do all the leg-work, well, pretty much all of it. Definitely not outsourcing your IT.
On the other hand, when you're contracting PaaS, you're actually closer to outsourcing than to hosting. But my point is, we've been hosting our systems since way before there was a Cloud, and the same goes for outsourcing. The Cloud in this respect is not really disruptive; it's not even different from what we've been doing for a very long time.
Is there nothing new that came with the Cloud? Well, actually there is. The new part is that we have a heterogeneous model of hosting and outsourcing, where it is not always very clear what is what. Looking at the systems that run our applications, some could be part of the outsourcing model and others could follow more of a hosting model. To make matters more interesting, servers can move from one model to another based on service offerings and pricing.
The disruptive part here is hardly at the enterprise level. No, it is at the consultancy firm level. It is they who do not have any real experience in how to tender these kinds of deals, how to write the right contracts and how to propose the right organizational transformation processes. Why? Because the Cloud is such a novel concept and we are all led to believe that everything is so different when you move to the cloud.

Next time up, part 2 in the series: the not so disruptive nature of the financial model, CAPEX vs. OPEX.

    January 6, 2014

    The demise of the ESB in a world of webservices

Abstract: In a world where no single enterprise can develop all the software needed to run the enterprise, it no longer makes sense to connect everything to the Enterprise Service Bus; instead everything should be connected to the internet. This is exactly what is happening. And this trend is causing the demise of the ESB, as there is no validity to the ESB being the backbone of Enterprise Application Integration, something that was promised by the major players in the world of messaging. Promises never truly delivered on.

    Remember what the ESB was good for?
    1. Data Exchange
    2. Data Transformation
    3. Data Routing
A role that has been assumed by web-services, both in the form of SOAP-based web-services (predominantly conforming to the WS* standards) and in the form of REST-based web-services.

Other posts in this trilogy:
    ESB's don't scale, which shouldn't be a problem because that is by design
    The ESB addresses a niche, not mainstream

In this third post in a series of three on the ESB by Arc-E-Tect, I will delve deeper into the inevitable demise of the ESB in today's world of connected systems.

Let's face it: the ESB isn't dead, because it was never alive in the first place. Not in the incarnation the big vendors would like us to believe in. It was, and is, merely a renamed messaging system.

    That being said we can concentrate on whether or not we should still consider the ESB. The Enterprise Service Bus.

This is a post in a series about the Enterprise Service Bus, ESB, an acronym so easy to love or to hate. Parts of this post are also found in another post on the demise of the ESB in a world of webservices.

First of all, let's not call it the Enterprise Service Bus, but stick with ESB. Just like SOAP, which originally was the Simple Object Access Protocol and now is just SOAP. Probably because there was nothing Simple about SOAP. Just like there's nothing Enterprise about the ESB. Unless of course you consider the budgets needed to actually get something running.

So, the ESB became obsolete. But first we should identify what the ESB was supposed to deliver before we can say it became obsolete.

Basically there are just a few functionalities an ESB was actually supposed to provide.
1. Data exchange. Now I'm carefully choosing my words here and keeping it as abstract as can be. But in fact the ESB's data exchange capabilities didn't extend much further than messaging. Messaging in the sense of what products like IBM's MQ Series and Microsoft's MSMQ provide. You have a limited amount of data, and you send it between two points.
2. Data transformation. Actually this should be message transformation, but it applies to all data. The ESB is particularly good at transforming one data format into another. This is actually a legacy capability from the time there was no ESB and only MOM (Message Oriented Middleware). The promise of being able to loosely couple two systems by utilizing MOM justified the creation of functionality that would transform one data format into another, to further decouple both points.
3. Data routing. Again, the premise here was to decouple sender and receiver. As the ESB already decouples sender and receiver by implementing an asynchronous communication protocol and by allowing for transformation from the sender's format to the receiver's format, the only coupling aspect still to be resolved is the addressing. The ESB removes the necessity of the sender knowing where the receiver is located by allowing for logical names that are resolved, and data is routed to the 'correct' recipient.
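To make those three capabilities concrete, here's a minimal sketch in Python of what an ESB does at its core, stripped of all product glitter. The message formats and the logical name are made up for the example; the point is that exchange, transformation and routing boil down to a queue, a mapping function and a lookup table.

```python
import queue

# 1. Data exchange: an in-memory queue stands in for the messaging layer.
bus = queue.Queue()

# 3. Data routing: a logical name resolves to a concrete endpoint.
routes = {"customer-service": lambda msg: print("customer-service got", msg)}

# 2. Data transformation: map the sender's format to the receiver's.
def transform(message):
    return {"id": message["customerId"], "name": message["fullName"]}

# The sender only knows a logical destination and its own format.
bus.put(("customer-service", {"customerId": 42, "fullName": "ACME Corp"}))

# The 'bus' delivers: resolve the logical name, transform, hand over.
while not bus.empty():
    destination, message = bus.get()
    routes[destination](transform(message))
```

Everything an ESB product adds on top of this, it adds in order to justify its license.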
In 1997 I started working on an ESB using IBM MQ Series as the underpinning messaging technology, and we implemented all three core capabilities of the ESB. The likes of Microsoft, IBM, Lotus and Neon came to visit us, as they wanted to see what we were doing. This was all at one of the leading Dutch banks.
As we implemented the usage of the ESB, and later on as the Middleware architect and Enterprise Application Integration (EAI) architect for this customer, I found that the premise of the ESB, which at the time was called MOM (Message Oriented Middleware), was awesome.
We connected a front-end developed in Visual Basic with a back-end developed in COBOL. On the client side, we developed an ActiveX control in C++ and a whole series of modules that allowed for on-the-fly creation of message formats, stored in a meta-data format that was retrieved from a meta-data server. The initial format for the messages was actually the format the meta-data server understood, and we called it meta-meta data. Pretty deep, huh :)
On the server side, a COBOL program running on the AS/400, we developed some components using C as the programming language of choice. Here as well we developed something similar to the meta-data server. As I recall, we were using copy-books.
It was all pretty advanced stuff. We were able to decouple the server and the client and develop both independently from each other, and the queueing provided by MQ Series also allowed for asynchronous communication.
Shortly thereafter, products came on the market that provided a lot of the functionality we had implemented, and on top of MQ Series, NEON was a market leader.

This was the time around which SOA (Service Oriented Architecture) started its emergence, and everybody and their mother who thought of themselves as a messaging vendor jumped on the bandwagon. With not much more than the good old MOM with a new label attached to it.

Interestingly enough, the message format problem was solved by the introduction of XML. XML already existed, but it was both text oriented and self describing and, on top of that, like a cherry, it was human readable... more or less. The problem of course still being that the actual message has to adhere to a defined format. DTDs (Document Type Definitions) and later XSDs (XML Schema Definitions) were invented to ensure this, and both standards do a pretty good job at it. When you combine this with XSLT (eXtensible Stylesheet Language Transformations), you have a format that is self describing, a means to define the format and a means to transform from this format to pretty much every other format.
Don't get me wrong, I really love the fact that with XML I can do all this stuff. More importantly, I don't need a message transformation engine to do so. Of course, this awesomeness only works when you start off with XML.
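For illustration, here's a minimal sketch of that XML stack in Python with lxml. The schema and stylesheet are toy examples of my own making; the point is that defining, validating and transforming a message format takes a handful of lines and no transformation engine.

```python
from lxml import etree

# A toy XSD: a means to define the format.
schema = etree.XMLSchema(etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="customer">
    <xs:complexType><xs:sequence>
      <xs:element name="name" type="xs:string"/>
    </xs:sequence></xs:complexType>
  </xs:element>
</xs:schema>"""))

# A toy XSLT: a means to transform to the format the receiver expects.
transform = etree.XSLT(etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/customer">
    <client><xsl:value-of select="name"/></client>
  </xsl:template>
</xsl:stylesheet>"""))

message = etree.XML(b"<customer><name>ACME</name></customer>")
schema.assertValid(message)                # verify it adheres to the format
print(etree.tostring(transform(message)))  # b'<client>ACME</client>'
```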

From a decoupling perspective this is understandably pretty neat, because it allows the sender of the message to determine the message format, and the recipient can parse the message in a standard way, verify that it complies with the definition and, because of that, transform it into a format it can understand. Granted, when the sender doesn't use XML, you're stuck with having to handle that format. And in those cases a message transformation engine comes in pretty handy.
The question is what happens when a lot of senders send messages in XML, but not in the format the recipient understands. Who will do the transformation into the right format? You could do this at the recipient's end, but that is bad practice. Why? Because you're coupling sender and receiver again, which was exactly what we were trying to avoid. Doing this in a message transformation engine makes perfect sense and is the right way to handle this.

So XML is excellent for messaging. Messaging is awesome for asynchronous communication, and it is great for communications where the sequence of messages is important. Many communications are not asynchronous at all; they're request/reply style interactions. But interestingly enough, these typically don't require ordering... and when the communication is asynchronous, we typically notice that ordering is important.
First, about request/reply not being ordered. What you should realize is that in these cases the client, the requestor, takes care of only proceeding with the processing once the reply is received. So the handling by a server (recipient) of multiple requests concerns multiple requests from multiple clients. These are independent and therefore by nature unordered.
The communications that are asynchronous but need to be ordered are usually feeds of data that need to be processed in sequence. Data feeds from a single client are sequenced by using a... yes... a queue. It's the perfect mechanism for sequencing. When dealing with multiple clients feeding the server, we need to resort to sequence numbers that are universally unique, or to timestamps... which is always tricky.
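The single-client case really is that simple; here's a tiny sketch of it in Python. The events are made up; a FIFO queue is the whole sequencing mechanism.

```python
import queue

feed = queue.Queue()  # FIFO: the queue itself guarantees the sequence

# A single client emits a feed of events, in order.
for sequence_number, event in enumerate(["created", "updated", "shipped"]):
    feed.put((sequence_number, event))

# The recipient processes them strictly in the order they were sent.
while not feed.empty():
    sequence_number, event = feed.get()
    print(sequence_number, event)

# With multiple clients feeding one server, the queue only preserves
# arrival order; merging feeds correctly is where the universally
# unique sequence numbers, or the always-tricky timestamps, come in.
```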

I'm probably not the first to notice this, as the SOAP and WS* standards address exactly this. By this I mean the benefits of XML and the characteristics of communications between systems.
The XML part of SOAP-based webservices is that the XML is very well defined. Arguably, that is, but considering the free format that XML actually is, it is darn well defined. Also, the webservice is a request/reply protocol. In case you want to use it for feeds, you just ignore the reply.
There's another nice feature of SOAP, and that is that it can use various communication protocols like HTTP(S), JMS (Java Message Service), SMTP (Simple Mail Transfer Protocol, or email) or anything else. Basically, you can send a SOAP message via any means, as long as the recipient can get to the message. I've seen SOAP used via HTTP(S) in about 99% of the cases, and JMS used in those cases where queueing was necessary.
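To show how thin that transport binding is, here's a sketch of a SOAP 1.1 call over plain HTTP in Python. The endpoint, namespace and operation are hypothetical; note that there is nothing here for a messaging product to add.

```python
import urllib.request

# A hypothetical operation on a hypothetical service endpoint.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCustomer xmlns="http://example.com/crm"><id>42</id></GetCustomer>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "https://example.com/crm",  # the service endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/crm/GetCustomer"},
)

# Request/reply: the reply is simply the HTTP response. For a feed,
# you'd fire the request and ignore what comes back.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```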

Now we're getting to the demise of the ESB.

    Remember what the ESB was good for?
    1. Data Exchange
    2. Data Transformation
    3. Data Routing
Well, with SOAP being XML based and arguably very well defined, Data Transformation is not that relevant anymore. Data Exchange is handled by SOAP as well; you use HTTP(S) in pretty much all the cases you come across. That leaves us Data Routing.
So what was the routing about? Well, it was all about having a logical name resolve to a physical location, where the ESB would determine where to send the message based on an address that one can remember, or that is not tied to a physical location. In the world of HTTP(S), which is dominated by the world of TCP/IP, this is actually handled by DNS (the Domain Name System). DNS can perfectly handle this translation and do the routing. It's what it was designed for. Almost its raison d'être. Maybe not even almost.
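Sketched in Python, the ESB's entire routing capability looks like this; the hostname is a placeholder. The logical-name-to-physical-location table is DNS, maintained by the network itself.

```python
import socket

# A logical service name, not tied to any physical location.
service = "crm.example.com"  # placeholder hostname

# DNS resolves the logical name to whatever physical endpoints
# currently back the service; no ESB routing table required.
for *_, address in socket.getaddrinfo(service, 443, proto=socket.IPPROTO_TCP):
    print(address)
```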
    So in a world where we do SOAP based communications, there really is no reason to have an ESB.
    So what else can an ESB do? Or rather; what else do we use an ESB for?

Many ESB vendors herald the fact that their ESB is perfect for handling webservices. I never understand what they mean by this, but I think it can mean only two things:
1. The ESB can provide a WS* interface to a legacy service that does not provide one. Basically, it exposes a SOAP interface and translates this into a format and a protocol that the legacy service understands.
2. The ESB can provide a legacy interface to a WS* service. In this case a client doesn't speak SOAP and can't invoke a WS* interface, so instead it invokes another interface on the ESB, which changes this call into a WS* call and then returns the result in the correct format.
In the first case, the ESB is basically an adapter from WS* to a legacy format, which in the old world of EAI makes perfect sense. But with WS* being such a well defined interface standard, it makes just as much sense to develop a WS* adapter in your preferred development environment and deploy it in your preferred application server. Typically this will save a lot of expensive licenses and you'll be able to scale way better. More on that later. The benefit here is that you can leverage the programming skills of your developers to develop the WS* front-end in Java or .Net or PHP or any other programming language, and deploy it in any of the relevant application servers. Mind that more and more 'enterprise applications' (like Siebel, Dynamics, SAP, PeopleSoft, etc.) provide a WS* interface, because everybody has been jumping on the WebServices bandwagon since the term was invented.
In the second case, the ESB is basically a transformation engine from one protocol to another. Something it has always been very well suited for. But again, since WS* and SOAP are so well established, it makes as much sense, if not more, to just develop this transformation at the client side instead of in the ESB. Again, license costs will be reduced and scalability will be enhanced.

Let's address that scalability issue for a second or two. First of all, an ESB doesn't scale. That's by design. It was never intended to scale; it was intended to be the pivot in enterprise communications. ESBs are not buses at all. They are the hub in a hub-and-spoke model. They have to be, because they are the middleman handling all the Babylonian speech impediments within an enterprise. Concurrency is handled by threading. An important aspect here is state. ESBs have the crucial design flaw of being able to keep state.
When you talk to an ESB vendor about what the ESB can do for you, the vendor will tell you about the three features I've listed before, and an important 4th one. Well, the vendor thinks it's important; I believe it's a design flaw. This feature is "Data Enrichment". What it means is that you can send a half-baked message to the ESB and, before the ESB delivers the message to the server, the recipient, it will call all kinds of other 'services' to enrich the original message with the additional information needed to deliver it. This means that the ESB needs to keep state while enriching the message. It also means that the ESB is no longer a mere intelligent data transport that routes and transforms on the fly; it has become an application, a business application.
Because the ESB is designed to be able to handle this, the ESB is designed to be able to keep state. And thus it doesn't scale. It scales only as far as the box it is running on can scale. Scalability is all vertical.
There's another problem with the scalability, and that is the dependency on physical resources. The queues of an ESB are physical; they're file systems, databases or something else that's physical and allows for some, or even complete, persistence. This again means that it doesn't scale, because access to these resources needs to be coordinated, since they guarantee sequence.
When scaling an ESB, it needs to be set up across nodes, and there needs to be a vivid dialogue between the nodes about what each node is doing and has been doing. This is pure overhead. The busier the ESB is, the more chatter, i.e. the more overhead. This will require more nodes, which requires more chatter. The result is a very disappointing performance curve.

Don't worry, this is all by design. The design here is that the ESB should be doing more than just being an intelligent data transport, because were it just an intelligent data transport, it would not have any added value in a SOAP-ruled, WS*-compliant world, which is today's world.
The ESB is designed to have added functionality to warrant the purchase of its licenses. This added functionality allows the sender (or client, or consumer) to not care about the sequence of its messages, because the ESB handles this. But that's a moot point, since the client will either do a fire-and-forget (a feed) and not worry about sequence, or it will wait for the response to a request before continuing processing. No added value at all. But the ESB, by queueing, also ensures the sequence of requests or messages from several clients. So the recipient (or producer, or server) gets the requests or the feed in the order the ESB receives them. Which means nothing, because this may not at all be the same sequence in which the clients did in fact send them. Think about the latency of different clients sending the ordering of messages bonkers all over. Meanwhile, this desperate need to sequence requires that there is a single point in the ESB doing all the sequencing. Which means dropping the desire to be scalable.

The ESB really has no place in an environment where webservices dominate the communication between systems. And mind that what goes for SOAP-based, WS*-compliant webservices also holds true for RESTful webservices that use XML or JSON as the message format. Actually even more so, because RESTful webservices are predominantly HTTP(S) based.

Going back to the scalability for a second. In a webservices-based environment, scalability is achieved at the recipient's end, where the service producer (server or recipient) is designed such that it handles one request (or message) at a time, such that ordering is either irrelevant or part of the interface specification.
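A sketch of what that looks like at the producer's end; the handler and message fields are illustrative. Each request is self-contained and the handler keeps no state between requests, so any number of identical copies can run side by side.

```python
def handle_request(message):
    # Everything needed to process the request travels with the
    # message; no state survives between calls, so ordering across
    # requests is irrelevant by design.
    return {"customer": message["customer_id"], "status": "processed"}

# Any copy of the handler can serve any request, in any order, which
# is what lets the service scale horizontally where the stateful
# hub-and-spoke ESB cannot.
replies = [handle_request({"customer_id": n}) for n in (3, 1, 2)]
print(replies)
```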

Ask the ESB vendor again why you should get yourself an ESB after reciting the above, and your vendor will likely start talking about monitoring. How convenient it is to have the ESB do all the monitoring for you, keeping track of an audit trail, or at least a message trail, because it is the central hub through which all the messages flow. Emphasizing the ESB's main flaw again: it being a central hub.
But in this reasoning there's another flaw: the ESB can only monitor, or log for that matter, what it knows. So it can log the receipt of a message by the ESB, not that a message was sent to the ESB. Consequently, the ESB only knows about the sending of the message to the recipient, not about the recipient having received it. And I'm not really convinced that the information the ESB has about the message flow is the most interesting information. Not at all; hence you still need to log the sending by the sender and the reception by the recipient. You could use the ESB as a collector of log messages: have all systems send their log messages to the ESB and have it deal with them, store the messages in a database or in a log file or in both, or forward them to a dashboard, or do it all. And this is exactly why you should keep an ESB around. There's a full post on this in my blog.

Concluding: the ESB has no right to be a mainstream technology in today's and tomorrow's IT environment, where webservices will dominate the interfaces. It's perfect for its particular niche.

    As always, I'm really interested in your views and ideas. More perspectives make for a richer understanding of the topic. So please share your thoughts, stay respectful and be argumentative.

    Iwan