
February 27, 2017

The Arc-E-Tect's Predictions for 2017 - Activities and Roles [9/10]

The Arc-E-Tect's Prediction on Activities and Roles

It's 2017, meaning 2016 is a done deal, and most of the predictions for 2016 that I made about a year ago, and never got around to documenting in any form, have proven to be absolutely bogus. Which leads me to come up with my predictions for 2017 and document them properly. So keep on reading this post and hopefully you'll enjoy it. Unfortunately we'll have to wait about a year to find out to what extent I got it all right, but for now, ...Activities.

Why Activities? Activities because we're breaking down silos in 2017 and expect people to work in teams, with responsibilities becoming team concerns. Consequently, we'll see fewer explicit roles, more implicit roles, and activities shared and distributed within teams. Roles? Where we're going, they have no roles.

Activities in, Roles out


The thing is, as an industry we're moving in a direction where we want to be able to get feedback as early in the process as possible, which means that every person concerned with creating and delivering a product will be involved in everything needed to create that product and ensure that it works as intended and, more importantly, as needed. In this setup, everybody is what we in 2016 called a full-stack developer. Concerned not only with developing the software, but also with developing the infrastructure it needs to run on. Or the other way around: not only concerned with setting up the infrastructure on which the software is going to run, but also with creating that software.
And on top of that, the whole team is about creating value for the business, determining the right level of quality and, more importantly, ensuring that that level of quality is part of the product. Meaning that everybody tests, tests and tests some more. Not because testing is a step in the delivery process, but because testing is part of the product creation process. The same goes for other areas. Security and compliance, for example, are no longer considered an afterthought or something moving along on the sidelines, but instead are an integral part of the product. The DBA is no longer a separate role; instead the team will ensure that tuning the database is an activity everybody partakes in.

What does this mean? It means that we don't have a tester anymore, yet we'll be testing more than ever. Not because we're more insecure or self-conscious, but because everybody in the team will be testing. Or rather, will ensure that the product is tested. The same goes for the security officer and the security architect. They'll still be there at the enterprise level, but instead of working their magic on the products, they will define risks, policies and principles, and the team will ensure that the product complies with all of them.

Team members will get an implied role because they show the most affinity or expertise in some area and rely on others in other areas. But everybody will be responsible and accountable for performing activities instead of assuming a role.

Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link to my blog to all your WhatsApp friends and everybody in your contact list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

February 22, 2017

API Management in Azure using Microsoft's solution - Resources and Templates [2/2]

This is a series of posts on the topic of API's and API Management on Microsoft Azure. Not all posts in this series are directly related to Microsoft Azure and its implementation of API Management; that is intentional. The series also explains what API's are, how to create them and what it takes to manage them, conceptually and in reality. The reason for this series is that over the past 12 months I've come across many articles on the web, have been in many discussions and have advised various clients of mine on this topic. Sometimes discussing with fellow architects, other times with vendors; still other discussions were with developers and managers. But the overall situation has always been that none of the people on the other side of the table had a full grasp of what developing API's in an organisation means, what it entails to manage them, or what to worry about when deciding to go API. I hope you enjoy reading these articles, and when you feel like it, comment on them. I always take serious comments seriously and will reply to the best of my effort.

This post is the last in a series on API Management using Microsoft's solution.

API Management is not the same as an API Manager. In short, API Management is what you need to do with your API's, and an API Manager is something with which you can do it. There are a lot of different solutions that allow you to implement API Management; one of them is Microsoft's solution on Azure. The tricky part is that when you start reading their documentation on API Management, you'll read little about how to actually implement API Management in your organisation using their solution. It's all very conceptual. This shouldn't be a problem, since the concepts behind API Management are more important than the actual implementation… until you want to actually implement it.
Read the previous posts on the topic to learn more about the concept, continue reading this post to learn more about how to use Microsoft's solution in Azure, their cloud platform.

Resources and Templates


Finally, resources and templates, the bricks and mortar of the cloud. In the cloud you're typically dealing with resources. An infinite amount of resources, or at least that's how it feels. Everything in the cloud is, or should be, a resource. Some are very basic, like computing power, storage and networking. Others are more comprehensive, like databases, firewalls and message queues. And then there are quite a few that are truly complex and very useful at a high level, for example directory services.

The cloud, being what it is, is like a normal infrastructure on which you host something that generates value for your business. Hence everything you need to run business applications in a traditional hosting environment, you also need in a cloud environment. Obviously there are significant differences between traditional hosting platforms and the cloud, but when you don't take too close a look, you're likely not to see these differences.
So in the cloud you also need to define systems, attach storage, put a firewall in front, put connectivity in place, etc. You can do this by hand every time you need an application's infrastructure, typically through a portal, by clicking your way around and assembling the infrastructure. But a more sophisticated, and far better, practice is to define the infrastructure in a text file, typically JSON for most cloud platforms, and use the cloud vendor's tooling to create the infrastructure based on this file. As such, the file becomes a template for a specific infrastructure setup you need. By providing a parameter file you can externalise the specifics of the infrastructure. For example, the URL's used to locate a web service can be defined in this parameter file to distinguish between an infrastructure intended for testing and the same infrastructure intended for production runs.
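On Azure these files are ARM (Azure Resource Manager) templates. As a minimal sketch, with the parameter name and URL purely illustrative, the template declares which resources make up the infrastructure and which parameters it expects:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "webServiceUrl": { "type": "string" }
  },
  "resources": []
}
```

and a matching parameter file supplies the environment-specific values, for instance for the test environment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "webServiceUrl": { "value": "https://test.example.com/service" }
  }
}
```

Hand both files to Azure's deployment tooling and the same template provisions a test or a production infrastructure, depending solely on which parameter file you pass along.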

This particular template is called a resource template; it defines which resources are needed and how they are specified in order to run a business application.

One of these resources that you can use is an API manager, just like you can specify databases and virtual machines as resources. And here's your challenge.

The challenge is that an API Manager consists of three parts:
  1. Developer portal, used by developers to find your API's and their documentation.
  2. Publisher portal, used by API developers and the likes to manage the API's.
  3. Gateway, used by the applications developed by the developers in 1 to access the API's managed by the people in 2.
Each of these has its own context and is used by a different group of 'users'. The really interesting part of the API Manager is the API Gateway, as it is the component that exposes the API's you've been developing. This is your product. It is the resource that is part of your software. And the thing is: it can be shared, or limited in scope to just the product you're developing.
Ideally you would have one gateway per product, because the gateway, and particularly the API's it exposes, are part of your product, and as your product evolves, the API's that come with it will evolve as well. And of course you want a consistent life cycle across all components that relate to your product. Since the API gateway is just like any other resource in Azure, the above is perfectly doable. In fact, it is possible to include the API gateway in your product's resource template, so that when you provision the relevant infrastructure and deploy your product on it, the API gateway is provisioned as well.
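As a sketch of what that looks like, this is roughly the entry you'd add to the resources section of your product's template to provision an API Management instance along with the rest of the product's infrastructure. The names, publisher details and apiVersion are illustrative; check Microsoft's current ARM template reference before relying on them:

```json
{
  "type": "Microsoft.ApiManagement/service",
  "apiVersion": "2016-10-10",
  "name": "[parameters('apimServiceName')]",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Developer", "capacity": 1 },
  "properties": {
    "publisherEmail": "team@example.com",
    "publisherName": "Product team"
  }
}
```

Note the sku: that's where the tier, and thus the pricing discussed below, is decided.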
Pretty awesome, if you're willing to forget that the costs of an API gateway are pretty steep. We're talking close to €2.5k per month. There's no real usage-based price. Microsoft is really odd when it comes to pricing in the cloud; the whole pay-per-use model doesn't extend to all of their pricing schemes. I like Amazon better in that regard.

So an API gateway per product is not really an option in most cases, I would argue. Instead, I would advise having a gateway per product suite. In case you have teams that handle multiple products, scope the gateway to such teams, or otherwise scope the gateways to a department. Use this as a rule of thumb, though, not as the law.

The point here is that you want your API's to be able to evolve with your products, and you want teams to be as independent of each other as possible. But in addition, you want your API's to be operated independently of each other. And this is important: in Azure, API's don't scale, it's the gateway that scales. And you want to be able to be smart about that. Especially when it comes to policies and usage tracking, or rather generating value from API's being used. When a team is responsible for the success of its products, and therefore for the value that is being generated, it becomes obvious that that team would want to be in control of whatever affects their success.

The alternative would be to work with an SRE approach, where you have a team that's responsible for your platform, including your cloud platform(s). This team would then provide the API gateway to the other teams as a service. The catch here is that this platform team decides where your API's are 'hosted', or rather whether or not the API gateway is shared between teams. Unless your platform team is really committed and, more importantly, has a thorough understanding of what API's really are, and I mean really understands this, I would advise against this approach. I would oppose it for the sole reason that your API's are the window into the soul of your organisation. When the API is not performing well, your organisation is not performing well. And especially when you're going API first, and thus building a platform, you're screwed without proper API management.

In case you do decide to go the platform-team route, make sure that your processes are completely automated, including the deployment of new API's as well as new versions of existing API's. My premise here is that you'll be working as agile as can be, deploying to production as soon as you're confident that your software is up to it. Meaning that new software most likely needs new (versions of) API's. Don't make the platform team a bottleneck, so make sure that you're working with them to deploy changed API's consistently and repeatably. Better to abide by their rules than to put your own in place. Drop the whole platform-team approach when they're not providing a 100% automated deployment process for your API's.

Then there are the portals. The developer portal is a tricky one because it provides access to your API's from a developer perspective. If you're nervous about unwanted developers nosing around in your API registry, you should be really worried, because it means your security is way, way, way below par. Remember, API's are different from regular services in that they are built such that they make no assumptions as to who accesses them. And unless you've built them that way, you'll be in for some really serious security challenges. That said, there's no reason not to have different portals for developers within your organisation and developers from outside your organisation, and to have API's exposed only to internal teams next to publicly exposed API's. Just make sure that this is an exposure aspect and not, I repeat, not an API implementation aspect.
So develop API's as if they could be accessed by just about anybody and their mother. Expose API's to a subset of this group.

Then there's the operational aspect of your API, captured in the publisher portal. Here you should take the approach that only the team responsible for an API has access to that API from a management perspective in an operational environment. In other words, access to an API's policies is for the team that 'owns' the API only. You'll need to take care of that. Period.

Mind that Microsoft is rapidly changing their API Management service on Azure. Most likely, as I type this, they're making it easier for you to use the service in your organisation. The concepts as I've described them still hold, though. And once Microsoft hopefully comes to realise that a pay-per-use model is the way to go for API Management as well, you'll be able to treat API Management as part of your product instead of as part of your platform.

This concludes my series on API Management using Microsoft's solution. I hope you enjoyed it.



Arc-E-Tect

February 21, 2017

The Arc-E-Tect's Predictions for 2017 - Heterogeneous and Homogeneous [8/10]

The Arc-E-Tect's Prediction on Heterogeneity and Homogeneity

It's 2017, meaning 2016 is a done deal, and most of the predictions for 2016 that I made about a year ago, and never got around to documenting in any form, have proven to be absolutely bogus. Which leads me to come up with my predictions for 2017 and document them properly. So keep on reading this post and hopefully you'll enjoy it. Unfortunately we'll have to wait about a year to find out to what extent I got it all right, but for now, ...Heterogeneity.
Why Heterogeneity? Because homogeneous environments are a fraud as you can read here.

Heterogeneous in, Homogeneous out

In 2017 we’ll truly face the rise of ever more technologies, concepts, architectures, models, etc. And in order to manage this, we will finally understand that we need to embrace the fact that our environments consist of a multitude of everything. In many smaller organisations that are at the forefront of technology and work in an agile environment this is a given, but now that large organisations have also set out to adopt the ‘Spotify’ model, and teams thus have a huge amount of autonomy, polyglot is key.

Of course the irrational drive to create a homogeneous environment in every aspect was completely unsustainable, but 2017 will mark the turning point for this endeavour. ‘The best tool for the job’ instead of a ‘hammer for everything’ has turned out to be the best approach to solving problems throughout history and across industries, and finally IT is picking up this trend as well.
An important aspect here is that centralised IT is no longer a viable option for organisations that need an agile business. A stronger and clearer view on what within an organisation’s IT is actual commodity and what isn’t will support this. Yes, standardising on an office suite, and on a particular version of Microsoft Office at that, makes sense. But even IT very close to the user, say their devices, requires us to embrace the concept of polyglot.
With BYOD (Bring Your Own Device) finally becoming the norm, even from the user perspective it is no longer a matter of standardisation. iOS, Android, Windows 10 and other platforms are replacing the Windows desktop. This is possible because the Cloud is becoming the platform of choice and SaaS offerings are becoming pervasive. Business differentiators, the IT products that set organisations apart, are no longer tied to specific infrastructures, technologies and architectures. Instead, they’re treated as what they are: differentiators, needed to create value for the business. Sometimes the business needs to be first with a product, sometimes it needs to be the best among the competition. Whatever is needed, no homogeneous environment will be able to provide either in a sustainable manner.



Arc-E-Tect

February 16, 2017

The Arc-E-Tect's Predictions for 2017 - Product in, Project out [7/10]

The Arc-E-Tect's Prediction on Products and Projects

It's 2017, meaning 2016 is a done deal, and most of the predictions for 2016 that I made about a year ago, and never got around to documenting in any form, have proven to be absolutely bogus. Which leads me to come up with my predictions for 2017 and document them properly. So keep on reading this post and hopefully you'll enjoy it. Unfortunately we'll have to wait about a year to find out to what extent I got it all right, but for now, ...Products.

Why Products? Because products are supposed to outlive the projects that created them, and in 2017 we'll finally see that value creation is the sole reason why we do IT.

Products in, Projects out

It shouldn't surprise you, but I'm not a big proponent of projects; instead I love to see it when organisations switch to a product-focused approach. In 2017 it will turn out that I'm not the only one.

The main difference is that we'll see IT as a product and we'll be delivering products to users. We might be doing this in projects, but organisations will switch from being project-oriented towards being product-oriented. The main impact will be organisational, of course, but organisations are (becoming) ready for this. For one, teams will become responsible not only for creating solutions in a project, but also for operating those solutions. There won’t be a developer and an operator in those teams, but people doing both. These teams will be responsible for the products they create; that’s a responsibility towards the user. Secondly, these product-oriented teams will be considered business teams, as they are creating business value with the products that are developed. Accountability for the success of the products is a business concern. Mind that in 2015 we already started to consider security and compliance to be business concerns and not an IT concern. In 2017 we’ll start to see the success of our products, and not only their security, as a business concern as well. The gap between IT and business will become benign.

I can be short about projects: they won’t disappear in 2017. There will be more than ever, but their impact on the business will become less interesting from this year forward. As the overhead of doing projects is increasing almost exponentially because of all kinds of reporting requirements, there is a necessity to increase the size of projects in order to make running a project worth our while. This is spiraling out of control, and to put a stop to it we’ll see organisations drop heavy-duty control mechanisms like Prince2 and adopt extremely lightweight governance structures, which in turn requires tiny releases, often. And this puts everything in place to develop products feature by feature. Hence… products in, projects out. Which automatically means that the Product Owner will be the hero of 2017 and the Project Manager is no longer there to save the day, something you can read about in another post.



Arc-E-Tect

February 13, 2017

The Arc-E-Tect's Predictions for 2017 - KVI and KPI [6/10]

The Arc-E-Tect's Prediction on KVI and KPI

It's 2017, meaning 2016 is a done deal, and most of the predictions for 2016 that I made about a year ago, and never got around to documenting in any form, have proven to be absolutely bogus. Which leads me to come up with my predictions for 2017 and document them properly. So keep on reading this post and hopefully you'll enjoy it. Unfortunately we'll have to wait about a year to find out to what extent I got it all right, but for now, ...Key Value Indicators.

Why KVI's? Why Key Value Indicators? Because we all work to increase our business' value. Empowerment and autonomy of teams invalidate the role of KPI's; instead, teams are judged by the value they create and not the costs they incur.

KVI in, KPI out

Forget about performance. Performance, in the end, means nothing when it comes to an organisation’s bottom line. What matters is value. However you want to cut it, unless value is created, it’s not worth the effort. And by value being created I mean that the difference between cost and benefit increases.
So unless a KPI is expressed in terms of how much value is being created, it’s highly questionable to judge a team or an individual by that KPI. In fact, if you take LEAN seriously, you should be aware that in many situations no performance will create more value than some, or a lot of, performance. In many cases, KPI’s will result in the creation of waste, of shelved products.

So what we will see in 2017 is that teams as well as individuals will be judged not by their performance but by the value they create. By KVI’s instead of KPI’s. This will be in conjunction with an increased level of autonomy and a management style that revolves around empowerment. There will be a framework, an architecture defined by principles, defining the boundaries within which a team can operate freely such that they will be able to meet and exceed the agreed upon value that will be created by them.

A key aspect of this whole new way of thinking lies in the fact that we start thinking in terms of products instead of projects and that teams will be held responsible for these products from cradle to grave. Product Owners, DevOps and Product Teams are the new trends of 2017, and that allows us to drop the questionable KPI in favor of the real metric: the KVI.


Arc-E-Tect

February 3, 2017

Let me spell out why Bimodal IT doesn't work

Summary

In short, in a Bimodal IT environment, both modes need to move at the same pace in order to optimise productivity of the organisation. Value is created by products in the hand of the user, not by parts on a shelf or piles of almost ready products. This means that unless all relevant information is exposed by the Mode 1 systems, the Mode 2 systems will adjust their pace.

There's no excuse to use Bimodal IT as an excuse

Bimodal IT is an excuse for those who just don't want to accept that we live in a world where no one size fits all, and where we need to live together instead of separately. Thinking that Bimodal IT might work totally ignores the fact that the old and the new together result in synergy, that the short term and the long term go hand in hand, and that classic IT models like the ancient Change/Run division can co-exist with the brand new DevOps way of working. We have to realise that in IT, only the binary system is black and white; everywhere else in IT there are more than 50 shades of grey.

If you're wondering about my view on Gartner's Bimodal IT model, you should read my post on it. It's available here.

Why this post? Because in the last week, although I stayed home for a day and a half with the flu, I had several discussions with colleagues about Bimodal IT, and unfortunately they considered it an opportunity. Not an opportunity to bridge the gap between systems with a short life cycle and those with a long life cycle. Or rather, short and long delivery cycles. To be completely precise, it's about the time it takes to think of a new feature and deliver it to a user. One mode assumes long cycles, the other short cycles. Or rather, one assumes many features to be released over time, the other just a few. Think of a core accounting system and its front end. Accounting doesn't change that much over time, apart from maybe some compliance reporting; front ends change all the time. Browser changes, mobile devices, etc.

Back to my discussions. The most symptomatic was about "Bimodal IT is used as an excuse not to comply with the ancient and really seriously out dated processes in IT that are (to be) followed within this organization and I assume unwillingness to try to modernise the processes. By just simply referring to Gartner and their definition of Bimodal IT and stating that this is the exact situation at their organisation, some of my clients architects and project managers are implying that the processes only apply to what's running in the back-end and what they're working on, which is the front-end." [from my previous post]

Understanding the frustration

It's the frustration about ancient processes and the requirement to comply with them that is causing this excuse to rear its ugly head. Fine, I can see this, and since the organisation doesn't want to look into this and adapt its processes to become a little more 21st century, I can only agree with looking for every possible reason not to stick with the old ways.

But here's the catch. Well, let's wait a second here. I had another discussion on the same topic. This discussion was about the idea that Mode 1 systems need to be stable because they're the core systems, forming the backbone of the organisation. Keeping in mind the adage "never change a working system", they should be kept stable at all times.
Unfortunately, these systems exist in an ever-changing world, so even if they don't change, their context changes. And because they are so important, after all they're the backbone of the organisation, when troubles arise they need to be fixed as soon as possible. High speed and high quality. In addition, you want to address issues as soon as they arise instead of queueing them up and applying fixes in large batches.
It's the general misconception of the dinosaur: when you need quality, you spend a lot of time testing. I'll blog about that some other time; for now it suffices to state that time and quality are hardly related.

The catch

Then there was the catch. The catch being that Mode 2 systems are the systems that change very frequently. Possibly because a first iteration is brought to the user as soon as possible and new functionality is added regularly. Possibly because a new idea is tested and dropped when it fails, or productised when it succeeds. What most of these systems have in common is that they rely on Mode 1 systems, since that's where the organisation's biggest asset resides: information. You don't want to change an organisation's information model too much, especially not when the information has been built up over years. Information from these systems is exposed, to be used by other systems, typically Mode 2 systems.
Mode 2 systems are new, by definition. They're implementations of never-thought-of ideas, and therefore it is very likely that the Mode 1 systems do not (yet) expose the relevant information. Hence, with the advent of the Mode 2 systems, the Mode 1 systems all of a sudden need to be changed: regularly, often, quickly, consistently. All the time maintaining a high level of quality.
There is no chance that the Mode 2 people will allow the Mode 1 people to move at their own speed, because that means they'll be moving just as slowly. It's the 'slowest boy scout in the line' case.

In short, in a Bimodal IT environment, both modes need to move at the same pace in order to optimise productivity of the organisation. Value is created by products in the hand of the user, not by parts on a shelf or piles of almost ready products. This means that unless all relevant information is exposed by the Mode 1 systems, the Mode 2 systems will adjust their pace.

Decoupling not to the rescue this time

Mind that decoupling techniques like API's, ESB's, etc. do not solve this problem. Interfaces are owned by the systems that implement them, no matter what method is used to define these interfaces. So thinking that an ESB, an API manager or some other decoupling technology will prevent the Mode 2 people from being slowed down to the pace of the Mode 1 people is foolish.
Also, introducing a Canonical Data Model or some other data-defined insulation layer will not help you either. In fact, that might, or rather will, introduce more complexity and so slow things down even more.
And let's be honest: why would we need to insulate ourselves from working together, from understanding each other's contexts and limitations? Agile and DevOps are all about breaking down silos, about understanding that collaboration gets us further. And in the end we need to deliver products into the hands of the user. Having said that, there's every reason to change archaic processes and apply some LEAN to them.



Arc-E-Tect

API Management in Azure using Microsoft's solution - Resources and Templates [1/2]


Applications

Applications are pieces of software that, when combined, perform a business function. Typically, applications perform various business functions, but ideally each application provides access to a single business function. If you think of it this way, most of the software we traditionally know and see in our IT landscapes, what we call applications, actually contains multiple applications. You can use this software in a variety of ways, in a variety of situations, to perform a variety of business functions. So the name 'application' is incorrect, but understandable.
As always, the cause of this misnomer can be found in history. Deploying software used to be a complex thing, as was developing software, and even more so developing software that communicates with other software. A ton of different concepts, protocols and technologies have emerged over time to facilitate connectivity between different pieces of software. (Skip the next section if you're not interested in a short but incomplete overview of software interoperability.)

Software Interoperability

Microsoft released (D)COM, followed by ActiveX. The rest of the world joined the fray with the incompatible CORBA standard(s), there was RMI for the Java world and RPC for the C(++) world. Then we had other solutions based on message exchange in the form of MOM (Message Oriented Middleware), which was more protocol-independent. Although MOM as a concept was awesome, the big vendors succeeded in ensuring that their products were not interoperable. And because most of the vendors realised their customers didn't really appreciate this, the ESB was invented. Again, it was there to make sure that pieces of software could interoperate and to fix the complexity of our monoliths. ESB's, as we all know, are a big waste of time and money and by far fail to deliver on their promises. I did write a post on the topic, which you can read by clicking here, and I'll write a little update on the topic soon, as many ESB aficionados don't see this yet.
So after we started omitting ESB's from our IT landscapes but kept the ESB concept around, we finally found ourselves in a world that actually delivered on its promises and was low cost: the Internet.

Web-services

You can skip this part if you already know everything about web-services and are not interested in what I have to say about them. It's relevant, but not so relevant that you can't skip over this section.

So web-services, both SOAP and REST-based, are interesting little beasts in our IT landscape. Web-services are the first really useful concept for realising an SOA, a Service Oriented Architecture. They are based on Internet technology and therefore implicitly comply with all the requirements of an SOA.
And then there's the fact that web-services are accessed through a simple interface, the URL. So even when you don't think about it, you're forced to limit the scope of what a web-service can do, because you're limited to a URL. And although there can be a single monolithic structure behind many URL's, or you can use all kinds of different ways of encoding many functions within one URL, there really is no point in doing so, because working with URL's makes it harder to do things the wrong way than the right way.
And the cool part is that this is true even for high-level business functions that require a variety of (technical) low-level web-services. On every level, working with web-services (almost) requires you to develop software that does only a single (business) function. As such, a web-service can be considered an application.
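A minimal sketch of this idea, with entirely hypothetical endpoints and data: each URL template in the routing table maps to exactly one single-purpose function, so the interface itself makes it awkward to hide more than one business function behind a URL.

```python
# Hypothetical sketch: one URL path exposes exactly one business function.

def get_customer(customer_id: str) -> dict:
    """Single business function: look up one customer."""
    customers = {"42": {"id": "42", "name": "Alice"}}  # stand-in data store
    return customers.get(customer_id, {})

def get_order_status(order_id: str) -> dict:
    """Single business function: report one order's status."""
    orders = {"1001": {"id": "1001", "status": "shipped"}}
    return orders.get(order_id, {})

# One URL template per function -- the interface forces a narrow scope.
ROUTES = {
    "/customers/{id}": get_customer,
    "/orders/{id}/status": get_order_status,
}

def dispatch(path: str) -> dict:
    """Resolve a concrete path like /customers/42 against the route table."""
    for template, handler in ROUTES.items():
        prefix = template.split("{", 1)[0]
        if path.startswith(prefix):
            # The first path segment after the prefix is the identifier.
            return handler(path[len(prefix):].split("/", 1)[0])
    return {}
```

In a real service these handlers would sit behind an HTTP server, but the point survives the simplification: the URL is the whole interface, and it naturally scopes each endpoint to one function.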

Web-services make it harder to do it wrong than to do it right.

If you look at it this way, a web-service being an application, you can see that a web-service is a small piece of software that provides just enough functionality for another piece of software, or a person, to consider it useful.
In effect, every web-service, by means of its interface exposed as a URL, is exposing a (business) function. And as soon as you aggregate various web-services into a new web-service, meaning you call several web-services in a particular order to implement a higher-level business function, your new web-service complies with the same rules.
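The aggregation idea can be sketched like this, with all service names and data being hypothetical: a higher-level function calls lower-level "services" in a fixed order, and the result is itself still a single business function.

```python
# Hypothetical sketch of aggregation. In reality each of the low-level
# functions would be an HTTP call to a separate web-service URL.

def check_inventory(sku: str) -> bool:
    """Low-level service: is the item in stock?"""
    return sku in {"BOOK-1", "PEN-2"}

def reserve_payment(amount: float) -> str:
    """Low-level service: reserve funds, return a reservation id."""
    return f"pay-{int(amount * 100)}"

def schedule_shipping(sku: str) -> str:
    """Low-level service: plan a shipment, return a tracking id."""
    return f"track-{sku}"

def place_order(sku: str, amount: float) -> dict:
    """Aggregated service: one business function (place an order),
    implemented by calling the lower-level services in order."""
    if not check_inventory(sku):
        return {"ok": False, "reason": "out of stock"}
    payment = reserve_payment(amount)
    tracking = schedule_shipping(sku)
    return {"ok": True, "payment": payment, "tracking": tracking}
```

Note that `place_order` obeys the same rule as its parts: one URL, one business function, even though three services are called behind it.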

When we look at API's, they're the interface that we put in front of some software to expose specific business functionality. Look at the previous post on the topic by clicking here.

API's are products. The full stop is intentional, because in many ways people tend not to think that way. But really, an API is where the boundary of a system is. The rest of the world accesses that software system through the API, and the rest of the world might just be a user interface.
So if you think about API's being products, or possibly a set of API's being a product, you're on the right track, because you then understand that a web-service is not a product in itself, instead it is a piece in a product's puzzle.
Why is this important? Because the API developer has a responsibility towards the consumers of the API: ensuring that changes to the API do not impact existing consumers while luring in new consumers at the same time. A developer of a web-service does not have this responsibility, or rather, there's a way out.
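What that responsibility looks like in practice can be sketched as follows (field names are made up for illustration): a new version of an API response may add information, but every field an existing consumer relies on keeps its name and meaning.

```python
# Hypothetical sketch: evolving an API response without breaking consumers.
# v1 consumers read only the fields they know; v2 is purely additive.

def customer_v1(customer_id: str) -> dict:
    """The response existing consumers were built against."""
    return {"id": customer_id, "name": "Alice"}

def customer_v2(customer_id: str) -> dict:
    """Backward compatible evolution: every v1 field is still present
    with the same meaning; new information is only added, never removed."""
    response = customer_v1(customer_id)
    response["loyalty_tier"] = "gold"  # new field, optional for consumers
    return response
```

A v1 consumer that ignores unknown fields keeps working against v2 unchanged; that is the contract the API developer owes, and the one a private web-service can escape because it knows its consumers.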

Remember that in a previous post I mentioned that an API does not make any assumptions about its consumers. They're not known in advance, and any assumed consumer is likely not the biggest fan. Web-services, on the other hand, do know their consumers. Or at least, they can be developed such that assumptions are made regarding the consumer. This is also the reason why there's a need for API management in some cases and no need at all in others.






Arc-E-Tect