Charles Young

Thursday, January 22, 2015

I presented at Monday evening’s inaugural session of the Integration User Group. This new user group replaces the UK Connected Systems User Group (UKCSUG). The session was well attended, with a good 50 minutes of questions afterwards. The video of the presentation (first hour only) has been posted at http://bit.ly/185wgtV. The slides are also available on Slideshare at http://slidesha.re/1Cg4oju.

Sam Vanhoutte, CTO and Product Manager at Codit, will be speaking next Monday, 26th January. He is a very well-known face in the Microsoft-centric integration community. The title of his session is ‘Overview of Azure Microservices and the impact on integration’. He will discuss the implications of the announcements made at the recent Integrate conference and how they will affect the future for integration professionals.
 

You can find further information on planned Integration Monday events at http://www.integrationusergroup.com/event.  Follow the events on Twitter using #integrationmonday.   Our very own Nino Crudele is speaking on 9th Feb.  The title is still to be announced but he is always worth hearing.


Thursday, January 15, 2015

I will be presenting at the UK Connected Systems User Group ‘Integration Monday’ webinar on Monday, 19th January at 7:30 pm, GMT:


“The software integration market is heating up, with dozens of new cloud-based vendors and a sea-change in customer expectations. What does this mean for traditional Enterprise Application Integration? What do modern integration tools give us, and where is this all heading? The answer is cloud-based microservices PaaS, and Microsoft is leading the charge forward. What are microservices, what is the next-generation Azure PaaS platform all about, and how will this transform the world of application and service integration in the future?”


Registrations are open at http://www.eventbrite.com/e/integration-monday-2015-01-19-charles-young-tickets-15134643125

Sunday, January 4, 2015

This is the third in a series of articles on the emerging world of microservices PaaS and its relevance to Enterprise Application Integration. The other articles are:

1) Integration and Microservices: Describes enterprise application integration and its core abstraction of messages. Lists some widely accepted principles of microservice architecture and relates these two worlds to the emerging phenomenon of microservices PaaS.

2) Hexagonal Architecture – The Great Reconciler?: Describes hexagonal architecture, its correspondence to traditional multi-tier layered architecture, the Enterprise Service Bus and ‘traditional’ EAI/EDI tools, and the inspiration it provides for microservice thinking.

This article discusses the current iPaaS market and compares and contrasts common mediation approaches for iPaaS and EAI. I’ll also consider briefly how this relates to microservices PaaS.

 

iPaaS is born

In 2011, Gartner introduced two new terms to the emergent cloud market. A Cloud-Enabled Integration Platform (CEIP) is an integration technology which supports and exploits private cloud hosting. An Integration Platform-as-a-Service (iPaaS) is a cloud-hosted integration platform that provides “a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on-premises and cloud-based processes, services, applications and data within individual, or across multiple, organizations”. Gartner described an iPaaS as combining “integration and governance and other capabilities…as a single, integrated infrastructure”.

At the time, Gartner was criticised for defining an iPaaS along similar lines to large on-premises integration platforms that handle mission-critical workloads within the enterprise. They seemed to envisage iPaaS as a cloud-hosted equivalent of the existing on-premises technologies, offering a correspondingly extensive list of functionality.  This vision was deeply problematic for an emergent market. Apart from the huge amount of engineering this would entail, the very idea strikes at the heart of one of the central benefits of the cloud. The chief characteristic of the cloud is not technology innovation. It did not emerge from some break-through in computer science or disruptive new technology capability. The cloud, both public and private, provides familiar platforms and frameworks. It introduces greater elasticity and it can reduce costs and timescales across the application lifecycle, especially at the start of that cycle. However, one of its most disruptive characteristics, especially in the public cloud, is not technological, per se, but rather its ability to ‘democratise’ technology.

Democratisation means making technology more readily accessible to a wider group of consumers. The public cloud, by its very nature, demands this accessibility. It employs a direct cost model that requires customers to pay for what they use. The commercial proposition is to provide the widest group of customers with rich capabilities they can easily consume at a lower cost than any alternative. Public cloud vendors, therefore, have no choice. They have to democratise technology in order to build a viable commercial offering.

Democratisation is a challenge for enterprise application integration. EAI has a reputation for being difficult. The tools and products that support EAI are widely seen as complex and difficult to use. The major enterprise-level integration suites certainly contain rich technology sets. They engender a steep learning curve. However the complexity is not a simple matter of tooling. There is inherent complexity that arises from business processes themselves and from the siloed, proprietary implementation of so many applications and services. When developers experience the steep learning curve associated with integration technology, this is often as much to do with gaining an understanding of EAI patterns, approaches and techniques as it is about mastering a particular tool set.

Since Gartner introduced the term in 2011, there has been an explosion of iPaaS offerings. I had no trouble, recently, identifying over forty such services in the marketplace. My list is not exhaustive. Few of these correspond to Gartner’s original vision of cloud-based equivalents to their on-premises cousins. Instead, the market has voted with its feet by rejecting the long shopping list of functionality in favour of lightweight integration offerings with a focus on rapid development and integration of SaaS services and APIs. Gartner has had to follow this line, and now happily ranks these services in its ‘magic quadrant’.

There are, of course, many variations on a theme. There is no precise way to categorise the different approaches to iPaaS. However, we can suggest four broad categories:

  • CEIP: On-premises integration and service bus technologies which are extended to support hosting in the cloud. This is the ‘Cloud-Enabled Integration Platform’ identified by Gartner as a category distinct from iPaaS. However, it is useful to include it here to illustrate the full spectrum of current approaches. Solutions generally require development using mainstream or specialised integrated development environments (IDEs) and programmatic skills.
     
  • Hi-Fidelity Hybrid: Cloud-based extensions to existing on-premises technology or on-premises agents for cloud-based technology. They offer high fidelity between cloud-hosted and on-premises integrations, allowing workloads to be moved between environments with little or no re-engineering. Cloud-hosted functionality that extends existing on-premises technology often requires development skills and the use of mainstream or specialised IDEs.
     
  • Low-Fidelity Hybrid: Lightweight cloud-based extensions to an existing on-premises technology. These extensions augment the existing tool set with cloud-based capabilities, but do not attempt to reproduce an equivalent or code-compatible cloud environment for the on-premises functionality. They may, however, provide a degree of ‘symmetry’.
     
  • Cloud-Only: ‘New build’ iPaaS services with no relationship to any on-premises product or technology. Development tools are typically browser-based, but the service may allow custom code, written in some mainstream or proprietary language, to be uploaded and hosted in the cloud environment. Some services, however, are strictly configuration-only, and are therefore closer to the SaaS, rather than the PaaS, model.

When marketing iPaaS capabilities, vendors generally emphasise some or all of the following characteristics:

  • Rapid development of integrations.
     
  • Use of browser-based composition and mapping tools, rather than IDEs.
     
  • A preference for configuration over coding.
     
  • Extensive libraries of service- and application-specific connectors. Some vendors claim to offer thousands of connectors. Libraries of between 100 and 400 connectors are more common.
     
  • Good support for SaaS and social networking APIs.
     
  • Elastic scale.
     
  • Workflow-style composition of mediation and routing components.

These characteristics play to the strength of the cloud as an environment that democratises integration. Developers and ‘power users’ create their integrations rapidly without the need for in-depth development skills. They employ simple browser-based graphic tools rather than complex development environments. They use connectors that have a built-in understanding of their target applications and services. These connectors reduce mediation wherever possible to a set of configurable options. The solutions scale automatically to handle workloads of arbitrary size and complexity.

iPaaS is a forward-looking approach. It envisages an ever-increasing dependency on cloud-hosted SaaS services, integrated primarily via REST APIs. Most iPaaS services support JSON data representation as well as XML. Indeed, the ascendancy of REST and JSON as the basis for cloud-based service interchange is one of the chief enablers of iPaaS. They act as the cement for its foundations, providing a base level of standardisation which offers significant benefits.

This isn’t just a matter of adopting a common approach based on a minimum set of verbs, uniform resource identification and human-readable data representation. The stateless and minimalist qualities of REST force SaaS development teams to think carefully about the consumption of their services by applications which are unknown at the time of development. Most SaaS services are built from the ground-up for integration. They implement APIs that are designed to be consumed by unknown clients. Where possible, they avoid the sorts of complexity that regularly arise in enterprise application integration. Where necessary, they constrain that complexity through the use of RESTful APIs and well-understood web service patterns.

 

Adaptation and connectors

One of the more striking characteristics of the evolving iPaaS world is the emphasis placed on libraries of connectors. Each connector handles mediation between some service or application and the iPaaS integration service itself. We saw in a previous post that mediation between an ‘inner’ and ‘outer’ world is a key feature of hexagonal architecture and integration services. In hexagonal architecture, connectors are adapters which may drive, or be driven by, the logic contained in a given iPaaS service instance.

Most iPaaS services provide connectors that support specific applications, services and APIs. Connector design emphasises configuration over development. This is especially true of API-centric connectors. Typically, these components will implement detailed support for API-specific methods. For RESTful services, these methods are centred on the use of the standard HTTP verbs and URL resources.
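
To make the configuration-over-development idea concrete, here is a minimal sketch of a generic REST connector in Java (11+). It is illustrative only: the class and parameter names are my own invention rather than the API of any particular iPaaS product, and a real connector would also handle authentication, retries, paging and metadata.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestConnector {

        private final HttpClient client = HttpClient.newHttpClient();
        private final String baseUrl;   // e.g. "https://api.example.com" (hypothetical)

        public RestConnector(String baseUrl) {
            this.baseUrl = baseUrl;
        }

        // The verb and resource path are configuration, not code: the connector simply
        // binds them to a standard HTTP exchange and returns the response body.
        public String invoke(String verb, String resourcePath, String jsonBody) throws Exception {
            HttpRequest.Builder builder = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + resourcePath))
                    .header("Content-Type", "application/json");

            switch (verb.toUpperCase()) {
                case "GET":    builder.GET(); break;
                case "DELETE": builder.DELETE(); break;
                case "POST":   builder.POST(HttpRequest.BodyPublishers.ofString(jsonBody)); break;
                case "PUT":    builder.PUT(HttpRequest.BodyPublishers.ofString(jsonBody)); break;
                default:       throw new IllegalArgumentException("Unsupported verb: " + verb);
            }

            HttpResponse<String> response =
                    client.send(builder.build(), HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }

A ‘create customer’ operation, for example, might then reduce to invoke("POST", "/customers", payload), with everything else supplied by configuration.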

Most iPaaS services provide support for major SaaS services such as Salesforce.com, Dropbox, Google Apps, Dynamics CRM and Workday. There may be connectors for established ERP software such as SAP and JD Edwards, and for social networks like Twitter, Facebook and LinkedIn. There may be support for relational and NoSQL databases, application software, vertical industry standards like SWIFT, HL7 and HIPAA, EDI standards (EDIFACT, X12, TRADACOMS, etc.) and much more.

As well as API-centric connectors, most iPaaS services support a library of protocol adapters. Protocol adapters handle specific transport and application protocols such as FTPS, SFTP, SMTP, POP3 and AMQP. These adapters extend the range of integration scenarios that the service can address.

Connectors play a crucial role in message mediation. However, they are only part of the story. Mediation involves the processing of messages as they pass between systems, applications and services. Connectors and protocol adapters handle one end of this interchange. In terms of hexagonal architecture, they mediate directly with the external world. However, the ports and adapters of hexagonal architecture connect this external world to the application domain. Mediation needs to handle domain-specific concerns as well as connections and protocols.

 

Mediation unplugged

Mediation frameworks have traditionally been modelled on the upper levels of the OSI Reference Model. Even where this is not explicitly the case, the OSI model still provides a useful yardstick against which mediation frameworks can be measured. Of course, we need to interpret any correspondence to the OSI model in a generous and abstract way, rather than a highly literal fashion. The OSI model is, after all, conceptual in nature.

The OSI reference model plays a foundational role in networking. It may seem strange to claim it is also central to integration. Consider, however, that networks connect different applications and services, but are not sufficient to mediate between those systems. For example, consider encryption and decryption. The network stack handles transport-level encryption (e.g., SSL/TLS) in layer 6 (the Presentation layer) of the OSI model. However, a network stack isn’t designed to handle application-specific encryption of data fields at the message level. Similarly, the network stack will handle sessions at the transport level (e.g., socket-based sessions) in layer 5 (the Session layer) in the OSI reference model. However, it won’t know how to link session management to higher-level transaction semantics, error handling, recovery mechanisms or message throttling policies.

We can think of mediation as a set of extensions to the upper layers of the OSI model. The top three layers (Session, Presentation and Application) act as a logical unit concerned with mediation between source and destination systems. The lower layers handle networked transmission of data across a physical medium. As a unit, the upper three layers interface directly with the Transport layer (layer 4). For integration purposes, we extend all four upper levels. This is because, in integration, we will need to cater, at a minimum, for protocol adaptation, including adapters based on transport-level protocols such as TCP and UDP.

 

Understanding mediation frameworks

With these concepts in mind, we can sketch the logical approach to mediation taken by integration tools and services. The following diagram illustrates the concerns that may be addressed by different layers of the mediation framework. In this model we can think of the connector as the bridge between the network stack and the mediation layers.

 

[Diagram: concerns addressed at each layer of a mediation framework]


The list is by no means exhaustive, but does serve to illustrate that mediation is non-trivial. Many of the current iPaaS services do not attempt to address all of the concerns outlined above. This is often true of the Cloud-Only category identified earlier, and also of the cloud-based extensions in the Low-Fidelity Hybrid category. The Hi-Fidelity Hybrid category exhibits more variation. Some services provide rich EAI functionality that is reproduced in the cloud. Others are similar to Cloud-Only services, but provide simple on-premises agents or daemons to allow hosting of workloads behind corporate firewalls.

The richest mediation frameworks are generally associated with the CEIP category. This, in turn, reflects the fact that many products in this category are fully functional EAI products. I am, of course, generalising. There is a great deal of variation. However, the iPaaS market values simplicity, whereas the EAI market has tended to value rich, generic functionality and accept the corresponding complexity.

One example of this is batch processing. I haven’t investigated all the iPaaS offerings in depth, but I have yet to find any which offers the level of batch control I am used to in the EAI world. This is hardly surprising. With its emphasis on SaaS and REST APIs, iPaaS plays to a market in which batching, if used at all, is handled by external services through well-understood patterns and standardised interfaces. In the EAI world, message batching is fairly common, but is associated with a host of complex issues that are often handled inadequately by applications and services. EAI tools, therefore, integrate batch management deeply into their mediation frameworks alongside transaction control, persistence, recovery, failed message handling and other capabilities.

Another area where EAI tools are often richer is transformation. Almost all integration products implement a user interface to support the ‘mapper’ paradigm. However, these tools are not all equal. There is considerable variation among iPaaS services. Some provide transformation tools that rival those of EAI products. Others implement very rudimentary tools that can only handle simple maps. A surprising number of iPaaS mapping tools depend on textual scripting, rather than configurable component functionality, to do the heavy lifting. Again, I have not investigated all the iPaaS tools in depth, but I have yet to find mapping tools that are designed to handle the extremely large message types and complex mapping requirements that are sometimes imposed by vertical industry messaging standards or complex enterprise suites.

 

Connector libraries

Most iPaaS tools emphasise the use of configurable connectors to speed up development and aid rapid integration. To this end, they provide connector libraries, many of which are sizeable. This is sometimes claimed to be an advantage over EAI products. The truth, of course, is more nuanced than the sales literature suggests, and it is worth considering some of the issues that arise.

Connectors can be broadly, and non-exhaustively, categorised as follows:

  • Protocol Adapters: These connectors are highly generic. They handle a specific combination of transport and application protocols, but are not built for any specific application or service. They support protocols such as FTP, SFTP, POP3, SMTP, HTTP, TCP/IP sockets, etc.
     
  • Bridges: These connectors are generic, but connect the application domain to a specific external messaging platform or system (e.g., IBM WebSphere MQ, TIBCO Rendezvous), a value-added network or a runtime environment (e.g., JRE, CLR).
     
  • SOA connectors: These connectors support service orientation and ESB approaches. Again, they are generic adapters. They mainly support SOAP, including WSDL and WS-* standards, and REST. They may also support integration via ESB products and services.
     
  • Data Adapters: These connectors are specific to data storage and management technologies. They are generic in the sense that those data management systems can be used for a wide variety of purposes. Data adapters handle connection and authentication with data stores. They generally support the ability to query and poll data stores as well as perform CRUD operations. They may provide forms of metadata harvesting through design-time support for obtaining data schemas and other information.
     
  • Application Adapters: These connectors support interchange with specific applications, including line-of-business and back-office applications, ERP systems, CRM systems, etc. Like data adapters, they handle connection and authentication with those applications and provide forms of metadata harvesting. They support application-specific message formats and functions.
     
  • API Adapters: These connectors support specific APIs. These are generally web APIs, most of which are RESTful or closely aligned to REST. API adapters generally support metadata harvesting to provide access to message schemas and to lists of operations that the API supports. They also handle connectivity to public APIs, often via OAuth 1.0 or 2.0.
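
As a rough illustration of how these categories relate, the following Java sketch shows one way a connector library might share a common abstraction, with generic protocol adapters distinguished from application and API adapters that add design-time metadata harvesting. The interface names are hypothetical and deliberately simplified.

    import java.util.List;
    import java.util.Map;

    // Common contract: a connector can be driven by the external system (receive)
    // or drive it (send).
    interface Connector {
        byte[] receive() throws Exception;
        void send(byte[] message) throws Exception;
    }

    // Protocol adapters and bridges are generic: they are configured with transport
    // details (host, port, credentials, queue names) but know nothing of any application.
    interface ProtocolAdapter extends Connector {
        void configure(Map<String, String> transportSettings);
    }

    // Application, data and API adapters add design-time metadata harvesting so that
    // message schemas and supported operations can be discovered before deployment.
    interface ApplicationAdapter extends Connector {
        List<String> listOperations();
        String fetchSchema(String operationName);   // e.g. a JSON Schema or XSD document
    }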

There is no general difference between iPaaS and EAI connectors. In both worlds, they provide broadly comparable capabilities with similar configuration-first approaches. However, the established EAI tools focus on on-premises application integration and predate the emergence of SaaS. Hence, they generally offer fewer API adapters out of the box. This is in contrast to the iPaaS world which emphasises integration via web APIs. This is one of the reasons why iPaaS products often provide larger connector libraries. However, it is not the whole story.

As well as web APIs, iPaaS connector libraries generally provide richer support for NoSQL data management systems and for distributed caching. EAI tools tend to focus more on rich support for a small number of major relational database systems.

Some iPaaS libraries provide multiple application adapters in place of a single, richer adapter. This is often the case for adapters for large ERP and CRM suites. I suspect this reflects a desire to make each adapter as easy to use as possible. Again, I can only generalise, but I think it is fair to say that EAI tools have a greater tendency towards providing a smaller number of larger, more complex application adapters.

Another area in which iPaaS services tend to differ from EAI products is access to the libraries. With EAI tools, the emphasis has generally been on ‘out of the box’ adapters. Some EAI products have spawned sizeable ecosystems of third-party and community connectors. However, these ecosystems may have no centralised catalogue or marketplace through which they can be discovered, purchased or downloaded. They are licenced by each vendor and supported separately from the EAI product.

The iPaaS world is striving to implement a different model, albeit with partial success. Adapters are often easily discoverable via web sites. With so many new entrants in the market, few vendors are in a position to build sizeable ecosystems of third-party and community adapters. However, some are having a degree of success. Others must invest in providing sizeable libraries of proprietary connectors. Those vendors that are establishing a wider ecosystem take care to ensure that adapters are easily discoverable. They may provide a marketplace for third-party connectors with the possibility of generating revenue streams. Some vendors link adapter usage directly to service cost, providing greater access or usage to customers who pay higher premiums.

The centralisation and monetisation of connector ecosystems is not, by any means, a definitive characteristic of today’s iPaaS markets, but it does, I believe, point to the future, especially in the context of emerging microservices PaaS platforms.

 

Mediation beyond the connector

Connectors are configurable components which handle the interchange with external applications and systems. In hexagonal architecture, this is only part of what an ‘adapter’ must achieve. EAI tools and iPaaS services, alike, provide additional support for implementing mediation.

There is a great deal of variation in terms of implementation, but some broad themes emerge. We can sketch the outline of three such approaches. It is tempting to associate these neatly with EAI, ESB and iPaaS, and there is indeed a degree of correlation. In reality, however, this delineation is not clear.

Pipeline Mediation

In this model, mediation is handled through some form of pipeline processing. The pipeline is a construct that handles the sequential processing of messages emitted by or sent to a connector. We can think of these pipelines as conceptual extensions to the message channel that terminates at some message endpoint. In reality, pipelines are specialised forms of sequential workflow. The diagram below illustrates one possible arrangement for two-way interchange.

 

 

[Diagram: pipeline mediation for a two-way interchange]


Pipelines are often invoked in the context of some configurable container or service which handles additional transport, session and application concerns, leaving the pipelines free to concentrate primarily on presentation concerns. The container or service may host or manage connectors.

Pipelines are generic constructs. They may support a flexible template approach for defining sequential processing stages, or the stages may be pre-determined. Because of their generic nature, they generally require careful attention to be given to the definition of message schemas. Some pipelines only allow configuration of built-in functionality. Others allow developers to plug a sequence of mediation components into each pipeline. Pipelines may support the use of message maps to transform messages. In an earlier post I described how messages are the primitives in EAI, and how this is linked to the perceived value of individual messages to the business. Pipeline processing corresponds well to this emphasis in EAI.
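
A minimal Java sketch of the pipeline idea follows. The MediationStage and Pipeline names are my own; real pipeline engines add per-stage configuration, schema awareness and error handling, none of which is shown here.

    import java.util.ArrayList;
    import java.util.List;

    // One pluggable processing stage: decode, validate, map, enrich, and so on.
    interface MediationStage {
        byte[] execute(byte[] message) throws Exception;
    }

    // A pipeline is a specialised sequential workflow: each stage sees the output of the
    // previous stage, one message at a time, with no branching or looping.
    class Pipeline {
        private final List<MediationStage> stages = new ArrayList<>();

        Pipeline add(MediationStage stage) {
            stages.add(stage);
            return this;
        }

        byte[] run(byte[] message) throws Exception {
            for (MediationStage stage : stages) {
                message = stage.execute(message);
            }
            return message;
        }
    }

A receive-side pipeline might then be composed as new Pipeline().add(decode).add(validate).add(mapToCanonical), with each stage supplied by configuration rather than bespoke code.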

One of the main advantages of this model is that it maintains a clean distinction between mediation and application logic, in accordance with hexagonal architecture. The constrained nature of pipelines discourages developers from using mediation to implement automated business process logic and other application services. Pipelines don’t model complex flow control and are designed for per-message processing.

The container model provides a tightly-coupled and monolithic approach to handling mediation. Although it may make use of pluggable and re-usable components, the container is complex and often provides considerably more functionality than is required for any single interchange. The complexity of the mediation framework suits more centralised architectures. It may require more complex configuration and deployment than other models. It tends to emphasise the development of schemas, maps and mediation components using specialised tooling.

Monolithic mediation containers do have advantages when tackling complexity. For example, they make it very easy to implement robust transactional control over entire batches of messages: the entire batch succeeds or fails as a unit, and individual messages that fail can, if required, be routed to a dead-letter queue or error handler within that single transaction, co-ordinated with any external transactions as required. As we have seen, this kind of complexity is often encountered in EAI.

Independently Deployable Mediation Services

This approach emphasises mediation through the loosely-coupled composition of fine-grained mediation services. A decade ago, this was strongly advocated as a better alternative to pipeline processing by the ESB community, leading to controversy in the wider EAI community. Instead of centralised, monolithic, heavy-duty mediation frameworks, this approach employs a lightweight, highly distributed service container model. The monolithic mediation pipeline is refactored as a collection of independent services that can be hosted freely across the distributed environment and scaled horizontally across as many instances as required.

 

[Diagram: independently deployable mediation services]


The approach supports easier evolution of mediation. As the business landscape changes, organisations must evolve their investment in mediation to handle new requirements quickly and cost-effectively. This is facilitated by an approach in which each mediation component is separately deployable and versionable as a discrete fine-grained service. It may provide the freedom to implement mediation functionality using a mixture of languages, tools and frameworks. However, this potential advantage is often compromised by the need to adopt a unified service container technology which, in turn, builds in dependency on a constrained set of languages, runtimes and tools.

A lightweight container model may not implement the same richness as a heavy-duty mediation container. In any case, some issues, such as robust transactional control over entire batches, are very difficult to implement in a highly distributed fashion. There is still a need to co-ordinate the invocation of the discrete services and to provide monitoring, tracking and diagnostics across an entire composition of fine-grained services.

Over the years, it’s fair to say that the original ESB vision for EAI has only been partly realised. Several ESB frameworks and products provide EAI functionality. Indeed, it has become difficult to think of ESB independently of EAI. There has been a tendency to build complex container models that mirror much of the complexity and even the monolithic nature of EAI tools. The configuration and deployment models required for EAI can be even more complex than those used in more centralised models. Like EAI products, the containers support the implementation of coarse-grained mediation services.

From a commercial perspective, ESB tools tend to licence their technology differently to EAI tools, allowing affordable distribution of solutions widely across the enterprise. This is often the major differentiator for on-premises products. ESB products favour wide distribution of mediation services. EAI products favour constrained distribution of mediation services over a small number of centralised servers.

Workflow Mediation

We’ve seen that there is a tendency to move towards more monolithic, coarse-grained mediation services to handle the complexities of mediation, especially in EAI scenarios. The third approach acknowledges and embraces this. The motivation is to provide integration tools that simplify the creation of coarse-grained mediation services by moving away from configurable frameworks and the composition of fine-grained services. Instead, entire interchanges of arbitrary complexity and granularity are composed graphically as single services. These are very coarse-grained services that combine three elements:

  • Connectors – a single service may contain multiple connectors
     
  • Mediation components – the emphasis is on configurable, pre-built components that implement fine-grained behaviours.
     
  • Flow control elements – used to bind the connectors and mediation components to handle multiple interchanges. Implementations may support a range of workflow features including branching, looping, parallelism and the ability to model and invoke sub-flows.

 

[Diagram: workflow mediation combining connectors, mediation components and flow control]


Workflow mediation has been supported in many different forms for a long time. It is commonly used in ETL (Extract, Transform and Load) tools. Some of the ESB products have adopted this approach instead of fine-grained, highly decoupled mediation services, although they also support the implementation of fine-grained services where required.
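
As an illustration of how the three elements above combine, here is a small Java sketch of a single workflow-style integration, reusing the Connector and MediationStage interfaces from the earlier sketches. The flow, the connectors and the branching condition are all hypothetical; the point is simply that connectors, mediation components and flow control live together in one coarse-grained service.

    // A single coarse-grained 'flow' binding connectors, mediation components and
    // flow control into one deployable unit.
    public class OrderFlow {

        private final Connector crmConnector;        // e.g. an API adapter (receives orders)
        private final Connector erpConnector;        // e.g. an application adapter (target system)
        private final MediationStage mapToErpFormat; // a configurable mapping component

        public OrderFlow(Connector crmConnector, Connector erpConnector, MediationStage mapToErpFormat) {
            this.crmConnector = crmConnector;
            this.erpConnector = erpConnector;
            this.mapToErpFormat = mapToErpFormat;
        }

        public void run() throws Exception {
            byte[] order = crmConnector.receive();

            // Flow control: branching on message content inside the mediation layer,
            // which is exactly where business logic can start to creep in.
            if (new String(order).contains("\"priority\":\"high\"")) {
                erpConnector.send(mapToErpFormat.execute(order));
            } else {
                // e.g. route lower-priority orders to a batch queue instead (not shown)
            }
        }
    }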

EAI products have tended to take a slightly different approach. They may implement workflow tools which can be used for mediation purposes, but cleanly separate them from mediation frameworks, which are often licenced as different modules or products. The primary purpose of these tools is to implement service orchestration and automated business processes in the application domain. They may be based on BPEL or executable BPMN. These orchestration tools may have limited or no support for connectors, but can generally interact with web services, although this may be restricted to SOAP. They may interact directly with a proprietary message fabric (e.g., queue-based topics and subscriptions) which supports a separate mediation framework, and which may also be licenced separately.

 

Business Process Integration and Hexagonal Architecture

Most iPaaS services implement workflow mediation as their central approach. Many provide browser-based workflow composition tools, although some still require specialised IDEs. Workflow mediation favours configuration over development. It supports rapid implementation and deployment using graphic tools. In some cases, it may enable power users as well as developers. However, this comes at a price. We have seen that many iPaaS services provide simplification by avoiding some of the inherent complexity that regularly arises in EAI. We can also see that the workflow approach constitutes a monolithic, coarse-grained approach to integration.

The problem runs deeper than the coarse-grained nature of the services. Workflow mediation fosters an approach that runs contrary to the insight of hexagonal architecture. It makes no clean distinction between the mediation layer and the application domain. Indeed, it is common for iPaaS vendors to promote their workflow mediation as an environment for automating business processes.

In hexagonal architecture, business processes are ‘external’ to the application domain. They are the activities that people, departments and trading partners undertake on a regular basis. They include customer-facing and back office processes as well as processes that cross organisational boundaries.

EAI addresses scenarios where organisations need to achieve the following:

  • The freedom to select appropriate technology to support specific business activities and to vary the technology selection over time to support evolving business processes.
     
  • Integration of diverse business activities, carried out in different parts of the organisation, within business processes that cross organisational boundaries and share data, services and resources in an efficient and coherent manner.
     
  • The ability to govern business processes and measure their performance, regardless of the diversity of activities that make up each process and the systems that support those activities.
     
  • A clear delineation of corporate responsibility for different processes and activities.

EAI is more than technical integration of different applications. It is the integration of the business itself, the activities it carries out, the processes that support those activities and its interaction with customers and trading partners. Organisations that understand EAI in this way are better equipped to make appropriate decisions about integration technology and mediation approaches.

The ‘inside’ of hexagonal architecture is the locus for business process integration. It handles interchange between the activities and processes that reside on the ‘outside’. The automated processes in the application domain, together with the processes, whether automated or manual, in the external business environment, constitute the concerns of importance and interest to the business. Ports and adapters, together with the external applications, services, data stores, devices and systems, support those business concerns and capabilities.

[Diagram: business process integration within hexagonal architecture]


Messages provide the core abstraction for business process integration. They are the unit of exchange across the mediation boundary. Some messages will be dictated by external applications, systems and services. Others will be defined within the application domain, independent of any specific application. These domain messages (in EAI they are often called ‘canonical’ messages) play a pivotal role when integrating business processes. They protect the investment that organisations make when integrating their processes. They do this by cleanly decoupling the integration logic from the various applications and systems used by the organisation.
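
A small Java sketch may help to illustrate the canonical message idea. The CanonicalOrder record and the CRM-specific mapper below are invented for the purpose (and assume a recent JDK): the essential point is that mapping to and from each external representation lives with the adapter, while the integration logic sees only the domain message.

    import java.math.BigDecimal;

    // Defined within the application domain, independent of any one application.
    record CanonicalOrder(String orderId, String customerId, BigDecimal totalAmount, String currency) {}

    // One mapper per external representation, owned by the adapter rather than the domain.
    class CrmOrderMapper {

        CanonicalOrder toCanonical(String crmOrderPayload) {
            // Parsing of the CRM-specific payload is elided; a real mapper would read
            // these values from crmOrderPayload rather than hard-coding them.
            return new CanonicalOrder("SO-1001", "CUST-42", new BigDecimal("199.95"), "GBP");
        }

        String fromCanonical(CanonicalOrder order) {
            // Serialise the domain message back into the CRM's own (hypothetical) format.
            return "{\"Id\":\"" + order.orderId() + "\",\"Total\":\"" + order.totalAmount() + "\"}";
        }
    }

If the organisation later replaces the CRM, only the mapper and its adapter change; the CanonicalOrder and the processes that consume it remain untouched.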

Workflow mediation encourages the implementation of monolithic integration services that make no clean distinction between the ‘inside’ of hexagonal architecture and the mediation layer. While this may be acceptable in many scenarios, it does not provide the best approach to addressing enterprise integration requirements. Of course, developers can take care to ensure that they only use these tools for mediation, separating out any business process integration into independently deployable services. Merging automated business process integration logic with the mediation layer should be avoided where possible. It undermines many of the core reasons for investing in integration tooling.

 

iPaaS meets µPaaS

iPaaS has rapidly grown to become a major theme in cloud computing and has spawned many new companies and services. iPaaS tools provide simple, rapid integration of services and applications. They focus on the integration of web APIs alongside other applications and services, and they exploit the elasticity of the public cloud to scale effectively and rapidly as required. However, from a microservices perspective, many of today’s iPaaS services violate core microservice principles and leave much to be desired.

We have already discussed the problem of workflow mediation, which is central to many of the current iPaaS offerings. Workflow mediation promotes a monolithic approach which encourages inappropriate and brittle merging of application logic with the mediation layer. Other issues arise. iPaaS tools are proprietary and specialised. Their runtime environments often limit the choice of languages and tools available to developers. They may be tied to a specific cloud platform or environment, limiting hosting choices. They are, in effect, as ‘centralised’ as EAI tools ever were, forcing developers to depend on a single proprietary platform for services on which their custom solutions depend, but which are not suitable for any purpose beyond integration.

This may seem harsh, given the appeal that iPaaS has in today’s marketplace. My purpose here is not to undermine that market, but to point out that today’s iPaaS offerings are not a panacea for integration problems. They simplify the model, but each simplification introduces additional issues. For each advantage there is a disadvantage. Like any software, iPaaS services represent a set of trade-offs. Organisations need to consider these trade-offs carefully when selecting technologies appropriate to their needs.

I claim no proven ability to foresee the future, but I suspect that the current generation of iPaaS tools will be a brief step in the evolution of cloud-centric integration. The emergence of next-generation microservices PaaS (µPaaS) platforms suggests that the current models will not prevail. A decade ago, ESB tried to move integration forward to a model that accords more closely with microservice principles. It largely failed in this endeavour.  iPaaS addresses some of the complexity issues, but fails to tackle the fundamental problems.  µPaaS opens up an opportunity to think through these issues afresh. It addresses some of the impediments that have afflicted earlier attempts. It shares the same goals as today’s iPaaS services, but points to a radically different and more democratised approach. These are themes I hope to explore in greater detail in future posts.


Saturday, December 20, 2014

This is the second in a series of articles on the emerging world of microservices PaaS and its relevance to Enterprise Application Integration. The other articles are:

1) Integration and Microservices: Describes enterprise application integration and its core abstraction of messages. Lists some widely accepted principles of microservice architecture and relates these two worlds to the emerging phenomenon of microservices PaaS.

3) Mediation Models for iPaaS and EAI: Discusses the current iPaaS market and compares and contrasts common mediation approaches for iPaaS and EAI, considering briefly how this relates to microservices PaaS.

 

In an earlier post, I described enterprise application integration and its core abstraction of messages. I listed some widely accepted principles of microservice architecture and related these two worlds to the emerging phenomenon of microservices PaaS. In this post, I’m going to lay a foundation for understanding more precisely the correspondence between application integration and microservices. I intend to build on that foundation in future posts.

The Oath of Non-Allegiance

I promise not to exclude from consideration any idea based on its source, but to consider ideas from all schools and heritages in order to find the ones that best fit the given situation.

 

The Bleeding Edge of Architecture

In 2005, Alistair Cockburn, one of the co-authors of the Agile Manifesto and the originator of the Oath of Non-Allegiance, posted an article on his site that recast an old idea in a new form. The concept of ports and adapters is deeply familiar to EAI developers. The new form, which he called ‘hexagonal architecture’, is widely regarded as the inspiration behind microservices.

Hexagonal architecture is a response to the observation that business logic within a given application domain has a tendency to bleed across architecture layers, ending up where it has no right to be. As Alistair Cockburn puts it:

“The attempted solution, repeated in many organizations, is to create a new layer in the architecture, with the promise that this time, really and truly, no business logic will be put into the new layer. However, having no mechanism to detect when a violation of that promise occurs, the organization finds a few years later that the new layer is cluttered with business logic and the old problem has reappeared.”

To understand hexagonal architecture, we first need to understand what it replaces.

 

Multi-Tier Architecture

The most common representation of enterprise-level application architecture is the layered multi-tier approach. A simple representation is provided in the diagram below.

[Diagram: layered multi-tier architecture with presentation, business and data tiers]

In this approach, functionality is separated into at least three layers, or tiers, representing presentation, business and data services. The data tier may be extended to include integration with various applications and systems as well as electronic data exchange with external trading partners. In a sense, a layered architecture of this type is really a very coarse-grained, linear, bi-directional variation of the fundamental pattern of models (data), views (presentation) and controllers (business logic). Of course, we don’t normally think of it like that. MVC patterns are applied at a much finer level within the presentation tier. MVC, in its turn, elaborates the fundamental notion that applications process input to provide output.

The division between presentation tier and data/integration tier can be somewhat arbitrary. For example, implementing an email channel in the presentation tier is much the same as integrating with an email server in the data tier. The choice of tier often reflects a distinction between interchange with direct human involvement and interchange that is entirely automated. That makes sense in many scenarios, but in an increasingly digital world, the model can break down. If we integrate a Twitter feed, is that a channel in the presentation tier or an external event service in the integration tier?

The somewhat arbitrary differentiation between presentation and data tiers can lead to confusion. I often see this in real-world architectural diagrams which extend the basic model by representing additional general purpose capabilities such as mediation, routing, orchestration, monitoring and auditing. These are associated with the business layer because they apply directly to services in that tier, or are used to handle interchange between business services and other services in the adjacent tiers. Where are these capabilities to be placed in the architecture diagram? Should they be at the boundary with the data layer? That doesn’t seem right, because they can also be applied at the boundary with the presentation tier. Should they be incorporated into the business logic tier? Again, this doesn’t seem right. They don’t represent business logic.

One way to make sense of this is to think of these capabilities as an elaboration of the boundary between the business tier and its neighbours. This solves the problem of their application to both adjacent tiers and also delimits a space in which the business logic resides, and which provides general purpose capabilities to those services. We might represent this as follows:

[Diagram: multi-tier architecture with general-purpose capabilities forming a boundary around the business tier]

 

Enter the Bus

In service-oriented architecture, the ‘border’ around the business services is widely envisaged as an ‘enterprise service bus’. However, the terminology of ESBs is problematic. We are invited to visualise the bus in terms of data flow through some universal and all-pervasive communication fabric, just like the data bus on a motherboard. ESBs are habitually depicted as communication pipes into which services pour data to be transported to some destination.

In reality, the ESB provides no such pipework. At the risk of sounding crass, that is what Ethernet is for. A better mental picture is that of the bus bar running around all four walls of our business logic factory to which we can connect our services, wherever they reside in that space. The ESB provides general-purpose capabilities that any service, wherever it may be located within the enterprise, can invoke as required. These services facilitate connection, communication, discovery, insight and much more. These capabilities are services in their own right. They implement discrete units of logic that resolve the parameters of communication, facilitate mediation and monitor behaviour.

Conceptually, an ESB is like the Keep of some medieval castle, protecting the architectural integrity of the services that reside within its four walls and defining a workplace for the servants and guards that look after the needs of those residents. Because an ESB is a concept, we are free to select the appropriate combination of software to support its physical implementation. We should never confuse the notion of the ESB with any specific software product. An ESB is an architectural statement of intent and a framework for the embodiment of service orientation.

The sense of data flow on the bus emerges naturally when we consider that services must communicate in order to collaborate. However, communication between services that live ‘on the bus’ (within the Keep) generally requires little or no run-time support from the bus itself. Most ESBs standardise the interfaces and application protocols used by services (e.g., SOAP or REST over HTTP). If one service knows the address of another service and understands how to format or process the messages exchanged between those services, then that may be quite sufficient. However, if addresses are subject to change over time, it may be wise to implement a general mechanism to resolve endpoints. This facilitates dynamic dispatch, making it easier to evolve and change individual services over time. We may need additional general-purpose capabilities to map between different data representations, enforce security policies or monitor SLAs. These are all capabilities which can be offered by the bus for use by business services, as and when required.

Things get more complex when services that live ‘on the bus’ (in the Keep) need to communicate with the world beyond. The ESB generally defines the acceptable approaches and protocols for such communication, rather like the guards keeping watch at the entrance to the Keep and the servants that manage the channels of communication between the residents and the outside world. It is at this level that there is often a need for more sophisticated mediation and routing services. We may need protocol adaptation, batch control, authentication services, sophisticated data transformation capabilities and much more.

One very common pattern in ESB design is to create services whose sole task is to manage mediation across the bus boundary. These are referred to as ‘On-Ramp’ and ‘Off-Ramp’ services[1], again playing to the notion of ESBs as channels through which things flow. When a message arrives at the ESB boundary from the outside world, it may first be mediated by an On-Ramp service which routes it on to some service on the bus. Conversely, when a service on the bus initiates communication with the outside world, it may send the message to an Off-Ramp service which mediates it and relays it on to some external destination.
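
To make the pattern a little more tangible, here is a brief Java sketch of an on-ramp. All of the names are invented, and a real on-ramp would add validation, correlation and error handling; the point is that mediation and endpoint resolution are its only responsibilities.

    // The on-ramp mediates an inbound message and hands it to a service on the bus,
    // resolved by logical name rather than a hard-coded address.
    class OnRampService {

        private final InboundMediator mediator;     // e.g. decode, validate, map to canonical form
        private final ServiceRegistry registry;     // resolves logical names to live endpoints

        OnRampService(InboundMediator mediator, ServiceRegistry registry) {
            this.mediator = mediator;
            this.registry = registry;
        }

        void onMessageArrived(byte[] externalMessage) throws Exception {
            byte[] domainMessage = mediator.mediate(externalMessage);
            // Dynamic dispatch: the target service can move or be re-versioned without
            // touching the on-ramp or the external sender.
            registry.resolve("orders.intake").send(domainMessage);
        }
    }

    interface InboundMediator {
        byte[] mediate(byte[] message) throws Exception;
    }

    interface ServiceRegistry {
        BusEndpoint resolve(String logicalServiceName);
    }

    interface BusEndpoint {
        void send(byte[] message) throws Exception;
    }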

In real-world service bus design, there may be many situations where the use of intermediary mediation services is deemed top-heavy. In this case, a business service may be implemented to act as its own on-ramp or off-ramp. This, of course, leads to stronger coupling. These trade-offs are as common in ESB design as they are in any other aspect of development.

Equipped with an understanding of the ESB, we are in a better position to evaluate ESB products and frameworks. There are many such technologies available in the marketplace, and much variation in what they offer, so at best we can only speak very generally about the functionality they provide. One common theme is that of service containership. Containership is used to ensure that services can be deployed, versioned and retired freely and independently across the enterprise. For example, some ESB products implement dynamic code deployment via distributed agents so that services can be spun up, torn down and scaled horizontally wherever required. Another role of containers is to provide an environment for each service instance that caters for cross-cutting concerns such as service management, monitoring and security.

Another common theme of ESB products is the inclusion of integration tooling to handle adaptation, mediation and routing between services and, more importantly, across the ESB boundary. This functionality may be accessible through the container. It may be used to implement configurable on-ramp and off-ramp services. In this respect, ESB products are often similar to EAI/EDI products and frameworks. One common difference, however, is that ESB products emphasise highly distributed service environments whereas EAI/EDI products generally promote constrained distribution across a few centralised machines. There are pros and cons for both models, and the choice can be quite confusing for customers.

 

Hexagonal Architecture

Alistair Cockburn’s great insight was to employ the common notion of adapters and ports, widely employed in EAI and EDI, to recast multi-tier architecture in a new form. He replaced the traditional layered representation with an approach which emphasises the boundary between the application domain (the business logic) and the rest of the world. In this model, we no longer have separate presentation and data tiers. Instead, we have a boundary around the application domain that defines an ‘inside’ and an ‘outside’ with respect to business logic.

His aim, as you will recall, was to use architecture to prevent business logic from bleeding across the layers of traditional multi-tier architecture. He accordingly drew application domains as hexagonal shapes. The number of sides is of no consequence. This multi-faceted approach reflects, instead, the idea that the application logic can communicate with ‘external’ devices and systems via ports. Each port represents a logical entry or exit point to or from the application domain. Conversations take place across ports according to some protocol.

In hexagonal architecture, ports are envisaged as containers for adapters. A single port may contain multiple adapters. The role of the adapter is to implement a concrete protocol by which some external system or device can communicate with the application. We can use ports and adapters to represent our enterprise application as follows.

[Diagram: hexagonal architecture, with adapters grouped into primary and secondary ports around the application domain]

 

Mediation

I’ve elaborated on Alistair Cockburn’s original representation by depicting an inner mediation boundary into which adapters are plugged. The reason I’ve done this is to clarify the nature of ports. They are used for two purposes. The first is to group adapters according to a common semantics. For example, we might group adapters together in a single port based on their support for customer channels, or because they connect to trading partners or to a database layer, or to line of business applications. This semantic grouping is more nuanced and informative than the use of presentation and data tiers of multi-tier architecture.

The second role of ports is to protect the application domain from the impact of change to external systems and applications that may result in the use of different protocols, message formats, conversational patterns, etc. Of course, it is never possible to fully protect the application domain from all changes. However, ports help to ensure that, in the same way that business logic does not bleed into other layers, mediation logic does not bleed into the application domain. Instead, ports do whatever is required to adapt the external world to the application domain. Similarly, if we make changes to the application domain, ports minimise the impact on external applications and systems. When changes occur, it is often possible to handle the impact entirely in the relevant port so that the conversation appears to continue unchanged on the other side of the boundary.

Reducing the impact of change in this way is very beneficial. I’ve seen situations where, without this isolation and indirection, the cost of regression testing alone is so high that any significant change becomes economically unviable. This is often the trigger that leads organisations to overhaul their entire approach and adopt a ‘port and adapter’ architecture for integration. With the right kind of mediation, the cost of handling change can be reduced significantly, allowing organisations to be agile and responsive to an ever-changing business landscape.

The reason, then, for illustrating the inner boundary in the diagram above is to highlight the role of mediation in minimising the impact of change and protecting the investments made either side of the boundary. It also illustrates that mediation may occur at both the adapter and the port levels. Consider a scenario where the application domain processes orders sent via different channels. Each channel handles different conversations with different representations of orders. The conversations, however, all relate to the same topic and each order is processed the same way within the application domain. From an inside-out viewpoint, the application domain interacts with mediation capabilities at the port level. However, from an outside-in viewpoint, each channel uses the mediation capabilities of an individual adapter.

One of the features that emerges from hexagonal architecture is an asymmetry between two types of port. Alistair Cockburn termed these ‘primary’ and ‘secondary’ ports. A primary port ‘drives’ the application logic. Conversations with the application domain are initiated by adapters, often as a direct consequence of some external stimulus such as an external system initiating a conversation. Secondary ports are used by the application domain to ‘drive’ adapters. In this case, this is always as a result of the application domain initiating the interaction, generally in order to communicate with an external system or device. Primary ports are coloured black in the diagram above, while secondary ports are coloured grey.
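
The asymmetry is easy to see in code. In the Java sketch below (all names invented for illustration), the primary port is an interface that adapters call into, while the secondary port is an interface the domain calls out through; the domain itself never references a concrete adapter.

    // Primary ('driving') port: a REST controller, message listener or test harness calls this.
    interface PlaceOrderUseCase {
        String placeOrder(String customerId, String productCode, int quantity);
    }

    // Secondary ('driven') port: the domain calls this; an adapter implements it against
    // a real database, queue or external system.
    interface OrderRepository {
        void store(String orderId, String customerId, String productCode, int quantity);
    }

    // The application domain depends only on the ports, never on concrete adapters.
    class OrderService implements PlaceOrderUseCase {

        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        @Override
        public String placeOrder(String customerId, String productCode, int quantity) {
            String orderId = "ORD-" + System.nanoTime();   // trivial id generation for the sketch
            repository.store(orderId, customerId, productCode, quantity);
            return orderId;
        }
    }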

 

Adapters and Ports

As we saw earlier, the role of an adapter is to implement a concrete protocol by which some external system or device can communicate with the application. The terms ‘adapter’ and ‘protocol’ can bear very wide interpretation. Alistair Cockburn defined adapters in terms of the pattern described by the Gang of Four (Design Patterns - Gamma, Helm, Johnson and Vlissides – Addison Wesley, 1995). This adapter pattern concerns only the conversion, at the code level, of one programmatic interface to another. In the world of EAI and EDI, adapters generally handle a greater range of concerns, including transport and application protocol adaptation, batch management, transaction co-ordination and much more. Adapters, then, include a wide range of concrete implementations including Data Transfer Objects, MVC frameworks, ORM, data access layers, services built using Apache CXF, Microsoft WCF and other communication frameworks, and the sophisticated adaptation technologies provided by EAI, EDI and ESB products. These technologies, incidentally, model adaptation and mediation on the upper layers (4 to 7) of the OSI reference model. Increasingly, we are seeing the emergence of ‘hybrid’ adaptation where the adapters and ports may reside behind a corporate firewall and communicate with an application domain hosted in the cloud.

In contrast to adapters, ports are architectural concepts that signify a common semantics shared by one or more adapters. Alistair Cockburn related these semantics to specific topics of conversation between the application domain and the outside world. In my experience, this is by far the most common approach in real world architecture and design. As semantic constructs, ports may or may not have corresponding concrete implementations.

Let’s consider real-world implementations. One EAI product I have used extensively implements primary and secondary ports as configuration items. At run time, this configuration is used to spin up service instances distributed across machines. Each instance represents an endpoint. The port configures this endpoint with a selected protocol adapter and sequential workflows that perform specific mediation tasks on inbound and outbound messages. The port also provides an additional transformation capability that is shared across all the adapters defined by the port. In another example, a widely used communication framework (WCF) allows a single service implementation to be configured with multiple endpoints, each bound to a different mediation stack. In this case, the service itself represents a port.
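For the WCF case, the following minimal sketch shows a single service contract playing the role of a port, exposed through two endpoints, each bound to a different transport and mediation stack. The contract, addresses and binding choices are illustrative only, not a description of any real solution.

using System;
using System.ServiceModel;

// The service contract plays the role of the port; names are illustrative only.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string SubmitOrder(string orderXml);
}

public class OrderService : IOrderService
{
    public string SubmitOrder(string orderXml)
    {
        return "ACCEPTED";
    }
}

public static class OrderServiceHost
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(OrderService),
            new Uri("http://localhost:8080/orders"));

        // Two endpoints, each binding the same port to a different adapter/mediation stack.
        host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(), "soap");
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(),
            "net.tcp://localhost:8081/orders");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}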

Alistair Cockburn emphasises another important benefit of ports and adapters. This is the idea that adapters can be implemented as test harnesses and mocks. Adapters in primary ports drive the application domain while those in secondary ports are driven by the application domain. This fits perfectly with the use of test frameworks and tools. It is not my purpose here to discuss testing in relation to hexagonal architecture, but I do want to highlight a further insight. If adapters can be implemented as test harnesses and mocks, this implies that the use cases that define the functionality of the application domain apply at the inner boundary of the hexagonal model. They are, themselves, decoupled and isolated from any of the external systems and devices that the application communicates with. This implies that test harnesses and mocks, designed per use case, can provide a mechanism to detect the bleeding of business logic across different layers.
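Here is a minimal sketch of that idea, building on the hypothetical port interfaces from the earlier sketch. NUnit is used purely as an example test framework. The test itself acts as a driving adapter on the primary port, and a hand-rolled mock stands in for the secondary adapter.

using System.Collections.Generic;
using NUnit.Framework;

// A hand-rolled mock of the secondary port; no external systems are touched.
public class RecordingNotificationPort : INotificationPort
{
    public readonly List<string> Messages = new List<string>();

    public void NotifyCustomer(string customerId, string message)
    {
        Messages.Add(customerId + ": " + message);
    }
}

[TestFixture]
public class SubmitOrderUseCaseTests
{
    [Test]
    public void Accepted_order_notifies_the_customer()
    {
        // The test drives the primary port directly, exactly as an adapter would.
        var notifications = new RecordingNotificationPort();
        IOrderProcessingPort port = new OrderProcessor(notifications);

        var result = port.SubmitOrder(new Order { CustomerId = "C42" });

        Assert.IsTrue(result.Accepted);
        Assert.AreEqual(1, notifications.Messages.Count);
    }
}

If a test like this can only be written by dragging in channel-specific detail, that in itself is a hint that business logic has bled across the boundary.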

 

Hexagonal Architecture and Microservices

At this stage, it should be obvious that hexagonal architecture describes succinctly the central concerns and approaches in EAI and EDI. We can see that it also corresponds well to the notion of the enterprise service bus. However, hexagonal architecture has a third application. It plays a foundational role in microservice thinking and is regularly described as its main inspiration. This may come as something of a surprise, given that several commentators contrast microservices, sometimes sharply, with ESBs and ‘traditional’ integration approaches. What is going on?

To understand this, we must first explore the relationship between hexagonal architecture and microservices. Microservices are discrete services that ‘do one thing and do it well’. They are independently deployable and versionable and can be developed, replaced or retired with little or no impact on other services or dependent applications. To achieve this, we must pay close attention to the very concerns that hexagonal architecture addresses. We must implement effective mediation at the application service boundary to ensure that business logic never bleeds across it. Because the real world is messy and does not adhere to microservice principles, we are forced to adopt a clear architectural ‘inside’ for our microservices and an ‘outside’ in which the rest of the world resides.

[Diagram: the hexagonal model with microservices hosted in the application domain (the ‘inside’)]

Within the microservices world, there is little need for sophisticated forms of mediation. We adopt a standardised interface such as REST and build our microservices accordingly. Of course, we may still need to transform between different data representations, especially if we compose solutions from microservices built at different times by different teams. Other mediation requirements may arise from time to time, but in each case, we can implement a microservice to handle these concerns. In the microservice world, mediation is best done using discrete, lightweight mediation services. This, incidentally, was exactly the argument promoted by some of the leading voices in the emergent world of ESB a decade ago.

Hexagonal architecture naturally supports another microservices principle. Microservices should be organised around business capabilities. Traditional multi-tier architectures group services by functionality. Microservices, by contrast, should be grouped according to the products and services offered by the organisation. Microservice principles advocate cross-functional development teams with product-centric attitudes aligned to business capabilities.

Hexagonal architecture replaces functional tiers with the notion of ports aligned to the semantics of conversational topics. By extension, these semantics are aligned to business capabilities. Consider, again, the example used earlier of a scenario in which orders are received via different channels and processed in the application domain. Each channel handles different conversations with different representations of orders. However, each conversation shares the same semantics. They are all about the same topic. By grouping the adapters for each channel into a single port we can mediate all these conversations to a single collection of order-processing microservices. We have grouped our microservices according to a business capability.
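As a minimal sketch of that grouping (all names invented for illustration), two channel-specific adapters translate their own order representations into a single canonical form and hand it to the same port. Whatever implements the port, in this case a collection of order-processing microservices, sits on the inside.

// Canonical representation used inside the application domain; illustrative only.
public class CanonicalOrder
{
    public string OrderId { get; set; }
    public decimal Total { get; set; }
}

// The port represents a single topic of conversation: orders, whatever the channel.
public interface IOrderPort
{
    void Receive(CanonicalOrder order);
}

// Each channel adapter translates its own representation into the canonical form.
public class WebShopOrderAdapter
{
    private readonly IOrderPort port;
    public WebShopOrderAdapter(IOrderPort port) { this.port = port; }

    public void OnWebOrder(string orderId, decimal total)
    {
        port.Receive(new CanonicalOrder { OrderId = orderId, Total = total });
    }
}

public class EdiOrderAdapter
{
    private readonly IOrderPort port;
    public EdiOrderAdapter(IOrderPort port) { this.port = port; }

    public void OnEdiOrder(string documentNumber, decimal amount)
    {
        port.Receive(new CanonicalOrder { OrderId = documentNumber, Total = amount });
    }
}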

We saw, earlier, that hexagonal architecture allows us to test microservices against specific use cases by ensuring that those use cases are defined and applied at the inner boundary. This is yet further evidence of the way hexagonal architecture supports the principle of business capability alignment.

 

Reconciling different worlds

Given all this, we may be tempted to think that hexagonal architecture provides the ultimate reconciliation between microservices, on the one hand, and ESB and integration tools on the other. Using the previous diagram, we might claim that microservices describe what goes on in the ‘inside’, while ESB and integration tools should always be employed at the port level. Indeed, that approach would be entirely acceptable in terms of hexagonal architecture.

I can’t see this claim being accepted by the microservices community. However, I think it reasonable to suggest that hexagonal architecture takes some of the heat out of the argument. It invites us to consider that ESB, EAI and EDI are closely aligned to microservices and share much in common. In particular, they share many common concerns which they address in fundamentally similar ways at the architectural level.

As an example, consider the real-world case of an EAI product that provides a textbook implementation of hexagonal architecture. The product I have in mind predates Alistair Cockburn’s article, but uses the same terminology and ideas to address the same concerns in the same way. More than this, its ‘inside’ allows developers to create and host services which can be deployed, versioned and hosted as independently as required and scaled horizontally across multiple processes and machines. Each of those services implements standardised interfaces. Mediation is separated cleanly from these services and implemented with ports.

I am quite certain that this product will never be regarded as an example of microservice architecture. Indeed, it would be widely considered as an excellent example of exactly the approach that microservices disavow. In the same way, this EAI tool was regularly singled out by commentators in the emergent ESB community a decade ago as an example of everything they disapproved of. Some of the iPaaS vendors have recently adopted similar arguments against the same toolset. The wheel turns and turns.

To make sense of this, we need only ask what characteristics of the EAI product attract the disapprobation of the microservices community. It clearly isn’t its adherence to hexagonal architecture or to those principles of microservices which it supports. However, it is easy to identify the problem. The EAI product provides a heavy-duty proprietary toolset and a closed-world run-time environment, licensed in a fashion that makes it too expensive for most organisations to distribute widely across the enterprise. It is, instead, almost always distributed over a small number of centrally managed servers. It scales, in part, by implementing complex internal algorithms to squeeze every last ounce of capability out of the available hardware.

If we look at the ‘inside’ of the EAI product, we see further issues. The services must be built using the tooling supplied by the product. This involves the use of a proprietary script language with the ability to invoke code hosted in one of the major runtimes, limiting the languages and tools developers can use to implement custom logic. Communication between services is handled via the same fabric used to route messages to and from the ports. This always involves the use of asynchronous queues and cannot be reasonably claimed to be ‘lightweight’ in the microservices sense.

The EAI product, then, violates several of the principles of microservices while, at the same time, upholding several others. It is, however, architecturally compatible with microservices at the level of hexagonal architecture. This indicates that we could, if we so wish, use microservices in conjunction with this EAI tool. Why might we want to do that?

This question, I think, gets to the heart of the matter when looking at integration from a microservices perspective. The previous diagram illustrated the use of microservices in the application domain, only. We have seen that hexagonal architecture admits a very wide range of adapter implementations. From a microservices perspective, it is natural to advocate implementing adapters as microservices. In this case, our architecture looks like this.

[Diagram: the hexagonal model with adapters at the ports implemented as microservices]

It is precisely at this stage that, as an EAI developer, I begin to have doubts. Reservations naturally suggest themselves to me. Microservices should be simple and should focus on a single task. However, my experience of integration is that adapters must often exhibit considerable complexity, not only in terms of custom mediation logic, which must be as complex as the use case demands with respect to a given channel, but also at the level of transactional control, batching, correlation and even thread pooling and parallelism. If I adopt a microservices approach, do I try to untangle this complexity by separating out different concerns into different services? If so, how do I wrap all those services into a single transaction? How do I ensure recoverability if any service ‘breaks’? How do I correlate across those services?

It is because these issues are common in the work I do that I invested time and effort a decade ago in becoming a ‘high priest’ of integration focused on a specific product. That product gives me a toolbox containing all I need to meet the most complex mediation requirements I encounter. I don’t need to write complex code myself. I can depend on a mature codebase from a major vendor. So, if I am to adopt microservice adapters, I want to be sure that these microservices, together, provide me with all I need to continue to address the complex issues in integration. If I am not given the tools in the microservices world, I will look elsewhere. I care much more about getting the job done than I do about adherence to an abstract set of principles. Getting the job done is what pays the mortgage.

I can summarise this simply. When I was a student, more than thirty years ago, I studied the science of photography to degree level (measuring the detective quantum efficiency of photographic emulsions, not learning how to take arty pics). In my first year, we had a pedantic lecturer who endlessly repeated the same few pearls of wisdom. We laughed at him behind his back. One pearl was this. “Your perspective depends solely on your viewpoint”.

I’m older and wiser, and I’ve often had reason to recall his words with gratitude. They illustrate a fundamental truth which we do well to remember every time we are caught in any kind of dispute. Here, then, is what I see as the substantive difference between the traditional EAI perspective and the modern microservices perspective.

[Diagram: Alice (EAI), Charlie (ESB) and Bob (microservices) positioned around the hexagonal model, each with a different viewpoint on mediation]

The tension ultimately reduces to a debate about how to do mediation between the inside and the outside of the hexagonal world. On the microservices side of the argument, Bob, with his inside-out viewpoint, emphasises the elimination of any unnecessary complexity. He wants to ensure that adapters remain lightweight and can be deployed and versioned as independently as any other microservices. Charlie, the ESB guy, is positioned closest to the inner mediation boundary. He considers he has been promoting the microservices cause for the last decade, albeit with a containership model that Bob does not appreciate at all. He can see the need for some complexity in the adapter layer but also gets Bob’s point about lightweight mediation. Then there is Alice. From her outside-in viewpoint, she is thinking about the complex mediation she does every day. The idea of having to depend on lightweight mediation services, or worse still, write her own mediation framework from the ground up, makes her blood run cold.

The next article in this series discusses the current iPaaS market and compares and contrasts common mediation approaches for iPaaS and EAI.  See http://wblo.gs/frn


[1] To the best of my knowledge, this terminology was introduced by David Chappell in his excellent book, ‘Enterprise Service Bus’, published in 2004 by O’Reilly. This is one of the seminal works on ESB and shaped my thinking more than any other book on the subject. NB., the author is the David Chappell who worked for Progress Software and Oracle, not the well-known speaker and consultant on the .NET circuit of the same name. See http://bit.ly/1AlILM7


Sunday, December 7, 2014 #

This is the first in a series of articles on the emerging world of microservices PaaS and its relevance to Enterprise Application Integration. The other articles are:

2) Hexagonal Architecture – The Great Reconciler?: Describes hexagonal architecture, its correspondence to traditional multi-tier layered architecture, the Enterprise Service Bus and ‘traditional’ EAI/EDI tools, and the inspiration it provides for microservice thinking.

3) Mediation Models for iPaaS and EAI - This article discusses the current iPaaS market and compares and contrasts common mediation approaches for iPaaS and EAI. I’ll consider briefly how this relates to microservices PaaS.

Across the IT industry, people are re-thinking their approach to application integration and electronic data interchange in the context of ubiquitous scale-out cloud platforms, the onward march of service-orientation, the success of the RESTful philosophy, the adoption of agile methodologies and the rise of devops and continuous deployment. In this rapidly evolving world, the microservices concept has gained attention as an architectural and design model that promotes best practice in building modern solutions.

In this post, I will explain the two worlds of integration and microservices and start to explore how they relate to each other. I will start with a description of integration. It is important to understand the central concerns that integration addresses before moving on to look at the emerging microservices world. Without this, we won’t be able to relate the two to each other appropriately. I will move on to describe the principles that underpin the microservices approach and discuss the notion of 'monolithic' applications. I will then discuss the relationship between microservices and integration, especially in the context of emerging microservice PaaS platforms.

 

What is Integration?

I’m specifically interested in the concepts of enterprise application integration (EAI) and electronic data interchange (EDI). These related solution categories are widely understood. Together, they constitute a few square inches in the rich tapestry of software development. However, for many organisations, they play a central role in the effective exploitation of information technology.

EAI is necessary in scenarios where the following statements are true:

“We need to protect our existing investment in different applications, systems and services, including ‘heritage’ systems, line of business applications from different vendors and custom-built solutions.”

“We have business processes and activities that depend on, or touch, multiple systems and applications. We need IT to automate and streamline those processes and activities as much as possible by integrating all those systems into our processes through robust forms of data interchange.”

“We have to accept and manage change in the most cost-effective and efficient way we can. This can include new investments and acquisitions, new business processes, the evolution or replacement of existing systems and other issues. Much of this change is beyond the control of IT and is either dictated by the business or forced on us by partners or software vendors.”

EAI is characterised by techniques and approaches which we apply when we need to integrate applications and systems that were never designed or envisaged to interoperate with each other. This is more than a matter of protocol. Different systems often model business data and process in radically different ways. The art of the EAI developer, rather like that of a diplomat, is to build lines of communication that honour the distinctive viewpoint of each participant while fostering a shared and coherent understanding through negotiation and mediation.

As we will see, the microservice community advocates the use of lightweight communication based on standardised interfaces. This principle is typically satisfied through the use of RESTful interfaces. From an EAI perspective, however, this principle is not particularly interesting. Integration handles the protocols and interfaces dictated to it, whatever they may be, and is primarily concerned with mediation. To continue the analogy of diplomacy, it’s rather like encouraging everyone to communicate face-to-face via Skype video. In theory, this may be convenient and efficient, but it is of little use if each participant speaks in a different language[1] and is only interested in communicating their own concerns from their own perspective. The diplomat’s job is to mediate between the participants to enable meaningful interchange. In any case, some participants may not have the necessary bandwidth available to use the technology. Older participants may not be comfortable using Skype and may refuse to communicate this way.

In EAI, the message is king. The most fundamental task of the EAI developer is to mediate the interchange of messages by whatever means necessary. If it is possible to standardise the protocols by which this is done, then that is valuable. However, such standardisation is secondary to the central aim of ensuring robust mediation of messages. There is a strong correlation between the intrinsic value of individual messages to the business and the use of EAI. The value of a message may be financial (orders, invoices, etc.), reputational or bound up with efficiency and cost savings. The more each individual message is worth, the more likely it is that EAI approaches are required. Mediation ensures that each valuable message is delivered to its recipient in a coherent form and manner, or, if not, that appropriate action is taken. Any other concerns are secondary to this aim.

Messages are the primitives of integration. At their simplest, they are opaque streams of bytes that are communicated between participants. However, most EAI frameworks provide abstractions that support a distinction between content and metadata. These abstractions may be elaborated as required. For example, content may be treated as immutable while metadata remains mutable. Content may be broken down into additional parts, and each part may be provided with part-specific metadata. Message metadata is typically represented as name-value pairs. It may contain unique identifiers, routing information, quality-of-service data and so forth.
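As a rough sketch of the kind of abstraction I have in mind (the Message and MessagePart types are invented for illustration and are not the API of any particular framework), a message might expose immutable content, mutable name-value metadata and an optional collection of parts:

using System;
using System.Collections.Generic;

// A minimal message abstraction of the kind described above; illustrative only.
public class Message
{
    private readonly byte[] content;                        // immutable payload
    public IDictionary<string, object> Metadata { get; }    // mutable name-value metadata
    public IList<MessagePart> Parts { get; }                // optional additional parts

    public Message(byte[] content)
    {
        this.content = (byte[])content.Clone();
        Metadata = new Dictionary<string, object>
        {
            ["MessageId"] = Guid.NewGuid().ToString()
        };
        Parts = new List<MessagePart>();
    }

    public byte[] GetContent()
    {
        // Hand out a copy so the original content remains immutable.
        return (byte[])content.Clone();
    }
}

public class MessagePart
{
    public byte[] Content { get; set; }
    public IDictionary<string, object> Metadata { get; } = new Dictionary<string, object>();
}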

The centrality of messages in EAI cannot be overstated. As an abstraction, messages are decoupled from other abstractions such as endpoints and processes. This means that they exist independently of any notion of a service contract. Service-orientated approaches cannot be mandated in EAI, however desirable they may be. Perhaps more importantly, messages can exist independently of any specific application, system or service. Messages possess the following characteristics:

Extensibility: Messages can be arbitrarily enriched and extended with additional metadata. Metadata supports correlation protocols, identity, routing information and any additional semantics with no requirement to change the message content or format.

Malleability: Message content can be amended or transformed as required as it is passed from source to recipient. In transforming a message, we often create a new message to hold the transformed content. Metadata can be used to record the association between the old and new versions of the message.

Routability: Static or dynamic routing decisions can be made at run-time to control the destination of messages, the protocols used to communicate those messages and other concerns. Such decisions are generally made by evaluating metadata against routing rules. This approach supports the flexibility needed in EAI to implement complex interchange and correlation patterns and to adapt rapidly to changes in business requirements. A simple sketch of metadata-driven routing follows this list of characteristics.

Traceability: Messages are traceable as they pass between different services and applications and undergo multiple transformations. As well as recording the progress of individual messages, tracing can record the relationships between those messages. This includes correlated messages (e.g., response messages correlated to request messages), messages that logically represent a given business activity, batches and sequences of messages, etc. Tracing provides insight, supports troubleshooting and enables the gathering of metrics.

Recoverability: Messages can easily be persisted in highly available stores so that they are recoverable in the event of a failure. When individual messages are accorded significant worth to the business, this provides assurance that important data is not lost and will be processed appropriately. It supports high levels of service and is central to building robust automation of business processes.
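Here is the metadata-driven routing sketch promised above. It builds on the hypothetical Message class from the earlier sketch; the RoutingRule and MetadataRouter names are equally invented. Routing rules are predicates over metadata; the router selects the first matching destination and falls back to a dead-letter destination.

using System;
using System.Collections.Generic;
using System.Linq;

// A routing rule is a predicate over message metadata plus a destination; illustrative only.
public class RoutingRule
{
    public Func<IDictionary<string, object>, bool> Predicate { get; set; }
    public string Destination { get; set; }
}

public class MetadataRouter
{
    private readonly List<RoutingRule> rules = new List<RoutingRule>();

    public void AddRule(Func<IDictionary<string, object>, bool> predicate, string destination)
    {
        rules.Add(new RoutingRule { Predicate = predicate, Destination = destination });
    }

    // Content is never inspected here; only metadata drives the decision.
    public string Route(Message message)
    {
        var match = rules.FirstOrDefault(r => r.Predicate(message.Metadata));
        return match != null ? match.Destination : "deadletter";
    }
}

// Example rules: high-value orders go to a priority channel, everything else to the default.
// var router = new MetadataRouter();
// router.AddRule(m => m.ContainsKey("OrderValue") && (decimal)m["OrderValue"] > 10000m, "priority-orders");
// router.AddRule(m => true, "standard-orders");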

In EAI, the focus is on the applications, systems and services that organisations invest in and rely on. For EDI, the focus is on the exchange of data with external organisations, including trading partners, suppliers and customers. EDI shares a lot in common with EAI, but is about the integration of different businesses and organisations, rather than the integration of in-house applications. Like EAI, one of the main drivers is the need to manage change effectively and efficiently, even though that change is often beyond the control of IT.

 

Microservices

Now we have described the world of integration, we need to explore the concept of microservices. This is best understood as a set of principles for service-orientation. By ‘service-orientation’ I mean any approach to software design that conceives of systems and applications as a collaboration between discrete service components. For some people, the term has strong connotations with complex proprietary SOA frameworks and tooling. I use the term only in a literal and broad sense.

There is no one authoritative definition of the term ‘microservice’. However, a reasonable level of consensus has emerged. We can summarise the principles as follows:

Decompose applications into microservices: Microservices apply service-orientation within the boundaries of individual applications. Solutions are created from the ground up as a collaboration of fine-grained services, rather than monolithic applications with a front-end layer of service interfaces.

Let each microservice do one thing, and do it well: The ‘micro’ in microservices is too often equated with low SLOC counts. While SLOC can act as a useful heuristic for detecting microservice code smells, this misses the point. A microservice is focused on handling a small subset of well-defined and clearly delineated application concerns. Ideally, a microservice will handle just one concern. This focus makes it much easier to understand, stabilise and evolve the behaviour of individual microservices.

Organise microservices around business capabilities: Multi-tier architectures historically divide and group services by functionality. Services reside in the presentation tier, the business tier or the data tier. If we think of this as horizontal partitioning, then the microservices approach emphasises vertical partitioning. Microservices are grouped and partitioned according to the products and services offered by the organisation. This fosters cross-functional development teams that adopt product-centric attitudes aligned to business capabilities. It de-emphasises the boundaries of individual applications and promotes the fluid composition of services to deliver value to the business.

Version, deploy and host microservices independently: Microservices should be as decoupled and cohesive as possible. This minimises the impact of change to any individual microservice. Microservices can evolve independently at the hands of different individuals or small teams. They can be written in different languages and deployed to different machines, processes and runtime environments at different times using different approaches. They can be patched separately and retired gracefully. They can be scaled effectively, chiefly through the use of horizontal scaling approaches.

Use lightweight communication between microservices: Where possible, implement simple interchange through standardised interfaces. The general consensus is that REST and JSON are preferred approaches, although they are by no means mandated. Avoid complex protocols and centralised communication layers. Favour choreography over orchestration and convention over configuration. Implement lightweight design patterns such as API Gateways to act as intermediaries between microservices and clients. Design each microservice for failure using automatic retry, fault isolation, graceful degradation and fail-fast approaches. A simple retry sketch follows this list of principles.

Avoid centralised governance and management of microservices: Use contract-first development approaches, but don’t enforce centralised source code management, specific languages or other restrictions across different microservice development teams. Don’t depend on centralised discovery e.g., via service directories. Don’t enforce centralised data management or configuration, but instead let each microservice manage its own data and configuration in its own way.
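As promised above, here is a simple retry sketch illustrating the ‘design for failure’ point. It is hand-rolled purely for illustration; in practice a resilience library such as Polly would normally be used, and the endpoint, attempt count and timeout values are arbitrary assumptions. Each attempt fails fast via a short timeout, and the call is retried a bounded number of times with a brief back-off.

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class ResilientCall
{
    public static async Task<string> GetWithRetryAsync(HttpClient client, string url,
        int maxAttempts = 3, int timeoutMilliseconds = 500)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                // Fail fast: abandon any single attempt quickly rather than hanging.
                using (var cts = new CancellationTokenSource(timeoutMilliseconds))
                {
                    var response = await client.GetAsync(url, cts.Token);
                    response.EnsureSuccessStatusCode();
                    return await response.Content.ReadAsStringAsync();
                }
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Back off briefly before the next attempt; the final failure propagates.
                await Task.Delay(100 * attempt);
            }
        }
    }
}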

 

On Monoliths

The most common rationale for microservices contrasts them with the design and implementation of monolithic applications. At one extreme, I’ve seen monolithic applications defined as highly coupled solutions deployed as a single package to a single machine and run in a single process. Of course, few modern enterprise-level applications are designed this way. Multi-tier architectures, component-based design and service orientation, together with the widespread adoption of modern design patterns, have ensured that most enterprise-level development has long moved on from the creation of such monstrosities.

A better definition of the term ‘monolith’ focuses on the need to deploy, version and patch entire applications as a whole, regardless of how they are physically structured. From this perspective, the problem is cast in terms of the impact of fine-grained changes on the application as a whole. A change in one component may require a full re-deployment of the entire application.

This type of monolithicity has a detrimental effect on the entire application lifecycle. Each developer must constantly re-build and re-deploy the entire application on their desktop just to test a small change to a single component. The build manager must maintain complex scripts and oversee slow processes that repeatedly re-construct the entire application from numerous artefacts, each worked on by different developers. The testers are restricted to black-box integration testing of large and complex combinations of components. Deployment is a fraught and costly battle to coax the entire ensemble to function in alien environments. Every patch and upgrade requires the entire application to be taken out of commission for a period, compromising the capability of the business to function. Significant change becomes infeasible just in terms of regression testing. To cap it all, once the architects and developers have moved on, no one is left with sufficient understanding of how the application functions as a whole.

Microservices replace the classic notion of the application, defined by tiers of functionality, with the concept of loosely-coupled collaborations of microservices grouped according to business capability. They facilitate the continuous evolution and deployment of solutions, allowing small groups of developers to work on constrained problem domains while minimising the need to enforce top-down governance on their choice of tools, languages and frameworks. Microservices support the agile, high velocity development of product-centric solutions using continuous deployment techniques. They support elastic scalability. They help to minimise technical debt.

 

Novelty

An obvious objection could be that microservices lack novelty, by which I mean that they do not possess sufficient distinction from pre-existing and generally received architectural concepts to be of interest. Certainly each of the principles outlined above has a long history pre-dating the emergence of the term ‘microservice’. Such objections arise naturally when microservices are characterised as a remedy to mainstream notions of service-orientation. While it is true that some examples of service-orientated architecture prove vastly over-complicated for the problems they address, that is simply a matter of poor application of architectural principles. Any attempt to claim that ‘service-orientation is a bad thing’ and cast microservices as the solution misses the point entirely and quickly descends into caricature and absurdity.

In reality, microservice principles are a service-orientated response to the world of agility, devops and continuous deployment. As such, their novelty emerges from their ability to mould and fashion the direction of service-orientated development in the context of these concerns. They also represent the desire to ‘democratise’ software development, allowing developers from the widest circle to collaborate without unnecessary restriction.

A number of articles and presentations contrast the microservice approach to the use of proprietary integration and service bus frameworks. While some of the arguments are spurious and ill-informed, the underlying intention is good. It is the desire to avoid closed worlds with their ‘high-priesthoods’ in favour of a more open world in which mainstream development approaches can be used to solve problems by any suitably experienced developer.

I should declare my own position here. I have spent the last decade or more as a ‘high-priest’ of integration with a focus on a proprietary framework and set of tools. However, with the advent of cloud computing, I increasingly inhabit the ‘democratised’ world. I have, as it were, a foot in both camps. Indeed, I spend roughly equal time moving between these two camps. I see worth in both, and I believe they are more closely aligned than some imagine. However, I also recognise that the flow of history is clearly towards democratisation.

 

When integration and microservices meet

Now we have defined the worlds of integration and microservices, we need to ask some obvious questions. Where and how do these two worlds meet? Do they overlap or not? Are they complementary or do they contradict each other?

There is plenty of scope for disagreement here. We can imagine an argument between two protagonists. Alice is an EAI veteran. Bob is a microservice evangelist.

Alice kicks things off by asserting that microservices are an irrelevance to her. Integration exists to handle the interchange between any applications, systems and services, regardless of their architecture. She is happy to integrate anything, including monolithic applications, microservices, SOA services and all systems of any kind, shape or size.

Bob, piqued by her dismissive attitude, retorts that if people had concentrated on creating microservices in the first place, rather than monolithic applications, there would be no need to integrate applications at all. It is Alice’s skills that would be irrelevant.

Alice, rising to the bait, responds loftily that Bob’s naïve idealism has no relevance in the real world and that she doesn’t expect to be looking for a new job anytime soon.

Bob, irritated by Alice’s tone, suggests that the very tools, approaches and architectures that Alice uses are monolithic, promote monolithic solutions and cause many of the problems microservices are there to solve. She is part of the problem, not part of the solution.

Now seriously annoyed, Alice claims that microservices represent a simplistic and childish approach that completely ignores the inherent complexity she faces every day. The tools she uses were selected because they address this complexity directly. Bob’s way of thinking, she claims, is born of lack of experience and a hopeless idealism. It can only promote chaos and invite failure.

I’m sure you agree this has to stop! We will leave Alice and Bob to their own devices, well out of earshot. For our part, wisdom dictates a cool-headed, reasoned response. We need to think through the issues carefully and honestly, making sure we take time to understand different perspectives and to properly analyse the root causes of the problems we face. I may be a high priest of integration, but I’m as keen as anyone to understand what works well, what works poorly and what is completely broken. Integration can certainly be a demanding discipline and the approaches we use sometimes leave much to be desired. Can the world of microservices inform us and help us do better?

There is a clear delineation of domains of interest that characterise the argument. My world broadly splits into two such domains. The first is the domain of business applications, systems and services. This is located firmly on the other side of the fence to where I am. I have no control over the applications that live in that domain. My job is to accept their existence, trust that the business has good reasons to invest in them and work out how to integrate them. My interests are different to, but do not conflict with, those of the developers who build and maintain those applications.

The second domain is that of integration. This is my natural home and here I have some control over my environment. I can select, or at least influence, the tools and frameworks I believe fit the problem domain, and I can design and implement integration solutions.

[Diagram: the domain of business applications, systems and services alongside the integration domain]

Clearly, microservice thinking applies to the first domain. It does so without conflict with the Integration domain. However, microservices are unlikely to dominate the first domain any time soon. Most organisations will continue to invest in on-premises and SaaS Line-of-Business applications, enterprise level CRM, CMS and ERP systems and data management systems. They will apply the principle of ‘buy, before build’, and hence, even if the whole world moves to RESTful interchange at the API level, their services and applications will still be silos of functionality and data in need of integration.

Even in scenarios where organisations invest in writing custom applications and services, it is highly unlikely that they will be willing to re-write their entire custom estate around the principles of microservices. It is far more likely that organisations will adopt microservice approaches over an extended period, using evolutionary approaches to tackle new problems. They will only re-write existing applications and services as microservices when there is a compelling commercial reason to do so.

 

The rise of µPaaS

We are seeing the first stirrings of interest (this was written in late 2014) in merging the principles of microservices with the provision of Platform-as-a-Service in cloud environments. The concept is to build out public PaaS offerings through the provision of microservice marketplaces. In this emerging world, developers will create solutions by selecting pre-built microservices and combining and blending them with additional custom microservices. Public cloud platforms will support commercial approaches to monetise microservices. The PaaS platform itself, however, will leave developers free to exploit this marketplace, or not, as they choose. They can combine its offerings with custom-built and free/open-source microservices as required.

I cannot resist the temptation to call this new world ‘microPaaS’, or µPaaS. Its emergence is the main incentive to write this article. As soon as the µPaaS concept began to emerge, two key requirements came into sharp focus. The first is the need for better containership at the OS level. PaaS models must, of necessity, provide some kind of OS container for packaged code. This may be a virtual machine instance with automated provisioning. However, this locks developers into a single OS and any runtime environments that happen to target that OS. This violates the intention to allow developers to select the most appropriate tools and technologies for each individual microservice. In addition, microservices demand the freedom to deploy and host each microservice independently. Using an entire virtual machine as a container, possibly for a single microservice, is a top-heavy approach. Hence, a µPaaS needs lightweight, OS-agnostic containership. Efforts are currently focused on the evolution of Docker which, today, is a Linux-only technology, but tomorrow will emerge on other OS platforms, and specifically on future versions of Microsoft Windows.

The second issue is that of integration. In the microservices world, the vision often extends as far as different development teams collaborating within a larger organisation. However, on a public cloud platform, everyone gets to play. This is a problem. Microservices will be provided by entirely different teams and organisations. We can expect that, following the open source model, entire communities will emerge around mini-ecosystems of microservices that share common data representations, transactional boundaries and conventions. However, across the wider ecosystem as a whole, there will still be a need to provide mediation, transformation and co-ordination.

In the µPaaS world, the ideal is to provide integration capabilities as microservices themselves. The danger here lies in the constant re-invention of wheels, solving the same integration problems again and again. This suggests the need to provide first-class generic integration microservices as a fundamental part of the ecosystem. This, however, highlights a further risk. Generic integration microservices must cater for the complex and arcane issues that can arise when composing solutions from incompatible parts. They cannot afford to ignore this complexity. If they do so, they will be an endless source of frustration and will lower the perceived value of the ecosystem as a whole. Instead, they must implement first-class abstractions over the complexities of integration in order to avoid compromising the ‘democratised’ nature of a µPaaS. They must be easy for any developer to use. No high priests of integration allowed!

The need for integration capabilities in µPaaS is driven by another consideration. A µPaaS platform will be used to build new solutions. However, there will still be a need to integrate these with existing applications and services. This, of course, includes integration with on-premises applications as part of hybrid architectures. This integration can, of course, be achieved using existing EAI/ESB tools and products. However, µPaaS offers the chance to re-think the approach to EAI from a microservices perspective. Again, a driving force for this is the democratisation of EAI, bringing down the cost and effort required to integrate applications. Done well, a microservice approach to integration will result in solutions that are easier to maintain and evolve over time, which scale easily, but which provide the robust message handling capabilities at the heart of integration.

One other reason for providing integration services in µPaaS is to support EDI workloads. The cloud provides an obvious location to host EDI solutions, and we have already seen the emergence of iPaaS support for EDIFACT/X12 and AS2, together with trading partner management functionality. Expect to see this capability evolve over time.


The future landscape

Organisations that have made significant investment in EAI, EDI and service bus technologies are unlikely to replace those technologies with microservices in the near future. These tools will continue to play a critical role in enabling organisations to integrate their systems and applications effectively. Until we see µPaaS providing equivalent functionality, they will retain their role as the appropriate tools, frameworks and products for enabling robust, enterprise-level integration of mission-critical workloads.

Microservices apply service-orientated thinking inside application boundaries and serve to break down those boundaries. Contrast this with the application of service-orientation at the application boundary itself. Ten years ago, it was still rare for commercial applications to provide a web service API. Now, it is almost unthinkable for any modern business application to omit such features. In turn, this has allowed EAI tools to evolve more closely towards the concepts of the enterprise service bus. Likewise, many ESB products add value by incorporating integration technologies and tools.

Many of the concerns addressed by existing EAI tools are analogous to those of the microservices world. EAI emphasises strong decoupling of applications and services, ensuring those investments can vary and evolve over time, or even be removed or replaced with minimal impact on other applications and services. Within the integration domain itself, most modern EAI and ESB products implement integration components as services. They generally allow those services to be hosted independently and to be scaled horizontally, although cost issues related to licensing and hardware can place limits on this. Integration services are often fine-grained, supporting a constrained set of behaviours for mediation or transformation. They evolved before the notion of microservices was conceived, and they do not generally adhere to all the microservices principles. However, they share a common heritage with microservices and share similar goals.

One issue that muddies the waters in EAI is the hosting of business logic within the integration domain. This can be a controversial matter. Some argue that business logic should be hosted within separate applications, systems and services. This may be driven by the centrality of ERP systems within organisations, or the need to ensure that different parts of the organisation take responsibility for automating the activities in which they are engaged. In this case, the integration layer is viewed simply as a communication hub that mediates messages between these systems. Others argue that business logic services naturally belong within the integration layer. This approach emphasises the need to decouple automated business processes from supporting applications and systems in order to allow the organisation to rip and replace those systems over time with minimal impact on custom logic.

In my experience, the driving forces that dictate the best approach have more to do with organisational culture and longer-term IT strategy than with any architectural principle. Part of the art of integration is to intelligently predict how business requirements and IT landscapes are likely to evolve over time and to design solutions accordingly. This explains why, in many scenarios, the investment in EAI and ESB products results in the hosting of significant business logic within the integration domain.


What then, of the future? Microservices and µPaaS will undoubtedly work their magic in the enterprise space. However, they won’t be used exclusively. Integration will, in part, move to the µPaaS world. µPaaS itself will predominantly favour the public cloud, but will also be available within private cloud implementations. Today’s EAI and ESB tools will evolve along the lines of cloud enablement and will continue for the foreseeable future to play an important role within the enterprise. Where business logic today is hosted in the integration domain, we can expect a move towards the use of microservices. Integration itself will be ‘democratised’, at least to an extent. This will reduce costs and timescales, and help organisations meet the challenges of the future.


 

In the next post, I will lay a foundation for understanding more precisely the correspondence between application integration and microservices.  See http://wblo.gs/foR.


[1] Notwithstanding the reported advent of automated real-time translation capabilities in Skype. No analogy is perfect!


Thursday, July 31, 2014 #

What is truly offensive about Richard Dawkins' comments on date rape and paedophilia is his air of intellectual superiority founded on hopeless ignorance of basic logic.  He believes the following to be an invalid syllogism:

X is Bad

Y is Worse

Therefore X is not Bad

A syllogism can be valid or invalid, but it remains a syllogism.  As Aristotle might have put it…

All syllogisms have a middle term that appears in both the major and minor premises

In Dawkins’ example, the middle term does not appear in the minor premise

Therefore Dawkins’ example is not a syllogism

He might also have said…

For all syllogisms, the minor term is the subject of the conclusion

In Dawkins’ example the minor term does not appear in the conclusion

Therefore Dawkins’ example is not a syllogism

Modus Baroco, x2

Using fancy technical terms to try to convince others how clever you are only works if you actually know what those terms mean.


Thursday, July 3, 2014 #

From Google, this morning…” Charles, do you know Charles Young?” with a lovely picture of myself.

Nope, never heard of me. Google clearly has no idea who I am either.


Monday, June 2, 2014 #

I’ve recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server’s BAM event observation library.  In doing this, I noticed an issue with continuations.  Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly.   Careful inspection of my code confirmed that I was initializing and invoking the BAM interceptor correctly, so I was mystified.  However, I eventually found the problem.  It is a logical error in the BAM Interceptor code itself.

The BAM Interceptor provides a useful mechanism for implementing dynamic tracking.  It supports configurable ‘track points’.  These are grouped into named ‘locations’.  BAM uses the term ‘step’ as a synonym for ‘location’.   Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc.  Each step defines a collection of track points.

Understanding Steps

The BAM Interceptor provides an abstract model for handling configuration of steps.  It doesn’t, however, define any specific configuration mechanism (e.g., config files, SSO, etc.)  It is up to the developer to decide how to store, manage and retrieve configuration data.  At run time, this configuration is used to register track points which then drive the BAM Interceptor.

The full semantics of a step are not immediately clear from Microsoft’s documentation.  They represent a point in a business activity where BAM tracking occurs.  They are named locations in the code.  What is less obvious is that they always represent either the full tracking work for a given activity or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity.  The BAM Interceptor enforces this by throwing an error if no ‘start new’ or ‘continue’ track point is registered for a named location.

This constraint implies that each step must be marked with an ‘end activity’ track point.  One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as ‘ended’ in order to ensure the right housekeeping is done in the database.  If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state.  A step, therefore, always represents an entire unit of work for a given activity or continuation ID.  For activities with continuation, each unit of work is termed a ‘fragment’.
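To make the ‘end before continue’ rule concrete, here is a minimal sketch using the BAM event observation API directly. The activity name, data item names and continuation token are invented, and the sketch assumes a matching BAM activity definition has already been deployed. The first fragment enables a continuation and then ends its view of the activity; the continuation fragment addresses the activity through the continuation token and ends it for real.

using System;
using Microsoft.BizTalk.Bam.EventObservation;

public static class BamContinuationExample
{
    public static void TrackAcrossFragments(string bamConnectionString, string activityId)
    {
        // Flush after every event; illustrative only.
        var es = new DirectEventStream(bamConnectionString, 1);
        string continuationToken = "CONT_" + activityId;

        // First fragment: start the activity, enable a continuation and end this
        // fragment's view of the activity so the BAM housekeeping is done correctly.
        es.BeginActivity("PurchaseOrder", activityId);
        es.UpdateActivity("PurchaseOrder", activityId, "ReceivedTime", DateTime.UtcNow);
        es.EnableContinuation("PurchaseOrder", activityId, continuationToken);
        es.EndActivity("PurchaseOrder", activityId);

        // Continuation fragment (often in another process): address the activity via
        // the continuation token rather than re-starting the original ID.
        es.UpdateActivity("PurchaseOrder", continuationToken, "ShippedTime", DateTime.UtcNow);
        es.EndActivity("PurchaseOrder", continuationToken);
    }
}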

Instance and Fragment State

Internally, the BAM Interceptor maintains state data at two levels.  First, it represents the overall state of the activity using a ‘trace instance’ token.  This token contains the name and ID of the activity together with a couple of state flags.  The second level of state represents a ‘trace fragment’.   As we have seen, a fragment of an activity corresponds directly to the notion of a ‘step’.  It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions.

When handling continuations, the BAM Interceptor differentiates between ‘root’ fragments and other fragments.  Very simply, a root fragment represents the start of an activity.  Other fragments represent continuations.  This is where the logic breaks down.  The BAM Interceptor loses state integrity for root fragments when continuations are defined.

Initialization

Microsoft’s BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data.  The process starts by populating an Activity Interceptor Configuration object with an array of track points.  These can belong to different steps (aka ‘locations’) and can be registered in any order.  Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor.  The BAM Interceptor sets up a hash table of array lists.  Each step is represented by an array list, and each array list contains an ordered set of track points. 

The BAM Interceptor represents track points as ‘executable’ components.  When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn.  Each track point retrieves any required data using a call back mechanism and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.).  The serialised trace fragment is then handed off to a BAM event stream (buffered or direct) which takes the appropriate action.

The Root of the Problem

The logic breaks down in the Activity Interceptor Configuration.  Each Activity Interceptor Configuration is initialised with an instance of a ‘trace instance’ token.  This provides the basic metadata for the activity as a whole.  It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed.  This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration.

Each trace instance token is automatically initialised to represent a root fragment.  However, if you subsequently register a ‘continuation’ step with the Activity Interceptor Configuration, the ‘root’ flag is set to false at the point the ‘continue’ track point is registered for that step.   If you use a ‘reflector’ tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.   

This makes no sense.  The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration.  The Activity Interceptor Configuration is designed to hold track points for multiple steps.  The ‘root’ flag is clearly meant to be initialised to ‘true’ for the preliminary root fragment and then subsequently set to false at the point that a continuation step is processed.  Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to ‘false’ before the root fragment is processed.  This is clearly an error in logic.

The problem causes havoc when the BAM Interceptor is used with continuation.  Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes!   This has nothing to do with the root and the continuation being in the same process.  It is due to a fundamental mistake of setting the ‘root’ flag to false for a continuation before the root fragment is processed.

The Workaround

Fortunately, it is easy to work around the bug.  The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.  This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group.  In my case, the first approach was required.  Here is what the amended code looks like:

// Because of a logic error in Microsoft's code, a separate ActivityInterceptorConfiguration must be used
// for each location. The following code extracts only those track points for a given step name (location).
var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints
                      where (string)tp.Location == bamStepName
                      select tp;
var bamActivityInterceptorConfig =
    new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);

foreach (var trackPoint in trackPointGroup)
{
    switch (trackPoint.Type)
    {
        case TrackPointType.Start:
            bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);
            break;

etc…

I’m using LINQ to filter a list of track points for those entries that correspond to a given step and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class.   As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.


Friday, December 6, 2013 #

We are now SolidSoft Reply. This morning, the company was acquired by Reply S.p.A. This is great news for us. We will continue to build the business under the SolidSoft name, brand and culture, but as part of a much larger team. Further information at http://www.reply.it/en/investors/financialnews/readd/%2c15230

Monday, April 29, 2013 #

Microsoft does not currently offer RHEL on subscription on the Windows Azure platform, and people have reported problems when trying to create and run their RHEL VMs.  So, does RHEL run on Azure?  Read on here.

 

http://solidsoft.azurewebsites.net/articles/posts/2013/does-red-hat-enterprise-linux-run-on-azure.aspx


Wednesday, January 30, 2013 #

I can't say I follow things that closely in the Windows Phone world, but I am aware of the upgrade to Windows Phone 7.8.  I've been looking forward to this for a while.  The improvements in the UI look nice, and when I get it, I can try to kid myself that my company phone, a Nokia Lumia 800, is really an 820.


It appears that the roll-out of 7.8 started today in the US for Nokia 900 users.  It can take a while for upgrades to make it to all the eligible phones.  So, imagine my delight when, this evening, my phone informed me an update was waiting for me!  Yeah!  I eagerly started the upgrade process and excitedly informed my bemused family that I was about to get Windows Phone 7.8.

Er...no.  After a successful upgrade, the phone re-booted...into Windows Phone 7.5.

I did a little digging.  It appears that the last upgrade, code-named Tango, has just arrived on my phone.  Tango was released on 20th July last year.  That's just over six months before I got the upgrade.

Oh dear me.

I'll report back on Windows Phone 7.8 in late summer...if I'm fortunate enough to get it by then :-(

Update
 
Apologies to Nokia who I stupidly railed at in an earlier version of this post.   Of course, they simply manufacture the handsets.  In my case, the carrier is Vodafone and they are the company responsible for pushing updates to my phone.    It seems that back in September Vodafone decided to cancel the global roll-out of Tango updates to some users due to a WiFi concern.  Although the press only reported this as affecting a single HTC model, maybe this is connected with my experience.
 
Update 2 (Friday)
 
A colleague has been busy forcing upgrades on his Nokia Lumia 800 (there is a little trick you can use, apparently, that involves switching off your PC WiFi connection at just the right moment while using Zune, and then re-connecting).  He forced an upgrade to Tango.  Now, he reports that he got two further updates and then a third.  The third appears to be Windows Phone 7.8 (which at the time of writing he is currently installing).  So, best guess is that Tango is being rolled out as a precursor to the 7.8 update.  I'll report back on this later.
 
Update 3

After many weeks of non-information and constant complaints on their forum, Vodafone did eventually roll out Windows Phone 7.8.  This was, in fact, a patched version of 7.8.  While I have no problems with Vodafone withdrawing the roll-out of 7.8 in order to fix a bug, I do have issues with the inordinate length of time it took them to issue the patched version and, more importantly, the total lack of information provided by the company to their customers.


Tuesday, January 22, 2013 #

The C# compiler is a pretty good thing, but it has limitations. One limitation that has given me a headache this evening is its inability to guard against cycles in structs.  As I learn to think and programme in a more functional style, I find that I am beginning to rely more and more on structs to pass data.  This is natural when programming in the functional style, but structs can be damned awkward blighters.

Here is a classic gotcha.  The following code won't compile, and in this case, the compiler does its job and tells you why with a nice CS0523 error:

    struct Struct1
    {
        public Struct2 AStruct2;
    }

    struct Struct2
    {
        public Struct1 AStruct1;
    }

Structs are value types and are automatically instantiated and initialized as stack objects.  If this code were compiled and run, Struct1 would be initialised with a Struct2 which would be initialised with a Struct1 which would be initialised with a Struct2, etc., etc.  We would blow the stack.

Well, actually, if the compiler didn't capture this error, we wouldn't get a stack overflow because at runtime the type loader would spot the problem and refuse to load the types.  I know this because the compiler does a really rather poor job of spotting cycles.

Consider the following.  You can use auto-properties, in which case the compiler generates backing fields in the background.  This does nothing to eliminate the problem.  However, it does hide the cycle from the compiler.  The following code will therefore compile!

    struct Struct1
    {
        public Struct2 AStruct2 { get; set; }
    }

    struct Struct2
    {
        public Struct1 AStruct1 { get; set; }
    }

At run-time it will blow up in your face with a 'Could not load type <T> from assembly' (80131522) error.  Very unpleasant.

ReSharper helps a little.  It can spot the issue with the auto-property code and highlight it, but the code still compiles.  However, ReSharper quickly runs out of steam, as well.   Here is a daft attempt to avoid the cycle using a nullable type:

    struct Struct1
    {
        public Struct2? Struct2 { get; set; }
    }

    struct Struct2
    {
        public Struct1 Struct1 { get; set; }
    }

Of course, this won't work (duh - so why did I try?).  System.Nullable<T> is, itself, a struct, so it does not solve the problem at all.  We have simply wrapped one struct in another.  However, the C# compiler can't see the problem, and neither can ReSharper.  The code will compile just fine.  At run-time it will again fail.

If you define generic members on your structs things can easily go awry.  I have a complex example of this, but it would take a lot of explaining as to why I wrote the code the way I did (believe me, I had reason to), so I'll leave it there.

By and large, I get on well with the C# compiler.  However, this is one area where there is clear room for improvement.

Update

Here's one way to solve the problem using a manually-implemented property:

    struct Struct1
    {
        private readonly Func<Struct2> aStruct2Func;

        public Struct1(Struct2 struct2)
        {
            this.aStruct2Func = () => struct2;
        }

        // Let's make this struct immutable!  It's good practice to do so
        // with structs, especially when writing code in the functional style.
        // NB., the private backing field is declared readonly, and we need a
        // constructor to initialize the struct field.  There are more optimal
        // approaches we could use, but this will perform OK in most cases,
        // and is quite elegant.
        public Struct2 AStruct2
        {
            get
            {
                return this.aStruct2Func();
            }
        }
    }

    struct Struct2
    {
        public Struct1 AStruct1 { get; set; }
    }
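
Why does this work?  The backing field is now a Func<Struct2>, which is a delegate and therefore a reference type, so Struct1's layout no longer contains a Struct2 inline and the cycle disappears as far as the type loader is concerned.  A trivial usage sketch (purely illustrative):

    var inner = new Struct2();
    var outer = new Struct1(inner);   // 'inner' is captured by the delegate

    // Both types now load correctly, and the wrapped value remains reachable.
    Struct2 roundTripped = outer.AStruct2;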


Tuesday, November 13, 2012 #

Forget about Steven Sinofsky's unexpected departure from Microsoft.   The real news from Redmond is that, after approximately 72 years of utter stagnation, the latest version of Visio has been upgraded to support UML 2.x!   It gets better.  It looks like it actually supports the latest version of UML (2.4.1). 

Unbelievable!


Sunday, July 8, 2012 #

At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible.

With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume.

Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far?

Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu.

What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance.

Windows 8 for productivity

The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive.

The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears.

Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop task bar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time, which makes complete sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than in previous versions of Windows.

Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.


Saturday, June 23, 2012 #

The term ‘cloud’ can sometimes obscure the obvious.  Today’s Microsoft Cloud Day conference in London provided a good example.  Scott Guthrie was halfway through what was an excellent keynote when he lost network connectivity.  This proved very disruptive to his presentation which centred on a series of demonstrations of the Azure platform in action.  Great efforts were made to find a solution, but no quick fix presented itself.  The venue’s IT facilities were dreadful – no WiFi, poor 3G reception (forget 4G…this is the UK) and, unbelievably, no-one on hand from the venue staff to help with infrastructure issues.  Eventually, after an unscheduled break, a solution was found and Scott managed to complete his demonstrations.  Further connectivity issues occurred during the day.

I can say that the cause was prosaic.  A member of the venue staff had interfered with a patch board and inadvertently disconnected Scott Guthrie’s machine from the network by pulling out a cable.

I need to state the obvious here.  If your PC is disconnected from the network it can’t communicate with other systems.  This could include a machine under someone’s desk, a mail server located down the hall, a server in the local data centre, an Internet search engine or even, heaven forbid, a role running on Azure.

Inadvertently disconnecting a PC from the network does not imply a fundamental problem with the cloud or any specific cloud platform.  Some of the tweeted comments I’ve seen today are analogous to suggesting that, if you accidentally unplug your microwave from the mains, this suggests some fundamental flaw with the electricity supply to your house.   This is poor reasoning, to say the least.

As far as the conference was concerned, the connectivity issue in the keynote, coupled with some later problems in a couple of presentations, served to exaggerate the perception of poor organisation.   Software problems encountered before the conference prevented the correct set-up of a smartphone app intended to convey agenda information to attendees.  Although some information was available via this app, the organisers decided to print out an agenda at the last moment.  Unfortunately, the agenda sheet did not convey enough information, and attendees were forced to approach conference staff through the day to clarify locations of the various presentations.

Despite these problems, the overwhelming feedback from conference attendees was very positive.  There was a real sense of excitement in the morning keynote.  For many, this was their first sight of new Azure features delivered in the ‘spring’ release.  The most common reaction I heard was amazement and appreciation that Azure’s new IaaS features deliver built-in template support for several flavours of Linux from day one.  This coupled with open source SDKs and several presentations on Azure’s support for Java, node.js, PHP, MongoDB and Hadoop served to communicate that the Azure platform is maturing quickly.  The new virtual network capabilities also surprised many attendees, and the much improved portal experience went down very well.

So, despite some very irritating and disruptive problems, the event served its purpose well, communicating the breadth and depth of the newly upgraded Azure platform.  I enjoyed the day very much.

 


Wednesday, March 28, 2012 #

For the last decade, I have repeatedly, in my inimitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed.

Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism.

a)  A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source.

b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying.

c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so. 

One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.  Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools.

All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET has grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries.

Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge.

Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.


Thursday, February 23, 2012 #

While coding a very simple orchestration in BizTalk Server 2010, I ran into the dreaded "cannot implicitly convert type 'System.Xml.XmlDocument' to '<message type>'" issue. I've seen this happen a few times over the years, and it has often mystified me.

My orchestration defines a message using a schema type. In a Message Assignment shape, I create the message as an XML Document and then assign the document to the message. I initially wrote the code to populate the XML Document with some dummy XML. At that stage, the orchestration compiled OK. Then I changed the code to populate the XML Document with the correct XML and...bang. I could no longer cast the XML Document to the message type.
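
For context, the assignment in question follows the usual BizTalk pattern of constructing a message from an XmlDocument inside a Message Assignment shape.  It was something along these lines (the message name, variable name and XML content here are illustrative placeholders, not the actual code):

    // XLANG/s expression in the Message Assignment shape.
    // xmlDoc is an orchestration variable of type System.Xml.XmlDocument;
    // ResponseMessage is the orchestration message defined against the schema type.
    xmlDoc = new System.Xml.XmlDocument();
    xmlDoc.LoadXml("<ns0:Response xmlns:ns0=\"http://example.org/response\"><Result>OK</Result></ns0:Response>");
    ResponseMessage = xmlDoc;

The final line is where the implicit conversion from System.Xml.XmlDocument to the message type happens, and it is that conversion which the compiler started to reject.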

I spent some time checking this through. I reverted back to the original code (with the dummy content), but the problem persisted. I restarted Visual Studio (several times), deleted the existing ‘bin’ and ‘obj’ folders and re-built, and tried anything else I could think of. No change.

It then occurred to me to think a little more carefully about exactly what I was doing at the point the code broke. My response message is very simple, and to create the XML content, I am therefore concatenating strings. To ensure I got the right XML, I used BizTalk to generate an example of the XML from the schema. The schema contains two root elements for the request and response messages. To generate the XML, I temporarily changed the 'Root Reference' property of the schema from 'default' to the element that represents the response message...

...and forgot to change the property back :-(

So, I changed the property back to 'default' and...

...success!

I experimented further and ascertained that if the 'Root Reference' property is set to anything other than 'default', the assignment code in my orchestration breaks. This is totally repeatable on the machine I am using. I spent some time looking at the code that BizTalk generates for schemas. When 'Root Reference' is set to 'default', BizTalk generates separate schema classes for each candidate root element, as well as a class for all root nodes. When set to a specific element, BizTalk outputs a single class, only. Apart from that, I couldn't see anything suspicious.

I can't find anything on the Internet about this, so would be interested if anyone else sees identical behaviour. The lesson, here, of course, is to avoid using schemas with multiple root elements. I have now refactored my schema into two new schemas.


Friday, December 16, 2011 #

It's always exciting when a new application you've worked on goes live. The last couple of weeks have seen the 'soft' launch of a new service offered by the UK government called 'Tell Us Once' (TUO). You can probably guess from the name what the service does. Currently, the service allows UK citizens to inform the government (as opposed to Register Officers, who must still be notified separately) just once of two types of 'change-of-circumstance' event; namely births and deaths. You can go, say, to your local authority contact centre, where an officer will work through a set of screens with you, collecting the information you wish to provide. Then, once the Submit button is clicked, that's it! With your consent, the correct data sets are parcelled up and distributed to wherever they need to go - central and local government departments, public sector agencies such as the DVLA, Identity and Passport Service, etc. No need to write 50 letters!

With my colleagues at SolidSoft, I'm really proud to have been involved with the team that designed and developed this new service. For the past few years, we worked originally on the prototypes and pilots (there was more than one!). Over the last eighteen months or so, we have been engaged in building the national system, and development work is on-going. It's been a journey! The idea is very simple, but as you can imagine, the realisation of that idea is rather more complex. Look for future enhancements to today's service, with the ability to report events on-line from the comfort of your own home and the possible extension of the system to cover additional event types in future.

Interaction with government has just got a whole lot better for UK citizens, and we helped make that happen. It's a pity that I don't intend to have any more children (four is enough!), and I really hope I don't have to report a death in the near future, but if I do, I'll be beating a path to the door of my local council's contact centre in order to 'tell them once'.

See http://www.guardian.co.uk/government-computing-network/2011/dec/16/tell-us-once-matt-briggs?utm_source=twitterfeed&utm_medium=twitter

http://www.guardian.co.uk/public-leaders-network/2011/nov/10/tell-us-once-birth-death


Friday, December 9, 2011 #

Yesterday, Microsoft announced the forthcoming release of BizTalk Server 2010 R2 on the BizTalk Server blog site.  This is advance notice, given that this new version will ship six months after the release of Windows 8, which is expected in the second half of next year.  On this basis, we can expect the new version of BizTalk Server to arrive in 2013.  Given the BizTalk team’s previous record of name changes, I wonder if this will eventually be released as BizTalk Server 2013.

Microsoft has been refreshingly open in recent months about their future plans for BizTalk Server.  This strategy has not been without its dangers, with some commentators refusing to accept Microsoft’s statements at face value.  However, yesterday’s announcement is entirely in line with everything Microsoft has been saying, both publicly and privately, for some time now.  Since the release of BizTalk Server 2004, Microsoft has made little change to the core technology with, of course, the exception of a much re-vamped application packaging approach in BizTalk Server 2006.  Instead, Microsoft chose to put investment into a number of important ‘satellite’ technologies such as EDIFACT/X12/AS2 support, RFID Server, etc.  Maintaining the stability of the core platform has allowed BizTalk Server to emerge as a mature and trusted workhorse in the enterprise integration space with widely available skills in the marketplace.

In terms of its major investments, Microsoft’s focus has long shifted to the cloud.  Microsoft has candidly communicated that, given this focus, they have no current plans to add major new technologies to the BizTalk platform.  In addition, they absolutely have no intention of re-engineering the core BizTalk platform.  In my direct experience in recent months, this last point plays very well to prospective and existing enterprise customers.  It takes us straight to the heart of what most organisations want from an integration server: a ‘known quantity’ with a good track record for dependability, scalability and stability and a significant pool of available technical resource.

The announcement of BizTalk Server 2010 R2 illustrates and illuminates Microsoft’s stated future strategy for the product.  An important part of Microsoft’s platform for enterprise computing, it will continue to be enhanced and extended.  It will match future developments in the Windows platform and new versions of Visual Studio.  However, we should not expect to see any dramatic new developments in the world of BizTalk Server.  Instead, the BizTalk platform will continue to steadily mature further as the world’s best-selling integration server.

One of the big messages of yesterday’s announcement is that BizTalk Server will increasingly support its emerging role in building hybrid solutions that encompass systems and services that reside both on-premises and in the cloud.  At SolidSoft , we are increasingly focused on the design and implementation of cloud-based and hybrid integration solutions.  Integration is challenging, and Azure is a young, fast evolving platform.  Microsoft has discussed at length their vision of Azure within a wider ‘hybrid’ context.  The availability of a tried and tested, mature, on-premises integration server is a vitally important enabler in building hybrid solutions.  Better than that, the announcement makes it clear that, as well as new support for the Azure service bus, BizTalk Server 2010 R2 licensing will be revised to open up new opportunities for hosting the server in the cloud.  This ties in with the push in Azure to embrace more fully the IaaS (infrastructure-as-a-service) model and, perhaps most importantly in the BizTalk space, to reduce or eliminate existing barriers between the on-premises and off-premises worlds.   BizTalk Server and Azure belong together.


Sunday, September 25, 2011 #

At last, I can announce that ‘BizTalk Server 2010 Unleashed’ has been published and is available through major booksellers in both printed and electronic form. The book is not a new edition of the old ‘BizTalk Server 2004 Unleashed’ book from several years ago, although Brian Loesgen, our fearless team leader, provided continuity with that title. Instead, this is entirely new content written by a team of six authors, including myself.
 
 
 
BizTalk Server is such a huge subject. It proved a challenge to decide on the content when we started our collaboration a couple of years back (yes, it really was that long ago!). We quickly decided that the book would principally target the BizTalk development community and that it would provide a solid and comprehensive introduction to the chief artefacts of BizTalk Server 2010 solutions – schemas, maps, orchestrations, pipelines and adapters. Much of this content was written by Jan Eliasen and forms part 1 (“The Basics”) of the book.
 
On the day my complimentary copies were delivered, I was working on the implementation of a pipeline component, and had an issue to do with exposing developer-friendly info in Visual Studio. I used this as a test-run of Jan’s content, and sure enough, discovered that he had clearly addressed the issue I had, including sample code. Jan’s contribution is succinct and to the point, but is also very comprehensive (he’s even documented things like creating custom pipeline templates!). I particularly appreciate the way he included plenty of guidance on testing individual artefacts.
 
My contribution to part 1 is a chapter on adapters (the ‘adapter chapter’ as we fondly called it). This explores each of the ‘native’ adapters and the family of WCF adapters. There is also some content on the new SQL adapter which is part of the BizTalk Adapter Pack. In that respect, it overlaps with ‘Microsoft BizTalk 2010 Line of Business Systems Integration’ which I reviewed recently, and also in respect of the SharePoint adapter. However, ‘Microsoft BizTalk 2010 Line of Business Systems Integration’ provides a whole lot more information on a range of LoB adapters. It is written in a different style to BizTalk Server 2010 Unleashed and is highly complementary.
 
Although the original plan was to include content on custom adapter creation, this didn’t, in the end, get covered in any depth. One reason for this is that, going forward, most custom adapter development for both BizTalk and Azure Integration Services (still some way off) is likely to be done using the WCF LoB Adapter SDK. That suggested that we would have had to document two distinct adapter frameworks in order to do the job properly, and this proved a little too much to tackle. Room there for another book, methinks.
 
Part 1 accounts for about half the content of the book. Beyond this, we wanted to add value by covering more advanced topics, including the use of BizTalk Server alongside WCF and the emerging Azure platform, new features in BizTalk Server 2010 and topics that have been only partially covered elsewhere. So, for example, Anush Kumar contributed an entire section (part 4) on RFID including the new RFID Mobile Framework. Anush is well-known in the BizTalk community due to his involvement in the development of RFID Server. Between Jon Flanders and Brian Loesgen, the book includes content on exploiting WCF extensibility in BizTalk, integrating via the Azure service bus (please note that this content was written before the advent of topics/subscriptions or Integration Services), the BAM framework and the ESB toolkit.
 
There is also a whole section (part 3) written by Scott Colestock that introduces the Administration Console and describes deployment approaches for BizTalk solutions.
 
Rules
That leaves one more subject for which I was responsible. One of the main reasons I was asked to contribute to the book was to document rules processing. Although there is some great content out there on the use of the BRE, I have long felt there is a need for a more comprehensive introduction. Due to some early confusion, I originally intended a total of seven short chapters on rules, but this content was refactored into two longer chapters. The first chapter introduces the Business Rules Framework. My idea was to emphasise the entire framework up front, rather than simply explore the rules composer and other tools. I also tried to explain the typical ‘feel’ of rules processing in the context of a BizTalk application, and the relationship between executable rules and higher-level business rules.
 
The second chapter investigates rule-based programming. It attempts broadly to achieve two related goals. The first is to explain rules programming to developers, to demystify the model, explain the techniques and provide insight into how to handle a number of common issues and pitfalls that rules developers face. The second is to provide a solid theoretical introduction to rules processing, including concepts that are not generally familiar to the average developer. I resisted the temptation, though, to provide an in-depth explanation of how the Rete Algorithm works, which I’m sure will be a relief :-) You can read the Wikipedia article on that.
 
Conclusions
So there you have it. BizTalk Server 2010 is a mature enterprise-level product which, although it has a long future ahead of it, won’t change fundamentally over time. Microsoft has publicly stated that their future major investments in EAI/EDI will be made in the Azure space, although new versions of BizTalk Server will continue to benefit from general improvement and greater integration with the evolving Azure platform. So, hopefully, our content will serve for some time as a useful introduction to BizTalk Server, chiefly from a developer’s perspective.

Monday, September 19, 2011 #

One benefit of my recent experience on a BA flight was that I got plenty of time to read through “Microsoft BizTalk 2010 Line of Business Systems Integration”. I’d promised the publisher weeks ago that I would take a look and publish some comments, but August has been such a busy month for me, and they have had to be patient.   I should point out, for the sake of transparency, that with another BizTalk book about to be released (next week), which I helped co-author, I have an urgent and obvious need to make good on this promise before I start to blog on other stuff.
 
BTS10LoBI is a really welcome addition to the corpus of BizTalk Server books and fills a conspicuous gap in the market.  BizTalk Server offers a wide-ranging library of adapters.  The ‘native’ (built-in) adapters understandably get a lot of attention, as do the WCF adapters, but other adapters, such as the LoB adapters and HIS adapters, are often overlooked.  I came to the book with the mistaken assumption that its chief focus was on the BizTalk Adapter Pack.  This is a pack of adapters built with the WCF-based LoB SDK.  In fact, the book follows a much broader path.  It is a book about LoB integration in a general sense, and not about one specific suite of adapters.  Indeed, it is not simply about adapters.  It focuses on integration with various LoB systems, and explains how adapters and other tools are used to achieve this.

This makes for a more interesting read.  For example, one, possibly unintended, consequence (given that it represents collaboration between five different authors) is that it illustrates very effectively the spectrum of approaches and techniques that end up being employed in real-world integration.  In some cases developers use adapters that offer focused support for metadata harvesting and other features, exploited through tools such as the ‘Consume Adapter Service’ UI.  In other cases, they use native adapters with hand-crafted schemas, or they create façade services.  The book covers additional scenarios where third-party LoB tools and cloud services (specifically SalesForce) are used in conjunction with BizTalk Server.  Coupled with lots of practical examples, the book serves to provide insight into the ‘feel’ of real-world integration which is so often a messy and multi-faceted experience.

The book does not cover the BizTalk Adapter Pack comprehensively.  There is no chapter on the Oracle adapters (not a significant issue because they are very similar to the SQL Server adapter) or the Siebel adapter.  On the other hand, it provides two chapters on the SAP adapter looking at both IDOC and RFC/BAPI approaches.  I particularly welcome the inclusion of chapters on integration with both Dynamics CRM 2011 and Dynamics AX 2009.  I learned a lot about Dynamics CRM which I haven’t had occasion personally to integrate with in its latest version.  The chapter on SalesForce mentions, but does not describe in any detail, the TwoConnect SalesForce adapter which we have used very effectively on previous projects.  Rather, it concentrates on direct HTTP/SOAP interaction with SalesForce.com and, very usefully, advocates the use of Azure AppFabric for secure exchange of data across the internet. 

The book provides two chapters on integration with SharePoint 2010.  The first explores the use of the native adapter to communicate with form and document libraries, and provides illustrated examples of working with InfoPath forms.  It would have been reasonable to stop there, but instead, the second chapter goes on to describe how to integrate more fully with SharePoint via its web service interface, and specifically how to interact with SharePoint lists.
 
Increasingly, the BizTalk community is waking up to the implications of Windows Azure and AppFabric.  This is an important step for developers to take.  Future versions of BizTalk Server will essentially join and extend the on-premise AppFabric world.  As Microsoft progressively melds their on/off premise worlds, BizTalk developers will increasingly have to grapple with integration of cloud based services, and integration of on-premise services via the cloud.  The book is careful to address this emerging field through the inclusion of a chapter on integration via the Azure AppFabric service bus.   As I mentioned above, this is applied specifically to SalesForce integration in a later chapter.  The AppFabric Service Bus is a rapidly-evolving part of the Azure platform, and is set to introduce a raft of new features in the coming months which will greatly extend the possibilities.  Eventually we will see cloud-based integration services appear in this space.  So, the inclusion of this chapter points out the direction of major future evolution of Microsoft’s capabilities and offerings in the integration space.

The book is not shy about providing guidance on practical problems and potential areas of confusion that developers may encounter.  The content is clearly based on real-world experience and benefits from ‘war stories’.  The value of such content should not be underestimated: it can save developers hours of pain and frustration when tackling new problems.  All in all, I thoroughly welcome this book.  My thanks to the authors, Kent Weare, Richard Seroter, Sergei Moukhnitski, Thiago Almeida and Carl Darski.


Sunday, September 18, 2011 #

I'm sitting in a nice new hotel in Redmond - the Hotel Sierra is well worth considering if you are staying in the area. I'm sleep-deprived and jet-lagged, and it's raining hard outside, but hey, I just got to play with one of the Samsung tablets they handed out at Build, and was not disappointed.  Microsoft is doing something truly remarkable with Win8 Metro.
 
On the other hand, I am deeply disappointed with the UK flag carrier, British Airways. Indeed, I've lost patience with them big-time. So forgive me for getting this off my chest. I am very much in the mood to do as much reputational damage to them as I can.
 
When I checked in on-line, they had booked me into one seat but I could see another with more legroom (a front row). Because of repeated experience over the last few years with defective headsets (I always carry my own earphones these days after one flight where we went through three different headsets before finding one in which one of the earphones actually worked) and bad headset connections (having to constantly twiddle the jack to try to hear anything), I spent a little while consciously debating with myself the intangible risks of changing my seat – i.e., I could easily be swapping a ‘working’ seat for a ‘broken’ one. Of course, there was no way to know, so I opted for the seat with more legroom.
 
MISTAKE! Forget about dodgy headsets. Nothing worked. Not even the reading light! Certainly not the inflight entertainment. They failed to show me the safety video (the steward did panic a little when he realised they had failed to comply with their legal obligations). So I sat for 9.5 hours in a grubby, worn-out cabin with nothing!
 
To be fair, they did offer to try to find me another seat (the plane was very full), but I opted for the legroom because I wanted to try to get some sleep. So I could probably have got in-flight entertainment. The point is, though, that this is now more than just an unfortunate couple of co-incidences over the last two or three years. I am reasonably fair-minded and understand that sometimes, with the best will in the world, things just go wrong.  In any case, I was brought up to put up or shut up (as my mother would say - it's part of the culture).  However, I am forced to conclude that this is now a repeated trend that I experience regularly to the point where I am consciously suspicious of the seats they give me, and clearly with good reason.  BA simply fails to maintain its cabins to anything like a reasonable or acceptable standard (I must trust they do a better job in maintaining the engines). I used to feel some patriotic pride in BA.  Not now.  It’s so sad to see the British flag carrier consistently deliver such an embarrassingly poor and second-rate service. I will be asking SolidSoft in future to, where possible, book me onto a different carrier and will do what I can to convince the company to use other carriers by default.
 
Personally, I think the UK government should give flag carrier status to someone else (Virgin, I guess).
 
 
 

Thursday, September 15, 2011 #

I've just installed the Windows 8 Developer Preview.  These are some first impressions:

Installation of the preview was quite smooth and didn't take too long.  It took a few minutes to extract the files onto a virtual image, but feature installation then seemed to happen almost instantaneously (according to the feedback on the screen).  The installation routine then went into a preparation cycle that took two or three minutes.  Then the virtual machine rebooted and after a couple of minutes more preparation, up came the licence terms page. 

Having agreed to the licence, I was immediately taken into a racing-green slidy-slidy set of screens that asked me to personalize the installation, including entering my email address.  I entered my work alias.  I was then asked for another alias and password for access to Windows Live services and other stuff.  There was a useful link for signing up for a Windows Live ID.  I duly entered the information.  Only on the next screen did I spot an option to not sign in with a Live ID.  I didn't try this, but I felt a bit peeved that the use of a Live ID had appeared mandatory until that point.  I suspect the idea is to try to entice users to get a Live ID, even if they don't really want one.

A couple more minutes of waiting, et voilà.  The Metro Start screen appeared, covered in an array of tiles.  Simultaneously I got an email (on my work alias) saying that a trusted PC had been added to my Live account.  I clicked the confirmation link, signed into Windows Live and checked that my PC had indeed been confirmed. Then Alan started chatting, but that is a different matter.

Of course, Oracle's Virtual Box (and my Dell notebook) haven't quite mastered the art of touch yet.  For non-touch users a scroll bar appears at the bottom of the Metro UI. I had a moment's innocent fun pretending to swipe the screen with my finger while actually scrolling with the mouse.  Ah, happy days.  Then I discovered that the scroll wheel on my mouse does the equivalent of finger swiping on the Start page.

I opened up IE10.  Wow!  I thought IE9's minimal chrome story was amazing.  IE10 shows how far short IE9 falls.  There is no chrome.  Nothing.  Nada.  Oh sure, there is an address box and some buttons.  They appear when needed (a right mouse click without touch) and disappear again as quickly as possible.  It’s the same with tabs which have morphed, in the Metro UI, into a strip of thumbnails that appear on demand and then get out of the way once you have made your selection.  Click on a new tab and you can navigate to a new page or select a page from a list of recents/favourites.  You can also pin sites to 'Start', which in this case means that they appear as additional tiles on the Start screen.  I played for a minute and then I suddenly experienced the same rush of endorphins that hit me the first time I opened Google Chrome a few years back.  Yes, sad to say, I fell in love with a browser!  A near invisible browser.  A browser that is IE for goodness sake! A browser that does what so many wished IE would do years ago. It gets out of your way.

Do you like traditional tabs?  That's not a problem, because the good-ole desktop is just a click (or maybe a tap or a swipe) away.  There is even a useful widget on the now-you-see-me/now-you-don't address bar that takes you to desktop view.  It is a bit of a one way trip, and results in a new IE frame opening on the desktop for the current page.  On the desktop, IE10 looks just like IE9.  It is, however, significantly more accomplished, and has closed much of the remaining gap between IE9, the full HTML5 spec and some of the additional specifications that people incorrectly term 'HTML 5'.  Microsoft has more than doubled its score on the (slightly idiosyncratic) HTML5 Test site (http://html5test.com/) and now just pips Opera 11.51, Safari 5.1 and Firefox 6 to the post for HTML5 compliance (it beats Firefox by just 2 points, although it is 1 point behind if you take bonus points into consideration) by that measure, although it still falls behind Google Chrome 13.

Pinning caused me some issues which I suspect are simply bugs in the preview.  Having pinned a site, every time I went into the Metro version of IE10, I found that I couldn't click on links, hide the address bar, view tabs, etc.  I eventually had to kill my IE10 processes to get things working properly again.  I noticed that desktop and Metro IE10 processes appear with slightly different icons in the radically redesigned task manager.

One slight mystery here is that the beta of 64-bit Flash worked fine in Desktop view but not in Metro.  No doubt this will long since have become a matter of history by the time all this stuff ships.

For a few minutes, I was rather confused about the apparent lack of a proper Start menu in the desktop view.  If you click on Start, you go back to the Metro Start page.  And then the obvious dawned on me.  In effect, the new Metro Start screen is simply an elaboration of the old Start menu.  In previous versions, when you click Start, the menu pops up on top of the desktop.  It is quite rich in previous versions, and allows you to start applications, perform searches for applications and files or undertake various management and administrative tasks.  Windows 8 is really not very different.  However, the Start menu has now morphed into the new Metro Start page which takes up the whole screen.  Instead of a list of pinned and recent applications, the Start screen displays tiles.  Move the mouse down to the bottom right corner (I don't know what the equivalent touch gesture is), and up pops a mini Start menu.  Clicking 'Start' takes you back to the desktop.  Click on 'Search' to search for applications, files or settings.  The settings feature is really powerful.  In fact, in Windows 7, searching for likely terms like 'Display' or 'Network' also returns results for settings, but you get far more hits in Windows 8.  The effect is rather like 'God Mode' in Windows 7.  [update: no, I'm wrong.  Windows 7 gives you a similar number of hits, BUT you need to click the relevant section in the search results to see them all.  I've clearly not been using Search effectively to date!]

The mini Start menu is available in the desktop as well.  In this case, if you click 'Search', the search panel opens up on the right of the screen and results then open up to take over the rest of the screen. As I experimented, I found that while things were fairly intuitive, the preview does not always work in a totally predictable fashion.  I also suspect that the experience is currently better for touch screens than for traditional mice (I note Microsoft is busy re-inventing the mouse for a Windows 8 world - see http://www.microsoft.com/hardware/en-us/products/touch-mouse/microsite/).  This is hardly surprising given that Windows 8 is clearly in an early state and is unfinished.  I suspect the emphasis to date has been on touch, and not on mouse-driven equivalents.

Once I grasped the essential nature of the Metro Start page and its correspondence to the Start menu in earlier versions of Windows, I began to feel far more comfortable about the changes.  Sure, all the marketing hype is about the radical new UI design features.  However, this really is just the next stage of the evolution of the familiar Windows UI.  Metro is absolutely fabulous as a tablet UI (better than iOS/Android IMHO, which, after all, are really just the old 'icons on a desktop' approach with added gestures), and I think it will actually be quite good for desktops, once it is complete.  I note, though, that people have already discovered the registry hack to switch Metro off (see http://www.mstechpages.com/2011/09/14/disable-metro-in-windows-8-developer-preview/), and I think MS would be wise to offer this as a proper setting in the release version.  I anticipate, though, that I will not be switching Metro off, even on a non-touch desktop.

Shutting down presented a little difficulty.  I am used to using the Start menu to do this (the classic 'Start' to stop conundrum in Windows).    I couldn't find a 'Shut Down' command on the Start screen.  I eventually did Ctrl-Alt-Delete (or rather, Home-Del in Oracle Virtual Box) and then found a Shut Down option at the bottom left of the screen.

Booting the VBox image takes 20 seconds on my machine.  20 seconds!  I'll say that again.  20 seconds, just about exactly.  That's on a virtual machine on my notebook.  On the host, it would be significantly faster.  This is Windows like we have never known it before.  Frankly, it is the ability to boot fast and to run Windows happily on ARM devices (I'll have to take the latter on trust, as I haven't yet seen it for real) that is the really important change.  It is almost more important than the Metro UI.  The nay-sayers and trolls say it can't be done.  I think Microsoft has done it, though.

My last foray into Windows 8 this evening was to launch Visual Studio 2011 Express and have a quick peek at the templates for Win8 development.  I have a lot to explore.

They say first impressions are the most important.  When I saw the on-line video of Windows 8 a couple of months back, I almost fell off my chair in surprise.  Now I have got my hands on an early version, I am really quite impressed.  Like everyone else, I couldn't see how Microsoft could possibly compete against Apple and Google in the tablet space.  Now...well...I look forward to seeing if and how Apple and Google will respond.  If it is true, as Steve Ballmer states, that Microsoft had 500 thousand downloads of the preview in less than 24 hours, then tectonic plates have already shifted and Microsoft is firmly on track to become a major contender in the tablet space.  OK, that's only one in every 14,000 people on the face of planet earth, and yes, the release version of Lion had double that number of hits in the first 24 hours.  Nevertheless, it is a huge figure for an early technical preview of an operating system that won't ship for another year.  It means people are very, very keen to start developing for Metro (I know we are at SolidSoft).  And if Windows 8 succeeds on tablets, what will that mean for Windows Phone, which also uses the Metro concept?  Don't ever, ever underestimate Redmond.


Wednesday, September 14, 2011 #

Following the previous post, here is a second bit of wisdom.  In the Load method of a custom pipeline component, only assign values retrieved from the property bag to your custom properties if the retrieved value is not null.  Do not assign any value to a custom property if the retrieved value is null.

This is important because of the way in which pipeline property values are loaded at run time.  If you assign one or more property values via the Admin Console (e.g., on a pipeline in a Receive Location), BizTalk will call the Load method twice - once to load the values assigned in the pipeline editor at design time and a second time to overlay these values with values captured via the admin console.  Let's say you assign a value to custom property A at design time, but not to custom property B.  After deploying your application, the admin console will display property A's value in the Configure Pipeline dialog box.  Note that it will be displayed in normal text.  If you enter a value for property B, it will be displayed in bold text.  Here is the important bit.  At runtime, during the second invocation of the Load method, BizTalk will only retrieve bold text values (values entered directly in the admin console).  Other values will not be retrieved.  Instead, the property bag returns null values.  Hence, if your Load method responds to a null by assigning some other value to the property (e.g., an empty string), you will override the correct value and bad things will happen.

The following code is bad:

    object retrievedPropertyVal;
    propertyBag.Read("MyProperty", out retrievedPropertyVal, 0);

    if (retrievedPropertyVal != null)
    {
        myProperty = (string)retrievedPropertyVal;
    }
    else
    {
        // This is the problem: on the second Load pass, a null simply means
        // "no value was entered in the admin console", so this assignment
        // wipes out the design-time value loaded on the first pass.
        myProperty = string.Empty;
    }

Remove the 'else' block to comply with the inner logic of BizTalk's approach.
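
In other words, the Load method should end up looking something like this.  This is a minimal sketch only - 'MyProperty', the 'myProperty' backing field and the assumption of a simple string-typed property are illustrative, not the real component:

    // Sketch: only assign when the property bag actually returned a value.
    public void Load(Microsoft.BizTalk.Component.Interop.IPropertyBag propertyBag, int errorLog)
    {
        object retrievedPropertyVal;
        propertyBag.Read("MyProperty", out retrievedPropertyVal, errorLog);

        // On the second pass, a null simply means 'no admin console override',
        // so the design-time value loaded on the first pass is left untouched.
        if (retrievedPropertyVal != null)
        {
            myProperty = (string)retrievedPropertyVal;
        }
        // No 'else' - never fall back to a default value here.
    }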


Here is a small snippet of BizTalk Server wisdom which I will post for posterity.  Say you are creating a custom pipeline component with custom properties.  You create private fields and public properties, and write all the code to load and save the corresponding property bag values from and to your properties.  At some point, when you deploy the BizTalk application and test it, you get an exception from within your pipeline stating, unhelpfully, that "Value does not fall within the expected range."  Or maybe, while using the Visual Studio IDE, you notice that values you type into custom properties in the Property List are lost when you reload the pipeline editor.

What is going on?   Well, the issue is probably due to having failed to initialise your custom property fields.  If they are reference types and have a null value, the PipelineOM PropertyBag class will throw an exception when reading property values.  The Read method can distinguish between nulls and, say, empty strings, due to the way data is serialised to XML (e.g., in the BTP file).   Here is a property initialised to an empty string:

            <Property Name="MyProperty">
              <Value xsi:type="xsd:string" />
            </Property>

Here is the same property set to null:

            <Property Name="MyProperty" />

The first is OK.  The second causes an error and leads to the symptoms described above.

ALWAYS initialise property backing fields in custom pipeline components.  NEVER set properties to null programmatically.
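
For completeness, here is roughly what I mean.  The names are illustrative only, and the null guard in the setter is just one way of honouring the 'never null' rule:

    private string myProperty = string.Empty;   // initialised - never left as null

    public string MyProperty
    {
        get { return myProperty; }
        set { myProperty = value ?? string.Empty; }   // guard against null being assigned
    }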