Charles Young

Sunday, December 7, 2014 #

Across the IT industry, people are re-thinking their approach to application integration and electronic data interchange in the context of ubiquitous scale-out cloud platforms, the onward march of service-orientation, the success of the RESTful philosophy, the adoption of agile methodologies and the rise of devops and continuous deployment. In this rapidly evolving world, the microservices concept has gained attention as an architectural and design model that promotes best practice in building modern solutions.

In this post, I will explain the two worlds of integration and microservices and start to explore how they relate to each other. I will start with a description of integration. It is important to understand the central concerns that integration addresses before moving on to look at the emerging microservices world. Without this, we won’t be able to relate the two to each other appropriately. I will move on to describe the principles that underpin the microservices approach and discuss the notion of 'monolithic' applications. I will then discuss the relationship between microservices and integration, especially in the context of emerging microservice PaaS platforms.

What is Integration?

I’m specifically interested in the concepts of enterprise application integration (EAI) and electronic data interchange (EDI). These related solution categories are widely understood. Together, they constitute a few square inches in the rich tapestry of software development. However, for many organisations, they play a central role in the effective exploitation of information technology.

EAI is necessary in scenarios where the following statements are true:

“We need to protect our existing investment in different applications, systems and services, including ‘heritage’ systems, line of business applications from different vendors and custom-built solutions.”

“We have business processes and activities that depend on, or touch, multiple systems and applications. We need IT to automate and streamline those processes and activities as much as possible by integrating all those systems into our processes through robust forms of data interchange.”

“We have to accept and manage change in the most cost-effective and efficient way we can. This can include new investments and acquisitions, new business processes, the evolution or replacement of existing systems and other issues. Much of this change is beyond the control of IT and is either dictated by the business or forced on us by partners or software vendors.”

EAI is characterised by techniques and approaches which we apply when we need to integrate applications and systems that were never designed or envisaged to interoperate with each other. This is more than a matter of protocol. Different systems often model business data and process in radically different ways. The art of the EAI developer, rather like that of a diplomat, is to build lines of communication that honour the distinctive viewpoint of each participant while fostering a shared and coherent understanding through negotiation and mediation.

As we will see, the microservice community advocates the use of lightweight communication based on standardised interfaces. This principle is typically satisfied through the use of RESTful interfaces. From an EAI perspective, however, this principle is not particularly interesting. Integration handles the protocols and interfaces dictated to it, whatever they may be, and is primarily concerned with mediation. To continue the analogy of diplomacy, it’s rather like encouraging everyone to communicate face-to-face via Skype video. In theory, this may be convenient and efficient, but it is of little use if each participant speaks in a different language[1] and is only interested in communicating their own concerns from their own perspective. The diplomat’s job is to mediate between the participants to enable meaningful interchange. In any case, some participants may not have the necessary bandwidth available to use the technology. Older participants may not be comfortable using Skype and may refuse to communicate this way.

In EAI, the message is king. The most fundamental task of the EAI developer is to mediate the interchange of messages by whatever means necessary. If it is possible to standardise the protocols by which this is done, then that is valuable. However, such standardisation is secondary to the central aim of ensuring robust mediation of messages. There is a strong correlation between the intrinsic value of individual messages to the business and the use of EAI. The value of a message may be financial (orders, invoices, etc.), reputational or bound up with efficiency and cost savings. The more each individual message is worth, the more likely it is that EAI approaches are required. Mediation ensures that each valuable message is delivered to its recipient in a coherent form and manner, or, if not, that appropriate action is taken. Any other concerns are secondary to this aim.

Messages are the primitives of integration. At their simplest, they are opaque streams of bytes that are communicated between participants. However, most EAI frameworks provide abstractions that support a distinction between content and metadata. These abstractions may be elaborated as required. For example, content may be treated as immutable while metadata remains mutable. Content may be broken down into additional parts, and each part may be provided with part-specific metadata. Message metadata is typically represented as name-value pairs. It may contain unique identifiers, routing information, quality-of-service data and so forth.
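
By way of a rough illustration, here is a minimal sketch of such a message abstraction in C#. It is not taken from any particular EAI framework; the class, member and metadata names are invented purely for illustration.

    // Illustrative only: a minimal message abstraction with immutable content and mutable metadata.
    using System;
    using System.Collections.Generic;

    public class Message
    {
        private readonly byte[] content;                        // opaque stream of bytes
        private readonly IDictionary<string, object> metadata;  // name-value pairs

        public Message(byte[] content)
        {
            this.content = content;
            this.metadata = new Dictionary<string, object>
            {
                { "MessageId", Guid.NewGuid() }                 // unique identifier
            };
        }

        // Content is treated as immutable...
        public IEnumerable<byte> Content
        {
            get { return this.content; }
        }

        // ...while metadata remains mutable and can be arbitrarily enriched.
        public IDictionary<string, object> Metadata
        {
            get { return this.metadata; }
        }

        // Transformation yields a new message; metadata records the association with the original.
        public Message Transform(Func<byte[], byte[]> transformation)
        {
            var transformed = new Message(transformation(this.content));
            transformed.Metadata["RelatesTo"] = this.Metadata["MessageId"];
            return transformed;
        }
    }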

The centrality of messages in EAI cannot be overstated. As an abstraction, messages are decoupled from other abstractions such as endpoints and processes. This means that they exist independently of any notion of a service contract. Service-orientated approaches cannot be mandated in EAI, however desirable they may be. Perhaps more importantly, messages can exist independently of any specific application, system or service. They possess the following characteristics:

Extensibility: Messages can be arbitrarily enriched and extended with additional metadata. Metadata supports correlation protocols, identity, routing information and any additional semantics with no requirement to change the message content or format.

Malleability: Message content can be amended or transformed as required as it is passed from source to recipient. In transforming a message, we often create a new message to hold the transformed content. Metadata can be used to record the association between the old and new versions of the message.

Routability: Static or dynamic routing decisions can be made at run-time to control the destination of messages, the protocols used to communicate those messages and other concerns. Such decisions are generally made by evaluating metadata against routing rules (a short sketch follows this list). This approach supports the flexibility needed in EAI to implement complex interchange and correlation patterns and to adapt rapidly to changes in business requirements.

Traceability: Messages are traceable as they pass between different services and applications and undergo multiple transformations. As well as recording the progress of individual messages, tracing can capture the relationships between different messages. This includes correlated messages (e.g., response messages correlated to request messages), messages that logically represent a given business transaction, batched messages, sequences of messages, etc. Tracing provides insight, supports troubleshooting and enables the gathering of metrics.

Recoverability: Messages can easily be persisted in highly available stores so that they are recoverable in the event of a failure. When individual messages are of significant worth to the business, this provides assurance that important data is not lost and will be processed appropriately. It supports high levels of service and is central to building robust automation of business processes.
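
As a sketch of the routability characteristic described above, the following fragment evaluates routing rules against message metadata to select destinations. The rule and router types are invented for illustration and assume the Message class sketched earlier.

    // Illustrative only: route a message by evaluating its metadata against simple routing rules.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class RoutingRule
    {
        public Func<IDictionary<string, object>, bool> Condition { get; set; } // evaluated against metadata
        public Uri Destination { get; set; }                                   // endpoint used if the rule matches
    }

    public static class Router
    {
        // Returns the destinations of every rule whose condition the message metadata satisfies.
        public static IEnumerable<Uri> Route(Message message, IEnumerable<RoutingRule> rules)
        {
            return rules.Where(rule => rule.Condition(message.Metadata))
                        .Select(rule => rule.Destination);
        }
    }

In practice, rules of this kind would be loaded from configuration and amended at run-time, with no change to message content or to the sending and receiving systems.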

In EAI, the focus is on the applications, systems and services that organisations invest in and rely on. For EDI, the focus is on the exchange of data with external organisations, including trading partners, suppliers and customers. EDI has a lot in common with EAI, but is about the integration of different businesses and organisations, rather than the integration of in-house applications. Like EAI, one of the main drivers is the need to manage change effectively and efficiently, even though that change is often beyond the control of IT.

Microservices

Now we have described the world of integration, we need to explore the concept of microservices. This is best understood as a set of principles for service-orientation. By ‘service-orientation’ I mean any approach to software design that conceives of systems and applications as a collaboration between discrete service components. For some people, the term has strong connotations with complex proprietary SOA frameworks and tooling. I use the term only in a literal and broad sense.

There is no one authoritative definition of the term ‘microservice’. However, a reasonable level of consensus has emerged. We can summarise the principles as follows:

Decompose applications into microservices: Microservices apply service-orientation within the boundaries of individual applications. Solutions are created from the ground up as a collaboration of fine-grained services, rather than monolithic applications with a front-end layer of service interfaces.

Let each microservice do one thing, and do it well: The ‘micro’ in microservices is too often equated with low SLOC counts. While SLOC can act as a useful heuristic for detecting microservice code smells, this misses the point. A microservice is focused on handling a small subset of well-defined and clearly delineated application concerns. Ideally, a microservice will handle just one concern. This focus makes it much easier to understand, stabilise and evolve the behaviour of individual microservices.

Organise microservices around business capabilities: Multi-tier architectures historically divide and group services by functionality. Services reside in the presentation tier, the business tier or the data tier. If we think of this as horizontal partitioning, then the microservices approach emphasises vertical partitioning. Microservices are grouped and partitioned according to the products and services offered by the organisation. This fosters cross-functional development teams that adopt product-centric attitudes aligned to business capabilities. It de-emphasises the boundaries of individual applications and promotes the fluid composition of services to deliver value to the business.

Version, deploy and host microservices independently: Microservices should be as decoupled and cohesive as possible. This minimises the impact of change to any individual microservice. Microservices can evolve independently at the hands of different individuals or small teams. They can be written in different languages and deployed to different machines, processes and runtime environments at different times using different approaches. They can be patched separately and retired gracefully. They can be scaled effectively, chiefly through the use of horizontal scaling approaches.

Use lightweight communication between microservices: Where possible, implement simple interchange through standardised interfaces. The general consensus is that REST and JSON are preferred approaches, although they are by no means mandated. Avoid complex protocols and centralised communication layers. Favour choreography over orchestration and convention over configuration. Implement lightweight design patterns such as API Gateways to act as intermediaries between microservices and clients. Design each microservice for failure using automatic retry, fault isolation, graceful degradation and fail-fast approaches. A minimal sketch of such a service follows this list.

Avoid centralised governance and management of microservices: Use contract-first development approaches, but don’t enforce centralised source code management, specific languages or other restrictions across different microservice development teams. Don’t depend on centralised discovery e.g., via service directories. Don’t enforce centralised data management or configuration, but instead let each microservice manage its own data and configuration in its own way.
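
As promised above, here is a minimal sketch of a single-purpose service that communicates over plain HTTP and JSON. The URI, resource and payload are invented for illustration; a production service would add the retry, fault-isolation and fail-fast behaviour described in the principles above.

    // Illustrative only: a single-purpose HTTP service that returns a JSON representation of one resource.
    using System;
    using System.Net;
    using System.Text;

    public class OrderStatusService
    {
        public static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/orderstatus/");
            listener.Start();

            while (true)
            {
                HttpListenerContext context = listener.GetContext();

                // This service does one thing only: report order status.
                // Anything else belongs to another microservice.
                const string json = "{ \"orderId\": 42, \"status\": \"Dispatched\" }";
                byte[] buffer = Encoding.UTF8.GetBytes(json);

                context.Response.ContentType = "application/json";
                context.Response.OutputStream.Write(buffer, 0, buffer.Length);
                context.Response.Close();
            }
        }
    }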

On Monoliths

The most common rationale for microservices contrasts them with the design and implementation of monolithic applications. At one extreme, I’ve seen monolithic applications defined as highly coupled solutions deployed as a single package to a single machine and run in a single process. Of course, few modern enterprise-level applications are designed this way. Multi-tier architectures, component-based design and service orientation, together with the widespread adoption of modern design patterns, have ensured that most enterprise-level development has long moved on from the creation of such monstrosities.

A better definition of the term ‘monolith’ focuses on the need to deploy, version and patch entire applications as a whole, regardless of how they are physically structured. From this perspective, the problem is cast in terms of the impact of fine-grained changes on the application as a whole. A change in one component may require a full re-deployment of the entire application.

This type of monolithicity has a detrimental effect on the entire application lifecycle. Each developer must constantly re-build and re-deploy the entire application on their desktop just to test a small change to a single component. The build manager must maintain complex scripts and oversee slow processes that repeatedly re-construct the entire application from numerous artefacts, each worked on by different developers. The testers are restricted to black-box integration testing of large and complex combinations of components. Deployment is a fraught and costly battle to coax the entire ensemble to function in alien environments. Every patch and upgrade requires the entire application to be taken out of commission for a period, compromising the capability of the business to function. Significant change becomes infeasible just in terms of regression testing. To cap it all, once the architects and developers have moved on, no one is left with sufficient understanding of how the application functions as a whole.

Microservices replace the classic notion of the application, defined by tiers of functionality, with the concept of loosely-coupled collaborations of microservices grouped according to business capability. They facilitate the continuous evolution and deployment of solutions, allowing small groups of developers to work on constrained problem domains while minimising the need to enforce top-down governance on their choice of tools, languages and frameworks. Microservices support the agile, high velocity development of product-centric solutions using continuous deployment techniques. They support elastic scalability. They help to minimise technical debt.

Novelty

An obvious objection could be that microservices lack novelty, by which I mean that they do not possess sufficient distinction from pre-existing and generally received architectural concepts to be of interest. Certainly each of the principles outlined above has a long history pre-dating the emergence of the term ‘microservice’. Such objections arise naturally when microservices are characterised as a remedy to mainstream notions of service-orientation. While it is true that some examples of service-orientated architecture prove vastly over-complicated for the problems they address, that is simply a matter of poor application of architectural principles. Any attempt to claim that ‘service-orientation is a bad thing’ and cast microservices as the solution misses the point entirely and quickly descends into caricature and absurdity.

In reality, microservice principles are a service-orientated response to the world of agility, devops and continuous deployment. As such, their novelty emerges from their ability to mould and fashion the direction of service-orientated development in the context of these concerns. They also represent the desire to ‘democratise’ software development, allowing developers from the widest circle to collaborate without unnecessary restriction.

A number of articles and presentations contrast the microservice approach to the use of proprietary integration and service bus frameworks. While some of the arguments are spurious and ill-informed, the underlying intention is good. It is the desire to avoid closed worlds with their ‘high-priesthoods’ in favour of a more open world in which mainstream development approaches can be used to solve problems by any suitably experienced developer.

I should declare my own position here. I have spent the last decade or more as a ‘high-priest’ of integration with a focus on a proprietary framework and set of tools. However, with the advent of cloud computing, I increasingly inhabit the ‘democratised’ world. I have, as it were, a foot in both camps. Indeed, I spend roughly equal time moving between these two camps. I see worth in both, and I believe they are more closely aligned than some imagine. However, I also recognise that the flow of history is clearly towards democratisation.

When integration and microservices meet

Now we have defined the worlds of integration and microservices, we need to ask some obvious questions. Where and how do these two worlds meet? Do they overlap or not? Are they complementary or do they contradict each other?

There is plenty of scope for disagreement here. We can imagine an argument between two protagonists. Alice is an EAI veteran. Bob is a microservice evangelist.

Alice kicks things off by asserting that microservices are an irrelevance to her. Integration exists to handle the interchange between any applications, systems and services, regardless of their architecture. She is happy to integrate anything, including monolithic applications, microservices, SOA services and all systems of any kind, shape or size.

Bob, piqued by her dismissive attitude, retorts that if people had concentrated on creating microservices in the first place, rather than monolithic applications, there would be no need to integrate applications at all. It is Alice’s skills that would be irrelevant.

Alice, rising to the bait, responds loftily that Bob’s naïve idealism has no relevance in the real world and that she doesn’t expect to be looking for a new job anytime soon.

Bob, irritated by Alice’s tone, suggests that the very tools, approaches and architectures that Alice uses are monolithic, promote monolithic solutions and cause many of the problems microservices are there to solve. She is part of the problem, not part of the solution.

Now seriously annoyed, Alice claims that microservices represent a simplistic and childish approach that completely ignores the inherent complexity she faces every day. The tools she uses were selected because they address this complexity directly. Bob’s way of thinking, she claims, is born of lack of experience and a hopeless idealism. It can only promote chaos and invite failure.

I’m sure you agree this has to stop! We will leave Alice and Bob to their own devices, well out of earshot. For our part, wisdom dictates a cool-headed, reasoned response. We need to think through the issues carefully and honestly, making sure we take time to understand different perspectives and to properly analyse the root causes of the problems we face. I may be a high priest of integration, but I’m as keen as anyone to understand what works well, what works poorly and what is completely broken. Integration can certainly be a demanding discipline and the approaches we use sometimes leave much to be desired. Can the world of microservices inform us and help us do better?

There is a clear delineation of domains of interest that characterise the argument. My world broadly splits into two such domains. The first is the domain of business applications, systems and services. This is located firmly on the other side of the fence to where I am. I have no control over the applications that live in that domain. My job is to accept their existence, trust that the business has good reasons to invest in them and work out how to integrate them. My interests are different to, but do not conflict with, those of the developers who build and maintain those applications.

The second domain is that of integration. This is my natural home and here I have some control over my environment. I can select, or at least influence, the tools and frameworks I believe fit the problem domain, and I can design and implement integration solutions.

Clearly, microservice thinking applies to the first domain. It does so without conflict with the integration domain. However, microservices are unlikely to dominate the first domain any time soon. Most organisations will continue to invest in on-premises and SaaS line-of-business applications, enterprise-level CRM, CMS and ERP systems and data management systems. They will apply the principle of ‘buy before build’, and hence, even if the whole world moves to RESTful interchange at the API level, their services and applications will still be silos of functionality and data in need of integration.

Even in scenarios where organisations invest in writing custom applications and services, it is highly unlikely that they will be willing to re-write their entire custom estate around the principles of microservices. It is far more likely that organisations will adopt microservice approaches over an extended period, using evolutionary approaches to tackle new problems. They will only re-write existing applications and services as microservices when there is a compelling commercial reason to do so.

The rise of µPaaS

We are seeing the first stirrings of interest (this was written in late 2014) in merging the principles of microservices with the provision of Platform-as-a-Service in cloud environments. The concept is to build out public PaaS offerings through the provision of microservice marketplaces. In this emerging world, developers will create solutions by selecting pre-built microservices and combining and blending them with additional custom microservices. Public cloud platforms will support commercial approaches to monetise microservices. The PaaS platform itself, however, will leave developers free to exploit this marketplace, or not, as they choose. They can combine its offerings with custom-built and free/open-source microservices as required.

I cannot resist the temptation to call this new world ‘microPaaS’, or µPaaS. Its emergence is the main incentive to write this article. As soon as the µPaaS concept began to emerge, two key requirements came into sharp focus. The first is the need for better containership at the OS level. PaaS models must, of necessity, provide some kind of OS container for packaged code. This may be a virtual machine instance with automated provisioning. However, this locks developers into a single OS and any runtime environments that happen to target that OS. This violates the intention to allow developers to select the most appropriate tools and technologies for each individual microservice. In addition, microservices demand the freedom to deploy and host each microservice independently. Using an entire virtual machine as a container, possibly for a single microservice, is a top-heavy approach. Hence, a µPaaS needs lightweight, OS-agnostic containership. Efforts are currently focused on the evolution of Docker which, today, is a Linux-only technology, but tomorrow will emerge on other OS platforms, and specifically on future versions of Microsoft Windows.

The second issue is that of integration. In the microservices world, the vision often extends as far as different development teams collaborating within a larger organisation. However, on a public cloud platform, everyone gets to play. This is a problem. Microservices will be provided by entirely different teams and organisations. We can expect that, following the open source model, entire communities will emerge around mini-ecosystems of microservices that share common data representations, transactional boundaries and conventions. However, across the wider ecosystem as a whole, there will still be a need to provide mediation, transformation and co-ordination.

In the µPaaS world, the ideal is to provide integration capabilities as microservices themselves. The danger here lies in constant re-invention of wheels, solving the same integration problems again and again. This suggests the need to provide first-class generic integration microservices as a fundamental part of the ecosystem. This, however, highlights a further risk. Generic integration microservices must cater for the complex and arcane issues that can arise when composing solutions from incompatible parts. They cannot afford to ignore this complexity. If they do so, they will be an endless source of frustration and will lower the perceived value of the ecosystem as a whole. Instead, they must implement first-class abstractions over the complexities of integration in order to avoid compromising the ‘democratised’ nature of a µPaaS. They must be easy for any developer to use. No high priests of integration allowed!
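
To suggest what such a first-class abstraction might look like at the code level, here is a minimal sketch of a generic, map-driven transformation component of the kind a µPaaS might offer as a reusable integration microservice. It assumes Json.NET (Newtonsoft.Json); the mapping approach and names are illustrative only.

    // Illustrative only: a generic transformation that maps source JSON values onto a new target shape.
    using System.Collections.Generic;
    using Newtonsoft.Json.Linq;

    public static class JsonTransformer
    {
        // 'map' associates target property names with JSONPath expressions over the source document.
        public static JObject Transform(JObject source, IDictionary<string, string> map)
        {
            var target = new JObject();
            foreach (var entry in map)
            {
                JToken value = source.SelectToken(entry.Value);
                if (value != null)
                {
                    target[entry.Key] = value.DeepClone();
                }
            }
            return target;
        }
    }

The point is not the mechanics, but that the mapping is data rather than code: any developer can compose and amend it without needing to become a high priest of integration.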

The need for integration capabilities in µPaaS is driven by another consideration. A µPaaS platform will be used to build new solutions. However, there will still be a need to integrate these with existing applications and services. This, of course, includes integration with on-premises applications as part of hybrid architectures. This integration can, of course, be achieved using existing EAI/ESB tools and products. However, µPaaS offers the chance to re-think the approach to EAI from a microservices perspective. Again, a driving force for this is the democratisation of EAI, bringing down the cost and effort required to integrate applications. Done well, a microservice approach to integration will result in solutions that are easier to maintain and evolve over time, which scale easily, but which provide the robust message handling capabilities at the heart of integration.

One other reason for providing integration services in µPaaS is to support EDI workloads. The cloud provides an obvious location to host EDI solutions, and we have already seen the emergence of iPaaS support for EDIFACT/X12 and AS2, together with trading partner management functionality. Expect to see this capability evolve over time.

The future landscape

Organisations that have made significant investment in EAI, EDI and service bus technologies are unlikely to replace those technologies with microservices in the near future. These tools will continue to play a critical role in enabling organisations to integrate their systems and applications effectively. Until we see µPaaS providing equivalent functionality, they will retain their role as the appropriate tools, frameworks and products for enabling robust, enterprise-level integration of mission-critical workloads.

Microservices apply service-orientated thinking inside application boundaries and serve to break down those boundaries. Contrast this with the application of service-orientation at the application boundary itself. Ten years ago, it was still rare for commercial applications to provide a web service API. Now, it is almost unthinkable for any modern business application to omit such features. In turn, this has allowed EAI tools to evolve more closely towards the concepts of the enterprise service bus. Likewise, many ESB products add value by incorporating integration technologies and tools.

Many of the concerns addressed by existing EAI tools are analogous to those of the microservices world. EAI emphasises strong decoupling of applications and services, ensuring those investments can vary and evolve over time, or even be removed or replaced with minimal impact on other applications and services. Within the integration domain itself, most modern EAI and ESB products implement integration components as services. They generally allow those services to be hosted independently and to be scaled horizontally, although cost issues related to licensing and hardware can place limits on this. Integration services are often fine-grained, supporting a constrained set of behaviours for mediation or transformation. They evolved before the notion of microservices was conceived, and they do not generally adhere to all the microservices principles. However, they share a common heritage with microservices and pursue similar goals.

One issue that muddies the waters in EAI is the hosting of business logic within the integration domain. This can be a controversial matter. Some argue that business logic should be hosted within separate applications, systems and services. This may be driven by the centrality of ERP systems within organisations, or the need to ensure that different parts of the organisation take responsibility for automating the activities in which they are engaged. In this case, the integration layer is viewed simply as a communication hub that mediates messages between these systems. Others argue that business logic services naturally belong within the integration layer. This approach emphasises the need to decouple automated business processes from supporting applications and systems in order to allow the organisation to rip and replace those systems over time with minimal impact on custom logic.

In my experience, the driving forces that dictate the best approach have more to do with organisational culture and longer-term IT strategy than with any architectural principle. Part of the art of integration is to intelligently predict how business requirements and IT landscapes are likely to evolve over time and to design solutions accordingly. This explains why, in many scenarios, the investment in EAI and ESB products results in the hosting of significant business logic within the integration domain.

What then, of the future? Microservices and µPaaS will undoubtedly work their magic in the enterprise space. However, they won’t be used exclusively. Integration will, in part, move to the µPaaS world. µPaaS itself will predominantly favour the public cloud, but will also be available within private cloud implementations. Today’s EAI and ESB tools will evolve along the lines of cloud enablement and will continue for the foreseeable future to play an important role within the enterprise. Where business logic today is hosted in the integration domain, we can expect a move towards the use of microservices. Integration itself will be ‘democratised’, at least to an extent. This will reduce costs and timescales, and help organisations meet the challenges of the future.


[1] Notwithstanding the reported advent of automated real-time translation capabilities in Skype. No analogy is perfect!


Thursday, July 31, 2014 #

What is truly offensive about Richard Dawkins' comments on date rape and paedophilia is his air of intellectual superiority founded on hopeless ignorance of basic logic.  He believes the following to be an invalid syllogism:

X is Bad

Y is Worse

Therefore X is not Bad

A syllogism can be valid or invalid, but it remains a syllogism.  As Aristotle might have put it…

All syllogisms have a middle term that appears in both the major and minor premises

In Dawkins’ example, the middle term does not appear in the minor premise

Therefore Dawkins’ example is not a syllogism

He might also have said…

For all syllogisms, the minor term is the subject of the conclusion

In Dawkins’ example the minor term does not appear in the conclusion

Therefore Dawkins’ example is not a syllogism

Modus Baroco, x2

Using fancy technical terms to try to convince others how clever you are only works if you actually know what those terms mean.


Thursday, July 3, 2014 #

From Google, this morning… “Charles, do you know Charles Young?” with a lovely picture of myself.

Nope, never heard of me. Google clearly has no idea who I am either.


Monday, June 2, 2014 #

I’ve recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server’s BAM event observation library.  In doing this, I noticed an issue with continuations.  Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly.   Careful inspection of my code confirmed that I was initializing and invoking the BAM interceptor correctly, so I was mystified.  However, I eventually found the problem.  It is a logical error in the BAM Interceptor code itself.

The BAM Interceptor provides a useful mechanism for implementing dynamic tracking.  It supports configurable ‘track points’.  These are grouped into named ‘locations’.  BAM uses the term ‘step’ as a synonym for ‘location’.   Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc.  Each step defines a collection of track points.

Understanding Steps

The BAM Interceptor provides an abstract model for handling configuration of steps.  It doesn’t, however, define any specific configuration mechanism (e.g., config files, SSO, etc.).  It is up to the developer to decide how to store, manage and retrieve configuration data.  At run time, this configuration is used to register track points which then drive the BAM Interceptor.

The full semantics of a step are not immediately clear from Microsoft’s documentation.  They represent a point in a business activity where BAM tracking occurs.  They are named locations in the code.  What is less obvious is that they always represent either the full tracking work for a given activity or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity.  The BAM Interceptor enforces this by throwing an error if no ‘start new’ or ‘continue’ track point is registered for a named location.

This constraint implies that each step must be marked with an ‘end activity’ track point.  One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as ‘ended’ in order to ensure the right housekeeping is done in the database.  If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state.  A step, therefore, always represents an entire unit of work for a given activity or continuation ID.  For activities with continuation, each unit of work is termed a ‘fragment’.
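
By way of illustration, here is a rough sketch of these semantics using the BAM event stream API directly, rather than via the BAM Interceptor.  The activity name, data items and continuation token are invented, and the connection string is a placeholder.

    // Illustrative only: start an activity, enable a continuation, end the root fragment,
    // then complete the continued fragment under the continuation token.
    using System;
    using Microsoft.BizTalk.Bam.EventObservation;

    public static class BamContinuationSketch
    {
        public static void TrackAcrossTwoFragments(string bamConnectionString)
        {
            var eventStream = new DirectEventStream(bamConnectionString, 1);

            const string activityName = "PurchaseOrder";
            string activityId = Guid.NewGuid().ToString();
            string continuationToken = "Continuation_" + activityId;

            // Root fragment: start, track some data, enable the continuation and end the fragment.
            eventStream.BeginActivity(activityName, activityId);
            eventStream.UpdateActivity(activityName, activityId, "Received", DateTime.UtcNow);
            eventStream.EnableContinuation(activityName, activityId, continuationToken);
            eventStream.EndActivity(activityName, activityId);

            // Continuation fragment (possibly in another process or service): further tracking
            // happens under the continuation token, which must also be ended.
            eventStream.UpdateActivity(activityName, continuationToken, "Processed", DateTime.UtcNow);
            eventStream.EndActivity(activityName, continuationToken);
        }
    }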

Instance and Fragment State

Internally, the BAM Interceptor maintains state data at two levels.  First, it represents the overall state of the activity using a ‘trace instance’ token.  This token contains the name and ID of the activity together with a couple of state flags.  The second level of state represents a ‘trace fragment’.   As we have seen, a fragment of an activity corresponds directly to the notion of a ‘step’.  It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions.

When handling continuations, the BAM Interceptor differentiates between ‘root’ fragments and other fragments.  Very simply, a root fragment represents the start of an activity.  Other fragments represent continuations.  This is where the logic breaks down.  The BAM Interceptor loses state integrity for root fragments when continuations are defined.

Initialization

Microsoft’s BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data.  The process starts by populating an Activity Interceptor Configuration object with an array of track points.  These can belong to different steps (aka ‘locations’) and can be registered in any order.  Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor.  The BAM Interceptor sets up a hash table of array lists.  Each step is represented by an array list, and each array list contains an ordered set of track points. 

The BAM Interceptor represents track points as ‘executable’ components.  When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn.  Each track point retrieves any required data using a call back mechanism and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.).  The serialised trace fragment is then handed off to a BAM event stream (buffered or direct) which takes the appropriate action.

The Root of the Problem

The logic breaks down in the Activity Interceptor Configuration.  Each Activity Interceptor Configuration is initialised with an instance of a ‘trace instance’ token.  This provides the basic metadata for the activity as a whole.  It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed.  This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration.

Each trace instance token is automatically initialised to represent a root fragment.  However, if you subsequently register a ‘continuation’ step with the Activity Interceptor Configuration, the ‘root’ flag is set to false at the point the ‘continue’ track point is registered for that step.   If you use a ‘reflector’ tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.   

This makes no sense.  The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration.  The Activity Interceptor Configuration is designed to hold track points for multiple steps.  The ‘root’ flag is clearly meant to be initialised to ‘true’ for the preliminary root fragment and then subsequently set to false at the point that a continuation step is processed.  Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to ‘false’ before the root fragment is processed.  This is clearly an error in logic.

The problem causes havoc when the BAM Interceptor is used with continuation.  Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes!   This has nothing to do with the root and the continuation being in the same process.  It is due to a fundamental mistake of setting the ‘root’ flag to false for a continuation before the root fragment is processed.

The Workaround

Fortunately, it is easy to work around the bug.  The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.  This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group.  In my case, the first approach was required.  Here is what the amended code looks like:

// Because of a logic error in Microsoft's code, a separate ActivityInterceptorConfiguration must be used
// for each location. The following code extracts only those track points for a given step name (location).
var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints
                      where (string)tp.Location == bamStepName
                      select tp;
var bamActivityInterceptorConfig =
    new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);

foreach (var trackPoint in trackPointGroup)
{
    switch (trackPoint.Type)
    {
        case TrackPointType.Start:
            bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);
            break;

etc…

I’m using LINQ to filter a list of track points for those entries that correspond to a given step and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class.   As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.


Friday, December 6, 2013 #

We are now SolidSoft Reply. This morning, the company was acquired by Reply S.p.A. This is great news for us. We will continue to build the business under the SolidSoft name, brand and culture, but as part of a much larger team. Further information at http://www.reply.it/en/investors/financialnews/readd/%2c15230

Monday, April 29, 2013 #

Microsoft does not currently offer RHEL on subscription on the Windows Azure platform, and people have reported problems when trying to create and run their RHEL VMs.  So, does RHEL run on Azure?  Read on here.

 

http://solidsoft.azurewebsites.net/articles/posts/2013/does-red-hat-enterprise-linux-run-on-azure.aspx


Wednesday, January 30, 2013 #

I can't say I follow things that closely in the Windows Phone world, but I am aware of the upgrade to Windows Phone 7.8.  I've been looking forward to this for a while.  The improvements in the UI look nice, and when I get it, I can try to kid myself that my company phone, a Nokia Lumia 800, is really an 820.


It appears that the roll-out of 7.8 started today in the US for Nokia 900 users.  It can take a while for upgrades to make it to all the eligible phones.  So, imagine my delight when, this evening, my phone informed me an update was waiting for me!  Yeah!  I eagerly started the upgrade process and excitedly informed my bemused family that I was about to get Windows Phone 7.8.

Er...no.  After a successful upgrade, the phone re-booted...into Windows Phone 7.5.

I did a little digging.  It appears that the last upgrade, code-named Tango, has just arrived on my phone.  Tango was released on 20th July last year.  That's just over six months before I got the upgrade.

Oh dear me.

I'll report back on Windows Phone 7.8 in late summer...if I'm fortunate enough to get it by then :-(

Update
 
Apologies to Nokia who I stupidly railed at in an earlier version of this post.   Of course, they simply manufacture the handsets.  In my case, the carrier is Vodafone and they are the company responsible for pushing updates to my phone.    It seems that back in September Vodafone decided to cancel the global roll-out of Tango updates to some users due to a WiFi concern.  Although the press only reported this as affecting a single HTC model, maybe this is connected with my experience.
 
Update 2 (Friday)
 
A colleague has been busy forcing upgrades on his Nokia Lumia 800 (there is a little trick you can use, apparently, that involves switching off your PC WiFi connection at just the right moment while using Zune, and then re-connecting).  He forced an upgrade to Tango.  Now, he reports that he got two further updates and then a third.  The third appears to be Windows Phone 7.8 (which at the time of writing he is currently installing).  So, best guess is that Tango is being rolled out as a precursor to the 7.8 update.  I'll report back on this later.
 
Update 3

After many weeks of non-information and constant complaints on their forum, Vodafone did eventually roll out Windows Phone 7.8.  This was, in fact, a patched version of 7.8.  While I have no problems with Vodafone withdrawing the roll-out of 7.8 in order to fix a bug, I do have issues with the inordinate length of time it took them to issue the patched version and, more importantly, the total lack of information provided by the company to their customers.


Tuesday, January 22, 2013 #

The C# compiler is a pretty good thing, but it has limitations. One limitation that has given me a headache this evening is its inability to guard against cycles in structs.  As I learn to think and programme in a more functional style, I find that I am beginning to rely more and more on structs to pass data.  This is natural when programming in the functional style, but structs can be damned awkward blighters.

Here is a classic gotcha.  The following code won't compile, and in this case, the compiler does its job and tells you why with a nice CS0523 error:

    struct Struct1
    {
        public Struct2 AStruct2;
    }

    struct Struct2
    {
        public Struct1 AStruct1;
    }

Structs are value types and are automatically instantiated and initialized as stack objects.  If this code were compiled and run, Struct1 would be initialised with a Struct2 which would be initialised with a Struct1 which would be initialised with a Struct2, etc., etc.  We would blow the stack.

Well, actually, if the compiler didn't capture this error, we wouldn't get a stack overflow because at runtime the type loader would spot the problem and refuse to load the types.  I know this because the compiler does a really rather poor job of spotting cycles.

Consider the following.  You can use auto-properties, in which case the compiler generates backing fields in the background.  This does nothing to eliminate the problem.  However, it does hide the cycle from the compiler.  The following code will therefore compile!

    struct Struct1
    {
        public Struct2 AStruct2 { get; set; }
    }

    struct Struct2
    {
        public Struct1 AStruct1 { get; set; }
    }

At run-time it will blow up in your face with a 'Could not load type <T> from assembly' (80131522) error.  Very unpleasant.

ReSharper helps a little.  It can spot the issue with the auto-property code and highlight it, but the code still compiles.  However, ReSharper quickly runs out of steam, as well.   Here is a daft attempt to avoid the cycle using a nullable type:

    struct Struct1
    {
        public Struct2? Struct2 { get; set; }
    }

    struct Struct2
    {
        public Struct1 Struct1 { get; set; }
    }

Of course, this won't work (duh - so why did I try?).  System.Nullable<T> is, itself, a struct, so it does not solve the problem at all.  We have simply wrapped one struct in another.  However, the C# compiler can't see the problem, and neither can ReSharper.  The code will compile just fine.  At run-time it will again fail.

If you define generic members on your structs things can easily go awry.  I have a complex example of this, but it would take a lot of explaining as to why I wrote the code the way I did (believe me, I had reason to), so I'll leave it there.

By and large, I get on well with the C# compiler.  However, this is one area where there is clear room for improvement.

Update

Here's one way to solve the problem using a manually-implemented property:

    struct Struct1
    {
        private readonly Func<Struct2> aStruct2Func;

        public Struct1(Struct2 struct2)
        {
            this.aStruct2Func = () => struct2;
        }

        // Let's make this struct immutable!  It's good practice to do so
        // with structs, especially when writing code in the functional style.
        // NB., the private backing field is declared readonly, and we need a
        // constructor to initialize the struct field.  There are more optimal
        // approaches we could use, but this will perform OK in most cases,
        // and is quite elegant.
        public Struct2 AStruct2
        {
            get
            {
                return this.aStruct2Func();
            }
        }
    }

    struct Struct2
    {
        public Struct1 AStruct1 { get; set; }
    }


Tuesday, November 13, 2012 #

Forget about Steven Sinofsky's unexpected departure from Microsoft.   The real news from Redmond is that, after approximately 72 years of utter stagnation, the latest version of Visio has been upgraded to support UML 2.x!   It gets better.  It looks like it actually supports the latest version of UML (2.4.1).

Unbelievable!


Sunday, July 8, 2012 #

At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible.

With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume.

Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far?

Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu.

What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance.

Windows 8 for productivity

The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive.

The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears.

Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop task bar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time which makes entire sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than previous versions of Windows.

Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.


Saturday, June 23, 2012 #

The term ‘cloud’ can sometimes obscure the obvious.  Today’s Microsoft Cloud Day conference in London provided a good example.  Scott Guthrie was halfway through what was an excellent keynote when he lost network connectivity.  This proved very disruptive to his presentation which centred on a series of demonstrations of the Azure platform in action.  Great efforts were made to find a solution, but no quick fix presented itself.  The venue’s IT facilities were dreadful – no WiFi, poor 3G reception (forget 4G…this is the UK) and, unbelievably, no-one on hand from the venue staff to help with infrastructure issues.  Eventually, after an unscheduled break, a solution was found and Scott managed to complete his demonstrations.  Further connectivity issues occurred during the day.

I can say that the cause was prosaic.  A member of the venue staff had interfered with a patch board and inadvertently disconnected Scott Guthrie’s machine from the network by pulling out a cable.

I need to state the obvious here.  If your PC is disconnected from the network it can’t communicate with other systems.  This could include a machine under someone’s desk, a mail server located down the hall, a server in the local data centre, an Internet search engine or even, heaven forbid, a role running on Azure.

Inadvertently disconnecting a PC from the network does not imply a fundamental problem with the cloud or any specific cloud platform.  Some of the tweeted comments I’ve seen today are analogous to suggesting that, if you accidentally unplug your microwave from the mains, this suggests some fundamental flaw with the electricity supply to your house.   This is poor reasoning, to say the least.

As far as the conference was concerned, the connectivity issue in the keynote, coupled with some later problems in a couple of presentations, served to exaggerate the perception of poor organisation.   Software problems encountered before the conference prevented the correct set-up of a smartphone app intended to convey agenda information to attendees.  Although some information was available via this app, the organisers decided to print out an agenda at the last moment.  Unfortunately, the agenda sheet did not convey enough information, and attendees were forced to approach conference staff through the day to clarify locations of the various presentations.

Despite these problems, the overwhelming feedback from conference attendees was very positive.  There was a real sense of excitement in the morning keynote.  For many, this was their first sight of new Azure features delivered in the ‘spring’ release.  The most common reaction I heard was amazement and appreciation that Azure’s new IaaS features deliver built-in template support for several flavours of Linux from day one.  This coupled with open source SDKs and several presentations on Azure’s support for Java, node.js, PHP, MongoDB and Hadoop served to communicate that the Azure platform is maturing quickly.  The new virtual network capabilities also surprised many attendees, and the much improved portal experience went down very well.

So, despite some very irritating and disruptive problems, the event served its purpose well, communicating the breadth and depth of the newly upgraded Azure platform.  I enjoyed the day very much.

 


Wednesday, March 28, 2012 #

For the last decade, I have repeatedly, in my inimitable Microsoft fanboy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way.  It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, the individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed.

Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism.

a)  A decade ago, Microsoft's big problem was not with FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under the GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source.

b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying.

c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so. 

One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think the company has no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of GitHub, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approximately 500 hits.  Admittedly, a large volume of the material they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools.

All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.  From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET Framework, which is released free of charge but which is not open source.  However, in recent years, the number of libraries that constitute ASP.NET has grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.  Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries.

Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge.

Here at SolidSoft we have several reasons to be very interested in Scott's announcement.  I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services.  We wrote the web site using ASP.NET MVC, which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious to the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.


Thursday, February 23, 2012 #

While coding a very simple orchestration in BizTalk Server 2010, I ran into the dreaded "cannot implicitly convert type 'System.Xml.XmlDocument' to '<message type>'" issue. I've seen this happen a few times over the years, and it has often mystified me.

My orchestration defines a message using a schema type. In a Message Assignment shape, I create the message as an XML Document and then assign the document to the message. I initially wrote the code to populate the XML Document with some dummy XML. At that stage, the orchestration compiled OK. Then I changed the code to populate the XML Document with the correct XML and...bang. I could no longer cast the XML Document to the message type.

I spent some time checking this through.  I reverted to the original code (with the dummy content), but the problem persisted.  I restarted Visual Studio (several times), deleted the existing ‘bin’ and ‘obj’ folders and re-built, and tried everything else I could think of.  No change.

It then occurred to me to think a little more carefully about exactly what I was doing at the point the code broke. My response message is very simple, and to create the XML content, I am therefore concatenating strings. To ensure I got the right XML, I used BizTalk to generate an example of the XML from the schema. The schema contains two root elements for the request and response messages. To generate the XML, I temporarily changed the 'Root Reference' property of the schema from 'default' to the element that represents the response message...

...and forgot to change the property back :-(

So, I changed the property back to 'default' and...

...success!

I experimented further and ascertained that if the 'Root Reference' property is set to anything other than 'default', the assignment code in my orchestration breaks. This is totally repeatable on the machine I am using. I spent some time looking at the code that BizTalk generates for schemas. When 'Root Reference' is set to 'default', BizTalk generates separate schema classes for each candidate root element, as well as a class for all root nodes. When set to a specific element, BizTalk outputs a single class, only. Apart from that, I couldn't see anything suspicious.

I can't find anything on the Internet about this, so would be interested if anyone else sees identical behaviour. The lesson, here, of course, is to avoid using schemas with multiple root elements. I have now refactored my schema into two new schemas.


Friday, December 16, 2011 #

It's always exciting when a new application you've worked on goes live. The last couple of weeks have seen the 'soft' launch of a new service offered by the UK government called 'Tell Us Once' (TUO). You can probably guess from the name what the service does. Currently, the service allows UK citizens to inform the government (as opposed to Register Officers, who must still be notified separately) just once of two types of 'change-of-circumstance' event; namely births and deaths. You can go, say, to your local authority contact centre, where an officer will work through a set of screens with you, collecting the information you wish to provide. Then, once the Submit button is clicked, that's it! With your consent, the correct data sets are parcelled up and distributed to wherever they need to go - central and local government departments, public sector agencies such as the DVLA, Identity and Passport Service, etc. No need to write 50 letters!

With my colleagues at SolidSoft, I'm really proud to have been involved with the team that designed and developed this new service.  Over the past few years, we worked first on the prototypes and pilots (there was more than one!).  Over the last eighteen months or so, we have been engaged in building the national system, and development work is on-going.  It's been a journey!  The idea is very simple, but as you can imagine, the realisation of that idea is rather more complex.  Look for future enhancements to today's service, with the ability to report events on-line from the comfort of your own home and the possible extension of the system to cover additional event types in future.

Interaction with government has just got a whole lot better for UK citizens, and we helped make that happen. It's a pity that I don't intend to have any more children (four is enough!), and I really hope I don't have to report a death in the near future, but if I do, I'll be beating a path to the door of my local council's contact centre in order to 'tell them once'.

See http://www.guardian.co.uk/government-computing-network/2011/dec/16/tell-us-once-matt-briggs?utm_source=twitterfeed&utm_medium=twitter

http://www.guardian.co.uk/public-leaders-network/2011/nov/10/tell-us-once-birth-death


Friday, December 9, 2011 #

Yesterday, Microsoft announced the forthcoming release of BizTalk Server 2010 R2 on the BizTalk Server blog site.  This is advance notice, given that this new version will ship six months after the release of Windows 8, expected in the second half of next year.  On this basis, we can expect the new version of BizTalk Server to arrive in 2013.  Given the BizTalk team’s previous record of name changes, I wonder if this will eventually be released as BizTalk Server 2013.

Microsoft has been refreshingly open in recent months about their future plans for BizTalk Server.  This strategy has not been without its dangers, with some commentators refusing to accept Microsoft’s statements at face value.  However, yesterday’s announcement is entirely in line with everything Microsoft has been saying, both publicly and privately, for some time now.  Since the release of BizTalk Server 2004, Microsoft has made little change to the core technology with, of course, the exception of a much re-vamped application packaging approach in BizTalk Server 2006.  Instead, Microsoft chose to put investment into a number of important ‘satellite’ technologies such as EDIFACT/X12/AS2 support, RFID Server, etc.  Maintaining the stability of the core platform has allowed BizTalk Server to emerge as a mature and trusted workhorse in the enterprise integration space with widely available skills in the marketplace.

In terms of its major investments, Microsoft’s focus has long shifted to the cloud.  Microsoft has candidly communicated that, given this focus, they have no current plans to add major new technologies to the BizTalk platform.  In addition, they absolutely have no intention of re-engineering the core BizTalk platform.  In my direct experience in recent months, this last point plays very well to prospective and existing enterprise customers.  It takes us straight to the heart of what most organisations want from an integration server: a ‘known quantity’ with a good track record for dependability, scalability and stability and a significant pool of available technical resource.

The announcement of BizTalk Server 2010 R2 illustrates and illuminates Microsoft’s stated future strategy for the product.  An important part of Microsoft’s platform for enterprise computing, it will continue to be enhanced and extended.  It will match future developments in the Windows platform and new versions of Visual Studio.  However, we should not expect to see any dramatic new developments in the world of BizTalk Server.  Instead, the BizTalk platform will continue to steadily mature further as the world’s best-selling integration server.

One of the big messages of yesterday’s announcement is that BizTalk Server will increasingly support its emerging role in building hybrid solutions that encompass systems and services that reside both on-premises and in the cloud.  At SolidSoft, we are increasingly focused on the design and implementation of cloud-based and hybrid integration solutions.  Integration is challenging, and Azure is a young, fast-evolving platform.  Microsoft has discussed at length their vision of Azure within a wider ‘hybrid’ context.  The availability of a tried and tested, mature, on-premises integration server is a vitally important enabler in building hybrid solutions.  Better than that, the announcement makes it clear that, as well as new support for the Azure service bus, BizTalk Server 2010 R2 licensing will be revised to open up new opportunities for hosting the server in the cloud.  This ties in with the push in Azure to embrace more fully the IaaS (infrastructure-as-a-service) model and, perhaps most importantly in the BizTalk space, to reduce or eliminate existing barriers between the on-premises and off-premises worlds.  BizTalk Server and Azure belong together.


Sunday, September 25, 2011 #

At last, I can announce that ‘BizTalk Server 2010 Unleashed’ has been published and is available through major booksellers in both printed and electronic form. The book is not a new edition of the old ‘BizTalk Server 2004 Unleashed’ book from several years ago, although Brian Loesgen, our fearless team leader, provided continuity with that title. Instead, this is entirely new content written by a team of six authors, including myself.
 
 
 
BizTalk Server is such a huge subject. It proved a challenge to decide on the content when we started our collaboration a couple of years back (yes, it really was that long ago!). We quickly decided that the book would principally target the BizTalk development community and that it would provide a solid and comprehensive introduction to the chief artefacts of BizTalk Server 2010 solutions – schemas, maps, orchestrations, pipelines and adapters. Much of this content was written by Jan Eliasen and forms part 1 (“The Basics”) of the book.
 
On the day my complimentary copies were delivered, I was working on the implementation of a pipeline component, and had an issue to do with exposing developer-friendly info in Visual Studio. I used this as a test-run of Jan’s content, and sure enough, discovered that he had clearly addressed the issue I had, including sample code. Jan’s contribution is succinct and to the point, but is also very comprehensive (he’s even documented things like creating custom pipeline templates!). I particularly appreciate the way he included plenty of guidance on testing individual artefacts.
 
My contribution to Part 1 is a chapter on adapters (the ‘adapter chapter’, as we fondly called it).  This explores each of the ‘native’ adapters and the family of WCF adapters.  There is also some content on the new SQL adapter, which is part of the BizTalk Adapter Pack.  In that respect, and in its coverage of the SharePoint adapter, it overlaps with ‘Microsoft BizTalk 2010 Line of Business Systems Integration’, which I reviewed recently.  However, ‘Microsoft BizTalk 2010 Line of Business Systems Integration’ provides a whole lot more information on a range of LoB adapters.  It is written in a different style to BizTalk Server 2010 Unleashed and is highly complementary.
 
Although the original plan was to include content on custom adapter creation, this didn’t, in the end, get covered in any depth. One reason for this is that, going forward, most custom adapter development for both BizTalk and Azure Integration Services (still some way off) is likely to be done using the WCF LoB Adapter SDK. That suggested that we would have had to document two distinct adapter frameworks in order to do the job properly, and this proved a little too much to tackle. Room there for another book, methinks.
 
Part 1 accounts for about half the content of the book.  Beyond this, we wanted to add value by covering more advanced topics, including the use of BizTalk Server alongside WCF and the emerging Azure platform, new features in BizTalk Server 2010 and topics that have been only partially covered elsewhere.  So, for example, Anush Kumar contributed an entire section (Part 4) on RFID, including the new RFID Mobile Framework.  Anush is well-known in the BizTalk community due to his involvement in the development of RFID Server.  Between Jon Flanders and Brian Loesgen, the book includes content on exploiting WCF extensibility in BizTalk, integrating via the Azure service bus (please note that this content was written before the advent of topics/subscriptions or Integration Services), the BAM framework and the ESB toolkit.
 
There is also a whole section (part 3) written by Scott Colestock that introduces the Administration Console and describes deployment approaches for BizTalk solutions.
 
Rules
That leaves one more subject for which I was responsible. One of the main reasons I was asked to contribute to the book was to document rules processing. Although there is some great content out there on the use of the BRE, I have long felt there is a need for a more comprehensive introduction. Due to some early confusion, I originally intended a total of seven short chapters on rules, but this content was refactored into two longer chapters. The first chapter introduces the Business Rules Framework. My idea was to emphasise the entire framework up front, rather than simply explore the rules composer and other tools. I also tried to explain the typical ‘feel’ of rules processing in the context of a BizTalk application, and the relationship between executable rules and higher-level business rules.
 
The second chapter investigates rule-based programming. It attempts broadly to achieve two related goals. The first is to explain rules programming to developers, to demystify the model, explain the techniques and provide insight into how to handle a number of common issues and pitfalls that rules developers face. The second is to provide a solid theoretical introduction to rules processing, including concepts that are not generally familiar to the average developer. I resisted the temptation, though, to provide an in-depth explanation of how the Rete Algorithm works, which I’m sure will be a relief :-) You can read the Wikipedia article on that.
 
Conclusions
So there you have it.  BizTalk Server 2010 is a mature enterprise-level product which, although it has a long future ahead of it, won’t change fundamentally over time.  Microsoft has publicly stated that their future major investments in EAI/EDI will be made in the Azure space, although new versions of BizTalk Server will continue to benefit from general improvement and greater integration with the evolving Azure platform.  So, hopefully, our content will serve for some time as a useful introduction to BizTalk Server, chiefly from a developer’s perspective.

Monday, September 19, 2011 #

One benefit of my recent experience on a BA flight was that I got plenty of time to read through “Microsoft BizTalk 2010 Line of Business Systems Integration”.  I’d promised the publisher weeks ago that I would take a look and publish some comments, but August has been such a busy month for me, and they have had to be patient.  I should point out, for the sake of transparency, that with another BizTalk book about to be released (next week) which I helped co-author, I have an urgent and obvious need to make good on this promise before I start to blog on other stuff.
 
BTS10LoBI is a really welcome addition to the corpus of BizTalk Server books and fills a conspicuous gap in the market.  BizTalk Server offers a wide-ranging library of adapters.  The ‘native’ (built-in) adapters understandably get a lot of attention, as do the WCF adapters, but other adapters, such as the LoB adapters and HIS adapters, are often overlooked.  I came to the book with the mistaken assumption that its chief focus was on the BizTalk Adapter Pack.  This is a pack of adapters built with the WCF-based LoB SDK.  In fact, the book follows a much broader path.  It is a book about LoB integration in a general sense, and not about one specific suite of adapters.  Indeed, it is not simply about adapters.  It focuses on integration with various LoB systems, and explains how adapters and other tools are used to achieve this.

This makes for a more interesting read.  For example, one, possibly unintended, consequence (given that it represents collaboration between five different authors) is that it illustrates very effectively the spectrum of approaches and techniques that end up being employed in real-world integration.  In some cases developers use adapters that offer focused support for metadata harvesting and other features, exploited through tools such as the ‘Consume Adapter Service’ UI.  In other cases, they use native adapters with hand-crafted schemas, or they create façade services.  The book covers additional scenarios where third-party LoB tools and cloud services (specifically SalesForce) are used in conjunction with BizTalk Server.  Coupled with lots of practical examples, the book serves to provide insight into the ‘feel’ of real-world integration which is so often a messy and multi-faceted experience.

The book does not cover the BizTalk Adapter Pack comprehensively.  There is no chapter on the Oracle adapters (not a significant issue because they are very similar to the SQL Server adapter) or the Siebel adapter.  On the other hand, it provides two chapters on the SAP adapter looking at both IDOC and RFC/BAPI approaches.  I particularly welcome the inclusion of chapters on integration with both Dynamics CRM 2011 and Dynamics AX 2009.  I learned a lot about Dynamics CRM which I haven’t had occasion personally to integrate with in its latest version.  The chapter on SalesForce mentions, but does not describe in any detail, the TwoConnect SalesForce adapter which we have used very effectively on previous projects.  Rather, it concentrates on direct HTTP/SOAP interaction with SalesForce.com and, very usefully, advocates the use of Azure AppFabric for secure exchange of data across the internet. 

The book provides two chapters on integration with SharePoint 2010.  The first explores the use of the native adapter to communicate with form and document libraries, and provides illustrated examples of working with InfoPath forms.  It would have been reasonable to stop there, but instead, the second chapter goes on to describe how to integrate more fully with SharePoint via its web service interface, and specifically how to interact with SharePoint lists.
 
Increasingly, the BizTalk community is waking up to the implications of Windows Azure and AppFabric.  This is an important step for developers to take.  Future versions of BizTalk Server will essentially join and extend the on-premise AppFabric world.  As Microsoft progressively melds their on/off-premise worlds, BizTalk developers will increasingly have to grapple with integration of cloud-based services, and integration of on-premise services via the cloud.  The book is careful to address this emerging field through the inclusion of a chapter on integration via the Azure AppFabric service bus.  As I mentioned above, this is applied specifically to SalesForce integration in a later chapter.  The AppFabric Service Bus is a rapidly evolving part of the Azure platform, and is set to introduce a raft of new features in the coming months which will greatly extend the possibilities.  Eventually we will see cloud-based integration services appear in this space.  So, the inclusion of this chapter points out the direction of major future evolution of Microsoft’s capabilities and offerings in the integration space.

The book is not shy about providing guidance on practical problems and potential areas of confusion that developers may encounter.  The content is clearly based on real-world experience and benefits from ‘war stories’.  The value of such content cannot be overstated, and it can save developers hours of pain and frustration when tackling new problems.  All in all, I thoroughly welcome this book.  My thanks to the authors, Kent Weare, Richard Seroter, Sergei Moukhnitski, Thiago Almeida and Carl Darski.


Sunday, September 18, 2011 #

I'm sitting in a nice new hotel in Redmond - the Hotel Sierra is well worth considering if you are staying in the area.  I'm sleep-deprived and jet-lagged, and it's raining hard outside, but hey, I just got to play with one of the Samsung tablets they handed out at Build, and was not disappointed.  Microsoft is doing something truly remarkable with Win8 Metro.
 
On the other hand, I am deeply disappointed with the UK flag carrier, British Airways. Indeed, I've lost patience with them big-time. So forgive me for getting this off my chest. I am very much in the mood to do as much reputational damage to them as I can.
 
When I checked in on-line, they had booked me into one seat, but I could see another with more legroom (a front row).  Because of repeated experience over the last few years with defective headsets (I always carry my own earphones these days after one flight where we went through three different headsets before finding one in which one of the earphones actually worked) and bad headset connections (having to constantly twiddle the jack to try to hear anything), I spent a little while consciously debating with myself the intangible risks of changing my seat – i.e., I could easily be swapping a ‘working’ seat for a broken one.  Of course, there was no way to know, so I opted for the seat with more legroom.
 
MISTAKE! Forget about dodgy headsets. Nothing worked. Not even the reading light! Certainly not the inflight entertainment. They failed to show me the safety video (the steward did panic a little when he realised they had failed to comply with their legal obligations). So I sat for 9.5 hours in a grubby, worn-out cabin with nothing!
 
To be fair, they did offer to try to find me another seat (the plane was very full), but I opted for the legroom because I wanted to try to get some sleep.  So I could probably have got in-flight entertainment.  The point is, though, that this is now more than just an unfortunate couple of coincidences over the last two or three years.  I am reasonably fair-minded and understand that sometimes, with the best will in the world, things just go wrong.  In any case, I was brought up to put up or shut up (as my mother would say - it's part of the culture).  However, I am forced to conclude that this is now a repeated trend that I experience regularly, to the point where I am consciously suspicious of the seats they give me, and clearly with good reason.  BA simply fails to maintain its cabins to anything like a reasonable or acceptable standard (I must trust they do a better job of maintaining the engines).  I used to feel some patriotic pride in BA.  Not now.  It’s so sad to see the British flag carrier consistently deliver such an embarrassingly poor and second-rate service.  I will be asking SolidSoft in future to book me onto a different carrier where possible, and will do what I can to convince the company to use other carriers by default.
 
Personally, I think the UK government should give flag carrier status to someone else (Virgin, I guess).
 
 
 

Thursday, September 15, 2011 #

I've just installed the Windows 8 Developer Preview.  These are some first impressions:

Installation of the preview was quite smooth and didn't take too long.  It took a few minutes to extract the files onto a virtual image, but feature installation then seemed to happen almost instantaneously (according to the feedback on the screen).  The installation routine then went into a preparation cycle that took two or three minutes.  Then the virtual machine rebooted and after a couple of minutes more preparation, up came the licence terms page. 

Having agreed to the licence, I was immediately taken into a racing-green slidy-slidy set of screens that asked me to personalize the installation, including entering my email address.  I entered my work alias.  I was then asked for another alias and password for access to Windows Live services and other stuff.  There was a useful link for signing up for a Windows Live ID.  I duly entered the information.  Only on the next screen did I spot an option to not sign in with a Live ID.  I didn't try this, but I felt a bit peeved that the use of a Live ID had appeared mandatory until that point.  I suspect the idea is to try to entice users to get a Live ID, even if they don't really want one.

A couple more minutes of waiting, et voilà.  The Metro Start screen appeared, covered in an array of tiles.  Simultaneously I got an email (on my work alias) saying that a trusted PC had been added to my Live account.  I clicked the confirmation link, signed into Windows Live and checked that my PC had indeed been confirmed. Then Alan started chatting, but that is a different matter.

Of course, Oracle's Virtual Box (and my Dell notebook) haven't quite mastered the art of touch yet.  For non-touch users a scroll bar appears at the bottom of the Metro UI. I had a moment's innocent fun pretending to swipe the screen with my finger while actually scrolling with the mouse.  Ah, happy days.  Then I discovered that the scroll wheel on my mouse does the equivalent of finger swiping on the Start page.

I opened up IE10.  Wow!  I thought IE9's minimal chrome story was amazing.  IE10 shows how far short IE9 falls.  There is no chrome.  Nothing.  Nada.  Oh sure, there is an address box and some buttons.  They appear when needed (a right mouse click without touch) and disappear again as quickly as possible.  It’s the same with tabs, which have morphed, in the Metro UI, into a strip of thumbnails that appear on demand and then get out of the way once you have made your selection.  Click on a new tab and you can navigate to a new page or select a page from a list of recents/favourites.  You can also pin sites to 'Start', which in this case means that they appear as additional tiles on the Start screen.  I played for a minute and then I suddenly experienced the same rush of endorphins that hit me the first time I opened Google Chrome a few years back.  Yes, sad to say, I fell in love with a browser!  A near invisible browser.  A browser that is IE for goodness sake!  A browser that does what so many wished IE would do years ago.  It gets out of your way.

Do you like traditional tabs?  That's not a problem, because the good-ole desktop is just a click (or maybe a tap or a swipe) away.  There is even a useful widget on the now-you-see-me/now-you-don't address bar that takes you to desktop view.  It is a bit of a one-way trip, and results in a new IE frame opening on the desktop for the current page.  On the desktop, IE10 looks just like IE9.  It is, however, significantly more accomplished, and has closed much of the gap between IE9 and the full HTML5 spec, together with some of the additional specifications that people incorrectly term 'HTML 5'.  Microsoft has more than doubled its score on the (slightly idiosyncratic) HTML5 Test site (http://html5test.com/) and, by that measure, now just pips Opera 11.51, Safari 5.1 and Firefox 6 to the post for HTML5 compliance (it beats Firefox by just 2 points, although it is 1 point behind if you take bonus points into consideration), although it still falls behind Google Chrome 13.

Pinning caused me some issues which I suspect are simply bugs in the preview.  Having pinned a site, every time I went into the Metro version of IE10, I found that I couldn't click on links, hide the address bar, view tabs, etc.  I eventually had to kill my IE10 processes to get things working properly again.  I noticed that desktop and Metro IE10 processes appear with slightly different icons in the radically redesigned task manager.

One slight mystery here is that the beta of 64-bit Flash worked fine in Desktop view but not in Metro.  No doubt this will long since have become a matter of history by the time all this stuff ships.

For a few minutes, I was rather confused about the apparent lack of a proper Start menu in the desktop view.  If you click on Start, you go back to the Metro Start page.  And then the obvious dawned on me.  In effect, the new Metro Start screen is simply an elaboration of the old Start menu.  In previous versions, when you click Start, the menu pops up on top of the desktop.  It is quite rich in those versions, and allows you to start applications, perform searches for applications and files or undertake various management and administrative tasks.  Windows 8 is really not very different.  However, the Start menu has now morphed into the new Metro Start page which takes up the whole screen.  Instead of a list of pinned and recent applications, the Start screen displays tiles.  Move the mouse down to the bottom right corner (I don't know what the equivalent touch gesture is), and up pops a mini Start menu.  Clicking 'Start' takes you back to the desktop.  Click on 'Search' to search for applications, files or settings.  The settings feature is really powerful.  In fact, in Windows 7, searching for likely terms like 'Display' or 'Network' also returns results for settings, but you get far more hits in Windows 8.  The effect is rather like 'God Mode' in Windows 7.  [update: no, I'm wrong.  Windows 7 gives you a similar number of hits, BUT you need to click the relevant section in the search results to see them all.  I've clearly not been using Search effectively to date!]

The mini Start menu is available in the desktop as well.  In this case, if you click 'Search', the search panel opens up on the right of the screen and results then open up to take over the rest of the screen. As I experimented, I found that while things were fairly intuitive, the preview does not always work in a totally predictable fashion.  I also suspect that the experience is currently better for touch screens than for traditional mice (I note Microsoft is busy re-inventing the mouse for a Windows 8 world - see http://www.microsoft.com/hardware/en-us/products/touch-mouse/microsite/).  This is hardly surprising given that Windows 8 is clearly in an early state and is unfinished.  I suspect the emphasis to date has been on touch, and not on mouse-driven equivalents.

Once I grasped the essential nature of the Metro Start page and its correspondence to the Start menu in earlier versions of Windows, I began to feel far more comfortable about the changes.  Sure, all the marketing hype is about the radical new UI design features.  However, this really is just the next stage of the evolution of the familiar Windows UI.  Metro is absolutely fabulous as a tablet UI (better than iOS/Android IMHO, which after all, are really just the old 'icons on a desktop' approach with added gestures), and I think it will actually be quite good for desktops, once it is complete.  I note, though, that people have already discovered the registry hack to switch Metro off (see http://www.mstechpages.com/2011/09/14/disable-metro-in-windows-8-developer-preview/), and I think MS would be wise to offer this as a proper setting in the release version.  I anticipate, though, that I will not be switching Metro off, even on a non-touch desktop.

Shutting down presented a little difficulty.  I am used to using the Start menu to do this (the classic 'Start' to stop conundrum in Windows).    I couldn't find a 'Shut Down' command on the Start screen.  I eventually did Ctrl-Alt-Delete (or rather, Home-Del in Oracle Virtual Box) and then found a Shut Down option at the bottom left of the screen.

Booting the VBox image takes 20 seconds on my machine.  20 seconds!  I'll say that again.  20 seconds!!!!  Yes, 20 seconds, just about exactly.  That's on a virtual machine on my notebook.  On the host, it would be significantly faster.  This is Windows like we have never known it before.  Frankly, it is the ability to boot fast and run Windows happily on ARM devices (I'll have to take that on trust as I haven't yet seen it for real) that represents the really important change.  Almost more important than the Metro UI.  The nay-sayers and trolls say it can't be done.  I think Microsoft has done it, though.

My last foray into Windows 8 this evening was to launch Visual Studio 2011 Express and have a quick peek at the templates for Win8 development.  I have a lot to explore.

They say first impressions are the most important.  When I saw the on-line video of Windows 8 a couple of months back, I almost fell off my chair in surprise.  Now I have got my hands on an early version, I am really quite impressed.  Like everyone else, I couldn't see how Microsoft could possibly compete against Apple and Google in the tablet space.  Now...well...I look forward to seeing if and how Apple and Google will respond.  If it is true, as Steve Ballmer states, that Microsoft had 500 thousand downloads of the preview in less than 24 hours, then tectonic plates have already shifted and Microsoft is firmly on track to become a major contender in the tablet space.  OK, that's only one in every 14,000 people on the face of planet earth, and yes, the release version of Lion had double that number of hits in the first 24 hours.  Nevertheless, it is a huge figure for an early technical preview of an operating system that won't ship for another year.  It means people are very, very keen to start developing for Metro (I know we are at SolidSoft).  And if Windows 8 succeeds on tablets, what will that mean for Windows Phone, which also uses the Metro concept?  Don't ever, ever underestimate Redmond.


Wednesday, September 14, 2011 #

Following the previous post, here is a second bit of wisdom.  In the Load method of a custom pipeline component, only assign values retrieved from the property bag to your custom properties if the retrieved value is not null.  Do not assign any value to a custom property if the retrieved value is null.

This is important because of the way in which pipeline property values are loaded at run time.  If you assign one or more property values via the Admin Console (e.g., on a pipeline in a Receive Location), BizTalk will call the Load method twice - once to load the values assigned in the pipeline editor at design time and a second time to overlay these values with values captured via the admin console.  Let's say you assign a value to custom property A at design time, but not to custom property B.  After deploying your application, the admin console will display property A's value in the Configure Pipeline dialog box.  Note that it will be displayed in normal text.  If you enter a value for property B, it will be displayed in bold text.  Here is the important bit.  At runtime, during the second invocation of the Load method, BizTalk will only retrieve bold text values (values entered directly in the admin console).  Other values will not be retrieved.  Instead, the property bag returns null values.  Hence, if your Load method responds to a null by assigning some other value to the property (e.g., an empty string), you will override the correct value and bad things will happen.

The following code is bad:

    object retrievedPropertyVal;
    propertyBag.Read("MyProperty", out retrievedPropertyVal, 0);

    if (retrievedPropertyVal != null)
    {
        myProperty = (string)retrievedPropertyVal;
    }
    else
    {
        // BAD: a null just means the admin console holds no override for this
        // property, so this assignment wipes out the design-time value.
        myProperty = string.Empty;
    }

Remove the 'else' block to comply with the inner logic of BizTalk's approach.


Here is a small snippet of BizTalk Server wisdom which I will post for posterity.  Say you are creating a custom pipeline component with custom properties.  You create private fields and public properties and write all the code to load and save the corresponding property bag values from and to your properties.  At some point, when you deploy the BizTalk application and test it, you get an exception from within your pipeline stating, unhelpfully, that "Value does not fall within the expected range."  Or maybe, while using the Visual Studio IDE, you notice that values you type into custom properties in the Property List are lost when you reload the pipeline editor.

What is going on?   Well, the issue is probably due to having failed to initialise your custom property fields.  If they are reference types and have a null value, the PipelineOM PropertyBag class will throw an exception when reading property values.  The Read method can distinguish between nulls and, say, empty strings, due to the way data is serialised to XML (e.g., in the BTP file).   Here is a property initialised to an empty string:

            <Property Name="MyProperty">
              <Value xsi:type="xsd:string" />
            </Property>

Here is the same property set to null:

            <Property Name="MyProperty" />

The first is OK.  The second causes an error and leads to the symptoms described above.

ALWAYS initialise property backing fields in custom pipeline components.  NEVER set properties to null programmatically.


Monday, August 22, 2011 #

In my previous post I mentioned the free AI course being run by Peter Norvig and Sebastian Thrun (122,314 sign-ups and rising) in conjunction with the Stanford University School of Engineering.  Professor Andrew Ng is running a related course on Machine Learning.  This is also a free on-line course run along the same lines as the AI course.  Over 30,000 people have signed up so far.
 
I mention this because Andrew has just confirmed that he will be speaking at this year’s Rules Fest. Rules Fest is all about the practical application by developers of reasoning technologies to real-world problems. It brings together people from across the whole spectrum of public and private sector organisations, including commercial and research organisations and academia, to inspire, inform and enlighten developers and architects. Machine learning is central to the rapidly evolving world of intelligent systems, and we are very excited that Andrew will be speaking at the event.

Saturday, August 20, 2011 #

Peter Norvig and Sebastian Thrun are offering a free on-line course on AI later this year in conjunction with Stanford University.  The course is broadly based on Peter Norvig's book "Artificial Intelligence: A Modern Approach", written jointly with Stuart Russell.  My colleagues on the Rules Fest committee and I have been following this with interest.  In a few days, well over 100,000 people have signed up (112,774 at the time of writing, and still increasing fast).  The course broadly overlaps with our natural areas of interest at Rules Fest, which is all about the practical application of reasoning technologies in real-world computing.  It is very encouraging to us to see the huge interest this course is generating.  We will doubtless be contacting Peter, yet again, to see if he will speak at next year's conference (we keep plugging away at this).
 
In another development, we all woke up to the news a couple of days ago that HP, as part of its dramatic change in strategy, has bid almost $11Bn to acquire the enterprise search company, Autonomy.  Autonomy offers proprietary technology that exploits Bayes' theorem, Shannon's information theory and specific forms of SVD to create an intelligent search platform with learning capabilities.  Clearly, HP sees this type of technology as playing a major and lucrative role in their future.
 
Some time ago, at an event organised by the excellent BizTalk Users' Group in Sweden, I was asked to do a little crystal ball gazing.  I trotted out the line that the next few years will see AI-related and reasoning technologies, formerly thought of as esoteric and impractical, find their place at the heart of enterprise computing alongside existing investments in traditional LoB/Back Office applications and integration services.  With the advent of cloud computing and platforms such as Azure, we have the horsepower available to make this a practical and feasible possibility for mainstream enterprise computing.  AI used to be a dirty word.  No longer!

Tuesday, June 21, 2011 #

Microsoft has announced availability of the June CTP for Windows Azure AppFabric. See http://blogs.msdn.com/b/appfabric/archive/2011/06/20/announcing-the-windows-azure-appfabric-june-ctp.aspx. This is an exciting release and provides greater insight into where the AppFabric team is heading in terms of developer and management tooling. Microsoft is offering space in the cloud to experiment with the CTP, but this is limited, so register early to get a namespace!
You can download the SDK for the June CTP. However, we ran into a lot of trouble trying to do this today. Whenever we followed the link, we ended up on the page for the May CTP. We found what appeared to be a workaround which we were able to repeat on another box (and which I reported on Connect), but then a few minutes later I couldn't repeat it. Just now, the given link appears to be working every time in IE, but not in Firefox!   Frankly, the behaviour seems random!   It looks like the same URL points to two different pages, and I suspect that which page you end up on is hit and miss.
The link to the download page is http://www.microsoft.com/download/en/details.aspx?id=17691. If you end up on the wrong page, try again later and you may get to the right place. Or try googling "Windows Azure AppFabric SDK CTP – June Update" and following a link to this page. For some reason, that sometimes seems to work.
Good luck!

Thursday, June 2, 2011 #

I spent some time today summarising the new features in the Windows Azure AppFabric May CTP for SolidSoft consultants. Microsoft released the CTP a couple of weeks ago and has a second CTP coming out later this month.  I might as well publish this here, although it has been widely blogged on already.  There is nothing that you can’t glean from reading the release documents, but hopefully it will serve as a shorter summary.

The May CTP is all about the AppFabric Service Bus.  The bus has been extended to support ‘Messaging’ using ‘Queues’ and ‘Topics’.

‘Queues’ are really the Durable Message Buffers previewed in earlier CTPs.  MS has renamed them in this CTP.  They are not to be confused with Queues in Windows Azure storage!  Think of these as ‘service bus queues’.  They support arbitrary content types, rich message properties, correlation and message grouping. They do not expire (unlike in-memory message buffers).  They allow user-defined TTLs.  Queues are backed by SQL Azure.  Messages can be up to 256KB and each buffer has a maximum size of 100 MB (this will be increased to at least 1GB in the release version).  To handle messages larger than 256KB, you ‘chunk’ them within a session (rather like BTS large message handling for MSMQ).  The CTP currently limits you to 10 queues per service namespace. 

Service Bus queues are quite similar to Azure Queues.  They support a RESTful API and a .NET API with a slightly different set of verbs – Send (rather than Put), Read and Delete (rather than Get), Peek-Lock (rather than ‘Peek’) and two verbs to act on locked messages – Unlock and Delete.  The locking feature is all about implementing reliable messaging patterns while avoiding the use of 2-phase-commit (no DTC!).  Queue management is very similar, but configuration is done slightly differently.  AppFabric provides dead letter queues and message deferral.  The deferral feature is a built-in temporary message store that allows you to resolve out-of-order message sequences.  Hey, this stuff is actually beginning to get my attention!

Today’s in-memory message buffers will be retained for the time being.  MS is looking at how much advantage they provide as low-latency non-resilient queues before making a decision on their long-term future.  This is beginning to sound like the BizTalk Server low-latency debate all over again!  Currently, the documented recommendation is that we migrate to queues.

‘Topics’ provide new pub/sub capabilities.  A topic is…drum roll please…a queue!  The main difference is that it supports subscription.  I assume it has the same limitations and capabilities as a normal queue, although I haven’t seen this stated.  It is certainly built on the same foundation.  You can have up to 2,000 subscriptions to any one topic and use them to fan messages out.  Subscriptions are defined as simple rules that are evaluated against user- and system-defined properties of each message.  They have a separate identity from topics.  A single subscription can feed messages to a single consumer or can be shared between multiple consumers.  Unlike Send Port Groups in BizTalk, this multi-consumer model supports an ‘anycast’ model for competing consumers, where a single consumer gets a message on a first-come-first-served basis.  MS invites us to think of a subscription as a ‘virtual queue’ on top of the actual topic queue.  Potential uses for anycasting include basic forms of load balancing and improved resilience.

The CTP supports AppFabric Access Control v2.0.  It is fully backward-compatible with the current service bus capabilities in AppFabric.

The CTP does not have load balancing and traffic optimisation for relay.  These were in earlier CTPs, but have been removed for the time being.  They may reappear in the future.

June CTP

The June CTP will introduce CAS (Composite Application Services).  CAS is a term used by other vendors (e.g., SAP) for similar features, and has been a long time coming in the Microsoft world.  The basic idea is that you build a model of a composite application, the services it contains, its configuration, etc., and then drive a number of tasks from this model such as build and deployment, runtime management and monitoring.  Some of us remember an ancient Channel 9 video on a BizTalk-specific CAS-like modelling facility that MS were working on years ago.  It was entirely BizTalk-specific and never saw the light of day.  However, one connection to make is that CAS will provide capabilities that are conceptually related to the notion of  ‘applications’ in BizTalk Server.  

We will get a graphical Visual Studio modelling tool to design and manage CAS models.  The CAS metamodel is implemented as a .NET library, allowing models to be constructed programmatically.  Models are consumed by the AppFabric Application Manager in order to automate deployment, configuration, management and monitoring of composite applications.

So, things are rapidly evolving.  However, we won’t see anything on Integration Services until, I suspect, next year.  It’s important to remember that the May CTP is all about broadening the existing Service Bus with messaging capabilities, rather than about delivering an integration capability.  So, even though we are seeing more BizTalk Server-like features, we are still a long way off having what Burley Kawasaki called a “true integration service” in the cloud.   Obviously, Azure Integration Services will exploit and build on the Service Bus, but a lot more needs to be done before we have integration-as-a-service as part of the Azure story.