Warning: At the time of writing, I assumed that 'Oslo' was intended as a general purpose suite of tools and technologies for all aspects of modelling and DSL creation on the Microsoft platform. Many months later, it now seems that this is not the case. 'Oslo' eventually found 'a home' as part of SQL Server, and became 'SQL Server Modelling'. According to Doug Purdy at PDC 2009, both 'M' (which incorporates MSchema and MGrammar) and Quadrant will ship with a future release of SQL Server. Doug's presentation addressed only SQL Server database design, implementation and access. He appeared to speak of 'Oslo' technologies primarily as a way of promoting Microsoft's database product. In my opinion, this much-reduced role is to be regretted, and the following article should be read in the knowledge that, although the technologies remain the same, the scope of their expected use appears to have been significantly reduced.
Over the last few weeks, I’ve had the opportunity to look in some depth at Microsoft’s emerging ‘Oslo’ modelling technologies. At this very early stage, ‘Oslo’ consists of ‘pre-alpha’ and CTP implementations of a technology and tooling stack that includes an initial implementation of a SQL Server-based model repository, the ‘MSchema’ and ‘MGrammar’ languages and the ‘Quadrant’ model visualisation tool. There is still a great deal of work to be done. The technologies and tools will evolve significantly over the next year or so, and Microsoft has yet to decide or announce the full scope of the first release. Nevertheless, the major themes of this emerging technology are in place and have been made public.
One of the best ways to understand the concepts that lie behind a new technology is to compare and contrast it with similar technologies provided by other communities. ‘Oslo’ addresses the modelling domain, which has enjoyed a great deal of attention over several years. This article is an attempt to articulate something of the nature of Oslo by relating it to the wider world of modelling, especially as envisaged by the OMG (Object Management Group). The OMG are responsible for a range of modelling specifications, including the Unified Modelling Language (UML), a number of additional metamodels and the Meta Object Facility (MOF).
What is a model?
Before I discuss Oslo in respect to OMG specifications, I need to lay a little groundwork. It is all too easy to use common terms like ‘model’ without specifying what we mean. One common approach to defining the term comes from the General Model Theory (GMT) of Herbert Stachowiak. Stachowiak states that "[all] cognition is cognition in models and by models", implying that humans cannot understand reality except through the use of models. He identifies three characteristics of a model as follows:
· Representation (Mapping): A model represents some ‘original’ thing. Attributes of the original are mapped onto attributes of the model. In GMT, the original may, itself, be a model.
· Simplification (Reduction): A model only maps attributes of interest, and ignores other attributes of the original. In this way, the model removes complexity and provides an abstraction of the original.
· Pragmatism: A model has a practical use within some given context. It doesn’t exist for its own sake, or have any meaning solely in reference to the original. Rather, its meaning is bound up with representing the original when used for some practical purpose.
The original ‘thing’ represented by a model may have existed in the past, may currently exist, or may be something that does not yet exist but could exist in the future. Models represent ‘originals’ regardless of their location in time.
Consider a city street plan. In this case, the city is the ‘original’. The street plan is a model that represents the city. The various features of the plan map directly onto attributes of the city. However, the plan does not try to capture every attribute. Instead, it provides a simplified representation by only capturing some of the city’s attributes – its roads, railways, public buildings, etc. It ignores attributes such as telephone cables, sewers, etc. The street plan is created for a pragmatic purpose – to allow humans to navigate their way through the streets to some destination.
What is a metamodel?
A metamodel provides a ‘language’ used to describe the structure and semantics of a model. It is, itself, a model with the specific purpose of describing how a model is structured. It does not map directly onto the attributes of the original, but onto attributes of the model of the original.
Extending the street plan example, consider the legend printed on the inside cover of the plan. This defines the graphical ‘language’ in which the map is written. Items within the legend map onto attributes of the plan. For example, the legend may define the graphical representation of roads, parks, buildings etc. These definitions map onto various attributes of the plan and allow us to interpret the plan correctly. Without this metamodel, we might not be able to properly understand the plan or the semantics of the symbols it uses.
What on earth is a meta-metamodel?
In the world of modelling, it is sometimes useful to add another layer of meta-modelling. Meta-metamodels are models of metamodels. They provide a language used to define the structure and semantics of metamodels.
The most common purpose of a meta-metamodel is to provide a unified context for handling many different metamodels and their associated models. Consider a company that publishes many different types of plan. Their plans may be drawn to different scales, may be published in different languages and may target a variety of specific needs - e.g., specialised maps for cyclists, maps showing administrative areas, tourist maps, integrated public transport systems and so on. The company may well use cartographic software that captures all relevant information within a common database. Different types of plan for the same area may be extracted from this common set of data. In order to enable this, the cartographic software would probably define some internal meta-metamodel that describes a foundational set of concepts. Attributes of this common model will map onto the symbols used within specific plan types. For example, the meta-metamodel may define the foundational concepts of roads, buildings, rivers, geographical co-ordinates, etc. Different plan types may represent these concepts in different ways with a variety of specialisations - e.g., distinguishing hospitals from other types of building. The meta-metamodel may define some extensible generic approach to capturing specialisation information in a way that can be searched, queried and processed.
The purpose of the meta-metamodel will be to allow cartographers to store and manipulate all the relevant map data in a unified and efficient manner, eliminating ambiguity and duplication while supporting evolving requirements. Building on this basis, cartographers will be able to ensure consistency between different plan types built on a common ontological foundation of base concepts and semantics and a common set of data. It would be particularly valuable if the meta-metamodel represented some well-defined standard. In this case, we would have a solid basis for allowing interoperability and metadata exchange between different systems. Models captured in other systems could be reliably imported and transformed on the basis of standardised specifications at the meta-metamodel level.
OMG Four-Level Metamodel Architecture
One of the core concepts that underpins the OMG’s work on modelling is the ‘classic’ four-level metamodel architecture. This is described as follows:
Layer | Name | Description
M3 | Meta-Metamodel | Core languages for describing the structure and semantics of metamodels
M2 | Metamodel | Languages that describe the structure and semantics of user models
M1 | User model | Models that describe system data, behaviours and information
M0 | User object | System data, behaviours and information
Some descriptions of the four-level metamodel architecture make a distinction between M0 and the real-world ‘things’ that the system data and behaviours represent. However, this distinction is not well-defined and only has specific meaning within a given context. For example, if an instance of a programmatic class is created at runtime, does this object exist at M0 or beyond? Should we consider real-world ‘things’ to be the objects we instantiate at run-time, or the entities in the real world that these objects represent? There is no one definitive answer to such questions.
UML, metamodels, MOF and MDA
The OMG is perhaps best known for managing the UML specification, and relates UML and other specifications directly to the four-level metamodel architecture. UML defines a number of diagram types such as use case, collaboration, sequence and class diagrams. Each of these can be considered a type of graphical model, and individual diagrams therefore reside at the M1 level. System data and behaviour that conforms to these models resides at the M0 level. For example, consider a class diagram that defines a set of classes and associations for some application. Developers may implement programmatic classes that conform to these models (they transform the class diagram into a different type of model). The objects that are instantiated at runtime reside at M0, while the classes reside at M1.
The OMG defines UML in terms of a formal metamodel which resides at M2. This is just one of a family of metamodels for which the OMG have responsibility. Others include the Common Warehouse Metamodel (CWM), the Ontology Definition Metamodel (ODM) and the Software Process Engineering Metamodel (SPEM). The OMG’s specification states that the UML metamodel provides “...the abstract syntax of the UML. The abstract syntax defines the set of UML modeling concepts, their attributes and their relationships, as well as the rules for combining these concepts to construct partial or complete UML models.” It is a fairly complex specification that underpins all the various UML diagram types.
Each of the OMG metamodels, including the UML metamodel, is compliant with a metamodelling language called the Meta Object Facility (MOF). MOF is typically described as residing at M3 because it is used as a meta-metamodel that underpins the various OMG metamodels and any other MOF-compliant metamodels. However, some care needs to be taken here in understanding the true nature and role of the MOF specification. MOF defines a metamodelling infrastructure that can be used to support systems that exhibit as few as two layers (e.g., M0 and M1), or an arbitrarily large number of layers. This is stated explicitly in section 7.2 of the MOF 2 specification, which is titled “How Many ‘Meta Layers’?”. The specification points out that MOF has always had the ability to support traversal of an arbitrary number of metamodel layers. Hence, it is not intrinsically an M3 meta-metamodel, even though the OMG use it in this role. Rather, we can think of MOF as a metamodelling language and infrastructure, based on core UML 2.0 packages, which happens to be most widely used at the M3 level.
MOF was originally strongly associated with CORBA, and the OMG provides a specification for mapping MOF to CORBA IDL. Standardised CORBA interfaces can be created for the purpose of reflection and other interactions with metadata. In addition, the Java Metadata Interface (JMI) specification provides MOF to Java mappings (currently only for version 1.4 of MOF). MOF is also used to define the XML Metadata Interchange (XMI) standard for metadata exchange.
MOF, UML, XMI and other OMG specifications underpin the OMG’s Model-Driven Architecture (MDA) standard. MDA (an OMG trademark) specifies a platform-independent, model-based approach to system and software design, and also provides an approach to platform-dependent, model-driven software generation. The general concept involves creating PIMs (Platform-Independent Models) which can then be translated into PSMs (Platform-Specific Models) as required. PSMs can be used to drive code generation and other model-driven development approaches on a target platform.
A number of vendors provide MOF repository technology. Broadly speaking, MOF repositories are structured to handle MOF-compliant metamodels and models defined using those metamodels. MOF is used at the M3 level to support interoperability across a range of model types and modelling tools. JMI is typically supported as a common API for reflecting on model metadata and interacting with models. XMI is used to exchange metadata between tools and repositories.
MOF repositories are typically used by companies and organisations that have the motivation and resource to manage the centralisation of many different models and model types. For example, a larger organisation will probably invest heavily in the formal definition and management of its enterprise architecture. This architecture may be represented from several different viewpoints using a number of different modelling tools, technologies and techniques. Having established a detailed enterprise architecture, the organisation may wish to enforce architectural policy and track compliance across multiple projects and development, infrastructure, data management and operational teams. Again, this will typically require integration of many different model types with a wide range of tools and technologies. A MOF repository exploits the concept of meta-metamodelling to enable this level of integration and interoperability, and provides a core piece of the enterprise infrastructure that enables the use of Model-Driven Architecture across the software development and application lifecycles.
I think it is fair to suggest that MOF-based model repository technologies have yet to generate a significant and widespread following in the wider ecosystem of software development environments. While researching MOF, I was struck by the number of times I read statements to this effect in articles generated by the MDA community. There appears to be something of a consensus that the OMG metamodelling specifications have relevance today mainly in environments that are large and complex enough to warrant the heavy investment and costs involved in adopting a full-blooded repository-based MDA approach. There is, however, clear evidence of a strong desire to see support for modelling and model-driven approaches become far more mainstream within software development. For example, as I have been reminded, Eclipse makes extensive use of EMF (the Eclipse Modeling Framework). The Ecore specification provides a metamodel specification for EMF, and is very similar to MOF.
A Lesson from History: The Microsoft Repository
So much for the OMG. We could easily forget that Microsoft has its own history of involvement in UML-centric metamodelling. Back in 1997, Microsoft announced the Microsoft Repository for SQL Server 7. This technology was renamed 'Microsoft Meta Data Services' when SQL Server 2000 shipped. The repository implemented a Microsoft specification called the 'Open Information Model' (OIM), which was submitted to the Metadata Coalition (MDC) for standardisation. The MDC took ownership of OIM, removed dependencies on the COM API and ratified it in 1999 as "a non-proprietary and technology-neutral, and extensible specification". OIM defined a collection of metamodels (‘information models’), including a UML 1.0 metamodel, aimed principally at data warehousing and component application development. OIM therefore addressed the M2 level of the four-level architecture, but also included an M3 meta-metamodel specification called the 'Repository Type Information Model' (RTIM). RTIM, however, was closely aligned to Microsoft's COM (Component Object Model) in much the same way that MOF was originally aligned to CORBA, and was never adopted as part of the open standard by the MDC.
Building on the data warehouse metamodel support in OIM, the MDC worked with the OMG to align the specification with the OMG's Common Warehouse Metamodel (CWM), and the MDC subsequently merged with the OMG in 2000. Since then, OIM has disappeared from view, although the CWM documentation still contains a section discussing the specification. If OIM had survived, it is likely that MOF would have been used in place of the old RTIM specification. CWM is, of course, built on the foundation of MOF, so in a sense this did happen indirectly through the alignment of OIM data warehousing information models with CWM.
Unlike MOF, RTIM was not defined using UML. However, in other respects, OIM was heavily orientated towards UML, in much the same way as the various relevant OMG specifications. In the late 1990s, Microsoft made a significant effort to support UML-based model-driven development on their platform. In 1998 they released the Visual Modeler with Visual Basic 5 and Visual Studio 6 (C++). This was a lightweight version of Rational Rose integrated into the Visual Basic and Visual Studio IDEs (they were still separate at that stage), and Microsoft worked closely with Rational on this technology. Visual Modeler supported an approximation to round-trip engineering, and was integrated with the Microsoft Repository. In addition, Microsoft built support for UML into Visio 2000, and again integrated this with their model repository technology. However, Visio support for the Repository was dropped in Visio 2002. The Microsoft Meta Data Services SDK was built with dependencies on Rational Rose, and there was some speculation about the possibility of Microsoft purchasing Rational. Ultimately, however, Rational was acquired by IBM in 2003. Microsoft subsequently dropped Microsoft Meta Data Services from SQL Server 2005.
In the Visual Studio world, then, we had integrated modelling and model repository tools with strong UML support a decade ago. OIM's RTIM specification played a similar role to MOF at the M3 level, and the Microsoft Repository was broadly equivalent to modern MOF repositories. However, Microsoft's technology never found widespread acceptance amongst their customers. There are many reasons for this. One crucial consideration was the failure to attract broad support within the ISV community. This was, no doubt, bound up with the emergence of MOF as the dominant metamodelling specification and ever-increasing support for MDA, chiefly in the Java community.
I believe there is a more prosaic reason why ISVs were slow to adopt the Microsoft Repository. Implementing metadata models within an M3 repository requires significant effort, especially if you need to define transformations between existing custom metamodels and the repository schema. Without a compelling business case, many ISVs would not have had the motivation to undertake this task, let alone build the tooling and visualisation needed to support repository-centric modelling. Other problems included concerns about run-time performance and scalability when navigating the graphs contained within such stores. An M3-based approach does not necessarily lend itself to exploiting the raw power of a relational database system. In any event, the COM-centric nature of the Microsoft Repository ruled out any realistic use as a mainstream technology on the .NET platform, despite .NET's COM interoperability features.
Introducing Microsoft’s ‘Oslo’ Model Repository
Since the acquisition of Rational by IBM, and the retirement of Microsoft Meta Data Services, Microsoft has made little public progress on metamodelling, although one notable exception is its DSL (Domain-Specific Language) toolkit for Visual Studio. This forms a key part of Microsoft’s strategy for promoting the concept of ‘Software Factories’, which is often characterised as an alternative approach to OMG’s MDA. However, it is not a repository-based technology. This general state of affairs has changed, however, with the announcement of Microsoft ‘Oslo’.
The Oslo repository stores model data within a SQL Server database. It exploits a number of features (e.g., Change Data Capture, Resource Governor) introduced with SQL Server 2008, and therefore does not work with previous versions of SQL Server. It takes a radically different approach to the Microsoft Repository. Whereas that older technology was an M3-based repository designed to store models in relation to M2 ‘information models’, the Oslo repository is simply a model store. User models are represented directly as relational tables that store user object data. You will look in vain for a predefined data schema that stores abstract metadata definitions at the M2 level or that represents M3 meta-metamodels. In fact, this is a slight overstatement: as we will see later, Microsoft does intend to provide support for storing M2 metamodels directly within Oslo.
Oslo offers much more than the repository. In particular, MSchema is provided as a textual (rather than graphical) metamodelling language. Instead of concentrating on a repository data schema designed to capture models in accordance with metamodels, Microsoft provides a formalised definition of mappings from MSchema to T-SQL, and tooling that implements these mappings. Currently, this definition is a draft document running to almost 40 pages, and is included with the Oslo SDK. While it is natural for Microsoft to support mappings to T-SQL, other equivalent mappings can be defined. MSchema, itself, is agnostic with regard to any specific repository technology or platform.
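To make the idea of the mapping concrete, here is a minimal sketch of the kind of translation involved. The module and type below are invented for illustration, the syntax follows the October 2008 CTP documentation as I understand it, and the commented T-SQL is only a plausible approximation of what the Oslo tooling might emit - the precise DDL is governed by the draft mapping document mentioned above.

    module Inventory
    {
        // A simple type definition.
        type Product
        {
            Id : Integer32;
            Name : Text#100;   // Text#100 constrains the text to 100 characters
        }

        // An extent: a named, queryable collection of Product instances.
        Products : Product*;
    }

    // A plausible (approximate) T-SQL rendering of the extent above:
    //
    //   create schema [Inventory];
    //
    //   create table [Inventory].[Products] (
    //       [Id]   int           not null,
    //       [Name] nvarchar(100) not null
    //   );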
Compare and contrast this with the MOF 2 specification. As I pointed out earlier, although MOF is generally considered to be an M3 meta-metamodelling language, the OMG explicitly states that it can be used in scenarios that involve traversal over as few as two metalayers. From this perspective, MOF can be considered to be a metamodelling language, rather like MSchema, that is typically used at the M3 level in four-level metadata architectures to model metamodels.
I’m labouring this point about MOF because it is important not to fall into the trap of thinking that Oslo simply ignores the four-level metadata architecture and the role of meta-metamodelling. It is more accurate to say that Oslo is agnostic about the number of metalayers required in different scenarios. This means that Oslo retains the potential to act as a good citizen in the wider world of metamodelling while avoiding some of the practical pitfalls that prevented the earlier Microsoft Repository from becoming a mainstream part of Microsoft’s platform. Oslo is a spirited attempt to show how metamodelling can increase its reach beyond larger organisations and appeal to a much wider audience of ISVs, development teams, operations groups and others.
The MSchema language deliberately takes the form of a data definition language, albeit with a different approach to typing than that found in SQL DDL, and a more explicit separation of 'intension' (the definition of a type) and 'extension' (the collection of instances that conform to it). Building MSchema definitions is straightforward and feels natural to many developers. This is vital if Oslo is to succeed in attracting widespread support. For many existing applications, user data is stored within custom data stores, and those data stores are most often structured to represent user models at the M1 and M0 levels. By providing a general-purpose metamodelling language that can easily and naturally be used to express the shape of existing data schemas, Microsoft is significantly lowering the bar for ISVs and development teams who wish to support and exploit the Oslo repository.
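As an illustration of this separation, consider the following sketch, which returns to the street plan analogy from earlier. The names are invented, and the syntax again follows the CTP documentation as I understand it. The type declaration is pure intension - a description of shape that declares no storage - while the extent declaration is the extension, creating actual storage for instances:

    module CityPlanning
    {
        // Intension: the type describes the shape of road data.
        type Road
        {
            Id : Integer32;
            Name : Text#80;
            LengthInMetres : Integer32?;   // '?' marks the field as optional
        }

        // Extension: the extent declares a collection of Roads, which the
        // T-SQL mapping renders as a table.
        Roads : Road*;
    }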
Retrofitting Oslo – An Example
Let’s consider an example of ‘retrofitting’ Oslo to an existing model-driven system by conducting a thought experiment that investigates how this might be done in the Microsoft BizTalk Server world. BizTalk Server has a high dependency on SQL Server and uses a number of databases at runtime. Together with the Message Box, the most central of these databases is the BizTalk Management database. This database stores metadata that defines, describes and configures runtime objects and artefacts such as messaging ports, orchestrations, pipelines, etc. In essence, it holds model data at the M1 and M0 levels that describe BizTalk applications and the physical runtime environment in great detail. The data schema implemented within this database directly reflects these runtime entities. Indeed, a good way of researching and understanding BizTalk Server in depth is to investigate the entities and associations that make up this data schema.
A decision would have to be made about the level of detail captured within the Oslo repository. At one extreme, the schema of the Management database could be replicated in its entirety within the repository, and perhaps even extended to capture additional information (e.g., explicit data about message flows). In this case, we might reasonably ask about the value of such models. Unless BizTalk Server were re-written to use the model repository rather than its own database (a prospect which might cause real concern to existing BizTalk users), we could easily find ourselves entering a world of pain in terms of synchronising data between two different stores, without a clear understanding of what value this comprehensive replication truly provides. At the other end of the spectrum, the repository might capture significantly reduced models of BizTalk Server applications for very specific purposes. For example, we might store models in the Oslo repository which we then use to enable ‘software factory’ approaches for one-way code generation. These models would store considerably less detail than the Management database, and would not need to support two-way synchronisation.
Whichever direction we choose, we would want to ensure that models in the repository correspond closely, and conform, to the models already contained within the Management database. It would be reasonable to expect a direct mapping from repository model entities to entities in the Management database. We would want to avoid unnecessary work and re-invention, and ensure that our repository models maintain a high degree of fidelity in describing BizTalk applications and runtime environments.
In this thought exercise, we are capturing models which are specific to a given technology. We need to be able to specify the metamodel very precisely in order to ensure that our models are valid and well-formed in respect to BizTalk Server. However, in this scenario, we will probably be less interested in ensuring that our metamodels conform to some meta-metamodel. One reason for this is that we probably won’t have a compelling need to exchange BizTalk-specific metadata with other systems and applications. This could change over time, however. For example, as BizTalk Server evolves further, future versions may exhibit much closer integration with platform-level technologies such as WCF and WF, or standards such as BPEL4WS and BPMN. They may be more deeply integrated into platform-level host environments (this is already the case with regard to IIS-based ‘isolated’ hosts in BizTalk Server). In this case, the ability to exchange and transform metadata on the basis of some meta-metamodel could become an important consideration. Even here, though, we might wish our meta-metamodelling to address a specific technology platform.
A major theme in Oslo is the lowering of the bar that ISVs and development teams face when seeking to support rich modelling approaches to software development and runtime configuration. Oslo is agnostic with regard to the number of metalayers required in any given scenario, and makes no assumptions about how platform- or technology-specific (or independent) a particular model needs to be. It avoids forcing conformance to any abstract M3 specification and provides a general-purpose metamodelling language that minimises the learning curve for developers. To all this, Microsoft adds pre-defined mappings to T-SQL, and is building additional tooling for generalised visualisation of models and parser-generation tools for creating domain-specific modelling languages. They also provide many pre-defined models (e.g., for WF workflows) out of the box.
In our BizTalk Server example, the bar is now set very low. MSchema is a natural choice for defining model specifications whose conformance to the models in the BizTalk Management database can be verified, but which reduce the amount of detail to an appropriate level. The Oslo tooling can be used to create tables in the repository which correspond directly to tables in the Management database. Access to models in the repository can be via any appropriate and familiar data access technology that can connect to SQL Server. There is no need for developers to climb a steep learning curve in respect to meta-metamodelling, no need to conform to unfamiliar APIs and no need to inject an unnecessary level of platform independence into the way models are specified.
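To make the thought experiment a little more tangible, the sketch below shows what a deliberately reduced MSchema model of BizTalk messaging ports might look like. To be clear, these module, type and field names are invented for illustration only; they are not drawn from the actual BizTalk Management database schema, and a real retrofit would derive its shapes directly from that schema.

    module BizTalkModels
    {
        // A hypothetical, much-reduced model of a receive port.
        type ReceivePort
        {
            Id : Integer32;
            Name : Text#256;
            IsTwoWay : Logical;
        }

        // A hypothetical receive location associated with a port.
        type ReceiveLocation
        {
            Id : Integer32;
            PortId : Integer32;   // refers to the Id of a ReceivePort
            Address : Text;
            Enabled : Logical;
        }

        ReceivePorts : ReceivePort*;
        ReceiveLocations : ReceiveLocation*;
    }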
But surely...
I have no doubt that the last paragraph will leave some advocates of MOF and MDA in a state of disbelief. Be careful, though, to understand what I am expressing here. Oslo does not force developers to adopt an MDA-like approach. This does not imply, however, that an MDA-like, or even an MDA-conformant, approach is invalid within the Oslo world. Nor, I believe, should anyone casually claim that OMG’s MDA is without merit. It is true that several influential voices have questioned the universal application of MDA within the broader IT community, sometimes quite forcefully. Some of these voices have found a home in recent years within Microsoft, and Microsoft is therefore now associated with ‘alternative’ concepts of Software Factory design. However, we should keep in mind that MDA has built considerable momentum and gained a significant following. This indicates that it is addressing real-world needs for its users.
Many questions remain about how Oslo will handle issues which are emphasised in the OMG world. These include enterprise-level metadata exchange, standardised APIs for tooling purposes, robust validation of metadata conformance, etc. It is very early days for Oslo, and Microsoft clearly hasn’t worked out all the answers yet. It is worth noting a few pertinent facts, however, which indicate the direction Microsoft is taking.
OMG Membership and Commitment to UML and XMI
A couple of months ago, Microsoft announced that it had formally joined the OMG. This announcement should absolutely be seen in the context of Oslo, and is, I believe, very welcome (and long overdue) news. Of course, we can expect the usual tedious reactions from those who see conspiracy in all that Microsoft does. However, Microsoft will operate within the OMG on exactly the same basis as any other member. Membership will certainly allow them to influence the future of various specifications, but they will not be able to undermine the OMG world or dominate the standards process. Why would they wish to do so? Microsoft has a strong history of UML adoption, albeit with little momentum in recent years. Perhaps more importantly, they share a common overall goal with the OMG – namely to change the IT world for the better by figuring out how to exploit modelling at every stage of the development and application lifecycles, and to support modelling across a wide spectrum of activity. True, they will probably not be strong advocates of MDA. However, the lessons they learn from Oslo over the next few years, together with their efforts in the area of software factories and domain-specific languages, will be fed directly back to the wider modelling community within an open forum, and that community can, in turn, influence Microsoft’s thinking as Oslo evolves and matures. This is surely of benefit to us all.
Alongside its new membership, Microsoft has publicly announced two important commitments. The first is to undertake a renewed effort to support UML within its tools and technologies, and specifically in relation to Oslo. The details of this are not yet in the public domain, but Bill Gates himself spelled out the strategy in his last keynote speech before leaving full-time employment at Microsoft.
The second commitment is to support XMI as a standard method of metadata interchange within Oslo. XMI is strongly associated with MOF, and is a core enabling specification in the OMG family. This one announcement alone shows that Microsoft does not view Oslo as some kind of parallel universe that stands in opposition to the OMG world. Again, we must wait for the details of exactly what will be provided. However, interoperability and interchange between Oslo and existing repositories and modelling tools looks set to be a feature of the Microsoft platform.
Reflection Support
One of the motivations for MOF is to support reflective approaches to the discovery and manipulation of metadata. It is difficult to see how this can be achieved in the Oslo world without a standardised way of storing metamodel information within the repository. This has not escaped Microsoft’s attention. The CTP version of the repository provided at the PDC in October contains a number of out-of-the-box model schemas which are expected to form part of the Oslo offering when it ships. Amongst these is Language.Schema which, together with the Repository catalogue, ‘folders’, globalisation support and various other schemas, constitutes a ‘core’ set of pre-defined Oslo models. Microsoft’s documentation for Language.Schema is currently incomplete, but the introduction states that “... the Language.Schema model schema...contains data types, relationships, and constraints that together represent the stored concepts of the [MSchema language] itself, allowing for introspection of [MSchemas] used to describe other problem domains.” In other words, Language.Schema is used to model Microsoft’s MSchema metamodelling language.
Like any other schema, Language.Schema is, itself, defined using the MSchema language. In this case, then, MSchema is being used as a meta-metamodelling language. So, after all that we have discussed above, it turns out that Oslo will (probably) provide explicit support for the M3 layer and the capture of M2 metamodels within the repository. Of course, at this early stage, there are still many questions left unanswered. I’m not aware of any tooling in the current CTP to populate the Language.Schema tables, and have no idea how widely Microsoft intends to use this feature. Will they automatically capture metamodels by default, or will this be a purely optional feature? Will Microsoft provide functionality to perform introspection or will they leave that to ISVs and the .NET community? Who knows? Probably not even Microsoft at this stage. The point to grasp here is that Oslo is perfectly capable of supporting traversal over multiple metalayers. This naturally leads to consideration of how MOF-based approaches, and MDA itself, might be supported within the Oslo framework. I can see no practical or technical barrier to capturing MOF-compliant metamodels within Oslo. More interesting questions may surround the feasibility of translation directly between MOF and MSchema, and how useful such translation would prove in the Oslo world. Such questions will have to wait until Microsoft has done a lot more work on developing this emerging technology.
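By way of illustration, the sketch below shows how a small fragment of a MOF-like metamodel might itself be captured as repository data using MSchema - in other words, M2 content stored at the model level. The type names here are mine, invented for this example; they come neither from the MOF specification nor from Language.Schema, and a genuine MOF capture would be considerably richer.

    module MetamodelCapture
    {
        // A metaclass, such as UML's 'Class'.
        type MetaClass
        {
            Id : Integer32;
            Name : Text#100;
        }

        // A property owned by a metaclass, such as 'Class.isAbstract'.
        type MetaProperty
        {
            Id : Integer32;
            OwnerId : Integer32;   // the Id of the owning MetaClass
            Name : Text#100;
            TypeName : Text#100;
        }

        MetaClasses : MetaClass*;
        MetaProperties : MetaProperty*;
    }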
BPMN
One of the recurring questions asked in the wake of Microsoft’s announcement of membership of the OMG concerns their intentions in regard to Business Process Modelling Notation (BPMN). BPMN is a standardised specification maintained by the OMG. It is designed to be used as a modelling language in ways not dissimilar to the intended role of the Visio-based ‘Orchestration Designer for Business Analysts’ (ODBA) that targets BizTalk Server. BPMN aims to provide an intuitive approach to modelling business processes that supports collaboration between technical and business stakeholders. Like ODBA, it is intended for use in generating executable process definitions and provides mappings to BPEL4WS. Another interesting aspect of BPMN is that, since taking control of the specification, the OMG has retrofitted a MOF-compliant metamodel (the Business Process Definition Metamodel, or BPDM), bringing BPMN in line with other OMG modelling standards. BPDM is interesting from a BizTalk perspective because it actively seeks to exploit the notions of orchestration and choreography as two complementary viewpoints of the same thing – an approach which, as a BizTalk developer, is dear to my own heart.
The answer to that question is ‘yes’: Microsoft does intend to support BPMN in the Oslo world. To this end, they have already created a preliminary BPMN schema which is implemented within the current CTP version of the Oslo repository and documented in the SDK. Obviously, we will have to wait to see what tooling they plan to provide, beyond Quadrant, and how BPMN will be supported in BizTalk Server and WF. Microsoft already offers support for BPEL in both these technologies, so BPMN support would be a natural extension.
Open Specification Promise
Microsoft plans to release the ‘M’ language specification (MGraph, MSchema and MGrammar) under their Open Specification Promise (OSP). The OSP, launched in 2006, follows the lead taken by Sun in their 2005 ‘OpenDocument Patent Statement’. IBM published a very similar ‘Interoperability Specifications Pledge’ in 2007 having previously made a more informal public promise all the way back in 2004. The OSP is a legally-binding and irrevocable covenant that extends automatically to anyone who wishes to implement any of the specifications identified by the OSP. Microsoft promises that it will not sue for patent infringement in any circumstances, with the sole exception of situations where the user is involved in a patent infringement lawsuit against Microsoft concerning the same specifications.
The OSP, and other similar covenants, are principally designed to remove barriers to the open use of key specifications. There is a healthy debate about how compatible these covenants are with the GPL, but that is another matter. The point, here, is that Microsoft is taking an industry-standard route to ensuring that their ‘M’ specification can be freely and widely implemented. They state on their website that they currently have no plans to seek standardisation of the ‘M’ specification.
Conclusions
I’ve attempted, in this article, to explore the Oslo repository from a wider industry perspective informed chiefly by the OMG standards. My plea is that we avoid thinking of Oslo as a rival to MOF-compliant approaches, including MDA. The Oslo vision is about building pragmatic support for modelling practices into a particular platform, and doing so in a way that sets the entry bar as low as reasonably possible for ISVs and development teams. Almost by definition, therefore, Oslo cannot afford to be constituted as a rival to a widely supported set of industry modelling standards. Microsoft must, instead, ensure that Oslo remains ruthlessly agnostic about such matters so that it can be used to support any relevant standards and specifications as and where necessary, and also appeal to those whose modelling needs do not conform to specific standards.
We can reasonably speculate that Microsoft will probably use Oslo to extend their offerings around the notion of Software Factories, and will certainly use it to revive momentum in regard to their support for UML. They are less likely to provide MDA-based support, despite their membership of the OMG. However, for Oslo to be a success, it will need to attract broad and active participation from a wide community of ISVs. For some of those ISVs, the challenge will be to build MOF-compliant MDA tooling alongside the Oslo repository, and to work out how to map appropriately between MOF and MSchema. For other ISVs, MOF and MDA will, I suspect, continue to have little direct relevance in the foreseeable future.
Ultimately, the measure of success for Oslo will be the extent to which, say, five years after its launch, it underpins the use and exploitation of models on the Microsoft platform, and the value it adds through the centralisation, management, composition and discoverability of those models. We use models everywhere today. For Oslo to be deemed a success, it will need to become a foundation and framework of choice for a broad spectrum of ISVs and development teams when handling those models.