I’m at day two of the October Rules Conference at the Adolphus Hotel in downtown Dallas, having flown over from the UK yesterday. The conference sessions started for real yesterday lunchtime while I was still in the air, and due to late changes to the agenda, I missed a couple of sessions that I really wanted to hear – fortunately, everything is being taped.
The conference is, not surprisingly, smaller than last year. The Credit Crisis has had its effect on attendance. Ironically, the programme is larger and more diverse than last year’s. James Owen and his colleagues have done wonderful things putting together the programme.
This morning started off with a keynote delivered by Thomas Cooper. Thomas’ history goes right back to the 1970s, and he worked on important expert system technology during his time at DEC. The keynote provided an interesting historical perspective and some ideas for the future development of rules technology. Unfortunately, I only really started recording this report after Thomas had completed his talk, so I will gloss over the details here (apologies to Thomas).
Leon Kappelman presented the first of two sessions on enterprise architecture. Every model is imperfect, but models are the way we filter reality. Responsiveness to change is the key to evolution. Enterprise architecture is about creating a shared language to communicate, think about and manage an enterprise. Thinking at the level of requirements is the hardest thing we do conceptually, and if the specification of requirements goes wrong, everything else that follows goes wrong. We have a mentality of trying to fix things by optimising the components of a system; we don’t think in an integrated, system-level fashion. Favourite quotes: “No one has to change – survival is optional” (from Deming) and, talking about measurements on a scale of one to five, “if you’re ‘one’, you suck... in the scientific sense”. Not sure what to make of that!
John Zachman was on next. It is years since I have seen anyone use an OHP (overhead projector)! Hilariously, this low-tech technology proved problematic – the damn thing just wouldn’t focus. Anyway, he made the point that enterprises are very, very complex things. Architecture is the set of representative descriptions that we use to construct things. If you can’t describe it, you can’t build it. Architecture is used to manage change in the things we build.
John’s presentation style is enthusiastic, and the points he makes are well defined. His well-known framework concentrates on the separation of individual variables within a two-dimensional grid. It maps the six interrogatives (what, how, where, who, when and why) to six levels of reification (contextual, conceptual, logical, physical, implementational and operational).
Like many people, I’ve been aware of the Zachman framework in its various forms for quite some time. Although I thought I understood it, I don’t think I ever quite got it fully before today. There is nothing like hearing John explain his own ideas. The key points I took away are that John claims his framework is logically comprehensive – it is an ontology that encompasses everything that could possibly ever be included in an enterprise architecture – and that it unscrambles the spaghetti that often passes for architecture. Also, the two-dimensional representation does not adequately communicate the way in which reification transformations operate down each column. John showed us a diagram of ‘boxes within boxes’ to get this point across. Unfortunately, the three-dimensional representation doesn’t lend itself to communicating all the detail in the two-dimensional representation. You just can’t win! Anyway, I think I’ve got it at last, and am now a born-again Zachman convert.
Next up were Rolando Hernandez and Fred Simkin. Fred works for GE, and their presentation described the importance of rule modelling in the work done on GE’s energy tax processing system. Their argument is that the enemy of successful rule-based applications is ambiguity, and ambiguity is inherent in the use of text. Words exist in context, and text is ambiguous. Ambiguity leads to bad code, and bad code represents lost knowledge. This is unfortunate, as the capture of knowledge is the whole point of rule-based systems.
In this context, the two presenters then demonstrated the high-level (non-product-specific) rule modelling tools that they created for GE. These support a variety of visual modelling approaches that allow experts to interact with analysts and capture reasoning processes in a clear, understandable fashion. The idea is to use visual modelling to provide clarity and reduce ambiguity. As they say, a picture is worth a thousand words. And then John Zachman chipped in to say that a model is worth a thousand pictures.
Mark Proctor from Red Hat’s JBoss Rules (Drools) team was on next. His talk was on ‘Production Rule System – Where do we go from here’, although it was actually about the future of the Drools engine (which attracted a mild rebuke from James Owen). Mark is looking at syntax improvements such as method calls and a cleaner, more orthogonal syntax. JBoss Rules may also gain ‘else’, which most (though not all) commercial engines already support. Mark is also looking at logical closures, which will allow actions to fire when a matched rule is no longer true, and at a ‘logical modify’ feature to handle aspects of truth maintenance. He has various ideas about handling repetition through periodic reactivation of rules that are still true.
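To fix the ‘fire when no longer true’ idea in my own head, here is a toy sketch in plain Python – it has nothing to do with the actual Drools internals, and the `WorkingMemory` class and the rule names are entirely my own invention. The point is simply that a closure registered for a rule runs once when a previously matched condition stops matching:

```python
class WorkingMemory:
    """Toy working memory that re-evaluates every rule after each change."""

    def __init__(self):
        self.facts = set()
        self.rules = []              # (name, condition, on_match, on_unmatch)
        self.currently_true = set()

    def add_rule(self, name, condition, on_match, on_unmatch):
        self.rules.append((name, condition, on_match, on_unmatch))

    def assert_fact(self, fact):
        self.facts.add(fact)
        self._evaluate()

    def retract_fact(self, fact):
        self.facts.discard(fact)
        self._evaluate()

    def _evaluate(self):
        for name, condition, on_match, on_unmatch in self.rules:
            matched = condition(self.facts)
            if matched and name not in self.currently_true:
                self.currently_true.add(name)
                on_match()
            elif not matched and name in self.currently_true:
                # The 'logical closure' case: the rule was true, now it is not.
                self.currently_true.remove(name)
                on_unmatch()


wm = WorkingMemory()
wm.add_rule(
    "gold-customer",
    condition=lambda facts: ("status", "gold") in facts,
    on_match=lambda: print("apply discount"),
    on_unmatch=lambda: print("remove discount"),   # fires when no longer true
)
wm.assert_fact(("status", "gold"))    # -> apply discount
wm.retract_fact(("status", "gold"))   # -> remove discount
```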
Mark plans the introduction of rule execution groups. These support push/pop semantics but avoid a global stack. They are envisaged as a flexible replacement for JBoss Rules’ activation groups. Another idea is to introduce multi-version concurrency control, based on clones of facts, in order to provide much better transaction control across, for example, execution groups.
Mark introduced the idea of Positional Slotted Language (POSL) to support both positional (ordered) and slotted (unordered) facts. Some engines support the ability to define fact templates that select one model or the other. POSL is a slightly different approach, used dynamically at the point of object construction rather than pre-defined as an attribute of the fact template. The syntax is much the same as for attribute construction in C# or VB.NET. In those .NET languages, of course, the positional values are passed as required arguments to an overload of the attribute constructor, whereas the slotted values are used to construct assignments to fields and properties, so the context in which the syntax is used is rather different. Just like C# attribute construction, POSL is constrained to defining positional values first, followed by slotted name-value pairs.
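A rough analogue of that positional-then-slotted constraint – not POSL itself, just an illustration using Python’s calling convention, with an `Order` fact type I have made up for the purpose:

```python
from dataclasses import dataclass

@dataclass
class Order:
    # Positional (ordered) slots come first...
    customer: str
    amount: float
    # ...slotted (named) values follow and may be supplied in any order.
    currency: str = "USD"
    priority: int = 0

# Positional values first, then named slots, mirroring the constraint that
# POSL shares with C# attribute construction.
o1 = Order("ACME", 120.0, priority=2, currency="GBP")

# Slotted-before-positional is rejected, just as it is in POSL and C#:
# o2 = Order(customer="ACME", 120.0)   # SyntaxError
```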
Mark described ideas for introducing support for federated queries. Some engines have features that support direct queries of the working memory. Federated queries will provide a mechanism for performing queries against different data sources within the Rete network, and Mark is looking into ways of handling joins against different federated sources within the beta network.
Mark has various plans for handling uncertainty using pluggable evaluators in the alpha network. This is, I believe, a really important idea to which the rules community needs to give much more attention, and it was the most important part of the presentation. It is especially important when considering the role of rule engines in CEP (complex event processing). Indeed, most CEP engines today have, frankly, little sophistication in handling uncertainty, even though real-world event processing often involves a great deal of uncertainty. Anyone (Tim?) who still believes that for some magical reason it is impossible to handle uncertainty in rules engines in an efficient and flexible fashion (yes, this claim really has been made!!!) should talk to Mark and his team. I couldn’t help doing a ‘thumbs-up’ from the floor when Mark finished by explaining that the proposed approach will allow Bayesian networks to be integrated directly into Rete networks. This is something I was thinking about a couple of years ago, but I never got around to doing any serious thinking about how it might be implemented. Some Bayesian engines use backward chaining on polytree networks. I wonder if this would be an extra step in the evolution of the ideas Mark described, perhaps encoding Bayesian belief nets into beta networks rather than the simpler alpha network extensions that Mark described.
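To give a flavour of what a pluggable evaluator might mean in practice – and this is purely my own sketch, not Mark’s design – imagine an alpha-node test that returns a degree of match in [0, 1] instead of a boolean, so that facts propagate through the network carrying an accumulated belief:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Fact:
    slots: Dict[str, float]
    belief: float = 1.0              # accumulated degree of belief

# A pluggable evaluator maps a fact to a degree of match in [0, 1].
Evaluator = Callable[[Fact], float]

def crisp_over(slot: str, threshold: float) -> Evaluator:
    """The classic boolean alpha test: 1.0 if it matches, 0.0 otherwise."""
    return lambda f: 1.0 if f.slots.get(slot, 0.0) > threshold else 0.0

def noisy_over(slot: str, threshold: float, noise: float) -> Evaluator:
    """A soft test whose output tapers off near the threshold - a stand-in
    for a conditional probability supplied by, say, a Bayesian node."""
    def evaluate(f: Fact) -> float:
        distance = f.slots.get(slot, 0.0) - threshold
        if distance >= noise:
            return 1.0
        if distance <= -noise:
            return 0.0
        return 0.5 + distance / (2 * noise)
    return evaluate

def alpha_node(evaluator: Evaluator, fact: Fact, cutoff: float = 0.0):
    """Propagate the fact with an updated belief rather than a yes/no answer."""
    degree = evaluator(fact)
    if degree > cutoff:
        fact.belief *= degree        # naive combination, purely illustrative
        return fact
    return None

reading = Fact({"temperature": 99.5})
print(alpha_node(noisy_over("temperature", 100.0, 2.0), reading))
# -> Fact(slots={'temperature': 99.5}, belief=0.375)
```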
Luke Voss from Mindviews labs was next. His talk was about asserting entire graphs of facts to rules engines. We are used to asserting facts. Luke’s approach adds the capability of asserting fact relationships as well. Ultimately, fact relationships are, of course, just a specialised fact type. However, Luke described rule engine technology that can exploit the specific semantics of relationships. In graph-theory terms, normal facts are nodes and fact relationships are edges. Luke pointed out that a number of tools and approaches exist for handling tree structures, performing efficient queries, etc. The rule engine is similar to a Rete engine, with an alpha network that filters the fact graphs and a beta network that performs joins. He talked about a number of tools he has developed, including a visual pattern designer built using the Visual Studio DSL toolkit. He talked about matching on sub-graphs and various approaches to doing things like testing connections between sub-graphs. A rule engine based on graph matching (I’ll invent a term here) is ideal for solving things like the shortest path problem using, for example, the idea of node costs. Luke also spent some time talking about detection of planar graphs.
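The node-cost example stuck with me, so here is a minimal sketch of what ‘graph facts’ might look like once asserted – nodes and edges as first-class facts, with a cheapest-path query over node costs. This is just my own illustration, not Luke’s engine:

```python
import heapq

# Node facts carry data (here, a traversal cost); edge facts are first-class too.
nodes = {"A": 1, "B": 5, "C": 1, "D": 1}          # node -> cost
edges = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")}

def neighbours(n):
    return [b for (a, b) in edges if a == n]

def cheapest_path(start, goal):
    """Uniform-cost search where the cost of a path is the sum of node costs."""
    frontier = [(nodes[start], start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbours(node):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + nodes[nxt], nxt, path + [nxt]))
    return None

print(cheapest_path("A", "D"))   # -> (3, ['A', 'C', 'D']): avoids the expensive node B
```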
This was a really interesting talk. One idea from the floor had to do with the possible role of a graph-matching engine in handling ontologies, and the ways this might be used to simplify the representation of knowledge supplied by domain experts. Another thing I would be interested in is how the technology might handle graph morphisms and use rules as functors.
Daniel Levine was next on the podium. He is a professor of psychology, and last year gave a fascinating presentation on rule processing in the human mind. This year, his talk was provocatively entitled ‘Truth versus Useful Lies’. He discussed the ways in which human decision making can be influenced by a range of different factors that are not necessarily logically consistent. For example, verbal usage in the way identical questions are phrased can influence the answers that people provide. Children and novices tend to remember what they see and hear in a more literal, ‘verbatim’ fashion. Adults and experts use more stable ‘gist’ memories that are more ‘intuitive’ and based on experience. Moving from verbatim to gist has problems, though. It can lead to misestimation and can be influenced by prejudices such as racial stereotyping and sexism. The most creative ways of thinking and being require both gist and verbatim systems. Gist allows us to analyse trends and see analogies. Verbatim is needed to notice deviations and exceptions, and so to estimate accurately.
And then on to the brain! The dorsolateral prefrontal cortex (DLPFC), orbitofrontal cortex (OFC) and anterior cingulate cortex (ACC) are the three areas that have executive control over gist and verbatim. The OFC learns and enhances gists. The ACC challenges prevailing gists and processes conflicts. The DLPFC determines verbatim overrides, or generates new gists. I would guess that the ACC and the DLPFC together give rise to what is often called ‘cognitive dissonance’. As humans we can be automatic or controlled, deliberate or heuristic, gist or verbatim. However, how do we decide between these different approaches? It turns out that we modify our responses based on context-dependent ‘task calibration’ that takes into account things like the ambiguity of the information required, the level of precision needed, etc.
Rick Hicks provided the last presentation of the day. Rick concentrates on rule processing based on propositional logic rather than FOL, and is responsible for EZ-Expert from AI Developers. One of the main differences is that propositional logic does not support quantification. Rick’s talk centred on automated rule verification. He introduced the idea of two-tier verification based on partitioning of the rule base and the verification criteria. Verification criteria are all about things like reachability, domain constraints, completeness, consistency and conciseness (redundancy and subsumption elimination). Verification in EZ-Expert is based around a central knowledge repository that stores explicit definitions and uses the closed-world assumption. Rule builders use repository definitions to constrain rules during development.
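Of those criteria, subsumption and redundancy are the easiest to picture for propositional rules: a rule with fewer conditions and the same conclusion makes a more specific rule pointless. A tiny sketch – my own example, nothing to do with how EZ-Expert actually does it:

```python
# Each propositional rule: a frozenset of condition literals and a conclusion.
rules = {
    "r1": (frozenset({"vip"}), "discount"),
    "r2": (frozenset({"vip", "repeat_customer"}), "discount"),
    "r3": (frozenset({"vip"}), "discount"),
}

def subsumes(general, specific):
    """Rule A subsumes rule B if A needs a subset of B's conditions and
    reaches the same conclusion: B can never fire usefully on its own."""
    return general[1] == specific[1] and general[0] <= specific[0]

for a in rules:
    for b in rules:
        if a != b and subsumes(rules[a], rules[b]):
            kind = "duplicates" if rules[a][0] == rules[b][0] else "subsumes"
            print(f"{a} {kind} {b}")
# r1 subsumes r2 (fewer conditions, same conclusion); r1 and r3 are duplicates.
```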
Rick then discussed how certain rule types are expected to legitimately fail certain types of verification. If these rule types are properly identified, they can be used to reduce the amount of conflict resolution required within a rule engine. Rick went on to point out a number of issues with different conflict resolution strategies. He posed the question ‘is the approach we take today to conflict resolution rigorous enough?’ For example, how do we modify conflict resolution strategies using belief? He then described the application of finely-tuned conflict resolution to different rule types in EZ-Expert.
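One simple way to picture ‘conflict resolution modified by belief’ – and this is just my own toy answer to Rick’s question, not anything he proposed – is to fold a certainty value into the agenda ordering alongside salience:

```python
# Activations waiting on the agenda; salience and certainty are illustrative.
activations = [
    {"rule": "flag_fraud",   "salience": 10, "certainty": 0.60},
    {"rule": "fast_approve", "salience": 10, "certainty": 0.95},
    {"rule": "log_event",    "salience": 0,  "certainty": 1.00},
]

# Order by salience first, then by the belief attached to the matched facts.
agenda = sorted(activations, key=lambda a: (a["salience"], a["certainty"]),
                reverse=True)
print([a["rule"] for a in agenda])   # ['fast_approve', 'flag_fraud', 'log_event']
```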
The challenge for me is to understand how this might apply to the kind of rules engines I use. Broadly, Rete engines use FOL rather than propositional logic. Another issue is that EZ-Expert uses confidence factors to handle belief – a venerable approach that dates all the way back to MYCIN, but which does have some well-documented weaknesses (confidence factors don’t compose well, for example). Of course, I am in danger of being disingenuous here, given that the handling of beliefs and uncertainty seems curiously under-valued in today’s business rules engines. As Rick points out, the developer’s belief in each rule is something we just seem to ignore these days. Contrast this with the early/mid-1980s, for example, when virtually no expert system – not even those designed to operate in the constrained environment of new-fangled microcomputers – came without these features. Indeed, many of them eschewed MYCIN-style confidence factors in favour of Bayesian logic, Dempster-Shafer, etc. I want to see better uncertainty handling in today’s inference engines. Oh, and please, fuzzy logic isn’t really uncertainty handling – it’s multi-valued logic.
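For anyone who hasn’t met them, MYCIN-style certainty factors live in [-1, 1] and are combined with a simple rule of thumb. Here is the classic combination function, which only behaves sensibly when the pieces of evidence are genuinely independent – one of the weaknesses mentioned above:

```python
def combine_cf(x, y):
    """Combine two MYCIN-style certainty factors, each in [-1, 1]."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)                    # two pieces of support
    if x < 0 and y < 0:
        return x + y * (1 + x)                    # two pieces of doubt
    return (x + y) / (1 - min(abs(x), abs(y)))    # support versus doubt

print(combine_cf(0.4, 0.4))    # 0.64  - weak support accumulates quickly
print(combine_cf(0.8, -0.3))   # ~0.71 - mixed evidence scaled by the weaker factor
```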
The day ended with the daily Q/A session. I warmed to Rolando Hernandez’s plea to move away from seeing rules as nothing more than constraints on data and back to understanding the basics of knowledge engineering and management. Luke Voss talked about a demographic shift towards functional forms of programming that will help in fostering a better understanding of rule processing in the industry. There was a fair amount of discussion about aspects of uncertainty handling, truth maintenance, etc. I loved the idea that maybe we should be concentrating on modelling artificial stupidity rather than AI. There is a serious point here. If we can define what we mean by stupidity and model it in software, we may be better placed to model intelligence. Thomas Cooper pointed out that stupidity is a relative term and came up with a number of scenarios where we might want to model stupidity. Fred Simkin reminded us that there is a difference between stupidity and ignorance. He suggested that there may be a separate case for modelling ignorance explicitly. I always thought that this is what we do through uncertainty handling.
A great day, with some fabulous presentations.
Posted on Tuesday, October 27, 2009 6:17 PM


Comments on this post: October Rules Fest: Day 2

# re: October Rules Fest: Day 2
It's cool that Dr. Rick Hicks talked about confidence factors. To me, handling uncertainty within a production rule engine has nothing to do with the alpha or beta network. The core problem is the data and the results derived from it. All data has a margin of error, which one can quantify with a confidence value. Jamocha provides "some" support for uncertainty with temporal facts, which have a confidence property called validity.

The "logical modify" problem has already been solved and I already wrote a paper on it. The paper is here http://jamocha.svn.sourceforge.net/viewvc/jamocha/morendo/doc/modification_logic.pdf?view=log

The implementation is described in great detail in my paper and anyone is free to copy it.

peter
Left by Peter Lin on Oct 28, 2009 8:28 AM


