I spent the last several weeks working on a prototype application I'm calling RDFBurner. The purpose of this application is to provide a user interface that allows for the rapid composition of triples. The triples will be parsed and used to compose RDF. The user interface provides four entries:
- Domain
- Subject
- Predicate
- Object
The idea is that you type a new Domain in or choose an existing one from a list, then type or choose a Subject, Predicate and Object, and save that to create a new triple. Below the entries, you'll see a list of previously entered triples. The intent behind the user interface is to provide instant visual feedback on your work: seeing the triples laid out in a tabular fashion gives you many clues about what sorts of triples you would need to describe your particular domain.
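In code, a triple entry is just a small record. Here is a minimal Python sketch; the `Triple` class and the sample data are my own illustration, not taken from the prototype:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """One row in the triple table: a statement scoped to a domain."""
    domain: str
    subject: str
    predicate: str
    obj: str  # "object" shadows a Python builtin, so shortened here

# The tabular view is then just a list of these records.
store = [
    Triple("Cooking", "Recipe", "hasIngredient", "Flour"),
    Triple("Cooking", "Recipe", "hasStep", "Mixing"),
]

# Instant feedback: filter the table by the currently selected domain.
cooking = [t for t in store if t.domain == "Cooking"]
```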
During the construction of the prototype, I became frustrated with the complexity of the code. Just from developing the user interface, I could already see the code turning into a tangled web of gibberish. This is a fairly typical result when coding to Microsoft's "Event Driven" programming paradigm. The idea of events letting your program know that something has occurred, such as a key press or a record selection, is on the surface a good idea. But in practice, if you're not careful (and Lord knows I'm careful, but I want to see results quickly, so I'm not that careful), you end up with a big mess. Crafting OOP classes helps to some degree, but in general the paradigm leads to sloppy, difficult-to-follow code.
So, once again, I began to think about the challenge of managing the complexities of a user interface. I realized that it might be possible to de-couple the intent of the user from the features in the program, thereby adding some much-needed structure. If a user gestures by pressing a key or clicking a record in a grid, I could simply make a note of it and move on. Later, a timer in the program would wake up periodically and process these gestures. When an event fires, the logic would be extremely simple, as the only task is to queue up the user's intent. Features in the program would be simplified as well, since they could be coded as very atomic, local-scope processes. The approach boils down to three steps:
- When an event fires such as Control_KeyDown or Form_SelectionChange, create a Gesture object that represents the intent of the user’s action.
- Using a map, translate that Gesture object into one or more program feature invocation objects.
- Send an invocation object to each active form/control that supports those features.
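The three steps above can be sketched language-neutrally. Here is a minimal Python version of the pipeline; all the names (`log_gesture`, `GESTURE_TO_FEATURES`, and so on) are hypothetical stand-ins for the VB6 objects shown in the snippets that follow:

```python
from collections import deque

gesture_queue = deque()  # global queue, filled by event handlers

def log_gesture(name, detail=""):
    """Step 1: an event handler records intent and returns immediately."""
    gesture_queue.append((name, detail))

# Step 2: the gesture-to-feature map (kept as data, so it could later
# be loaded from a triple store instead of being hard-coded).
GESTURE_TO_FEATURES = {
    ("Subject_Entry_Tab", "0"): ["MoveFocusNext"],
    ("Subject_Entry_Arrow", "Down"): ["MoveFocusDown"],
}

# Step 3: which forms support which feature, and each form's own queue.
FEATURE_SUPPORT = {"MoveFocusNext": ["MainForm"], "MoveFocusDown": ["MainForm"]}
form_feature_queues = {"MainForm": deque()}

def process_gestures():
    """Timer tick: drain gestures, fan features out to supporting forms."""
    while gesture_queue:
        gesture = gesture_queue.popleft()
        for feature in GESTURE_TO_FEATURES.get(gesture, []):
            for form in FEATURE_SUPPORT.get(feature, []):
                form_feature_queues[form].append(feature)

log_gesture("Subject_Entry_Tab", "0")
process_gestures()
```

The point of the sketch is the decoupling: the event handler knows nothing about focus movement, and the feature code knows nothing about keyboards.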
The code snippet below shows that during an event, we simply queue up a new Gesture object (that’s what LogGesture does). Presumably, some program feature is going to know what to do with the mouse and keyboard focus so the event framework’s key press is suppressed by setting the KeyCode equal to zero.
Private Sub Entry_KeyDown(KeyCode As Integer, Shift As Integer)
    Select Case KeyCode
        Case 9 ' Tab key
            LogGesture Me.Tag & "_Entry_Tab", CStr(Shift)
            KeyCode = 0 ' Suppress the framework's default key handling
        Case 40 ' Arrow key down
            LogGesture Me.Tag & "_Entry_Arrow", "Down"
            KeyCode = 0
        Case 38 ' Arrow key up
            LogGesture Me.Tag & "_Entry_Arrow", "Up"
            KeyCode = 0
    End Select
End Sub
Later, a timer event fires and processes the Gesture objects waiting in the queue. In my prototype, I have the timer set to 50 msec, which gives the application a nice snappy feel. From the code snippet below, you should be able to see that each Gesture is translated into one or more Features, each Feature is supported by one or more forms, and each of those forms is notified that the feature needs to be invoked.
Private Sub Form_Timer()
    ' Translate global Gestures into Features
    Dim pGesture As Gesture, pFeature As Feature, pFeatureQueue As FeatureQueue
    Dim pFormWithFeature As FormWithFeature, pFormsWithFeatureList As FormsWithFeatureList
    For Each pGesture In globalGestures ' Process Gestures loop
        Set pFeatureQueue = TranslateGestureIntoFeatures(pGesture)
        If Not (pFeatureQueue Is Nothing) Then
            For Each pFeature In pFeatureQueue
                Set pFormsWithFeatureList = GetFormsThatSupportFeature(pFeature)
                If Not (pFormsWithFeatureList Is Nothing) Then
                    For Each pFormWithFeature In pFormsWithFeatureList ' For every form that supports the feature
                        pFormWithFeature.Form.FeatureQueue.Add pFeature ' Queue up the feature - note this is a form-level queue, unlike the global GestureQueue
                        pFormsWithFeatureList.Remove pFormWithFeature ' We're done, so remove the form from the list
                    Next
                End If
                Set pFeature = Nothing
            Next
        End If
    Next

    ' Check for any incoming feature requests
    For Each pFeature In FeatureQueue ' Process incoming Features loop
        DispatchFeature pFeature ' Execute the subroutine associated with the feature
    Next
End Sub
Finally, the code above runs in what I consider the "master" form, which has its own queue of feature requests. The queues here are very simple. DispatchFeature does nothing but check the name of the Feature object and call the subroutine with the same name.
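That name-matching convention (in VB6 this could be done with CallByName) can be sketched in Python with `getattr`; the class and feature names here are hypothetical:

```python
class MasterForm:
    """Sketch of dispatch-by-name: the feature's name is the method's name."""

    def __init__(self):
        self.feature_queue = ["MoveFocusNext"]
        self.log = []

    def MoveFocusNext(self):
        """A feature subroutine, named to match its Feature object."""
        self.log.append("focus moved")

    def dispatch_feature(self, feature_name):
        # Look up the method whose name matches the feature, then call it.
        handler = getattr(self, feature_name, None)
        if handler is not None:
            handler()

form = MasterForm()
for feature in form.feature_queue:
    form.dispatch_feature(feature)
```

The convention keeps the dispatcher trivial: adding a feature means adding one method, with no registration code.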
What about RDF?
Because I am building an RDF composition utility, it makes sense to encode the gesture-to-feature map in RDF and use SPARQL to retrieve (parts of) the map at runtime. I've not done this yet in the current prototype. The purpose of the RDFBurner prototype is to compose triples that will be converted into RDF, a task which I have yet to program. The gesture-to-feature mapping is currently implemented as a SQL query against my rather simple triple store. Once I start actually generating RDF with the utility, I'll refactor the translator portion to use RDF and SPARQL.
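To illustrate what that refactoring might look like, here is a toy triple-store lookup in Python alongside the SPARQL it would eventually become; the prefixes and predicate names (`ex:triggersFeature`, `ex:supportedBy`) are invented for the example:

```python
# A toy in-memory triple store holding the gesture-to-feature map.
MAP_TRIPLES = [
    ("gesture:Subject_Entry_Tab", "ex:triggersFeature", "feature:MoveFocusNext"),
    ("feature:MoveFocusNext", "ex:supportedBy", "form:MainForm"),
]

def features_for(gesture):
    """The SQL-style lookup: match the pattern (gesture, triggersFeature, ?f)."""
    return [o for s, p, o in MAP_TRIPLES
            if s == gesture and p == "ex:triggersFeature"]

# The same lookup expressed as the SPARQL query it would become:
SPARQL = """
SELECT ?feature WHERE {
  gesture:Subject_Entry_Tab ex:triggersFeature ?feature .
}
"""

print(features_for("gesture:Subject_Entry_Tab"))
```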
Inserting an RDF map into the event logic of a typical application has some interesting implications:
- The resulting RDF could possibly be combined with the mapping RDF of other applications, implying the composition of a type of mash-up that works at the user-interaction level.
- Events are not the same as Gestures. For instance, when a user presses the up-arrow key while the focus is on the top row of a grid, the Event might be evntGrid_KeyPressed(UpArrow), while the gesture, or intent, of the end user is to move the focus to some other control above the grid, i.e. gstrGrid_Leave(Up). An Event is a hardware/form/control kind of thing. A Gesture may be triggered by an Event but represents what the user really wants to do.
- It is highly likely that there should be a translation mapping, in RDF, between Events and Gestures.
- Once the mapping is in RDF, it may be possible to infer patterns and sets of patterns, especially if the RDF is coaxed into OWL. We could use such inferred knowledge to generate, at minimum, a skeleton of the programming logic. Obviously, other parts of a given application would make use of RDF knowledge bases, and the combination of the classes involved could lead to an Ontology Driven Design.
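The Event-versus-Gesture distinction in the second bullet can be made concrete with a small, hypothetical translation function, where the same key press yields different gestures depending on where the focus is:

```python
def translate_event(control, event, state):
    """Context-sensitive Event -> Gesture translation (hypothetical rules).

    The same physical event means different things in different contexts,
    which is exactly why the translation deserves its own mapping layer.
    """
    if control == "Grid" and event == "KeyPressed(UpArrow)":
        if state["row"] == 0:
            return "gstrGrid_Leave(Up)"   # user wants to leave the grid upward
        return "gstrGrid_MoveRow(Up)"     # user wants the previous row
    return None  # no gesture recognized for this event

# Top row: the intent is to leave the grid, not to move within it.
print(translate_event("Grid", "KeyPressed(UpArrow)", {"row": 0}))
```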