Our people weigh in on the issues of the day.
Blue Slate's people think a lot about the challenges facing their industries today. In the process, they often come up with completely unexpected slants on current issues, or new ways of thinking about business problems. Bluespeak is where they share those thoughts. Feel free to read and reflect.
[Any views or opinion represented in this blog are personal and belong solely to the blogger and do not represent those of Blue Slate Solutions.]
The graphic depicting the Cognitive Corporation™ does not highlight the use of semantic technology. Semantic technology serves two key roles in the Cognitive Corporation™ – data storage (part of Know) and data integration, which connects all of the concepts. I’ll explore the integration role, since it is a vital part of supporting a learning organization.
In my last post I talked about the fact that integration between components has to be based on the meaning of the data, not simply passing compatible data types between systems. Semantic technology supports this need through its design. What key capabilities does semantic technology offer in support of integration? Here I’ll highlight a few.
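To make the idea concrete before diving into capabilities, here is a minimal sketch using Jena (the namespace, resource URIs and property names are hypothetical, not from any real system): two applications publish statements about the same resource using a shared ontology, so integration becomes a union of graphs rather than point-to-point mapping code.

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.vocabulary.RDF;

public class MeaningBasedIntegration {

    // Shared ontology namespace both systems have agreed on (hypothetical URI).
    static final String ONT = "http://example.com/ontology/customer#";

    public static void main(String[] args) {
        // Statements produced by a CRM system.
        Model crm = ModelFactory.createDefaultModel();
        Resource cust = crm.createResource("http://example.com/id/customer/42");
        cust.addProperty(RDF.type, crm.createResource(ONT + "Customer"));
        cust.addProperty(crm.createProperty(ONT, "name"), "Jane Smith");

        // Statements produced by a billing system. Because it uses the same
        // resource URI and ontology terms, its data refers to the same concept.
        Model billing = ModelFactory.createDefaultModel();
        billing.createResource("http://example.com/id/customer/42")
               .addProperty(billing.createProperty(ONT, "outstandingBalance"),
                            billing.createTypedLiteral(125.50));

        // Shared semantics make integration a simple union of graphs.
        Model integrated = crm.union(billing);
        integrated.write(System.out, "TURTLE");
    }
}
```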
I am excited to share the news that Blue Slate Solutions has kicked off a formal innovation program, creating a lab environment which will leverage the Cognitive Corporation™ framework and apply it to a suite of processes, tools and techniques. The lab will use a broad set of enterprise technologies, applying the learning organization concepts implicit in the Cognitive Corporation’s™ feedback loop.
I’ve blogged a couple of times (see references at the end of this blog entry) about the Cognitive Corporation™. The depiction has changed slightly but the fundamentals of the framework are unchanged.
The focus is to create a learning enterprise, where the learning is built into the system integrations and interactions. Enterprises have been investing in these individual components for several years; however, they have not truly been integrating them in a way that promotes learning.
By “integrating” I mean allowing the system to understand the meaning of the data being passed between them. Creating a screen in a workflow (BPM) system that presents data from a database to a user is not “integration” in my opinion. It is simply passing data around. This prevents the enterprise ecosystem (all the components) from working together and collectively learning.
I liken such connections to my taking a hand-written note in a foreign language, which I don’t understand, and typing the text into an email for someone who does understand the original language. Sure, the recipient can read it, but I, representing the workflow tool passing the information from database (note) to screen (email) in this case, have no idea what the data means and cannot possibly participate in learning from it. Integration requires understanding. Understanding requires defined and agreed-upon semantics.
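As a small sketch of the difference (Jena again; the property and URIs are illustrative): the first fragment below passes an opaque string, while the second attaches agreed-upon semantics that any downstream component sharing the ontology can interpret.

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class OpaqueVersusSemantic {
    public static void main(String[] args) {
        // Opaque hand-off: the workflow tool moves a bare string from a
        // database column to a screen. Downstream systems cannot know what
        // it means -- a date? a version number? a policy identifier?
        String fieldValue = "2010-09-21";

        // Semantic hand-off: the same value published as a typed statement
        // against an agreed-upon ontology property (hypothetical URI).
        String ONT = "http://example.com/ontology/policy#";
        Model m = ModelFactory.createDefaultModel();
        m.createResource("http://example.com/id/policy/7")
         .addProperty(m.createProperty(ONT, "effectiveDate"),
                      m.createTypedLiteral(fieldValue,
                              "http://www.w3.org/2001/XMLSchema#date"));

        // Any component that shares the ontology can now "read the note":
        // the value is known to be a policy's effective date.
        m.write(System.out, "N-TRIPLE");
    }
}
```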
This is just one of the Cognitive Corporation™ concepts that we will be exploring in the lab environment. We will also be looking at the value of these technologies within different horizontal and vertical domains. Given our expertise in healthcare, finance and insurance, our team is well positioned to use the lab to explore the use of learning BPM in many contexts. [Read More]
Semantic Technology and Business Conference, East 2011 – Reflections
I had the pleasure of attending the Semantic Technology and Business Conference in Washington, DC last week. I have a strong interest in semantic technology and its capabilities to enhance the way in which we leverage information systems. There was a good selection of topics discussed by people with a variety of backgrounds working in different verticals.
To begin the conference I attended the half-day “Ontology 101” session presented by Elisa Kendall and Deborah McGuinness. They indicated that this presentation has been given at each semantic technology conference and that interest remains strong, the implication being that new people continue to want to understand this art.
Their material was very useful, and if you are looking to get a grounding in ontologies (what are they? how do you go about creating them?) I recommend attending this session the next time it is offered. Both leaders clearly have deep experience and expertise in this field. Also, the discussion was not tied to a specific technology (e.g. RDF), so it was applicable regardless of underlying implementation details.
I wrapped up the first day with Richard Ordowich, who discussed the process of reverse engineering semantics (meaning) from legacy data. The goal of such projects is to harmonize the meaning of information across the enterprise.
A point he stressed was that a business really needs to be ready to start such a journey. This type of work is very hard and very time-consuming, and it requires enterprise-wide discipline. He suggests that before working with a company on such an initiative, one should ask for examples of prior enterprise program successes (e.g. in areas like BPM or SDLC).
Fundamentally, a project that seeks to harmonize the meaning of data across an enterprise requires organizational readiness that goes beyond project execution. The enterprise must put effective governance in place to operate and maintain the resulting ontologies, taxonomies and metadata.
The full conference kicked off the following day. One aspect that jumped out at me was that a lot of the presentations dealt with government-related projects. This could have been a side effect of the conference being held in Washington, DC, but I think it is more indicative that spending on this technology is weighted more heavily toward the public sector than private industry.
Because so many projects were government-centric, I found claims of “value” suspect. A project can be valuable, or show value, without being cost-effective. Commercial businesses have gone bankrupt even though they delivered value to their customers. More exposure of positive-ROI commercial projects will be important to help accelerate the adoption of these technologies.
Financial questions aside, the presentations were extremely valuable for the lessons learned, best practices and in-depth tool discussions they offered. I’ll highlight a few of the sessions and key thoughts that I believe will assist as we continue to apply semantic technology to business system challenges. [Read More]
Creating a SPARQL Endpoint Using Joseki
As a consumer of semantic data, I thought creating a SPARQL endpoint would be an interesting exercise. It would require having some data to publish as well as working with a SPARQL library. For data, I chose a set of mileage information that I have been collecting on my cars for the last five years. For technology, I decided to use the Joseki SPARQL Server, since I was already using Jena.
For those who want to skip the “how” and see the result, the SPARQL endpoint along with sample queries and a link to the ontology and data is at: http://monead.com/semantic/query.html
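If you would rather hit the endpoint programmatically than through the query page, a client along these lines should work (a sketch using Jena’s ARQ; the service URL and the property names are my assumptions, so substitute the actual endpoint address and identifiers from the page above):

```java
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.ResultSetFormatter;

public class MileageEndpointClient {
    public static void main(String[] args) {
        // Illustrative service URL -- use the actual endpoint address given
        // on the query page above.
        String service = "http://monead.com/semantic/sparql";

        // The prefix and property names echo the mileage ontology described
        // below; they are my shorthand, not the published identifiers.
        String sparql =
            "PREFIX auto: <http://monead.com/semantic/ontology/auto#> " +
            "SELECT ?purchase ?miles ?gallons WHERE { " +
            "  ?purchase auto:milesTraveled ?miles ; " +
            "            auto:gallonsUsed   ?gallons . " +
            "} LIMIT 10";

        QueryExecution qe = QueryExecutionFactory.sparqlService(service, sparql);
        try {
            ResultSet results = qe.execSelect();
            ResultSetFormatter.out(System.out, results);
        } finally {
            qe.close();
        }
    }
}
```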
The first step in this project was to convert my mileage spreadsheets into triples. I looked briefly for an existing ontology in the automobile domain but didn’t find anything I could use. I created an ontology that would reflect my approach to recording automobile mileage data. My data records the miles traveled between fill-ups as well as the number of gallons used. I also record the car’s claimed MPG and calculate the actual MPG.
The ontology reflects this perspective of calculating the MPG at each fill-up. This means that the purchase of gas is abstracted to a class with information such as miles traveled, gallons used and date of purchase as attributes. I abstracted the gas station and location as classes, assuming that over time I might be able to flesh these out (in the spreadsheet I record the name of the station and the town/state).
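The published ontology is linked below; as a rough sketch of its shape (built with Jena’s ontology API — the class, property and namespace names here are my paraphrase, not necessarily the published identifiers):

```java
import com.hp.hpl.jena.ontology.DatatypeProperty;
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.vocabulary.XSD;

public class MileageOntologySketch {
    public static void main(String[] args) {
        String NS = "http://monead.com/semantic/ontology/auto#"; // illustrative
        OntModel ont = ModelFactory.createOntologyModel();

        // The gas purchase is the central abstraction; station and location
        // are separate classes so they can be fleshed out over time.
        OntClass purchase = ont.createClass(NS + "GasPurchase");
        OntClass station  = ont.createClass(NS + "GasStation");
        OntClass location = ont.createClass(NS + "Location");

        DatatypeProperty miles = ont.createDatatypeProperty(NS + "milesTraveled");
        miles.setDomain(purchase);
        miles.setRange(XSD.xdouble);

        DatatypeProperty gallons = ont.createDatatypeProperty(NS + "gallonsUsed");
        gallons.setDomain(purchase);
        gallons.setRange(XSD.xdouble);

        ObjectProperty purchasedAt = ont.createObjectProperty(NS + "purchasedAt");
        purchasedAt.setDomain(purchase);
        purchasedAt.setRange(station);

        ObjectProperty locatedIn = ont.createObjectProperty(NS + "locatedIn");
        locatedIn.setDomain(station);
        locatedIn.setRange(location);

        ont.write(System.out, "RDF/XML-ABBREV");
    }
}
```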
A trivial Java program converts my spreadsheet (CSV) data into triples matching the ontology. I then run the ontology and data through Pellet to derive any additional triples from the ontology. The entire ontology and current data are available at http://monead.com/semantic/data/HybridMileageOntologyAll.Inferenced.xml.
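In rough form, the conversion and inference steps look something like the following (a sketch only: the CSV layout, file names and namespace are assumptions, and the Pellet call uses the standard Pellet-Jena binding):

```java
import java.io.BufferedReader;
import java.io.FileReader;

import org.mindswap.pellet.jena.PelletReasonerFactory;

import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class CsvToTriples {
    public static void main(String[] args) throws Exception {
        String NS = "http://monead.com/semantic/ontology/auto#"; // illustrative
        Model data = ModelFactory.createDefaultModel();
        Property miles   = data.createProperty(NS, "milesTraveled");
        Property gallons = data.createProperty(NS, "gallonsUsed");

        // Assumed CSV layout: date,miles,gallons,... with a header row.
        BufferedReader in = new BufferedReader(new FileReader("mileage.csv"));
        in.readLine(); // skip the header row
        String line;
        int row = 0;
        while ((line = in.readLine()) != null) {
            String[] cols = line.split(",");
            Resource purchase = data.createResource(NS + "purchase" + (++row));
            purchase.addProperty(miles,
                    data.createTypedLiteral(Double.parseDouble(cols[1])));
            purchase.addProperty(gallons,
                    data.createTypedLiteral(Double.parseDouble(cols[2])));
        }
        in.close();

        // Run Pellet over ontology + data to materialize inferred triples.
        Model ontology = ModelFactory.createDefaultModel()
                .read("file:HybridMileageOntology.owl"); // illustrative name
        InfModel inferred = ModelFactory.createInfModel(
                PelletReasonerFactory.theInstance().create(),
                ontology.union(data));
        inferred.write(System.out, "RDF/XML");
    }
}
```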
It turns out that the ontology creation and data conversion were the easy parts of this project. Getting Joseki to work as desired took some time, mostly because I couldn’t find much documentation for deploying it as a servlet rather than using its standalone server feature. I eventually downloaded the Joseki source in order to understand what was going wrong. The principal issue is that Joseki doesn’t seem to understand the WAR environment and relative paths (e.g. relative to its own WAR).
I had two major PATH issues: 1) getting Joseki to find its configuration (joseki-config.ttl); and 2) getting Joseki to find the triple store (in this case a flat file). [Read More]
Semantic Web Summit (East) 2010 Concludes
I attended my first semantic web conference this week, the Semantic Web Summit (East) held in Boston. The focus of the event was how businesses can leverage semantic technologies. I was interested in what people were actually doing with the technology. The one and a half days of presentations were informative and diverse.
Our host was Mills Davis, a name that I have encountered frequently during my exploration of the semantic web. He did a great job of keeping the sessions running on time as well as engaging the audience. The presentations were generally crisp and clear. In some cases the speaker presented a product that utilizes semantic concepts, describing its role in the value chain. In other cases we heard about challenges solved with semantic technologies.
My major takeaways were: 1) semantic technologies work and are being applied to a broad spectrum of problems, and 2) the potential business applications of these technologies are vast and ripe for creative minds to explore. This all bodes well for people delving into semantic technologies: there is an infrastructure of tools and techniques to build upon, along with broad opportunities to benefit from applying them. [Read More]
JavaOne and Oracle’s OpenWorld 2010 Conference, Initial Thoughts
I’ve been at Oracle’s combined JavaOne and OpenWorld events for two days. I am here both as an attendee, learning from a variety of experts, and as a speaker. Of course this is the first JavaOne since Oracle acquired Sun. I have been to several JavaOne conferences over the years, so I was curious how the event might be different.
One of the first changes that I’ve noticed is that, due to the co-location of these two large conferences, the venue is very different from when Sun ran JavaOne as a standalone event. The time between sessions is a full half hour, probably because you may find yourself going between venues that are several blocks apart. I used to think that getting from Moscone North to Moscone South took a while. Now I’m walking from the Moscone Center to a variety of hotels and back again. Perhaps this is actually a health regimen for programmers!
The new session pre-registration system is interesting. I don’t know if this system has been routine at Oracle’s other conferences, but it is new to JavaOne. Attendees go online and pre-register for the sessions they want to attend. When you show up at the session your badge is scanned. If you pre-registered, you are allowed in. If you didn’t pre-register and the session is full, you have to wait outside the room to see if anyone who registered fails to show up.
I think I like the system, assuming that they stop people from entering once the room is full. At previous conferences popular sessions would simply become standing room only, which was probably a violation of fire codes. The big advantage of this approach is that it reduces the likelihood of investing the time to walk to the venue only to find out you can’t get in. As long as you arranged your schedule online and show up on time, you’re guaranteed a seat.
Enough about new processes. After all, I came here to co-present a session and to learn from a variety of others.
Paul Evans and I spoke on the topic of web services and their use with a rules engine. Specifically we were using JAX-WS and Drools. We also threw in jUDDI to show the value of service location decoupling. The session was well attended (essentially the room was full) and seemed to keep the attendees’ attention. We had some good follow-up conversations regarding aspects of the presentation that caught people’s interest, which is always rewarding. The source code for the demonstration program is located at http://bit.ly/blueslate-javaone2010.
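The actual demonstration source is at the link above; as a hedged sketch of the overall shape (a JAX-WS service front-ending a Drools 5 knowledge base — the service name, rules file and fact class here are illustrative, not the code we presented):

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatelessKnowledgeSession;

@WebService
public class RuleService {
    private final KnowledgeBase kbase;

    public RuleService() {
        // Compile the rules once at startup (rules.drl is illustrative).
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"),
                ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
    }

    @WebMethod
    public String evaluate(String applicantName, int age) {
        // Each request gets a throwaway stateless session.
        StatelessKnowledgeSession session = kbase.newStatelessKnowledgeSession();
        Applicant applicant = new Applicant(applicantName, age);
        session.execute(applicant); // rules may set the decision field
        return applicant.decision;
    }

    public static class Applicant {
        public String name;
        public int age;
        public String decision = "undetermined";
        public Applicant(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        // Publish with the JDK's built-in JAX-WS runtime for local testing.
        Endpoint.publish("http://localhost:8080/rules", new RuleService());
    }
}
```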
Since I am a speaker I have access to both JavaOne and OpenWorld sessions. I took advantage of that by attending several OpenWorld sessions in addition to a bunch of JavaOne talks.[Read More]