Trends and Transients


Overview

Each year there are more new technologies to keep track of, more ways to organise your life and your company’s information, more ways to communicate. This session will introduce you to new technologies, discuss older, under-appreciated technologies, and entertain you at the same time. Our expert speakers will debate current issues and technologies, giving you the benefit of their wide experience and differing points of view, so you can decide for yourself which technologies will meet your needs and which are a waste of your time.

Speakers for 2012 include Dr. Marc Hadley, Eric van der Vlist, and Tom Scott, as well as Faculty Board member Priscilla Walmsley.

Classes for 2012

Should you write XML Schemas by hand?

Taught by Eric van der Vlist

XML schema languages are wonderful tools for validating XML documents, but most people find them difficult to read, and even more difficult to write.

Many of my customers know fairly well what they want to express, but they do not know XML schema languages well enough to capture their data models in any of them. They usually come to me with some kind of description of their models (documentation, Excel spreadsheets, XML samples, UML models, …) and, instead of writing the schemas for them or giving them XML Schema training, we usually choose to generate schemas from these descriptions.

In this talk, I’ll give several examples and describe the benefits of using custom high-level descriptions to generate XML schemas.
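The talk itself is tool-agnostic, but a minimal sketch helps make the idea concrete. The short Python script below (the input format and field names are invented for this example) turns a spreadsheet-like description of a record into a W3C XML Schema:

# A minimal sketch of schema generation from a high-level description.
# The "model" structure here is hypothetical; a real project would start
# from whatever the customer already has (a spreadsheet, a UML model,
# sample documents, ...) rather than from this ad hoc format.
model = {
    "root": "book",
    "fields": [
        # (element name, XSD type, required?)
        ("title", "xs:string", True),
        ("author", "xs:string", True),
        ("published", "xs:date", False),
    ],
}

def to_xsd(model):
    lines = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">',
        f'  <xs:element name="{model["root"]}">',
        '    <xs:complexType>',
        '      <xs:sequence>',
    ]
    for name, xsd_type, required in model["fields"]:
        occurs = '' if required else ' minOccurs="0"'
        lines.append(f'        <xs:element name="{name}" type="{xsd_type}"{occurs}/>')
    lines += [
        '      </xs:sequence>',
        '    </xs:complexType>',
        '  </xs:element>',
        '</xs:schema>',
    ]
    return '\n'.join(lines)

print(to_xsd(model))

The point is not this particular script but the division of labour: the customer maintains the high-level description, and the unforgiving schema syntax is generated from it.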

JavaScript, JSON and “Big Data” Analytics

Taught by Dr. Marc Hadley

From its early days as a means of scripting Web pages, JavaScript has emerged as a “serious” programming language used for a wide variety of server-side tasks. An offshoot data serialization format, JSON, is now often used instead of XML, and recently we’ve seen new languages that compile down to JavaScript come to the fore.

In this talk we’ll present a case study of a hybrid XML- and JSON-based project that uses a polyglot programming approach to solve some big data analytics problems. Along the way we’ll explain the map-reduce algorithm, weigh the trade-offs between JSON and XML, and explore some polyglot programming techniques.
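To give a flavour of the map-reduce part, here is a minimal sketch over JSON records (the records and field names are invented, and Python stands in for whichever languages the project actually uses):

# Map-reduce in miniature: total the "bytes" field per user across a
# collection of JSON records.
import json
from collections import Counter
from functools import reduce

records = [json.loads(line) for line in (
    '{"user": "alice", "bytes": 120}',
    '{"user": "bob",   "bytes": 300}',
    '{"user": "alice", "bytes": 80}',
)]

# Map: emit a (key, value) pair from each record.
mapped = [(r["user"], r["bytes"]) for r in records]

# Reduce: combine the values that share a key.
def combine(acc, pair):
    key, value = pair
    acc[key] += value
    return acc

totals = reduce(combine, mapped, Counter())
print(totals)  # Counter({'bob': 300, 'alice': 200})

At scale, the mapped pairs are partitioned by key across many machines before being reduced, which is what makes the pattern attractive for big data workloads.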

Academic publishing — becoming digitally native

Taught by Tom Scott

While the Web has created new business models and significantly increased the number and range of articles that can be published, scientific publishing has remained largely unchanged. Scientific discoveries are still published in journal articles, where the article is a review, a piece of metadata if you will, of the scientists’ research.

Content might be distributed over HTTP, but what is distributed is still, in essence, a print journal delivered over the Web. Little has changed since 1665: the primary objects, the things a publisher publishes, remain the article, the issue and the journal.

In this talk I will discuss how reframing the problem — looking at what scientific communication is trying to achieve and what is possible with Web technologies — presents us with an alternative approach. I will also present a case study demonstrating some of what is possible when we consider academic publishing from this perspective.

Canonical Modeling: NIEM and beyond

Taught by Priscilla Walmsley

Canonical models are standardized vocabularies for XML interchange, designed by an organization or group of organizations to ease integration and promote reuse and interoperability. The U.S. National Information Exchange Model (NIEM) provides an example of a canonical model: a 6000-element XML vocabulary used by U.S. government entities and their information-sharing partners at the federal, state and local levels.

Designing XML vocabularies with modular, reusable components is, of course, nothing new. But new techniques and tools are emerging that allow organizations to formalize and manage large canonical models expressed in XML.
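The reusable-component idea is easy to show in miniature. The sketch below uses a toy vocabulary invented for illustration (nothing like the real 6000-element NIEM) and assumes the lxml library; it defines one PersonType, reuses it in two places, and validates a small instance document against the result:

# A named type defined once and reused wherever a person appears,
# in the spirit of (but vastly smaller than) a canonical model.
from lxml import etree

xsd = b"""<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- A reusable component, defined once... -->
  <xs:complexType name="PersonType">
    <xs:sequence>
      <xs:element name="FullName" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
  <!-- ...and reused wherever a person appears in an exchange. -->
  <xs:element name="Exchange">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Applicant" type="PersonType"/>
        <xs:element name="Witness" type="PersonType" minOccurs="0"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

schema = etree.XMLSchema(etree.fromstring(xsd))
doc = etree.fromstring(
    b"<Exchange><Applicant><FullName>Ada Lovelace</FullName></Applicant></Exchange>"
)
print(schema.validate(doc))  # True

A canonical model applies the same move at organizational scale: shared components are defined once, in one governed place, and every exchange schema reuses them.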

Using NIEM and other examples, this session will describe the benefits of developing canonical models and the techniques used to deploy them. It will also provide guidance on some of the associated challenges, including allowing for customization, maintaining interoperability among differing subsets, and handling versioning.