
Health Information Technology (Part II) — The End User

Written for Unlimited Priorities and DCLnews Blog.

Debra Spruill

With people accustomed to easy access to information, and with automation of health records a linchpin of controlling healthcare costs, you would think there would be more progress. Technologically, there is. But maintaining computerized health care records has its own set of issues, many of them non-technological. Aside from privacy, there are additional factors such as the variety of sources for a person’s information, the subjectivity of much of that information, the value of including handwritten notations, and the reluctance to share information fully between doctor and patient. These are the issues Debra Spruill discusses in the wide-ranging second part of Health Information Technology.

A recap — In the first article of this two-article series, the focus was a comparison of the impact Health Information Technology is having on the medical community and how it parallels the similar revolution for libraries that began in the 1960s. I revisited the rise of BRS, Bibliographic Retrieval Services, from its beginnings in 1968 at SUNY Albany as the Biomedical Communications Network (BCN). Then I reviewed SDC, Systems Development Corporation, and how it evolved from a government contract with the United States Office of Education to disseminate educational information (ERIC). SDC later developed ORBIT, and NLM adopted it for its MEDLINE product. And in 1972, Dialog became a commercial online service with its strength in the science field.i All these developments served as the roots of what became known as the information industry and changed the library world forever.

I went on to demonstrate that the healthcare community had much in common with the library community. Both provide services to a varied base — libraries serve public, special, government, special-needs, and private organizations; healthcare providers serve groups, large regional organizations, clinics, mobile services, special-needs populations, and more. Ultimately both industries serve the needs of individuals — whether they be patients or patrons, and regardless of how their needs are presented to the respective organization. In healthcare, a patient may walk into a physician’s office or clinic, be referred by another practitioner, arrive through an emergency admission, or enroll in a clinical trial. A library may have a patron walk in, telephone, send an e-mail inquiry, locate their collection through an Internet portal or service, be referred through a 24/7 service, or be an instructor needing assistance with study aid development. It is the similarity in serving the end-user/patron/patient that we will explore in this article.

In addition to the diversity of organizations, I explored the paradigm shift of long-standing services and time-tested methods being uprooted by new methods and/or technologies. I raised the concerns of professionals whose skillsets had to be modified and sometimes augmented with new skills and tools, specifically as they related to technologies and methodologies. Education and training programs had to be overhauled to meet the new demands. And they continue to require review as new mechanisms emerge, e.g. social networking, mobile platforms, and tablets.

In closing I touched on the topic of privacy — one that proved a critical issue in the information community and is certainly a concern in the healthcare community. It is here where we pick up.

The End-User — Call It Patron or Patient

While the library serves patrons and the healthcare community serves patients, both ultimately approach their client base the same way: as the end-user. In other words, the patron or patient is who they ultimately aim to satisfy.

When considering the end-user, the library community was challenged to recognize that the tools developed for their profession were not necessarily those for patron use. These tools, in fact, were developed for, and often by, the professional to access and record information and, generally, to answer a patron inquiry. A patron would come in, call, or send a request stating what they wanted; the process was very results-oriented. The patron did not presume to know which tool or resource was best, nor did they necessarily care how the answer or solution was provided. Their interest was in getting the right answer and receiving it in a timely fashion. So if the patron was interested in learning what new materials were being developed for a given technology, for instance what plastics are now being used for kitchen appliances, they would simply say to the librarian, “I’m interested in finding out what plastics are being used with kitchen appliances.” They might or might not indicate whether the appliance in question was for a professional restaurant, for marine use, or for the home. And even in the home, is it for a single-family house or for a mobile home?

This type of information would generally be defined during the interview process with the professional librarian. Some may remember what these interviews were like: an iterative process the librarian and other reference professionals used to determine, with specificity, what the patron really wanted. The interview enabled the librarian to recognize which resources would best meet the inquiry’s needs, whether those resources were in their own collection or would need to be borrowed from another library, and how quickly the question could be answered. The method was refined over the years and became part of the reference desk tool-kit, often with specific written instructions to assist those on duty.

With the advent of early online tools it became even more necessary for the librarian to work with the patron to determine exactly what was requested. Why? Because the new tools were not inexpensive, and they demanded familiarity with the database(s) and search mechanisms to achieve results. Unlike today’s browser tools, one could not simply type in a series of terms to search. Boolean logic, combined with each service’s unique search syntax, might require constructing separate search instructions for each database. In fact, it was not uncommon for the same database to have different fields available depending upon the service providing it.
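To illustrate the point, here is a small, purely hypothetical sketch of why the same request had to be tailored to each service; the field codes, operators, and service names below are invented for illustration and are not the actual BRS or Dialog command languages.

```python
# Hypothetical sketch: one Boolean request expressed for two imaginary services.
# Field codes, operators, and service names are invented, not real command syntax.

def build_query(term_pairs, service):
    """Combine (term, synonym) pairs with Boolean logic in a service-specific syntax."""
    if service == "service_a":
        # imaginary service: uppercase operators, parenthesized synonym groups
        return " AND ".join(f"({term} OR {syn})" for term, syn in term_pairs)
    if service == "service_b":
        # imaginary service: title-field prefix codes and symbolic operators
        return " * ".join(f"(TI={term} + TI={syn})" for term, syn in term_pairs)
    raise ValueError(f"unknown service: {service}")

term_pairs = [("plastics", "polymers"), ("kitchen appliances", "household appliances")]
for svc in ("service_a", "service_b"):
    print(svc, "->", build_query(term_pairs, svc))
```

The same information need produces two different search instructions, which is why search specialists had to know each service's conventions, not just the subject matter.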

Another major element was cost: the issues of cost control and budget monitoring weighed heavily. One did not frivolously use telecommunications, paper, and staff time. Costs had to be justified.

Patron Access to Electronic Health Records — What Does It Mean?

So what is the parallel in the field of electronic health records? What does patient access mean and what does it imply?

It means that the professional, whether doctor, nurse practitioner, or dentist, is now being placed in the position of making information available to the patient that has never been shared previously except in verbal communications. While this category of data is identified as patient information, it has actually been anything but. It has historically been the healthcare provider’s information about the patient, not readily available to the patient.

Patient information covers a very wide range. It could be the lab tests ordered, test results received, physician notes, consultation notes, opinions by the physician about the patient, consultant physician comments, and myriad other types of information. This information, while collected, has not generally been shared with the patient. If a nurse made a notation in the patient’s file overnight for the physician to read in the morning, it might never be shared with the patient; it was for the physician’s eyes only. Other than the health professionals, no one else may ever have seen the information recorded about the patient. Many discussions are now under way within the medical community about how electronic health records should change the sharing of patient information.

Physicians may be reluctant to share all their notes and observations with a patient. There is concern it could undermine a patient’s confidence in their physician. There is concern the notes could discourage or alarm patients in certain settings. Each patient may or may not be able to cope with the full force of the information held in their files. What information to share, and when, is at the core of the discussion.

In addition, the issue of information accuracy is paramount to the discussion of electronic health records. It is deemed to be the greatest challenge facing the medical profession in providing patient information.

Without question the major hurdle is accurately matching patient health information across the myriad sources from which it is derived. Determining that John Smith’s lab tests are properly assigned to the correct John Smith will be daunting. And what of names with spelling variations, or naming conventions such as Chinese name structures, in which the Western order of given name followed by family name is inverted? While it is recognized that accurately matching and providing health information for patients has benefits, such as improved patient care, improved patient safety, better efficiencies, improved fraud detection, and better data integrity, the provision of this information has unparalleled challenges.ii
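As a purely illustrative sketch, and not an algorithm endorsed by any of the HIT committees, record matching generally comes down to normalizing identifying fields and scoring their similarity; the field names, weights, and review threshold below are assumptions for demonstration only.

```python
# A purely illustrative sketch of demographic matching, not a committee-endorsed method.
# Field names, weights, and the review threshold are assumptions.
from difflib import SequenceMatcher

def normalize_name(family, given):
    """Lower-case and order-normalize a name so 'Li, Wei' and 'Wei Li' compare equally."""
    return " ".join(sorted([family.strip().lower(), given.strip().lower()]))

def match_score(a, b):
    """Weighted similarity across a few demographic fields (weights are illustrative)."""
    name_sim = SequenceMatcher(None,
                               normalize_name(a["family"], a["given"]),
                               normalize_name(b["family"], b["given"])).ratio()
    dob_sim = 1.0 if a["dob"] == b["dob"] else 0.0
    zip_sim = 1.0 if a["zip"] == b["zip"] else 0.0
    return 0.5 * name_sim + 0.35 * dob_sim + 0.15 * zip_sim

lab_result = {"family": "Smith", "given": "John", "dob": "1964-03-02", "zip": "19103"}
candidate  = {"family": "Smith", "given": "Jon",  "dob": "1964-03-02", "zip": "19103"}

print(f"match score = {match_score(lab_result, candidate):.2f}")
# Anything below a chosen threshold (say 0.9) would be routed for human review rather
# than auto-linked, since misattribution is worse than a manual check.
```

Even a toy example like this shows why data quality matters so much: a transposed name or a mistyped birth date quietly lowers the score, and no threshold choice eliminates both false matches and missed matches.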

The Department of Health and Human Services Office of the National Coordinator for Health Information Technology has a privacy and security policy committee focusing exclusively on these issues. The goal is to provide patient access to health information within four days. The objective was once provision of a patient summary; it is now provision of patient access “on demand.” However, the Health Information Technology (HIT) Standards Committee has yet to define the standards, and what constitutes relevant information remains unclear.iii

What Other Players are in the Mix?

When libraries were challenged with this world of new technologies, several players shaped how service was provided: telecommunications providers (until 1984, AT&T was the only phone companyiv), distributors such as BRS, Dialog, and SilverPlatter, publishers such as Wiley, and networks such as SOLINET and PALINET. All of these organizations affected how data was distributed and organized, and how users were trained.

So who are the other players in the complicated electronic healthcare world? There are the myriad components of the healthcare community — physicians, hospitals, clinics, pharmaceutical firms, federal, state and local governments, laboratories, public health agencies, EHR vendors, and patients. Each has a voice in how this new environment will shape up.

What are the challenges being dealt with? Medication reconciliation, submission of immunization data, drug formulary checks, drug and allergy checks, submission of reportable lab data and reconciliation with orders, clinical decision support, and exchange of clinical information.

How is the National Health Information Technology initiative organized?v It consists of Federal Advisory Committees that fall under two main umbrellas, Health IT Policy Committee and Health IT Standards Committee. Within these committees are various workgroups, such as clinical operations, privacy and security, implementation, vocabulary task force, meaningful use, information exchange, enrollment, governance, etc.

The committees are comprised of participants across the full spectrum of the healthcare community — physicians, business people, EHR vendors, healthcare unions, academia, legislators, public health agencies, nurses, hospitals, legal authorities, pharmaceutical companies, insurance companies, armed services, and clinics.

The Health IT Standards Committee fully recognizes that the challenges facing patient matching are critical. They acknowledge that perfection in matching patient information is not possible, but that every effort must be made to eliminate errors and misattribution. They concede that inaccuracy is not just a technology problem; it is also a people problem. They recognize that poor-quality data can prevent accurate matching. There is no “one-size-fits-all” solution. As data becomes further removed from its source, the challenges increase; add multiple sources of data and the challenge multiplies even more. While the use of universal identifiers would be helpful, it does not provide the final answer either.vi

Conclusion

So where will Health Information Technology (HIT) lead us? Well, I believe the genie cannot be put back into the bottle. Health Information Technology is an advance that we as a nation, as patients, as providers, and as care-givers, need. As a mobile society we need to have our health information travel as readily as we do. As a technologically savvy society, we need to have health information be current, accurate and exchangeable. This last bastion of critical care information needs to move into the 21st century with all other content. We need to realize the cost savings promised, the improvement in healthcare foreseen, and the advances in managed patient care assured.

And finally, I recommend that we all tap into the information available through The Office of the National Coordinator for Health Information Technology. There may be a role we can all play.

References

i. Bjorner, Susanne, and Stephanie C. Ardito. “Online Before the Internet: Early Pioneers Tell Their Stories.” Searcher June 2003. www.infotoday.com/searcher/jun03/ardito_bjorner.shtml (accessed October 7, 2010)

ii. U.S. Department of Health and Human Services, The Office of the National Coordinator for Health Information Technology, Health IT Policy Committee: Recommendations to the National Coordinator for Health IT, healthit.hhs.gov/portal/server.pt/community/healthit_hhs_gov__policy_recommendations/1815.

iii. “HIT Exchange discusses EHR certification with CCHIT Chair Karen Bell, MD, MMS,” EHR Decisions, ehrdecisions.com/, March 4, 2011

iv. Wikipedia contributors, “Bell System divestiture,” Wikipedia, The Free Encyclopedia, en.wikipedia.org/w/index.php?title=Bell_System_divestiture&oldid=414141380 (accessed March 15, 2011)

v. U.S. Department of Health and Human Services, The Office of the National Coordinator for Health Information Technology, healthit.hhs.gov/portal/server.pt/community/healthit_hhs_gov__home/1204

vi. U.S. Department of Health and Human Services, The Office of the National Coordinator for Health Information Technology, Health IT Policy Committee: Recommendations to the National Coordinator for Health IT, healthit.hhs.gov/portal/server.pt/community/healthit_hhs_gov__policy_recommendations/1815.

About the Author

Debra Spruill is a consultant in the field of preservation with an emphasis on digital preservation. She was recently Director, OCLC Preservation Service Centers responsible for strategic, business development, operational, and contracting for its four Centers, including on-site locations. She was also responsible for client contracts. Most recently, Ms. Spruill was named to the Library of Congress ALTO XML Schema Editorial Board. Ms. Spruill is a member of the Unlimited Priorities team.


Conference Buzz: Personal Digital Archiving

Written for Unlimited Priorities and DCLnews Blog.

I used to think that personal digital archiving meant scanning and storing family documents and photos. The Personal Digital Archiving conference in San Francisco on February 24-25 proved that although that is certainly included, the concept extends into many other areas as well. The conference venue was the fascinating headquarters of the Internet Archive (the building was once a church and has many interesting architectural features), and one would be hard put to suggest a more suitable organization to host it. The conference was very successful, and one measure of that is that there were 150 attendees—twice as many as last year.

Cathy Marshall of Microsoft Research opened the conference with a brief history and said that we are now in the third era of personal archiving. The first era, 2005-7, was a time of benign neglect, when many people were ambivalent about the value of their data. The next era began in 2007, when personal data achieved a life of its own. The present era began in 2009, when social media raised many other issues. Marshall’s main points were:

  • Someone else should be doing the archiving.
  • We won’t know why we have saved all those pictures after a couple of decades have passed.
  • Benign neglect becomes online neglect.
  • Digital information will survive only as long as someone takes care of it.

What is everyone doing with all those cheap digital cameras? The photos they take will become the digital archives of our times. And what about home movies? They have largely been supplanted by videos, but there are lots of them still in consumers’ hands. The Center for Home Movies was established to “collect, preserve, provide access to, and promote understanding of home movies and amateur motion pictures.” It even organized a Home Movie Digitization and Access Summit that drew 46 attendees: film makers, film transfer companies, and stock footage vendors.

Clifford Lynch, Director of the Coalition for Networked Information, keynoted the second day of the conference. He said that we are moving into a second generation of understanding personal digital archives, where the complexities of ownership and control are not clearly understood. We do not understand shared spaces for personal archiving very well, and we need “Archive Me” buttons on many more Web sites. Although we have built up many systems to record our “public lives” (notable dates, public offices held, residences, etc.), we need to think about how these spaces interconnect with the general infrastructure of society.

Three interesting projects were described in a “fast talks” session:

  • AboutOne, a subscription service, was developed to help busy people control all aspects of their records. Cloud computing and business software allow businesses to eliminate mundane tasks and gain new levels of efficiency; AboutOne brings these benefits to families.
  • Personal Archiving Day, an open house for the public on saving digital information and sponsored by the Library of Congress, will be held on April 22.
  • The Rosetta Project is a global collaboration of language specialists and native speakers working to build a publicly accessible digital library of human languages.

Personal health information has many unique issues, especially involving privacy. MedHelp, an online health community with 12 million unique visits per month, has found that providing tools for users to track and share their health data has become a successful business. Privacy was seen as an option, not a restriction. Some healthcare providers are even using the data generated by trackers to help them in caring for their patients.

Finally, the personal data of many scientists and researchers may have historical value. Computer industry pioneers shared their thoughts about digitizing their archives. Edward Feigenbaum, often called the “father of expert systems,” has an archive of 15,000 documents which has been digitized using the Self-Archiving Legacy Toolkit (SALT) system that he developed in conjunction with the Stanford University library. Christina Engelbart spoke on behalf of her father Douglas Engelbart, who invented the mouse and made one of the first transmissions over the ARPANET, the precursor of the Internet. The Stanford Mouse Site tells the history of his invention of the mouse and contains many of his original materials.

In developing a scholar’s archive, context is everything. What is their story, and what were they thinking? A major lesson for archivists is to work with scholars throughout their career so that content, metadata, and extra materials can be archived along the way. It is much harder to compile robust archives when the creator of the original content is retired or deceased; and the archives will not be as rewarding for the scholars and students of the future.

The PDA conference was fascinating and revealed that personal archiving has many implications and applications. Personal archives are relevant to information professionals and are an entirely new genre with its own characteristics. They raise issues of ownership, copyright, preservation, privacy, and historical interest.


Conference Buzz: NFAIS 2011 — Taming the Information Tsunami

Written for Unlimited Priorities and DCLnews Blog.

The 53rd annual conference of the National Federation of Advanced Information Services (NFAIS) was held in Philadelphia on February 28 — March 1. Its theme was “Taming the Information Tsunami: The New World of Discovery.” Here are a few brief highlights of the conference:

In his address, “The Crowd, the Cloud, and the Exaflood: The Future of Collaboration”, Michael Nelson, Visiting Professor, Internet Studies, Georgetown University, said that content used to be king, but now the king is connection. He gave us 12 “words that work” in today’s highly connected environment: vision, cloud, game changer, many-to-many, things, exaflood, collaboration, consumerization, people, emotion, predictions, and policy.

Rafael Sidi, an Elsevier Vice President, said that we should not look at our products, but at our platforms. Customers are leveraging social networking platforms; Twitter has changed us. The new “gold rush” area is applications because people are solving problems with them. Openness will lead to creating new things and bring collaboration.

John Blossom, author of Content Nation, said that we must learn to swim naturally in an ocean of content. As long as a system works, many users will not care about the platform. We are now in the era of the “second Web”, and no longer go to data; the data is all around us.

A major event of the conference was the presentation of the Miles Conrad Lecture by Professor Ben Shneiderman, Founding Director of the Human-Computer Interaction Laboratory at the University of Maryland. His lecture was the first one in the series to focus on social media, and echoing other speakers, he said that we have shifted from content to community. Social discovery has become a new media lifestyle, and a significant part of it revolves around apps. He also mentioned the issue of privacy in healthcare, and noted that the PatientsLikeMe service has an openness policy. Users are encouraged to share their experiences and learn from those of others. The site has become widely used and has over 50,000 registered users. Shneiderman was also instrumental in the development of NodeXL, which is a template for Excel that facilitates the display and analysis of social network graphs. The graphs can be clustered to display communities and the connections between them, which increases the understanding of the social media world.

Many of the speakers’ presentations are available on the NFAIS Web site.


Conference Buzz: Record Attendance at O’Reilly’s Tools of Change for Publishing

Written for Unlimited Priorities and DCLnews Blog.

Tools of Change (TOC) for Publishing

About 1,400 people attended O’Reilly Media’s Tools of Change (TOC) for Publishing Conference in New York, February 14-16. From its inception five years ago, when attendance was 400, TOC has grown every year, and the 2011 conference was the largest ever. It’s easy to see why; TOC continues to focus on the rapid changes occurring in the publishing industry, attracts leading speakers, and provides a forum for vendors to exhibit their latest products. It has become one of the industry’s leading events.

Is the world ready for e-books?

Author and Wolfram Research co-founder Theodore Gray wondered if the world was finally ready for e-books. He noted that it is unsatisfying to him to need to resort to print to make any money on a book and predicted that in the future, simple static textbooks will be produced as open source projects because nobody will want to pay for them, either in print or electronic form. Users will, however, pay for enrichment and interactivity, and now that technology to add such capabilities to e-books is available, the world is ready for them.

The pace of change is accelerating.

David “Skip” Pritchard, President and CEO of Ingram Content Group, followed Gray’s theme and emphasized that we are in a time of rapid change in the publishing industry, and the pace of change is accelerating–a point made by several additional speakers as well. Pritchard urged attendees not to allow company history to get in the way of adapting their organizations to today’s environment. Change is not always obvious to us; skill sets and talent are often hidden in an organization. He also noted that everything will not change; authors will continue to have status, and curation will still be needed.

If all information is free, who will pay authors?

Margaret Atwood, author of numerous poems and books, struck a note for authors, asking: if the future is on the Internet, and all information is free, who will pay authors? Have we stopped to think about whether today’s changes are really good or not? She advised the publishing industry never to forget its primary source. Authors are a primary source because everything in the industry depends on them. And in an age of “remote” and “virtual”, there is still a craving for “real” and “authentic”.

We must not speak of digital content as a secondary use.

Brian O’Leary, founder and principal of Magellan Media, gave an impressive talk on the damage that containers (i.e. books, magazines, and newspapers) used to transmit information have done to the present-day industry. Containers are an option, not a starting point. They limit how we think about our audiences and how they will find our content. Our world today is one of content and browsers, and a new breed of born-digital competitors is starting with context and thus meeting the challenge of being relevant to audiences who instinctively turn to digital content. We must not speak of digital content as a secondary use. Publishers are increasingly in the content solution business, where the future is in giving readers access to content-rich products. Starting with context requires publishers to make a fundamental change in their work-flow, and if they make the leap, remarkable opportunities are available.

Six trends currently affecting the publishing industry.

Kevin Kelly’s presentation opened the concluding day of the conference, and he noted that his latest book, What Technology Wants (Viking/Penguin, 2010), is the last printed book he will write. All his future works will be in digital form. Kelly, the former Executive Editor of Wired magazine, discussed six trends currently affecting the publishing industry:

  • Screening. We are moving from being people of the book to people of the screen, and we have not yet begun to see the extent that screens will permeate our culture. Every flat surface is a potential screen site.
  • Interacting. We interact with not only our fingertips, but also with gestures (as with smart phones, for example) and even our whole body. Reading will be affected by this trend and will expand to a bodily conversation and also to a nonlinear process; for example, we now have alternate endings for some books.
  • Sharing. Reading is becoming much more social. We read socially and must learn to write socially. Everything increases in value by being shared.
  • Accessing. We gain much more value by accessing information rather than owning it.
  • Flowing. Files flow into pages which flow into streams. Streams go everywhere, are never finished, and are constantly in flux. Books will operate in the same environment.
  • Generating (not copying). The Internet is the world’s largest copying machine, but future value will be in products which must be generated in context and cannot be copied. There is no better time for readers than now, but publishers are not ready for the idea that books will sell for 99 cents.

The largest platform in the world is the mobile handset.

Finally, mobile content was not forgotten at TOC. Cheryl Goodman, Director of Publisher Relations at Qualcomm, noted that the largest platform in the world is the mobile handset, but unfortunately most publishers have neither engaged this market nor changed their digital strategies to accommodate it. As a result, advertisers and marketers, not publishers, will determine the future course of the industry. This is an opportunity for publishers to function as a conduit to highly curated content.

There was an enormous amount to assimilate at TOC. Most of the speakers’ presentations, as well as the live streams of the keynote sessions, are available on the TOC website, and further summaries appear on my blog. The dates and venue for TOC 2012 will be announced shortly.


Implementing DITA at Micron Technology, Inc. — Interview with Craig Henley, Manager, Marketing Publications Group

Written for Unlimited Priorities and DCLnews Blog.

Charlotte Spinner

Micron Technology, Inc., is one of the world’s leading semiconductor companies. Their DRAM, NAND, and NOR Flash memory products are used in everything from computing, networking, and server applications, to mobile, embedded, consumer, automotive, and industrial designs. Craig Henley is manager of the Marketing Publications Group at Micron. His team leads the DITA effort at Micron and oversees all conversion and implementation projects.

DITA (Darwin Information Typing Architecture) is an XML-based international standard that was initially developed by IBM for technical documentation and is now publicly available and is being applied to technical documentation in many industries. Craig shares with Charlotte Spinner of Unlimited Priorities his thoughts about a recent DITA conversion he worked on with Data Conversion Laboratory.

Charlotte Spinner: Craig, what were the business circumstances that led Micron to DITA? Describe the problem you were trying to solve.

Craig Henley: Micron has one of the most diverse product portfolios in the industry. Our complete spectrum of memory products—DRAM, Flash memory and more—require data sheets that contain technical specifications describing the product. These data-intensive documents typically exceed 100 pages and sometimes reach 200-300 pages, are heavy on graphics and tabular data, and are very complex. For each product family (e.g., SDRAM or DDR2) every item is available in multiple densities (256Mb, 512Mb, 1Gb, 2Gb, etc.), and each permutation requires its own large data sheet document. This provides the descriptive information a design engineer needs in order to incorporate the parts into the product, so data sheets are a key component for sales.

The data sheets were maintained using unstructured Adobe® FrameMaker® and stored, along with the many graphics, in large zip files which were then stuffed into a large enterprise CMS. We always knew that 80-90% of the content was reusable and could form a core batch of content, with the rest of the specifications varying a minute amount. But with the old system, if anything changed we had to update each document individually. This was a very unwieldy, very manual process—the “brute force method.” Even if we had to change boilerplate content—something in legal or copyright, logos, colors—it had to be changed in the template, and also in every old document whenever it was next modified. This was a maintenance nightmare, and the challenges compounded exponentially because of the sheer number of items involved.

The bottom line is that all of our products have to be supported at the data sheet level, so documentation is mission-critical for us. And we pride ourselves on the quality of our documentation. So we asked ourselves, “Do we keep trying to work harder and increasing resources and staff? Do we scale the brute force method, or do we work smarter?” Brute force is not cost-effective, so we decided to use DITA to help us work more efficiently.

CS: Beyond the obvious advantages of an XML-based solution, what made DITA an especially good fit for this project?

CH: We’re part of the digital era, but in fact we felt we were behind because our already large product portfolio was growing due to acquisitions. We adopted DITA in the nick of time. It verified our expectations with regard to eliminating redundancy and increasing efficiency, and also opened the door to new functionalities and capabilities, such as allowing us to publish documents that we couldn’t before.

For example, we had heard from field engineers that customers wanted a full complement of documentation for our Multichip Packages (MCPs), which may contain DRAM, Flash, Mobile DRAM, e-MMC, or more, so we have to assemble data sheets for all of the discrete components, not just a general one for the MCP overall. There are many different types of MCPs—any package can leverage other discrete components. This was a nightmare in the old paradigm. If anyone updated a DRAM or other specification that was being leveraged in 6-7 MCPs, how could we keep up with that?

DITA allows reuse in a way that lets us remove redundant information: we use existing DITA maps, pull out topics that don’t need to be there, nest the maps inside the MCP datasheet, and voila!—it’s created in minutes. All the information is still connected to the topics that are being leveraged in their original discrete datasheets, so if they are updated by engineers, the changes are inherited. This makes it easy to regenerate content, and it’s seamless to the customer.
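The nesting described above is the heart of the reuse story. As a rough sketch only, with hypothetical file names and not Micron’s or DCL’s actual tooling, the snippet below assembles an MCP-style map that references existing component maps, so an edit to a component topic is inherited by every data sheet that nests it.

```python
# Rough sketch of nested DITA maps (hypothetical file names; not Micron/DCL tooling).
import xml.etree.ElementTree as ET

def component_map(title, topic_files):
    """Build a stand-alone DITA map for one discrete component."""
    m = ET.Element("map", title=title)
    for href in topic_files:
        ET.SubElement(m, "topicref", href=href)
    return m

def mcp_map(title, nested_maps):
    """Assemble an MCP data sheet map by nesting existing component maps."""
    m = ET.Element("map", title=title)
    for href in nested_maps:
        # format="ditamap" tells a DITA processor this reference is another map, not a topic
        ET.SubElement(m, "topicref", href=href, format="ditamap")
    return m

if __name__ == "__main__":
    ddr = component_map("DDR2 SDRAM Data Sheet",
                        ["ddr2_features.dita", "ddr2_electrical_specs.dita"])
    ET.ElementTree(ddr).write("ddr2.ditamap", xml_declaration=True, encoding="utf-8")

    mcp = mcp_map("MCP Data Sheet (DDR2 + NAND)", ["ddr2.ditamap", "nand.ditamap"])
    print(ET.tostring(mcp, encoding="unicode"))
```

Because the MCP map only points at the component maps, regenerating an MCP data sheet after a component update is a matter of re-publishing, not re-authoring.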

Another factor that made DITA a great solution is that it’s an XML schema that’s very basic in design, so reuse is there and it’s easy, but it imposes enough structure on the user-base that everyone is operating under the same model. We call it “guided authoring”—it keeps people from veering off with their own methods for creating the documentation, which wouldn’t promote clean handoff. Authoring under DITA is clean—it guarantees that elements are used in a consistent way and leaves less room for errors. Initially, industries moving into the XML paradigm developed their own in-house DTDs, and I think that made it slower to adopt. But with DITA, the standardization makes it easy to have interoperability between departments, and even companies, which seems to be supporting its wider-scale adoption.

CS: Did you know much about DITA before embarking on this project?

CH: We had read about DITA in books and independent research articles, and learned more while attending an STC conference, so we had an idea that it could work well for us.

CS: Did you have any trouble selling the idea of a DITA conversion internally?

CH: At the core, we had the support we needed. Our management believes in innovation—they trusted us to go out and do things differently. So we brought in some reps from our key customer base for a pilot study. Once we proved the success of DITA in the pilot mode, and how it could scale, it gained traction and sold itself. We started at the grassroots level and it went from there, one successful demo at a time.

CS: Did you think you could do this alone at any point, or did you always know that using an outside expert was the best approach? What led you to Data Conversion Laboratory?

CH: We like the “learn to fish” approach, but when it came to full-blown conversion of legacy documents, we knew we’d need to go outside.

We had heard of DCL in STC publications, and we regularly read the DCL newsletter. We knew in the back of our minds that if we went full-on to DITA we would need to build a key XML foundation of content, and we didn’t want to do that manually. Tables in XML are complex, and ours are really complex. Our initial attempts at that XML conversion were too time- and labor-intensive, so we were concerned.

We brought in DCL and they talked us through their process. They explained some filters they use, and why the tables don’t have to be such a challenge. A test of some complex tabular data came back in pristine XML, so they were something of a lifesaver for us. DCL’s proprietary conversion technology—that “secret sauce” they have—is pretty magical.

CS: What did you have to do to prepare? Was there a need to restructure any of your data before converting to DITA?

CH: We did have to do some preparation in learning how to do some of our own information architecture work, and we discovered some best practices in prepping our unstructured content. Mostly it involved cleaning up existing source files, making sure we were consistent in our tagging for things like paragraphs, to ensure clean conversion. It was a fair amount of work—a front-loaded process—but well worth the investment.
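As an illustration of what that kind of preparation can look like, here is a hypothetical audit script, not the process Micron or DCL actually used; the directory name and the style-attribute convention are assumptions. The idea is simply to surface inconsistent paragraph tagging before conversion begins.

```python
# Hypothetical pre-conversion audit (not Micron's or DCL's process): tally the paragraph
# style names used across exported source files so rarely used variants, often typos or
# one-off styles, can be cleaned up before conversion.
import re
from collections import Counter
from pathlib import Path

STYLE_RE = re.compile(r'style="([^"]+)"')

def audit_styles(source_dir):
    """Count occurrences of each style name in every exported .html file."""
    counts = Counter()
    for path in Path(source_dir).glob("*.html"):
        counts.update(STYLE_RE.findall(path.read_text(encoding="utf-8", errors="ignore")))
    return counts

if __name__ == "__main__":
    for style, n in audit_styles("exported_datasheets").most_common():
        flag = "  <-- check: rarely used" if n < 5 else ""
        print(f"{n:6d}  {style}{flag}")
```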

CS: How did you get started? What was the initial volume?

CH: We had about a 2,000 page-count initially, much of which was foundational content we could leverage for lots of other documents. Starting with this set helped us scale almost overnight, and also helped accelerate our timeline, ramping quickly from proof-of-concept to pilot to full-blown production. Without that key conversion our timeline would have been drastically pushed out.

CS: Did the process run smoothly?

CH: Yes, I would say it ran very smoothly. There were a few unexpected results, but we communicated those back, adjustments were made, and that was it. The timeframe was on the order of a few months, and that was more dependent on us. We couldn’t keep up with DCL. In fact, we referred to their process as “the goat” because of the way it consumed and converted the data. They’re fast. They were always ahead of us.

CS: How did you make use of DITA maps?

CH: We adopted a basic shell for our documents, but then started nesting DITA maps within other DITA maps. This was another efficiency gain, giving us the capability to assemble large documents literally in minutes as opposed to hours and days. We specialized at the map level for elements and attributes to make them match our typical data sheet output, so we found maps to be quite flexible.

CS: What, if any, were the happy surprises or unexpected pitfalls of the project, and how did you deal with them?

CH: There were no pitfalls with DITA itself. It’s a general, vanilla specification. It might not semantically match every piece of content, but you can use it out of the box and adjust the tags and attributes to match your environment as needed with a little bit of customization. That’s actually a benefit of DITA—you can use it initially as is, and then modify it further along in your implementation.

CS: What have been the greatest benefits of the conversion? Were you able to reduce redundancy? By how much?

CH: The greatest benefit is that it helped us lay that foundation of good XML content in our CMS so we now can scale our XML deployment exponentially faster. With regard to redundancy, this is a guess-timate, but I’d say we had about a 75% reduction of manual effort, so a 75% gain in efficiency. For heavier reuse documents such as MCPs, the benefit scales even more.

CS: How do you anticipate using DITA down the road?

CH: We are publishing different types of documents in our CMS now, going beyond data sheets. Within technical notes, for example, there’s not much reuse, but DITA is still good because that data can be leveraged in other documents. We’d like to start seeing even more document types and use cases to leverage the reuse and interoperability. You don’t hear about DITA for pure marketing content, or things that are more standalone, but given what we’ve seen, we don’t see why you couldn’t use DITA for that. We’d like to branch out into other types of content. The pinnacle would be to have all of our organization’s data—internal and external—leveraged, authored and used within the XML paradigm. That might sound crazy and aggressive, but there would be benefits, such as dynamically assembling content to the Web.

CS: Can you expand a little more on dynamically-generated content?

CH: We haven’t fielded many requests for that yet, but we see the potential. Someone could call up and request a particular configuration of data sheets, and we could throw together that shell very fast because the topic-based architecture promotes that. The next level would be to take modular information and make it available on-demand to customers in a forward-looking capacity.

For example, if you build a widget on your website for a customer to request only certain parameters, such as electrical specs, across, say, four densities of a given product family, the content could be assembled as needed. Our CMS does facilitate that internally, assembling content on demand, but that could be a major differentiator for us if available via the intranet and Web. As XML comes of age, it’s not impossible, and it’s probably where we’re headed. We could package in mobile capacity, or whatever’s needed.
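To make that idea concrete, here is a speculative sketch, with hypothetical topic metadata and file names rather than Micron’s actual CMS interface, of filtering a topic pool by a requested parameter and set of densities, producing the references a generated data sheet map would include.

```python
# Speculative sketch of on-demand assembly (hypothetical metadata and file names).
topics = [
    {"href": "ddr3_1gb_electrical.dita", "family": "DDR3", "density": "1Gb", "kind": "electrical"},
    {"href": "ddr3_2gb_electrical.dita", "family": "DDR3", "density": "2Gb", "kind": "electrical"},
    {"href": "ddr3_1gb_timing.dita",     "family": "DDR3", "density": "1Gb", "kind": "timing"},
]

def assemble(family, densities, kind):
    """Return topic references matching a customer's request."""
    return [t["href"] for t in topics
            if t["family"] == family and t["kind"] == kind and t["density"] in densities]

print(assemble("DDR3", {"1Gb", "2Gb"}, "electrical"))
# ['ddr3_1gb_electrical.dita', 'ddr3_2gb_electrical.dita']
```

The filtered references could then be dropped into a small generated map, which is exactly the kind of topic-based assembly the architecture promotes.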

By making information available at that level, you speed up responsiveness and get feedback on a specification quickly. Fielding requests to the right people quickly can ramp up the communication process for development.

CS: Looking back, how would you advise an organization to prepare if they’re about to embark on a DITA conversion?

CH: Advise folks to understand and justify the reasoning. Spell out clearly—and potentially quantify—the types of efficiency gains they’ll get by going to this model. Understand that use case and justification, and be prepared to articulate that to the right people. Remember that saving time results in cost savings. Be able to articulate that part of the story.

In terms of planning and implementation, start at that proof-of-concept level, move to the pilot level, and identify ways you can scale the pilot to the full blown organizational level. Have a plan for scaling, because you’re going to hear that question.

Part of our success was adopting DITA and a CMS that was built to work with DITA, marrying the two. A good CMS that works well with an open source such as DITA lets you scale with your resources, because it gives you methods to accomplish your key tasks and provides avenues for bridging the gap—the leap—from the desktop publishing paradigm to the XML publishing paradigm. DITA with a good CMS implementation helps you bridge that gap and helps your users—writers, application engineers, etc.—take that step.

CS: Are there any other comments you’d like to make about your experiences with DITA?

CH: In a nutshell, we’re believers.

About the Author

Charlotte Spinner is a technical specialist for Unlimited Priorities. Further information about Micron Technology, Inc. may be found at www.micron.com. Further information about Data Conversion Laboratory, Inc. may be found at www.dclab.com.


Health Information Technology – Is That a New Information Revolution on the Horizon?

Written for Unlimited Priorities and DCLnews Blog.

Debra Spruill

A librarian walked into a doctor’s office, looked around, saw the steel cabinets overflowing with paper files taking up valuable office space, and turned away with a knowing smile. Then the librarian said to the doctor, ‘Don’t worry, this won’t hurt a bit.’

Today, all across the country, doctors, hospital administrators, and myriad other healthcare professionals are wrestling with the challenges of transitioning from their traditional record keeping of countless paper files in color tabbed folders to the emerging electronic health records (EHR) requirements. This takes me back to the days when libraries were beginning the uphill climb to adapt their reference tools and methods to online databases and online public access catalogs (OPAC).

Looking Back at Changes for Libraries

In 1968, two key developments were underway that would later change the way libraries would function in the new information world. BRS, Bibliographic Retrieval Services, was underway as the Biomedical Communications Network (BCN) at SUNY Albany; and Systems Development Corporation (SDC) contracted with the United States Office of Education (USOE) for dissemination of educational information (ERIC). The world was changing: by 1971, SDC had created ORBIT and NLM had installed it to support MEDLINE; ERIC was also offered publicly by SDC. The next year, Dialog became a commercial online service with the NASA RECON, Nuclear Science Abstracts, and ERIC databases.i Unbeknownst to librarians and their patrons the world over, these undertakings on opposite ends of the country would portend seminal changes to how information is provided and used going forward.

The Healthcare Community Begins Applying New Technology

In 2009, as part of what is known as the Stimulus Package (the American Recovery and Reinvestment Act of 2009), the Federal Government included $19.2 billion to fund the conversion of medical record keeping from paper to an electronic format intended to be interactive by 2015.ii While the technology of electronic health records (EHR) has been in practice for some years (Kaiser Permanente in California, for instance, began providing its patients electronic medical records some five years agoiii), most physicians and healthcare institutions have not embraced it. The major roadblock has been the cost of making EHR a universally accepted reality. The Stimulus Package funding is intended to eliminate that barrier and pave the way for a national system in which any and all healthcare providers receiving Medicare and/or Medicaid reimbursements securely use and share health information. I will not entertain the political arguments for or against the utility of EHRs. The purpose of this piece is to present the similarities between the library/information field and the healthcare field in tackling the arrival of new technology in an established organization of information and tools.

So what are the commonalities between these two ventures? I find several—diversity of organization, patron/patient use, reformatting of existing formats, high cost of implementation, updating or new skillsets requirements, privacy concerns, and co-existence with existing methods. Let us explore each of these.

Diversity of Organization

The arrangement, pricing, and provision of information and information tools for healthcare providers raise challenges very similar to those that confronted libraries—the issue of varied organizational structures. Libraries are broadly grouped as public, academic, special, and government types. But under these very broad umbrellas are many communities—public libraries that serve urban, suburban, and rural communities; special libraries with diverse audiences such as medical, technical, scientific, and/or legal users; libraries that are part of a corporation or are standalone entities; libraries whose patrons require special assistance, materials in languages other than English, provision of materials for remote use, and several other profiles or combinations. The purpose of these organizations, and of the professionals who serve in them, is the collection and provision of information to satisfy whatever the patron or user needs. And their tools ideally should make it easy for them to meet these needs.

Likewise, a healthcare provider profile may be that of a single hospital or be part of a regional hospital network; it may be an ambulatory care center with no overnight patients or a long-term care facility; it may be mobile; it may be a teaching institution connected to a university with heavy research requirements; it may be a single doctor, a group practice specializing in a particular aspect of care or one that must prepare for an influx of new residents annually. Like libraries there are many profiles and characteristics that can exist. But the common link is the provision of healthcare to patients, whether in the office, in the hospital or clinic, on a schedule or in an emergency. And present in any scenario for the professionals in these facilities is the collection and dissemination of information to satisfy the need for timely and accurate patient care.

The Paradigm Shift

Both of these are long-standing services with tried and true systems in place as well as basic skills that have stood the test of time. And, like other long-standing services, at some point the basic assumptions are shaken up, and new methods and/or technologies impact the profession.

In the library world, the introduction of online databases was truly revolutionary. It changed how tools were organized, produced, disseminated, accessed, and used. Of course, the timeline changed again when computer networks were linked around the globe and the Internet became a tool of communication and information sharing. The Internet had its origins in the 1960s as a United States military research tool;iv by the mid-1990s, it had become a commercial entity.

Healthcare has undergone its share of shakeups, too, particularly in the method of care and the growth of fields of specialization. Healthcare often requires a team to treat a patient where one general practitioner used to serve the purpose. The tools employed in healthcare have truly changed—no more leeching or bleeding of patients, no more exploratory surgeries. There are magnetic resonance imaging (MRI) technology, computed tomography (CT scans), dialysis treatment at home, and minimally invasive surgical procedures. Preventative care is highly recommended to provide a treatment at the earliest stages of serious illnesses with the goal of avoiding life-threatening effects.

Yet, with these cutting-edge methods, most physicians and hospitals still use paper files and require their patients to complete paper forms. We go to our many physicians and complete similar forms providing the same information each time—name and address, vital statistics, spouse, insurance, health history, family health history, etc. Countless hours are spent collecting and filing this information. Healthcare facilities have valuable real estate taken up locally and in remote facilities just to store these paper documents. And then there is the staff time: filing, and gathering records when a patient comes in or is brought in. And if the patient is unable to communicate, valuable time is lost trying to determine the patient’s history and whether special care is required, for example for allergic reactions.

Changed Skillsets

One of the major concerns for the library profession was the new skills required when reference tools and sources were modified to fit the online and Internet environments. I will address the impact of the online tools because they were the most disruptive at the time of their introduction. While the Internet offered its own challenges, the providers of information (publishers, music producers, studios, and authors) appear to have been more severely impacted by it than the users.

The availability of reference tools online meant that the library profession had to adjust more quickly and radically than had previously been required. The predictable, timely arrangement and publication of known sources, usually in book or serial format, was suddenly uprooted. While early online search methods were clunky, built on the early technology of acoustic couplers and telephone lines, CRT terminals, and the DOS operating system, they still provided more flexibility and more concise results than traditional print tools. Queries could be posted combining search terms—and, or, not—and searches could be saved and combined with new searches; then the results could be printed and handed to the requestor along with the cited materials, usually photocopied from the identified sources.

These tools and database collections required new skills particularly in the area of search methodology. Vendors who wanted their products used had to provide training programs, printed documentation, and telephone help lines. User groups and networks sprang up all over the country and thousands of air miles were travelled to provide and take instruction. New occupations of search and topical specialists sprang up.

The healthcare profession is undergoing a similar transformation as its content technology tools evolve. The U.S. Department of Health and Human Services (HHS) has created the Office of the National Coordinator for Health Information Technology (ONC) to coordinate the national efforts to implement health information technology (HIT) and the electronic exchange of information.v The mission of the ONC is to promote the development of a nationwide HIT infrastructure that enables the electronic use and exchange of information; to provide leadership in the development and implementation of standards and the certification of HIT products; to coordinate HIT policy; to lead HIT strategic planning; and to establish governance for the Nationwide Health Information Network.vi

Interestingly, libraries, by their nature, were eager to share their new methods and tools. In fact, new occupations began with librarians being employed by online database producers to research, develop, and design new tools, revamp existing products, and collect information from libraries about how they would like data provided to them.

The healthcare profession is doing this in a very concerted way, now, through the ONC. While the library profession had a structure in place to train in these new tools, the healthcare profession is in the position of having to build such a network. While physicians, nurses, and their support staff have been trained in using new computer-assisted tools, the people who input and manage patient information may not be as well prepared collectively. Of course, there are early adopters that will have their staff trained in whatever software and hardware is being introduced at their institution, but a formal national program for training in Health Information Technology has not existed—until now.

The ONC has initiated an ambitious program of workforce training to assure a solid foundation of personnel that can satisfy the future needs of the public health.vii This program is establishing a set of operations such as the State Health Information Exchange Cooperative Agreement Program, to facilitate development of health information exchange capabilities between healthcare providers and hospitals, and the Health Information Technology Extension Program, to provide technical assistance and best-practices guidelines for healthcare providers.

In addition to these support programs are education and training tracks to help prepare a new workforce. Specifically, the Community College Consortium to Educate Health Information Technology Professionals is intended to quickly create HIT training and education programs at community colleges, or to expand existing programs. The training time-frame for these non-degree programs is six months, to meet the urgent need for a trained workforce. Grants are also being made available to develop Competency Examinations to certify those who complete non-degree programs. Likewise, there is an Assistance Program for University-Based Training to prepare an adequate number of HIT professionals. Both of these programs are supported by a set of Curriculum Development Centers, which provide grants to institutions of higher education for the development of HIT curricula.

Privacy

A critical issue that was much discussed when online database tools were developed was privacy. Patrons were accustomed to going to their library and for the most part locating materials they wanted personally, even if they interacted with staff to identify what they needed. Initially, with online databases this direct patron access did not exist. If you wanted a query conducted using an online database, it meant submitting requests—either verbally or often in writing—to information professionals who would process them. In certain settings, it was necessary for the professional to keep notes to properly charge the costs back to some cost-center or user. Naturally users, particularly in public libraries, were very concerned about their privacy. Who else would know what they were searching and why—and would the information be shared? These concerns did not lessen with Internet access in public places. What could be seen by children walking past a screen, what sites were being sought in academic libraries, and who would have access to that electronic trail continue to be discussed. Issues about surveillance, “Big Brother,” were discussed. Privacy policies were developed with the purpose of protecting the computer users, particularly in public spaces, and casual observers, e.g. children, when materials displayed may not have been appropriate.

And nowhere is privacy of greater concern than around patient information. Insecure health information carries risks such as exposure to employers, families, partners, and communities. The electronic transmission of health information by a healthcare provider, health plan, or clearinghouse is guided by the HIPAA Privacy Ruleviii and enforced by the Office for Civil Rights.ix Additional rules, building on requests for comment, are being developed to implement the Health Information Technology for Economic and Clinical Health (HITECH) Act.

While HIT will assure the exchange of information for the clinical practitioner, it will also provide patients with access to information. This is a major adjustment in the provision of healthcare in this country. Previously, health information was considered the physician's or hospital's domain. HIT will assure that patients have access to parts of their health history, too. This has been met with some resistance by healthcare professionals, especially regarding notes and other observations made in the files for their eyes only.

The bottom line is that privacy in the healthcare arena is a serious factor and will require careful review.

The End User–To Be Continued

The library community was eventually faced with how to make its previously closely-held access tools and resources available to its patrons, even prior to the Internet. For centuries the librarian was the gate-keeper with special training in the resources available. Suddenly, with the advent of online databases, computer advances, and ultimately the World Wide Web, the barriers between the end user and the information were gone.

The healthcare profession is facing that dilemma with the advent of EHR. Patients will be able to access information previously kept from them.

More about this to come.

References

i. Bjorner, Susanne, and Stephanie C. Ardito. "Online Before the Internet: Early Pioneers Tell Their Stories," Searcher, June 2003. (accessed October 7, 2010).
ii. Public Law 111-5 – American Recovery and Reinvestment Act of 2009. [PDF 1227 KB] Public and Private Laws, 111th Congress, H.R. 1, February 17, 2009.
iii. Schulte, Fred. "Stimulus to Push Electronic Health Records Could Widen Digital Divide," Huffington Post Investigative Fund, The Huffington Post. (accessed October 26, 2010).
iv. Wikipedia contributors, "Internet," Wikipedia, The Free Encyclopedia. (accessed October 26, 2010).
v. U.S. Department of Health and Human Services, The Office of the National Coordinator for Health Information Technology. (accessed September 25, 2010).
vi. Ibid.
vii. U.S. Department of Health and Human Services, The Office of the National Coordinator for Health Information Technology, HITECH Programs. (accessed September 25, 2010).
viii. U.S. Department of Health and Human Services, "HHS Strengthens HIPAA Enforcement," HHS.gov News Release. (accessed October 15, 2010).
ix. U.S. Department of Health and Human Services, Health Information Privacy. HHS.gov.

About the Author

Debra Spruill is a consultant in the field of preservation with an emphasis on digital preservation. She was recently Director of OCLC Preservation Service Centers, responsible for strategy, business development, operations, and contracting for its four Centers, including on-site locations, as well as for client contracts. Most recently, Ms. Spruill was named to the Library of Congress ALTO XML Schema Editorial Board. Ms. Spruill is a member of the Unlimited Priorities team.


Technology Trends 4U — 2011

Written for Unlimited Priorities and DCLnews Blog.

Richard Oppenheim

Richard Oppenheim

As 2011 fades in, it is time to look ahead and make predictions of what is anticipated for the next year. Sometimes this is called planning, sometimes guessing. My goal is to offer some educated guesses for your 2011 planning activities.

In the November issue of DCLnews Blog, I wrote a synopsis of 2010 technology pronouncements – “The Digital Forest“. That article’s last paragraph is repeated here for emphasis:

The flood of digital data will deliver more to watch, more to read, more to store and file. We have choices to make to avoid being strangled by data overload. We can all join hands, virtually, and seek wisdom as to what works best for us this month. There must be an App for that.

The digital data tsunami encircling planet Earth will grow as more content from every corner of the galaxy will be loaded onto one or many data libraries. You can choose how and where to dive into the oncoming torrent of data. It is not recommended that you find some remote mountain top and just watch the content flow accelerate.

In 2011, developers will continue their unceasing delivery of gadgets, life-changing products, life-interrupting services, and many opportunities for us to be amused or amazed or confused with how to use and/or escape from changing technology. A lot of the choosing process will have something to do with your age and how you use technology today. For the age factor, the dividing bar is set at about 35ish.

  • If you were born before 1975, you learned computers and other technology resources as a teenager or adult, as an appendage to your life
  • If you were born after 1975, computers and other technology were part of your growing up and integrated within your life

There are lots of illustrations (have some fun and make your own lists). One of the more visible examples is the transition from film to digital photography. When picture taking required film, and then a store to print using special paper and chemicals, there was one superior film: Kodachrome. As you read this, know that Kodachrome is no longer. Kodak stopped producing the film in 2009, and the last place to develop and print Kodachrome ended its operations on 12/31/2010.

Things change, and technology changes things with increasing velocity. Trend analyses are important to highlight what has been, what is no longer here, and what is coming.

Content Trends

The going-forward trends begin with technologies that support increasing volumes of content and connectivity. How often one uses e-mail is another age indicator. Younger folks prefer online chats and text messaging. Facebook has supplanted Yahoo and other sites as a major communications hub. Email sent to more than one person requires inserting multiple addresses or using 'cc' or 'bcc'; Facebook, text messaging, and Twitter provide immediate broadcast to a large population. Facebook reports that it processes over four billion messages daily.

Volume use of all things technology is increasing at an ever-increasing rate. In December, research firm IDC issued its 2011 prediction report, which stated:

…the biggest stories of 2011 revolve around the build-out and adoption of this next dominant IT platform (in our view, the industry’s third major platform) — defined by a staggering variety of mobile devices, an expanding mobile broadband network, and cloud-based application and service delivery, with value-generating overlays of social business and pervasive analytics, generating and analyzing unprecedented volumes of information.

IDC estimates that in 2011, there will be 330 million smartphones sold worldwide and 42 million media tablets. IDC predicts that the PC-centric era will end as over half of the 2.1 billion people who regularly use the Internet will do so using non-PC devices. By mid-2012, non-PC devices capable of running software applications will outsell PCs. Demand for tablets, with Apple’s iPad still leading, will increase as the tablet platform takes off in emerging markets.

The other large growth area is what is now called 'cloud computing'. IDC predicts that 80 percent of new software offerings will be available as cloud services in 2011. As I discussed in "The Digital Data Forest," the growth of content from all sources needs to be incorporated into any future business or personal planning. IDC states:

The ‘digital universe’ of information and content will expand by almost 50% — to almost 2 trillion gigabytes. Businesses are drowning in information — and still want more, creating big opportunities for ‘big data’ analytics and management.

You may be worrying about how to keep up with the constant process known as 'change'. With technology, change will always happen. David Pogue, a technology writer for The New York Times, said in a November 24, 2010 personal tech column:

Forget about forever—nothing lasts a year. Of the thousands of products I’ve reviewed in 10 years, only a handful are still on the market. Everybody knows that’s the way tech goes. The trick is to accept your gadget’s obsolescence at the time you buy it, so you feel no sense of loss when it’s discontinued next fall. (The other trick is to learn when that’s going to happen: new cameras in September and February, new iPods in September, new iPhones in July…)

Your Trends, Your Way

Oprah Winfrey’s new cable network, OWN, has started. With content from many devices – phone, tablet, TV, et al. – everyone will be able to create his or her own private network. The preliminary name for my network is RON. It is not a rival for OWN.

Using available resources from the cable company, various internet providers, smartphones, and friends, there will be one or more networks for each person on the planet. Comcast (Xfinity) provides click-through buttons on shopping sites for direct purchase over the internet-connected TV. It also provides a smartphone app that lets users directly program their at-home DVR.

This is just one example of the integration between the internet and cable/satellite signal delivery. Sales of DVDs and other physical storage media will quicken their decline over the next few years. Internet TV access will enable customizing a group of networks that link together. Note that the major sports leagues – NFL, MLB, NBA, and NHL – have each created their own channels. Companies of all sizes will deliver information using video and audio, through their own productions or by linking with entertainment providers.

Connecting with an overflowing inbox will need assistance. Flipboard, a new app for the iPad, is a personalized magazine creator that aggregates nine online media sources, grabs content from links posted on Twitter and Facebook (including photos and video), and then presents that content in an easy-to-read, magazine-like format.

Continuing this trend, the concepts supporting social networking will expand as companies adapt social networking for brand identification, commentaries, and announcements. User support will expand with online chats and direct video calls, such as Skype provides. Professionals – lawyers, accountants, and advisors – will also expand this form of client connection.

Retail sites, including eBay and Amazon, are integrating with social networks. Facebook announced that shoppers who go to Amazon.com can log into Facebook and get recommendations for purchases based on their declared tastes in music and movies. In November, eBay rolled out Group Gifts, a way for Facebook friends to chip in together for a gift. Facebook is also building analytic tools to let retailers learn more about who’s drawn to certain products. Amazon’s iPad app, Windowshop, shows images and lets users browse as if they were inside a store; it is available at www.windowshop.com.

Wister.com is rolling out a social-style network that enables users to upload reviews of businesses, such as restaurants and entertainment facilities, and even comments about retail and non-retail businesses. Other users can agree or disagree with the comments. If a store gets a bad review, the store will have 48 hours to respond to it. This form of sharing goes further than just a tweet or Facebook post, as the comment will be accessible by anyone on the Wister site.

Information sharing will expand exponentially. Current uses include calendars, contacts, emails, photos, music, and the younger set’s need to share current activities. There is a growing variety of software for collaborating on documents, spreadsheets, and other business reporting. As all devices connect, collaboration will expand, providing user-friendly features for annotating, note insertion, and image editing. Service sites like www.basecampHQ.com will provide even more ways to spread information around the office and around the world.

Mobility Essentials

Key mobile trends to watch in 2011 include a lot more smartphone apps, event-based marketing, and many location-based services. Wherever you are, your handheld device knows the map coordinates and the surrounding streets, buildings, and weather.

New GPS apps will integrate location with content about your past behavior or your calendar to suggest activities that may be appropriate where you are. Initially, you will have to request this information; upgrades or premium services will support push technology and deliver content to you like an alarm clock. Retail marketing will integrate geo-targeting apps with a database of your purchase history, likes, and dates such as birthdays to make recommendations and alert you to specific store locations. Existing apps, such as RedLaser, can already help you locate the same products for lower prices. Future apps will have that information ready without any specific key click required.
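To make the idea concrete, here is a minimal sketch of that kind of geo-targeted matching. The store names, coordinates, and purchase history below are invented purely for illustration; real services would draw on much larger databases and proprietary recommendation logic.

```python
import math

# Hypothetical data, invented for this example.
USER_LOCATION = (39.7392, -104.9903)          # Denver, lat/lon
PURCHASE_HISTORY = {"camera", "running shoes"}
STORES = [
    {"name": "PhotoMart",   "loc": (39.7420, -104.9880), "sells": {"camera"}},
    {"name": "TrailRunner", "loc": (39.6500, -105.1000), "sells": {"running shoes"}},
]

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

# Alert the user to nearby stores that carry something they already buy.
for store in STORES:
    d = distance_km(USER_LOCATION, store["loc"])
    if d < 2.0 and store["sells"] & PURCHASE_HISTORY:
        print(f"{store['name']} is {d:.1f} km away and carries items you like.")
```

The essential ingredients are just a distance calculation and a match against what the shopper already buys; the "push" part is a matter of running this check whenever the device reports a new location.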

Augmented reality is also on its way. Based on your location, you will be able to request images and information showing how the surrounding area appeared in a prior time period. The redrawn geography would include other buildings, no buildings, etc., depending on how far back you want your reality augmented.

Handheld devices will be able to identify, for example, when a police officer or doctor is in the vicinity and immediately alert them to an emergency. Your device will also be able to tap into video cameras around the corner, at a location you are headed to, or at your house. A few years ahead, those video images will be able to alert you to traffic, crowds, or some other identifiable situation.

Sustainability Needs More Energy

Technology can help reduce wasted energy, space, and natural resources. New technologies are available that help organizations become more energy efficient, implement new ways to distribute goods and services in a more sustainable manner, and enable safe and renewable sources of energy. For example, in 2010 Google announced its commitment to a $5 billion offshore wind project along the Atlantic coast.

The U.S. Department of Energy is allocating funds to support the research and development of clean, reliable energy for buildings and transportation. Applicants include teams from universities, industry, and national laboratories. Under the program, the grantees will conduct cost analyses for different manufacturing volumes to help gauge the near-term viability and long-term potential of new technologies.

One of the hot suppliers of alternative energy is Bloom Energy. Today, commercial electricity costs about 13 cents per kilowatt-hour. Costs with a ‘Bloom Box’ are between 8 cents and 10 cents per kilowatt-hour, with break-even in less than 5 years after installation. A few of Bloom’s customers include eBay Inc., Cypress Semiconductor Corp., Adobe Systems Inc., Safeway Inc., and Wal-Mart. Replacing fossil fuel suppliers with clean, easy-to-maintain fuel cell boxes also eliminates any need for combustion.
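As a rough check on that break-even claim, the arithmetic is straightforward. The installed cost and annual consumption below are assumptions chosen purely for illustration, not figures from Bloom Energy:

```python
# Rough break-even sketch for an on-site fuel cell.
# All numbers are illustrative assumptions, not vendor figures.

grid_rate = 0.13          # $/kWh, the commercial rate cited above
fuel_cell_rate = 0.09     # $/kWh, midpoint of the 8-10 cent range
annual_kwh = 1_500_000    # assumed annual consumption for a mid-size facility
installed_cost = 250_000  # assumed net installed cost after incentives

annual_savings = (grid_rate - fuel_cell_rate) * annual_kwh  # $60,000 per year
payback_years = installed_cost / annual_savings             # about 4.2 years

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")
```

With those assumed numbers, a 4-cent-per-kilowatt-hour saving pays back the installation in roughly four years, consistent with the under-five-year claim above.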

Google’s web application PowerMeter enables users and electric companies to track energy consumption. A monitor kit attaches to your electricity meter and transmits data via Wi-Fi, and the web app shows how much energy is being used. The application is currently being used in San Diego.

Smart meters will track electricity data in fine-grained detail, and home broadband connections open up online electricity monitoring to a much broader base of potential customers.

Trends and You

Technology products and services are coming fast, and their arrival speed is accelerating. It is not possible to keep up with every trend, to know exactly what to buy, or to avoid feeling confused and late to the party. If you are feeling overwhelmed, it is O.K. to step back and ease up on yourself.

It is very important to understand and accept that these trends are coming and will not stop just because you may have worries. Identify what you, your company, and your friends are using, and do your best to stay compatible and collaborative. Hiding in a haystack will not help and the haystack will blow away. Technology will continue to expand everyone’s ability to connect with greater frequency and with a lot greater volume of information. Finding ways to best use this expansion for your benefit is a trend that you need to pursue in 2011 and beyond.

About the Author

Richard Oppenheim, CPA, blends business, technology and writing competence with a passion to help individuals and businesses get unstuck from the obstacles preventing their moving ahead. He is a member of the Unlimited Priorities team. Follow him on Twitter at twitter.com/richinsight.


Conference Buzz: 2010 Charleston Conference

Written for Unlimited Priorities and DCLnews Blog.

Donald T. Hawkins

Donald T. Hawkins

At the 2010 Charleston Conference (November 3-6), which was the 30th in the series (surely one of the longest-running information industry events!), one of the highlights was a “discovery systems faceoff” between two of the major players, Serials Solutions and EBSCO. The faceoff came about from a challenge posed after an exchange of Letters to the Editor of The Charleston Advisor. (See “Conference Buzz” in the previous issue of DCLnews Blog for an explanation of discovery systems.) It took the form of two questions posed by a moderator, with responses from each company, followed by a rebuttal and summary. After the questions, live demonstrations of each system were conducted. The questions were:

  • Why do libraries need discovery tools and how does your product meet those needs?
  • Why should a library choose your service rather than that of a competitor?

The EBSCO representatives stressed their system’s superiority in the number of sources available and the capability of adding indexes of sources not covered by EBSCO. The Serials Solutions participants described the ease of using their service, its proven value, and a two-year track record of reliability and scalability. They also noted that many publishers have made the full text of their data available to them for use in indexing. The Summon demonstration was flawless, but EBSCO had technical problems with theirs. Unfortunately, the time allotted to the session was too short to permit meaningful audience interaction, but it is clear that this faceoff was only the opening salvo in a long battle for supremacy in the discovery systems area.

Another fascinating presentation at Charleston was by Jon Orwant, Engineering Manager of the Google Books project. So far, Google has scanned about 15 million books, about 10% of those available, which amounts to about 4 billion pages and 2 trillion words. Google collects metadata from over 100 sources, parses the records, creates a “best” record for each data cluster, and displays appropriate parts of it on the site. Orwant described how the resulting database can be used not only for searching but for other interesting projects, such as studies of how language changes over time or of book publication rates as a function of publication date. Google even makes grants available to scientists and linguistic analysts to do research projects, because it considers books a corpus of human knowledge and a reflection of cultural and societal trends over time.


Peeling Back the Semantic Web Onion

Written for Unlimited Priorities and DCLnews Blog.

An Interview with Intellidimension’s Chris Pooley & Geoff Chappell

Chris Pooley is the CEO and co-founder of Intellidimension, where he leads the company’s corporate and business development efforts. Previously, Chris was Vice-President of Business Development at Thomson Scientific and Healthcare, where he was responsible for acquisitions and strategic partnerships.

Richard Oppenheim

Richard Oppenheim

As stated in the first article, “the Semantic Web is growing and coming to a neighborhood near you.” (Read Richard Oppenheim’s first article here) Since that article, I had a conversation with Chris Pooley, CEO and co-founder of Intellidimension. Chris understands how the web and the Semantic Web work today. So let’s peel back some of the layers surrounding the semantic web onion and bring the hype down to earth.

Chris has spent years working with and developing applications specifically for the semantic web. Our conversation, which also included Geoff Chappell, Intellidimension’s president, ranged over the semantics of the Semantic Web and, more importantly, the impact it will have on access to information resources.

The vision of the founding fathers of the World Wide Web Consortium was for information to be accessible easily and in large volume with a process enabling the same information to be used for infinite purposes. For example, a weather forecast may determine whether your family picnic will be in sunshine or needs to be rescheduled. For the farmer the weather forecast is a key to what needs to be done for the planting and harvesting of crops. The retail store owner decides whether to have a special promotion for umbrellas or sunscreen lotion. The same information is used for different questions and actions.

Data publishers of all sizes and categories have information available. These publishers range from newspapers to retail stores to photo albums to travel sites, and a lot more; get the breaking news story, buy a book, connect with family albums, or book a flight. The web provides access to these benefits in endless combinations. The sites are holders of large volumes of data waiting for you to ask a question, or search. The applications are designed for human consumption so that people can find things when they choose to look.

The Semantic Web acts as a kind of information agent: one or more underlying software applications are designed to aggregate information and create a unique pipeline of data for each specific user.

The foundation of the Semantic Web is all about relationships between any two items…

One of the key attributes of the web is that we can link any number of individual pages together. You can be on one page and click to go to another page of your choice. You can send an email that has an embedded link giving the reader one-click access to a specific page.

Chris emphasizes that the Semantic Web is not about links between web pages. “The foundation of the Semantic Web is all about relationships between any two items,” says Chris. Tuesday’s weather has a relationship to a 2pm Frontier flight leaving from Denver. Mary’s booking on that flight means that her ticket and seat assignment are related to it as well. In the semantic web sense, there is a relationship between Tuesday’s weather and Mary.

The growth of the Semantic Web will expand the properties of things to include many elements, such as price, age, meals, destination, and so on. The language for describing this information and associated resources on the web is the Resource Description Framework (RDF). Putting information into RDF files makes it possible for computer programs (“web spiders”) to search, discover, pick up, collect, analyze, and process information from the web. The Semantic Web uses RDF to describe web resources.
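To show what those RDF relationships look like in practice, here is a minimal sketch using the Python rdflib library. The namespace, resource names, and properties are invented for the flight example above; they are not from Intellidimension or any published vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical vocabulary invented for this illustration.
EX = Namespace("http://example.org/travel/")

g = Graph()
flight = EX["frontier-denver-tuesday-2pm"]

# Each add() records one subject-predicate-object triple: a relationship
# between two items, which is the foundation Chris describes.
g.add((flight, RDF.type, EX.Flight))
g.add((flight, EX.departsFrom, EX.Denver))
g.add((flight, EX.weatherForecast, Literal("light snow")))
g.add((EX.Mary, EX.holdsBookingOn, flight))

# Serializing as Turtle makes the relationships human-readable,
# and the same triples can be queried later for any purpose.
print(g.serialize(format="turtle"))
```

Because Mary is linked to the flight and the flight is linked to Tuesday’s forecast, a program can traverse those triples and connect Mary to the weather without anyone having stored that relationship explicitly.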

For end users, the continued adoption of the Semantic Web technologies will mean that when they search for product comparisons they will find more features in the comparisons which should make the process easier, faster, and provide better results.

Whether you seek guidance from the Guru on the mountain top or the Oracle at Delphi, information will range from numbers to statistical charts, from words to books, from images to photo albums, from medication risks to medical procedure analysis to doctor ratings.

Chris Pooley states, “For end users, the continued adoption of the Semantic Web technologies will mean that when they search for product comparisons they will find more features in the comparisons which should make the process easier, faster, and provide better results. For a business user or enterprise the benefits will be huge. By building Semantic Web enabled content, businesses will be able to leverage their former content silos; and the cost of making changes or adding new data elements (maintaining their content) will be reduced while flexibility will be improved, by using the rules-based approach for Semantic Web projects.”

With this vast increase in data volume, users should remember to be certain they trust the data that is retrieved. As part of the guidelines for proper use of the semantic web, we need to establish base levels of reliability for the sources being accessed. This requires some learning and practice to determine what maps appropriately to the level of accuracy needed. A weather forecast can be off by a few degrees; sending a space vehicle to Mars requires far greater accuracy, since being off by even one degree will cause the vehicle to miss its intended target.
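A quick back-of-the-envelope calculation shows why. The distance used below is an assumed figure of roughly the Earth–Mars separation at a close approach, chosen only to illustrate the scale of the error:

```python
import math

# Assumed Earth-Mars distance at a close approach, for illustration only.
distance_km = 56_000_000

# A sustained one-degree pointing error displaces the trajectory by
# roughly arc length = distance * angle (angle in radians).
error_rad = math.radians(1.0)
miss_km = distance_km * error_rad

print(f"Off target by roughly {miss_km:,.0f} km")  # ~977,000 km
# Mars is only about 3,400 km in radius, so a one-degree error
# misses the planet by hundreds of times its own size.
```

Being one degree off at that distance means missing Mars by nearly a million kilometers, while a forecast that is a degree off barely changes your picnic plans.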

Both end users and enterprise users will learn new ways to pay attention to data validity. Trusting a source may require a series of steps that include tracking the information over an extended time period. This learning process will also include a clear understanding of why that information is out there. For example, a company’s historical financial information is not the same as the company’s two-year marketing forecast.

There is a chicken-and-egg aspect to the growing accessibility of data. More data means more opportunity to collect valuable information. It also means that more care needs to be exercised to identify and separate meaningful, relevant data from noise. For example, the retailer Best Buy has started down this path by collecting 60% more bits of information from user clicks on its web site. This enriched data delivers added value to the retailer for more accurate and timely business decisions about products and selling techniques.

One of the intoxicating things about the web is that the vast majority of data, entertainment, and resources are free to anyone with an internet connection. While Chris acknowledges the current state of free resources, he anticipates that in the future there will likely be a need for some fee structure for aggregators of content. With data demand growing exponentially, there will be a corresponding demand for huge increases in both storage capacity and internet bandwidth. The Semantic Web will require more big data mines and faster communications.

There is a significant difference between infrastructure and the applications that ride on that infrastructure. A bridge is constructed so that cars can use the span to get from one side to the other, yet the structure must carry all of the bridge’s weight; the weight of all the cars on it at any one moment is insignificant compared with the bridge’s total weight.

Chris Pooley’s company, Intellidimension, builds infrastructure products that deliver a useful and usable bridge for enterprise users. These users then create aggregating, solution-oriented applications that travel along the appropriately named information superhighway. Chris says, “The evolving Semantic Web technologies will offer benefits for the information producer and the information user that will enrich and enlarge what we see and how we see it.”

About the Author

Richard Oppenheim, CPA, blends business, technology and writing competence with a passion to help individuals and businesses get unstuck from the obstacles preventing their moving ahead. He is a member of the Unlimited Priorities team. Contact him by e-mail or follow him on Twitter at twitter.com/richinsight.
