
Improving Enterprise Search Using Auto-Categorization: Making the Business Case to Senior Executives

By Marjorie M.K. Hlava and Jay Ven Eman
of Access Innovations, Inc

This white paper makes the case for using a business case approach to improve corporate search with auto-categorization and taxonomy. Corporate librarians and knowledge management leaders understand these solutions, but their value is often poorly understood by the executives responsible for budgets and approvals.

This paper differentiates between merely presenting a technical resource to the business and using a well thought-out business case when seeking enterprise or department funding. Search is on the radar of senior management thanks to Google and other search systems. Knowledge workers have proliferated, and efficiencies in information throughput are in strong demand. Workers spend more than 25% of their time searching for information (IDC Research, 2008). The average corporation has four search systems, none of which delivers real productivity to the work force. This issue has emerged as a significant lever for driving higher business productivity and profits.

This paper outlines how the development of a cohesive taxonomy strategy, well aligned with corporate business needs, becomes a strategic investment supporting staff productivity and overall knowledge worker output quality. It is a tactical purchase to strengthen the company’s competitive edge.

There is now a 92% accuracy rating on accounting and regulatory document search, based on hit, miss, and noise (relevance, precision, and recall) statistics, [using] Access Innovations. –USGAO

Obstacles in optimizing search

The problem with search is that it usually depends on statistics and immense data processing and storage to produce answers, without paying attention to the language of the user. Corporate intranets, pharmaceutical firms, large database publishers, and magazine and content publishers suffer without well-formed information that clearly indicates conceptual links, provides replicable results, and supports intuitive semantic search. This directly impacts the knowledge worker's patience and productivity, with many spending one fourth of their time looking for information rather than using it in creative and strategic ways. Individual lost time, multiplied by tens to hundreds of workers in a large corporation, significantly undermines the bottom line. By not readily allowing users their own terminology, the system creates small hurdles which, multiplied by many failed searches, become large barriers. The result is a loss of efficiency and flexibility across the entire enterprise.

Agile enterprises must provide a mechanism for the user to automatically translate their terms, dialect, or language into well-formed, standard terms. This provides for consistent, deep searching, the most effective means to obtain information with comprehensive recall and accuracy. It prevents trial-and-error searching that wastes workers' time. Factor in the direct and burden costs of each knowledge worker; the cost savings rapidly become significant.
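To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python using the 25% search-time figure cited above; the headcount, annual hours, and loaded hourly rate are illustrative assumptions, not figures from this paper.

workers = 100                # assumed knowledge-worker headcount
loaded_rate = 60.0           # assumed fully burdened cost per hour, in dollars
hours_per_year = 2000        # assumed working hours per year
search_share = 0.25          # share of time spent searching (IDC Research, 2008)

# Savings if better search cut that lost time in half:
annual_savings = workers * loaded_rate * hours_per_year * search_share * 0.5
print(f"${annual_savings:,.0f} per year")   # -> $1,500,000 per year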

Research has shown that most classification systems touted as automatic actually require rules to reach productive levels for production or search. The rules differentiate among meanings of words to correctly interpret a document. To create and maintain these rules, one needs to build a rich semantic layer and then place a rule-based application over the classification function. Traditional search does not provide this functionality. To facilitate information capture and retrieval that runs at 6, 8, even 10 times greater productivity, a good taxonomy must provide the search backbone.

IT departments, charged with safeguarding valuable corporate information, require a simple and safe way for users to manage the categorization tools, to avert increasing IT costs and burden. The current move to Web 2.0 empowers users and lessens the load on IT departments. Collaborative taxonomy management supports Web 2.0 initiatives.

We have moved from fielded Boolean search to a faceted search GUI, but the fundamentals of search still hold. The 1960s gave us the ARPANET and RECON systems, which gave rise to the Internet and present search technologies. Metadata elements arose from fielded data. The missing piece in today's search is the taxonomy application. The market challenge is to produce solutions that enhance search through taxonomy and automatic categorization.

IEEE had their system up and running in three days, in full production in less than two weeks. –Institute of Electrical and Electronics Engineers

The American Economic Association said its editors think using it is fun and makes time fly! –American Economic Association (AEA)

The business of auto-categorization and taxonomies

Well-formed data, with clear indication of conceptual semantic links, provides replicable results and intuitive, semantic search. Users search with their own words, removing obstacles to search success and increasing productivity. The system translates non-standard word choices to consistent taxonomy terms, resulting in consistent, deep searching and, ultimately, greater knowledge access and use.
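A minimal sketch of that translation step, assuming a hand-built synonym table; the terms below are illustrative, not drawn from any shipped vocabulary.

# Map a user's own words onto the controlled taxonomy term.
synonyms = {
    "heart attack": "Myocardial infarction",
    "mi": "Myocardial infarction",
    "cardiac failure": "Heart failure",
}

def to_preferred(user_term: str) -> str:
    """Return the taxonomy's preferred term, or the input unchanged."""
    return synonyms.get(user_term.lower().strip(), user_term)

print(to_preferred("Heart Attack"))   # -> Myocardial infarction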

To produce the highest level of productivity at the most cost-effective TCO (total cost of ownership), a system must provide both semantic interpretation and governing rules linked to a taxonomy. This ensures fast, accurate search regardless of the skill or number of users.

Good corporate compliance systems need to ensure conformity with accepted taxonomy standards, including ANSI/NISO Z39.19 and those from the ISO, W3C, the British Standards Institution, and other standards-setting organizations.

To minimize costs, the categorization system should work both at the content creation, content management, digital depository end of the information management process and at the search end to provide seamless performance.

Dangers in the industry that inhibit seamless performance include out-of-date data schemas in which critical data is stored in extinct formats and media. Strategic planning for search must consider migration of this data as technical platforms evolve. Most enterprises handle terabytes of data with an average lifespan of three years. Contingency plans are often inadequate or over capacity, which further exacerbates search inefficiencies; these huge information stores must therefore be configured so that the data is platform-independent and accommodates new technologies.

Value drivers for your project

Business issues and value drivers supporting projected returns are shown here.

The need for a supportive business case

A business case is vital in helping executives rationalize decisions, especially ones of a technical nature. It facilitates their ability to analyze the technology’s impact compared with other corporate opportunities, particularly with limited budgets.

Having financial metrics along with technical recommendations fuels the ability to communicate expected upstream value. Several industry-leading vendors are extending themselves by drawing up contracts where payment is conditioned on proving delivered value. Accenture, Trilogy, and IBM have established value-based selling as a best practice; soon, it will be an industry standard.

Research shows that, of over 400 software vendors, close to 75% fail to prove their solution’s tangible value. These vendors sell solutions that challenge the client to build business value. But that business value must be clearly described in the business case.

Building a supportive business case also needs to address technical issues such as enabling semantic search, interlinking data, and using rules.

Many firms use a “discovery” process, where technical and business parties join forces in discovering value in a proposed solution. This collaborative process demonstrates how departmental needs are aligned with business value and IT impact and strengthens your business case.

The following elements are key in assembling a software or services business case:

  1. Value proposition – summarizes the position
  2. Executive summary – brief and bottom line
  3. Risk, impact, and strategic benefit
  4. ROI validation – clear and concise is best
  5. Competitive TCO – for competing vendors
  6. IT impact and support – to build bridges

ProQuest CSA has achieved a 7-fold increase in productivity. –ProQuest CSA

Weather Channel finds things 50% faster using Data Harmony. A significant saving in time. –The Weather Channel

Supporting the Metrics

The baseline for automated or assisted metatagging integrated into your workflow should be 85% accuracy, or no more than 15-20% irrelevant returns (noise). When this level is reached, you can potentially see seven-fold increases in productivity and cut search time in half. Achieving these levels lent notable credibility to CSA's implementation.

Though the benefits of an ROI measure depend on size of audience, audience level, complexity of content, and complexity of search, there are reliable data points that can be used. This table serves as a guideline when building cost-justification efforts to buy auto-classification and taxonomy solutions.

A guideline when building cost-justification efforts.

The Value Produced

Building your case will be invaluable when presenting it to management or a budgeting committee. It helps your department be viewed as in-step with management and supporting corporate strategic goals. To the owner of the case, the benefits are clear:

  • Projects are better received.
  • Projects are well justified.
  • Projects are viewed beyond “tools”.
  • Projects receive better funding.

Summary

This paper seeks to illuminate the importance of a well thought-out business case. Whether using outside vendors or an internal committee, following the steps to build each aspect of a persuasive business case for a solution’s implementation is ultimately the most successful way to identify your needs and promote your project.

About Access Innovations

Access Innovations, Inc. is a software and services company founded in 1978. It operates under the stewardship of the firm's principals, Marjorie M.K. Hlava, President, and Jay Ven Eman, CEO.

Closely held and financed by organic growth and retained earnings, the company has three main components: a robust services division, the Data Harmony software line, and the National Information Center for Educational Media (NICEM).


OWL Exports From a Full Thesaurus

Jay Ven Eman, Ph.D.

What do you make of "198"? You could assume it is a number. A computer application can make no reliable assumption: the string could be an integer or a decimal (though not octal, since octal has no digits 8 or 9), but it could also be something else entirely. Neither you nor the computer could do anything useful with it. What if we added a period, so "198" becomes "1.98"? Maybe it represents the value of something, such as a price. If we found it embedded with additional information, we would know more. "It cost 1.98." The reader now knows that it is a price, but software applications still cannot figure that out. There is much the reader still doesn't know. "It cost ¥1.98." "It cost £1.98." "It cost $1.98." There is even more information you would want. Wholesale? Retail? Discounted? Sale price? $1.98 for what?

Basic interpretation is something humans do very well, but software applications do not. Now imagine a software application trying to find the nearest gasoline station to your present location that has gas for $1.98 or less. Per gallon? Per liter? Diesel or regular? Using your location from your car's GPS and a wireless Internet connection, such a request is theoretically possible, but it is beyond the most sophisticated software applications using Web resources. They cannot do the reasoning based upon the current state of information on the Web.

Finding Meaning

Trying to search the Web with conceptual search statements adds more complications. Looking for information about "lead" using just that term returns a mountain of unwanted information about leadership, your water, and conditions at the Arctic Ocean. Refining the query to indicate you are interested in "lead based soldering compounds" helps. Software applications still cannot reason or draw inferences from keywords found in context. At present, only humans are adept at interpreting within context.

Semantic Web

The "Semantic Web" is a series of initiatives to make more of the vast resources found via the Web available to software applications and agents, so that these programs can perform at least rudimentary analysis and processing to help you find that cheaper gasoline. The Web Ontology Language (OWL) is one such initiative and is described herein in relation to thesauri and taxonomies.

At the heart of the Semantic Web are words and phrases that represent concepts that can be used for describing Web resources. Basic organizing principles for “concepts” exist in the present thesaurus standards (ANSI/NISO Z39.19 found at www.niso.org and ISO 2788 and ISO 5964 found at www.iso.org). They are being expanded and revised. Drafts of the revisions are available for review.

The reader is directed to the standards’ Web sites referenced above and to www.accessinn.com, www.dataharmony.com, and www.willpowerinfo.co.uk/thesprin.htm for basic information on thesaurus and taxonomy concepts. It is assumed here that the reader will have a basic understanding of what a thesaurus is, what a taxonomy is, and related concepts. Also, a basic understanding of the Web Ontology Language (OWL) is required. OWL is a W3C recommendation and is maintained at the W3C Web site. For an initial investigation of OWL, the best place to start is the Guide found at W3C.

OWL

From the OWL Guide, “OWL is intended to provide a language that can be used to describe the classes and relations between them that are inherent in Web documents and applications.” OWL formalizes a domain by defining classes and properties of those classes; defining individuals and asserting properties about them; and reasoning about these classes and individuals.

The term ontology is borrowed from philosophy, where it is the science of describing the kinds of entities in the world and how they relate.

An OWL ontology may include classes, properties, and instances. Unlike a philosophical ontology, an OWL ontology includes instances, or members, of classes. Classes and members, or instances, can have properties, and those properties have values. A class can also be a member of another class. OWL ontologies are meant to be distributed across the Web and to be related as needed. The normative OWL exchange syntax is RDF/XML (www.w3.org/RDF/).

Thesaurus

A thesaurus is not an ontology. It does not describe kinds of entities and how they are related in a way that a software agent could use. One could draw useful inferences about the domain of medicine by studying a medical thesaurus, but software cannot. You would discover important terms in the field, how terms are related, which terms have broader concepts and which terms encompass narrower concepts. An inference, or reasoning, engine would be unable to draw any inferences beyond a basic "broader term/narrower term" pairing like "nervous system/central nervous system," unless the relationship is specifically articulated. Is it a whole/part, instance, parent/child, or other kind of relationship?

Using OWL, more information about the classes represented by thesaurus terms, and the relationships between classes, subclasses, and members, can be described. In a typical thesaurus, the terms "nervous system" and "central nervous system" would have the labels BT and NT, respectively. A software agent would not be able to make use of these labels and the relationship they describe unless the agent is custom coded. The purpose of OWL is to provide descriptive information, using RDF/XML syntax, that allows OWL parsers and inference engines, particularly those not within the control of the owners of the target thesaurus, to use the incredible intellectual value contained in a well-developed thesaurus.

The levels of abstraction should be apparent at this point. At one level there are terms. At another level the relationships between groups of terms are described within a thesaurus structure. The thesauri standards do not dictate how to label thesaurus relationships. A term could be USE Agriculture or Preferred Term Agriculture or PT Agriculture. Hard coding of software agents with all of the possible variations of thesaurus labels is impractical.

OWL then is used to describe labels such as BT, NT, NPT, and RT1, etc., and to describe additional properties about classes and members such as the type of BT/NT relationship between two terms. Additional power can be derived when two or more thesauri OWL ontologies are mapped. This would allow Web software agents to determine the meaning of subject terms (key words) found in the meta-data element of Web pages, to determine if other Web pages containing the same terms have the same meaning, and to make additional inferences about those Web resources.

An OWL output from a full thesaurus provides semantic meaning to the basic classes and properties of a thesaurus. Such an output becomes a true Web resource and can be used more effectively by automated processes. Another layer of OWL wrapped around subject terms from an OWL level thesaurus and the resources (such as Web pages) these subject terms are describing would be an order of magnitude more powerful, but also more complicated and difficult to implement.

OWL Thesaurus Output

An OWL thesaurus output contains two major parts. The first part articulates the basic definition of the structure of the thesaurus. It is an XML/RDF schema. As such, a software agent can use the resolving properties in the schema to locate resources that provide the necessary logic needed to use the thesaurus.

FIGURE 1 – XML/RDF/OWL DECLARATIONS

<!DOCTYPE rdf:RDF [
<!ENTITY rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#" >
<!ENTITY owl "http://www.w3.org/2002/07/owl#" >
<!ENTITY xsd "http://www.w3.org/2001/XMLSchema#" > ]>
<rdf:RDF
xmlns    ="http://localhost/owlfiles/DHProject#"
xmlns:DHProject ="http://localhost/owlfiles/DHProject#"
xmlns:base ="http://localhost/owlfiles/DHProject#"
xmlns:owl ="http://www.w3.org/2002/07/owl#"
xmlns:rdf ="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:xsd ="http://www.w3.org/2001/XMLSchema#">
<owl:Ontology rdf:about="">
<rdfs:comment>OWL export from MAIstro</rdfs:comment>
<rdfs:label>DHProject Ontology</rdfs:label>
</owl:Ontology>

Without agonizing over the details, Figure 1 provides the necessary declarations in the form of URLs so that software agents can locate additional resources related to this thesaurus. The software agent would not have to have any of the W3C recommendations (XML, RDF, OWL) hard coded into its internal logic. It would only need 'resolving' logic such as, "if you encounter a URL, then do the following…"
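As a hedged illustration of such resolving logic, the sketch below uses the open-source Python rdflib library to parse an OWL export and walk its BroaderTerm assertions; the file name is hypothetical, and the namespace is copied from Figure 1.

from rdflib import Graph, Namespace

DH = Namespace("http://localhost/owlfiles/DHProject#")   # from Figure 1

g = Graph()
g.parse("dhproject.owl", format="xml")   # RDF/XML, the normative OWL syntax

# Enumerate every (term, broader term) pair asserted in the export.
for term, broader in g.subject_objects(DH.BroaderTerm):
    print(term, "has broader term", broader)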

FIGURE 2 – SAMPLE TERM RECORD OUTPUT IN XML

<TermInfo>
<T>Agrotechnology</T>
<BT>Biotechnology</BT>
<NT>Animal management technologies</NT>
<NT>Controlled environment agriculture</NT>
<NT>Genetically modified crops</NT>
<RT>Agricultural science</RT>
<RT>Food technology</RT>
<UF>Plant engineering</UF>
<Scope_Note></Scope_Note>
<Editorial_Note></Editorial_Note>
<Facet></Facet>
<History></History>
</TermInfo>

Figure 2 shows a sample thesaurus term record output in XML for the term "Agrotechnology". This term has BT, NT, RT, UF, Scope_Note, Editorial_Note, Facet, and History as a complex combination of classes, members, and properties. Anyone familiar with thesauri can determine what abbreviations such as BT, NT, and RT mean and thus can infer the relationships among all of the terms in the term record. An OWL thesaurus output provides additional intelligence that helps software make the same inferences.
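Even before OWL enters the picture, the plain XML of Figure 2 is machine-readable. A short sketch using Python's standard library (the file name "record.xml" is hypothetical):

import xml.etree.ElementTree as ET

record = ET.parse("record.xml").getroot()          # the <TermInfo> element
term = record.findtext("T")
broader = [bt.text for bt in record.findall("BT")]
narrower = [nt.text for nt in record.findall("NT")]
print(term, "BT:", broader, "NT:", narrower)

The code can extract the strings, but nothing in the XML itself tells the software what BT means; that is exactly the gap the OWL output fills.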

After the declarations portion, shown in Figure 1, the remainder of the first part of an OWL thesaurus output is the schema describing the classes, subclasses, and members that comprise a thesaurus, along with the properties of each. Each of the XML elements in Figure 2 (e.g., <RT>) is defined in the schema, together with its properties and relationships. These definitions conform to the OWL W3C recommendation.

The first part of an OWL thesaurus output contains declarations and classes, subclasses, and their properties. It contains all of the logic needed by a specialized agent to make sense of your thesaurus and other OWL thesaurus resources on the Web.

FIGURE 3 – OWL OUTPUT OF TERM RECORD “AGROTECHNOLOGY”

<PreferredTerm rdf:ID="T131">
<rdfs:label xml:lang="en">Agrotechnology</rdfs:label>
<BroaderTerm rdf:resource="#T603"
  newsindexer:alpha="Biotechnology"/>
<NarrowerTerm rdf:resource="#T252"
  newsindexer:alpha="Animal management technologies"/>
<NarrowerTerm rdf:resource="#T1221"
  newsindexer:alpha="Controlled environment agriculture"/>
<NarrowerTerm rdf:resource="#T2166"
  newsindexer:alpha="Genetically modified crops"/>
<Related_Term rdf:resource="#T127"
  newsindexer:alpha="Agricultural science"/>
<Related_Term rdf:resource="#T2020"
  newsindexer:alpha="Food technology"/>
<Non-Preferred_Term rdf:resource="#T3898"
  newsindexer:alpha="Plant engineering"/>
</PreferredTerm>

The second part of an OWL thesaurus contains the terms of your thesaurus marked up according to the OWL recommendation. Figure 3 shows an OWL output for our sample term, “Agrotechnology”. (Note, since there are no values found in Figure 2 for Scope_Note, Editorial_Note, Facet, and History, these elements are not present in Figure 3.)

Now our infamous software agent could infer that "Agrotechnology" is a 'NarrowerTerm' of "Biotechnology". "Agrotechnology" has three 'NarrowerTerms', two 'RelatedTerms', and one 'NonPreferredTerm'. From the OWL output, the software agent can resolve the meaning and use of 'BroaderTerm', 'NarrowerTerm', 'RelatedTerm', and 'NonPreferredTerm' by navigating to the various URLs. The schema dictates that if a term has a 'NarrowerTerm' property, then it must also have the property type 'BroaderTerm'. A term can't be a narrower term if it doesn't have a broader term. A term that is a 'BroaderTerm' must also be a 'PreferredTerm', and so on.
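A toy sketch of that inverse-pair inference, with term IDs hand-copied from Figure 3 (T131 is Agrotechnology; T252, T1221, and T2166 are its narrower terms):

# Each (parent, child) pair records a NarrowerTerm assertion from Figure 3.
narrower = {("T131", "T252"), ("T131", "T1221"), ("T131", "T2166")}

# The schema's rule: every NarrowerTerm assertion implies the inverse
# BroaderTerm assertion, so an inference engine can derive the pairs.
broader = {(child, parent) for (parent, child) in narrower}

assert ("T252", "T131") in broader   # T252's broader term is T131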

FIGURE 4 – OWL OUTPUT OF TERM RECORD “PLANT ENGINEERING”

<NonPreferredTerm rdf:ID="T3898">
<rdfs:label xml:lang="en">Plant engineering</rdfs:label>
<USE rdf:resource="T131" newsindexer:alpha="Agrotechnology"/>
</NonPreferredTerm>

Our thesaurus software agent can infer from Figure 3 that the thesaurus it is evaluating uses “Agrotechnology” for “Plant engineering”. Figure 4 identifies “Plant engineering” as a ‘NonPreferredTerm’ and identifies “Agrotechnology” as the ‘PreferredTerm’. (The logic in the schema dictates that if you have a “NonPreferredTerm”, then it must have a “PreferredTerm”.)

Suppose our software agent encounters “Plant engineering” at another Web site and uses it to locate resources there. Now the agent locates your Web site. The agent would first use “Plant engineering”. From your OWL thesaurus output it would infer that at your site it should use your preferred term, “Agrotechnology”, to locate similar resources.
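The lookup the agent performs can be sketched in a few lines; here the USE mapping is hand-built from Figures 3 and 4 rather than parsed out of the OWL output.

# NonPreferredTerm -> PreferredTerm, i.e. the USE relationship.
use = {"Plant engineering": "Agrotechnology"}

def preferred(term: str) -> str:
    """Follow the USE pointer; terms with no pointer map to themselves."""
    return use.get(term, term)

print(preferred("Plant engineering"))   # -> Agrotechnology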

All the terms and term relationships in your thesaurus or taxonomy would be defined in part two of the OWL thesaurus output. It is now a Web resource that can be used by software agents. Designed to be distributed and referenced, a given base OWL thesaurus can grow as other thesaurus ontologies reference it.

More Meaning Needed

Even a thesaurus wrapped in OWL falls short of the full potential of the Semantic Web. This 'first order' output allows other thesaurus applications to make inferences about classes, subclasses, and members of a thesaurus. By "reading" the OWL wrappings, any thesaurus OWL software agent can make useful inferences. By using classes, subclasses, and members and their properties, Web software agents would be able to reproduce the hierarchical structure of a thesaurus outside of the application used to construct it.

However, a lot is still missing. For example, knowing a term’s parent, children, other terms it is related to, and terms it is used for, does not tell you what the term means and what it might be trying to describe. Additional classes, subclasses, and members all with properties are needed. How a term is supposed to be used and why this ‘term’ is preferred over that ‘term’ would be enormously useful properties for improving the performance of software agents.

A more difficult layer of semantic meaning is the relationship between a thesaurus term and the entity, or object, it describes. An assignable thesaurus term is a member of class “PreferredTerm”. When it is assigned to an object, for example a research report or Web page, that term becomes a property of that object. For a Web page, descriptive terms become attributes of the ‘Meta’ element:

<META NAME="KEYWORDS" CONTENT="content management software,
xml thesaurus, concept extraction, information retrieval,
knowledge extraction, machine aided indexing,
taxonomy management system, text management, xml">

None of the intelligence found in an OWL thesaurus output is found in the Meta element. Having that intelligence improves the likelihood that our software agent can make useful inferences about this Web resource.

This intelligence is not currently available because HTML does not allow for OWL markup of keywords in the Meta element. There are major challenges to doing this. To illustrate, the single keyword, “machine aided indexing”, is rendered in Figure 5 as an OWL thesaurus output. This is very heavy overhead.

FIGURE 5 – OWL OUTPUT OF TERM RECORD MACHINE AIDED INDEXING

<PreferredTerm rdf:ID="T131">
<rdfs:label xml:lang="en">Machine aided indexing</rdfs:label>
<BroaderTerm rdf:resource="#T603"
 newsindexer:alpha="Information technology"/>
<NarrowerTerm rdf:resource="#T1221"
 newsindexer:alpha="Concept extraction"/>
<NarrowerTerm rdf:resource="#T2166"
 newsindexer:alpha="Rule base techniques"/>
<Related_Term rdf:resource="#T127"
 newsindexer:alpha="Categorization systems"/>
<Related_Term rdf:resource="#T2020"
 newsindexer:alpha="Classification systems"/>
<Non-Preferred_Term rdf:resource="#T3898"
 newsindexer:alpha="MAI"/>
</PreferredTerm>

The entire rendering depicted in Figure 5 would not be necessary for each keyword assigned to the Meta element of a Web page. A shorthand version could be designed that would direct software agents to the OWL thesaurus output, but such a shorthand method is not available.

Even if HTML incorporates a shorthand OWL markup for Meta keywords, the intelligence required to apply the right keywords automatically, for example, making the determination, “Web page x is about Machine aided indexing”, is not in the current OWL output. Automatic or semiautomatic indexing is the only way to handle volume and variety, especially dealing with Web pages.

Commercial applications such as Data Harmony’s M.A.I.™ Concept Extractor© and similar products provide machine automated indexing solutions. Theoretically, the knowledge representation systems that drive machine automated indexing and classification systems could incorporate OWL markup. When a machine indexing system assigned a preferred term to a Web page, it would write it into the Meta element along with its OWL markup.

However, to truly achieve the objectives of the Semantic Web, the OWL W3C recommendation should be extended to include the decision algorithms used in the machine automated indexing process, or alternative W3C recommendations regarding the Semantic Web should be used in conjunction with OWL. If this could be accomplished, then software agents could determine the logic used in assigning terms. Next, the agent could compare the logic used at other Web sites and would then be able to make comparisons and draw conclusions about various Web resources; conclusions like this: of the eighteen Web sites your software agent reviewed that discussed selling gasoline, only eight were actual gas stations, and only four of the eight had data the agent could determine was the retail price for unleaded.

We have moved closer to locating the least expensive gasoline within a five-mile radius of our current location. What has been described herein is actually being done, but so far only in closed environments where all of the variables are controlled. For example, there are Web sites that specialize in price comparison shopping.

Beyond these special cases, for the open Web the challenges are great. The sheer size of the Web and its speed of growth are obvious. More challenging is capturing meaning in knowledge representation systems like OWL (and other Semantic Web initiatives at W3C like SKOS, Topic Maps, etc.). How many OWL thesauri will there be? How many are needed? How much horsepower will be needed for an agent to resolve meaning when OWL thesauri are cross-referencing each other in potentially endless loops?

For these and other reasons, the Semantic Web may not live up to its full promise. The complexity and the magnitude of the effort may prove to be insurmountable. That said, there will be a Semantic Web and OWL will play an important role, but it will probably be a more simplified semantic architecture and more isolated, for example, to vertical markets or specific fields and disciplines.

For the reader, before you launch your own initiatives, assess your internal resources and measure the level of internal commitment, particularly at the upper levels of your organization. Know what is happening in your industry or field. If Semantic Web initiatives are happening in your industry, then the effort needed to deploy a taxonomic strategy (OWL being one piece of the solution) should be seriously considered. If you don't make the effort, your Web resources and your vast internal, private resources risk being lost in the 'sea of meaninglessness', putting you at a tremendous competitive disadvantage.

1. BT – broader term, NT – narrower term, NPT – non-preferred term, RT – related term


Automatic Indexing: A Matter of Degree

Marjorie M.K. Hlava

Picture yourself standing at the base of that metaphorical range, the Information Mountains, trailhead signs pointing this way and that: Taxonomy, Automatic Classification, Categorization, Content Management, Portal Management. The e-buzz of e-biz has promised easy access to any destination along one or more of these trails, but which ones? The map in your hand seems to bear little relationship to the paths or the choices before you. Who made those signs?

In general, it’s been those venture-funded systems and their followers, the knowledge management people and the taxonomy people. Knowledge management people are not using the outlines of knowledge that already exist. Taxonomy people think you need only a three-level, uncontrolled term list to manage a corporate intranet, and they generally ignore the available body of knowledge that encompasses thesaurus construction. Metadata followers are unaware of the standards and corpus of information surrounding indexing protocols, including back-of-the-book, online and traditional library cataloging. The bodies of literature are distinct with very little crossover. Librarians and information scientists are only beginning to be discovered by these groups. Frustrating? Yes. But if we want to get beyond that, we need to learn — and perhaps painfully, embrace — the new lingo. More importantly, it is imperative for each group to become aware of the other’s disciplines, standards and needs.

We failed to keep up. It would be interesting to try to determine why and where we were left behind. The marketing hype of Silicon Valley, the advent of the Internet, the push of the dot com era and the entry of computational linguists and artificial intelligence to the realm of information and library science have all played a role. But that is another article.

Definitions

The current challenge is to understand, in your own terms, what automatic indexing systems really do and whether you can use them with your own information collection. How should they be applied? What are the strengths and weaknesses? How do you know if they really work? How expensive will they be to implement? We’ll respond to these questions later on, but first, let’s start with a few terms and definitions that are related to the indexing systems that you might hear or read about.

These definitions are patterned after the forthcoming revision of the British National Standard for Thesauri, but do not exactly replicate that work. (Apologies to the formal definition creators; their list is more complete and excellent.)

Document — Any item, printed or otherwise, that is amenable to cataloging and indexing, sometimes known as the target text, even when the target is non-print.
Content Management System (CMS) — Typically, a combination management and delivery application for handling creation, modification and removal of information resources from an organized repository; includes tools for publishing, format management, revision control, indexing, search and retrieval.
Knowledge Domain — A specially linked data-structuring paradigm based on a concept of separating structure and content; a discrete body of related concepts structured hierarchically.
Categorization — The process of indexing to the top levels of a hierarchical or taxonomic view of a thesaurus.
Classification — The grouping of like things and the separation of unlike things, and the arrangement of groups in a logical and helpful sequence.
Facet — A grouping of concepts of the same inherent type, e.g., activities, disciplines, people, natural objects, materials, places, times, etc.
Sub Facet — A group of sibling terms (and their narrower terms) within a facet having mutually exclusive values of some named characteristics.
Node — A sub-facet indicator.
Indexing — The intellectual analysis of the subject matter of a document to identify the concepts represented in the document and the allocation of descriptors to allow these concepts to be retrieved.
Descriptor — A term used consistently when indexing to represent a given concept, preferably in the form of a noun or noun phrase, sometimes known as the preferred term, the keyword or index term. This may (or may not) imply a “controlled vocabulary.”
Keyword — A synonym for descriptor or index term.
Ontology — A view of a domain hierarchy, the similarity of relationships and their interaction among concepts. An ontology does not define the vocabulary or the way in which it is to be assigned. It illustrates the concepts and their relationships so that the user more easily understands its coverage. According to Stanford’s Tom Gruber, “In the context of knowledge sharing…the term ontology…mean(s) a specification of a conceptualization. That is, an ontology is a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents.”
Taxonomy — Generally, the hierarchical view of a set of controlled vocabulary terms. Classically, taxonomy (from Greek taxis meaning arrangement or division and nomos meaning law) is the science of classification according to a pre-determined system, with the resulting catalog used to provide a conceptual framework for discussion, analysis or information retrieval. In Web portal design, taxonomies are often created to describe categories and subcategories of topics found on a website.
Thesaurus — A controlled vocabulary wherein concepts are represented by descriptors, formally organized so that paradigmatic relationships between the concepts are made explicit, and the descriptors are accompanied by lead-in entries. The purpose of a thesaurus is to guide both the indexer and the searcher to select the same descriptor or combination of descriptors to represent a given subject. A thesaurus usually allows both an alphabetic and a hierarchical (taxonomic) view of its contents. ISO 2788 gives us two definitions for thesaurus: (1) “The vocabulary of a controlled indexing language, formally organized so that the a priori relationships between concepts (for example as ‘broader’ and ‘narrower’) are made explicit” and (2) “A controlled set of terms selected from natural language and used to represent, in abstract form, the subjects of documents.”

Are these old words with clearly defined meanings? No. They are old words dressed in new definitions and with new applications. They mean very different things to different groups. People using the same words but with different understandings of their meanings have some very interesting conversations in which no real knowledge is transferred. Each party believes communication is taking place when, in actuality, they are discussing and understanding different things. Recalling Abbott and Costello’s Who’s on First? routine, a conversation of this type could be the basis for a great comedy routine (SIG/CON perhaps), if it weren’t so frustrating — and so important. We need a translator.

For example, consider the word index. To a librarian, an index is a compilation of references grouped by topic, available in print or online. To a computer science person (that would be IT today), it would refer to the inverted index used to do quick look-ups in a computer software program. To an online searcher, the word would refer to the index terms applied to the individual documents in a database that make it easy to retrieve by subject area. To a publisher, it means the access tool in the back of the book listed by subject and sub-subject area with a page reference to the main book text. Who is right? All of them are correct within their own communities.

Returning to the degrees of application for these systems and when to use one, we need to address each question separately.

What Systems Are There?

What are the differences among the systems for automatic classification, indexing and categorization? The primary theories behind the systems are:

  • Boolean rule base variations including keyword or matching rules
  • Probability of application statistics (Bayesian statistics)
  • Co-occurrence models
  • Natural language systems

New dissertations will bring forth new theories that may or may not fit in this lumping.

How Should They Be Applied?

Application is achieved in two steps. First, the system is trained in the specific subject or vertical area. In rule-based systems this is accomplished by (1) selecting the approved list of keywords to be used and, through matching and synonyms, building simple rules and (2) employing phraseological, grammatical, syntactical, semantic, usage, proximity, location, capitalization and other algorithms — based on the system — for building complex rules. This means that, frequently, the rules are keyword-matched to synonyms or to word combinations using Boolean statements in order to capture the appropriate indexing out of the target text.

In Bayesian engines the system first selects the approved list of keywords to be used for training. The system is trained using the approved keywords against a set of documents, usually about 50 to 60 documents (records, stories). This creates scenarios for word occurrence based on the words in the training documents and how often they occur in conjunction with the approved words for that item. Some systems use a combination of Boolean and Bayesian to achieve the final indexing results.
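A toy sketch of the training idea, not any vendor's algorithm: count how often words co-occur with each approved keyword over a (far too small) training set, then score a new document against those counts.

from collections import Counter, defaultdict

# Approved keyword -> training documents (real systems use about 50 each).
training = {
    "Agrotechnology": ["genetically modified crops boost yield",
                       "controlled environment agriculture methods"],
    "Food technology": ["food processing and packaging methods"],
}

counts = defaultdict(Counter)
for keyword, docs in training.items():
    for doc in docs:
        counts[keyword].update(doc.lower().split())

def categorize(document: str) -> str:
    """Return the keyword whose training words best match the document."""
    words = document.lower().split()
    return max(counts, key=lambda k: sum(counts[k][w] for w in words))

print(categorize("new agriculture crops"))   # -> Agrotechnology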

Natural language systems base their application on the parts of speech and the nature of language usage. Language is used differently in different applications. Think of the word plasma. It has very different meanings in medicine and in physics, although the word has the same spelling and pronunciation, not to mention etymology. Therefore, the contextual usage is what informs the application.

In all cases it is clear that a taxonomy or thesaurus or classification system needs to be chosen before work can begin. The resulting keyword metadata sets depend on a strong word list to start with — regardless of the name and format that may be given to that word list.

What Are the Strengths and Weaknesses?

The chief weakness of these systems compared to human indexing is the frequency of what are called false drops. That is, the keywords selected fit the computer model but do not make sense in actual use. These terms are considered noise in the system and in application. Systems work to reduce the level of noise.

The measure of the accuracy of a system is based on

  • Hits — exact matches to what a human indexer would have applied to the document
  • Misses — the keywords a human would have selected that a computerized system did not
  • Noise — keywords selected by the computer that a human would not have selected

The statistical ratios of Hits, Misses and Noise are the measure of how good the system is. The cut-off should be 85% Hits, measured against 100% accurate (human) indexing. That means that Noise and Misses combined need to be less than 15%.
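These ratios are straightforward to operationalize; a minimal sketch comparing machine-assigned terms against a human-indexed gold set:

def hit_miss_noise(machine: set, human: set):
    hits = machine & human      # terms both selected
    misses = human - machine    # human-only terms the system missed
    noise = machine - human     # machine-only terms a human would reject
    accuracy = len(hits) / len(human) if human else 0.0
    return hits, misses, noise, accuracy

machine = {"Agrotechnology", "Biotechnology", "Plumbing"}
human = {"Agrotechnology", "Biotechnology", "Food technology"}
print(hit_miss_noise(machine, human)[3])   # -> two hits of three, about 0.67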

A good system will provide an accuracy rate of 60% initially from a good foundation keyword list and 85% or better with training or rule building. This means that there is still a margin of error expected and that the system needs — and improves with — human review.

Perceived economic or workflow impacts often make human review unacceptable, which leads to the attempt to provide some form of fully automated indexing. Mitigating the results so that human indexers are not needed is addressed in a couple of ways. On the one hand, suppose that the keyword list is hierarchical (the taxonomy view) and goes to very deep levels in some subject areas, perhaps 13 levels in the hierarchy. A term can be analyzed and applied only at the final level, and therefore its use is concise and plugged into a narrow application.

On the other hand, it may also be “rolled up” to ever-broader terms until only the first three levels of the hierarchy are used. This second approach is preferred in the web-click environment, where popular thinking (and some mouse-behavior research) indicates that users get bored at three clicks and will not go deeper into the hierarchy anyway.
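The roll-up option amounts to walking the broader-term chain; a sketch with an illustrative four-level hierarchy, not taken from any real taxonomy:

# child -> parent links; "Medicine" is the level-1 top term.
broader = {
    "Central nervous system": "Nervous system",
    "Nervous system": "Anatomy",
    "Anatomy": "Medicine",
}

def depth(term: str) -> int:
    return 1 if term not in broader else 1 + depth(broader[term])

def roll_up(term: str, max_level: int = 3) -> str:
    """Replace a deep term with its ancestor within the top max_level levels."""
    while depth(term) > max_level:
        term = broader[term]
    return term

print(roll_up("Central nervous system"))   # -> Nervous system (level 3)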

These two options make it possible to use any of the three types of systems for very quick and fully automatic bucketing or filtering of target data for general placement on the website or on an intranet. Achieving deeper indexing and precise application of keywords still requires human intervention, at least by review, in all systems. The decision then becomes how precisely and deeply you will develop the indexing for the system application and the user group you have in mind.

How Do We Know If They Really Work?

You can talk with people who have tried to implement these systems, but you might find that (1) many are understandably reluctant to admit failure of their chosen system and (2) many are cautiously quiet around issues of liability, because of internal politics or for other reasons. You can review articles, white papers and analyst reports, but keep in mind that these may be biased toward the person or company who paid for the work. A better method is to contact users on the vendor’s customer list and speak to them without the vendor present. Another excellent method is to visit a couple of working implementations so that you can see them in action and ask questions about the system’s pluses and minuses.

The best method of all is to arrange for a paid pilot. In this situation you pay to have a small section of your taxonomy and text processed through the system. This permits you to analyze the quality and quantity of real output against real and representative input.

How Expensive Will They Be to Implement?

We have looked at three types of systems. Each starts with a controlled vocabulary, which could be a taxonomy or thesaurus, with or without accompanying authority files. Obviously you must already have, or be ready to acquire or build, one of these lists to start the process. You cannot measure the output if you don’t have a measure of quality. That measure should be the application of the selected keywords to the target text.

Once you have chosen the vocabulary, the road divides. In a rule base, or keyword, system the simple rules are built automatically from the list for match and synonym rules, that is, "See XYZ, Use XYZ." The complex rules are partially programmatic and partially written by human editors/indexers. The building process averages 4 to 10 complex rules per hour. The process of deciding what rules should be built is based on running the simple rule base against the target text. If that text is a vetted set of records — already indexed and reviewed to assure good indexing — statistics can be automatically calculated. With the Hit, Miss and Noise statistics in hand, the rule builders use the statistics as a continual learning tool for further building and refinement of the complex rule base. Generally 10 to 20% of terms need a complex rule. If the taxonomy has 1000 keyword terms, then the simple rules are made programmatically and the complex rules — 100 to 200 of them — would be built in 10 to 50 hours. The result is a rule base or knowledge extractor or concept extractor to run against target text.
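A hedged sketch of the two rule types, with illustrative terms: the simple rules are straight matches that can be generated from the vocabulary, while the complex rule adds the Boolean, proximity-style conditions described above.

import re

# Simple rules, generated programmatically: synonym phrase -> descriptor.
simple_rules = {"plant engineering": "Agrotechnology",
                "gm crops": "Genetically modified crops"}

def apply_simple(text: str) -> list:
    lowered = text.lower()
    return [term for phrase, term in simple_rules.items() if phrase in lowered]

def apply_complex(text: str) -> list:
    """Hand-written rule: 'plasma' means physics only near 'ion(s)'/'fusion'."""
    if re.search(r"\bplasma\b", text, re.I) and \
       re.search(r"\b(ions?|fusion)\b", text, re.I):
        return ["Plasma physics"]
    return []

doc = "Fusion reactors confine plasma; GM crops are unrelated."
print(apply_simple(doc) + apply_complex(doc))
# -> ['Genetically modified crops', 'Plasma physics']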

Bayesian, inference, co-occurrence categorization systems depend on the gathering of training set documents. These are documents collected for each node (keyword term) in the taxonomy that represents that term in the document. The usual number of documents to collect for training is 50. Some require more, some less. Collection of the documents for training may take up to one hour or more per term to gather, to review as actually representing the term and to convert to the input format of the categorization system. Once all the training sets are collected, a huge systems processing task set is run to find the logical connections between terms within a document and within a set of documents. This returns a probability of a set of terms being relevant to a particular keyword term. Then the term is assigned to other similar documents based on the statistical likelihood that a particular term is the correct one (according to the system’s findings on the training set). The result is a probability engine ready to run against a new set of target text.

A natural language system trains the system based on the parts of speech and term usage and builds a domain for the specific area of knowledge to be covered. Generally, each term is analyzed via seven methods:

  • Morphological (term form — number, tense, etc.)
  • Lexical analysis (part of speech tagging)
  • Syntactic (noun phrase identification, proper name boundaries)
  • Numerical conceptual boundaries
  • Phraseological (discourse analysis, text structure identification)
  • Semantic analysis (proper name concept categorization, numeric concept categorization, semantic relation extraction)
  • Pragmatic (common sense reasoning for the usage of the term, such as cause and effect relationships, i.e., nurse and nursing)

This is quite a lot of work, and it may take up to four hours to define a single term fully with all its aspects. Here again some programmatic options exist as well as base semantic nets, which are available either as part of the system or from other sources. WordNet is a big lexical dictionary heavily used by this community for creation of natural language systems. And, for a domain containing 3,000,000 rules of thumb and 300,000 concepts (based on a calculus of common sense), visit the CYC Knowledge Base. These will supply a domain ready to run against your target text. For standards evolving in this area take a look at the Rosetta site on the Internet.

Summary

There are real and reasonable differences in deciding how a literal world of data, knowledge or content should be organized. In simple terms, it’s about how to shorten the distance between questions from humans and answers from systems. Purveyors of various systems maneuver to occupy or invent the standards high ground and to capture the attention of the marketplace, often bringing ambiguity to the discussion of process and confusion to the debate over performance. The processes are complex and performance claims require scrutiny against an equal standard. Part of the grand mission of rendering order out of chaos is to bring clarity and precision to the language of our deliberations. Failure to keep up is failure to engage, and such failure is not an option.

We have investigated three major methodologies used in the automatic and semi-automatic classification of text. In practice, many of the systems use a mixture of the methods to achieve the result desired. Most systems require a taxonomy in order to start and most systems tag text to each keyword term in the taxonomy as metadata in the keyword name or in other elements as the resultant.

