Implementing DITA at Micron Technology, Inc. — Interview with Craig Henley, Manager, Marketing Publications Group

Written for Unlimited Priorities and DCLnews Blog.

Charlotte Spinner

Micron Technology, Inc., is one of the world’s leading semiconductor companies. Its DRAM, NAND, and NOR Flash memory products are used in everything from computing, networking, and server applications to mobile, embedded, consumer, automotive, and industrial designs. Craig Henley is manager of the Marketing Publications Group at Micron. His team leads the company’s DITA effort and oversees all conversion and implementation projects.

DITA (Darwin Information Typing Architecture) is an XML-based open standard, initially developed by IBM for technical documentation and now publicly available, that is being applied to technical documentation in many industries. Craig shares with Charlotte Spinner of Unlimited Priorities his thoughts about a recent DITA conversion he worked on with Data Conversion Laboratory.

Charlotte Spinner: Craig, what were the business circumstances that led Micron to DITA? Describe the problem you were trying to solve.

Craig Henley: Micron has one of the most diverse product portfolios in the industry. Our complete spectrum of memory products—DRAM, Flash memory, and more—requires data sheets containing the technical specifications that describe each product. These data-intensive documents typically exceed 100 pages and sometimes reach 200-300 pages, are heavy on graphics and tabular data, and are very complex. For each product family (e.g., SDRAM or DDR2), every item is available in multiple densities (256Mb, 512Mb, 1Gb, 2Gb, etc.), and each permutation requires its own large data sheet document. The data sheet provides the descriptive information a design engineer needs in order to incorporate the parts into their own design, so data sheets are a key component for sales.

The data sheets were maintained using unstructured Adobe® FrameMaker® and stored, along with the many graphics, in large zip files which were then stuffed into a large enterprise CMS. We always knew that 80-90% of the content was reusable and could form a core batch of content, with the rest of the specifications varying a minute amount. But with the old system, if anything changed we had to update each document individually. This was a very unwieldy, very manual process—the “brute force method.” Even if we had to change boilerplate content—something in legal or copyright, logos, colors—it had to be changed in the template, and also in every old document whenever it was next modified. This was a maintenance nightmare, and the challenges compounded exponentially because of the sheer number of items involved.

The bottom line is that all of our products have to be supported at the data sheet level, so documentation is mission-critical for us. And we pride ourselves on the quality of our documentation. So we asked ourselves, “Do we keep trying to work harder, adding more resources and staff? Do we scale the brute force method, or do we work smarter?” Brute force is not cost-effective, so we decided to use DITA to help us work more efficiently.

CS: Beyond the obvious advantages of an XML-based solution, what made DITA an especially good fit for this project?

CH: We’re part of the digital era, but in fact we felt we were behind, because our already large product portfolio was growing through acquisitions. We adopted DITA in the nick of time. It confirmed our expectations with regard to eliminating redundancy and increasing efficiency, and it also opened the door to new capabilities, such as letting us publish documents that we couldn’t produce before.

For example, we had heard from field engineers that customers wanted a full complement of documentation for our Multichip Packages (MCPs), which may contain DRAM, Flash, Mobile DRAM, e-MMC, or more, so we have to assemble data sheets for all of the discrete components, not just a general one for the MCP overall. There are many different types of MCPs—any package can leverage other discrete components. This was a nightmare in the old paradigm. If anyone updated a DRAM or other specification that was being leveraged in 6-7 MCPs, how could we keep up with that?

DITA allows reuse in a way that lets us remove redundant information: we use existing DITA maps, pull out topics that don’t need to be there, nest the maps inside the MCP datasheet, and voila!—it’s created in minutes. All the information is still connected to the topics that are being leveraged in their original discrete datasheets, so if they are updated by engineers, the changes are inherited. This makes it easy to regenerate content, and it’s seamless to the customer.
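A rough sketch of what such a nested map might look like follows; the file names are hypothetical, not Micron’s actual content. Each referenced map is the same map used to build the discrete component’s own data sheet, which is why updates flow through automatically:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
    <map>
      <title>MCP Data Sheet (illustrative)</title>
      <!-- MCP-specific topics authored once for this package -->
      <topicref href="mcp_overview.dita"/>
      <topicref href="mcp_ball_assignments.dita"/>
      <!-- Nested maps reuse the discrete-component content as-is;
           edits to those maps are inherited the next time this
           data sheet is generated -->
      <topicref href="dram_1gb_datasheet.ditamap" format="ditamap"/>
      <topicref href="nand_flash_datasheet.ditamap" format="ditamap"/>
    </map>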

Another factor that made DITA a great solution is that it’s an XML schema that’s very basic in design, so reuse is there and it’s easy, but it imposes enough structure on the user-base that everyone is operating under the same model. We call it “guided authoring”—it keeps people from veering off with their own methods for creating the documentation, which wouldn’t promote clean handoff. Authoring under DITA is clean—it guarantees that elements are used in a consistent way and leaves less room for errors. Initially, industries moving into the XML paradigm developed their own in-house DTDs, and I think that made it slower to adopt. But with DITA, the standardization makes it easy to have interoperability between departments, and even companies, which seems to be supporting its wider-scale adoption.

CS: Did you know much about DITA before embarking on this project?

CH: We had read about DITA in books and independent research articles, and learned more while attending an STC conference, so we had an idea that it could work well for us.

CS: Did you have any trouble selling the idea of a DITA conversion internally?

CH: At the core, we had the support we needed. Our management believes in innovation—they trusted us to go out and do things differently. So we brought in some reps from our key customer base for a pilot study. Once we proved the success of DITA in the pilot mode, and how it could scale, it gained traction and sold itself. We started at the grassroots level and it went from there, one successful demo at a time.

CS: Did you think you could do this alone at any point, or did you always know that using an outside expert was the best approach? What led you to Data Conversion Laboratory?

CH: We like the “learn to fish” approach, but when it came to full-blown conversion of legacy documents, we knew we’d need to go outside.

We had heard of DCL in STC publications, and we regularly read the DCL newsletter. We knew in the back of our minds that if we went full-on to DITA we would need to build a key XML foundation of content, and we didn’t want to do that manually. Tables in XML are complex, and ours are really complex. Our initial attempts at that XML conversion were too time- and labor-intensive, so we were concerned.

We brought in DCL and they talked us through their process. They explained some filters they use, and why the tables don’t have to be such a challenge. A test of some complex tabular data came back in pristine XML, so they were something of a lifesaver for us. DCL’s proprietary conversion technology—that “secret sauce” they have—is pretty magical.

CS: What did you have to do to prepare? Was there a need to restructure any of your data before converting to DITA?

CH: We did have to prepare by learning to do some of our own information architecture work, and we discovered some best practices for prepping our unstructured content. Mostly it involved cleaning up existing source files and making sure we were consistent in our tagging for things like paragraphs, to ensure a clean conversion. It was a fair amount of work—a front-loaded process—but well worth the investment.

CS: How did you get started? What was the initial volume?

CH: We started with a page count of about 2,000, much of which was foundational content we could leverage for lots of other documents. Starting with this set helped us scale almost overnight and accelerated our timeline, ramping quickly from proof-of-concept to pilot to full-blown production. Without that key conversion our timeline would have been drastically pushed out.

CS: Did the process run smoothly?

CH: Yes, I would say it ran very smoothly. There were a few unexpected results, but we communicated those back, adjustments were made, and that was it. The timeframe was on the order of a few months, and that was more dependent on us. We couldn’t keep up with DCL. In fact, we referred to their process as “the goat” because of the way it consumed and converted the data. They’re fast. They were always ahead of us.

CS: How did you make use of DITA maps?

CH: We adopted a basic shell for our documents, but then started nesting DITA maps within other DITA maps. This was another efficiency gain, giving us the capability to assemble large documents literally in minutes as opposed to hours and days. We specialized at the map level for elements and attributes to make them match our typical data sheet output, so we found maps to be quite flexible.
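For instance, a document shell built on bookmap—the standard OASIS specialization of map—can pull nested maps in as chapters, along these lines (the structure is shown only for illustration; Micron’s actual shell and specializations are not reproduced here):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE bookmap PUBLIC "-//OASIS//DTD DITA BookMap//EN" "bookmap.dtd">
    <bookmap>
      <booktitle>
        <mainbooktitle>SDRAM Data Sheet (illustrative)</mainbooktitle>
      </booktitle>
      <!-- Each chapter reuses an existing map assembled elsewhere -->
      <chapter href="features.ditamap" format="ditamap"/>
      <chapter href="functional_description.ditamap" format="ditamap"/>
      <chapter href="electrical_specifications.ditamap" format="ditamap"/>
    </bookmap>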

CS: What, if any, were the happy surprises or unexpected pitfalls of the project, and how did you deal with them?

CH: There were no pitfalls with DITA itself. It’s a general, vanilla specification. It might not semantically match every piece of content, but you can use it out of the box and adjust the tags and attributes to match your environment as needed with a little bit of customization. That’s actually a benefit of DITA—you can use it initially as is, and then modify it further along in your implementation.

CS: What have been the greatest benefits of the conversion? Were you able to reduce redundancy? By how much?

CH: The greatest benefit is that it helped us lay that foundation of good XML content in our CMS, so we can now scale our XML deployment exponentially faster. With regard to redundancy, this is a guesstimate, but I’d say we had about a 75% reduction in manual effort, so a 75% gain in efficiency. For heavier-reuse documents such as MCPs, the benefit scales even more.

CS: How do you anticipate using DITA down the road?

CH: We are publishing different types of documents in our CMS now, going beyond data sheets. Within technical notes, for example, there’s not much reuse, but DITA is still good because that data can be leveraged in other documents. We’d like to bring in even more document types and use cases to take advantage of the reuse and interoperability. You don’t hear about DITA for pure marketing content, or things that are more standalone, but given what we’ve seen, we don’t see why you couldn’t use DITA for that. We’d like to branch out into other types of content. The pinnacle would be to have all of our organization’s data—internal and external—authored, leveraged, and used within the XML paradigm. That might sound crazy and aggressive, but there would be benefits, such as dynamically assembling content for the Web.

CS: Can you expand a little more on dynamically-generated content?

CH: We haven’t fielded many requests for that yet, but we see the potential. Someone could call up and request a particular configuration of data sheets, and we could throw together that shell very fast because the topic-based architecture promotes that. The next level would be to take modular information and make it available on-demand to customers in a forward-looking capacity.

For example, if you build a widget on your website for a customer to request only certain parameters, such as electrical specs, across, say, four densities of a given product family, the content could be assembled as needed. Our CMS does facilitate that internally, assembling content on demand, but that could be a major differentiator for us if available via the intranet and Web. As XML comes of age, it’s not impossible, and it’s probably where we’re headed. We could package in mobile capacity, or whatever’s needed.
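As a sketch of how that kind of selection could work with standard DITA conditional processing: topics or table rows tagged with metadata attributes can be filtered at publish time by a DITAVAL file. The attribute values below are hypothetical, not Micron’s actual metadata scheme:

    <?xml version="1.0" encoding="UTF-8"?>
    <val>
      <!-- Keep only electrical-specification content -->
      <prop att="otherprops" val="electrical" action="include"/>
      <prop att="otherprops" val="timing" action="exclude"/>
      <!-- Keep the four requested densities and drop the rest -->
      <prop att="product" val="256Mb" action="include"/>
      <prop att="product" val="512Mb" action="include"/>
      <prop att="product" val="1Gb" action="include"/>
      <prop att="product" val="2Gb" action="include"/>
      <prop att="product" val="4Gb" action="exclude"/>
    </val>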

By making information available at that level, you speed up responsiveness and get feedback on a specification quickly. Getting requests to the right people quickly can ramp up the communication process for development.

CS: Looking back, how would you advise an organization to prepare if they’re about to embark on a DITA conversion?

CH: I’d advise folks to understand and justify the reasoning. Spell out clearly—and, if possible, quantify—the types of efficiency gains they’ll get by going to this model. Understand that use case and justification, and be prepared to articulate it to the right people. Remember that saving time results in cost savings; be able to articulate that part of the story.

In terms of planning and implementation, start at that proof-of-concept level, move to the pilot level, and identify ways you can scale the pilot to the full-blown organizational level. Have a plan for scaling, because you’re going to hear that question.

Part of our success was adopting DITA and a CMS that was built to work with DITA, marrying the two. A good CMS that works well with an open standard such as DITA lets you scale with your resources, because it gives you methods to accomplish your key tasks and provides avenues for bridging the gap—the leap—from the desktop publishing paradigm to the XML publishing paradigm. DITA with a good CMS implementation helps you bridge that gap and helps your users—writers, application engineers, etc.—take that step.

CS: Are there any other comments you’d like to make about your experiences with DITA?

CH: In a nutshell, we’re believers.

About the Author

Charlotte Spinner is a technical specialist for Unlimited Priorities. Further information about Micron Technology, Inc. may be found at www.micron.com. Further information about Data Conversion Laboratory, Inc. may be found at www.dclab.com.
