
Projects

The AOR Toolkit


The Assessing Organisational Readiness toolkit was published in May 2016 and is available as a free download.

The Toolkit will help you measure your organisational readiness for managing digital content.

This process can be useful for making effective decisions about how to create, manage, store and preserve your digital content. Likewise, understanding future requirements is necessary to enable your organisation to decide whether specific actions need to be taken in regard to particular content, or when and how it is desirable to improve on current capabilities.

“It’s about measurement, not improvement; you don’t need to supply ‘evidence’ that you are doing anything.”


How it works

The toolkit is structured as a number of elements, each describing an aspect of digital content management and covering three discrete areas (organisational, technological and resource), also known as ‘legs’. This allows users to:

  • Assess current capabilities from an organisational, technological and resource point of view
  • Assess individual, departmental and organisation-wide readiness
  • Use toolkit examples to easily match existing capabilities
  • Populate our AOR scorecard template

Who is it for?

One key aim of re-launching the AOR toolkit was to widen its use beyond education institutions and develop an easy-to-use reference guide for any organisation that needs to manage digital content. Here are some job titles we think would have an interest:

  • Archivists
  • Data Curators
  • FOI Officers
  • Librarians
  • Records Managers
  • Repository Managers



Preserving Digital Content – Taking first steps with the AOR toolkit

The ART team at ULCC has long had an interest in promoting and selling our digital preservation expertise, in the form of the Digital Preservation Training Programme, as a consultancy service, and most recently with the relaunch of the AIDA toolkit as the AOR toolkit. However, in our work we meet a lot of people in a lot of organisations for whom “preservation” – in perhaps the traditional archival sense – isn’t necessarily their sole or principal interest.

Read more


Self-assessment as digital preservation training aid

On the Digital Preservation Training Programme, we always like to encourage students to assess their organisation and its readiness to undertake digital preservation. It’s possible that AIDA and the new AOR Toolkit could continue to have a small part in this process.

Self-assessment in DPTP

We have incorporated self-assessment exercises as a digital preservation training aid in the DPTP course for many years. We don’t do it much lately, but we used to get students to map themselves against the OAIS Reference Model. The idea was that they could identify gaps in the Functional Entities and information package creation, and work out who their Producers and Consumers were. We would ask them to draw it up as a flipchart sketch, using dotted lines to express missing elements or gaps.

Read more

AIDA’s new name: AOR Toolkit

The hardest part of any project is devising a name for the output. The second hardest thing is devising a name that can also be expressed as a memorable acronym.

I think one of the most successful instances I have encountered was the CAMiLEON Project. This acronym unpacks into Creative Archiving at Michigan and Leeds Emulating the Old on the New. It brilliantly manages to include the names of both sponsoring institutions, accurately describes the work of the project, and still ends up as a memorable one-word acronym.

Read more


The AIDA toolkit: use cases

There are a few isolated uses of the old AIDA Toolkit. In this blog post I will try to recount some of these AIDA toolkit use cases.

In the beginning…

In its first phase, in 2009, I was greatly aided by five UK HE institutions who volunteered to act as guinea pigs and do test runs, though this was mainly to help me improve the structure and the wording. However, Sarah Jones of HATII was very positive about its potential in 2010.

Read more

Reworking AIDA: Storage

In the fourth of our series of posts on reworking the AIDA self-assessment toolkit, we look at a technical element – Managed Storage.


In reworking the toolkit, we are now looking at the 11th Technology Element. In the “old” AIDA, this was called “Institutional Repository”, and it pretty much assessed whether the University had an Institutional Repository (IR) system and the degree to which it had been successfully implemented, and was being used.

For the 2009 audience, and given the scope of what AIDA was about, an IR was probably just the right thing to assess. In 2009, Institutional Repository software was the new thing and a lot of UK HE & FE institutions were embracing it enthusiastically. Of course your basic IR doesn’t really do storage by itself; certainly it enables sharing of resources, it does managed access, perhaps some automated metadata creation, and allows remote submission of content. An IR system such as EPrints can be used as an interface to storage – as a matter of fact it has a built-in function called “Storage Manager” – but it isn’t a tool for configuring the servers where content is stored.

Storage in 2016

In 2016, a few things occurred to me while thinking about the storage topic.

  1. I doubt I shall ever understand everything to do with storage of digital content, but since working on the original AIDA my understanding has improved somewhat. I now know that it is at least technically possible to configure IT storage in ways that match the expected usage of the content. Personally, I’m particularly interested in such configuration for long-term preservation purposes.
  2. I’m also aware that it’s possible for a sysadmin – or even a digital archivist – to operate some kind of interface with the storage server, using for instance an application like “storage manager”, that might enable them to choose suitable destinations for digital content.
  3. Backup is not the same as storage.
  4. Checksums are an essential part of validating the integrity of stored digital objects.

I have thus widened the scope of Element TECH 11 so that we can assess more than the limited workings of an IR. I also went back to two other related elements in the TECH leg, and attempted to enrich them.

To address (1), the capability being assessed is not just whether your organisation has a server room or network storage, but whether you have identified your storage needs correctly and have configured the right kind of storage to keep your digital content (and deliver it to users). We might add that this capability has nothing to do with the quantity, number, or size of your digital materials.

To assess (2), we’ve identified the requirement for an application or mechanism that helps put things into storage, take them out again, and assist with access while they are in storage. We could add that this interface mechanism is not doing the same job as metadata, capability for which is assessed elsewhere.

To address (3), I went back to TECH 03 and changed its name from “Ensuring Availability” to “Ensuring Availability / Backing Up”. The element description was then expanded with more detail on backup actions: we try to describe the optimum backup scenario, based on actual organisational needs, and provide caveats for cases where multiple copies can cause syncing problems. Work done on the CARDIO toolkit was very useful here.

To incorporate (4), I thought it best to include checksums in element TECH 04, “Integrity of Information”. Checksum creation and validation is now explicitly suggested as one possible method to ensure integrity of digital content.
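To make the checksum suggestion concrete, here is a minimal sketch of what checksum creation and validation might look like in practice, using Python’s standard hashlib module. The function names and the idea of comparing against a value recorded at ingest are illustrative assumptions, not part of the toolkit itself.

```python
import hashlib

def sha256_checksum(path, chunk_size=65536):
    """Compute the SHA-256 checksum of a file, reading it in chunks
    so that large digital objects don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, recorded_checksum):
    """Re-compute the checksum and compare it with the value recorded
    when the object entered storage; a mismatch signals lost integrity."""
    return sha256_checksum(path) == recorded_checksum
```

Run periodically against a manifest of recorded checksums, a routine like this is one simple way an organisation could demonstrate the integrity capability that TECH 04 now describes.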

Managed storage as a whole is thus distributed among several measurable TECH elements in the new toolkit.

In this way I’m hoping to arrive at a measurable capability for managed storage that does not pre-empt the use the organisation wishes to make of such storage. The wording is such that even a digital preservation strategy could be assessed in the new toolkit – as could many other uses. If I can get this right, it would be an improvement on simply assessing the presence of an Institutional Repository.


Reworking AIDA: Legal Compliance

Today we’re looking briefly at legal obligations concerning management of your digital content.
The original AIDA had only one section on this, and it covered Copyright and IPR. These issues were important in 2009 and are still important today, especially in the context of research data management when academics need to be assured that attribution, intellectual property, and copyright are all being protected.

Legal Compliance – widening the scope

For the new toolkit, in keeping with my plan for a wider scope, I wanted to address additional legal concerns. The best solution seemed to be to add a new component to assess them.

What we’re assessing under Legal Compliance:

  1. Awareness of responsibility for legal compliance.
  2. The operation of mechanisms for controlling access to digital content, such as by licenses, redaction, closure, and release (which may be timed).
  3. Processes of review of digital content holdings, for identifying legal and compliance issues.

Legal Compliance – Awareness

The first one is probably the most important of the three. If nobody in the organisation is even aware of their own responsibilities, this can’t be good. My view would be that any effective information manager – archivist, librarian, records manager – is probably handling digital content with potential legal concerns regarding its access, and has a duty of care. But a good organisation will share these responsibilities, and embed awareness into every role.

Legal Compliance – Mechanisms & Procedures

Secondly, we’d assess whether the organisation has any means (policies, procedures, forms) for controlling access and closure; and thirdly, whether there’s a review process that can seek out any legal concerns in certain digital collections.

Legislative regimes vary across the world, of course, which makes it challenging to devise a model that is internationally applicable. The new version of the model name-checks specific UK legislation, such as the Data Protection Act and the Freedom of Information Act. Other countries have their own versions of similar legislation; and copyright laws are widespread, even when they differ in detail and interpretation.

The value of the toolkit, if indeed it proves to have any, is not that we’re measuring an organisation’s specific point-by-point compliance with a certain Statute; rather, we’re assessing the high-level awareness of legal compliance, and what the organisation does to meet it.

Interestingly, the high-level application of legal protection across an organisation is something which can appear somewhat undeveloped in other assessment tools.

The ISO 16363 code of practice refers to copyright implications, intellectual property and other legal restrictions on use only in the context of compiling good Content Information and Preservation Description Information.

The expectation is that “An Archive will honor all applicable legal restrictions. These issues occur when the OAIS acts as a custodian. An OAIS should understand the intellectual property rights concepts, such as copyrights and any other applicable laws prior to accepting copyrighted materials into the OAIS. It can establish guidelines for ingestion of information and rules for dissemination and duplication of the information when necessary. It is beyond the scope of this document to provide details of national and international copyright laws.”

Personally I’ve always been disappointed by the lack of engagement implied here. To be fair though, the Code does cite many strong examples of “Access Rights” metadata, when it describes instances of what exemplary “Preservation Description Information” should look like for Digital Library Collections.

The DPCMM maturity model likewise doesn’t see fit to assess legal compliance as a separate entity, and it is not singled out as one of its 15 elements. However, the concept of “ensuring long‐term access to digital content that has legal, regulatory, business, and cultural memory value” is embedded in the model.

Updating the AIDA toolkit

AIDA cover by ULCC; “Needs Assessment” graphic by OCHA Visual Information Unit, US, Public Domain. Source: thenounproject.com

This week, I have been mostly reworking and reviewing the ULCC AIDA toolkit. We’re planning to relaunch it later this year, with a new name, new scope, and new scorecard.

AIDA toolkit – a short history

The AIDA acronym stands for “Assessing Institutional Digital Assets”. Kevin Ashley and I completed this JISC-funded project in 2009, and the idea was that it could be used by any University – i.e. an Institution – to assess its own capability for managing digital assets.

At the time, AIDA was certainly intended for an HE/FE audience; that’s reflected in the “Institutional” part of the name, and in the type of digital content in scope: content likely to be familiar to anyone working in HE – digital libraries, research publications, digital datasets. As a matter of fact, AIDA was pressed into action as a toolkit directly relevant to the needs of Managing Research Data, as is shown by its reworking in 2011 into the CARDIO Toolkit.

I gather CARDIO, under the auspices of Joy Davidson, HATII and the DCC, has since been quite successful and its take-up among UK Institutions to measure or benchmark their own preparedness for Research Data Management perhaps indicates we were doing something right.

A new AIDA toolkit for 2016

My plan is to open up the AIDA toolkit so that it can be used by more people, apply to more content, and operate on a wider basis. In particular, I want it to apply to:

  • Not just Universities, but any Organisation that has digital content
  • Not just research / library content, but almost anything digital (the term “Digital Assets” always seemed vague to me, whereas “Digital Asset Management” is in fact something very specific and may refer to particular platforms and software)
  • Not just repository managers, but also archivists, records managers, and librarians working with digital content.

I’m also going to be adding a simpler scorecard element; we had one for AIDA before, but it got a little too “clever” with its elaborate weighted scores.

Readers may legitimately wonder whether the community really “needs” another self-assessment tool. We teach several of the known models on our Digital Preservation Training Programme, including the use of the TRAC framework for self-assessment purposes; and since completing AIDA, the excellent DPCMM has become available, and has indeed influenced my thinking. The new AIDA toolkit will continue to be a free download, though, and we’re aiming to retain its overall simplicity, which we believe is one of its strengths.

A new acronym

As part of this plan, I’m keen to bring out and highlight the “Capability” and “Management” parts of the AIDA toolkit, factors which have been slightly obscured by its current name and acronym. With this in mind, I need a new name and a new acronym. The elements that must be included in the title are:

  • Assessing or Benchmarking
  • Organisational
  • Capacity or Readiness [for]
  • Management [of]
  • Digital Content

I’ve already tried feeding these combinations through various online acronym generators, and come up empty. Hence we would like to invite the wider digital preservation community to use the power of crowd-sourcing and collect suggestions and ideas. Simply comment below or tweet us at @dart_ulcc using the #AIDAthatsnotmyname hashtag. Naturally, the winner(s) of this little crowd-sourcing contest will receive written credit in the final relaunched AIDA toolkit.


Transcriptorium – The Journey to a Crowdsourcing Platform

At the end of 2015 we saw the last sprint of work on a very comprehensive European project called Transcriptorium (TS), which lasted three years and was funded by the European Commission. Transcriptorium is part of the 7th Framework Programme funding European research and technological development. For this project, ULCC developed one of the tools to facilitate the transcription of historical handwritten document images.

The initial main objectives of Transcriptorium were to:

  1. Enhance handwritten text recognition (HTR) technology for efficient transcription.
  2. Bring the HTR technology to users.
  3. Integrate the HTR results in public web portals.

Read more