This year I devised a tailored version of the AIDA assessment toolkit, which I hope will become something fit to be applied to the management of research data. The exercise has improved AIDA itself, but my wider task is to contribute to an all-purpose Integrated Data Management Planning (IDMP) toolkit being developed by the DCC, which will incorporate parts of other assessment toolkits such as DRAMBORA and DAF, both of which have been used much more widely than AIDA. The original AIDA was targeted at “all digital assets in a University”, which, now I come to think of it, is fairly ambitious. Encouragingly, the DCC project manager tells me “The IDMP toolkit is planning to take forward much of the overall structure of AIDA as we think it is extremely useful as a way to present the practical recommendations we’ll glean from the legacy data.” This suggests to me that the three-legged model and the five stages of development (both devised by Anne Kenney at Cornell University) are proving their integrity and soundness.
I took my results to a workshop in Bristol on November 3rd, where I gave a presentation to the numerous project managers taking part in the JISC Managing Research Data programme. Besides my ally Dr Takeda of the IDMB Project, who has supported me since January, one or two others had tried out AIDA, or at least looked at it, and generally found it helpful or potentially helpful. My graphical expression of the five stages – to which I have added more layers of “semantic meaning” – seemed to go down well with Chris Rusbridge. When designing this new version, I layered in a lot of detail from numerous sources, not least existing questionnaires and published guidance on managing research data; combine that with the original AIDA elements, which included organisation-wide surveys based on Trusted Digital Repository models and digital preservation capabilities, and you get quite a complex matrix. One project immediately spotted that in its current form, AIDA would take a very long time to complete.
Many useful things came up in discussion: (1) if you undertake an AIDA assessment, who completes it? I’ve been clear from the outset that no one person can do it all, and that you’d need to farm out parts of it or work collaboratively; so far, though, it has been tested mainly by records managers, some of whom have a good rapport with their IT managers. As regards research data in a University, who is best placed to help, and how many of them are needed? Perhaps the Finance department, the sysadmins, the people who run the IT procurement programme, and the people who design and implement policies for the University – plus, of course, the assessments from the researchers themselves. I think this will certainly help us model the all-purpose IDMP tool, if we can be quite clear about who is responsible for providing answers, and evidence, for each element in each of the three legs. That would translate into a wide range of user types who can log in to use the tool and perform the assessment.
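To make that idea concrete, here is a minimal sketch of how the IDMP tool might record who answers for each element in each leg. The leg names follow the usual three-legged model (Organisation, Technology, Resources); the element and role names are purely illustrative assumptions, not taken from the actual AIDA element lists.

```python
# Illustrative sketch only: element and role names are invented,
# not drawn from the real AIDA assessment grid.
RESPONSIBILITIES = {
    "Organisation": {
        "Policy framework": ["Policy officers"],
        "Funding commitment": ["Finance department"],
    },
    "Technology": {
        "Storage infrastructure": ["Sysadmins"],
        "Procurement": ["IT procurement team"],
    },
    "Resources": {
        "Staff skills": ["Researchers", "Records managers"],
    },
}

def assessors_for(leg: str, element: str) -> list[str]:
    """Return the user types who should answer for a given element."""
    return RESPONSIBILITIES.get(leg, {}).get(element, [])

def all_user_types() -> set[str]:
    """The full set of user types who would need tool log-ins."""
    return {role
            for elements in RESPONSIBILITIES.values()
            for roles in elements.values()
            for role in roles}
```

A mapping like this would let the tool route each question to the right person and derive the list of account types from the assessment grid itself, rather than maintaining the two separately.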
Lesson (2) – the numerical scoring method I am currently aiming at may not be the best one for every user. Blueprint started to count up their responses and think about calculating averages, but then opted for a qualitative approach rather than a quantitative one. Their reasoning is that the actual difference between the Five Stages carries (in their experience) an error bar of 50%. So their results (though I haven’t seen them) are presumably more of a prose narrative describing how well things are working than a numerical score grading the outcome. Where this leaves the Five Stages scale I’m not sure, but it could probably still work as an indicator.
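The contrast between the two approaches can be sketched in a few lines: a quantitative average of per-element stage numbers versus a qualitative banding that only names the stage most elements fall into. The stage names are my assumption, taken from the Cornell five-stage model the toolkit builds on; the element scores are invented examples.

```python
# Stage names assumed from the Cornell five-stage model; not an
# official AIDA scoring routine, just a sketch of the two options.
STAGES = ["Acknowledge", "Act", "Consolidate", "Institutionalise", "Externalise"]

def numeric_score(element_stages: list[int]) -> float:
    """Quantitative: average the per-element stage numbers (1-5)."""
    return sum(element_stages) / len(element_stages)

def qualitative_band(element_stages: list[int]) -> str:
    """Qualitative: name the stage most elements fall into - an
    indicator rather than a grade, given the wide error bars."""
    modal = max(set(element_stages), key=element_stages.count)
    return STAGES[modal - 1]
```

With a 50% error bar between stages, the banded name is arguably the more honest summary, since an average like 3.2 implies a precision the underlying judgements don't have.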
Lesson (3) – my AIDA structure has a two-level split that allows assessment of the entire University and, beneath that, a Department, School, Research Group or Project. To this lower level I may need to add another unit called a ‘Centre’. Apparently a Centre in a University is a bit like a Department, except that it specialises in a particular strand of research. When it comes to the actual research data, the funding streams are different and more complex. This is good for the researchers, but it also makes it much harder to pin down ownership of the data, and who is ultimately responsible for it.
The theme of the day in Bristol was costs, benefits and sustainability. These are areas I think AIDA can help with in a basic way, but I also think they are better expressed through the matrix which Neil Beagrie is developing within his Keeping Research Data Safe (KRDS) framework. I took notes at one of the workshops where this matrix was discussed, and learned a lot more about research data in many contexts; my impressions might make another interesting post.