

A repository for pi(es)

As you may have read recently, Fabrice Bellard has announced the computation of π to almost 2.7 trillion decimal places using a faster algorithm that allows desktop hardware to be used, rather than the supercomputers usually employed to break this particular record. Bellard is an extremely talented programmer who has made a useful contribution to one area of digital preservation with his emulation and virtualisation system QEMU. But it’s a comment by Les Carr that set me thinking about costs, research data and repositories.

[Image: Pi Pie – CC-BY-NC-SA by Maitri@flickr]

“Would you want to put that in your repository?” asked Les. And this is a particularly extreme example, where we can do some calculations to give us a fairly good answer. Scientific data centres and the researchers who use them have been considering this question for many years, and one way of looking at it is to ask whether the cost of recomputation exceeds the cost of storage over a particular time period. We’re assuming here that the initial question – is this worth keeping at all? – has been answered at least vaguely positively.

Let’s look first at the cost of recomputation. Fabrice says the equipment used for this task cost no more than €2000. If we assume that it has a life of 3 years, that gives us a cost per day of €1.83. I’m avoiding the usual accounting practice of allowing for inflation, or lost interest on capital, in calculating the true depreciation value of the asset – there are a number of different schemes and they all give similar results. I’ve simply divided the capital cost by the number of days of use we’ll get. But computers use electricity, and that costs money as well. Let’s assume this is a power-hungry beast that draws 400W, and that power costs us 13.5¢ per kWh (which is what my domestic tariff works out at, assuming a euro/sterling rate of €1.10 = £1 and 5% VAT). That adds €1.30/day to the cost of running the system, for a total cost of €3.13/day.

Fabrice’s announcement says that it took 131 days of system time to calculate and verify his results, which gives a computational cost of €410.03 – which I’ll round to €410, since I’ve only been using 3 significant figures so far, and because there’s a lot of hand-waving involved in many of these figures. So we know how much it would take to recompute this result, given the software, machine and instructions. (And in the short term the computational cost is likely to decline.)
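Multiplying out the run time gives the recomputation figure; a sketch using the same assumed inputs:

```python
# Total cost of redoing the 131-day computation at the estimated daily rate.
daily_cost_eur = 2000 / (3 * 365) + 0.4 * 24 * 0.135  # depreciation + power
run_days = 131                                        # compute + verify time
recompute_eur = run_days * daily_cost_eur
print(f"recomputation ≈ €{recompute_eur:.0f}")
```

Using the unrounded daily rate gives €409; using the rounded €3.13/day gives the €410.03 quoted above. Either way, “about €410” is the right level of precision.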

The answer needs a terabyte of storage. What will it cost to keep that in a repository? That’s a slightly more difficult question to answer, but we can give a number of figures that provide upper and lower bounds. SDSC quote $390/TB/year for archival tape storage (dual copies), excluding setup costs and assuming no retrieval. Moore et al. quote $500/year as a raw figure, obtained by dividing total system costs by the usable storage within them. At current rates of $1 = €0.67, that gives us a cost of €261/year or €335/year. SDSC are likely to be at the cheap end of the scale. ULCC’s costs, given our lower total volumes, would be closer to €1500/year for a similar service (dual archival tape copies on separate sites), although that does include retrieval costs. Amazon’s AWS would be about €100/year for a single copy. You would want two copies, so it’s twice that, and the cost of transferring the data in would be about 25% more than the storage cost. Since I haven’t factored in ingest costs for any of the other models, I’ll ignore it for AWS as well. (And yes, AWS isn’t a repository, and there’s no metadata, and… This is a back-of-the-envelope calculation. It’s a small envelope.)
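The dollar-to-euro conversions are simple enough to check mechanically. A sketch, taking the $390 and $500 quotes and the $1 = €0.67 rate as given:

```python
# Converting the quoted annual per-terabyte storage prices to euros.
usd_to_eur = 0.67
quotes_usd = {"SDSC tape (dual copy)": 390, "Moore et al. raw": 500}
quotes_eur = {name: usd * usd_to_eur for name, usd in quotes_usd.items()}
for name, eur in quotes_eur.items():
    print(f"{name}: €{eur:.0f}/year")
```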

Which means, at a very rough level and ignoring many pertinent factors, that after about two years of storage in the repository we would have been better off recalculating the data rather than storing it. There are a lot of assumptions hidden there, however. For one, we’re assuming that this data will rarely, if ever, be required. If many people want it, the recalculation cost rapidly becomes prohibitive (and so does the 131 days they have to wait for their request to be satisfied!)
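Putting the two sides together makes the crossover visible. A sketch, using the €410 recomputation figure and a few of the annual storage costs quoted above (the AWS line doubles the single-copy price and, as above, ignores ingest):

```python
# Break-even point: years of storage that would pay for one recomputation.
recompute_eur = 410.0
storage_eur_per_year = {
    "SDSC archival tape": 261,    # dual copies
    "Moore et al. raw": 335,
    "AWS, two copies": 200,       # 2 x ~€100/year, transfer-in ignored
}
for name, yearly in storage_eur_per_year.items():
    print(f"{name}: break-even after {recompute_eur / yearly:.1f} years")
```

Depending on which storage quote you prefer, the crossover falls somewhere between roughly 1.2 and 2 years – consistent, at the hand-waving level, with “about two years”.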

One of the other problems is more subtle. I said that, in the short term, recalculation costs are likely to fall as computational power becomes cheaper. The energy costs involved will rise, of course, but there’s still a significant downward trend. After a sufficient period of time, however, it becomes non-trivial to reconstruct the software and the environment it needs in order for the computation to happen. Imagine trying to recalculate something now where the original software is a PL/I program designed to run under OS/360. It’s not impossible by any means, but the cost involved and the expertise required are non-trivial. At least with our example we won’t have any doubts about whether the right answer has been produced – the computation of π gives an exact, if never-ending, answer. Most scientific software doesn’t do this, and the exact answers produced can depend on the compiler, the floating-point hardware, the mathematical libraries and the operating system. Over time it becomes harder and harder to recreate these faithfully, and we often have no means of checking whether or not we have succeeded. (Keeping the original outputs would help with this, of course, but that’s exactly what we’re trying to avoid.) That’s part of the problem that Brian Matthews and his colleagues examine in the SigSoft project, and there’s still a great deal of work to be done there.

So have we answered Les’s question? My feeling is that in this case we have – there’s a fair amount of evidence to suggest that keeping this particular data set isn’t cost-effective. In general, though, the question is far harder to answer. Yet we must strive for more general answers, as the cost of not doing so is far from trivial. Even if money did grow on trees, it still wouldn’t be free, and at present we need to be very careful how we use it.

DCC 101 at the NeSC

I’m writing this whilst listening to Michael Day of UKOLN talk about preservation as one component of the data curation lifecycle espoused by the DCC. He is one of a number of people invited by the DCC to give lectures and create exercises as part of the pilot of DCC 101, a course aimed at researchers and the data curators who work with them.

[Image: NeSC – the view from reception]

I’m here because I was invited to speak this morning about Ingest, the previous stage in the lifecycle. That was followed by a stimulating set of exercises developed and led by Suzanne Embury, which got attendees thinking about data quality and the varied measures we might use for it. Suzanne’s exercise was based on work by Wang and Strong (“Beyond accuracy: what data quality means for data consumers”), which starts from the premise that we can never directly measure data quality – rather, we must construct measures which are proxies for quality, or which measure only one facet of it. This work was new to me, but it was reassuring to see
