There’s been quite a bit of online discussion around the writing and marking of proposals in JISC’s recent 12/08 call, including discussion of how Twitter can help you prepare a bid and how it was used (and perhaps abused) during the marking process. Andy Powell has vented his frustration on some aspects of the process (and people who can’t stay within the page limits!). (Updated to add: I also intended to mention Lorna Campbell’s post, written earlier in the process before marking had begun – lots of good advice there about writing a proposal.)

The marking process isn’t a secret – it’s set out on the JISC website, along with some concise guidance on what makes a good bid and examples of past winning bids. This advice is reinforced at the town meetings that accompany large funding rounds, so none of us has any excuse for not knowing what to do. Yet we continue to see bids that don’t provide the information requested, or fail to demonstrate how they meet the requirements of the call. (I will readily admit that I’ve been guilty of writing bids like this myself.) More openness about the process can’t hurt, although it may not help. So I’ll say a little about the way we (or rather I) mark, and then speculate a bit about things we might want to do to improve it. If you’ve marked JISC bids yourself, you’ll probably want to skip the next bit and go straight to the idle thoughts.
Markers – at least those outside the JISC Executive – get anywhere between 1 and 10 bids to look at, and something like 2 weeks in which to do the marking. As well as the bids, they’ll get some guidance for markers and the original text of the call. They’ll also see a log of all bids received, which, as well as assigning each bid a unique ID, tells them how much each strand is over-subscribed (that is, how much is being asked for against how much is available). And they’ll have access to a closed email list to clarify issues with other markers and the Executive. In my experience these lists are not much used these days, although they can prove useful for clarifying ambiguities in the call or the marking criteria. Some are probably using Twitter for this now, which is unfortunate as it means that not all the markers will be aware of the discussions.
At an early stage, markers are meant to double-check that they don’t have any conflicts of interest in the bids assigned to them. A bid in which your own institution is a partner definitely constitutes such a conflict, and the Executive usually spot these in advance. But they arise for other reasons, and every marking episode I’ve been involved with has had at least one person realising that they have a conflict, often very late in the process.
Then we begin the process of reading, evaluating, and assigning marks. I suspect everyone has a different approach to this, but the outputs are the same. We score each criterion on a 5-point scale that’s really a 10-point scale (because it allows half marks). The criteria include such things as “Appropriateness to the call” and “Value”, and the lowest rating implies that the bid fails that criterion in a way which just can’t be fixed, whereas the highest implies that it significantly exceeds expectations in a number of ways. Markers also comment on each criterion to explain the thinking behind their marking. These comments will form the bulk of the feedback that you will receive if you ask for it.
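As a concrete (and entirely hypothetical) illustration, the scheme above could be modelled as a half-mark score plus a comment per criterion. The criterion names and the validation helper below are my own assumptions for the sketch, not JISC’s actual form fields:

```python
# Hypothetical model of per-criterion marking: scores run from 1 to 5
# in half-mark steps, each accompanied by an explanatory comment.
# Criterion names here are illustrative, not JISC's actual list.

def valid_score(score: float) -> bool:
    """True if score lies in [1, 5] and falls on a whole or half mark."""
    return 1.0 <= score <= 5.0 and (score * 2) == int(score * 2)

marks = {
    "Appropriateness to the call": (4.5, "Directly addresses the strand."),
    "Value for money": (3.0, "Budget plausible, but staffing looks thin."),
}

for criterion, (score, comment) in marks.items():
    assert valid_score(score), f"{score} is not a half-mark between 1 and 5"
    print(f"{criterion}: {score} -- {comment}")
```

The half-mark rule is what makes the nominal 5-point scale behave like a 10-point one in practice.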
It’s worth noting that you can (and should) ask for this feedback even if your bid is successful. Sometimes your programme manager will offer it to you unasked. Few bids are perfect, and most bids contain something of value. The feedback can tell you what you need to improve but it will usually also tell you what you did well. Both are good to know!
As well as the criteria-based marking, we are also asked to rate the bid overall as A, B or C. As the picture above shows, this means that we strongly recommend funding, weakly recommend it, or do not recommend funding. These marks are the most significant ones once all the bids are considered at the evaluation stage.
The marking process is now all done via the web, although some find this frustratingly slow if they have to copy their marking information from some other source to the web forms.
The Evaluation Panel
The evaluation process usually involves a face-to-face meeting of all the markers, or just those from the Executive. The exact conduct of the meeting will depend on the size and complexity of the call, the number of bids and the number of projects to be funded. Every bid will have been marked by at least 3 people. Typically, one that scores AAA will be approved without further discussion, and one that scores CCC is likely to be rejected without significant discussion, although the panel will make sure that there’s sufficient information in the comments to provide feedback for the CCC bid.
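The triage just described can be sketched in a few lines. This is only my reading of the process, with each bid’s three overall grades combined into a string like “AAC”:

```python
# Sketch of the first-pass triage at the evaluation panel (my reading,
# not an official JISC procedure): unanimous A means approve, unanimous
# C means reject, and anything in between goes to discussion.

def triage(marks: str) -> str:
    """marks: the three markers' overall grades, e.g. 'AAC' or 'CAA'."""
    s = "".join(sorted(marks))  # order of markers shouldn't matter
    if s == "AAA":
        return "approve"
    if s == "CCC":
        return "reject"
    return "discuss"
```

Everything that lands in the “discuss” bucket is where the panel’s real work happens, as the next paragraph describes.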
What happens next depends to a great extent on the degree of competition, the quality of the proposals and the way the evaluation panel chair chooses to work. If there are many more proposals than can be funded, it’s not unusual to try to pick off further outliers – ones that stand out from the rest as being particularly strong or weak – before examining the rest in detail. The marks in each criterion will often come into play here, either to choose between two bids with equal recommendations, or to compare (say) an ABC with a BBB. Markers may be asked to justify or clarify their comments, and opinions do change as the result of discussion at this stage. The Executive will also want to bring other considerations into the process – either to ensure a range of different types of project are funded, or to ensure that funding goes to a range of institutions. Similarly, bids are sometimes approved subject to (agreed) change. Scores of 3 or below for any criterion imply that the marker sees problems that can, and should, be corrected before funding is approved. If one institution receives funding for a number of related projects, they may be asked to look for economies of scale between them.
When there’s little or even no competition (the number of bids being less than or equal to the number of projects desired), the evaluation will have a different focus. It will still be necessary to eliminate projects that are too weak to receive funding, even if that means that some funding will be unspent. (Sometimes that funding can be reallocated to another stream where it can be better spent.) For those that can be funded, any concerns that the markers had need to be turned into guidance for the programme managers or the projects themselves. The budget may need to be made clearer, the dissemination plan improved, or the project may need to take account of the work of a related project, for instance.
So that’s the process, at least the bit of it I see. What might change, and what else might we want to know? One area which interests me is inter-marker variance, which can take two forms. Some markers are harsher than others and tend to assign lower marks; others are generous. If the markers all agree about the relative ranking of the bids then it’s possible to correct for the variations in absolute scores, but at present this isn’t done. I did a brief and very unscientific experiment with bids marked by RPAG some years ago which showed significant inter-marker variability of this sort, although on that occasion I don’t think it had a significant impact on which bids were funded. There’s also the more interesting variance where the markers disagree about the relative merits of bids. We see quite a bit of this – rankings like ABC, AAC or ACC do appear, and evaluation panels will usually devote more time to understanding why such variation occurs. One thing we know nothing about is intra-marker variation – whether the same marker, given the same bid, will mark it the same way twice. In some fields, such as radiography, studies have shown significant variations of this type as well as inter-observer variation. This has led to pressure in some areas for increased use of machine assessment for X-rays, since it’s repeatable even if it’s wrong.
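If anyone did want to correct for harsh and generous markers while preserving their relative rankings, the standard statistical trick would be to standardise each marker’s scores (convert them to z-scores) so that bids are compared against each marker’s own mean. A minimal sketch, with entirely invented data:

```python
# Correcting for harsh vs. generous markers by standardising each
# marker's scores. The marker names and scores below are invented
# purely to illustrate the technique; JISC does not currently do this.

from statistics import mean, pstdev

raw = {
    "harsh_marker":    {"bid1": 2.0, "bid2": 3.0, "bid3": 2.5},
    "generous_marker": {"bid1": 4.0, "bid2": 5.0, "bid3": 4.5},
}

def normalise(scores):
    """Map raw scores to z-scores: (score - marker's mean) / marker's sd."""
    mu, sigma = mean(scores.values()), pstdev(scores.values())
    return {bid: (s - mu) / sigma for bid, s in scores.items()}

normalised = {marker: normalise(scores) for marker, scores in raw.items()}
# After normalisation both markers give bid2 the same (top) relative score,
# even though their raw marks differed by two whole points.
```

This only corrects for the first kind of variance (systematic harshness or generosity); it does nothing for genuine disagreement about relative merit, which is the kind the panel spends its time on.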
There’s some interesting research that could be done on some of these areas, although I suspect it will be some time before we see automated marking of bids!
There’s always scope for using rules to improve consistency between markers. Andy Powell was looking for guidance on what to do with bids that are over the page limit, for instance. I think JISC have got clearer about this over time, but I’m wary of being over-prescriptive. It could be left to the marker’s discretion, as it is now. At the other extreme, such bids could be rejected before a marker ever sees them. Or they could be truncated at the page limit, so that the marking was done only on the material within the limit. (For some bids, the material lost would not be significant – for others it would be crucial.)
And although the web-based process is a great improvement on its predecessor in many ways, it isn’t ideal if you aren’t always online. Something that allowed offline completion and online submission would be welcomed by some. Some parts of JISC are also experimenting with web-based bid submission. I’ve not had direct experience of this, but it would be fascinating to hear from those who have.
I’m also interested to hear about perceptions of the process from the authors of bids, or from those who have considered writing bids but decided, for whatever reason, not to. What could be better? What’s already good and shouldn’t change? What barriers to bidding do people perceive? Could JISC commission work to improve the bidding process, the evaluation process, or both?