Monday 3 December 2012

IMGAME fieldwork

This semester has seen an impressive amount of IMGAME fieldwork and, as usual, our Local Organisers have proved to be thoroughly excellent. Since the end of October we have run Imitation Games on religion in Cardiff, Trondheim and Helsinki, with a trip to Rotterdam scheduled for the week beginning 10 December.

This is an important period for us in two ways. First, the results will provide a crucial test of the method and the underlying theory. The kinds of comparisons we are going to make (and the hypotheses we are trying to test) can be summarised as follows:

  • Cardiff fieldwork: this is a repeat of the fieldwork we ran in March 2012, so the results should be very similar (i.e. it is a kind of test-retest reliability measure). By using the same questions at Step Two, but collecting new Pretender answers and then having them judged by new judges, we should get a measure of how stable the Identification Ratio (IR) is over time. We have also tried filtering the questions (i.e. selecting the best 50% of questions based on how they were interpreted by Step One judges in March) to see whether this makes a difference to the IR estimate.
  • Scandinavian fieldwork: the intention is to compare the ability of non-Christians to pretend to be active Christians with the success of non-Christians in Poland and Sicily. We have already recorded IRs of close to zero in the predominantly Catholic countries, so we are now hoping to measure a significant, positive IR in Norway and Finland.
  • Rotterdam fieldwork: This is broadly similar to the Scandinavian fieldwork and also tests the ability of non-Christians in a broadly secular country to pretend to be active Christians. Overall, we expect the IRs in Norway (Trondheim), Finland (Helsinki), Netherlands (Rotterdam) and Wales (Cardiff) to be roughly similar to each other and for there to be a statistically significant difference between these four IRs and those measured in Poland (Wroclaw) and Sicily (Palermo).
If the data supports the hypothesis, then we will have made a big step forward in developing the method.
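For readers unfamiliar with the statistic, the comparisons above can be sketched in code. The definition below is a hedged reconstruction rather than the project's actual analysis code: we treat the Identification Ratio as (correct minus incorrect identifications) divided by the total number of verdicts, so it runs from -1 (judges always fooled) through 0 (chance) to +1 (judges always correct). All counts are hypothetical illustrations, not fieldwork data.

```python
# Sketch of an Identification Ratio (IR) calculation, assuming
# IR = (right - wrong) / total verdicts. This is an illustrative
# reconstruction, not the IMGAME project's own scoring code.

def identification_ratio(right: int, wrong: int, unsure: int = 0) -> float:
    """IR = (right - wrong) / total, counting 'unsure' verdicts in the total."""
    total = right + wrong + unsure
    if total == 0:
        raise ValueError("no verdicts to score")
    return (right - wrong) / total

# Hypothetical counts only: a near-zero IR of the kind reported in the
# predominantly Catholic countries, versus a clearly positive IR of the
# kind hypothesised for the more secular countries.
ir_catholic_country = identification_ratio(right=30, wrong=28, unsure=2)
ir_secular_country = identification_ratio(right=40, wrong=15, unsure=5)
```

Comparing the four secular-country IRs against the Polish and Sicilian ones would then be a standard test of difference between proportions, applied to verdict counts like these.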

The second reason this semester's fieldwork is important is that we are using it to test and develop the new software being developed by redweb. This is essential to the long-term viability of the method, as it will make it much easier for new researchers to run Imitation Games: many of the tasks we currently perform manually will be automated. This should make the whole fieldwork experience much quicker and also eliminate some of the data-handling errors that inevitably creep in as files are cut, copied and pasted by hand.

We will post provisional results in early 2013, assuming the Judges send their files back to us in time. In the meantime, I must end by, once again, thanking the Local Organisers for their hard work, resourcefulness and generally excellent choice of restaurants!

New Publication

The second Hawk-Eye paper is now available in hard copy. The full reference is:


Collins, H.M. and Evans, R.J. (2012) ‘Sport-Decision Aids and the “CSI Effect”: Why cricket uses Hawk-Eye well and tennis uses it badly’, Public Understanding of Science, Vol. 21, No. 8 (November 2012), pp. 904–921.

It was first published via Online First on July 29, 2011 as doi: 10.1177/0963662511407991.

Wednesday 15 August 2012

Harry Collins elected Fellow of the British Academy

Professor Harry Collins has been made a Fellow of the British Academy for his role in establishing the sociological study of science. More details at: http://t.co/SmCOGyAy

Thursday 21 June 2012

Imitation Game article in “The Atlantic”


The Atlantic – a premier US magazine for high-end journalism in areas like technology and society – has just published an article on the Imitation Game. http://bit.ly/LkAHCO

The article describes the Imitation Game’s links to the Turing Test, the importance of interactional expertise and the development of the IMGAME project.

Please read it and tell your friends and colleagues!

Wednesday 13 June 2012

SEESHOP 6 and LO Workshop (Cardiff, 7-10 June 2012)


Last weekend saw the 6th annual SEESHOP conference and the 2nd ever Local Organisers (LO) workshop for the IMGAME project.

The LO workshop was held on 7 June and brought together the postgrads and postdocs who help us run the IMGAME project by booking rooms, recruiting participants and generally making sure that we are able to collect the data we need. The aim of the LO workshop was to discuss developments to the method, share hints and tips on managing the fieldwork, and review some of the preliminary results from the first year.

After the LO workshop, the sixth SEESHOP got underway on Friday 8 June. The conference was a mixture of new participants and old-timers from previous years. The quality of the papers presented was uniformly high and we had lots of really good discussions about expertise and its use in settings ranging from management consultancy, industrial training and the courtroom to the natural and neurological sciences. We also had some lively debates about the relationship between expertise and democracy and on whether or not ‘nudge’ theory could (or should) be used to reduce the chance of maverick or marginal science influencing lay decision-making.
 
The programme is not available on-line at the time of writing but a list of speakers and titles is given below:

Friday 8 June

Paper Session 1: Developing Expertise

Julie Williamson – Experts among us: interactional expertise among management consultants
Simon Williams – Visualization of experiments
Paulo Marques and Rodrigo Ribeiro – Training for development of tacit knowledge

Saturday 9 June

Workshop: DemocraSEE

Alain Bovet – What can SEE bring to the pragmatist approach to the political public
Jean Goodwin – Adopting Walter Lippmann as an ancestor of 3rd Wave
Rob Evans – Habermas, Rawls & Democracy

Paper Session 2: Interactional Expertise

Kathryn Plaisance – Interactional expertise and philosophy of science
Eric Kennedy – A pluralistic approach to interactional expertise

Sunday 10 June

Paper session 3: Expertise Ignored

Robert Jomisko – Reforming science policy reform
Luis Reyes Galindo – Bogus molecular detectors

Paper Session 4: Experts about experts

David Caudill – Reliability standards in law and SEE
Theresa Schilhab – The anatomy of interactional expertise

Paper Session 5: Managing Expertise

Sally Jackson – Design Requirements for Safely Deferring to the Experts
Evan Selinger and Kyle Whyte – Nudging expertise

Friday 1 June 2012

Experts and Consensus (Bayreuth, 25-26 May 2012)

Another week, another conference! It was the last foreign trip for a while, but a good one to finish with.

The conference was a small workshop organised by Carlo Martini on the topic of Experts and Consensus in the Social Sciences. The meeting was held at Bayreuth University and consisted of 12 papers from a mixture of philosophers, sociologists and economists, in which both the costs and benefits of consensus and different ways of reaching it were explored. The common themes that emerged across the presentations were the importance of institutions in hosting expert interactions, the value of pluralism (and hence of not reaching consensus), and the difficulty of devising procedural rules for combining different judgements from what appear to be ‘epistemic peers’.

The full programme, with abstracts, is available on-line. For what it is worth, the presentations that related most directly to my own interests and thus prompted me to re-consider my own views were as follows:

  • Maria Jiminez Buedo spoke about the difficulty of recognising legitimate expertise in times of controversy and began with the example of Alessio Rastani, who was interviewed by the BBC and prompted a barrage of complaints and speculation about whether he was a real expert or not. The interview is here: watch it and make your own mind up! The BBC news article about the controversy is here. The more general issue is, of course, that even if you accept a realist theory of expertise, the problem of how to recognise these experts is only raised, not resolved.
  • Merel Lefevere presented a paper that responded to Heather Douglas’s claim that scientists (and presumably other experts) have a moral duty to contextualise their advice by making reference to limits and problems that can be reasonably foreseen. The difficulty is figuring out what can be reasonably foreseen and Lefevere and her co-author (Eric Schliesser) argued that a more pluralistic approach to expertise, particularly by those responsible for aggregating expert opinion or advice, was needed. I must confess that I think the argument just shifts the problem from defining what can be ‘reasonably’ foreseen to what counts as ‘reasonable’ pluralism (i.e. it doesn’t solve the Problem of Extension) though I am sympathetic to the general idea that advice should reflect the diversity of views available within the relevant expert community.
  • Frank den Butter, whom I first met 15 years ago at the 10th Anniversary Congress of the Tinbergen Institute, presented a paper that drew on his experiences of working with policy-makers in the Netherlands. The paper stressed the need for ‘compromise’, rather than ‘consensus’, and argued that this was a critical factor in reaching agreement about policy options. He illustrated the argument with examples of four consultation exercises based on ‘matching zones’ with stakeholders and argued that seeing these events as a ‘game of trust’ – in which repeated face-to-face contact plays an important role – was crucial to their eventual success. In many ways, these stakeholder exercises could be seen as ways of institutionalising the ‘political phase’ of technological decision-making in the public domain.
  • Carlo Martini focused on the technical phase, rather than the political phase, in a talk that compared the ways expert groups are selected based on the nature of the task they have been set. In doing so he emphasised a theme that emerged several times over the two days, namely the importance of institutional design to the problem of expertise. In this case, the two contrasting examples were the Bank of England’s Monetary Policy Committee and the US Boskin Commission on the CPI.
  • Laszlo Kosolosky also raised issues that relate to the distinction between the technical and political phases. In arguing that consensus might be over-rated, in the sense that disagreements and their rationales might also be important, he distinguished between an academic consensus and an interface consensus. This struck me as particularly interesting as it provided a way of clarifying something about the SEE approach to expertise, namely that there is an expert community (scientific or otherwise) in a particular domain that might have reached a consensus about some topic. In Kosolosky’s terms this would be an academic consensus, though in old-fashioned SSK we might call this a core-group. In contrast, the interface consensus relates to the collective views of a diverse group of experts, each perhaps representing different core-groups, who act as advisers (i.e. at the interface between expert groups and policy-making). As such pluralistic groups are less likely to reach consensus, especially if the concept is taken to imply a greater degree of buy-in than simple agreement, Kosolosky argues that what is important is not so much the outcome (consensus or not) but an agreement on the procedures to be followed when managing interactions at the interface. In other words, it is the institutional arrangements that are crucial rather than the achievement of consensus itself.

Of course, this is not to say that the other presentations were not interesting. They were, but the ones listed above are the ones that raised questions relating directly to the Studies in Expertise and Experience perspective.

Tuesday 22 May 2012

RCN Network Meeting, Boston


On Friday 18 and Saturday 19 May, I was in Boston for the first meeting of the Research Co-ordination Network (RCN) on Sustainable Energy Systems, which is funded by the NSF and led by Tom Seager. The RCN includes a series of internships for sustainability students in settings outside their home discipline and will use a version of the Imitation Game method to explore the extent to which being immersed in a different culture, if only for a few months, allows students to develop meaningful levels of interactional expertise.

Martin Weinel and I were there in order to report on the latest developments at Cardiff and to find out more about what the RCN version of the Imitation Game will look like. This was the main focus of the Friday, where we met the other participants in the Network and described our research interests to each other. In relation to the Imitation Game, we learnt that there are a couple of significant innovations being developed by the RCN, namely that:

  1. The RCN version of the Imitation Game will use a Judge plus three other participants: a contributory expert from the target domain (a ‘positive’ control), a student or similar with no immersion in the target domain (a ‘negative’ control) and the student who has had the internship. The hypothesis is that, at the end of the internship, the judge should be able to order the participants and locate the intern’s level of expertise as being between that of the other two participants.
  2. The numbers will be relatively small, so statistical analysis of the kind we are doing at IMGAME will be inappropriate. Instead, the success of the internship programme will be measured for individual students by their performance in the Imitation Game. In addition, by aggregating the individual results, the proportion of interns who achieve this intermediate level of interactional expertise can be calculated, giving a measure of success for the programme as a whole.
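The two-part design described above can be sketched as a small scoring routine. Everything here – the participant labels, the rankings and the pass criterion – is a hypothetical illustration of the logic, not the RCN's actual analysis code.

```python
# Sketch of RCN-style Imitation Game scoring: the intern "passes" a game if
# the judge ranks them between the contributory expert (positive control)
# and the non-immersed student (negative control). Labels are hypothetical.

def intern_in_between(ranking: list[str]) -> bool:
    """True if 'intern' occupies the middle position of a three-way ranking."""
    return ranking.index("intern") == 1

def programme_success_rate(rankings: list[list[str]]) -> float:
    """Proportion of games in which the intern achieved the intermediate level."""
    passes = sum(intern_in_between(r) for r in rankings)
    return passes / len(rankings)

# Three hypothetical judges' orderings, most expert first:
rankings = [
    ["expert", "intern", "novice"],
    ["expert", "intern", "novice"],
    ["intern", "expert", "novice"],
]
# programme_success_rate(rankings) gives 2/3 for these illustrative data
```

With small numbers of interns, reporting this proportion directly (rather than a significance test) matches the point made in item 2 above.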

The official RCN meeting on Friday was then followed on Saturday 19 May by a workshop in which Tom Seager, Evan Selinger and others described how they are using role-play games to teach ethics in relation to sustainability. This is of particular interest to the Imitation Game project as, like us, they are taking role-play games that could be (and have been?) played in more traditional pencil-and-paper ways and turning them into electronic formats that can be adapted for a wide range of classroom contexts. The whole set-up was very impressive and provided plenty of food for thought about how we might develop the Imitation Game and (in my other life) how we might teach research methods differently.

Wednesday 16 May 2012

IMGAME: Palermo fieldwork


Last week’s fieldwork trip to Palermo was a testing time in more ways than one but overall a definite success.

Network and/or software problems meant we struggled to complete the first day of data collection but sterling work by Marika and Leonardo meant that we managed to run three rounds of ‘Step One’ instead of two and at least give ourselves a chance of making up the lost ground. Big thanks must also go to Martin ‘Albert’ Hall for getting up at 4:30 am to reconstruct the databases in time for us to carry on with ‘Step Two’ as planned. After that, things went much more smoothly: recruitment picked up and we managed to collect all the remaining data as planned. In the end, we even finished slightly ahead of schedule.

Palermo also saw a new variation of the IMGAME method, in which Pretenders at Step Two answered two sets of questions rather than just one. This should give a measure of ‘Pretender noise’ and will allow us to explore the reliability of IMGAME data in a different way. We will, of course, also compare the results from Palermo, in which non-Christians pretended to be Catholics, with the results from Budapest and Cardiff, where we researched the same topics.
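One hedged way of quantifying the ‘Pretender noise’ idea is to score the two answer sets separately for each Pretender and treat the spread of the paired differences as a reliability measure. The function below is an illustrative sketch under that assumption; the IR values are invented, not Palermo data.

```python
# Illustrative 'Pretender noise' measure: mean absolute difference between
# the IRs a Pretender earns on their two answer sets. The scoring rule is
# an assumption for illustration, not the project's actual analysis.

def pretender_noise(irs_set_a: list[float], irs_set_b: list[float]) -> float:
    """Mean absolute difference between paired per-Pretender IRs."""
    if len(irs_set_a) != len(irs_set_b) or not irs_set_a:
        raise ValueError("need paired, non-empty IR lists")
    diffs = (abs(a - b) for a, b in zip(irs_set_a, irs_set_b))
    return sum(diffs) / len(irs_set_a)

# Hypothetical paired IRs for four Pretenders: a small value suggests the
# two answer sets behave similarly, i.e. low Pretender noise.
noise = pretender_noise([0.2, -0.1, 0.0, 0.4], [0.1, -0.2, 0.1, 0.3])
```

A low value on a measure like this would support treating a single answer set per Pretender as reliable.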

As ever, I must end by thanking our Local Organiser Marika for all her hard work. In this case, she had the unenviable task of finding atheists in Italy, dealing with computer problems on the first day and looking after Harry, Martin and me for a whole week. She did all this superbly and will be delighted to know that we fully intend to come back next year!