Sunday, October 12, 2014

Box office success from questionable science

 
It was inevitable!

It had to happen!

Nature reports that a film called The Whistleblower has been made, based on the Woo Suk Hwang scandal in South Korea concerning the creation of embryonic stem cells by cloning. The film stars top actors.

The disputed papers were published in a famous scientific journal, Science, and subsequently retracted.

The film apparently paints a sympathetic portrait of Hwang as a man with human frailties, like the rest of us.

The real whistleblower would seem not to have been very pleased with this film because, according to the report, “his own contributions and those of online bloggers were credited to the reporter” (in the film).

The Nature report draws attention to the fact that Nature itself was the first to report that Hwang had procured the eggs for his experiments unethically.

Of course, there is another film in the (potential) making, this time about a paper published in another famous scientific journal, one which had even more tragic consequences.

I wonder when such a film will be made.

Thursday, October 9, 2014

A non-binding resolution to the binding problem?


Contributed jointly by Dragan Rangelov and myself

The binding problem is a specific example of a more general problem in brain studies, namely that of integration: how the many specialized areas of the brain interact to provide the integration that is evident in our perceptions, thoughts and actions.

Binding has come to refer more to this problem within the confines of the visual brain. Here the binding problem becomes the problem of how the several parallel processing systems in the brain interact to give us our unitary perception of the visual world, in which different attributes such as form, colour and motion are seen in precise spatial and temporal registration.

The initial mistake is to suppose that we do see these attributes at precisely the same time. In fact, psychophysical experiments show that this is not true and that we see and become aware of some visual attributes, such as colour, before we see and become aware of others, such as motion.

This raises a question which has so far remained unaddressed, namely whether there is some central station in the brain that “waits” for all the processing systems to complete their tasks before “binding” the results of their operations. There clearly is no such system because, over very brief time windows, we bind the colour that we see at time x to the motion that we had perceived 80 ms before. We therefore misbind in terms of the objective reality.
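To put a rough number on this misbinding, here is a toy simulation (a sketch in Python, not a description of any actual experiment): colour and motion direction are made to switch together every 300 ms – an arbitrary period chosen purely for illustration – while motion is assumed to reach awareness some 80 ms later than colour, as described above.

```python
import numpy as np

# Toy simulation of misbinding caused by a difference in perceptual latency.
# Assumptions for illustration only: colour and motion switch state together
# every 300 ms (an arbitrary period), and motion reaches awareness roughly
# 80 ms later than colour.
dt = 1                          # time step in ms, so array indices are milliseconds
t = np.arange(0, 3000, dt)      # 3 s of stimulation
period = 300                    # ms per state

colour = (t // period) % 2      # 0 = red, 1 = green (physical stimulus)
motion = (t // period) % 2      # 0 = up,  1 = down (switches in step with colour)

lag = 80                        # ms by which motion awareness lags colour awareness

# At any perceptual moment, the colour now being experienced is paired with the
# motion that was presented 'lag' ms earlier.
perceived_colour = colour[lag:]
perceived_motion = motion[:-lag]
misbound = np.mean(perceived_colour != perceived_motion)
print(f"proportion of time colour is paired with the 'wrong' motion: {misbound:.2f}")
# ~0.27 here, even though colour and motion are physically in perfect register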

We discussed this issue some two years ago while at a meeting and thought that we should conduct some more experiments on this problem. Our approach was as follows: we presented subjects with lines of different orientation that could be in a number of colours. If colour is bound to orientation at perceptual or pre-perceptual stages, then the accuracy of reporting one attribute, say colour, should co-vary with the accuracy of reporting the other attribute (orientation), when the two are presented to subjects over very brief time windows.

If, however, the two attributes are not bound at the pre-perceptual or perceptual stage, then the accuracy of reporting one attribute (colour) should vary independently of the accuracy of reporting the other (orientation).
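For those who like to see the logic in numbers, here is a minimal sketch (in Python, using invented accuracies rather than our data, and a simple phi coefficient rather than our published analysis) of the two predictions: if the attributes were bound before or at perception, correct and incorrect reports of colour and orientation should go together from trial to trial; if not, their trial-by-trial correlation should hover around zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 5000

# Prediction 1: colour and orientation ARE bound at (pre-)perceptual stages --
# a common "good trial" factor drives both reports, so correct colour reports
# tend to accompany correct orientation reports. (All accuracies are invented.)
good = rng.random(n_trials) < 0.7
colour_ok_bound = np.where(good, rng.random(n_trials) < 0.95, rng.random(n_trials) < 0.5)
orient_ok_bound = np.where(good, rng.random(n_trials) < 0.90, rng.random(n_trials) < 0.4)
phi_bound = np.corrcoef(colour_ok_bound, orient_ok_bound)[0, 1]

# Prediction 2: the attributes are NOT bound -- the two outcomes are independent,
# with colour simply reported more accurately than orientation.
colour_ok = rng.random(n_trials) < 0.85
orient_ok = rng.random(n_trials) < 0.75
phi_unbound = np.corrcoef(colour_ok, orient_ok)[0, 1]

print(f"phi, bound scenario:                 {phi_bound:+.3f}")    # clearly positive
print(f"phi, unbound (independent) scenario: {phi_unbound:+.3f}")  # hovers around zero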

Our results, just published, showed that the accuracy of reporting the two attributes is independent, with the accuracy of reporting colour always greater than the accuracy of reporting orientation, probably reflecting the fact that colour is perceived before the orientation of lines by about 40 ms.

This suggests that these two attributes, at least, are not bound at either pre-perceptual or perceptual stages.

This result leads us to conclude that binding does not occur through physiological interaction between cells in the visual areas, but rather at post-perceptual stages, perhaps through the intervention of memory. We experience attributes as bound, even though they are not bound physiologically, only because they occur within the same, very brief memory time window.

Our results may provide, we think, an interesting resolution to the binding problem, namely that there is no such problem to resolve at the perceptual level.

If binding occurs post-perceptually, then the search for how binding occurs shifts to a different arena.

Time will tell whether we are correct in our interpretation. 

We may of course be wrong, but we hope that our new view provides the ground for interesting new experiments and debates.

Thursday, September 4, 2014

(Literally) blind experts


An American friend drew my attention recently to a paper published last year entitled The Invisible Gorilla Strikes Again: Sustained Inattentional Blindness in Expert Observers. It is the report of a study in which experts failed to detect an unexpected occurrence (a gorilla) in their area of expertise, even when viewing it directly.

This phenomenon, recognized for some time and, to my knowledge, first described in a paper published in 1999 by Simons and Chabris, is known as inattentional blindness. In the past, it has been demonstrated with naïve subjects in unfamiliar tasks. The authors of the above study asked: does inattentional blindness also occur frequently among experts?

A very interesting and highly relevant question!

To study this, they asked 24 radiologists (hence experts) to screen CT (Computed Tomography) scans of lungs for nodules. The radiologists ranged in age from 28 to 70 years; hence some must have had very considerable experience. Their eye movements were tracked as they viewed the scans. But embedded in the scans was a gorilla which was some 48 times the size of the average lung nodule that the radiologists were searching for; moreover, it was positioned close to a nodule.

Twenty of the 24 experts did not report seeing the gorilla, and eye tracking revealed that 12 of those 20 had looked directly at where the gorilla was located.

The authors conclude that, “This is a clear illustration that radiologists, though they are expert searchers, are not immune to the effects of IB [inattentional blindness] even when searching medical images within their domain of expertise”. They add, “Presumably, they would have done much better at detecting the gorilla had they been told to be prepared for such a target…perhaps a smaller gorilla would have been more frequently detected because it would have been more closely matched the size of the lung modules” [sic].

A sobering thought!

I imagine that, on the whole, radiologists do end up detecting the nodules, even if they are apparently often not able to detect something that is blindingly obvious (if I may use the phrase in this context).

But just think of experts in other domains, say economics, and above all of expert politicians of all stripes and in all countries. Thinking about them, one cannot help but suppose that they, too, must suffer from inattentional blindness, but this time of a more cognitive variety.

In fact, I am increasingly inclined to believe that many of today’s problems are due to the inattentional blindness of politicians to the continual, rapid and huge changes occurring (and here the size of the gorilla compared to the lung nodules comes to mind). I am increasingly led to believe that politicians just do not have their ears to the ground and, in many instances, are – because of this inattentional blindness – way behind public opinion on a great many issues. Hence they are not up-to-date experts.

Perhaps this puts the inattentional blindness of radiologists to huge gorillas embedded in the scans they are examining in perspective. 

Saturday, July 26, 2014

Tracey Emin's "My Bed" sells for £2.5 million.


This may come as a surprise to those who know about my distaste for much that passes for “contemporary art”. Many would include Tracey Emin’s My Bed in that category. But Emin’s creation is something that I actually rather like. It is far better than much in contemporary art. I do not think that it is beautiful and would not want to have it in my house. But it is something that I would seriously consider having in my art gallery, if I had been fortunate enough to have one. It is certainly far better than the creations of a certain gentleman whose works fetch equivalent, if not much higher, prices.

Apparently created when she was in a depression, it shows the bed in the state it was in when she had not got out of it for several days; lying on the floor next to the bed is a variety of objects – condoms, cigarettes, knickers and so on.

Why would such a creation be of the slightest interest? Why would anyone even want to consider it a work of art?

I argued in my book Inner Vision: An Exploration of Art and the Brain that one of the functions of art is to give and gain knowledge. And My Bed gives, I think, knowledge about thousands, and more likely millions, of beds in bed-sitting rooms in all major cities of the world. It replays a scenario that you would find time and time again if you were to peep into bedrooms or walk into them uninvited.

It gives you knowledge about how many, many millions live every day, and about their states of mind.

My Bed therefore gives knowledge not only about bedrooms but also about the states of mind that keep bedrooms in that state.

All art is abstraction. A portrait painting is great if it succeeds in being an abstraction of a certain kind of character. The actual person portrayed becomes irrelevant, because the portrait gives knowledge about a character, not an individual person.

And so with Tracey Emin’s My Bed. It is far, far more interesting than bisected sharks and cows. These also give knowledge but that knowledge is much better obtained in museums of natural history which are, after all, open to all.

Tracey Emin’s My Bed gives knowledge about something that is normally hidden from view.

So, I am not at all surprised that My Bed should have been sold at auction this month in London for £2.5 million. It is much better than many other works of art that fetch equivalent prices. It is one of the much better examples of contemporary art.

Friday, July 18, 2014

Juncker and Cameron the best of friends


Looking at the picture published in The Guardian, no one would suspect that David Cameron tried to block the appointment of Jean-Claude Juncker to the presidency of the European Commission, and lost big time.

Instead, they appear as if they are, and have been, the best of friends.

It is as if it was all water off a duck’s back.

This is the stuff of successful diplomacy, on the back of hypocrisy.

I would have loved to determine the pattern of brain activity in both at this "oh-so-friendly" moment!

"Nature" and the retracted STAP cell paper



After their publication in January this year, to much fanfare and international acclaim, the two STAP cell papers have been retracted because, it seems, there were flaws in them.

In an editorial, Nature has absolved itself of all responsibility for the flawed papers, claiming that neither its referees nor its editorial team could have spotted the apparently serious flaws in them, flaws which led to the papers’ rapid demise.

Nature is in fact quite correct. It is not the function of editors or journals to look for manipulated images or plagiarism. I have no doubt that the very great majority of referees would notify the editors at once if they detected such flaws. There is, or ought to be, a certain element of trust between authors, journals and their editors. Moreover, as I understand it, Nature and its referees did not give these papers an easy ride. It took several months before the papers were published, implying that the referees had asked for substantial modifications to the manuscript.

Thus, Nature could be said to come out of it smelling like roses.

Yes, but not quite so fast.

Nature should take a leaf from one of its sister publications, Frontiers in Human Neuroscience, which is in fact owned by the Nature Publishing Group.

After a paper is accepted in Frontiers (but not before, and not if it is rejected), the names of the referees are published on the front page of the article. Publication in Frontiers is also not an easy ride, but at least the authors are allowed to enter into dialogue with the referees to put right or respond to criticisms, something that few journals allow, to the disadvantage of authors. The referees remain anonymous throughout this process, and only if a paper is accepted for publication are their names published.

Hence, if a paper is of extraordinary significance, some of the glory is reflected onto the referees and of course onto the journal. I mean, just imagine: if the Crick-Watson DNA paper had had the names of the referees on it, they would no doubt have wanted to share in the glory to some minor extent. Indeed, Nature itself periodically reminds its readers that the DNA article was published in its pages, thus basking in the reflected glory.

Since all reasonable people understand that referees and editors cannot be held accountable for things like manipulated images or plagiarism in a paper, publication of their names in an accepted paper would do no harm, even if the published paper turns out to have serious flaws.

If, on the other hand, the paper turns out to be some extraordinary contribution, then they can at least feel pride in helping to bring it to fruition and bask in its glory.

It is a classic case of “heads I win, tails you lose”.

Why not try it?

Monday, June 23, 2014

Why impact factors are here to stay


I had intended to follow up my earlier post on impact factors (in defence of Nature and Science) with this one some weeks ago but could not get to it until now. This is fortunate for, in the meantime, my colleague David Colquhoun has written an excellent post about impact factors and the like, with which I agree completely. In fact, David uses facts and figures to give teeth to what we all know – that impact factors are deeply flawed and that no self-respecting academic institution or individual should take the slightest notice of them when judging an individual for a position or for a grant application. Above all, papers should be judged by their content – which of course implies reading them, not always an easy thing to do, it seems. A particularly important point that David makes is that the importance and influence of a paper may emerge years after its publication. This is why I regard the reason given by the open access journal e-Life for declining my paper – that “we only want to publish the most influential papers” – as among the silliest, and indeed the most stupid, I have ever received from a journal, and a journal, moreover, which is viewed as a rival to the so-called “glamour” journals, at least by some. I suspect that the editors of Nature and Science have had much too much experience to write anything quite so silly.

It is now generally acknowledged in the academic community that there is something unsavoury about impact and allied factors. Indeed, there is a San Francisco declaration (DORA) which is a sort of code of honourable conduct for the academic community. I have signed the DORA declaration – even though I detected a few cynical signatures there – and applaud its aims. Given the large number of academics and those from allied trades who have also done so, it is obvious that we all know (or most of us do) that impact factors and the allied statistics regularly collected to classify scientists and their standing are dangerous because they are so flawed. Yet in spite of this knowledge, impact factors continue to be used and abused on an unprecedented scale. Therefore the more interesting question, it seems to me, is why this should be so. That, I think, is the question that the academic community must address.

Here to stay because they serve deep-seated human needs
I believe that, whatever their flaws, impact factors are here to stay because they serve at least two needs, as I stated in my last post on the question in respect of editors. They increase editors’ self-esteem and allow them, in the endless and often silent competition that goes on between journals, to declare to others, “we are better than you”. Hence Nature sent out emails some time ago stating that “To celebrate our impact factor…” they were doing such and such. They were the producers and consumers of their own propaganda, declaring in the same breath to themselves and to others that they are better than the competition.

What applies to editors applies equally to individuals. When a scientist declares “I have published 20 papers in Nature and 15 in Science”, s/he is also saying “I am a better scientist [than those of you who cannot publish in these high impact journals]”, without actually having to spell it out in words, perhaps even without thinking about it. In this way, they give notice to their colleagues, both superior and inferior, that they are better. They too are producers and consumers of their own propaganda, since it elevates their self-esteem on the one hand and the esteem accorded them on the other. In other words, impact factors serve the same purposes for both editors and contributors. And the desire for esteem is deeply ingrained, so any means of obtaining it – of which impact factors are one – is bound to enjoy great success. Impact factors, in brief, are not going to be easily brushed aside by pointing out their flaws, when they cater for much deeper needs.

Two examples
This was dramatically illustrated for me during a lecture I attended fairly recently, given not by an aspirant to a job but by a senior professor. Slide after slide had emblazoned on it Nature or Science in conspicuous letters at least four times the size of the title of the article displayed, the face of the professor growing ever more smug with each additional slide. The irony is that I remember nothing of his lecture, save this and the title of the lecture. And that, of course, was part of the intended effect. For I now remember that the eminent professor publishes in “glamour” journals. He must therefore be good, which is what I assume he was trying to tell us.

Another example: Some four years ago, an advertisement for a research fellow from a very distinguished institution stated that “the successful candidate will have a proven ability to publish in high impact journals”. Nothing about attracting research grants or, better still, attacking important problems. Presumably, if someone publishes in glamour journals, s/he is worthy of consideration because they must be good.

When I made my disapproval of the wording known to the Director of the institution (whom I knew vaguely), he looked at me as if I had come from another planet, or perhaps as if I were someone who had seen better days and had not kept up with the world. What he said in defence of his preposterous ad confirmed this. I was shocked.

Actually, I should not have been, because he was right and I was wrong.

Centuries of impact factors
There is in essence nothing new in impact factors, although they have been with us for only about 15 years or so. In one guise or another, they have actually been with us for centuries. Does such “in your face” advertising differ radically from putting some grand title after one’s name? Fellows of the Royal Society have for centuries put FRS after their name, to indicate to all their somewhat elevated status. In France, those who have a Légion d’Honneur do it more conspicuously, “in your face”, by permanently wearing a badge on their lapel to announce their higher status. In Britain, at official ceremonies, guests are often asked to wear their decorations – to signify their status to others. Impact factors, as used by scientists, are just another means of declaring status. The difference between these displays of superiority and impact factors is simply a difference in the means of delivery, nothing else.

It all really boils down to the same thing. So, if impact factors were to be banished by some edict from the scientific nomenklatura, their place would be taken by some other factor. And what replaces them may in fact be worse.

Impact factors as short-cuts
Impact factors appeal to another feature of the human mind, namely the tendency to acquire knowledge through short-cuts. Where we do not understand easily, or cannot be bothered, or are sitting on a committee that is judging dozens of applicants for a top position, impact factors become a welcome aid.

How does saying that “s/he has published in high impact factor journals” differ from a common phraseology used by many (not only trade book authors), along the lines of “Dr. X, the Nobel Prize-winning physicist, has affirmed that the war in Mont Blanc is totally unjustified from a demographic point of view”, when there is not the slightest guarantee that a Nobel Prize in physics qualifies its recipient to give informed views on such wars? Once again, the difference between this and impact factors is simply a difference in the means of delivery. Like impact factors, this one short-cuts the thinking process and, since similar phraseologies are used so often, we must assume that most of us, when handling issues outside our competence, welcome the comfort of a label that apparently guarantees superior knowledge or ability and hence soothes our ignorance.

There was a hilarious correspondence years ago (so my memory of it is somewhat blurred) in, I think, The Times, about the nuclear deterrent. Someone, anxious to convince readers of the wisdom of maintaining our nuclear deterrent, wrote that he knew of two Nobel laureates who had come to the conclusion that we should keep our nuclear deterrent. In response, Peter Medawar (the Nobel laureate, if I may say so) wrote back: “If we are going to conduct this dialogue by the beating of gongs, let me say that I know of 5 Nobel laureates who think that we should not keep our nuclear deterrent”.

A recent paper states that “… publication in journals with high impact factors can be associated with improved job opportunities, grant success, peer recognition, and honorific rewards, despite widespread acknowledgment that impact factor is a flawed measure of scientific quality and importance”. It is obvious why this is so; the habit of short-cutting the thinking process, common to most, is wonderfully aided by impact factors. I know of only one outstanding scientist (you guessed it, he was a Nobel laureate) to whom impact factors and all other short-cuts made not the slightest difference in assessing the suitability of a person; he searched for the evidence by reading the papers; that was his sole guide and authority. He was, of course, a loner; almost all of us are vulnerable to being impressed.  

It is totally fatuous to assume that, sitting on a committee that is trying to appoint a new research fellow, I would not – like countless others, and absent the process of assessing all the evidence – be impressed by someone who has published 10 papers in Nature and 20 in Science, and would not show a preference for him over someone who has published entirely in those “specialized” journals which the editors of glamour journals so cynically advise those whose papers they reject without review to publish in.

Hence, for editors, administrators, ordinary and extraordinary scientists alike, impact factors serve deep-seated human needs – to classify people easily and painlessly, to declare one’s superiority to others, and to soothe oneself with the belief that one is actually better. There is no way in which all the flaws attached to impact factors could overcome these more basic needs.

One can point to other similarities, where what is considered to be flawed nevertheless is durable and highly successful. Take, for example, posh addresses, say in Belgrave Square in London or Avenue Foch in Paris, or Park Avenue in New York. There is absolutely no guarantee that those inhabiting such addresses are in any way better than those who do not, although they may be (and often are) far richer. This is not unlike those who advertise incessantly that they have published in the glamour journals. There is no evidence whatever that those who publish in them are better scientists. But giving an address such as Science on a paper is like giving an address in Belgravia. It works like magic. Once again, the difference lies only in the means of delivery.

So, let us sink back and get used to it… impact factors are here to stay, and for a very long time. No good blaming editors of journals for it, no good blaming scientists, no good blaming administrators. It all has to do with the workings and habits of the mind, as well as with its needs.

The only way to deal with the crushing and highly undesirable influence of impact factors in science is to prohibit their use in judging candidates and research grants and indeed the merit of individuals. That would be a great help, and some research organizations are actually implementing such procedures, with what success I cannot guess. But I fear that even that will not be enough. To get rid of it completely, one must change human nature. And that is not currently in our gift.

Sunday, April 27, 2014

Impact factors...in defence of "Nature" and "Science"


More often than not, when the corrupting influence of impact factors on science is discussed, fingers are pointed at Nature and Science, as if these two scientific journals invented impact factors and as if they are the main culprits in debasing science. This is not even remotely true. Rather, the finger should be pointed at the academic community exclusively and at no one else. In fact, there are even limits to how much the academic community can be blamed, as I argue in my next post.

Nature and Science, and especially the former, are the best known and most sought after of what has come to be known as the “glamour” journals in science. There are, of course, other “glamour journals”, as well as ones that aspire to that status, but none has reached quite the same status as these two. It is therefore perhaps not surprising that they should bear the brunt of the blame.

But it would be hard for even the enemies of these two journals not to acknowledge that both have done a remarkably good job in selecting outstanding scientific papers over so many years. Journals do not gain prestige by impact factors alone; if they did, their prestige would fall, and their impact factors along with it. I myself have little doubt that the editors of both journals are hard-working, conscientious people, striving to do the best by themselves, by their journal and by science. One way of measuring their success is through impact factors, which are a guide to how often papers in a journal are cited. Impact factors are blind to the quality, readability or importance of a paper. They are simply one measure – among others – of how well a journal is doing and how wide an audience it is reaching. One could equally use other measures, for example advertising revenue or some kind of Altmetric rating. Impact factors just happen to be one of those measures. And let us face it, no editor of any journal would be satisfied with low impact factors for their journal; if s/he says otherwise, s/he lies. The unending search for better impact factors really boils down to human behaviour – the desire to convince oneself and others that one is doing a good job, to esteem oneself and gain the esteem of others. Editors of journals are no different. Like the rest of us, they aspire to succeed and to be seen – by their employers, their colleagues and the world at large – to have succeeded. Is it any wonder that they celebrate their impact factors?

To the editors of these journals – and to the rest of us - impact factors are therefore a guide to how successful they have been. I see nothing wrong with that, and find it hard to blame them for competing against each other to attain the best impact factor status. In other words, there is nothing really wrong with impact factors, except the uses to which they are put, and they are put to undesirable uses by the academic community, not by the journals.

In spite of the sterling service both have done to science by publishing so many good papers, it is also true that they have published some pretty bad ones. In fact, of the ten or so worst papers I have read in my subject, in my judgment one (the worst) was published in Nature Neuroscience, one in Nature, and one in Science. I have, as well, read many mediocre papers in these journals, as well as in others aspiring to the same status, such as the Proceedings of the National Academy of Sciences and Current Biology. This is not surprising; the choice of papers is a matter of judgment, and the judgment made by these journals is actually made by humans; they are bound to get it wrong sometimes, and apparently do so often. By Nature’s own admission in an editorial some time ago, there are also “gems” in it which do not get much notice. Hence, not only does one find some bad or mediocre papers in these journals but unnoticed good ones as well. Retraction rates in both journals are not much worse or better than in other journals, although retraction rates apparently correlate with impact factors: the higher the impact factor, the more frequent the retractions. But it would of course be entirely wrong to blame the journals themselves, or their referees, if they publish papers which subsequently have to be retracted. The blame for that must lie with the authors.

“Send it to a specialized journal” (euphemism for “Your paper won’t help our impact factor”)
I recently had an interesting experience of how they can also be wrong in their judgment, at least their judgment of the general interest in a scientific work (of course, the greater the general interest, the higher their impact factor is likely to be). We sent our paper on “The experience of mathematical beauty and its neural correlates” first to Nature, which rejected it without review, stating that “These editorial judgements are based on such considerations as the degree of advance provided, the breadth of potential interest to researchers and timeliness” (somewhere in that sentence, probably at “breadth of potential interest”, they are implicitly saying that our paper does not have the breadth of potential interest – in other words, that it will not do much to improve their impact factor). We then sent it to Science, which again returned it without sending it out for review, saying that “we feel that the scope and focus of your paper make it more appropriate for a more specialized journal.” (Impact factors playing a role again here, at least implicitly, because, of course, specialized articles will appeal to a minority and will not enhance the impact factor of a journal, since they are also likely to be cited less often and then only by a minority.)

Finally, going several steps down a very steep ladder, we sent it to Current Biology, which also returned it without sending it out to referees for in-depth review, writing that “…our feeling is that the work you describe would be better suited to a rather more specialized journal than Current Biology” (my translation: it will do nothing for our impact factor, since only a limited number of workers are likely to read and cite it).

The paper was finally published in Frontiers in Human Neuroscience (after a very rigorous review). Given that this paper has, as of this writing, been viewed over 71,000 times in just over 2.5 months, and that it has been viewed even in war-torn countries and regions (Syria, Libya, Ethiopia, Iraq, Kashmir, Crimea, Ukraine), it would seem that our article was of very substantial interest to a very wide range of people all over the world; very few papers in neuroscience, and I daresay in science generally, achieve the status of being viewed so many times over such a brief period. On this count, then, I cannot say that the judgment that the paper should be sent to a specialized journal, or that its breadth of interest was potentially limited, inspires much confidence.


We only want to publish the most influential papers
It is of course a bit rich for these journals to pretend that they are not specialized. I doubt that any biologist reading the biological papers in Nature or Science would comprehend more than one paper in any issue, and that is being generous. In fact, very often what makes their papers comprehensible are the news and views sections in the same issue, a practice that some other journals are taking up, though somewhat more timidly. By any standard, Nature and Science and all the other journals that pretend otherwise are in fact highly specialized journals.

Be that as it may, they are only pursuing a policy that many other journals also pursue. Consider this letter from e-Life, a recently instituted open access journal, which I have seen written about as if it were a welcome answer to Nature.

Well, they returned a (different) paper I sent within 24 hours, after an internal review, saying that “The consensus of the editors is that your submission should not be considered for in-depth peer review”, adding prissily that “This is not meant as a criticism of the quality of the data or the rigor of the science, but merely reflects our desire to publish only the most influential research”, apparently without realizing that research can only be judged to have been influential retrospectively, sometimes years after it has been published. But what does “influential” research amount to? Research which is cited many times, thereby boosting – you guessed it – the impact factor of the journal. Indeed, e-Life (which has also published some interesting articles) even has a section in its regular email alerts that is intended for the media – which of course helps publicize a paper and boost – you guessed correctly again – the impact factor!

So why single out Nature and Science, when so many journals are also pursuing impact factors with such zeal? It is just that Nature and Science are better at it. And their higher impact factors really mean that the papers they select for publication are being cited more often than those selected by other journals with aspirations to scientific glamour.

So, instead of pointing fingers at them, let us direct the enquiry at ourselves, while acknowledging that both journals, in spite of all their blemishes and warts, have done a fairly good job for science in general. 

In my next post, I will discuss why impact factors - however repellent the uses to which they are put by us - are here to stay.

Monday, January 27, 2014

Art and science meet up, sort of...

Some time ago, I wrote about an empty canvas by Bob Law, entitled Nothing to be Afraid Of, which was to be auctioned for an estimated £60,000. Law was described by the head of the contemporary art department at the auction house as the “most underestimated and overlooked minimalist artist in Britain...who didn't get the recognition that he deserved”. In his painting he had apparently “... applied the seductive idea of nothing to a canvas, and asks the viewer to reflect”.

A somewhat puzzled David Hockney was reported as saying "It seems to me that if you make pictures there should be something on the canvas".

In the end, the empty canvas was never sold, at least not at that auction.

Now, I have just read in Real Clear Science about the shortest paper ever published.

It is entitled "The Unsuccessful Self-Treatment of a Case of Writer's Block" by one Dennis Upper.
The paper is an empty page. The referee's comments are reproduced below the empty page and read as follows:

"I have studied this manuscript very carefully with lemon juice and X-rays and have not detected a single flaw in either design or writing style. I suggest it be published without revision. Clearly it is the most concise manuscript I have ever seen-yet it contains sufficient detail to allow other investigators to replicate Dr. Upper's failure. In comparison with the other manuscripts I get from you containing all that complicated detail, this one was a pleasure to examine. Surely we can find a place for this paper in the Journal-perhaps on the edge of a blank page."

There is nothing on the page -- and yet "it contains sufficient detail to allow other investigators to replicate..."

Bob Law asked the viewer to reflect by applying "the seductive idea of nothing to a canvas".

Both scientists and artists can now, in the absence of all detail, create their own details.

So science and art do meet, sort of, don't they? After all, who can deny the similarity here?

Maybe someone should ask the auction house to sell a copy of the paper (preferably signed by Dennis Upper) alongside Bob Law's empty canvas.

That will be a true meeting of art and science - united under money.

The question is: which one will fetch the higher price?