Homeostasis by Peer Review

Homeostasis by Peer Review

Peter Lissaman
Peer Review is indeed an excellent preserver of the status quo.  For the AIAA
(the main aerospace institution), the standard procedure is that the signed
draft paper is submitted by the editors to reviewers, who then send anonymous
comments to the author.  Twenty years ago, as a Fellow of said august
Institution, I proposed simply reversing the process: sending the paper
anonymously to reviewers and then listing favorable reviewers on the
published paper.  It was received with deafening silence.  Actually, the
Royal Society does do something akin to this.

Peter Lissaman,  Da Vinci Ventures

Expertise is not knowing everything, but knowing what to look for.

1454 Miracerros Loop South, Santa Fe, New Mexico 87505
TEL: (505) 983-7728                        FAX: (505) 983-1694





============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Homeostasis by Peer Review

Marko Rodriguez
Hi,

Or you could separate the review process from the publication process.
E.g., pre-print repositories could provide peer-review services. If a
journal wants a paper, it can search for "highly regarded" articles in
pre-print repositories and ask the authors for copyright permission to
publish their articles.

Rodriguez, M.A., Bollen, J., Van de Sompel, H., "The Convergence of
Digital Libraries and the Peer-Review Process", Journal of Information
Science, volume 32, number 2, pages 149-159, April 2006.
   [ see http://markorodriguez.com/Research_files/dl-peer-review.pdf ]

Also see: http://www.dlib.org/dlib/october06/vandesompel/10vandesompel.html

See ya,
Marko.





Re: Homeostasis by Peer Review

Russell Gonnering
In reply to this post by Peter Lissaman
Peter-

This is an interesting proposal.  Having served on the editorial board of a number of medical publications, I agree that the peer review process tends to preserve the status quo.  The standard for an established author from a "reputable institution" may be, at least unconsciously, different from that used for a neophyte.  I like sending the paper without the authors listed.  I'm not sure about listing the reviewers on the published paper.

Russ
 

Russell S. Gonnering, MD, FACS, MMM, CPHQ


Re: Homeostasis by Peer Review

Nick Thompson
In reply to this post by Peter Lissaman
Peter, et al.

Perhaps I haven't been following this thread carefully enough, but why not
have every article published and every article rated by a number of stars?
Then everybody could set their browser to the minimum number of stars
they are willing to tolerate.  Those of us who don't want to be subject to
the "peer review" effect could simply set our browsers to read everything
with any stars at all!

The problem is that we all want to have our cake and eat it too: to read
all the zany stuff that will become next year's big thing WITHOUT having
to read the weird, stupid stuff that goes nowhere.  We readers are really
the problem.

Nick

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])







Re: Homeostasis by Peer Review

glen e. p. ropella-2
Thus spake Nicholas Thompson circa 28/01/09 07:34 PM:
> [...] why not
> have every article published and every article rated by a number of stars?
> Then everybody could set their browser to the minimum number of stars
> they are willing to tolerate.  Those of us who don't want to be subject to
> the "peer review" effect could simply set our browsers to read everything
> with any stars at all!

The problem with this is that a number of stars is uni-dimensional,
while the guidelines for reviewers for any given publication are
multi-dimensional (and/or vague).

You could still project down to one dimension if you have a strong
policy of who gets to rate the article, what the criteria are for the
rating, and the trump power of the editor's opinion.
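Glen's projection down to one dimension might look something like the
following sketch, assuming a journal that scores each submission on a few
criteria and lets the editor's opinion trump the computed value.  The
criteria, the policy weights, and the 0-5 star scale are all invented for
illustration; no real journal's policy is implied.

```python
# Toy sketch: collapse a multi-dimensional review into one star count.
# The criteria and editorial policy weights are hypothetical.

CRITERIA = ("novelty", "rigor", "clarity")
POLICY_WEIGHTS = {"novelty": 0.5, "rigor": 0.3, "clarity": 0.2}

def project_to_stars(review, editor_override=None, max_stars=5):
    """Project per-criterion scores (each 0..1) down to a single star
    rating; an explicit editor_override trumps the computed projection."""
    if editor_override is not None:
        return editor_override
    score = sum(POLICY_WEIGHTS[c] * review[c] for c in CRITERIA)
    return round(score * max_stars)

review = {"novelty": 0.9, "rigor": 0.4, "clarity": 0.8}
print(project_to_stars(review))                      # weighted sum 0.73 -> 4 stars
print(project_to_stars(review, editor_override=2))   # editor's opinion wins -> 2
```

The weights are exactly the "strong policy" Glen mentions: a different
journal would pick different criteria and weights, which is why the
projection is per-publication rather than universal.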

The ultimate automation would be a multi-dimensional rating measure,
though.  Then you might be able to get rid of the per-publication
policies.  I'm imagining a preference- (or query-) controlled modal
5-dimensional 3D space with color and animation.  Oooo, we could also
implement an automated-profiling-controlled physics for the 5D system so
that articles were, say, attracted to you based on your past interests;
more elastic where clusters of articles cover much of the same content;
with coefficients of friction adjusted based on which region of rating
space you were exploring at the time; with colors (of the articles)
changing when your focus shifts from fact to speculation or physics to
philosophy; etc.

... or not.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com



Re: Homeostasis by Peer Review

Nick Thompson
In reply to this post by Peter Lissaman
Glen,

OK, but ... no matter how many dimensions of judgment you have, it all
boils down to a one-bit decision: either you are going to read the sucker
or you aren't.

If one knows who the reviewers are ... knows their tastes, etc., perhaps
each consumer could rate reviewers, and the program could assign stars via
a reviewer weighting customized for each consumer.  Software could be
provided to do this.  It's very close to what Amazon provides right now,
except that each reader would accumulate his own personal judgments of
reviewers rather than relying on swarm evaluation.
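Nick's reviewer-weighting idea can be sketched in a few lines of Python.
This is a hypothetical illustration only: the reviewer names, the 0-5 star
scale, and the default weight are invented for the example, not anything
the list actually built.

```python
# Hypothetical sketch of a personalized, reviewer-weighted star score.

def personal_score(article_ratings, reviewer_weights, default_weight=1.0):
    """Weight each reviewer's 0-5 star rating by how much THIS reader
    trusts that reviewer, instead of averaging over the whole swarm."""
    total, weight_sum = 0.0, 0.0
    for reviewer, stars in article_ratings.items():
        w = reviewer_weights.get(reviewer, default_weight)
        total += w * stars
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# One reader trusts reviewer "a" heavily and discounts "c" entirely.
ratings = {"a": 5, "b": 3, "c": 1}
weights = {"a": 2.0, "b": 1.0, "c": 0.0}
print(personal_score(ratings, weights))  # (2*5 + 1*3 + 0*1) / 3 -> 4.333333333333333
```

Because each reader supplies his own weights, two readers see different
star counts for the same article, which is exactly the difference from
Amazon's single swarm average.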

I forgot to say: let's say the journal has an editorial board that rates
and comments on articles as they are submitted.  Let's say we start a
journal called THE FRIAM JOURNAL OF APPLIED COMPLEXITY.  Every article is
sent out to five reviewers, so now we have a possibility of 25 stars, say.
Now the editor passes along the ratings and suggestions of the reviewers,
and the author can make a choice: he can carry on with publication at a
low rating, or he can revise and resubmit for a better rating.

This led to another thought: a group such as this one wouldn't even need
to start its own journal.  It could just start a rating service for some
other publication.  We could, for instance, start by rating JASSS and
putting the ratings up on the web.  The trouble is we wouldn't be rating
or seeing the articles that JASSS had rejected.  I suppose we could ask
JASSS to send us all their rejected articles!

I am probably too lazy to do anything like this, but I really like thinking
about it.  

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])





Re: Homeostasis by Peer Review

Russ Abbott
Nick,

What a great idea. It makes reviewing like recommending, and it gives potential readers a better sense of why someone liked or disliked an article.  It really changes the nature of a journal: rather than selectively publishing only the well-reviewed articles, a journal would publish everything along with its reviews. The value of a journal would depend even more on the people it could get to do the reviewing.

It would make it more difficult to list one's "publications," though. Under what circumstances would one say that a submitted paper had been accepted and published?  I guess the journal could still go through its acceptance process. A problem that occurs to me, though, is that one often submits an article hoping for useful comments and expecting to rewrite it.  How would those articles be handled? Would the author accompany the submission with a request not to have the article and the reviews made public?

Also, I suspect that some reviewers would not want their entire reviews published online. At least a portion of each review would have to be set aside for the author only.

The more I think about it, the more complex but also the more promising it becomes. A very interesting idea.

-- Russ


Re: Homeostasis by Peer Review

glen e. p. ropella-2
In reply to this post by Nick Thompson
Thus spake Nicholas Thompson circa 29/01/09 12:37 PM:
> [...] it all
> boils down to a one-bit decision: either you are going to read the sucker
> or you arent.

Not that I'm argumentative or anything; but it's not just binary.  I
have at least 3 modes of reading: 1) read and integrate, 2) sloppy
reading, and 3) skim.  I do (1) when I want/expect to use the content
for some task.  I do (2) when I merely need to carry some context for
understanding or communicating with others.  And I do (3) when I want to
determine whether I need to do (1) or (2), or when I just want an entry
into some topic.

So, the decision is, at least, quaternary.

Much of which type of reading I do depends on the character of the
publication pathway, and this is one of the reasons I hate the way /.
and digg work.  For whatever reason, I tend to get the most benefit out
of obscure articles ... perhaps similarly, I seem to get the most
enjoyment out of obscure music.  Homogeneity seems to be the enemy.

> If one knows who the reviewers are ... knows their tastes, etc., perhaps
> each consumer could rate reviewers, and the program could assign stars via
> a reviewer weighting customized for each consumer.  Software could be
> provided to do this.  It's very close to what Amazon provides right now,
> except that each reader would accumulate his own personal judgments of
> reviewers rather than relying on swarm evaluation.

This is a close approximation to what I'd like, except why approximate
if you can shoot for the ultimate?  If we were to develop a complicated
projection from many dimensions to one, we might find it as difficult as
implementing the multi-dimensional measure right off the bat.  I suppose
there would be marketing reasons... a competent funder might demand we
start accumulating users immediately via a reducing projection and build
out the multi-dimensional rating interface over time.

> I forgot to say: let's say the journal has an editorial board that rates
> and comments on articles as they are submitted.  Let's say we start a
> journal called THE FRIAM JOURNAL OF APPLIED COMPLEXITY.  Every article is
> sent out to five reviewers, so now we have a possibility of 25 stars, say.
> Now the editor passes along the ratings and suggestions of the reviewers,
> and the author can make a choice: he can carry on with publication at a
> low rating, or he can revise and resubmit for a better rating.

This would be a nice evolution of the system we currently have.  If I
were the editor of an extant journal, I might find it attractive.  But
if I were to start an entirely new publication intent on revolutionizing
peer-review, I would be more inclined to adopt a multi-dimensional
rating system not based solely on number of stars.  Of course, there are
all sorts of compromises.  Perhaps the stars are colored according to
the domain expertise of the reviewer.  Or perhaps we have multiple
symbols for types of rating (innovation vs. clear communication vs.
scientific impact etc.).

> This led to another thought: a group such as this one wouldn't even need
> to start its own journal.  It could just start a rating service for some
> other publication.  We could, for instance, start by rating JASSS and
> putting the ratings up on the web.  The trouble is we wouldn't be rating
> or seeing the articles that JASSS had rejected.  I suppose we could ask
> JASSS to send us all their rejected articles!

Actually, I'd like to see something like this for hubs like PubMed or
repositories like CiteSeer or the ACM's digital library.  I wouldn't
want it to be publication-specific, though I might want it to be
domain-specific.

> I am probably too lazy to do anything like this, but I really like thinking
> about it.  

[grin]  Oh ... uh ... what?  ... you were talking about actually _doing_
something?!?  Umm ... ok ... perhaps I'm in the wrong place... [patting
pockets, grabbing jacket, retreating from the room] ;-)

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com



Re: Homeostasis by Peer Review

Nick Thompson
In reply to this post by Peter Lissaman
Russ,
 
What you propose here is actually more elaborate and interesting than what I had in mind.  It's what I proposed PLUS Behavioral and Brain Sciences.  On my account, nothing gets published until the author is ready, and everything gets published with a rating.  On your account, everything gets published WITH a rating AND WITH the last set of reviews.  What that means is that the discussion never gets closed; there is always a wet edge.  I like that.

I think academics would fall into line.  Promotion committees can count stars just as well as the rest of us.
 
Nick
 
 
 
Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
 
 
 
 

Re: Homeostasis by Peer Review

John Kennison


Maybe in the near future researchers will publish papers on their own web sites, and journals will consist of stars (and maybe other symbols) and links.
________________________________________
From: [hidden email] [[hidden email]] On Behalf Of Nicholas Thompson [[hidden email]]
Sent: Thursday, January 29, 2009 4:20 PM
To: [hidden email]
Cc: friam
Subject: Re: [FRIAM] Homeostasis by Peer Review

Russ,

What you propose here is actually more elaborate and interesting than what I had in mind.  It's what I proposed PLUS behavioral and brain sciences.  On my account, nothing gets published until the author is ready;  on my account, everything gets published with a rating.  On your account, everything gets published WITH a rating AND WITH the last set of reviews.  What that means, is that the discussion never gets closed;  there is always a wet edge.  I like that.

I think academics would fall into line.  Promotion committees can count stars just as well as the rest of us.

Nick



Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email]<mailto:[hidden email]>)




----- Original Message -----
From: Russ Abbott<mailto:[hidden email]>
To: [hidden email]<mailto:[hidden email]>;The Friday Morning Applied Complexity Coffee Group<mailto:[hidden email]>
Sent: 1/29/2009 2:11:52 PM
Subject: Re: [FRIAM] Homeostasis by Peer Review

Nick,

What a great idea. Makes reviewing like recommending, and it gives potential readers a better sense of why someone liked or disliked an article.  It really changes the nature of a journal. Rather than selectively publishing only the well reviewed articles, everything would be published along with their reviews. The value of a journal would even more depend on the people they could get to do the reviewing.

It would make it more difficult to list one's "publications," though. Under what circumstances would one say that a submitted paper had been accepted and published?  I guess the journal could still go through its acceptance process. A problem that occurs to me, though, is that often one submits an article with the hope of getting useful comments and with the expectation of having to re-write it.  How would those articles be handled? Would the author accompanying the submission with a request not to have t he article and the reviews made public?

Also, I suspect that some reviewers would not want their entire reviews published online. At least a portion of each review would have to be set aside for the author only.

The more I think about it, the more complex but also the more promising it becomes. A very interesting idea.

-- Russ

On Thu, Jan 29, 2009 at 12:37 PM, Nicholas Thompson <[hidden email]<mailto:[hidden email]>> wrote:
Glen,

OK, but ....  No matter how many dimensions of judgment you have, it all
boils down to a one-bit decision: either you are going to read the sucker
or you aren't.

If one knows who the reviewers are ... knows their tastes, etc., perhaps
each consumer could rate reviewers and the program could give star ratings
by reviewer weighting customized for each consumer.  Software could be
provided to do this.  Very close to what Amazon provides right now, except
that each reader could accumulate his own personal judgments of reviewers,
rather than relying on swarm evaluation.

I forgot to say:  Let's say the journal has an editorial board that rates
and comments on articles as they are submitted.   Let's say we start a
journal called THE FRIAM JOURNAL OF APPLIED COMPLEXITY.  Every article is
sent out to five reviewers.   So now we have a possibility of 25 stars, say.
Now, the editor passes along the ratings and suggestions of the reviewers,
and the author can make a choice.  He can carry on with publication at a low
rating, or he can revise and resubmit to get a better rating.
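The combination of the five-reviewer star scale and the reader-weighted personalization described above can be sketched in a few lines of Python. Everything here is hypothetical for illustration: the reviewer names, the star ratings, and the trust weights are all invented, and the weighting rule (a simple weighted average with a default weight of 1.0 for unrated reviewers) is just one plausible choice.

```python
# Sketch of personalized, reviewer-weighted star ratings.
# All names, ratings, and weights below are invented for illustration.

def personalized_stars(article_ratings, reader_weights):
    """Weighted average of reviewer stars, using one reader's personal
    trust weights for each reviewer (unrated reviewers default to 1.0)."""
    total = sum(stars * reader_weights.get(reviewer, 1.0)
                for reviewer, stars in article_ratings.items())
    weight = sum(reader_weights.get(reviewer, 1.0)
                 for reviewer in article_ratings)
    return total / weight

# Five reviewers, each awarding 0-5 stars (so at most 25 raw stars).
ratings = {"rev_a": 5, "rev_b": 4, "rev_c": 2, "rev_d": 3, "rev_e": 4}

print("raw total:", sum(ratings.values()))               # editor's 25-star scale: 18

# One reader trusts rev_a and rev_b heavily and discounts rev_c.
weights = {"rev_a": 3.0, "rev_b": 3.0, "rev_c": 0.5}
print("personalized:", round(personalized_stars(ratings, weights), 2))  # 4.12
```

Two readers with different trust weights would see different star counts for the same article, which is the point: the journal publishes everything, and each reader's threshold filter operates on their own personalized score.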

This led to another thought.  A group such as this one wouldn't even need to
start its own journal.  It could just start a rating service for some other
publication.  We could, for instance, start by rating JASSS and putting the
ratings up on the web.  The trouble is we wouldn't be rating or seeing the
articles that JASSS had rejected.  I suppose we could ask JASSS to send us
all their rejected articles!

I am probably too lazy to do anything like this, but I really like thinking
about it.

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])




> [Original Message]
> From: glen e. p. ropella <[hidden email]<mailto:[hidden email]>>
> To: The Friday Morning Applied Complexity Coffee Group <[hidden email]<mailto:[hidden email]>>
> Date: 1/29/2009 12:44:00 PM
> Subject: Re: [FRIAM] Homeostasis by Peer Review
>
> Thus spake Nicholas Thompson circa 28/01/09 07:34 PM:
> > [...] why not
> > have every article published and every article rated by a number of stars,
> > and then everybody could set their browser to the minimum number of stars
> > we are willing to tolerate.  Those of us who don't want to be subject to
> > the "peer review" effect, could simply set their browser to read everything
> > with any stars at all!
>
> The problem with this is that a number of stars is uni-dimensional,
> while the guidelines for reviewers for any given publication are
> multi-dimensional (and/or vague).
>
> You could still project down to one dimension if you have a strong
> policy of who gets to rate the article, what the criteria are for the
> rating, and the trump power of the editor's opinion.
>
> The ultimate automation would be a multi-dimensional rating measure,
> though.  Then you might be able to get rid of the per-publication
> policies.  I'm imagining a preference- (or query-) controlled modal
> 5-dimensional 3D space with color and animation.  Oooo, we could also
> implement an automated-profiling-controlled physics for the 5D system so
> that articles were, say, attracted to you based on your past interests,
> more elastic where clusters of articles cover much of the same content,
> coefficients of friction were adjusted based on which region of rating
> space you were exploring at the time, colors (of the articles) change
> when your focus shifts from fact to speculation or physics to
> philosophy, etc.
>
> ... or not.
>
> --
> glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
>
>



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org



Re: Homeostasis by Peer Review

Nick Thompson
In reply to this post by Peter Lissaman
Glen,

You raise an interesting information-theoretic point.  Note that your
decision to do (3) is not independent of your decision to do (1) or (2).
There is some redundancy here.

I am sure there are cases where you start to do (3) and then decide that
you have to do (1) after all.  Or vice versa.

Nick

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])




> [Original Message]
> From: glen e. p. ropella <[hidden email]>
> To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
> Date: 1/29/2009 2:12:46 PM
> Subject: Re: [FRIAM] Homeostasis by Peer Review
>
> Thus spake Nicholas Thompson circa 29/01/09 12:37 PM:
> > [...] it all
> > boils down to a one-bit decision: either you are going to read the sucker
> > or you aren't.
>
> Not that I'm argumentative or anything; but it's not just binary.  I
> have at least 3 modes of reading: 1) read and integrate, 2) sloppy
> reading, and 3) skim.  I do (1) when I want/expect to use the content
> for some task.  I do (2) when I merely need to carry some context for
> understanding or communicating with others.  And I do (3) when I want to
> determine whether I need to do (1) or (2), or when I just want an entry
> into some topic.
>
> So, the decision is, at least, quaternary.
>
> And much of which type of reading I do depends on the character of the
> publication pathway.  And this is one of the reasons I hate the way /.
> and digg work.  For whatever reason, I tend to get the most benefit out
> of obscure articles ... perhaps similarly, I seem to get the most
> enjoyment out of obscure music.  Homogeneity seems to be the enemy.
>
> > If one knows who the reviewers are ... knows their tastes, etc., perhaps
> > each consumer could rate reviewers and the program could give star ratings
> > by reviewer weighting customized for each consumer.  Software could be
> > provided to do this.  Very close to what Amazon provides right now, except
> > that each reader could accumulate his own personal judgments of reviewers,
> > rather than relying on swarm evaluation.
>
> This is a close approximation to what I'd like, except why approximate
> if you can shoot for the ultimate?  If we were to develop a complicated
> projection from many dimensions to one, we might find that as difficult
> as implementing the multi-dimensional measure right off the bat.  I suppose
> there would be marketing reasons... a competent funder might demand we
> start accumulating users immediately via a reducing projection and build
> out the multi-dimensional rating interface over time.
>
> > I forgot to say:  Let's say the journal has an editorial board that rates
> > and comments on articles as they are submitted.   Let's say we start a
> > journal called THE FRIAM JOURNAL OF APPLIED COMPLEXITY.  Every article is
> > sent out to five reviewers.   So now we have a possibility of 25 stars, say.
> > Now, the editor passes along the ratings and suggestions of the reviewers,
> > and the author can make a choice.  He can carry on with publication at a low
> > rating, or he can revise and resubmit to get a better rating.
>
> This would be a nice evolution of the system we currently have.  If I
> were the editor of an extant journal, I might find it attractive.  But
> if I were to start an entirely new publication intent on revolutionizing
> peer-review, I would be more inclined to adopt a multi-dimensional
> rating system not based solely on number of stars.  Of course, there are
> all sorts of compromises.  Perhaps the stars are colored according to
> the domain expertise of the reviewer.  Or perhaps we have multiple
> symbols for types of rating (innovation vs. clear communication vs.
> scientific impact etc.).
>
> > This led to another thought.  A group such as this one wouldn't even need to
> > start its own journal.  It could just start a rating service for some other
> > publication.  We could, for instance, start by rating JASSS and putting the
> > ratings up on the web.  The trouble is we wouldn't be rating or seeing the
> > articles that JASSS had rejected.  I suppose we could ask JASSS to send us
> > all their rejected articles!
>
> Actually, I'd like to see something like this for hubs like pubmed or
> repositories like citeseer or the acm's digital library.  I wouldn't
> want it to be publication-specific, though I might want it to be
> domain-specific.
>
> > I am probably too lazy to do anything like this, but I really like thinking
> > about it.
>
> [grin]  Oh ... uh ... what?  ... you were talking about actually _doing_
> something?!?  Umm ... ok ... perhaps I'm in the wrong place... [patting
> pockets, grabbing jacket, retreating from the room] ;-)
>
> --
> glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
>
>




Re: Homeostasis by Peer Review

Roger Critchlow-2
In reply to this post by John Kennison
On Thu, Jan 29, 2009 at 4:40 PM, John Kennison <[hidden email]> wrote:
>
>
> Maybe in the near future, researchers will publish papers on their web sites and journals would consist of stars (and maybe other symbols) and links.
> ________________________________________

In a sense that's what already happens, except that they publish on
arXiv.org, and the stars are being kept for some topics on blogs here
and there, but mostly in people's heads.

Recommender systems try to track the stars for books (Amazon), movies
(Netflix), and websites (Google); see http://recsys.acm.org/ and
http://www.readwriteweb.com/archives/recommender_systems.php. But they
all fail when the thing to be recommended falls outside the space
spanned by previous experience.  And they all assume a dilettante's
interest in the recommendations, and that everyone has a useful opinion
about everything; neither assumption holds once we get into the lands of
publish-or-perish.
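The failure mode described above shows up even in a toy user-based collaborative filter. The sketch below is a minimal illustration, not any production recommender; the users, papers, and ratings are all invented. A paper that nobody in the rating history has touched simply falls outside the spanned space and gets no prediction at all.

```python
import math

# Toy rating history; all users, papers, and scores are hypothetical.
ratings = {
    "alice": {"paper1": 5, "paper2": 3},
    "bob":   {"paper1": 4, "paper2": 2, "paper3": 5},
}

def cosine(u, v):
    """Cosine similarity over the papers two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[p] * v[p] for p in common)
    return dot / (math.sqrt(sum(u[p] ** 2 for p in common)) *
                  math.sqrt(sum(v[p] ** 2 for p in common)))

def predict(user, paper):
    """Similarity-weighted average of other users' ratings for `paper`.
    Returns None when no neighbor has rated it -- the cold-start gap."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or paper not in theirs:
            continue
        sim = cosine(ratings[user], theirs)
        num += sim * theirs[paper]
        den += sim
    return num / den if den else None

print(predict("alice", "paper3"))  # only bob has rated it, so his 5 dominates
print(predict("alice", "paper4"))  # rated by no one: None, no recommendation
```

The second call is the point: the system cannot say anything about a genuinely novel item, which is exactly the situation a publish-or-perish researcher with an unconventional paper finds themselves in.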

Ideally you would have a wiki on top of arXiv.org where each wiki
article was an ongoing review of the literature in the article's
subject. And when one published on arXiv.org one would not just pick a
single topic of publication but submit to all the reviews which might
find the new article relevant.  And the reviews would need to be
multi-threaded, a hyper-wiki, so that differences of opinion could
exist side by side rather than attempting to obliterate each other
through ping pong edits.

And that's the issue, of course, in journals or on wikipedia:  whether
the metastasized consensus can silence minority opinion by declining
to publish or by blacklisting ip addresses or otherwise excluding them
from the one true venue.  Free speech meets true speech.

-- rec --


Re: Homeostasis by Peer Review

Robert Holmes
Or become editor of a journal and then publish all your own stuff there. Just like the editor of Chaos, Solitons and Fractals (Elsevier) did. Story at:
http://www.nature.com/news/2008/081126/full/456432a.html

or more accessibly in blogs at
http://scienceblogs.com/pontiff/2008/12/nature_on_el_naschie.php
http://sbseminar.wordpress.com/2008/11/30/laffaire-el-naschie/

By his use of self-publishing El Naschie (the editor) managed to get his journal ranked #2 in Thomson Reuters' Journal Citation Reports in mathematical physics.

Robert
