Portal talk:Mentorship

Hmm... Good start; I wish I could think like that! You captured the essence of content mentoring, I think. I often have to come at things from a roundabout perspective, and it isn't as concise. I'll see how I do with the other sections if you are done.--Graeme E. Smith 21:27, 29 May 2009 (UTC)

I was indeed done at the time; I needed more time to think about how to go about improving the other sections as well. I'm not sure I've captured the essence of research mentoring completely, though. That probably needs a bit more work. The problem I encountered with the research mentoring section is that the roles were not very descriptive or distinct from each other, so I was unable to figure out a way to use any of it as a basis for tasks. -- darklama 14:36, 30 May 2009 (UTC)

I think you did a wonderful job, considering what you had to work with. AFriedman had come up with the framework, but her whole approach was credentialized. I expanded on her framework because I had no other standard to build from, and it jumped too far too fast. I don't think she understands how hard it might be to get published in a new area where there are few journals and those that exist are dominated by the more famous authors. One task that I don't think you thought about was peer review. I know this has been discussed in the research policy documents and left open, but one thing I know for a fact is that recruiting peers for original research is going to be difficult. Consider my own work, for instance: there are very few peers who are experts in hybrid artificial consciousness, neuroscience, psychology, philosophy, neuroanatomy, attention systems, cognitive architectures, neural networks, etc. The best I can hope for is that each discipline peeks at the little it knows and signs off that I am accurate on that portion; the whole theory is just too broad for any one specialist to evaluate. --Graeme E. Smith 04:45, 31 May 2009 (UTC)

I actually did mention peer review; see the Review task. Or did you mean something else? Do you mean finding people to do peer review as a task? -- darklama 12:47, 31 May 2009 (UTC)

Well, you did mention review. However, I think there is a difference between general review and peer review, and that is the assumption of peerage, which comes with the ability to understand a specialist's work. For instance, AFriedman can review my work, and from the point of view of Wikiversity she is my peer, but she can't comment on it because it moves in a direction she doesn't understand. In fact, even Dr. Laberge, the author of one of the articles I quote the most in my work, finds himself hard pressed to understand my work, and my work is to some extent an extension of his. So yes, recruiting peers to review work that is not understood is part of mentoring a researcher. This is where I think accreditation might come in: how do you recruit specialists in a particular field to review work unless there is a basis for respect?--Graeme E. Smith 19:18, 31 May 2009 (UTC)

Recruiting specialists wouldn't do much good if Wikiversity researchers don't respect or trust the opinions of the specialists, or if the specialists don't respect or trust the work of Wikiversity researchers. Mutual respect and trust are required for recruitment and peer review to work, and they don't have to rest on credentials or accreditation. I think Wikiversity researchers need to be open to having anyone review their work, regardless of whether a person is a specialist or not, in order to lay the groundwork for an open review process. Specialists in turn would need to be open to the idea of an open review process and be willing to review the research work for its own sake. That said, having mentors find willing reviewers when they don't feel qualified to comment themselves could work as a task. Even though that probably doesn't address the problem head on, some sort of solution may eventually be found if problems arise. -- darklama 20:54, 31 May 2009 (UTC)

OK, part of Science 2.0 is the idea of open review/open access. What we could do is feature articles for review on a certain page and encourage reviewers to fill out a questionnaire about each article as their review. We would need to build a database of review forms, but because the forms would be standardized, MySQL should be able to store them. Alternately, we could build a template, and the reviewers would fill in the blanks and save the whole page as a comment; that might work better. Anyone who wanted a review of their article would post it to the page, and everyone who accessed it would be asked to fill out the form. Most probably wouldn't, but those who did would be treated like votes on the various aspects of the review, such as obvious spelling errors and whether the article made sense to them, and a voluntary disclosure section would capture whether they were experts in the field the article was written in. Then a report could be generated that abstracts out the properties of the review. I think this is complex enough that it will require a parser extension, don't you? Maybe a wiki site, say Wiki-Revue? We could test it at beta-Wikiversity if that is still available, or maybe on a sandbox server. Any paper that passes review would be stored in an e-print archive, and that would be open access? Sorry, I get carried away. --Graeme E. Smith 20:47, 1 June 2009 (UTC)
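To make the fill-in-the-blanks idea above concrete, a reviewer's saved comment might look roughly like the following. This is only a minimal sketch: {{Review form}} is a hypothetical template name, and the parameters are illustrative, taken from the aspects mentioned above (spelling, clarity, voluntary expertise disclosure), not an existing Wikiversity template.

 {{Review form
 | article         = Title of the article being reviewed
 | spelling errors = yes/no, with examples if any
 | made sense      = yes/no
 | expert in field = yes/no (voluntary disclosure)
 | comments        = Free-form remarks from the reviewer.
 }} ~~~~

Because every saved form would carry the same parameters, the responses could later be tallied like votes and summarized into a report, whether by hand, by a bot, or by the parser extension suggested above.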

I don't know what you're responding to, but I'll try to respond anyway. I think you are making the review process out to be more complicated than it is. A database is not required; reviews are just a form of feedback. The review process can go on talk pages or a specific subpage, like Paper/review or Talk:Paper/review, to keep it all organized. Each reviewer could even have their own subpage, like Talk:Paper/Reviews/Graeme_E._Smith, to write on. By keeping the review process simple and free-form, people are free to write reviews as they wish rather than being restricted by an inherently inflexible, predefined database system. An open review process is about more than open access and being able to participate. An open review is like being an open book: being able to see how it all works, which includes having a transparent process. There are many problems with the traditional peer review process which slow it down and bog it down. Some things worth considering that come to mind: w:Open peer review, w:Open research, w:Open publishing, w:Transparency (linguistic), w:Transparency (behavior), w:Radical transparency, and even w:Peer review. -- darklama 22:21, 1 June 2009 (UTC)