Talk:Open academic practice and Excellence in Research for Australia
1st group meeting notes
Meeting: Leigh, Ben, Jamie, Kasia, James, Sarah, 13/5/2011. Initial notes by James.
Target journal(s)
List of possible journals to target for publication of this article:
- Overall publication
- Discipline-specific publications
- Sports science
- Disaster management
- Keep survey in or not? (it increases the scope, but would it delay publication?)
- A* and A journals (end of June)
Continuum of openness:
- Allow a "conditional" or 1/2 star rating (e.g., if the author pays to publish openly) and include extra notes
- Open access - rating
- Open access - condition?
- Open format - rating
- Open format - condition?
- Open licensing - rating
- Open licensing - condition?
Include date of entry
- Template for journal survey by mid 20 May
- Initial survey of journals in spreadsheet by end of June
- Aim for publication by end of year
Other actions
Leigh to bring in Diane Phillips - advise about overall target journal
2nd group meeting notes
Meeting: Leigh, James, Sarah, 1/7/2011. Initial notes by James.
These have been expanded during the initial coding period. The extra conditions may or may not be necessary in order to address the overall research questions, so it's really up to the individual researcher how detailed they wish to be. For the overall paper, probably only simplified data (i.e., whether each journal is open or closed, and percentages) will be reported, but additional details may be appropriate for a discipline-focused paper.
Selection of journals
We need to clarify the method for selecting journals from large discipline lists. Up to now the strategy has been to survey the A* and A journals within a field. Several issues have now become more apparent:
- ERA rankings have been dropped (see section below)
- Some disciplines have a small number of journals, so they all (A* through C) need to be surveyed.
- Other disciplines have a large number of journals, so they need to be sampled in such a way as to generate data representative of the ERA-recognised journals in that field.
- Sampling from the top journals only will not provide a representative sample, therefore a form of random sampling is needed. Options include:
- Quota sampling with random selection - e.g., target, say, min. 50 journals per discipline. Could be either systematic random (every xth item from the alphabetical list) or selected via random number generator.
- % sampling with random selection - the problem with this approach is that disciplines vary so much in size that %s will need to vary
- Stratified sampling - sample a quota or % from each rank within each discipline. This could be a good method, but given that the rankings were dropped by ERA as of June 2011, we need to adjust our method for evaluating ERA against open journal criteria so that it remains useful and relevant beyond that date.
- We also need to clarify whether to use only journals with target ANZFoR1 codes or to include journals with ANZFoR2 and ANZFoR3 codes as well. The former would mean a tighter concentration of journals which have the target discipline as their primary focus. James discovered that the list from Deakin combines the FoR codes and so can include journals whose relevance to the discipline is quite small.
- This generated discussion on the validity of using the Deakin list in our assessment. We agreed that we should use the source list http://www.arc.gov.au/era/era_journal_list.htm; while this is not as usable as the Deakin list, it reassures us that the assessment is consistent with the source. It was proposed that James and Leigh download the files from the ARC site and generate discipline-specific lists based on the agreed sampling method.
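The quota-based sampling options above can be sketched in Python. This is a minimal illustration only: the journal names, the discipline size of 300, and the quota of 50 are placeholder assumptions, not project data.

```python
import random

# Placeholder alphabetical journal list for one discipline (hypothetical names).
journals = [f"Journal {i:03d}" for i in range(1, 301)]

QUOTA = 50  # e.g., a minimum of 50 journals per discipline

# Option 1: systematic random sampling - every xth item from the
# alphabetical list, starting at a random offset.
step = len(journals) // QUOTA          # x = 300 // 50 = 6
start = random.randrange(step)         # random starting point in 0..step-1
systematic_sample = journals[start::step][:QUOTA]

# Option 2: selection via a random number generator - a simple random
# sample of QUOTA journals without replacement.
random_sample = random.sample(journals, QUOTA)

print(len(systematic_sample), len(random_sample))  # 50 50
```

Either option yields the same sample size per discipline regardless of how large the discipline list is, which is the point of quota sampling over %-based sampling.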
ERA ranking dropped
- ERA has dropped the journal rankings, so let's move forward (we are still in the early stages of the project) and not base the article in any way on the rankings, although this change should be noted and documented in the article.
- It is recognised that the change in methodological strategy will particularly affect those surveying large fields, notably psychology, education and sport.
- Yes, I agree James, we should move it. Leighblackall 10:14, 14 August 2011 (UTC)
Discussion about conditional openness
In July, Kasia and Jamie told the team that they would be simplifying their review of journals related to nursing by first assessing open access: if a journal wasn't open access, they would not assess copyright or format. Leigh agreed with this method, seeing it as significantly reducing the workload and allowing a greater number of journals to be assessed, rather than just a sample.
In mid August, Ben emailed a link to http://www.plosone.org/static/information.action as an example of a journal that provides open access, free copyright, and a reusable format, but is pay-to-publish. The journal does mention that not all authors need to pay to publish:
- "We offer a complete or partial fee waiver for authors who do not have funds to cover publication fees. Editors and reviewers have no access to payment information, and hence inability to pay will not influence the decision to publish a paper."
It doesn't say how it determines the waiver.
Leigh suggested that for Kasia and Jamie's assessment method, this journal not be counted as open access, because it is pay-to-publish and therefore restricts who can publish through it. He clarified that where team members are recording conditions on access, journals like these should be recorded as conditional access - pay to publish.
Ben sees things differently: he believes Leigh is allowing his views about openness to cloud his judgement, and that Kasia and Jamie should go further than yes/no binary assessments of journals.
Leighblackall 10:14, 14 August 2011 (UTC)