2017 annual meeting: San Diego, CA
Organizational stuff
1. Presentation of Johnson-Youngstrom Poster Award
The award was presented to Tommy Ng for his work on meta-analyses of imaging. The prize was a $250 debit card. Congratulations!
2. Dues
Dues are $20/year for faculty; students are requested to contribute what they can. Submit payment online via PayPal to Lauren's Gmail account (see email).
3. Ideas for improving the listserv
- Create a member list: name, email, current projects
- Compile symposia when preparing talks: SIG members could submit individual talks, and the SIG can assemble them into symposium submissions.
- Circulating information, job postings, etc.
- Organizing an ABCT Bipolar SIG gathering at ISBD
4. Next steps for Bipolar SIG
- Send around Lauren Weinstock's paper in press
- Build bipolar disorder assessment battery matrix (Youngstrom, Miklowitz, Weinstock)
- Come up with a recommendation for a standard set of core measures; needs to include neurocognitive measures (with Patty Walsh)
Problems with recruitment/retention
1. Recruitment
- Difficulty recruiting participants
1.1 Solutions
- Consenting via video conferencing; automated consent forms
- Truncated assessments, spread across different time points
- Human touch, e.g., sending snail mail and keeping up with participants' lives
- Getting research consent by approaching potential participants in the waiting room
- Importance of well-trained research assistants to ensure a positive participant experience
2. Retention
- The shift toward online data collection makes retention more difficult
2.1 Solutions
- Incentive-based systems for participants who don't come back
- Paying incentives based on the number of completed trials (e.g., $0.01 per completed task, up to $50; see the sketch after this list)
- Progressive reinforcement (earning Amazon gift cards, etc)
- Accounting for inflation (people get paid more over time)
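A minimal sketch of how such an incentive schedule might be computed (Python). The per-task rate and $50 cap come from the example above, but the bonus tiers and the 3% inflation rate are purely illustrative assumptions, not figures the SIG adopted:

```python
# Hypothetical incentive calculator; all rates, caps, and tiers are
# illustrative assumptions, not figures adopted by the SIG.

def task_payment(completed_tasks: int, rate: float = 0.01, cap: float = 50.0) -> float:
    """Pay a small amount per completed task, up to a fixed cap."""
    return min(completed_tasks * rate, cap)

def progressive_bonus(visits_completed: int) -> float:
    """Progressive reinforcement: later visits earn larger gift-card bonuses."""
    tiers = {1: 5.0, 2: 10.0, 3: 15.0, 4: 25.0}  # visit number -> bonus in dollars
    return sum(bonus for visit, bonus in tiers.items() if visit <= visits_completed)

def inflation_adjusted(amount: float, years_in_study: int, annual_rate: float = 0.03) -> float:
    """Scale a payment upward over time so its real value does not erode."""
    return amount * (1 + annual_rate) ** years_in_study

if __name__ == "__main__":
    base = task_payment(completed_tasks=1200)       # $12.00 from per-task payments
    bonus = progressive_bonus(visits_completed=3)   # $30.00 in visit bonuses
    total = inflation_adjusted(base + bonus, years_in_study=2)
    print(f"Total incentive: ${total:.2f}")
```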
Ethical issues
- Discussed reinforcement for clinicians (paying extra) -- not ethical
Assessment
Reliability
Inter-rater reliability is a key topic for the validity of diagnoses, but there has been relatively little sharing of methods across groups. Sharing procedures and having a place to discuss issues and ideas could be helpful.
- Andrew Freeman described procedures for video recording and inter-rater reliability. (<-- could elaborate here)
- What are best practices and feasible options for maintaining high inter-rater reliability in research?
- Coding at symptom level
- ADI-R as a model of how to maintain reliability across groups -- biennial videos disseminated to maintain calibration (Thanks, Talia!)
- [ ] Look at ADI-R training system as a model to emulate
- Lauren Weinstock raised questions about whether to correct errors when more information becomes available, versus living with them as an aspect of reliability.
- Ben Goldstein made a distinction between clerical error and diagnostic evolution or an incomplete clinical picture at the time of the snapshot
- David Miklowitz described a training procedure for the KSADS, focusing on agreement at the item level, because it is easier to achieve than agreement at the diagnosis level (see the kappa sketch after this list).
- Yee et al. JCCAP, filtered vs unfiltered ratings paper
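To make the item-level approach concrete, here is a minimal sketch of computing Cohen's kappa for two raters' symptom-level codes. The ratings and the 0/1/2 coding scheme are hypothetical, and kappa is offered only as one common agreement statistic, not a procedure endorsed by the group:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    assert len(rater_a) == len(rater_b) and rater_a, "ratings must be paired and non-empty"
    n = len(rater_a)
    # Observed agreement: proportion of items the raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions, summed over categories.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical symptom-level ratings (0 = absent, 1 = subthreshold, 2 = threshold)
rater_1 = [0, 2, 1, 0, 2, 2, 1, 0, 0, 1]
rater_2 = [0, 2, 1, 1, 2, 2, 0, 0, 0, 1]
print(f"Item-level kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```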
Rating scales
Meta-analyses are helping to establish which measures are robust, continuing to show good psychometrics across a variety of samples. This could be a place to discuss ideas for secondary analyses and to identify priority areas for future work.
Self-report measures could provide a way to calibrate across raters or sites -- a consistent method, with no variance due to differences in training.
Cross-informant issues
(Let's invite Gaye!)
Open Teaching of Psychological Science is a place where we are sharing scoring information.
REDCap versus Qualtrics
What are the advantages and disadvantages of tightly controlled data capture versus more open systems?
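As one concrete point of comparison, REDCap exposes a token-based API for exporting records. Below is a minimal sketch using Python's requests library; the URL and token are placeholders, and this illustrates the tightly controlled, project-token style of data capture rather than a recommended workflow:

```python
# Minimal REDCap export sketch; the API URL and token below are placeholders.
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"             # project-specific token issued by REDCap

payload = {
    "token": API_TOKEN,
    "content": "record",   # export participant records
    "format": "json",      # request JSON rather than csv/xml
    "type": "flat",        # one row per record
}

response = requests.post(REDCAP_API_URL, data=payload, timeout=30)
response.raise_for_status()
records = response.json()
print(f"Exported {len(records)} records")
```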