To effectively moderate sessions at a meeting or a graduate program retreat, it’s crucial to manage time efficiently to ensure that the event runs smoothly and on schedule. This involves clear communication with presenters about the session rules, managing Q&A sessions judiciously, and being prepared to enforce time limits with a firm but fair approach. The goal is to create an environment where each speaker has their allotted time respected, the audience remains engaged, and the overall program adheres to its intended timeline. Here’s a guide I wrote for the recent QBC retreat for student moderators.
- Days to hours before the session, send the presenters an email instructing them about the “rules” for the session.
- Before gathering the speakers, figure out how questions will work. Will people shout from their seats, will a runner pass a microphone to people in their seats, or will there be a few microphone stands for people to queue up at?
- Gather your speakers in the break before the session starts and ensure their laptops can plug into the A/V system. Tell each of them the following rules:
- When you will give them a warning wave indicating that they have X minutes remaining (usually 1 for a 5-10 min talk, 2 for a 10-20 min talk, 5 for a 40 min talk)
- That you will stand up when they are at time (I often also threaten with a beach ball or some object at full time)
- That their laptop will be unplugged if they exceed the time of the talk + questions
- To plug in the laptop of the next speaker while the preceding speaker is answering questions
- Note that PowerPoint sometimes has issues when the presentation is already in full-screen mode when you plug into the projector. It’s better to start out of presentation mode and enter it AFTER plugging into the projector.
- There isn’t a need for a long introduction in a session with multiple speakers (I reserve long intros for keynotes or single seminars). Simply state: “Our next speaker is X.” Most speakers will start with their title anyway, so there’s no need for you to read it. Keep it moving!
- At the end of the talk, stand up and moderate the Q&A portion. Speakers only get questions if they finish their talk with enough time remaining. If they went over time, NO QUESTIONS! If they encroach into the question time, limit questions to 1 or 2.
- If there are no questions after an awkward beat… YOU MUST ASK A QUESTION. It can be as simple as:
- I didn’t understand X; can you explain it again?
- What would you do next?
- What type of data can’t currently be collected, but you dream would answer this question?
- Choose audience members to ask questions:
- Favour learners (postdocs/students), especially for the first question.
- Keep in mind diversity of who gets to ask questions.
- Cut off questions at the full time with the line: “It is wonderful to see such enthusiasm. Speaker X will be around later to answer questions. Our next speaker is Y.”
Here is an example email I sent for the Protein Society this summer, where I moderated a session:
Looking forward to meeting at the upcoming Protein Society meeting. As the session moderator for “RNA-Protein Machines: Ancient Synergies”, I am passing along some of the instructions here:
1. Session Preparation: Please make sure to be present in the session room at least 15 minutes before the scheduled start time. This will allow us to coordinate and ensure that there are no A/V hiccups.
2. Time Management: To maintain the session’s schedule, it is essential that each speaker starts and ends their presentation on time. I am an “activist moderator” and will cut you off if you go over time (maybe with some kind of beach ball or water gun)!
3. The following time limits have been set for the respective presentation types:
- Senior Talks: 25 minutes for the presentation + 5 minutes for discussion
- Young Investigator Talks: 12 minutes for the presentation + 3 minutes for discussion
- Flash Talks: 2 minutes each for introducing your research/poster (with no Q&A session)
Let me know if you have any questions and I look forward to a great session!
Longtime friend of the lab, Michael Wall, will be visiting and delivering a seminar on his pioneering work using diffuse X-ray scattering and molecular dynamics to study proteins.
November 2nd, 2023 4:00pm in GH S201
Diffuse X-Ray Scattering to Shed Light on Protein Dynamics
Michael Wall, Los Alamos National Laboratory
Dynamics in protein crystals gives rise to diffuse X-ray scattering – intensity beneath and between the Bragg peaks in diffraction experiments. Recent improvements in X-ray beamlines and detectors have created new opportunities for using diffuse scattering to understand protein dynamics. In this talk I will introduce some basic concepts about diffuse scattering from protein crystals and the connection to dynamics. I also will review some modern approaches to diffuse data collection, processing, analysis, modeling, and simulation.
Our journal clubs aim to provide an environment for continued learning and critical discussion. Based on the discussion, we also brainstorm action items that individuals and labs can implement. Our discussions and proposed interventions reflect our opinions based on our identities and lived experiences. Consequently, they may differ from the discussions held by those with other identities and/or experiences.
This journal club took place among the entire Fraser lab. Due to the size, we split into three groups. Each group had unique but overlapping conversations. Below are the major points discussed by each group.
Discussion Leaders: Stephanie Wankowicz, Daphne Chen, Eric Greene
“Differential retention contributes to racial/ethnic disparity in U.S. academia”
Summary and Key Points:
The top ranks of academia, particularly tenured faculty positions, suffer from a glaring lack of racial diversity (1). The cause of this lack of diversity is commonly attributed to challenges in recruitment and retention. Recruitment involves increasing enrollment of students in undergraduate or graduate programs, while retention focuses on keeping people in the ‘academic pipeline’ as they transition from role to role. Insufficient recruitment is widely recognized as a critical contributor to the lack of diversity in STEM fields; however, retention also significantly contributes to this disparity (2-4).
This paper addresses these concerns by focusing on differential retention. The authors frame retention through a null model: if all else were equal, given the number of academics at stage i in a particular race category, there should be a proportional number of academics in that race category at stage i + 1. They examine the academic career trajectory trends of each NIH race category (5), then compare the distribution predicted by this model to what is observed in NSF survey data. This comparison allows them to ask at which stage each race tends to “fall out” of the academic pipeline. The trends presented in this paper reveal a significant dropout of certain races when moving from one stage to another. This was most evident in the transition from grad school to postdoc, with a significant dropout of Black and Hispanic academics.
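The null model above can be made concrete with a short sketch. The counts below are invented for illustration, not data from the study: under proportional retention, each group’s share at stage i + 1 should match its share at stage i, and any gap between the observed and expected counts indicates where a group is “falling out” of the pipeline.

```python
def expected_next_stage(counts_stage_i: dict, total_next: int) -> dict:
    """Under the null model, each group's share at stage i+1
    equals its share at stage i."""
    total_i = sum(counts_stage_i.values())
    return {group: total_next * n / total_i for group, n in counts_stage_i.items()}

# Hypothetical counts of grad students by (made-up) group labels
grad = {"A": 700, "B": 200, "C": 100}
postdoc_total = 300  # hypothetical total number of postdoc positions filled

expected = expected_next_stage(grad, postdoc_total)

# Hypothetical observed postdoc counts; groups falling below their
# expected count are dropping out of the pipeline at this transition.
observed = {"A": 230, "B": 45, "C": 25}
shortfall = {g: observed[g] - expected[g] for g in grad}
```

Here group B would be underrepresented at the postdoc stage (45 observed vs 60 expected), which is the kind of signal the paper quantifies across NIH race categories.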
This study utilizes the NIH racial categories, which are extremely broad. We discussed how these categories oversimplify racial groups in the United States. The international scholars in the lab also provided perspective on how racial categories are a region- or country-specific issue, with many countries not discussing race due to much more homogeneous societies.
Academic Career Trajectories:
The study broke down the transition from each ‘stage’ of an academic career (graduate, postdoc, pre-tenure faculty, post-tenure faculty). While this analysis removes many confounding factors, the academic career path is not for everyone, and restricting the analysis to only those who want to enter the next stage of academia might highlight different patterns.
However, given this paper’s clear trends and the lack of survey data on career goals, the overall conclusion would likely remain unchanged even with that restriction. While racism is at the core of these disparities, we discussed specific differences at each career stage and potential solutions.
We also discussed the need for more career guidance support for people in the postdoc phase, including financial support and mentorship. University postdoc offices often try to support thousands of trainees with only 1-2 full-time employees (6). This lack of general support isolates trainees, especially since many of the inclusive social groups found in graduate programs do not exist at the postdoc level (7), making for a much more isolating experience.
The lack of funding or support for research projects can be more relevant at the postdoc and pre-tenure phases. Research focused on different racial categories, such as health disparities research, is underfunded (8). Further, there is bias in obtaining funding (9).
Internal Lab Support:
While most of these changes need to occur on an institutional scale, we also acknowledged the significance of peer mentoring in fostering retention and support among lab members. Being social and building relationships with lab mates can mitigate a sense of not belonging, but does not eliminate it. Recognizing the value of non-lab-related peer mentoring networks, such as connections with individuals from other labs or institutions, we discussed how these external support systems can contribute to a cohesive and well-functioning lab environment. Furthermore, creating a supportive environment can help individual members see their next step in academia and expose them to career options they would not have otherwise considered. We acknowledged the need for regular conversations about careers and next steps, as many trainees (graduate students and postdocs) tend to put off considering their future to focus on their science, and PIs should be, but are not always, proactive in initiating these conversations.
Hypotheses and potential solutions for improving retention:
We discussed the socioeconomic factors that present significant obstacles for individuals pursuing careers in higher education. These factors include the affordability of college education, wage loss during postdoctoral training, reliance on family support, costs associated with grad school interviews and applications, and the increasing financial burden of each education stage (on top of the cost of moving between stages). We also noted that gender plays a prominent role in many of these transitions, especially grad school to postdoc and postdoc to pre-tenure, as this is when many people in the canonical ‘academic pipeline’ have children; the hard work of childbearing falls disproportionately on women, presenting a broad gender-specific barrier to advancement (7). More detailed data would provide valuable insights into the career trajectories of those opting out of academia, furthering our understanding of the challenges and reasons behind their decisions.
- How does the dropout rate from one step to another look among only those who desire to continue in academia?
- How different does this trajectory look for different genders?
- Why do people not move on to a postdoc or pre-tenure position?
- There are also significant dropouts observed from pre-tenure to tenured positions. Why are universities not supporting their pre-tenure faculty through the tenure process?
- What impact does observing a historically underrepresented professor not getting tenure have on an institution’s student population?
- We’d love to see follow-up analyses of this data set, particularly how these trends hold up/change for different disciplines and/or institutions. Can we identify and learn from those demonstrating positive progress toward inclusive academic retention?
Proposed Action Items:
- Advocate for increased funding and support of the postdoctoral affairs office.
- Waive application fees for graduate school and/or provide travel funding up front (instead of through reimbursement).
- Provide resources or opportunities for trainees to form peer mentoring networks, such as socials/mixers, funding for groups based on specific career/research goals, etc.
- Bring awareness of these racial disparities to admission committees, hiring committees, and hiring managers.
1) Research: Decoupling of the minority PhD talent pool and assistant professor hiring in medical school basic science departments in the US
2) How Gender and Race Stereotypes Impact the Advancement of Scholars in STEM: Professors’ Biased Evaluations of Physics and Biology Post-Doctoral Candidates
3) Academia’s postdoc system is teetering, imperiling efforts to diversify life sciences
4) Tenure Decisions at Southern Cal Strongly Favor White Men, Data in a Rejected Candidate’s Complaint Suggest
5) Racial and Ethnic Categories and Definitions for NIH Diversity Programs and for Other Reporting Purposes
6) Growing Progress in Supporting Postdocs
7) Academia’s postdoc system is teetering, imperiling efforts to diversify life sciences
8) Role of funders in addressing the continued lack of diversity in science and medicine
9) Fraser Lab DEIJ Journal Club - Blinding Grant Peer Review
As a faculty member at the University of California, San Francisco (UCSF), I am often asked about my approach to evaluating faculty applications. In writing it out, I not only clarify my thinking, but also provide transparency about how one faculty member evaluates applications. Additionally, by sharing this, I hope to get feedback to help improve my own process for evaluating applications in the future.
Evaluating faculty applications, in my view, is akin to the process of protein folding, as described by Levinthal’s paradox. Levinthal’s paradox suggests that it would be virtually impossible for a protein to achieve its functional structure by exhaustively exploring every possible conformation due to the sheer number of potential configurations. Instead, proteins navigate through a funnel-like process, where a sequence of favorable local interactions steers the protein toward its final, folded ensemble. When I evaluate faculty applications, I adopt a similar approach. I don’t undertake an exhaustive examination of every single detail of all applications. Instead, I employ a funnel-like process, starting with broader criteria, then progressively narrowing down to more specific aspects of the proposed research program. I strive to do this without resorting to traditional markers of prestige such as the reputation of the journals where they’ve published or their academic pedigree. This process guides me toward the most promising applications that resonate with me both scientifically and in terms of shared scientific values.
The first step in my evaluation process is to review the Diversity, Equity, and Inclusion (DEI) statement. Based on other published rubrics, I assess the applicant’s awareness and involvement in DEI initiatives. I’ll also look over any teaching or mentoring track record as part of this, recognizing that not everyone has had the chance or environment to fully engage in these activities. This is a critical step for me. If an applicant does not demonstrate a strong commitment to DEI, I do not proceed further with their application. This initial screening takes less than five minutes per candidate and typically eliminates about half of the applicants.
Next, I turn my attention to the research statement. The opening page (and especially the opening paragraph!) is crucial here. I look for a clearly articulated problem or a set of problems that the applicant intends to address. If the scientific problem statement, its significance, or the applicant’s approach to solving it are unclear to me, I do not proceed with considering the candidate. This step takes less than two minutes per candidate and usually eliminates another half of the remaining applicants.
For the remaining candidates, I undertake a thorough review of the entire research statement and cover letter. I study the applicant’s key preprints and papers to familiarize myself with their specific scientific questions and approaches. Interestingly, many of the faculty members I’ve been involved in hiring at UCSF had not yet published their major work in a peer-reviewed journal at the time of their application. This is not a deterrent for me; in fact, I embrace preprints wholeheartedly. Preprints provide an open and immediate insight into a researcher’s latest work, and I am fully capable of evaluating them on their own merits. However, what I find less favorable are “confidential manuscripts in review”. Because these do not offer me the same level of transparency as preprints, I won’t review them as part of the application. Including such “confidential manuscripts” demonstrates a disconnect with the open science principles that I value in future colleagues.
During this stage, I also try to evaluate how successful they have been in making progress on key problems in prior career stages by scanning letters of reference and additional papers by the applicant (and often in the field of the applicant).
I also want to clarify what I look for in reference letters, even though they are a minor factor relative to the research proposal and papers of the applicant. It’s common for every applicant to be described as “the best person who has passed through the lab in years,” so overall praise isn’t the differentiator for me. Instead, I focus on three key things:
1 - Context for the scientific barrier the candidate overcame in their prior work.
2 - Discussion of how the candidate’s FUTURE work will differentiate from the thrust of their current lab.
3 - Corroborating data on teaching, mentorship, and outreach.
Letters can add depth to these three dimensions, but rarely detract from them. While it’s not a strict requirement, a well-crafted letter that resonates on these three issues can be immensely helpful in painting a comprehensive picture of the candidate.
This overall step of evaluating the research statement and papers (with a scan of letters of references and other papers) is time-intensive, taking approximately 20 to 40 minutes per candidate. However, this is the point where I decide if a candidate should be evaluated by the entire committee, generally nominating about 10-15 candidates.
At this point, I also get the shortlists of the other members of the committee. Some of my colleagues may weigh other factors such as the prestige of journals where the applicant has published, their academic pedigree, or the likelihood of securing funding. This diversity in evaluation criteria is a strength of a committee approach, provided we are all aware of and acknowledge our biases. We typically get about 100-300 applicants in a cycle, but there is usually significant overlap in shortlists. Generally, the committee process leads to a shortlist of ~25 candidates.
The next step involves a deeper reflection on each shortlisted application. I spend an additional 30 minutes per application, contemplating the fit of the research statement with our institution and gauging my excitement level about the proposed research. I again consider the DEI and teaching/mentoring efforts. My aim is to identify 5 to 7 applicants that I am extremely enthusiastic about, 10 applicants that I am open to learning more about if other committee members are sufficiently enthusiastic, and 5 to 10 applicants that I am skeptical about but am willing to be convinced by other committee members.
Finally, we (the hiring committee) engage in a comprehensive discussion and ranking process. Each committee member presents their shortlisted candidates, and we collectively rank them for Zoom and/or on-site interviews. This process tries to offer a balanced assessment of each candidate, helping us identify the most promising faculty members for UCSF.
In conclusion, my approach to faculty application evaluation is designed to be rigorous and thorough, while being efficient and minimizing proxies of prestige like journal name or institution. I’m cognizant that I have my own implicit and explicit biases, but what is outlined here is a reflection of how I try to identify candidates who not only excel in their research but also share our values. I believe it’s important to share my process, not as a standard, but as an example of one possible approach. I encourage anyone serving on a hiring committee to outline their own unique criteria and detail the process they use to arrive at a shortlist.
Thank you to Prachee Avasthi, Zara Weinberg, Willow Coyote-Maestas, Stephanie Wankowicz, Chuck Sanders, Brian Kelch, and Jeanne Hardy for feedback and discussions about this topic.
A group of scientists within the Fraser, Coyote-Maestas, and Pinney labs have begun a journal club centered around issues of diversity, equity, inclusion, and justice within academia, specifically in the biological sciences.
Our goal is to provide an environment for continued learning, critical discussion, and brainstorming action items that individuals and labs can implement. Our discussions and proposed interventions reflect our own opinions based on our personal identities and lived experiences, and may differ from the identities and experiences of others. We will recap our discussions and proposed action items through a series of blog posts, and encourage readers to directly engage with DEIJ practitioners and their scholarship to improve your environment.
November 4th, 2022 – Blinding peer review
Discussion Leader: Eric Greene
Summary Article: “Funding: Blinding peer review”
Primary Article: “An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes”
Bonus Article: “Strategies for inclusive grantmaking”
Summary and Key Points:
STEM research funding is a highly competitive space that has a persistent lack of diversity and representation, especially at the faculty level. I chose this case study as it discusses one of the largest current racial disparities in STEM, highlights a source of white privilege that directly impacts lab funding, and provides experimental evidence towards one mitigation strategy.
The NIH is a substantial funding source for biomedical research in the US, and NIH funding is foundational to the existence of many laboratories that are driving biomedical scientific discovery. However, there is a large and persistent funding gap between White and Black investigators, where Black PIs are funded at 55-60% of the rate of White PIs.
In response to this disparity, the NIH conducted a study on the effects of blinding applicants’ identity and institution on the review of R01 proposals. The goal of this large experiment was to gain an understanding about the role of peer review in facilitating racial bias in grant awards and to understand the extent to which blinding applicant identity could blunt racial bias. The experiment uncovered the following:
Scores for applications from Black PIs were unaffected by blinding, but scores for applications from White PIs were significantly lower when the White PI’s identity was blinded, such that the racial gap was cut in half. This finding could be due to the “Halo effect”, where personal/institutional prestige dramatically upweights advantaged/privileged individuals, and can be seen as another mechanism fueling a ‘winners keep winning’ phenomenon. Indeed, the “Halo effect” has been indicated to be a potent factor in manuscript peer review.
The principal critique of invoking the “Halo effect” to rationalize the findings of this study is that proposal writers did not write their proposals with identifying information redacted; redaction was done administratively on previously reviewed R01 applications, leaving uncertainty regarding the impact of administrative redaction on ‘grantsmanship’. However, we discussed the likelihood that applicants who benefit from individual/institutional prestige would write favorably toward this status in their applications, in effect entrenching any positive “Halo effect” benefit.
Blinding applicant identification on grant proposals is not a silver bullet that solves racial disparity in NIH funding, and blinding itself is imperfect: ~22% of reviewers were able to positively identify the blinded applicant’s identity. However, it is one tool with a demonstrated ability to blunt reviewer bias. While blinding was somewhat effective here, double blinding and/or tiered blinding of application materials could be used instead and may hold greater potential.
A key part of our discussion was about the review criteria for NIH funding that explicitly required a numerical evaluation of the individual and institution. Evaluation of a person contributes to an obligate entanglement of one’s past scientific accomplishments with their future potential during the grant review process. Not only can this equivalence be false (people often succeed past initial setbacks), but it also can be harmful by promoting an applicant’s self-worth to be tied to their productivity. Funding requires accounting for equipment available to carry out the research, which is important for accountability on the part of the investigator, but does not necessarily require a numerical score. This detailed level of evaluation would prompt reviewers to score prestigious/well-resourced institutions higher even if the same research could be carried out elsewhere. We discussed as an alternative whether equipment/facilities categories could be scored as ‘sufficient’ or ‘insufficient’ and not influence the overall impact score of the application.
- How does one justly judge an application as fundable?
- The ‘Halo effect’ in consequential academic evaluation processes has amassed supportive evidence beyond grant funding. How do we best counteract this effect to create a level playing field?
- Blinding applicant identity can help, even if imperfect. How do we improve blinding processes through an equity lens?
- Another explanation for lower funding rates for Black PIs stems from the subject matter of study, such as health care topics of interest to communities of color, which, though important, may not be of high funding value to the reviewer or reviewing institute. How can these health care topics be adequately elevated and funded?
- To what extent do non-NIH funding mechanisms also incur racial disparity? What have other organizations tried to mitigate? Have these strategies worked?
Proposed Action Items:
While trainees may have limited influence to change the course of NIH peer review, there are nonetheless actions that one can take:
- Call your Representative/Senators to implore them to raise the NIH budget. NIH-sponsored research is highly valued by the general public, and with more funds, the 10-30% funding rate will increase and be less demoralizing to independent investigators and trainees.
- Should you find yourself in a position of power as a peer reviewer, practice empathy during the review process and familiarize yourself with the biases that can crop up in the process.
- Vote. The NIH is a government entity and is not immune to political authority figures.
- Encourage unsuccessful applicants to pursue resubmission. Rejection is hard but community can help.
- Encourage other, non-federal funding mechanisms to blind reviewers, or, if they have the budget, to run a study where each application is evaluated both blinded and open; compare the scores and who gets funded.
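As a minimal sketch of what that last comparison could look like, the snippet below summarizes per-application differences between paired blinded and open scores. All numbers are invented, and the convention that lower scores are better (as in NIH scoring) is only an assumption for illustration; a real study would also apply an appropriate paired statistical test.

```python
# Hypothetical paired scores: each application reviewed twice,
# once blinded and once open. Lower = better (assumed convention).
blinded = [30, 42, 25, 38, 50, 33, 41, 29]
open_review = [27, 40, 26, 31, 45, 30, 39, 24]

# Per-application difference; positive means the application scored
# worse when the reviewer could not see who wrote it.
diffs = [b - o for b, o in zip(blinded, open_review)]
mean_diff = sum(diffs) / len(diffs)

# A consistently positive mean difference would suggest that some
# applicants benefit from identifying information under open review,
# analogous to the "Halo effect" discussed above.
print(f"mean blinded-minus-open difference: {mean_diff:.2f}")
```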