In graduate bioscience education, an increasing emphasis on “evidence-based practices,” “data,” and “evaluation” is transforming how programs operate, challenging institutions to develop standards for rigorous data collection and to present results in formats useful to internal and external stakeholders. This range of stakeholders includes students, faculty, staff, institutional leaders, funding agencies, and the general public. These expectations often manifest as ambiguous or overlapping requirements that can form a barrier to entry for faculty and staff, including those who support extramurally funded training programs. This workshop will take the form of a facilitated discussion with active learning exercises, designed to help participants understand different forms of data reporting requirements. This three-hour session will be useful for those relatively new to evaluation who are interested in an overview of approaches and techniques, as well as in understanding the relationship between different types of evaluation (e.g., program evaluation vs. training evaluation). To provide a real-life case study, the exercises in this workshop will be based on the National Institute of General Medical Sciences (NIGMS) requirements for graduate T32s. However, the lessons will be broadly applicable. Prior experience working on or supporting NIGMS T32s is not required. For those interested in considering graduate programs more broadly (i.e., beyond those funded by training grants), alternative exercises and tools will be provided.

Objectives of this workshop

Participants in this workshop should be able to:

- Dissect a Request for Applications (RFA) to identify various data reporting requirements and articulate how they relate to program evaluation, training evaluation, and dissemination mandates (NIGMS T32 guidelines 2020)
- Map out and explain a standard series of steps to conduct a program evaluation, while considering the stage of development of a program (e.g., needs assessment vs. outcome evaluation) (CDC Types of Evaluation, accessed 2022)
- Compare and contrast logic models and theories of change as visual representations that capture interventions and aligned evaluations; propose improvements to existing logic models; and draft their own logic models (Otto AK et al., 2006; Amundsen C and D’Amico L, 2019)
- Compare and contrast common frameworks for training evaluation (e.g., Kirkpatrick’s model) and explain how these frameworks can help inform the design of instruments and the organization of evaluation data (Praslova 2010)
- Identify existing data sources (including their institutional data sources) that can be incorporated into program and training evaluation

Format

This workshop will be divided into three parts:

Part 1: Understanding data reporting requirements; mapping institutional/organizational interventions
Part 2: Program evaluation in practice and using logic models
Part 3: Training evaluation in practice