This article summarizes four areas of guidance that MIT research administrators should know, both to inform your own work and to support the valuable advice you provide to your colleagues in research.
Maintaining research security means preserving the integrity of the research enterprise and the public’s trust while we pursue MIT’s core mission of research and teaching.
As a research administrator, there are four things you can do to help:
- Know when to ask for help
- Recognize risks and learn how to mitigate them
- Understand the difference between institutional responsibilities and outside professional activities
- Think critically before using generative AI tools
1. Know when to ask for help
The Office of the Vice President for Research (VPR) has developed a Discussion Guide on Research Security and Compliance to support principal investigators (PIs) in fostering an open dialogue with their research groups and advisees on key topics. The guide was designed for use as a conversational tool to support informal discussions in a setting such as a lab meeting, where it’s important for PIs to establish clear expectations with their group members.
However, research administrators may also find it useful as a primer or onboarding resource covering some of the key principles and policies that affect MIT research groups: transparent reporting, export controls, cybersecurity, international travel, and working outside the research group. The guide is not comprehensive; rather, it is intended to help researchers identify situations that require particular care. The guide points to several key resources for help — including research-compliance-help@mit.edu, which you can use to contact the Research Compliance team in VPR at any time for advice on matters of research security and compliance.
2. Recognize risks and learn how to mitigate them
As Maria Zuber and Richard Lester noted in a March 2024 letter, “Many worthwhile activities that advance MIT’s research and teaching mission may nevertheless pose some amount of risk in the current global context.”
“With well-informed judgment,” they added, “such risks can often be mitigated or eliminated.”
A new VPR webpage on Assessing and Mitigating Risk compiles requirements and guidance associated with a range of academic and research activities, including:
- Participation in talent recruitment programs
- Informal international collaborations
- Hosting students, researchers, and visitors from outside the US
- Accepting gifts
- Traveling abroad
- Participating in executive and professional activities
- Mentoring, advising, or tutoring non–MIT-enrolled students
- Taking a sabbatical or leave of absence from MIT
- Giving lectures, taking on teaching engagements, or serving on review panels
3. Understand the difference between institutional responsibilities and outside professional activities
Outside professional activities (OPA) can create potential conflicts of interest or commitment. As OPA reporting season approaches in May, take a few minutes to review current guidance from the Office of the Provost. A new webpage on outside professional activities summarizes the internal MIT reporting requirements, including:
- Examples of the types of activities that do, or do not, need to be reported.
- Clarification on how faculty and others with consulting privileges may use a limited amount of Institute time to pursue OPA. Note that this varies by role.
- Guidance on pursuing OPA while on leave/sabbatical or while on a visa.
- Expectations regarding the involvement of other MIT personnel or resources in OPA.
- Additional considerations relating to external academic appointments, consulting, new commercial ventures or start-ups, and international OPA.
4. Think critically before using generative AI tools
IS&T has released new guidance, developed with the Office of General Counsel, on the use of generative AI tools.
We encourage you to read the full guidance, but research administrators should particularly take note of this point:
Do not enter MIT proprietary information, third party information, or data classified as Medium Risk or High Risk into publicly available generative AI tools or services not subject to an Institute licensing agreement.
Examples of such information and data include:
- Non-public research results and data
- Unpublished research papers
- Confidential information received from third parties (such as research sponsors and collaborators)
- Unpublished invention disclosures and patent applications
- Institute financial and human resources information
- Personally identifiable information (including, for example, student records, medical records)
- Any information subject to legal or regulatory requirements necessitating its proper safeguarding and handling
- Any other information not intended to be freely available to the general public, or to the MIT community without access controls.
A few examples of generative AI tools that are NOT licensed by MIT (at the time of publication) include Midjourney, Stable Diffusion, and OpenAI’s ChatGPT and DALL-E.
IS&T has published a list of tools that are licensed for use by the MIT community. For more information on MIT’s approach, please review the full guidance from IS&T.