Ethical Considerations and Guidelines Development on the Use of Gen AI for Grant Development
Responsible and ethical integration of Gen AI into grant development requires awareness of institutional and funder guidelines. Below are some key ethical considerations researchers need to be aware of when using Gen AI for grant development. For more details, refer to the Library’s Ethical Considerations webpage and the University of Alberta’s Artificial Intelligence Data Safety Guidelines.
- Transparency and Disclosure: Researchers have an ethical obligation for full and honest disclosure of the Gen AI tool and version used, how it was used (including the prompt), and the date it was used.
- Accountability and Originality: Researchers must take full responsibility for the originality and accuracy of their proposals. Gen AI is a tool, not a co-author or source of truth.
- Algorithmic Bias and Fairness: Researchers must review all AI-generated texts for potential bias to ensure proposals comply with human rights and fairness obligations and promote Equity, Diversity, and Inclusion (EDI).
- Data Privacy and Confidentiality: Use U of A enterprise-level Gen AI tools (like Gemini accessed through CCID) for sensitive work, as commercial tools may use your data for future model training. Never use commercially available tools with confidential or unpublished material.
The table below summarizes the key privacy differences between free and enterprise-level Gen AI tools.
| Feature | Free Gen AI Models (e.g., public ChatGPT) | Enterprise/Paid Models (e.g., Gemini accessed through CCID, some commercially available LLMs) |
| --- | --- | --- |
| Data Usage | Your prompts and data are typically used to train and improve the public model by default. | Your data is not used for training the model. It remains within your organization's environment. |
| Privacy Controls | You may have to manually opt out of data training through a settings menu. Even then, data might be retained for a short period. | Data privacy is a core, default feature. No manual opting out is required. |
| Data Residency | Data could be stored or processed anywhere in the world. | Admins can choose the region where data is stored, with legally binding guarantees that data will remain within specific geographic boundaries. |
| Data Confidentiality | High risk of data leakage. Confidential or sensitive information you enter could potentially be learned by the model and inadvertently appear in another user's output. | Data is isolated and confidential. It stays within a secure, "closed-loop" environment specific to your organization. |
| Security & Compliance | Often lacks formal security certifications or compliance with regulations (like HIPAA) and may not have a clear Data Processing Addendum (DPA). | Built with security and compliance in mind. |
Adapted from Association for Talent Development: Free vs Paid AI Services: Navigating the Privacy and Security Landscape.
Key Policy Guidance
- Canadian Federal Funding Agencies (Tri-Agency Council and CFI): Consult the Guidance on the use of Artificial Intelligence in the development and review of research grant proposals. Applicants are fully responsible for the entire content of their applications and must ensure all sources are properly acknowledged. Reviewers are explicitly prohibited from using online tools to evaluate applications.
- University of Alberta: The Framework for the Responsible Use of AI at the University of Alberta provides guiding principles for use in all university-related work, including research grant development.