Guideline
Develop a foundational knowledge of AI technology’s capabilities, limitations, and consequences before using it in various academic and professional contexts. Ethical use should be a primary consideration when making decisions about implementing AI.
AI literacy is essential for making informed decisions about the use of AI in an academic setting. Not only can AI tools help you design learning objectives, create rubrics, and generate study tools, but using them in your teaching can also help students develop the AI fluency they need for future careers. However, these tools present a spectrum of challenges. AI-generated content can include inaccuracies, perpetuate misinformation, and reinforce bias. AI can also be used to falsify or fabricate content and bypass personal effort in learning. Unless you use AI tools licensed by or approved through appropriate university channels, the data you provide them is retained and no longer private, which can lead to unintended violations of data privacy and copyright law (see Data Protection & Privacy). Before you use AI technology as part of your work and teaching, make sure you have a functional understanding of how it works and have critically evaluated its limitations and impact.
Generative AI
As a deep learning model, generative AI is trained to understand and respond to your input in human-like ways, generating digital content such as text, images, video, music, and computer code. Trained on large sets of data, generative AI can take a prompt you write (in plain language) and translate it into new content. Because output is generated by sampling from probabilities rather than by retrieval, the same prompt may return a different result each time it's entered.
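To see why the same prompt can produce different results, it helps to know that these models choose each next word by sampling from a probability distribution over candidates rather than always picking the single most likely one. A minimal sketch of temperature-based sampling, using made-up candidate words and scores (no real model is involved):

```python
import math
import random

def sample_next_token(token_scores, temperature=0.8):
    """Sample one token from model scores using softmax with temperature.

    Higher temperature flattens the distribution (more varied output);
    lower temperature sharpens it (more predictable output).
    """
    tokens = list(token_scores)
    scaled = [token_scores[t] / temperature for t in tokens]
    max_s = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - max_s) for s in scaled]
    return random.choices(tokens, weights=weights, k=1)[0]

# Illustrative scores a model might assign to candidate next words
scores = {"students": 2.1, "learners": 1.7, "faculty": 0.4}
print(sample_next_token(scores))  # usually "students", but not always
```

Because the choice is weighted but random, repeated runs with identical input drift toward different continuations, which compounds over a full response.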
Understanding How AI Functions
When pursuing a foundational knowledge of AI's capabilities and functions, start by getting familiar with the different types of AI and core concepts such as machine learning, neural networks, and large language models (LLMs). Understanding these concepts helps contextualize the engine behind popular generative AI tools such as ChatGPT, Google Gemini, and Microsoft Copilot. With their ability to learn, reason, and process natural language, these tools can power chatbots, generate text and images, and streamline work. They also retain and train on the data you provide them, so take all necessary precautions to safeguard information protected by data privacy and copyright law. See Data Protection & Privacy to learn more about guidelines for protecting data when using AI.
Writing Effective Prompts
Generative AI tools rely on prompts to create content. The quality and complexity of the prompt influence the quality of generated content. While a carefully crafted prompt can produce highly effective feedback, a prompt that lacks specificity and detail (e.g., audience, tone, point of view, format, style) can yield inadequate or unusable results. See Harvard University's Getting Started with Prompts for Text-Based Generative AI Tools article for some helpful tips on writing effective prompts.
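One way to make those prompt elements concrete is to assemble them into a structured prompt. A hypothetical sketch of this (the function name, field labels, and example values are illustrative, not drawn from any specific tool):

```python
def build_prompt(task, audience=None, tone=None, point_of_view=None,
                 output_format=None, style=None):
    """Combine a task with optional context into one structured prompt.

    Each optional field mirrors the kinds of detail (audience, tone,
    point of view, format, style) that tend to improve generated content.
    """
    parts = [f"Task: {task}"]
    labels = [("Audience", audience), ("Tone", tone),
              ("Point of view", point_of_view),
              ("Format", output_format), ("Style", style)]
    parts += [f"{label}: {value}" for label, value in labels if value]
    return "\n".join(parts)

prompt = build_prompt(
    task="Draft a rubric for a short persuasive essay",
    audience="first-year undergraduates",
    tone="encouraging",
    output_format="table with criteria and point values",
)
print(prompt)
```

The same idea works without any code: simply stating the task, audience, tone, and format as separate labeled lines gives the tool far more to work with than a one-sentence request.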
Reviewing & Citing AI-Generated Content
Although a powerful tool to support teaching and learning, AI can generate content that is inaccurate, false, or misleading. It is also subject to bias and hallucination. If the training data used to create an AI model contains social, cultural, or historical bias, the tool can replicate or even amplify that bias. This becomes especially problematic in disciplines dealing with history, ethics, and public policy. When AI hallucinates, it misinterprets patterns and invents information (e.g., statistics, citations, or claims). Finally, AI-generated content may reproduce copyrighted material. Carefully review any AI-generated content you share to ensure it is accurate and includes appropriate citations. Your review is an essential component of responsible AI use.
Ethical Use & Student Learning
In a 2025 survey conducted by the FSU Artificial Intelligence in Education Advisory Committee, FSU faculty shared concerns that student engagement with AI-generated content could lead to misunderstandings, shallow learning, or the adoption of biased assumptions. Faculty also expressed a lack of confidence in the accuracy of AI results. Students had similar concerns, reporting hesitation about the trustworthiness of AI tools and their potential for misinformation. Ethical use should be a primary consideration when making decisions about implementing AI technologies. All use of AI in the classroom should be in service of student learning. See Teaching & Learning to learn more about guidelines for using AI in courses.
AI Literacy Training: To facilitate the responsible and ethical use of AI, the university provides free AI literacy training for all instructors (see current training offerings).