# Introduction to artificial intelligence
This section explores the definition, historical trajectory, and foundational concepts of artificial intelligence.
### 1.1 What is artificial intelligence?
Artificial intelligence (AI) is a technology designed to simulate human intelligence. It aims to replicate key aspects of human cognitive abilities, such as autonomy and adaptability [3](#page=3).
#### 1.1.1 Defining intelligence
The concept of intelligence itself is multifaceted, and people define it in many different ways. Common elements, however, include [3](#page=3):
* **Autonomy:** The capacity to act and make decisions independently [3](#page=3).
* **Adaptability:** The ability to learn from experiences and adjust behavior accordingly [3](#page=3).
Howard Gardner's theory of Multiple Intelligences posits that intelligence is not a singular, one-dimensional attribute [3](#page=3).
#### 1.1.2 Misconceptions surrounding AI
Misconceptions about AI are prevalent, often fueled by science fiction narratives, marketing hype, and buzzwords. There is a frequent confusion between what AI can currently achieve and what people *believe* it can achieve. It is important to note that there is no single, official definition of AI [3](#page=3).
### 1.2 AI levels
AI systems are often categorized by their capabilities:
* **Weak/Narrow AI:** This type of AI is limited to performing specific, well-defined tasks, such as those performed by virtual assistants like Siri or large language models like ChatGPT. Weak AI currently exists and is functional [5](#page=5).
* **General AI (AGI):** Also referred to as strong AI, AGI would possess the capability to perform any intellectual task that a human can across all domains. AGI does not yet exist, and experts have differing opinions on when it might be achieved, with estimates ranging from 10 to over 100 years [5](#page=5).
* **Super AI (ASI):** ASI would surpass human intelligence and could potentially understand consciousness and complex needs. This level of AI is purely theoretical and may never be realized, or it could emerge centuries from now [5](#page=5).
### 1.3 History and evolution of AI
The aspiration for artificial beings is an ancient human concept [6](#page=6).
#### 1.3.1 Ancient dreams and early mechanization
* **Ancient Civilizations (3000 BCE - 500 CE):** Concepts of artificial beings can be found in myths, such as Talos, the bronze giant in Greek mythology [6](#page=6).
* **Mechanical Revolution (1600-1800):** This era saw advancements in mechanical devices and foundational computational ideas. Gottfried Leibniz developed the binary number system, which is fundamental to modern computing. Jacques de Vaucanson created intricate mechanical automata, like a mechanical duck [6](#page=6).
* **Computing Pioneers (1800-1900):** Charles Babbage and Ada Lovelace conceptualized the Analytical Engine, an early mechanical computer. George Boole laid the groundwork for Boolean logic, which uses true/false values essential for computation [6](#page=6).
#### 1.3.2 The birth of AI (1940-1960)
This period marked the formal beginnings of AI research.
* **Von Neumann Architecture:** John von Neumann contributed to the development of computer architecture, including the concept of stored programs and data [6](#page=6).
* **Neural Network Research:** Initial investigations into neural networks began in 1943 [6](#page=6).
* **Dartmouth Conference:** This pivotal 1956 event was the first AI conference, where the term "artificial intelligence" was coined. It was characterized by extreme optimism, with predictions that major AI problems could be solved within a single summer [6](#page=6).
* **Early AI Programs:** The Logic Theorist was one of the first AI programs developed. The chatbot ELIZA, created between 1964 and 1966, simulated a psychotherapist and led to the "ELIZA Effect," where users attributed more understanding to the chatbot than it possessed [6](#page=6).
#### 1.3.4 The Turing Test
Alan Turing proposed an influential experiment to assess a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The test involves a human interrogator who interacts with both a human and a computer, aiming to distinguish between them through text-based conversations. If the interrogator cannot reliably tell which is the computer, the machine is considered to have passed the test [7](#page=7).
> **Tip:** The Turing Test is crucial because it provided one of the first objective measures for machine intelligence and shifted the focus to observable behavior rather than internal computational processes. It remains relevant today, prompting philosophical discussions about the nature of imitation versus genuine intelligence [7](#page=7).
#### 1.3.4 AI Winters: Hype versus reality
AI research has experienced periods of reduced funding and interest, known as AI winters, often due to a gap between inflated expectations and actual achievements [7](#page=7).
* **First Winter (1974-1980):** This period saw a halt in funding due to unmet promises and limitations in computational power and available data [7](#page=7).
* **Second Winter (1987-1993):** Expert systems, which were a major focus, proved to be too expensive and complex to maintain, leading to another downturn [7](#page=7).
#### 1.3.5 The revival (1990-2010)
Significant technological advancements led to a resurgence of AI research and development [8](#page=8).
* **Iconic Moments:**
* Deep Blue defeated world chess champion Garry Kasparov in 1997 [8](#page=8).
* IBM's Watson defeated champions on the quiz show Jeopardy! in 2011 [8](#page=8).
* **Paradigm Shift:** The focus moved from rule-based systems to data-driven AI, and from expert systems to machine learning approaches [8](#page=8).
#### 1.3.6 Generative AI and the Transformer architecture
Generative AI (Gen AI) is a recent development that stands out because it *creates* content rather than just classifying it. Gen AI can process context and generate coherent outputs across various modalities like text, images, audio, and video. A key technological breakthrough enabling this is the Transformer architecture, introduced in the 2017 Google paper "Attention Is All You Need". This architecture allows each word in a sentence to "pay attention" to other words for better contextual understanding, forming the basis for models like ChatGPT [14](#page=14).
### 1.4 AI today: Ubiquitous presence
AI has become deeply integrated into many aspects of daily life. Examples include [8](#page=8):
* **Cameras:** Object detection and filters [8](#page=8).
* **Navigation:** Real-time traffic information in maps [8](#page=8).
* **Entertainment:** Personalized recommendations on platforms like Netflix [8](#page=8).
* **Smart Homes:** Thermostats that learn user behavior [8](#page=8).
* **E-commerce:** Product recommendations on sites like Amazon [8](#page=8).
* **Social Media:** Curating user timelines [8](#page=8).
* **Healthcare:** Supporting scan analysis in hospitals [8](#page=8).
* **Finance:** Fraud detection in banking [8](#page=8).
* **Transportation:** Route optimization [8](#page=8).
* **Human Resources:** Screening CVs [8](#page=8).
* **Customer Service:** Chatbots for support [8](#page=8).
### 1.5 The future of AI
The future holds promises and possibilities for AI:
* **Autonomous Systems:** Fully self-driving vehicles, autonomous drones for deliveries, and self-optimizing supply chains are on the horizon [9](#page=9).
* **Human-AI Collaboration:** AI is envisioned as a permanent co-pilot, enhancing creativity and problem-solving abilities. This collaboration will also lead to the emergence of new professions and skill requirements [9](#page=9).
### 1.6 The downsides and risks of AI
Despite its advancements, AI presents several significant challenges and risks [19](#page=19).
* **Bias and Discrimination:** AI systems can perpetuate and amplify existing societal inequalities [19](#page=19).
* **Privacy and Surveillance:** Increased data collection raises concerns about privacy and the balance between security and freedom [19](#page=19).
* **Misinformation and Deepfakes:** The ability to generate realistic fake content can undermine trust in authentic information [19](#page=19).
* **High Costs:** Developing and deploying AI can be expensive [19](#page=19).
* **Environmental Impact:** The computational power required for AI can have a significant environmental footprint [19](#page=19).
> **Tip:** Critical use of AI is paramount. Your own knowledge remains essential, as AI functions as an amplifier. The better your understanding of a subject, the more effective your AI queries will be. Be aware of AI's limitations and understand when it is appropriate to use it. Remember that marketing claims may not always align with scientific reality, so avoid being misled by the hype [19](#page=19).
### 1.7 Key takeaways
* AI is a powerful tool but possesses inherent limitations [20](#page=20).
* Human expertise remains indispensable [20](#page=20).
* Contextual understanding and human judgment are crucial for effective AI application [20](#page=20).
* The impact of AI is real but nuanced [20](#page=20).
* Significant challenges such as bias, privacy concerns, and job displacement are genuine [20](#page=20).
* AI should be viewed as a means to an end, not an end in itself [20](#page=20).
---
# How artificial intelligence works
Artificial intelligence (AI) has evolved through distinct phases, from early rule-based systems to modern data-driven approaches like machine learning, deep learning, and generative AI, each with unique methodologies and capabilities [10](#page=10).
### 2.1 Evolution of AI methodologies
AI's development can be broadly categorized into several key eras [10](#page=10):
#### 2.1.1 Rule-based AI (1950-1990)
This "older" AI relied on human experts explicitly writing rules to guide problem-solving [10](#page=10) [22](#page=22).
* **Advantages:** Predictable, understandable, and controllable [10](#page=10).
* **Disadvantages:** Lacked flexibility and the ability to learn from new data [10](#page=10).
* **Example:** IFTTT (If This Then That) allows users to create automated workflows by setting up "if this then that" rules, linking smart devices and services [10](#page=10) [22](#page=22).
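The "if this then that" pattern can be sketched in a few lines. This is a minimal illustration of rule-based automation in the IFTTT spirit, not a real IFTTT recipe; the rules and event fields are invented for the example.

```python
# Each rule pairs a condition ("if this") with an action ("then that").
# The expert writes the rules by hand; nothing is learned from data.

def make_rule(condition, action):
    return {"condition": condition, "action": action}

rules = [
    make_rule(lambda e: e["type"] == "motion" and e["hour"] >= 22,
              lambda e: "turn on hallway light"),
    make_rule(lambda e: e["type"] == "temperature" and e["value"] < 18,
              lambda e: "turn on heating"),
]

def handle(event):
    """Return the actions triggered by an event, in rule order."""
    return [r["action"](event) for r in rules if r["condition"](event)]

print(handle({"type": "motion", "hour": 23}))        # late-night motion
print(handle({"type": "temperature", "value": 15}))  # cold room
```

The predictability and the rigidity noted above are both visible here: behavior is fully traceable to a rule, but an event no rule anticipates simply does nothing.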
#### 2.1.2 Machine Learning (1990-2010)
Marking the beginning of "newer," data-driven AI, machine learning involves algorithms learning patterns directly from data [10](#page=10) [11](#page=11).
* **Advantages:** Ability to adapt to new situations [10](#page=10).
* **Disadvantages:** Requires significant amounts of labeled data [10](#page=10).
* **Example:** Orange is a data mining and machine learning software that allows users to experiment with these techniques [11](#page=11).
#### 2.1.3 Deep Learning (2010-Present)
Deep learning utilizes artificial neural networks with multiple layers, inspired by the structure of the human brain, to process complex patterns and relationships [10](#page=10) [11](#page=11) [24](#page=24).
* **Advantages:** Capable of recognizing highly complex patterns [10](#page=10).
* **Disadvantages:** Often considered a "black box," making it difficult to explain its reasoning [10](#page=10) [28](#page=28).
Deep learning employs three primary learning paradigms [11](#page=11):
* **Supervised Learning:** The model is trained using labeled examples [12](#page=12) [25](#page=25).
* **How it works:** Data (e.g., 10,000 photos of dogs and cats) is used to train the model to recognize patterns (e.g., ears, snout, size). When presented with a new photo, it can classify it as a dog or cat [12](#page=12).
* **Potential issues:** Insufficient data, bias in labels, overfitting, incorrect labels, or learning incorrect information [12](#page=12).
* **Example:** Teachable Machine by Google allows users to train neural networks with photos or sound and assign labels [13](#page=13).
* **Unsupervised Learning:** The model identifies patterns in data without predefined answers or labels [12](#page=12) [26](#page=26).
* **How it works:** Using a large dataset of unlabeled photos (e.g., 100,000), the model groups common characteristics [12](#page=12).
* **Potential issues:** Identifying non-existent correlations, ambiguity in group formation, no guarantee of actionable insights, and bias in the data [12](#page=12).
* **Reinforcement Learning:** Involves an agent learning through a system of rewards and penalties for its actions [13](#page=13) [27](#page=27).
* **How it works:** An agent performs actions, receiving a positive reward (e.g., +1) for good actions and a negative reward (e.g., -1) for bad ones, aiming to maximize cumulative reward over time [13](#page=13).
* **Potential issues:** Reward hacking, long training times, local optima, and reward mismatch [13](#page=13).
* **Real-world applications:** Self-driving cars use a combination of supervised learning (object recognition) and reinforcement learning (driving behavior). Chatbots integrate supervised learning (intent recognition) with reinforcement learning (conversation optimization) [13](#page=13).
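The supervised paradigm above can be made concrete with a toy classifier: labeled examples in, a prediction for a new point out. A 1-nearest-neighbour rule stands in for the pattern recognition a real model would learn, and the features (weight in kg, ear length in cm) are invented for the example.

```python
# Supervised learning in miniature: each training example is a
# (features, label) pair; a new point is classified by its closest match.

def predict(train, point):
    """Classify `point` by the label of its nearest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

labeled = [
    ((30.0, 10.0), "dog"),
    ((25.0, 9.0), "dog"),
    ((4.0, 6.0), "cat"),
    ((5.0, 5.5), "cat"),
]

print(predict(labeled, (28.0, 9.5)))  # near the dog examples -> "dog"
print(predict(labeled, (4.5, 6.2)))   # near the cat examples -> "cat"
```

The listed pitfalls show up even at this scale: too few examples, mislabeled pairs, or skewed feature choices would all distort the predictions.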
#### 2.1.4 Generative AI (2020-Present)
Generative AI is characterized by its ability to create new content rather than just classifying or predicting [10](#page=10) [22](#page=22).
* **Key Feature:** Generates new variations within learned patterns. This can include text (essays, poems, code), images, music, speech, video, and 3D models [23](#page=23).
* **Transformer Architecture:** A significant breakthrough, particularly the "Attention is All You Need" paper, enabled models to process context more effectively by allowing each word to attend to other words in a sentence. This mechanism is foundational for models like ChatGPT [10](#page=10) [14](#page=14) [23](#page=23).
* **Creation vs. Analysis:** Traditional AI (1950-2020) focused on analyzing, classifying, and predicting, answering "what is happening?" or "what might happen?" Generative AI (2020+) focuses on creation, answering "create something new" by using learned patterns to generate novel content [22](#page=22).
### 2.2 Core mechanisms of modern AI
Modern AI, particularly large language models (LLMs), relies on sophisticated mechanisms to process information and generate output [24](#page=24) [25](#page=25) [26](#page=26).
#### 2.2.1 Tokenization
Computers operate on binary (0s and 1s), while humans use text. Tokenization is the process of breaking down text into smaller pieces, called tokens, which are then converted into numerical representations (IDs) within the AI model's vocabulary [24](#page=24).
* **Example:** The tokenizer at `https://platform.openai.com/tokenizer` can demonstrate this process [24](#page=24).
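The text-to-IDs step can be sketched with a toy whitespace tokenizer. Real tokenizers, like the OpenAI one linked above, use learned subword vocabularies; this version only illustrates the idea that each piece of text maps to a numeric ID.

```python
# A toy tokenizer: split on whitespace and assign each new token the
# next free ID in a growing vocabulary.

vocab = {}  # token -> ID

def tokenize(text):
    ids = []
    for token in text.lower().split():
        if token not in vocab:
            vocab[token] = len(vocab)  # first sighting: assign a new ID
        ids.append(vocab[token])
    return ids

print(tokenize("the cat sits on the mat"))  # repeated "the" reuses ID 0
```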
#### 2.2.2 The AI brain and parameters
The "AI brain" consists of interconnected layers, with parameters representing the strength of these connections [24](#page=24).
* **Parameters:** More parameters allow the AI model to recognize more patterns and are indicative of a "smarter" model. For instance, GPT-3 has 175 billion parameters, and GPT-4 has approximately 1 trillion parameters [24](#page=24).
* **Visualization:** The TensorFlow Projector can offer insights into these connections [24](#page=24).
#### 2.2.3 Model training
Training involves determining the model's parameters, often through two main stages [25](#page=25):
* **Pre-training:** This is the core learning process where the AI model is exposed to a massive dataset (petabytes of text) with the primary task of predicting the next word in a sentence [25](#page=25).
* **Process:** As the model makes predictions (e.g., predicting "mat" after "The cat sits on the..."), its parameters are adjusted. Stronger predictions reinforce parameters, while weaker ones weaken them. This iterative process, over billions of sentences, leads to increased accuracy [25](#page=25).
* **Fine-tuning:** A pre-trained model is like a vast encyclopedia. Fine-tuning refines this knowledge for specific applications.
* **Reinforcement Learning from Human Feedback (RLHF):** Human reviewers score different model responses to the same prompt. A reward model learns these human preferences, and the main model is then optimized based on this feedback. RLHF transforms a raw language model into a usable product [25](#page=25).
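The RLHF idea can be sketched in miniature: human reviewers score candidate answers, and those preferences decide which answer style wins. Here a lookup of averaged review scores stands in for the learned reward model, and the candidate answers and scores are invented for the example.

```python
# Toy RLHF: averaged human scores act as the reward; the "optimized"
# behavior is simply to prefer the highest-reward answer.

from statistics import mean

human_scores = {  # invented reviewer scores per candidate answer
    "Sure! Here are three clear steps...": [9, 8, 9],
    "As an AI, I cannot help with that.": [2, 3, 2],
    "step1 step2 step3":                   [5, 4, 6],
}

def reward(answer):
    """Average human preference score: the reward model in miniature."""
    return mean(human_scores[answer])

best = max(human_scores, key=reward)
print(best)  # the answer humans preferred most
```

A real reward model generalizes these preferences to unseen answers; the lookup table here only shows the direction of the optimization.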
#### 2.2.4 Prompt to answer generation
The process of generating an answer from a user's prompt involves several steps [26](#page=26):
1. **Prompt Tokenization:** The input prompt is converted into tokens [26](#page=26).
2. **Layered Processing:** Each token passes through multiple layers (10-100) of the AI "brain" [26](#page=26).
* **Layer Functionality:** Early layers might recognize individual words, subsequent layers understand sentence structure, then meaning and context, and deeper layers can interpret emotion and intent [26](#page=26).
3. **Attention Mechanism:** All tokens "communicate" with each other via the attention mechanism, allowing for contextual understanding [26](#page=26).
4. **Answer Generation:** The AI model predicts the most probable next word sequentially, building the response word by word [26](#page=26).
* **Example:** The LLM visualization tool at `https://bbycroft.net/llm` can illustrate this process [26](#page=26).
> **Tip:** Creating your own AI model, like the "SoekiaGPT" example using fairy tales, involves analyzing word combinations (N-grams) to identify patterns in word sequences and suggest words based on frequency and context [26](#page=26).
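The N-gram idea behind that exercise can be sketched directly: count which word follows which in a small corpus, then generate text by repeatedly picking the most frequent continuation. The two-sentence mini-corpus is invented for the example.

```python
# A bigram language model in miniature: learn "what follows what" by
# counting, then generate greedily from the counts.

from collections import Counter, defaultdict

def learn_bigrams(corpus):
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1  # reinforce this word pair
    return follows

def generate(follows, start, length=3):
    words = [start]
    for _ in range(length):
        nxt = follows[words[-1]]
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])  # pick the likeliest word
    return " ".join(words)

corpus = ["once upon a time a wolf met a girl",
          "once upon a time a girl met a wolf"]
print(generate(learn_bigrams(corpus), "once", length=3))
```

Always taking the single most frequent continuation makes the output repetitive; real models sample from a probability distribution instead, which is why the same prompt can yield different answers.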
### 2.3 Multimodal AI
Multimodal AI extends beyond text to incorporate and generate various forms of data, including images, audio, and video [14](#page=14) [32](#page=32) [35](#page=35).
* **Foundation in Language:** Although they span many modalities, most multimodal AI systems start from language models, because human communication is inherently multimodal and the internet is rich in textual data [35](#page=35).
* **Benefits:** Richer input leads to better output, unlocks new creative possibilities, and enables more efficient workflows [35](#page=35).
#### 2.3.1 Diffusion models for images
Diffusion models generate images through an inverse process, starting with noise and gradually removing it to form a coherent image, guided by a prompt [36](#page=36).
* **Training:** Involves a forward process (images to noise) and a reverse process (noise to images), learning pattern recognition between visual features and descriptions [36](#page=36).
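The easy half of that training setup, the forward process, can be sketched without any neural network: an "image" (here just a short list of pixel values) is gradually drowned in random noise. Generation runs this in reverse, which is the part that requires a trained model and is not shown here.

```python
# Forward diffusion in miniature: repeatedly add small Gaussian noise
# until the original pixel values are unrecoverable by eye.

import random

def add_noise(pixels, steps, noise_scale=0.3, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    for _ in range(steps):
        pixels = [p + rng.gauss(0, noise_scale) for p in pixels]
    return pixels

image = [0.0, 0.5, 1.0, 0.5]        # a clean four-pixel "image"
noisy = add_noise(image, steps=50)  # after many steps: mostly noise
print([round(p, 2) for p in noisy])
```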
#### 2.3.2 Neural audio models for sound
These models convert written text into realistic speech or music by building sound waves sample by sample [37](#page=37).
* **Prompt Influence:** The prompt dictates the voice, emotion, tempo, and style of the generated audio [37](#page=37).
* **Training:** Analyzes vast amounts of speech and music recordings to understand the relationship between sounds and text, including how emotions affect vocalization [37](#page=37).
#### 2.3.3 Temporal diffusion models for video
Temporal diffusion models add a time dimension to image diffusion, generating a sequence of consistent frames to create video [37](#page=37).
* **Prompt Influence:** The prompt specifies actions, camera movements, and overall style [37](#page=37).
* **Training:** Involves analyzing thousands of videos with descriptions to learn temporal coherence (how objects move between frames) and basic physics principles [37](#page=37).
#### 2.3.4 Prompts in multimodal AI
A consistent prompt structure can be used across different modalities, resulting in a family of outputs rather than identical results [38](#page=38).
* **Consistency:** While the exact output may vary (e.g., image composition, audio intonation, video camera angle), the core theme and style remain consistent [38](#page=38).
### 2.4 Understanding AI's "intelligence" and limitations
Despite advanced capabilities, AI possesses specific types of "intelligence" and significant limitations [27](#page=27) [28](#page=28).
#### 2.4.1 AI's performance and nature
AI can match or exceed human performance in specific tasks, such as reading comprehension, exams, and pattern recognition. However, it is a powerful specialist tool, not a substitute for human intelligence. AI is particularly well-suited for business tasks requiring focus, speed, and scale. It is crucial to remain critical, as its performance can sometimes mask its limitations [27](#page=27).
#### 2.4.2 Emergent Behavior
Emergent behavior refers to unexpected, complex skills that spontaneously appear when an AI model becomes sufficiently large and powerful [28](#page=28).
* **Examples:** Acquiring extensive knowledge from texts and code found on the internet [28](#page=28).
#### 2.4.3 The "black box" problem
The extreme complexity of modern AI models, with billions of parameters, makes it virtually impossible to fully comprehend their internal workings [28](#page=28).
* **Implications:** Skills are not explicitly programmed but emerge, leading to unpredictable output changes from minor input variations. This raises concerns about reliability, bias, and accountability [28](#page=28).
#### 2.4.4 Scale laws and AGI
Scale laws suggest that increasing data, parameters, and computational power leads to qualitative leaps in AI capabilities, potentially accelerating the path towards Artificial General Intelligence (AGI). However, the challenge lies in controlling systems that are not fully understood [29](#page=29).
### 2.5 Choosing and implementing AI tools
Selecting the right AI tool depends on various factors, as there isn't a single "best" model; rather, it's about finding the best match for a specific task [30](#page=30) [33](#page=33).
#### 2.5.1 Factors for consideration
* **Specialization vs. All-rounders:** Models can be generalists or highly specialized [30](#page=30).
* **Technical Differences:** Model size, context window, and training data vary [30](#page=30).
* **Business Models:** Tools range from free to paid and enterprise solutions [30](#page=30).
* **Open vs. Closed Source:** Transparency versus ease of use is a key distinction [30](#page=30).
* **Data Privacy:** Crucial considerations include data sensitivity, zero-retention policies, and compliance with privacy regulations. Platforms like Ollama allow models to run locally, enhancing privacy [25](#page=25) [31](#page=31).
* **Cost and Budget:** Start with free versions and upgrade only when clear added value is identified [32](#page=32).
* **Integration and Ecosystem:** Compatibility with existing software (e.g., Microsoft 365, Google Workspace) and API availability are important [33](#page=33).
* **Task and Specialization:** Choose tools based on specific tasks like research, analysis, creativity, or code writing [33](#page=33).
* **Speed vs. Quality:** Some tasks (e.g., brainstorming) can prioritize speed, while others (e.g., analysis) demand higher quality [34](#page=34).
* **Implementation:** User-friendliness, training and support availability, and regular evaluation of the tool's continued value are essential [34](#page=34).
> **Tip:** The AI landscape evolves rapidly, so focus on the added value a tool provides rather than chasing hype [34](#page=34).
---
# Impact and future of artificial intelligence
Artificial intelligence (AI) is rapidly evolving and becoming increasingly integrated into various aspects of daily life and business, promising significant transformations alongside notable challenges.
### 3.1 The current pervasiveness of AI
AI is no longer a futuristic concept but a present reality, embedded in numerous technologies and services used daily. Examples include object detection and filters in cameras, real-time traffic updates in mapping applications, personalized recommendations on streaming services, and smart thermostats that learn user behavior. Businesses leverage AI for functions like fraud detection in banking, route optimization in transportation, CV screening in HR, and customer service through chatbots [8](#page=8).
### 3.2 The evolution and "hype" of AI
The current AI surge is driven by a "perfect storm" of technological advancements. This includes a massive explosion of data generated by the internet, a significant increase in computing power enabled by GPUs and cloud computing, and revolutionary algorithms like Transformers. The open-source culture has further accelerated development by facilitating the sharing of research. This evolution has moved AI from research labs to mainstream consumer products, particularly highlighted by the "ChatGPT moment" in 2022, which offered an accessible interface capable of handling complex instructions [15](#page=15).
The adoption of AI is extraordinarily rapid, with Generative AI (Gen AI) being a General Purpose Technology akin to steam power or electricity, impacting all facets of life. Early adoption rates are unprecedented, with some platforms reaching 100 million users within two months. This rapid adoption is often described using the Gartner Hype Cycle, moving through phases like the Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and ultimately the Plateau of Productivity as AI becomes a standard integrated technology [32](#page=32) [33](#page=33).
### 3.3 Impacts on productivity and business
AI is demonstrating a substantial impact on productivity. A study on consultants found tasks were completed 25% faster with AI, and the output quality improved by 40%. However, AI is most beneficial for tasks within its capacity, and performance can degrade for tasks beyond its capabilities [35](#page=35).
The benefits of AI often disproportionately favor experts. Experts, with their deep domain knowledge, can effectively leverage AI as a powerful combination, understanding and validating AI output, and thus becoming even better. Conversely, beginners may become overly reliant on AI, developing skills in instructing AI rather than understanding the output, leading to a potential decline in their own skill development [36](#page=36).
AI is a transformative force in the workplace, leading to changes in job roles, the necessity for reskilling, the creation of new jobs, and a redistribution of tasks rather than outright job elimination. AI excels at automating mundane tasks, reducing errors, improving decision-making through rapid and objective analysis, offering 24/7 availability and global scalability, and reducing risks in hazardous environments [37](#page=37) [38](#page=38).
Businesses are making significant investments in AI, with a majority of companies actively experimenting with it and prioritizing it as a strategic imperative [34](#page=34).
### 3.4 The future potential of AI
The future of AI envisions autonomous systems, including fully self-driving vehicles and autonomous drones for deliveries, as well as self-optimizing supply chains. Human-AI collaboration is expected to become more prevalent, with AI acting as a constant co-pilot to enhance creativity and problem-solving capabilities, leading to the emergence of new professions and required skills [19](#page=19).
### 3.5 Challenges and ethical considerations (the "downside")
Despite its immense potential, AI presents significant challenges and risks. These include [39](#page=39):
* **Bias and discrimination:** AI models can perpetuate and even amplify existing societal inequalities if trained on biased data. This can manifest in how AI generates content, for instance, by creating stereotypical images for professions like cleaners versus scientists [39](#page=39).
* **Privacy and surveillance:** The use of AI raises concerns about data security, surveillance, and the balance between safety and freedom. Input data can be used for model training, and sensitive information can be stored on servers in other countries. Sensitive data categories include personal, financial, medical, business, and academic information [39](#page=39).
* **Misinformation and deepfakes:** AI can be used to generate and spread false information, undermining trust in genuine content [39](#page=39).
* **High costs and environmental impact:** The development and deployment of AI can incur significant financial costs and have an environmental footprint [39](#page=39).
* **Hallucinations:** AI models can generate inaccurate information that is not based on factual evidence, presenting it confidently as truth. These "hallucinations" can contradict source information, lack substantiation, contradict previous outputs, or deviate from the original prompt. This occurs because AI models are trained to always provide an answer and may fill gaps with plausible-sounding fabrications without a verification mechanism [41](#page=41) [42](#page=42).
* **Knowledge cutoffs:** AI models are trained on data up to a specific point in time, meaning they lack information about events or developments after that date. Updating these models can take months or years [43](#page=43).
* **Over-dependence:** Excessive reliance on AI without critical evaluation or verification can lead to a decline in human skills, academic integrity issues, and a lack of problem-solving capability when AI is unavailable.
### 3.6 Critical and safe AI usage
To navigate the complexities of AI, critical usage is essential [40](#page=40).
* **Human expertise remains crucial:** AI acts as an amplifier, and the depth of one's own knowledge directly impacts the quality of questions and the ability to understand AI output [40](#page=40).
* **Understand AI limitations:** AI is not universally effective; knowing when to use it and when not to is vital. Marketing hype should not overshadow scientific reality [40](#page=40).
* **Fact-checking is paramount:** Users must scan for red flags and verify AI output through cross-referencing, triangulation with other AI tools, and consulting experts when necessary. Documenting the AI usage and any unverified information is also important.
* **Protecting privacy:** Sensitive information should be anonymized or generalized, and users should read privacy policies of AI tools and consider alternative solutions for highly confidential data. The general rule is to not share anything one wouldn't want to see on the internet.
* **Safe AI use in studies:** While AI can be a valuable tool for brainstorming, concept explanation, tutoring, proofreading, and research structure, academic integrity demands that it should not be used for plagiarism or to avoid learning.
Ultimately, AI is a tool, not an end in itself. Its power is immense but limited, and human context, judgment, and expertise remain indispensable [41](#page=41).
---
# Effective prompting and AI usage in research
This topic explores how to effectively interact with AI models through precise prompting techniques and how to leverage AI tools for research while maintaining academic integrity and ethical standards [46](#page=46) [61](#page=61).
### 4.1 The importance of effective prompting
#### 4.1.1 Why good prompting matters
Effective prompting is crucial for maximizing the utility of AI models. The quality of the output is directly proportional to the quality of the prompt. Well-crafted prompts reduce the need for subsequent adjustments, saving time and computational resources, thus minimizing the significant environmental impact of AI usage. Understanding how to prompt well is key to better AI utilization [45](#page=45) [46](#page=46).
#### 4.1.2 Prompt engineering
Prompt engineering involves creating text instructions for AI models to elicit desired outputs. Prompts can be reused, though the output may vary [46](#page=46).
#### 4.1.3 Context window
The context window, measured in tokens, acts as the AI model's "working memory". It encompasses both the input prompt and the AI's generated output. A longer input consumes more of the context window, leaving less space for the output. If the input is too long, the AI may "forget" the beginning of the input, negatively impacting the quality and completeness of its response [47](#page=47).
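The input/output trade-off can be sketched as a simple token budget. The window size and the rough four-characters-per-token rule of thumb here are illustrative assumptions, not the specs of any real model.

```python
# Context-window budget in miniature: input tokens plus output tokens
# must fit in one window, so a longer prompt leaves less room to answer.

def output_budget(prompt, window=8000, chars_per_token=4):
    """Estimate how many output tokens remain after the prompt."""
    prompt_tokens = max(1, len(prompt) // chars_per_token)
    return max(0, window - prompt_tokens)

short_prompt = "Summarize this paragraph."
long_prompt = "word " * 30000  # a very long input eats the whole window

print(output_budget(short_prompt))  # nearly the full window remains
print(output_budget(long_prompt))   # 0: no room left for an answer
```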
### 4.2 Anatomy of a strong prompt
A robust prompt structure provides clear guidance to the AI, leading to more precise and useful results. Frameworks can offer structure and inspiration, especially for beginners, but should not be relied upon rigidly as AI models process content, not just labels [48](#page=48).
The key components of a strong prompt are:
* **ROLE**: Defines the persona, tone, perspective, and expertise level of the AI [49](#page=49).
    * **Example:** "You are an experienced marketer specializing in B2B." [49](#page=49).
* **CONTEXT**: Provides background information, the objective, target audience, and any constraints [49](#page=49).
    * **Example:** "This text is for new employees during their onboarding." [49](#page=49).
* **INSTRUCTIONS**: Specifies exactly what the AI needs to do, using clear action verbs [50](#page=50).
    * **Example:** "Summarize this text into 3 key points." [50](#page=50).
* **EXAMPLES**: Illustrate the desired output, improving the AI's understanding through "few-shot prompting". This can include positive and negative examples [50](#page=50).
    * **Example:** "Use this writing style: [example text]." [50](#page=50).
* **OUTPUT & FORMAT**: Dictates the structure, length, language, and tone of the desired output [51](#page=51).
    * **Example:** "Provide the answer as a table with 3 columns." [51](#page=51).
* **DON’T**: Specifies what should be avoided, increasing precision and reducing iterations [51](#page=51).
    * **Example:** "Do not use jargon or technical terms." [51](#page=51).
> **Tip:** Frameworks are helpful but not sacred; adapt them to your specific needs [48](#page=48).
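Treated as a checklist, the framework can also be assembled programmatically. The sketch below uses a hypothetical `build_prompt` helper (not any tool's real API); skipping empty components reflects the advice that the framework is a guide, not a rigid template.

```python
def build_prompt(role="", context="", instructions="",
                 examples="", output_format="", donts=""):
    """Assemble a prompt from the ROLE / CONTEXT / INSTRUCTIONS /
    EXAMPLES / OUTPUT & FORMAT / DON'T framework, skipping empty parts."""
    sections = [
        ("ROLE", role),
        ("CONTEXT", context),
        ("INSTRUCTIONS", instructions),
        ("EXAMPLES", examples),
        ("OUTPUT & FORMAT", output_format),
        ("DON'T", donts),
    ]
    return "\n\n".join(f"{label}: {text}" for label, text in sections if text)

prompt = build_prompt(
    role="You are an experienced marketer specializing in B2B.",
    instructions="Summarize this text into 3 key points.",
    donts="Do not use jargon or technical terms.",
)
```

Because the model processes the content rather than the labels, the labels mainly help the prompt author keep the components complete and separable.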
### 4.3 Advanced prompting techniques
#### 4.3.1 Avoiding suggestive questions
Suggestive prompts can lead to biased and one-sided answers. Neutral, open-ended questions that explore all options are preferable for comparisons, evaluations, and advice [52](#page=52).
* **Example:**
    * **NEUTRAL:** "What are the pros and cons of our new product?"
    * **SUGGESTIVE:** "Explain why our new product is great." [52](#page=52).
#### 4.3.2 Increasing the stakes
Making a task feel important can improve AI performance. Adding weight to the instruction can lead to better results [52](#page=52).
* **Example:** "Prepare this presentation as if you had to pitch it to the CEO." [52](#page=52).
#### 4.3.3 Prompt-chaining
This technique involves breaking down complex tasks into a sequence of smaller, interconnected prompts, where the output of one prompt becomes the input for the next. This is effective for complex workflows [53](#page=53).
* **Example:**
    * **Step 1:** "Summarize this article into 5 key points."
    * **Step 2:** "Using these 5 key points, provide 3 concrete recommendations for our company." [53](#page=53).
> **Tip:** Label intermediate results (e.g., "SUMMARY:...") to maintain clarity [53](#page=53).
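The two-step example above can be sketched as follows. `call_model` is a hypothetical stand-in for whatever chat-completion API is in use, and the `SUMMARY:` label follows the tip about labeling intermediate results.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to an LLM, return its reply."""
    raise NotImplementedError("wire this to a real chat-completion API")

def recommend_from_article(article: str, llm=call_model) -> str:
    # Step 1: the first prompt produces a summary...
    summary = llm(f"Summarize this article into 5 key points:\n\n{article}")
    # Step 2: ...which becomes the labeled input of the next prompt.
    return llm(f"SUMMARY:\n{summary}\n\n"
               "Using these 5 key points, provide 3 concrete "
               "recommendations for our company.")
```

Passing the model as a parameter (`llm=...`) keeps the chain testable with a stub before spending tokens on a real model.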
#### 4.3.4 Chain-of-Thought (CoT)
CoT prompting encourages the AI to show its thinking process and intermediate steps. This is crucial for complex analyses, providing more transparency, improving accuracy, and facilitating learning [54](#page=54).
* **Example:**
    * **Without CoT:** "Solve this equation: 3x + 5 = 20"
    * **With CoT:** "Solve this equation. Show all your intermediate steps and explain what you are doing: 3x + 5 = 20" [54](#page=54).
#### 4.3.5 Tree-of-Thought (ToT)
ToT involves generating multiple solution paths, making it ideal for strategic planning, problem-solving, and brainstorming. It is advisable to limit the number of options (3-5) to keep the output manageable [54](#page=54).
* **Example:** For reducing office waste: 1) Generate 3 different strategies. 2) For each strategy, list the pros and cons. 3) Choose the best strategy and explain your choice [54](#page=54).
#### 4.3.6 Using external knowledge (RAG - Retrieval-Augmented Generation)
RAG enables AI models to base their answers on specific documents, helping to avoid hallucinations for factual, current, or company-specific queries. It is recommended to ask for citations from the sources used [55](#page=55).
* **Example:** "Using the attached annual report [file.pdf] and strategy document [file.docx], answer the following question: 'What are our main growth opportunities for next year?'" [55](#page=55).
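The retrieval step behind RAG can be sketched with a toy example. The word-overlap scoring below is only a stand-in for illustration; production RAG pipelines use embeddings and a vector index, and the function names here are hypothetical.

```python
def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Toy retrieval: rank document chunks by word overlap with the question.
    Real RAG pipelines use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    # Label each retrieved chunk so the model can cite its sources.
    sources = "\n".join(f"[source {i + 1}] {c}"
                        for i, c in enumerate(retrieve(question, chunks)))
    return ("Answer using only the sources below, and cite the sources "
            f"you used.\n\n{sources}\n\nQUESTION: {question}")
```

Note how the prompt both restricts the model to the retrieved sources (reducing hallucinations) and asks for citations, per the recommendation above.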
#### 4.3.7 Self-critique & Self-improvement
This technique involves asking the AI to generate output, then critically evaluate and improve it. It's particularly valuable for writing, policy documents, or any output where quality is paramount. Explicit rubrics can be used for the evaluation [55](#page=55).
* **Example:** "Write a piece of advice. Then, critically evaluate it and improve 3 weak points." [55](#page=55).
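The generate-critique-improve loop can be sketched as below. `llm` is any hypothetical text-in/text-out model call; the prompts are illustrative, and an explicit rubric could be substituted into the critique prompt.

```python
def self_improve(task: str, llm, rounds: int = 1) -> str:
    """Generate a draft, have the model critique it, then have the
    model revise it -- repeated for `rounds` iterations."""
    draft = llm(task)
    for _ in range(rounds):
        critique = llm("Critically evaluate this draft and name "
                       f"3 weak points:\n\n{draft}")
        draft = llm("Improve the draft by fixing the weak points below.\n\n"
                    f"DRAFT:\n{draft}\n\nCRITIQUE:\n{critique}")
    return draft
```

Each round costs two extra model calls, so in practice one or two rounds is usually the sweet spot between quality and token budget.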
#### 4.3.8 Meta-prompting
Meta-prompting involves designing prompts that generate other prompts. This can lead to more powerful prompts, accelerate complex prompt creation, and automate workflows [56](#page=56).
> **Tip:** Use a meta-prompt to generate prompts for specific tasks by answering questions about the desired ROLE, CONTEXT, INSTRUCTIONS, OUTPUT, EXAMPLES, and DON'Ts [56](#page=56).
#### 4.3.9 CustomGPTs
CustomGPTs are personalized chatbots, available through GPT stores, that offer a more tailored experience by pre-defining personas, instructions, and knowledge bases. Users can upload files for the GPT to use as reference and enable or disable features like web browsing or DALL-E. CustomGPTs can also be shared with others [56](#page=56) [57](#page=57).
### 4.4 The shadow side of AI usage in research
#### 4.4.1 Prompt injection
Prompt injection occurs when an AI model is misled by clever or hidden instructions within its prompt or data, causing it to perform unintended actions. This can happen directly by a user entering a misleading prompt or indirectly through prompts hidden in documents or websites the AI accesses [58](#page=58).
**Dangers of Prompt Injection:**
* Leakage of sensitive data [58](#page=58).
* Dissemination of false information [58](#page=58).
* Disruption of systems [58](#page=58).
#### 4.4.2 Prompt injection vs. Jailbreaking
Prompt injection involves deception via input or documents to cause unwanted actions. Jailbreaking, on the other hand, involves persuading the AI to create prohibited content [59](#page=59).
#### 4.4.3 Recognizing and protecting against prompt injection
**How to recognize:**
* **Negation commands:** "Ignore all previous instructions," "Forget the rules for a moment," "Pretend to be..." [59](#page=59).
* **Hidden in documents:** Instructions embedded in uploaded files or website content [59](#page=59).
* **Distraction techniques:** Phrases like "By the way..." or "PS:", unexpected conversational turns, or seemingly innocent additional requests [59](#page=59).
**How to protect yourself:**
* **Input Validation:** Scan inputs for suspicious phrases [60](#page=60).
* **Output Control:** Check AI responses for deviations [60](#page=60).
* **Access Restriction:** Grant AI minimal necessary permissions [60](#page=60).
* **Human Oversight:** Have critical outputs reviewed by a person [60](#page=60).
* **Monitoring:** Keep track of AI interactions [60](#page=60).
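The input-validation step above can be sketched as a simple pattern scan. The patterns are drawn from the negation commands listed earlier; this heuristic is deliberately non-exhaustive and, as the list stresses, must be layered with output control, least privilege, and human review.

```python
import re

# Heuristic red-flag phrases (non-exhaustive, illustrative only).
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"forget the rules",
    r"pretend to be",
]

def flag_injection(text: str) -> list[str]:
    """Return the suspicious patterns matched in `text` (empty = no flags)."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text.lower())]
```

A non-empty result should trigger review rather than automatic blocking, since legitimate text can occasionally match such phrases.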
#### 4.4.4 Sycophancy and the "pleaser" pitfall
Sycophancy refers to an AI model's tendency to align its answers with the user's beliefs to please them, rather than strictly adhering to facts. This is an inherent characteristic of language models and not a bug. The AI may invent information (hallucinate) to avoid admitting it doesn't know something [78](#page=78).
**Consequences of Sycophancy:**
* The AI model becomes manipulable, reinforcing biases [78](#page=78).
* It can provide a false sense of reliability, leading to incorrect conclusions [78](#page=78).
* It can be exploited through prompt injection and jailbreaking, since a sycophantic model gives the user's prompt more influence [78](#page=78).
**Sycophancy as a research assistant:**
* An AI that confirms beliefs rather than questioning them leads to biased research [79](#page=79).
* **Countermeasures:** Use neutral prompts, ask for counterarguments, and always check sources [79](#page=79).
### 4.5 AI usage in research
#### 4.5.1 The evolving landscape of research
Research has transitioned from traditional, information-scarce methods to digitally accelerated processes and now to AI-driven knowledge production. While traditional research offers depth, nuance, and peer-reviewed quality, AI-supported research excels in speed, scalability, and pattern recognition. A hybrid approach, combining AI for exploration and speed with traditional methods for validation and depth, is often optimal [64](#page=64) [65](#page=65).
#### 4.5.2 The research process and AI integration
The research process typically involves:
1. **Orientation**: Formulating the research question [66](#page=66).
2. **Exploration**: Identifying existing knowledge, reports, and trends [66](#page=66).
3. **Evidence Gathering**: Verifying sources and using multiple perspectives [66](#page=66).
4. **Analysis & Interpretation**: Identifying patterns and combining insights [66](#page=66).
5. **Synthesis**: Answering the research question and translating it into decisions [66](#page=66).
6. **Reflection**: Evaluating the process and the reliability of results [66](#page=66).
AI tools can assist in various stages of this process [15](#page=15).
> **Tip:** AI accelerates finding information, not necessarily understanding it [65](#page=65).
#### 4.5.3 AI tools for efficient research
Various AI tools can support different research phases:
* **LLMs (e.g., Claude, Gemini, ChatGPT, Mistral)**: Useful for brainstorming, generating research questions, refining ideas, and for reflection on biases or verification strategies. *Caution: Do not use as a primary source due to potential knowledge cutoffs and hallucinations.* [68](#page=68) [69](#page=69).
* **Perplexity**: An AI search engine that automatically provides source citations, aiding in exploration and synthesis [70](#page=70).
* **ResearchRabbit**: A tool for visual discovery of scientific papers through citation networks, useful for exploring and understanding research evolution [70](#page=70).
* **Elicit**: Facilitates systematic literature reviews by finding studies, summarizing findings and methodologies, and assisting with research questions [71](#page=71).
* **Consensus**: Displays scientific consensus on claims, ideal for validating assertions by showing the percentage of studies supporting a statement. *Important: Always check the number of studies and read key papers.* [72](#page=72).
* **ChatPDF**: Useful for quick screening and asking short questions about PDFs, allowing for summarization, methodology extraction, and finding specific quotes [73](#page=73).
* **NotebookLM**: Offers source grounding to prevent hallucinations, allows analysis of multiple papers simultaneously, and provides notebook organization [73](#page=73).
> **Tip:** Choose the right tool for the specific research goal, not just based on hype [73](#page=73).
#### 4.5.4 Prompting in research
When prompting for research:
* Define the AI's **role** (e.g., research assistant in marketing) [69](#page=69).
* Provide **context** (background and goal of the question) [69](#page=69).
* Give clear **instructions** on what needs to be done [69](#page=69).
* Define the desired **output** format and length [69](#page=69).
* Include **examples** and **don'ts** [69](#page=69).
* Employ more critical filtering; use neutral prompts and ask for sources and reliability [69](#page=69).
* Use AI as a reflection partner, not as a sole source of truth [69](#page=69).
### 4.6 Academic integrity with AI
#### 4.6.1 When to use AI and when not to
**Generally Permitted:**
* Generating ideas [74](#page=74).
* Structuring text [74](#page=74).
* Improving text [74](#page=74).
* Explaining complex concepts [74](#page=74).
* Searching for sources [74](#page=74).
* Supporting data analysis [74](#page=74).
**Generally Problematic:**
* Having AI write entire texts [74](#page=74).
* Copy-pasting without verification or critical thinking [74](#page=74).
* Hiding AI usage [74](#page=74).
* Fabricating sources [74](#page=74).
#### 4.6.2 Disclosing AI usage
Transparency about AI use is essential for academic integrity [75](#page=75).
* **Why cite AI:** For transparency, to credit developers, and to allow readers to consult the tool [75](#page=75).
* **When to cite:** For direct AI output (text, code, ideas), when AI has been used for support (analysis, ideas, structure), or whenever the AI model has contributed content significantly [75](#page=75).
#### 4.6.3 Correctly citing AI usage
* **Images:** "Image generated by DALL-E (OpenAI, 2023) with the prompt '...'" [75](#page=75).
* **Code:** Mention the tool and version; place code in an appendix and state it was AI-generated [75](#page=75).
* **Methodology:** Describe how AI was used (e.g., for literature review or analysis) [75](#page=75).
* **Avoid errors:** Always mention the version, date of access if no version is available, and the specific tool used, not just "an AI tool" [75](#page=75).
**APA Style Example:**
* **In-text:** According to Perplexity AI, recent studies show... OR The analysis was performed with ChatGPT (OpenAI, 2023) [76](#page=76).
* **Reference list:** OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/ OR Perplexity AI. Perplexity [AI search assistant]. https://www.perplexity.ai/ [76](#page=76).
#### 4.6.4 Responsible AI use in research
You remain ultimately responsible for your research. AI is a support tool, not a replacement for human intellect. Always use a verification model, document AI usage, and follow institutional guidelines. AI is a tool, not an author or evaluator [76](#page=76).
#### 4.6.5 The verification model
1. **Trace the source:** Request citations from AI (e.g., Perplexity) and check the original context [77](#page=77).
2. **Cross-check with other tools:** Ask the same question in multiple AIs and compare for consistency [77](#page=77).
3. **Human assessment:** Use your expertise, be critical, and seek a second opinion [77](#page=77).
**Red flags to watch for:**
* No source citation [77](#page=77).
* Vague or overly perfect answers [77](#page=77).
* Contradictory answers [77](#page=77).
* Claims that conflict with established knowledge [77](#page=77).
> **Tip:** If you encounter red flags, stop, verify the information, document inaccuracies, and reformulate your prompt [77](#page=77).
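The cross-check step can be sketched as a crude consistency test. Exact-match agreement after trivial normalization is only a rough proxy (real answers rarely match verbatim), and the helper name and threshold are hypothetical.

```python
def cross_check(answers: list[str], min_agreement: float = 0.5) -> bool:
    """Do at least `min_agreement` of the tools give the same answer
    (after trivial normalization)? Disagreement is a red flag to
    investigate, not proof that any single answer is wrong."""
    normalized = [a.strip().lower() for a in answers]
    top = max(normalized.count(a) for a in set(normalized))
    return top / len(normalized) >= min_agreement
```

A `False` result should send you back to step 1 of the verification model (trace the sources), not simply to the majority answer.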
#### 4.6.6 Strong and ethical research with AI
**What works well:**
* Combine AI, literature, and data for completeness [80](#page=80).
* Always check sources, as AI can contain errors or hallucinations [80](#page=80).
* Be transparent about prompts, tools, and versions used [80](#page=80).
* Maintain a critical stance, evaluating content, not just phrasing [80](#page=80).
**What to avoid:**
* Blindly trusting AI or using it as a "truth machine" [80](#page=80).
* Lack of source citation or context [80](#page=80).
* Using AI as a substitute for analysis or reasoning [80](#page=80).
* Ignoring ethical principles like privacy and authorship [80](#page=80).
Combining AI speed with human judgment and research discipline is key to maintaining academic integrity. AI accelerates research, but you remain responsible. Use AI to think better, not to copy answers. Be transparent and honest about AI use and cite correctly [80](#page=80).
---
## Glossary
| Term | Definition |
|------|------------|
| Artificial Intelligence (AI) | Technology that simulates human intelligence, encompassing autonomy and adaptability, aiming to replicate cognitive abilities like learning and problem-solving. |
| Autonomy | The capacity of a system or entity to act and make decisions independently, without direct external control or intervention. |
| Adaptability | The ability to learn from experiences and adjust behavior or function in response to changing environments or new information. |
| Weak/Narrow AI | Artificial intelligence designed and trained for a specific, limited task, such as virtual assistants or recommendation systems, which cannot perform tasks outside its designated domain. |
| General AI (AGI) | A hypothetical form of artificial intelligence possessing the ability to understand, learn, and apply knowledge across a wide range of tasks and domains at a human-like level. |
| Super AI (ASI) | A theoretical form of artificial intelligence that would surpass human intelligence and cognitive abilities in virtually all aspects, including creativity, wisdom, and problem-solving. |
| Turing Test | An experimental method proposed by Alan Turing to assess a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human, based on its conversational responses. |
| AI Winters | Periods in the history of artificial intelligence characterized by reduced funding and interest, often following periods of over-optimistic predictions and unmet expectations regarding AI's capabilities. |
| Rule-Based AI | An earlier approach to AI that relies on explicitly programmed "if-then" rules to make decisions and solve problems, often used in expert systems. |
| Data-Driven AI | A modern approach to AI that learns patterns and makes decisions by analyzing large datasets, forming the basis for machine learning and deep learning. |
| Machine Learning | A subfield of AI that enables systems to learn from data and improve their performance on a specific task without being explicitly programmed, by identifying patterns and making predictions. |
| Generative AI | A type of artificial intelligence that focuses on creating new content, such as text, images, audio, or video, rather than just classifying or predicting existing data. |
| Transformer Architecture | A neural network architecture introduced in 2017 that significantly advanced the field of natural language processing by utilizing self-attention mechanisms to weigh the importance of different words in a sequence for context. |
| Bias (in AI) | The tendency of an AI system to produce results that are systematically prejudiced due to flawed data, algorithms, or assumptions in its training process, often reflecting and amplifying societal inequalities. |
| Misinformation | False or inaccurate information that is spread, regardless of intent to deceive, which can be amplified by AI technologies like deepfakes. |
| Deepfakes | Synthetic media in which a person in an existing image or video is replaced with someone else's likeness, often created using AI, which can be used to spread misinformation and erode trust. |
| Deep Learning | A subfield of machine learning, prominent since 2010, inspired by the structure of the human brain. It utilizes artificial neural networks with multiple layers to process complex patterns and relationships within data, enabling sophisticated recognition capabilities. |
| Attention Mechanism | A component within neural network architectures, particularly Transformers, that allows the model to focus on specific parts of the input sequence when processing information. It enables the model to understand the relationships between words in a sentence contextually. |
| Tokenization | The process of breaking down text into smaller units, called tokens, which are then converted into numerical representations (IDs). This is a crucial step for AI models to process and understand human language. |
| Parameters | In the context of AI models, parameters represent the strength of connections between "neurons" in a neural network. The number of parameters often correlates with the model's complexity and its ability to recognize intricate patterns. |
| Pre-training | The initial phase of training a large AI model, where it learns general patterns and knowledge from a vast dataset, typically by predicting the next word in a sentence. This process establishes the model's core capabilities before any fine-tuning. |
| Fine-tuning | A subsequent training stage where a pre-trained AI model is further adjusted using specific datasets or techniques like Reinforcement Learning from Human Feedback (RLHF). This process refines the model's behavior to align with desired outputs and human preferences. |
| Reinforcement Learning from Human Feedback (RLHF) | A technique used in fine-tuning AI models, especially language models, where human evaluators rank model outputs. This feedback is used to train a reward model, which then guides the optimization of the main AI model to produce more desirable responses. |
| Emergent Behavior | Complex and unexpected skills or abilities that spontaneously arise in AI models when they reach a certain scale and complexity. These capabilities are not explicitly programmed but emerge from the intricate interactions of the model's parameters and training data. |
| Black Box | A term used to describe AI systems, particularly deep learning models, whose internal workings are extremely complex and difficult to fully comprehend. The extreme number of parameters makes it challenging to understand the exact reasoning behind specific outputs. |
| Scale Law | The principle that increasing the amount of data, the number of parameters, and the computational power used to train an AI model leads to significant, qualitative improvements in its capabilities. |
| Multimodal AI | Artificial intelligence systems capable of processing and generating information across different types of data, such as text, images, audio, and video. This allows for richer input and more comprehensive understanding and creation of content. |
| Diffusion Models | A class of generative models used for creating images, audio, and video. These models work by gradually removing noise from a random signal, guided by a prompt, to produce a coherent output. |
| Object Detection | A computer vision technology that identifies and locates objects within an image or video, classifying them into predefined categories. |
| Autonomous Systems | Systems capable of operating independently without human intervention, making decisions and taking actions based on their environment and objectives. |
| Human-AI Collaboration | A partnership where humans and AI systems work together, leveraging each other's strengths to achieve better outcomes than either could alone. |
| General Purpose Technology (GPT) | A technology that has the potential to fundamentally change an economy or society, comparable to technologies like steam power or electricity, impacting many sectors. |
| Large Language Models (LLMs) | Advanced AI models trained on vast amounts of text data, capable of understanding, generating, and processing human language for a wide range of applications. |
| Neural Audio Models | AI models that synthesize realistic speech or music from text, operating by constructing sound waves sample-by-sample. |
| Temporal Diffusion Models | An extension of diffusion models that incorporate a time dimension to generate sequences of frames, enabling the creation of videos. |
| Hallucinations (AI) | The phenomenon where AI models generate plausible-sounding but incorrect or fabricated information, often presenting it as factual. |
| Knowledge Cutoffs | The point in time when the training data for an AI model was last updated, meaning the model lacks information about events or developments that occurred after that date. |
| Over-reliance (AI) | Excessive dependence on AI tools, leading to a reduction in critical thinking, verification, and the development of one's own skills and knowledge. |
| Privacy Risks (AI) | Potential threats to the confidentiality and security of personal or sensitive data when using AI tools, including data storage, model training, and unauthorized access. |
| Prompt | A text instruction given to an AI model to guide its output. The quality of the prompt directly influences the quality of the AI's response. |
| Prompt Engineering | The practice of designing and refining prompts to elicit the best possible output from an AI model. This involves understanding how to phrase instructions, provide context, and specify desired formats. |
| Context Window | The AI model's "working memory," measured in tokens, that holds both the input (prompt) and the generated output. A larger context window allows for more input, but can reduce the space available for the output, potentially affecting quality. |
| Role (in Prompting) | Assigning an identity or persona to the AI model, which dictates its tone, perspective, and level of expertise for a given task. This helps frame the AI's response to align with specific needs. |
| Context (in Prompting) | Providing background information, goals, target audience, and constraints to the AI model to ensure its responses are relevant, accurate, and applicable to the specific situation. |
| Instructions (in Prompting) | Clear and specific directions given to the AI model detailing what action needs to be taken or what output is expected. Precise instructions lead to more precise results. |
| Examples (in Prompting) | Illustrative examples, both positive and negative, provided to the AI model to demonstrate the desired output style, format, or content. This technique, known as few-shot prompting, enhances accuracy. |
| Output & Format (in Prompting) | Defining the expected type of result (e.g., advice, analysis, creation) and its structure, length, language, and tone. This ensures the AI's output is usable and meets specific requirements. |
| Don't (in Prompting) | Specifying what the AI model should avoid, such as certain jargon, topics, or formats. This helps to increase precision and reduce the need for iterative refinement. |
| Prompt-Chaining | Breaking down a complex task into a sequence of smaller, interconnected prompts, where the output of one prompt serves as the input for the next. This is useful for managing complex workflows. |
| Chain-of-Thought (CoT) | A prompting technique where the AI is instructed to show its thinking process and intermediate steps when solving a problem. This enhances transparency, accuracy, and understanding, especially for complex analyses. |
| Tree-of-Thought (ToT) | A method that involves generating multiple potential solution paths or strategies for a problem. It is particularly useful for strategic planning, problem-solving, and brainstorming by exploring various options. |
| Retrieval-Augmented Generation (RAG) | A technique where AI model responses are based on specific documents or external knowledge sources. This helps to avoid hallucinations and ensure factual accuracy, especially for specific or current information. |
| Self-critique & Self-improvement | A prompting strategy where the AI is asked to generate output, then critically evaluate it, and subsequently improve upon identified weaknesses. This is crucial for tasks demanding high quality. |
| Meta-prompting | Designing prompts that themselves generate other prompts. This powerful technique can lead to more sophisticated prompts, accelerate complex prompt creation, and automate workflows. |
| Prompt Injection | A security vulnerability where an AI model is tricked by clever or hidden instructions embedded within its input or data, causing it to perform unintended actions. This can be direct or indirect. |
| Jailbreaking | A technique used to persuade an AI model to bypass its safety protocols and generate content that it would normally refuse to produce, such as forbidden or harmful information. |
| Sycophancy | The tendency of an AI model to align its responses with the user's beliefs or desires, potentially leading to biased or fabricated information rather than objective truth. This is an inherent characteristic of many language models. |
| Academic Integrity | Upholding honesty, transparency, and responsibility in research. This includes accurate citation, proper attribution of sources (including AI), and avoiding fabrication or falsification of data. |
| Hybrid Approach (AI in Research) | Combining the strengths of AI-driven research (speed, breadth) with traditional methods (validation, depth, human expertise) to achieve more comprehensive and reliable results. |
| Verification Model | A structured process for validating AI-generated information, typically involving tracing sources, cross-checking with other tools, and human judgment to ensure accuracy and reliability. |
| AI-generated Content | Any text, code, images, or other media produced by an artificial intelligence model. Proper attribution and transparency are crucial when using such content in academic or professional work. |