# AI ethics and its core principles
AI ethics ensures artificial intelligence is developed and deployed in a manner that benefits humanity and upholds fundamental rights and societal values.
## 1. AI ethics and its core principles
AI ethics is a crucial field addressing the moral considerations surrounding the development, deployment, and impact of artificial intelligence technologies. It examines the potential harms and benefits of AI and seeks to establish guidelines for responsible innovation. The core of AI ethics lies in understanding and addressing several fundamental principles [11](#page=11) [14](#page=14) [16](#page=16) [17](#page=17) [19](#page=19) [21](#page=21) [23](#page=23) [2](#page=2) [5](#page=5) [6](#page=6).
### 1.1 The five pillars of AI ethics
The field of AI ethics is often structured around five key pillars [23](#page=23) [5](#page=5):
* **Privacy and data protection**: This principle emphasizes the importance of controlling personal data and respecting individuals' fundamental right to privacy. AI systems, especially those using large datasets, present specific challenges, as anonymization may be insufficient, and it can be difficult to "forget" data from a model. This is also addressed by regulations like the GDPR [22](#page=22) [23](#page=23) [5](#page=5) [6](#page=6).
* **Transparency and explainability**: AI decisions should be understandable and not operate as a "black box". Transparency does not necessarily mean a full technical explanation but rather a comprehensible explanation for humans. The GDPR includes a "right to explanation," which may involve human intervention in automated decisions. It's important to note that transparency about how a system works doesn't fully explain its impact on people [11](#page=11) [13](#page=13) [19](#page=19) [22](#page=22) [23](#page=23) [5](#page=5).
* **Fairness and bias mitigation**: Bias in AI refers to systematic deviations in decisions that disadvantage specific groups. Bias can arise unintentionally and is often persistent because AI learns from historical data that reflects past inequalities, leading to the reproduction of discrimination [14](#page=14) [16](#page=16) [22](#page=22) [23](#page=23) [5](#page=5).
    * **Proxy discrimination** occurs when seemingly neutral features hide discriminatory practices, such as using postcodes as a proxy for ethnicity or names as a proxy for gender [14](#page=14).
    * **Feedback loops** can exacerbate bias, where a biased decision reinforces the bias in the data, leading to more biased outcomes [14](#page=14).
    * **Example:** An AI system for cancer detection might be 95% accurate for light skin but only 60-70% accurate for dark skin, widening the healthcare gap [16](#page=16). A minimal audit sketch follows after this list.
* **Responsibility and reliability**: This principle concerns accountability when an AI system fails. Responsibility is only possible for systems that are reliable, and reliability in turn requires that someone can be held accountable. AI systems should be consistent and predictable, though their experimental nature can complicate this. Responsibility in AI failures is often vague, with a tendency for parties to deflect blame [17](#page=17) [19](#page=19) [22](#page=22) [23](#page=23) [5](#page=5).
    * **Robustness** is crucial for reliability and safety, ensuring a system continues to function under stress, protects against errors and attacks, and can handle failures without catastrophic consequences [17](#page=17).
    * **Example:** The Dutch benefits scandal (toeslagenaffaire) involved an AI system for fraud risk that incorrectly flagged thousands of parents as fraudsters, leading to severe consequences like children being placed in care, due to a lack of transparency, explainability, responsibility, and human checks [19](#page=19).
* **Safety and security**: AI systems must be designed to be safe and secure, protecting against unintended harm and malicious attacks. This is particularly relevant for AI applications in critical sectors like healthcare or autonomous vehicles [17](#page=17) [21](#page=21) [22](#page=22) [23](#page=23) [2](#page=2) [5](#page=5).
    * **Example:** AI-generated voice impersonations (deepfakes) can be used for fraud, such as a CEO's voice being faked to authorize the transfer of a large sum of money, raising questions about voice biometrics and identity verification [21](#page=21).
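
The fairness example above lends itself to a simple quantitative check. The sketch below is a minimal illustration (the records, column names, and figures are hypothetical, not taken from the slides) of two such checks: accuracy per subgroup, and whether a seemingly neutral feature such as a postcode can stand in for a protected attribute, which is the mechanism behind proxy discrimination.

```python
# Minimal fairness audit sketch (hypothetical data and column names).
# Compares model accuracy per subgroup and checks whether a "neutral"
# feature (postcode) can stand in for a protected attribute (proxy risk).
from collections import defaultdict

def accuracy_by_group(records, group_key):
    """records: list of dicts with group, 'prediction', and 'label' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        hits[g] += int(r["prediction"] == r["label"])
    return {g: hits[g] / totals[g] for g in totals}

def proxy_strength(records, neutral_key, protected_key):
    """Share of records where the neutral feature's majority value matches
    the protected attribute -- a crude proxy-discrimination signal."""
    counts = defaultdict(lambda: defaultdict(int))
    for r in records:
        counts[r[neutral_key]][r[protected_key]] += 1
    matched = sum(max(by_group.values()) for by_group in counts.values())
    return matched / len(records)

# Hypothetical records echoing the light/dark skin accuracy gap above.
records = [
    {"skin_tone": "light", "postcode": "1000", "prediction": 1, "label": 1},
    {"skin_tone": "light", "postcode": "1000", "prediction": 0, "label": 0},
    {"skin_tone": "dark",  "postcode": "9000", "prediction": 1, "label": 0},
    {"skin_tone": "dark",  "postcode": "9000", "prediction": 1, "label": 1},
]
print(accuracy_by_group(records, "skin_tone"))            # {'light': 1.0, 'dark': 0.5}
print(proxy_strength(records, "postcode", "skin_tone"))   # 1.0 = postcode is a perfect proxy
```

If the subgroup accuracies diverge or the proxy score is high, the feature set and training data warrant closer review before deployment.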
### 1.2 Regulatory frameworks
Regulatory frameworks are emerging to provide guidance and enforce ethical AI practices [23](#page=23).
* **GDPR (General Data Protection Regulation) / AVG (Algemene Verordening Gegevensbescherming)**: This European privacy law, which has applied across the EU since 2018, grants citizens control over their data and establishes uniform rules for companies within the EU. It includes the right to explanation for automated decisions [11](#page=11) [6](#page=6).
* **EU AI Act**: This act categorizes AI systems based on their risk levels [5](#page=5).
    * **Unacceptable risk**: These AI systems are prohibited, including social scoring, subliminal manipulation, and real-time facial recognition in public spaces [5](#page=5).
    * **High risk**: This includes AI in recruitment, healthcare, and justice systems, as well as general-purpose AI (GPAI) foundation models. These systems have mandatory requirements like human-in-the-loop oversight, documentation, and bias testing [5](#page=5).
    * **Limited risk**: AI systems like chatbots and generative AI fall into this category. The main requirement is transparency, ensuring users know they are interacting with an AI [5](#page=5).
    * **Low/No risk**: This category includes AI applications like spam filters and AI games, which have no additional requirements [5](#page=5). A toy lookup of these tiers follows after this list.
* **Product Liability Directive (PLD)**: This directive addresses liability for damage caused by products, including AI systems [23](#page=23).
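
As a rough way to see the risk-based logic in practice, the snippet below tags example use cases with the four tiers described above. The tier names and examples mirror the slides; the lookup function itself is only a hypothetical illustration, not an official classification tool.

```python
# Hypothetical lookup of EU AI Act risk tiers, using the examples above.
RISK_TIERS = {
    "unacceptable": ["social scoring", "subliminal manipulation",
                     "real-time facial recognition"],
    "high": ["recruitment", "healthcare", "justice", "gpai foundation model"],
    "limited": ["chatbot", "generative ai"],
    "low/no": ["spam filter", "ai game"],
}

def risk_tier(use_case: str) -> str:
    """Return the first matching tier; real classification needs legal review."""
    text = use_case.lower()
    for tier, examples in RISK_TIERS.items():
        if any(example in text for example in examples):
            return tier
    return "unclassified"

print(risk_tier("CV screening assistant for recruitment"))  # high
print(risk_tier("Customer service chatbot"))                 # limited
```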
### 1.3 Making ethical decisions with AI
Developing an ethical compass for AI projects involves asking critical questions and following a structured process [22](#page=22).
#### 1.3.1 Ethical questions for AI projects
When developing AI, consider questions across different dimensions [22](#page=22):
* **Fairness**: Does it discriminate? (e.g., bias in marketing targeting) [22](#page=22).
* **Transparency**: Can I explain it? (e.g., why credit was denied in finance) [22](#page=22).
* **Privacy**: Do I respect data? (e.g., facial recognition in retail) [22](#page=22).
* **Safety**: Can it cause harm? (e.g., self-driving cars in automotive) [22](#page=22).
* **Responsibility**: Who pays for errors? (e.g., incorrect diagnosis in healthcare) [22](#page=22).
* **Compliance**: Does it follow the law? (e.g., GDPR and AI Act in legal) [22](#page=22).
#### 1.3.2 A process for ethical decision-making
A structured approach to making ethical decisions includes the following steps (a small scaffold sketch follows after this list) [22](#page=22):
1. **Identify stakeholders**: Recognize all parties involved, such as users, employees, society, and data workers [22](#page=22).
2. **Analyze impact**: Evaluate the positive and negative effects on each stakeholder [22](#page=22).
3. **Identify trade-offs**: Determine which ethical pillars conflict and which parties benefit or lose out [22](#page=22).
4. **Plan for mitigation**: Develop strategies to protect vulnerable groups [22](#page=22).
5. **Monitor and adapt**: Continuously evaluate and improve the AI system's ethical performance [22](#page=22).
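
One lightweight way to make these five steps concrete is to capture them as a reusable checklist. The sketch below is a hypothetical scaffold; the field names and example entries are illustrative and not taken from the slides.

```python
# Hypothetical scaffold for the five-step ethical decision process above.
from dataclasses import dataclass, field

@dataclass
class StakeholderImpact:
    stakeholder: str                                      # step 1: who is affected
    positive: list[str] = field(default_factory=list)    # step 2: benefits
    negative: list[str] = field(default_factory=list)    # step 2: harms
    mitigation: list[str] = field(default_factory=list)  # step 4: protections

@dataclass
class EthicsReview:
    project: str
    impacts: list[StakeholderImpact] = field(default_factory=list)
    trade_offs: list[str] = field(default_factory=list)  # step 3: conflicting pillars
    monitoring: list[str] = field(default_factory=list)  # step 5: ongoing checks

review = EthicsReview(
    project="CV screening assistant",
    impacts=[StakeholderImpact(
        "applicants",
        positive=["faster feedback"],
        negative=["possible bias against some groups"],
        mitigation=["human review of every rejection"],
    )],
    trade_offs=["efficiency vs. fairness"],
    monitoring=["quarterly bias audit"],
)
print(len(review.impacts), "stakeholder(s) analyzed")
```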
> **Tip:** It is crucial to remember that what is technically possible is not always ethically permissible, and what is legally allowed may not always be ethically responsible. Regulations often set the minimum standard, while ethics require going further [23](#page=23).
---
# Societal and environmental impact of AI
The adoption of Artificial Intelligence (AI) carries significant societal and environmental consequences, encompassing the exploitation of human labor, substantial ecological footprints, and the exacerbation of existing social divides [25](#page=25) [26](#page=26).
### 2.1 The full cost of AI
Beyond ethical considerations, AI has a tangible impact on both people and the planet. This presents a dual paradox: AI can be both a problem due to its immense energy, water, and labor demands, and a solution capable of improving sustainability, efficiency, and accessibility. The question remains as to who ultimately bears these costs [26](#page=26).
#### 2.1.1 Quantifying the impact
Concrete figures highlight the scale of AI's footprint. The training of GPT-3 alone required approximately 700,000 liters of water, equivalent to about 370 days of showering. Estimates for training GPT-4 suggest between 12,000 and 15,000 tons of CO₂ emissions, comparable to driving 5 million kilometers in a gasoline car. By 2025, AI adoption in the United States is projected to add 896,000 tons of CO₂ annually, similar to 300,000 transatlantic round-trip flights. The human cost is also stark: content moderators in Kenya earn between 1.50 and 2.50 dollars per hour for eight-hour shifts involving exposure to traumatic content. Data labelers in China work 12-hour shifts for 2 dollars per hour. The global water consumption by data centers is predicted to reach 560 billion liters annually by 2025, enough to fill 224 Olympic swimming pools. The rapid growth of AI is driving an exponential demand for raw materials and labor [26](#page=26).
### 2.2 Human costs
The societal impact of AI includes substantial human costs, often stemming from the hidden labor that powers these systems [25](#page=25) [27](#page=27).
#### 2.2.1 The invisible labor behind AI
Millions of "ghost workers" form the unseen workforce behind AI. These include:
* **Data labelers:** Responsible for classifying images, text, and videos [27](#page=27).
* **Content moderators:** Tasked with reviewing content related to violence, abuse, and extremism [27](#page=27).
* **Human feedback trainers:** Correct AI outputs through methods like Reinforcement Learning from Human Feedback (RLHF) [27](#page=27).
These individuals are primarily located in countries like Kenya, India, the Philippines, Romania, and Venezuela, working for major tech companies such as OpenAI, Meta, Amazon, Google, and Accenture. This labor is critical for AI development, yet it remains structurally invisible [27](#page=27).
#### 2.2.2 The toll of a "clean internet"
Content moderators, in particular, suffer significant mental health consequences from processing traumatic material for 8-10 hours daily. Studies indicate that 60-80% develop Post-Traumatic Stress Disorder (PTSD) symptoms, leading to depression, anxiety, sleep disturbances, and emotional numbness. Working conditions are arduous, with limited psychological support and strict non-disclosure agreements (NDAs) preventing them from discussing their experiences. Consequently, "clean" AI models are built upon deeply disturbing and traumatic human labor [28](#page=28).
#### 2.2.3 Data labeling exploitation
Data labeling work is characterized by exploitative labor conditions. Workers are often paid per task, leading to extremely low incomes that can be 30-50% below the living wage. As freelancers, they lack formal contracts, sick leave, vacation days, and job security, often working 10-12 hour days to earn sufficient income. This creates a paradox where AI giants like OpenAI, Meta, and Google achieve multi-billion dollar valuations, while "AI automation" merely shifts exploitation rather than eliminating labor. Automation does not inherently mean progress for everyone [28](#page=28).
#### 2.2.4 The price of polite AI
Reinforcement Learning from Human Feedback (RLHF) involves human feedback trainers meticulously aligning AI responses with human preferences. This process teaches AI what is considered "polite," "useful," and "safe," preventing it from adopting the often crude nature of unfiltered internet content. These trainers assess an average of 200-300 AI responses per hour, about four per minute. The irony is that while language models appear infinitely patient and polite, they are supported by exhausted workers under extreme pressure. Marketing claims that "AI is never tired" obscure the reality that humans must endure fatigue so that AI can appear tireless. As one worker noted, "We train AI to be polite, but we are treated impolitely" [29](#page=29).
##### 2.2.4.1 Economic drivers of exploitation
Significant wage disparities fuel this exploitation. Silicon Valley professionals may earn around 100,000 dollars annually, while workers in Kenya might earn approximately 3,000 dollars per year. This means that working 3.6 days in Silicon Valley can equate to a full year's salary in Kenya. Workers in these low-wage countries often lack basic labor rights, and NDAs are used to conceal issues of abuse and mistreatment. The myth of "fully automated" AI disguises a drive for rapid growth and short-term profit [29](#page=29).
#### 2.2.5 Globalization 2.0: Digital labor migration
Similar to the outsourcing of physical production to Asia in the 1980s and 1990s, digital labor is now being routed to the Global South [30](#page=30).
##### 2.2.5.1 Value distribution in the AI ecosystem
The distribution of value in the AI ecosystem is highly unequal:
* **Tech Giants (OpenAI, Meta):** Capture 70-80% of the value [30](#page=30).
* **Outsourcing Companies (Sama, TaskUs):** Achieve margins of 15-20% [30](#page=30).
* **Platforms (Scale AI):** Take 5-8% [30](#page=30).
* **Workers:** Receive only 2-5%, bearing the most risk for the least earnings [30](#page=30).
Crucially, no single entity feels fully responsible. Tech giants often claim ignorance of their contractors' practices, outsourcing firms cite compliance with local laws, and platforms position themselves merely as intermediaries [30](#page=30).
#### 2.2.6 Fairwork certification
The Fairwork project aims to promote fair labor practices in the AI industry through certification based on five AI-specific principles:
* **Fair Pay:** A living wage plus bonuses for particularly traumatic tasks [31](#page=31).
* **Fair Conditions:** Prioritizing mental health and limiting exposure to harmful content [31](#page=31).
* **Fair Contracts:** Offering permanent contracts and job security [31](#page=31).
* **Fair Management:** Eliminating arbitrary deactivation of accounts [31](#page=31).
* **Fair Representation:** Upholding the right to organize [31](#page=31).
Companies are scored out of 10, with scores potentially influencing their operations. For example, Sama received a 3/10, showing significant improvement after investigation. European Union regulations in this area are also under development [31](#page=31).
### 2.3 Ecological costs
AI systems have a substantial environmental footprint, impacting the planet through carbon emissions, water consumption, and electronic waste [25](#page=25) [32](#page=32).
#### 2.3.1 Carbon footprint (CO₂)
The carbon footprint of AI refers to the greenhouse gas emissions it produces in the form of CO₂. This impact is not limited to the training of Large Language Models (LLMs). Training GPT-3 generated 552 tons of CO₂, equivalent to 370 flights between New York and London. Training GPT-4 is estimated to produce around 15,000 tons of CO₂, comparable to the annual emissions of 3,260 cars. Even daily usage contributes: a single ChatGPT query emits approximately 1 gram of CO₂, which is 4-5 times more than a Google search. By 2025, AI usage in the United States is projected to generate 896,000 tons of CO₂ annually, equivalent to powering 195,000 cars for an entire year [32](#page=32).
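
To get a feel for how the per-query figure scales with usage, the back-of-the-envelope sketch below uses the roughly 1 gram per ChatGPT query from the slide and assumes a Google search at about one fifth of that; the daily usage volume is a made-up assumption.

```python
# Back-of-the-envelope CO2 estimate based on the per-query figure above.
GRAMS_PER_AI_QUERY = 1.0   # ~1 g CO2 per ChatGPT query (slide figure)
GRAMS_PER_SEARCH = 0.2     # assumption: a Google search at roughly 1/5 of that

def annual_co2_kg(queries_per_day: float, grams_per_query: float) -> float:
    """Convert a daily query habit into kilograms of CO2 per year."""
    return queries_per_day * grams_per_query * 365 / 1000

# Hypothetical usage: 50 queries per day for one person.
print(annual_co2_kg(50, GRAMS_PER_AI_QUERY))  # ~18 kg CO2 per year
print(annual_co2_kg(50, GRAMS_PER_SEARCH))    # ~3.7 kg CO2 per year
```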
#### 2.3.2 The datacenter energy crisis
Data centers currently consume 1-2% of global electricity, with this figure projected to increase by 20-30% annually due to AI expansion. The energy mix powering these centers is problematic, with 60% derived from fossil fuels (coal, gas), 25% from renewables, 10% from nuclear energy, and 5% from other sources. Despite Microsoft's goal to be "carbon negative by 2030," the reality is that AI expansion is increasing CO₂ emissions by over 30%. A proposed "solution" is nuclear energy, with Google, Microsoft, and Amazon announcing plans for nuclear reactors in September 2024 [32](#page=32).
#### 2.3.3 Thirsty datacenters
The cooling systems for data centers are a major source of water consumption, requiring approximately 1.9 liters of water per kilowatt-hour of electricity used. Training GPT-3 alone consumed 700,000 liters of water. Globally, data centers are projected to require 560 billion liters of water annually by 2025, equivalent to 224 Olympic swimming pools. By 2027, this demand is expected to rise to 1.7 trillion liters, or 4.7 billion liters per day – nearly three times Belgium's daily water consumption. This has led to conflicts over water resources: in Arizona, Google's data center competes with drinking water rationing, and in Georgia, residents lack drinking water near a Meta data center. The water is used for purification, making it safe for servers and readily available via municipal lines, but approximately 80% evaporates, resulting in permanent loss. While alternatives exist, they are underutilized [33](#page=33).
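
The cooling figure translates into a simple estimate: multiply energy use by roughly 1.9 liters per kWh, then apply the roughly 80% evaporation share mentioned above. The workload size in the sketch below is a hypothetical value.

```python
# Rough water-use estimate from the ~1.9 L/kWh cooling figure above.
LITERS_PER_KWH = 1.9       # slide figure
EVAPORATION_SHARE = 0.8    # ~80% of cooling water evaporates (slide figure)

def cooling_water(kwh: float) -> tuple[float, float]:
    """Return (total liters used, liters permanently lost to evaporation)."""
    used = kwh * LITERS_PER_KWH
    return used, used * EVAPORATION_SHARE

total, lost = cooling_water(10_000)   # hypothetical 10,000 kWh workload
print(f"{total:,.0f} L used, {lost:,.0f} L evaporated")  # 19,000 L used, 15,200 L evaporated
```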
#### 2.3.4 Hardware lifecycle and e-waste crisis
The lifespan of a Graphics Processing Unit (GPU) is typically only 1.5-2 years, not due to malfunction but obsolescence. This contributes to an escalating e-waste problem, with 62 million tons generated annually in 2024, and less than 20% being recycled, leading to most of it ending up in landfills. E-waste contains valuable rare metals such as cobalt (often mined with child labor in the Congo), lithium (whose extraction causes water pollution), and gold and rare earth elements (extracted through toxic processes). The rapid obsolescence cycle ensures a continuous demand for these materials, perpetuating the need for new mining and associated labor issues [33](#page=33).
#### 2.3.5 Green AI: What works
Efforts are underway to develop more environmentally friendly AI, often termed "Green AI" [34](#page=34).
* **Model Efficiency:** The development of Smaller Language Models (SLMs) is a key strategy. Llama 3.3, with 70 billion parameters, is 60% smaller than GPT-3 (175 billion parameters) and achieves comparable performance at a significantly lower cost (25 times cheaper than GPT-4o). Phi-4, with only 14 billion parameters, outperforms GPT-4o in mathematics and reasoning despite being 92% smaller [34](#page=34).
* **Smart Training:** Finetuning existing models rather than training them from scratch reduces computational resources [34](#page=34).
* **Green Infrastructure:** Utilizing renewable energy sources is crucial. Iceland's data centers are powered by geothermal energy (over 90%), Norway's by hydroelectric power, and Scotland's by wind energy [34](#page=34).
* **Local Processing:** Running LLMs locally, such as with Ollama for models like Phi, can offer privacy and energy benefits by eliminating the need for constant datacenter connectivity for every query [34](#page=34).
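
As a concrete illustration of the local-processing point, the snippet below sends one prompt to a locally running Ollama server over its HTTP API. It assumes Ollama is installed and serving on its default port, and that a small model has already been pulled; the model tag `phi3` and the prompt are only examples and may differ in practice.

```python
# Minimal local-inference sketch against a locally running Ollama server.
# Assumes `ollama serve` is running and a small model has been pulled,
# e.g. `ollama pull phi3`; model tag and prompt are illustrative.
import json
import urllib.request

payload = json.dumps({
    "model": "phi3",
    "prompt": "Summarize the idea of Green AI in one sentence.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```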
#### 2.3.6 The Jevons paradox in AI
The Jevons paradox, which states that increased efficiency can lead to increased total consumption, is highly relevant to AI. The paradox unfolds as follows [34](#page=34):
1. **Efficiency leads to lower costs:** More efficient AI models become cheaper to develop and operate [34](#page=34).
2. **Lower costs drive increased usage:** The reduced cost incentivizes wider adoption and more frequent use of AI [34](#page=34).
3. **Increased usage raises total impact:** Despite individual efficiency gains, the aggregate use of AI leads to a net increase in environmental impact, such as CO₂ emissions [34](#page=34).
Historically, this paradox was observed with coal-powered engines in the 1860s, where more efficient machines led to greater coal consumption. Similarly, more fuel-efficient cars encourage people to drive further. As Microsoft's CEO noted in 2025, "Jevons paradox strikes again!" Even efficient solutions can ultimately increase overall consumption when widely adopted [34](#page=34).
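
The three-step mechanism can be made tangible with a toy rebound-effect calculation: if efficiency gains cut the impact per query while usage grows faster than the savings, total impact still rises. The numbers below are purely illustrative assumptions, not data from the slides.

```python
# Toy rebound-effect (Jevons paradox) illustration with made-up numbers.
def total_impact(co2_per_query: float, queries: float) -> float:
    """Total grams of CO2 = impact per query times number of queries."""
    return co2_per_query * queries

before = total_impact(co2_per_query=1.0, queries=1_000_000)    # baseline
# Steps 1-2: the model becomes 4x more efficient, so usage explodes 10x.
after = total_impact(co2_per_query=0.25, queries=10_000_000)
# Step 3: total impact rises despite per-query efficiency gains.
print(before, "->", after)   # 1,000,000 g -> 2,500,000 g
```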
### 2.4 Social costs and divides
AI adoption exacerbates existing social inequalities, creating and widening various "divides" [25](#page=25) [35](#page=35).
#### 2.4.1 The labor market: winners and losers
The World Economic Forum's "Future of Jobs 2024" report offers an optimistic outlook, predicting the elimination of 85 million jobs but the creation of 97 million new ones, a net gain of 12 million. However, the reality is more complex. Approximately 1.1 billion people worldwide (40%) require retraining, but the pace of AI disruption outstrips the time needed for such reskilling efforts. McKinsey identifies three groups based on their ability to adapt to AI's impact [35](#page=35):
* **Winners:** Those with high levels of education and strong technical skills [35](#page=35).
* **Adapters:** Individuals who can reskill, though this process may be challenging [35](#page=35).
* **Losers:** Those with low educational attainment and no access to training opportunities [35](#page=35).
#### 2.4.2 The geopolitical struggle for AI
AI is increasingly viewed as a national security issue, leading to a technological race between the United States, China, and the European Union. This competition influences chip export restrictions and the control of data, now considered a strategic resource akin to "oil." Different regulatory approaches are emerging: the EU favors a risk-based approach with the AI Act, the US promotes lighter regulation to foster innovation, and China maintains strict state control over AI. This geopolitical landscape raises questions about influence and democracy, specifically concerning who dictates what constitutes "safe" or "allowed" AI, and how AI can be used for information warfare and foreign election interference [36](#page=36).
#### 2.4.3 Democracy under pressure
Deepfakes and AI-generated disinformation pose a significant threat to elections globally, with AI accelerating the spread of false information faster than fact-checkers can respond. AI systems themselves are vulnerable; it reportedly takes only about 250 targeted documents to subtly "poison" an LLM, enabling states and actors to manipulate AI systems and engage in new forms of information warfare. This erosion of trust fosters an environment where "everything can be fake" [36](#page=36).
#### 2.4.4 Social divides amplified by AI
AI amplifies existing inequalities across five key dimensions:
* **Digital Divide:** Unequal access to AI tools and the associated costs [37](#page=37).
* **Linguistic Divide:** Disparities in training data and AI performance across languages [37](#page=37).
* **Infrastructure Divide:** Differences in access to hardware, internet, and electricity [37](#page=37).
* **Knowledge Divide:** Issues of copyright, control, and ownership of knowledge generated or utilized by AI [37](#page=37).
* **Accessibility Divide:** AI design that caters to an "average" user rather than being inclusive for everyone [37](#page=37).
##### 2.4.4.1 The digital divide
The cost of AI tools is estimated to average 20-25 dollars per month by 2025. This cost represents a significant portion of monthly income in many regions: 1.3% in the US, 2-3% in Belgium, 46% in Kenya, and 35% in India. This creates a paradox: individuals in places like Kenya label data for AI to learn, yet the AI tools themselves are prohibitively expensive, consuming a large percentage of their salary. Big Tech benefits financially, while the workers contributing to AI development do not [37](#page=37).
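
The percentages follow from a simple ratio of subscription cost to monthly income. The sketch below reproduces that ratio; the income figures are illustrative assumptions chosen only to echo the slide's percentages, not data taken from the slides themselves.

```python
# Subscription cost as a share of monthly income (hypothetical incomes).
AI_SUBSCRIPTION = 25  # USD per month (ballpark figure from the slide)

monthly_income_usd = {   # illustrative assumptions, not slide data
    "US": 1900,
    "Belgium": 1000,
    "Kenya": 55,
    "India": 70,
}
for country, income in monthly_income_usd.items():
    share = AI_SUBSCRIPTION / income * 100
    print(f"{country}: {share:.1f}% of monthly income")
```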
##### 2.4.4.2 The linguistic divide
The distribution of training data heavily favors English, accounting for 52% of AI training data. Other languages are significantly underrepresented: Chinese at 8%, Spanish at 3%, French at 1%, Dutch at less than 1%, and African languages at less than 0.1%. This leads to a performance gap: GPT-4 achieves 85-90% accuracy in English but only 40-45% in Swahili, and its translation accuracy from English to Yoruba is below 30%. This disparity reinforces English as the global lingua franca [38](#page=38).
##### 2.4.4.3 The infrastructure divide
Internet access varies dramatically by region. Developed countries boast over 90% access with high speeds, while developing countries have 40-60% access, often with slow 3G/4G connections. The least developed countries have less than 20% reliable internet access. AI applications, particularly cloud-based tools and those involving image and video processing, require stable internet with high bandwidth, modern devices with GPUs, and continuous electricity supply [38](#page=38).
##### 2.4.4.4 The knowledge divide
AI models are often trained on millions of books, articles, and artworks without explicit consent or compensation for the creators. The ownership of AI-generated output remains a legal gray area worldwide, generally requiring sufficient human creativity to be eligible for copyright protection. Large tech companies possess the most extensive models, though open-source alternatives are growing, albeit lagging behind. Consequently, those who control data hold significant power [39](#page=39).
##### 2.4.4.5 The accessibility divide
The concept of "universal design" often defaults to designing for those in power, typically benefiting the "average" user who is implicitly assumed to be white, male, English-speaking, and without disabilities. This leads to "outliers" or "edge cases" being overlooked in the design process. Documented failures include healthcare AI missing melanoma detection in darker skin tones, speech recognition performing poorly for women and accents, and facial recognition exhibiting significantly higher error rates for Black women compared to white men [39](#page=39).
#### 2.4.5 How divides reinforce each other
The Matthew Effect, stating "the rich get richer," is evident in how these divides interact. Access to AI boosts productivity, leading to higher income, which in turn facilitates better access to AI and other resources. Conversely, a lack of access results in growing disadvantages and further exclusion. For instance, a Kenyan farmer lacking English proficiency, internet access, financial resources, and training opportunities is entirely excluded from the benefits of the AI revolution [40](#page=40).
#### 2.4.6 Inclusive AI: Is it possible?
There are success stories in 2025 demonstrating the potential for inclusive AI, such as Be My Eyes, Google Live Transcribe, Microsoft Accessibility, and AI in remote healthcare. Key factors for these successes include being free, offline, locally adapted, and fostering community ownership. Scalable inclusion requires diverse teams beyond just Silicon Valley, investment in infrastructure, prioritization of multilingual data and accessibility as core design principles, and supportive regulatory frameworks, as the market alone will not solve these issues [40](#page=40).
### 2.5 The balance: AI as a problem versus a solution
AI presents a complex duality, acting as both a source of significant problems and a potential provider of solutions [41](#page=41).
#### 2.5.1 AI as a problem
* **Human:** Over 10 million "ghost workers" involved in exploitation and trauma [41](#page=41).
* **Ecological:** Models generating 15,000 tons of CO₂ and data centers consuming 560 billion liters of water annually [41](#page=41).
* **Social:** Amplification of the Digital, Linguistic, Infrastructure, Knowledge, and Accessibility divides [41](#page=41).
#### 2.5.2 AI as a solution
* **Human:** Potential for improved labor conditions if fairness is prioritized [41](#page=41).
* **Ecological:** Applications in climate modeling, energy optimization, and Green AI development [41](#page=41).
* **Social:** Enhancing accessibility and healthcare services [41](#page=41).
The ultimate direction AI takes is a choice, necessitating critical evaluation and conscious decision-making [41](#page=41).
### 2.6 Key takeaways
* Behind "smart" AI are thousands of invisible human workers [41](#page=41).
* AI possesses a substantial ecological footprint [41](#page=41).
* AI exacerbates inequality through five key divides: Digital, Linguistic, Infrastructure, Knowledge, and Accessibility [41](#page=41).
* Efficiency alone does not solve environmental problems; the Jevons paradox highlights this [41](#page=41).
* Viable alternatives exist, including Green AI, fair labor practices, and inclusive design [41](#page=41).
* The future of AI is a choice that requires critical and conscious engagement [41](#page=41).
---
# AI in the future of work and personal development
This topic explores the transformative impact of Artificial Intelligence (AI) on the future of work and personal development, emphasizing the critical role of human skills, the concept of co-intelligence, and the necessity of continuous learning in an evolving professional landscape [44](#page=44).
### 3.1 Understanding AI's Capabilities and Limitations
AI excels at tasks that are computationally intensive and data-driven, such as complex calculations, data analysis, and pattern recognition. Examples of tasks that are relatively easy for AI include playing chess at a grandmaster level, making medical diagnoses, and recognizing and categorizing images [44](#page=44).
Conversely, AI struggles with tasks that are inherently human and evolutionary, requiring nuanced understanding, common sense, and emotional intelligence. These difficult tasks include a toddler tying shoes, understanding sarcasm, and sensing when someone needs support. This distinction highlights that what is evolutionarily old for humans (like walking, talking, feeling) is difficult for AI, while what is recent (like calculation, logic, data processing) is easy for AI. Therefore, a key opportunity for individuals lies in focusing on what AI cannot yet replicate [44](#page=44).
> **Tip:** Moravec's paradox posits that it is easy for AI to perform complex tasks that require advanced reasoning but difficult for it to perform simple tasks that involve basic motor skills or common sense [44](#page=44).
### 3.2 Co-intelligence: Augmentation, Not Replacement
AI is best understood as a tool for augmentation, enhancing human capabilities rather than replacing humans entirely. The concept of "co-intelligence" suggests a synergistic relationship where AI handles tasks it excels at, such as speed, scale, and data processing, while humans focus on their strengths: creativity, ethics, and context. This collaborative approach, where AI generates proposals and humans make decisions, leads to a stronger outcome than either could achieve alone [45](#page=45).
Key conditions for successful co-intelligence include:
* Humans maintaining control over the process [45](#page=45).
* AI acting as a proposer, with humans making the final decisions [45](#page=45).
* Continuous critical evaluation of AI outputs [45](#page=45).
* Leveraging indispensable domain knowledge possessed by humans [45](#page=45).
#### 3.2.1 The Risk of Deskilling and Cognitive Offloading
The increasing reliance on AI tools raises concerns about "deskilling," where humans lose essential abilities because AI performs these tasks for them. Analogies include GPS reducing map-reading skills, autocorrect weakening spelling abilities, and calculators diminishing mental arithmetic proficiency. AI writing assistants can undermine writing skills by performing the writing task itself [45](#page=45).
This phenomenon is linked to "cognitive offloading," where the brain conserves energy by not investing it in tasks that are handled externally. The principle of "use it or lose it" applies, as neural connections for a disused skill weaken. It is crucial to actively practice and protect core human skills that should not be lost [45](#page=45).
### 3.3 Evaluating AI's Reliability and Application
AI systems are often described as "brittle, not robust," meaning they can fail unexpectedly: they may work perfectly until a slight deviation causes failure, and they lack the "graceful degradation" that would keep small errors from snowballing into significant consequences. AI can also produce "hallucinations" – fabricated information presented as fact – without any warning. The critical stance of figures like Gary Marcus emphasizes the absolute necessity of validating AI outputs, especially for important decisions [46](#page=46).
#### 3.3.1 When AI Delivers Real Value
AI is most effective when:
* Tasks are repetitive and predictable, such as data entry, basic content generation, or email filtering [47](#page=47).
* Speed is prioritized over absolute perfection, for example, in brainstorming sessions, creating initial drafts, or summarizing research [47](#page=47).
* An expert is available to validate the AI's output, where AI generates and the expert refines [47](#page=47).
* Low-risk experiments are being conducted, such as creating prototypes, exploring ideas, or testing concepts [47](#page=47).
A general rule of thumb is to use AI for efficiency and humans for strategy and quality [47](#page=47).
#### 3.3.2 When AI Poses a Risk
AI should be used with extreme caution or avoided when:
* Human lives or well-being are at stake, such as in medical diagnoses or legal judgments without expert backup [48](#page=48).
* Privacy or confidentiality are paramount, involving sensitive personal or business data [48](#page=48).
* Nuance, empathy, and context are essential, as in conflict resolution or change management [48](#page=48).
* Ethical considerations are critical, such as in recruitment, performance reviews, or resource allocation [48](#page=48).
* The core skill itself needs to be preserved by the individual – skills one does not want to lose [48](#page=48).
* The AI's output cannot be verified, leading to uncertainty about its accuracy [48](#page=48).
The guiding principle in such situations is: "When in doubt, don't do it" [48](#page=48).
> **Tip:** Nicholas Carr's advice, "Don't automate what makes you human," underscores the importance of preserving core human capabilities [48](#page=48).
When considering automation, three critical questions should be asked (a checklist sketch follows this list):
1. **What am I automating?** Is it a core skill [48](#page=48)?
2. **Why am I automating this?** Is it out of laziness or for genuine added value [48](#page=48)?
3. **How do I remain involved?** Will I lose the skill [48](#page=48)?
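
These three questions, together with the criteria in 3.3.1 and 3.3.2, can be folded into a personal checklist. The sketch below is one hypothetical way to encode that reasoning; the criteria names are taken from this section, while the decision logic is an invented simplification.

```python
# Hypothetical "should I automate this?" checklist based on sections 3.3.1-3.3.2.
RED_FLAGS = [
    "lives or well-being at stake",
    "sensitive or confidential data",
    "nuance, empathy, or context essential",
    "ethically critical decision",
    "core skill I want to keep",
    "output cannot be verified",
]

def should_use_ai(task_flags: set[str], repetitive: bool, expert_can_validate: bool) -> str:
    if task_flags & set(RED_FLAGS):
        return "Don't automate -- when in doubt, don't do it."
    if repetitive and expert_can_validate:
        return "Good fit: AI for efficiency, human for strategy and quality."
    return "Maybe -- run a low-risk experiment first."

print(should_use_ai({"output cannot be verified"}, repetitive=True, expert_can_validate=False))
print(should_use_ai(set(), repetitive=True, expert_can_validate=True))
```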
### 3.4 The Ever-Changing Nature of Work
Technological advancements have historically transformed the nature of work, leading to shifts in employment and the economy. Examples include the steam engine in the 1800s, electricity in the 1900s, personal computers in the 1980s, and the internet in the 2000s. Each wave of technology initially sparked concerns about job displacement, but ultimately led to the creation of new sectors, increased prosperity, and a shift towards a knowledge-based economy. The current era of AI is also expected to lead to job transformations, and individuals must prepare for this evolution [49](#page=49).
#### 3.4.1 Skills for Today and Tomorrow
There is a significant skills mismatch globally, with billions needing to reskill or upskill. The process of retraining can take several years. Key areas of focus for future-proofing one's career include AI, data and technological literacy, critical thinking, and independent thought [50](#page=50).
Individuals should prioritize:
* Building foundational knowledge rather than solely focusing on tools [50](#page=50).
* Deepening domain expertise [50](#page=50).
* Embracing a "growth mindset" and committing to lifelong learning [50](#page=50).
* Maintaining a critical perspective on AI and its applications [50](#page=50).
### 3.5 Preserving Creativity and Authenticity
It is important to intentionally "turn off" AI for tasks requiring genuine creativity. Starting with analog methods like pen and paper can help generate raw ideas from personal experience before involving AI for refinement. This approach prevents "AI-slop," which refers to uniform, generic output resulting from everyone using the same AI tools, leading to predictable and uninspired results and a loss of authenticity. By generating initial ideas from one's own mind, individuals can stimulate unconventional connections, protect their original thought patterns, and train their creative abilities [51](#page=51).
#### 3.5.1 Common Pitfalls in AI Adoption
Several common mistakes hinder effective AI integration:
1. **FOMO-driven learning:** Trying every new tool without a clear focus; it's better to master two to three tools deeply [51](#page=51).
2. **Tutorial hell:** Watching endless tutorials without practical application; aim for 20% learning and 80% doing [51](#page=51).
3. **Blind adoption:** Using AI simply because it's possible, not because it's necessary; always ask "why" [51](#page=51).
4. **No validation:** Accepting AI output without verification; always verify everything [51](#page=51).
5. **Over-automation:** Automating everything, including core skills that should be maintained; practice selective adoption and protect essential competencies [51](#page=51).
### 3.6 Navigating Your Autonomous Future
Ultimately, the future with AI is not just about technology but about the choices individuals make. Every decision, from selecting tools to choosing which problems to address, has an impact. The crucial question is not "How do I use AI?" but "When should I use AI, and when should I not?". Maintaining autonomy, staying critical, and remaining curious are essential for navigating this evolving landscape [53](#page=53).
---
## Glossary
| Term | Definition |
|------|------------|
| Artificial Intelligence (AI) | A field of computer science focused on creating systems that can perform tasks typically requiring human intelligence, such as learning, problem-solving, and decision-making. |
| AI Ethics | The branch of ethics that studies and addresses the moral implications and consequences of artificial intelligence, focusing on issues like fairness, accountability, transparency, and the impact on society. |
| Privacy | The right of individuals to control their personal information and to be free from unwarranted surveillance or intrusion into their private lives. In the context of AI, it concerns how personal data is collected, used, and protected. |
| Transparency | The principle that AI systems should be understandable and their decision-making processes should be open to scrutiny. This allows users and stakeholders to comprehend how an AI arrived at a particular outcome. |
| Explainable AI (XAI) | A set of tools and techniques that enable human users to understand the results of AI. It aims to make AI models less of a "black box" by providing explanations for their predictions or decisions. |
| Fairness (in AI) | The principle that AI systems should treat all individuals and groups equitably, without discrimination or prejudice. This involves identifying and mitigating biases that can lead to unfair outcomes. |
| Bias (in AI) | A systematic deviation in AI decisions that unfairly disadvantages specific individuals or groups. Bias can arise from biased training data, algorithmic design, or how the AI is deployed. |
| Accountability | The obligation of individuals or organizations to take responsibility for the outcomes of AI systems, including any harm or errors they may cause. It addresses who is liable when an AI fails. |
| Robustness | The ability of an AI system to maintain its performance and functionality under stress, unexpected conditions, or malicious attacks. A robust AI should fail gracefully and predictably. |
| Data Poisoning | A type of cyberattack where malicious data is introduced into the training dataset of an AI model, aiming to corrupt its learning process and lead to incorrect or biased outputs. |
| Adversarial Attacks | Techniques used to trick AI models by making small, often imperceptible changes to input data that cause the model to misclassify or malfunction, leading to errors or security breaches. |
| Deepfakes | Synthesized media, typically videos or audio, that are manipulated using AI to depict individuals saying or doing things they never actually did, raising concerns about misinformation and defamation. |
| EU AI Act | A European Union regulation that establishes a legal framework for artificial intelligence, classifying AI systems by risk level and imposing obligations accordingly to ensure safety and fundamental rights. |
| GDPR (General Data Protection Regulation) | A comprehensive data protection and privacy law in the European Union that governs how personal data of EU citizens is processed and protected. It grants individuals significant control over their data. |
| Ghost Workers | Individuals, often located in lower-income countries, who perform the vast amount of invisible labor required to train and moderate AI systems, such as data labeling and content moderation, often under poor working conditions. |
| Content Moderator | A person responsible for reviewing and filtering user-generated content on online platforms to ensure it complies with community guidelines and legal standards, often exposed to disturbing material. |
| Data Labeling | The process of tagging or annotating raw data (images, text, audio, video) to make it understandable and usable for training machine learning models. |
| Reinforcement Learning from Human Feedback (RLHF) | A method used to fine-tune AI models, particularly large language models, by incorporating human preferences and feedback to improve their helpfulness, honesty, and harmlessness. |
| Jevons Paradox | An economic theory stating that technological advancements that increase efficiency in the use of a resource can lead to an increase in the total consumption of that resource, rather than a decrease. |
| Moravec’s Paradox | The observation that, contrary to traditional assumptions, high-level reasoning (like chess or complex calculations) requires little computation, whereas low-level sensory-motor skills (like perception and mobility) require enormous computational resources. |
| Co-Intelligence | A concept suggesting that human and artificial intelligence can work together, augmenting each other's capabilities to achieve better outcomes than either could alone. |
| Deskilling | The process by which workers lose their expertise or skills due to the automation of tasks, often leading to a reduction in job satisfaction and career progression. |
| Hallucinations (in AI) | When an AI model generates incorrect, nonsensical, or factually inaccurate information presented as if it were true. This is a common issue with generative AI models. |
| Digital Divide | The gap between individuals, households, businesses, and geographic areas at different socioeconomic levels with regard to their opportunities to access information and communication technologies (ICTs) and their use of the Internet for a wide variety of activities. |
| Linguistic Divide | The disparity in AI performance and accessibility due to differences in language, particularly the dominance of English in training data, which can disadvantage speakers of other languages. |
| Infrastructure Divide | The inequality in access to the necessary physical and digital infrastructure, such as reliable internet, electricity, and modern computing hardware, required for effective AI utilization. |
| Knowledge Divide | The gap in understanding, access, and control over knowledge, particularly concerning AI training data, intellectual property rights, and the ownership of AI-generated content. |
| Accessibility Divide | The disparity in how AI systems are designed and perform for users with different abilities, needs, and backgrounds, often favoring the "average" user and neglecting "outliers" or edge cases. |