Solved Exercises and Practice Problems for Artificial Intelligence: A Modern Approach (4th Edition)

As of 2026, Stuart Russell and Peter Norvig’s Artificial Intelligence: A Modern Approach (AIMA) remains the “gold standard” for AI education. While the field has shifted toward Large Language Models (LLMs) and Generative AI, the foundational principles—search, logic, and probability—are more critical than ever for understanding how these massive systems operate.

This guide provides solved exercises and practice problems organized by the textbook’s core themes, designed to help students master the logic behind the algorithms.

1. Intelligent Agents & Problem Solving (Chapters 2–6)

The “Agent” is the central protagonist of AIMA. To understand an agent, you must first define its PEAS (Performance, Environment, Actuators, Sensors).

Solved Exercise: PEAS for a 2026 Autonomous Medical Drone

Problem: Define the PEAS for a drone designed to deliver emergency medical supplies in a dense urban environment.

  • Performance Measure: Delivery speed, safety (avoiding collisions), battery efficiency, and success rate of package integrity.
  • Environment: Urban airspace, variable weather, tall buildings, moving pedestrians, and other drones.
  • Actuators: Rotors (for lift and steering), landing gear, package release mechanism, and signaling lights/speakers.
  • Sensors: LiDAR, GPS, high-resolution cameras, ultrasonic sensors, and wind speed indicators.
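A PEAS description maps directly onto the agent-environment loop of Chapter 2. The sketch below shows one step of that loop for the drone as a simple reflex agent; the `Percept` fields, thresholds, and action names are illustrative assumptions, not taken from the book or its code.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Illustrative sensor bundle for the medical drone."""
    gps: tuple          # (latitude, longitude)
    wind_speed: float   # m/s, from the wind speed indicator
    obstacle_ahead: bool  # from LiDAR / ultrasonic sensors

def drone_agent(percept: Percept) -> str:
    """A simple reflex agent: map the current percept directly to an action."""
    if percept.obstacle_ahead:
        return "climb"            # safety term of the performance measure
    if percept.wind_speed > 15.0:
        return "hold_position"    # too windy to proceed safely
    return "fly_toward_target"

# One step of the agent-environment loop
action = drone_agent(Percept(gps=(37.77, -122.42), wind_speed=3.2, obstacle_ahead=False))
print(action)  # fly_toward_target
```

A full agent would also maintain internal state (battery level, delivery progress), but even this sketch makes the sensors-to-actuators mapping concrete.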

Master Class: A* Search and Heuristic Consistency

Problem: Given a node $n$, let $h(n)$ be the heuristic estimate. Prove that if $h(n)$ is consistent, it must also be admissible.

Logic:

A heuristic is consistent if, for every node $n$ and every successor $n'$ of $n$ generated by any action $a$:

$$h(n) \leq c(n, a, n') + h(n')$$

and $h(G) = 0$ for every goal state $G$.

Solution:

By induction on the number of steps $k$ on an optimal path from $n$ to the goal.

  1. Base case ($k = 0$): If $n$ is a goal state, $h(n) = 0 = h^*(n)$, so $h(n) \leq h^*(n)$.
  2. Inductive step: Let $n'$ be the successor of $n$ along an optimal path to the goal, reached by action $a$, so that $h^*(n) = c(n, a, n') + h^*(n')$. By the inductive hypothesis, $h(n') \leq h^*(n')$.
  3. Consistency then gives $h(n) \leq c(n, a, n') + h(n') \leq c(n, a, n') + h^*(n') = h^*(n)$. Therefore consistency implies admissibility.
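The result can be sanity-checked with a small A* implementation on a toy graph whose heuristic is consistent (it never drops by more than the step cost). The graph, heuristic values, and function name below are illustrative sketches, not code from the aima-python repository.

```python
import heapq

def astar(graph, h, start, goal):
    """A* search. graph: {node: [(neighbor, step_cost), ...]}; h: heuristic dict."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Tiny example with a consistent heuristic: h(n) <= c(n, a, n') + h(n') everywhere
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 3)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
path, cost = astar(graph, h, "S", "G")
print(path, cost)  # ['S', 'A', 'G'] 4
```

Because the heuristic is consistent (hence admissible), A* returns the optimal path S-A-G of cost 4 rather than the costlier S-B-G.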

2. Knowledge, Reasoning, and Planning (Chapters 7–12)

This section focuses on agents that maintain a mental model of the world using Propositional and First-Order Logic.

Solved Exercise: Resolution in Propositional Logic

Problem: Prove $R$ using Resolution given the following Knowledge Base (KB):

  1. $P \lor Q$
  2. $P \implies R$
  3. $Q \implies R$

Step-by-Step Logic:

  • Convert to Conjunctive Normal Form (CNF):
    1. $(P \lor Q)$
    2. $(\neg P \lor R)$
    3. $(\neg Q \lor R)$
  • Apply Resolution:
    1. Resolve (1) and (2): $(P \lor Q)$ and $(\neg P \lor R)$ resolve on $P$, leaving $(Q \lor R)$.
    2. Resolve the result $(Q \lor R)$ with (3) $(\neg Q \lor R)$: resolving on $Q$ leaves $(R \lor R)$, which simplifies to $R$.
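The two resolution steps above can be mechanized in a few lines, representing each clause as a set of literal strings (`"~P"` standing for $\neg P$). This encoding and the function name are illustrative, not the book's pseudocode.

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals like 'P' or '~P')."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:  # complementary pair found: cancel it and merge the rest
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

kb = [{"P", "Q"}, {"~P", "R"}, {"~Q", "R"}]   # the CNF knowledge base
step1 = resolve(kb[0], kb[1])[0]  # resolve on P  -> {Q, R}
step2 = resolve(step1, kb[2])[0]  # resolve on Q  -> {R}
print(step1, step2)
```

Using sets makes the $(R \lor R) \equiv R$ simplification automatic, since duplicate literals collapse.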

3. Uncertain Knowledge and Reasoning (Chapters 13–17)

In the real world, agents rarely have perfect information. We use probability to quantify uncertainty.

Master Class: Bayesian Network Inference

Problem: In a simple network $A \to B$, where $A$ is “Rain” and $B$ is “Wet Grass,” given:

  • $P(A) = 0.2$
  • $P(B|A) = 0.9$
  • $P(B|\neg A) = 0.1$

Calculate $P(A|B)$ (The probability it rained given the grass is wet).

Solution:

Using Bayes’ Rule:

$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$

First, find $P(B)$ using the Law of Total Probability:

$$P(B) = (P(B|A) \times P(A)) + (P(B|\neg A) \times P(\neg A))$$

$$P(B) = (0.9 \times 0.2) + (0.1 \times 0.8) = 0.18 + 0.08 = 0.26$$

Now, substitute back:

$$P(A|B) = \frac{0.18}{0.26} \approx 0.692$$

Answer: There is approximately a 69.2% chance it rained if the grass is wet.
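As a quick check, the same computation in Python (variable names are illustrative):

```python
# Bayes' rule for the Rain -> WetGrass network above
p_a = 0.2            # P(Rain)
p_b_given_a = 0.9    # P(WetGrass | Rain)
p_b_given_na = 0.1   # P(WetGrass | ~Rain)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_na * (1 - p_a)

# Posterior: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_b, 2), round(p_a_given_b, 3))  # 0.26 0.692
```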

4. Machine Learning (Chapters 18–21)

By 2026, the distinction between “Traditional ML” and “Deep Learning” has blurred, but the core objective remains: minimizing a loss function.

Practice Problem: Linear Regression Gradient Descent

Scenario: You have a single feature $x$ and a target $y$. Your model is $h_\theta(x) = \theta_1x$.

Task: If the learning rate is $\alpha = 0.1$, the current $\theta_1 = 0.5$, and you have a single training point $(1, 1)$, calculate the new value of $\theta_1$ after one update.

Solution Hint: Use the update rule $\theta_j := \theta_j - \alpha(h_\theta(x) - y)x$.

  1. $h_\theta(1) = 0.5 \times 1 = 0.5$
  2. Error $= 0.5 - 1 = -0.5$
  3. Update: $\theta_1 = 0.5 - 0.1(-0.5)(1) = 0.5 + 0.05 = \mathbf{0.55}$.
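The single update can be sketched in a few lines, assuming squared-error loss on one training point (the function name is illustrative):

```python
def sgd_step(theta, x, y, alpha):
    """One gradient-descent update for h(x) = theta * x with squared-error loss."""
    prediction = theta * x      # h_theta(x)
    error = prediction - y      # h_theta(x) - y
    return theta - alpha * error * x

theta = sgd_step(0.5, x=1.0, y=1.0, alpha=0.1)
print(round(theta, 4))  # 0.55
```

Repeating the update drives $\theta_1$ toward 1.0, the value that fits the point $(1, 1)$ exactly.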

5. Additional Practice Problems

Try solving these to test your mastery of the “Modern Approach”:

  1. Search (Ch. 3): Draw a state-space graph where Breadth-First Search (BFS) finds the shallowest goal, but Depth-First Search (DFS), run as tree search without cycle checking, gets stuck in an infinite loop.
  2. Games (Ch. 5): Perform Alpha-Beta pruning on a Minimax tree with a branching factor of 2 and depth of 3. What is the maximum number of leaves that can be pruned?
  3. Logic (Ch. 8): Translate the following sentence into First-Order Logic: “Every student who takes AI loves Norvig.”
  4. MDPs (Ch. 17): In a Markov Decision Process, if the discount factor $\gamma = 0$, how does the agent value future rewards?

Study Resources for 2026

To deepen your understanding, utilize these official digital companions:

  • AIMA Python (GitHub): The most active repository for pseudocode implementations. Search for aimacode/aima-python.
  • AIMA Exercises Portal: The official companion exercises site (see aimacode/aima-exercises on GitHub) containing hundreds of community-vetted problems.
  • The 2026 Context: Remember that Chapters 24 and 25 (Deep Learning and NLP) are now best supplemented by studying Transformer architectures and Attention mechanisms, which have significantly evolved since the 4th edition’s initial release.

Final Advice

Success in Artificial Intelligence comes from the ability to translate a real-world problem into a formal model. Whether it is a search tree, a logic circuit, or a probability distribution, the tools provided in A Modern Approach are the foundation upon which all modern “Magic” (like LLMs) is built.