LeetCode #3642 — MEDIUM

Find Books with Polarized Opinions

Move from brute-force thinking to an efficient approach using core interview patterns.

Solve on LeetCode
The Problem

Problem Statement

Table: books

+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| book_id     | int     |
| title       | varchar |
| author      | varchar |
| genre       | varchar |
| pages       | int     |
+-------------+---------+
book_id is the unique ID for this table.
Each row contains information about a book including its genre and page count.

Table: reading_sessions

+----------------+---------+
| Column Name    | Type    |
+----------------+---------+
| session_id     | int     |
| book_id        | int     |
| reader_name    | varchar |
| pages_read     | int     |
| session_rating | int     |
+----------------+---------+
session_id is the unique ID for this table.
Each row represents a reading session where someone read a portion of a book. session_rating is on a scale of 1-5.

Write a solution to find books that have polarized opinions - books that receive both very high ratings and very low ratings from different readers.

  • A book has polarized opinions if it has at least one rating ≥ 4 and at least one rating ≤ 2
  • Only consider books that have at least 5 reading sessions
  • Calculate the rating spread as (highest_rating - lowest_rating)
  • Calculate the polarization score as the number of extreme ratings (ratings ≤ 2 or ≥ 4) divided by total sessions
  • Only include books where polarization score ≥ 0.6 (at least 60% extreme ratings)

Return the result table ordered by polarization score in descending order, then by title in descending order.
The polarization score should be rounded to 2 decimal places.

The result format is in the following example.

Example:

Input:

books table:

+---------+------------------------+---------------+----------+-------+
| book_id | title                  | author        | genre    | pages |
+---------+------------------------+---------------+----------+-------+
| 1       | The Great Gatsby       | F. Scott      | Fiction  | 180   |
| 2       | To Kill a Mockingbird  | Harper Lee    | Fiction  | 281   |
| 3       | 1984                   | George Orwell | Dystopian| 328   |
| 4       | Pride and Prejudice    | Jane Austen   | Romance  | 432   |
| 5       | The Catcher in the Rye | J.D. Salinger | Fiction  | 277   |
+---------+------------------------+---------------+----------+-------+

reading_sessions table:

+------------+---------+-------------+------------+----------------+
| session_id | book_id | reader_name | pages_read | session_rating |
+------------+---------+-------------+------------+----------------+
| 1          | 1       | Alice       | 50         | 5              |
| 2          | 1       | Bob         | 60         | 1              |
| 3          | 1       | Carol       | 40         | 4              |
| 4          | 1       | David       | 30         | 2              |
| 5          | 1       | Emma        | 45         | 5              |
| 6          | 2       | Frank       | 80         | 4              |
| 7          | 2       | Grace       | 70         | 4              |
| 8          | 2       | Henry       | 90         | 5              |
| 9          | 2       | Ivy         | 60         | 4              |
| 10         | 2       | Jack        | 75         | 4              |
| 11         | 3       | Kate        | 100        | 2              |
| 12         | 3       | Liam        | 120        | 1              |
| 13         | 3       | Mia         | 80         | 2              |
| 14         | 3       | Noah        | 90         | 1              |
| 15         | 3       | Olivia      | 110        | 4              |
| 16         | 3       | Paul        | 95         | 5              |
| 17         | 4       | Quinn       | 150        | 3              |
| 18         | 4       | Ruby        | 140        | 3              |
| 19         | 5       | Sam         | 80         | 1              |
| 20         | 5       | Tara        | 70         | 2              |
+------------+---------+-------------+------------+----------------+

Output:

+---------+------------------+---------------+-----------+-------+---------------+--------------------+
| book_id | title            | author        | genre     | pages | rating_spread | polarization_score |
+---------+------------------+---------------+-----------+-------+---------------+--------------------+
| 1       | The Great Gatsby | F. Scott      | Fiction   | 180   | 4             | 1.00               |
| 3       | 1984             | George Orwell | Dystopian | 328   | 4             | 1.00               |
+---------+------------------+---------------+-----------+-------+---------------+--------------------+

Explanation:

  • The Great Gatsby (book_id = 1):
    • Has 5 reading sessions (meets minimum requirement)
    • Ratings: 5, 1, 4, 2, 5
    • Has ratings ≥ 4: 5, 4, 5 (3 sessions)
    • Has ratings ≤ 2: 1, 2 (2 sessions)
    • Rating spread: 5 - 1 = 4
    • Extreme ratings (≤2 or ≥4): All 5 sessions (5, 1, 4, 2, 5)
    • Polarization score: 5/5 = 1.00 (≥ 0.6, qualifies)
  • 1984 (book_id = 3):
    • Has 6 reading sessions (meets minimum requirement)
    • Ratings: 2, 1, 2, 1, 4, 5
    • Has ratings ≥ 4: 4, 5 (2 sessions)
    • Has ratings ≤ 2: 2, 1, 2, 1 (4 sessions)
    • Rating spread: 5 - 1 = 4
    • Extreme ratings (≤2 or ≥4): All 6 sessions (2, 1, 2, 1, 4, 5)
    • Polarization score: 6/6 = 1.00 (≥ 0.6, qualifies)
  • Books not included:
    • To Kill a Mockingbird (book_id = 2): All ratings are 4-5, no low ratings (≤2)
    • Pride and Prejudice (book_id = 4): Only 2 sessions (< 5 minimum)
    • The Catcher in the Rye (book_id = 5): Only 2 sessions (< 5 minimum)

The result table is ordered by polarization score in descending order, then by book title in descending order.
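The arithmetic in the explanation above can be double-checked with a few lines of plain Python (ratings transcribed from the example table; this is a verification sketch, not the submitted solution):

```python
# session_rating lists per book, transcribed from the example reading_sessions table
ratings = {
    1: [5, 1, 4, 2, 5],     # The Great Gatsby
    2: [4, 4, 5, 4, 4],     # To Kill a Mockingbird
    3: [2, 1, 2, 1, 4, 5],  # 1984
    4: [3, 3],              # Pride and Prejudice
    5: [1, 2],              # The Catcher in the Rye
}

polarized = []
for book_id, rs in ratings.items():
    extreme = sum(1 for r in rs if r <= 2 or r >= 4)
    score = round(extreme / len(rs), 2)
    if len(rs) >= 5 and max(rs) >= 4 and min(rs) <= 2 and score >= 0.6:
        polarized.append((book_id, max(rs) - min(rs), score))

print(polarized)  # [(1, 4, 1.0), (3, 4, 1.0)]: only books 1 and 3 qualify
```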

Roadmap

  1. Brute Force Baseline
  2. Core Insight
  3. Algorithm Walkthrough
  4. Edge Cases
  5. Full Annotated Code
  6. Interactive Study Demo
  7. Complexity Analysis
Step 01

Brute Force Baseline

Problem summary: Given a books table (book_id, title, author, genre, pages) and a reading_sessions table (session_id, book_id, reader_name, pages_read, session_rating on a 1-5 scale), find books with at least 5 reading sessions whose ratings include both a rating ≥ 4 and a rating ≤ 2, and where at least 60% of ratings are extreme (≤ 2 or ≥ 4). Report rating_spread (max minus min rating) and polarization_score (extreme count divided by total sessions, rounded to 2 decimals), ordered by score and then title, both descending.

Baseline thinking

The most direct baseline: for each book, re-scan all of its sessions once per required statistic (session count, max rating, min rating, extreme-rating count), via correlated subqueries or a nested loop. That costs O(B · S) work, but it gives a correctness anchor before consolidating everything into one grouped pass.

Pattern signal: join + group-by aggregation with conditional counting.

Example 1

{"headers":{"books":["book_id","title","author","genre","pages"],"reading_sessions":["session_id","book_id","reader_name","pages_read","session_rating"]},"rows":{"books":[[1,"The Great Gatsby","F. Scott","Fiction",180],[2,"To Kill a Mockingbird","Harper Lee","Fiction",281],[3,"1984","George Orwell","Dystopian",328],[4,"Pride and Prejudice","Jane Austen","Romance",432],[5,"The Catcher in the Rye","J.D. Salinger","Fiction",277]],"reading_sessions":[[1,1,"Alice",50,5],[2,1,"Bob",60,1],[3,1,"Carol",40,4],[4,1,"David",30,2],[5,1,"Emma",45,5],[6,2,"Frank",80,4],[7,2,"Grace",70,4],[8,2,"Henry",90,5],[9,2,"Ivy",60,4],[10,2,"Jack",75,4],[11,3,"Kate",100,2],[12,3,"Liam",120,1],[13,3,"Mia",80,2],[14,3,"Noah",90,1],[15,3,"Olivia",110,4],[16,3,"Paul",95,5],[17,4,"Quinn",150,3],[18,4,"Ruby",140,3],[19,5,"Sam",80,1],[20,5,"Tara",70,2]]}}
Step 02

Core Insight

What unlocks the optimal approach

  • Every qualification rule is a per-book aggregate: session count, MAX(rating), MIN(rating), and a conditional count of extreme ratings (≤ 2 or ≥ 4). One join plus one GROUP BY computes all of them in a single pass.
Interview move: translate each sentence of the statement into either an aggregate computed inside the group or a post-grouping (HAVING-style) filter, then check the example tables row by row against your conditions.
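The conditional count is the piece that makes this a one-pass aggregation: booleans sum as 0/1, so "number of extreme ratings" becomes an ordinary aggregate. A minimal sketch, using book 1's ratings from the example:

```python
rs = [5, 1, 4, 2, 5]  # book 1's session ratings from the example

# Booleans add as 0/1, so a conditional count is just a sum.
extreme = sum(r <= 2 or r >= 4 for r in rs)

print(extreme, len(rs), extreme / len(rs))  # 5 5 1.0
```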
Step 03

Algorithm Walkthrough

Iteration Checklist

  1. Join books with reading_sessions on book_id so every session row carries its book's metadata.
  2. Group by book and aggregate: session count, MAX(rating), MIN(rating), and the count of ratings ≤ 2 or ≥ 4.
  3. Derive rating_spread = max_rating - min_rating and polarization_score = extreme_count / session_count, rounded half-up to 2 decimals.
  4. Filter (count ≥ 5, max ≥ 4, min ≤ 2, score ≥ 0.6) and sort by score descending, then title descending.
Use the example tables as your trace: only book_id 1 and book_id 3 survive every filter.
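End to end, the join / group / derive / filter / sort pipeline can be sketched in dependency-free Python (data transcribed from the example; the pandas reference implementation is the authoritative version):

```python
from collections import defaultdict

titles = {1: "The Great Gatsby", 2: "To Kill a Mockingbird",
          3: "1984", 4: "Pride and Prejudice", 5: "The Catcher in the Rye"}
sessions = [(1, 5), (1, 1), (1, 4), (1, 2), (1, 5),
            (2, 4), (2, 4), (2, 5), (2, 4), (2, 4),
            (3, 2), (3, 1), (3, 2), (3, 1), (3, 4), (3, 5),
            (4, 3), (4, 3), (5, 1), (5, 2)]  # (book_id, session_rating)

# Join + group: collect each book's ratings (book_id keys into titles).
by_book = defaultdict(list)
for book_id, rating in sessions:
    by_book[book_id].append(rating)

# Derive spread and score, then filter on all four rules.
rows = []
for book_id, rs in by_book.items():
    extreme = sum(1 for r in rs if r <= 2 or r >= 4)
    score = round(extreme / len(rs), 2)
    if len(rs) >= 5 and max(rs) >= 4 and min(rs) <= 2 and score >= 0.6:
        rows.append((book_id, titles[book_id], max(rs) - min(rs), score))

# Order: score descending, then title descending.
rows.sort(key=lambda t: (t[3], t[1]), reverse=True)
print(rows)  # [(1, 'The Great Gatsby', 4, 1.0), (3, '1984', 4, 1.0)]
```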
Step 04

Edge Cases

Minimum Input
Exactly 5 sessions
The session-count bound is inclusive (≥ 5, not > 5), so a book with exactly 5 sessions can qualify.
Boundary Score
Exactly 60% extreme ratings
3 extreme ratings out of 5 sessions gives a score of exactly 0.60, which qualifies; the score filter is also inclusive.
One-Sided Extremes
All-high or all-low ratings
A book rated only 4s and 5s can have score 1.00 yet must be excluded, because it lacks a rating ≤ 2 (see book_id 2).
Missing Sessions
Books with no reading sessions
An inner join drops them before grouping; they could never meet the 5-session minimum anyway.
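Rounding is its own boundary case: the reference solution quantizes with ROUND_HALF_UP, while Python's built-in round() breaks ties half-to-even, and the two disagree on exact ties. A 5-of-8 ratio (0.625, exactly representable in binary) shows the gap; this is an illustrative sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

ratio = 5 / 8  # e.g. 5 extreme ratings across 8 sessions = exactly 0.625

half_up = float(Decimal(ratio).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))
half_even = round(ratio, 2)  # Python's default tie-breaking

print(half_up, half_even)  # 0.63 0.62
```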
Step 05

Full Annotated Code

The accepted pandas implementation is shown below for direct study and interview prep.

# Accepted solution for LeetCode #3642: Find Books with Polarized Opinions
import pandas as pd
from decimal import Decimal, ROUND_HALF_UP


def find_polarized_books(
    books: pd.DataFrame, reading_sessions: pd.DataFrame
) -> pd.DataFrame:
    # Join session rows with their book metadata, then aggregate per book.
    df = books.merge(reading_sessions, on="book_id")
    agg_df = (
        df.groupby(["book_id", "title", "author", "genre", "pages"])
        .agg(
            max_rating=("session_rating", "max"),
            min_rating=("session_rating", "min"),
            rating_spread=("session_rating", lambda x: x.max() - x.min()),
            count_sessions=("session_rating", "count"),
            low_or_high_count=("session_rating", lambda x: ((x <= 2) | (x >= 4)).sum()),
        )
        .reset_index()
    )

    # Round half-up to 2 decimals; Python's round() would round ties half-to-even.
    agg_df["polarization_score"] = agg_df.apply(
        lambda r: float(
            Decimal(r["low_or_high_count"] / r["count_sessions"]).quantize(
                Decimal("0.01"), rounding=ROUND_HALF_UP
            )
        ),
        axis=1,
    )

    # Apply all four qualification rules.
    result = agg_df[
        (agg_df["count_sessions"] >= 5)
        & (agg_df["max_rating"] >= 4)
        & (agg_df["min_rating"] <= 2)
        & (agg_df["polarization_score"] >= 0.6)
    ]

    # Order by score, then title, both descending, and project the output columns.
    return result.sort_values(
        by=["polarization_score", "title"], ascending=[False, False]
    )[
        [
            "book_id",
            "title",
            "author",
            "genre",
            "pages",
            "rating_spread",
            "polarization_score",
        ]
    ]
Step 06

Interactive Study Demo

Use this to step through a reusable interview workflow for this problem.

Press Step or Run All to begin.
Step 07

Complexity Analysis

Time
O(S + B log B)
Space
O(B)

Approach Breakdown

BRUTE FORCE
O(B · S) time
O(1) space

For each of the B books, re-scan all S sessions once per required statistic (count, max, min, extreme count), as correlated subqueries would. Correct and easy to verify, but every book pays a full pass over the sessions table.

OPTIMIZED
O(S + B log B) time
O(B) space

One join plus one grouped aggregation computes every statistic in a single pass over the S sessions, holding O(B) per-book state; the only superlinear cost is sorting the at most B surviving rows.

Shortcut: when several filters are all aggregates over the same grouping key, compute them together in one GROUP BY instead of one scan per condition.
Coach Notes

Common Mistakes

Review these before coding to avoid predictable interview regressions.

Off-by-one on inclusive thresholds

Wrong move: Writing r > 4 / r < 2 (or score > 0.6) where the statement says ≥ 4, ≤ 2, and ≥ 0.6.

Usually fails on: Books whose extreme ratings sit exactly on a threshold (4 or 2), or whose score is exactly 0.60.

Fix: Re-read each bound ("at least" and "≥" are inclusive) and test a case that lands exactly on every boundary.
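Here the boundary mistake is using strict comparisons; a short sketch shows how it silently misses every threshold-exact rating:

```python
rs = [4, 2, 4, 2, 4]  # every rating sits exactly on a threshold

buggy = sum(1 for r in rs if r < 2 or r > 4)      # strict: finds nothing
correct = sum(1 for r in rs if r <= 2 or r >= 4)  # inclusive, per the statement

print(buggy, correct)  # 0 5
```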