LeetCode #3793 — EASY

Find Users with High Token Usage

Build confidence with an intuition-first walkthrough focused on core interview patterns.

The Problem

Problem Statement

Table: prompts

+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| user_id     | int     |
| prompt      | varchar |
| tokens      | int     |
+-------------+---------+
(user_id, prompt) is the primary key (combination of columns with unique values) for this table.
Each row represents a prompt submitted by a user to an AI system along with the number of tokens consumed.

Write a solution to analyze AI prompt usage patterns based on the following requirements:

  • For each user, calculate the total number of prompts they have submitted.
  • For each user, calculate the average tokens used per prompt (rounded to 2 decimal places).
  • Only include users who have submitted at least 3 prompts.
  • Only include users who have submitted at least one prompt with tokens greater than their own average token usage.

Return the result table ordered by average tokens in descending order, and then by user_id in ascending order.

The result format is in the following example.

Example:

Input:

prompts table:

+---------+--------------------------+--------+
| user_id | prompt                   | tokens |
+---------+--------------------------+--------+
| 1       | Write a blog outline     | 120    |
| 1       | Generate SQL query       | 80     |
| 1       | Summarize an article     | 200    |
| 2       | Create resume bullet     | 60     |
| 2       | Improve LinkedIn bio     | 70     |
| 3       | Explain neural networks  | 300    |
| 3       | Generate interview Q&A   | 250    |
| 3       | Write cover letter       | 180    |
| 3       | Optimize Python code     | 220    |
+---------+--------------------------+--------+

Output:

+---------+---------------+------------+
| user_id | prompt_count  | avg_tokens |
+---------+---------------+------------+
| 3       | 4             | 237.5      |
| 1       | 3             | 133.33     |
+---------+---------------+------------+

Explanation:

  • User 1:
    • Total prompts = 3
    • Average tokens = (120 + 80 + 200) / 3 = 133.33
    • Has a prompt with 200 tokens, which is greater than the average
    • Included in the result
  • User 2:
    • Total prompts = 2 (less than the required minimum)
    • Excluded from the result
  • User 3:
    • Total prompts = 4
    • Average tokens = (300 + 250 + 180 + 220) / 4 = 237.5
    • Has prompts with 300 and 250 tokens, both greater than the average
    • Included in the result

The result table is ordered by avg_tokens in descending order, then by user_id in ascending order.
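
To double-check the arithmetic above, a couple of lines of plain Python reproduce both averages (a sanity check only, not part of the solution):

user_1_tokens = [120, 80, 200]
user_3_tokens = [300, 250, 180, 220]
print(round(sum(user_1_tokens) / len(user_1_tokens), 2))  # 133.33
print(round(sum(user_3_tokens) / len(user_3_tokens), 2))  # 237.5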

Roadmap

  1. Brute Force Baseline
  2. Core Insight
  3. Algorithm Walkthrough
  4. Edge Cases
  5. Full Annotated Code
  6. Interactive Study Demo
  7. Complexity Analysis
Step 01

Brute Force Baseline

Problem summary: For each user in the prompts table, report the number of prompts submitted and the average tokens per prompt rounded to 2 decimal places. Keep only users with at least 3 prompts and at least one prompt whose tokens exceed their own average, and order the result by average tokens descending, then user_id ascending.

Baseline thinking

Start with the most direct exhaustive approach: for every distinct user, rescan the entire table to count their prompts, total their tokens, and find their largest prompt. That gives a correctness anchor before optimizing; a plain-Python sketch of this baseline follows the example test case below.

Pattern signal: grouped aggregation with post-aggregation filtering (GROUP BY / HAVING in SQL, groupby plus a boolean mask in pandas).

Example 1

{"headers":{"prompts":["user_id","prompt","tokens"]},"rows":{"prompts":[[1,"Write a blog outline",120],[1,"Generate SQL query",80],[1,"Summarize an article",200],[2,"Create resume bullet",60],[2,"Improve LinkedIn bio",70],[3,"Explain neural networks",300],[3,"Generate interview Q&A",250],[3,"Write cover letter",180],[3,"Optimize Python code",220]]}}
Step 02

Core Insight

What unlocks the optimal approach

  • Every filter in the problem reduces to a per-user aggregate that one grouping pass can produce: the prompt count, the average tokens, and the maximum tokens. In particular, "at least one prompt with tokens greater than their own average" holds exactly when max(tokens) > avg(tokens) for that user.
Interview move: translate each requirement into a group-level aggregate so the whole table is scanned only once.
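
A minimal pandas sketch of that insight, assuming the prompts table is already loaded as a DataFrame (the toy data here is trimmed for brevity):

import pandas as pd

prompts = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "prompt":  ["a", "b", "c", "d", "e"],
    "tokens":  [120, 80, 200, 60, 70],
})

# One grouping pass yields every aggregate the filters need.
per_user = prompts.groupby("user_id")["tokens"].agg(
    prompt_count="size", avg_tokens="mean", max_tokens="max"
)

# "At least one prompt above the user's own average" holds exactly when the
# group's maximum exceeds the group's mean.
qualified = per_user[(per_user["prompt_count"] >= 3) &
                     (per_user["max_tokens"] > per_user["avg_tokens"])]
print(qualified)  # only user 1 qualifies in this trimmed data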
Step 03

Algorithm Walkthrough

Iteration Checklist

  1. Group rows by user_id; the working state is a per-user accumulator (a hash map, or a GROUP BY).
  2. For each row, update that user's prompt count, running token sum, and maximum tokens.
  3. After the pass, keep users with prompt_count >= 3 whose maximum is strictly greater than their average.
  4. Round the average to 2 decimal places, then sort by avg_tokens descending and user_id ascending.
Use the first example test case as your mental trace; the sketch below steps through it row by row.
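
A trace of that checklist over the example rows, written as a plain-Python accumulator (the state layout and names are illustrative):

from collections import defaultdict

# (user_id, tokens) pairs from the example; the prompt text is not needed here.
rows = [(1, 120), (1, 80), (1, 200), (2, 60), (2, 70),
        (3, 300), (3, 250), (3, 180), (3, 220)]

state = defaultdict(lambda: {"count": 0, "total": 0, "peak": 0})
for user, tokens in rows:
    s = state[user]
    s["count"] += 1                      # one more prompt for this user
    s["total"] += tokens                 # running sum feeds the average
    s["peak"] = max(s["peak"], tokens)   # invariant: stats cover all rows seen so far

answers = [(u, s["count"], round(s["total"] / s["count"], 2))
           for u, s in state.items()
           if s["count"] >= 3 and s["peak"] > s["total"] / s["count"]]
answers.sort(key=lambda r: (-r[2], r[0]))
print(answers)  # [(3, 4, 237.5), (1, 3, 133.33)]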
Step 04

Edge Cases

Minimum Qualifying Count
A user with exactly 3 prompts
The threshold is inclusive ("at least 3"), so a count of exactly 3 must pass the filter.
Uniform Token Usage
All of a user's prompts have the same token count
Then the maximum equals the average, and the strictly-greater condition excludes the user (see the check sketched below).
Extreme Constraints
Upper-end input sizes
Grouping stays linear in the number of rows; only the final sort of the surviving per-user rows adds a log factor.
Ties in the Ordering
Two qualifying users with the same rounded average
Break the tie by user_id in ascending order, as the problem requires.
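
A quick check of the uniform-token edge case (the data here is invented for illustration): when every prompt uses the same token count, the maximum equals the average, so the user is excluded even with enough prompts.

tokens = [150, 150, 150]               # hypothetical user with uniform usage
avg, peak = sum(tokens) / len(tokens), max(tokens)
print(len(tokens) >= 3)                # True: passes the prompt-count filter
print(peak > avg)                      # False: max == average, so the user is excluded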
Step 05

Full Annotated Code

The source-backed pandas implementation is provided below for direct study and interview prep.

# Accepted solution for LeetCode #3793: Find Users with High Token Usage
import pandas as pd


def find_users_with_high_tokens(prompts: pd.DataFrame) -> pd.DataFrame:
    # One grouping pass collects every aggregate the filters need.
    df = prompts.groupby("user_id", as_index=False).agg(
        prompt_count=("user_id", "size"),
        avg_tokens=("tokens", "mean"),
        max_tokens=("tokens", "max"),
    )

    # Report the average rounded to 2 decimal places.
    df["avg_tokens"] = df["avg_tokens"].round(2)

    # Keep users with at least 3 prompts and a max above their (rounded) average.
    df = df[(df["prompt_count"] >= 3) & (df["max_tokens"] > df["avg_tokens"])]

    # Order by average tokens descending, then user_id ascending, and drop the helper column.
    df = (
        df.sort_values(["avg_tokens", "user_id"], ascending=[False, True])
        .loc[:, ["user_id", "prompt_count", "avg_tokens"]]
        .reset_index(drop=True)
    )

    return df
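
With the function above in scope, a small driver (illustrative only, not part of the accepted submission) rebuilds the example table and checks the output:

example = pd.DataFrame(
    [(1, "Write a blog outline", 120), (1, "Generate SQL query", 80),
     (1, "Summarize an article", 200), (2, "Create resume bullet", 60),
     (2, "Improve LinkedIn bio", 70), (3, "Explain neural networks", 300),
     (3, "Generate interview Q&A", 250), (3, "Write cover letter", 180),
     (3, "Optimize Python code", 220)],
    columns=["user_id", "prompt", "tokens"],
)

print(find_users_with_high_tokens(example))
# Expected: user 3 first (4 prompts, avg 237.5), then user 1 (3 prompts, avg 133.33).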
Step 06

Interactive Study Demo

Use this to step through a reusable interview workflow for this problem.

Step 07

Complexity Analysis

Time
O(n + u log u): grouping is linear in the n rows; sorting the u surviving per-user results adds the log factor.
Space
O(u) for the per-user aggregates (count, sum/mean, max).

Approach Breakdown

BRUTE FORCE
O(n · u) time
O(1) extra space

For each distinct user, rescan the entire table to count prompts, sum tokens, and track the maximum. With n rows and u users this costs up to n · u row visits (n² in the worst case), repeating work that a single grouped pass can share.

OPTIMIZED
O(n + u log u) time
O(u) space

One pass with a per-user accumulator (a hash map keyed by user_id, an SQL GROUP BY, or a pandas groupby) collects count, sum, and max together. Filtering the groups is O(u), and only ordering the surviving rows adds the log factor.

Shortcut: If you find yourself rescanning the table once per user, there is almost always a single grouped pass. Ask what per-group state (count, sum, max) answers every filter at once.
Coach Notes

Common Mistakes

Review these before coding to avoid predictable interview regressions.

Off-by-one on the qualifying thresholds

Wrong move: Filtering with prompt_count > 3 instead of >= 3, or comparing prompts against a global average instead of each user's own average.

Usually fails on: Users with exactly 3 prompts (like user 1 in the example) and datasets where one heavy user skews the overall average.

Fix: Re-derive each filter from the wording before coding: "at least 3" is inclusive, and the comparison is strictly greater than that user's average.