Off-by-one on range boundaries
Wrong move: Loop endpoints miss the first or last candidate.
Usually fails on: minimal arrays and exact-boundary answers.
Fix: Re-derive loops from inclusive/exclusive ranges before coding.
Move from brute-force thinking to an efficient approach using core interview patterns.
Table: reactions
+--------------+---------+
| Column Name  | Type    |
+--------------+---------+
| user_id      | int     |
| content_id   | int     |
| reaction     | varchar |
+--------------+---------+
(user_id, content_id) is the primary key (unique value) for this table.
Each row represents a reaction given by a user to a piece of content.
Write a solution to identify emotionally consistent users based on the following requirements:
For each user, count the total number of reactions they have given. Only include users who have reacted to at least 5 different content items. A user is considered emotionally consistent if at least 60% of their reactions are of the same type. Return the result table ordered by reaction_ratio in descending order and then by user_id in ascending order.
Note:
reaction_ratio should be rounded to 2 decimal places.
The result format is in the following example.
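The consistency rule is a short computation over per-user counts. A minimal sketch with collections.Counter, using user 1's reactions from the example below (the variable names are illustrative, not part of the judge's harness):

```python
from collections import Counter

# User 1's five reactions from the example input.
reactions = ["like", "like", "like", "wow", "like"]

counts = Counter(reactions)
dominant, mx = counts.most_common(1)[0]  # most frequent reaction and its count
ratio = round(mx / len(reactions), 2)

# User 1 reacted to 5 content items and 4/5 = 0.80 >= 0.60 of the
# reactions are "like", so the user is emotionally consistent.
print(dominant, ratio)  # like 0.8
```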
Example:
Input:
reactions table:
+---------+------------+----------+
| user_id | content_id | reaction |
+---------+------------+----------+
| 1       | 101        | like     |
| 1       | 102        | like     |
| 1       | 103        | like     |
| 1       | 104        | wow      |
| 1       | 105        | like     |
| 2       | 201        | like     |
| 2       | 202        | wow      |
| 2       | 203        | sad      |
| 2       | 204        | like     |
| 2       | 205        | wow      |
| 3       | 301        | love     |
| 3       | 302        | love     |
| 3       | 303        | love     |
| 3       | 304        | love     |
| 3       | 305        | love     |
+---------+------------+----------+
Output:
+---------+-------------------+----------------+
| user_id | dominant_reaction | reaction_ratio |
+---------+-------------------+----------------+
| 3       | love              | 1.00           |
| 1       | like              | 0.80           |
+---------+-------------------+----------------+
Explanation:
User 1 gave 5 reactions, 4 of them "like" (4/5 = 0.80 >= 0.60), so they qualify. User 2's most frequent reaction appears only 2 times out of 5 (0.40 < 0.60), so they are excluded. User 3 gave the same reaction all 5 times (5/5 = 1.00). The result table is ordered by reaction_ratio in descending order, then by user_id in ascending order.
Start with the most direct exhaustive search. That gives a correctness anchor before optimizing.
Pattern signal: General problem-solving
{"headers":{"reactions":["user_id","content_id","reaction"]},"rows":{"reactions":[[1,101,"like"],[1,102,"like"],[1,103,"like"],[1,104,"wow"],[1,105,"like"],[2,201,"like"],[2,202,"wow"],[2,203,"sad"],[2,204,"like"],[2,205,"wow"],[3,301,"love"],[3,302,"love"],[3,303,"love"],[3,304,"love"],[3,305,"love"]]}}
Source-backed implementations are provided below for direct study and interview prep.
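The JSON fixture above can be loaded into the reactions DataFrame the solution expects. This sketch assumes the {"headers": ..., "rows": ...} shape shown and, for brevity, keeps only user 1's rows:

```python
import json

import pandas as pd

# Test fixture in the {"headers": ..., "rows": ...} shape shown above,
# truncated here to user 1's rows for brevity.
fixture = json.loads(
    '{"headers":{"reactions":["user_id","content_id","reaction"]},'
    '"rows":{"reactions":[[1,101,"like"],[1,102,"like"],[1,103,"like"],'
    '[1,104,"wow"],[1,105,"like"]]}}'
)

# Each table's rows pair with its header list to form a DataFrame.
reactions = pd.DataFrame(
    fixture["rows"]["reactions"], columns=fixture["headers"]["reactions"]
)
print(reactions.shape)  # (5, 3)
```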
# Accepted solution for LeetCode #3808: Find Emotionally Consistent Users
import pandas as pd
from decimal import Decimal, ROUND_HALF_UP


def find_emotionally_consistent_users(reactions: pd.DataFrame) -> pd.DataFrame:
    # Count how many times each user gave each reaction type.
    t = reactions.groupby(["user_id", "reaction"]).size().reset_index(name="cnt")

    # Per user: size of the largest reaction group and total reaction count.
    # Since (user_id, content_id) is the primary key, the total reaction
    # count equals the number of distinct content items the user reacted to.
    s = (
        t.groupby("user_id")
        .agg(mx_cnt=("cnt", "max"), total_cnt=("cnt", "sum"))
        .reset_index()
    )

    # Round half-up via Decimal; Python's built-in round() uses banker's
    # rounding, which would turn 0.625 into 0.62 rather than 0.63.
    s["reaction_ratio"] = (
        s["mx_cnt"]
        .div(s["total_cnt"])
        .apply(
            lambda x: float(
                Decimal(str(x)).quantize(Decimal("0.00"), rounding=ROUND_HALF_UP)
            )
        )
    )

    # Keep users with at least 5 reactions whose dominant reaction covers
    # at least 60% of them.
    s = s[(s["reaction_ratio"] >= 0.60) & (s["total_cnt"] >= 5)]

    # Join back on (user_id, max count) to recover the dominant reaction type.
    merged = pd.merge(
        s[["user_id", "mx_cnt", "reaction_ratio"]],
        t,
        left_on=["user_id", "mx_cnt"],
        right_on=["user_id", "cnt"],
    )

    result = (
        merged[["user_id", "reaction", "reaction_ratio"]]
        .rename(columns={"reaction": "dominant_reaction"})
        .sort_values(by=["reaction_ratio", "user_id"], ascending=[False, True])
        .reset_index(drop=True)
    )
    return result
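As an end-to-end sanity check, the same pipeline can be condensed and run on the example input. Series.round stands in here for the Decimal half-up rounding above; the two only differ on exact thousandths such as 0.625:

```python
import pandas as pd

# The example input from the problem statement.
rows = [
    (1, 101, "like"), (1, 102, "like"), (1, 103, "like"), (1, 104, "wow"), (1, 105, "like"),
    (2, 201, "like"), (2, 202, "wow"), (2, 203, "sad"), (2, 204, "like"), (2, 205, "wow"),
    (3, 301, "love"), (3, 302, "love"), (3, 303, "love"), (3, 304, "love"), (3, 305, "love"),
]
reactions = pd.DataFrame(rows, columns=["user_id", "content_id", "reaction"])

# Same logic as the solution above, condensed into one pass of groupbys.
t = reactions.groupby(["user_id", "reaction"]).size().reset_index(name="cnt")
s = t.groupby("user_id")["cnt"].agg(mx_cnt="max", total_cnt="sum").reset_index()
s["reaction_ratio"] = (s["mx_cnt"] / s["total_cnt"]).round(2)
s = s[(s["reaction_ratio"] >= 0.60) & (s["total_cnt"] >= 5)]
out = (
    s.merge(t, left_on=["user_id", "mx_cnt"], right_on=["user_id", "cnt"])
    .rename(columns={"reaction": "dominant_reaction"})
    [["user_id", "dominant_reaction", "reaction_ratio"]]
    .sort_values(["reaction_ratio", "user_id"], ascending=[False, True])
    .reset_index(drop=True)
)
print(out)
# User 3 ("love", 1.00) sorts first, then user 1 ("like", 0.80);
# user 2's top reaction is only 2/5 = 0.40, below the 60% threshold.
```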
Use this to step through a reusable interview workflow for this problem.
Two nested loops check every pair or subarray. The outer loop fixes a starting point, the inner loop extends or searches. For n elements this gives up to n²/2 operations. No extra space, but the quadratic time is prohibitive for large inputs.
Most array problems have an O(n²) brute force (nested loops) and an O(n) optimal (single pass with clever state tracking). The key is identifying what information to maintain as you scan: a running max, a prefix sum, a hash map of seen values, or two pointers.
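The O(n²)-to-O(n) shift described above can be illustrated with two-sum, a standalone example unrelated to this problem: the brute force checks every pair, while the single pass maintains a hash map of values already seen.

```python
def two_sum_brute(nums, target):
    # O(n^2): the outer loop fixes a start, the inner loop searches the rest.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []


def two_sum_scan(nums, target):
    # O(n): one pass, tracking value -> index for elements seen so far.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []


print(two_sum_brute([2, 7, 11, 15], 9))  # [0, 1]
print(two_sum_scan([2, 7, 11, 15], 9))   # [0, 1]
```

Both return the same answer; the scan trades O(n) extra space for the quadratic inner loop.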
Review the pitfalls listed at the top of this guide before coding to avoid predictable interview regressions.