Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns

algodrill.io

153 points by henwfan 18 hours ago

I built AlgoDrill because I kept grinding LeetCode, thinking I knew the pattern, and then completely blanking when I had to implement it from scratch a few weeks later.

AlgoDrill turns NeetCode 150 and more into pattern-based drills: you rebuild the solution line by line with active recall, get first principles editorials that explain why each step exists, and everything is tagged by patterns like sliding window, two pointers, and DP so you can hammer the ones you keep forgetting. The goal is simple: turn familiar patterns into code you can write quickly and confidently in a real interview.
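To make "pattern" concrete for anyone who hasn't seen these, here is a generic sketch of the classic sliding window solution to longest substring without repeating characters (my own illustration of the pattern, not AlgoDrill's editorial code):

```python
def longest_unique_substring(s: str) -> int:
    # Classic sliding window: extend the right edge one character at a time,
    # jump the left edge forward whenever a character repeats inside the
    # current window, and track the best window length seen so far.
    last_seen = {}          # character -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1    # shrink past the duplicate
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

# longest_unique_substring("abcabcbb") -> 3 (the window "abc")
```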

https://algodrill.io

Would love feedback on whether this drill-style approach feels like a real upgrade over just solving problems once, and what’s most confusing or missing when you first land on the site.

firsttracks 13 hours ago

Some feedback: The drill-style approach seems helpful, but needing the variable names to match exactly threw me off. It would be great if we could _relax_ this constraint via a toggle in drill mode. "Precision Mode" feels misnamed; when it's toggled on, it feels more like a "guided mode", since chunks of boilerplate are written for you. It would also be great if exiting Drill mode remembered my choices, such as which portions were selected.

I ended up buying a subscription, but it looks like the site still says "82% claimed" and "17 spots left". I appreciate the one-time purchase model, but that feels like a bit of a shady tactic.

michaelmior 14 hours ago

What threw me off is the expectation that I use the same variable names and exact same code structure. There are many ways to implement effectively the same thing. I understand that it would be very challenging to implement a way to validate solutions in this way, but memorizing exact fragments of code feels like it's optimizing for the wrong thing.

  • VBprogrammer 14 hours ago

    Some might consider that a kind of commentary on the leet code interview format.

    • marssaxman 13 hours ago

      After hearing people complain about these fearsome "leetcode interviews" for what feels like a decade now, I have to wonder when I am finally going to encounter one. All I get are normal coding problems.

      • VBprogrammer 12 hours ago

        One man's leet code is another man's simple programming question which involves minimal domain knowledge...

        I've had candidates describe what I'd loosely call "warm-up" questions as leet code problems. Things like finding the largest integer in an array or figuring out if a word is a palindrome.

        • cloverich 10 hours ago

          When people say leet code they usually mean problems that are easy once you know the algorithm, and hard to impossible (in an interview) otherwise.

          Typical examples would be sorting algorithms or graph search problems, and some companies do indeed ask these; some big tech companies (the ones everyone studies for) may ask these exclusively. That's, imo, largely because CS new grads are their primary pipeline.

  • henwfan 12 hours ago

    Thanks for taking the time to try it and write this up.

    You are right that the current check still leans too much toward my reference solution. It already ignores formatting and whitespace, but it is still quite literal about structure and identifiers, which nudges you toward writing my version instead of your own. There are many valid ways to express the same idea and I do not want to lock people into only mine.

    Where I want to take it is two clear modes. One mode tracks the editorial solution for people who want to learn that exact version for an interview, while still allowing harmless changes like different variable names and small structural tweaks. Another mode is more flexible and is meant to accept your own code as long as it is doing the same job. Over time the checker should be able to recognise your solution and adapt its objectives and feedback to what you actually wrote, instead of pushing you into my template. It should care more about whether you applied the right logic under time pressure than whether you matched my phrasing.
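    A structure-aware check along those lines can be sketched with Python's ast module: alpha-rename every identifier in first-seen order, then compare the normalized trees, so solutions that differ only in variable names compare equal. (A rough illustration of the idea under stated simplifications, not AlgoDrill's actual checker.)

```python
import ast

def normalized(src: str) -> str:
    """Dump the AST with identifiers alpha-renamed in first-seen order,
    so two snippets that differ only in names produce the same dump.
    (Simplification: builtins like enumerate get renamed too, so this
    checks structural equivalence only, not full semantic equivalence.)"""
    tree = ast.parse(src)
    mapping: dict[str, str] = {}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node: ast.Name) -> ast.Name:
            node.id = mapping.setdefault(node.id, f"v{len(mapping)}")
            return node

    Renamer().visit(tree)
    return ast.dump(tree)

mine   = "seen = {}\nfor i, x in enumerate(nums):\n    seen[x] = i"
theirs = "prev_map = {}\nfor idx, val in enumerate(nums):\n    prev_map[val] = idx"
# normalized(mine) == normalized(theirs) -> True: same logic, different names
```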

    There is also a small escape hatch already in the UI. If you completely blank or realise you have missed something, you can press the Stuck button to reveal the reference line and a short explanation, so you still move forward instead of getting blocked by one detail.

    You are pushing exactly on the area I plan to invest in most. The first version is intentionally literal so the feedback is never vague, but the goal is for the checker to become more adaptive over time rather than rigid, so it can meet people where they are instead of forcing everyone through one exact solution.

  • losteric 12 hours ago

    This by itself completely un-sold me. Requiring such rote memorization is a hard pass for me; it seems the user should just be able to self-assess whether they got it “right” (like Anki cards).

epolanski 17 hours ago

I like the idea, and you've got yourself a customer :)

The lifetime membership + launch discount was good marketing bait that I fell for.

Not really understanding the negativity here. We know for a fact that most of the people that master intellectual problems do so via pattern recognition, not by reasoning.

You show a chess master a position, he/she can instantly tell you what the best moves are without "thinking" or "calculating" because it's mostly pattern recognition.

Maths and algorithms fall into the same category. When approaching new problems, masters don't really start processing the information and reasoning about it from scratch; instead they use pattern recognition to find very similar problems they already know.

The thing I really don't like is the lack of TypeScript or at least JavaScript, which are the most common languages out there. I really don't enjoy nor use Java/Python/C++.

  • embedding-shape 16 hours ago

    > We know for a fact that most of the people that master intellectual problems do so via pattern recognition, not by reasoning.

    Where is this fact stated, and who are "we" here? Sounds like an opinion or guess at best.

    > Not really understanding the negativity here

    There are two comments that could be read negatively; the rest are neutral or positive. I don't really understand the constant need for people to bring up what (they think) the rest of the comments said. Post your piece adding positivity if you want, but most of the time comments end up a fair mix, so any snippet like that turns outdated within a few hours.

    • epolanski 16 hours ago

      There are lots of psychological and anthropological studies behind the claim that most experts in various fields excel due to pattern recognition, not reasoning.

      Going back to the chess example, while chess masters are incredible at analyzing complex positions they can recognize as "similar to", their advantage over normal human beings is very small when positions are completely randomized.

      "Peak: Secrets from the New Science of Expertise" by Ericsson goes into more depth on this, and there's plenty of other literature on the topic.

      • hansmayer 14 hours ago

        > There's lots of psychological and anthropological studies behind the fact that most experts in various fields excel due to pattern recognition not reasoning.

        Pattern recognition in experts comes from a combination of theoretical understanding and a lot of practical problem-solving experience (which translates into patterns forming as neural pathways), not the other way around. If you don't understand the problem you are solving, then yes, maybe you'll be able to throw a pattern at it and, with a bit of luck, solve it (kinda like how LLMs operate), but this will not lead to understanding. Memorising patterns in isolation from their theoretical background is not something that will create an expert in a field.

      • pcthrowaway 15 hours ago

        > their advantage over normal human beings is very small when positions are completely randomized.

        The book you referenced does not say they're comparable to normal players at playing from a random position.

        Normal players are almost as good as them at recalling a nonsensical board of random pieces.

        The suggestion that the advantage of a chess master over a normal player is "very small" at playing from a random position is laughable.

        • epolanski 15 hours ago

          I obviously meant it as a delta over the recognizable lines.

          • pegasus 14 hours ago

            That wasn't obvious at all. I interpreted it as chess masters lacking an advantage when playing from a randomized position, which would be consistent with the claim you were supporting, as opposed to recall, which is neither here nor there.

  • inesranzo 16 hours ago

    > Not really understanding the negativity here. We know for a fact that most of the people that master intellectual problems do so via pattern recognition, not by reasoning.

    > The lifetime membership + launch discount was a good marketing bait I felt for.

    The negativity here with me is because it feels like clickbait and like a scammy ad to manipulate me into purchasing.

    It is almost lying. I find it unethical, and I don't think there really are 17 lifetime access spots; it's just artificial hype that doesn't make sense to me.

    Marketing (at least like this) is basically lying.

    • epolanski 15 hours ago

      I agree fully, which is why I called it a (good) marketing bait. Worked on me.

      Might be because I'm also considering finding new clients/jobs, and apparently even for 2-3 month collaborations people are putting me through several rounds of algo questions, so it was a nice add-on to my leetcode and codewars drills.

  • henwfan 16 hours ago

    Thank you, I really appreciate you signing up.

    I agree with you on pattern recognition. AlgoDrill is built around taking patterns people already understand and turning them into something their hands can write quickly under pressure. You rebuild the solution line by line with active recall, small objectives, and first principles explanations after each step, so it is more than just memorizing code.

    You are also right about the language gap. Right now the drills are Python first, but I am already working on full support for JavaScript, Java, and C++ across all problems, and I will have all of those in by the end of this year. I want people to be able to practice in the language they actually use every day, so your comment helps a lot.

    • johnhamlin 15 hours ago

      Another +1 for TypeScript from a new lifetime subscriber. Great site!

  • baq 16 hours ago

    I don't know if I feel any negativity, but this is the first time I actually thought 'the price of the subscription is approximately equal to the price of the Opus tokens needed to build a custom version of this for myself'... and got a bit scared TBH

    • Mars008 15 hours ago

      > approximately equal the price of Opus tokens needed to build

      this is probably not accidental.

  • andoando 8 hours ago

    Agree with your overall message, but I don't think that's true for chess. Chess players wouldn't be spending an hour on a single move in a match where they've already been studying the board for hours if it were that simple.

  • paddleon 15 hours ago

    > Not really understanding the negativity here.

    In the last year or so HN seems to have attracted a lot of people (plus some bots) who seem to have been socialized on Reddit.

    I don't know if these people are ignorant of what a good discussion forum can be (because they've never experienced one) or just don't care, but I do wish we could see more reflection on the second-order impacts of posting, and a move away from the reflexive negativity that mimics the outer face of good criticism while totally missing the thought and expertise good criticism requires.

    • kilroy123 14 hours ago

      I've been around here for over a decade. I'm telling you, this has been happening for longer than a year. I'd say the last ~4 years.

monooso 14 hours ago

I understand the pragmatic reasons behind such a decision, but insisting that I sign up with Google (and only Google) was an unfortunate blocker.

If anything, GitHub seems like a more obvious choice for such a site.

  • henwfan 11 hours ago

    That is fair. I went with Google first because it let me ship the first version quickly, but for a tool aimed at developers GitHub and simple email sign in make much more sense.

    I am working on both and plan to let people move their account once they are live if they would prefer not to use Google here.

Fire-Dragon-DoL an hour ago

I got hooked without realizing that I'm not super familiar with any of the languages. JS would help, but I was hoping to use Go (or Ruby at least).

Thank you either way, I purchased a license.

wodenokoto 16 hours ago

Is it correctly understood that this is Anki for a subset of leetcode problems with study notes?

A bit more info on what NeetCode is, why I should focus on those 150 problems, and how the drilling actually works would be helpful. Do I get asked to do the same problems on repeat? Is it the same problems reformulated over and over? Is there actually any spaced repetition, or am I projecting?

  • henwfan 16 hours ago

    That is a good first approximation, but it is a bit more guided than a plain Anki deck. For each problem there is a structured study page and an interactive practice mode.

    NeetCode 150 is a popular curated list of LeetCode problems that covers the core interview patterns people expect nowadays, like sliding window, two pointers, trees, graphs, and dynamic programming. I used that set as the base so you are not guessing which problems to focus on, and more problems and patterns are being added on top of that core set regularly.

    On the study side, each problem has a consistent structure with the core idea, why that pattern applies, and a first principles walkthrough of the solution. On the practice side, the solution is broken into small steps. Each step has a clear objective in plain language, and you rebuild the code line by line by filling in the missing pieces. After you answer, you see a short first principles explanation tied to the line you just wrote, so you are actively recalling the logic instead of just reading notes.

    You can repeat problems and patterns as much as you want, mark problems as solved or unsolved, and filter by pattern so you can focus on the ones you struggle with most. There is not a full automatic review schedule yet. For now you choose what to review, and the goal is to use that progress data to track weak patterns, guide what you should drill next, and add more types of focused drills over time.

embedding-shape 17 hours ago

I learned the other day (https://news.ycombinator.com/item?id=46184676) that people who aren't students apparently use LeetCode too, for recreational purposes? I'm not sure why you'd work on someone else's imaginary problem instead of doing something for yourself, but apparently it's a thing and some people enjoy it, regardless of my understanding of it.

But then I don't know how to reconcile the idea that some people use LeetCode to pass interviews and some use it recreationally, while this app seems to indicate some people use LeetCode to learn patterns to apply in the real world, which seems absolutely backwards to me. These are tiny examples, not "real programming" like you'd encounter in the world outside of computers. LeetCode can't possibly teach you how to create useful programs; it only teaches you syntax and specific problems.

So I guess take this as a word of caution: no matter how much you grind LeetCode, nothing will prepare you to solve real-world problems as well as practicing solving real-world problems, and you don't need any platforms for that. Just try to make your daily life better and you'll get better at it over time and with the experience of making mistakes.

  • baq 17 hours ago

    > imaginary problem instead of doing something for yourself

    they're doing it for themselves just like when they solve sudokus, crosswords or play fortnite

  • another_twist 15 hours ago

    I do codeforces in my spare time. Sometimes I implement an ML paper. Other times, I like to slog through my own implementation of Raft, Paxos and VR. Not everybody wants to build generic crud app number 1,200,674. Coding is for solving problems; the problems might be engineering or just pure fun.

  • mylifeandtimes 16 hours ago

    some people like to play with Rubik's Cubes, which among other things is a nice tactile way to learn some interesting advanced math

  • Vaslo 14 hours ago

    Seeing how other people solve problems opens up new ways for me to solve my own. Many people are not RTFM but instead want applied examples.

    • embedding-shape 13 hours ago

      > Many people are not RTFM but instead want applied examples.

      Yeah, this is me very much to the core of my bones, and I think that's why I don't find any pleasure or enjoyment in these synthetic coding challenges, and struggle to understand those that do.

999900000999 13 hours ago

This might be the answer for me: you're breaking down all these questions into actual smaller steps and having the user write those out instead.

I dislike limited offers, because I think you're placing a bit of unfair pressure on the user to buy. But I went ahead and gave you 30 bucks.

I'm going to study this before my next interview, thank you

emaro 12 hours ago

I feel like this is a bit backwards. It seems to be an improvement over just grinding LeetCode, but I'd never work for a company expecting me to spit out LeetCode solutions quickly (recall). If they give me a LeetCode-style problem and want to see how I approach it, what I know, and how I deal with what I don't, then it's fine. But I think neither LeetCode nor AlgoDrill is needed for that.

Or to put it another way: if I give an applicant a coding problem to solve and they just write down the solution, I haven't learned much about them except that they memorized the solution to my problem. That most likely means I gave them the wrong (too easy) problem. It will only increase the chance of me hiring them by a tiny bit.

Edit: I don't hate the player, I hate the game.

  • notepad0x90 12 hours ago

    This type of stuff is generally for interviews. But it does tell you that the candidate has learned the patterns in question. The particular solution isn't important, but knowing good design patterns for solutions is. Knowing how a decent number of problems are best solved gives them a good intuition for how to tackle new ones. Otherwise, they would tackle them on intuition/vibes. There are books one can read to learn this stuff as well, I'm sure, but how do you prove what knowledge you've retained?

    Ten programmers will solve a simple problem in ten different ways, and that code is tech debt other programmers have to maintain at some point. Having coders with the same base level of memorized problem-solving patterns can ease that pain, and it can make collaboration/reviews easier down the road.

pxtail 16 hours ago

Nice, you have identified the shovel very well.

hinicholas 14 hours ago

I like it. I subscribed. The check is definitely rough around the edges though. Memorizing the exact variable names is tough. I think the objectives should maybe give you the variable names it expects at least.

francoispiquard 17 hours ago

Seems like a good idea. Is it the same kind of concept as the woodpecker method in chess?

  • henwfan 17 hours ago

    Nice comparison. It is pretty similar in spirit to the woodpecker method.

    In chess you repeat the same positions until the patterns feel automatic. Here it is LeetCode problems. You keep seeing the same core patterns and rebuild the solution step by step. For each step and line there is a small objective first, and then a short first principles explanation after you answer, so you are not just memorizing code but training pattern recognition and understanding at the same time.

AidenVennis 16 hours ago

The website is missing information on which languages it supports. I was hoping for Typescript, but after registering I see that it's only Python at the moment and it seems Java and C++ are coming soon...

nialv7 5 hours ago

Actually curious, how often do you find uses for LeetCode patterns in your actual work?

  • JoeOfTexas 3 hours ago

    In web development, you mostly deal with data, sometimes you need to group that data, and some of these algos can help with that.

    Most useful when you work with large datasets, if you can reduce a workload that takes hours into minutes or less, congrats, otherwise, you are forced to wait the hours. Either way, job security.
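    As a tiny, hypothetical example of the kind of win being described: grouping rows by key with a hash map takes one pass over the data, versus re-scanning the whole dataset for every distinct key:

```python
from collections import defaultdict

def group_by_key(records):
    """One-pass O(n) grouping with a hash map. The naive alternative of
    scanning all records once per distinct key is O(n * k)."""
    groups = defaultdict(list)
    for key, value in records:
        groups[key].append(value)
    return dict(groups)

rows = [("us", 1), ("eu", 2), ("us", 3)]
# group_by_key(rows) -> {"us": [1, 3], "eu": [2]}
```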

apt-apt-apt-apt 12 hours ago

Lol I saw this being spammed in the comments on every reddit thread when looking for interview prep

quibono 7 hours ago

I want to like this. But... one has to write the answer in EXACTLY the same format, down to each variable name it seems?

skydan 12 hours ago

Why is text selection disabled in study mode? Is this an intentional design choice?

bochoh 14 hours ago

Solid platform - clean and useful for algorithm practice.

Quick suggestions:

  - GitHub OAuth would feel natural for devs.
  - Broaden language support (C#, TypeScript, Ruby).
  - Add dark/light mode toggle for comfort.

Excited to see where it goes — thanks for building.

  • henwfan 11 hours ago

    Thanks for the kind words, and for taking the time to write concrete suggestions.

    GitHub sign in is on the way. Right now it is Google only, but I am adding GitHub so it feels more natural for devs.

    For languages, the drills are Python first. Java, C++ and JavaScript will be fully supported by the end of this year across all problems.

    The site is dark by default today. A proper light and dark toggle is planned so people can pick what is more comfortable for longer sessions.

    Really appreciate you trying it this early and sharing where you would like it to go.

  • sumnole 12 hours ago

    Another one to add to the list: allow flexible naming. For example, drilling the two sum problem requires the user to name the hashmap prev_map, but I feel memorizing this sort of stuff detracts from the lesson.
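    For reference, the underlying two sum logic is name-agnostic; a generic one-pass version (any dict name works — prev_map is purely the drill's choice):

```python
def two_sum(nums, target):
    # One pass with a hash map: remember each value's index, and for each
    # new value check whether its complement was already seen. Whether the
    # dict is called seen, prev_map, or lookup changes nothing.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

# two_sum([2, 7, 11, 15], 9) -> [0, 1]
```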

    • henwfan 11 hours ago

      Good point, and that matches other feedback I am seeing.

      You are right that in the current version the checker is still too literal about names and structure. In two sum for example it nudges you toward my map name instead of letting you use your own, which is not what I want to optimise for once you already know the idea.

      The plan from here is to keep an editorial mode for people who want to follow the exact solution and add a more flexible mode that accepts your own names and structure as long as it is doing the same job. Over time the checker should recognise what you actually wrote and adapt its objectives and feedback to that, instead of forcing everyone into one naming scheme.

pelagicAustral 16 hours ago

Any company using leetcode as their primary way to assess competency is a time-wasting, soulless black hole unworthy of any real talent.

  • noident 15 hours ago

    I don't like doing the leetcode grind, but all of the alternatives are strictly worse.

    * Take home projects filter out people with busy lives. Wastes 100 people's time to hire 1 person. Can't be sure they didn't cheat. No incentives to stop company from giving you a 10 hour assignment and then not looking at it. The candidate with the most time to waste wins.

    * Relying on academic credentials unfairly favors people from privileged backgrounds and doesn't necessarily correlate with skill as an engineer.

    * Skipping the tech interview and just talking about the candidate's experience is prone to favoring bullshitters, plus you'll miss smart people who haven't had their lucky break yet.

    * Asking "practical" questions tends to eliminate people without familiarity with your problem domain or tech stack.

    * We all know how asking riddles and brainteasers worked out.

    With leetcode, the curriculum is known up front and I have some assurance that the company at least has some skin in the game when they schedule an engineer to evaluate me. It also tests your general knowledge, and in some part intelligence, as opposed to testing that you have some very narrow experience that happens to overlap with the job description.

    • stuaxo 12 hours ago

      It's not good for the whole cohort of people who are good at their jobs and aren't good at leetcode.

      You're filtering out people who don't have a lot of extra time on their hands to get good at one particular kind of puzzle.

      Time-poor people like parents, or people who are talented but busy in their current jobs.

    • boredtofears 12 hours ago

      I've spent a heck of a lot more time grinding leetcode than I have working on take-home projects. I always enjoyed doing take-homes because I could really spend time on one and make it something worth showing off; if anything, it always felt like the perfect low-stress way to show what you can do. It's amazing how many candidates don't take the time to make it look good (or even meet the objectives, in many cases).

      Haven't done one since pre-LLM era though and that path seems like it might be completely infeasible for employers now.

      That said, the most productive interviews I've been a part of as both employee and employer have always been with the technical people that you'll actually work with and conversational in nature. You can learn a lot about what someone knows by listening to their experiences and opinions (but this depends greatly on the quality of the interviewer)

  • neilv 15 hours ago

    Any company still using LeetCode at all during interviews is signaling that either they are run like a frat house, or are so dim/indifferent that they're unwittingly cargo-culting one.

  • another_twist 15 hours ago

    I used to be in the same camp until I had to interview for a specialist role. I'd happily swap in Leetcode rounds and do away with the highly subjective "design a class hierarchy" nonsense.

stack_framer 5 hours ago

Is there any way to try it without signing in via Google?

  • ohghiZai 5 hours ago

    I’d sign up if there’s a way to not use Google sign in.

androng 15 hours ago

I tried the two sum and found it kind of strange to do line-by-line recall. I thought the only way we could memorize hundreds of leetcode problems is to think in chunks that are several lines, not one line at a time.

HenryQuillin 8 hours ago

Nice! How long will leetcode style interviews stay around for though...

kybernetyk 17 hours ago

That's certainly a (to me) very unusual way to learn programming.

  • qwertytyyuu 16 hours ago

    It’s not about learning programming, more about learning how to solve leet code problems quickly as I understand it

australium 12 hours ago

I want to test out the platform but I'm getting an SSL error on account creation - anyone else?

another_twist 15 hours ago

This is a good product; the mechanism for me was an Excel sheet. I won't sign up though, I've ground through enough LC. These days I don't even prep for algorithm rounds and still manage to land offers. But I'd have appreciated this when I was grinding myself.

dragochat 17 hours ago

...the f?! why are we interviewing ppl for things like this?!

you either:

(a) want DEEP understanding of math and proofs behind algorithms etc.

(b) can get away with very high level understanding, and refer to documentation and/or use LLMs for implementation details help

there is no real world use case for a middle-ground (c) where you want someone with algo implementation details rote-memorized in their brain and without the very deep understanding that would make the rote-memorization unnecessary!

  • komali2 16 hours ago

    > there is no real world use case for a middle-ground (c) where you want someone with algo implementation details rote-memorized in their brain and without the very deep understanding that would make the rote-memorization unnecessary!

    I was watching a video recently talking about how Facebook is adding KPIs for its engineers' LLM usage. As in, you will be marked negatively in your performance review if your code is good but you didn't use AI enough.

    I think, you and I agree, that's obviously stupid right? I imagined myself as an engineer at Facebook, reading this email come through. I can imagine two paths: I roll my eyes, find a way to auto-prompt an LLM to fulfill my KPI needs, and go back to working with my small working group of "underrecognized folks that are doing actual work that keeps the company's products functioning against all odds." Or, the other path: I put on my happy company stooge hat, install 25 VScode LLM forks, start writing a ton of internal and external posts about how awesome AI is and how much more productive I am with it, and get almost 0 actual work done but score the highest on the AI KPIs.

    In the second path, I believe I will be more capitalistically rewarded (promotions, cushy middle/upper management job where I don't have to do any actual work). In the first, I believe I will be more fulfilled.

    Now consider the modern interview: the market is flooded with engineers after the AI layoffs. There's a good set of startups out there that will appreciate an excellent, pragmatic engineer with a solid portfolio, but there's the majority of other gigs, for which I need to pass a leetcode interview, and nothing else really matters.

    If I can't get into one of the good startups, then, I guess I'm going to put on my dipshit spinny helicopter hat and play the stupid clown game with all the managers so I can have money.

    • ivape 12 hours ago

      I think the influx of many truly self-driven and resourceful self-taught programmers in the 2010s established a perceived need (not necessarily an accurate one) to "properly vet" non-degreed candidates. Stuff like Leetcode is what emerged. The truth is, the "vetting" was originally done via self-selection. Generally computer-oriented and creative people gravitated toward application development, and it was worth something to the world. The world probably didn't know how to value this group of people, so it continuously tried to impose some kind of formal process.

      But like Art, the artists came from everywhere. We're being dishonest if we don't acknowledge what truly made these developers get to where they are, and it wasn't because they originally went "Oh, I know what I'll do, I'll do thousands of Leetcode problems', that is absolutely not the true story of the developer in the last decade.

      Leetcode is a sloppy attempt at recognizing and appropriately handling developers. It was an "attempt", a failed one imho. It fundamentally ignores the spirit in which these developers operated; it reduces them to gym rats, and that's not how they got there.

      This being a spiritual problem is what makes the most consistent sense. Even those that grind Leetcode will tell you their heart is not in it (just like GP mentioned above).

  • bko 16 hours ago

    Maybe it's just me, but I want people that are reasonably competent and you can work with. Maybe there are some jobs that require deep understanding of maths/proofs etc, but those are what, maybe 1 in 100 engineering jobs?

    More often than not, a deep interest in a particular technical domain is a liability. It's like that guy who insists on functional programming design patterns, demanding a fold with tail recursion where simple mutation would have easily sufficed. Or endless optimization, abstraction and forced patterns. Bro, you're building a crud app; we don't need spacecraft design.

    • only-one1701 16 hours ago

      Math puzzles like this are supposed to show deep mastery. I assure you that you don't need DP in 99.999% of cases either, but idiots are still asking House Robber.
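      For readers who haven't seen it, House Robber is the textbook DP problem referenced here: pick a maximum-sum subset of a list with no two adjacent elements. A minimal sketch (one common O(n)/O(1) formulation, not the only one):

```python
def rob(nums):
    # Track two running values: best total if we rob the current house
    # (take) vs. best total if we skip it (skip). Adjacent houses can't
    # both be robbed, so "take" builds only on the previous "skip".
    take, skip = 0, 0
    for n in nums:
        take, skip = skip + n, max(skip, take)
    return max(take, skip)


# rob([2, 7, 9, 3, 1]) -> 12 (rob houses 2, 9, and 1)
```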

  • farhanhubble 17 hours ago

    People are sheep. Someone somewhere used mathematical puzzles as interview questions. That someone became big. Others assumed it was because their interview process was amazing and followed blindly. Soon enough the process started to be gamed.

    I'm seeing this trend again in the field of AI where math olympiad participants are being given God like status by a few companies and the media.

    Truth is even the most prolific computational scientists will flunk these idiotic interviews.

    • netdevphoenix 16 hours ago

      Hundred percent. Classic example of academic smarts vs real world smarts.

      It's why developers as a group will lose negotiating power over time. You would expect a smart person to question why that 'problem' exists in the first place rather than forge ahead and build a solution for a problem that doesn't exist. It's like your manager telling you to write software that does something, whatever that is. Your first question should be why, and you should not type a single letter until you understand the domain and whether a software solution is needed in the first place.

      For all the intellectual credit modern devs give themselves, they are still asking "how high" when told to jump, and in some cases even bragging about jump heights. The only difference is that many devs look down upon (or simply are unable to understand) those who refuse to jump.

      We all know devs have better things to focus on, given the state of modern software development.

    • MyHonestOpinon 12 hours ago

      I am guilty of this. I started asking simple programming questions back in the early 90s. It was just a way to see if the interviewee knew how to use for loops and conditionals, and could solve simple problems. It worked great when candidates came in unprepared, but once people started drilling and memorizing the questions, the problems had to get a lot harder. It got to the point where you really have to study; it is not enough to have 20 years of professional programming experience.

      Fun story. For years, I used a set of problems that I took from a very old programming book, and I have probably seen dozens of solutions to them. About six years ago, in an interview, somebody happened to ask me one of these problems. So I wrote the solution, and the interviewer told me it was wrong, but he couldn't tell me why. Then he proceeded to clear the screen (it was a remote interview). So I flunked the interview with a problem that I knew backwards and forwards.

    • ascorbic 15 hours ago

      Yes, and it's mostly the fault of a handful of companies like Google and Facebook that were started by founders who were still in college, so they chose interview problems that look like CS algo puzzles instead of anything related to real work.

  • petesergeant 16 hours ago

    > why are we interviewing ppl for things like this?!

    Ship has definitely sailed

dzonga 16 hours ago

and yet people still can't build software.

Now the same people in the industry advocating for LeetCode are also advocating for vibecoding. I wonder if an LLM has to do LeetCode before it's approved for vibecoding.

Day in, day out, the software gets worse: delayed, shipped with bugs, very slow. Yet we're told to prove we can build software by doing puzzles.

if you advocate for leetcode - fxxk yxx.

Surac 15 hours ago

sorry for asking: what does grinding LeetCode mean?

  • neilv 15 hours ago

    The phrase "grinding LeetCode" refers to a kind of unmentionable self-stimulus indulged in by people who want tech jobs money, but are bad at software engineering, and who want to work with other people who are bad at software engineering.

    It was most popular during zero interest rate phenomenon, when there were numerous investment scams based on startup companies that could have a very lucrative "exit" for those running the scheme, despite losing money as a business.

    LeetCode falls out of favor when companies realize they need to build viable businesses, and need software engineers rather than theatre performances.

    • koakuma-chan 10 hours ago

      What if I want to work at big tech? Does your message still apply, or if I want to work at big tech, it means I just want tech jobs money, and am bad at software engineering?

      • neilv 9 hours ago

        You could be an outlier. I, too, wanted to work at a particular Big Tech.

        But then I looked again at the prep materials they recommended for their frat hazing interview theatre, and it was so depressingly trashy, that it made me not want to work there anymore.

        And things I read publicly (e.g., culture of disingenuous mercenary careerism, and hiring scraping the bottom of the barrel that knows only the interview gaming) and hear privately (worse) mean that probably it was for the best that I didn't move there, though the bigger bank account would've been nice.

  • dsr_ 15 hours ago

    "grinding" is doing something repetitively, with the connotation that it is difficult and goal-oriented.

    "farming" is the same but without the difficulty: just doing an easy but boring task repeatedly because it gets you something else that you want.

ErroneousBosh 5 hours ago

Why do you need to "grind LeetCode"?

  • linguae 5 hours ago

    Some job positions are so competitive to get that a candidate with good data structures and algorithms skills but who hasn’t seen a specific LeetCode problem before and needs to solve it on the spot may lose out to a candidate who “grinded LeetCode.” It’s kind of like how a good student still needs to prep for standardized tests.

inesranzo 15 hours ago

This project has potential but there are some issues with "Marketing" (I call this lying depending on how it's done)

Please stop with the false urgency and the borderline lie of telling people there are 17 spots left when there most likely aren't.

Doing this to sell more is unethical and dishonest.

I think if this project didn't do this it might work and go far.

clbrmbr 13 hours ago

Rust version?

game_the0ry 15 hours ago

Nice work, this is a pretty cool project.

But fuck leetcode. With AI, it's obsolete at this point.

  • another_twist 15 hours ago

    Not really; it's quite easy to tell if you haven't prepped well. Where AI does shine is online assessments.

constantcrying 17 hours ago

The idea of getting quizzed on how good you are at recalling specific patterns in algorithm construction is completely and utterly bizarre.

I get that some people feel forced into it, but nobody can believe that this is an appropriate measure to judge programmers by. Sure, being able to understand and implement algorithms is important, but that is not what this trains for.

  • henwfan 17 hours ago

    I mostly agree that the interview format itself is strange. I do not think people should be judged mainly on how many patterns they can recall on command.

    The reality for a lot of candidates is that they still face rounds that look exactly like that, and they stress out even when they understand the ideas. I built this for that group, where the bottleneck is turning a pattern they already know into code under a clock. Each step in the drills is tied to a first principles explanation, so the focus is on the reasoning behind the pattern, not trivia.

  • netdevphoenix 16 hours ago

    It's just a power move on devs. People come on HN to brag about crazy high comp and how devs are untouchable. The reality is that if you feel the need to do circus tricks for someone in exchange for a role that makes you happy, you got no leverage. While this might have been less obvious during the late 10s and early 20s with all the fancy pods, consoles, free high quality fresh meals and what not that Big Tech used to offer to devs, it is certainly harder to deny nowadays.

smetannik 4 hours ago

Yet another paid tool.

LeetCode wants a subscription, NeetCode wants a subscription, and now yet another one.