tony a day ago

Keep files <1000 lines. If you can.

Keep chats <30 minutes, ideally 20-minute continuous segments.

Use a `notes/TODO.md` file to main a checklist of objectives between chats. You can have claude update it.

Commit to version control often, for code you supervised that _does_ look good. Squash later.

This glitch often begins to happen around the time you'd be seeing "Start a new chat for better results - New chat" on the bottom right.

If you don't supervise, you will get snagged, and if you miss it and continue, it'll continue writing code under the assumption the deletion was fine: potentially losing the very coverage you'd hope to have gained.

If it does happen, try to scroll up to the chat before it happened and "Restore checkpoint"

claude-3.7-sonnet-thinking, Cursor 1.96.2

  • oriel 6 hours ago

    A small note: 1.96.2 is the VSCode version, the Cursor version latest i think is 0.46.x.

    I'll also say that "Restore checkpoint" often causes crashes or inconsistency in the indexed files. I've found using git and explicit full reindexing has solved more problems than the AI itself.

  • namanyayg a day ago

    Nice tips! I'm working on a Cursor extension to automate checklist and project spec maintenance.

    It improves the context that Cursor has and reduces hallucinations significantly. It's early, but 400 users say it's a lifesaver.

    Shoot me an email? hi [at] nmn.gl or cal.com/namanyayg/giga

  • babyent a day ago

    What if..

    ..I just code it myself?

    • theshrike79 a day ago

      You can.

      ...or you can tell the LLM "write me a go application that adds links from this JSON dump of wallabag.it to raindrop.io" and it's done in 10 minutes.

      (It did use the wrong API for checking if a link already exists, that was an additional 5 minutes)

      I've been doing this shit for a LONG time and it'd take me a way longer than 10 minutes to dig through the API docs and write the boilerplate required to poke the API with something relevant.

      No, you can't have it solve the Florbargh Problem, but for 100% unoriginal boilerplate API glue it's a fantastic time saver.

      • an_guy 18 hours ago

        How do you deal with LLM hallucinating api parameter or endpoints or using imaginary library?

        • ErikBjare 11 hours ago

          You notice it and tell it what to use instead

        • theshrike79 11 hours ago

          Then I write it myself or tell it to correct itself. They tend to be confidently incorrect in many cases, but especially with the online ones you can tell them that "this bit is bullshit" and they'll try again with a different lib.

          Works for common stuff, not so much for highly specialised things. An LLM can't know something it hasn't been "taught".

  • guicho271828 a day ago

    > Keep files <1000 lines. If you can.

    > Keep chats <30 minutes, ideally 20-minute continuous segments.

    > ...

    Is it just me or does this sound like a standard coding/mentoring practice?

anonzzzies 2 days ago

Yep, it's worse than it was, like the other poster in this thread, i am back just copying to and fro to claude and chatgpt as it just works better.

I am starting to wonder how this will all end up. For instance with API use, we (my company) can burn $100s/day with sometimes insanely bad results all of a sudden. Now I bet I signed away all my rights, but in some countries that doesn't cut mustard for consumers buying things. If an API delivers very solid results one day and crap the next and I spent a lot of money, how does that work? There are many people on reddit/youtube speculating why claude sometimes responds like a brilliant coder and sometimes as if it had a full frontal lobotomy. I see this in Cursor too.

  • acoard a day ago

    I'm sympathetic to the issue of services getting worse, it sucks, but

    > If an API delivers very solid results one day and crap the next and I spent a lot of money, how does that work? There are many people on reddit/youtube speculating why claude sometimes responds like a brilliant coder and sometimes as if it had a full frontal lobotomy. I see this in Cursor too.

    This seems like an incredible over-reach. There's no predatory behaviour here. You're free to cancel at any time.

    It's an incredibly fast moving field, a frontier field in software. To say that, in order to charge for something, you are legally bound to never make mistakes and have regressions, is an incredibly hostile environment to work in. You'll stifle growth if people think experiments might come with lawsuits unless they're positive it leads to improvement.

    If they decided they were going to lock everything to gpt-2 and refuse to pay back any people who bought yearly subscriptions, sure I would be agreeable to considering this a bait-and-switch hoodwink. But that is clearly not happening here.

  • nyarlathotep_ 19 hours ago

    How is it possible that users report that but we now see Mr Levels on twitter posting an example of a ThreeJS game thing that he allegedly 100% Cursor'd his way to?

    Is behavior that inconsistent?

    I've used GitHub copilot plenty, and I've observed various "regressions" and inconsistencies, but I've never come even close to that much of a project being totally LLM-generated.

    • anonzzzies 18 hours ago

      Levels is a programmer though; if you have the patience you can make software like that; you might have to be very precise telling it what to fix when it's stuck and cannot fix it itself, which is then just basically voice coding. Typing is much faster and less frustrating, but you can do it; a non programmer would not be able to though.

  • jerpint 2 days ago

    I’ve been noticing this too. The other day, cursor’s version of Claude sonnet (3.7) added a

    with open(file) as f:

      pd.read_csv(f)
    
    This was a mistake not worthy of even gpt3… I’ve also noticed I get overall better suggestions from Claude desktop app.

    I wonder why

    • anonzzzies a day ago

      I cannot find it anymore, but there is a youtube video of someone making / training an llm who showed a lower precision switch switching his small model from bad to lobotomised and he even mentions he thinks it is what Anthropic does when the servers are overloaded. I notice it, but have no proof. There seems to be a large opportunity for screwing people over without consequences though, especially when using these API's at scale.

  • namanyayg a day ago

    I'm a solo founder in SF building something to fix this, let's talk? Email in my profile

rm_-rf_root a day ago

What I’ve learned:

- Use apply (in chat) or composer only if you’re more interested in finding a quick solution than the risk to local code. Often Cursor removes important comments by default.

- Use chat. Create new chats when it doesn’t have the latest version of your code or history/shadow workspace is confusing it. Add relevant files or @Codebase or both.

- Learn to undo. Use git and checkout the files/directories again if needed. I don’t use composer, so files never get deleted.

- Autocomplete is often fairly terrible and regularly gets in the way of trying to see what you’re typing or trying to view. Hit the escape key regularly.

- Use Claude 3.7 for regular coding and 3.7 Thinking for larger things to solve.

  • lolinder a day ago

    > Use git

    This is honestly the only part of this that matters if you do it right. Use composer, but only on a clean git tree. Apply, then look at the git diff and correct anything you don't like. Test it and commit it, then repeat.

    Composer and apply are only dangerous if you're not committing regularly. If you never run them while you have uncommitted changes, you can't lose working code.

oriel 6 hours ago

After working with this tooling for about a month, I've found the biggest struggle is adapting my mental model as the paradigms change. Because make no mistake, this is a research-stage tool that has been thrust into the spotlight while in its infancy.

There's definitely shift between versions of cursor. One time I had an unpleasant crash that also upgraded the version, and suddenly .cursor/ files I thought were working (but hadnt been) showed up mid prompt, and derailed the entire composer run.

I've tried ideas ranging from TODO documents, append-only work logs, blueprint documents, faux-team interactions through role definitions, and much much more.

The most success I've had is by adding manual lines at the end of every prompt, akin to phrases like "be respectful of the existing code and functionality, make deliberate changes with caution and consideration". Beyond the normal "plan and take it step by step" type phrasing.

It would also be nice if .cursor/ rule files actually were actively used, instead of sometimes being hooked in if you wrote the description right.

Sometimes I have great success at utilizing specialized roles, with named team members and explicit direction, to elect the "appropriate" role for the items being worked on, and discuss pros/cons and facilitate brainstorming.

At the end of the day, I think I end up doing an annealing pattern for changes:

- Build the features and breaking is okay so long as we get the features.

- Distill the built features into a doc with examples and roll back the code by git.

- Build a v2 based on the distillation.

- Rinse repeat, usually until v4 or v5, when i tear it all down and write it 99% by hand from the distilled documents.

Definitely a rocky road so far.

siva7 a day ago

Do careful code reviews after every change by cursor. Do small changes, break up bigger tasks. Be very precise in your instructions. It doesn't replace an experienced developer. Not much different than how a senior engineer would tackle a task. Everything else will result in throw-away spaghetti crap like bolt and similiar-minded frameworks which will collapse in production on day 1 (if you get that far). It's tempting to take it as an autopilot but the results are much better if you treat it as a copilot.

  • gryfft a day ago

    > It's tempting to take it as an autopilot but the results are much better if you treat it as a copilot.

    As the son of a pilot, this sentence is really funny to me. A real-world pilot would switch the places of 'copilot' and 'autopilot' in your metaphor-- the autopilot maintains course on a given vector but isn't the same thing as a human being in command of the vehicle for exactly the reasons you refer to.

    • siva7 a day ago

      It's frightening when i think of that Boeing & Airbus are trying to replace the human copilot with AI / computer systems so that in future only one human pilot will command a passenger jet.

BrandiATMuhkuh a day ago

Same here. It seems to me Sonnet 3.7 causes it.

Just today, I asked Sonnet 3.7 to use the app's accent color, which can be found in the @global.css, to apply it to some element. It went on a massive tangent, scanning all sorts of documents, etc. I then asked Sonnet 3.5 for the same thing, and it simply took the accent color and applied it. No tangent, no grep, ...

  • FlyingAvatar a day ago

    The Sonnet 3.7 'Agent' implementation in Cursor is terrible. It takes 10x longer to produce a worse response. I don't understand if this is a cost saving measure to reduce token count, but it is really bad.

    Thankfully, switching to Sonnet 3.7 in 'Ask' mode has been essentially fine and is comparable to how Sonnet 3.5 performed for me.

    • stuaxo 14 hours ago

      Different models behave differently- it's possible that for 3.7 to work it needs different system prompts.

  • zsoltkacsandi a day ago

    Sonnet 3.7 is terrible, it was a huge mistake from Antrophic to release it and make 3.6 unavailable.

    It does things I didn’t ask, while deleting random things I just asked to add in the previous prompt, it’s a mess.

    However it does one thing very well: it can make me angry very quickly, like a real human.

    • timabdulla a day ago

      My feeling (totally unproven) is that in the drive to make Sonnet 3.7 more "agentic", they've lost some of its ability to actually just stick to what you asked it to do. It seems that it "wants" (I know, it's not sentient!) to be more in the driver's seat now.

      Definitely can be very annoying if you do just want it to execute on a set of instructions.

      • 4b11b4 a day ago

        This can be mostly mitigated with the right system prompt although I noticed occasionally the prompt will be ignored (~1/20).

      • FlyingAvatar a day ago

        Yes, the Agent mode is terrible. Have you tried using the Ask mode instead?

      • zsoltkacsandi a day ago

        I don't know but if it wants to be in the driver seat, maybe it can pay the subscription fee for itself, because it is barely usable for anything I paid for before.

        To me it's not a good deal that I pay for something that makes up the things it wants, and completely disregards what I asked. It's like the recipe for the worst SaaS you ever used.

    • seunosewa a day ago

      It seems to perform worse in Cursor, so maybe that's where the problem is.

      • zsoltkacsandi a day ago

        Yeah, I am just considering cancelling my subscription and go back to ChatGPT.

    • actualwitch a day ago

      wdym unavailable? it's still on api under `claude-3-5-sonnet-20241022`.

      • mort96 a day ago

        What do the numbers '3-5' in that name refer to if it's version 3.6?

        • timabdulla a day ago

          There was never a Sonnet 3.6. They released what is commonly known as 3.6 as "Sonnet 3.5 (New)". Then, because so many folks ended up referring to it as 3.6, they decided to call this new model 3.7, as the mental territory for 3.6 was already occupied by 3.5 (New). Not confusing in the slightest!

      • zsoltkacsandi a day ago

        Under unavailable I meant unavailable. I was talking about 3.6 not 3.5.

        • timabdulla a day ago

          There is no 3.6. There is 3.5 and 3.5 (New), both of which remain available.

          • zsoltkacsandi a day ago

            Ah, you are right, I found it (it was under the more menu on the UI).

ost-ing 2 days ago

Is anyone actually getting a performance boon having these llms directly integrated into the editor? I find copy pasting blocks of code to chatgpt works really well to get help in isolation, especially if im doing something mathematical or algorithmic, or researching a new data struct or theory. Do I want it to run loose in my repository? Absolutely not

  • smallerfish a day ago

    Supermaven (which is owned by Cursor) in Intellij is fantastic - not perfect, but definitely speeds up boilerplate work. If you are e.g. doing a bunch of repetitive refactoring, after one or two iterations it will pick up the pattern and start suggesting it for you. Same goes for writing things like DAOs or resources, where you already have the pattern established for a different domain in your codebase.

    I use Claude web to kickstart entirely new work or to attempt large refactors.

  • i_love_retros a day ago

    As far as editor integrations go I've only used vscode copilot, and it's the method I've had least success with.

    Aider is impressive but I can't use deepseek api at work so aider becomes too expensive using sonnet.

    Which leaves me copying pasting into kagi code assistant as my most feasible and reliable llm coding tool at work. And its actually very good, but the copy pasting gets tedious.

  • mindwok a day ago

    I guess it depends on the kind of work you’re doing. I use it for UI development and backend API stuff and it’s stupid how much work it does for me. I literally write a composer prompt, alt tab for 2 minutes, come back and it’s implemented a brand new endpoint with new models, GraphQL resolvers, working tests etc. It’s awesome.

  • goodoldneon a day ago

    I use Cursor and I get a lot of benefit from its autocomplete suggestions, but its composer is horrible so I never use it. The dream of telling AI to make changes on its own hasn’t arrived

  • dutchCourage a day ago

    It took me a while to make the switch from copy pasting in a chat to Cursor, but I do see an improvement in my productivity in a couple of ways. For context, this is doing front-end development and knowing what I'm doing. The benefits aren't there if I'm using a technology/library I'm not familiar with.

    For the most part, I just type code the same way I use to but I get:

    - an auto-complete on steroids

    - the tab feature reminding me of impacted code I forgot to update after making a change elsewhere (big one as I easily get distracted).

    I very rarely use the chat/composer. Usually I'm faster by going through files manually and making changes myself helped by the features mentioned above.

  • lolinder a day ago

    I find copy pasting blocks of code to be pretty worthless. It takes way too much effort to give it enough context for it to do anything sensible with my code, so it's usually better to just not try.

    Cursor is the first time that I have felt that a chat-like UX could actually be useful for coding, because it gets the context for me. I still prefer autocomplete (and Cursor's autocomplete is very very good), but chat is actually occasionally useful for me in large projects. Without Cursor chat is only useful for one-off no-or-low-context scripts.

  • mcintyre1994 a day ago

    I don’t think it’s much better when I’m asking for a chunk of code, but the autocomplete is an improvement over not having it for me. Also I like being able to easily give it a relevant file from the code and having it follow my style/use the libraries I’m using etc. without having to type all that stuff out in the prompt. I think it’s worth having it in the editor, but it’s not as good as it needs to be for some of what they say it can do yet.

  • mock-possum a day ago

    I’m not, honestly - it generally takes about the same amount of time, if not more, to carefully handhold the llm through properly implementing the feature.

    At this point it’s really a question of how you’d rather spend your time - managing an AI? Or writing code?

    Even though generally I prefer the latter, it’s fun to take a break and give the former a short occasionally. I’d say currently I let cursor do its thing for maybe 1 out of every 5 or 6 tickets, usually just for the sake of variety, or if I’m spinning my wheels and need to look at something to get started.

techpineapple a day ago

I’ve been trying to “vibe code” with cursor, and it’s pretty bad. Lots of regressions, I get stuck in loops where it keeps breaking something. Maybe it’s the language I’m using (typescript) to a less applicable application (games) or the model (Claude 3.5/7) but I am very surprised that people assert a non-developer could vibe code a full application.

In the one hand it’s impressive how quickly I could make a basic UI, on the other hand it’s quite unimpressive my ability to get the Ui to actually do anything, and the number of very basic mistakes (whoops, forgot to add types or here let me import a file that doesn’t exist)

I even had a weird thing where it asked me to paste in the code of a file, even though the file was explicitly added to its context.

nextweek2 a day ago

I have a lot of .cursor/rules

If it goes off track I put a rule in there. It’s like a junior developer that I have to keep constraining it to project goals, coding styles and other aspects.

I have different files in there to help with being able to reuse rules for different projects.

Overtime it’s getting better at staying on track.

  • braebo a day ago

    Have you posted your rules anywhere online?

pizza 15 hours ago

Context length, context length, context length. LLM architectures (there are rare exceptions) inherently get worse at answering problems given too much context. From a user point of view, these appear as just idiosyncrasies to each model's ability to stay coherent - you just have to hope the devs are cognizant of this behavior in distributions of data close to yours.

Keep the input minimal. Keep a set of gold standard tests running in a loop to catch problems quick. Don't tune out. Debate whether you really need to use that new model you haven't worked with that much yet just because it's newer. And double check you aren't being sold e.g. a quantized version of the model.

jvanderbot a day ago

It's not just cursor.

I use Aider for side projects (with Claude) and for some reason it will also delete working code when making a partial correction. It just throws out the good with the bad when I suggest something.

  • alecco a day ago

    I think all these tools have a long way to go. Including Claude Code. I couldn't make any behave properly. I hope some day they become useful for complex repositories.

pawelduda 11 hours ago

I often tell cursor to "Keep the changes to an absolute minimum, don't do anything out of scope". I find it helps prevent a sneaky component structure/CSS class change or something along these lines

muzani a day ago

Claude 3.7 feels overtuned. I asked it to write tests. It replicated my code into mocks and "monkey-patched" it to pass tests. Basically if it didn't pass tests, it will rewrite the mock to pass.

It seems that cursor-thinking will come up with 3 options and pick the dumbest one. Leading to much worse performance than non-thinking sometimes.

A big part of it seems to be increased focus on following instructions vs 3.5. If you don't tell it to not cheat, it cheats lol. Sometimes it's even aware of this, saying things like "I should be careful to not delete this" but deleting it is the fastest path to solving the question as asked, so it deletes.

gizmo 2 days ago

Sonnet 3.5 is better at respecting your code than 3.7.

stuaxo 14 hours ago

Do people use Cursor in containers or just set it going in their actual terminal ?

I can't imagine not using something like PodMan, I just haven't set it up, so haven't tried Cursor.

neuralkoi 2 days ago

Same here. I had an issue yesterday where it liberally deleted a chunk of code it must have thought was extraneous but at the same time introduced a huge privacy vulnerability.

If you use Cursor for personal projects, I recommend reviewing each change very, very carefully.

haffi112 a day ago

It has happened to me on very large files and when you've had a long composer session. I usually check every file that is being modified for large removals because of this. It's a bit annoying, but something I can tolerate.

  • mort96 a day ago

    How does it feel, being a full time code reviewer responsible for reviewing the output of an idiot which removes and breaks stuff randomly?

  • binarymax a day ago

    This seems like more than just a bit annoying. This would be a deal breaker for me. How can you tolerate this?

futhey a day ago

This isn't new or unique. The model seems to be too focused on the task at hand, "cleaning up" unrelated code.

I basically have to branch, commit every time it makes any progress at all, and squash later. There are built-in checkpoints that basically do this.

I actually run this side-by-side with my preferred IDE, and GitHub Desktop (to visualize the diffs). So, prompt -> Claude makes a change -> I view the diff -> I make some edits -> Commit -> back to Cursor.

viraptor 17 hours ago

This seems very language specific. I haven't seen it in ruby, but in python the autocompletion tries to delete a correct end of the function all the time.

delduca a day ago

I do not trust AI on my editor, I prefer to use on a separate application.

electroly a day ago

Go back to Claude 3.5. There are still issues with 3.7 in Cursor.

  • dnh44 a day ago

    I've found 3.7-thinking to be much better for SwiftUI than 3.5. No random deletions, no hallucinations.

    It does seem to be pretty keen on going a step further than prompted though but the code works well so I can't really complain about that.

4b11b4 a day ago

I don't understand most of the comments on this thread. They must not know about the built in restore points, git commits and diff, system prompts, and cursor rules.

Yes, these models are merely approximations, but things aren't just blindly bad as it these comments make it seem.

Edit: Yes, Sonnet 3.7 is eager, but I'd have to assume is designed that way. Yes, sometimes Sonnet ignores my system prompt. Again, these things are merely approximations that map from tokens to tokens. They are not reasoning or intelligent.

ph4evers a day ago

One of the devs just announced an upcoming fix for version 0.47 on Reddit. Seems like they have some problems with integrating Sonnet 3.7 in Cursor.

himeexcelanta a day ago

I was getting this with claude (sonnet) 3.7

Restarting the editor worked! Can also try restarting the computer.

singularity2001 a day ago

Not just working code also useful comments, but this behavior is not specific to cursor

Jotalea a day ago

I don't know man, I'm busy learning how to use neovim.

GardenLetter27 a day ago

I wish it were easier to just send code to a chat and then get a diff back, and limit the files or lines that can be changed.

Too many of these tools seem to either give no diff, or make absolutely massive edits.

pranav7 2 days ago

have experienced it a couple times. 3.7 seems to go on random tangents unrelated to the original ask.

android521 19 hours ago

i tried vibe coding and it was kind of working like magic unit it didn't. It took much longer to recover my working code. Now i have given up on vibe coding. AI is only used for small cope clearly defined tasks to speed things up.

miwaniza a day ago

Just happened to me: Sonnet 3.7 was editing one file by my request. Then Cursor agent process was interrupted because file was edited ¯\_(ツ)_/¯. Happened multiple times, with 3.5 version too. No external changes were issued.

MrMcCall a day ago

No. Never.

And neither has a drunken meth addict on an oz of shrooms and ketamine while snorting low-grade fentanyl.

Because I care about my craft.

> Any tips on fixing those?

Understand that no matter how fancy the guess-the-next-token machine is, it will NEVER replace the hard graft of logically deducing how a change is going to percolate througout your codebase.

The programmer's motto should resemble the old Porshe one:

  Logical reasoning: accept no substitute
When an engine can use my codebase to build up an internal logical structure of its cascading effects where potential changes to the code can be what-if'd as a kind of diff, then I will consider it to be worthy of evaluation.

Until then, I feel like an NBA player seeing that their opponent has chosen to only dribble with their ass.

I don't think you folks realize that you're the ones hallucinating. Predicting the next token is never going to be anywhere near 100% successful, especially for interesting projects.

Seriously, folks. I mean, week after week we keep seeing this shit show up here and y'all're like "You got any tips how I can keep smoking meth but not lose my last three teeth?"

  • cpursley a day ago

    tl;tr: get off my lawn.

    • MrMcCall a day ago

      No, I'm suggesting you not shit on your lawn, because I find my never shitting on my lawn prevents that ugly smell that you keep complaining about.

      • cpursley a day ago

        Well, whatever. I've already shipped several profitable projects which lean heavily on LLMs to fill in my knowledge & skill gaps.

        • MrMcCall a day ago

          from your website:

          > In the AI future people will be hungry for content that is curated by people they trust.

          I'm all for trustworth people and trustworthy work. Curation by sensible humans is precisely what all information needs.

          I wish you the best of luck!