

This is why I say some people are going to lose their jobs to engineers using AI correctly, lol.


What are you even trying to say? You have no idea what these products are, but you think they are going to fail?
Our company does market research and test pilots with customers, we aren’t just devs operating in a bubble pushing AI.
We are listening and responding to customer needs, applying this technology sparingly in areas that drive revenue.


These tools are mostly deterministic applications following the same methodology we’ve used for years in the industry. The development cycle has been accelerated. We are decoupled from specific LLM providers by using LiteLLM, prompt management, and abstractions in our application.
Losing a hosted LLM provider means we point the LiteLLM proxy at something else without changing the contracts with our applications.
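To make the decoupling concrete, here’s a minimal sketch of the kind of alias layer that keeps vendor names out of application code. The alias names and model strings are made up for illustration, not our actual config, and the litellm call appears only in a comment:

```python
# Sketch: provider-agnostic model aliases, so app code never names a vendor.
# Alias names and provider/model strings below are illustrative only.

MODEL_ALIASES = {
    # internal alias -> LiteLLM-style "provider/model" string
    "chat-default": "openai/gpt-4o-mini",
    "chat-heavy": "anthropic/claude-sonnet-4",
}

def resolve_model(alias: str) -> str:
    """Translate an internal alias to the currently configured provider model.

    Swapping providers means editing MODEL_ALIASES (or the proxy config),
    not application code.
    """
    try:
        return MODEL_ALIASES[alias]
    except KeyError:
        raise ValueError(f"unknown model alias: {alias}")

# Application code only ever references the alias, e.g.:
# litellm.completion(model=resolve_model("chat-default"), messages=[...])
```

The same idea applies whether the mapping lives in code like this or in the LiteLLM proxy’s model list.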


Well, I typed it with my fingers.


Incorrect, but okay.


We use a layered architecture following best practices and have guardrails, observability and evaluations of the AI processes. We have pilot programs and internal SMEs doing thorough testing before launch. It’s modeled after the internal programs we’ve had success with.
We are doing this very responsibly, delivering a product our customers are asking for, with the tools to calibrate minor things based on analytics.
We take data governance and security compliance seriously.
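For a concrete picture of what a deterministic guardrail pass can look like, here’s a sketch. The rule names, patterns, and thresholds are entirely hypothetical examples, not our actual pipeline:

```python
import re

# Illustrative guardrails: deterministic post-checks on model output before
# it reaches a customer. These specific rules are made-up examples.

GUARDRAILS = [
    # (name, predicate that must hold for the output to pass)
    ("no_email_leak", lambda text: not re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)),
    ("under_length_cap", lambda text: len(text) <= 2000),
    ("no_empty_reply", lambda text: bool(text.strip())),
]

def run_guardrails(output: str) -> list[str]:
    """Return the names of guardrails the output violates (empty list = pass)."""
    return [name for name, ok in GUARDRAILS if not ok(output)]

failures = run_guardrails("Sure, contact me at alice@example.com")
```

Checks like these sit alongside the evals and observability, so a bad output is blocked deterministically rather than hoping the model behaves.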


While it’s possible to see gains in complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens and frustration.
I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?
For example, if you haven’t read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.
Your CLAUDE.md might be trash, and maybe you’re using @file wrong, blowing tokens or biasing your context the wrong way.
LLM context windows can only scale so far before you start seeing diminishing returns, especially if the model or tooling is compacting them.
https://www.promptingguide.ai/
https://www.anthropic.com/engineering/claude-code-best-practices
There are community guides that take this even further, but these are some starting references I found very valuable.
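As a hedged illustration of what a non-trash CLAUDE.md can look like (the contents below are made up for a hypothetical TypeScript project, not from any official guide):

```markdown
# Project notes for Claude

## Commands
- `npm run test` — run the unit suite before claiming a task is done
- `npm run lint` — must pass before any commit

## Conventions
- TypeScript strict mode; no `any`
- New endpoints need a test under `tests/api/`

## Don't
- Don't edit generated files under `dist/`
```

Short, imperative, project-specific notes like this beat a wall of generic advice that just burns context.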


In my opinion, Codex is fine, but Copilot has better support across AI providers (more models), and Claude is a better developer.


Sure thing, crazy how anti-AI Lemmy users are!


I get it. I was a huge skeptic 2 years ago, and I think that’s part of the reason my company asked me to join our emerging AI team as an Individual Contributor. I didn’t understand why I’d want a shitty junior dev doing a bad job… but the tools, the methodology, the gains… they all started to get better.
I’m now leading that team, and we’re not only doing accelerated development, we’re building products with AI that have received positive feedback from our internal customers, with a launch of our first external AI product going live in Q1.


If you’re not already messing with mcp tools that do browser orchestration, you might want to investigate that.
For example, if you set up Puppeteer, you can have a natural conversation about the website you’re working on, and the agent can orchestrate your browser for you. The implication is that the agent can get into a feedback loop on its own to verify the feature you’re asking it to build.
I don’t want to make any assumptions about additional tooling, but this is a great one in this space https://www.agentql.com/
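For reference, wiring up a Puppeteer MCP server is usually just a config entry like the sketch below. I’m assuming the reference MCP Puppeteer server package here; these packages move around, so verify the name against current docs:

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```

Once registered, the agent gets browser tools (navigate, click, screenshot) it can call in that feedback loop.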


That’s a great methodology for a new adopter.
Curious whether you read about it, or did it out of mistrust for the AI?


Great? Business is making money. I already explained we have human reviewed PRs on top of full test coverage and other validations.
We’re compliant with the security policies at our organization, and we have no trouble maintaining the code we’re generating, because it’s based on years of well-defined patterns and best practices that we score internally across the entirety of engineering at our organization.
As more examples in the real world:
Aider has written 7% of its own code (outdated, now 70%) | aider https://aider.chat/2024/05/24/self-assembly.html
https://aider.chat/HISTORY.html
LibreChat is largely contributed to by Claude Code; it’s currently the best open-source ChatGPT client, and it’s just been acquired by ClickHouse.
https://clickhouse.com/blog/clickhouse-acquires-librechat
https://github.com/danny-avila/LibreChat/commits/main/
Such suffering from the quality! So much worse than our legacy monolith!


Cursor and Claude Code are currently top tier.
GitHub Copilot is catching up, and at a $20/mo price point, it is one of the best ways to get started. Microsoft is slow rolling some of the delivery of features, because they can just steal the ideas from other projects that do it first. VScode also has extensions worth looking at: Cline and RooCode
Claude Code is better than just using Claude in Cursor or Copilot. Claude Code has next-level magic that dispels some of the myths being propagated here about “AI bad at thing,” because of the strong default prompts and validation built into it. You can say dumb, ignorant human shit, and it will implicitly do a better job than other tools given the same commands.
To REALLY utilize Claude Code, YOU MUST configure MCP tools… context7 is a critical one that avoids one of those footguns: “the model was trained on older versions of these libraries.”
Cursor hosts models with their own secret sauce that improves their behavior. They hardforked VSCode to make a deeper integrated experience.
Avoid antigravity (google) and Kiro (Amazon). They don’t offer enough value over the others right now.
If you already have an openai account, codex is worth trying, it’s like Claude Code, but not as good.
JetBrains… not worth it for me.
Aider is an honorable mention.
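Since I mentioned context7: registering it with Claude Code is typically a project-level `.mcp.json` entry along these lines. I’m assuming the Upstash-published package name here; double-check it against the context7 README before copying:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

With that in place, the agent can pull current library docs instead of leaning on stale training data.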


We have human code review, and our backlog was well curated prior to AI. Strongly defined acceptance criteria, good application architecture, and unit tests with 100% coverage are just a few ways we keep things on the rails.
I don’t see what the idea of pair coding has to do with this. Never did I claim I’m one-shotting agents.


Enforced by the same police that helped ICE be violent towards peaceful protestors?


Your anecdote is not helpful without seeing the inputs, prompts, and outputs. What you’re describing sounds like not using the correct model, or not providing good context or tools to a reasoning model that can intelligently populate context for you.
My own anecdotes:
In two years we have gone from copy/pasting 50-100 line patches out of ChatGPT, to having agent enabled IDEs help me greenfield full stack projects, or maintain existing ones.
Our product delivery has been accelerated while maintaining the same quality standards, verified by the internal best practices we’ve codified with deterministic checks in CI pipelines.
The power comes from planning correctly. We’re in the realm of context engineering now, learning to leverage the right models with the right tools in the right workflow.
Most novice users have the misconception that you can tell it to “bake a cake” and get the cake you had in your mind. The reality is that baking a cake can be broken down into a recipe with steps that can be validated. You, as the human-in-the-loop, can guide it to bake your vision, or design your agent in such a way that it can infer more information about the cake you desire.
I don’t place a power drill on the table and say “build a shelf,” expecting it to happen, but marketing of AI has people believing they can.
Instead, you give an intern a power drill with a step-by-step plan with all the components and on-the-job training available on demand.
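The recipe idea above can be sketched in a few lines: break the task into steps, each with a deterministic acceptance check, and stop the moment one fails. The step names and checks here are toy placeholders, not a real agent framework:

```python
# Sketch: "bake a cake" broken into steps with deterministic validation,
# the same shape as gating agent output in CI. All names are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]      # produces/updates the working state
    check: Callable[[dict], bool]    # deterministic acceptance criterion

def execute_plan(steps: list[Step], state: dict) -> dict:
    for step in steps:
        state = step.run(state)
        if not step.check(state):
            # Human-in-the-loop: stop and surface the failing step
            raise RuntimeError(f"step failed validation: {step.name}")
    return state

plan = [
    Step("mix", lambda s: {**s, "batter": True}, lambda s: s.get("batter", False)),
    Step("bake", lambda s: {**s, "baked": True}, lambda s: s.get("baked", False)),
]

result = execute_plan(plan, {})
```

Swap the toy checks for real ones (tests pass, lint clean, schema valid) and you have the skeleton of a supervised agent workflow.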
If you’re already good at the SDLC, you are rewarded. Some programmers aren’t good at project management and will find this transition difficult.
You won’t lose your job to AI, but you will lose your job to the human using AI correctly. This isn’t speculation either, we’re also seeing workforce reduction supplemented by Senior Developers leveraging AI.


How to get a REAL ID and use it for travel, https://www.usa.gov/real-id


This is why the seedbox SaaS market exists: turnkey hosted solutions where the only heavy lifting is the configuration, which takes some reading to understand.
Check out the Servarr Wiki, Ombi, and Syncthing as a starting point for media discovery and curation tooling.
Early adopters will be rewarded with better methodology by the time the tooling catches up.
You’re too busy trying to dunk on me to understand that you already have some really helpful tools.