I wanted to provide an update on my AI project experience.
When I started two months ago, I would have told you I could complete my project before I went to sleep; Deepseek was that good at doing exactly what I wanted. Over the last few weeks, though, I've struggled to make much progress, just like when I was coding manually. When I first told it what I wanted, it designed it perfectly. Then I started adding features, and that is where the problems started.
DS would write a function differently from how it had written it a day before (for example, passing a parameter in the POST body instead of the query string), and then my code would break. Rather than fixing what I knew was the issue, I wanted DS to recognize it. It did, but only after I fed it fresh copies of the affected code and/or table definitions. Since I used DS to write things I had no idea about (event handlers on browser clicks, for example), it would tell me I was missing this and to add it to that, but then later it might tell me to add the same this to a different that, duplicating functions and leading to troubleshooting anxiety. Also, every now and then DS gives me code that is missing a closing tag...
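To make that first problem concrete, here's a rough sketch of the kind of mismatch I mean (the endpoint and parameter names are made up for illustration, not from my actual project): the same call written two different ways on two different days, where a server still reading the value from the query string silently gets nothing from the second version.

```typescript
// Day 1: DS sends the parameter in the query string.
async function saveItemV1(itemId: string): Promise<Response> {
  return fetch(`/api/items?itemId=${encodeURIComponent(itemId)}`, {
    method: "POST",
  });
}

// Day 2: the regenerated version moves the same parameter into the POST body.
// A server that still reads itemId from the query string now sees it as missing.
async function saveItemV2(itemId: string): Promise<Response> {
  return fetch("/api/items", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ itemId }),
  });
}
```

Neither version is wrong on its own; the breakage comes from the two halves of the app quietly disagreeing about where the value lives.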
I wanted AI to code this entire project based on my input. I continue to believe that AI is great for mundane tasks like syntax or logic checking, but it cannot substitute for overall project development. If I wasn't a coder before, I would have no idea what to ask or tell DS. To those of you waiting on the features I promised: I still feel I can complete them by the end of this month.
I can definitely relate to this. AI is impressive at getting an initial design or feature off the ground, but once a project grows, consistency and architectural awareness become the real challenge. Another issue I’ve noticed is that AI doesn’t retain a true mental model of the entire codebase, so small implementation differences can quietly compound into bigger problems. Used as a productivity aid rather than a replacement for engineering judgment, it’s still incredibly valuable.