I opened up DeepSeek and typed something like "Using this page (https://bitcointalk.org/index.php?action=profile;u=30747), can you identify all the values and put them in a C# DataTable?" and within a minute it spat out code that had originally taken me weeks to write.
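For a sense of what it produced, here is a rough sketch of that kind of code (my own reconstruction for this post, not DeepSeek's output; the parsing approach and the assumption that the profile fields sit in label/value table cells are mine):

Code:
using System;
using System.Data;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class ProfileScraper
{
    static async Task Main()
    {
        const string url = "https://bitcointalk.org/index.php?action=profile;u=30747";

        using var client = new HttpClient();
        string html = await client.GetStringAsync(url);

        // Collect each "Label / Value" pair into one DataTable row.
        var table = new DataTable("Profile");
        table.Columns.Add("Field", typeof(string));
        table.Columns.Add("Value", typeof(string));

        // Assumes the profile page lays fields out as adjacent table cells,
        // e.g. <td>Posts:</td><td>1234</td>; adjust the pattern if the markup differs.
        var pairs = Regex.Matches(html,
            @"<td[^>]*>\s*([^<:]{1,40}):\s*</td>\s*<td[^>]*>(.*?)</td>",
            RegexOptions.Singleline);

        foreach (Match m in pairs)
        {
            string field = m.Groups[1].Value.Trim();
            // Strip any tags left inside the value cell.
            string value = Regex.Replace(m.Groups[2].Value, "<[^>]+>", "").Trim();
            table.Rows.Add(field, value);
        }

        foreach (DataRow row in table.Rows)
            Console.WriteLine($"{row["Field"]}: {row["Value"]}");
    }
}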

I have been using it for what all programmers hate - error handling. And with the incredible context DeepSeek maintains, I don't need to issue the same instructions over and over: I can write "How about this code?" and it will apply every necessary check for nulls and so on, as long as I asked it to earlier in the same chat.
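As a rough illustration of the kind of checks it adds (a made-up example, not code from that chat), the difference looks something like this:

Code:
using System;
using System.Data;

static class SafeRead
{
    // Without guards: row["Posts"] can be DBNull, the column can be missing,
    // or the text can be non-numeric, and any of those throws at runtime.
    public static int PostCountUnsafe(DataRow row) =>
        int.Parse(row["Posts"].ToString());

    // With the checks it adds when asked: a null row, a missing column,
    // DBNull, and bad formatting all fall back to a default instead of throwing.
    public static int PostCountSafe(DataRow row, int fallback = 0)
    {
        if (row == null || !row.Table.Columns.Contains("Posts"))
            return fallback;

        object cell = row["Posts"];
        if (cell == null || cell == DBNull.Value)
            return fallback;

        return int.TryParse(cell.ToString(), out int posts) ? posts : fallback;
    }
}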
Being in Alberta, I can use it easily until around 22:00, when the server starts getting busy. So I installed DeepSeek on my local server, an HP DL380 with 64GB RAM and Xeon processors. I downloaded the 90GB model in under 5 mins* (lol) and it installed fast! But there is no video card in this server. I asked it "How can I reduce cat allergens?" and it took 17 minutes to think, then spat out about one word every two seconds. Not usable at all. That was the full R1:70b. So I tried the smaller R1:8b, got a notice that there was a newer version, asked "What version are you?", and it still spent almost five minutes thinking.
* When I first got into BBSes, I could download 1MB an hour. Now I can download 100MB a second (a 360,000x increase in 30 years).
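For anyone running it the same way (those R1:70b / R1:8b tags are Ollama's model tags), you can also query the local instance from C# through Ollama's HTTP API rather than the web GUI. A minimal sketch, assuming the default port and a model you have already pulled:

Code:
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class LocalDeepSeek
{
    static async Task Main()
    {
        // CPU-only generation is slow, so give the request plenty of time.
        using var client = new HttpClient { Timeout = TimeSpan.FromMinutes(30) };

        var request = new
        {
            model = "deepseek-r1:8b",   // whichever tag you pulled locally
            prompt = "What version are you?",
            stream = false              // wait for the full answer instead of streaming tokens
        };

        var body = new StringContent(JsonSerializer.Serialize(request),
                                     Encoding.UTF8, "application/json");
        var response = await client.PostAsync("http://localhost:11434/api/generate", body);
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        Console.WriteLine(doc.RootElement.GetProperty("response").GetString());
    }
}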
If you are interested in trying DeepSeek on a computer
with a video card, here is a straightforward guide for running the AI and the web GUI.
https://www.tecmint.com/run-deepseek-locally-on-linux/

How come I have never noticed this thread of yours?

I'm not much of an expert in this field, nor do I have the kind of experience you do, but I have a curious mind and like to try new things, experiment with them, and learn.
Currently I'm experimenting with different LLM models locally through Ollama and LM Studio. Mostly it works, but once the model gets large enough the responses become painfully slow, a nightmare even for my fairly high-end machine (RTX 5060 Ti 16GB, 32GB RAM, Ryzen 9 7900X).
The largest I've run is 32 billion parameters, the DeepSeek R1, and it's slow but still usable.
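A quick back-of-the-envelope look at why 32B is about the ceiling for this card (the quantization levels below are just the common ones; I'm not claiming which build is actually involved):

Code:
using System;

class VramEstimate
{
    static void Main()
    {
        double paramsBillion = 32;              // a 32-billion-parameter model
        double[] bitsPerWeight = { 16, 8, 4 };  // fp16, int8, 4-bit quantization

        foreach (double bits in bitsPerWeight)
        {
            // Weight memory alone: parameters x bytes per weight (KV cache and
            // runtime overhead come on top of this).
            double gigabytes = paramsBillion * 1e9 * (bits / 8.0) / 1e9;
            Console.WriteLine($"{bits,2}-bit: ~{gigabytes:F0} GB of weights");
        }
        // Prints ~64 GB, ~32 GB and ~16 GB. Even at 4-bit the weights alone
        // fill a 16 GB GPU, so layers spill into system RAM and generation
        // slows down sharply.
    }
}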
