Bitcoin Forum
Author Topic: NVIDIA is launching its own AI agent  (Read 32 times)
Yabani (OP)
Full Member
Activity: 581
Merit: 173


March 11, 2026, 08:04:25 AM
 #1

Quote
NVIDIA is now launching its own open claw 🤯 agent.

Nemoclaw will be an open-source AI agent built for the enterprise world.

It will run not only on NVIDIA GPUs but also on AMD, Intel, or any other hardware.

This is a huge signal for the direction of AI agents. OpenClaw showed the world what is possible. They aren't going anywhere.

https://x.com/_MaxBlade/status/2031397388674388409

It looks like the market is about to head off in several different directions.
mandown
Legendary

Activity: 2590
Merit: 2121



March 12, 2026, 08:14:28 AM
 #2

They have already shared the open-source version: LM Studio has published the 120B-parameter build with the full 1M-token context.

Quote
Memory Requirements
To run the smallest Nemotron 3 Super, you need at least 83 GB of RAM.

Capabilities
Nemotron 3 Super models support tool use and reasoning. They are available in gguf.

About Nemotron 3 Super
Nemotron-3-Super-120B-A12B is a large language model (LLM) trained by NVIDIA, designed to deliver strong agentic, reasoning, and conversational capabilities. It is optimized for collaborative agents and high-volume workloads such as IT ticket automation. The model has 12B active parameters and 120B parameters in total.

Like other models in the Nemotron family, it responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be configured through a flag in the chat template.

The model employs a hybrid Latent Mixture-of-Experts (LatentMoE) architecture, utilizing interleaved Mamba-2 and MoE layers, along with select Attention layers. Distinct from the Nano model, the Super model incorporates Multi-Token Prediction (MTP) layers for faster text generation and improved quality, and it is trained using NVFP4 quantization to maximize compute efficiency.

The supported languages include: English, French, German, Italian, Japanese, Spanish, and Chinese.

Model Details
Architecture: Mixture of Experts (MoE) with Hybrid Transformer-Mamba Architecture
Supports Token Budget for providing optimal accuracy with minimum reasoning token generation
Accuracy: Leading accuracy on Artificial Analysis Intelligence Index
Model size: 120B with 12B active parameters
Context length: up to 1M
Modalities: Text-only
License
This model is provided under the NVIDIA Open Model License

I'm in the middle of downloading it right now; it needs more than 80 GB of RAM.

https://lmstudio.ai/models/nemotron-3-super
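For scale, the 83 GB minimum quoted above is in the right ballpark for a 120B-parameter model at roughly 4-bit precision, once you add KV-cache and runtime overhead on top of the weights. A quick back-of-envelope sketch (the bits-per-parameter figures here are my assumptions for typical gguf quants, not numbers from NVIDIA):

```python
# Rough sanity check of the ~83 GB requirement for a 120B-parameter MoE.
# Assumption: bits-per-parameter values approximate common gguf quant
# levels including metadata; actual sizes vary by quant scheme.

def approx_weight_gb(total_params: float, bits_per_param: float) -> float:
    """Approximate in-memory size of the weights alone, in GB."""
    return total_params * bits_per_param / 8 / 1e9

total = 120e9   # 120B total parameters (all experts)
active = 12e9   # 12B parameters active per token

print(f"~4-bit weights: {approx_weight_gb(total, 4.5):.1f} GB")  # weights only
print(f"~8-bit weights: {approx_weight_gb(total, 8.5):.1f} GB")
print(f"active fraction per token: {active / total:.0%}")
```

Note that with MoE, all 120B parameters must sit in RAM even though only 12B are used per token, so memory tracks the total count while per-token compute tracks the active count.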
