TinyTorch, close to the metal #1080
Replies: 1 comment 9 replies
Very cool @sueszli! Reimplementing autograd in C is a great way to really internalize what's going on at the systems level. Memory arenas, contiguous allocation, manual lifetime management — you're confronting decisions that higher-level frameworks hide. Thanks for sharing this. I have actually seen yours and it is nice! and also I do think I replied in the past 😓 but can't remember 😅 Anwyays as you know -- If you're interested, we also have TinyTorch at tinytorch.ai as the hands-on companion to the textbook, plus hardware kits at mlsysbook.ai/kits (Arduino, Raspberry Pi, Seeed). We're also working on interactive labs that should be out soon. Would love your feedback given your low-level systems perspective. |
After working through the TinyTorch book, I reimplemented the core ideas and exercises in C to force a lower-level understanding of autograd and ML system mechanics.
The result is a small PyTorch-like framework built from scratch in C, including tensor ops and a reverse-mode autograd engine. The project was primarily an exercise in understanding design tradeoffs, memory layout (e.g. contiguous allocation, memory arenas), and execution at the systems level.
Sharing it here in case it is useful or interesting to others working through TinyTorch 🌸 ✨
Thanks to Prof. Vijay Janapa Reddi and the TinyTorch community for the material and inspiration!
Yahya