
Read this article: https://dl.acm.org/doi/10.1145/3712285.3759827 The training algorithms themselves are relatively simple (base pre-training, fine-tuning, RL); the critical part is the scale, i.e., the engineering infrastructure. The authors recommend a minimum of a 128-GPU cluster and many petabytes of training data.
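A minimal sketch of what that three-stage pipeline (base pre-training, supervised fine-tuning, RL) looks like in outline. All names here are illustrative stand-ins, not APIs from the paper; the point is just that the algorithmic skeleton is short, while each stage in practice runs across a large GPU cluster and huge datasets.

```python
# Illustrative pipeline skeleton: pre-train -> fine-tune -> RL alignment.
# "model" is a plain dict standing in for real model state; every name
# here is a hypothetical placeholder, not from the cited paper.

def pretrain(model, corpus):
    # Base training: next-token prediction over a massive corpus.
    model["stage"] = "pretrained"
    model["tokens_seen"] = len(corpus)
    return model

def fine_tune(model, examples):
    # Supervised fine-tuning on curated instruction/response pairs.
    model["stage"] = "fine-tuned"
    model["sft_examples"] = len(examples)
    return model

def rl_align(model, reward_fn, rollouts):
    # RL stage: score sampled rollouts with a reward function and
    # (in a real system) update the policy toward higher reward.
    model["stage"] = "rl-aligned"
    model["mean_reward"] = sum(map(reward_fn, rollouts)) / len(rollouts)
    return model

model = {}
model = pretrain(model, corpus=["tok"] * 1_000)
model = fine_tune(model, examples=["pair"] * 100)
model = rl_align(model, reward_fn=len, rollouts=["abc", "abcde"])
print(model["stage"])  # rl-aligned
```

The loop is a few dozen lines; the petabytes of data and the 128-GPU minimum the authors cite are what make it hard.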



