====== Distributed Computing ======

  * [[https://arxiv.org/pdf/2001.12004.pdf|Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks, 2020-02]]
    * OpenAI, NeuralMMO
  * [[https://arxiv.org/pdf/1912.06680.pdf|Dota 2 with Large Scale Deep Reinforcement Learning, 2019-12]]
    * OpenAI, OpenAI Five, Dota
  * [[https://medium.com/daangn/pytorch-multi-gpu-%ED%95%99%EC%8A%B5-%EC%A0%9C%EB%8C%80%EB%A1%9C-%ED%95%98%EA%B8%B0-27270617936b|Doing PyTorch Multi-GPU Training Properly - Daangn]]
  * http://mpi4py.scipy.org/docs/usrman/index.html
  * https://arrow.apache.org/
  * https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html
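The tools above (MPI, Ray) all revolve around the same scatter/gather pattern: fan independent units of work out to workers, then collect the results. A minimal single-machine sketch of that pattern, using only the Python standard library (the worker function is a hypothetical stand-in for any CPU-bound task, e.g. one environment rollout in distributed RL):

```python
# Scatter/gather-style work distribution on one machine with the stdlib.
# Multi-node setups would use mpi4py (MPI_Scatter/MPI_Gather) or Ray instead;
# simulate_rollout is a made-up stand-in for an expensive, independent task.
from concurrent.futures import ProcessPoolExecutor


def simulate_rollout(seed: int) -> int:
    # Stand-in for an expensive, independent unit of work.
    total = 0
    for i in range(1000):
        total += (seed * 31 + i) % 97
    return total


def run_parallel(seeds, max_workers=4):
    # Scatter: each seed goes to a worker process.
    # Gather: pool.map returns results in submission order.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(simulate_rollout, seeds))


if __name__ == "__main__":
    results = run_parallel(range(8))
    print(len(results))
```

The same structure maps directly onto `MPI.COMM_WORLD.scatter`/`gather` in mpi4py or remote tasks in Ray; only the transport changes.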

===== Distributed SGD =====

  * [[https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/|ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters]]
    * Microsoft, DeepSpeed, PyTorch, Language Model
  * [[https://devblogs.nvidia.com/fast-multi-gpu-collectives-nccl/|NCCL]]
    * Nvidia
  * https://www.fast.ai/2018/08/10/fastai-diu-imagenet/
  * [[http://seba1511.net/dist_blog/#ref-gorila|An Introduction to Distributed Deep Learning, 2016-12]]

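The common core of the systems above is synchronous data-parallel SGD: each worker computes a gradient on its own data shard, the gradients are averaged across workers (the allreduce step that NCCL performs over GPUs, and that ZeRO/DeepSpeed optimize further), and every worker applies the identical update. A runnable sketch with the communication simulated in one process on a toy least-squares problem (all names and the problem setup are illustrative, not from any of the linked systems):

```python
# Synchronous data-parallel SGD, simulated in-process with NumPy.
# The np.mean over per-shard gradients plays the role of allreduce.
import numpy as np


def grad(w, X, y):
    # Gradient of the mean squared error 0.5/n * ||Xw - y||^2 on one shard.
    return X.T @ (X @ w - y) / len(y)


def data_parallel_sgd(shards, w, lr=0.1, steps=200):
    for _ in range(steps):
        # Each worker computes a local gradient on its own shard...
        grads = [grad(w, X, y) for X, y in shards]
        # ...then the gradients are averaged (the allreduce step),
        # so every worker applies the same update.
        g = np.mean(grads, axis=0)
        w = w - lr * g
    return w


rng = np.random.default_rng(0)
w_true = np.array([2.0, -3.0])
X = rng.normal(size=(64, 2))
y = X @ w_true
# Split the data evenly across 4 simulated workers.
shards = [(X[i::4], y[i::4]) for i in range(4)]
w = data_parallel_sgd(shards, np.zeros(2))
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, which is why synchronous data parallelism converges like ordinary SGD with a larger batch.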
===== Streaming processing =====

  * https://github.com/robinhood/faust
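Faust's core pattern is an agent that consumes a stream of records from a Kafka topic and folds them into a keyed table. A reduced, broker-free sketch of that pattern in plain asyncio (the coroutine and dict are stand-ins for faust's agent and Table; the event data is made up):

```python
# Stream processing reduced to its essentials: consume an async stream of
# (key, value) records and maintain a running aggregate per key.
# Faust runs this same shape as an agent over a Kafka topic with a Table.
import asyncio
from collections import defaultdict


async def event_stream(events):
    # Stand-in for a Kafka topic consumer: yields (key, value) records.
    for record in events:
        yield record
        await asyncio.sleep(0)  # yield control, as a real consumer would


async def count_by_key(stream):
    # The "agent": fold the stream into per-key counts (the Table's role).
    counts = defaultdict(int)
    async for key, value in stream:
        counts[key] += value
    return dict(counts)


events = [("clicks", 1), ("views", 1), ("clicks", 2)]
counts = asyncio.run(count_by_key(event_stream(events)))
# counts == {"clicks": 3, "views": 1}
```

In faust the table is additionally backed by a changelog topic, so the aggregate survives restarts and can be partitioned across workers.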