====== PyTorch ======
===== examples =====
* https://github.com/pytorch/examples
* https://github.com/bharathgs/Awesome-pytorch-list
* http://pytorch.org/tutorials/
* https://github.com/salesforce/matchbox/blob/master/examples/transformer.py
* https://github.com/rasbt/deeplearning-models
===== Optimization =====
* [[https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/szymon_migacz-pytorch-performance-tuning-guide.pdf|pytorch performance tuning guide, 2020-08]]
* https://www.youtube.com/watch?v=9mS1fIYj1So
* https://twitter.com/karpathy/status/1299921324333170689
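A few of the knobs the tuning guide above covers, gathered into one rough sketch (the toy model and data are placeholders, and a CUDA GPU is assumed):

import torch
import torch.nn as nn

torch.backends.cudnn.benchmark = True  # let cuDNN auto-tune conv algorithms for fixed input shapes

model = nn.Linear(20, 2).cuda()  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()
dataset = torch.utils.data.TensorDataset(torch.randn(1024, 20), torch.randint(0, 2, (1024,)))
loader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                     num_workers=4, pin_memory=True)  # workers + pinned memory for faster host->GPU copies

for x, y in loader:
    x = x.cuda(non_blocking=True)  # non_blocking overlaps the copy with compute when memory is pinned
    y = y.cuda(non_blocking=True)
    optimizer.zero_grad(set_to_none=True)  # cheaper than writing zeros into every .grad
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()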
===== DataLoader =====
* https://tanelp.github.io/posts/a-bug-that-plagues-thousands-of-open-source-ml-projects/
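The linked post is about DataLoader workers starting with identical NumPy RNG state, so "random" augmentations repeat across workers. A minimal sketch of the usual fix with worker_init_fn (the dataset is a placeholder):

import random
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_worker(worker_id):
    # derive a per-worker seed from the seed PyTorch already assigned to this worker
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

dataset = TensorDataset(torch.randn(256, 8))  # placeholder dataset
loader = DataLoader(dataset, batch_size=32, num_workers=4, worker_init_fn=seed_worker)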
===== PyTorch visualization =====
* https://github.com/szagoruyko/pytorchviz
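A minimal usage sketch for pytorchviz (assumes the torchviz package and the Graphviz binaries are installed; the tiny model is a placeholder):

import torch
from torchviz import make_dot

model = torch.nn.Linear(4, 2)
y = model(torch.randn(1, 4))
dot = make_dot(y, params=dict(model.named_parameters()))  # build a graphviz Digraph of the autograd graph
dot.render("graph", format="png")  # writes graph.png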
===== PyTorch + TensorBoard =====
# https://gist.github.com/avacariu/92db7b7f1fa1696279e9029f0237e2d4
#!/usr/bin/env python3.6
import time
import tensorflow as tf
summary_writer = tf.summary.FileWriter("./logdir")
for i in range(100_000):
    s = tf.Summary(value=[tf.Summary.Value(tag="the-value", simple_value=i/10)])
    summary_writer.add_summary(s, i)
    time.sleep(0.5)
# while the above is running, execute
# tensorboard --logdir ./logdir
# See https://github.com/lanpa/tensorboard-pytorch/blob/master/tensorboardX/summary.py
# for different kinds of summaries you can create (e.g. histograms)
* https://github.com/lanpa/tensorboard-pytorch
* https://github.com/TeamHG-Memex/tensorboard_logger
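The gist above goes through the TF1 summary API; PyTorch now bundles its own writer in torch.utils.tensorboard, so the same scalar logging looks like this:

import time
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("./logdir")
for i in range(100_000):
    writer.add_scalar("the-value", i / 10, global_step=i)
    time.sleep(0.5)
writer.close()
# while this runs, execute: tensorboard --logdir ./logdir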
===== Deployment =====
* https://medium.com/@nicolas.metallo/deploy-your-pytorch-model-to-production-f69460192217
* pytorch + Nodejs
* http://blog.christianperone.com/2018/10/pytorch-1-0-tracing-jit-and-libtorch-c-api-to-integrate-pytorch-into-nodejs/
===== Saving a model to MongoDB =====
col.insert_one(dict(state_dict=cloudpickle.dumps(net.state_dict())))  # pickle the state_dict and store it as a binary field
state_dict = cloudpickle.loads(col.find_one()['state_dict'])  # read it back; restore with net.load_state_dict(state_dict)
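A fuller sketch, assuming a local MongoDB reachable via pymongo (database, collection, and field names are arbitrary):

import cloudpickle
import pymongo
import torch

net = torch.nn.Linear(4, 2)  # placeholder model
col = pymongo.MongoClient("mongodb://localhost:27017")["models"]["state_dicts"]

# save
col.insert_one({"name": "net", "state_dict": cloudpickle.dumps(net.state_dict())})

# load
doc = col.find_one({"name": "net"})
net.load_state_dict(cloudpickle.loads(doc["state_dict"]))

Note that a single BSON document is capped at 16 MB, so large models need GridFS (or file storage) instead.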
===== Modifying gradients directly =====
loss.backward()
for p in model.parameters():
    p.grad *= C  # rewrite the gradient in place (C is whatever scale or operation you need) before the update
optimizer.step()
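Gradient clipping is a common special case of this pattern, and PyTorch ships it as torch.nn.utils.clip_grad_norm_, so the loop above can be replaced with:

loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # rescale grads so their global norm is <= 1.0
optimizer.step()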
===== Disabling gradients (freezing parameters) =====
for param in model.parameters():
    param.requires_grad = False  # freeze: exclude the parameter from autograd and from updates
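A typical use is fine-tuning: freeze the backbone, swap in a new head, and give the optimizer only the trainable parameters. A sketch assuming a torchvision ResNet (any module works the same way):

import torch
import torchvision

model = torchvision.models.resnet18()  # in practice, load pretrained weights here
for param in model.parameters():
    param.requires_grad = False  # freeze the whole backbone
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # fresh head; its params require grad by default

optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-3)

For inference only, wrapping the forward pass in with torch.no_grad(): is usually simpler than toggling requires_grad.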
===== Network visualization =====
* https://github.com/lutzroeder/netron
===== others =====
* https://github.com/uber/pyro
* https://www.kymat.io/
* https://github.com/rtqichen/torchdiffeq/tree/master/examples
* https://github.com/keunwoochoi/torchaudio-contrib
===== pytorch - graph =====
* https://github.com/facebookresearch/PyTorch-BigGraph
===== deployment =====
* https://towardsdatascience.com/how-to-pytorch-in-production-743cb6aac9d4
* https://github.com/pytorch/QNNPACK
===== Polyak averaging =====
soft_tau = 0.01  # interpolation factor: how quickly the target network tracks the online network
for target_param, param in zip(target_net.parameters(), net.parameters()):
    target_param.data.copy_(target_param.data * (1.0 - soft_tau) + param.data * soft_tau)
* https://towardsdatascience.com/soft-actor-critic-demystified-b8427df61665
===== GPU =====
* https://towardsdatascience.com/speed-up-your-algorithms-part-1-pytorch-56d8a4ae7051
===== PyTorch TVM =====
* https://tvm.ai/2019/05/30/pytorch-frontend
===== PyTorch C++ =====
* https://krshrimali.github.io/Announcing-PyTorch-CPP-Series/
===== options =====
torch.set_printoptions(linewidth=120)  # wider lines when printing tensors
torch.set_grad_enabled(True)           # globally toggle autograd (set False for inference-only code)
===== Implementing a custom layer =====
* https://towardsdatascience.com/how-to-build-your-own-pytorch-neural-network-layer-from-scratch-842144d623f6
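A minimal sketch of the usual pattern (subclass nn.Module, register nn.Parameter in __init__, implement forward); the dense layer below is just an illustration:

import math
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))

    def forward(self, x):
        return x @ self.weight.t() + self.bias

layer = MyLinear(8, 4)
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 4])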
===== RL =====
* RLkit: Reinforcement learning framework and algorithms implemented in PyTorch
* https://github.com/vitchyr/rlkit
===== scikit-learn -> PyTorch =====
* https://github.com/microsoft/hummingbird
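hummingbird compiles fitted scikit-learn models into tensor computations; a rough usage sketch (backend name and supported models should be checked against the repo):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

X, y = make_classification(n_samples=1000, n_features=20)
skl_model = RandomForestClassifier(n_estimators=10).fit(X, y)

hb_model = convert(skl_model, "pytorch")  # returns a PyTorch-backed model
pred = hb_model.predict(X)  # same predict() interface as scikit-learn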
===== Lazy Tensor =====
* http://www.kernel-operations.io/keops/index.html
{{tag>pytorch python}}
===== Computer Vision =====
* https://github.com/kornia/kornia
{{tag>pytorch CV vision kornia}}
===== PyTorch for ARM64 =====
* https://mathinf.com/pytorch/arm64/
{{tag>pytorch arm64 raspberry_pi}}