Out of the Box


TAG: OpenAI

  • 2018-03 On First-Order Meta-Learning Algorithms (2021/07/20 07:57, Hyunsoo Park)
  • 2018-10 Exploration by Random Network Distillation (2021/03/25 22:09, Hyunsoo Park)
  • 2021-01 Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft (2021/06/29 18:10, Hyunsoo Park)
  • 2021-01 Zero-Shot Text-to-Image Generation (2021/02/28 14:56, Hyunsoo Park)
  • 2021-02 Learning Transferable Visual Models From Natural Language Supervision (2021/07/17 14:21, Hyunsoo Park)
  • 2021-06 Extracting Training Data from Large Language Models (2021/07/17 06:57, Hyunsoo Park)
  • [GPT-2] Language Models are Unsupervised Multitask Learners (2020/07/21 17:16, Hyunsoo Park)
  • [GPT-3] Language Models are Few-Shot Learners (2020/07/23 14:41, Hyunsoo Park)
  • [GPT] Improving Language Understanding by Generative Pre-Training (2020/07/21 16:46, Hyunsoo Park)
  • Generative Pretraining from Pixels (2020/07/20 12:57, Hyunsoo Park)
  • How AI Training Scales / An Empirical Model of Large-Batch Training (2020/07/23 14:15, Hyunsoo Park)

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Noncommercial-Share Alike 4.0 International