From AlphaGo to ChatGPT to NVIDIA - AI Basics You're Now Too Embarrassed to Ask About, Explained Simply by a Working AI Developer

  • Published: 26 Nov 2024

Comments • 4

  • @gnyang
    @gnyang A month ago +1

    Thank you, I'll watch this with appreciation.

  • @ggyunol
    @ggyunol A month ago +2

    The subtitles cover the on-screen source citations in the video, so I'm posting the sources in this comment instead.

    • @ggyunol
      @ggyunol A month ago

      # Sources cited in the presentation
      “Lee Sedol (B) vs AlphaGo (W), 2016, Game 1”, Wesalius.
      commons.wikimedia.org/wiki/File:Lee_Sedol_%28B%29_vs_AlphaGo_%28W%29_-_Game_1.svg
      “DQN Breakout”, DeepMind.
      ruclips.net/video/TmPfTpjtdgg/видео.html
      “Deep Blue versus Kasparov, 1996, Game 1”, Morn.
      commons.wikimedia.org/wiki/File:Deep_Blue_versus_Kasparov,_1996,_Game_1.gif
      “Mastering the game of Go with deep neural networks and tree search”, Silver, D., Huang, A., Maddison, C. et al. Nature 529, 484-489 (2016).
      doi.org/10.1038/nature16961
      “Error rate history on ImageNet (showing best result per team and up to 10 entries per year)”, Gkrusze.
      en.wikipedia.org/wiki/ImageNet#/media/File:ImageNet_error_rate_history_(just_systems).svg
      “ImageNet: A Large-Scale Hierarchical Image Database”, Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li and Li Fei-Fei.
      www.image-net.org/challenges/LSVRC/2010/index.php
      “ImageNet Classification with Deep Convolutional Neural Networks”, Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton.
      papers.nips.cc/paper_files/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html
      “NVIDIA GTX 580 Running Crysis!”, Motherboards.org.
      ruclips.net/video/ifF1CVq5xUY/видео.html
      “SMI32-stained pyramidal neurons in cerebral cortex”, UC Regents Davis campus.
      en.wikipedia.org/wiki/Neuron#/media/File:Smi32neuron.jpg
      “Structure of a typical neuron”, Mauro Lanari.
      en.wikipedia.org/wiki/Neuron#/media/File:Neuron_Hand-tuned2.svg
      “Artificial neural network with layer coloring”, Glosser.ca.
      en.wikipedia.org/wiki/Neural_network_(machine_learning)#/media/File:Colored_neural_network.svg
      “A Neural Network Playground”, Daniel Smilkov, Shan Carter.
      playground.tensorflow.org
      “Agent57: Outperforming the human Atari benchmark”,
      Adrià Puigdomènech, Bilal Piot, Steven Kapturowski, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Charles Blundell.
      deepmind.google/discover/blog/agent57-outperforming-the-human-atari-benchmark/
      “Teachable Machine”, Google.
      teachablemachine.withgoogle.com/
      “Breaking the curse of small datasets in Machine Learning: Part 1”, Jyoti Prakash Maheswari.
      towardsdatascience.com/breaking-the-curse-of-small-datasets-in-machine-learning-part-1-36f28b0c044d
      “What Is Overfitting?”, MathWorks.
      www.mathworks.com/discovery/overfitting.html
      “CIFAR-10 and CIFAR-100 datasets”, Alex Krizhevsky.
      www.cs.toronto.edu/~kriz/cifar.html
      “Unit 1. Introduction to Deep Reinforcement Learning”, Thomas Simonini.
      huggingface.co/learn/deep-rl-course/unit1/hands-on
      “Unit 2. Introduction to Q-Learning”, Thomas Simonini.
      huggingface.co/learn/deep-rl-course/unit2/mc-vs-td
      “Training AI to Play Pokemon with Reinforcement Learning”, Peter Whidden.
      ruclips.net/video/DcYLT37ImBY/видео.html
      “t-SNE Map”, Cyril Diagne, Nicolas Barradeau & Simon Doury.
      experiments.withgoogle.com/t-sne-map
      “Deep Learning in a Nutshell: Core Concepts”, Tim Dettmers.
      developer.nvidia.com/blog/deep-learning-nutshell-core-concepts/
      “Computer Vision - What Is it and Why Does It Matter?”, NVIDIA.
      www.nvidia.com/en-us/glossary/computer-vision/
      “The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content”, Carl Franzen.
      venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/
      “Generation loss: FLIF vs WebP vs BPG vs JPEG”, Jon Sneyers.
      ruclips.net/video/_h5gC3EzlJg/видео.html
      “The Curse of Recursion: Training on Generated Data Makes Models Forget”, Ilia Shumailov.
      arxiv.org/pdf/2305.17493
      “Illustrated Guide to Transformer”, Hong Jing.
      jinglescode.github.io/2020/05/27/illustrated-guide-transformer/
      “Improving language understanding with unsupervised learning”, Alec Radford et al.
      openai.com/index/language-unsupervised/
      “How GPT3 Works - Visualizations and Animations”, Jay Alammar.
      jalammar.github.io/how-gpt3-works-visualizations-animations/
      “Naver sentiment movie corpus v1.0”, Lucy Park.
      github.com/e9t/nsmc/
      “Transformer Explainer”, Aeree Cho et al.
      poloclub.github.io/transformer-explainer/
      “Aligning language models to follow instructions”, Ryan Lowe, Jan Leike.
      openai.com/index/instruction-following/
      “Shoggoth with Smiley Face”, @anthrupad.
      x.com/anthrupad/status/1622349563922362368
      “Language Models are Few-Shot Learners”, Tom B. Brown et al.
      arxiv.org/abs/2005.14165
      “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”, Jason Wei et al.
      arxiv.org/abs/2201.11903
      “Large Language Models are Zero-Shot Reasoners”, Takeshi Kojima et al.
      arxiv.org/abs/2205.11916
      “Deep Neural Networks for YouTube Recommendations”, Paul Covington, Jay Adams, Emre Sargin.
      static.googleusercontent.com/media/research.google.com/ko//pubs/archive/45530.pdf

  • @푸른별-t6x
    @푸른별-t6x A month ago

    '-'b