
Understanding How Neural Review Works in Supermemo

  • Published: 2 Jul 2024
  • Support my work ko-fi.com/guillempalausalva
    Patreon: / pleasurable_learning
    Website: www.pleasurable-learning.com
    Schedule a tutoring session with me: pleasurable-learning.com/view...
    Dive into the world of Neural Review in Supermemo! Learn how to effectively use this feature to enhance your study sessions. We'll explore the importance of quitting previous reviews, navigating subsets, and optimizing your learning with practical tips and tricks. Perfect for anyone looking to get the most out of their Supermemo experience.
    Socials:
    / pleasurablelearning
    Instagram: / pleasurable.learning
    SM community discord: / discord
    All feedback is welcome!
    #Supermemo #incrementalreading

Comments • 10

  • @jeroboam4486
    @jeroboam4486 1 month ago +1

    Hi, don't take this the wrong way; it's an honest question: I wonder why you would encumber your memory with trivial data like wood smoke inside the house or schooling in Japan. I understand that you can learn and retain much more with SuperMemo, but I would have thought you'd be selective about what you will remember forever.
    It might be evident that I'm not using SM yet.

    • @PleasurableLearning
      @PleasurableLearning  1 month ago +1

      That is the beauty of free learning: you decide what is important or relevant. The examples you mentioned might be trivial for you, but not for me. I do care about schooling around the world. You can set priorities to reflect importance, and if the priority is too low you won't create the item in the first place.

    • @jeroboam4486
      @jeroboam4486 1 month ago +1

      @@PleasurableLearning Ok, got it, thanks. I wonder how selective one must be about the knowledge stored in SM. I read many, many articles every day; should I be ruthless about what makes it into items, or is it OK to make a lot? I'm quite a data hoarder, but I'm afraid my memory won't scale, especially with incremental reading.

    • @PleasurableLearning
      @PleasurableLearning  1 month ago +1

      @@jeroboam4486 That requires some real usage. With time you will realize whether you are shooting too short or too far, and you can fine-tune your importance threshold and what deserves an item. Usually, a beginner will tend to make items out of every single piece of information, which is not good.
      Your memory won't scale, but your retention of what you decided to remember will :)

    • @jeroboam4486
      @jeroboam4486 1 month ago

      @@PleasurableLearning thanks.

  • @dosgos
    @dosgos 1 month ago

    If one does a "neural review", is the algorithm changed for any items?

    • @PleasurableLearning
      @PleasurableLearning  1 month ago +1

      In the video I cover an example of an outstanding topic and item. You log the repetition while on neural review. What I didn't check is the case of getting the same element twice during the same day. I can experiment on a small collection, as there are more edge cases.

    • @dosgos
      @dosgos 1 month ago

      @@PleasurableLearning Got it!

    • @PleasurableLearning
      @PleasurableLearning  1 month ago +1

      @@dosgos I confirmed that an element cannot enter the neural queue twice on the same day, unless the user manually adds it again to the outstanding queue.

    • @dosgos
      @dosgos 1 month ago

      @@PleasurableLearning Thank you for the follow up