Rethinking Research Data | Kristin Briney | TEDxUWMilwaukee

  • Published: Nov 9, 2015
  • The United States spends billions of dollars every year to publicly support research that has resulted in critical innovations and new technologies. Unfortunately, the outcome of this work, the published article, only provides the story of the research and not the actual research itself. This often results in the publication of irreproducible studies or even falsified findings, and it requires significant resources to discern the good research from the bad. There is a way to improve this process, however, and that is to publish both the article and the data supporting the research. Shared data helps researchers identify irreproducible results. Additionally, shared data can be reused in new ways to generate new innovations and technologies. We need researchers to “React Differently” with respect to their data to make the research process more efficient, transparent, and accountable to the public that funds them.
    Kristin Briney is a Data Services Librarian at the University of Wisconsin-Milwaukee. She has a PhD in physical chemistry and a Masters in library and information studies, and currently works to help researchers manage their data better. She is the author of “Data Management for Researchers” and regularly blogs about data best practices at dataabinitio.com.
    This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at ted.com/tedx

Comments • 6

  • @TheAaron2442
    @TheAaron2442 5 years ago +8

    I love this idea, and I couldn't agree more with her. Data must be published alongside articles, or at least with references to where it can be easily obtained. Humanity needs this.
    I can sympathize with the data hoarders, though. Data is costly. How do we recompense the research entities that spent the money on gathering it? I foresee competitors that would just snap up the freely available data, run it through better/different algorithms, and ultimately publish their own findings at little to no cost. Great talk.

    • @ZeroBene
      @ZeroBene 4 years ago

      I agree with you! A very important and efficient step. In the social sciences we need to be careful, though: because so much data is available and can be combined, even de-identified research data can be re-identified. People might not want to risk their survey data becoming available and being linked to their names (big data usage, companies, regimes, ...), so many people wouldn't participate and the data would thus be far more biased. How can we avoid this? Only grant access to data to institutional researchers with a certain level of trustworthiness and confidentiality?

  • @tubate20092
    @tubate20092 2 years ago

    Hmm, maybe a UN-style institution that secures and governs the usage and storage of data?
    It could at least be something that would enforce regulations and standards
    while maintaining and controlling reasonable access to this data.

  • @venkatasivagabbita788
    @venkatasivagabbita788 3 years ago

    Did you ever consider that data can be reverse-engineered to fit the article?

  • @venkatasivagabbita788
    @venkatasivagabbita788 3 years ago

    I know. I am a researcher. I hate the publish-or-perish game that institutions of all shapes, sizes, and reputations play. They are responsible for the malaise. There is no point in differentiating between research publications when almost all of them are debatable and often wrong. There is no such thing as "more wrong." All research, the way it is currently carried out (which is the way the journals expect you to publish it), consists of completely questionable opinions, called hypotheses, posing as rigorously established truths. It is a very serious problem.

  • @venkatasivagabbita788
    @venkatasivagabbita788 3 years ago

    You are talking about deliberate fraud. How about wilful ignorance?