Is the famous black hole image "wrong"?

  • Published: 21 Nov 2024

Comments • 1.6K

  • @DrBecky
    @DrBecky  14 дней назад +40

    Go to ground.news/drbecky to stay fully informed with the latest Space and Science news. Save 50% off the Vantage plan through my link for unlimited access this month only.

    • @mattblack6736
      @mattblack6736 14 дней назад +1

      Hiii Dr Becky, can you clear something up for me (or anyone else for that matter). I hear the phrase "you'd have to travel faster than the speed of light to escape a black hole" pop up everywhere, such as in your video. But I also remember hearing from somewhere that the arrow of time flows towards the singularity past the event horizon. Does that not mean speed is irrelevant, as to escape a black hole you'd have to travel backwards in time?? OR have I gotten some wires crossed somewhere....

    • @osmosisjones4912
      @osmosisjones4912 14 дней назад +1

      Star Trek: The Original Series did do a parallel Earth where Rome never fell.
      And it later did alternate-dimension parallel universes.

    • @oortcloud8078
      @oortcloud8078 14 дней назад +2

      Thank you Dr Becky, I see it's the thoughtful "chin rub" with quizzical expression today! Very nice. 🤔

    • @oortcloud8078
      @oortcloud8078 14 дней назад +2

      ​​ Hiiii Matt, (Black for that Matter, very nice play on words there!) Nothing can travel faster than the speed of light. The only way to escape a Black hole is through Hawking Radiation, but it gets very complicated to type. Although, you are essentially correct. As we fall towards the event horizon, we are falling through three dimensions of space and one of time. Spacetime. However, once we cross the event horizon, then we are no longer falling through space and we're only falling through time. Of course I'm not an expert and may revise that later? Take care. 😅
      Edit: Or maybe another way to look at it is, after the event horizon. The time dimension flips to a space dimension, so we're just falling through space, without any time. Einstein makes my head hurt.
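
A standard way to make the "time and space swap roles" intuition in the reply above concrete is the Schwarzschild line element; this is textbook material, sketched here for reference, and not anything specific to the EHT analysis.

```latex
% Schwarzschild line element in Schwarzschild coordinates
ds^2 = -\left(1 - \frac{r_s}{r}\right) c^2\, dt^2
       + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2
       + r^2\, d\Omega^2,
\qquad r_s = \frac{2GM}{c^2}.
% For r > r_s the dt^2 term is negative (timelike) and the dr^2 term positive (spacelike).
% For r < r_s both signs flip: r becomes the timelike coordinate, so decreasing r
% is as unavoidable inside the horizon as moving forward in time is outside it.
```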

    • @cybermonkeys
      @cybermonkeys 14 дней назад +1

      Can Matt Black read that, and does it really matter that they're black holes?

  • @sirenbrian
    @sirenbrian 14 дней назад +742

    It's not a real Dr Becky video until she says "nought point nought nought nought nought nought ..." :)

    • @williamhoward7121
      @williamhoward7121 14 дней назад +14

      Way back in the 1960s in the USA, The Beverly Hillbillies show was on TV. Jethro loved to use the Nought Nought Nought as well. Talk about different ends of the spectrum!

    • @merkulus-n5v
      @merkulus-n5v 14 дней назад +6

      @@williamhoward7121 Noughts and gizintas!

    • @lanatrzczka
      @lanatrzczka 14 дней назад +4

      So stupid

    • @gary.h.turner
      @gary.h.turner 14 дней назад +6

      Has the EHT image analysis all come to nought? 0️⃣😁

    • @TheGhostGuitars
      @TheGhostGuitars 14 дней назад +9

      First we have Sagan's "Billions," now we have Dr. Becky's "Noughts."

  • @jcortese3300
    @jcortese3300 14 дней назад +317

    I have to admit, when I first saw the EHT image, my first thought was whether they'd assumed what a black hole "should" look like in the data processing. And indeed they had used AI trained at least partly on what they thought it "should" look like to create gap-filling data. Honestly, that worried me.
    I don't think the EHT team is entirely off just because things tend to look like what general relativity expects them to look like; the old "Oh look, Einstein's right" is one of the most common results in physics. But in an era when people act like being disagreed with is tantamount to being decapitated, I love seeing people inviting disagreement and disagreeing in a civilized way. I don't doubt that others will download the data and chime in over the coming years, and I look forward to it.

    • @pjrahal8351
      @pjrahal8351 14 дней назад +5

      Love your thoughts on this

    • @francis5518
      @francis5518 14 дней назад +13

      I share your concern (If I decipher blurry images of people with an AI fed with pictures of my grandfather, won't I unavoidably arrive at the conclusion that everyone in the original images was one of my grandfather's cousins??)

    • @antonystringfellow5152
      @antonystringfellow5152 14 дней назад +3

      If he can't rebut the team's rebuttal, that'll be pretty much the end of the argument.
      1. Dr. Miyoshi didn't use all the data. The team did.
      2. The ring image does not require any assumptions about the shape.
      In which case, for his argument to stand, Dr. Miyoshi or someone else will have to do this work again with all the data.

    • @oasntet
      @oasntet 14 дней назад +11

      @@antonystringfellow5152 Feed the algorithm data from something else (simulated or real), and see if rings pop out. I was skeptical of the original image for the same reasons; ML is notoriously unreliable so you need a correct negative result to ensure it isn't just hallucinating what you want it to see. I had assumed the paper covered their bases regarding the ML, but it doesn't sound like they did.

    • @DCourt_1
      @DCourt_1 14 дней назад +4

      Confirmation bias is the mouse who swears there is a cat nearby, and then spends all his time looking for the cat to prove he is right. The mouse will auto select out of the gene pool in order to prove itself right.
      A machine will only confirm its evaluations based on the data it was shown. If it was shown a cat, it will find a cat...
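
The worry voiced in this thread, that a template or prior can leak into gap-filled data, can be illustrated with a deliberately tiny toy, nothing like the actual EHT pipeline. Every name and number below is made up for the sketch: a 1-D "sky" is reconstructed from a handful of Fourier samples while being regularised toward a ring-like template.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
true_sky = np.zeros(n); true_sky[28:36] = 1.0               # the "real" source: a plain blob
template = np.zeros(n); template[20:24] = 1.0; template[40:44] = 1.0   # a ring-like prior

F = np.fft.fft(np.eye(n))                    # DFT matrix, so F @ x gives the Fourier samples of x
keep = rng.choice(n, size=10, replace=False)                # sparse "uv coverage": 10 of 64 samples
A = F[keep]
vis = A @ true_sky + 0.05 * (rng.normal(size=10) + 1j * rng.normal(size=10))

def reconstruct(lam):
    """Least-squares fit to the sparse samples, pulled toward the template with weight lam."""
    lhs = A.conj().T @ A + lam * np.eye(n)
    rhs = A.conj().T @ vis + lam * template
    return np.real(np.linalg.solve(lhs, rhs))

for lam in (0.001, 0.1, 10.0):
    img = reconstruct(lam)
    print(f"prior weight {lam:6.3f} -> correlation of result with the ring template: "
          f"{np.corrcoef(img, template)[0, 1]:+.2f}")
```

With the prior weight turned up, the output correlates strongly with the template even though the simulated source was a blob; the real question for any pipeline is where on that dial it sits and how that is controlled for.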

  • @MarcoRoepers
    @MarcoRoepers 14 дней назад +256

    I think what Miyoshi, Kato & Makino did was a necessary thing. Someone should challenge the findings of the EHT. It has to be put to the test.

    • @bobbygetsbanned6049
      @bobbygetsbanned6049 11 дней назад +16

      100%. If the results can't be reproduced, it shouldn't pass peer review. The overlay from 14:43-14:50 looks extremely damning; it's pretty hard to believe that's a coincidence. There's already been too much academic fraud because peer reviewers relied on the authors of the paper being the "experts", which allowed obviously fraudulent results to pass peer review. This could easily become another massive blunder if they rely on the EHT team to pass peer review with results that can't be reproduced because they are the experts.

    • @givemespace2742
      @givemespace2742 11 дней назад +2

      The discussion has to be had. I

    • @bjornfeuerbacher5514
      @bjornfeuerbacher5514 10 дней назад +7

      @@bobbygetsbanned6049 "If the results can't be reproduced it shouldn't pass peer review."
      Did you miss the fact that a reanalysis of the data was already done by several other groups, and that their findings agreed with the EHT?

    • @cobra6481
      @cobra6481 10 дней назад

      @@bjornfeuerbacher5514 ..my two cents.. cuz ya seem a bit snippy..
      Bobby is MOST LIKELY in accord with you as their comment reads like: "100%.. [adds as a general statement]... Any science that gets published by any person/team should be able to pass peer review. [continues thoughts on overlay now and maybe DOESn't realize others have interpreted the data the same way]...."

    • @DCourt_1
      @DCourt_1 10 дней назад +3

      @@cobra6481 I wouldn't take anything bjornfeuerbacher5514 too seriously. None of the roughly 20-ish comments by bjornfeuerbacher5514 are particularly insightful, and usually say "didn't you watch the video? Didn't you read the blog post?". Where you would say "snippy", I would posit that bjornfeuerbacher5514 is "pointless" and "unhinged".

  • @pjrahal8351
    @pjrahal8351 14 дней назад +63

    So much respect for you taking the time to say, "Hey, this isn't my research field, I'm NOT an expert in this," before saying you'd tend to lean towards the EHT team. 🙌🙌🙌 thank you!

    • @DeJach
      @DeJach 14 дней назад +2

      I liked that too. I did a synthetic aperture radar (SAR) project for my senior year in college over a decade ago, it touches on some of the same issues on signal integration and processing... Needless to say, the math is hard. I've forgotten most of it 🥺

    • @markjamesrodgers
      @markjamesrodgers 2 дня назад +1

      YES!!! Particularly when it would be so easy for her to have done the opposite and say “As a black hole expert…”

  • @SoonRaccoon
    @SoonRaccoon 14 дней назад +68

    In the documentary about EHT's process, one team created sets of test data, then other teams ran the data through their algorithms to reconstruct the image. One of the test images was a snowman, and they were able to successfully reconstruct it. It might be interesting to see what M24's algorithm does with that test data.
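
A blind test like the "snowman" one described above can be sketched in a few lines; this is a hypothetical toy harness, not the EHT procedure or M24's code. The idea is simply: simulate a source that is definitely not a ring, sample it as sparsely as a real uv coverage, reconstruct naively, and check whether a ring shows up anyway.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2

# Hypothetical test source: two overlapping blobs (snowman-ish), definitely not a ring
sky = np.exp(-(xx**2 + (yy - 6)**2) / 30) + 0.7 * np.exp(-(xx**2 + (yy + 8)**2) / 60)

# Hypothetical sparse sampling: keep only ~5% of the Fourier-plane points
mask = rng.random((n, n)) < 0.05
dirty = np.real(np.fft.ifft2(np.fft.fft2(sky) * mask))     # naive image from the gappy data

# Crude ring check: does an annulus end up brighter than the core?
r = np.hypot(xx, yy)
annulus = dirty[(r > 10) & (r < 16)].mean()
core = dirty[r < 6].mean()
print(f"annulus / core brightness: {annulus / core:.2f}  (values near or above 1 would be suspicious)")
```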

    • @yuvalne
      @yuvalne 14 дней назад

      +

    • @zeitgeistx5239
      @zeitgeistx5239 12 дней назад +3

      And in Google’s Alpha Go documentary you see them directly contradicting their own narrative about Alpha Go (that it’s not some general intelligence but just an algorithm that learned Go strategies). Just because they talked about it in a documentary doesn’t mean it’s true.

    • @jamesduncan578
      @jamesduncan578 11 дней назад +2

      A snowman, LOL

    • @uexp4
      @uexp4 7 дней назад

      @@zeitgeistx5239 This comment is tantamount to saying there is no such thing as AI because it's actually just algorithms. The concept of back-propagation that underpins a large section of AI research is an algorithm to assign weights and values to nodes. But this has created systems like Chat GPT and Alpha Go that are clearly more than just algorithms. No one has claimed they have achieved Artificial General Intelligence; you are creating a strawman and arguing in bad faith. You don't understand what you disagree with.

  • @BytebroUK
    @BytebroUK 14 дней назад +95

    Maybe it's just me, but I totally love your 'bullet points'. Classic presentation training from when I was much younger - "First tell them what you're going to tell them. Now tell them. Now tell them what you just told them".
    Oh, and I would REALLY like to see an Earth-sized disco ball - thank you for putting such an odd thing in my head.

    • @Penfold101
      @Penfold101 13 дней назад

      Downside - if it was solid it would have so much mass it would instantly collapse into a sphere…

    • @therealpbristow
      @therealpbristow 11 дней назад

      @@Penfold101 Not necessarily. There are stronger materials than rock.

    • @nobodyimportant7804
      @nobodyimportant7804 8 дней назад

      "First tell them what you're going to tell them,. Now tell them. Now tell them what you just told them"
      I hate this saying.
      Are people that stupid that they need to have a presentation in such a dumbed-down fashion?

    • @therealpbristow
      @therealpbristow 8 дней назад

      @@nobodyimportant7804 Depends on the context. Particularly with longer, information-dense presentations, it's often helpful just to get people's expectations of what they're going to get from it straightened out at the start, so that they aren't sitting there wondering "are they ever going to talk about X...?" which would distract them from what you actually *are* talking about. And then at the end, people often appreciate having a quick reminder of the key points of what they heard, so that even if some of the in-depth stuff went over their heads at least they can be confident they've learned *something*. I've left many a lecture with basically just the end summary written down in huge capitals, apart from a few notes about specific bits that I really wanted to capture, so that I could sit back and get the overall flow of connections between ideas while the speaker was talking, and then afterwards go away and read up the details at my own, slow-thinking pace. While doing that, I would recognise bits of what the speaker had said, and gradually a proper, complete understanding would come together in my head.

    • @BytebroUK
      @BytebroUK 7 дней назад

      @@nobodyimportant7804 When you're talking to a roomful of people, many of whom are probably not invested in listening too much, it works :)

  • @ForeFrontAstronomy
    @ForeFrontAstronomy 14 дней назад +116

    Dr. Miyoshi is also an expert in VLBI techniques. His work was highlighted in the Scientific Background on the Nobel Prize in Physics 2020: "Theoretical Foundation for Black Holes and the Supermassive Compact Object at the Galactic Centre" by the Nobel Committee for Physics (available on the Nobel Prize website). The ongoing debates on the accuracy of black hole imaging make this an exceptionally fascinating topic!

    • @antonystringfellow5152
      @antonystringfellow5152 14 дней назад +5

      But his analysis is incomplete, only using one night's data, and it makes a false claim about the assumption of the shape.
      Also, no mention of the polarization.
      That doesn't bode well for his side of the argument, does it?

    • @Doodelz02
      @Doodelz02 14 дней назад +1

      Lay person here. I too would like to understand how using one night's data is viable for refuting the original Author's conclusion. If the earth's spin yields more data from varying viewpoints, that seems intuitively logical to my brain (he says, bravely).

    • @JdeBP
      @JdeBP 14 дней назад +6

      @@antonystringfellow5152 The EHTC rebuttal makes no mention of "one night's data". Dr Becky said that, but the bullet point on screen at the time does not, nor does the actual web log post if one goes and reads it directly. _There is no_ "one night's data" rebuttal and _no claim that Miyoshi did that_ . The EHTC's actual rebuttal is that the EHTC produced a whole range of results with differing methodologies, and only 2% of them did not show a ring, whereas M24 produced one result with one methodology. The closest that it comes to any "one night" argument is _another_ bullet point that points out that if one observes Sagittarius A* for 12 hours, the fact that it can vary over the course of a mere 10 minutes needs to be taken into account, and M24 did not do that.

    • @donaldkasper8346
      @donaldkasper8346 11 дней назад

      Origin is probably (0,0) in the donut center, meaning this is probably an FFT data compression artifact.

    • @uenonopanda
      @uenonopanda 11 дней назад

      ML reconstruction could be faulty. ML may be biased to generate what they want to see.

  • @TimelyAbyss
    @TimelyAbyss 14 дней назад +372

    Filling in missing data with AI trained on what you ‘expect’ to be there seems pretty biased towards your expectations doesn’t it?

    • @olasek7972
      @olasek7972 14 дней назад +25

      you clearly missed the rebuttal to the „expect” argument

    • @dariusduesentrieb
      @dariusduesentrieb 14 дней назад +10

      You could just train it on all plausible versions of what to expect, including a blob instead of a ring.

    • @vast634
      @vast634 14 дней назад +32

      The first result looked like the image of a cat, but was then retrained.

    • @samuelgarrod8327
      @samuelgarrod8327 14 дней назад +26

      Artificial intelligence = Genuine stupidity.

    • @TheRealWormbo
      @TheRealWormbo 14 дней назад +30

      @@samuelgarrod8327 Please let's be careful here. While yes, no AI concept actually is "intelligent", there are very different AI concepts at work. The one that is genuinely stupid for most applications is "generative AI", a.k.a. Large Language Models (LLMs), where you issue a prompt and it derives the statistically most likely response from a massive model of inputs. I would *hope* that all this black hole imaging effort doesn't use *that* kind of technology.

  • @SnailingNinja
    @SnailingNinja 14 дней назад +138

    As a person working in machine learning for over a decade, I can confirm that this type of problem, which we call data imputation, is very complicated. It depends on the machine learning model you use to fill the gaps, and the proportion of usable data you have in the first place. In the TED talk snippet you showed, it looked to me as if the proportion of sky covered was pretty small compared with the full aperture. The smaller the proportion of original data, the more uncertainty you have in the values you fill in the gaps.
    Then you need to think about the location of the observatories used: where are they located? Do they cover a representative sample of space, or do their locations bias the data collection in some way? I'm not an astronomer, but the fact that we know there are clusters and superclusters of galaxies means to me that matter is not distributed randomly. If we combine that with non-random locations of observatories, the combined sample of space could be non-random, i.e. biased towards more donut-shaped structures. The machine learning model used would likely pick up on this when filling the gaps, leading to the artifacts that the Japanese team claims.
    Another tricky aspect is the choice of which gap to fill first, because the order plays a crucial role. To avoid this you need to repeat the gap filling process many times (multiple imputation), each time starting with a different gap and randomising the order. Then for each gap you average over all the runs. The question is, how many runs do you need? Again it depends on the proportion of gaps to raw data, and the total number of gaps. The number of runs required can be huge, and that costs money. If you stop too soon you may induce bias in the values you put in the gaps.
    Anyway, I thought you presented the topic very well even though it’s not quite your field of expertise!
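
The multiple-imputation procedure described above, filling gaps in a randomised order over many runs and then averaging, can be sketched as follows. The neighbour-averaging "model" is a hypothetical stand-in for whatever predictor one actually uses; the point is the repeat-and-average structure and the run-to-run spread as an uncertainty estimate.

```python
import numpy as np

def impute_once(x, missing, rng):
    """Fill gaps one at a time in a random order, each from its current neighbours.
    The neighbour average stands in for whatever model actually predicts a missing value."""
    x = x.copy()
    gaps = np.flatnonzero(missing)
    rng.shuffle(gaps)
    for i in gaps:
        left = x[i - 1] if i > 0 else x[i + 1]
        right = x[i + 1] if i < len(x) - 1 else x[i - 1]
        x[i] = 0.5 * (left + right) + rng.normal(scale=0.05)   # model prediction + its noise
    return x

t = np.linspace(0, 2 * np.pi, 50)
signal = np.sin(t)
missing = np.random.default_rng(2).random(t.size) < 0.4        # ~40% of samples lost
observed = np.where(missing, 0.0, signal)

runs = np.array([impute_once(observed, missing, np.random.default_rng(k)) for k in range(200)])
estimate = runs.mean(axis=0)            # average over many randomised imputation orders
spread = runs.std(axis=0)               # run-to-run disagreement = imputation uncertainty
print(f"largest per-sample spread across runs: {spread.max():.3f}")
print(f"worst-case error of the averaged estimate: {np.abs(estimate - signal).max():.3f}")
```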

    • @benjaminshropshire2900
      @benjaminshropshire2900 14 дней назад +11

      Yes; this. The AI component of the EHT is the least mature technology I know of in the process and thus the most likely source of errors.
      I wonder if they could get a real world data set, with a comparable level of completeness, detail and resolution, but of something that we directly image much better and see if the process produces a correct image?

    • @lorrinbarth1969
      @lorrinbarth1969 14 дней назад +10

      It's like they used a pinhole video camera to photograph a living room. They got a bit of green from the sofa, a bit of brown from the armchair and a bright flash from the window. Then they used a computer with a catalog of living room pictures to find a best match. There, that's the living room. Better to say this might be what the living room looks like.

    • @hansvanzutphen
      @hansvanzutphen 14 дней назад +14

      I am actually really surprised that they chose to use machine learning for this. Typically, there are algorithms that you can use to reconstruct missing data, if you make certain assumptions. But in that case those assumptions are well-defined. With AI, you typically don't know what the model is doing or why.

    • @stevewatson6839
      @stevewatson6839 14 дней назад +5

      And the difference between this and simply making shit up is what exactly?

    • @JveFQ
      @JveFQ 14 дней назад +10

      @@hansvanzutphen They used multiple different reconstruction techniques, of which only one used machine learning, with the others being more traditional methods. There were other contingencies which one can read about in the paper or their talks, but in short they were very careful with the use of AI.

  • @BruceKoerner
    @BruceKoerner 13 дней назад +91

    The key fact that comes out at me is that EHT is running at the limits of its resolution. Therefore any detail in the image may be an artifact. The frightening thing about this is that they tried to bridge that gap with machine learning, which coerces results into known patterns instead of admitting that it is unsure.

    • @defies4626
      @defies4626 10 дней назад +9

      Agreed. The last few years have made 'machine learning' a black mark against anything that uses it. It is too easy to let these models hallucinate, and impossible to prevent it.

    • @bjornfeuerbacher5514
      @bjornfeuerbacher5514 10 дней назад +5

      "The frightening thing about this is that they tried to bridge that gap with machine learning, which coerces results into known patterns instead of admitting that it is unsure."
      That is a _huge_ oversimplification of what they actually did do. Did you watch Becky's video? Did you read the blog post by the EHT team?

    • @boonheeliew2488
      @boonheeliew2488 10 дней назад

      in the first place, there is no light interference, therefore the interferometer does not exist. it is just the name used to fool all of us.

    • @beenaplumber8379
      @beenaplumber8379 9 дней назад +3

      "...instead of admitting that it is unsure." That's not a binary. Data can be biased toward known patterns, AND the authors can (and should) analyze (and usually quantify) the resulting uncertainty. Nothing in science is sure except the existence of uncertainty. In my field (neuroscience, now retired) that usually meant deriving p values from the appropriate statistical tests. In physics it seems to be more about sigma. I don't have a clue how they analyze radio telescope + machine learning data like this, but it's scientifically meaningless if they haven't analyzed their own uncertainties or addressed them in some way. I think the heart of this criticism and response are the assumptions each team is making. I have to say I think the EHTC seems unfortunately dismissive of the criticism against their work.
      I agree that "EHT is running at the limits of its resolution. Therefore any detail in the image may be an artifact." That's probably their greatest weakness, but even so they should be able to analyze the odds of artifact (amplified by AI) vs. truly representative imagery. They seem to have done that by using numerous different methods and independent analyses to verify the ring-like shape, but I get the impression that these methods are all novel, so what's the baseline for reliable validation other than selecting methods that yield a consistent result that makes sense to them, which might more accurately reflect a flawed assumption?

    • @epicridesandtours
      @epicridesandtours 9 дней назад

      @@defies4626 Statistics have been used to "fill in gaps" in scientific observations for a very long time. So first, a "valid" number of results (a "sample") must be gathered before coming to a valid conclusion in any study.
      Deciding what constitutes a valid sample becomes key.
      The correct statistical method for the test is also vital. What we have here is two teams arguing about the statistical methodology.

  • @patchvonbraun
    @patchvonbraun 14 дней назад +32

    NRAO holds a "synthesis imaging" summer school (or at least used to). The textbook they have for that course is, well, "dense", and something like 600 pages. Using interferometry for image synthesis is not for the faint of heart. I will note that similar techniques are used to produce images from CAT scans, and I think that in the very early days of radio interferometry and CAT scans there was a lot of "cross-pollination" going on.
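
One reason synthesis imaging fills a 600-page textbook is that the instrument's PSF, the "dirty beam", is set entirely by which (u,v) points the array happens to sample. A minimal sketch with made-up sampling arcs, not real EHT coverage:

```python
import numpy as np

# Toy aperture-synthesis sketch: the synthesised PSF ("dirty beam") is just the
# Fourier transform of whichever (u,v) points the array sampled.
n = 128
rng = np.random.default_rng(3)
uv_mask = np.zeros((n, n))
for radius in (10, 18, 30, 45):                  # hypothetical arcs of sampled baselines
    ang = rng.uniform(0, np.pi, 40)
    u = (n // 2 + radius * np.cos(ang)).astype(int)
    v = (n // 2 + radius * np.sin(ang)).astype(int)
    uv_mask[v, u] = 1
    uv_mask[n - v, n - u] = 1                    # conjugate points: V(-u,-v) = V*(u,v)

dirty_beam = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(uv_mask))))
centre = dirty_beam[n // 2, n // 2]
print(f"fraction of the uv plane sampled: {100 * uv_mask.mean():.1f}%")
print(f"beam response 20 pixels off-axis vs on-axis: {dirty_beam[n // 2, n // 2 + 20] / centre:.2f}")
```

The sparser the coverage, the uglier the sidelobes of that beam, and the more the deconvolution step has to work (and the more room there is to argue about artefacts).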

    • @ausgoogtube01
      @ausgoogtube01 11 дней назад

      Interferometry synthesis - not for the faint of heart.
      Gosh, there's a data point.

  • @RossFenton-co4sn
    @RossFenton-co4sn 14 дней назад +95

    About 30 years ago I was working on trying to use radar information to get images of aircraft for identification purposes. We found that what you call the PSF results in smearing: the data from a single input point is spread over many output points. And there is no way of reversing the process; the information is lost and no algorithm can recover it. On noisy pictures, image processing tends to give inconsistent results and has an alarming tendency to produce artefacts that were not in the original image. I suspect this is why they tried machine learning. But that cannot recover the lost data. In very crude terms, machine learning takes lots of data and keeps making guesses until it finds one that meets the fitness requirements. It can never tell you why it works or that it will work on new data. It is also very dependent on the data provided and the data set size. The images must be regarded as an educated guess at best.
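
The "smearing cannot simply be reversed" point is easy to demonstrate. Blurring is a convolution, which is formally invertible, but the inverse filter divides by numbers that are nearly zero, so even a whisper of noise explodes. A toy 1-D sketch with an assumed Gaussian PSF:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
sky = np.zeros(n); sky[100] = 1.0; sky[130] = 0.5           # two point sources
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)   # narrow Gaussian blur
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))                       # transfer function of the PSF

blurred = np.real(np.fft.ifft(np.fft.fft(sky) * H))
noisy = blurred + rng.normal(scale=1e-3, size=n)            # a whisper of measurement noise

def inverse_filter(obs):
    # "Undo" the convolution by dividing in the Fourier domain (no regularisation at all)
    return np.real(np.fft.ifft(np.fft.fft(obs) / H))

print(f"max error, noiseless data : {np.abs(inverse_filter(blurred) - sky).max():.1e}")
print(f"max error, 0.1% noise     : {np.abs(inverse_filter(noisy) - sky).max():.1e}")
```

This is why every practical scheme (CLEAN, maximum entropy, regularised maximum likelihood, machine learning) injects some prior information; the argument is over how much and how transparently.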

    • @WhiteGandalfs
      @WhiteGandalfs 14 дней назад +6

      Well: IF you HAPPEN TO HAVE a "PROVEN MODEL" of what produced the original data, you may well matrix-divide the raw image by the wave functions of the proven model to get a highly beautified and sharpened image of the originating object. If, on the other hand, you have NO "PROOF" AT ALL for the model you assume, then what you get is at best one of a multitude of possible hypotheses on what could have caused the original data.
      If you happen to actually test the generated images against the very basics of assumptions of the models you put in - for example having the black hole accretion disc imperatively being aligned with the galaxy disc -, you immediately are forced to dismiss those "images of black holes".

    • @HunsterMonter
      @HunsterMonter 14 дней назад +2

      The PSF affects the original image via convolution, which does have an inverse. Of course, you still have to be careful about noise, but it is in theory possible to get the (unique) original image back

    • @TheRealWormbo
      @TheRealWormbo 14 дней назад +1

      @@HunsterMonter Am I understanding correctly that the original image is essentially a very low resolution pixelated image? Something like maybe 5x5 pixels for the entire black hole, potentially with pixels missing?

    • @HunsterMonter
      @HunsterMonter 13 дней назад

      ​@@TheRealWormbo I don't know the exact resolution, but there were enough pixels to get a good image. The problem was that the "mirror" of the telescope was broken into tiny bits scattered around the globe, so there were missing spots in the image. This is why they had to use ML, to fill in the gaps and get a full picture. The PSF is something else that they had to account for. If you look at an image by Hubble or JWST, you will see spikes around stars. These are diffraction spikes. They look pretty on pictures, but if you are trying to do science, they get in the way. Every single telescope has a unique diffraction pattern, and mathematically, it is possible to correct for that effect and get the image without any diffraction spikes just with the geometry of the telescope.

    • @samcerulean1412
      @samcerulean1412 13 дней назад

      @@TheRealWormbo I don't think so, I think data is extracted from various images from different telescopes of various resolutions. Then that is fed into an image reconstruction method, not too dissimilar to that of photogrammetry, combined with AI.

  • @dinodinoulis923
    @dinodinoulis923 14 дней назад +27

    When I watched the original presentation on this, I did question whether the machine learning might have been fitting the data to the theoretical expectations of what the black hole was supposed to look like. I don’t have enough details, or understand the processes or mathematics used in the analysis, but I do have a background in machine learning so I know this is a possibility. I’m glad these researchers questioned the results and am very interested in hearing about the final verdict on this.

    • @busomite
      @busomite 11 дней назад +1

      I had similar thoughts when the first images were coming out, and I questioned the data fitting as well; this point about it being within the resolution error of the imaging wasn't presented. IIRC, if you watch the TED talk that Dr. Becky refers to, the researcher talks about exactly this and doing their best to make sure the ML wasn't giving them what they wanted to see. I'm still skeptical.

    • @daveherd6864
      @daveherd6864 10 дней назад +2

      Just remember there were a few million in funding at stake behind it

    • @busomite
      @busomite 10 дней назад +1

      @@daveherd6864 that’s true of the counter paper as well, pursuing science costs money. You can’t discount it as an influence, but money influences nearly all of our actions in society, so I wouldn’t put a huge weight on it. They would have been needing money if they’d been pursuing some other thing in science as well.

    • @samlevi4744
      @samlevi4744 День назад

      17:07 shows multiple citations of independent analyses confirming their results.

  • @grantdillon3420
    @grantdillon3420 11 дней назад +36

    Adversarial peer review is at the fucking soul of science. This is exactly what should happen!

    • @greggstrasser5791
      @greggstrasser5791 8 дней назад

      Much of what the cabal puts out seems like BS at first glance, but we're supposed to accept it. The dumber ones are the first to understand it.
      In a camouflage class in Basic Training, the instructor asked if anybody can see his tank parked in the woods way over yonder. There were dumbasses talking to people like they were talking to dumbasses. "You can't see it? It's right there!"
      The instructor was smiling.
      There was no tank in the woods.
      How many contemporaries Peer Reviewed Einstein?
      Hint: The number is less than 2 but more than -1.

    • @benwu7980
      @benwu7980 6 дней назад

      A true cornerstone of science.

    • @greggstrasser5791
      @greggstrasser5791 5 дней назад

      How many contemporaries peer reviewed Einstein?
      How many were men?
      Ya, baby... what's THAT tell ya?

  • @Hiddensecret9
    @Hiddensecret9 14 дней назад +9

    What’s especially encouraging is that the EHT data is available for others to analyze, inviting contributions from different perspectives. Over the years, as other scientists scrutinize and build on this data, we can hope to see a clearer, more comprehensive picture of black holes and their surrounding environments. Disagreement, in this case, isn’t just welcome-it’s essential for advancing our understanding of these fascinating cosmic phenomena.

    • @benwu7980
      @benwu7980 5 дней назад +1

      One of the rebuttal points is that they didn't release the uncalibrated raw data.
      To me, that should be the first point, not the fourth, since the other points could then be addressed independently.
      Using ML trained on what one would expect as output also seems a little off to me, for the reasons of bias stated.

    • @nil2k
      @nil2k 5 дней назад +1

      As someone who writes software for fun in retirement after decades of getting paid to write software, I looked for the raw data to process right after watching this video, and I was unable to find anything that even remotely looks like unprocessed raw data.

    • @nil2k
      @nil2k 5 дней назад

      I should have mentioned that on the plus side I did find the github repository EHT released.

    • @benwu7980
      @benwu7980 4 дня назад

      @@nil2k I'm not sure if uncalibrated is the same as unprocessed in this case, but neither are the raw data.
      I'd imagine that dealing with the raw data would be extremely difficult, since that would mean the raw data from each observatory before they do their own calibrations, then have to mesh all those into the 'disco ball' model.
      With those data sets, the other 3 points could be tackled separately.

  • @NicholasA231
    @NicholasA231 14 дней назад +30

    The number of people who can really assess this debate is obviously very small, and I'm not, even in the most infinitesimal way, capable of having even the dumbest of discussions with any of them.
    Disclaimer out of the way, I was fully expecting these results to be challenged, and would have bet money that their images aren't "right".
    Reason being that, after listening to the EHT team in many discussions, listening to their explanations of the process, and watching their documentary including the process of deciding on the accurate final images, I had the distinct impression of a culture that sent all kinds of signals to my intuition that they were...how to say it... high on their own supply? I felt like they demonstrated that they were extremely cautious in their analyses, but somehow it felt like box-checking. Like they had a sense of inevitability in the result. Like the human ego just isn't capable of being detached enough from something so precious to one's core identity. The sheer scale and effort of the project is just overwhelming. I saw that they recognized this and knew it was a massive danger - as any truly good scientist does - but I just got the sense that there wasn't anyone strong enough or oppositional enough to not walk down the predestined path to the predicted result.
    Once I saw that image processing included using what is basically very complex confirmation bias to derive the images (telling AI what it "should" look like) I just couldn't have a lot of confidence in it.
    I'm highly likely to be wrong, but my life experience has been that when I get these kinds of intuitions about expert positions on things, I end up being right way more often than I have any business being.
    Very curious to see how this plays out.

    • @mattscarf
      @mattscarf 10 дней назад

      Very good points. The risk of human biases and behaviours not being accounted for here is significant, with groupthink being one of the possibilities you highlighted.

  • @sk43999
    @sk43999 14 дней назад +9

    Back in the day, the most common way to make complex maps from radio interferometry data was with Hogbom Clean. But there was no way to estimate how reliable it was. Then came maximum entropy - I used it with both radio interferometric and optical data, but once again, how reliable was it? Now we have "machine learning", and the same questions keep getting repeated.
    North Cascades - I've been there. Very nice (and remote) park.
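
For readers who haven't met Hogbom CLEAN, the core loop is only a few lines. This is a bare-bones 1-D sketch for illustration, with a made-up sinc-shaped dirty beam, not production radio-astronomy code, and it says nothing about how reliable the result is, which is exactly the question raised above.

```python
import numpy as np

def hogbom_clean(dirty, dirty_beam, gain=0.1, n_iter=500, threshold=1e-3):
    """Bare-bones 1-D Hogbom CLEAN: repeatedly find the brightest residual pixel,
    subtract a scaled copy of the dirty beam centred there, and record the flux."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    centre = dirty_beam.argmax()
    for _ in range(n_iter):
        peak = np.abs(residual).argmax()
        if np.abs(residual[peak]) < threshold:
            break
        flux = gain * residual[peak]
        model[peak] += flux
        residual -= flux * np.roll(dirty_beam, peak - centre) / dirty_beam[centre]
    return model, residual

# usage on synthetic data: a sinc-like beam and two point sources
n = 128
beam = np.sinc((np.arange(n) - (n - 1) // 2) / 4.0)
sky = np.zeros(n); sky[60] = 1.0; sky[75] = 0.6
dirty = np.convolve(sky, beam, mode="same")
model, residual = hogbom_clean(dirty, beam)
print("recovered components at pixels:", np.flatnonzero(model > 0.1))
```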

    • @jozefcyran2589
      @jozefcyran2589 4 дня назад

      So HOW RELIABLE was any of the methods? You can't criticize the reliability of the methods if you never assessed them to be false. And there are arguments why they should be 'roughly good'

  • @mylesmacleod4306
    @mylesmacleod4306 14 дней назад +9

    I used to process seismographic data. Deconvolution is an issue in that field as well. It's an important concept. Is it big in your type of astronomical research? Perhaps you could do a video about that. Kudos to you for discussing methodology at all.

  • @alanwilson175
    @alanwilson175 14 дней назад +14

    Becky - The issue here is that the EHT resolution is at the ultimate limit for a small emitter like SGR A*. AI techniques can extend that resolution, but the results start to carry some risks, such as hallucinations. Both teams have to resolve that problem, and it is not really such a surprise that they have different results. It's kind of like the popular images of the "face on Mars" several years ago. The shadows in that Mars image made our human vision detect a "face", because we are naturally adapted to see faces, especially eyes, even if the actual image is distorted, blurred, and noisy. The face turned out to be an optical hallucination when better resolution images were available. In this case for SGR A*, I suspect we will have to get more data to resolve the image better. In the meantime, I have to place more trust in the larger team. More team members should be better at finding and correcting defects.

    • @tesseract_1982
      @tesseract_1982 14 дней назад +1

      Mars face: I particularly remember the random black dots from radiation, one of which mimicked a nostril very convincingly. 😅

    • @stargazer7644
      @stargazer7644 14 дней назад

      I'm not sure how you can classify an optical image taken with a camera with no manipulation as a "hallucination". None of this data was made up. The geography was there. The shadows were there. They happened to combine in such a way in that image to look like a face. If you were to image the same area again under the same conditions you'd get the same shadows and you'd still get an image that looks like a face.

    • @alanwilson175
      @alanwilson175 13 дней назад +5

      @@stargazer7644 The point I tried to make with the comparison was that the so-called face on mars was a hallucination by our neural networks for vision. Our neural networks for vision are biased in that way. It happens to work often enough that we accept it. The analogy is with the AI network made with computer models of artificial neural networks that analyzed the EHT data for SGR A*. The other part of the analogy is that the resolution of the face of mars was figured out later with more data. I suspect the same will occur with SGR A*.

    • @whitmckinley961
      @whitmckinley961 13 дней назад +1

      As to trust, I don’t distrust either group. Large and small teams both have benefits and weaknesses due to size.

    • @meslud
      @meslud 12 дней назад

      Is that true? I mean, if a lot of people want something to work out, because they worked on it for a good part of their career, they definitely will be biased.

  • @drmaybe7680
    @drmaybe7680 13 дней назад +5

    Thanks Dr Becky for an interesting presentation. I've worked now and then in radio interferometry and have a healthy respect for the difficulties of phase calibration, which are particularly severe in VLBI, when one doesn't have the luxury of having all the antennas directly wired up to a common phase standard. I'd love to have time to play with this data myself instead of what they presently pay me for, which is trudging with an Augean-sized shovel and broom through acres of other people's crap x-ray code. Wisely, with the Fourier-innocent public in mind, you omitted mention of this particular F word, but I don't have any popularity to worry about so I can mention for readers that what an interferometer measures or samples is the Fourier transform of what is on the sky. I'd be interested to fit directly in the Fourier plane to compare the likelihood of a ring-shaped versus a condensed object. I have to say also that I think a machine learning approach is quite wrong for this application. People throw ML at everything just now without stopping to consider whether such an approach suits the problem or not. It's not a magic wand and indeed it leaves one open to just such criticism as by Miyoshi et al, because sure you've generated your nice picture, but you can't exactly explain how, because it is all hidden in the ML black box.
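
Fitting models directly in the visibility plane, as suggested above, can be sketched like this. Everything here is a toy with assumed numbers (microarcsecond-scale sources observed on gigalambda baselines), but it shows why a thin ring, whose visibility follows a Bessel function J0 with deep nulls, is distinguishable from a compact Gaussian blob, whose visibility just decays smoothly.

```python
import numpy as np
from scipy.special import j0

uas = np.pi / 180 / 3600 / 1e6                     # one microarcsecond in radians

def ring_vis(q, diameter_uas):
    # visibility amplitude of an infinitesimally thin ring of the given diameter
    return np.abs(j0(np.pi * q * diameter_uas * uas))

def gauss_vis(q, fwhm_uas):
    # visibility amplitude of a circular Gaussian of the given FWHM
    return np.exp(-(np.pi * q * fwhm_uas * uas) ** 2 / (4 * np.log(2)))

rng = np.random.default_rng(6)
q = np.linspace(0.5e9, 8e9, 60)                    # baseline lengths in wavelengths
truth = ring_vis(q, 50.0)                          # pretend the sky really is a 50 uas ring
data = truth + rng.normal(scale=0.03, size=q.size)

def chi2(model_amp):
    return np.sum((data - model_amp) ** 2 / 0.03 ** 2)

best_ring = min((chi2(ring_vis(q, d)), d) for d in np.linspace(30, 70, 81))
best_gauss = min((chi2(gauss_vis(q, f)), f) for f in np.linspace(10, 80, 141))
print(f"best ring : diameter {best_ring[1]:.1f} uas, chi2 = {best_ring[0]:.1f}")
print(f"best gauss: FWHM     {best_gauss[1]:.1f} uas, chi2 = {best_gauss[0]:.1f}")
```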

  • @Pistolsatsean
    @Pistolsatsean 14 дней назад +12

    I did enjoy seeing this discussion play out.

  • @francoisscala417
    @francoisscala417 14 дней назад +57

    I have one simple question: have they tested their observation and treatment algorithm on normal stars? To check that normal stars do not produce a donut-shaped image as well...

    • @cawareyoudoin7379
      @cawareyoudoin7379 14 дней назад +4

      Exactly my thought.
      Well, since the SMBH's theoretical ring is so massive, maybe not a single star, but a cluster or a glowing dust cloud, or something else more comparable to the accretion disc.

    • @ifkekanrunning4768
      @ifkekanrunning4768 14 дней назад

      Yeah, taking the produced image and reversing the PSF, will that result in the raw data picked up by all sensors? I guess they have both defined and checked the PSF using nearby stars and comparing the sensor data with "normal" telescope image data.

    • @factChecker01
      @factChecker01 14 дней назад +3

      Good question. I believe that is how they "trained" their artificial intelligence part of the work. They looked at known objects and the associated combined telescope data to teach the algorithm.

    • @isaacyonemoto
      @isaacyonemoto 14 дней назад +13

      That's exactly what they did in the japanese paper (not with a real star but with simulated data), and it does make a donut. Fig. 10

    • @azimali322
      @azimali322 14 дней назад

      It's a simple question with a hard control: If you want to check it with a "normal" star then you have to apply these techniques to a star that has the same radio "brightness" as these black holes.
      I'm not an expert, but I have to imagine that those stars that have that kind of radio brightness probably exist, but those stars are so hard to separate from background radio noise that you would have to identify that normal star from other techniques. THEN you have to apply the same radio collection strategy as they did for the black hole (lots of nights of capturing data at all of the same telescopes) to rule whether you would generate a similar image as the black hole image that they generated, or a simple blur spot image. Probably not an easy experiment to replicate, but probably will need to be replicated to further prove the results of the original study should still stand.

  • @whitmckinley961
    @whitmckinley961 14 дней назад +20

    EHT’s responses are unsigned blog posts. Credit to Miyoshi, et al, for signing their rebuttal. Are anonymous responses standard in this academic community?
    It is also odd for EHT to claim they have provided detailed descriptions of their methods, while Patel’s paper notes the scripts for image postprocessing and generation are not available. The 2024 EHT response does nothing to enlighten us on their methods, choosing instead to critique Miyoshi. Those critiques may be fair, but EHT’s scripts and methods remain opaque.

    • @OriginalOmgCow
      @OriginalOmgCow 10 дней назад

      Unsigned can be assumed the entire team. It's not that unusual.

    • @whitmckinley961
      @whitmckinley961 10 дней назад

      Thank you. It is helpful to know the usual practice. The response certainly appears to be speaking for the entire team, but as you say, that’s nothing more than an assumption.

  • @dougstewart3681
    @dougstewart3681 14 дней назад +58

    When they used AI with simulated data to try and fill in the missing data, that is the point that set off red flags for me! The AI is no better than the data that it learns from. And if they use simulated data then it is only what the team wanted it to be! Not "real" data!

    • @cubeflinger
      @cubeflinger 11 дней назад +2

      Machine learning is not gen ai

    • @lloydgush
      @lloydgush 11 дней назад

      As I expected, they just introduced the conclusion in the premise.

  • @onetwothreeabc
    @onetwothreeabc 14 дней назад +3

    The EHT team has a huge financial interest in putting out a positive story. Their big size does not give them more credibility.

  • @jrs77
    @jrs77 14 дней назад +15

    I do astrophotography myself with off-the-shelf amateur equipment, and if the resolution isn't there, then the resolution isn't there to produce an image.
    So I have been hesitant about this method ever since I saw the first images back in 2019. You can't just say "our models do take it into account" and walk away. The smallest detail any telescope can resolve is easy enough to calculate, and if the event horizon is smaller than this theoretical minimum, then you simply have nothing to show, in my honest opinion.
    These images look like they should be right, but that's just an assumption based on our best models, not on actual data. The resolution just isn't there, and that's what Miyoshi et al. claim, correctly from my amateur astrophotography POV.
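
The back-of-envelope resolution argument can be made concrete with rounded public numbers (roughly 1.3 mm observing wavelength and an Earth-diameter baseline); treat the exact figures as approximations.

```python
import numpy as np

wavelength = 1.3e-3            # m, EHT observes around 230 GHz / 1.3 mm
baseline = 1.27e7              # m, roughly an Earth-diameter baseline
rad_to_uas = 180 / np.pi * 3600 * 1e6

resolution = wavelength / baseline * rad_to_uas            # ~ lambda / D criterion
print(f"nominal beam ~ {resolution:.0f} microarcseconds")  # about 21 uas

# Published ring diameters are ~42 uas (M87*) and ~52 uas (Sgr A*), i.e. only about twice
# the beam, which is why "is the ring real or a processing artefact?" is a fair question.
```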

    • @poruatokin
      @poruatokin 14 дней назад +1

      Agree, hence aperture fever being a real thing!!

    • @monochr0m
      @monochr0m 12 дней назад +1

      You do understand that photography is vastly, vastly different from interferometry right...?

    • @jrs77
      @jrs77 11 дней назад +1

      @@monochr0m Resolution is still resolution. Nothing changes there in how much detail you can resolve. Be it radiotelescopes, mirrors, refractors or whatever.
      Even if they had a telescope the size of Earth without the need for interferometry, they still would not have enough resolution to resolve an object the size of M87's black hole or Sgr A*. Simple as that.
      To circumvent this fundamental resolution problem they throw their models into the equation and "simulate" the end result based on the low-resolution data they actually gathered.

  • @patchvonbraun
    @patchvonbraun 14 дней назад +7

    I had to explain the "Airy Disc" and diffraction through a circular aperture to our student volunteer the other day. We were trying to make observations of 3C348 during the day, but the Sun was contaminating our side-lobes too much to succeed. Actual instruments diverge from the theoretical quite a bit, because real instruments aren't perfect. Incidentally, one of the versions of the CLEAN algorithm was invented by a former colleague of mine when we were both at Nortel, 17 years ago. David Steer and I filed a patent together (on some wireless security stuff--unrelated to his grad-school work).
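
For readers meeting the Airy disc for the first time, the pattern for a circular aperture and the origin of the famous 1.22 factor can be checked numerically; a small sketch using SciPy's Bessel functions:

```python
import numpy as np
from scipy.special import j1, jn_zeros

# The Airy pattern for a uniformly illuminated circular aperture of diameter D:
#   I(theta) = I0 * [2 J1(x) / x]^2,  with  x = pi * D * sin(theta) / lambda.
# Its first null sits at the first zero of J1, which is where the familiar 1.22 comes from.
first_zero = jn_zeros(1, 1)[0]            # ~3.8317
print(f"first null at theta = {first_zero / np.pi:.4f} * lambda / D")   # ~1.2197

x = np.linspace(1e-6, 10, 5)
print("relative intensity samples:", np.round((2 * j1(x) / x) ** 2, 4))
```

Real dishes, as noted above, never quite achieve this ideal response, which is one more thing the calibration has to soak up.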

    • @therealpbristow
      @therealpbristow 7 дней назад

      Nortel, 17 years ago? Which site? (I was at Harlow. =:o} )

    • @patchvonbraun
      @patchvonbraun 7 дней назад

      @@therealpbristow Mostly at the Carling, Ottawa site.

  • @DCourt_1
    @DCourt_1 14 дней назад +3

    Just my thoughts, but reporting the EHTC's rebuttal posted on their website is problematic. The Japanese group published their rebuttal in a peer reviewed journal. Only counter-rebuttals from the EHTC group that are likewise peer reviewed should be taken seriously, and not ad hoc posts from their website.

  • @adrianswriting
    @adrianswriting 14 дней назад +136

    The massive problem with any machine-learning algorithm is that you're training it to produce an expected result. Therefore, you should NEVER use machine-learning to produce an image of something hitherto unseen using noisy data, because all you're going to get is what you expected to see, not what's there. Or, to state the EHT's own comment, "ring-like structures are unambiguously recovered under a broad range of imaging assumptions". In other words, 'we definitely got what we expected to see'. As for EHT being more likely to be right because they put more time and effort in, well, when you put in that much time, effort and money, there's a HUGE pressure to produce something impressive.

    • @theguyfromsaturn
      @theguyfromsaturn 14 дней назад +22

      I agree. That was my feeling too. Those algorithms are "interpolation" tool. You should not use them for "extrapolation".

    • @cadams1607
      @cadams1607 14 дней назад +4

      They're programmed with the same inherent weaknesses we have. The biggest being confirmation bias.

    • @antonystringfellow5152
      @antonystringfellow5152 14 дней назад +4

      Did you forget about the polarization?

    • @dougyoud5944
      @dougyoud5944 14 дней назад +2

      It’s been a while since I watched the talk etc, but I thought they used training data primarily of non-black hole images; cats, bridges etc….

    • @adrianswriting
      @adrianswriting 14 дней назад +1

      @@antonystringfellow5152 I can't see how that would help, as it would suffer the same noise, distortion, and gaps in coverage as the radio data.

  • @GonzoTehGreat
    @GonzoTehGreat 7 дней назад

    Please take a moment to recognize that Becky put together this detailed video explanation of the disagreement, including an overview of the published work by both research teams, and uploaded it to HER channel, yet still had the INTEGRITY to point out that she's NOT an expert in radio astronomy interferometry (herself), so she can't offer an informed view.
    Massive kudos to her for doing this!
    IF ONLY all science communicators (and journalists) were as honest and humble...
    Becky you're exemplary! Please keep showing us how it should be done. ❤👍

  • @mykofreder1682
    @mykofreder1682 14 дней назад +7

    You use some process trained on models to edit your image, so it is no surprise the image ends up looking like the model. It is also odd there is no edge-on angle in either image; you can see the full center.

    • @tesseract_1982
      @tesseract_1982 14 дней назад +1

      Correction on your last point: the apparent angle can be explained by the way the light from the accretion disc gets bent directly around the black hole. Look at the artistic renderings/simulations to see what I mean - one can see what is directly behind the black hole, even if looking at it directly from the plane of the accretion disc.

  • @Shazbat5
    @Shazbat5 11 дней назад

    Your "what do you think, Becky" discussion was the best part of a very good video. Thanks for your effort!

  • @TricksterDaemon-jw9hi
    @TricksterDaemon-jw9hi 11 дней назад +4

    I love how groups studying the same phenomenon, coming up with differing solutions, are like WWF pro wrestlers talking smack about their rivals. But instead of power slams and suplexes, they use research and data, and instead of shouting their spit into a microphone about how they're gonna totally own, they are publishing research papers.
    I mean in the end, there is still professional respect, and everyone is playing by the rules. But in either case it is still a lot of fun to watch.

  • @KatarupaYT
    @KatarupaYT 14 дней назад +4

    I'm glad these images are coming under some more scrutiny now, I'm no expert but the whole methodology especially the use of machine learning always made the final result seem way too good to be true, and very much dependent on many possible biases that could be manipulated to produce a desirable result which is more recognisable to a layman as "black hole-y" and therefore more likely to become big mainstream news.

  • @NZRic001
    @NZRic001 14 дней назад +3

    NZ, happy for the update! Great breakdown on this...

  • @ReedCBowman
    @ReedCBowman 10 дней назад

    Yes! This kind of deep explanation of a controversy is some of the most valuable content on YouTube. It reminds me of my very favorite videos of yours, explaining the historical process of figuring out some of the things that are now considered basic to modern Astronomy (and I'd really love to see more of that - and see other people in other scientific fields emulate it). One of the best ways to really build lay understanding of these things.

  • @lordmuntague
    @lordmuntague 14 дней назад +4

    7:34 - Hang on... on the top of that hill on the left, surely that's a TIE Fighter?!

  • @SidheBySidhe
    @SidheBySidhe 8 дней назад

    For the first time, I understand the meaning and origin of an artifact. There are artifacts on x-rays and sometimes MRIs, I suppose, that can lead to diagnoses that are wrong. I did learn that, but I didn't understand how that could happen. Your explanation is making that very clear. Wow, you're a very good teacher. I thank you.

  • @ogshotglass9291
    @ogshotglass9291 14 дней назад +9

    I'm only on 2:22 in the video, and my photography brain has kicked in: in some cases, if you edit an image with one step being done in a different order, you will come out with your image being totally different. While this obviously is more complicated than just editing a photo, I imagine the concept could be applicable.
    Edit: at 19:07 in the video, EHTC mentioned my exact thought. For me, sometimes I have had this happen when the overall photo is dark to the point of being nearly black and has low contrast. You do one particular edit before another, and it will somehow pop with more contrast, while flipping the order does nothing to improve it.

  • @MateusAntonioBittencourt
    @MateusAntonioBittencourt 8 дней назад

    I'm skeptical of the EHT image, because of the AI model used to recreate the image. We humans are fallible and biased, and we pass this on to the AI models we train, so the team who was looking for the black hole and WANTED to get a good image of it also trained the AI model for it. Which makes it biased, since it's a model designed to create black hole images.
    I think if they had double-blinded the "experiment", with the people who designed the model not knowing what was going to be imaged, it would have resulted in a more faithful model.
    I'm glad other people are taking a look into this data, and I hope we can soon have a better understanding of it.

  • @renxva1593
    @renxva1593 14 дней назад +3

    so excited for a new video!! watching a new video by dr becky is probably one of the best things to do on a Thursday evening:)

  • @mikederas8530
    @mikederas8530 9 дней назад

    Great perspective on how the scientific method works, especially with big collaborations.

  • @picksalot1
    @picksalot1 14 дней назад +3

    Simple solution is for the original imaging team to provide the process they used to generate the image. If the image can't be generated by a "competent" independent group/team, then the original image is probably a fabrication. If the image can be generated, but violates the tolerances of the equipment and/or what can be deduced from the data, then the image is probably a fabrication.

  • @robertgaines-tulsa
    @robertgaines-tulsa 7 дней назад

    I love science because it decodes reality rather than just insisting on a belief. This back-and-forth collaboration is argumentation at its best. This is how we decode reality. It's so much better than just forming something in your imagination and declaring that to be what is. Even if we run into a disappointing failure, we can take comfort in knowing that it is wrong and continue to work at finding what is right.

  • @harrkev
    @harrkev 14 дней назад +34

    I remain skeptical about any AI-generated results. AI can be good at getting patterns out of the data, but the patterns have to be independently verified apart from the AI.
    AI might be good at inventing new drugs, but the results need to be verified.
    AI might be good at generating images, but I can get AI to generate any image that I want, apart from reality.

    • @renmaddox
      @renmaddox 14 дней назад +5

      My very passing understanding was that the algorithm wasn't guided to produce an image that matched the expected image, so the fact that it _did_ match is a mild sort of independent verification.
      ETA: If I give a generically-trained algorithm a blurry image of what I believe to be a Dalmatian (due to independent evidence other than the blurry image), and it produces an image of a Dalmatian, that feels meaningful. Could it be a coincidence? Certainly, but that doesn't seem particularly likely.

    • @cawareyoudoin7379
      @cawareyoudoin7379 14 дней назад +5

      I was much more inclined to believe the machine learning (I'm not calling it AI) result a few years back, when I understood less of how it actually works, and how unreliable it can be.

    • @keithnicholas
      @keithnicholas 14 дней назад +8

      It's incorrect to think of the machine learning as the same kind of image generation AI that's generally available. The principal algorithm they use is called PRIMO. They also use CNNs. The main thing here is really that the algorithms are designed more for reconstruction/recognition, not generation.

    • @haydon524
      @haydon524 14 дней назад

      ​@keithnicholas Do you know of anywhere I could read more about the ML specifically?

    • @keithnicholas
      @keithnicholas 14 дней назад

      @@haydon524 I responded with some links, but seems to be filtered out, you can look up the paper "The Image of the M87 Black Hole Reconstructed with PRIMO"

  • @drgmymail
    @drgmymail 12 дней назад

    My 10c: that we have the ability to get data to see anything at all is good enough for me, because it's better than we've ever had before.

  • @DarthStuticus
    @DarthStuticus 14 дней назад +4

    It was a toss-up, I think, between whether you or PBS Spacetime would get to this one first. Looks like you are the winner.

  • @Jonathan-ug9yu
    @Jonathan-ug9yu 9 дней назад

    Are you a teacher?
    It's exceptional how good you are at translating information into understanding

  • @antonystringfellow5152
    @antonystringfellow5152 14 дней назад +4

    Good work covering this story. There were a few details in there that I hadn't heard before.
    A little puzzled by some of the comments here though. I'm not sure these people watched the same video as me as they only seem aware of the fact that the image was enhanced using AI. It's like they never heard the rest of the story.
    Strange.

    • @Novarcharesk
      @Novarcharesk 14 дней назад +2

      Because that image was being sold as a photograph. Not something that AI assisted in creating.
      That is a massive difference.

    • @JdeBP
      @JdeBP 14 дней назад +3

      In fairness, even if you watched this whole video you won't have the whole story. When Dr Becky is giving that "one night's data" explanation at 18:42 that's not what the bullet point on screen is saying, nor what the actual web log post says if one goes and reads it firsthand. In fact that part of the EHTC's rebuttal is _not that at all_ , but rather that the EHTC produced a whole range of results with differing methodologies and only 2% didn't come up with a ring, whereas M24 only produced one result with one methodology.

    • @palladin9479
      @palladin9479 11 дней назад +1

      They didn't use AI to "enhance" the image; they used it to extrapolate data points from minuscule amounts of data, because there weren't enough observatories. AI is useful for interpolation, but terrible for extrapolation. You will always get what you expected to get, regardless of whether it's accurate.

  • @mr.bennett108
    @mr.bennett108 10 дней назад

    I work in computer science, and often we have to collect a LOT of benchmarking data for performance evaluation. But, for whatever reason, even AFTER handing over the data and charts, someone will say "but does it actually FEEL faster?" Because of what you said: some people DO just kind of "know" the data better. If you stare at data long enough, AND know what you're looking at, you can just start SEEING things. There is absolutely a bias in the crunching of that data once you "see" it, and I have had my ass handed to me many a time because I misread the data. Still, I think I will barely side with the Horizon group, not because of their data familiarity itself, but only because of the WAY they have leveraged that familiarity to uncover very special and unique science.

  • @2Burgers_1Pizza
    @2Burgers_1Pizza 14 дней назад +3

    The shapes look similar to me, save for one thing: could it be that the OG team clumped the upper region of the radio spectrum, so it just wasn't computed by the machine learning, leaving a "blank" result, whereas the other team didn't, making the central region appear brighter? That would make this all just a resolution issue.

  • @davidswinnard7565
    @davidswinnard7565 10 дней назад

    Interesting as always.
    Random aside: the North Cascades t-shirt. After living just across the (northern) border from that area for over 60 years, my wife and I just did a short road trip through the park. Definitely worth seeing. (Didn't get the t-shirt, as the park facilities were closed for the winter.)

  • @hllok
    @hllok 14 дней назад +2

    As a novice, something has always bothered me about these images. Both are from the perfect angle, a perfect right angle, to the ring around the event horizon. Like a photo of Saturn with its rings as a perfect circle around the planet. And we just happened to be at that perfect angle. Also, and more importantly: isn't that an IMPOSSIBLE angle for us to the Milky Way's supermassive black hole? Presumably, the ring around the black hole is in the same plane as the bulk of the galaxy.

    • @robertbutsch1802
      @robertbutsch1802 13 дней назад +1

      The rings around the black parts of the image simply represent severely gravitationally lensed radio light from behind the black hole. They do not represent accretion disks. The latter are not present in the images. This is such a common misunderstanding that I’m surprised science popularizers/influencers have not bothered to clarify it after all this time.

    • @declanwk1
      @declanwk1 13 дней назад

      @@robertbutsch1802 Surely, since it is the light emitted from the accretion disc that we are seeing, the rings do represent the accretion disc, albeit as severely distorted images of the disc.

    • @Mandrak789
      @Mandrak789 12 дней назад +1

      That part is probably not problematic, because light is heavily bent around the black hole in such a way that we always see the back of it as well. So it doesn't really matter at which angle we are looking; we will always see the accretion disk as a whole.

  • @jimmeade2976
    @jimmeade2976 12 дней назад

    I'm an engineer and I understand most technical topics, but I don't understand how telescopes scattered around the globe are essentially one big telescope, and can "see" things too small for each of the component telescopes to see. Let me use a simple analogy. Let's say the entire population of the United States looks up at the moon on the same night and tries to see the Apollo 11 Lunar Lander. That, in essence, makes one huge "telescope" but since none of the observers will see the lander (it's too small), neither will the huge "telescope".
    Dr Becky, perhaps you could produce a video explaining in detail how the "one huge telescope" concept works. We'd all appreciate that. Having said that you are not an expert in interferometry and huge radio telescopes, perhaps you could do a joint video with one of your colleagues who is.
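    For what it's worth, the rough idea behind the "one huge telescope" is that pairs of dishes record the same wavefront and correlate their signals, and each pair measures one Fourier component of the sky brightness on an angular scale of roughly the wavelength divided by the pair's separation (the baseline). The toy sketch below is only an illustration of that relation, with made-up numbers, not the EHT's actual calibration or imaging pipeline: it shows that the correlated "fringe" amplitude from two closely spaced sources only drops, i.e. the pair only becomes resolvable, once the baseline reaches thousands of kilometres at millimetre wavelengths.
    ```python
    import numpy as np

    # Toy 1-D sky: two equal point sources separated by delta_theta (radians).
    # Each telescope pair with baseline b (metres) measures one complex "visibility":
    #   V(b) = sum_k I_k * exp(-2j * pi * b * theta_k / wavelength)
    # (the van Cittert-Zernike relation in one dimension).
    wavelength = 1.3e-3                 # 1.3 mm, the EHT observing wavelength
    delta_theta = 40e-6 / 206265.0      # ~40 micro-arcseconds, converted to radians
    thetas = np.array([0.0, delta_theta])
    fluxes = np.array([1.0, 1.0])

    def visibility(baseline_m):
        """Complex correlation measured by a dish pair separated by baseline_m."""
        return np.sum(fluxes * np.exp(-2j * np.pi * baseline_m * thetas / wavelength))

    for b in [100.0, 100e3, 3000e3, 10000e3]:   # 100 m up to ~Earth-sized baselines
        frac = abs(visibility(b)) / fluxes.sum()
        print(f"baseline {b/1e3:>8.0f} km -> fringe amplitude {frac:.3f}")
    ```
    The catch is that a handful of dish pairs only sample a few of those Fourier components, which is why the image has to be reconstructed from sparse data rather than read off directly.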

  • @gordonhamilton7160
    @gordonhamilton7160 14 дней назад +3

    I was always suspicious of the follow-up image with the swirls.

  • @fwd79
    @fwd79 13 дней назад

    Not exactly "Crisis of Cosmology" but really intriguing, thank you Dr Becky for breaking it down, hopefully the image will be proven correct.

  • @robwalker4548
    @robwalker4548 14 дней назад +36

    Any process that involves filling in missing data with what you expect might be there should be a red light for anyone.

    • @RideAcrossTheRiver
      @RideAcrossTheRiver 14 дней назад +1

      Remember what happened to Geordi when he did that!

    • @BillericaBunnies
      @BillericaBunnies 14 дней назад

      Absolutely. It is data falsification, pure and simple.

    • @Veklim
      @Veklim 14 дней назад +15

      If that's your position then you should never trust a single thing you see ever again, because at its fundamental core this is EXACTLY how the brain interprets optical input and translates it into 'vision'. I do understand and (partially) agree with the sentiment, don't get me wrong, but I would be remiss if I failed to point out this fact about vision. Ultimately, what we need to refine our understanding and improve the models are independent data from other perspectives, distances and spectra, which is, alas, unlikely to arrive any time soon.

    • @MK-je7kz
      @MK-je7kz 14 дней назад

      @@Veklim This is a BS response. The brain is not a computer. Anyway, AI is famous for hallucinating, and its output depends on the data it was trained on. I'm inclined not to believe these images until the same process (same AI, same gaps) is proven correct by imaging known targets (forming solar systems, for example) and producing results that are very close to images made by other methods.

    • @greensteve9307
      @greensteve9307 14 дней назад

      LOL

  • @lukefuller284
    @lukefuller284 14 дней назад +1

    I applaud your communication of a very technical topic outside of your expertise. I'm an engineer who interfaces with many different disciplines, fumbling around in unfamiliar territory, so your admission of working outside your domain definitely resonated here :D

  • @5pac3man
    @5pac3man 14 дней назад +3

    SUPER INTERESTING! This was easy to understand. Thank you!

  • @seanmostert4213
    @seanmostert4213 12 дней назад

    The image from M87* is taken from almost directly above the black hole, due to its orientation from our perspective, while the image of Sagittarius A*, at the centre of our galaxy, was captured from a side-on perspective.
    Now look at both images side by side 0:29 (the left image is the top view and the right image is the side view) and you will see, with both perspectives, that it is not a sphere but a vortex with three regions of influence rotating about a centre point.
    This explains why General Relativity breaks down at the event horizon: our math dives off to infinity because the fabric of spacetime goes down a hole, so the linear measurements we take look like they are going to infinity. What's actually happening is that the fabric of space and time is going down a hole, heading towards infinity in both opposite directions from the centre of the hole. This also explains why black holes have jets on either side.
    Remember, our math is but a slice through a 3D object moving in a 3D way through a 3D space. If you slice up a tornado and do the math on one sliver of it, you won't understand it completely. When the straight line related to your equation reaches the edge of a hole, measurements go to infinity because they are falling down the hole; the line you are measuring is bending with space and time and is now pointing in a different direction. You think you are measuring a straight line, but the geometry of that straight line in reality is a curve that becomes a vortex.

  • @zriraum
    @zriraum 14 дней назад +5

    Weekly space getaway is here!

  • @mikeciul8599
    @mikeciul8599 12 дней назад

    This topic seems like the perfect opportunity for a collab with Dr. Fatima!

  • @ArtFreeman
    @ArtFreeman 14 дней назад +4

    This is very interesting. Thank you for explaining it.

  • @romado59
    @romado59 14 дней назад +2

    I have said this before, but will repeat it. To image the shadow, the diameter of the imaging device needs to be 1.1 times the diameter of the Earth. The number of individual devices is too few, being only five if my memory is correct. There were only five or seven clear days of observation; in the past I have seen 60 data stacks needed to get a good "image". Too much of the data collected was not used because it was corrupted, which will skew the data sets. AI learning is OK if you make sure you don't feed other AI learning back into the data. The EHT team used template/probability matching, which in itself leads to artifacts. Personally, a 200-person team seems more likely to err and rush to publish, maybe?

  • @HeeBeeGeeBee392
    @HeeBeeGeeBee392 14 дней назад +12

    That the original purported images might be artefacts of the analysis seems like a real possibility given how far the team was pushing the techniques. My own inclination would have been to use maximum entropy (or its dual, autoregression) analysis for image interpolation rather than AI, but I realise I'm forty years out of date on such things and I could be spouting nonsense. Having a variety of methods applied to the same data by other teams would seem one way to help resolve (sic) the issue, but "You're going to need a bigger baseline" is the message I take from this.

    • @slugface322
      @slugface322 14 дней назад +3

      @HeeBeeGeeBee392
      At least now there is no longer any debate regarding their existence.

    • @Andromedon777
      @Andromedon777 14 дней назад +1

      What is your intent in using (sic) in your paragraph? Just curious, as I don't see it much in speech and I'm not too familiar with it.

    • @benjaminshropshire2900
      @benjaminshropshire2900 14 дней назад +5

      @@Andromedon777 I suspect that "resolve" has an intentional double meaning that is being highlighted. "(sic)" is basically saying "pun 100% intended".

    • @slugface322
      @slugface322 14 дней назад

      @@Andromedon777
      He used it incorrectly.

    • @Andromedon777
      @Andromedon777 14 дней назад +1

      @@slugface322 When is it usually used? When quoting?

  • @zapfanzapfan
    @zapfanzapfan 12 дней назад

    Oh, good, an explanation of the controversy I can understand.
    What we really need is a longer baseline, maybe a couple of radio telescopes in high orbits.

  • @francis5518
    @francis5518 14 дней назад +2

    If they trained the algorithm to decode the data with images that represented an expected shape, doesn't that unavoidably bias how the data will be interpreted?? Should the image obtained be considered a model rather than a "picture"?

    • @williamschlosser
      @williamschlosser 10 дней назад +1

      It is not a picture or photo. You can look at real photos in Halton Arp's books, like "Seeing Red".

  • @charlesbradshaw3281
    @charlesbradshaw3281 10 дней назад

    Miyoshi, Kato and Makino 2022 make some very good points about the M87 EHT images. I did not read the equivalent Sag A* paper. As an old VLBI observer, their processing makes more sense to me, but I may be a little biased toward old school. I found the EHT team's rebuttals a little glib. However, the discussion is good and you made a nice presentation of the conflict.

  • @Jolielegal
    @Jolielegal 14 дней назад +5

    Great video. I love seeing these science disputes happen in real time. In science, conflicts are resolved through long discussions, analysis and arguments, not with emotions, intimidation and misinformation. I wish things were like this in all aspects of our society.

    • @456MrPeople
      @456MrPeople 14 дней назад +1

      Well the reality is that scientists are also humans and human biases and emotions can unknowingly influence data analysis that is thought to be objective. Although it is much less likely in circumstances like these, it should never be left off the table.

    • @tbird-z1r
      @tbird-z1r 14 дней назад

      No they're not. There's so much histrionics, play to emotion, lies and deception.

  • @hamag1973
    @hamag1973 11 дней назад +1

    Sky Scholar has also explained that picture as "not possible", or something like: it's not possible to measure and get an image the way they claimed they did.

    • @darklight2.1
      @darklight2.1 11 дней назад

      Pierre Marie Robitaille doesn't know the first thing about the complex imaging technology used in the black hole image.
      He also doesn't understand how atmospheric pressure works, thinks that the cosmic microwave background radiation comes from the Earth's oceans, that all of the data from the Parker Solar Probe (as well as any other research that contradicts his theory) was faked and the sun actually is made of liquid metal hydrogen.
      His doctorate is in zoology, not astrophysics and he worked on MRI technology until he was released from his position at Ohio State University.
      Since then he has not published a single paper on astrophysics in a peer-reviewed journal, and his audience consists primarily of science-deniers on YTube.
      He is, in fact, the textbook definition of a crackpot.
      People need to vet their sources better.

  • @JohnBayko
    @JohnBayko 14 дней назад +14

    When the M87 image was released, it seemed plausible to me because the accretion disk (and by implication the black hole rotation) was facing us, as was the galaxy. With Sagittarius A* I also expected it to be aligned with the galaxy, but again it was facing us. The explanation was that there’s nothing to force a black hole rotation to align with the surrounding galaxy, which is fair enough. But what are the odds that an otherwise random orientation would face us? Twice?

    • @JdeBP
      @JdeBP 14 дней назад +12

      Relativistic effects mean that you can see the front and back portions of the disc even when you are mostly edge on. It is not as simple as viewing Saturn's rings. The NASA JPL cartoon at 05:10 is a bit simplified, but notice that at most angles you can see the whole of the disc.

    • @declanwk1
      @declanwk1 13 дней назад +5

      the gravitational field is so strong near the black hole that light from the accretion disc is bent towards us whatever direction we are looking from. As far as I know we would see the doughnut shape from every angle.

    • @JohnBayko
      @JohnBayko 13 дней назад

      @@JdeBP you can see the back of the disk, but the shape and brightness changes. I’d expect lobes, with one much darker, like the newer paper suggests.

    • @InXLsisDeo
      @InXLsisDeo 12 дней назад +3

      It's not facing us. What you see is the "shadow" of the black hole, aka an image that is strongly distorted by relativistic effects.

  • @KiithnarasAshaa
    @KiithnarasAshaa 14 дней назад +2

    In all fairness, if the team that assembled the data says you have to process the data in exactly the same way in order to get the same result they did... something about that reflexively tells me it's not great science. The best science is verifiable, falsifiable, and repeatable. At the same time, processing the data in a way that doesn't make sense is of course going to produce results that don't mean anything or look wildly different.
    As a...um...well, I'm not a _professional_ physicist, but studying and understanding physics and the math of physics is one of my greater passions, sooo...can I still be a "physicist" even if I don't have a degree or relevant employment because of institutional and social obstacles? As whatever I am, I am highly appreciative of your professional analysis and thoughts.

  • @CanuckBeaver
    @CanuckBeaver 14 дней назад +3

    Excellent video where you cover both sides so fairly.

  • @Phapchamp
    @Phapchamp 14 дней назад +2

    The ML program they have been using, as well as the raw data they collected, is available to the public. The Japanese team just needs to use it and feed it false data to see if the image still turns out donut-shaped. If not, the EHT is just faking stuff for media recognition.

  • @SOOKIE42069
    @SOOKIE42069 14 дней назад +13

    I am a computer scientist and as far as I'm concerned, you can't call these things "images of black holes" if they were postprocessed via machine learning. Instead, they're simply a prediction of what a black hole *might look like* based on existing images (of which, remember, there are none for black holes). I have no doubt a true image of one of these black holes would look very similar to the Event Horizon images, but these aren't that. They're a snapshot of a simulation extrapolated from a low-resolution image of reality.

    • @spacemanmat
      @spacemanmat 13 дней назад

      Yeah, it really sounds like they simply don't have the data. It may turn out to be a reasonably accurate picture of a black hole, but I think they haven't actually managed to prove that it is.

  • @PBeringer
    @PBeringer 10 дней назад +1

    So ashamed that the 64m Murriyang Radio Telescope (formerly Parkes RT) and the 70m antenna at Tidbinbilla (part of the DSN but also used for GBRA) weren't part of the EHT collaboration. The 70m DSS-43 antenna is the only one on the planet capable of sending commands to Voyager 2, so possibly too frequently indisposed, but there are other capable antennae on that site. It's been four decades since we had a Commonwealth Government with any interest in funding science or even education properly, so maybe I shouldn't be so surprised that CSIRO and/or any Australian universities weren't involved in taking data for the EHT. :(

  • @speadskater
    @speadskater 14 дней назад +12

    This here is the scientific process, and it's beautiful.

    • @MijinLaw
      @MijinLaw 14 дней назад +1

      Yes I would have liked a bit more emphasis that both teams are doing good work here (not that she said the opposite): publishing all the data, then another team coming forward with a potential issue etc. If it turns out the original analysis was incorrect, that's fine, there was no deception here.

    • @tbird-z1r
      @tbird-z1r 14 дней назад +1

      The Japanese team are low creativity, and low ability.

    • @poruatokin
      @poruatokin 14 дней назад

      @@tbird-z1r .....because?

    • @tbird-z1r
      @tbird-z1r 14 дней назад

      @poruatokin It's hard to know if it's cultural or genetic, as there's important interplay there. As always with these questions it's a mixture of both. I don't think their "shame" culture helps.

    • @speadskater
      @speadskater 14 дней назад +1

      @@tbird-z1r your comment adds absolutely nothing to this discussion thread.

  • @mltamarlin
    @mltamarlin 10 дней назад

    I have to say, from my experience working with big teams, most of the time any single type of analysis is done by one or two people, and very few on the team understand exactly what was done. So I wouldn't take the size of the team as an indicator of anything.

  • @TheDanEdwards
    @TheDanEdwards 14 дней назад +4

    _Edutainment_ and _Sciencetainment_ can never replace actually learning topics, which takes a lot of work. And that is why we see masses of people get so easily confused by headlines - they have not put in the work.

  • @harrydarling4180
    @harrydarling4180 11 дней назад

    Excellent video Becky. More regular people like me need to understand how detailed and difficult the scientific process is.

  • @midasjones4384
    @midasjones4384 14 дней назад +48

    The thing about black holes is they're black and the thing about space is it's black, mostly. So it sort of sneaked up on me.

    • @slugface322
      @slugface322 14 дней назад +3

      It snuck up on you?

    • @K1ngfast24
      @K1ngfast24 14 дней назад +6

      Holly, Red Dwarf!

    • @bobothebob4716
      @bobothebob4716 14 дней назад +3

      Weirdly enough, I think it would be more accurate to say that space is mostly not black (unless you are near a black hole or something). It only appears black due to the exposure times of cameras or the sensitivity of our eyes.

    • @innerfield5481
      @innerfield5481 14 дней назад

      So is the black in space (the edge of the universe) the same black as a black hole? If not, then what's the difference?
      Serious question. If there is no difference then what …

    • @slugface322
      @slugface322 14 дней назад +3

      @bobothebob4716
      If yer eyes could see a spectrum from VHF to gamma you'd be: blinded by the light!

  • @jessicamorgan3073
    @jessicamorgan3073 11 дней назад

    Thanks for explaining this, Dr. Becky

  • @salange17
    @salange17 14 дней назад +5

    They've got doctorates in astrophysics and I don't, but if you tell me somebody took some data and ran it through an AI algorithm trained on simulations of what black holes looked like (rings) and the result matched those simulations even though you can't see the ring in the raw data... sounds like BS to me!

    • @cawareyoudoin7379
      @cawareyoudoin7379 14 дней назад

      Well, it wasn't just one algorithm; what they claim is that they trained it with several data sets: renditions of black holes, but also just general images, and something else I now forget.

  • @DooferHein
    @DooferHein 12 дней назад

    Quite apart from her likeable manner and her professional competence, I love her eyes!

  • @tbird-z1r
    @tbird-z1r 14 дней назад +11

    Video starts at 4:50

  • @jacksquiggle3238
    @jacksquiggle3238 8 дней назад

    Thank you for giving us your very much qualified 'leaning' on this issue. Thank you for explaining those qualifiers. Most of all, thanks for talking about psf deconvolution. That's a real blast from my past!
    A great video of a complex tangle of a subject.

  • @mostlymessingabout
    @mostlymessingabout 14 дней назад +5

    Well, of course it's wrong... it was never an "image"

  • @allanlees299
    @allanlees299 14 дней назад +1

    There are two issues with using AI to "fill in the gaps." The first issue is the simple one of: how many gaps are being "filled" compared to solid data from the radio telescopes? The worse the ratio, the less reliable the gap-filling. The second issue is that AI models are extremely sensitive to training data, hyper-parameter tuning, and learning reinforcement. As the original team hoped to image a black hole's event horizon, it's not beyond the bounds of possibility that unconsciously they nudged their AI model toward a state where it would indeed output such an image. AI models can be powerful tools, but we're still in the very early stages of understanding how to use these tools properly.

  • @torbjorn.b.g.larsson
    @torbjorn.b.g.larsson 14 дней назад +4

    Nice walk-through! I did react to the "one image" reconstruction, but I didn't catch that the same group had made a similar claim before. For me, the polarization data reconstruction was the clincher though: it has more structure and it is aligned with the ring structure.

  • @annmoore6678
    @annmoore6678 10 дней назад

    Thank you so much for that review of the current expert thinking on the two black holes appearing in those images. As a member of the general public who knows nought point nought-to-infinity about any of this, I am inclined to lean toward the consensus of the larger group of experts, at least until we learn more. It's reassuring when Dr. Becky says that she leans that way as well.

  • @BenWard29
    @BenWard29 9 дней назад +5

    Here's hoping we don't get a Sabine Hossenfelder video with a clickbait title. Some predictions... "Black Holes are B.S.", "Why Black Holes Aren't Science", "Black Holes are Failing", "Science is Dying Because of Ugly Black Holes"

  • @tubePEB
    @tubePEB 11 дней назад

    Well done, Becky. I'm not buying the PSF limitation argument, based on your report that they only analyzed a short duration of the data. If the full duration of the data is used, that enables "super resolution" effects to provide higher resolution than the PSF. However, it introduces a time ambiguity: whether the brightness is really a full ring or a circulating blob. I'm betting the EHT team has a way to minimize that ambiguity.

  • @christosvoskresye
    @christosvoskresye 14 дней назад +4

    Ah, yes: It's always better to replace one's own biases with the biases of an AI.
    At least if you are paid to do so.

  • @AHBdV
    @AHBdV 14 дней назад +1

    The PSF argument is incredibly strong. It's difficult to understand the intricacies of image processing, but no matter what processing you do, you are still limited by the PSF of your observing instrument. An image with a ring the size of the PSF is highly suspect! Indeed, that looks like an Airy disc.
    Moreover, nobody would ever claim to have actually resolved a structure when it is a ring with the diameter of the Airy disc.
    Would you have believed an image where the ring is 1 pixel wide and the hole in the middle is also only 1 pixel? That's pretty much what is happening here, if the PSF argument is correct!
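    To put rough numbers on that intuition (a back-of-the-envelope check, not the collaboration's own beam calculation): the diffraction limit of an Earth-sized aperture at the EHT's 1.3 mm observing wavelength comes out at roughly 20-25 micro-arcseconds, which is indeed about half the ~42 micro-arcsecond ring diameter reported for M87*.
    ```python
    import math

    wavelength = 1.3e-3          # EHT observing wavelength in metres (~230 GHz)
    baseline = 1.2742e7          # ~Earth diameter in metres (longest possible baseline)

    rad_to_uas = 180 / math.pi * 3600 * 1e6   # radians -> micro-arcseconds

    theta_simple = wavelength / baseline * rad_to_uas            # lambda / D
    theta_rayleigh = 1.22 * wavelength / baseline * rad_to_uas   # Rayleigh criterion

    print(f"lambda/D        ~ {theta_simple:.0f} micro-arcsec")
    print(f"1.22 * lambda/D ~ {theta_rayleigh:.0f} micro-arcsec")
    print("Reported M87* ring diameter ~ 42 micro-arcsec")
    ```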

  • @mickeydr
    @mickeydr 14 дней назад +4

    Another made-up image that was totally ridiculous was the one representing the vectors of the material spinning around the event horizon. Come on. That one was shameful.

    • @olasek7972
      @olasek7972 14 дней назад

      No, we don’t know if these images are made up, not yet

    • @williamschlosser
      @williamschlosser 10 дней назад

      Want to see real photos, instead of creations? Try Halton Arp's books, like "Seeing Red".

  • @borisborcic
    @borisborcic 12 дней назад

    Salient since day one is that the images possess a special feature : some relatively straightforward algorithm will compress them with extraordinary accuracy to incredibly little.
    Therefore the EHTC owes us well-reasoned, official numerical figures for the minute sizes of said littles.
    An appreciation for the meaning of their value would then come with apps to explore continuous* manifolds of same-little-sized images; by acting on controls initially tuned to the historical image.
    Ideally, the manifolds-generating algorithms (in §-1) would read off the good reasoning of the official figures (in §-2)
    I believe fair at once to anyway award the *Duchamp Pissoir Prize* to the *EHTC* while noting these two cases contrast like do our opposite poles.
    *Congratulations, EHTC!*

  • @petarswift5089
    @petarswift5089 14 дней назад +3

    As a Serb, I am skeptical that black holes exist at all

  • @chipmunk449
    @chipmunk449 14 дней назад +2

    Personal thought is that this particular subject and science in general has a phenomenal way of having rational discussion and debate. If the Japanese happen to be looking at the data wrong … boom! We have more scientists furthering their own knowledge in their chosen field and pushing us all forward with our collective knowledge of our universe ❤ Love it and cheers Becky for the video 😊

  • @qhershey2008
    @qhershey2008 14 дней назад +20

    So they train the AI on what they believe the data should be?

    • @john-or9cf
      @john-or9cf 14 дней назад +3

      No surprise, that’s what “AI” is, intelligent it ain’t. Just regurgitates what it’s correlated from human sources. Hmm, wonder what happens when it gets so arrogant to quote itself…

    • @lagrangewei
      @lagrangewei 14 дней назад +1

      What we can do is give the AI the result of incomplete data (basically, simulate the telescope backward) and let it iterate until it matches the original. So they are not training it based on what they believe the data are; they are letting the AI play a guessing game until it can guess fairly accurately. I assume somewhere in the paper they have some number for their confidence in the accuracy. But ultimately these systems are guesstimates; you can never reach 100%.
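      A cartoon of that guessing game (not PRIMO or the EHT pipeline; the array sizes, regulariser and step size here are all made-up, illustrative choices) is to fit an image so that its simulated, sparsely sampled Fourier transform matches the measured visibilities, with a simple regulariser standing in for whatever the data don't constrain:
      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 32

      # "True" sky: a crude ring, used only to synthesise fake measurements.
      y, x = np.mgrid[:n, :n] - n / 2
      true_sky = np.exp(-((np.hypot(x, y) - 8) ** 2) / 4)

      # A sparse interferometer only samples a few Fourier (u,v) points.
      mask = rng.random((n, n)) < 0.10          # keep ~10% of Fourier components
      measured_vis = np.fft.fft2(true_sky) * mask

      # Fit an image whose masked FFT matches the measured visibilities,
      # with a small L2 penalty standing in for a real regulariser.
      img = np.zeros((n, n))
      lam, step = 1e-3, 0.5 / n**2
      for _ in range(500):
          residual = (np.fft.fft2(img) - measured_vis) * mask
          grad = np.fft.ifft2(residual).real * n**2 + lam * img   # adjoint of masked FFT
          img -= step * grad

      print("data misfit:", np.linalg.norm((np.fft.fft2(img) - measured_vis) * mask))
      print("reconstruction vs truth (correlation):",
            np.corrcoef(img.ravel(), true_sky.ravel())[0, 1].round(3))
      ```
      The fitted image matches the data wherever there are measurements; how trustworthy it is everywhere else depends entirely on the regulariser and training choices, which is exactly what the two teams disagree about.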

    • @john-or9cf
      @john-or9cf 14 дней назад +3

      @ Agreed, the quality of the training data is critical. As I recall, the original team had several groups independently developing what they thought the image should look like, and the "best" one was selected…

    • @takanara7
      @takanara7 14 дней назад

      @@john-or9cf Yeah, all the teams came up with basically the same image. Also, they trained the AI not just on images of black holes but also on random images off the internet and stuff, so their algorithm would reconstruct images of, like, cats or whatever if they were in space and glowing in radio light... supposedly.

    • @lurker668
      @lurker668 14 дней назад

      @@john-or9cf There is no AI. All these "AI" slogans are marketing to make money on algorithms. Sell everything, call everything AI. People are dumb; that's why it works.