Go to ground.news/drbecky to stay fully informed with the latest Space and Science news. Save 50% off the Vantage plan through my link for unlimited access this month only.
Hiii Dr Becky, can you clear something up for me (or anyone else for that matter). I hear the phrase "you'd have to travel faster than the speed of light to escape a black hole" pop up everywhere, such as in your video. But I also remember hearing from somewhere that the arrow of time flows towards the singularity past the event horizon. Does that not mean speed is irrelevant, as to escape a black hole you'd have to travel backwards in time?? OR have I gotten some wires crossed somewhere....
Hiiii Matt, (Black for that Matter, very nice play on words there!) Nothing can travel faster than the speed of light. The only way to escape a black hole is through Hawking radiation, but it gets very complicated to type out. Although, you are essentially correct. As we fall towards the event horizon, we are falling through three dimensions of space and one of time. Spacetime. However, once we cross the event horizon, we are no longer falling through space and we're only falling through time. Of course I'm not an expert and may revise that later. Take care. 😅 Edit: Or maybe another way to look at it is that after the event horizon, the time dimension flips to a space dimension, so we're just falling through space, without any time. Einstein makes my head hurt.
Way back in the 1960s in the USA, The Beverly Hillbillies show was on TV. Jethro loved to use the Nought Nought Nought as well. Talk about different ends of the spectrum!
I have to admit, when I first saw the EHT image, my first thought was whether they'd assumed what a black hole "should" look like in the data processing. And indeed they had used AI trained at least partly on what they thought it "should" look like to create gap-filling data. Honestly, that worried me. I don't think the EHT team is entirely off just because things tend to look like what general relativity expects them to look like; the old "Oh look, Einstein's right" is one of the most common results in physics. But in an era when people act like being disagreed with is tantamount to being decapitated, I love seeing people inviting disagreement and disagreeing in a civilized way. I don't doubt that others will download the data and chime in over the coming years, and I look forward to it.
I share your concern (if I decipher blurry images of people with an AI fed with pictures of my grandfather, won't I unavoidably arrive at the conclusion that everyone in the original images was my grandfather's cousin??)
If he can't rebut the team's rebuttal, that'll be pretty much the end of the argument. 1. Dr. Miyoshi didn't use all the data. The team did. 2. The ring image does not require any assumptions about the shape. In which case, for his argument to stand, Dr. Miyoshi or someone else will have to do this work again with all the data.
@@antonystringfellow5152 Feed the algorithm data from something else (simulated or real), and see if rings pop out. I was skeptical of the original image for the same reasons; ML is notoriously unreliable so you need a correct negative result to ensure it isn't just hallucinating what you want it to see. I had assumed the paper covered their bases regarding the ML, but it doesn't sound like they did.
Confirmation bias is the mouse who swears there is a cat nearby, and then spends all his time looking for the cat to prove he is right. The mouse will auto-select out of the gene pool in order to prove itself right. A machine will only confirm its evaluations based on the data it was shown. If it was shown a cat, it will find a cat...
100%. If the results can't be reproduced it shouldn't pass peer review. The overlay from 14:43-14:50 looks extremely damning; it's pretty hard to believe that's a coincidence. There's already been too much academic fraud because peer reviewers relied on the authors of the paper being the "experts", which allowed obviously fraudulent results to pass peer review. This could easily become another massive blunder if reviewers let the EHT team pass peer review with results that can't be reproduced, simply because they are the experts.
@@bobbygetsbanned6049 "If the results can't be reproduced it shouldn't pass peer review." Did you miss the fact that a reanalysis of the data was already done by several other groups, and that their findings agreed with the EHT?
@@bjornfeuerbacher5514 ..my two cents.. cuz ya seem a bit snippy.. Bobby is MOST LIKELY in accord with you, as their comment reads like: "100%.. [adds as a general statement]... Any science that gets published by any person/team should be able to pass peer review. [continues thoughts on the overlay now and maybe DOESN'T realize others have interpreted the data the same way]...."
@@cobra6481 I wouldn't take anything bjornfeuerbacher5514 says too seriously. None of the roughly 20 comments by bjornfeuerbacher5514 are particularly insightful, and they usually say "didn't you watch the video? Didn't you read the blog post?". Where you would say "snippy", I would posit that bjornfeuerbacher5514 is "pointless" and "unhinged".
So much respect for you taking the time to say, "Hey, this isn't my research field, I'm NOT an expert in this," before saying you'd tend to lean towards the EHT team. 🙌🙌🙌 thank you!
I liked that too. I did a synthetic aperture radar (SAR) project in my senior year of college over a decade ago; it touches on some of the same issues of signal integration and processing... Needless to say, the math is hard. I've forgotten most of it 🥺
In the documentary about EHT's process, one team created sets of test data, then other teams ran the data through their algorithms to reconstruct the image. One of the test images was a snowman, and they were able to successfully reconstruct it. It might be interesting to see what M24's algorithm does with that test data.
And in Google's AlphaGo documentary you see them directly contradicting their own narrative about AlphaGo (that it's not some general intelligence but just an algorithm that learned Go strategies). Just because they talked about it in a documentary doesn't mean it's true.
@@zeitgeistx5239 This comment is tantamount to saying there is no such thing as AI because it's actually just algorithms. The concept of back-propagation that underpins a large section of AI research is an algorithm to assign weights and values to nodes. But this has created systems like ChatGPT and AlphaGo that are clearly more than just algorithms. No one has claimed they have achieved Artificial General Intelligence; you are creating a strawman, and are arguing in bad faith. You don't understand what you disagree with.
Maybe it's just me, but I totally love your 'bullet points'. Classic presentation training from when I was much younger - "First tell them what you're going to tell them. Now tell them. Now tell them what you just told them". Oh, and I would REALLY like to see an Earth-sized disco ball - thank you for putting such an odd thing in my head.
"First tell them what you're going to tell them,. Now tell them. Now tell them what you just told them" I hate this saying. Are people that stupid that they need to have a presentation in such a dumbed-down fashion?
@@nobodyimportant7804 Depends on the context. Particularly with longer, information-dense presentations, it's often helpful just to get people's expectations of what they're going to get from it straightened out at the start, so that they aren't sitting there wondering "are they ever going to talk about X...?", which would distract them from what you actually *are* talking about. And then at the end, people often appreciate having a quick reminder of the key points of what they heard, so that even if some of the in-depth stuff went over their heads at least they can be confident they've learned *something*. I've left many a lecture with basically just the end summary written down in huge capitals, apart from a few notes about specific bits that I really wanted to capture, so that I could sit back and get the overall flow of connections between ideas while the speaker was talking, and then afterwards go away and read up on the details at my own, slow-thinking pace. While doing that, I would recognise bits of what the speaker had said, and gradually a proper, complete understanding would come together in my head.
Dr. Miyoshi is also an expert in VLBI techniques. His work was highlighted in the Scientific Background on the Nobel Prize in Physics 2020: "Theoretical Foundation for Black Holes and the Supermassive Compact Object at the Galactic Centre" by the Nobel Committee for Physics (available on the Nobel Prize website). The ongoing debates on the accuracy of black hole imaging make this an exceptionally fascinating topic!
But his analysis is incomplete, only using one night's data, and it makes a false claim about the assumption of the shape. Also, no mention of the polarization. That doesn't bode well for his side of the argument, does it?
Lay person here. I too would like to understand how using one night's data is viable for refuting the original Author's conclusion. If the earth's spin yields more data from varying viewpoints, that seems intuitively logical to my brain (he says, bravely).
@@antonystringfellow5152 The EHTC rebuttal makes no mention of "one night's data". Dr Becky said that, but the bullet point on screen at the time does not, nor does the actual web log post if one goes and reads it directly. _There is no_ "one night's data" rebuttal and _no claim that Miyoshi did that_ . The EHTC's actual rebuttal is that the EHTC produced a whole range of results with differing methodologies, and only 2% of them did not show a ring, whereas M24 produced one result with one methodology. The closest that it comes to any "one night" argument is _another_ bullet point that points out that if one observes Sagittarius A* for 12 hours, the fact that it can vary over the course of a mere 10 minutes needs to be taken into account, and M24 did not do that.
@@samuelgarrod8327 Please let's be careful here. While yes, no AI concept actually is "intelligent", there are very different AI concepts at work. The one that is genuinely stupid for most applications is "generative AI", a.k.a. Large Language Models (LLMs), where you issue a prompt and it derives the statistically most likely response from a massive model of inputs. I would *hope* that all this black hole imaging effort doesn't use *that* kind of technology.
As a person working in machine learning for over a decade, I can confirm that this type of problem, which we call data imputation, is very complicated. It depends on the machine learning model you use to fill the gaps, and the proportion of usable data you have in the first place. In the TED talk snippet you showed, it looked to me as if the proportion of sky covered was pretty small by comparison. The smaller the proportion of original data, the more uncertainty you have in the values you fill in the gaps. Then you need to think about the location of the observatories used: where are they located? Do they cover a representative sample of space, or do their locations bias the data collection in some way? I'm not an astronomer, but the fact that we know there are clusters and superclusters of galaxies means to me that matter is not distributed randomly. If we combine that with non-random locations of observatories, the combined sample of space could be non-random, i.e. biased towards more donut-shaped structures. The machine learning model used would likely pick up on this when filling the gaps, leading to the artifacts that the Japanese team claims. Another tricky aspect is the choice of which gap to fill first, because the order plays a crucial role. To avoid this you need to repeat the gap-filling process many times (multiple imputation), each time starting with a different gap and randomising the order. Then for each gap you average over all the runs. The question is, how many runs do you need? Again it depends on the proportion of gaps to raw data, and the total number of gaps. The number of runs required can be huge, and that costs money. If you stop too soon you may induce bias in the values you put in the gaps. Anyway, I thought you presented the topic very well even though it's not quite your field of expertise!
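For anyone curious what "multiple imputation with a randomised gap order" looks like mechanically, here is a toy numpy sketch: a made-up 1-D signal, roughly 30% of samples knocked out, a deliberately crude neighbour-average filler, and many fill runs in random orders averaged together. It's purely illustrative and has nothing to do with the actual EHT pipeline.

```python
# Toy sketch of multiple imputation with a randomised gap-fill order.
# Everything here is made up: a 1-D sine "signal", ~30% missing samples and a
# crude neighbour-average filler. It is NOT the EHT pipeline -- it just shows
# the repeat-with-random-order-and-average idea described above.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 100))      # the "true" underlying data
observed = np.where(rng.random(100) < 0.3, np.nan, signal)

def fill_once(data, order):
    """Fill gaps one at a time in the given order, each from its current neighbours."""
    out = data.copy()
    for i in order:
        window = out[max(i - 3, 0):i + 4]
        good = window[~np.isnan(window)]
        out[i] = good.mean() if good.size else 0.0   # crude local estimate
    return out

gaps = np.flatnonzero(np.isnan(observed))
runs = [fill_once(observed, rng.permutation(gaps)) for _ in range(200)]
imputed = np.mean(runs, axis=0)                      # average over the randomised orders

rmse = np.sqrt(np.mean((imputed[gaps] - signal[gaps]) ** 2))
print(f"RMS error of the averaged imputation: {rmse:.3f}")
```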
Yes; this. The AI component of the EHT is the least mature technology I know of in the process and thus the most likely source of errors. I wonder if they could get a real world data set, with a comparable level of completeness, detail and resolution, but of something that we directly image much better and see if the process produces a correct image?
It's like they used a pinhole video camera to photograph a living room. They got a bit of green from the sofa, a bit of brown from the armchair and a bright flash from the window. Then they used a computer with a catalog of living room pictures to find a best match. There, that's the living room. Better to say this might be what the living room looks like.
I am actually really surprised that they chose to use machine learning for this. Typically, there are algorithms that you can use to reconstruct missing data, if you make certain assumptions. But in that case those assumptions are well-defined. With AI, you typically don't know what the model is doing or why.
@@hansvanzutphen They used multiple different reconstruction techniques, of which only one used machine learning, with the others being more traditional methods. There were other contingencies which one can read about in the paper or their talks, but in short they were very careful with the use of AI.
The key fact that comes out at me is that EHT is running at the limits of its resolution. Therefore any detail in the image may be an artifact. The frightening thing about this is that they tried to bridge that gap with machine learning, which coerces results into known patterns instead of admitting that it is unsure.
Agreed. The last few years have made 'machine learning' a black mark against anything it touches. It is too easy to let these models hallucinate, and impossible to prevent it.
"The frightening thing about this is that they tried to bridge that gap with machine learning, which coerces results into known patterns instead of admitting that it is unsure." That is a _huge_ oversimplification of what they actually did do. Did you watch Becky's video? Did you read the blog post by the EHT team?
"...instead of admitting that it is unsure." That's not a binary. Data can be biased toward known patterns, AND the authors can (and should) analyze (and usually quantify) the resulting uncertainty. Nothing in science is sure except the existence of uncertainty. In my field (neuroscience, now retired) that usually meant deriving p values from the appropriate statistical tests. In physics it seems to be more about sigma. I don't have a clue how they analyze radio telescope + machine learning data like this, but it's scientifically meaningless if they haven't analyzed their own uncertainties or addressed them in some way. I think the heart of this criticism and response are the assumptions each team is making. I have to say I think the EHTC seems unfortunately dismissive of the criticism against their work. I agree that "EHT is running at the limits of its resolution. Therefore any detail in the image may be an artifact." That's probably their greatest weakness, but even so they should be able to analyze the odds of artifact (amplified by AI) vs. truly representative imagery. They seem to have done that by using numerous different methods and independent analyses to verify the ring-like shape, but I get the impression that these methods are all novel, so what's the baseline for reliable validation other than selecting methods that yield a consistent result that makes sense to them, which might more accurately reflect a flawed assumption?
@@defies4626 Statistics have been used to "fill in gaps" in scientific observations for a very long time. So first, a "valid" number of results (a "sample") must be gathered before coming to a valid conclusion in any study. Deciding what constitutes a valid sample becomes key. The correct statistical method for the test is also vital. What we have here is two teams arguing about the statistical methodology.
NRAO holds a "synthesis imaging" summer school (or at least used to). The textbook they have for that course is, well, "dense", and something like 600 pages. Using interferometry for image synthesis is not for the faint of heart. I will note that similar techniques are used to produce images from CAT scans, and I think that in the very early days of radio interferometry and CAT scans there was a lot of "cross-pollination" going on.
About 30 years ago I was working on trying to use radar information to get images of aircraft for identification purposes. We found that what you call the PSF results in smearing: the data from a single input point is spread over many output points. And there is no way of reversing the process; the information is lost and no algorithm can recover it. On noisy pictures, image processing tends to give inconsistent results and has an alarming tendency to produce artefacts that were not in the original image. I suspect this is why they tried machine learning. But that cannot recover the lost data. In very crude terms, machine learning takes lots of data and keeps making guesses until it finds one that meets the fitness requirements. It can never tell you why it works or that it will work on new data. It is also very dependent on the data provided and the data set size. The images must be regarded as an educated guess at best.
Well: IF you HAPPEN TO HAVE a PROVEN MODEL of what produced the original data, you may well matrix-divide the raw image by the wave functions of the proven model to get a highly beautified and sharpened image of the originating object. If, on the other hand, you have NO PROOF at all for the model you assume, then what you get is at best one of a multitude of possible hypotheses about what could have caused the original data. If you actually test the generated images against the very basic assumptions of the models you put in - for example, the black hole accretion disc imperatively being aligned with the galaxy disc - you are immediately forced to dismiss those "images of black holes".
The PSF affects the original image via convolution, which does have an inverse. Of course, you still have to be careful about noise, but it is in theory possible to get the (unique) original image back
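To make that concrete, here's a toy 1-D numpy sketch: blur a couple of point sources with a known PSF, then undo it by division in Fourier space with a small regulariser. It shows both that the inverse exists in principle and why noise makes naive deconvolution fragile. Textbook illustration only, not what either team actually did.

```python
# Toy 1-D version of the point above: blurring by a known PSF is a
# convolution, which can in principle be inverted by division in Fourier
# space -- but a small amount of noise forces a regulariser (eps) and limits
# how much detail really comes back. Textbook sketch, not the EHT method.
import numpy as np

n = 256
sky = np.zeros(n)
sky[[60, 61, 150]] = [1.0, 0.5, 0.8]                # "true sky": a few point sources

t = np.arange(n) - n // 2
psf = np.exp(-0.5 * (t / 4.0) ** 2)                 # made-up Gaussian PSF
psf /= psf.sum()

H = np.fft.fft(np.fft.ifftshift(psf))               # PSF in Fourier space
blurred = np.real(np.fft.ifft(np.fft.fft(sky) * H))
noisy = blurred + 1e-3 * np.random.default_rng(1).standard_normal(n)

eps = 1e-2                                          # regulariser: larger = safer but blurrier
deconv = np.real(np.fft.ifft(np.fft.fft(noisy) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print("brightest pixels after deconvolution:", np.sort(np.argsort(deconv)[-3:]))
```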
@@HunsterMonter Am I understanding correctly that the original image is essentially a very low resolution pixelated image? Something like maybe 5x5 pixels for the entire black hole, potentially with pixels missing?
@@TheRealWormbo I don't know the exact resolution, but there were enough pixels to get a good image. The problem was that the "mirror" of the telescope was broken into tiny bits scattered around the globe, so there were missing spots in the image. This is why they had to use ML, to fill in the gaps and get a full picture. The PSF is something else that they had to account for. If you look at an image by Hubble or JWST, you will see spikes around stars. These are diffraction spikes. They look pretty on pictures, but if you are trying to do science, they get in the way. Every single telescope has a unique diffraction pattern, and mathematically it is possible to correct for that effect and get the image without any diffraction spikes, just from the geometry of the telescope.
@@TheRealWormbo I don't think so; I think data is extracted from various images from different telescopes of various resolutions. Then that is fed into an image reconstruction method, not too dissimilar to that of photogrammetry, combined with AI.
When I watched the original presentation on this, I did question whether the machine learning might have been fitting the data to the theoretical expectations of what the black hole was supposed to look like. I don’t have enough details, or understand the processes or mathematics used in the analysis, but I do have a background in machine learning so I know this is a possibility. I’m glad these researchers questioned the results and am very interested in hearing about the final verdict on this.
I had similar thoughts when the first images were coming out, and I questioned the data fitting as well, this point about it being within the resolution error of the imaging wasn’t presented. IIRC, if you watch the TED talk that Dr. Becky refers to the researcher talks about exactly this and doing their best to make sure the ML wasn’t giving them what they wanted to see. I’m still skeptical.
@@daveherd6864 that’s true of the counter paper as well, pursuing science costs money. You can’t discount it as an influence, but money influences nearly all of our actions in society, so I wouldn’t put a huge weight on it. They would have been needing money if they’d been pursuing some other thing in science as well.
Much of what the cabal puts out seems like BS at first glance, but we're supposed to accept it. The dumber ones are the first to understand it. In a camouflage class in Basic Training, the instructor asked if anybody could see his tank parked in the woods way over yonder. There were dumbasses talking to people like they were talking to dumbasses. "You can't see it? It's right there!" The instructor was smiling. There was no tank in the woods. How many contemporaries peer-reviewed Einstein? Hint: The number is less than 2 but more than -1.
What's especially encouraging is that the EHT data is available for others to analyze, inviting contributions from different perspectives. Over the years, as other scientists scrutinize and build on this data, we can hope to see a clearer, more comprehensive picture of black holes and their surrounding environments. Disagreement, in this case, isn't just welcome; it's essential for advancing our understanding of these fascinating cosmic phenomena.
One of the rebuttal points is that they didn't release the uncalibrated raw data. To me, that should be the first, not the fourth point, since the other points could then be addressed independently. Using ML trained on what one would expect as output also seems a little off to me, for the reasons of bias already stated.
As someone who writes software for fun in retirement after decades of getting paid to write software, I looked for the raw data to process right after watching this video, and I was unable to find anything that even remotely looks like unprocessed raw data.
@@nil2k I'm not sure if uncalibrated is the same as unprocessed in this case, but neither are the raw data. I'd imagine that dealing with the raw data would be extremely difficult, since that would mean the raw data from each observatory before they do their own calibrations, then have to mesh all those into the 'disco ball' model. With those data sets, the other 3 points could be tackled separately.
The number of people who can really assess this debate is obviously very small, and I'm not, even in the most infinitesimal way, capable of having even the dumbest of discussions with any of them. Disclaimer out of the way, I was fully expecting these results to be challenged, and would have bet money that their images aren't "right". Reason being that, after listening to the EHT team in many discussions, listening to their explanations of the process, and watching their documentary including the process of deciding on the accurate final images, I had the distinct impression of a culture that sent all kinds of signals to my intuition that they were...how to say it... high on their own supply? I felt like they demonstrated that they were extremely cautious in their analyses, but somehow it felt like box-checking. Like they had a sense of inevitability in the result. Like the human ego just isn't capable of being detached enough from something so precious to one's core identity. The sheer scale and effort of the project is just overwhelming. I saw that they recognized this and knew it was a massive danger - as any truly good scientist does - but I just got the sense that there wasn't anyone strong enough or oppositional enough to not walk down the predestined path to the predicted result. Once I saw that image processing included using what is basically very complex confirmation bias to derive the images (telling AI what it "should" look like) I just couldn't have a lot of confidence in it. I'm highly likely to be wrong, but my life experience has been that when I get these kinds of intuitions about expert positions on things, I end up being right way more often than I have any business being. Very curious to see how this plays out.
Very good points. The risk of human biases and behaviours not being accounted for here is significant, with groupthink being one of the possibilities you highlighted.
Back in the day, the most common way to make complex maps from radio interferometry data was with Högbom CLEAN. But there was no way to estimate how reliable it was. Then came maximum entropy - I used it with both radio interferometric and optical data, but once again, how reliable was it? Now we have "machine learning", and the same questions keep getting repeated. North Cascades - I've been there. Very nice (and remote) park.
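For readers who've never seen what CLEAN actually does under the hood, here's a bare-bones 1-D Högbom-style loop in numpy: find the peak of the dirty image, subtract a scaled copy of the dirty beam there, record the flux, repeat. Real implementations (and anything the EHT ran) are far more elaborate; the beam, sky and stopping rule below are all made up.

```python
# Bare-bones 1-D Hogbom-style CLEAN: find the peak of the dirty image,
# subtract a scaled copy of the dirty beam centred there, record the flux as
# a "clean component", repeat. Everything here (beam, sky, gain, stopping
# rule) is invented; real CLEAN implementations are far more elaborate.
import numpy as np

n = 201
true_sky = np.zeros(n)
true_sky[[50, 120]] = [1.0, 0.6]                    # two point sources

t = np.arange(n) - n // 2
dirty_beam = np.sinc(t / 5.0)                       # stand-in PSF with sidelobes
dirty_image = np.convolve(true_sky, dirty_beam, mode="same")

residual = dirty_image.copy()
components = np.zeros(n)
gain = 0.1

for _ in range(500):
    peak = int(np.argmax(np.abs(residual)))
    flux = gain * residual[peak]
    components[peak] += flux
    residual -= flux * np.roll(dirty_beam, peak - n // 2)   # ignoring wrap-around for brevity
    if np.max(np.abs(residual)) < 1e-3:
        break

print("recovered component positions:", np.flatnonzero(components > 0.05))
```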
So HOW RELIABLE were any of the methods? You can't criticize the reliability of the methods if you never actually assessed them and found them wanting. And there are arguments why they should be 'roughly good'.
I used to process seismographic data. Deconvolution is an issue in that field as well. It's an important concept. Is it big in your type of astronomical research? Perhaps you could do a video about that. Kudos to you for discussing methodology at all.
Becky - The issue here is that the EHT resolution is at the ultimate limit for a small emitter like SGR A*. AI techniques can extend that resolution, but the results start to carry risks, such as hallucinations. Both teams have to resolve that problem, and it is not really such a surprise that they have different results. It's kind of like the popular images of the "face on Mars" from several years ago. The shadows in that Mars image made our human vision detect a "face", because we are naturally adapted to see faces, especially eyes, even if the actual image is distorted, blurred, and noisy. The face turned out to be an optical hallucination when better resolution images were available. In this case for SGR A*, I suspect we will have to get more data to resolve the image better. In the meantime, I have to place more trust in the larger team. More team members should be better at finding and correcting defects.
I'm not sure how you can classify an optical image taken with a camera with no manipulation as a "hallucination". None of this data was made up. The geography was there. The shadows were there. They happened to combine in such a way in that image to look like a face. If you were to image the same area again under the same conditions you'd get the same shadows and you'd still get an image that looks like a face.
@@stargazer7644 The point I tried to make with the comparison was that the so-called face on Mars was a hallucination by our neural networks for vision. Our neural networks for vision are biased in that way. It happens to work often enough that we accept it. The analogy is with the artificial neural networks that analyzed the EHT data for SGR A*. The other part of the analogy is that the face on Mars was resolved later with more data. I suspect the same will occur with SGR A*.
Is that true? I mean, if a lot of people want something to work out, because they worked on it for a good part of their career, they definitely will be biased.
Thanks Dr Becky for an interesting presentation. I've worked now and then in radio interferometry and have a healthy respect for the difficulties of phase calibration, which are particularly severe in VLBI, when one doesn't have the luxury of having all the antennas directly wired up to a common phase standard. I'd love to have time to play with this data myself instead of what they presently pay me for, which is trudging with an Augean-sized shovel and broom through acres of other people's crap x-ray code. Wisely, with the Fourier-innocent public in mind, you omitted mention of this particular F word, but I don't have any popularity to worry about so I can mention for readers that what an interferometer measures or samples is the Fourier transform of what is on the sky. I'd be interested to fit directly in the Fourier plane to compare the likelihood of a ring-shaped versus a condensed object. I have to say also that I think a machine learning approach is quite wrong for this application. People throw ML at everything just now without stopping to consider whether such an approach suits the problem or not. It's not a magic wand and indeed it leaves one open to just such criticism as by Miyoshi et al, because sure you've generated your nice picture, but you can't exactly explain how, because it is all hidden in the ML black box.
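To illustrate the "fit directly in the Fourier plane" idea: an infinitesimally thin ring and a circular Gaussian have different closed-form visibility amplitudes (a Bessel function versus a Gaussian), so even sparse baseline samples can in principle prefer one shape over the other. The baselines, noise level and 50-microarcsecond sizes below are made up for illustration; this is nothing like a real EHT analysis, which involves closure quantities, calibration, time variability and much more.

```python
# Schematic "fit in the visibility (Fourier) plane": compare a thin-ring model
# against a circular-Gaussian model on some fake, noisy visibility amplitudes.
# Baselines, noise and sizes are invented for illustration only.
import numpy as np
from scipy.special import j0

UAS = np.pi / (180.0 * 3600.0 * 1e6)                 # microarcseconds -> radians

def ring_vis(u, diameter_uas):
    """Visibility amplitude of an infinitesimally thin ring of given diameter."""
    return np.abs(j0(np.pi * u * diameter_uas * UAS))

def gauss_vis(u, fwhm_uas):
    """Visibility amplitude of a circular Gaussian of given FWHM."""
    theta = fwhm_uas * UAS
    return np.exp(-((np.pi * theta * u) ** 2) / (4.0 * np.log(2.0)))

u = np.linspace(0.5e9, 8e9, 40)                      # baselines in wavelengths (EHT-ish scale)
sigma = 0.02
rng = np.random.default_rng(3)
fake_data = ring_vis(u, 50.0) + sigma * rng.standard_normal(u.size)   # pretend the sky is a ring

for label, model in [("thin ring, 50 uas        ", ring_vis(u, 50.0)),
                     ("circular Gaussian, 50 uas", gauss_vis(u, 50.0))]:
    chi2 = np.sum(((fake_data - model) / sigma) ** 2)
    print(f"{label}: chi^2 = {chi2:.0f}")
```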
I have one simple question: have they tested their observation and processing algorithm on normal stars? To check that normal stars do not produce a donut-shaped image as well...
Exactly my thought. Well, since the SMBH's theoretical ring is so massive, maybe not a single star, but a cluster or a glowing dust cloud, or something else more comparable to the accretion disc.
Yeah, taking the produced image and reversing the PSF - will that result in the raw data picked up by all the sensors? I guess they have both defined and checked the PSF using nearby stars, comparing the sensor data with image data from a "normal" telescope.
Good question. I believe that is how they "trained" their artificial intelligence part of the work. They looked at known objects and the associated combined telescope data to teach the algorithm.
It's a simple question with a hard control: if you want to check it with a "normal" star then you have to apply these techniques to a star that has the same radio "brightness" as these black holes. I'm not an expert, but I have to imagine that stars with that kind of radio brightness probably exist, but they are so hard to separate from background radio noise that you would have to identify that normal star using other techniques. THEN you have to apply the same radio collection strategy as they did for the black hole (lots of nights of capturing data at all of the same telescopes) to determine whether you would generate a similar image to the black hole image they generated, or a simple blurred spot. Probably not an easy experiment to replicate, but it probably will need to be replicated to further prove that the results of the original study should still stand.
EHT’s responses are unsigned blog posts. Credit to Miyoshi, et al, for signing their rebuttal. Are anonymous responses standard in this academic community? It is also odd for EHT to claim they have provided detailed descriptions of their methods, while Patel’s paper notes the scripts for image postprocessing and generation are not available. The 2024 EHT response does nothing to enlighten us on their methods, choosing instead to critique Miyoshi. Those critiques may be fair, but EHT’s scripts and methods remain opaque.
Thank you. It is helpful to know the usual practice. The response certainly appears to be speaking for the entire team, but as you say, that’s nothing more than an assumption.
When they used AI with simulated data to try and fill in the missing data, that is the point that set off red flags for me! The AI is no better than the data that it learns from, and if they use simulated data then it is only what the team wanted it to be! Not "real" data!
I do astrophotography myself with off-the-shelf amateur equipment, and if the resolution isn't there, then the resolution isn't there to produce an image. So I have been hesitant about this method ever since I saw the first images back in 2019. You can't just say "our models do take it into account" and walk away. The smallest detail any telescope can resolve is easy enough to calculate, and if the event horizon is smaller than this theoretical minimum, then you simply have nothing to show, in my honest opinion. These images look like they should be right, but that's just an assumption based on our best models, not on actual data. The resolution just isn't there, and that's what Miyoshi et al. claim, correctly from my amateur astrophotography POV.
@@monochr0m Resolution is still resolution. Nothing changes there in how much detail you can resolve, be it radio telescopes, mirrors, refractors or whatever. Even if they had a telescope the size of the Earth without the need for interferometry, they still would not have enough resolution to resolve an object the size of M87's black hole or Sgr A*. Simple as that. To circumvent this fundamental resolution problem they throw their models into the equation and "simulate" the end result based on the low-resolution data they actually gathered.
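The back-of-envelope numbers behind that resolution argument are easy to check: θ ≈ 1.22 λ/D for an Earth-sized baseline at the EHT's ~1.3 mm observing wavelength comes out around 25 microarcseconds, the same order as the ~40-50 microarcsecond ring diameters being reported, which is exactly why the PSF argument has teeth. Rough, rounded numbers only:

```python
# Back-of-envelope diffraction limit for an Earth-sized aperture at ~1.3 mm,
# compared with the roughly 40-50 microarcsecond ring diameters reported for
# M87* and Sgr A*. Rounded numbers, for illustration only.
import math

wavelength_m = 1.3e-3                      # EHT observing wavelength (~230 GHz)
baseline_m = 1.27e7                        # roughly one Earth diameter
theta_rad = 1.22 * wavelength_m / baseline_m

rad_to_uas = 180.0 / math.pi * 3600.0 * 1e6
print(f"diffraction limit ~ {theta_rad * rad_to_uas:.0f} microarcseconds")   # ~25 uas
```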
I had to explain the "Airy Disc" and diffraction through a circular aperture to our student volunteer the other day. We were trying to make observations of 3C348 during the day, but the Sun was contaminating our side-lobes too much to succeed. Actual instruments diverge from the theoretical quite a bit, because real instruments aren't perfect. Incidentally, one of the versions of the CLEAN algorithm was invented by a former colleague of mine when we were both at Nortel, 17 years ago. David Steer and I filed a patent together (on some wireless security stuff--unrelated to his grad-school work).
Just my thoughts, but reporting the EHTC's rebuttal posted on their website is problematic. The Japanese group published their rebuttal in a peer reviewed journal. Only counter-rebuttals from the EHTC group that are likewise peer reviewed should be taken seriously, and not ad hoc posts from their website.
The massive problem with any machine-learning algorithm is that you're training it to produce an expected result. Therefore, you should NEVER use machine learning to produce an image of something hitherto unseen using noisy data, because all you're going to get is what you expected to see, not what's there. Or, to quote the EHT's own comment, "ring-like structures are unambiguously recovered under a broad range of imaging assumptions". In other words, 'we definitely got what we expected to see'. As for EHT being more likely to be right because they put more time and effort in, well, when you put in that much time, effort and money, there's a HUGE pressure to produce something impressive.
Please take a moment to recognize that Becky put together this detailed video explanation of the disagreement, including an overview of the published work by both research teams and uploaded it to HER channel, yet still had the INTEGRITY to point out that she's NOT an expert in radio astronomy interferometry (herself) , so she can't offer an informed view. Massive kudos to her for doing this! IF ONLY all science communicators (and journalists) were as honest and humble... Becky you're exemplary! Please keep showing us how it should be done. ❤👍
If you use some process trained on models to edit your image, it is no surprise the image ends up looking like the model. It is also odd that there is no edge-on angle in either image; you can see the full center in both.
Correction on your last point: the apparent angle can be explained by the way the light from the accretion disc gets bent directly around the black hole. Look at the artistic renderings/simulations to see what I mean - one can see what is directly behind the black hole, even if looking at it directly from the plane of the accretion disc.
I love how groups studying the same phenomenon, coming up with differing solutions, are like WWF pro wrestlers talking smack about their rivals. But instead of power slams and suplexes, they use research and data, and instead of shouting their spit into a microphone about how they're gonna totally own, they are publishing research papers. I mean in the end, there is still professional respect, and everyone is playing by the rules. But in either case it is still a lot of fun to watch.
I'm glad these images are coming under some more scrutiny now. I'm no expert, but the whole methodology, especially the use of machine learning, always made the final result seem way too good to be true, and very much dependent on many possible biases that could be manipulated to produce a desirable result which is more recognisable to a layman as "black hole-y" and therefore more likely to become big mainstream news.
Yes! This kind of deep explanation of a controversy is some of the most valuable content on YouTube. It reminds me of my very favorite videos of yours, explaining the historical process of figuring out some of the things that are now considered basic to modern Astronomy (and I'd really love to see more of that - and see other people in other scientific fields emulate it). One of the best ways to really build lay understanding of these things.
For the first time, I understand the meaning and origin of an artifact. There are artifacts on x-rays and sometimes MRIs, I suppose, that can lead to diagnoses that are wrong. I did learn that, but I didn't understand how that could happen. Your explanation is making that very clear. Wow, you're a very good teacher. I thank you.
I'm only at 2:22 in the video, and my photography brain has kicked in: in some cases, if you edit an image with one step done in a different order, you will come out with a totally different image. While this obviously is more complicated than just editing a photo, I imagine the concept could be applicable. Edit: at 19:07 in the video, EHTC mentioned my exact thought. For me, this sometimes happens when the overall photo is dark to the point of being nearly black and has low contrast. Do one particular edit before another, and it will somehow pop with more contrast, while flipping the order does nothing to improve it.
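A toy numpy version of that ordering point, just to show that two perfectly ordinary image operations (clipping highlights and stretching contrast) don't commute. It has nothing to do with the EHT pipeline specifically:

```python
# Tiny demonstration that image-processing steps need not commute: clipping
# highlights then stretching contrast gives a different result from
# stretching then clipping. A toy analogue of the ordering point above.
import numpy as np

pixels = np.array([0.02, 0.05, 0.10, 0.40, 0.90])   # made-up pixel values

def stretch(x):
    """Simple linear contrast stretch to the full 0-1 range."""
    return (x - x.min()) / (x.max() - x.min())

def clip(x):
    """Clip highlights at 0.5."""
    return np.clip(x, 0.0, 0.5)

print("clip then stretch:", stretch(clip(pixels)))
print("stretch then clip:", clip(stretch(pixels)))
```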
I'm skeptical of the EHT image because of the AI model used to recreate the image. We humans are fallible and biased, and we pass this on to the AI models we train, so the team who was looking for the black hole and WANTED to get a good image of it also trained the AI model for it. Which makes it biased, since it's a model designed to create black hole images. I think if they had double-blinded the "experiment", with the people who designed the model not knowing what was going to be imaged, that would have resulted in a more faithful model. I'm glad other people are taking a look into this data, and I hope we can soon have a better understanding of it.
Simple solution is for the original imaging team to provide the process they used to generate the image. If the image can't be generated by a "competent" independent group/team, then the original image is probably a fabrication. If the image can be generated, but violates the tolerances of the equipment and/or what can be deduced from the data, then the image is probably a fabrication.
I love science because it decodes reality rather than just insisting on a belief. This back-and-forth collaboration is argumentation at its best. This is how we decode reality. It's so much better than just forming something in your imagination and declaring that to be what is. Even if we run into a disappointing failure, we can take comfort in knowing that it is wrong and continue to work at finding what is right.
I remain skeptical about any AI-generated results. AI can be good at getting patterns out of the data, but the patterns have to be independently verified apart from the AI. AI might be good at inventing new drugs, but the results need to be verified. AI might be good at generating images, but I can get AI to generate any image that I want, apart from reality.
My very passing understanding was that the algorithm wasn't guided to produce an image that matched the expected image, so the fact that it _did_ match is a mild sort of independent verification. ETA: If I give a generically-trained algorithm a blurry image of what I believe to be a Dalmatian (due to independent evidence other than the blurry image), and it produces an image of a Dalmatian, that feels meaningful. Could it be a coincidence? Certainly, but that doesn't seem particularly likely.
I was much more inclined to believe the machine learning (I'm not calling it AI) result a few years back, when I understood less of how it actually works, and how unreliable it can be.
It's incorrect to think of this machine learning as the same kind of image generation AI that's generally available. The principal algorithm they use is called PRIMO. They also use CNNs. The main thing here is that the algorithms are designed for reconstruction/recognition, not generation.
@@haydon524 I responded with some links, but seems to be filtered out, you can look up the paper "The Image of the M87 Black Hole Reconstructed with PRIMO"
Good work covering this story. There were a few details in there that I hadn't heard before. A little puzzled by some of the comments here though. I'm not sure these people watched the same video as me as they only seem aware of the fact that the image was enhanced using AI. It's like they never heard the rest of the story. Strange.
In fairness, even if you watched this whole video you won't have the whole story. When Dr Becky is giving that "one night's data" explanation at 18:42 that's not what the bullet point on screen is saying, nor what the actual web log post says if one goes and reads it firsthand. In fact that part of the EHTC's rebuttal is _not that at all_ , but rather that the EHTC produced a whole range of results with differing methodologies and only 2% didn't come up with a ring, whereas M24 only produced one result with one methodology.
They didn't use AI to "enhance" the image; they used it to extrapolate data points from minuscule amounts of data due to not having enough observatories. AI is useful for interpolation, but terrible for extrapolation. You will always get what you expected to get, regardless of whether it's accurate.
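A tiny numpy illustration of that interpolation-versus-extrapolation point: a flexible model fitted to a sampled range can look fine inside it and go badly wrong outside it. Toy data and a plain polynomial stand in for the model; this is not a claim about what the EHT algorithms do.

```python
# Toy illustration of interpolation vs extrapolation: a flexible model fitted
# to samples on [0, 1] matches the truth well inside that range and can be
# wildly wrong outside it. Made-up data; not the EHT's actual algorithms.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

coeffs = np.polyfit(x, y, deg=6)           # flexible model fitted to the sampled range
for label, pt in [("inside the data ", 0.5), ("outside the data", 1.5)]:
    truth = np.sin(2 * np.pi * pt)
    model = np.polyval(coeffs, pt)
    print(f"{label}: truth = {truth:+.2f}, model = {model:+.2f}")
```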
I work in computer science, and often we have to collect a LOT of benchmarking data for performance evaluation. BUT, for whatever reason, even AFTER handing over the data and charts, someone will say "but does it actually FEEL faster?" BECAUSE of what you said: some people DO just kinda "know" the data better. If you stare at data long enough, AND know what you're looking at, you can kinda just start SEEING things. And there is ABSOLUTELY a bias in the crunching of that data once you "see" it, and I have had my ass HANDED TO ME many a time because I misread the data. But I think I will BARELY side with the Event Horizon group - not for their data familiarity itself, but ONLY because of the WAY they have LEVERAGED that data familiarity to UNCOVER very special and unique science.
The shapes look similar to me, save for one thing: could it be that the original team clumped the upper region of the radio spectrum together, so it just wasn't computed by the machine learning, leaving a "blank" result, whereas the other team didn't, making the central region appear brighter? That would make this all just a resolution issue.
Interesting as always. Random aside, The North Cascades t-shirt - after living just across the border (northern) from that area for over 60 years, my wife and I just did a short road trip through the park. Definitely worth seeing. (didn't get the t-shirt as the park facilities were closed for the winter.)
As a novice, something has always bothered me about these images. Both are from the perfect angle, a perfect right angle, to the ring around the event horizon. Like a photo of Saturn with its rings as a perfect circle around the planet. And we just happened to be at that perfect angle. Also, and more importantly: isn't that an IMPOSSIBLE angle for us to the Milky Way's supermassive black hole? Presumably, the ring around the black hole is in the same plane as the bulk of the galaxy.
The rings around the black parts of the image simply represent severely gravitationally lensed radio light from behind the black hole. They do not represent accretion disks. The latter are not present in the images. This is such a common misunderstanding that I’m surprised science popularizers/influencers have not bothered to clarify it after all this time.
@@robertbutsch1802 surely since it is the light emitted from the accretion disc that we are seeing, the rings do represent the accretion disc, albeit severely distorted images of the disc.
That part is probably not problematic, because light is heavily bent around the black hole in such a way that we are always seeing the back of it. So it doesn't really matter at which angle we are looking; we will always see the accretion disk as a whole.
I'm an engineer and I understand most technical topics, but I don't understand how telescopes scattered around the globe are essentially one big telescope, and can "see" things too small for each of the component telescopes to see. Let me use a simple analogy. Let's say the entire population of the United States looks up at the moon on the same night and tries to see the Apollo 11 Lunar Lander. That, in essence, makes one huge "telescope", but since none of the observers will see the lander (it's too small), neither will the huge "telescope". Dr Becky, perhaps you could produce a video explaining in detail how the "one huge telescope" concept works. We'd all appreciate that. Having said you are not an expert in interferometry and huge radio telescopes, perhaps you could do a joint video with one of your colleagues who is.
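Not Dr Becky, but the short version is that each *pair* of telescopes, with their signals correlated together, measures one spatial frequency of the sky's brightness pattern (one point in the so-called uv-plane) set by their separation as seen from the source, and the Earth's rotation sweeps those points along arcs until enough of the virtual Earth-sized aperture is sampled to reconstruct an image. No single dish resolves anything on its own; the correlated pairs do, which is where the analogy with many separate eyes breaks down. Here's a schematic numpy sketch with made-up station positions and a simplified rotation (ignoring the actual source direction), just to show how a handful of dishes turns a day of tracking into hundreds of uv samples:

```python
# Schematic: why dishes scattered across the Earth act like pieces of one big
# dish. Each PAIR of dishes samples one spatial frequency of the sky (one
# point in the "uv-plane"), set by their projected separation in wavelengths,
# and Earth's rotation sweeps that point along an arc, gradually filling the
# virtual aperture. Station positions are made up; rotation is simplified.
import numpy as np

wavelength = 1.3e-3                              # metres (~230 GHz)
stations = np.array([                            # hypothetical (x, y, z) in metres
    [ 6.37e6,  0.0,    0.0  ],
    [-2.0e6,   5.5e6,  2.0e6],
    [ 1.0e6,  -4.0e6,  4.5e6],
])

uv_points = []
for h in np.linspace(0, 2 * np.pi, 288):         # one "day" of observing, 5-minute steps
    rot = np.array([[np.cos(h), -np.sin(h), 0.0],
                    [np.sin(h),  np.cos(h), 0.0],
                    [0.0,        0.0,       1.0]])
    pos = stations @ rot.T                       # mimic Earth rotation
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            baseline = pos[i] - pos[j]
            uv_points.append(baseline[:2] / wavelength)

uv = np.array(uv_points)
print(f"{len(stations)} dishes -> {uv.shape[0]} uv samples over one rotation")
print(f"longest projected baseline ~ {np.hypot(uv[:, 0], uv[:, 1]).max():.1e} wavelengths")
```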
If that's your position then you should never trust a single thing you see ever again, because at its fundamental core, this is EXACTLY how the brain interprets optical input and translates it into 'vision'. I do understand and (partially) agree with the sentiment, don't get me wrong, but I would be remiss if I were to fail to point out this fact regarding vision. Ultimately, what we need to refine our understanding and improve the models are independent data from other perspectives, distances and spectra, which is, alas, unlikely to occur any time soon.
@@Veklim This is a BS response. The brain is not a computer. Anyway, AI is famous for hallucinating, and its responses depend on the data it was trained on. I'm inclined not to believe these images until the same process (same AI, same gaps) is proven correct by imaging known targets (forming solar systems, for example) and producing results that are very close to images obtained by other methods.
I applaud your communication of a very technical topic outside of your expertise. I'm an engineer who interfaces with many different disciplines, fumbling around in unfamiliar territory, so your admission of working outside your domain definitely resonated here :D
The image from M87* is taken from almost directly above the black hole due to its orientation from our perspective, and the image of Sagittarius A* was captured from a side-on perspective, being at the centre of our galaxy. Now look at both images side by side 0:29 (left image is top view and right image is side view) and you will see, with both perspectives, that it is not a sphere shape but a vortex with three regions of influence rotating about a centre point. This explains why General Relativity breaks down at the event horizon: our math dives off to infinity because the fabric of spacetime goes down a hole, so the linear measurements we take look like they are going to infinity, but what's actually happening is the fabric of space and time is going down a hole and actually going towards infinity in both opposite directions to the centre of the hole. This explains why black holes have jets on either side too. Remember, our math is but a slice through a 3D object moving in a 3D way through a 3D space. If you slice up a tornado and do math on one sliver of it you won't understand it completely, and when the straight line related to your equation reaches the edge of a hole, measurements go to infinity because they are falling down a hole and the line you are measuring is bending with space and time and is now pointing in a different direction; you think you are measuring a straight line but the geometry of that straight line in reality is a curve that becomes a vortex.
I have said this before, but will repeat it. To image the shadow, the diameter of the imaging device needs to be 1.1 times the diameter of the Earth. The number of individual devices is too few, being only five if my memory is correct. There were only five or seven clear days of observation; in the past I have seen 60 data stacks needed to get a good "image". Too much of the data collected was not used because it was corrupted, which will skew the data sets. AI learning is OK if you make sure you don't feed other AI learning back into the data in a loop. The EHT team used template-probability matching, which in itself leads to artifacts. Personally, a 200-person team seems more likely to err and rush to publish, maybe?
That the original purported images might be artefacts of the analysis seems like a real possibility given how far the team was pushing the techniques. My own inclination would have been to use maximum entropy (or its dual, autoregression) analysis for image interpolation rather than AI, but I realise I'm forty years out of date on such things and I could be spouting nonsense. Having a variety of methods applied to the same data by other teams would seem one way to help resolve (sic) the issue, but "You're going to need a bigger baseline" is the message I take from this.
Oh, good, an explanation of the controversy I can understand. What we really need is a longer baseline, maybe a couple of radio telescopes in high orbits.
If they trained the algorithm to decode the data with images that represented an expected shape, doesn't that unavoidably bias how the data will be interpreted?? Should the image obtained be considered a model rather than a "picture"?
Miyoshi, Kito and Makino 2022 make some very good points about the M87 EHT images. I did not read the equivalent Sag A* paper. As an old VLBI observer, their processing makes more sense to me, but I may be a little biased toward old school. I found the EHT-team rebuttals a little glib. However, the discussion is good and you made a nice presentation of the conflict.
Great video. I love seeing these science disputes happen in real time. In science, conflicts are resolved through long discussions, analysis and arguments, not with emotions, intimidation and misinformation. I wish things were like this in all aspects of our society.
Well the reality is that scientists are also humans and human biases and emotions can unknowingly influence data analysis that is thought to be objective. Although it is much less likely in circumstances like these, it should never be left off the table.
Sky Scholar has also explained that picture as "not possible", or something like that - it's not possible to measure and get an image the way they claimed they did.
Pierre Marie Robitaille doesn't know the first thing about the complex imaging technology used in the black hole image. He also doesn't understand how atmospheric pressure works, thinks that the cosmic microwave background radiation comes from the Earth's oceans, that all of the data from the Parker Solar Probe (as well as any other research that contradicts his theory) was faked and the sun actually is made of liquid metal hydrogen. His doctorate is in zoology, not astrophysics and he worked on MRI technology until he was released from his position at Ohio State University. Since then he has not published a single paper on astrophysics to a peer reviewed journal and his audience consists primarily of science-deniers on YTube. He is, in fact, the textbook definition of a crackpot. People need to vet their sources better.
When the M87 image was released, it seemed plausible to me because the accretion disk (and by implication the black hole rotation) was facing us, as was the galaxy. With Sagittarius A* I also expected it to be aligned with the galaxy, but again it was facing us. The explanation was that there’s nothing to force a black hole rotation to align with the surrounding galaxy, which is fair enough. But what are the odds that an otherwise random orientation would face us? Twice?
Relativistic effects mean that you can see the front and back portions of the disc even when you are mostly edge on. It is not as simple as viewing Saturn's rings. The NASA JPL cartoon at 05:10 is a bit simplified, but notice that at most angles you can see the whole of the disc.
the gravitational field is so strong near the black hole that light from the accretion disc is bent towards us whatever direction we are looking from. As far as I know we would see the doughnut shape from every angle.
In all fairness, if the team that assembled the data says you have to process the data in exactly the same way in order to get the same result they did...something about that reflexively tells me it's not great science. The best science is verifiable, falsifiable, and repeatable. At the same time, processing the data in a way that doesn't make sense is of course going to produce results that don't mean anything or looks wildly different. As a...um...well, I'm not a _professional_ physicist, but studying and understanding physics and the math of physics is one of my greater passions, sooo...can I still be a "physicist" even if I don't have a degree or relevant employment because of institutional and social obstacles? As whatever I am, I am highly appreciative of your professional analysis and thoughts.
The ML program they have been using, as well as any raw data they collected, is available to the public. The Japanese team just needs to use it and feed it false data to see if the image still turns out donut-shaped. If not, EHT is just faking stuff for media recognition.
I am a computer scientist and as far as I'm concerned, you can't call these things "images of black holes" if they were postprocessed via machine learning. Instead, they're simply a prediction of what a black hole *might look like* based on existing images (which, remember, there are no existing images of black holes). I have no doubt a true image of one of these black holes would look very similar to the event horizon images, but these aren't that. They're a snapshot of a simulation extrapolated from a low-resolution image of reality.
Yeah, it really sounds like they simply don't have the data. It may turn out to be a reasonably accurate picture of a black hole, but I think they haven't actually managed to prove it is.
So ashamed that the 64m Murriyang Radio Telescope (formerly Parkes RT) and the 70m antenna at Tidbinbilla (part of the DSN but also used for GBRA) weren't part of the EHT collaboration. The 70m DSS-43 antenna is the only one on the planet capable of sending commands to Voyager 2, so possibly too frequently indisposed, but there are other capable antennae on that site. It's been four decades since we had a Commonwealth Government with any interest in funding science or even education properly, so maybe I shouldn't be so surprised that CSIRO and/or any Australian universities weren't involved in taking data for the EHT. :(
Yes I would have liked a bit more emphasis that both teams are doing good work here (not that she said the opposite): publishing all the data, then another team coming forward with a potential issue etc. If it turns out the original analysis was incorrect, that's fine, there was no deception here.
@poruatokin It's hard to know if it's cultural or genetic, as there's important interplay there. As always with these questions it's a mixture of both. I don't think their "shame" culture helps.
I have to say that from my experience working with big teams is that most of the time the one single type of analysis is done by one or two people, and very few in the team understand exactly what was done. So I wouldn't take the size of the team as an indicator of anything.
_Edutainment_ and _Sciencetainment_ can never replace actually learning topics, which takes a lot of work. And that is why we see masses of people get so easily confused by headlines - they have not put in the work.
Weirdly enough I think it would be more accurate to say that space is mostly not black (unless you are near a back hole or something). It only appears black due to exposure times of cameras or the sensitivity of our eyes.
So is the black in space (the edge of the universe) the same black as a black hole? If not then what’s the difference ? Serious question. If there is no difference then what …
They've got doctorates in astrophysics and I don't, but if you tell me somebody took some data and ran it through an AI algorithm trained on simulations of what black holes looked like (rings) and the result matched those simulations even though you can't see the ring in the raw data... sounds like BS to me!
Well, it wasn't just one algorithm; what they claim is that they ran it with several data sets: renditions of black holes but also just general images, and something else I now forget.
Thank you for giving us your very much qualified 'leaning' on this issue. Thank you for explaining those qualifiers. Most of all, thanks for talking about psf deconvolution. That's a real blast from my past! A great video of a complex tangle of a subject.
There are two issues with using AI to "fill in the gaps." The first issue is the simple one of: how many gaps are being "filled" compared to solid data from the radio telescopes? The worse the ratio, the less reliable the gap-filling. The second issue is that AI models are extremely sensitive to training data, hyper-parameter tuning, and learning reinforcement. As the original team hoped to image a black hole's event horizon, it's not beyond the bounds of possibility that unconsciously they nudged their AI model toward a state where it would indeed output such an image. AI models can be powerful tools, but we're still in the very early stages of understanding how to use these tools properly.
Nice walk through! I did react to the "one image" reconstruction, but I didn't catch that the same group had made a similar previous claim. For me, the polarization data reconstruction was the clincher though; it is more structure and it is aligned with the ring structure.
Thank you so much for that review of the current expert thinking on the two black holes appearing in those images. As a member of the general public who knows nought point nought-to-infinity about any of this, I am inclined to lean toward the consensus of the larger group of experts, at least until we learn more. It's reassuring when Dr. Becky says that she leans that way as well.
Here's hoping we don't get a Sabine Hossenfelder video with a clickbait title. Some predictions... "Black Holes are B.S.", "Why Black Holes Aren't Science", "Black Holes are Failing", "Science is Dying Because of Ugly Black Holes"
Well done Becky. I'm not buying the PSF limitation argument, based on your report that they only analyzed a short duration of the data. If the full duration of the data is used, that enables "super resolution" effects to provide higher resolution than the PSF. However, it introduces time ambiguity: whether the brightness is really a full ring or a circulating blob. I'm betting the EHT team has a way to minimize that ambiguity.
The PSF argument is incredibly strong. It's difficult to understand the intricacies of image processing, but no matter what processing you do, you are still limited by the PSF of your observation instrument. An image with a ring the size of the PSF is highly suspect! Indeed, that looks like an Airy disc. Moreover, nobody would ever claim to have actually resolved a structure when it is a ring with the diameter of the Airy disc. Would you have believed an image where the ring is 1 pixel wide and the hole in the middle is also only 1 pixel? That's pretty much what is happening here if the PSF argument is correct!
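To make the PSF point above concrete, here is a minimal toy sketch (plain NumPy, with made-up sizes, and nothing to do with the actual EHT pipeline): blur a point source and a thin ring whose diameter matches the PSF width with the same Gaussian PSF, and see how little separates the two blurred images.

```python
# Toy illustration only: how similar do a point source and a PSF-sized ring
# look after blurring with the same PSF? All sizes are arbitrary.
import numpy as np

N = 128
y, x = np.mgrid[:N, :N] - N // 2
r = np.hypot(x, y)

psf_fwhm = 20.0                          # PSF width in pixels (made up)
sigma = psf_fwhm / 2.355
psf = np.exp(-0.5 * (r / sigma) ** 2)
psf /= psf.sum()

point = np.zeros((N, N)); point[N // 2, N // 2] = 1.0
ring = (np.abs(r - psf_fwhm / 2) < 1.0).astype(float)   # ring diameter ~ PSF width
ring /= ring.sum()

def blur(img):
    # convolution via FFT (periodic boundaries are fine for this toy case)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

b_point, b_ring = blur(point), blur(ring)
# relative difference between the two blurred images: small => hard to tell apart
diff = np.abs(b_point - b_ring).sum() / b_point.sum()
print(f"relative difference after blurring: {diff:.3f}")
```

The smaller that difference comes out, the easier it is for noise or processing choices to tip a reconstruction from one answer to the other.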
Another made-up image that was totally ridiculous was the one representing the vectors of the material spinning around the event horizon. Come on. That one was shameful.
Salient since day one is that the images have a special feature: a relatively straightforward algorithm will compress them, with extraordinary accuracy, down to very little. So the EHTC owes us well-reasoned, official numerical figures for just how little information the images actually contain. An appreciation of what that value means would then come from apps that let you explore continuous manifolds of images with the same information content, with controls initially tuned to the historical image. Ideally, the manifold-generating algorithms would follow directly from the reasoning behind those official figures. In the meantime I believe it fair to award the *Duchamp Pissoir Prize* to the *EHTC*, while noting the two cases contrast like opposite poles. *Congratulations, EHTC!*
Personal thought is that this particular subject and science in general has a phenomenal way of having rational discussion and debate. If the Japanese happen to be looking at the data wrong … boom! We have more scientists furthering their own knowledge in their chosen field and pushing us all forward with our collective knowledge of our universe ❤ Love it and cheers Becky for the video 😊
No surprise, that’s what “AI” is, intelligent it ain’t. Just regurgitates what it’s correlated from human sources. Hmm, wonder what happens when it gets so arrogant to quote itself…
What we can do is give the AI the result of incomplete data (basically simulate the telescope backward) and let it iterate until it matches the original, so they are not training it based on what they believe the data is; they are letting the AI play a guessing game until it can guess fairly accurately. I assume somewhere in the paper they have some number for their confidence in the accuracy. But ultimately these systems are guesstimation; you can never reach 100%.
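Here is a rough, self-contained sketch of that "guess until it matches" idea (a toy in NumPy, not the EHT code; the 5% coverage, square "sky", and learning rate are all invented for illustration): propose an image, simulate the sparse measurement by sampling its Fourier transform, and nudge the guess until the simulated measurement matches the data.

```python
# Toy forward-model fitting: fit an image so its sparsely sampled Fourier
# transform matches the "observations". Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 64
truth = np.zeros((N, N)); truth[28:36, 28:36] = 1.0         # pretend "sky"

mask = rng.random((N, N)) < 0.05                             # sparse Fourier coverage
data = np.fft.fft2(truth) * mask                             # the "observations"

img = np.zeros((N, N))                                       # initial guess
lr = 0.5
for _ in range(500):
    model = np.fft.fft2(img) * mask                          # simulate the instrument
    resid = model - data
    grad = np.real(np.fft.ifft2(resid * mask))               # descent direction
    img -= lr * grad
    img = np.clip(img, 0, None)                              # simple positivity prior

print("data misfit after fitting:", np.abs(np.fft.fft2(img) * mask - data).sum())
```

The misfit goes down, but many different images can fit the same sparse samples, which is exactly why the choice of priors and regularisation matters so much.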
@ Agreed, the quality of the training data is critical. As I recall the original team had several groups independently developing what they thought the image should look like and the “best” one was selected…
@@john-or9cf yeah all the teams came up with basically the same image. Also they trained the AI not just on images of black holes but also random images off the internet and stuff, so their algorithm would reconstruct images of like cats or whatever if they were in space and glowing in radio light... supposedly.
@@john-or9cf there is no AI. All these AI slogans are marketing to make money on an algorithm. Sell everything, call everything AI. People are dumb; that's why it works.
Star Trek original series did do a parallel Earth where Rome never fell.
And that was after it had already done alternate-dimension parallel universes.
Thank you Dr Becky, I see it's the thoughtful "chin rub" with quizzical expression today! Very nice. 🤔
Can Matt Black read that, and does it really matter that they're black holes?
It's not a real Dr Becky video until she says "nought point nought nought nought nought nought ..." :)
@@williamhoward7121 Noughts and gizintas!
So stupid
Has the EHT image analysis all come to nought? 0️⃣😁
First we have Sagan's "Billions," now we have Dr. Becky's "Noughts."
Love your thoughts on this
I think what Miyoshi, Kato & Makino did was a necessary thing. Someone should challenge the findings of the EHT. It has to be put to the test.
100%. If the results can't be reproduced it shouldn't pass peer review. The overlay from 14:43-14:50 looks extremely damning, it's pretty hard to believe that's a coincidence. There's already been too much academic fraud because peer reviewers relied on the authors of the paper being the "experts", which allowed obviously fraudulent results to pass peer review. This could easily become another massive blunder if they rely on the EHT team to pass peer review with results that can't be reproduced because they are the experts.
The discussion has to be had.
@@bobbygetsbanned6049 "If the results can't be reproduced it shouldn't pass peer review."
Did you miss the fact that a reanalysis of the data was already done by several other groups, and that their findings agreed with the EHT?
@@bjornfeuerbacher5514 ..my two cents.. cuz ya seem a bit snippy..
Bobby is MOST LIKELY in accord with you as their comment reads like: "100%.. [adds as a general statement]... Any science that gets published by any person/team should be able to pass peer review. [continues thoughts on overlay now and maybe DOESn't realize others have interpreted the data the same way]...."
@@cobra6481 I wouldn't take anything bjornfeuerbacher5514 too seriously. None of the roughly 20-ish comments by bjornfeuerbacher5514 are particularly insightful, and usually say "didn't you watch the video? Didn't you read the blog post?". Where you would say "snippy", I would posit that bjornfeuerbacher5514 is "pointless" and "unhinged".
So much respect for you taking the time to say, "Hey, this isn't my research field, I'm NOT an expert in this," before saying you'd tend to lean towards the EHT team. 🙌🙌🙌 thank you!
I liked that too. I did a synthetic aperture radar (SAR) project for my senior year in college over a decade ago, it touches on some of the same issues on signal integration and processing... Needless to say, the math is hard. I've forgotten most of it 🥺
YES!!! Particularly when it would be so easy for her to have done the opposite and say “As a black hole expert…”
In the documentary about EHT's process, one team created sets of test data, then other teams ran the data through their algorithms to reconstruct the image. One of the test images was a snowman, and they were able to successfully reconstruct it. It might be interesting to see what M24's algorithm does with that test data.
+
And in Google’s Alpha Go documentary you see them directly contradicting their own narrative about Alpha Go (that’s it’s not some general intelligence but just an algorithm that learned Go strategies). Just because they talked about it in a documentary doesn’t mean it’s true.
A snowman, LOL
@@zeitgeistx5239 This comment is tantamount to saying there is no such thing as AI because it's actually just algorithms. The concept of back-propagation that underpins a large section of AI research is an algorithm to assign weights and values to nodes. But this has created systems like ChatGPT and Alpha Go that are clearly more than just algorithms. No one has claimed they have achieved Artificial General Intelligence; you are creating a strawman and arguing in bad faith. You don't understand what you disagree with.
Maybe it's just me, but I totally love your 'bullet points'. Classic presentation training from when I was much younger - "First tell them what you're going to tell them. Now tell them. Now tell them what you just told them".
Oh, and I would REALLY like to see an Earth-sized disco ball - thank you for putting such an odd thing in my head.
Downside - if it was solid it would have so much mass it would instantly collapse into a sphere…
@@Penfold101 Not necessarily. There are stronger materials than rock.
"First tell them what you're going to tell them,. Now tell them. Now tell them what you just told them"
I hate this saying.
Are people that stupid that they need to have a presentation in such a dumbed-down fashion?
@@nobodyimportant7804 Depends on the context. Particularly with longer, information-dense presentations, it's often helpful just to get people's expectations of what they're going to get from it straightened out at the start, so that they aren't sitting there wondering "are they ever going to talk about X...?", which would distract them from what you actually *are* talking about. And then at the end, people often appreciate having a quick reminder of the key points of what they heard, so that even if some of the in-depth stuff went over their heads, at least they can be confident they've learned *something*. I've left many a lecture with basically just the end summary written down in huge capitals, apart from a few notes about specific bits that I really wanted to capture, so that I could sit back and get the overall flow of connections between ideas while the speaker was talking, and then afterwards go away and read up on the details at my own, slow-thinking pace. While doing that, I would recognise bits of what the speaker had said, and gradually a proper, complete understanding would come together in my head.
@@nobodyimportant7804 When you're talking to a roomful of people, many of whom are probably not invested in listening too much, it works :)
Dr. Miyoshi is also an expert in VLBI techniques. His work was highlighted in the Scientific Background on the Nobel Prize in Physics 2020: "Theoretical Foundation for Black Holes and the Supermassive Compact Object at the Galactic Centre" by the Nobel Committee for Physics (available on the Nobel Prize website). The ongoing debates on the accuracy of black hole imaging make this an exceptionally fascinating topic!
But his analysis is incomplete, only using one night's data, and it makes a false claim about the assumption of the shape.
Also, no mention of the polarization.
That doesn't bode well for his side of the argument, does it?
Lay person here. I too would like to understand how using one night's data is viable for refuting the original Author's conclusion. If the earth's spin yields more data from varying viewpoints, that seems intuitively logical to my brain (he says, bravely).
@@antonystringfellow5152 The EHTC rebuttal makes no mention of "one night's data". Dr Becky said that, but the bullet point on screen at the time does not, nor does the actual web log post if one goes and reads it directly. _There is no_ "one night's data" rebuttal and _no claim that Miyoshi did that_ . The EHTC's actual rebuttal is that the EHTC produced a whole range of results with differing methodologies, and only 2% of them did not show a ring, whereas M24 produced one result with one methodology. The closest that it comes to any "one night" argument is _another_ bullet point that points out that if one observes Sagittarius A* for 12 hours, the fact that it can vary over the course of a mere 10 minutes needs to be taken into account, and M24 did not do that.
Origin is probably (0,0) in the donut center, meaning this is probably an FFT data compression artifact.
ML reconstruction could be faulty. ML may be biased to generate what they want to see.
Filling in missing data with AI trained on what you ‘expect’ to be there seems pretty biased towards your expectations doesn’t it?
you clearly missed the rebuttal to the „expect” argument
You could just train it on all plausible versions of what to expect, including a blob instead of a ring.
The first result looked like the image of a cat, but was then retrained.
Artificial intelligence = Genuine stupidity.
@@samuelgarrod8327 Please let's be careful here. While yes, no AI concept actually is "intelligent", there are very different AI concepts at work. The one that is genuinely stupid for most applications is "generative AI", aka Large Language Models (LLMs), where you issue a prompt and it derives the statistically most likely response from a massive model of inputs. I would *hope* that all this black hole imaging effort doesn't use *that* kind of technology.
As a person working in machine learning for over a decade, I can confirm that this type of problem, which we call data imputation, is very complicated. It depends on the machine learning model you use to fill the gaps, and the proportion of usable data you have in the first place. In the TED talk snippet you showed, it looked to me as if the proportion of sky covered was pretty small compared to the gaps. The smaller the proportion of original data, the more uncertainty you have in the values you fill in the gaps.
Then you need to think about the location of the observatories used: where are they located? Do they cover a representative sample of space, or do their locations bias the data collection in some way? I'm not an astronomer, but the fact that we know there are clusters and superclusters of galaxies means to me that matter is not distributed randomly. If we combine that with non-random locations of observatories, the combined sample of space could be non-random, i.e. biased towards more donut-shaped structures. The machine learning model used would likely pick up on this when filling the gaps, leading to the artifacts that the Japanese team claims.
Another tricky aspect is the choice of which gap to fill first, because the order plays a crucial role. To avoid this you need to repeat the gap filling process many times (multiple imputation), each time starting with a different gap and randomising the order. Then for each gap you average over all the runs. The question is, how many runs do you need? Again it depends on the proportion of gaps to raw data, and the total number of gaps. The number of runs required can be huge, and that costs money. If you stop too soon you may induce bias in the values you put in the gaps.
Anyway, I thought you presented the topic very well even though it’s not quite your field of expertise!
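A toy illustration of the imputation point made above (generic NumPy, nothing taken from the EHT analysis; the sine signal, noise model, and function name are all invented for illustration): fill gaps in a simple 1-D signal many times with randomised draws, and watch the spread of the imputed values grow as the missing fraction grows.

```python
# Toy multiple-imputation sketch: more missing data -> more spread in the
# values we invent to fill the gaps. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * t)

def impute_spread(missing_frac, n_runs=100):
    missing = rng.random(t.size) < missing_frac
    runs = []
    for _ in range(n_runs):
        filled = signal.copy()
        # naive imputation: interpolate the gaps, then add noise scaled to the
        # scatter of the observed points (a stand-in for model uncertainty)
        filled[missing] = np.interp(t[missing], t[~missing], signal[~missing])
        filled[missing] += rng.normal(0, np.std(np.diff(signal[~missing])), missing.sum())
        runs.append(filled)
    return np.std(np.array(runs), axis=0)[missing].mean()

for frac in (0.1, 0.5, 0.9):
    print(f"{frac:.0%} missing -> mean imputation spread {impute_spread(frac):.3f}")
```

The qualitative point is the one made above: the filled-in values are only as trustworthy as the surviving data around them, and that trust degrades quickly as the gaps grow.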
Yes; this. The AI component of the EHT is the least mature technology I know of in the process and thus the most likely source of errors.
I wonder if they could get a real world data set, with a comparable level of completeness, detail and resolution, but of something that we directly image much better and see if the process produces a correct image?
It's like they used a pinhole video camera to photograph a living room. They got a bit of green from the sofa, a bit of brown from the armchair and a bright flash from the window. Then they used a computer with a catalog of living room pictures to find a best match. There, that's the living room. Better to say this might be what the living room looks like.
I am actually really surprised that they chose to use machine learning for this. Typically, there are algorithms that you can use to reconstruct missing data, if you make certain assumptions. But in that case those assumptions are well-defined. With AI, you typically don't know what the model is doing or why.
And the difference between this and simply making shit up is what exactly?
@@hansvanzutphen They used multiple different reconstruction techniques, of which only one used machine learning, with the others being more traditional methods. There were other contingencies which one can read about in the paper or their talks, but in short they were very careful with the use of AI.
The key fact that comes out at me is that EHT is running at the limits of its resolution. Therefore any detail in the image may be an artifact. The frightening thing about this is that they tried to bridge that gap with machine learning, which coerces results into known patterns instead of admitting that it is unsure.
Agreed. The last few years have made "machine learning" a black mark against anything that uses it. It is too easy to let these models hallucinate, and impossible to prevent it.
"The frightening thing about this is that they tried to bridge that gap with machine learning, which coerces results into known patterns instead of admitting that it is unsure."
That is a _huge_ oversimplification of what they actually did do. Did you watch Becky's video? Did you read the blog post by the EHT team?
In the first place, there is no light interference, therefore the interferometer does not exist. It is just a name used to fool all of us.
"...instead of admitting that it is unsure." That's not a binary. Data can be biased toward known patterns, AND the authors can (and should) analyze (and usually quantify) the resulting uncertainty. Nothing in science is sure except the existence of uncertainty. In my field (neuroscience, now retired) that usually meant deriving p values from the appropriate statistical tests. In physics it seems to be more about sigma. I don't have a clue how they analyze radio telescope + machine learning data like this, but it's scientifically meaningless if they haven't analyzed their own uncertainties or addressed them in some way. I think the heart of this criticism and response are the assumptions each team is making. I have to say I think the EHTC seems unfortunately dismissive of the criticism against their work.
I agree that "EHT is running at the limits of its resolution. Therefore any detail in the image may be an artifact." That's probably their greatest weakness, but even so they should be able to analyze the odds of artifact (amplified by AI) vs. truly representative imagery. They seem to have done that by using numerous different methods and independent analyses to verify the ring-like shape, but I get the impression that these methods are all novel, so what's the baseline for reliable validation other than selecting methods that yield a consistent result that makes sense to them, which might more accurately reflect a flawed assumption?
@@defies4626 statistics have been used to "fill in gaps" in scientific observations for a very long time. So first, a "valid" number of results ("sample") must be gathered before coming to a valid conclusion in any study.
Deciding what constitutes a valid sample becomes key.
The correct statistical method for the test is also vital. What we have here is two teams arguing about the statistical methodology.
NRAO holds a "synthesis imaging" summer school (or at least used to). The textbook they have for that course is, well, "dense", and something like 600 pages. Using interferometry for image synthesis is not for the faint of heart. I will note that similar techniques are used to produce images from CAT scans, and I think that in the very early days of radio interferometry and CAT scans there was a lot of "cross-pollination" going on.
Interferometry synthesis - not for the faint of heart.
Gosh, theres a data point.
About 30 years ago I was working on trying to use radar information to get images of aircraft for identification purposes. We found that what you call the PSF results in smearing, The data from a single input point is spread over many output points. And there is no way of reversing the process, the information is lost and no algorithm can recover it. On noisy pictures image processing tends to give inconsistent results and have an alarming tendency to produce artefacts that were not in the original image. I suspect this is why they tried machine learning. But that cannot recover the lost data. In very crude terms, machine learning takes lots of data and keeps making guesses until it finds one that meets the fitness requirements. It can never tell you why it works or that it will work on new data. It is also very dependent on the data provided and the data set size. The images must be regarded as an educated guess at best.
Well: IF you HAPPEN TO HAVE a "PROVEN MODEL" of what produced the original data, you may well matrix-divide the raw image by the wave functions of the proven model to get a highly beautified and sharpened image of the originating object. If, on the other hand, you have NO "PROOF" AT ALL for the model you assume, then what you get is at best one of a multitude of possible hypotheses on what could have caused the original data.
If you actually test the generated images against the most basic assumptions of the models you put in - for example that the black hole's accretion disc must be aligned with the galaxy's disc - you are immediately forced to dismiss those "images of black holes".
The PSF affects the original image via convolution, which does have an inverse. Of course, you still have to be careful about noise, but it is in theory possible to get the (unique) original image back.
@@HunsterMonter Am I understanding correctly that the original image is essentially a very low resolution pixelated image? Something like maybe 5x5 pixels for the entire black hole, potentially with pixels missing?
@@TheRealWormbo I don't know the exact resolution, but there were enough pixels to get a good image. The problem was that the "mirror" of the telescope was broken into tiny bits scattered around the globe, so there were missing spots in the image. This is why they had to use ML, to fill in the gaps and get a full picture. The PSF is something else that they had to account for. If you look at an image by Hubble or JWST, you will see spikes around stars. These are diffraction spikes. They look pretty on pictures, but if you are trying to do science, they get in the way. Every single telescope has a unique diffraction pattern, and mathematically, it is possible to correct for that effect and get the image without any diffraction spikes just from the geometry of the telescope.
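For anyone curious what "correcting for the PSF" can look like in the idealised case this thread describes, here is a minimal Wiener-style deconvolution sketch. It assumes the PSF is known exactly and the data are already a filled-in image; real interferometric imaging is far messier, and the function name and numbers are made up for illustration.

```python
# Toy Wiener-style deconvolution: divide by the PSF in Fourier space, damping
# the frequencies the PSF suppresses so noise doesn't blow up.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Regularised inverse filtering with a known PSF (illustrative only)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# tiny usage example with a made-up Gaussian PSF and a rectangular "source"
N = 128
y, x = np.mgrid[:N, :N] - N // 2
psf = np.exp(-0.5 * (np.hypot(x, y) / 4.0) ** 2)
psf /= psf.sum()

truth = np.zeros((N, N)); truth[40:44, 60:90] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
blurred += np.random.default_rng(2).normal(0, 1e-3, blurred.shape)

recovered = wiener_deconvolve(blurred, psf)
print("rms error of recovery:", np.sqrt(((recovered - truth) ** 2).mean()))
```

The `noise_power` term is the whole game: set it to zero and noise at the PSF's suppressed frequencies explodes, which is the "be careful about noise" caveat above.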
@@TheRealWormboI don’t think so, I think data is extracted from various images from different telescopes of various resolutions. Then that is fed into a image reconstruction method, not too dissimilar to that of photogrammetry, combined with AI.
When I watched the original presentation on this, I did question whether the machine learning might have been fitting the data to the theoretical expectations of what the black hole was supposed to look like. I don’t have enough details, or understand the processes or mathematics used in the analysis, but I do have a background in machine learning so I know this is a possibility. I’m glad these researchers questioned the results and am very interested in hearing about the final verdict on this.
I had similar thoughts when the first images were coming out, and I questioned the data fitting as well, this point about it being within the resolution error of the imaging wasn’t presented. IIRC, if you watch the TED talk that Dr. Becky refers to the researcher talks about exactly this and doing their best to make sure the ML wasn’t giving them what they wanted to see. I’m still skeptical.
Just remember there were a few million at stake behind it in funding.
@@daveherd6864 that’s true of the counter paper as well, pursuing science costs money. You can’t discount it as an influence, but money influences nearly all of our actions in society, so I wouldn’t put a huge weight on it. They would have been needing money if they’d been pursuing some other thing in science as well.
17:07 shows multiple citations of independent analyses confirming their results.
Adversarial peer review is at the fucking soul of science. This is exactly what should happen!
Much of what the cabal puts out seems like BS at first glance, but we're supposed to accept it. The dumber ones are the first to understand it.
In a camouflage class in Basic Training, the instructor asked if anybody can see his tank parked in the woods way over yonder. There were dumbasses talking to people like they were talking to dumbasses. "You can't see it? It's right there!"
The instructor was smiling.
There was no tank in the woods.
How many contemporaries Peer Reviewed Einstein?
Hint: The number is less than 2 but more than -1.
A true cornerstone of science.
How many contemporaries peer reviewed Einstein?
How many were men?
Ya, baby... what's THAT tell ya?
What’s especially encouraging is that the EHT data is available for others to analyze, inviting contributions from different perspectives. Over the years, as other scientists scrutinize and build on this data, we can hope to see a clearer, more comprehensive picture of black holes and their surrounding environments. Disagreement, in this case, isn’t just welcome-it’s essential for advancing our understanding of these fascinating cosmic phenomena.
One of the rebuttal points is that they didn't release the uncalibrated raw data
To me, that should be the first, not the fourth point since the other points could then be addressed independently.
Using ML trained on what one would expect as output also seems a little off to me, for the reasons of bias already stated.
As someone who writes software for fun in retirement after decades of getting paid to write software, I looked for the raw data to process right after watching this video, and I was unable to find anything that even remotely looks like unprocessed raw data.
I should have mentioned that on the plus side I did find the github repository EHT released.
@@nil2k I'm not sure if uncalibrated is the same as unprocessed in this case, but neither are the raw data.
I'd imagine that dealing with the raw data would be extremely difficult, since that would mean the raw data from each observatory before they do their own calibrations, then have to mesh all those into the 'disco ball' model.
With those data sets, the other 3 points could be tackled separately.
The number of people who can really assess this debate is obviously very small, and I'm not, even in the most infinitesimal way, capable of having even the dumbest of discussions with any of them.
Disclaimer out of the way, I was fully expecting these results to be challenged, and would have bet money that their images aren't "right".
Reason being that, after listening to the EHT team in many discussions, listening to their explanations of the process, and watching their documentary including the process of deciding on the accurate final images, I had the distinct impression of a culture that sent all kinds of signals to my intuition that they were...how to say it... high on their own supply? I felt like they demonstrated that they were extremely cautious in their analyses, but somehow it felt like box-checking. Like they had a sense of inevitability in the result. Like the human ego just isn't capable of being detached enough from something so precious to one's core identity. The sheer scale and effort of the project is just overwhelming. I saw that they recognized this and knew it was a massive danger - as any truly good scientist does - but I just got the sense that there wasn't anyone strong enough or oppositional enough to not walk down the predestined path to the predicted result.
Once I saw that image processing included using what is basically very complex confirmation bias to derive the images (telling AI what it "should" look like) I just couldn't have a lot of confidence in it.
I'm highly likely to be wrong, but my life experience has been that when I get these kinds of intuitions about expert positions on things, I end up being right way more often than I have any business being.
Very curious to see how this plays out.
Very good points. The risk of human biases and behaviours not being accounted for here is significant, with groupthink being one of the possibilities you highlighted.
Back in the day, the most common way to make complex maps from radio interferometry data was with Hogbom Clean. But there was no way to estimate how reliable it was. Then came maximum entropy - I used it with both radio interferometric and optical data, but once again, how reliable was it? Now we have "machine learning", and the same questions keep getting repeated.
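For readers who haven't met Hogbom CLEAN, here is a bare-bones toy version of the loop mentioned above (the dirty beam is the array's PSF; production CLEAN implementations add many refinements this ignores, and the function name and numbers are invented for illustration).

```python
# Toy Hogbom-CLEAN-style loop: repeatedly find the brightest pixel in the
# residual image and subtract a scaled, shifted copy of the dirty beam.
import numpy as np

def hogbom_clean(dirty, beam, gain=0.1, n_iter=200, threshold=1e-3):
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    by, bx = np.array(beam.shape) // 2
    for _ in range(n_iter):
        iy, ix = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[iy, ix]
        if peak < threshold:
            break
        model[iy, ix] += gain * peak
        # subtract the beam centred on the peak (clipped at the image edges)
        y0, y1 = max(0, iy - by), min(dirty.shape[0], iy + by)
        x0, x1 = max(0, ix - bx), min(dirty.shape[1], ix + bx)
        residual[y0:y1, x0:x1] -= gain * peak * beam[
            y0 - iy + by : y1 - iy + by, x0 - ix + bx : x1 - ix + bx
        ]
    return model, residual

# toy usage: a point source observed with a Gaussian dirty beam
N = 65
yy, xx = np.mgrid[:N, :N] - N // 2
beam = np.exp(-0.5 * (np.hypot(xx, yy) / 3.0) ** 2)
dirty = np.roll(np.roll(beam, 10, axis=0), -5, axis=1)   # source offset from centre
model, resid = hogbom_clean(dirty, beam)
print("brightest model pixel:", np.unravel_index(np.argmax(model), model.shape))
```

The reliability question raised in the comment above is exactly about knobs like `gain`, `n_iter` and `threshold`: the algorithm always returns *some* model, and quantifying how much to trust it is the hard part, whether the deconvolver is CLEAN, maximum entropy, or machine learning.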
North Cascades - I've been there. Very nice (and remote) park.
So HOW RELIABLE were any of the methods? You can't criticize the reliability of the methods if you never assessed whether they were false. And there are arguments for why they should be 'roughly good'.
I used to process seismographic data. Deconvolution is an issue in that field as well. It's an important concept. Is it big in your type of astronomical research? Perhaps you could do a video about that. Kudos to you for discussing methodology at all.
Becky - The issue here is that the EHT resolution is at the ultimate limit for a small emitter like SGR A*. AI techniques can extend that resolution, but the results start to take on some risks, such as hallucinations. Both teams have to resolve that problem, and it is not really such a surprise that they have different results. It's kind of like the popular images of the "face on Mars" several years ago. The shadows in that Mars image made our human vision detect a "face", because we are naturally adapted to see faces, especially eyes, even if the actual image is distorted, blurred, and noisy. The face turned out to be an optical hallucination when better resolution images were available. In this case for SGR A*, I suspect we will have to get more data to resolve the image better. In the meantime, I have to place more trust in the larger team. More team members should be better at finding and correcting defects.
Mars face: I particularly remember the random black dots from radiation, one of which mimicked a nostril very convincingly. 😅
I'm not sure how you can classify an optical image taken with a camera with no manipulation as a "hallucination". None of this data was made up. The geography was there. The shadows were there. They happened to combine in such a way in that image to look like a face. If you were to image the same area again under the same conditions you'd get the same shadows and you'd still get an image that looks like a face.
@@stargazer7644 The point I tried to make with the comparison was that the so-called face on mars was a hallucination by our neural networks for vision. Our neural networks for vision are biased in that way. It happens to work often enough that we accept it. The analogy is with the AI network made with computer models of artificial neural networks that analyzed the EHT data for SGR A*. The other part of the analogy is that the resolution of the face of mars was figured out later with more data. I suspect the same will occur with SGR A*.
As to trust, I don’t distrust either group. Large and small teams both have benefits and weaknesses due to size.
Is that true? I mean, if a lot of people want something to work out, because they worked on it for a good part of their career, they definitely will be biased.
Thanks Dr Becky for an interesting presentation. I've worked now and then in radio interferometry and have a healthy respect for the difficulties of phase calibration, which are particularly severe in VLBI, when one doesn't have the luxury of having all the antennas directly wired up to a common phase standard. I'd love to have time to play with this data myself instead of what they presently pay me for, which is trudging with an Augean-sized shovel and broom through acres of other people's crap x-ray code. Wisely, with the Fourier-innocent public in mind, you omitted mention of this particular F word, but I don't have any popularity to worry about so I can mention for readers that what an interferometer measures or samples is the Fourier transform of what is on the sky. I'd be interested to fit directly in the Fourier plane to compare the likelihood of a ring-shaped versus a condensed object. I have to say also that I think a machine learning approach is quite wrong for this application. People throw ML at everything just now without stopping to consider whether such an approach suits the problem or not. It's not a magic wand and indeed it leaves one open to just such criticism as by Miyoshi et al, because sure you've generated your nice picture, but you can't exactly explain how, because it is all hidden in the ML black box.
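A quick sketch of that "fit directly in the Fourier plane" suggestion (illustrative numbers only, not a fit to any EHT data): a thin ring's visibility amplitude follows a Bessel J0 profile with nulls, while a compact Gaussian falls off monotonically, so the two model classes can in principle be told apart on long baselines. The diameter, Gaussian width, and baseline range below are assumptions chosen only to be in the right ballpark.

```python
# Compare the visibility-plane signatures of a thin ring vs a Gaussian blob.
import numpy as np
from scipy.special import j0

uas = np.pi / 180 / 3600 / 1e6            # one microarcsecond in radians
d_ring = 50 * uas                          # assumed ring diameter (~Sgr A* scale)
sigma = (50 / 2.355) * uas                 # Gaussian with a similar FWHM

u = np.linspace(0, 1.2e10, 500)            # baseline length in wavelengths (~Earth-size at 1.3 mm)
vis_ring = np.abs(j0(np.pi * d_ring * u))  # thin ring: J0 profile with nulls
vis_gauss = np.exp(-2 * (np.pi * sigma * u) ** 2)   # Gaussian: monotonic fall-off

# the baselines where the two models differ most are the discriminating ones
k = np.argmax(np.abs(vis_ring - vis_gauss))
print(f"largest model difference at ~{u[k]:.2e} wavelengths")
```

Whether the real, noisy, sparsely sampled data can actually separate those two curves is of course the substance of the disagreement between the two teams.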
I did enjoy seeing this discussion play out.
I have one simple question: have they tested their observation and processing algorithm on normal stars? To check that a normal star does not produce a donut-shaped image as well...
Exactly my thought.
Well, since the SMBH's theoretical ring is so massive, maybe not a single star, but a cluster or a glowing dust cloud, or something else more comparable to the accretion disc.
Yeah, taking the produced image and reversing the PSF, will that result in the raw data picked up by all sensors? I guess they have both defined and checked the PSF by using nearby stars and comparing the sensor data with "normal" telescope image data.
Good question. I believe that is how they "trained" their artificial intelligence part of the work. They looked at known objects and the associated combined telescope data to teach the algorithm.
That's exactly what they did in the Japanese paper (not with a real star but with simulated data), and it does make a donut. Fig. 10
It's a simple question with a hard control: If you want to check it with a "normal" star then you have to apply these techniques to a star that has the same radio "brightness" as these black holes.
I'm not an expert, but I have to imagine that stars with that kind of radio brightness probably exist, but they are so hard to separate from background radio noise that you would have to identify such a star by other techniques. THEN you have to apply the same radio collection strategy as they did for the black hole (lots of nights of capturing data at all of the same telescopes) to determine whether you would generate a similar image to the black hole image they generated, or a simple blurred spot. Probably not an easy experiment to replicate, but it probably will need to be replicated to further prove that the results of the original study should still stand.
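The null test this thread is asking about looks roughly like this in toy form (a stand-in pipeline in NumPy, not the EHT's; the blob size and 5% coverage are invented): push a plain Gaussian blob through the same sparse Fourier sampling and a naive reconstruction, then check whether a spurious central dip, i.e. something ring-like, appears.

```python
# Toy null test: does a non-ring source come out looking ring-like after the
# same sparse sampling and a naive reconstruction? Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
N = 64
y, x = np.mgrid[:N, :N] - N // 2
r = np.hypot(x, y)

blob = np.exp(-0.5 * (r / 5.0) ** 2)                  # the non-ring test source
mask = rng.random((N, N)) < 0.05                       # sparse Fourier coverage
dirty = np.real(np.fft.ifft2(np.fft.fft2(blob) * mask))   # naive reconstruction

centre = dirty[N // 2, N // 2]
ring_mean = dirty[(r > 4) & (r < 6)].mean()
print("spurious central dip (ring-like)?", ring_mean > centre)
```

A serious version of this test would of course use the actual reconstruction pipeline and realistic telescope coverage rather than a random mask, which is essentially what both teams claim to have done with their own simulated data sets.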
EHT’s responses are unsigned blog posts. Credit to Miyoshi, et al, for signing their rebuttal. Are anonymous responses standard in this academic community?
It is also odd for EHT to claim they have provided detailed descriptions of their methods, while Patel’s paper notes the scripts for image postprocessing and generation are not available. The 2024 EHT response does nothing to enlighten us on their methods, choosing instead to critique Miyoshi. Those critiques may be fair, but EHT’s scripts and methods remain opaque.
Unsigned can be assumed the entire team. It's not that unusual.
Thank you. It is helpful to know the usual practice. The response certainly appears to be speaking for the entire team, but as you say, that’s nothing more than an assumption.
When they used AI with simulated data to try and fill in the missing data, that is the point that set off red flags for me! The AI is no better than the data that it learns from, and if they use simulated data then it is only what the team wanted it to be! Not "real" data!
Machine learning is not gen AI.
As I expected, they just introduced the conclusion in the premise.
The EHT team has a huge financial interest in putting out a positive story. Their big size does not give them more credibility.
I do astrophotography myself with off the shelf amateur equipment, and if the resolution isn't there, then the resolution isn't there to produce an image.
So I have been hesitant about this method ever since I saw the first images back in 2019. You can't just say "our models do take it into account" and walk away. The smallest detail any telescope can resolve is easy enough to calculate, and if the event horizon is smaller than this theoretical minimum, then you simply have nothing to show, in my honest opinion.
These images look like they should be right, but that's just an assumption based on our best models, not on actual data. The resolution just isn't there and that's what Myoshi et al claims correctly from my amateur astrophotography POV.
Agree, hence aperture fever being a real thing!!
You do understand that photography is vastly, vastly different from interferometry right...?
@@monochr0m Resolution is still resolution. Nothing changes there in how much detail you can resolve. Be it radiotelescopes, mirrors, refractors or whatever.
Even if they had a telescope the size of Earth without the need for interferometry, they still would not have enough resolution to resolve an object the size of M87's black hole or Sgr A*. Simple as that.
To circumvent this fundamental resolution problem they throw their models into the equation and "simulate" the end result based on the low-resolution data they actually gathered.
I had to explain the "Airy Disc" and diffraction through a circular aperture to our student volunteer the other day. We were trying to make observations of 3C348 during the day, but the Sun was contaminating our side-lobes too much to succeed. Actual instruments diverge from the theoretical quite a bit, because real instruments aren't perfect. Incidentally, one of the versions of the CLEAN algorithm was invented by a former colleague of mine when we were both at Nortel, 17 years ago. David Steer and I filed a patent together (on some wireless security stuff--unrelated to his grad-school work).
Nortel, 17 years ago? Which site? (I was at Harlow. =:o} )
@@therealpbristow Mostly at the Carling, Ottawa site.
Just my thoughts, but reporting the EHTC's rebuttal posted on their website is problematic. The Japanese group published their rebuttal in a peer reviewed journal. Only counter-rebuttals from the EHTC group that are likewise peer reviewed should be taken seriously, and not ad hoc posts from their website.
The massive problem with any machine-learning algorithm is that you're training it to produce an expected result. Therefore, you should NEVER use machine learning to produce an image of something hitherto unseen using noisy data, because all you're going to get is what you expected to see, not what's there. Or, to quote the EHT's own comment, "ring-like structures are unambiguously recovered under a broad range of imaging assumptions". In other words, 'we definitely got what we expected to see'. As for EHT being more likely to be right because they put more time and effort in, well, when you put in that much time, effort and money, there's a HUGE pressure to produce something impressive.
I agree. That was my feeling too. Those algorithms are "interpolation" tool. You should not use them for "extrapolation".
They're programmed with the same inherent weaknesses we have. The biggest being confirmation bias.
Did you forget about the polarization?
It’s been a while since I watched the talk etc, but I thought they used training data primarily of non-black hole images; cats, bridges etc….
@@antonystringfellow5152 I can't see how that would help, as it would suffer the same noise, distortion, and gaps in coverage as the radio data.
Please take a moment to recognize that Becky put together this detailed video explanation of the disagreement, including an overview of the published work by both research teams and uploaded it to HER channel, yet still had the INTEGRITY to point out that she's NOT an expert in radio astronomy interferometry (herself) , so she can't offer an informed view.
Massive kudos to her for doing this!
IF ONLY all science communicators (and journalists) were as honest and humble...
Becky you're exemplary! Please keep showing us how it should be done. ❤👍
If you use some process trained on models to edit your image, it is no surprise the image ends up looking like the model. It is also odd that there is no edge-on angle in either image; you can see the full center.
Correction on your last point: the apparent angle can be explained by the way the light from the accretion disc gets bent directly around the black hole. Look at the artistic renderings/simulations to see what I mean - one can see what is directly behind the black hole, even if looking at it directly from the plane of the accretion disc.
Your "what do you think, Becky" discussion was the best part of a very good video. Thanks for your effort!
I love how groups studying the same phenomenon, coming up with differing solutions, are like WWF pro wrestlers talking smack about their rivals. But instead of power slams and suplexes, they use research and data, and instead of shouting their spit into a microphone about how they're gonna totally own, they are publishing research papers.
I mean in the end, there is still professional respect, and everyone is playing by the rules. But in either case it is still a lot of fun to watch.
I'm glad these images are coming under some more scrutiny now, I'm no expert but the whole methodology especially the use of machine learning always made the final result seem way too good to be true, and very much dependent on many possible biases that could be manipulated to produce a desirable result which is more recognisable to a layman as "black hole-y" and therefore more likely to become big mainstream news.
NZ, happy for the update! Great breakdown on this...
Yes! This kind of deep explanation of a controversy is some of the most valuable content on YouTube. It reminds me of my very favorite videos of yours, explaining the historical process of figuring out some of the things that are now considered basic to modern Astronomy (and I'd really love to see more of that - and see other people in other scientific fields emulate it). One of the best ways to really build lay understanding of these things.
7:34 - Hang on... on the top of that hill on the left, surely that's a TIE Fighter?!
For the first time, I understand the meaning and origin of an artifact. There are artifacts on x-rays and sometimes MRIs, I suppose, that can lead to diagnoses that are wrong. I did learn that, but I didn't understand how it could happen. Your explanation makes that very clear. Wow, you're a very good teacher. I thank you.
I'm only on 2:22 in the video, and my photography brain has kicked in: in some cases, if you edit an image with one step being done in a different order, you will come out with your image being totally different. While this obviously is more complicated than just editing a photo, I imagine the concept could be applicable.
Edit: at 19:07 in the video, EHTC mentioned my exact thought. For me, sometimes I have had this happen when the overall photo is dark to the point of being nearly black and has low contrast. You do one particular edit before another, and it will somehow pop with more contrast, while flipping the order does nothing to improve it.
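A tiny numerical version of that order-of-operations point (arbitrary values and made-up function names, just to illustrate): a contrast stretch and a black-point clip do not commute, so the same two edits applied in a different order give a visibly different result.

```python
# Two image edits that don't commute: stretch-then-clip vs clip-then-stretch.
import numpy as np

pixels = np.array([0.02, 0.05, 0.10, 0.40])    # a very dark "image"

def stretch(p):                 # simple gamma stretch to lift the shadows
    return p ** 0.5

def clip_black(p, floor=0.2):   # crush everything below the black point
    return np.where(p < floor, 0.0, p)

print(clip_black(stretch(pixels)))   # stretch first: faint detail survives
print(stretch(clip_black(pixels)))   # clip first: faint detail is gone
```

The same logic is why reconstruction pipelines document (and argue about) the exact order of their calibration and imaging steps.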
I'm skeptical of the EHT image because of the AI model used to recreate the image. We humans are fallible and biased, and we pass this on to the AI models we train, so the team who was looking for the black hole and WANTED to get a good image of it also trained the AI model for it. Which makes it biased, since it's a model designed to create black hole images.
I think if they had double-blinded the "experiment", with the people who designed the model not knowing what was going to be imaged, the result would have been a more faithful model.
I'm glad other people are taking a look into this data, and I hope we can soon have a better understanding of it.
so excited for a new video!! watching a new video by dr becky is probably one of the best things to do on a Thursday evening:)
Great perspective on how the scientific method works, especially with big collaborations.
Simple solution is for the original imaging team to provide the process they used to generate the image. If the image can't be generated by a "competent" independent group/team, then the original image is probably a fabrication. If the image can be generated, but violates the tolerances of the equipment and/or what can be deduced from the data, then the image is probably a fabrication.
I love science because it decodes reality rather than just insisting on a belief. This back-and-forth collaboration is argumentation at its best. This is how we decode reality. It's so much better than just forming something in your imagination and declaring that to be what is. Even if we run into a disappointing failure, we can take comfort in knowing that it is wrong and continue to work at finding what is right.
I remain skeptical about any AI-generated results. AI can be good at getting patterns out of the data, but the patterns have to be independently verified, apart from the AI.
AI might be good at inventing new drugs, but the results need to be verified.
AI might be good at generating images, but I can get AI to generate any image that I want, apart from reality.
My very passing understanding was that the algorithm wasn't guided to produce an image that matched the expected image, so the fact that it _did_ match is a mild sort of independent verification.
ETA: If I give a generically-trained algorithm a blurry image of what I believe to be a Dalmatian (due to independent evidence other than the blurry image), and it produces an image of a Dalmatian, that feels meaningful. Could it be a coincidence? Certainly, but that doesn't seem particularly likely.
I was much more inclined to believe the machine learning (I'm not calling it AI) result a few years back, when I understood less of how it actually works, and how unreliable it can be.
It's incorrect to think of this machine learning as the same kind of image generation AI that's generally available. The principal algorithm they use is called PRIMO. They also use CNNs. The main thing here is really that the algorithms are designed more for reconstruction/recognition, not generation.
@keithnicholas Do you know of anywhere I could read more about the ML specifically?
@@haydon524 I responded with some links, but seems to be filtered out, you can look up the paper "The Image of the M87 Black Hole Reconstructed with PRIMO"
My 10c... that you have the ability to get data to see anything at all is good enough for me... because it's better than we've ever had before.
It was a toss-up, I think, between whether you or PBS Spacetime would get to this one first. Looks like you are the winner.
Are you a teacher?
It's exceptional how good you are at translating information into understanding
Good work covering this story. There were a few details in there that I hadn't heard before.
A little puzzled by some of the comments here though. I'm not sure these people watched the same video as me as they only seem aware of the fact that the image was enhanced using AI. It's like they never heard the rest of the story.
Strange.
Because that image was being sold as a photograph. Not something that AI assisted in creating.
That is a massive difference.
In fairness, even if you watched this whole video you won't have the whole story. When Dr Becky is giving that "one night's data" explanation at 18:42 that's not what the bullet point on screen is saying, nor what the actual web log post says if one goes and reads it firsthand. In fact that part of the EHTC's rebuttal is _not that at all_ , but rather that the EHTC produced a whole range of results with differing methodologies and only 2% didn't come up with a ring, whereas M24 only produced one result with one methodology.
They didn't use AI to "enhance" the image, they used it to extrapolate data points from miniscule amounts due to not enough observatories. AI is useful for interpolation, but terrible for extrapolation. You will always get what you expected to get, regardless of if it's accurate.
I work in computer science, and often we have to collect a LOT of benchmarking data for performance evaluation. BUT, for whatever reason, even AFTER giving over the data and charts, someone will say "but does it actually FEEL faster?" BECAUSE of what you said: some people DO just kinda "know" the data better. If you stare at data long enough, AND know what you're looking at, you can kinda just start SEEING things. And there is ABSOLUTELY a bias in the crunching of that data once you "see" it, and I have had my ass HANDED TO ME many a time because I misread the data, but I think I will BARELY side with the Horizon group, but not for their data familiarity, but ONLY because of the WAY they have LEVERAGED that data familiarity to UNCOVER very special and unique science
The shapes look similar to me, save for one thing: could it be that the original team clumped the upper region of the radio spectrum, so it just wasn't computed by the machine learning, leaving a "blank" result, where the other team didn't, making the central region appear brighter? That would make this all just a resolution issue.
Interesting as always.
Random aside, The North Cascades t-shirt - after living just across the border (northern) from that area for over 60 years, my wife and I just did a short road trip through the park. Definitely worth seeing. (didn't get the t-shirt as the park facilities were closed for the winter.)
As a novice, something has always bothered me about these images. Both are from the perfect angle, a perfect right angle, to the ring around the event horizon. Like a photo of Saturn with its ring as a perfect circle around the planet. And we just happened to be at that perfect angle. Also, and more importantly: isn't that an IMPOSSIBLE angle for us to the Milky Way's supermassive black hole? Presumably, the ring around the black hole is in the same plane as the bulk of the galaxy.
The rings around the black parts of the image simply represent severely gravitationally lensed radio light from behind the black hole. They do not represent accretion disks. The latter are not present in the images. This is such a common misunderstanding that I’m surprised science popularizers/influencers have not bothered to clarify it after all this time.
@@robertbutsch1802 surely since it is the light emitted from the accretion disc that we are seeing, the rings do represent the accretion disc, albeit severely distorted images of the disc.
That part is probably not problematic, because light is heavily bent around the black hole in a way that means we are always seeing the back of it. So it doesn't really matter at which angle we are looking; we will always see the accretion disk as a whole.
I'm an engineer and I understand most technical topics, but I don't understand how telescopes scattered around the globe are essentially one big telescope, and can "see" things too small for each of the component telescopes to see. Let me use a simple analogy. Let's say the entire population of the United States looks up at the moon on the same night and tries to see the Apollo 11 Lunar Lander. That, in essence, makes one huge "telescope" but since none of the observers will see the lander (it's too small), neither will the huge "telescope".
Dr Becky, perhaps you could produce a video explaining in detail how the "one huge telescope" concept works. We'd all appreciate that. Having said you are not an expert in interferometry and huge radio telescopes, perhaps you could do a joint video with one of your colleagues who is.
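Until such a video exists, a rough back-of-envelope may help (approximate numbers only): an interferometer's angular resolution scales as wavelength divided by the longest baseline, because the dishes record the radio signal's phase and can be combined coherently, something eyes looking at the Moon cannot do, which is where the lunar-lander analogy breaks down. The numbers below are round figures, not official EHT specifications.

```python
# Back-of-envelope resolution comparison for the "one big telescope" idea.
import numpy as np

rad_to_uas = 180 / np.pi * 3600 * 1e6   # radians -> microarcseconds

wavelength = 1.3e-3                      # EHT observing wavelength, metres (approx.)
baseline = 1.27e7                        # roughly an Earth-diameter baseline, metres
print(f"Earth-baseline resolution ~ {wavelength / baseline * rad_to_uas:.0f} microarcseconds")

# for comparison: the angular size of a ~4 m lunar lander seen from Earth
lander = 4.0                             # metres (approximate)
moon_distance = 3.84e8                   # metres
print(f"lunar lander spans ~ {lander / moon_distance * rad_to_uas:.0f} microarcseconds")
```

The catch, as the comment above suspects, is that the combined array only has the *resolution* of an Earth-sized dish, not its collecting area or complete coverage, which is exactly why the sparse-sampling and reconstruction arguments in this thread matter so much.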
I was always suspicious of the follow-up image with the swirls.
Not exactly "Crisis of Cosmology" but really intriguing, thank you Dr Becky for breaking it down, hopefully the image will be proven correct.
Any process that involves filling in missing data with what you expect might be there should be a red flag for anyone.
Remember what happened to Geordi when he did that!
Absolutely. It is data falsification, pure and simple.
If that's your position then you should never trust a single thing you see ever again, because at its fundamental core, this is EXACTLY how the brain interprets optical input and translates it into 'vision'. I do understand and (partially) agree with the sentiment, don't get me wrong, but I would be remiss if I were to fail to point out this fact regarding vision. Ultimately, what we need to refine our understanding and improve the models are independent data from other perspectives, distances and spectra, which is, alas, unlikely to occur any time soon.
@@Veklim This is a BS response. The brain is not a computer. Anyway, AI is famous for hallucinating, and its responses depend on the data it was trained on. I'm inclined not to believe these images until the same process (same AI, same gaps) is proven correct by imaging known targets (forming solar systems, for example) and producing results that are very close to images obtained by other methods.
LOL
I applaud your communication of a very technical topic outside of your expertise. I'm an engineer who interfaces with many different disciplines, fumbling around in unfamiliar territory, so your admission of working outside your domain definitely resonated here :D
SUPER INTRESTING! This was easy to understand. Thank you!
The image of M87* is taken from almost directly above the black hole, due to its orientation from our perspective, while the image of Sagittarius A*, at the centre of our galaxy, was captured from a side-on perspective.
Now look at both images side by side at 0:29 (the left image is the top view and the right image is the side view) and you will see, with both perspectives, that it is not a sphere shape but a vortex with three regions of influence rotating about a centre point.
This explains why general relativity breaks down at the event horizon. Our math dives off to infinity because the fabric of spacetime goes down a hole, so the linear measurements we take look like they are going to infinity; but what's actually happening is that the fabric of space and time is going down a hole, heading towards infinity in both opposite directions to the centre of the hole. This also explains why black holes have jets on either side.
Remember, our math is but a slice through a 3D object moving in a 3D way through 3D space. If you slice up a tornado and do math on one sliver of it, you won't understand it completely; and when the straight line tied to your equation reaches the edge of a hole, measurements go to infinity because they are falling down the hole, and the line you are measuring is bending with space and time and now points in a different direction. You think you are measuring a straight line, but the geometry of that straight line in reality is a curve that becomes a vortex.
Weekly space getaway is here!
This topic seems like the perfect opportunity for a collab with Dr. Fatima!
This is very interesting. Thank you for explaining it.
I have said this before, but will repeat it. To image the shadow, the diameter of the imaging device needs to be 1.1 times the diameter of the Earth. The number of individual devices is too few, being only five if my memory is correct. There were only five or seven clear days of observation; in the past I have seen 60 data stacks needed to get a good "image". Too much of the data collected was not used because it was corrupted, which will skew the data sets. AI learning is OK if you make sure you don't feed other AI learning back into the data. The EHT team used template/probability matching, which in itself leads to artifacts. Personally, a 200-person team seems more likely to err and rush to publish, maybe?
That the original purported images might be artefacts of the analysis seems like a real possibility given how far the team was pushing the techniques. My own inclination would have been to use maximum entropy (or its dual, autoregression) analysis for image interpolation rather than AI, but I realise I'm forty years out of date on such things and I could be spouting nonsense. Having a variety of methods applied to the same data by other teams would seem one way to help resolve (sic) the issue, but "You're going to need a bigger baseline" is the message I take from this.
@HeeBeeGeeBee392 At least now there is no longer any debate regarding their existence.
What was your intent in using "(sic)" in your paragraph? Just curious, as I don't see it used much and I'm not too familiar with it.
@@Andromedon777 I suspect that "resolve" has an intentional double meaning that is being highlighted. "(sic)" is basically saying "pun 100% intended".
@@Andromedon777 He used it incorrectly.
@@slugface322 When is it usually used? When quoting?
Oh, good, an explanation of the controversy I can understand.
What we really need is a longer baseline, maybe a couple of radio telescopes in high orbits.
If they trained the algorithm to decode the data with images that represented an expected shape, doesn't that unavoidably bias how the data will be interpreted?? Should the image obtained be considered a model rather than a "picture"?
It is not a picture or photo. You can look at real photos in Halton Arp's books, like "Seeing Red".
Miyoshi, Kato and Makino 2022 make some very good points about the M87 EHT images. I did not read the equivalent Sgr A* paper. As an old VLBI observer, their processing makes more sense to me, but I may be a little biased toward old school. I found the EHT-team rebuttals a little glib. However, the discussion is good and you made a nice presentation of the conflict.
Great video. I love seeing these science disputes happen in real time. In science, conflicts are resolved through long discussions, analysis and arguments, not with emotions, intimidation and misinformation. I wish things were like this in all aspects of our society.
Well, the reality is that scientists are also human, and human biases and emotions can unknowingly influence data analysis that is thought to be objective. Although it is much less likely in circumstances like these, it should never be ruled out.
No, they're not. There are so many histrionics, appeals to emotion, lies and deception.
Sky Scholar has also explained that picture as "not possible", or something like "it's not possible to measure and get an image the way they claimed they did".
Pierre Marie Robitaille doesn't know the first thing about the complex imaging technology used in the black hole image.
He also doesn't understand how atmospheric pressure works, thinks that the cosmic microwave background radiation comes from the Earth's oceans, that all of the data from the Parker Solar Probe (as well as any other research that contradicts his theory) was faked and the sun actually is made of liquid metal hydrogen.
His doctorate is in zoology, not astrophysics and he worked on MRI technology until he was released from his position at Ohio State University.
Since then he has not published a single paper on astrophysics to a peer reviewed journal and his audience consists primarily of science-deniers on YTube.
He is, in fact, the textbook definition of a crackpot.
People need to vet their sources better.
When the M87 image was released, it seemed plausible to me because the accretion disk (and by implication the black hole rotation) was facing us, as was the galaxy. With Sagittarius A* I also expected it to be aligned with the galaxy, but again it was facing us. The explanation was that there’s nothing to force a black hole rotation to align with the surrounding galaxy, which is fair enough. But what are the odds that an otherwise random orientation would face us? Twice?
Relativistic effects mean that you can see the front and back portions of the disc even when you are mostly edge on. It is not as simple as viewing Saturn's rings. The NASA JPL cartoon at 05:10 is a bit simplified, but notice that at most angles you can see the whole of the disc.
the gravitational field is so strong near the black hole that light from the accretion disc is bent towards us whatever direction we are looking from. As far as I know we would see the doughnut shape from every angle.
@@JdeBP you can see the back of the disk, but the shape and brightness changes. I’d expect lobes, with one much darker, like the newer paper suggests.
it's not facing us. what you see is the "shadow" of the black hole, aka an image that is strongly distorted by relativistic effects.
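For what it's worth, the predicted angular size of that shadow depends essentially only on the black hole's mass and our distance to it, not on how the disc happens to be tilted, so the "we see a ring" part isn't orientation-dependent. A rough check with approximate textbook values for Sgr A* (not the EHT's own fitted numbers):

```python
import math

# Predicted angular diameter of a black hole "shadow": roughly 2*sqrt(27)*GM/c^2
# divided by the distance. The outline is set by gravitational lensing, so the
# accretion disc's tilt barely changes it.
G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
parsec = 3.086e16

M = 4.3e6 * M_sun          # Sgr A* mass, approximate textbook value
D = 8.2e3 * parsec         # distance to the Galactic Centre, approximate

shadow_diameter_m = 2 * math.sqrt(27) * G * M / c**2
theta_microarcsec = shadow_diameter_m / D * 206_265 * 1e6
print(f"predicted shadow diameter: {theta_microarcsec:.0f} microarcseconds")
# ~50 microarcseconds, the same ballpark as the ~52 microarcsecond ring the EHT reported.
```

Whether the EHT actually resolved that ring is, of course, exactly what this thread is arguing about; the prediction above only says how big it should look if it's there.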
In all fairness, if the team that assembled the data says you have to process the data in exactly the same way in order to get the same result they did... something about that reflexively tells me it's not great science. The best science is verifiable, falsifiable, and repeatable. At the same time, processing the data in a way that doesn't make sense is of course going to produce results that don't mean anything or look wildly different.
As a...um...well, I'm not a _professional_ physicist, but studying and understanding physics and the math of physics is one of my greater passions, sooo...can I still be a "physicist" even if I don't have a degree or relevant employment because of institutional and social obstacles? As whatever I am, I am highly appreciative of your professional analysis and thoughts.
Excellent video where you cover both sides so fairly.
The ML program they have been using, as well as the raw data they collected, is available to the public. The Japanese team just needs to take it and feed it false data to see whether the image still turns out donut-shaped; if it does, the EHT is just faking stuff for media recognition.
I am a computer scientist and, as far as I'm concerned, you can't call these things "images of black holes" if they were post-processed via machine learning. Instead, they're simply a prediction of what a black hole *might look like* based on existing images (and remember, there are no existing images of black holes). I have no doubt a true image of one of these black holes would look very similar to the Event Horizon images, but these aren't that. They're a snapshot of a simulation extrapolated from a low-resolution image of reality.
Yeah, it really sounds like they simply don't have the data. It may turn out to be a reasonably accurate picture of a black hole, but I think they haven't actually managed to prove it is.
So ashamed that the 64m Murriyang Radio Telescope (formerly Parkes RT) and the 70m antenna at Tidbinbilla (part of the DSN but also used for GBRA) weren't part of the EHT collaboration. The 70m DSS-43 antenna is the only one on the planet capable of sending commands to Voyager 2, so possibly too frequently indisposed, but there are other capable antennae on that site. It's been four decades since we had a Commonwealth Government with any interest in funding science or even education properly, so maybe I shouldn't be so surprised that CSIRO and/or any Australian universities weren't involved in taking data for the EHT. :(
This here is the scientific process, and it's beautiful.
Yes I would have liked a bit more emphasis that both teams are doing good work here (not that she said the opposite): publishing all the data, then another team coming forward with a potential issue etc. If it turns out the original analysis was incorrect, that's fine, there was no deception here.
The Japanese team is low on creativity and low on ability.
@@tbird-z1r .....because?
@poruatokin It's hard to know if it's cultural or genetic, as there's important interplay there. As always with these questions it's a mixture of both. I don't think their "shame" culture helps.
@@tbird-z1r your comment adds absolutely nothing to this discussion thread.
I have to say that, from my experience working with big teams, most of the time any one single type of analysis is done by one or two people, and very few in the team understand exactly what was done. So I wouldn't take the size of the team as an indicator of anything.
_Edutainment_ and _Sciencetainment_ can never replace actually learning topics, which takes a lot of work. And that is why we see masses of people get so easily confused by headlines - they have not put in the work.
Excellent video Becky. More regular people like me need to understand how detailed and difficult the scientific process is.
The thing about black holes is they're black and the thing about space is it's black, mostly. So it sort of sneaked up on me.
It snuck up on you?
Holly, Red Dwarf!
Weirdly enough, I think it would be more accurate to say that space is mostly not black (unless you are near a black hole or something). It only appears black due to the exposure times of cameras or the sensitivity of our eyes.
So is the black in space (the edge of the universe) the same black as a black hole? If not, then what's the difference?
Serious question. If there is no difference then what …
@bobothebob4716 If yer eyes could see a spectrum from VHF to gamma you'd be: blinded by the light!
Thanks for explaining this, Dr. Becky
They've got doctorates in astrophysics and I don't, but if you tell me somebody took some data and ran it through an AI algorithm trained on simulations of what black holes looked like (rings) and the result matched those simulations even though you can't see the ring in the raw data... sounds like BS to me!
Well, it wasn't just one algorithm; what they claim is that they ran it with several training sets: renditions of black holes, but also just general images, and something else I now forget.
Quite apart from her likeable manner and her professional expertise, I love her eyes!
Video starts at 4:50.
These "in-video" ads are getting ridiculous.
Thanks
Thank you!
Thank you for giving us your very much qualified 'leaning' on this issue. Thank you for explaining those qualifiers. Most of all, thanks for talking about PSF deconvolution. That's a real blast from my past!
A great video of a complex tangle of a subject.
Well, of course it's wrong... it was never an "image".
There are two issues with using AI to "fill in the gaps." The first issue is the simple one of: how many gaps are being "filled" compared to solid data from the radio telescopes? The worse the ratio, the less reliable the gap-filling. The second issue is that AI models are extremely sensitive to training data, hyper-parameter tuning, and learning reinforcement. As the original team hoped to image a black hole's event horizon, it's not beyond the bounds of possibility that unconsciously they nudged their AI model toward a state where it would indeed output such an image. AI models can be powerful tools, but we're still in the very early stages of understanding how to use these tools properly.
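A crude way to get a feel for the first issue (a numpy toy, not the EHT software; the fake ring sky and the 5% coverage fraction are arbitrary): an interferometer only samples the Fourier transform of the sky along the tracks its station pairs trace out, so you can mimic the problem by Fourier-transforming a fake sky, throwing away most of the coefficients, and transforming back.

```python
import numpy as np

# Toy demo of sparse Fourier-plane coverage, loosely analogous to what an
# interferometer measures (this is NOT the EHT pipeline, just numpy).
rng = np.random.default_rng(1)
n = 128
y, x = np.mgrid[:n, :n] - n // 2

# A fake "sky": a simple ring of emission.
r = np.hypot(x, y)
sky = np.exp(-((r - 20) / 3.0) ** 2)

vis = np.fft.fft2(sky)                      # "visibilities": Fourier transform of the sky

coverage = rng.random((n, n)) < 0.05        # keep only ~5% of the Fourier samples
dirty = np.fft.ifft2(np.where(coverage, vis, 0)).real

kept = coverage.mean()
residual = np.abs(dirty / kept - sky).mean()
print(f"fraction of Fourier plane sampled: {kept:.1%}")
print(f"mean error of the naive reconstruction: {residual:.3f}")
# With most of the Fourier plane unmeasured, the naive image is badly corrupted;
# everything beyond this point has to come from priors/regularisation,
# which is exactly where the "what did you assume?" argument lives.
```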
Nice walkthrough! I did react to the "one image" reconstruction, but I didn't catch that the same group had made a similar claim before. For me, though, the polarization data reconstruction was the clincher: it has more structure, and it is aligned with the ring structure.
Thank you so much for that review of the current expert thinking on the two black holes appearing in those images. As a member of the general public who knows nought point nought-to-infinity about any of this, I am inclined to lean toward the consensus of the larger group of experts, at least until we learn more. It's reassuring when Dr. Becky says that she leans that way as well.
Here's hoping we don't get a Sabine Hossenfelder video with a clickbait title. Some predictions... "Black Holes are B.S.", "Why Black Holes Aren't Science", "Black Holes are Failing", "Science is Dying Because of Ugly Black Holes"
Well done Becky. I'm not buying the PSF limitation argument, based on your report that they only analyzed a short duration of the data. If the full duration of the data is used, that enables "super-resolution" effects to provide higher resolution than the PSF. However, it introduces a time ambiguity: whether the brightness is really a full ring or a circulating blob. I'm betting the EHT team has a way to minimize that ambiguity.
Ah, yes: It's always better to replace one's own biases with the biases of an AI.
At least if you are paid to do so.
The PSF argument is incredibly strong. It's difficult to understand the intricacies of image processing, but no matter what processing you do, you are still limited by the PSF of your observation instrument. An image with a ring the size of the PSF is highly suspect! Indeed, that looks like an Airy disc.
Moreover, nobody would ever claim to have actually resolved a structure when it is a ring with the diameter of the Airy disc.
Would you have believed an image where the ring is 1 pixel wide and the hole in the middle is also only 1 pixel across? That's pretty much what is happening here, if the PSF argument is correct!
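To make that worry concrete, here is a toy blur test (illustrative only; the real EHT beam is not a Gaussian on a pixel grid, and the 20-pixel beam and ring sizes below are arbitrary choices): smear rings of different diameters with a Gaussian "beam" and see whether the central dip survives.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Blur rings of different diameters with a Gaussian "beam" (PSF stand-in)
# and measure how much of the central hole survives.
n = 256
y, x = np.mgrid[:n, :n] - n // 2
r = np.hypot(x, y)

beam_fwhm = 20.0                      # in pixels; think of it as the instrument PSF
sigma = beam_fwhm / 2.355

for ring_diameter in (20.0, 40.0):    # ring the size of the beam vs. twice the beam
    ring = np.exp(-((r - ring_diameter / 2) / 2.0) ** 2)
    blurred = gaussian_filter(ring, sigma)
    centre, peak = blurred[n // 2, n // 2], blurred.max()
    print(f"ring diameter {ring_diameter:.0f} px: central dip = {1 - centre / peak:.0%} of peak")
# When the ring diameter is comparable to the beam, the dip all but vanishes and the
# source just looks like a blob, which is why a ring at roughly the beam scale invites
# exactly the scepticism voiced above; at twice the beam, a clear dip survives.
```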
Another made-up image that was totally ridiculous was the one representing the vectors of the material spinning around the event horizon. Come on. That one was shameful.
No, we don’t know if these images are made up, not yet
Want to see real photos, instead of creations? Try Halton Arp's books, like "Seeing Red".
Salient since day one is that the images possess a special feature: some relatively straightforward algorithm will compress them, with extraordinary accuracy, to incredibly little.
Therefore the EHTC owes us well-reasoned, official numerical figures for the minute sizes of said littles.
An appreciation for the meaning of their value would then come with apps for exploring continuous* manifolds of same-little-sized images, by acting on controls initially tuned to the historical image.
Ideally, the manifold-generating algorithms (in §-1) would read off the good reasoning of the official figures (in §-2).
I believe it fair, at any rate, to award the *Duchamp Pissoir Prize* to the *EHTC*, while noting that these two cases contrast like our opposite poles do.
*Congratulations, EHTC!*
As a Serb, I am skeptical that black holes exist at all
Same about Bosnia.
Check your pants 😏😁
Personal thought is that this particular subject and science in general has a phenomenal way of having rational discussion and debate. If the Japanese happen to be looking at the data wrong … boom! We have more scientists furthering their own knowledge in their chosen field and pushing us all forward with our collective knowledge of our universe ❤ Love it and cheers Becky for the video 😊
So they train the AI on what they believe the data should be?
No surprise; that's what "AI" is. Intelligent it ain't; it just regurgitates what it has correlated from human sources. Hmm, I wonder what happens when it gets so arrogant as to quote itself…
What we can do is give the AI the result of incomplete data (basically simulate the telescope backward) and let it iterate until it matches the original. So they are not training it based on what they believe the data is; they are letting the AI play a guessing game until it can guess fairly accurately. I assume somewhere in the paper they give some number for their confidence in the accuracy. But ultimately these systems are guesstimation; you can never reach 100%.
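A minimal sketch of that kind of "guess, simulate the instrument, compare, adjust" loop, with a made-up sky, 10% Fourier sampling, and only a trivial non-negativity constraint standing in for real regularisation (this is not the EHT's actual algorithm):

```python
import numpy as np

# Toy forward-model fitting: guess an image, push it through the "instrument"
# (here: a mask in the Fourier plane), compare with the measured visibilities,
# and nudge the guess. Everything here is made up for illustration.
rng = np.random.default_rng(2)
n = 64
y, x = np.mgrid[:n, :n] - n // 2
true_sky = np.exp(-((np.hypot(x, y) - 12) / 2.0) ** 2)     # the sky we pretend exists

mask = rng.random((n, n)) < 0.10                           # sparse Fourier sampling
data = np.fft.fft2(true_sky) * mask                        # "observed" visibilities

guess = np.full((n, n), true_sky.mean())                   # start from a flat image
step = 0.5
for _ in range(200):
    model_vis = np.fft.fft2(guess) * mask
    # Gradient of the data misfit ||model_vis - data||^2 with respect to the image:
    grad = np.fft.ifft2((model_vis - data) * mask).real
    guess = np.clip(guess - step * grad, 0, None)          # enforce non-negative flux

print("mean |guess - truth|:", np.abs(guess - true_sky).mean())
# The fit only constrains the sampled Fourier components; everything else is whatever
# the starting point and the (here, trivial) regularisation imply -- the "guessing game".
```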
@ Agreed, the quality of the training data is critical. As I recall, the original team had several groups independently developing what they thought the image should look like, and the "best" one was selected…
@@john-or9cf Yeah, all the teams came up with basically the same image. Also, they trained the AI not just on images of black holes but also on random images off the internet and such, so their algorithm would reconstruct images of, say, cats or whatever, if they were in space and glowing in radio light... supposedly.
@@john-or9cf There is no AI. All these "AI" slogans are marketing to make money on an algorithm: sell everything, call everything AI. People are dumb; that's why it works.