Can synthetic data be as effective as real data? Wouldn't a model trained on synthetic data give false results when used against real data?
I am amazed how this dude can write backwards so perfectly
See ibm.biz/write-backwards
@IBMTechnology aha! I knew some sorcery was involved
Have you noticed they all seem to be left handed 🧐
What is very interesting about this concept is the validity and reliability of the data. Why don't they talk about it? It's essential when we talk about mathematical sets of any data!
I like this friendly instructor
Great video! Best of luck SFC😂
Amazing series with a very classic and engrossing style of explanation... keep up the good work
You are a very good teacher. Do you have a full course on this?
Great series from IBM in general and this instructor specifically. Slightly hopeful on the Southampton bit, but if you can't dream, what's the point of it all 😃
I appreciate your generous use of "slightly hopeful" 🙂
Always had a soft spot for the Saints… hope my club can give JWP a new home
Yes, cool stuff. We use synthetic data for tracking trucks in the field: we take existing labeled data and transform the truck in three dimensions to get additional data for the model.
Sounds like you used existing real data about the trucks. How is that synthetic data? I fear I'm misunderstanding this.
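The 3D-transform trick described in the parent comment can be sketched roughly as below. The derived examples are "synthetic" in the sense that they were never directly observed, even though they start from real labeled data. The point coordinates and rotation angles here are made up for illustration:

```python
import math

def rotate_z(points, angle_deg):
    """Rotate a list of (x, y, z) points about the vertical axis.

    Each rotation of a labeled object produces a new "view" that was
    never directly observed, so the rotated copies count as synthetic
    training examples while reusing the original label.
    """
    a = math.radians(angle_deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a),
             z)
            for x, y, z in points]

# A made-up labeled object: two corners of a truck's bounding box.
truck_corners = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]

# Three rotated copies, all sharing the original "truck" label.
augmented = [rotate_z(truck_corners, deg) for deg in (0, 90, 180)]
```

Real pipelines would transform full point clouds or rendered images rather than a handful of corners, but the principle is the same: one labeled example fans out into many.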
I think this video might have jinxed Southampton. Instead of winning the Premier League, they are now getting relegated. 😢
I find it difficult to stop thinking about Martin Keen, and his prediction about Southampton's future in the Premier League. It's quite remarkable that both Southampton and Leicester will be battling it out in the Championship to regain their positions in the top tier in 2025. A great example of the problems with synthetic data.
Excellent!!!
What kind of transparent white board is he using to write on? Very cool. Have not quite seen this before.
Takeaway:
Made-up data can be used to deal with biased real-world data, and can be obtained from other data sources or by transforming existing data, e.g. by adding noise or using GANs.
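The "adding noise" transformation from that takeaway can be sketched as follows. The records, column meanings, and sigma value are all made up; a GAN-based approach would be considerably more involved:

```python
import random

def jitter(rows, sigma=0.05, seed=7):
    """Return a synthetic copy of numeric rows with Gaussian noise added.

    Each value is perturbed slightly, so the synthetic rows follow the
    rough shape of the real ones without duplicating any record exactly.
    """
    rng = random.Random(seed)
    return [[value + rng.gauss(0.0, sigma * (abs(value) or 1.0))
             for value in row]
            for row in rows]

# Made-up "real" records, e.g. (transaction amount, risk score).
real = [[100.0, 3.2], [250.0, 1.1], [75.0, 4.8]]
synthetic = jitter(real)
```

The noise scale is proportional to each value so large and small columns are perturbed comparably; that choice (and whether noise alone preserves the correlations you care about) is exactly where this toy sketch stops and real synthetic-data tooling begins.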
Interesting, if rather simplistic. Having spent the past 5-6 years developing a synthetic police-data model, I can say it is not easy or cheap (if time is factored in). Rows and rows of financial transactions might be easy to generate; less so complex family groups, locations, incidents and crimes, vehicles, and organisations, where these are interlinked, related, and reflect real-world scenarios. Whilst IBM has some excellent tools such as i2 and Watson, the real data in those systems would be unlikely to be made available for synthesising.
Why is it not just called fake data? That is not clear in the video.
Synthetic data has been very useful in my field (gene regulatory networks; maps of interactions that affect gene expression within cells). We can't manually test the interactions of tens of thousands of genes, especially across tens/hundreds of thousands of species, so we predict them using large molecular datasets.
The problem is, how can you evaluate the accuracy of a prediction algorithm if you don't know what's true or false? Synthetic data is super useful, since you can generate data with known interactions that you can compare to. Algorithms can then be ranked on how close their predictions match the synthetic dataset. A great example is the GNW DREAM Network Inference Challenge, if you want to see how they use this!
I love how synthetic data works. It's very, very useful. I just really worry that people will start training models on synthetic data in scenarios where it would be dangerous to use data that is not perfectly grounded in reality.
Loved this so much wow
Can we add regional human corruption to make synthetic data more reliable, and should that be categorized as noise?
Very simply explained ...👍
so cool, thanks
Programming / metaprogramming / hypergramming.
Hypergramming is AI-created synthetic databasing.
The following is the reply from Copilot:

Prompt: Can you generate a list of EPL winners from 2024 to 2033?

Copilot (using the web): I can’t predict the future, so I can’t provide a list of EPL winners from 2024 to 2033. However, I can share the most recent winners up to 2024:

2023-24: Manchester City
2022-23: Manchester City
2021-22: Manchester City
2020-21: Manchester City
2019-20: Liverpool

Manchester City has been quite dominant in recent years. Do you have a favorite team you’re hoping will break their streak?
using the prem was the perfect hook icl
How is this not basing later models on copies of copies of potentially incorrect data? Won't we end up with piles of structurally sound, true seeming noise eventually?
Imagine I use the latest GPT model to scrape the wiki page on a political viewpoint and generate 10 new pages of slightly different content based on it. All 10 will contain the gaps, flaws, and biases of the original. What does this achieve? We already have enough examples of the language in use, so it's not for that. If it's for quality facts, you're not generating synthetic facts, just copies of previously learned ones. Is it just a way to get around intellectual property laws by making copies of something no one owns?
Excellent presentation and excellent instructor.
Thank you so much
Very interesting..
Thanks for the video.
May I ask... is this a British accent?
It is. Although I have been in the US for a good while now, so maybe a bit of a Mid-Atlantic accent.
@MartinKeen Thank you.
INTERESTING.😀
Which bord you use.. ?
The one that uses spell checker.
The dataset that has SFC as a potential winner of the PL is the first I would throw away 🤣🤣🤣🤣🤣🤣
nice, now I can generate data for my HIV viral load detector model at no cost
Is this the hallucination from LLMs like ChatGPT that everyone hates, put to good use?
Bro did Southampton dirty😂😂
It's been all downhill since Lawrie McMenemy left. #sfc
Sadly true - and that was 45 years ago!
When he said "the years when my team won the Prem title", I said: lies, all lies 😀😆
Currently, models are being trained on sh*****y AI-generated videos. The definition of a downward spiral.
Hey, I'm from the future (2024), and SFC is not winning the Premier League, sorry 😂!! Educative lesson 💯
lol u kiddin, Southampton next winners hahaha..