One can still ask ChatGPT to break down how it made those calculations, and correct them if necessary, especially with your high level of personal expertise in Excel and Power BI.
Absolutely! With more/better prompting we likely could have gotten to a very similar output, but the point here is to shine a light on some of the strengths and weaknesses of these tools, and why QA and strong foundational skills still matter
AI is way more efficient. Look at how long he took to do that task. You could have just prompted ChatGPT to do what he did rather than ambiguously saying "describe the data". The key is to use proper prompts.
Yeah we definitely could have improved the output with more/better prompting, but the goal here is really just to showcase some of the potential risks when you blindly trust AI-generated outputs. And yeah the speed is impressive, but doesn't matter much in cases where precision is critical.
Both analyses are incorrect. The Soviet Union (USSR) was a large country with 15 constituent republics, and it competed in the Olympics as a single entity. This means athletes from all 15 republics could be on the Soviet Olympic team. After the USSR dissolved in 1991, the 15 republics became independent countries. Since then, each country sends its own athletes to the Olympics. So Russia only sends athletes who are citizens of Russia.
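If you go on to chart per-country trends yourself, this is worth handling explicitly: the dataset keys countries by NOC code, so URS and RUS appear as separate series. A minimal pandas sketch of flagging dissolved committees (the column names and mini-sample are illustrative assumptions, not the actual dataset):

```python
import pandas as pd

# Made-up mini-sample; real rows would come from the 120-years dataset
df = pd.DataFrame({
    "NOC":  ["URS", "URS", "RUS", "RUS", "EUN"],
    "Year": [1980, 1988, 1996, 2000, 1992],
})

# NOC codes that no longer exist (URS split into 15 teams, EUN was the
# 1992 Unified Team). Whether to stitch them onto a successor series is
# an analyst's modelling decision, not a fact baked into the data.
HISTORICAL_NOCS = {"URS", "EUN", "GDR", "FRG", "TCH", "YUG"}

df["historical"] = df["NOC"].isin(HISTORICAL_NOCS)
flagged = sorted(df.loc[df["historical"], "NOC"].unique())
```

Flagging first, instead of silently merging, keeps the USSR/Russia judgment call visible in the final chart.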
I don't think AI will replace data professionals any time soon, even if they completely nail the technical stuff. Being a great analyst requires a lot of uniquely human skills like creativity, communication, empathy, intuition, etc.
AI is a huge security risk to your private data. There are newspapers and other corporate entities that are suing an AI company for using their content to train its models. Plus, the subscription and monthly costs of AI are too exorbitant for most small and medium businesses. Even enterprise companies are penny-pinching now.
I'm a believer that a skilled analyst who is comfortable using AI is the best combination for speed and effective analysis. Great head to head challenge!
Absolutely. Despite the title, it's really about showing how humans and AI can each play to their strengths to produce the best outcome possible (and also shine a light on some of the MAJOR risks of blindly trusting model outputs)
@Chris-at-Maven And that last sentence is what grinds my gears with all these tech evangelists PRETENDING that everyone can do any job by using AI. It's just creating vast misconceptions and is really harmful, tbh.
Rookie mistake. As an expert Power BI and Python user, I would have simply asked GPT to produce Python code to perform exactly what I need to do. Those with domain knowledge are the ultimate users of GPT: because we know our process in detail, we can provide GPT with a high-quality prompt to produce exactly what we're looking for. I could have produced a chart in Python that explained the data a bit better. The only reason I'd graph something in Power BI is if management wants to "play" with the data. If I'm producing for internal use by other domain-knowledge users, Python is more powerful.
This is awesome. ChatGPT is decent, but still has a long way to go. I'll take Chris Dutton over the machines all day!!
😂
Pause
Chris Dutton did it better: he considered multiple events per athlete, the duality between the Summer and Winter Games, and historical events that occurred in the twentieth century. Alice Zhao could have guided ChatGPT more with her prompts, but the AI alone, although extremely fast, is not able to grasp the nuances.
Thanks for the shoutout!
Indeed, Chris vs. AI 10:1.
However, Chris definitely spent much more time preparing, which is irrelevant, though, as proper analysis is rarely a game of speed.
I would love to see how expert data analysts leverage GenAI to make the data analysis process faster and better, like asking ChatGPT why Olympic participation is lower in certain years, or asking it to make a list of countries grouped by region, continent, or developing/developed status.
That way, some tedious jobs can be offloaded to GenAI and the expert can focus more on creative work.
This is a great video on how ChatGPT, Copilot, etc. can tee up the data for an analyst to bring it home! Really enjoyed watching this!!!
Glad you enjoyed it!
Really cool experiment - the conclusions are spot on.
But if you want a truly interactive data plot, you can ask for Python code and customise from there.
Still, at the end of the day, ChatGPT excels at repetitive, rudimentary tasks (because that's what it's trained on), but our expertise is always needed to:
1. Direct it
2. Tailor it to specific needs
Interestingly, there's a paper published recently showing that AI outperforms humans in QA, which is honestly one less tedious thing for us to think about.
Perhaps using GPT to generate the NOC to Continent relationship table and pulling that into the Power BI Data set could give a deeper analysis of the data.
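Building on this idea, a GPT-generated mapping table can be joined onto the athlete data before it ever reaches Power BI. A rough pandas sketch; the column names and the three-row mapping are illustrative assumptions, and any AI-generated lookup table should be spot-checked first:

```python
import pandas as pd

# Stand-in for the athlete events table
athletes = pd.DataFrame({
    "Name": ["A", "B", "C"],
    "NOC":  ["USA", "KEN", "JPN"],
    "Year": [2016, 2016, 2016],
})

# A GPT-generated NOC-to-continent table would look like this;
# verify it before trusting it in a report
noc_to_continent = pd.DataFrame({
    "NOC":       ["USA", "KEN", "JPN"],
    "Continent": ["North America", "Africa", "Asia"],
})

# validate="m:1" raises if the mapping table has duplicate NOC rows
enriched = athletes.merge(noc_to_continent, on="NOC", how="left", validate="m:1")

# Rows with no match get NaN; surface them instead of silently dropping
missing = enriched[enriched["Continent"].isna()]
```

The `missing` check matters with a GPT-built table: any NOC the model skipped shows up there instead of quietly vanishing from a continent-level chart.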
What no one realised is that when you click on the star-like icon, ChatGPT shows the actual Python code it used to analyse the data and create the visualisation. It is quite useful, and you can modify it to get far more detailed results.
True!
I enjoyed watching from the beginning. Alice could have done more with the prompts, but GenAI appears to require human guidance. I also considered premium AI versions such as GPT-4o and the use cases for which they were created, and whether they could have provided a much clearer and briefer analysis.
Chris Dutton is at the top of the list for data analysis. It would also have been interesting to compare the AI to a typical data analyst. Special thanks to Chris for the insights! Both the AI and I have a lot to learn from you.
Well said! We do plan to feature different analysts throughout the series (not just me), some more senior than others.
ChatGPT can make 2+2 mistakes so the human always does it better. ChatGPT can say sorry in many ways though.
🤣🤣
Well said. A data analyst (human) can interpret data in a more granular manner by analysing the cause behind it, while AI is used as a supplement.
I'm highly impressed by AI, but in the end we still have to find the actual root cause (RC) ourselves and shape the recommendations according to dynamic market trends.
The slicing of the season dimension was really an important insight in the data that ChatGPT ignored 😊.
The main evolutionary difference between AI and humans is that humans have feelings. So ChatGPT can never replace the human mindset, but it can surely assist the generations to come.
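For anyone reproducing the season slicing in code, a tiny pandas sketch (column names assumed from the usual 120-years dataset; the five-row sample is made up):

```python
import pandas as pd

# Made-up stand-in rows: one row per athlete per Games
df = pd.DataFrame({
    "ID":     [1, 2, 3, 4, 5],
    "Year":   [1992, 1992, 1994, 1996, 1994],
    "Season": ["Summer", "Summer", "Winter", "Summer", "Winter"],
})

# Grouping by Season first avoids the sawtooth artifact created in 1994,
# when the Winter Games moved off the Summer cycle and began alternating
by_season = df.groupby(["Season", "Year"])["ID"].nunique()
```

Plotting `by_season` as two separate lines (one per season) gives the trend ChatGPT's single mixed series obscured.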
Impressed on how Chris went about it 😊
I think this exercise was a great experiment. My conclusion is that AI will most likely take over the role of building dashboards and the tech stuff. It got to the point of the first viz 80% quicker than the (very senior) analyst. That means the skill of building Power BI dashboards, data tables, etc. is fast becoming obsolete, which is good news. The analyst was able to demonstrate far more value in recognising the distinct count and the change in the Games' frequency. So in the future, if your value-add is limited to building dashboards and you don't really understand the data, then your job is very much under threat. Let's be honest: if you see data as just dots on a page (exactly like ChatGPT did in this example), then you have little value to offer going forward.
Much respect for the expert analyst (looking to become one someday), but if the initial prompt was "How many DISTINCT athletes are from each country, each year?", ChatGPT gives the correct answer.
True, and you might even get the correct answer sometimes with the exact prompt in the video. That said, many people (especially non-Analysts) won't know exactly how to frame the question in an optimal way, let alone how to QA the result.
@Chris-at-Maven Well, the prompt wasn't good enough to begin with.
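For the record, the distinct-count trap being discussed is easy to demonstrate in pandas: the dataset holds one row per athlete-event, so counting rows inflates multi-event athletes, while counting unique IDs matches Power BI's distinct count. A minimal sketch with made-up rows (column names assumed):

```python
import pandas as pd

# One row per athlete-EVENT: athlete 1 entered two events in 2016
df = pd.DataFrame({
    "ID":   [1, 1, 2, 3],
    "NOC":  ["USA", "USA", "USA", "FRA"],
    "Year": [2016, 2016, 2016, 2016],
})

rows     = df.groupby(["Year", "NOC"]).size()           # counts athlete-events
athletes = df.groupby(["Year", "NOC"])["ID"].nunique()  # counts distinct athletes

# rows[(2016, "USA")] is 3, but athletes[(2016, "USA")] is 2
```

The gap between the two numbers is exactly the error a casual "count the athletes" prompt can produce if the model reaches for `.size()`.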
Fascinating, thank you so much for this interesting example of AI vs. Human. Also, well done, Chris, impressive (as always!).
Can GPT connect directly to servers?
I loved this video. Greetings from Argentina!
The prompts were not comprehensive at all.
Right.
chatGPT doesn't replace experts, it's just a powerful tool.
If you are an expert, you can use that tool really well, and if you are not an expert, the tool becomes significantly less useful.
It's like asking, "Does this CNC machine make machinists obsolete?"
No, it makes machinists much more productive.
If you put an ignoramus in front of a CNC machine, he won't produce anything, and yet CNC machines are a godsend.
What would intrigue me is: what could the professional data analyst have done USING ChatGPT?
He is the one capable of guiding ChatGPT to be useful for analysing and visualizing data.
Wow! Insane skills and a super interesting topic! Love it
Glad you found it helpful!
This is really amazing.
The first step is to load the CSV files in Excel to check the data? Not Python, pandas, and some basic EDA? 😅 Power BI's interactive visual vs. a static visual from the AI also seems unfair 😅
Yup we could definitely use Pandas for EDA too, just trying to give viewers a quick look at the data
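For anyone who wants the pandas route, a quick first pass might look like this; the inline CSV is a made-up stand-in for athlete_events.csv so the snippet runs on its own:

```python
import pandas as pd
from io import StringIO

# Made-up stand-in for athlete_events.csv (one row per athlete-event)
csv = StringIO(
    "ID,Name,NOC,Year,Season,Medal\n"
    "1,A,USA,2016,Summer,Gold\n"
    "1,A,USA,2016,Summer,\n"
    "2,B,FRA,2016,Summer,\n"
)
df = pd.read_csv(csv)

df.info()  # dtypes and non-null counts per column

summary = {
    "shape": df.shape,                                # rows x columns
    "missing_medals": int(df["Medal"].isna().sum()),  # non-medalists are NaN
    "distinct_athletes": int(df["ID"].nunique()),     # vs. len(df) row count
}
```

Even this thirty-second pass surfaces the row-count vs distinct-ID mismatch that became the video's key finding.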
Interesting concept! Maybe you should increase the odds by using at least some good prompt engineering skills for ChatGPT.
Yeah we could certainly improve the output with more/better prompting (assuming you know what questions to ask). That said, many people still likely wouldn't catch the calculation error unless they specifically asked for validation
@Chris-at-Maven Here's an example: I used GPT-4o to analyze "Analyze a data set containing 120 years of Olympic history, and use data visualization to explore how the number of athletes from each country has trended over time." By applying prompt engineering, ChatGPT came up with: "Analyze the Prompt
Understand the Main Task and Objectives:
The main task is to analyze a data set containing 120 years of Olympic history and use data visualization to explore how the number of athletes from each country has trended over time.
Key Areas of Focus:
1. Analyzing the data set to identify the number of athletes from each country in the Olympics.
2. Utilizing data visualization techniques to illustrate the trends over the 120-year period.
Generate AI Agent Profile
AI Name: Olympic Athlete Participation Analyzer
AI Role: Data Analyst and Visualization Specialist
AI Objectives:
1. Analyze the 120-year Olympic history data set to identify the number of athletes from each country.
2. Create visualizations to showcase the trends in athlete participation over time.
Define AI Tasks
Task 1:
Action 1: Extract and clean the data set to obtain the relevant information on athlete participation from each country.
Task 2:
Action 2: Analyze the cleaned data to calculate the number of athletes representing each country in each Olympic year.
Task 3:
Action 3: Utilize data visualization tools to create visual representations (such as line graphs or heat maps) that illustrate the trends in athlete participation over the 120-year period.
Task 4:
Action 4: Identify any significant patterns or changes in athlete participation and prepare a summary report.
Execution and Self-Reflection Loop
Execute Task 1:
Action: Extract and clean the data set.
Self-Reflection: Evaluate the accuracy and completeness of the extracted data, refine if necessary.
Execute Task 2:
Action: Analyze the cleaned data for athlete participation.
Self-Reflection: Assess the precision of the analysis, refine if necessary.
Continue the Loop:
Execution: Continue executing tasks in sequence.
Self-Reflection: After each task, evaluate performance and make necessary refinements.
Completion: Continue the loop until all tasks are refined and objectives are achieved."
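As a toy illustration, the "Execution and Self-Reflection Loop" that prompt describes can be mocked up in a few lines of Python; the task list and the reflection check below are invented placeholders, not anything GPT-4o actually ran:

```python
# Toy version of the execute/self-reflect loop from the generated prompt
tasks = ["extract and clean", "analyze counts", "visualize trends", "summarize"]

def execute(task: str) -> str:
    # Placeholder for the real work (pandas, plotting, etc.)
    return f"result of {task}"

def reflect(result: str) -> bool:
    # Placeholder QA gate; in practice this is where a human reviewer (or
    # a validation prompt) would catch things like the distinct-count error
    return result.startswith("result")

completed = []
for task in tasks:
    result = execute(task)
    while not reflect(result):  # refine until the check passes
        result = execute(task)
    completed.append(result)
```

The structure is sound; the hard part the framework glosses over is making `reflect` actually catch analytical mistakes rather than rubber-stamping the output.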
@Chris-at-Maven What I actually want to know is your perspective on how well ChatGPT performed; I'm sure this won't be perfect.
(So here I used GPT-4o to analyze the task)
"Analyze a data set containing 120 years of Olympic history, and use data visualization to explore how the number of athletes from each country has trended over time."
(By applying prompt engineering skills, ChatGPT came up with the following.)
""Analyze a data set containing 120 years of Olympic history, and use data visualization to explore how the number of athletes from each country has trended over time."
AI Agent Prompt Generator Framework
Given Prompt:
"Analyze a data set containing 120 years of Olympic history, and use data visualization to explore how the number of athletes from each country has trended over time."
1. Analyze the Prompt
Understand the Main Task and Objectives:
Comprehension: The primary task is to analyze a historical data set of Olympic athletes spanning 120 years.
Key Objectives:
Analyze the number of athletes from each country.
Visualize trends over time.
Key Areas of Focus:
Data analysis techniques.
Data visualization methods.
Historical trends in Olympic participation by country.
2. Generate AI Agent Profile
AI Name:
Olympic Data Analyst
AI Role:
Data Analyst and Visualization Specialist
AI Objectives:
Analyze 120 years of Olympic athlete data.
Identify trends in the number of athletes from each country.
Create visualizations to represent these trends over time.
3. Define AI Tasks
Task 1:
Action: Collect and preprocess the data set containing 120 years of Olympic history.
Task 2:
Action: Clean and normalize the data to ensure consistency and accuracy.
Task 3:
Action: Analyze the data to identify the number of athletes from each country for each Olympic event.
Task 4:
Action: Generate time-series visualizations to show trends in the number of athletes from each country over time.
Task 5:
Action: Interpret the visualizations to draw meaningful insights about the trends.
4. Execution and Self-Reflection Loop
Execute Task 1:
Action: Collect and preprocess the data set.
Self-Reflection: Evaluate the completeness and quality of the data. Refine the preprocessing steps if necessary.
Execute Task 2:
Action: Clean and normalize the data.
Self-Reflection: Assess the consistency and accuracy of the cleaned data. Make adjustments as needed.
Execute Task 3:
Action: Analyze the data to identify the number of athletes from each country.
Self-Reflection: Review the analysis for accuracy and comprehensiveness. Refine the analysis approach if required.
Execute Task 4:
Action: Generate time-series visualizations.
Self-Reflection: Evaluate the clarity and effectiveness of the visualizations. Improve the visualization techniques if necessary.
Execute Task 5:
Action: Interpret the visualizations to draw insights.
Self-Reflection: Assess the insights for relevance and depth. Refine the interpretation process if needed.
Continue the Loop:
Execution: Continue executing tasks in sequence.
Self-Reflection: After each task, evaluate performance and make necessary refinements.
Completion: Continue the loop until all tasks are refined and objectives are achieved.
Implementation Steps
Analyze the Prompt:
Break down the prompt to understand the main task and objectives.
Identify key areas of focus to address the objectives effectively.
Generate AI Agent Profile:
Dynamically generate the AI name based on the task.
Define the AI role according to the task requirements.
Establish clear and specific objectives for the AI to achieve.
Define AI Tasks:
Break down each objective into specific, actionable tasks.
Ensure each task is clearly defined and aligned with the corresponding objective.
Execution and Self-Reflection Loop:
Execute each task sequentially.
After each task, perform a self-reflection to evaluate performance.
Refine the approach based on self-reflection before moving to the next task.
Continue the loop until all tasks are completed and objectives are achieved.
Example Application
Given Prompt:
"Analyze a data set containing 120 years of Olympic history, and use data visualization to explore how the number of athletes from each country has trended over time."
AI Agent Profile
AI Name:
Olympic Data Analyst
AI Role:
Data Analyst and Visualization Specialist
AI Objectives:
Analyze 120 years of Olympic athlete data.
Identify trends in the number of athletes from each country.
Create visualizations to represent these trends over time.
AI Tasks
Task 1:
Action: Collect and preprocess the data set containing 120 years of Olympic history.
Task 2:
Action: Clean and normalize the data to ensure consistency and accuracy.
Task 3:
Action: Analyze the data to identify the number of athletes from each country for each Olympic event.
Task 4:
Action: Generate time-series visualizations to show trends in the number of athletes from each country over time.
Task 5:
Action: Interpret the visualizations to draw meaningful insights about the trends.
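Tasks 1–4 above can be sketched in a few lines of pandas, assuming a Kaggle-style athlete table with `ID`, `NOC`, and `Year` columns (toy rows stand in for the real 120-year file, whose exact schema may differ):

```python
import pandas as pd

# Toy stand-in for the Kaggle "120 years of Olympic history" athlete data
# (column names assumed; the real file is a per-event athlete table).
df = pd.DataFrame({
    "ID":   [1, 1, 2, 3, 4, 5],
    "Name": ["A", "A", "B", "C", "D", "E"],
    "NOC":  ["USA", "USA", "USA", "GER", "GER", "USA"],
    "Year": [2000, 2000, 2000, 2000, 2004, 2004],
})

# Task 3: count *unique* athletes per country per Games. Counting rows
# instead of unique IDs double-counts athletes who enter multiple events --
# exactly the nuance the video's expert caught and the raw AI answer missed.
trends = (
    df.groupby(["Year", "NOC"])["ID"]
      .nunique()
      .unstack(fill_value=0)
)

# Task 4 is then a one-liner on the real data: trends.plot() gives the
# time-series chart of athlete counts per country.
```

Deduplicating on athlete `ID` before counting is the step that separates the two analyses in the video: the data is one row per athlete *per event*, not per athlete.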
Execution and Self-Reflection Loop
Execute Task 1:
Action: Collect and preprocess the data set.
Self-Reflection: Evaluate the completeness and quality of the data. Refine the preprocessing steps if necessary.
Execute Task 2:
Action: Clean and normalize the data.
Self-Reflection: Assess the consistency and accuracy of the cleaned data. Make adjustments as needed.
Execute Task 3:
Action: Analyze the data to identify the number of athletes from each country.
Self-Reflection: Review the analysis for accuracy and comprehensiveness. Refine the analysis approach if required.
Execute Task 4:
Action: Generate time-series visualizations.
Self-Reflection: Evaluate the clarity and effectiveness of the visualizations. Improve the visualization techniques if necessary.
Execute Task 5:
Action: Interpret the visualizations to draw insights.
Self-Reflection: Assess the insights for relevance and depth. Refine the interpretation process if needed.
Continue the Loop:
Execution: Continue executing tasks in sequence.
Self-Reflection: After each task, evaluate performance and make necessary refinements.
Completion: Continue the loop until all tasks are refined and objectives are achieved.
Output and Reporting
Monthly Report:
Summarize key findings and trends.
Dashboard:
Real-time data on the number of athletes from each country over time.
Alerts:
Notifications for significant changes or trends in the data.
Recommendations:
Insights on historical trends and potential future projections.
Continuous Improvement
Refinement:
Improve data sources and preprocessing techniques for better coverage and accuracy.
Machine Learning:
Incorporate machine learning to better identify and visualize trends.
Expansion:
Extend analysis to include additional dimensions such as athlete performance, gender, and event types.
User Feedback:
Gather feedback to enhance the relevance and usability of outputs.
This framework provides a structured approach to developing an AI agent capable of analyzing and visualizing historical Olympic data."
Can't even compare with Chris's solution... Not yet, anyway 😄
I really like Power BI, but my company is leaning towards Tableau.
One can still ask ChatGPT to break down how it made those calculations, which can then be corrected if necessary, especially given your high personal level of expertise in Excel and Power BI.
Absolutely! With more/better prompting we likely could have gotten to a very similar output, but the point here is to shine a light on some of the strengths and weaknesses of these tools, and why QA and strong foundational skills still matter
Using something the wrong way to give yourself confidence that you're better doesn't actually make it better 😊
AI is way more efficient. Look at how long he took to do that task. You could've just prompted ChatGPT to do what he did rather than ambiguously saying "describe the data". The key is to use proper prompts.
Yeah we definitely could have improved the output with more/better prompting, but the goal here is really just to showcase some of the potential risks when you blindly trust AI-generated outputs. And yeah the speed is impressive, but doesn't matter much in cases where precision is critical.
@@Chris-at-Maven, AI generated vague outputs from vague inputs.
AI can be more precise when inputs are more precise.
Sir, a Power BI DAX course, please!
That's great
ChatGPT would probably crash on the datasets I'm working with, sooooooo clean and accessible 😄
No way people use the light theme for ChatGPT, it hurts to watch 😅
GPT is a supplement, not a replacement ❤
👌 best
First learn how to write good prompts, and yes, LLMs have already taken many jobs.
I think Claude Sonnet would do much better
We’ll be testing more models like Copilot, Gemini and Claude Sonnet soon!
Neither analysis is correct.
The Soviet Union (USSR) was a large country with 15 constituent republics, and it competed in the Olympics as a single entity. This means athletes from all 15 republics could be on the Soviet Olympic team.
After the USSR dissolved in 1991, the 15 republics became independent countries. Since then, each country sends its own athletes to the Olympics. So, Russia only sends athletes who are citizens of Russia.
Yeah there are a bunch of similar nuances that we would uncover with a bit more time
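The USSR point above is straightforward to handle explicitly during aggregation: keep `URS` as its own series instead of folding it into `RUS`. A minimal pandas sketch with toy rows (NOC codes as used in the Kaggle Olympic dataset; column names assumed):

```python
import pandas as pd

# Toy rows spanning the 1991 dissolution (NOC codes from the Kaggle data:
# URS = Soviet Union, RUS = Russia, UKR = Ukraine).
df = pd.DataFrame({
    "ID":   [1, 2, 3, 4],
    "NOC":  ["URS", "URS", "RUS", "UKR"],
    "Year": [1988, 1988, 1996, 1996],
})

# Keep URS as a distinct entity rather than merging it into RUS:
# folding them together would wrongly credit Russia with athletes
# from all fifteen republics before 1991.
counts = df.groupby(["Year", "NOC"])["ID"].nunique()
```

Whether to chart `URS` as a separate line or annotate the 1991 break is a judgment call, which is exactly the kind of domain nuance the thread is pointing at.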
We are indirectly asking OpenAI to make ChatGPT smarter, while at the same time telling OpenAI... "don't take our jobs!"
I don't think AI will replace data professionals any time soon, even if they completely nail the technical stuff. Being a great analyst requires a lot of uniquely human skills like creativity, communication, empathy, intuition, etc.
AI will help data analysts be more analysts rather than coders.
Lol
We're safe
For now, at least 😅
The host sucks at using ChatGPT.
AI is a huge security risk to your private data.
There are newspapers and other corporate entities suing an AI company for using their content to train its AI.
Plus, the subscription and monthly costs of AI are exorbitant for most small and medium businesses.
Even enterprise companies are penny-pinching now.