Very clear explanation, thank you. You are awesome!
Thank You!!
Thank you for the video. It would have been nice if you did some animated visualizations using the data.
Thank You. I will try in the future
Hello, what if we want to search for the ampersand (&) symbol, such as in H&M? What would the command be? Thank you
Look at the documentation
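For context on why `&` is tricky in search terms (independent of any particular scraping library): `&` separates query parameters in a URL, so a literal ampersand in a search term must be percent-encoded as `%26`. A minimal standard-library sketch:

```python
from urllib.parse import quote, urlencode

# '&' separates query parameters in a URL, so a literal ampersand
# in a search term must be percent-encoded as %26.
term = "H&M"

# Encode the bare term.
print(quote(term))             # → H%26M

# Encode it as a query-string parameter, as a client library would.
print(urlencode({"q": term}))  # → q=H%26M
```

Libraries that build the request URL for you usually handle this encoding automatically; it matters mainly when you construct the query string yourself.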
Getting a "Could not find the Guest token in HTML" error
Refer to the Twint GitHub
Sir, please make a video on how to fine-tune and train our own model for a domain-specific dataset.
Sure will try to make a video
@@RitheshSreenivasan That would be amazing if you could. Great content.
the function column_names() doesn't work
Please have a look at the documentation for the latest version
@@RitheshSreenivasan thanks
@@RitheshSreenivasan In your GitHub code the parameter "likes_count" should be replaced by "nlikes" (as in the video); that was the problem.
And for me, I needed to download tensorflow. Thanks for everything.
I want to scrape emotions data in Roman Urdu, like happy, sad, angry, fear.
How can I do this?
Please reply fast, it's urgent.
Response from ChatGPT:
To scrape emotions data in Roman Urdu from Twitter, you can use the Tweepy library, which provides an easy-to-use interface for interacting with the Twitter API. Here's an example Python code that demonstrates how to scrape emotions data from Twitter using Tweepy:
```python
import tweepy

# Twitter API credentials
consumer_key = "YOUR_CONSUMER_KEY"
consumer_secret = "YOUR_CONSUMER_SECRET"
access_token = "YOUR_ACCESS_TOKEN"
access_token_secret = "YOUR_ACCESS_TOKEN_SECRET"

# Authenticate with the Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Keywords to search for emotions in Roman Urdu
keywords = ["happy", "sad", "angry", "fear"]

# Scrape emotions data from Twitter
emotions_data = []
for keyword in keywords:
    # Tweepy v4 renamed api.search to api.search_tweets;
    # on Tweepy v3, use api.search instead.
    tweets = tweepy.Cursor(api.search_tweets, q=keyword, lang="ur").items(100)  # adjust the count as needed
    for tweet in tweets:
        emotion = {
            "keyword": keyword,
            "text": tweet.text,
            "created_at": tweet.created_at,
        }
        emotions_data.append(emotion)

# Print the scraped emotions data
for emotion in emotions_data:
    print(emotion)
```
Before running the code, make sure you have installed the Tweepy library (`pip install tweepy`) and have obtained your Twitter API credentials by creating a Twitter Developer account and setting up a new application. Replace the `"YOUR_CONSUMER_KEY"`, `"YOUR_CONSUMER_SECRET"`, `"YOUR_ACCESS_TOKEN"`, and `"YOUR_ACCESS_TOKEN_SECRET"` placeholders with your actual API credentials.
In the code above, the `keywords` list contains the emotions you want to search for. The code will retrieve 100 tweets per keyword (you can adjust this number) that match the specified language filter (`lang="ur"`). Note that Twitter's language filter matches tweets written in Urdu script; Roman Urdu tweets are usually auto-detected as "en" or "und", so you may need to filter on the text itself instead.
The scraped data includes the keyword, the text of the tweet, and the timestamp of when the tweet was created. You can modify or extend the code to extract additional information from the tweets if needed.
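One common extension is saving the collected rows to a CSV file for later analysis. A standard-library sketch, using hypothetical sample rows in the same shape as `emotions_data` above:

```python
import csv

# Hypothetical sample rows in the same shape as `emotions_data`
# (not real tweets).
emotions_data = [
    {"keyword": "happy", "text": "bohat khush hoon aaj", "created_at": "2021-03-30 10:00:00"},
    {"keyword": "sad", "text": "dil udaas hai", "created_at": "2021-03-30 11:00:00"},
]

# Write the rows to CSV; utf-8 keeps non-ASCII tweet text intact.
with open("emotions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["keyword", "text", "created_at"])
    writer.writeheader()
    writer.writerows(emotions_data)
```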
Please note that web scraping Twitter is subject to Twitter's API usage limits and terms of service. Make sure to comply with the guidelines and use the data responsibly.
@@RitheshSreenivasan Thanks for your response
Hello bro, I want to ask some questions
What do you want to ask
How can we install transformers with PyTorch in Anaconda?
Create a conda virtual environment, then install transformers and PyTorch inside it:
conda create --name pytorchvenv python=3.6
conda activate pytorchvenv
pip install transformers
pip install torch
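To verify the environment afterwards, a quick standard-library check (note that the pip package for PyTorch is `torch`, not `pytorch`):

```python
import importlib.util

def check_installed(names):
    """Return {package_name: True/False} depending on whether
    the package can be found on the current Python path."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

# Check the two packages installed above.
status = check_installed(["transformers", "torch"])
for name, ok in status.items():
    print(f"{name}: {'installed' if ok else 'missing'}")
```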
Sir, I tried to run this part,
c.Search = "lockdown in India"
# Custom output format
c.Format = "Username: {username} | Tweet: {tweet}"
c.Limit = 100
c.Since = '2021-03-30'
c.Pandas = True
a = twint.run.Search(c)
After that, the error shows like this:
CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
sleeping for 1.0 secs
CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
sleeping for 8.0 secs
CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
sleeping for 27.0 sec
Can you please help me with that?
Please refer to twint documentation. I have only explored twint to the extent required for this video
did u solve it?
uninstall and get the latest version
Same issue ... did you solve that error? Any help appreciated