How To Generate Google Maps Leads with Selenium Python

  • Published: 8 Feb 2025
  • 💥 Use code "Michael" at checkout to get an extra 2GB of proxy data with any package (except trial): go.nodemaven.c...
    🤖 Captcha Solver: bit.ly/capsolv... (Use code "Michael" for a 5% bonus)
    📸 Capture Screenshot API: capturescreens...
    🎥 Video Description:
    In Episode 14 of the Python Selenium Tutorial Series, we tackle a highly practical and valuable topic: How to Generate Google Maps Leads using Selenium in Python. This episode is dedicated to uncovering the power of web scraping for lead generation on one of the most extensive business directories available: Google Maps.
    🔍 Why is This Important?
    Understanding how to efficiently generate leads from Google Maps can significantly benefit businesses and freelancers who rely on local and global business information for marketing, sales, or research. We'll explore how automated web scraping can streamline this process, saving time and providing a competitive edge in various industries.
    🧩 Learn About:
    The fundamentals of web scraping with Selenium in Python and its application for extracting business information from Google Maps.
    Setting up Selenium with the right configurations for successful scraping, including the use of proxies to avoid IP blocking and maintain privacy.
    Techniques for navigating, searching, and extracting detailed business information such as names, addresses, contact details, and reviews from Google Maps.
    Best practices for managing and storing the scraped data effectively for marketing or analysis purposes.
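    The workflow outlined above can be sketched roughly as follows. This is a minimal sketch, not the video's source code: `search_url` and `scrape_listings` are hypothetical helper names, and every CSS selector (`div[role="feed"]`, `.Nv2PK`, `.fontHeadlineSmall`) is an assumption about Google Maps' markup at the time of writing, which changes frequently.

```python
import time
from urllib.parse import quote_plus


def search_url(keyword: str, location: str) -> str:
    """Build a Google Maps search URL for a keyword in a location."""
    return f"https://www.google.com/maps/search/{quote_plus(f'{keyword} in {location}')}"


def scrape_listings(keyword: str, location: str, max_scrolls: int = 10):
    """Open a Maps search and collect business names by scrolling the
    results panel to trigger lazy loading. Selectors are assumptions."""
    # selenium-wire wraps Selenium and adds proxy/request-inspection support
    from seleniumwire import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(search_url(keyword, location))
        time.sleep(5)  # crude wait; prefer WebDriverWait in real code
        feed = driver.find_element(By.CSS_SELECTOR, 'div[role="feed"]')
        for _ in range(max_scrolls):
            # scroll the results panel itself (not the window) to load more cards
            driver.execute_script(
                "arguments[0].scrollTop = arguments[0].scrollHeight", feed
            )
            time.sleep(2)
        cards = driver.find_elements(By.CSS_SELECTOR, ".Nv2PK")
        return [
            c.find_element(By.CSS_SELECTOR, ".fontHeadlineSmall").text
            for c in cards
        ]
    finally:
        driver.quit()
```

    Scrolling the feed element via `execute_script` rather than the window is the detail several commenters below single out: without it, only the first batch of roughly 20 results is ever loaded.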
    🔗 Helpful Links:
    NodeMaven Proxy Provider: bit.ly/nodemav...
    PixelScan Proxy Checker Extension: addons.mozilla...
    Selenium Wire Package: pypi.org/proje...
    Source Code: github.com/mic...
    💬 Join our Community:
    Discord Link: / discord
    💖 Support:
    FYPFans: fyp.fans/micha...
    Revolut: revolut.me/mic...
    Bitcoin Wallet: bc1qk39u8vtpnfw283ql567zrhsrvjj0a58mvv0rdm
    Ethereum Wallet: 0x5e7BD4f473f153d400b39D593A55D68Ce80F8a2e
    USD-T (TRC20) Wallet: TRPLBBri3Rc2YGJ2cyK75jLsrztCT4ZPe8
    🌐 Connect with Us:
    Website: websidev.com
    Linkedin: / michael-kitas-638aa4209
    Instagram: / michael_kitas
    Github: github.com/mic...
    📧 Business Email:
    support@websidev.com
    🏷 Tags:
    #PythonTutorial #SeleniumPython #PythonSeleniumTutorial #SeleniumTutorialforBeginners
    Celebrate learning and stay connected for more insightful tutorials on Python and Selenium! 🎉

Comments • 22

  • @irfanshaikh262 · 11 months ago +3

    Absolutely fantastic.
    Loved the way you fooled the tech giant by still being able to extract the divs even when the page was designed to suppress it.

  • @alexandratravels · 11 months ago +2

    Great tutorial Michael🙏
    Clear, concise, and very informative. Thanks for the valuable insights and tips!

  • @Faybmi · 5 months ago +1

    Watching from Kyrgyzstan🇰🇬, great video thank you❤

  • @siddharthchaberia1968 · 1 month ago +1

    Now for phone numbers, we need to add another simulation step, clicking the link, since Google has now hidden it from the initial view :) Maybe they saw this video

    • @MichaelKitas · 1 month ago +1

      @@siddharthchaberia1968 Yeah, they make frequent changes to the UI 😅

  • @ShemelesBekele-f6i · 17 days ago

    Great work. I'm getting "import seleniumwire could not be resolved". Can you help me?

  • @michaelrstudley · 6 months ago +1

    Great video, thank you

  • @OmarFreeManX · 5 months ago +1

    Great Michael 👏

  • @MagicJar172 · 1 month ago +1

    What if I want to do scraping at the country level, not the city or province level?

    • @MichaelKitas · 1 month ago

      @@MagicJar172 Perhaps search for "keyword" in "country", but it won't be as effective. I'm currently working on a new solution for this, where we can get literally ALL businesses for any niche and location. Shoot me a message if you are interested. Email: mixaliskitas@gmail.com

  • @youtubesearch7239 · 10 months ago

    Great explanation, brother. Can you please make a video on scraping USPS shipment tracking? Lots of appreciation from India 🇮🇳

  • @melowmelowclub2329 · 9 months ago +1

    Thank you, let me follow your steps

  • @grahamsabin6286 · 7 months ago

    Hi Michael, great video, thank you for the help! I'm trying to use this now to scrape phone numbers, but I think Google has updated where the phone numbers are stored, so now you need to click on the item card to find them, as they no longer exist in the list text. Just something interesting that I noticed. Maybe you have a better workaround.
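    The click-to-reveal approach this comment describes could be sketched as below. This is a hedged sketch, not confirmed working code: `phone_for_card` and `phone_from_label` are hypothetical helpers, and the `button[data-item-id^="phone"]` selector and the `Phone: …` aria-label format are assumptions about Google Maps' current detail-pane markup.

```python
def phone_from_label(aria_label):
    """Extract the number from an aria-label like 'Phone: +1 555-0100'."""
    if ":" in aria_label:
        return aria_label.split(":", 1)[1].strip()
    return aria_label.strip()


def phone_for_card(driver, card, timeout=10):
    """Click a result card, wait for the detail pane to render, and read
    the phone button's aria-label. Selector is an assumption and may need
    updating as Google changes the UI."""
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    card.click()
    try:
        button = WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located(
                (By.CSS_SELECTOR, 'button[data-item-id^="phone"]')
            )
        )
        return phone_from_label(button.get_attribute("aria-label") or "")
    except Exception:
        return None  # no phone listed, or the markup changed again
```

    Since every card now needs a click plus a wait, scraping phone numbers this way is noticeably slower than reading them out of the list text used to be.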

  • @adarmawan117 · 9 months ago +1

    Hi, Michael.
    Can you please make videos about SeleniumBase? How to use it, and everything about that library?
    Thanks in advance.

  • @nemem02 · 9 months ago +1

    Excellent material. Just one question: I want to extract the physical address and I can't find a way; it doesn't appear to have a unique attribute to use in the search by CSS. I am struggling and losing my sight hahah... pls help!

    • @MichaelKitas · 9 months ago +3

      Hello, got the solution for you:
      Use this selector: .fontBodyMedium .W4Efsd:nth-child(1)
      You will get both the profession and the address.
      Then split the text on the "·" character, and you will get the profession and address separately.
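      The split described in this reply could look like the helper below. `split_category_address` is a hypothetical name, and the separator is the middle-dot character "·" that Google Maps places between the category (profession) and the address in the `.fontBodyMedium .W4Efsd:nth-child(1)` block:

```python
def split_category_address(text):
    """Split a combined '<category> · <address>' string into its two
    parts, tolerating a missing address."""
    parts = [p.strip() for p in text.split("·")]
    category = parts[0] if parts else ""
    address = parts[1] if len(parts) > 1 else ""
    return category, address
```

      For example, `split_category_address("Plumber · 123 Main St")` yields `("Plumber", "123 Main St")`.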

    • @nemem02 · 9 months ago +1

      @@MichaelKitas Thank you so much!! I will try it now ! Great great thanks!!!

    • @nemem02 · 9 months ago +2

      @@MichaelKitas SOLVED!!! Now it works. I used this (I don't need the profession, so I only got the address):
      try:
          data['address'] = item.find_element(By.CSS_SELECTOR, '.fontBodyMedium > div:nth-child(4) > div:nth-child(1) > span:nth-child(2) > span:nth-child(2)').text
      except Exception:
          pass
      This is the best scraping code I have encountered. Most of the others don't work as cleanly as your code. GREAT, and thank you so much!!! I have seen like 10 different programs and most of them don't scroll the data, so they only get like 20 results. This one with the JS scrolling is PERFECT. Thanks again!!!

    • @MichaelKitas · 9 months ago +1

      @@nemem02 Very glad it worked out 🙂

  • @muhammadsalmandata · 11 months ago +1

    How do you solve the Facebook Marketplace duplicate photo upload problem? Please make a video on it.