Microsoft Fabric: Import Azure SQL Data to Warehouse | Multiple tables using Pipeline

  • Published: 18 Jan 2025

Comments • 7

  • @adilmajeed8439
    @adilmajeed8439 1 year ago +4

    The best part of Fabric: once the data is ingested into the SQL endpoint layer, it is available as a dataset that can be used within PBI. Thanks for your awesome videos, and please keep sharing your knowledge.

    • @AmitChandak
      @AmitChandak  1 year ago +1

      Thank you so much for your kind words! I'm thrilled to hear that you find the Fabric feature valuable. I appreciate your support and encouragement, and I'll definitely continue sharing my knowledge. Stay tuned for more awesome videos! 🙏

  • @NaaneVinu
    @NaaneVinu 1 year ago

    Is it possible to continuously stream in the data from SQL Server as new records are added to the source database?
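
    A Fabric pipeline copies data on a schedule or trigger rather than streaming row by row, so a common workaround is incremental (watermark-based) loading: each run copies only the rows whose key or timestamp exceeds the last value already loaded. Below is a minimal sketch of that pattern; it uses an in-memory SQLite database and an `orders` table purely as illustrative stand-ins for the SQL Server source and the warehouse destination.

    ```python
    import sqlite3

    # In-memory SQLite stands in for the SQL Server source (illustrative only)
    src = sqlite3.connect(':memory:')
    src.execute('CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)')
    src.executemany('INSERT INTO orders (id, amount) VALUES (?, ?)',
                    [(1, 10.0), (2, 20.0)])

    loaded = []      # stands in for the warehouse table
    watermark = 0    # highest id copied so far

    def incremental_load():
        """Copy only rows added since the last run (id > watermark)."""
        global watermark
        rows = src.execute(
            'SELECT id, amount FROM orders WHERE id > ? ORDER BY id',
            (watermark,)).fetchall()
        loaded.extend(rows)
        if rows:
            watermark = rows[-1][0]  # advance the watermark
        return rows

    incremental_load()           # first run copies both existing rows
    src.execute('INSERT INTO orders (id, amount) VALUES (3, 30.0)')
    new = incremental_load()     # second run copies only the new row
    print(new)                   # [(3, 30.0)]
    ```

    In a real pipeline the watermark would be persisted (e.g. in a control table) between scheduled runs; for genuinely continuous ingestion, Fabric's eventstream or CDC-style features are the better fit.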

  • @saumyakapoor6772
    @saumyakapoor6772 1 year ago

    This is off topic here, but what does data governance look like in Fabric?

    • @AmitChandak
      @AmitChandak  1 year ago +1

      Please refer to these and see if they help:
      learn.microsoft.com/en-us/fabric/governance/governance-compliance-overview
      learn.microsoft.com/en-us/fabric/security/security-overview

  • @kalpanap5687
    @kalpanap5687 1 year ago +1

    Great explanation, sir,
    but how do we pass the parameters at the database level and server level? Please explain in detail, sir.
    Thank you

    • @AmitChandak
      @AmitChandak  1 year ago +1

      Please check if this code can help:

      from sqlalchemy import create_engine, text

      # Define the database and server parameters
      database_name = 'your_database_name'
      server_name = 'your_server_name'
      username = 'your_username'
      password = 'your_password'
      port = 'your_port_number'

      # Additional keyword arguments passed to the pyodbc driver,
      # e.g. a 10-second login timeout
      additional_params = {
          'timeout': 10
      }

      # Build the connection URL from the parameters
      url = (
          f'mssql+pyodbc://{username}:{password}@{server_name}:{port}/'
          f'{database_name}?driver=ODBC+Driver+17+for+SQL+Server'
      )

      # Create the engine using the connection URL and the extra arguments
      engine = create_engine(url, connect_args=additional_params)

      # Use the engine to perform database operations; the context manager
      # closes the connection automatically, even if the query raises
      with engine.connect() as conn:
          result = conn.execute(text('SELECT * FROM your_table'))
          for row in result:
              print(row)

      # Release pooled connections when you're done
      engine.dispose()