The best part of Fabric: once the data is ingested into the SQL endpoint layer, it is available as a dataset that can be used within Power BI. Thanks for your awesome videos - please keep sharing your knowledge!
Thank you so much for your kind words! I'm thrilled to hear that you find the Fabric feature valuable. I appreciate your support and encouragement, and I'll definitely continue sharing my knowledge. Stay tuned for more awesome videos! 🙏
Is it possible to stream data continuously from SQL Server as new records are added to the source database?
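Not answered in the thread, but one common approach (a sketch, not Fabric-specific) is incremental polling against a watermark column such as an auto-incrementing id; SQL Server also offers Change Tracking and CDC for true change capture. Below is a minimal illustration of the polling pattern using an in-memory SQLite database as a stand-in source - the `events` table and `fetch_new_rows` helper are hypothetical names, not part of any API:

```python
import sqlite3

# Hypothetical source table; in practice this would live in SQL Server.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)')
conn.executemany('INSERT INTO events (payload) VALUES (?)', [('a',), ('b',)])

last_seen_id = 0  # watermark: highest id already processed

def fetch_new_rows(conn, watermark):
    """Return rows added since the watermark, plus the new watermark."""
    rows = conn.execute(
        'SELECT id, payload FROM events WHERE id > ? ORDER BY id',
        (watermark,),
    ).fetchall()
    new_watermark = rows[-1][0] if rows else watermark
    return rows, new_watermark

# First poll picks up the two existing rows
rows, last_seen_id = fetch_new_rows(conn, last_seen_id)
print(rows)  # [(1, 'a'), (2, 'b')]

# A new record arrives at the source...
conn.execute("INSERT INTO events (payload) VALUES ('c')")

# ...and the next poll sees only the new row
rows, last_seen_id = fetch_new_rows(conn, last_seen_id)
print(rows)  # [(3, 'c')]
```

In a real pipeline you would run the poll on a schedule and persist the watermark between runs; for low-latency needs, CDC is the more robust option.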
This is a bit off topic here, but what does data governance look like in Fabric?
Please refer to these, if they help:
learn.microsoft.com/en-us/fabric/governance/governance-compliance-overview
learn.microsoft.com/en-us/fabric/security/security-overview
Great explanation, sir,
but how do we pass the parameters at the database level and the server level? Please explain in detail, sir.
Thank you
Please check if this code can help
from sqlalchemy import create_engine, text
# Define the database and server parameters
database_name = 'your_database_name'
server_name = 'your_server_name'
username = 'your_username'
password = 'your_password'
port = 'your_port_number'
# Additional arguments passed through to the pyodbc driver
# (e.g. a 10-second login timeout; pyodbc's keyword is 'timeout')
additional_params = {'timeout': 10}
# Build the connection URL from the parameters above
url = (
    f'mssql+pyodbc://{username}:{password}@{server_name}:{port}/'
    f'{database_name}?driver=ODBC+Driver+17+for+SQL+Server'
)
# Create the engine using the connection URL and additional parameters
engine = create_engine(url, connect_args=additional_params)
# Use the engine to perform database operations.
# Note: engine.execute() was removed in SQLAlchemy 2.x, so
# open a connection and wrap raw SQL in text()
with engine.connect() as conn:
    result = conn.execute(text('SELECT * FROM your_table'))
    for row in result:
        print(row)
# Release pooled connections when you're done
engine.dispose()
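To pass query-level parameters (as opposed to the connection-level ones above), use bound parameters rather than string formatting. A minimal sketch of the pattern with the stdlib `sqlite3` driver so it runs anywhere - the `users` table is a hypothetical example; with SQLAlchemy you would use `text()` with `:named` placeholders instead of `?`:

```python
import sqlite3

# In-memory database with a hypothetical example table
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT, age INTEGER)')
conn.executemany(
    'INSERT INTO users (name, age) VALUES (?, ?)',
    [('alice', 30), ('bob', 25)],
)

# Bound parameters are sent separately from the SQL text,
# which avoids manual quoting and SQL injection
rows = conn.execute(
    'SELECT name FROM users WHERE age > ?', (26,)
).fetchall()
print(rows)  # [('alice',)]
```

The same idea applies to the SQL Server engine above: keep credentials and server details in the connection URL, and pass per-query values as bound parameters.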