## About This Page
This page is part of the Azure documentation. It contains code examples and configuration instructions for working with Azure services.
## Bias Analysis
### Bias Types

- ⚠️ `powershell_heavy`
- ⚠️ `windows_first`
- ⚠️ `windows_tools`
- ⚠️ `missing_linux_example`
### Summary
The documentation provides a PowerShell example as the primary or only command-line method for triggering the Azure Data Factory pipeline, with no equivalent Bash or Linux-native example. PowerShell and Windows tools (e.g., Power BI Desktop) are mentioned explicitly and in detail, while Linux alternatives or parity are not addressed. The ordering and phrasing suggest a Windows-first approach, and Linux users are left to infer or adapt steps themselves.
### Recommendations
- Provide a Bash/Azure CLI example for triggering the Data Factory pipeline, not just PowerShell.
- Note explicitly that PowerShell 7 and the Az module run cross-platform (Windows, Linux, and macOS), and provide setup guidance for Linux and Mac users.
- List both PowerShell and Bash/Azure CLI options side-by-side or in parallel sections to ensure parity.
- Mention or link to Power BI alternatives for Linux users, or clarify that Power BI Desktop is Windows-only and suggest Power BI web as an option.
- Avoid language that implies PowerShell/Windows is the default or only supported environment.
- Ensure that all steps (especially automation and scripting) have clear, tested instructions for both Windows and Linux users.
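For example, a Bash/Azure CLI counterpart to the PowerShell trigger might look like the following sketch. It assumes the `datafactory` CLI extension is installed and that you're signed in with `az login`; the placeholder names mirror the PowerShell example.

```bash
# Requires the Data Factory extension: az extension add --name datafactory
# Start the pipeline run and capture the run ID it returns.
runId=$(az datafactory pipeline create-run \
    --resource-group RESOURCEGROUP \
    --factory-name DataFactoryName \
    --name IngestAndTransform \
    --query runId --output tsv)

# Check on the run; repeat as needed to monitor progress.
az datafactory pipeline-run show \
    --resource-group RESOURCEGROUP \
    --factory-name DataFactoryName \
    --run-id "$runId"
```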
## Flagged Code Snippets
To trigger the pipeline, you have two options. You can:
* Trigger the Data Factory pipeline from PowerShell. Replace `RESOURCEGROUP` and `DataFactoryName` with the appropriate values, and then run commands along the lines of the sketch below. Re-execute `Get-AzDataFactoryV2PipelineRun` as needed to monitor progress.
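A minimal sketch of those commands, assuming the `Az.DataFactory` module is installed and you're signed in with `Connect-AzAccount` (the exact commands aren't reproduced in the flagged excerpt):

```powershell
# Start the pipeline run and capture the run ID it returns.
$runId = Invoke-AzDataFactoryV2Pipeline `
    -ResourceGroupName "RESOURCEGROUP" `
    -DataFactoryName "DataFactoryName" `
    -PipelineName "IngestAndTransform"

# Check the run's status; re-execute as needed to monitor progress.
Get-AzDataFactoryV2PipelineRun `
    -ResourceGroupName "RESOURCEGROUP" `
    -DataFactoryName "DataFactoryName" `
    -PipelineRunId $runId
```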
Or you can:
* Open the data factory and select **Author & Monitor**. Trigger the `IngestAndTransform` pipeline from the portal. For information on how to trigger pipelines through the portal, see [Create on-demand Apache Hadoop clusters in HDInsight by using Azure Data Factory](hdinsight-hadoop-create-linux-clusters-adf.md#trigger-a-pipeline).
To verify that the pipeline has run, take one of the following steps:
* Go to the **Monitor** section in your data factory through the portal.
* In Azure Storage Explorer, go to your Data Lake Storage Gen2 storage account. Go to the `files` file system, and then go to the `transformed` folder. Check the folder contents to see if the pipeline succeeded.
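For a command-line alternative to Storage Explorer, the same check can be made with the Azure CLI. A sketch, assuming `STORAGEACCOUNTNAME` is a placeholder for your Data Lake Storage Gen2 account and your signed-in identity has data-plane access:

```bash
# List the contents of the transformed folder in the files file system.
az storage fs file list \
    --account-name STORAGEACCOUNTNAME \
    --file-system files \
    --path transformed \
    --auth-mode login \
    --output table
```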
For other ways to transform data by using HDInsight, see [this article on using Jupyter Notebook](/azure/hdinsight/spark/apache-spark-load-data-run-query).
### Create a table on the Interactive Query cluster to view data on Power BI
1. Copy the `query.hql` file to the LLAP cluster by using the secure copy (SCP) command; a sketch of the command follows this step.
This script creates a managed table on the Interactive Query cluster that you can access from Power BI.
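A sketch of the copy-and-run sequence, assuming the default `sshuser` SSH account, the standard HDInsight `-ssh` endpoint, and the usual HTTP-mode Beeline connection on an Interactive Query head node (replace `LLAPCLUSTERNAME` with your cluster name):

```bash
# Copy the Hive script to the cluster head node.
scp query.hql sshuser@LLAPCLUSTERNAME-ssh.azurehdinsight.net:/home/sshuser/

# Connect to the head node, then run the script with Beeline.
ssh sshuser@LLAPCLUSTERNAME-ssh.azurehdinsight.net
beeline -u 'jdbc:hive2://localhost:10001/;transportMode=http' -f query.hql
```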
### Create a Power BI dashboard from sales data
1. Open Power BI Desktop.
1. On the menu, go to **Get data** > **More...** > **Azure** > **HDInsight Interactive Query**.
1. Select **Connect**.
1. In the **HDInsight Interactive Query** dialog:
   1. In the **Server** text box, enter the name of your LLAP cluster in the format `https://LLAPCLUSTERNAME.azurehdinsight.net`.
   1. In the **Database** text box, enter **default**.
   1. Select **OK**.
1. In the **AzureHive** dialog:
   1. In the **User name** text box, enter **admin**.
   1. In the **Password** text box, enter **Thisisapassword1**.
   1. Select **Connect**.
1. From **Navigator**, select **sales** or **sales_raw** to preview the data. After the data loads, you can experiment with the dashboard you want to create. To get started with Power BI dashboards, see the following articles:
* [Introduction to dashboards for Power BI designers](/power-bi/service-dashboards)
* [Tutorial: Get started with the Power BI service](/power-bi/service-get-started)
## Clean up resources
If you're not going to continue to use this application, delete all resources so that you aren't charged for them.
1. To remove the resource group, enter the command shown in the sketch below.
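A minimal sketch in PowerShell, assuming the Az module is installed and `RESOURCEGROUP` is the resource group you created for this application:

```powershell
# Delete the resource group and everything it contains.
Remove-AzResourceGroup -Name "RESOURCEGROUP"
```

From Bash, the Azure CLI equivalent is `az group delete --name RESOURCEGROUP`.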