This page contains Windows bias

About This Page

This page is part of the Azure documentation. It contains code examples and configuration instructions for working with Azure services.

Bias Analysis

Bias Types:
⚠️ windows_first
⚠️ windows_tools
⚠️ missing_linux_example
Summary:
The documentation generally follows a Linux-first workflow for container development, but it shows subtle signs of Windows bias. The sample images come from a repository named 'Cognitive-CustomVision-Windows', and no Linux-specific sample image repository is mentioned. In the prerequisites, both Linux and Windows device setup links are provided (with the Windows link listed second), and all CLI instructions use Bash syntax with no PowerShell or Windows command-line examples. Even so, the documentation does not provide explicit Linux shell alternatives for every step, and some references, such as the sample repo and image paths, are Windows-centric. It also lacks explicit instructions for running the workflow on a native Linux desktop (as opposed to a VM or container), and makes no mention of Linux-specific troubleshooting or differences.
Recommendations:
  • Provide sample image repositories or paths that are not Windows-specific, or clarify that the sample repo is cross-platform.
  • Include explicit Linux-native instructions and troubleshooting steps, not just for containerized environments but also for common Linux distributions.
  • Where sample paths or repositories are named with 'Windows', also mention or provide equivalent Linux-named resources.
  • Ensure all screenshots and file path examples use cross-platform or Linux-style paths (e.g., forward slashes), or provide both styles.
  • Add a section or notes about any differences or considerations when running the workflow on Windows vs. Linux hosts.
  • If referencing Windows devices or tools, ensure Linux equivalents are always mentioned first and with equal detail.
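
As a concrete illustration of the forward-slash recommendation, the sketch below converts a Windows-style documentation path into the cross-platform form using only the standard library. The repo name is reused from the tutorial purely as an example string:

```python
from pathlib import PurePosixPath, PureWindowsPath

# A Windows-style path as it might appear in a screenshot or doc step.
win_path = PureWindowsPath(r"Cognitive-CustomVision-Windows\Samples\Images\Hemlock")

# Rebuild it with forward slashes; this form works on Linux and macOS,
# and most Windows APIs accept it as well.
cross_path = PurePosixPath(*win_path.parts)
print(cross_path)  # Cognitive-CustomVision-Windows/Samples/Images/Hemlock
```

Documentation that prints paths in this form needs no per-platform variants.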

Scan History

| Date | Scan ID | Status | Bias Status |
|------|---------|--------|-------------|
| 2025-09-16 00:00 | #113 | completed | ✅ Clean |
| 2025-09-15 00:00 | #112 | completed | ✅ Clean |
| 2025-09-14 00:00 | #111 | completed | ✅ Clean |
| 2025-09-13 00:00 | #110 | completed | ✅ Clean |
| 2025-09-12 00:00 | #109 | completed | ✅ Clean |
| 2025-09-11 00:00 | #108 | completed | ✅ Clean |
| 2025-09-10 00:00 | #107 | completed | ✅ Clean |
| 2025-09-09 00:00 | #106 | completed | ✅ Clean |
| 2025-09-08 00:00 | #105 | completed | ✅ Clean |
| 2025-09-07 00:00 | #104 | completed | ✅ Clean |
| 2025-09-06 00:00 | #103 | completed | ✅ Clean |
| 2025-09-05 00:00 | #102 | completed | ✅ Clean |
| 2025-09-04 00:00 | #101 | completed | ✅ Clean |
| 2025-09-03 00:00 | #100 | completed | ✅ Clean |
| 2025-08-29 00:01 | #95 | completed | ✅ Clean |
| 2025-08-27 00:01 | #93 | in_progress | ✅ Clean |
| 2025-08-22 00:01 | #88 | completed | ✅ Clean |
| 2025-08-17 00:01 | #83 | in_progress | ✅ Clean |
| 2025-07-13 21:37 | #48 | completed | ❌ Biased |
| 2025-07-12 23:44 | #41 | in_progress | ❌ Biased |
| 2025-07-09 13:09 | #3 | cancelled | ✅ Clean |
| 2025-07-08 04:23 | #2 | cancelled | ❌ Biased |

Flagged Code Snippets

2. Return to your Custom Vision project and select **Add images**.
3. Browse to the git repo that you cloned locally, and navigate to the first image folder, **Cognitive-CustomVision-Windows / Samples / Images / Hemlock**. Select all 10 images in the folder, and then select **Open**.
4. Add the tag **hemlock** to this group of images, and then press **enter** to apply the tag.
5. Select **Upload 10 files**.

   ![Upload hemlock tagged files to Custom Vision](./media/tutorial-deploy-custom-vision/upload-hemlock.png)

6. When the images are uploaded successfully, select **Done**.
7. Select **Add images** again.
8. Browse to the second image folder, **Cognitive-CustomVision-Windows / Samples / Images / Japanese Cherry**. Select all 10 images in the folder and then **Open**.
9. Add the tag **japanese cherry** to this group of images and press **enter** to apply the tag.
10. Select **Upload 10 files**. When the images are uploaded successfully, select **Done**.
11. After tagging and uploading both sets of images, select **Train** to train the classifier.

### Export your classifier

1. After training your classifier, select **Export** on the Performance page of the classifier.

   ![Export your trained image classifier](./media/tutorial-deploy-custom-vision/export.png)

2. Select **DockerFile** for the platform.
3. Select **Linux** for the version.
4. Select **Export**.
5. After the export completes, select **Download** and save the .zip package locally on your computer. Extract all files from the package. Use these files to create an IoT Edge module that contains the image classification server.

When you reach this point, you've finished creating and training your Custom Vision project. You'll use the exported files in the next section, but you're done with the Custom Vision web page.

## Create an IoT Edge solution

You now have the files for a container version of your image classifier on your development machine.
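
Once the exported container is built and running locally, you can smoke-test it over HTTP before wiring it into IoT Edge. The sketch below builds such a request with the standard library only; the `/image` endpoint path, the port, and the raw octet-stream body are assumptions based on the exported container's typical test interface, so check the README inside your export package for the exact route:

```python
import urllib.request

def build_classify_request(image_path: str,
                           endpoint: str = "http://127.0.0.1:80/image") -> urllib.request.Request:
    """Build a POST request that sends raw image bytes to the classifier.

    The endpoint URL is an assumption; confirm it against the README in
    the exported .zip for your container.
    """
    with open(image_path, "rb") as f:
        body = f.read()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/octet-stream"},
    )

# Usage against a running container:
#   with urllib.request.urlopen(build_classify_request("test_image.jpg")) as resp:
#       print(resp.read().decode())
```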
In this section, you set up the image classifier container to run as an IoT Edge module. You also create a second module that posts requests to the classifier and sends the results as messages to IoT Hub.

### Create a new solution

A solution is a logical way of developing and organizing multiple modules for a single IoT Edge deployment. A solution contains code for one or more modules and the deployment manifest that declares how to configure them on an IoT Edge device.

Create the solution using the *Azure IoT Edge Dev Tool* command-line (CLI) development tool. The simplest way to use the tool is to [Run the IoT Edge Dev Container with Docker](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/run-devcontainer-docker.md).

1. Create a directory named **classifier** and change to the directory.
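
To make the deployment-manifest idea concrete, here is a minimal sketch of its shape, built as a Python dictionary and printed as JSON. The module name, version tag, and `<registry>` placeholder are illustrative; a real manifest generated by the IoT Edge Dev Tool also declares the system modules and runtime settings:

```python
import json

# Skeleton of a deployment manifest: $edgeAgent's desired properties list
# the custom modules to run on the device. Names here are placeholders.
manifest = {
    "modulesContent": {
        "$edgeAgent": {
            "properties.desired": {
                "modules": {
                    "classifier": {
                        "type": "docker",
                        "status": "running",
                        "restartPolicy": "always",
                        "settings": {"image": "<registry>/classifier:0.0.1-amd64"},
                    }
                }
            }
        }
    }
}
print(json.dumps(manifest, indent=2))
```

Each module entry tells the IoT Edge runtime which container image to pull and how to keep it running.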
7. Save the **requirements.txt** file.

### Add a test image to the container

Instead of using a real camera to provide an image feed for this scenario, we're going to use a single test image. A test image is included in the GitHub repo that you downloaded for the training images earlier in this tutorial.

1. Navigate to the test image, located at **Cognitive-CustomVision-Windows** / **Samples** / **Images** / **Test**.
2. Copy **test_image.jpg**.
3. Browse to your IoT Edge solution directory and paste the test image in the **modules** / **cameracapture** folder. The image should be in the same folder as the main.py file that you edited in the previous section.
4. In Visual Studio Code, open the **Dockerfile.amd64** file for the cameracapture module.
5. After the line that establishes the working directory, `WORKDIR /app`, add the following line of code: