Purge Old Azure Log Analytics Ingested Data

This article is based on the Microsoft documentation.

Assuming you have a Log Analytics workspace in a resource group, you can call the purge API endpoint:

POST https://management.azure.com/subscriptions/your-subscription-id/resourceGroups/your-resource-group-rg/providers/Microsoft.OperationalInsights/workspaces/your-log-analytics-name/purge?api-version=2020-08-01

You need to pass an Authorization header containing a Bearer token.

The body of the POST request contains the table name and one or more filters, like this:

{
  "table": "Heartbeat",
  "filters": [
    {
      "column": "TimeGenerated",
      "operator": "<",
      "value": "2021-10-09T00:00:00"
    }
  ]
}

The response will have a header like

x-ms-status-location: https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/operations/purge-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx?api-version=2017-01-01-preview

You can send a GET request to this URL to see the status of the purge operation. The same URL is also returned in the body of the initial POST response.

The status will be something like:

{
    "status": "pending"
}
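The whole flow above can be sketched in a small script. This is a minimal sketch using only Python's standard library; the subscription, resource group, workspace name and bearer token are placeholders you would supply yourself:

```python
import json
import urllib.request

API_VERSION = "2020-08-01"

def build_purge_url(subscription_id, resource_group, workspace):
    # POST endpoint that schedules the purge operation
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.OperationalInsights"
        f"/workspaces/{workspace}/purge?api-version={API_VERSION}"
    )

def build_purge_body(table, cutoff):
    # Purge every record in `table` with TimeGenerated older than `cutoff`
    return {
        "table": table,
        "filters": [
            {"column": "TimeGenerated", "operator": "<", "value": cutoff}
        ],
    }

def purge(subscription_id, resource_group, workspace, table, cutoff, bearer_token):
    # Sends the POST request and returns the status URL from the
    # x-ms-status-location header, which you can then poll with GET.
    req = urllib.request.Request(
        build_purge_url(subscription_id, resource_group, workspace),
        data=json.dumps(build_purge_body(table, cutoff)).encode(),
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["x-ms-status-location"]
```

Poll the returned status URL (with the same Bearer token) until the status changes from pending to completed.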

Tip: You can find the records to be purged using simple queries like these:

W3CIISLog 
| where TimeGenerated > ago(32d)
| summarize count() by bin(TimeGenerated, 1d)


Heartbeat 
| where TimeGenerated > ago(32d)
| summarize count() by bin(TimeGenerated, 1d)

Get Function Token CLI

Note that the key name used in the query must match the name of a key created under App keys in the Function App.

function GetFunctionToken {
    param (
        [string]$subscriptionId,
        [string]$funcResGroupName,
        [string]$funcName
    )

    # List the host keys of the Function App
    $url = "/subscriptions/$subscriptionId/resourceGroups/$funcResGroupName/providers/Microsoft.Web/sites/$funcName/host/default/listKeys?api-version=2018-11-01"

    # appkeyName is the name of the key created under App keys
    $functionCode = az rest --method post --uri $url --query functionKeys.appkeyName --output tsv
    return $functionCode
}
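A rough Python equivalent of the same call, as a sketch: it builds the listKeys URI, shells out to the az CLI (assumed installed and logged in), and picks the named app key out of the response. The response shape, with app keys under functionKeys, matches what az rest returns for this endpoint:

```python
import json
import subprocess

API_VERSION = "2018-11-01"

def build_listkeys_uri(subscription_id, resource_group, func_name):
    # ARM resource path used by `az rest --method post`
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Web/sites/{func_name}"
        f"/host/default/listKeys?api-version={API_VERSION}"
    )

def pick_app_key(listkeys_response, key_name):
    # App keys live under "functionKeys", indexed by the name
    # you gave the key under App keys in the Function App.
    return listkeys_response["functionKeys"][key_name]

def get_function_token(subscription_id, resource_group, func_name, key_name):
    # Wrapper around the az CLI; requires a prior `az login`.
    out = subprocess.run(
        ["az", "rest", "--method", "post",
         "--uri", build_listkeys_uri(subscription_id, resource_group, func_name)],
        capture_output=True, check=True, text=True,
    )
    return pick_app_key(json.loads(out.stdout), key_name)
```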

Publish packages to nuget.org

You can follow different paths to publish your package to nuget.org. The prerequisites and the steps are described in the Microsoft documentation.

If you get Error 403 (The specified API Key is invalid, has expired, or does not have permission to access the specified package.), it simply indicates that the package name you chose has already been published and you are not allowed to overwrite it. This makes sense: once you have uploaded your package, other people should not be able to change it unless you provide them with an API key.

One good practice is to use a different API key for creating new packages than for pushing new versions of existing ones. This way your deployment pipeline will fail if the package name does not exist, instead of silently creating a new package every time you deploy.

When publishing a NuGet package to Azure DevOps Artifacts you do something similar using the push command:

    - task: DotNetCoreCLI@2
      displayName: 'dotnet push'
      inputs:
        command: push
        packagesToPush: '$(Build.ArtifactStagingDirectory)/*.nupkg'
        nuGetFeedType: 'internal'
        publishVstsFeed: 'feed-name-or-id'
      condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))

 

Test & Feedback extension

This extension can be installed on Chrome, Edge and Firefox on any operating system. You simply connect the extension to your organization's Azure DevOps. It helps you write notes, take screenshots and file a bug while you are exploring the website your team is responsible for.

See more:   https://docs.microsoft.com/en-us/azure/devops/test/perform-exploratory-tests?view=azure-devops 

You can even work in Standalone mode, which saves the report to your browser's Downloads folder.

Azure Synapse Analytics Day

This page summarizes some highlights from the Azure Immersion Workshop on Analytics.

On GitHub:

What is Delta Lake: from docs.microsoft.com 

Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.

It supports many features like ACID transactions, Time Travel, and an open format that uses Apache Parquet as its baseline.

Comparing Synapse Analytics versus Azure Data Factory
https://docs.microsoft.com/en-us/azure/synapse-analytics/data-integration/concepts-data-factory-differences 

Tutorial: Get Started 
https://docs.microsoft.com/en-us/azure/synapse-analytics/get-started

White Paper: POLARIS – The Distributed SQL Engine in Azure Synapse
https://www.vldb.org/pvldb/vol13/p3204-saborit.pdf 

Unleash Your SAP Data
https://go.qlik.com/Qlik-Unleash-Your-SAP-Data.html

Experience with free account
https://azure.microsoft.com/en-us/free/synapse-analytics/

Application Insights – Sampling

 

In Application Insights, sampling is an effective way to reduce the amount of telemetry data that is sent to the service. If you are worried about high storage costs when all telemetry data gets sent to Application Insights, sampling is the way to control that.

By default, Application Insights sampling is already enabled when you use the ASP.NET or ASP.NET Core software development kits.

For ASP.NET applications you can configure adaptive sampling by tuning parameters in the ApplicationInsights.config file. Some of the settings are:

<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>

This limits the number of telemetry items that are sent to Application Insights per second.

<EvaluationInterval>00:00:15</EvaluationInterval>

This is the interval at which the current rate of telemetry is reevaluated

<InitialSamplingPercentage>100</InitialSamplingPercentage>

This is the percentage of telemetry that is sent when the application has just started.
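Taken together, these settings live on the AdaptiveSamplingTelemetryProcessor element in ApplicationInsights.config. A sketch of the relevant fragment, using the values from above:

```xml
<TelemetryProcessors>
  <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel">
    <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
    <EvaluationInterval>00:00:15</EvaluationInterval>
    <InitialSamplingPercentage>100</InitialSamplingPercentage>
  </Add>
</TelemetryProcessors>
```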

In ASP.NET Core applications there is no ApplicationInsights.config file, so the configuration is done in code.

Adaptive sampling is enabled by default for all ASP.NET Core applications

For more information on adaptive sampling, see the link below:

https://docs.microsoft.com/en-us/azure/azure-monitor/app/sampling

Azure Active Directory Commands

You may be able to read registrations in AAD from PowerShell even when you do not have enough permissions to do so from the Azure Portal. You can, for example, request the names of the groups you are in, or even the members of a group or an application.

The first thing you need to do in PowerShell is to make sure you have the Azure AD module installed. Then you need to log into your Azure AD:

# Connect to Azure AD
Connect-AzureAD
# In case Connect-AzureAD is not recognized as a cmdlet, install it:
# Install-Module AzureAD -Force

Now you can query the AAD. The following are some samples:

# Get the names of the applications that I have been assigned to
Get-AzureADUser -SearchString "Pouya Panahy" | Get-AzureADUserAppRoleAssignment -All $true

# Get the list of groups that I am part of
Get-AzureADUser -SearchString "Pouya Panahy" `
  | Get-AzureADUserMembership -All $true `
  | Sort-Object -Property DisplayName 

# Which roles am I directly assigned to?
Get-AzRoleAssignment -SignInName 'p.panahy@company.nl'

# Show all rights I've got
Get-AzRoleAssignment -SignInName 'p.panahy@company.nl' -ExpandPrincipalGroups  `
 | Sort-Object -Property DisplayName `
 | Select-Object ObjectType, RoleDefinitionName, DisplayName, Scope `
 | Format-Table

# Is my application registered?
Get-AzureADUser -SearchString "Pouya Panahy" `
 | Get-AzureADUserCreatedObject -All $true `
 | Sort-Object -Property ObjectType `
 | Select-Object ObjectType, AppId, DisplayName, HomePage, IdentifierUris `
 | Format-Table

# Looking for an application that someone else has registered
Get-AzureADServicePrincipal -All $true -Filter "startswith(DisplayName, 'AppName')"

# Who has access to my resources in a given resource group?
Get-AzRoleAssignment -Scope "/subscriptions/xxxxxxxx-xxxx-xxxx-dxxx-xxxxxxxxxxxx/resourceGroups/res-grp-name"  `
 | Sort-Object -Property RoleDefinitionName, DisplayName `
 | Select-Object ObjectType, RoleDefinitionName, DisplayName, Scope `
 | Format-Table

# List the members of a group
Get-AzureAdGroup -All $true -SearchString 'Group Name' | Get-AzureADGroupMember

 

OWASP test in Release Pipeline

On this page we are going to add some tasks to an Azure Release pipeline to run the tests.

Prerequisites

There is already a Docker image containing the ZAP files and a Python script called zap-baseline.py to run the process. The image, called owasp/zap2docker-stable, requires a shared folder to put the report in. To mount a file share I use an Azure storage account containing a file share called security. I generate the key to access the share and start the process.

When the process has completed, you need a file called OWASPToNUnit3.xslt to convert the report into an NUnit file that can be published as a test result.

OWASP Stage Tasks

There are 3 tasks in this stage:

  1. OWASP in Azure CLI
    which sets up a Container Instance that runs the tests
  2. Transforming PowerShell Script
    which uses a PowerShell script to transform the result into NUnit
  3. Publish Test Results
    which makes the result visible in the pipeline as Test Results

Stage Tasks Yaml

steps:
  - task: AzureCLI@2
    displayName: 'OWASP in Azure CLI'
    inputs:
      azureSubscription: 'Owasp_grp_sp'
      scriptType: ps
      scriptLocation: inlineScript
      inlineScript: |
        $key='the-Key-to-Storage-Account-shared-location=='
        $ZAP_COMMAND='/zap/zap-baseline.py -t "https://the-url-to-test.something" -x OWASP-ZAP-Report.xml'

        az container create `
          --resource-group owasp_grp `
          --name owasp `
          --image owasp/zap2docker-stable `
          --ip-address public `
          --ports 8080 `
          --azure-file-volume-account-name owaspstore1000 `
          --azure-file-volume-account-key $key `
          --azure-file-volume-share-name security `
          --azure-file-volume-mount-path /zap/wrk/ `
          --command-line $ZAP_COMMAND
        az storage file download `
          --account-name owaspstore1000 `
          --account-key $key `
          -s security `
          -p OWASP-ZAP-Report.xml `
          --dest "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\OWASP-ZAP-Report.xml"
       
  - powershell: | 
     ## The powershell task for converting the test report 
     $XslPath = "$($Env:SYSTEM_DEFAULTWORKINGDIRECTORY)\_Managed-Security\OWASPToNUnit3.xslt"
     $XmlInputPath = "$($Env:SYSTEM_DEFAULTWORKINGDIRECTORY)\OWASP-ZAP-Report.xml"
     $XmlOutputPath = "$($Env:SYSTEM_DEFAULTWORKINGDIRECTORY)\Converted-OWASP-ZAP-Report.xml"
     $XslTransform = New-Object System.Xml.Xsl.XslCompiledTransform
     $XslTransform.Load($XslPath)
     $XslTransform.Transform($XmlInputPath, $XmlOutputPath)
    displayName: 'Transforming PowerShell Script'


  - task: PublishTestResults@2
    displayName: 'Publish Test Results Converted-OWASP-ZAP-Report.xml'
    inputs:
      testResultsFormat: NUnit
      testResultsFiles: 'Converted-OWASP-ZAP-Report.xml'

 

 

Docker and Webapp

On this page we will create a Web App that serves a Docker image and see how to put it in CI/CD.

Create a Web App

If you are creating the web app in the Azure portal, start by selecting Docker Container as the Publish property in the Basics tab, and choose Linux as the Operating System. For this example a Standard SKU will do just fine.
Next, on the Docker tab select your existing azurecr.io container registry and specify the Image:Tag you want to serve.

My template looks something like the following:

"resources": [{
        "apiVersion": "2018-11-01",
        "name": "[parameters('name')]",
        "type": "Microsoft.Web/sites",
        "location": "[parameters('location')]",
        "tags": {},
        "dependsOn": ["[concat('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"],
        "properties": {
            "name": "[parameters('name')]",
            "siteConfig": {
                "appSettings": [{
                        "name": "DOCKER_REGISTRY_SERVER_URL",
                        "value": "[parameters('dockerRegistryUrl')]"
                    }, {
                        "name": "DOCKER_REGISTRY_SERVER_USERNAME",
                        "value": "[parameters('dockerRegistryUsername')]"
                    }, {
                        "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
                        "value": "[parameters('dockerRegistryPassword')]"
                    }, {
                        "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
                        "value": "false"
                    }
                ],
                "linuxFxVersion": "[parameters('linuxFxVersion')]",
                "appCommandLine": "",
                "alwaysOn": "[parameters('alwaysOn')]"
            },
            "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
            "clientAffinityEnabled": false
        }
    }, {
        "apiVersion": "2018-11-01",
        "name": "[parameters('hostingPlanName')]",
        "type": "Microsoft.Web/serverfarms",
        "location": "West Europe",
        "kind": "linux",
        "tags": {},
        "dependsOn": [],
        "properties": {
            "perSiteScaling": false,
            "maximumElasticWorkerCount": 1,
            "isSpot": false,
            "reserved": true,
            "isXenon": false,
            "hyperV": false,
            "targetWorkerCount": 0,
            "targetWorkerSizeId": 0
        },
        "sku": {
            "Tier": "Standard",
            "Name": "S1",
            "size": "S1",
            "family": "S",
            "capacity": 1
        }
    }]

 

As a security matter I have to point out that the Web App connects to Azure Container Registry using 3 configuration items, i.e. Server URL, Server Username and Server Password. These items are visible in the Azure Portal Configuration:
DOCKER_REGISTRY_SERVER_URL, DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD

Deploy Docker Image

In both cases where the Docker image gets pulled from the Container Registry, you need to restart the instance: both for a Container Instance and for a Web App serving a Docker image.

Another option would be to move the pull task into an Azure Pipeline. My example is defined as follows:

steps:
- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy Azure App Service'
  inputs:
    azureSubscription: '$(Parameters.ConnectedServiceName)'
    appType: 'Web App for Containers (Linux)'
    WebAppName: '$(Parameters.WebAppName)'
    DockerNamespace: 'https://securedcontainerregistry.azurecr.io'
    DockerRepository: 'https://securedcontainerregistry.azurecr.io/securedazurelib'
    DockerImageTag: 'latest'
    StartupCommand: ''

Docker and Container Registry in Azure

The purpose of this page is to show the steps to create a simple web app running as a Docker container in Azure.

Create Container Registry

I am using a new Azure Container Registry in my resource group called SecuredContainerRegistry, which I will refer to throughout this page. I have created this using a Basic SKU, which is sufficient for this purpose. If you need a private endpoint you need to change the SKU to Premium.

"resources": [{
        "type": "Microsoft.ContainerRegistry/registries",
        "apiVersion": "2020-11-01-preview",
        "name": "SecuredContainerRegistry",
        "location": "[resourceGroup().location]",
        "dependsOn": [],
        "tags": "[variables('tagsArray')]",
        "sku": {
            "name": "Basic",
            "tier": "Basic"
        },
        "properties": {
            "adminUserEnabled": true,
            "publicNetworkAccess": "Enabled",
            "zoneRedundancy": "Disabled"
        }
    }]

The main change after creating the Container Registry with the default options is to enable the Admin user, which allows docker to log in to the registry.

The next important change we make on this resource is to register it in AAD by giving it a System-assigned Identity using the portal.

Add Service connection

Next you need to add a service connection in your Azure DevOps project, using service principal authentication, to get access to the Azure Container Registry. In the popup select Azure Container Registry as the registry type, then select your container registry and give the service connection a name.

Build and Deploy Docker project

Create a .NET Core application including a Dockerfile. When you add Docker support in Visual Studio it generates a Dockerfile which is not completely working. The following example is a changed version of that file which works fine:

#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
#Depending on the operating system of the host machines(s) that will build or run the containers, the image specified in the FROM statement may need to be changed.
#For more information, please see https://aka.ms/containercompat


FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443


FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["*.csproj", "SecuredWebApi/"]
RUN dotnet restore "SecuredWebApi/SecuredWebApi.csproj"
WORKDIR "/src/SecuredWebApi"
COPY . .
RUN dotnet build "SecuredWebApi.csproj" -c Release -o /app/build


FROM build AS publish
RUN dotnet publish "SecuredWebApi.csproj" -c Release -o /app/publish


FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SecuredWebApi.dll"]

Create Container Instance

The next step is to create an Azure Container Instance. During creation you need to connect it to a container registry: choose the Azure Container Registry created in the first step above. Once you have a successful build you will have an image available to associate with it. Based on my sample project I named this instance secured-container-instance, and the Azure portal automatically recognizes the container registry when you select Azure Container Registry as the Image Source. For this project I exposed ports 80 and 443.
"resources": [{
        "location": "westeurope",
        "name": "secured-container-instance",
        "type": "Microsoft.ContainerInstance/containerGroups",
        "apiVersion": "2021-03-01",
        "properties": {
            "containers": [{
                "name": "secured-container-instance",
                "properties": {
                    "image": "securedcontainerregistry.azurecr.io/securedazurelib:latest",
                    "resources": {
                        "requests": { "cpu": "1", "memoryInGB": "1.5" }
                    },
                    "ports": [
                        { "protocol": "TCP", "port": 80 },
                        { "protocol": "TCP", "port": 443 }
                    ]
                }
            }],
            "restartPolicy": "[parameters('restartPolicy')]",
            "osType": "Linux",
            "imageRegistryCredentials": [{
                "server": "securedcontainerregistry.azurecr.io",
                "username": "[parameters('imageUsername')]",
                "password": "[parameters('imagePassword')]"
            }],
            "ipAddress": {
                "type": "Public",
                "ports": [
                    { "protocol": "TCP", "port": 80 },
                    { "protocol": "TCP", "port": 443 }
                ]
            }
        },
        "tags": {}
    }]

Start the instance

By starting the instance, the image gets pulled and deployed in the container instance. In the Azure Portal you can look up the public IP address and check that the website is running and accessible.

The container can also be started using the docker command: docker run securedcontainerregistry.azurecr.io/securedazurelib:latest. The following job tasks will start an instance in an Azure pipeline:

jobs:
  - job: RunTest
    workspace:
      clean: all
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Login to ACR
      inputs:
        command: login
        containerRegistry: securedcontainerregistry
    - script: |
        docker run securedcontainerregistry.azurecr.io/somerepo/securedazurelib:latest

Security

You can register the container instance in AAD using a Managed Identity and then assign a role in Key Vault to that identity to allow access to secrets.
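This can also be scripted in the template. A sketch, assuming a vault called my-key-vault in the same resource group: first give the container group from above a system-assigned identity by adding "identity": { "type": "SystemAssigned" } next to its properties, then grant that identity access to secrets with an access-policy resource:

```json
{
    "type": "Microsoft.KeyVault/vaults/accessPolicies",
    "apiVersion": "2021-06-01-preview",
    "name": "my-key-vault/add",
    "dependsOn": [
        "[resourceId('Microsoft.ContainerInstance/containerGroups', 'secured-container-instance')]"
    ],
    "properties": {
        "accessPolicies": [{
            "tenantId": "[subscription().tenantId]",
            "objectId": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups', 'secured-container-instance'), '2021-03-01', 'Full').identity.principalId]",
            "permissions": {
                "secrets": ["get", "list"]
            }
        }]
    }
}
```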