Channel: ASP.NET Core – Software Engineering

Angular SPA with an ASP.NET Core API using Azure AD Auth and user access tokens

This post shows how to authenticate an Angular SPA application using Azure AD and consume secure data from an ASP.NET Core API which is protected by Azure AD. Azure AD App registrations are used to configure and setup the authentication and authorization. The Angular application uses the OpenID Connect Code flow with PKCE and the silent renew is implemented using iframes.

Code: https://github.com/damienbod/AzureAD-Auth-MyUI-with-MyAPI

Posts in this Series

Setup the SPA App registration

In this demo, we will create an App registration for the Angular application, which will use the API from the first blog in this series. The SPA is a public client, so user access tokens are used: an application running in the browser cannot keep a secret and therefore cannot use client credentials or service-to-service flows. In your Azure AD tenant, add a new App registration and select Single-page application as the platform.

The redirect URLs need to match your Angular application. For development, we use localhost. The silent renew URL is also required.

Add the API permissions which are required for the UI and the API requests. The Web API which was created in the previous blog needs to be added here, so that the SPA application can access the API which is protected by Azure AD.

The email claim is added to the access token and the id token as an optional claim. This is used in the API and the UI.

Angular application

The Angular single page application is implemented using the angular-auth-oidc-client npm package. The OpenID Connect code flow with PKCE is used to authenticate. The Angular application was created using the Angular CLI.

{
  "name": "angular-oidc-oauth2",
  "version": "0.0.0",
  "scripts": {
    "ng": "ng",
    "start": "ng serve --ssl true -o",
    "build": "ng build",
    "test": "ng test",
    "lint": "ng lint",
    "e2e": "ng e2e"
  },
  "private": true,
  "dependencies": {
    "@angular/animations": "~9.1.3",
    "@angular/common": "~9.1.3",
    "@angular/compiler": "~9.1.3",
    "@angular/core": "~9.1.3",
    "@angular/forms": "~9.1.3",
    "@angular/platform-browser": "~9.1.3",
    "@angular/platform-browser-dynamic": "~9.1.3",
    "@angular/router": "~9.1.3",
    "angular-auth-oidc-client": "^11.1.3",
    "rxjs": "~6.5.4",
    "tslib": "^1.10.0",
    "zone.js": "~0.10.2"
  },

In the angular.json file, the silent renew HTML file needs to be added to the assets, and the HTTPS configuration which uses your developer certificates is added to the serve JSON object. Here’s a link to an example.

angular.json silent-renew.html

angular.json certificates

The silent renew for code flow in an iframe can be created as follows:

<!doctype html>
<html>
<head>
    <base href="./">
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>silent-renew</title>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
</head>
<body>

  <script>
    window.onload = function () {
      /* The parent window hosts the Angular application */
      var parent = window.parent;
      /* Send the id_token information to the oidc message handler */
      var event = new CustomEvent('oidc-silent-renew-message', { detail: window.location });
      parent.dispatchEvent(event);
    };
  </script>
</body>
</html>

In the app.module, the OIDC Azure configuration is added. This example is for a user of a specific tenant. The tenant ‘7ff95b15-dc21-4ba6-bc92-824856578fc1’ is used for the token server and the authWellknownEndpoint. Code flow is configured, silent renew is activated, and the redirect is set up as configured in the App registration. The ID token is used for the user data and the user data request is not activated. The scope api://98328d53-55ec-4f14-8407-0ca5ff2f2d20/access_as_user needs to be requested to access the API.

export function configureAuth(oidcConfigService: OidcConfigService) {
  return () =>
    oidcConfigService.withConfig({
            stsServer: 'https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/v2.0',
            authWellknownEndpoint: 'https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/v2.0',
            redirectUrl: window.location.origin,
            clientId: 'ad6b0351-92b4-4ee9-ac8d-3e76e5fd1c67',
            scope: 'openid profile email api://98328d53-55ec-4f14-8407-0ca5ff2f2d20/access_as_user',
            responseType: 'code',
            silentRenew: true,
            maxIdTokenIatOffsetAllowedInSeconds: 600,
            issValidationOff: false, // this needs to be true if using a common endpoint in Azure
            autoUserinfo: false,
            silentRenewUrl: window.location.origin + '/silent-renew.html',
            logLevel: LogLevel.Debug
    });
}

An HttpInterceptor is used to add the access token to all requests which match a base address of the API. It is really important that the access token is only sent to the APIs it was intended for. Do not send the access token with every HTTP request. For development, the API secured with Azure AD is hosted on localhost with port 44390.

import { HttpInterceptor, HttpRequest, HttpHandler } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { AuthService } from './auth.service';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  private secureRoutes = ['https://localhost:44390'];

  constructor(private authService: AuthService) {}

  intercept(
    request: HttpRequest<any>,
    next: HttpHandler
  ) {
    if (!this.secureRoutes.find((x) => request.url.startsWith(x))) {
      return next.handle(request);
    }

    const token = this.authService.token;

    if (!token) {
      return next.handle(request);
    }

    request = request.clone({
      headers: request.headers.set('Authorization', 'Bearer ' + token),
    });

    return next.handle(request);
  }
}

The home component starts the sign-in for the app and the user. If successful, the API can be called and the data is returned.

import { Component, OnInit } from '@angular/core';
import { Observable, of } from 'rxjs';
import { catchError } from 'rxjs/operators';
import { AuthService } from '../auth.service';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-home',
  templateUrl: 'home.component.html',
})
export class HomeComponent implements OnInit {
  userData$: Observable<any>;
  dataFromAzureProtectedApi$: Observable<any>;
  isAuthenticated$: Observable<boolean>;
  constructor(
    private authservice: AuthService,
    private httpClient: HttpClient
  ) {}

  ngOnInit() {
    this.userData$ = this.authservice.userData;
    this.isAuthenticated$ = this.authservice.signedIn;
  }

  callApi() {
    this.dataFromAzureProtectedApi$ = this.httpClient
      .get('https://localhost:44390/weatherforecast')
      .pipe(catchError((error) => of(error)));
  }
  login() {
    this.authservice.signIn();
  }

  forceRefreshSession() {
    this.authservice.forceRefreshSession().subscribe((data) => {
      console.log('Refresh completed');
    });
  }

  logout() {
    this.authservice.signOut();
  }
}

Start the API and then the Angular application. After you log in, the SPA can call the API and access the API data.
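
On the API side (covered in the first post of this series), the access tokens from the SPA are validated using Microsoft.Identity.Web. The following is only a minimal sketch of how this could look, assuming an ‘AzureAd’ configuration section and the access_as_user scope from the App registration above; the actual implementation is in the linked repository. The app.UseAuthentication() and app.UseAuthorization() middleware would also need to be added in Configure.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.Resource;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Validates the Azure AD access tokens sent by the Angular SPA
        services.AddMicrosoftIdentityWebApiAuthentication(Configuration, "AzureAd");
        services.AddControllers();
    }
}

[Authorize]
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // Requires the access_as_user scope requested by the SPA
        HttpContext.VerifyUserHasAnyAcceptedScope("access_as_user");
        return Ok(new[] { "data from the protected API" });
    }
}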

Links:

https://github.com/AzureAD/microsoft-identity-web

https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2

https://jwt.io/

https://www.npmjs.com/package/angular-auth-oidc-client


Restricting access to an Azure AD protected API using Azure AD Groups

This post shows how to restrict access to an ASP.NET Core API so that only users from a defined Azure AD group are allowed to use the protected API. The API uses an Azure App registration for authorization. The user signs in with an ASP.NET Core Razor Pages application or an Angular app and can access the API if the user is authorized, i.e. is a member of the group required for access.

Code: https://github.com/damienbod/AzureAD-Auth-MyUI-with-MyAPI

Posts in this Series

In the Azure Active Directory tenant, create new users or add existing users to the tenant. In this demo, an admin user was added, which will be added to the Azure AD group for which access will be enabled. A second user, which will not be a member of the group, is also added for testing.

Create a new group in the Azure Active Directory. Click the All groups button, and then New group.

Add the group name and fill out the fields. We use the group type Security.

Add the members to the group as required.

Select Enterprise applications in Azure AD and select the API for which access should be restricted, i.e. the API which was created in the first blog post in this series.

In the Properties, set User assignment required? to Yes. Now only users which are assigned to the application can access the API.

Now click Users and groups, and then add a new assignment. This is used to assign a group as well as individual users.

You should be able to add a new group now. If you cannot add a new group, it is because you don’t have the correct Azure AD license.

Select the group you created above.

Now the applications can be used. Only users which are members of the admin group can request the scope to access the API.

Using the admin user, everything works as expected.

Using a user which is not in this group, the sign-in fails and the identity cannot get access to the API.

The next step would be to script this using the Azure CLI or Azure ARM templates.

Links:

https://github.com/AzureAD/microsoft-identity-web

https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2

https://jwt.io/

https://www.npmjs.com/package/angular-auth-oidc-client

Using External Inputs in Azure Durable functions

This post shows how to implement an Azure Durable Functions flow with an external HTTP API input. The flow is started using an HTTP request, runs an activity, waits for the external input from an HTTP API (this could be any Azure Functions input) and then runs a second activity. The application is implemented in C# and uses version 3 Azure Functions.

Code: https://github.com/damienbod/AzureDurableFunctions

Azure Durable Functions

Azure Durable Functions provide a simple way of implementing workflows in a serverless architecture. Durable Functions are built on top of Azure Functions and support function chaining, fan-out/fan-in, external inputs, stateful flows and eternal flows, with excellent diagnostic and monitoring APIs.

Durable function Orchestration

Orchestrations connect the activities of workflows or sub-orchestrations together. The flows can be stateful, and the execution model is slightly different from normal application code. The code in an orchestration can be re-run many times, but the activities are only run once; the results of completed activities are replayed from the history. The orchestration function code must be deterministic. The orchestrations are durable and reliable. The first time a piece of code runs it is not replaying; on later runs it is. This can be checked with the IsReplaying property of the IDurableOrchestrationContext context. Every time an orchestration is started, an instance ID is used to connect the past and future steps. You can provide your own instance ID or have one auto-generated.

Further docs can be found here.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;
using MyAzureFunctions.Model;

namespace MyAzureFunctions.Orchestrations
{
    public class MyOrchestration
    {
        [FunctionName(Constants.MyOrchestration)]
        public async Task<MyOrchestrationDto> RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context,
            ILogger log)
        {
            var myOrchestrationDto = new MyOrchestrationDto
            {
                InputStartData = context.GetInput<string>()
            };

            if (!context.IsReplaying)
            {
                log.LogWarning($"begin MyOrchestration with input {context.GetInput<string>()}");
            }

            var myActivityOne = await context.CallActivityAsync<string>(
                Constants.MyActivityOne, context.GetInput<string>());

            myOrchestrationDto.MyActivityOneResult = myActivityOne;

            if(!context.IsReplaying)
            {
                log.LogWarning($"myActivityOne completed {myActivityOne}");
            }

            var myActivityTwoInputEvent = await context.WaitForExternalEvent<string>(
                Constants.MyExternalInputEvent);
            myOrchestrationDto.ExternalInputData = myActivityTwoInputEvent;

            var myActivityTwo = await context.CallActivityAsync<string>(
                Constants.MyActivityTwo, myActivityTwoInputEvent);

            myOrchestrationDto.MyActivityTwoResult = myActivityTwo;

            if (!context.IsReplaying)
            {
                log.LogWarning($"myActivityTwo completed {myActivityTwo}");
            }

            return myOrchestrationDto;
        }
    }
}

Durable function Activities

Activities are normally only executed once for each run of a flow. An activity can use the IDurableActivityContext as an input parameter, or a typed parameter, and only a single parameter can be passed to an activity. This is where the business logic of the flow is implemented; the orchestration glues it together. The result of an activity can be replayed many times, but the activity itself is only run once per orchestration instance (unless the orchestration is restarted, for example with the ContinueAsNew method).

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

namespace MyAzureFunctions.Activities
{
    public class MyActivities
    {
        [FunctionName(Constants.MyActivityOne)]
        public string MyActivityOne([ActivityTrigger] IDurableActivityContext context, ILogger log)
        {
            string name = context.GetInput<string>();
            log.LogInformation($"Activity {Constants.MyActivityOne} {name}.");
            return $"{Constants.MyActivityOne} {name}!";
        }

        [FunctionName(Constants.MyActivityTwo)]
        public string MyActivityTwo([ActivityTrigger] IDurableActivityContext context, ILogger log)
        {
            string name = context.GetInput<string>();
            log.LogInformation($"Activity {Constants.MyActivityTwo} {name}.");
            return $"{Constants.MyActivityTwo} {name}!";
        }

    }
}

External Input

External inputs in a workflow are extremely useful when waiting for an HTTP call, a UI user event, or any other external event. A durable timer can be combined with the wait so that a separate path runs if the event is not raised within a certain time limit; a sketch of this timeout pattern is shown after the next code listing. The example below waits for an API call with the instance ID and raises the event with the data intended for the flow. The orchestration then continues with the next part of the flow using the data from this event.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

namespace MyAzureFunctions.Apis
{
    public class ExternalHttpPostInput
    {
        [FunctionName(Constants.ExternalHttpPostInput)]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            [DurableClient] IDurableOrchestrationClient client,

            ILogger log)
        {
            string instanceId = req.Query["instanceId"];
            var status = await client.GetStatusAsync(instanceId);
            await client.RaiseEventAsync(instanceId, Constants.MyExternalInputEvent, "inputDataTwo");
          
            log.LogInformation("C# HTTP trigger function processed a request.");

            string responseMessage = string.IsNullOrEmpty(instanceId)
                ? "This HTTP triggered function executed successfully. Pass an instanceId in the query string"
                : $"Received, processing, {instanceId}";

            return new OkObjectResult(responseMessage);
        }
    }
}
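
As mentioned above, a durable timer can be combined with the external event wait so that the orchestration can take an alternative path if no input arrives in time. The following snippet is only a sketch of that pattern inside the orchestration and is not part of the flow in this post; System.Threading is required for the CancellationTokenSource.

using (var cts = new CancellationTokenSource())
{
    // Durable timer which fires if no external input arrives within 30 minutes
    DateTime deadline = context.CurrentUtcDateTime.AddMinutes(30);
    Task timeoutTask = context.CreateTimer(deadline, cts.Token);
    Task<string> externalEventTask = context.WaitForExternalEvent<string>(Constants.MyExternalInputEvent);

    Task winner = await Task.WhenAny(externalEventTask, timeoutTask);
    if (winner == externalEventTask)
    {
        // Cancel the durable timer so that the orchestration can complete
        cts.Cancel();
        myOrchestrationDto.ExternalInputData = externalEventTask.Result;
    }
    else
    {
        // Timeout path: run an alternative activity or end the flow here
        log.LogWarning("No external input received, taking the timeout path");
    }
}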

Start the flow with an HTTP API call

The orchestration is started using the StartNewAsync method. The example starts this with null for the instance ID so that a new ID is auto-generated.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using System.Net.Http;

namespace MyAzureFunctions.Apis
{
    public class BeginFlowWithHttpPost
    {
        [FunctionName(Constants.BeginFlowWithHttpPost)]
        public async Task<HttpResponseMessage> HttpStart(
          [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
          [DurableClient] IDurableOrchestrationClient starter,
          ILogger log)
        {
            string instanceId = await starter.StartNewAsync(Constants.MyOrchestration, null, "input data to start flow");
            log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

            return starter.CreateCheckStatusResponse(req, instanceId);
        }
    }
}

When the API is requested, the flow starts and the durable function management details are returned. This would need to be changed in a production app: the endpoint would need to be secured, only POST requests should be supported, and proper logging and diagnostics would be required.

The state of the flow can be viewed using the statusQueryGetUri link.

After the external HTTP API is called, the flow continues. The state is then updated, and the end result can be viewed or used in any way.
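
For local testing, the two endpoints can also be driven from a small client. The sketch below uses HttpClient and assumes the default local Functions host address (http://localhost:7071) and that the routes match the function names used above; adjust these to your Constants values.

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class FlowClient
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // 1. Start the orchestration, the check status payload is returned
        var startResponse = await client.GetAsync(
            "http://localhost:7071/api/BeginFlowWithHttpPost");
        var payload = await startResponse.Content.ReadAsStringAsync();

        using var doc = JsonDocument.Parse(payload);
        var instanceId = doc.RootElement.GetProperty("id").GetString();
        var statusQueryGetUri = doc.RootElement.GetProperty("statusQueryGetUri").GetString();
        Console.WriteLine($"Started instance {instanceId}");

        // 2. Raise the external event so that the orchestration can continue
        await client.GetAsync(
            $"http://localhost:7071/api/ExternalHttpPostInput?instanceId={instanceId}");

        // 3. View the state of the flow
        Console.WriteLine(await client.GetStringAsync(statusQueryGetUri));
    }
}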

Links:

https://docs.microsoft.com/en-us/azure/azure-functions/durable/

https://github.com/Azure/azure-functions-durable-extension

https://damienbod.com/2019/03/14/running-local-azure-functions-in-visual-studio-with-https/

Microsoft Azure Storage Explorer

Microsoft Azure Storage Emulator

Install the Azure Functions Core Tools

NodeJS

Azure CLI

Azure SDK

Visual Studio Azure development extensions

Azure Functions Configuration and Secrets Management

This post shows how to configure Azure Functions projects so that no secrets are required in the local.settings.json file or in the code. Secrets for the project are saved in the user secrets of the project for local development, or in the app settings of the deployment. The deployment should, or at least can, use Azure Key Vault for the secrets instead of the app settings of the deployment. The aim is to remove the secrets from the code and from the local.settings.json file; I see this file committed with secrets in many solutions.

Code: https://github.com/damienbod/AzureDurableFunctions

The local.settings.json file can be used to add app settings for local development in your Azure Functions project. This file contains standard configuration values which are not secrets, and it can be committed to the git repository. In this demo, a MyConfiguration section with two values was added. Note that the configuration is not added inside the Values object; I prefer this.

local.settings.json

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsSecretStorageType": "Files",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  },
  "MyConfiguration": {
    "Name": "Lilly",
    "AmountOfRetries": 7
  }
}
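
A matching options class for the MyConfiguration section could look like the following. This is a minimal sketch inferred from the JSON above; the class in the repository may differ slightly.

namespace MyAzureFunctions
{
    // Binds to the "MyConfiguration" section of local.settings.json
    public class MyConfiguration
    {
        public string Name { get; set; }

        public int AmountOfRetries { get; set; }
    }
}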

The Azure Functions project requires the Microsoft.Extensions.Configuration.UserSecrets NuGet package to support user secrets for local development.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <AzureFunctionsVersion>v3</AzureFunctionsVersion>
    <UserSecretsId>222f37ac-e563-4ba8-8e33-ee799c456135</UserSecretsId>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.DurableTask" Version="2.2.2" />
    <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="3.1.0" />
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.7" />
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="3.1.5" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="3.1.5" />
    <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" />
    <PackageReference Include="System.Configuration.ConfigurationManager" Version="4.7.0" />
  </ItemGroup>
  <ItemGroup>
    <None Update="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="local.settings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <CopyToPublishDirectory>Never</CopyToPublishDirectory>
    </None>
  </ItemGroup>
</Project>

In the secrets JSON file, which is stored in your user profile and not in the source code, you can add the secrets.

{
  "MyConfigurationSecrets": {
    "MySecretOne": "secret one",
    "MySecretTwo": "secret two"
  }
}

The Azure Functions project uses DI and so has a Startup class which inherits from the FunctionsStartup class. The configuration is set up using the ConfigurationBuilder and uses the optional JSON files and the optional user secrets. It is important that these are optional. The configuration classes are added using the IOptions interface.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using MyAzureFunctions;
using MyAzureFunctions.Activities;
using System;
using System.Configuration;
using System.Reflection;

[assembly: FunctionsStartup(typeof(Startup))]

namespace MyAzureFunctions
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            var config = new ConfigurationBuilder()
               .SetBasePath(Environment.CurrentDirectory)
               .AddJsonFile("local.settings.json", true)
               .AddUserSecrets(Assembly.GetExecutingAssembly(), true)
               .AddEnvironmentVariables()
               .Build();

            builder.Services.AddSingleton<IConfiguration>(config);

            builder.Services.AddScoped<MyActivities>();

            builder.Services.AddOptions<MyConfiguration>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    configuration.GetSection("MyConfiguration").Bind(settings);
                });

            builder.Services.AddOptions<MyConfigurationSecrets>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    configuration.GetSection("MyConfigurationSecrets").Bind(settings);
                });
        }
    }
}

The configurations can then be used like any ASP.NET Core project.

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace MyAzureFunctions.Activities
{
    public class MyActivities
    {
        private readonly MyConfiguration _myConfiguration;
        private readonly MyConfigurationSecrets _myConfigurationSecrets;

        public MyActivities(IOptions<MyConfiguration> myConfiguration, 
            IOptions<MyConfigurationSecrets> myConfigurationSecrets)
        {
            _myConfiguration = myConfiguration.Value;
            _myConfigurationSecrets = myConfigurationSecrets.Value;
        }

When configuring this in Azure, the app settings need to be added to the Azure App Service hosting the functions.

With this, there is no need to add secrets to the local.settings.json file of the Azure Functions projects anymore.

Links:

https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings

https://docs.microsoft.com/en-us/azure/azure-functions/durable/

https://github.com/Azure/azure-functions-durable-extension

https://damienbod.com/2019/03/14/running-local-azure-functions-in-visual-studio-with-https/

Microsoft Azure Storage Explorer

Microsoft Azure Storage Emulator

Install the Azure Functions Core Tools

NodeJS

Azure CLI

Azure SDK

Visual Studio Azure development extensions

Using Key Vault and Managed Identities with Azure Functions

This article shows how Azure Key Vault can be used together with Azure Functions. The Azure Functions app can use its system-assigned identity to access the Key Vault; this needs to be configured in the Key Vault access policies using the service principal. By using the Microsoft.Azure.KeyVault and the Microsoft.Extensions.Configuration.AzureKeyVault NuGet packages, no direct references to the secrets are required in the Azure Functions configuration; the secrets can be read directly from the Key Vault. This also has the advantage of referencing only the secret and not a specific version of the secret: the latest version is used (depending on the cache).

Code: https://github.com/damienbod/AzureDurableFunctions

Posts in this series

The configuration is set up in the Startup class, which inherits from the FunctionsStartup class. A string property AzureKeyVaultEndpoint is used to decide whether the Key Vault configuration should be used or not. For local development, Key Vault is not used; user secrets are used instead. For the Azure deployment, the AzureKeyVaultEndpoint is set to the value of your Key Vault. The configuration is read into the application and added as options to the DI.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;
using Microsoft.Extensions.DependencyInjection;
using MyAzureFunctions;
using MyAzureFunctions.Activities;
using System;
using System.Reflection;

[assembly: FunctionsStartup(typeof(Startup))]

namespace MyAzureFunctions
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            var keyVaultEndpoint = Environment.GetEnvironmentVariable("AzureKeyVaultEndpoint");

            if (!string.IsNullOrEmpty(keyVaultEndpoint))
            {
                // using Key Vault, either local dev or deployed
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

                var config = new ConfigurationBuilder()
                        .AddAzureKeyVault(keyVaultEndpoint, keyVaultClient,
                            new DefaultKeyVaultSecretManager())
                        .SetBasePath(Environment.CurrentDirectory)
                        .AddJsonFile("local.settings.json", true)
                        .AddEnvironmentVariables()
                    .Build();

                builder.Services.AddSingleton<IConfiguration>(config);
            }
            else
            {
                // local dev no Key Vault
                var config = new ConfigurationBuilder()
               .SetBasePath(Environment.CurrentDirectory)
               .AddJsonFile("local.settings.json", true)
               .AddUserSecrets(Assembly.GetExecutingAssembly(), true)
               .AddEnvironmentVariables()
               .Build();

                builder.Services.AddSingleton<IConfiguration>(config);
            }

            builder.Services.AddScoped<MyActivities>();

            builder.Services.AddOptions<MyConfiguration>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    configuration.GetSection("MyConfiguration").Bind(settings);
                });

            builder.Services.AddOptions<MyConfigurationSecrets>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    configuration.GetSection("MyConfigurationSecrets").Bind(settings);
                });
        }
    }
}

The local.settings.json file contains the configuration for the Azure Functions (no secrets). The AzureKeyVaultEndpoint has no value; if it were set to the URL of a Key Vault, Key Vault would also be used for local development.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsSecretStorageType": "Files",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureKeyVaultEndpoint": ""
  },
  "MyConfiguration": {
    "Name": "Lilly",
    "AmountOfRetries": 7
  }
}

The MyConfigurationSecrets class is used to hold the secret configurations.

namespace MyAzureFunctions
{
    public class MyConfigurationSecrets
    {
        public string MySecretOne { get; set; }
        public string MySecretTwo { get; set; }
    }
}

The configuration can then be used like in any ASP.NET Core application. The options are injected in the constructor and can be used as required.

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace MyAzureFunctions.Activities
{
    public class MyActivities
    {
        private readonly MyConfiguration _myConfiguration;
        private readonly MyConfigurationSecrets _myConfigurationSecrets;

        public MyActivities(IOptions<MyConfiguration> myConfiguration, 
            IOptions<MyConfigurationSecrets> myConfigurationSecrets)
        {
            _myConfiguration = myConfiguration.Value;
            _myConfigurationSecrets = myConfigurationSecrets.Value;
        }

When deploying, the Azure Functions app needs access to the Key Vault and requires a system-assigned identity. You can activate this, or check that it has been created, in the Azure portal.

In the Azure Key Vault add a new Access policy.

Search for the required system identity, i.e. your Azure Functions app, and add the permissions your app needs.

The secret configurations are no longer required in the app settings of the Azure Functions deployment.

When the functions are called, the current version of the secret is used (depending on the cache).

Links:

https://damienbod.com/2018/12/23/using-azure-key-vault-with-asp-net-core-and-azure-app-services/

https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings

https://docs.microsoft.com/en-us/azure/azure-functions/durable/

https://github.com/Azure/azure-functions-durable-extension

https://damienbod.com/2019/03/14/running-local-azure-functions-in-visual-studio-with-https/

Microsoft Azure Storage Explorer

Microsoft Azure Storage Emulator

Install the Azure Functions Core Tools

NodeJS

Azure CLI

Azure SDK

Visual Studio Azure development extensions

Waiting for Azure Durable Functions to complete

This article shows how an Azure Durable Function can be used to process an HTTP API request which waits for the completion result. This can be required when you have no control over the client application calling the API and the process requires asynchronous operations, such as further API calls. The Azure Durable Function could call other APIs or run separate processes, and it is unknown when these finish. If you could control the client starting the process, you would not wait, but use a callback, for example in the last activity.

Code: https://github.com/damienbod/AzureDurableFunctions

Posts in this series

The API call below handles the client request using an HTTP POST. The response can be specific to the client. The Azure Durable Function is implemented and processed in the Processing class, which returns the result directly. The data received in the body of the request is passed as a parameter. The data returned also needs to be in the format required by the client, not the internal format of the flow.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using System.Net.Http;
using Microsoft.AspNetCore.Mvc;
using DurableWait.Model;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

namespace DurableWait.Apis
{
    
    public class BeginFlowWithHttpPost
    {
        private readonly Processing _processing;

        public BeginFlowWithHttpPost(Processing processing)
        {
            _processing = processing;
        }

        [FunctionName(Constants.BeginFlowWithHttpPost)]
        public async Task<IActionResult> HttpStart(
          [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestMessage request,
          [DurableClient] IDurableOrchestrationClient client,
          ILogger log)
        {
            log.LogInformation("Started new flow");

            BeginRequestData beginRequestData = await request.Content.ReadAsAsync<BeginRequestData>();
            log.LogInformation($"Started new flow with ID = '{beginRequestData.Id}'.");

            return await _processing.ProcessFlow(beginRequestData, request, client);
        }
    }
}

The Processing class starts the Azure Durable Function and waits for it to complete. The IDurableOrchestrationClient interface is passed as a parameter from the Azure Function. The MyOrchestration orchestration is started and the method waits for it to complete or time out using the WaitForCompletionOrCreateCheckStatusResponseAsync method. If the process times out, the instance is terminated and an InternalServerError 500 result is returned with an error message instead of a completed status. If the Azure Durable Function completes successfully, the result needs to be mapped to the body format required by the calling client, not the output format of the Azure Durable Function. The CompleteResponseData result is created using the output from the status request of the Azure Durable Function and returned to the client.

using DurableWait.Model;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

namespace DurableWait
{
    public class Processing
    {
        private readonly ILogger<Processing> _log;

        public Processing(ILoggerFactory loggerFactory)
        {
            _log = loggerFactory.CreateLogger<Processing>();
        }

        public async Task<IActionResult> ProcessFlow(
            BeginRequestData beginRequestData, 
            HttpRequestMessage request,
            IDurableOrchestrationClient client)
        {
            await client.StartNewAsync(Constants.MyOrchestration, beginRequestData.Id, beginRequestData);
            _log.LogInformation($"Started orchestration with ID = '{beginRequestData.Id}'.");

            TimeSpan timeout = TimeSpan.FromSeconds(7);
            TimeSpan retryInterval = TimeSpan.FromSeconds(1);

            await client.WaitForCompletionOrCreateCheckStatusResponseAsync(
                request,
                beginRequestData.Id,
                timeout,
                retryInterval);

            var data = await client.GetStatusAsync(beginRequestData.Id);

            // timeout
            if(data.RuntimeStatus != OrchestrationRuntimeStatus.Completed)
            {
                await client.TerminateAsync(beginRequestData.Id, "Timeout something took too long");
                return new ContentResult()
                {
                    Content = "{ error: \"Timeout something took too long\" }",
                    ContentType = "application/json",
                    StatusCode = (int)HttpStatusCode.InternalServerError
                };
            }
            var output = data.Output.ToObject<MyOrchestrationDto>();

            var completeResponseData = new CompleteResponseData
            {
                BeginRequestData = output.BeginRequest,
                Id2 = output.BeginRequest.Id + ".v2",
                MyActivityTwoResult = output.MyActivityTwoResult
            };

            return new OkObjectResult(completeResponseData);
        }
    }
}

The MyOrchestration class implements the Azure Durable Function orchestration. This has two activities and uses the body from the client API call as the input data. The result of each activity is added to the orchestration data.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;
using DurableWait.Model;
using DurableWait;

namespace DurableWait.Orchestrations
{
    public class MyOrchestration
    {
        [FunctionName(Constants.MyOrchestration)]
        public async Task<MyOrchestrationDto> RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context,
            ILogger log)
        {
            var myOrchestrationDto = new MyOrchestrationDto
            {
                BeginRequest = context.GetInput<BeginRequestData>()
            };

            if (!context.IsReplaying)
            {
                log.LogWarning($"begin MyOrchestration with input id {myOrchestrationDto.BeginRequest.Id}");
            }

            var myActivityOne = await context.CallActivityAsync<string>(
                Constants.MyActivityOne, context.GetInput<BeginRequestData>());

            myOrchestrationDto.MyActivityOneResult = myActivityOne;

            if(!context.IsReplaying)
            {
                log.LogWarning($"myActivityOne completed {myActivityOne}");
            }

            var myActivityTwo = await context.CallActivityAsync<string>(
                Constants.MyActivityTwo, myOrchestrationDto);

            myOrchestrationDto.MyActivityTwoResult = myActivityTwo;

            if (!context.IsReplaying)
            {
                log.LogWarning($"myActivityTwo completed {myActivityTwo}");
            }

            return myOrchestrationDto;
        }
    }
}
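
The model classes BeginRequestData, MyOrchestrationDto and CompleteResponseData are not listed in this post. Inferred from how they are used above, they could look roughly like this; the actual classes are in the repository.

namespace DurableWait.Model
{
    public class BeginRequestData
    {
        public string Id { get; set; }
    }

    public class MyOrchestrationDto
    {
        public BeginRequestData BeginRequest { get; set; }
        public string MyActivityOneResult { get; set; }
        public string MyActivityTwoResult { get; set; }
    }

    public class CompleteResponseData
    {
        public BeginRequestData BeginRequestData { get; set; }
        public string Id2 { get; set; }
        public string MyActivityTwoResult { get; set; }
    }
}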

The Startup class adds the services to the DI so that constructor injection can be used in the implementation classes.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;
using Microsoft.Extensions.DependencyInjection;
using DurableWait;
using DurableWait.Activities;
using System;
using System.Reflection;

[assembly: FunctionsStartup(typeof(Startup))]

namespace DurableWait
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            var keyVaultEndpoint = Environment.GetEnvironmentVariable("AzureKeyVaultEndpoint");

            if (!string.IsNullOrEmpty(keyVaultEndpoint))
            {
                // using Key Vault, either local dev or deployed
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

                var config = new ConfigurationBuilder()
                        .AddAzureKeyVault(keyVaultEndpoint, keyVaultClient,
                            new DefaultKeyVaultSecretManager())
                        .SetBasePath(Environment.CurrentDirectory)
                        .AddJsonFile("local.settings.json", true)
                        .AddEnvironmentVariables()
                    .Build();

                builder.Services.AddSingleton<IConfiguration>(config);
            }
            else
            {
                // local dev no Key Vault
                var config = new ConfigurationBuilder()
               .SetBasePath(Environment.CurrentDirectory)
               .AddJsonFile("local.settings.json", true)
               .AddUserSecrets(Assembly.GetExecutingAssembly(), true)
               .AddEnvironmentVariables()
               .Build();

                builder.Services.AddSingleton<IConfiguration>(config);
            }

            builder.Services.AddOptions<MyConfiguration>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    configuration.GetSection("MyConfiguration").Bind(settings);
                });

            builder.Services.AddOptions<MyConfigurationSecrets>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    configuration.GetSection("MyConfigurationSecrets").Bind(settings);
                });

            builder.Services.AddLogging();
            builder.Services.AddScoped<MyActivities>();
            builder.Services.AddScoped<Processing>();
        }
    }
}

If the process completes successfully, the result gets returned as required.

If the process fails, an error message is returned after the timeout. This was simulated using a thread sleep in an activity; the API call is set to time out after 7 seconds.
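
The slow step used for this simulation could be as simple as the following sketch; the activity name is hypothetical and the repository uses its own activities.

using System.Threading;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public class MySlowActivity
{
    // Hypothetical activity which sleeps longer than the 7 second timeout used in Processing
    [FunctionName("MySlowActivity")]
    public string Run([ActivityTrigger] IDurableActivityContext context)
    {
        Thread.Sleep(10000);
        return "done after a long delay";
    }
}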

Links:

https://damienbod.com/2018/12/23/using-azure-key-vault-with-asp-net-core-and-azure-app-services/

https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings

https://docs.microsoft.com/en-us/azure/azure-functions/durable/

https://github.com/Azure/azure-functions-durable-extension

https://damienbod.com/2019/03/14/running-local-azure-functions-in-visual-studio-with-https/

Microsoft Azure Storage Explorer

Microsoft Azure Storage Emulator

Install the Azure Functions Core Tools

NodeJS

Azure CLI

Azure SDK

Visual Studio Azure development extensions

Azure Durable Functions Monitoring and Diagnostics

This post shows some of the possibilities for monitoring Azure Durable Functions and how diagnostic APIs could be implemented.

Code: https://github.com/damienbod/AzureDurableFunctions

Posts in this series

Diagnostic APIs

Azure Functions could be used to add APIs which can request the status of the different orchestration instances or even complete lists of flows using the Azure Durable Function APIs.

The IDurableOrchestrationClient interface provides different APIs for displaying the state of the different orchestrations. The GetStatusAsync method can be used to return the current status of a single orchestration instance. The status can be returned with the history if required.

[FunctionName(Constants.Diagnostics)]
public async Task<IActionResult> Diagnostics(
 [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
 [DurableClient] IDurableOrchestrationClient starter,
 ILogger log)
{
	string instanceId = req.Query["instanceId"];
	log.LogInformation($"Started DiagnosticsApi with ID = '{instanceId}'.");

	var data = await starter.GetStatusAsync(instanceId, true);
	return new OkObjectResult(data);
}

This can be viewed in the browser using an HTTP GET request with the instanceId query parameter.

An Azure Function GetCompletedFlows can be implemented to return a list of orchestrations as required. The following example returns all completed instances from the last N days, where N is read from the days query parameter. If nothing is defined, the instances from the last day are returned.

[FunctionName(Constants.GetCompletedFlows)]
public async Task<IActionResult> GetCompletedFlows(
[HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
[DurableClient] IDurableOrchestrationClient client,
ILogger log)
{
	var runtimeStatus = new List<OrchestrationRuntimeStatus> {
		OrchestrationRuntimeStatus.Completed
	};

	return await FindOrchestrations(req, client, runtimeStatus,
		DateTime.UtcNow.AddDays(GetDays(req)),
		DateTime.UtcNow, true);
}

private static int GetDays(HttpRequest req)
{
	string daysString = req.Query["days"];
	if (!string.IsNullOrEmpty(daysString))
	{
		var ok = int.TryParse(daysString, out int days);
		if (!ok)
		{
			return -1;
		}
		return -days;
	}

	return -1;
}

The FindOrchestrations method implements the search using the ListInstancesAsync method from the Azure Durable Functions IDurableOrchestrationClient interface. This takes an OrchestrationStatusQueryCondition parameter which can be set up as required.

private async Task<IActionResult> FindOrchestrations(
	HttpRequest req,  
	IDurableOrchestrationClient client,
	IEnumerable<OrchestrationRuntimeStatus> runtimeStatus,
	DateTime from,
	DateTime to,
	bool showInput = false)
{
	// Define the cancellation token.
	CancellationTokenSource source = new CancellationTokenSource();
	CancellationToken token = source.Token;

	var instances = await client.ListInstancesAsync(
		new OrchestrationStatusQueryCondition
		{
			CreatedTimeFrom = from,
			CreatedTimeTo = to,
			RuntimeStatus = runtimeStatus,
			ShowInput = showInput
		},
		token
	);

	return new OkObjectResult(instances);
}
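
In the same way, an API could be added to purge the history of a finished instance using the PurgeInstanceHistoryAsync method. This is a sketch which is not part of the original diagnostics functions; the function name is hypothetical.

[FunctionName("PurgeFlow")]
public async Task<IActionResult> PurgeFlow(
 [HttpTrigger(AuthorizationLevel.Anonymous, "delete")] HttpRequest req,
 [DurableClient] IDurableOrchestrationClient client,
 ILogger log)
{
	string instanceId = req.Query["instanceId"];
	log.LogInformation($"Purging history for instance '{instanceId}'.");

	// Removes the persisted history and state of a single finished instance
	var result = await client.PurgeInstanceHistoryAsync(instanceId);
	return new OkObjectResult(result);
}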

When the functions are running, the data can be viewed directly in the browser using a simple GET request. If processing sensitive data, these APIs would have to be secured.

DurableFunctionsMonitor

DurableFunctionsMonitor provides another way of monitoring your Azure Durable Functions. This is an excellent OSS project which can be added to Visual Studio Code as an extension or used directly.

https://github.com/scale-tone/DurableFunctionsMonitor

I added this to Visual Studio Code as an extension. Once installed, it needs to be configured. In the Visual Studio Code explorer, you should have a Durable Functions menu. Click the plug icon and add the connection string of your storage account. This can be found using the Azure Storage Explorer: when you click the storage account you want to view, the properties display the connection string.

Then you need to select the Durable Functions task hub. The name can be found in the Storage Explorer under Tables: there is one table for the history and one for the instances, and the task hub name is the prefix before the History and Instances suffixes.

Now the orchestration instances can be viewed, and actions such as purge, terminate and so on can be run.

Azure Durable Functions are built on Azure Functions, which run as an Azure App Service. This also provides the standard monitoring and diagnostics APIs. The persisted Azure Durable Functions data can also be viewed directly in the Azure Storage Explorer.

Links:

https://github.com/scale-tone/DurableFunctionsMonitor

https://www.npmjs.com/package/azure-functions-core-tools

https://damienbod.com/2018/12/23/using-azure-key-vault-with-asp-net-core-and-azure-app-services/

https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings

https://docs.microsoft.com/en-us/azure/azure-functions/durable/

https://github.com/Azure/azure-functions-durable-extension

https://damienbod.com/2019/03/14/running-local-azure-functions-in-visual-studio-with-https/

Microsoft Azure Storage Explorer

Microsoft Azure Storage Emulator

Install the Azure Functions Core Tools

NodeJS

Azure CLI

Azure SDK

Visual Studio Azure development extensions

Symmetric and Asymmetric Encryption in .NET Core

This post looks at symmetric and asymmetric encryption and how they can be implemented in .NET Core. Symmetric encryption is fast and can encrypt or decrypt large amounts of text, streams or files, but requires a shared key. Asymmetric encryption can be used without sharing a key, but can only encrypt or decrypt small texts, depending on the key size.

Code: https://github.com/damienbod/SendingEncryptedData

Symmetric Encryption in .NET Core

System.Security.Cryptography implements and provides the APIs for encryption in .NET Core. In this example, a simple text is encrypted. The key is created from random bytes generated using the RNGCryptoServiceProvider class. The key and the IV for each encryption session are returned as base64 strings. Using the key and the IV, strings can be encrypted or decrypted.

public (string Key, string IVBase64) InitSymmetricEncryptionKeyIV()
{
	var key = GetEncodedRandomString(32); // 256
	Aes cipher = CreateCipher(key);
	var IVBase64 = Convert.ToBase64String(cipher.IV);
	return (key, IVBase64);
}

private string GetEncodedRandomString(int length)
{
	var base64 = Convert.ToBase64String(GenerateRandomBytes(length));
	return base64;
}

private Aes CreateCipher(string keyBase64)
{
	// Default values: Keysize 256, Mode CBC, Padding PKCS7
	Aes cipher = Aes.Create();

	cipher.Padding = PaddingMode.ISO10126;
	cipher.Key = Convert.FromBase64String(keyBase64);

	return cipher;
}

private byte[] GenerateRandomBytes(int length)
{
	using var randonNumberGen = new RNGCryptoServiceProvider();
	var byteArray = new byte[length];
	randonNumberGen.GetBytes(byteArray);
	return byteArray;
}

The Encrypt method takes the three parameters and produces an encrypted text which can only be decrypted using the same key and IV base 64 strings. If encrypting large amounts of text, then a CryptoStream should be used. See the example in the Microsoft docs.

public string Encrypt(string text, string IV, string key)
{
	Aes cipher = CreateCipher(key);
	cipher.IV = Convert.FromBase64String(IV);

	ICryptoTransform cryptTransform = cipher.CreateEncryptor();
	byte[] plaintext = Encoding.UTF8.GetBytes(text);
	byte[] cipherText = cryptTransform.TransformFinalBlock(plaintext, 0, plaintext.Length);

	return Convert.ToBase64String(cipherText);
}

The Decrypt method takes the same three parameters as the Encrypt method and produces a decrypted text.

public string Decrypt(string encryptedText, string IV, string key)
{
	Aes cipher = CreateCipher(key);
	cipher.IV = Convert.FromBase64String(IV);

	ICryptoTransform cryptTransform = cipher.CreateDecryptor();
	byte[] encryptedBytes = Convert.FromBase64String(encryptedText);
	byte[] plainBytes = cryptTransform.TransformFinalBlock(encryptedBytes, 0, encryptedBytes.Length);

	return Encoding.UTF8.GetString(plainBytes);
}

A simple console application can be used to demonstrate that the symmetric encryption in .NET Core works using AES. For this to work, the key and the IV need to be shared to decrypt the encrypted text. This would also work, with small changes, for streams or files.

using EncryptDecryptLib;
using System;

namespace ConsoleCreateEncryptedText
{
    class Program
    {
        static void Main(string[] args)
        {
           
            var text = "I have a big dog. You've got a cat. We all love animals!";


            Console.WriteLine("-- Encrypt Decrypt symmetric --");
            Console.WriteLine("");

            var symmetricEncryptDecrypt = new SymmetricEncryptDecrypt();
            var (Key, IVBase64) = symmetricEncryptDecrypt.InitSymmetricEncryptionKeyIV();

            var encryptedText = symmetricEncryptDecrypt.Encrypt(text, IVBase64, Key);

            Console.WriteLine("-- Key --");
            Console.WriteLine(Key);
            Console.WriteLine("-- IVBase64 --");
            Console.WriteLine(IVBase64);

            Console.WriteLine("");
            Console.WriteLine("-- Encrypted Text --");
            Console.WriteLine(encryptedText);

            var decryptedText = symmetricEncryptDecrypt.Decrypt(encryptedText, IVBase64, Key);

            Console.WriteLine("-- Decrypted Text --");
            Console.WriteLine(decryptedText);
        }
    }
}

Asymmetric Encryption in .NET Core

Asymmetric encryption is great in that a shared secret is not required. The text is encrypted with the public key and can only be decrypted with the private key of the same RSA key pair. The size of the text which can be encrypted is limited by the key size.

public string Encrypt(string text, RSA rsa)
{
	byte[] data = Encoding.UTF8.GetBytes(text);
	byte[] cipherText = rsa.Encrypt(data, RSAEncryptionPadding.Pkcs1);
	return Convert.ToBase64String(cipherText);
}

public string Decrypt(string text, RSA rsa)
{
	byte[] data = Convert.FromBase64String(text); 
	byte[] cipherText = rsa.Decrypt(data, RSAEncryptionPadding.Pkcs1);
	return Encoding.UTF8.GetString(cipherText);
}

The CreateRsaPublicKey and the CreateRsaPrivateKey static utility methods create an RSA instance from an X509Certificate2. The private key RSA is used for decryption; the public key RSA is used for encryption.

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

namespace EncryptDecryptLib
{
    public static class Utils
    {
        public static RSA CreateRsaPublicKey(X509Certificate2 certificate)
        {
            RSA publicKeyProvider = certificate.GetRSAPublicKey();
            return publicKeyProvider;
        }

        public static RSA CreateRsaPrivateKey(X509Certificate2 certificate)
        {
            RSA privateKeyProvider = certificate.GetRSAPrivateKey();
            return privateKeyProvider;
        }
    }
}

An X509Certificate2 certificate is used to encrypt and decrypt the strings. CertificateManager was used to create the certificate.

using CertificateManager;
using CertificateManager.Models;
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

namespace EncryptDecryptLib
{
    public class CreateRsaCertificates
    {
        public static X509Certificate2 CreateRsaCertificate(
          CreateCertificates createCertificates, int keySize)
        {
            var basicConstraints = new BasicConstraints
            {
                CertificateAuthority = true,
                HasPathLengthConstraint = true,
                PathLengthConstraint = 2,
                Critical = false
            };

            var subjectAlternativeName = new SubjectAlternativeName
            {
                DnsName = new List<string>
                {
                    "SigningCertificateTest",
                }
            };

            var x509KeyUsageFlags = X509KeyUsageFlags.KeyCertSign
               | X509KeyUsageFlags.DigitalSignature
               | X509KeyUsageFlags.KeyEncipherment
               | X509KeyUsageFlags.CrlSign
               | X509KeyUsageFlags.DataEncipherment
               | X509KeyUsageFlags.NonRepudiation
               | X509KeyUsageFlags.KeyAgreement;

            var enhancedKeyUsages = new OidCollection
            {
                OidLookup.CodeSigning,
                OidLookup.SecureEmail,
                OidLookup.TimeStamping 
            };

            var certificate = createCertificates.NewRsaSelfSignedCertificate(
                new DistinguishedName { CommonName = "SigningCertificateTest" },
                basicConstraints,
                new ValidityPeriod
                {
                    ValidFrom = DateTimeOffset.UtcNow,
                    ValidTo = DateTimeOffset.UtcNow.AddYears(1)
                },
                subjectAlternativeName,
                enhancedKeyUsages,
                x509KeyUsageFlags,
                new RsaConfiguration
                {
                    KeySize = keySize, 
                    RSASignaturePadding = RSASignaturePadding.Pkcs1,
                    HashAlgorithmName = HashAlgorithmName.SHA256
                });

            return certificate;
        }
    }
}

A console application is used to create an RSA certificate with a key size of 2048. This certificate is then used to encrypt a text, and then decrypt the encrypted text again.

using CertificateManager;
using EncryptDecryptLib;
using Microsoft.Extensions.DependencyInjection;
using System;

namespace ConsoleAsymmetricEncryption
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
            var serviceProvider = new ServiceCollection()
                .AddCertificateManager()
                .BuildServiceProvider();

            var cc = serviceProvider.GetService<CreateCertificates>();

            var cert2048 = CreateRsaCertificates.CreateRsaCertificate(cc, 2048);

            var text = "I have a big dog. You've got a cat. We all love animals!";

            Console.WriteLine("-- Encrypt Decrypt asymmetric --");
            Console.WriteLine("");

            var asymmetricEncryptDecrypt = new AsymmetricEncryptDecrypt();

            var encryptedText = asymmetricEncryptDecrypt.Encrypt(text,
                Utils.CreateRsaPublicKey(cert2048));

            Console.WriteLine("");
            Console.WriteLine("-- Encrypted Text --");
            Console.WriteLine(encryptedText);

            var decryptedText = asymmetricEncryptDecrypt.Decrypt(encryptedText,
               Utils.CreateRsaPrivateKey(cert2048));

            Console.WriteLine("-- Decrypted Text --");
            Console.WriteLine(decryptedText);
        }
    }
}



Asymmetric encryption is slow compared to symmetric encryption and has a strict size limit on the data it can encrypt. Symmetric encryption is fast and has no practical size limit, but requires a shared key. In the next blog, we will use asymmetric and symmetric encryption together and get the benefits of both to send encrypted texts to targeted identities.
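To preview the idea, here is a minimal sketch of the hybrid pattern using only the built-in Aes and RSA types. The class and method names are illustrative; the next post uses its own SymmetricEncryptDecrypt and AsymmetricEncryptDecrypt helpers instead.

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

public static class HybridEncryptionSketch
{
    public static (byte[] CipherText, byte[] EncryptedKey, byte[] IV) Encrypt(
        string plainText, X509Certificate2 recipientCertificate)
    {
        // Symmetric part: fast, no practical size limit on the payload
        using var aes = Aes.Create();
        using var encryptor = aes.CreateEncryptor();
        var plainBytes = Encoding.UTF8.GetBytes(plainText);
        var cipherText = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);

        // Asymmetric part: only the small AES key is encrypted with RSA
        using var rsa = recipientCertificate.GetRSAPublicKey();
        var encryptedKey = rsa.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA256);

        return (cipherText, encryptedKey, aes.IV);
    }
}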

Links:

https://docs.microsoft.com/en-us/dotnet/standard/security/encrypting-data

https://docs.microsoft.com/en-us/dotnet/standard/security/decrypting-data

https://docs.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview

https://edi.wang/post/2019/1/15/caveats-in-aspnet-core-data-protection

https://docs.microsoft.com/en-us/dotnet/api/system.security.cryptography.protecteddata.unprotect

https://docs.microsoft.com/en-us/dotnet/standard/security/how-to-use-data-protection

https://edi.wang/post/2019/1/15/caveats-in-aspnet-core-data-protection

https://docs.microsoft.com/en-us/dotnet/api/system.security.cryptography.aes?view=netcore-3.1

https://docs.microsoft.com/en-us/dotnet/standard/security/cross-platform-cryptography

https://dev.to/stratiteq/cryptography-with-practical-examples-in-net-core-1mc4

https://www.tpeczek.com/2020/08/supporting-encrypted-content-encoding.html

https://cryptobook.nakov.com/


Encrypting texts for an Identity in ASP.NET Core Razor Pages using AES and RSA


The article shows how encrypted texts can be created for specific users in an ASP.NET Core Razor page application. Symmetric encryption is used to encrypt the text or the payload. Asymmetric encryption is used to encrypt the AES key and the IV of the symmetric encryption. Each ASP.NET Core Identity has an associated X509Certificate2 with a private key and a public key. The public key, which is saved in a Microsoft SQL database in the PEM format, is used to encrypt the key used for the AES encryption. Only the owner of the private key can then decrypt the message.

Code: https://github.com/damienbod/SendingEncryptedData

Registering users using ASP.NET Core Identity

A standard ASP.NET Core application was created using ASP.NET Core Identity. The Razor pages from Identity were scaffolded into the application and a new class ApplicationUser was created which inherits from the IdentityUser class. This class was then used everywhere instead of the IdentityUser class. The ApplicationUser has two extra properties, PemPrivateKey and PemPublicKey, which are used to save the RSA certificate with a 3072 key size to the database in the PEM format.

public class ApplicationUser : IdentityUser
{
	public string PemPrivateKey { get; set; }

	public string PemPublicKey { get; set; }
}

The Register Razor Page is changed from the default scaffolded page to create a new RSA certificate for each new Identity. The X509Certificate2 was created using the CertificateManager Nuget package.

private readonly CreateCertificates _createCertificates;
private readonly ImportExportCertificate _importExportCertificate;
		
public RegisterModel(
	UserManager<ApplicationUser> userManager,
	SignInManager<ApplicationUser> signInManager,
	ILogger<RegisterModel> logger,
	IEmailSender emailSender,
	CreateCertificates createCertificates,
	ImportExportCertificate importExportCertificate)
{
	_userManager = userManager;
	_signInManager = signInManager;
	_logger = logger;
	_emailSender = emailSender;
	_createCertificates = createCertificates;
	_importExportCertificate = importExportCertificate;
}

The certificate is exported to a public key certificate in the PEM format and a PEM private key. These are then saved to the database for each new Identity. The PEM strings will be used when encrypting or decrypting texts.

 public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
	returnUrl = returnUrl ?? Url.Content("~/");
	ExternalLogins = (await _signInManager.GetExternalAuthenticationSchemesAsync()).ToList();
	if (ModelState.IsValid)
	{
		var identityRsaCert3072 = CreateRsaCertificates.CreateRsaCertificate(_createCertificates, 3072);
		var publicKeyPem = _importExportCertificate.PemExportPublicKeyCertificate(identityRsaCert3072);
		var privateKeyPem = _importExportCertificate.PemExportRsaPrivateKey(identityRsaCert3072);

		var user = new ApplicationUser { 
			UserName = Input.Email, 
			Email = Input.Email,
			PemPrivateKey = privateKeyPem,
			PemPublicKey = publicKeyPem
		};

		var result = await _userManager.CreateAsync(user, Input.Password);
		if (result.Succeeded)
		{
			_logger.LogInformation("User created a new account with password.");

			var code = await _userManager.GenerateEmailConfirmationTokenAsync(user);
			code = WebEncoders.Base64UrlEncode(Encoding.UTF8.GetBytes(code));
			var callbackUrl = Url.Page(
				"/Account/ConfirmEmail",
				pageHandler: null,
				values: new { area = "Identity", userId = user.Id, code = code, returnUrl = returnUrl },
				protocol: Request.Scheme);

			await _emailSender.SendEmailAsync(Input.Email, "Confirm your email",
				$"Please confirm your account by <a href='{HtmlEncoder.Default.Encode(callbackUrl)}'>clicking here</a>.");

			if (_userManager.Options.SignIn.RequireConfirmedAccount)
			{
				return RedirectToPage("RegisterConfirmation", new { email = Input.Email, returnUrl = returnUrl });
			}
			else
			{
				await _signInManager.SignInAsync(user, isPersistent: false);
				return LocalRedirect(returnUrl);
			}
		}
		foreach (var error in result.Errors)
		{
			ModelState.AddModelError(string.Empty, error.Description);
		}
	}

	// If we got this far, something failed, redisplay form
	return Page();
}

Now all Identities have a certificate saved in the PEM format in two fields which can be used to encrypt or decrypt the texts.

Creating Encrypted texts for an Identity

A new Razor page was created to provide the UI to encrypt the texts. After you log in, you can select any Identity which exists in the database; this is the target person who will receive the encrypted message. A text can be pasted into the text area and the encrypt button submits the form as an HTTP POST.

@page
@model ExchangeSecureTexts.Pages.EncryptTextModel
@{
}

<form asp-page="EncryptTextModel" method="post">

    <div class="form-group">
        <label for="TargetUserEmail">Encrypted Text intended for Identity with Email address </label>
        <select name="TargetUserEmail" asp-items="Model.Users" class="form-control"></select>
    </div>

    <div class="form-group">
        <label for="Message">Message:</label>
        <textarea class="form-control" rows="5" id="Message" name="Message">@Model.Message</textarea>
        <span asp-validation-for="Message" style="color:red"></span>
    </div>

    <button type="submit" class="btn btn-primary" style="width:100%">Encrypt</button>

</form>

<br />
<br />
<textarea class="form-control" rows="5" id="EncryptedMessage" name="EncryptedMessage" readonly>@Model.EncryptedMessage</textarea>

The code behind the Razor page view requires properties to bind to. The SymmetricEncryptDecrypt, AsymmetricEncryptDecrypt, ApplicationDbContext and ImportExportCertificate services, which were added to the DI in the startup class, are injected in the constructor. The SymmetricEncryptDecrypt and the AsymmetricEncryptDecrypt services are used to encrypt and decrypt the texts. The ApplicationDbContext is used to find the Identities and select the public PEM string for the target user. The ImportExportCertificate is used to import the PEM string and create an X509Certificate2 to encrypt the key and the IV used for the AES encryption.

public class EncryptTextModel : PageModel
{
	private readonly SymmetricEncryptDecrypt _symmetricEncryptDecrypt;
	private readonly AsymmetricEncryptDecrypt _asymmetricEncryptDecrypt;
	private readonly ApplicationDbContext _applicationDbContext;
	private readonly ImportExportCertificate _importExportCertificate;

	[BindProperty]
	[Required]
	public string TargetUserEmail { get; set; }

	[BindProperty]
	[Required]
	public string Message { get; set; }

	[BindProperty]
	public string EncryptedMessage { get; set; }

	public List<SelectListItem> Users { get; set; }

	public EncryptTextModel(SymmetricEncryptDecrypt symmetricEncryptDecrypt,
		AsymmetricEncryptDecrypt asymmetricEncryptDecrypt,
		ApplicationDbContext applicationDbContext,
		ImportExportCertificate importExportCertificate)
	{
		_symmetricEncryptDecrypt = symmetricEncryptDecrypt;
		_asymmetricEncryptDecrypt = asymmetricEncryptDecrypt;
		_applicationDbContext = applicationDbContext;
		_importExportCertificate = importExportCertificate;
	}

	public IActionResult OnGet()
	{
		// not good if you have a lot of users
		Users = _applicationDbContext.Users.Select(a =>
							 new SelectListItem
							 {
								 Value = a.Email.ToString(),
								 Text = a.Email
							 }).ToList();

		return Page();
	}
}

The HTTP POST creates the encrypted data and returns this to the UI. As in the previous post, the symmetric encryption creates a new AES key and IV in a base 64 format. The target Identity email is used to get the public key PEM string and an X509Certificate2 certificate is created from this. The key and the IV are then encrypted using RSA asymmetric encryption. The data is then serialized as a JSON string and returned to the UI.

public IActionResult OnPost()
{
	if (!ModelState.IsValid)
	{
		// Something failed. Redisplay the form.
		return OnGet();
	}

	var (Key, IVBase64) = _symmetricEncryptDecrypt.InitSymmetricEncryptionKeyIV();

	var encryptedText = _symmetricEncryptDecrypt.Encrypt(Message, IVBase64, Key);

	var targetUserPublicCertificate = GetCertificateWithPublicKeyForIdentity(TargetUserEmail);

	var encryptedKey = _asymmetricEncryptDecrypt.Encrypt(Key,
		Utils.CreateRsaPublicKey(targetUserPublicCertificate));

	var encryptedIV = _asymmetricEncryptDecrypt.Encrypt(IVBase64,
		Utils.CreateRsaPublicKey(targetUserPublicCertificate));

	var encryptedDto = new EncryptedDto
	{
		EncryptedText = encryptedText,
		Key = encryptedKey,
		IV = encryptedIV
	};

	string jsonString = JsonSerializer.Serialize(encryptedDto);

	EncryptedMessage = $"{jsonString}";

	// Redisplay the form.
	return OnGet();

}

private X509Certificate2 GetCertificateWithPublicKeyForIdentity(string email)
{
	var user = _applicationDbContext.Users.First(user => user.Email == email);
	var cert = _importExportCertificate.PemImportCertificate(user.PemPublicKey);
	return cert;
}
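
The EncryptedDto used above is not listed in the post; based on how it is serialized here and deserialized in the Decrypt page, it is presumably a simple DTO with three string properties, something like this:

public class EncryptedDto
{
	// AES encrypted payload, base 64 encoded
	public string EncryptedText { get; set; }

	// AES key, RSA encrypted with the public key of the target Identity
	public string Key { get; set; }

	// AES IV, RSA encrypted with the public key of the target Identity
	public string IV { get; set; }
}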

The data can now be copied and sent over an insecure channel, for example as an email. The Decrypt Razor Page can be used to read the data.

Decrypting texts

The Decrypt Razor Page takes the encrypted JSON string and submits an HTTP POST request. The original text is returned if the Identity trying to read the data has the correct PEM private key to decrypt the key and the IV.

@page
@model ExchangeSecureTexts.Pages.DecryptTextModel
@{
}

<form asp-page="DecryptTextModel" method="post">


    <div class="form-group">
        <label for="Message">EncryptedMessage:</label>
        <textarea class="form-control" rows="5" id="Message" name="EncryptedMessage">@Model.EncryptedMessage</textarea>
        <span asp-validation-for="EncryptedMessage" style="color:red"></span>
    </div>

    <button type="submit" class="btn btn-primary" style="width:100%">Decrypt</button>

</form>

<br />
<br />
<textarea class="form-control" rows="5" id="Message" name="Message" readonly>@Model.Message</textarea>

The POST method gets the RSA certificate for the Identity using the public and the private PEM strings from the database. The key and the IV are then decrypted using the RSA private key, and the decrypted key and IV are used to decrypt the AES encrypted text.

public IActionResult OnPost()
{
	if (!ModelState.IsValid)
	{
		// Something failed. Redisplay the form.
		return OnGet();
	}

	var cert = GetCertificateWithPrivateKeyForIdentity();

	var encryptedDto = JsonSerializer.Deserialize<EncryptedDto>(EncryptedMessage);

	var key = _asymmetricEncryptDecrypt.Decrypt(encryptedDto.Key,
		Utils.CreateRsaPrivateKey(cert));

	var IV = _asymmetricEncryptDecrypt.Decrypt(encryptedDto.IV,
		Utils.CreateRsaPrivateKey(cert));

	var text = _symmetricEncryptDecrypt.Decrypt(encryptedDto.EncryptedText, IV, key);

	Message = $"{text}";

	// Redisplay the form.
	return OnGet();

}

private X509Certificate2 GetCertificateWithPrivateKeyForIdentity()
{
	var user = _applicationDbContext.Users.First(user => user.Email == User.Identity.Name);

	var certWithPublicKey = _importExportCertificate.PemImportCertificate(user.PemPublicKey);
	var privateKey = _importExportCertificate.PemImportPrivateKey(user.PemPrivateKey);

	var cert = _importExportCertificate.CreateCertificateWithPrivateKey(
		certWithPublicKey, privateKey);

	return cert;
}

The encrypted text can now be read again.

By using asymmetric encryption together with symmetric encryption in this way, a text can be sent safely over an insecure channel, for example in an email. The sender knows that only the owner of the private key can read the message. The receiver of the message does not know who sent the message; this can be solved using hashes and digital signatures. Message authentication is also required when using CBC mode in AES, to detect tampering with the ciphertext. The private key is saved in the database as a plain PEM string, which is not good: if the database is lost or made public, all messages can be read, so the private key needs additional protection. The users in the encrypt Razor page select are just returned as a flat list; a search function or paging would need to be implemented here, and error handling would also be required for a real application.
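
One way to protect the private key at rest, sketched below, would be to wrap the PEM string with ASP.NET Core Data Protection before saving it. This is not part of the sample; it assumes services.AddDataProtection() is registered and the class name and purpose string are illustrative.

using Microsoft.AspNetCore.DataProtection;

public class PrivateKeyProtector
{
	private readonly IDataProtector _protector;

	public PrivateKeyProtector(IDataProtectionProvider provider)
	{
		// The purpose string ties the protected payload to this use case
		_protector = provider.CreateProtector("ExchangeSecureTexts.PemPrivateKey");
	}

	// Call before saving the ApplicationUser.PemPrivateKey
	public string Protect(string pemPrivateKey) => _protector.Protect(pemPrivateKey);

	// Call before using the private key to decrypt a message
	public string Unprotect(string protectedPem) => _protector.Unprotect(protectedPem);
}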

Links:

https://docs.microsoft.com/en-us/dotnet/standard/security/encrypting-data

https://docs.microsoft.com/en-us/dotnet/standard/security/decrypting-data

https://docs.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview

https://docs.microsoft.com/en-us/dotnet/api/system.security.cryptography.protecteddata.unprotect

https://docs.microsoft.com/en-us/dotnet/standard/security/how-to-use-data-protection

https://docs.microsoft.com/en-us/dotnet/api/system.security.cryptography.aes?view=netcore-3.1

https://docs.microsoft.com/en-us/dotnet/standard/security/cross-platform-cryptography

https://docs.microsoft.com/en-us/dotnet/standard/security/vulnerabilities-cbc-mode

https://edi.wang/post/2019/1/15/caveats-in-aspnet-core-data-protection

https://dev.to/stratiteq/cryptography-with-practical-examples-in-net-core-1mc4

https://www.tpeczek.com/2020/08/supporting-encrypted-content-encoding.html

https://cryptobook.nakov.com/

https://www.meziantou.net/cryptography-in-dotnet.htm

Securing Azure Key Vault inside a VNET and using from an Azure Function


This post shows how an Azure Key Vault can be protected inside an Azure virtual network. The deployment is set up so that only applications in the same VNET can access the Key Vault. To implement this, access to the Key Vault is restricted to the VNET and, secondly, the applications accessing the Key Vault require an access policy. Managed Identities can be used for this. In the deployment, an Azure Function uses the secret from the Key Vault.

Code: https://github.com/damienbod/AzureFunctionsSecurity

Blogs in the series

Azure Deployment

The application deployment is set up so that the Azure Key Vault is not accessible from the internet. Only applications inside the VNET can use the Key Vault.

Creating the Deployment

A new Azure Key Vault can be created and added to the required resource group.

Add the Key Vault to your Virtual network. Select the subnet where the Azure Function is deployed. The Azure Function was added to the VNET in this post.

The Azure Key Vault should be configured to use the Virtual network subnets now.

The secrets can only be configured or used from inside the VNET. If you need to view, add or update secrets, you would need to configure the Key Vault firewall to allow your IP address to access the secrets.

The access policies must be configured for the applications which require access. Add only the required policies; at a minimum, the Get and List secret permissions are needed to read the secrets.

The Azure Functions need to be set up to use the Key Vault. See the post for the details.
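
As a rough sketch of that setup (not the code from the referenced post), the Key Vault can be added to the Functions configuration using a managed identity. This assumes the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets packages and Microsoft.Azure.Functions.Extensions 1.1.0 or later; the vault URL and namespace are placeholders.

using Azure.Identity;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using System;

[assembly: FunctionsStartup(typeof(MyFunctions.Startup))]
namespace MyFunctions
{
    public class Startup : FunctionsStartup
    {
        public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
        {
            // DefaultAzureCredential uses the managed identity when deployed to Azure
            builder.ConfigurationBuilder.AddAzureKeyVault(
                new Uri("https://your-key-vault-name.vault.azure.net/"),
                new DefaultAzureCredential());
        }

        public override void Configure(IFunctionsHostBuilder builder)
        {
            // register services and bind configuration sections (for example MyConfigurationSecrets) here
        }
    }
}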

In this setup, the web application is connected to the internet. The web application uses the Azure Function to get data. In the Azure Function, the secret is used and returned in the API call.

public class RandomStringFunction
{
	private readonly ILogger _log;
	private readonly MyConfigurationSecrets _myConfigurationSecrets;

	public RandomStringFunction(ILoggerFactory loggerFactory,
		IOptions<MyConfigurationSecrets> myConfigurationSecrets)
	{
		_log = loggerFactory.CreateLogger<RandomStringFunction>();
		_myConfigurationSecrets = myConfigurationSecrets.Value;
	}

	[FunctionName("RandomString")]
	public IActionResult RandomString(
		[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req)
	{
		_log.LogInformation("C# HTTP trigger RandomStringAuthLevelAnonymous processed a request.");

		return new OkObjectResult($"{_myConfigurationSecrets.MySecret}  {GetEncodedRandomString()}");
	}

When the application is called, the secret is displayed in the web application as a proof of concept.

If the Azure Function is called directly, access is denied. This was configured in the previous post.

Now it looks like everything is working as planned. To verify that the Key Vault is not accessible from the internet, the Azure Functions can be run locally and configured to use the Key Vault. When the Functions are started, the application tries to access the vault and a 403 is returned.

If we want to run locally, the Azure Key Vault Firewall needs to allow the IP of the host connecting to it, or the client app needs to be deployed inside the VNET.

The applications are now deployed and secured using network security. The next step would be to add application security, such as authentication, authorization and session protection.

Links:

https://docs.microsoft.com/en-us/azure/azure-functions/security-concepts

https://docs.microsoft.com/en-us/azure/virtual-network/

https://docs.microsoft.com/en-us/azure/virtual-network/tutorial-restrict-network-access-to-resources

https://docs.microsoft.com/en-us/azure/virtual-network/quickstart-create-nat-gateway-portal

http://www.subnet-calculator.com/

https://damienbod.com/2020/07/20/using-key-vault-and-managed-identities-with-azure-functions/

https://docs.microsoft.com/en-us/azure/key-vault/

Securing Azure Functions using Azure AD JWT Bearer token authentication for user access tokens


This post shows how to implement OAuth security for an Azure Function using user-access JWT Bearer tokens created using Azure AD and App registrations. A client web application implemented in ASP.NET Core is used to authenticate and the access token created for the identity is used to access the API implemented using Azure Functions. Microsoft.Identity.Web is used to authenticate the user and the application.

Code: https://github.com/damienbod/AzureFunctionsSecurity

Blogs in the series

Setup Azure Functions Auth

Using JWT Bearer tokens in Azure Functions is not supported by default. You need to implement the authorization and access token validation yourself, although ASP.NET Core provides many APIs which make this easy. I implemented this example based on the excellent blogs from Christos Matskas and Boris Wilhelms. Thanks for these.

The AzureADJwtBearerValidation class uses the Azure AD configuration values to fetch the Azure Active Directory well known OpenID Connect configuration for your tenant. The access token is validated, the required scope (access_as_user) is checked, and the standard OAuth token validations are applied as well.

The claims from the access token are returned in a ClaimsPrincipal and can be used as required. The class can be extended to validate different scopes or whatever you require for your application.

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;
using Microsoft.IdentityModel.Protocols;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;
using Microsoft.IdentityModel.Tokens;

namespace FunctionIdentityUserAccess
{
    public class AzureADJwtBearerValidation
    {
        private IConfiguration _configuration;
        private ILogger _log;
        private const string scopeType = @"http://schemas.microsoft.com/identity/claims/scope";
        private ConfigurationManager<OpenIdConnectConfiguration> _configurationManager;
        private ClaimsPrincipal _claimsPrincipal;

        private string _wellKnownEndpoint = string.Empty;
        private string _tenantId = string.Empty;
        private string _audience = string.Empty;
        private string _instance = string.Empty;
        private string _requiredScope = "access_as_user";

        public AzureADJwtBearerValidation(IConfiguration configuration, ILoggerFactory loggerFactory)
        {
            _configuration = configuration;
            _log = loggerFactory.CreateLogger<AzureADJwtBearerValidation>();

            _tenantId = _configuration["AzureAd:TenantId"];
            _audience = _configuration["AzureAd:ClientId"];
            _instance = _configuration["AzureAd:Instance"];
            _wellKnownEndpoint = $"{_instance}{_tenantId}/v2.0/.well-known/openid-configuration";
        }

        public async Task<ClaimsPrincipal> ValidateTokenAsync(string authorizationHeader)
        {
            if (string.IsNullOrEmpty(authorizationHeader))
            {
                return null;
            }

            if (!authorizationHeader.Contains("Bearer"))
            {
                return null;
            }

            var accessToken = authorizationHeader.Substring("Bearer ".Length);

            var oidcWellknownEndpoints = await GetOIDCWellknownConfiguration();
 
            var tokenValidator = new JwtSecurityTokenHandler();

            var validationParameters = new TokenValidationParameters
            {
                RequireSignedTokens = true,
                ValidAudience = _audience,
                ValidateAudience = true,
                ValidateIssuer = true,
                ValidateIssuerSigningKey = true,
                ValidateLifetime = true,
                IssuerSigningKeys = oidcWellknownEndpoints.SigningKeys,
                ValidIssuer = oidcWellknownEndpoints.Issuer
            };

            try
            {
                SecurityToken securityToken;
                _claimsPrincipal = tokenValidator.ValidateToken(accessToken, validationParameters, out securityToken);

                if (IsScopeValid(_requiredScope))
                {
                    return _claimsPrincipal;
                }

                return null;
            }
            catch (Exception ex)
            {
                _log.LogError(ex.ToString());
            }
            return null;
        }

        public string GetPreferredUserName()
        {
            string preferredUsername = string.Empty;
            var preferred_username = _claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "preferred_username");
            if (preferred_username != null)
            {
                preferredUsername = preferred_username.Value;
            }

            return preferredUsername;
        }

        private async Task<OpenIdConnectConfiguration> GetOIDCWellknownConfiguration()
        {
            _log.LogDebug($"Get OIDC well known endpoints {_wellKnownEndpoint}");
            _configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
                 _wellKnownEndpoint, new OpenIdConnectConfigurationRetriever());

            return await _configurationManager.GetConfigurationAsync();
        }

        private bool IsScopeValid(string scopeName)
        {
            if (_claimsPrincipal == null)
            {
                _log.LogWarning($"Scope invalid {scopeName}");
                return false;
            }

            var scopeClaim = _claimsPrincipal.HasClaim(x => x.Type == scopeType)
                ? _claimsPrincipal.Claims.First(x => x.Type == scopeType).Value
                : string.Empty;

            if (string.IsNullOrEmpty(scopeClaim))
            {
                _log.LogWarning($"Scope invalid {scopeName}");
                return false;
            }

            if (!scopeClaim.Equals(scopeName, StringComparison.OrdinalIgnoreCase))
            {
                _log.LogWarning($"Scope invalid {scopeName}");
                return false;
            }

            _log.LogDebug($"Scope valid {scopeName}");
            return true;
        }
    }
}

When using Microsoft.IdentityModel.Protocols.OpenIdConnect you need to add the _FunctionsSkipCleanOutput property to your Azure Functions project file, otherwise you will have runtime exceptions. System.IdentityModel.Tokens.Jwt is also required.

 <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <AzureFunctionsVersion>v3</AzureFunctionsVersion>
    <_FunctionsSkipCleanOutput>true</_FunctionsSkipCleanOutput>
    <LangVersion>latest</LangVersion>
 </PropertyGroup>

 <ItemGroup>
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.9" />
    <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="4.0.2" />
    <PackageReference Include="Microsoft.Azure.KeyVault" Version="3.0.5" />
    
    <PackageReference Include="Microsoft.Extensions.Configuration.AzureKeyVault" Version="3.1.8" />
    <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="3.1.8" />
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="3.1.8" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="3.1.8" />
    <PackageReference Include="System.Configuration.ConfigurationManager" Version="4.7.0" />
    
    <PackageReference Include="System.IdentityModel.Tokens.Jwt" Version="6.7.1" />
    <PackageReference Include="Microsoft.IdentityModel.Protocols.OpenIdConnect" Version="6.7.1" />
  </ItemGroup>

The AzureADJwtBearerValidation service is added to the DI in the startup class.

[assembly: FunctionsStartup(typeof(Startup))]
namespace FunctionIdentityUserAccess
{

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddScoped<AzureADJwtBearerValidation>();
        }

Add the AzureAd configurations to the local settings as required and also to the Azure Functions configurations in the portal.

  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Azure portal. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]",
    "ClientId": "[Enter the Client Id (Application ID obtained from the Azure portal), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]"
  }

The Azure function RandomString can use the AzureADJwtBearerValidation service to validate the access token and get the claims back as required. If the access token is invalid, a 401 is returned; otherwise the response is returned as required.

namespace FunctionIdentityUserAccess
{
    public class RandomStringFunction
    {
        private readonly ILogger _log;
        private readonly AzureADJwtBearerValidation _azureADJwtBearerValidation;

        public RandomStringFunction(ILoggerFactory loggerFactory,
            AzureADJwtBearerValidation azureADJwtBearerValidation)
        {
            _log = loggerFactory.CreateLogger<RandomStringFunction>();
            _azureADJwtBearerValidation = azureADJwtBearerValidation;
        }

        [FunctionName("RandomString")]
        public async Task<IActionResult> RandomString(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req)
        {
            try
            {
                _log.LogInformation("C# HTTP trigger RandomStringAuthLevelAnonymous processed a request.");
                
                ClaimsPrincipal principal; // This can be used for any claims
                if ((principal = await _azureADJwtBearerValidation.ValidateTokenAsync(req.Headers["Authorization"])) == null)
                {
                    return new UnauthorizedResult();
                }

                return new OkObjectResult($"Bearer token claim preferred_username: {_azureADJwtBearerValidation.GetPreferredUserName()}  {GetEncodedRandomString()}");
            }
            catch (Exception ex)
            {
                return new OkObjectResult($"{ex.Message}");
            }
        }
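
The GetEncodedRandomString helper used in the function is not listed in the snippets; a minimal sketch of such a helper (the class name and length parameter are illustrative) could look like this:

using System.Security.Cryptography;
using Microsoft.AspNetCore.WebUtilities;

public static class RandomStringHelper
{
    // Creates a cryptographically random value, base64url encoded
    public static string GetEncodedRandomString(int length = 32)
    {
        var bytes = new byte[length];
        using var randomNumberGenerator = RandomNumberGenerator.Create();
        randomNumberGenerator.GetBytes(bytes);
        return WebEncoders.Base64UrlEncode(bytes);
    }
}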

Azure App Registrations

Azure App registrations are used to set up the Azure AD configuration, as described in this blog:

Login and use an ASP.NET Core API with Azure AD Auth and user access tokens

Microsoft.Identity.Web also provides great examples and docs on how to configure or create the App registration as required for your use case.

Setup Web App

The ASP.NET Core application uses Azure AD to sign in and calls the Azure Function with the access token to get the data from the function. The web application uses AddMicrosoftIdentityWebAppAuthentication for authentication and will get an access token for the API. EnableTokenAcquisitionToCallDownstreamApi is used to set up the API token acquisition with the initial scopes.

public void ConfigureServices(IServiceCollection services)
{
	services.AddHttpClient();

	services.AddOptions();

	string[] initialScopes = Configuration.GetValue<string>
		("CallApi:ScopeForAccessToken")?.Split(' ');

	services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
		.EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
		.AddInMemoryTokenCaches();

	services.AddRazorPages().AddMvcOptions(options =>
	{
		var policy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
			.Build();
		options.Filters.Add(new AuthorizeFilter(policy));
	}).AddMicrosoftIdentityUI();
}

public void Configure(IApplicationBuilder app)
{
	app.UseHttpsRedirection();
	app.UseStaticFiles();

	app.UseRouting();

	app.UseAuthentication();
	app.UseAuthorization();

	app.UseEndpoints(endpoints =>
	{
		endpoints.MapRazorPages();
	});
}

The OnGetAsync method of a Razor page calls the Azure Function API using the access token from Azure AD.

private readonly ILogger<IndexModel> _logger;
private readonly IHttpClientFactory _clientFactory;
private readonly IConfiguration _configuration;
private readonly ITokenAcquisition _tokenAcquisition;

[BindProperty]
public string RandomString {get;set;}

public IndexModel(IHttpClientFactory clientFactory, 
	ITokenAcquisition tokenAcquisition, 
	IConfiguration configuration, 
	ILogger<IndexModel> logger)
{
	_logger = logger;
	_clientFactory = clientFactory;
	_configuration = configuration;
	_tokenAcquisition = tokenAcquisition;
}

public async Task OnGetAsync()
{
	var client = _clientFactory.CreateClient();

	var scope = _configuration["CallApi:ScopeForAccessToken"];
	var accessToken = await _tokenAcquisition
		.GetAccessTokenForUserAsync(new[] { scope });

	client.DefaultRequestHeaders.Authorization = 
		new AuthenticationHeaderValue("Bearer", accessToken);
	client.DefaultRequestHeaders.Accept.Add(
		new MediaTypeWithQualityHeaderValue("application/json"));

	RandomString = await client.GetStringAsync(
		_configuration["CallApi:FunctionsApiUrl"]);
}

When the applications are started, the Razor page web app can be used to log in and, after a successful login, it gets the preferred_username claim from the Azure Function if the access token is authorized to access the Azure Functions API.

Notes

This Azure Functions solution would be the way to access functions from an SPA application. If using server rendered applications, you have other possibilities to set up the authorization.

Azure Functions does not provide any out-of-the-box solution for JWT Bearer token authorization or introspection with reference tokens, which is not optimal. If implementing only APIs, ASP.NET Core Web API projects would be a better solution, where standard authorization flows, standard libraries and better tooling are available by default.

Microsoft.Identity.Web is great for authentication when used exclusively with Azure AD and no other authentication systems. The in-memory token cache is a problem when using this together with web apps and APIs.

Links

https://cmatskas.com/create-an-azure-ad-protected-api-that-calls-into-cosmosdb-with-azure-functions-and-net-core-3-1/

https://anthonychu.ca/post/azure-functions-app-service-openid-connect-auth0/

https://docs.microsoft.com/en-us/azure/app-service/configure-authentication-provider-openid-connect

https://github.com/Azure/azure-functions-vs-build-sdk/issues/397

https://blog.wille-zone.de/post/secure-azure-functions-with-jwt-token/#secure-azure-functions-with-jwt-access-tokens

https://github.com/AzureAD/microsoft-identity-web

https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2

https://jwt.io/

Implement Azure AD Client credentials flow using Client Certificates for service APIs


This post shows how to implement the Azure AD client credentials flow to access an API for a service-to-service connection. No user is involved in this flow. A client certificate (Private Key JWT authentication) is used to get the access token, and the token is then used to access the API, which validates it. Azure Key Vault is used to create and provide the client certificate.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate

Create a client certificate in Azure Key Vault

A self-signed certificate with a key size of at least 2048 and key type RSA is used to validate the client requesting the access token. In your Azure Key Vault, create a new certificate.

Download the .cer file which contains the public key. This will be uploaded to the Azure App Registration.

Setup the Azure App Registration for the Service API

A new Azure App Registration can be created for the Service API. This API will use a client certificate to request access tokens. The public key of the certificate needs to be added to the registration. In the Certificates & Secrets, upload the .cer file which was downloaded from the Key Vault.

No user is involved in the client credentials flow. In Azure, scopes cannot be used because consent is required to use scopes (Azure specific). Two roles are added to the access token for the application access and these roles can then be validated in the API. Open the Manifest and update the "appRoles" to include the required roles, as sketched below. The allowedMemberTypes should be Application.
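
As a sketch, an appRoles entry in the Manifest could look like the following; the id GUID, display name and description are placeholders, and the value service-api matches the role validated later in the API.

"appRoles": [
	{
		"allowedMemberTypes": [
			"Application"
		],
		"description": "Access the service API as an application",
		"displayName": "service-api",
		"id": "00000000-0000-0000-0000-000000000001",
		"isEnabled": true,
		"value": "service-api"
	}
],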

Every time an access token is requested for the API, the roles will be added to the token. The "clientId/.default" scope is used to request the access token, i.e. no consent is used and all claims are added. The required claims can be added using the API permissions.

In API permissions / Add a permission / My APIs, select application permissions, then the API Azure App Registration, and add the roles which were created in the Manifest.

The Azure App Registration and the Key Vault are now ready so that client certificates can be used to request an access token which can be used to get data from the API.

Using the Azure Key Vault certificate

Microsoft.Identity.Web is used to implement the code along with Azure SDK to access the Key Vault.

Managed identities are used to access the Key Vault from the application. The Key Vault needs to be configured for the identities in the access policies. When running from the local dev environment in Visual Studio, the logged in user needs to have certificate access to the Key Vault. The deployed Azure App Service would also need this (if deploying to Azure App Services).

// Use Key Vault to get certificate
var azureServiceTokenProvider = new AzureServiceTokenProvider();
// using managed identities
var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

// Get the certificate from Key Vault
var identifier = _configuration["CallApi:ClientCertificates:0:KeyVaultCertificateName"];
var cert = await GetCertificateAsync(identifier, kv);

An X509Certificate2 can then be created from the Azure SDK CertificateVersionBundle returned from the GetCertificateAsync method.

private async Task<X509Certificate2> GetCertificateAsync(string identifier, KeyVaultClient keyVaultClient)
{
	var vaultBaseUrl = _configuration["CallApi:ClientCertificates:0:KeyVaultUrl"];

	var certificateVersionBundle = await keyVaultClient.GetCertificateAsync(vaultBaseUrl, identifier);
	var certificatePrivateKeySecretBundle = await keyVaultClient.GetSecretAsync(certificateVersionBundle.SecretIdentifier.Identifier);
	var privateKeyBytes = Convert.FromBase64String(certificatePrivateKeySecretBundle.Value);
	var certificateWithPrivateKey = new X509Certificate2(privateKeyBytes, (string)null, X509KeyStorageFlags.MachineKeySet);
	return certificateWithPrivateKey;
}

Implement the API client using IConfidentialClientApplication and certificates

The IConfidentialClientApplication interface is used to set up the Azure client credentials flow. This is part of the Microsoft.Identity.Client namespace. The certificate from Key Vault is used to create the access token request. The …/.default scope must be used for this flow in Azure. The AcquireTokenForClient method is then used to send the request for the access token.

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#second-case-access-token-request-with-a-certificate

var scope = _configuration["CallApi:ScopeForAccessToken"];
var authority = $"{_configuration["CallApi:Instance"]}{_configuration["CallApi:TenantId"]}";

// client credentials flows, get access token
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
         .Create(_configuration["CallApi:ClientId"])
         .WithAuthority(new Uri(authority))
         .WithCertificate(cert)
         .WithLogging(MyLoggingMethod, Microsoft.Identity.Client.LogLevel.Verbose,
             enablePiiLogging: true, enableDefaultPlatformLogging: true)
         .Build();

var accessToken = await app.AcquireTokenForClient(new[] { scope }).ExecuteAsync();

The access token returned from the AcquireTokenForClient method can then be used to access the API. This is added as a HTTP header.

client.BaseAddress = new Uri(_configuration["CallApi:ApiBaseAddress"]);
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken.AccessToken);
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

// use access token and get payload
var response = await client.GetAsync("weatherforecast");
if (response.IsSuccessStatusCode)
{
	var responseContent = await response.Content.ReadAsStringAsync();
	var data = JArray.Parse(responseContent);

	return data;
}

The app settings contain the configuration for the Service API and the Azure App registration specifics. The ScopeForAccessToken uses the api://{clientId}/.default format, as this is required for the Azure client credentials flow. The ClientCertificates section contains the Key Vault settings as defined in the Microsoft.Identity.Web docs.

"CallApi": {
	"ScopeForAccessToken": "api://b178f3a5-7588-492a-924f-72d7887b7e48/.default",
	"ApiBaseAddress": "https://localhost:44390",
	"Instance": "https://login.microsoftonline.com/",
	"Domain": "damienbodhotmail.onmicrosoft.com",
	"TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
	"ClientId": "b178f3a5-7588-492a-924f-72d7887b7e48",
	"ClientCertificates": [
	  {
		"SourceType": "KeyVault",
		"KeyVaultUrl": "https://damienbod.vault.azure.net",
		"KeyVaultCertificateName": "ServiceApiCert"
	  }
	]
},

Logging the client calls

A delegate method can be used to add your own specific logging of the IConfidentialClientApplication implementation. MyLoggingMethod implements this as shown in the docs.

void MyLoggingMethod(Microsoft.Identity.Client.LogLevel level, string message, bool containsPii)
{
	_logger.LogInformation($"MSAL {level} {containsPii} {message}");
}

This can then be used by calling the WithLogging method. In production deployments, the demo logging configuration (verbose logging with PII enabled) should be changed.

// client credentials flows, get access token
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
         .Create(_configuration["CallApi:ClientId"])
         .WithAuthority(new Uri(authority))
         .WithCertificate(cert)
         .WithLogging(MyLoggingMethod, Microsoft.Identity.Client.LogLevel.Verbose,
             enablePiiLogging: true, enableDefaultPlatformLogging: true)
         .Build();

Securing the API

The API now needs to enforce the security and validate the access token. This API can only be used by services, and client certificate authentication is required for the calling client. The AddMicrosoftIdentityWebApiAuthentication extension method adds the Microsoft.Identity.Web configuration. The azpacr claim and the azp claim are validated in the AddAuthorization method; the azpacr value must be "2", meaning a client certificate was used for authentication. The required roles are also validated using an authorization policy.

public void ConfigureServices(IServiceCollection services)
{
	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	IdentityModelEventSource.ShowPII = true;

	services.AddSingleton<IAuthorizationHandler, HasServiceApiRoleHandler>();

	services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

	services.AddAuthorization(options =>
	{
		options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
		{
			validateAccessTokenPolicy.Requirements.Add(new HasServiceApiRoleRequirement());
			
			// Validate ClientId from token
			validateAccessTokenPolicy.RequireClaim("azp", Configuration["AzureAd:ClientId"]);

			// only allow tokens which used "Private key JWT Client authentication"
			// // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
			// Indicates how the client was authenticated. For a public client, the value is "0". 
			// If client ID and client secret are used, the value is "1". 
			// If a client certificate was used for authentication, the value is "2".
			validateAccessTokenPolicy.RequireClaim("azpacr", "2");
		});
	});

	services.AddControllers();
}

The configuration for the API contains the Azure App Registration specifics as well as the certificate details to get the certificate from the Key Vault.

"AzureAd": {
	"Instance": "https://login.microsoftonline.com/",
	"Domain": "damienbodhotmail.onmicrosoft.com",
	"TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
	"ClientId": "b178f3a5-7588-492a-924f-72d7887b7e48",
	"ClientCertificates": [
	  {
		"SourceType": "KeyVault",
		"KeyVaultUrl": "https://damienbod.vault.azure.net",
		"KeyVaultCertificateName": "ServiceApiCert"
	  }
	]
},

In the Controller for the API, the ValidateAccessTokenPolicy is applied.

[Authorize(Policy = "ValidateAccessTokenPolicy")]
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase

The HasServiceApiRoleHandler implements the HasServiceApiRoleRequirement requirement. This checks if the required role is present.

public class HasServiceApiRoleHandler : AuthorizationHandler<HasServiceApiRoleRequirement>
{
	protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, 
          HasServiceApiRoleRequirement requirement)
	{
		if (context == null)
			throw new ArgumentNullException(nameof(context));
		if (requirement == null)
			throw new ArgumentNullException(nameof(requirement));

		var roleClaims = context.User.Claims.Where(t => t.Type == "roles");

		if (roleClaims != null && HasServiceApiRole(roleClaims))
		{
			context.Succeed(requirement);
		}

		return Task.CompletedTask;
	}

	private bool HasServiceApiRole(IEnumerable<Claim> roleClaims)
	{
		foreach(var role in roleClaims)
		{
			if("service-api" == role.Value)
			{
				return true;
			}
		}

		return false;
	}
}
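
The HasServiceApiRoleRequirement itself is not shown in the post; to work with the policy and handler above it only needs to implement IAuthorizationRequirement, presumably as a simple marker class like this:

using Microsoft.AspNetCore.Authorization;

public class HasServiceApiRoleRequirement : IAuthorizationRequirement
{
	// Marker requirement; the actual role check is implemented in HasServiceApiRoleHandler
}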

Using a client certificate to identify an application client calling an API can be very useful. If you do not implement both the client and the API of a confidential client, using certificates instead of secrets is an advantage, as you do not have to share a secret: the client can provide a public key, and the server can validate it. If you control both the client and the API, then both applications could use the same secret from the same Key Vault. Private Key JWT authentication for other flow types and other API types, such as access_as_user or OBO flows, is also supported using Microsoft.Identity.Web.

Links

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#second-case-access-token-request-with-a-certificate

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-credential-flows

https://tools.ietf.org/html/rfc7523

https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication

https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Assertions

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow

https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates#describing-client-certificates-to-use-by-configuration

https://www.scottbrady91.com/OAuth/Removing-Shared-Secrets-for-OAuth-Client-Authentication

https://github.com/KevinDockx/ApiSecurityInDepth

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki

Implement a full text search using Azure Cognitive Search in ASP.NET Core


This article shows how to implement a full text search in ASP.NET Core using Azure Cognitive Search. The search results are returned using paging, and the search index can be created and deleted from an ASP.NET Core Razor Page UI.

Code: https://github.com/damienbod/AspNetCoreAzureSearch

Creating the Search in the Azure Portal

In the Azure Portal, search for Azure Cognitive Search and create a new search service. Create the search using the portal wizard and choose the correct pricing model as required. The free version supports three indexes but does not support managed identities. This is good for exploring and evaluating the service.

If using the free version, you will need to use API keys to access the search service. This can be found in the Keys blade of the created cognitive search.

Of course, the Azure Cognitive Search service could also be created using the Azure CLI, ARM templates or PowerShell. The service can also be created directly from code.

Create an Azure Cognitive Search index

In the ASP.NET Core Razor page application, the Azure.Search.Documents NuGet package is used to create and query the Azure Cognitive Search service. Add this package to your project.

The index and the document field definitions can be created in different ways. We will use attributes and add these to the document search class properties to define the fields of the documents.

public class PersonCity
{
   [SimpleField(IsFilterable = true, IsKey = true)]
   public string Id { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true)]
   public string Name { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true)]
   public string FamilyName { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true)]
   public string Info { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
   public string CityCountry { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
   public string Metadata { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true)]
   public string Web { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true)]
   public string Github { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true)]
   public string Twitter { get; set; }

   [SearchableField(IsFilterable = true, IsSortable = true)]
   public string Mvp { get; set; }
}

A SearchProvider class was created to manage and search the indexes and the documents. The SearchProvider was added to the DI in the startup class. The configuration secrets for the Azure search service were added to the user secrets of the project. A SearchIndexClient and a SearchClient instance are created using the configurations for your service. The SearchIndexClient can then be used to create a new index using the CreateIndexAsync method and the FieldBuilder, which uses the attribute definitions.

public class SearchProvider
{
	private readonly SearchIndexClient _searchIndexClient;
	private readonly SearchClient _searchClient;
	private readonly IConfiguration _configuration;
	private readonly IHttpClientFactory _httpClientFactory;
	private readonly string _index;

	public SearchProvider(IConfiguration configuration, IHttpClientFactory httpClientFactory)
	{
            _configuration = configuration;
            _httpClientFactory = httpClientFactory;
            _index = configuration["PersonCitiesIndexName"];

            Uri serviceEndpoint = new Uri(configuration["PersonCitiesSearchUri"]);
            AzureKeyCredential credential = new AzureKeyCredential(configuration["PersonCitiesSearchApiKey"]);

            _searchIndexClient = new SearchIndexClient(serviceEndpoint, credential);
            _searchClient = new SearchClient(serviceEndpoint, _index, credential);
            
	}

	public async Task CreateIndex()
	{
            FieldBuilder builder = new FieldBuilder();
            var definition = new SearchIndex(_index, 
               builder.Build(typeof(PersonCity)));

            await _searchIndexClient.CreateIndexAsync(definition)
             .ConfigureAwait(false);
	}

Once created, VS Code with the Azure Cognitive Search extension can be used to view the index with the created fields. This can also be viewed and managed in the Azure portal.

https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch

Adding search documents

Now that the index exists, it needs some documents so that we can search. Azure Cognitive Search provides many powerful ways of importing data into the search indexes; this is one of its strengths. In this demo, we add some data from a data helper class and upload the documents in a batch. This could be the way to add data if using the search as a secondary search engine for your solution.

public async Task AddDocumentsToIndex(List<PersonCity> personCities)
{
	var batch = IndexDocumentsBatch.Upload(personCities);
	await _searchClient.IndexDocumentsAsync(batch)
	  .ConfigureAwait(false);
}

The ASP.NET Razor Search Admin page provides a post method OnPostAddDataAsync to add the index documents.

public async Task<ActionResult> OnPostAddDataAsync()
{
	try
	{
		PersonCityData.CreateTestData();
		await _searchProvider.AddDocumentsToIndex(PersonCityData.Data);
		Messages = new[] {
			new AlertViewModel("success", "Documented added", 
"The Azure Search documents were uploaded! The Document Count takes n seconds to update!"),
		};
		var indexStatus = await _searchProvider.GetIndexStatus();
		IndexExists = indexStatus.Exists;
		DocumentCount = indexStatus.DocumentCount;
		return Page();
	}
	catch (Exception ex)
	{
		Messages = new[] {
			new AlertViewModel("danger", "Error adding documents", ex.Message),
		};
		return Page();
	}
}

The view uses a Bootstrap 4 card to display this and documents can be added to the index.

<div class="card">
	<div class="card-body">
		<h5 class="card-title">Add Documents to index: @Model.IndexName</h5>
		<p class="card-text">Add documents to the Azure Cognitive search index: @Model.IndexName.</p>
	</div>
	<div class="card-footer text-center">
		<form asp-page="/SearchAdmin" asp-page-handler="AddData">
			<button type="submit" class="btn btn-primary col-sm-6">
				Add
			</button>
		</form>
	</div>
</div>

Checking the status of the index

In the ASP.NET Core search administration Razor Page view, we would like to be able to see if the index exists and how many documents exist in the index. The easiest way to do this is to use the REST API of the Azure search service. The HttpClient is used and either the document count is returned or a 404.

public async Task<(bool Exists,long DocumentCount)> GetIndexStatus()
{
	try
	{
		var httpClient = _httpClientFactory.CreateClient();
		httpClient.DefaultRequestHeaders.CacheControl = new CacheControlHeaderValue
		{
			NoCache = true,
		};
		httpClient.DefaultRequestHeaders.Add("api-key", _configuration["PersonCitiesSearchApiKey"]);

		var uri = $"{_configuration["PersonCitiesSearchUri"]}/indexes/{_index}/docs/$count?api-version=2020-06-30";
		var data = await httpClient.GetAsync(uri);
		if (data.StatusCode == System.Net.HttpStatusCode.NotFound)
		{
			return (false, 0);
		}
		var payload = await data.Content.ReadAsStringAsync();
		return (true, int.Parse(payload));
	}
	catch
	{
		return (false, 0);
	}
}

When the application is started, the search admin page displays the number of documents, and can create or delete the index and add documents to the index.
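
The delete operation itself is not shown above; a minimal sketch of such a method on the SearchProvider, using the SearchIndexClient, could look like this:

public async Task DeleteIndex()
{
	// Deletes the index and all of its documents
	await _searchIndexClient.DeleteIndexAsync(_index)
	  .ConfigureAwait(false);
}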

Implementing a search with Paging

The search is implemented using the QueryPagingFull method which uses the SearchAsync method. The QueryType is set to SearchQueryType.Full in the options so that we can use a fuzzy search. The page size and the range for the paging are defined at the top. SearchAsync returns a SearchResults object which contains the results. This can then be used as required.

public async Task QueryPagingFull(SearchData model, int page, int leftMostPage)
{
	var pageSize = 4;
	var maxPageRange = 7;
	var pageRangeDelta = maxPageRange - pageSize;

	var options = new SearchOptions
	{
		Skip = page * pageSize,
		Size = pageSize,
		IncludeTotalCount = true, 
		QueryType= SearchQueryType.Full
	};

	model.PersonCities = await _searchClient.SearchAsync<PersonCity>(
	      model.SearchText, options).ConfigureAwait(false);
	model.PageCount = ((int)model.PersonCities.TotalCount + pageSize - 1) / pageSize;
	model.CurrentPage = page;
	if (page == 0)
	{
		leftMostPage = 0;
	}
	else if (page <= leftMostPage)
	{
		leftMostPage = Math.Max(page - pageRangeDelta, 0);
	}
	else if (page >= leftMostPage + maxPageRange - 1)
	{
		leftMostPage = Math.Min(page - pageRangeDelta, model.PageCount - maxPageRange);
	}
	model.LeftMostPage = leftMostPage;
	model.PageRange = Math.Min(model.PageCount - leftMostPage, maxPageRange);
}
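
The SearchData view model is not listed in the post; based on how it is used in the QueryPagingFull method and the Razor pages, it presumably looks something like this:

using Azure.Search.Documents.Models;

public class SearchData
{
	public string SearchText { get; set; }
	public string Paging { get; set; }
	public int CurrentPage { get; set; }
	public int PageCount { get; set; }
	public int LeftMostPage { get; set; }
	public int PageRange { get; set; }
	public SearchResults<PersonCity> PersonCities { get; set; }
}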

The Razor Page uses the SearchProvider and sets up the models so the view can display the data and call the search APIs.

using Azure.Search.Documents.Models;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Logging;
using System.Threading.Tasks;

namespace AspNetCoreAzureSearch.Pages
{
    public class SearchModel : PageModel
    {
        private readonly SearchProvider _searchProvider;
        private readonly ILogger<IndexModel> _logger;

        public string SearchText { get; set; }
        public int CurrentPage { get; set; }
        public int PageCount { get; set; }
        public int LeftMostPage { get; set; }
        public int PageRange { get; set; }
        public string Paging { get; set; }
        public int PageNo { get; set; }
        public SearchResults<PersonCity> PersonCities;

        public SearchModel(SearchProvider searchProvider,
            ILogger<IndexModel> logger)
        {
            _searchProvider = searchProvider;
            _logger = logger;
        }

        public void OnGet()
        {
        }

        public async Task<ActionResult> OnGetInitAsync(string searchText)
        {
            SearchData model = new SearchData
            {
                SearchText = searchText
            };

            await _searchProvider.QueryPagingFull(model, 0, 0).ConfigureAwait(false);

            SearchText = model.SearchText;
            CurrentPage = model.CurrentPage;
            PageCount = model.PageCount;
            LeftMostPage = model.LeftMostPage;
            PageRange = model.PageRange;
            Paging = model.Paging;
            PersonCities = model.PersonCities;

            return Page();
        }

        public async Task<ActionResult> OnGetPagingAsync(SearchData model)
        {
            int page;

            switch (model.Paging)
            {
                case "prev":
                    page = PageNo - 1;
                    break;

                case "next":
                    page = PageNo + 1;
                    break;

                default:
                    page = int.Parse(model.Paging);
                    break;
            }

            int leftMostPage = LeftMostPage;

            await _searchProvider.QueryPagingFull(model, page, leftMostPage).ConfigureAwait(false);

            PageNo = page;
            SearchText = model.SearchText;
            CurrentPage = model.CurrentPage;
            PageCount = model.PageCount;
            LeftMostPage = model.LeftMostPage;
            PageRange = model.PageRange;
            Paging = model.Paging;
            PersonCities = model.PersonCities;

            return Page();
        }

    }
}

The view uses Bootstrap 4 and displays the results. All requests are sent using HTTP GET, so they can be cached and navigated using the browser back button. The searchText is added to the query string together with the handler required for the Razor Page.

@page "{handler?}"
@model SearchModel
@{
    ViewData["Title"] = "Search with Paging";
}

<form asp-page="/Search" asp-page-handler="Init" method="get">
    <div class="searchBoxForm">
        @Html.TextBoxFor(m => m.SearchText, new { @class = "searchBox" }) 
        <input class="searchBoxSubmit" type="submit" value="">
    </div>
</form>

@if (Model.PersonCities != null)
{
    <p class="sampleText">
        Found @Model.PersonCities.TotalCount Documents
    </p>

    var results = Model.PersonCities.GetResults().ToList();

    @for (var i = 0; i < results.Count; i++)
    {
        <div>
            <b><span><a href="@results[i].Document.Web">@results[i].Document.Name @results[i].Document.FamilyName</a>: @results[i].Document.CityCountry</span></b>
            @if (!string.IsNullOrEmpty(results[i].Document.Twitter))
            {
                <a href="@results[i].Document.Twitter"><img src="/images/socialTwitter.png" /></a>
            }
            @if (!string.IsNullOrEmpty(results[i].Document.Github))
            {
                <a href="@results[i].Document.Github"><img src="/images/github.png" /></a>
            }
            @if (!string.IsNullOrEmpty(results[i].Document.Mvp))
            {
                <a href="@results[i].Document.Mvp"><img src="/images/mvp.png" width="24" /></a>
            }
            <br />
            <em><span>@results[i].Document.Metadata</span></em><br />
            @Html.TextArea($"desc{1}", results[i].Document.Info, new { @class = "infotext" })
            <br />
        </div>
    }
}

@if (Model != null && Model.PageCount > 1)
{
    <table>
        <tr>
            <td>
                @if (Model.CurrentPage > 0)
                {
                    <p class="pageButton">
                        <a href="/Search?handler=Paging&paging=0&SearchText=@Model.SearchText">|<</a>
                    </p>
                }
                else
                {
                    <p class="pageButtonDisabled">|&lt;</p>
                }
            </td>

            <td>
                @if (Model.CurrentPage > 0)
                {
                    <p class="pageButton">
                        <a href="/Search?handler=Paging&paging=prev&SearchText=@Model.SearchText"><</a>
                    </p>
                }
                else
                {
                    <p class="pageButtonDisabled">&lt;</p>
                }
            </td>

            @for (var pn = Model.LeftMostPage; pn < Model.LeftMostPage + Model.PageRange; pn++)
            {
                <td>
                    @if (Model.CurrentPage == pn)
                    {
                        <p class="pageSelected">@(pn + 1)</p>
                    }
                    else
                    {
                        <p class="pageButton">
                            @{var p1 = Model.PageCount - 1;}
                            <a href="/Search?handler=Paging&paging=@pn&SearchText=@Model.SearchText">@(pn + 1)</a>
                        </p>
                    }
                </td>

            }

            <td>
                @if (Model.CurrentPage < Model.PageCount - 1)
                {
                    <p class="pageButton">
                        @{var p1 = Model.PageCount - 1;}
                        <a href="/Search?handler=Paging&paging=next&SearchText=@Model.SearchText">></a>
                    </p>
                }
                else
                {
                    <p class="pageButtonDisabled">&gt;</p>
                }
            </td>

            <td>
                @if (Model.CurrentPage < Model.PageCount - 1)
                {
                    <p class="pageButton">
                        @{var p7 = Model.PageCount - 1;}
                        <a href="/Search?handler=Paging&paging=@p7&SearchText=@Model.SearchText">>|</a>
                    </p>
                }
                else
                {
                    <p class="pageButtonDisabled">&gt;|</p>
                }
            </td>
        </tr>
    </table>
}

Searching, Fuzzy Search

The search can be used by entering a search text and clicking the search icon. The results and the paging are returned as defined. Ten results were found for “Switzerland” using a full word match.

If you spell the required word incorrectly or leave out a letter, no results will be returned.

This can be improved by using a fuzzy search. A “~” can be appended to the search term to use a fuzzy search in Azure Cognitive Search. The ten results will then be found again. Azure Cognitive Search supports different types of search queries and search filters, and the indexes can be created to support different search types.
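
A minimal sketch of how the fuzzy flag could be applied before calling the provider (the MakeFuzzy helper is hypothetical and not part of the demo): each term in the search text gets a trailing “~”, which the full Lucene query parser interprets as a fuzzy match.

using System;
using System.Linq;

public static class SearchTextHelper
{
    // Hypothetical helper: append "~" to every term so that SearchQueryType.Full
    // treats the terms as fuzzy, e.g. "Switserland" becomes "Switserland~".
    public static string MakeFuzzy(string searchText)
    {
        if (string.IsNullOrWhiteSpace(searchText))
        {
            return searchText;
        }

        var terms = searchText.Split(' ', StringSplitOptions.RemoveEmptyEntries);
        return string.Join(" ", terms.Select(t => t.EndsWith("~") ? t : $"{t}~"));
    }
}

// Usage before the search request, for example in the Razor Page handler:
// model.SearchText = SearchTextHelper.MakeFuzzy(model.SearchText);
// await _searchProvider.QueryPagingFull(model, 0, 0).ConfigureAwait(false);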

Notes

The demo here was built using the Azure search samples found here.

Links

https://docs.microsoft.com/en-us/azure/search

https://docs.microsoft.com/en-us/azure/search/search-what-is-azure-search

https://docs.microsoft.com/en-us/rest/api/searchservice/

https://github.com/Azure-Samples/azure-search-dotnet-samples/

https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Deep-Dive-with-Debug-Sessions

https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security

Using encrypted access tokens in Azure with Microsoft.Identity.Web and Azure App registrations

This post shows how to use encrypted access tokens with Azure AD App registrations using Microsoft.Identity.Web. By using encrypted access tokens, only applications with access to the private key can decrypt the tokens. This prevents the access token payload and its claims from being read with tools such as https://jwt.ms or https://jwt.io.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate

Posts in this series

Setup

Two applications were created to demonstrate the Azure AD token encryption. An ASP.NET Core application was created which implements an API secured using Microsoft.Identity.Web. The API uses an encrypted token. Secondly, a UI application was created which signs in to Azure AD and calls the API using the API access_as_user scope. The certificate used for the encryption and decryption was created in Azure Key Vault and the public key .cer file was downloaded. This public key is used in the Azure App registration for the token encryption.
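
As a rough sketch (an assumption, not part of the original setup), the public part of the Key Vault certificate could also be read programmatically with the Azure.Security.KeyVault.Certificates client and printed as base64, which is the format needed for the value property in the App registration manifest shown below. The vault URL and certificate name are the demo values from the app settings.

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Certificates;

public static class ExportPublicKey
{
    public static async Task Main()
    {
        // Hypothetical snippet: read the certificate from Key Vault and output the
        // public part (.cer) as base64 for the keyCredentials "value" property.
        var client = new CertificateClient(
            new Uri("https://damienbod.vault.azure.net"),
            new DefaultAzureCredential());

        KeyVaultCertificateWithPolicy certificate =
            await client.GetCertificateAsync("DecryptionCertificateCert2");

        Console.WriteLine(Convert.ToBase64String(certificate.Cer));
    }
}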

Setting up the Azure App Registration

The Azure App registration for the Web API is set up to use token encryption. The certificate which was created in Azure Key Vault can be added to the keyCredentials array in the Azure App registration manifest file. The customKeyIdentifier is the thumbprint and the usage is set to Encrypt. The value property contains the base64 encoded .cer file, i.e. the public key, which was downloaded from your Key Vault.

"keyCredentials": [
	{
		"customKeyIdentifier": "E1454F331F3DBF52523AAF0913DB521849E05AD3",
		"endDate": "2021-10-20T12:19:52Z",
		"keyId": "53095330-1680-4a8d-bf0d-8d0d042fe88b",
		"startDate": "2020-10-20T12:09:52Z",
		"type": "AsymmetricX509Cert",
		"usage": "Encrypt",
		"value": "--your base 64 .cer , ie public key --",
		"displayName": "CN=myTokenEncyptionCert"
	},

],

The tokenEncryptionKeyId property in the Azure App Registration manifest is used to define the certificate which will be used for token encryption. This is set to the keyId of the certificate definition in the keyCredentials array.

"tokenEncryptionKeyId": "53095330-1680-4a8d-bf0d-8d0d042fe88b"

Note: If you upload the certificate to the Azure App registration using the portal, the usage will be set to Verify and the certificate cannot be used for token encryption.

Configuration of API application

The ASP.NET Core application uses AddMicrosoftIdentityWebApiAuthentication with the default AzureAD configuration. This authorizes the API requests.

public void ConfigureServices(IServiceCollection services)
{
            JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
            IdentityModelEventSource.ShowPII = true;

            services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

            services.AddControllers(options =>
            {
                var policy = new AuthorizationPolicyBuilder()
                    .RequireAuthenticatedUser()
                   // .RequireClaim("email") // disabled this to test with users that have no email (no license added)
                    .Build();
                options.Filters.Add(new AuthorizeFilter(policy));
            });
}

The app settings define the TokenDecryptionCertificates so that the Key Vault certificate is used for the token decryption. This is the same certificate which was used in the Azure App registration; the public key from the certificate was used in the manifest definition.

  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "damienbodhotmail.onmicrosoft.com",
    "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
    "ClientId": "fdc48df2-2b54-411b-a684-7d9868ce1a95",
    "TokenDecryptionCertificates": [
      {
        "SourceType": "KeyVault",
        "KeyVaultUrl": "https://damienbod.vault.azure.net",
        "KeyVaultCertificateName": "DecryptionCertificateCert2"
      }
    ]
  },

Configuration of UI application which calls API

The UI application, which signs in and gives consent, does not require the TokenDecryptionCertificates to use the API. It just uses ClientCertificates to authenticate itself. This is not the same certificate and has a Verify usage in the Azure AD App registration manifest.

"AzureAd": {
	"Instance": "https://login.microsoftonline.com/",
	"Domain": "damienbodhotmail.onmicrosoft.com",
	"TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
	"ClientId": "8e2b45c2-cad0-43c3-8af2-b32b73de30e4",
	"CallbackPath": "/signin-oidc",
	"SignedOutCallbackPath ": "/signout-callback-oidc",
	"ClientCertificates": [
	  {
		"SourceType": "KeyVault",
		"KeyVaultUrl": "https://damienbod.vault.azure.net",
		"KeyVaultCertificateName": "DcPortalCert"
	  }
	]
},
"CallApi": {
	"ScopeForAccessToken": "api://fdc48df2-2b54-411b-a684-7d9868ce1a95/access_as_user",
	"ApiBaseAddress": "https://localhost:44390"
},

If you start the applications without the App registration token encryption, you can debug the application and view the token claims, as the default setup produces a plain JWT access token.

When using token encryption, the payload can no longer be viewed.
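
To see the difference while debugging, a small sketch like the following (a hypothetical controller, not part of the demo) could inspect the raw bearer token: an encrypted token uses the JWE compact serialization with five dot-separated segments, whereas a plain signed JWT (JWS) has three.

using Microsoft.AspNetCore.Mvc;

namespace WebApiWithEncryptedTokens.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class TokenFormatController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            // Hypothetical check: count the dot-separated segments of the bearer token.
            var authHeader = Request.Headers["Authorization"].ToString();
            var token = authHeader.StartsWith("Bearer ")
                ? authHeader.Substring("Bearer ".Length)
                : authHeader;

            var segments = token.Split('.').Length;
            return Ok(segments == 5 ? "JWE (encrypted token)" : "JWS (signed token)");
        }
    }
}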

Links

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/howto-saml-token-encryption

Authentication and the Azure SDK

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#second-case-access-token-request-with-a-certificate

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-credential-flows

https://tools.ietf.org/html/rfc7523

https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication

https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Assertions

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow

https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates#describing-client-certificates-to-use-by-configuration

API Security with OAuth2 and OpenID Connect in Depth with Kevin Dockx, August 2020

https://www.scottbrady91.com/OAuth/Removing-Shared-Secrets-for-OAuth-Client-Authentication

https://github.com/KevinDockx/ApiSecurityInDepth

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki

Using Azure Cognitive Search Suggesters in ASP.NET Core and Autocomplete

This post shows how to implement an autocomplete in an ASP.NET Core Razor Page using Azure Cognitive Search Suggesters.

Code: https://github.com/damienbod/AspNetCoreAzureSearch

Posts in this series

Create the index with the Suggester

To use suggesters in Azure Cognitive Search, the index requires a suggester definition. This can be implemented using the Azure.Search.Documents Nuget package. A new SearchIndex can be created and a new suggester added to the Suggesters list. The suggester requires a name, so that it can be referenced in the search, and the fields of the search index which are to be used by the suggester. The suggester fields must exist in the index.

public async Task CreateIndex()
{
	FieldBuilder builder = new FieldBuilder();
	var definition = new SearchIndex(
		_index, builder.Build(typeof(PersonCity)));
		
	definition.Suggesters.Add(
		new SearchSuggester(
			"personSg", new string[] 
			{ "Name", "FamilyName", "Info", "CityCountry" }
	));

	await _searchIndexClient.CreateIndexAsync(definition)
		.ConfigureAwait(false);
}

Searching the index using the Suggester

The SearchClient is used to send the search requests to Azure Cognitive Search. The Suggest method prepares the SuggestOptions and uses the Azure.Search.Documents SuggestAsync method of the SearchClient instance to query the index.

using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;
using Microsoft.Extensions.Configuration;
using System;
using System.Threading.Tasks;

namespace AspNetCoreAzureSearch
{
    public class SearchProviderAutoComplete
    {
        private readonly SearchClient _searchClient;
        private readonly string _index;

        public SearchProviderAutoComplete(IConfiguration configuration)
        {
            _index = configuration["PersonCitiesIndexName"];

            Uri serviceEndpoint = new Uri(configuration["PersonCitiesSearchUri"]);
            AzureKeyCredential credential = 
               new AzureKeyCredential(configuration["PersonCitiesSearchApiKey"]);
            _searchClient = new SearchClient(serviceEndpoint, _index, credential);

        }

        public async Task<SuggestResults<PersonCity>> Suggest(
            bool highlights, bool fuzzy, string term)
        {
            SuggestOptions sp = new SuggestOptions()
            {
                UseFuzzyMatching = fuzzy, 
                Size = 5,
            };
            sp.Select.Add("Id");
            sp.Select.Add("Name");
            sp.Select.Add("FamilyName");
            sp.Select.Add("Info");
            sp.Select.Add("CityCountry");
            sp.Select.Add("Web");

            if (highlights)
            {
                sp.HighlightPreTag = "<b>";
                sp.HighlightPostTag = "</b>";
            }

            var suggestResults = await _searchClient.SuggestAsync<PersonCity>(term, "personSg", sp)
              .ConfigureAwait(false);
            return suggestResults.Value;
        }
    }
}

Autocomplete in ASP.NET Razor Page

The OnGetAutoCompleteSuggest method of the Razor Page sends a suggest request with no highlighting and a fuzzy search using the required term. If you were using the Azure Cognitive Search Autocomplete API together with the suggester, then the fuzzy option could perhaps be set to false.

using Azure.Search.Documents.Models;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace AspNetCoreAzureSearch.Pages
{
    public class SearchAutoCompleteModel : PageModel
    {
        private readonly SearchProviderAutoComplete _searchProviderAutoComplete;
        private readonly ILogger<IndexModel> _logger;

        public string SearchText { get; set; }

        public SuggestResults<PersonCity> PersonCities;

        public SearchAutoCompleteModel(SearchProviderAutoComplete searchProviderAutoComplete,
            ILogger<IndexModel> logger)
        {
            _searchProviderAutoComplete = searchProviderAutoComplete;
            _logger = logger;
        }

        public void OnGet()
        {
        }

        public async Task<ActionResult> OnGetAutoCompleteSuggest(string term)
        {
            PersonCities = await _searchProviderAutoComplete.Suggest(false, true, term);
            SearchText = term;

            return new JsonResult(PersonCities.Results);
        }
    }
}

The Razor Page view sends the Ajax request using the jQuery UI autocomplete and processes the result. The data is displayed in a Bootstrap 4 card.

@page "{handler?}"
@model SearchAutoCompleteModel
@{
    ViewData["Title"] = "Auto complete suggester";
}

<fieldset class="form">
    <legend>Search for a person in the search engine</legend>
    <table width="500">
        <tr>
            <th></th>
        </tr>
        <tr>
            <td>
                <input class="form-control" id="autocomplete" type="text" style="width:500px" />
            </td>
        </tr>
    </table>
</fieldset>

<br />

<div class="card">
    <h5 class="card-header">
        <span id="docName"></span>
        <span id="docFamilyName"></span>
    </h5>
    <div class="card-body">
        <p class="card-text"><span id="docInfo"></span></p>
        <p class="card-text"><span id="docCityCountry"></span></p>
        <p class="card-text"><span id="docWeb"></span></p>
    </div>
</div>

@section scripts
{
    <script type="text/javascript">
        var items;
        $(document).ready(function () {
            $("input#autocomplete").autocomplete({
                source: function (request, response) {
                    $.ajax({
                        url: "SearchAutoComplete/AutoCompleteSuggest",
                        dataType: "json",
                        data: {
                            term: request.term,
                        },
                        success: function (data) {
                            var itemArray = new Array();
                            for (i = 0; i < data.length; i++) {
                                itemArray[i] = {
                                    label: data[i].document.name + " " + data[i].document.familyName,
                                    value: data[i].document.name + " " + data[i].document.familyName,
                                    data: data[i]
                                }
                            }

                            console.log(itemArray);
                            response(itemArray);
                        },
                        error: function (data, type) {
                            console.log(type);
                        }
                    });
                },
                select: function (event, ui) {
                    $("#docNameId").text(ui.item.data.id);
                    $("#docName").text(ui.item.data.document.name);
                    $("#docFamilyName").text(ui.item.data.document.familyName);
                    $("#docInfo").text(ui.item.data.document.info);
                    $("#docCityCountry").text(ui.item.data.document.cityCountry);
                    $("#docWeb").text(ui.item.data.document.web);
                    console.log(ui.item);
                }
            });
        });
    </script>
}

The required packages are loaded using npm and bundled using the bundleconfig. The JavaScript autocomplete is part of the jquery-ui-dist npm package. Of course, if you implement a Vue.js, React or Angular UI, you would use a different autocomplete library.

{
  "version": "1.0.0",
  "name": "asp.net",
  "private": true,
  "devDependencies": {
    "bootstrap": "4.5.3",
    "jquery": "3.5.1",
    "jquery-ui-dist": "^1.12.1",
    "jquery-validation": "1.19.2",
    "jquery-validation-unobtrusive": "3.2.11",
    "jquery-ajax-unobtrusive": "3.2.6",
    "popper.js": "^1.16.1"
  },
  "dependencies": {
  }
}

When the application is started, the search runs as required.

This could also be implemented using the word autocomplete together with the search, returning the results in a table view or something similar.
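
A minimal sketch of how the word autocomplete could be requested with the same suggester (an assumption based on the Azure.Search.Documents AutocompleteAsync method; this class is not part of the demo):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

public class SearchProviderWordAutoComplete
{
    private readonly SearchClient _searchClient;

    public SearchProviderWordAutoComplete(SearchClient searchClient)
    {
        _searchClient = searchClient;
    }

    // Returns completed terms for the entered text using the "personSg" suggester.
    public async Task<List<string>> CompleteAsync(string term)
    {
        var options = new AutocompleteOptions
        {
            Mode = AutocompleteMode.OneTermWithContext,
            Size = 5
        };

        var response = await _searchClient
            .AutocompleteAsync(term, "personSg", options)
            .ConfigureAwait(false);

        return response.Value.Results.Select(r => r.Text).ToList();
    }
}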

Links

https://docs.microsoft.com/en-us/rest/api/searchservice/autocomplete

https://docs.microsoft.com/en-us/azure/search

https://docs.microsoft.com/en-us/azure/search/search-what-is-azure-search

https://docs.microsoft.com/en-us/rest/api/searchservice/

https://github.com/Azure-Samples/azure-search-dotnet-samples/

https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Deep-Dive-with-Debug-Sessions

https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security


Implement a Blazor full text search using Azure Cognitive Search

This article shows how to implement a full text search in Blazor using Azure Cognitive Search. The search results are returned using paging, and the search index can be created and deleted from a Blazor application.

Code: https://github.com/damienbod/AspNetCoreAzureSearch

Posts in this series

Creating the Blazor App

The Blazor application was created using Visual Studio. The application requires an API which is used to access the Azure Cognitive Search service. We do not want to access the Azure Cognitive Search service directly from the WASM application, because the free version requires an API key (the paid versions can also use an API key) and an API key cannot be stored safely in a SPA. The WASM app only uses its backend in the same domain, which can be secured as required. The trusted backend can then forward requests to other APIs, in this case to Azure Cognitive Search.

Creating an ASP.NET Core hosted Blazor application is slightly hidden. Once you select Blazor WASM as your UI in Visual Studio, you need to select the ASP.NET Core hosted checkbox in the second step. This can probably also be created using the dotnet new command (the blazorwasm template has a hosted option).

The template creates three projects: a Client, a Server and a Shared project. The Blazor application can be started from Visual Studio using the Server project; if you start the Client project, the API calls will not work. The Startup class is configured to use Blazor and the Azure Cognitive Search client providers. The Azure.Search.Documents package was added to the project file.

To set up the search services, please refer to this blog, or the official docs.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace BlazorAzureSearch.Server
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddScoped<SearchProviderIndex>();
            services.AddScoped<SearchProviderPaging>();
            services.AddScoped<SearchProviderAutoComplete>();

            services.AddHttpClient();

            services.AddControllersWithViews();
            services.AddRazorPages();
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseWebAssemblyDebugging();
            }
            else
            {
                app.UseExceptionHandler("/Error");
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseBlazorFrameworkFiles();
            app.UseStaticFiles();

            app.UseRouting();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapRazorPages();
                endpoints.MapControllers();
                endpoints.MapFallbackToFile("index.html");
            });
        }
    }
}

Implement the Server APIs

The Blazor Server project implements two APIs to support the Blazor WASM API calls: one API for the Azure Cognitive Search index management and one API for the search. The search paging API provides two methods which can request a search using paging. The API takes the returned Azure.Search.Documents results and maps them to a POCO for the WASM UI.

using BlazorAzureSearch.Shared;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Infrastructure;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace BlazorAzureSearch.Server.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SearchPagingController : ControllerBase
    {
        private readonly SearchProviderPaging _searchProvider;
        private readonly ILogger<SearchAdminController> _logger;

        public SearchPagingController(SearchProviderPaging searchProvider,
        ILogger<SearchAdminController> logger)
        {
            _searchProvider = searchProvider;
            _logger = logger;
        }

        [HttpGet]
        public async Task<SearchData> Get(string searchText)
        {
            SearchData model = new SearchData
            {
                SearchText = searchText
            };

            await _searchProvider.QueryPagingFull(model, 0, 0).ConfigureAwait(false);

            return model;
        }

        [HttpPost]
        [Route("Paging")]
        public async Task<SearchDataDto> Paging([FromBody] SearchDataDto searchDataDto)
        {
            int page;

            switch (searchDataDto.Paging)
            {
                case "prev":
                    page = searchDataDto.CurrentPage - 1;
                    break;

                case "next":
                    page = searchDataDto.CurrentPage + 1;
                    break;

                default:
                    page = int.Parse(searchDataDto.Paging);
                    break;
            }

            int leftMostPage = searchDataDto.LeftMostPage;

            SearchData model = new SearchData
            {
                SearchText = searchDataDto.SearchText,
                LeftMostPage = searchDataDto.LeftMostPage,
                PageCount = searchDataDto.PageCount,
                PageRange = searchDataDto.PageRange,
                Paging = searchDataDto.Paging,
                CurrentPage = searchDataDto.CurrentPage
            };

            await _searchProvider.QueryPagingFull(model, page, leftMostPage).ConfigureAwait(false);

           
            var results = new SearchDataDto
            {
                SearchText = model.SearchText,
                LeftMostPage = model.LeftMostPage,
                PageCount = model.PageCount,
                PageRange = model.PageRange,
                Paging = model.Paging,
                CurrentPage = model.CurrentPage,
                Results = new SearchResultItems
                {
                   PersonCities = new List<PersonCityDto>(),
                   TotalCount = model.PersonCities.TotalCount.GetValueOrDefault()
                }
            };

            var docs =  model.PersonCities.GetResults().ToList();
            foreach(var doc in docs)
            {
                results.Results.PersonCities.Add(new PersonCityDto
                {
                    CityCountry = doc.Document.CityCountry,
                    FamilyName = doc.Document.FamilyName,
                    Github = doc.Document.Github,
                    Id = doc.Document.Id,
                    Info = doc.Document.Info,
                    Metadata = doc.Document.Metadata,
                    Mvp = doc.Document.Mvp,
                    Name = doc.Document.Name,
                    Twitter = doc.Document.Twitter,
                    Web = doc.Document.Web
                });
            }

            return results;
        }
    }
}

The SearchProviderPaging provider implements the Azure Cognitive Search service client using the Azure SDK Nuget package. This class uses the user secrets and sets the search configuration. The paging was implemented based on the official documentation samples for paging.

using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;
using BlazorAzureSearch.Shared;
using Microsoft.Extensions.Configuration;
using System;
using System.Threading.Tasks;

namespace BlazorAzureSearch.Server
{
    public class SearchProviderPaging
    {
        private readonly SearchClient _searchClient;
        private readonly string _index;

        public SearchProviderPaging(IConfiguration configuration)
        {
            _index = configuration["PersonCitiesIndexName"];

            Uri serviceEndpoint = new Uri(configuration["PersonCitiesSearchUri"]);
            AzureKeyCredential credential = new AzureKeyCredential(configuration["PersonCitiesSearchApiKey"]);
            _searchClient = new SearchClient(serviceEndpoint, _index, credential);
        }

        public async Task QueryPagingFull(SearchData model, int page, int leftMostPage)
        {
            var pageSize = 4;
            var maxPageRange = 7;
            var pageRangeDelta = maxPageRange - pageSize;

            var options = new SearchOptions
            {
                Skip = page * pageSize,
                Size = pageSize,
                IncludeTotalCount = true,
                QueryType = SearchQueryType.Full
            }; // options.Select.Add("Name"); // add this explicitly if all fields are not required

            model.PersonCities = await _searchClient.SearchAsync<PersonCity>(model.SearchText, options).ConfigureAwait(false);
            model.PageCount = ((int)model.PersonCities.TotalCount + pageSize - 1) / pageSize;
            model.CurrentPage = page;
            if (page == 0)
            {
                leftMostPage = 0;
            }
            else if (page <= leftMostPage)
            {
                leftMostPage = Math.Max(page - pageRangeDelta, 0);
            }
            else if (page >= leftMostPage + maxPageRange - 1)
            {
                leftMostPage = Math.Min(page - pageRangeDelta, model.PageCount - maxPageRange);
            }
            model.LeftMostPage = leftMostPage;
            model.PageRange = Math.Min(model.PageCount - leftMostPage, maxPageRange);
        }
    }
}

The SearchAdminController implements the API for the search administration Blazor UI. This API was created just for the Blazor UI. It makes it possible to create or delete an index and to add data to the index. The status of the index can also be queried.

using Azure.Search.Documents.Indexes.Models;
using BlazorAzureSearch.Shared;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace BlazorAzureSearch.Server.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SearchAdminController : ControllerBase
    {
        private readonly SearchProviderIndex _searchProviderIndex;
        private readonly ILogger<SearchAdminController> _logger;

        public SearchAdminController(SearchProviderIndex searchProviderIndex,
            ILogger<SearchAdminController> logger)
        {
            _searchProviderIndex = searchProviderIndex;
            _logger = logger;
        }

        [HttpGet]
        [Route("IndexStatus")]
        public async Task<IndexStatus> IndexStatus()
        {
            var indexStatus = await _searchProviderIndex.GetIndexStatus().ConfigureAwait(false);
            return new IndexStatus
            {
                IndexExists = indexStatus.Exists,
                DocumentCount = indexStatus.DocumentCount
            };
        }

        [HttpPost]
        [Route("DeleteIndex")]
        public async Task<IndexResult> DeleteIndex([FromBody] string indexName)
        {
            var deleteIndex = new IndexResult();
            if (string.IsNullOrEmpty(indexName))
            {
                deleteIndex.Messages = new List<AlertViewModel> {
                    new AlertViewModel("danger", "no indexName defined", "Please provide the index name"),
                };
                return deleteIndex;
            }

            try
            {
                await _searchProviderIndex.DeleteIndex(indexName).ConfigureAwait(false);

                deleteIndex.Messages = new List<AlertViewModel> {
                    new AlertViewModel("success", "Index Deleted!", "The Azure Search Index was successfully deleted!"),
                };
                var indexStatus = await _searchProviderIndex.GetIndexStatus().ConfigureAwait(false);
                deleteIndex.Status.IndexExists = indexStatus.Exists;
                deleteIndex.Status.DocumentCount = indexStatus.DocumentCount;
                return deleteIndex;
            }
            catch (Exception ex)
            {
                deleteIndex.Messages = new List<AlertViewModel> {
                    new AlertViewModel("danger", "Error deleting index", ex.Message),
                };
                return deleteIndex;
            }
        }

        [HttpPost]
        [Route("AddData")]
        public async Task<IndexResult> AddData([FromBody]string indexName)
        {
            var addData = new IndexResult();
            if (string.IsNullOrEmpty(indexName))
            {
                addData.Messages = new List<AlertViewModel> {
                    new AlertViewModel("danger", "no indexName defined", "Please provide the index name"),
                };
                return addData;
            }
            try
            {
                PersonCityData.CreateTestData();
                await _searchProviderIndex.AddDocumentsToIndex(PersonCityData.Data).ConfigureAwait(false);
                addData.Messages = new List<AlertViewModel>{
                    new AlertViewModel("success", "Documented added", "The Azure Search documents were uploaded! The Document Count takes n seconds to update!"),
                };
                var indexStatus = await _searchProviderIndex.GetIndexStatus().ConfigureAwait(false);
                addData.Status.IndexExists = indexStatus.Exists;
                addData.Status.DocumentCount = indexStatus.DocumentCount;
                return addData;
            }
            catch (Exception ex)
            {
                addData.Messages = new List<AlertViewModel> {
                    new AlertViewModel("danger", "Error adding documents", ex.Message),
                };
                return addData;
            }
        }

        [HttpPost]
        [Route("CreateIndex")]
        public async Task<IndexResult> CreateIndex([FromBody] string indexName)
        {
            var createIndex = new IndexResult();
            if (string.IsNullOrEmpty(indexName))
            {
                createIndex.Messages = new List<AlertViewModel> {
                    new AlertViewModel("danger", "no indexName defined", "Please provide the index name"),
                };
                return createIndex;
            }

            try
            {
                await _searchProviderIndex.CreateIndex().ConfigureAwait(false);
                createIndex.Messages = new List<AlertViewModel>  {
                    new AlertViewModel("success", "Index created", "The Azure Search index was created successfully!"),
                };
                var indexStatus = await _searchProviderIndex.GetIndexStatus().ConfigureAwait(false);
                createIndex.Status.IndexExists = indexStatus.Exists;
                createIndex.Status.DocumentCount = indexStatus.DocumentCount;
                return createIndex;
            }
            catch (Exception ex)
            {
                createIndex.Messages = new List<AlertViewModel> {
                    new AlertViewModel("danger", "Error creating index", ex.Message),
                };
                return createIndex;
            }

        }
    }
}

The SearchProviderIndex implements the management APIs for Azure Cognitive Search. The provider uses the Azure.Search.Documents Azure SDK package as well as the REST API directly to access the Azure Cognitive Search service. The Azure.Search.Documents SDK provides no easy way to query the document count and the status of the index, so the REST API was used directly for this.

using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;
using Azure.Search.Documents.Models;
using Microsoft.Extensions.Configuration;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace BlazorAzureSearch.Server
{
    public class SearchProviderIndex
    {
        private readonly SearchIndexClient _searchIndexClient;
        private readonly SearchClient _searchClient;
        private readonly IConfiguration _configuration;
        private readonly IHttpClientFactory _httpClientFactory;
        private readonly string _index;

        public SearchProviderIndex(IConfiguration configuration, IHttpClientFactory httpClientFactory)
        {
            _configuration = configuration;
            _httpClientFactory = httpClientFactory;
            _index = configuration["PersonCitiesIndexName"];

            Uri serviceEndpoint = new Uri(configuration["PersonCitiesSearchUri"]);
            AzureKeyCredential credential = new AzureKeyCredential(configuration["PersonCitiesSearchApiKey"]);

            _searchIndexClient = new SearchIndexClient(serviceEndpoint, credential);
            _searchClient = new SearchClient(serviceEndpoint, _index, credential);

        }

        public async Task CreateIndex()
        {
            FieldBuilder builder = new FieldBuilder();
            var definition = new SearchIndex(_index, builder.Build(typeof(PersonCity)));
            definition.Suggesters.Add(new SearchSuggester(
                "personSg", new string[] { "Name", "FamilyName", "Info", "CityCountry" }
            ));

            await _searchIndexClient.CreateIndexAsync(definition).ConfigureAwait(false);
        }

        public async Task DeleteIndex(string indexName)
        {
            await _searchIndexClient.DeleteIndexAsync(indexName).ConfigureAwait(false);
        }

        public async Task<(bool Exists, long DocumentCount)> GetIndexStatus()
        {
            try
            {
                var httpClient = _httpClientFactory.CreateClient();
                httpClient.DefaultRequestHeaders.CacheControl = new CacheControlHeaderValue
                {
                    NoCache = true,
                };
                httpClient.DefaultRequestHeaders.Add("api-key", _configuration["PersonCitiesSearchApiKey"]);

                var uri = $"{_configuration["PersonCitiesSearchUri"]}/indexes/{_index}/docs/$count?api-version=2020-06-30";
                var data = await httpClient.GetAsync(uri).ConfigureAwait(false);
                if (data.StatusCode == System.Net.HttpStatusCode.NotFound)
                {
                    return (false, 0);
                }
                var payload = await data.Content.ReadAsStringAsync().ConfigureAwait(false);
                return (true, int.Parse(payload));
            }
            catch
            {
                return (false, 0);
            }
        }

        public async Task AddDocumentsToIndex(List<PersonCity> personCities)
        {
            var batch = IndexDocumentsBatch.Upload(personCities);
            await _searchClient.IndexDocumentsAsync(batch).ConfigureAwait(false);
        }
    }
}

Implement the Blazor UI

The Blazor WASM Client project implements two Razor views: one for the administration of the index and one for the paging search. The Blazor razor files contain both the template markup and the code-behind, which is the default. The navigation names and routes were changed, but otherwise the UI is the same as the default Blazor templates from ASP.NET Core.

The search results are displayed in a list, and the view calls the code-behind methods which call the APIs of the Blazor Server project.

@page "/searchpaging"
@using BlazorAzureSearch.Shared
@inject HttpClient Http
@inject NavigationManager NavManager

<EditForm Model="@SearchData" class="centerMiddle">
    <div class="searchBoxForm">
        <InputText @bind-Value="SearchData.SearchText" class="searchBox"></InputText>
        <input class="searchBoxSubmit" @onclick="@(e => SearchPager(0.ToString(), SearchData.SearchText))">
    </div>
</EditForm>

@if (Loading)
{
    <div class="spinner d-flex align-items-center justify-content-center fixedSpinner" >
        <div class="spinner-border text-success" role="status">
            <span class="sr-only">Loading...</span>
        </div>
    </div>
} 

@if (SearchData.Results.PersonCities != null)
{
    <p class="sampleText centerMiddle">
        Found @SearchData.Results.TotalCount Documents
    </p>

    var results = SearchData.Results.PersonCities;

    @for (var i = 0; i < results.Count; i++)
    {
<div>
    <b><span><a href="@results[i].Web">@results[i].Name @results[i].FamilyName</a>: @results[i].CityCountry &nbsp;</span></b>
    @if (!string.IsNullOrEmpty(results[i].Twitter))
    {
        <a href="@results[i].Twitter"><img src="/images/socialTwitter.png" /></a>
    }
    @if (!string.IsNullOrEmpty(results[i].Github))
    {
        <a href="@results[i].Github"><img src="/images/github.png" /></a>
    }
    @if (!string.IsNullOrEmpty(results[i].Mvp))
    {
        <a href="@results[i].Mvp"><img src="/images/mvp.png" width="24" /></a>
    }
    <br />
    <em><span>@results[i].Metadata</span></em><br />
    <textarea class="infotext">@results[i].Info</textarea>
    <br />
</div>
    }
}

<div class="container">
    <div class="row">
        <div class="col">
            @if (SearchData.PageCount > 1)
            {
                <table class="col">
                    <tr class="col">
                        <td>
                            @if (SearchData.CurrentPage > 0)
                            {
                                <p class="pageButton">
                                    <button class="btn btn-link"
                                            @onclick="@(e => SearchPager(0.ToString(), SearchData.SearchText))">
                                        |<
                                    </button>
                                </p>
                            }
                            else
                            {
                                <p class="pageButtonDisabled">|&lt;</p>
                            }
                        </td>

                        <td>
                            @if (SearchData.CurrentPage > 0)
                            {
                                var prev = "prev";
                                <p class="pageButton">
                                    <button class="btn btn-link" @onclick="@(e => SearchPager(prev, SearchData.SearchText))"><</button>
                                </p>
                            }
                            else
                            {
                                <p class="pageButtonDisabled">&lt;</p>
                            }
                        </td>

                        @for (var pn = SearchData.LeftMostPage; pn < SearchData.LeftMostPage + SearchData.PageRange; pn++)
                        {
                            <td>
                                @if (SearchData.CurrentPage == pn)
                                {
                                    <p class="pageSelected">@(pn + 1)</p>
                                }
                                else
                                {
                                    <p class="pageButton">
                                        @{
                                            var p1 = SearchData.PageCount - 1;
                                            var plink = pn.ToString();
                                        }
                                        <button class="btn btn-link"
                                                @onclick="@(e => SearchPager(plink, SearchData.SearchText))">
                                            @(pn + 1)
                                        </button>
                                    </p>
                                }
                            </td>

                        }

                        <td>
                            @if (SearchData.CurrentPage < SearchData.PageCount - 1)
                            {

                                <p class="pageButton">
                                    @{
                                        var p1 = SearchData.PageCount - 1;
                                        var next = "next";
                                    }
                                    <button class="btn btn-link"
                                            @onclick="@(e => SearchPager(next, SearchData.SearchText))">
                                        >
                                    </button>
                                </p>
                            }
                            else
                            {
                                <p class="pageButtonDisabled">&gt;</p>
                            }
                        </td>

                        <td>
                            @if (SearchData.CurrentPage < SearchData.PageCount - 1)
                            {
                                <p class="pageButton">
                                    @{var p7 = SearchData.PageCount - 1;}
                                    <button class="btn btn-link"
                                            @onclick="@(e => SearchPager(p7.ToString(), SearchData.SearchText))">
                                        >|
                                    </button>
                                </p>
                            }
                            else
                            {
                                <p class="pageButtonDisabled">&gt;|</p>
                            }
                        </td>
                    </tr>
                </table>
            }
        </div>
   
    </div>

</div>

The code-behind implements OnInitializedAsync so the search can be started from a simple GET using query string parameters. The Search method uses the UI values and sends the API requests to the APIs in the Blazor Server project.

@code {

    private bool Loading { get; set; } = false;
    private SearchDataDto SearchData { get; set; } = new SearchDataDto();

    private int PageNo { get; set; }

    protected override async Task OnInitializedAsync()
    {
        var uri = NavManager.ToAbsoluteUri(NavManager.Uri);
        if (Microsoft.AspNetCore.WebUtilities.QueryHelpers.ParseQuery(uri.Query).TryGetValue("paging", out var queryParamPaging))
        {
            SearchData.Paging = queryParamPaging;
        }
        if (Microsoft.AspNetCore.WebUtilities.QueryHelpers.ParseQuery(uri.Query).TryGetValue("SearchText", out var queryParamSearchText))
        {
            SearchData.SearchText = queryParamSearchText;
        }

        if (!string.IsNullOrEmpty(queryParamSearchText) ||
            !string.IsNullOrEmpty(queryParamPaging))
        {
            await Search();
        }
    }

    private async Task SearchPager(string paging, string searchText)
    {
        SearchData.Paging = paging.ToString();
        SearchData.SearchText = searchText;
        await Search();
    }

    private async Task Search()
    {
        Loading = true;
        int page;

        switch (SearchData.Paging)
        {
            case "prev":
                page = PageNo - 1;
                break;

            case "next":
                page = PageNo + 1;
                break;

            default:
                page = int.Parse(SearchData.Paging);
                break;
        }

        int leftMostPage = SearchData.LeftMostPage;

        var searchData = new SearchDataDto
        {
            SearchText = SearchData.SearchText,
            CurrentPage = SearchData.CurrentPage,
            PageCount = SearchData.PageCount,
            LeftMostPage = SearchData.LeftMostPage,
            PageRange = SearchData.PageRange,
            Paging = SearchData.Paging
        };

        var response = await Http.PostAsJsonAsync<SearchDataDto>("api/SearchPaging/Paging", searchData);
        response.EnsureSuccessStatusCode();
        string responseBody = await response.Content.ReadAsStringAsync();

        var searchDataResult = System.Text.Json.JsonSerializer.Deserialize<SearchDataDto>(responseBody);

        PageNo = page;
        SearchData = searchDataResult;
        Loading = false;
    }

}

The Blazor Search Admin view can create or delete the index. Data can be added and the status of the index is displayed. The UI is implemented using Bootstrap 4 CSS.

@page "/searchadmin"
@using BlazorAzureSearch.Shared
@inject HttpClient Http

<div class="jumbotron jumbotron-fluid">
    <div class="container">
        <h1 class="display-4">Index: @IndexName</h1>
        <p class="lead">Exists: <span class="badge badge-secondary">@IndexExists</span>  Documents Count: <span class="badge badge-light">@DocumentCount</span> </p>
    </div>
</div>

@if (Loading)
{
    <div class="spinner d-flex align-items-center justify-content-center fixedSpinner">
        <div class="spinner-border text-success" role="status">
            <span class="sr-only">Loading...</span>
        </div>
    </div>
}

<div class="card-deck">
    <div class="card">
        <div class="card-body">
            <h5 class="card-title">Create index: @IndexName</h5>
            <p class="card-text">Click to create a new index in Azure Cognitive search called @IndexName.</p>
        </div>
        <div class="card-footer text-center">
            <button class="btn btn-primary col-sm-6" @onclick="CreateIndex">
                Create
            </button>
        </div>
    </div>
    <div class="card">
        <div class="card-body">
            <h5 class="card-title">Add Documents to index: @IndexName</h5>
            <p class="card-text">Add documents to the Azure Cognitive search index: @IndexName.</p>
        </div>
        <div class="card-footer text-center">
            <button class="btn btn-primary col-sm-6" @onclick="AddData">
                Add
            </button>
        </div>
    </div>
    <div class="card">
        <div class="card-body">
            <h5 class="card-title">Delete index: @IndexName</h5>
            <p class="card-text">Delete Azure Cognitive search index: @IndexName.</p>
        </div>
        <div class="card-footer text-center">
            <button type="submit" class="btn btn-danger col-sm-6" @onclick="DeleteIndex">
                Delete
            </button>
        </div>
    </div>
</div>

<br />

@if (Messages != null)
{
    @foreach (var msg in Messages)
    {
        <div class="alert alert-@msg.AlertType alert-dismissible fade show" role="alert">
            <strong>@msg.AlertTitle</strong> @msg.AlertMessage
            <button type="button" class="close" data-dismiss="alert" aria-label="Close">
                <span aria-hidden="true">&times;</span>
            </button>
        </div>
    }
}

The OnInitializedAsync method gets the status of the index. The other methods are used to prepare the data and forward the calls to the Server API.

@code {
    private bool Loading { get; set; } = false;
    private List<AlertViewModel> Messages = null;
    private string IndexName { get; set; } = "personcities";
    private bool IndexExists { get; set; }
    private long DocumentCount { get; set; }

    protected override async Task OnInitializedAsync()
    {
        Console.WriteLine("On Init");

        Loading = true;
        var status = await Http.GetFromJsonAsync<IndexStatus>("api/SearchAdmin/IndexStatus");
        IndexExists = status.IndexExists;
        DocumentCount = status.DocumentCount;
        Loading = false;
    }

    private async Task DeleteIndex()
    {
        Loading = true;
        var response = await Http.PostAsJsonAsync<string>("api/SearchAdmin/DeleteIndex", IndexName);
        response.EnsureSuccessStatusCode();
        string responseBody = await response.Content.ReadAsStringAsync();

        var deleteIndex = System.Text.Json.JsonSerializer.Deserialize<IndexResult>(responseBody);

        Messages = deleteIndex.Messages;
        if (Messages.Count > 0 && Messages[0].AlertType == "success")
        {
            IndexExists = deleteIndex.Status.IndexExists;
            DocumentCount = deleteIndex.Status.DocumentCount;
        }
        Loading = false;
        Console.WriteLine($"DocumentCount: {DocumentCount}");
    }

    private async Task AddData()
    {
        Loading = true;
        var response = await Http.PostAsJsonAsync<string>("api/SearchAdmin/AddData", IndexName);
        response.EnsureSuccessStatusCode();
        string responseBody = await response.Content.ReadAsStringAsync();

        var addData = System.Text.Json.JsonSerializer.Deserialize<IndexResult>(responseBody);

        Messages = addData.Messages;
        if (Messages.Count > 0 && Messages[0].AlertType == "success")
        {
            IndexExists = addData.Status.IndexExists;
            DocumentCount = addData.Status.DocumentCount;
        }
        Loading = false;
        Console.WriteLine($"DocumentCount: {DocumentCount}");
    }

    private async Task CreateIndex()
    {
        try
        {
            Loading = true;
            var response = await Http.PostAsJsonAsync<string>("api/SearchAdmin/CreateIndex", IndexName);
            response.EnsureSuccessStatusCode();
            string responseBody = await response.Content.ReadAsStringAsync();

            var createIndex = System.Text.Json.JsonSerializer.Deserialize<IndexResult>(responseBody);

            Messages = createIndex.Messages;
            if (Messages.Count > 0 && Messages[0].AlertType == "success")
            {
                IndexExists = createIndex.Status.IndexExists;
                DocumentCount = createIndex.Status.DocumentCount;
            }
        }
        finally
        {
            Loading = false;
            Console.WriteLine($"DocumentCount: {DocumentCount}");
        }
    }
}

When the application is run, the search works by clicking the search button. I haven’t figured out how to bind an Enter key event to the input text box in Blazor yet, but this should be pretty easy.
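
One possible approach (a sketch which assumes the existing SearchPager method; not code from the demo) is to bind the input using the oninput event and start the search from an onkeyup handler when the Enter key is released:

@* Hypothetical replacement for the search input in the SearchPaging view. *@
<input class="searchBox"
       @bind="SearchData.SearchText"
       @bind:event="oninput"
       @onkeyup="OnSearchKeyUp" />

@code {
    private async Task OnSearchKeyUp(Microsoft.AspNetCore.Components.Web.KeyboardEventArgs e)
    {
        if (e.Key == "Enter")
        {
            await SearchPager(0.ToString(), SearchData.SearchText);
        }
    }
}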

The Search Admin view looks as follows and the index data can be managed as required.

Running the code yourself

To try this yourself, just clone the github repo and create an Azure Cognitive Search service. Add your keys to the user secrets in the Server project and test away. I’m pretty new to Blazor, so send your PRs or add issues if you see ways of improving this.

{
  "PersonCitiesSearchUri": "--url--",
  "PersonCitiesSearchApiKey": "--secret--",
  "PersonCitiesIndexName": "personcities"
}

Links

https://docs.microsoft.com/en-us/aspnet/core/blazor

https://docs.microsoft.com/en-us/azure/search

https://docs.microsoft.com/en-us/azure/search/search-what-is-azure-search

https://docs.microsoft.com/en-us/rest/api/searchservice/

https://github.com/Azure-Samples/azure-search-dotnet-samples/

https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Deep-Dive-with-Debug-Sessions

https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security

https://azure.github.io/azure-sdk/releases/latest/index.html

https://chrissainty.com/working-with-query-strings-in-blazor/

Implement a Web APP and an ASP.NET Core Secure API using Azure AD which delegates to a second API

This article shows how an ASP.NET Core Web application can authenticate and access a downstream API using user access tokens, and how that API can delegate to a second API in Azure AD, also using user access tokens. Microsoft.Identity.Web is used in all three applications to implement the authentication and to acquire the access tokens for the two APIs.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate

Setup and App registrations

The applications are set up as follows.

The applications implement the OAuth 2.0 On-Behalf-Of flow (OBO) and is made easy be using the Microsoft.Identity.Web Nuget packages.

The three applications require App registrations. The first Azure App registration exposes an API using the access_as_user scope. Nothing more is required here. This is the API at the end of the chain.

The API in the middle requires the API permission from the previously created App registration and exposes its own API, again with an access_as_user scope. The Web API requires a secret to get the delegated access token, so a client secret (or a client certificate) is configured in this App registration.

The API permissions are set up to use the scope from the other API.

And it exposes its own access_as_user scope.

The Web App requires a Web setup with a client secret (or client certificate), and the API permission from the middle API is added here.

Web Application which calls the first API

The Web App with the UI interaction uses two Nuget packages, Microsoft.Identity.Web and Microsoft.Identity.Web.UI, to implement the authentication and the authorization client for the API. The application is set up to acquire an access token using the EnableTokenAcquisitionToCallDownstreamApi method with the scope from the User API One.

public void ConfigureServices(IServiceCollection services)
{
	services.AddTransient<UserApiOneService>();
	services.AddHttpClient();

	services.AddOptions();

	string[] initialScopes = Configuration.GetValue<string>(
		"UserApiOne:ScopeForAccessToken")?.Split(' ');

	services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
		.EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
		.AddInMemoryTokenCaches();

	services.AddRazorPages().AddMvcOptions(options =>
	{
		var policy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
			.Build();
		options.Filters.Add(new AuthorizeFilter(policy));
	}).AddMicrosoftIdentityUI();
}

The two nuget packages are added to the csproj file.

<PackageReference Include="Microsoft.Identity.Web" Version="1.2.0" />
<PackageReference Include="Microsoft.Identity.Web.UI" Version="1.2.0" />

The configuration is set up to use the data for the applications defined in the App registrations. The scope matches the scope from the User API One.

{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "damienbodhotmail.onmicrosoft.com",
    "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
    "ClientId": "46d2f651-813a-4b5c-8a43-63abcb4f692c",
    "CallbackPath": "/signin-oidc",
    "SignedOutCallbackPath ": "/signout-callback-oidc"
  },
  "UserApiOne": {
    // UserApiOne
    "ScopeForAccessToken": "api://b2a09168-54e2-4bc4-af92-a710a64ef1fa/access_as_user",
    "ApiBaseAddress": "https://localhost:44395"
  },

}

The API client implementation uses the ITokenAcquisition to get the access token for the identity and access the API.

using Microsoft.Extensions.Configuration;
using Microsoft.Identity.Web;
using Newtonsoft.Json.Linq;
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace WebAppUserApis
{
    public class UserApiOneService
    {
        private readonly IHttpClientFactory _clientFactory;
        private readonly ITokenAcquisition _tokenAcquisition;
        private readonly IConfiguration _configuration;

        public UserApiOneService(IHttpClientFactory clientFactory, 
            ITokenAcquisition tokenAcquisition, 
            IConfiguration configuration)
        {
            _clientFactory = clientFactory;
            _tokenAcquisition = tokenAcquisition;
            _configuration = configuration;
        }

        public async Task<JArray> GetApiDataAsync()
        {
            try
            {
                var client = _clientFactory.CreateClient();

                var scope = _configuration["UserApiOne:ScopeForAccessToken"];
                var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(new[] { scope });

                client.BaseAddress = new Uri(_configuration["UserApiOne:ApiBaseAddress"]);
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
       
                var response = await client.GetAsync("weatherforecast");
                if (response.IsSuccessStatusCode)
                {
                    var responseContent = await response.Content.ReadAsStringAsync();
                    var data = JArray.Parse(responseContent);

                    return data;
                }

                throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
            }
            catch (Exception e)
            {
                throw new ApplicationException($"Exception {e}");
            }
        }
    }
}

The Web App requires a client secret to authenticate and acquire tokens. This could also be done using a client certificate. A client secret is used in this example and it must match the secret set up in the Web App App registration.

{
  "AzureAd": {
    "ClientSecret": "--your secret for WebApp App Registration--" 
  }
}
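
If you prefer a client certificate instead of the secret, Microsoft.Identity.Web can also load the certificate from configuration. A sketch of such a configuration (the Key Vault URL and certificate name are placeholders) could look like this:

{
  "AzureAd": {
    "ClientCertificates": [
      {
        "SourceType": "KeyVault",
        "KeyVaultUrl": "https://your-key-vault.vault.azure.net",
        "KeyVaultCertificateName": "your-certificate-name"
      }
    ]
  }
}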

API which calls the second API

The UI facing API uses a second API for separate data. The second API is also a user access token API and uses delegated tokens to access the data it protects. The second API is not used from the UI application. When the access token from the UI application is used to access the first API, the first API uses it to get another access token to access the second API. This is all set up in the Startup class of the UI facing API. The AddMicrosoftIdentityWebApiAuthentication method is used to set up the API and it enables token acquisition for the second API. This is very simple when using Microsoft.Identity.Web.

public void ConfigureServices(IServiceCollection services)
{
	services.AddTransient<UserApiTwoService>();
	services.AddHttpClient();

	services.AddOptions();

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	// IdentityModelEventSource.ShowPII = true;
	// JwtSecurityTokenHandler.DefaultMapInboundClaims = false;

	services.AddMicrosoftIdentityWebApiAuthentication(Configuration)
		.EnableTokenAcquisitionToCallDownstreamApi()
		.AddInMemoryTokenCaches();

	services.AddControllers(options =>
	{
		var policy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
		   // .RequireClaim("email")
			.Build();
		options.Filters.Add(new AuthorizeFilter(policy));
	});
}

The app.settings are configured to use the Azure AD API registration and the scope for the second application.

{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "damienbodhotmail.onmicrosoft.com",
    "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
    "ClientId": "b2a09168-54e2-4bc4-af92-a710a64ef1fa"
  },
  "UserApiTwo": {
    "ScopeForAccessToken": "api://72286b8d-5010-4632-9cea-e69e565a5517/access_as_user",
    "ApiBaseAddress": "https://localhost:44396"
  },

}

The UserApiTwoService gets an access token for the API two scope and this is used to access the Web API controllers to return the data.

using Microsoft.Extensions.Configuration;
using Microsoft.Identity.Web;
using Newtonsoft.Json.Linq;
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace UserApiOne
{
    public class UserApiTwoService
    {
        private readonly IHttpClientFactory _clientFactory;
        private readonly ITokenAcquisition _tokenAcquisition;
        private readonly IConfiguration _configuration;

        public UserApiTwoService(IHttpClientFactory clientFactory, 
            ITokenAcquisition tokenAcquisition, 
            IConfiguration configuration)
        {
            _clientFactory = clientFactory;
            _tokenAcquisition = tokenAcquisition;
            _configuration = configuration;
        }

        public async Task<JArray> GetApiDataAsync()
        {
            try
            {
                var client = _clientFactory.CreateClient();

                var scope = _configuration["UserApiTwo:ScopeForAccessToken"];
                var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(new[] { scope });

                client.BaseAddress = new Uri(_configuration["UserApiTwo:ApiBaseAddress"]);
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
       
                var response = await client.GetAsync("weatherforecast");
                if (response.IsSuccessStatusCode)
                {
                    var responseContent = await response.Content.ReadAsStringAsync();
                    var data = JArray.Parse(responseContent);

                    return data;
                }

                throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
            }
            catch (Exception e)
            {
                throw new ApplicationException($"Exception {e}");
            }
        }
    }
}

To get an access token for the second API, a client secret or a client certificate is required. The client secret is used here and is defined in the first Web API. This can be added to your user secrets or an Azure Key Vault.

{
  "AzureAd": {
    "ClientSecret": "--your secret for UserApiOne  App Registration--" 
  }
}

Second API

API two is configured in the Startup class to require Azure AD delegated access tokens. The AddMicrosoftIdentityWebApiAuthentication method is used with no extra configuration. Scopes and roles should be validated as well. This can be done here, in policies, or using the helper methods from the Azure AD Microsoft.Identity.Web packages.

public void ConfigureServices(IServiceCollection services)
{
	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	// IdentityModelEventSource.ShowPII = true;
	// JwtSecurityTokenHandler.DefaultMapInboundClaims = false;

	services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

	services.AddControllers(options =>
	{
		var policy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
		   // .RequireClaim("email") 
			.Build();
		options.Filters.Add(new AuthorizeFilter(policy));
	});
}

The Azure AD configuration in the app.settings is standard, like in the documentation.

{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "damienbodhotmail.onmicrosoft.com",
    "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
    "ClientId": "72286b8d-5010-4632-9cea-e69e565a5517"
  },

}

The VerifyUserHasAnyAcceptedScope can be used to validate a required scope for the delegated access token.

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
	string[] scopeRequiredByApi = new string[] { "access_as_user" };
	HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);

	// ...
}

When the applications are run, the UI web application authenticates and gets an access token for Web API one. Web API one authorizes the access token and gets an access token for Web API two. Web API two authorizes the access token and returns the data. Web API one gets the data from Web API two and then returns data to the Web App. The full request chain works using user access tokens, without making the second API available to the UI application.

Links

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/howto-saml-token-encryption

Authentication and the Azure SDK

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#second-case-access-token-request-with-a-certificate

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-credential-flows

https://tools.ietf.org/html/rfc7523

https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication

https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Assertions

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow

https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates#describing-client-certificates-to-use-by-configuration

API Security with OAuth2 and OpenID Connect in Depth with Kevin Dockx, August 2020

https://www.scottbrady91.com/OAuth/Removing-Shared-Secrets-for-OAuth-Client-Authentication

https://github.com/KevinDockx/ApiSecurityInDepth

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki

https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles

Using Microsoft Graph API in ASP.NET Core

This post shows how Microsoft Graph API can be used in both ASP.NET Core UI web applications and ASP.NET Core APIs for delegated identity flows. The ASP.NET Core applications are secured using Microsoft.Identity.Web. In the API project, the Graph API client is used in a delegated flow with user access tokens, acquiring an access token for the Graph API on behalf of the identity created from the access token used to call the API.

Code: https://github.com/damienbod/AspNetCoreUsingGraphApi

Using Graph API from an ASP.NET Core UI application

Using the Graph API client in an ASP.NET Core UI web application can be implemented using the Microsoft.Identity.Web.MicrosoftGraph nuget package. This package can be added to the project file together with the Azure authentication packages. If using the beta version, switch to the Microsoft.Identity.Web.MicrosoftGraphBeta nuget package.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
    <UserSecretsId>c27d164f-2839-4f2b-a533-da54a470d29a</UserSecretsId>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Identity.Web" Version="1.3.0" />
    <PackageReference Include="Microsoft.Identity.Web.UI" Version="1.3.0" />
    <PackageReference Include="Microsoft.Identity.Web.MicrosoftGraphBeta" Version="1.3.0" />
  </ItemGroup>
  
</Project>

The application authentication and the authorization are setup in the Startup class. The AddMicrosoftGraph method is used to add the required scopes for your Graph API calls. The AddMicrosoftIdentityWebAppAuthentication method is used in the UI ASP.NET Core application.

public void ConfigureServices(IServiceCollection services)
{
	string[] initialScopes = Configuration
		.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');

	services.AddHttpClient();
	services.AddScoped<GraphApiClientUI>();
	services.AddScoped<ApiService>();

	services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
		.EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
		.AddMicrosoftGraph(
			Configuration["DownstreamApi:BaseUrl"],
			Configuration.GetValue<string>("DownstreamApi:Scopes"))
		.AddInMemoryTokenCaches();

	services.AddControllersWithViews(options =>
	{
		var policy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
			.Build();
		options.Filters.Add(new AuthorizeFilter(policy));
	}).AddMicrosoftIdentityUI();

	services.AddRazorPages();
}
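
The DownstreamApi configuration section referenced above is not shown in the snippet; a minimal example (the values here are only illustrative) might look like this:

{
  "DownstreamApi": {
    "BaseUrl": "https://graph.microsoft.com/beta",
    "Scopes": "user.read"
  }
}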

The GraphServiceClient service can be added directly in the services and used to access the Graph API. This is really simple in a UI application and you don’t need to handle token requests or anything else like this; it is all implemented in the Microsoft.Identity.Web packages. But if you require special scopes or would like to handle this yourself, this is possible and the GraphServiceClient instance can be created as shown below.

public class GraphApiClientUI
{
	private readonly GraphServiceClient _graphServiceClient;

	public GraphApiClientUI(ITokenAcquisition tokenAcquisition,
		GraphServiceClient graphServiceClient)
	{
		_graphServiceClient = graphServiceClient;
	}

	public async Task<User> GetGraphApiUser()
	{
		return await _graphServiceClient.Me.Request()
			.GetAsync().ConfigureAwait(false);
	}

The Graph API data can be returned in the UI views of the ASP.NET Core application.

[Authorize]
public class HomeController : Controller
{
	private readonly GraphApiClientUI _graphApiClientUI;

	public HomeController(GraphApiClientUI graphApiClientUI)
	{
		_graphApiClientUI = graphApiClientUI;
	}

	[AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")]
	public async Task<IActionResult> Index()
	{
		var user = await _graphApiClientUI.GetGraphApiUser()
			.ConfigureAwait(false);

		ViewData["ApiResult"] = user.DisplayName;

		return View();
	}

Using Graph API from an ASP.NET Core API

Using Graph API from an ASP.NET Core API application is different from a UI application. The Graph API is called on behalf of the identity created from the access token calling the API. This is a delegated user access token. The Azure AD client security for the API can be set up using the AddMicrosoftIdentityWebApiAuthentication method.

public void ConfigureServices(IServiceCollection services)
{
	services.AddHttpClient();
	services.AddScoped<GraphApiClientDirect>();

	services.AddMicrosoftIdentityWebApiAuthentication(Configuration)
		.EnableTokenAcquisitionToCallDownstreamApi()
		.AddInMemoryTokenCaches();

	services.AddControllers(options =>
	{
		var policy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
			// .RequireClaim("email") 
			.Build();
		options.Filters.Add(new AuthorizeFilter(policy));
	});

	services.AddSwaggerGen(c =>
	{
		c.SwaggerDoc("v1", new OpenApiInfo { 
			Title = "WebAPiUsingGraphApi", Version = "v1" });
	});
}

In the service responsible for implementing the Graph API client, the ITokenAcquisition interface is required as well as the IHttpClientFactory interface. When creating new instances of the GraphServiceClient, the IHttpClientFactory interface is used to create the HttpClient which is used by the Graph API client.

public class GraphApiClientDirect
{
	private readonly ITokenAcquisition _tokenAcquisition;
	private readonly IHttpClientFactory _clientFactory;

	public GraphApiClientDirect(ITokenAcquisition tokenAcquisition,
		IHttpClientFactory clientFactory)
	{
		_clientFactory = clientFactory;
		_tokenAcquisition = tokenAcquisition;
	}

A new access token is requested for the required scopes using the GetAccessTokenForUserAsync method. This returns a delegated access token which is then used in the DelegateAuthenticationProvider. The GraphServiceClient is created using the HttpClient which was created with the injected IHttpClientFactory. The lifecycle of the HttpClient instances is then handled correctly.

private async Task<GraphServiceClient> GetGraphClient(string[] scopes)
{
	var token = await _tokenAcquisition.GetAccessTokenForUserAsync(
	 scopes).ConfigureAwait(false);

	var client = _clientFactory.CreateClient();
	client.BaseAddress = new Uri("https://graph.microsoft.com/beta");
	client.DefaultRequestHeaders.Accept.Add(
		new MediaTypeWithQualityHeaderValue("application/json"));

	GraphServiceClient graphClient = new GraphServiceClient(client)
	{
		AuthenticationProvider = new DelegateAuthenticationProvider(
		async (requestMessage) =>
		{
			requestMessage.Headers.Authorization = 
				new AuthenticationHeaderValue("bearer", token);
		})
	};

	return graphClient;
}

The Graph API client can then be used to request data from Azure Microsoft Graph API.

public async Task<User> GetGraphApiUser()
{
	var graphclient = await GetGraphClient(
		new string[] { "User.ReadBasic.All", "user.read" })
	   .ConfigureAwait(false);

	return await graphclient.Me.Request()
	   .GetAsync().ConfigureAwait(false);
}

This could also be used to request files from SharePoint or any other resource made available through the Graph REST APIs.

public async Task<string> GetSharepointFile()
{
	var graphclient = await GetGraphClient(
		new string[] { "user.read", "AllSites.Read" }
	).ConfigureAwait(false);

	var user = await graphclient.Me.Request().GetAsync().ConfigureAwait(false);

	if (user == null)
		throw new NotFoundException($"User not found in AD.");

	var sharepointDomain = "damienbodtestsharing.sharepoint.com";
	var relativePath = "/sites/TestDoc";
	var fileName = "aad_ms_login_02.png";

	var site = await graphclient
		.Sites[sharepointDomain]
		.SiteWithPath(relativePath)
		.Request()
		.GetAsync().ConfigureAwait(false);

	var drive = await graphclient
		.Sites[site.Id]
		.Drive
		.Request()
		.GetAsync().ConfigureAwait(false);

	var items = await graphclient
		.Sites[site.Id]
		.Drives[drive.Id]
		.Root
		.Children
		.Request().GetAsync().ConfigureAwait(false);

	var file = items
		.FirstOrDefault(f => f.File != null && f.WebUrl.Contains(fileName));

	var stream = await graphclient
		.Sites[site.Id]
		.Drives[drive.Id]
		.Items[file.Id].Content
		.Request()
		.GetAsync().ConfigureAwait(false);

	var fileAsString = StreamToString(stream);
	return fileAsString;
}
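
The StreamToString helper used above is not part of the snippet; a minimal implementation, assuming the content is read as text, could be:

private static string StreamToString(Stream stream)
{
    // Read the complete stream content as a string (System.IO)
    using var reader = new StreamReader(stream);
    return reader.ReadToEnd();
}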

The Graph API client service can then be used in the API which is protected using the Microsoft.Identity.Web packages.

[Authorize]
[ApiController]
[Route("[controller]")]
public class GraphCallsController : ControllerBase
{
	private readonly GraphApiClientDirect _graphApiClientDirect;

	public GraphCallsController(GraphApiClientDirect graphApiClientDirect)
	{
		_graphApiClientDirect = graphApiClientDirect;
	}

	[HttpGet]
	public async Task<string> Get()
	{
		var user = await _graphApiClientDirect.GetGraphApiUser()
			.ConfigureAwait(false);

		return user.DisplayName;
	}

}

When using the Graph API client in ASP.NET Core applications with delegated user access tokens, the correct initialization should be used. If creating the instance yourself, use the IHttpClientFactory to create the HttpClient instance used in the client. If you create Graph API requests using different scopes, you also need to use the ITokenAcquisition and the IHttpClientFactory interfaces to create the Graph API client.

Links:

https://developer.microsoft.com/en-us/graph/

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/http-requests

https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpclient

Securing an ASP.NET Core API which uses multiple access tokens

This post shows how an ASP.NET Core API can authorize API calls which use different access tokens, either from different identity providers, or from the same identity provider but created for different clients and containing different claims. The access tokens are validated using JWT Bearer authentication as well as an authorization policy which can validate the specific claims in the access tokens.

Code: https://github.com/damienbod/ApiJwtWithTwoSts

The ConfigureServices method adds the authentication services using the AddAuthentication method. Two schemes are added, one for each access token. JWT Bearer tokens are used and the Authority and the Audience properties are used to configure the authentication. If introspection is used, you would define a secret here as well.

The MyApiHandler is added as a service. This provides a way to fulfil the MyApiRequirement which is used in the policy MyPolicy.

Swagger services are added with support for JWT Bearer to make it easier to test.

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IAuthorizationHandler, MyApiHandler>();

	services.AddAuthentication(
	    IdentityServerAuthenticationDefaults.AuthenticationScheme)
		.AddJwtBearer("SchemeStsA", options =>
		{
			options.Audience = "ProtectedApiResourceA";
			options.Authority = "https://localhost:44318";
		})
		.AddJwtBearer("SchemeStsB", options =>
		{
			options.Audience = "ProtectedApiResourceB";
			options.Authority = "https://localhost:44367";
		});

	services.AddAuthorization(options =>
	{
		options.DefaultPolicy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
			.AddAuthenticationSchemes("SchemeStsA", "SchemeStsB")
			.Build();

		options.AddPolicy("MyPolicy", policy =>
		{
			policy.AddRequirements(new MyApiRequirement());
		});
	});

	services.AddControllers();

	services.AddSwaggerGen(c =>
	{
		// add JWT Authentication
		var securityScheme = new OpenApiSecurityScheme
		{
			Name = "JWT Authentication",
			Description = "Enter JWT Bearer token **_only_**",
			In = ParameterLocation.Header,
			Type = SecuritySchemeType.Http,
			Scheme = "bearer", // must be lower case
			BearerFormat = "JWT",
			Reference = new OpenApiReference
			{
				Id = JwtBearerDefaults.AuthenticationScheme,
				Type = ReferenceType.SecurityScheme
			}
		};
		c.AddSecurityDefinition(securityScheme.Reference.Id, securityScheme);
		c.AddSecurityRequirement(new OpenApiSecurityRequirement
		{
			{securityScheme, new string[] { }}
		});

		c.SwaggerDoc("v1", new OpenApiInfo
		{
			Title = "An API ",
			Version = "v1",
			Description = "An API",
			Contact = new OpenApiContact
			{
				Name = "damienbod",
				Email = string.Empty,
				Url = new Uri("https://damienbod.com/"),
			},
		});
	});
}

The Configure method adds the support for Swagger with the JWT Bearer auth UI and the standard middleware setup like the templates.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
	// IdentityModelEventSource.ShowPII = true;
	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}
	else
	{
		app.UseExceptionHandler("/Error");
		app.UseHsts();
	}

	app.UseSwagger();
	app.UseSwaggerUI(c =>
	{
		c.SwaggerEndpoint("/swagger/v1/swagger.json", "Service API One");
		c.RoutePrefix = string.Empty;
	});

	app.UseStaticFiles();
	app.UseRouting();
	app.UseAuthentication();
	app.UseAuthorization();

	app.UseEndpoints(endpoints =>
	{
		endpoints.MapControllers();
	});
}

A new class MyApiRequirement was created which implements the IAuthorizationRequirement interface.

using Microsoft.AspNetCore.Authorization;

namespace WebApi
{
    public class MyApiRequirement : IAuthorizationRequirement
    {
    }
}

The MyApiHandler implements the AuthorizationHandler with the requirement MyApiRequirement. This is used to implement the logic to fulfil the requirement MyApiRequirement. In this demo, depending on the client_id claim in the access token, a different scope is required to fulfil the requirement. Any logic can be used here depending on your business requirements.

using Microsoft.AspNetCore.Authorization;
using System;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;

namespace WebApi
{
    public class MyApiHandler : AuthorizationHandler<MyApiRequirement>
    {
        protected override Task HandleRequirementAsync(
           AuthorizationHandlerContext context, MyApiRequirement requirement)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));
            if (requirement == null)
                throw new ArgumentNullException(nameof(requirement));

            var client_id = context.User.Claims
                 .FirstOrDefault(t => t.Type == "client_id");
            var scope = context.User.Claims
                 .FirstOrDefault(t => t.Type == "scope");

            if (AccessTokenValid(client_id, scope))
            {
                context.Succeed(requirement);
            }

            return Task.CompletedTask;
        }

        private bool AccessTokenValid(Claim client_id, Claim scope)
        {
            if (client_id != null && client_id.Value == "CC_STS_A")
            {
                return StsAScopeAValid(scope);
            }

            if (client_id != null && client_id.Value == "CC_STS_B")
            {
                return StsBScopeBValid(scope);
            }

            return false;
        }

        private bool StsAScopeAValid(Claim scope)
        {
            if (scope != null && scope.Value == "scope_a")
            {
                return true;
            }

            return false;
        }

        private bool StsBScopeBValid(Claim scope)
        {
            if (scope != null && scope.Value == "scope_b")
            {
                return true;
            }

            return false;
        }

    }
}

The policy and the authentication schemes can be used in ASP.NET Core controllers. Every Authorize attribute must succeed for the request with the access token to be given access to the API. This is why a single policy was used to implement the different authorization rules for the different access tokens. If this was more complex, it would make sense to have a separate controller for each access token type. The allowed schemes can be defined in a comma separated string.

using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace WebApi.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        [Authorize(AuthenticationSchemes = "SchemeStsA,SchemeStsB", Policy = "MyPolicy")]
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "data 1 from the api", "data 2 from the api" };
        }
    }
}

Getting an access token

In the example, IdentityServer4 is used as the identity provider and the client credentials flow is used to get an access token for the APP to APP access. The trusted client uses a shared secret to get the token. OAuth has some RFCs which can improve this and avoid the use of a shared secret, or, if all applications are under your control, you could use Azure Key Vault to share the secret which is auto-generated in an Azure DevOps pipeline.

private async Task<AccessTokenItem> getApiToken(string api_name, string api_scope, string secret)
{
	try
	{
		var disco = await HttpClientDiscoveryExtensions.GetDiscoveryDocumentAsync(
			_httpClient,
			_authConfigurations.Value.StsServer);

		if (disco.IsError)
		{
			_logger.LogError($"disco error Status code: {disco.IsError}, Error: {disco.Error}");
			throw new ApplicationException($"Status code: {disco.IsError}, Error: {disco.Error}");
		}

		var tokenResponse = await HttpClientTokenRequestExtensions.RequestClientCredentialsTokenAsync(_httpClient, new ClientCredentialsTokenRequest
		{
			Scope = api_scope,
			ClientSecret = secret,
			Address = disco.TokenEndpoint,
			ClientId = api_name
		});

		if (tokenResponse.IsError)
		{
			_logger.LogError($"tokenResponse.IsError Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
			throw new ApplicationException($"Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
		}

		return new AccessTokenItem
		{
			ExpiresIn = DateTime.UtcNow.AddSeconds(tokenResponse.ExpiresIn),
			AccessToken = tokenResponse.AccessToken
		};

	}
	catch (Exception e)
	{
		_logger.LogError($"Exception {e}");
		throw new ApplicationException($"Exception {e}");
	}
}
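
The AccessTokenItem type returned above is not shown in the snippet; a minimal class matching how it is used could look like this:

public class AccessTokenItem
{
    // The raw access token returned from the token endpoint
    public string AccessToken { get; set; } = string.Empty;

    // Absolute expiry time, calculated from the expires_in value
    public DateTime ExpiresIn { get; set; }
}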

Using Postman

Postman can also be used to get an access token for this OAuth client credentials flow.

POST https://localhost:44367/connect/token

scope:scope_b
client_id:CC_STS_B
client_secret:cc_secret
grant_type:client_credentials

This uses the parameters shown above.
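
On the wire, this corresponds to a standard OAuth2 client credentials token request with a form-encoded body, roughly:

POST https://localhost:44367/connect/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=CC_STS_B&client_secret=cc_secret&scope=scope_b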

Calling the payload API

The access token can be used to access the payload data. This can be added directly to your Swagger client.

The request is sent and the data is returned.

The access token can also be used in C# code to request the data.

public async Task<JArray> GetApiDataAsync()
{
	try
	{
		var client = _clientFactory.CreateClient();

		client.BaseAddress = new Uri(_authConfigurations.Value.ProtectedApiUrl);

		var access_token = await _apiTokenClient.GetApiToken(
			"CC_STS_B",
			"scope_b",
			"cc_secret"
		);

		client.SetBearerToken(access_token);

		var response = await client.GetAsync("api/values");
		if (response.IsSuccessStatusCode)
		{
			var responseContent = await response.Content.ReadAsStringAsync();
			var data = JArray.Parse(responseContent);

			return data;
		}

		throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
	}
	catch (Exception e)
	{
		throw new ApplicationException($"Exception {e}");
	}
}

Links

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction

Using multiple APIs in Angular and ASP.NET Core with Azure AD authentication

This article shows how an Angular application can be used to access many APIs in a secure way. An API is created specifically for the Angular UI, and the further APIs can only be accessed from the trusted backend, which is under our control.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate

Setup

The applications are set up so that the Angular application only accesses a single API which was created specifically for the UI. All other APIs are deployed in a trusted zone and require a secret or a certificate to use the service. With this, only a single access token leaves the secure zone and there is no need to handle multiple tokens in an unsecure browser. Secondly, the API calls can be optimized so that the network load which comes with so many SPAs can be reduced. The API is our gateway to the data required by the UI.

This is very similar to the backend for frontend (BFF) architecture, which is more secure than this setup because the security for the UI is also implemented in the trusted backend for the UI, i.e. no access tokens in the browser storage and no refresh/renew in the browser. The advantage here is that the structure is easier to set up with existing UI teams and backend teams, and technology stacks like ASP.NET Core and Angular support this structure better.

In this demo, we will be implementing the SPA in Angular, but this could easily be switched out for a Blazor, React or Vue.js UI. The authentication is implemented using Azure AD.

The APIs

The API which was created for the UI uses Microsoft.Identity.Web to implement the Azure AD security. All API HTTP requests to this service require a valid access token which was created for this service. In the Startup class, the AddMicrosoftIdentityWebApiAuthentication method is used to add the auth services for Azure AD to the application. AddHttpClient is used so that the IHttpClientFactory can be used to access the downstream APIs. The different API client services are added as scoped services. CORS is set up so the Angular application can access the API. The CORS setup for the UI API calls should be configured as strictly as possible. An authorize policy is added which validates the azp claim. This value must match the App registration setup for your UI application. If different UIs or different access tokens are allowed, then you would have to change this. An in memory cache is used to store the downstream API access tokens. The API accesses three different types of downstream APIs: a delegated API which uses the OBO flow to get a token, an application API which uses the client credentials flow and the .default scope, and the Graph API which again uses a delegated token acquired with the OBO flow.


public void ConfigureServices(IServiceCollection services)
{
	services.AddHttpClient();
	services.AddOptions();

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	IdentityModelEventSource.ShowPII = true;
	JwtSecurityTokenHandler.DefaultMapInboundClaims = false;

	services.AddCors(options =>
	{
		options.AddPolicy("AllowAllOrigins",
			builder =>
			{
				builder
					.AllowCredentials()
					.WithOrigins(
						"https://localhost:4200")
					.SetIsOriginAllowedToAllowWildcardSubdomains()
					.AllowAnyHeader()
					.AllowAnyMethod();
			});
	});

	services.AddScoped<GraphApiClientService>();
	services.AddScoped<ServiceApiClientService>();
	services.AddScoped<UserApiClientService>();

	services.AddMicrosoftIdentityWebApiAuthentication(Configuration)
		 .EnableTokenAcquisitionToCallDownstreamApi()
		 .AddInMemoryTokenCaches();

	services.AddControllers(options =>
	{
		var policy = new AuthorizationPolicyBuilder()
			.RequireAuthenticatedUser()
			.Build();
		options.Filters.Add(new AuthorizeFilter(policy));
	});

	services.AddAuthorization(options =>
	{
		options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
		{
			// Validate ClientId from token
			// only accept tokens issued ....
			validateAccessTokenPolicy.RequireClaim("azp", "ad6b0351-92b4-4ee9-ac8d-3e76e5fd1c67");
		});
	});

	// .... + swagger
}

The API using no extra services

The API which returns data directly uses the JwtBearerDefaults.AuthenticationScheme scheme to validate the token and requires that the ValidateAccessTokenPolicy passes the authorize checks. Then the data is returned. This is pretty straightforward.

using System.Collections.Generic;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ApiWithMutlipleApis.Controllers
{
    [Authorize(Policy = "ValidateAccessTokenPolicy", 
        AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
    [ApiController]
    [Route("[controller]")]
    public class DirectApiController : ControllerBase
    {
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new List<string> { "some data", "more data", "loads of data" };
        }
    }
}

API which uses the Application API

The ServiceApiCallsController implements the API which uses the ServiceApiClientService to request data from the application API. This is an APP to APP request and cannot be used from any type of SPA because the API can only be accessed by using a secret or a certificate. SPAs cannot keep or use secrets. Using it from our trusted web API solves this, and the data can be used as needed or allowed.

using System.Collections.Generic;
using System.Threading.Tasks;
using ApiWithMutlipleApis.Services;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ApiWithMutlipleApis.Controllers
{
    [Authorize(Policy = "ValidateAccessTokenPolicy", AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
    [ApiController]
    [Route("[controller]")]
    public class ServiceApiCallsController : ControllerBase
    {
        private ServiceApiClientService _serviceApiClientService;

        public ServiceApiCallsController(ServiceApiClientService serviceApiClientService)
        {
            _serviceApiClientService = serviceApiClientService;
        }

        [HttpGet]
        public async Task<IEnumerable<string>> Get()
        {
            return await _serviceApiClientService.GetApiDataAsync();
        }
    }
}

The ServiceApiClientService uses the ITokenAcquisition to get an access token for the .default scope of the API. The access_as_application scope is added to the Azure App Registration for this API. The access token is requested using the OAuth client credentials flow. This flow is normally not used for delegated users. It is a good fit if you have some type of global service or application level features with no users involved.

using Microsoft.Identity.Web;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

namespace ApiWithMutlipleApis.Services
{
    public class ServiceApiClientService
    {
        private readonly IHttpClientFactory _clientFactory;
        private readonly ITokenAcquisition _tokenAcquisition;

        public ServiceApiClientService(
            ITokenAcquisition tokenAcquisition,
            IHttpClientFactory clientFactory)
        {
            _clientFactory = clientFactory;
            _tokenAcquisition = tokenAcquisition;
        }

        public async Task<IEnumerable<string>> GetApiDataAsync()
        {

            var client = _clientFactory.CreateClient();

            var scope = "api://b178f3a5-7588-492a-924f-72d7887b7e48/.default"; // CC flow access_as_application";
            var accessToken = await _tokenAcquisition.GetAccessTokenForAppAsync(scope);

            client.BaseAddress = new Uri("https://localhost:44324");
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

            var response = await client.GetAsync("ApiForServiceData");
            if (response.IsSuccessStatusCode)
            {
                var data = await JsonSerializer.DeserializeAsync<List<string>>(
                    await response.Content.ReadAsStreamAsync());

                return data;
            }

            throw new Exception("oh no...");
        }
    }
}

API using the delegated API

The DelegatedUserApiCallsController is used to access a downstream API which uses delegated access tokens. This would be more the standard type of request in Azure. The UserApiClientService is used to access the API.

using System.Collections.Generic;
using System.Threading.Tasks;
using ApiWithMutlipleApis.Services;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ApiWithMutlipleApis.Controllers
{
    [Authorize(Policy = "ValidateAccessTokenPolicy", AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
    [ApiController]
    [Route("[controller]")]
    public class DelegatedUserApiCallsController : ControllerBase
    {
        private UserApiClientService _userApiClientService;

        public DelegatedUserApiCallsController(UserApiClientService userApiClientService)
        {
            _userApiClientService = userApiClientService;
        }

        [HttpGet]
        public async Task<IEnumerable<string>> Get()
        {
            return await _userApiClientService.GetApiDataAsync();
        }
    }
}

The UserApiClientService uses the ITokenAcquisition to get an access token for the access_as_user scope of the API. The access_as_user scope is added to the Azure App Registration for this API. The access token is requested using the On-Behalf-Of flow (OBO). The access tokens are added to an in memory cache.

using Microsoft.Identity.Web;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

namespace ApiWithMutlipleApis.Services
{
    public class UserApiClientService
    {
        private readonly IHttpClientFactory _clientFactory;
        private readonly ITokenAcquisition _tokenAcquisition;

        public UserApiClientService(
            ITokenAcquisition tokenAcquisition,
            IHttpClientFactory clientFactory)
        {
            _clientFactory = clientFactory;
            _tokenAcquisition = tokenAcquisition;
        }

        public async Task<IEnumerable<string>> GetApiDataAsync()
        {

            var client = _clientFactory.CreateClient();

            var scopes = new List<string> { "api://b2a09168-54e2-4bc4-af92-a710a64ef1fa/access_as_user" };
            var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(scopes);

            client.BaseAddress = new Uri("https://localhost:44395");
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

            var response = await client.GetAsync("ApiForUserData");
            if (response.IsSuccessStatusCode)
            {
                var data = await JsonSerializer.DeserializeAsync<List<string>>(
                    await response.Content.ReadAsStreamAsync());

                return data;
            }

            throw new Exception("oh no...");
        }
    }
}

API using the Graph API

The GraphApiCallsController API is used to access the Microsoft Graph API using the GraphApiClientService. This service uses a delegated access token to access the Microsoft Graph API delegated APIs which have been exposed in the Azure App Registration.

using System.Collections.Generic;
using System.Threading.Tasks;
using ApiWithMutlipleApis.Services;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ApiWithMutlipleApis.Controllers
{
    [Authorize(Policy = "ValidateAccessTokenPolicy", AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
    [ApiController]
    [Route("[controller]")]
    public class GraphApiCallsController : ControllerBase
    {
        private GraphApiClientService _graphApiClientService;

        public GraphApiCallsController(GraphApiClientService graphApiClientService)
        {
            _graphApiClientService = graphApiClientService;
        }

        [HttpGet]
        public async Task<IEnumerable<string>> Get()
        {
            var userData = await _graphApiClientService.GetGraphApiUser();
            return new List<string> { $"DisplayName: {userData.DisplayName}",
                $"GivenName: {userData.GivenName}", $"AboutMe: {userData.AboutMe}" };
        }
    }
}

The GraphApiClientService uses the ITokenAcquisition to get an access token for the required Graph API scopes. Microsoft Graph API also has its own internal auth provider which implements access token management like Microsoft.Identity.Web; you could use that as well. I use the ITokenAcquisition for token management, like in the previous two APIs, for consistency.

using Microsoft.Graph;
using Microsoft.Identity.Web;
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace ApiWithMutlipleApis.Services
{
    public class GraphApiClientService
    {
        private readonly ITokenAcquisition _tokenAcquisition;
        private readonly IHttpClientFactory _clientFactory;

        public GraphApiClientService(ITokenAcquisition tokenAcquisition,
            IHttpClientFactory clientFactory)
        {
            _clientFactory = clientFactory;
            _tokenAcquisition = tokenAcquisition;
        }

        public async Task<User> GetGraphApiUser()
        {
            var graphclient = await GetGraphClient(new string[] { "User.ReadBasic.All", "user.read" })
               .ConfigureAwait(false);

            return await graphclient.Me.Request().GetAsync().ConfigureAwait(false);
        }

        public async Task<string> GetGraphApiProfilePhoto()
        {
            try
            {
                var graphclient = await GetGraphClient(new string[] { "User.ReadBasic.All", "user.read" })
               .ConfigureAwait(false);

                var photo = string.Empty;
                // Get user photo
                using (var photoStream = await graphclient.Me.Photo
                    .Content.Request().GetAsync().ConfigureAwait(false))
                {
                    byte[] photoByte = ((MemoryStream)photoStream).ToArray();
                    photo = Convert.ToBase64String(photoByte);
                }

                return photo;
            }
            catch
            {
                return string.Empty;
            }   
        }

       
        private async Task<GraphServiceClient> GetGraphClient(string[] scopes)
        {
            var token = await _tokenAcquisition.GetAccessTokenForUserAsync(
             scopes).ConfigureAwait(false);

            var client = _clientFactory.CreateClient();
            client.BaseAddress = new Uri("https://graph.microsoft.com/beta");
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

            GraphServiceClient graphClient = new GraphServiceClient(client)
            {
                AuthenticationProvider = new DelegateAuthenticationProvider(async (requestMessage) =>
                {
                    requestMessage.Headers.Authorization = new AuthenticationHeaderValue("bearer", token);
                })
            };

            graphClient.BaseUrl = "https://graph.microsoft.com/beta";
            return graphClient;
        }
    }
}

In the app.settings.json file, add the Azure AD App registration settings to match the configuration for this application.

{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "your domain",
    "TenantId": "your tenant id",
    "ClientId": "your client id"
  }
}

Add the ClientSecret to the user secrets in your application. In a deployed version, you could add this to your Azure Key Vault.

{
  "AzureAd": {
    "ClientSecret": "your app registration secret"
  }
}

The API permissions for the Azure APIs which are used from this API must be added here. A client secret is also added to the App registration definition for the API project; this client secret is used to access the downstream APIs. Application scopes as well as delegated scopes are exposed here. You could also use a certificate instead of a client secret.

The Application API

The application API is very simple to set up. This uses the standard Microsoft.Identity.Web settings for an API. The authorization middleware checks that the azpacr claim has a value of 1 to make sure only a token which used a secret to get the access token can access this API. If using certificates, the value would be 2. The azp claim is used to validate that the correct Web API requested the access token.

public void ConfigureServices(IServiceCollection services)
{
	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	IdentityModelEventSource.ShowPII = true;
	JwtSecurityTokenHandler.DefaultMapInboundClaims = false;

	services.AddSingleton<IAuthorizationHandler, HasServiceApiRoleHandler>();

	services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

	services.AddControllers();

	services.AddAuthorization(options =>
	{
		options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
		{
			validateAccessTokenPolicy.Requirements.Add(new HasServiceApiRoleRequirement());
			
			// Validate id of application for which the token was created
			// In this case the UI application 
			validateAccessTokenPolicy.RequireClaim("azp", "2b50a014-f353-4c10-aace-024f19a55569");

			// only allow tokens which used "Private key JWT Client authentication"
			// // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
			// Indicates how the client was authenticated. For a public client, the value is "0". 
			// If client ID and client secret are used, the value is "1". 
			// If a client certificate was used for authentication, the value is "2".
			validateAccessTokenPolicy.RequireClaim("azpacr", "1");
		});
	});

	// add swagger ...

}

The AuthorizationHandler is used to fulfil the HasServiceApiRoleRequirement which the API uses in its policy to authorize the access token. The handler validates that a roles claim with the value service-api is present in the access token.

using Microsoft.AspNetCore.Authorization;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;

namespace ServiceApi
{
    public class HasServiceApiRoleHandler : AuthorizationHandler<HasServiceApiRoleRequirement>
    {
        protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, HasServiceApiRoleRequirement requirement)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));
            if (requirement == null)
                throw new ArgumentNullException(nameof(requirement));

            var roleClaims = context.User.Claims.Where(t => t.Type == "roles");

            if (roleClaims != null && HasServiceApiRole(roleClaims))
            {
                context.Succeed(requirement);
            }

            return Task.CompletedTask;
        }

        private bool HasServiceApiRole(IEnumerable<Claim> roleClaims)
        {
            // we could also validate the "access_as_application" scope
            foreach(var role in roleClaims)
            {
                if("service-api" == role.Value)
                {
                    return true;
                }
            }

            return false;
        }
    }
}

The API uses the Policy ValidateAccessTokenPolicy to authorize the access token.

using System.Collections.Generic;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ServiceApi.Controllers
{
    [Authorize(Policy = "ValidateAccessTokenPolicy", AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
    [ApiController]
    [Route("[controller]")]
    public class ApiForServiceDataController : ControllerBase
    {
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new List<string> { "app-app Service API data 1", "service API data 2" };
        }
    }
}

User API for the delegated access

The API which uses the delegated access token, which the frontend API got by using the OBO flow, is implemented like in this blog: Implement a Web APP and an ASP.NET Core Secure API using Azure AD which delegates to a second API. Again the azpacr claim is used to check that a client secret was used to acquire the access token calling the API.

services.AddAuthorization(options =>
{
	options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
	{
		validateAccessTokenPolicy.RequireClaim("azp", "2b50a014-f353-4c10-aace-024f19a55569");

		validateAccessTokenPolicy.RequireClaim("azpacr", "1");
	});
});

The Angular UI

Code: Angular CLI project

The UI part of the solution is implemented in Angular. The Angular SPA application, which runs completely in the browser of the client, needs to authenticate and store its tokens somewhere in the browser, usually in the session storage. The Angular SPA cannot keep a secret; it is a public client. To authenticate, the application uses an Azure AD public client created using an Azure App Registration. The Azure App Registration is set up to support the OpenID Connect code flow with PKCE and uses a delegated access token for our backend. It only has access to the top API.

Only the single access token is moved around and stored in the public zone. This access token should have a short lifespan and be renewed or refreshed. There are two ways of renewing or refreshing access tokens in a SPA. One way is a silent renew in an iframe, but this is now being blocked by Safari and Brave and soon other browsers. The second way is to use refresh tokens. This can lead to other security problems, but the risks can be reduced by using best practices like one-time usage and so on. Another way of reducing the risk would be to use the revocation endpoint to invalidate the refresh token or access token, but this is not yet supported by Azure AD. Using reference tokens would also help, but this is also not supported by Azure AD. For this reason, as little as possible should be implemented in the unsecure browser. Using multiple access tokens in your SPA is not a good idea. To get a second access token, a full UI authentication is required (silent, in a popup, or an app redirect) and then the second access token would also be public. We want as few public security parts as possible.

The npm package angular-auth-oidc-client can be used to implement the security flows for the Angular app. Other Angular npm packages also work fine; you can choose the one you like or know best. Add the security lib configuration to the app.module so that it matches the Azure App Registration for this app.

We will use an Auth Guard to protect the routes which must be protected. You MUST leave the default route, and maybe an error or info route, unprotected due to the constraints of the OpenID Connect code flow. The redirect steps of the flow CANNOT be protected with the auth guard. The auth guard is added to the routes.


export function configureAuth(oidcConfigService: OidcConfigService) {
  return () =>
    oidcConfigService.withConfig({
            // Azure AD tenant used as the token server
            stsServer: 'https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/v2.0',
            authWellknownEndpoint: 'https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/v2.0',
            redirectUrl: window.location.origin,
            // client ID of the SPA App registration
            clientId: 'ad6b0351-92b4-4ee9-ac8d-3e76e5fd1c67',
            // delegated API scope; offline_access is required for refresh tokens
            scope: 'openid profile email api://2b50a014-f353-4c10-aace-024f19a55569/access_as_user offline_access',
            responseType: 'code',
            // renew the tokens using refresh tokens instead of an iframe
            silentRenew: true,
            useRefreshToken: true,
            maxIdTokenIatOffsetAllowedInSeconds: 600,
            issValidationOff: false,
            // the user data is taken from the ID token, no userinfo request
            autoUserinfo: false,
            logLevel: LogLevel.Debug
    });
}

@NgModule({
  declarations: [
    AppComponent,
    HomeComponent,
    NavMenuComponent,
    UnauthorizedComponent,
    DirectApiCallComponent,
    GraphApiCallComponent,
    ApplicationApiCallComponent,
    DelegatedApiCallComponent
  ],
  imports: [
    BrowserModule,
    RouterModule.forRoot([
    { path: '', redirectTo: 'home', pathMatch: 'full' },
    { path: 'home', component: HomeComponent },
    { path: 'directApiCall', component: DirectApiCallComponent, canActivate: [AuthorizationGuard] },
    { path: 'graphApiCall', component: GraphApiCallComponent, canActivate: [AuthorizationGuard] },
    { path: 'applicationApiCall', component: ApplicationApiCallComponent, canActivate: [AuthorizationGuard] },
    { path: 'delegatedApiCall', component: DelegatedApiCallComponent, canActivate: [AuthorizationGuard] },
    { path: 'unauthorized', component: UnauthorizedComponent },
  ], { relativeLinkResolution: 'legacy' }),
    AuthModule.forRoot(),
    HttpClientModule,
  ],
  providers: [
    OidcConfigService,
    {
      provide: APP_INITIALIZER,
      useFactory: configureAuth,
      deps: [OidcConfigService],
      multi: true,
    },
    {
      provide: HTTP_INTERCEPTORS,
      useClass: AuthInterceptor,
      multi: true,
    },
    AuthorizationGuard
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}

The AuthorizationGuard is implemented using the CanActivate interface. The oidcSecurityService.isAuthenticated$ observable can be used to check the authentication state.

import { Injectable } from '@angular/core';
import { ActivatedRouteSnapshot, CanActivate, Router, RouterStateSnapshot } from '@angular/router';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { Observable } from 'rxjs';
import { map } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class AuthorizationGuard implements CanActivate {
    constructor(private oidcSecurityService: OidcSecurityService, private router: Router) {}

    canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> {
        return this.oidcSecurityService.isAuthenticated$.pipe(
            map((isAuthorized: boolean) => {
                console.log('AuthorizationGuard, canActivate isAuthorized: ' + isAuthorized);

                if (!isAuthorized) {
                    this.router.navigate(['/unauthorized']);
                    return false;
                }

                return true;
            })
        );
    }
}

The checkAuth() method of angular-auth-oidc-client (wrapped here by the application's AuthService) is called once in the app.component class. This component is part of the default route, which is not protected by the auth guard. When the security flow redirects back to the app, or the app is refreshed in the browser, this initializes the correct authentication state for the app.

import { Component, OnInit } from '@angular/core';
import { AuthService } from './auth.service';

@Component({
  selector: 'app-root',
  templateUrl: 'app.component.html',
})
export class AppComponent implements OnInit {
  constructor(public authService: AuthService) {}

  ngOnInit() {
    this.authService
      .checkAuth()
      .subscribe((isAuthenticated) =>
        console.log('app authenticated', isAuthenticated)
      );
  }
}
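
The AuthService used here, and in the interceptor and API call components below, is a small application-specific wrapper around OidcSecurityService from angular-auth-oidc-client. It is not listed in full in this post; a minimal sketch could look like the following, where the member names (checkAuth, token, signedIn$, userData$, login, logout) are assumptions based purely on how the service is used in the snippets in this post.

import { Injectable } from '@angular/core';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class AuthService {
  constructor(private oidcSecurityService: OidcSecurityService) {}

  // authentication state, re-exposed for the components
  get signedIn$(): Observable<boolean> {
    return this.oidcSecurityService.isAuthenticated$;
  }

  // user data taken from the ID token (autoUserinfo is false)
  get userData$(): Observable<any> {
    return this.oidcSecurityService.userData$;
  }

  // access token, added to API requests by the AuthInterceptor
  get token(): string {
    return this.oidcSecurityService.getToken();
  }

  // processes the code flow redirect or restores the state on a refresh
  checkAuth(): Observable<boolean> {
    return this.oidcSecurityService.checkAuth();
  }

  login(): void {
    this.oidcSecurityService.authorize();
  }

  logout(): void {
    this.oidcSecurityService.logoff();
  }
}

A login or logout button in the NavMenuComponent would then simply call login() or logout() on this service.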

An AuthInterceptor is used to add the access token to outgoing HTTP calls. An HttpInterceptor applies to ALL HTTP requests, so care needs to be taken that the access token is only sent with requests to the APIs for which it was intended.

import { HttpInterceptor, HttpRequest, HttpHandler } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { AuthService } from './auth.service';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  private secureRoutes = ['https://localhost:44390'];

  constructor(private authService: AuthService) {}

  intercept(
    request: HttpRequest<any>,
    next: HttpHandler
  ) {
    if (!this.secureRoutes.find((x) => request.url.startsWith(x))) {
      return next.handle(request);
    }

    const token = this.authService.token;

    if (!token) {
      return next.handle(request);
    }

    request = request.clone({
      headers: request.headers.set('Authorization', 'Bearer ' + token),
    });

    return next.handle(request);
  }
}
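
The API base URL is hardcoded in the secureRoutes array for the demo. In a real application it would typically come from the Angular environment files instead; a minimal sketch, where the apiUrl property name is an assumption and not part of the original code:

// environment.ts (sketch)
export const environment = {
  production: false,
  // base URL of the API protected with Azure AD
  apiUrl: 'https://localhost:44390',
};

The interceptor and the API call components could then reference environment.apiUrl instead of repeating the URL.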

The DirectApiCallComponent implements the view and uses the HttpClient to get the secure data from the API protected with Azure AD.

import { HttpClient } from '@angular/common/http';
import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs';
import { finalize } from 'rxjs/operators';
import { AuthService } from '../auth.service';

@Component({
  selector: 'app-direct-api-call',
  templateUrl: 'directApiCall.component.html',
})
export class DirectApiCallComponent implements OnInit {
  userData$: Observable<any>;
  dataFromAzureProtectedApi$: Observable<any>;
  isAuthenticated$: Observable<boolean>;
  httpRequestRunning = false;

  constructor(
    private authService: AuthService,
    private httpClient: HttpClient
  ) {}

  ngOnInit() {
    this.userData$ = this.authService.userData$;
    this.isAuthenticated$ = this.authService.signedIn$;
  }

  callApi() {
    this.httpRequestRunning = true;
    this.dataFromAzureProtectedApi$ = this.httpClient
      .get('https://localhost:44390/DirectApi')
      .pipe(finalize(() => (this.httpRequestRunning = false)));
  }
}

The data is displayed in the template for the Angular component.


<div *ngIf="isAuthenticated$ | async as isAuthenticated">

  <button class="btn btn-primary" type="button" (click)="callApi()" [disabled]="httpRequestRunning">
    <span class="spinner-border spinner-border-sm" role="status" aria-hidden="true" [hidden]="!httpRequestRunning" ></span>
    Request Data
  </button>

  <br/><br/>

  Is Authenticated: {{ isAuthenticated }}

  <br/><br/>

  <div class="card">
    <div class="card-header">Data from direct API</div>
    <div class="card-body">
      <pre>{{ dataFromAzureProtectedApi$ | async | json }}</pre>
    </div>
  </div>

</div>


With everything in place, the applications can be started and run.

By using ASP.NET Core as a gateway for further APIs or services, it is very easy to add things like databases, storage, Azure Service Bus, IoT solutions, or any other type of Azure or cloud service, as all of these have straightforward integrations for ASP.NET Core.

The solution could then be further improved by adding network security. A simple VNET could be created and the protected APIs made available only inside the VNET. This costs nothing and is simple to implement. You could also use Cloudflare or Azure Firewall as a firewall.

In a follow-up post, I plan to implement authorization using roles and groups.

Links

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/howto-saml-token-encryption

Authentication and the Azure SDK

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#second-case-access-token-request-with-a-certificate

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-credential-flows

https://tools.ietf.org/html/rfc7523

https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication

https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Assertions

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow

https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates#describing-client-certificates-to-use-by-configuration

API Security with OAuth2 and OpenID Connect in Depth with Kevin Dockx, August 2020

https://www.scottbrady91.com/OAuth/Removing-Shared-Secrets-for-OAuth-Client-Authentication

https://github.com/KevinDockx/ApiSecurityInDepth

https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki

https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles
