Contact Repository is a data integration service between Oriflame's internal systems and various external marketing tools (such as Salesforce Marketing Cloud). Technically, it is a set of Azure Functions that leverage other Azure resources (Blob Storage, Table Storage, Service Bus, etc.) to achieve this goal.

While upgrading (refactoring) our Azure Functions to the isolated .NET 6 model, we also focused on some security aspects. Unfortunately, we couldn't give these aspects enough attention during the proof-of-concept phase 😱. By addressing them now, we have significantly improved the security level of the entire service.

Managed Identity

One of the most important improvements we have made is to use managed identities whenever possible. This has changed the application settings of our App Service. Originally, the settings could look like this:
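The values below are placeholders only, shown as a Bicep appSettings fragment rather than our real configuration:

// Sketch only: secrets embedded directly in the application settings
var originalAppSettings = [
  { name: 'AzureWebJobsStorage', value: 'DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<secret-key>' }
  { name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING', value: 'DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<secret-key>' }
  { name: 'BlobStorageConfigurations:ConnectionString', value: 'DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<secret-key>' }
  { name: 'ServiceBusConfigurations:ConnectionString', value: 'Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<secret-key>' }
]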

As you can see, there are:

  • A connection string to the Storage Account (in AzureWebJobsStorage, BlobStorageConfigurations:ConnectionString and WEBSITE_CONTENTAZUREFILECONNECTIONSTRING).
  • A connection string to the Service Bus (in ServiceBusConfigurations:ConnectionString).

After refactoring, the settings look much friendlier even to security experts:
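Again a sketch with placeholder values; the Key Vault references use the standard @Microsoft.KeyVault(SecretUri=...) syntax:

// Sketch only: no secrets left in the application settings
var refactoredAppSettings = [
  // Key Vault references where managed identity is not yet supported
  { name: 'AzureWebJobsStorage', value: '@Microsoft.KeyVault(SecretUri=https://<key-vault>.vault.azure.net/secrets/<secret-name>/)' }
  { name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING', value: '@Microsoft.KeyVault(SecretUri=https://<key-vault>.vault.azure.net/secrets/<secret-name>/)' }
  // Plain endpoints: access is authorised via the managed identity, no keys involved
  { name: 'BlobServiceUri', value: 'https://<account>.blob.core.windows.net/' }
  { name: 'TableStorageUri', value: 'https://<account>.table.core.windows.net/' }
  { name: 'ServiceBusConnection__fullyQualifiedNamespace', value: '<namespace>.servicebus.windows.net' }
]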

All sensitive information has been removed:

  • The Storage Account connection string has been replaced by a reference to our Key Vault in the cases where managed identity is not yet supported by Microsoft (AzureWebJobsStorage, WEBSITE_CONTENTAZUREFILECONNECTIONSTRING).
  • The blob service endpoint (BlobServiceUri) or the table service endpoint (TableStorageUri) is used to access the Storage Account.
  • The Service Bus namespace (ServiceBusConnection__fullyQualifiedNamespace) is used to access the queue in the Service Bus.

Azure RBAC

Azure's role-based access control is the second very important piece of this puzzle. While a managed identity gives your function an identity managed by Azure, RBAC provides a number of different roles that can be assigned to such identities (as well as to users and groups). This allows us to control the level of privileges that a function receives. Typically, we need to allow a function to:

  • read from a table
  • write to a table
  • upload a blob
  • read a blob
  • send a message to a topic/queue
  • etc.

All this can be easily achieved by using only the built-in roles, although creating your own custom roles is possible.
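To make the mapping concrete, here is a sketch of how the operations above might translate to built-in role definition IDs in Bicep. The GUIDs quoted here are the well-known built-in role IDs, but they should always be verified against the official list of built-in roles before use:

// Sketch: resolving built-in role definition IDs in Bicep (verify the GUIDs against the built-in roles list)
var builtInRoles = {
  storageTableDataReader: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '76199698-9eea-4c19-bc75-cec21354c6b6') // read from a table
  storageTableDataContributor: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '0a9a7e1f-b9d0-4cc4-a60d-0319b160aaa3') // write to a table
  storageBlobDataReader: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1') // read a blob
  storageBlobDataContributor: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe') // upload a blob
  serviceBusDataSender: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '69a216fc-b8fb-44d8-bc22-1f3c2cd27a39') // send a message to a topic/queue
}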

The roles themselves are the first security perspective offered by Azure: they define the set of actions that an entity may perform (see, for example, the Storage Table Data Contributor and Storage Table Data Reader roles). The second perspective is scope. In the case of table storage, this can be a specific table or the entire table storage. With these two options, you can effectively follow the principle of least privilege.

Infrastructure as Code: Bicep

As for the infrastructure, almost the entire Contact Repository project (about 95%) is defined in Bicep. Only a few small parts remain as ARM templates, which still need to be rewritten, plus a small PowerShell script 😉. Let us dive a bit into this Bicep code to see how we handle the security aspects described above.

The basic approach can be summarised as follows:

  • For each built-in role we need to assign, there is a separate module.
  • To assign the role, you need to know its name (ID). The list of all built-in roles can be found here.
  • Each module supports two possibilities: an assignment scoped to the specific table (if the table parameter was specified) or an assignment scoped to the whole table storage (otherwise), as sketched below.
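To give an idea, here is a simplified sketch of what a module like StorageTableDataContributor.bicep could contain. It is not a copy of our production code; the API versions, the dummy-name trick and the role GUID are illustrative and should be verified:

// StorageTableDataContributor.bicep (sketch)
param storageAccountName string
param principalId string
param principalType string = 'ServicePrincipal'
param tableName string = '' // empty = assign the role on the whole table storage

// Storage Table Data Contributor - verify the GUID against the list of built-in roles
var roleDefinitionId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '0a9a7e1f-b9d0-4cc4-a60d-0319b160aaa3')

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
  name: storageAccountName
}

resource tableService 'Microsoft.Storage/storageAccounts/tableServices@2022-09-01' existing = {
  parent: storageAccount
  name: 'default'
}

// A dummy name keeps the reference valid even when no table is specified
resource table 'Microsoft.Storage/storageAccounts/tableServices/tables@2022-09-01' existing = {
  parent: tableService
  name: empty(tableName) ? 'placeholder' : tableName
}

// Possibility 1: assignment scoped to the specific table
resource tableScopedAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (!empty(tableName)) {
  name: guid(storageAccount.id, tableName, principalId, roleDefinitionId)
  scope: table
  properties: {
    roleDefinitionId: roleDefinitionId
    principalId: principalId
    principalType: principalType
  }
}

// Possibility 2: assignment scoped to the whole table storage
resource accountScopedAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (empty(tableName)) {
  name: guid(storageAccount.id, principalId, roleDefinitionId)
  scope: storageAccount
  properties: {
    roleDefinitionId: roleDefinitionId
    principalId: principalId
    principalType: principalType
  }
}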

Finally, here you can see how the module is used:

// Allow function to access the logging table
// See https://docs.microsoft.com/en-us/azure/storage/tables/authorize-access-azure-active-directory#azure-built-in-roles-for-tables
module StorageTableDataContributorForHashGroupCompactingLogging '../RoleAssignments/StorageTableDataContributor.bicep' = {
  name: 'tabContrForHashGroupCompacting-logging-${marketCode}-${buildNumber}'
  params: {
    tableName: '${marketCodeLower}logging'
    storageAccountName: storageAccountName
    principalId: HashGroupCompactingModule.outputs.functionPrincipalId
    principalType: 'ServicePrincipal'
  }
}

Development and non-development environments

To simplify the development process, we have a different security setup in the DEV environment. To clarify: DEV is the first Azure environment where developers test their functionality against real Azure resources; until then, they test everything locally. During this transition from the local workstation to the Azure environment, many different issues can arise. To limit the problems caused by insufficient rights, the necessary roles are assigned directly to the group of developers.
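For illustration, reusing the role-assignment module sketched above, such a DEV-only assignment could look roughly like this (environmentName and developersGroupObjectId are hypothetical parameter names, not necessarily the ones we use):

// Sketch: on DEV only, grant the developer group contributor access to the whole table storage
module StorageTableDataContributorForDevelopers '../RoleAssignments/StorageTableDataContributor.bicep' = if (environmentName == 'DEV') {
  name: 'tabContrForDevelopers-${marketCode}-${buildNumber}'
  params: {
    storageAccountName: storageAccountName
    principalId: developersGroupObjectId // object ID of the developer group in Azure AD
    principalType: 'Group'
  }
}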

Of course, the functions in the next environment (UAT), where this privileged access is missing, must be provided with the correct roles!
