Azure VNet Service Endpoints

Azure Virtual Network Service Endpoints add an extra level of security for our Azure services by extending the identity of a subnet to the service. This allows apps with a private IP on the subnet to be the only permitted source of inbound traffic, effectively reducing the surface area of traffic that can route to the Azure service.


For this blog post, I’ve set up an example app with Terraform. The code is located on GitHub along with a readme on how to build the network. The Terraform script builds a virtual network with service endpoints enabled and a virtual machine that uses the Azure CLI to put blobs in a storage container.

Why we use service endpoints

Last week our team deployed a third-party app into an Azure Virtual Network that connected to two separate Azure services (storage being one of them). By default, storage accounts allow access to blobs and files over HTTPS with an access key. Depending on your organization's risk tolerance and data sensitivity, that may be no problem. In this case, the data being stored was sensitive, and we wanted to limit access to only our app identified in the virtual network.

The rules were:

  1.  The storage account should only accept traffic from apps identified in the front subnet of our virtual network.
  2. Traffic from Azure services not owned by our organization should not be accepted by the Azure storage account.

From the Azure Virtual Network Service Endpoint Overview:

Service endpoints provide the ability to secure Azure service resources to your virtual network, by extending VNet identity to the service. This provides improved security by fully removing public Internet access to resources, and allowing traffic only from your virtual network.

Service endpoints paired with whitelisting an external IP allowed us to lock down access to the storage account.

Coming up, I’ll cover how to set up an Azure Storage Account to only allow traffic from your trusted subnet. We’ll use an Ubuntu virtual machine with the Azure CLI to mimic our app and write blobs to storage. Using request logs, you’ll see that the inbound IP addresses now come from our subnet instead of the Azure-assigned IP of the virtual machine.

First, let us see what happens when our app creates blobs in the storage account without service endpoints enabled.

Storage traffic without service endpoints enabled

The first time I set up the example app (network.tf), I wanted to see what the requesting IP to the storage account was without service endpoints enabled. I suspected (incorrectly) that the IP would be the public IP address I had assigned to the virtual machine's NIC.

Let's break down what a request with no service endpoints does:

  1. Inside the virtual network is a subnet called front. Front contains an Ubuntu virtual machine with the Azure CLI installed. Note that I added a public IP address for testing to make it easy to SSH into the box.
  2. Using the Azure CLI, I push a blob to the storage account using the public DNS name for storage (saendpointsdemowu2) and an access key.
  3. The traffic is routed over the Azure backbone to blob storage, using HTTPS and shared key authentication.
  4. A blob is written to the endpointdemo container in the storage account.

If you are using the code from GitHub, the storage.sh script can be executed remotely to write from the VM to storage.
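For reference, here is a rough sketch of what that looks like with the Azure CLI. The storage account and container names come from the post, but the resource group, VM name, access key, and file names are placeholders I've assumed, not values from the repo.

```bash
# Sketch only: storage account and container names are from the post; the
# resource group, VM name, key, and file names are assumed placeholders.
STORAGE_ACCOUNT="saendpointsdemowu2"
CONTAINER="endpointdemo"
STORAGE_KEY="<storage-access-key>"

# What storage.sh roughly does: upload a blob over HTTPS with shared key auth.
az storage blob upload \
  --account-name "$STORAGE_ACCOUNT" \
  --account-key "$STORAGE_KEY" \
  --container-name "$CONTAINER" \
  --name "hello-$(date +%s).txt" \
  --file ./hello.txt

# One way to run the script on the VM remotely instead of over SSH.
az vm run-command invoke \
  --resource-group endpoint-demo-rg \
  --name endpoint-demo-vm \
  --command-id RunShellScript \
  --scripts @storage.sh
```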

Two important notes here: the ‘front’ subnet has no service endpoints enabled, and the storage account is open to all networks.
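If you want to confirm that starting state from the CLI, the subnet should report an empty serviceEndpoints list and the storage account a default action of Allow. A quick check, assuming placeholder resource group and VNet names:

```bash
# Resource group and VNet names are placeholders; only the subnet and storage
# account names come from the post.
RG="endpoint-demo-rg"
VNET="endpoint-demo-vnet"

# The 'front' subnet should have no service endpoints yet.
az network vnet subnet show \
  --resource-group "$RG" \
  --vnet-name "$VNET" \
  --name front \
  --query serviceEndpoints

# The storage account should still allow all networks (defaultAction = Allow).
az storage account show \
  --resource-group "$RG" \
  --name saendpointsdemowu2 \
  --query networkRuleSet.defaultAction
```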

After putting a new blob from the VM to the storage account, I reviewed the logs and saw the requesting IP address of 10.147.212.130:56748. The private IP on the virtual network for the VM is 10.200.10.5 and the public IP is 13.66.204.253. As I mentioned previously, I expected the inbound IP at the storage account to be my public IP. However, Azure routes a request from an app hosted in Azure to any Azure service over its own backbone rather than the internet.

From the Virtual Networks UDR Overview:

If the destination address is for one of Azure’s services, Azure routes the traffic directly to the service over Azure’s backbone network, rather than routing the traffic to the Internet. Traffic between Azure services does not traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in.

Request logs for Azure storage are in the $logs container. The PutBlob request, split apart, shows the inbound IP.
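Storage analytics logging needs to be enabled for the $logs container to receive entries. Once it is, one way to pull the logs down and find the PutBlob line is sketched below; the account key and the log blob path are placeholders.

```bash
# Account key and log blob path are placeholders.
STORAGE_ACCOUNT="saendpointsdemowu2"
STORAGE_KEY="<storage-access-key>"

# List the analytics log blobs (the container name must be single-quoted).
az storage blob list \
  --account-name "$STORAGE_ACCOUNT" \
  --account-key "$STORAGE_KEY" \
  --container-name '$logs' \
  --output table

# Download a log blob and look for PutBlob entries; the requester IP and port
# appear as one of the semicolon-separated fields in each log line.
az storage blob download \
  --account-name "$STORAGE_ACCOUNT" \
  --account-key "$STORAGE_KEY" \
  --container-name '$logs' \
  --name 'blob/<year>/<month>/<day>/<hour><minute>/000000.log' \
  --file ./storage.log
grep PutBlob ./storage.log
```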

The next step is locking down our Azure storage account to only recognize traffic from the front subnet of our virtual network. This extends the identity of the virtual network to the storage account by adding the subnet to the storage account's firewall and enabling the Microsoft.Storage service endpoint on the front subnet.

Enabling service endpoints
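The repo does this in Terraform, but the equivalent steps can be sketched with the Azure CLI: enable the Microsoft.Storage endpoint on the front subnet, allow that subnet on the storage account, and deny everything else. The resource group and VNet names below are placeholders, and the optional IP rule shows how an external address could be whitelisted, as mentioned earlier.

```bash
# Resource group and VNet names are placeholders; the subnet, storage account,
# and the Microsoft.Storage endpoint name come from the post.
RG="endpoint-demo-rg"
VNET="endpoint-demo-vnet"
SA="saendpointsdemowu2"

# 1. Enable the Microsoft.Storage service endpoint on the 'front' subnet.
az network vnet subnet update \
  --resource-group "$RG" \
  --vnet-name "$VNET" \
  --name front \
  --service-endpoints Microsoft.Storage

# 2. Allow that subnet through the storage account firewall.
az storage account network-rule add \
  --resource-group "$RG" \
  --account-name "$SA" \
  --vnet-name "$VNET" \
  --subnet front

# Optional: whitelist an external IP as well (placeholder address).
az storage account network-rule add \
  --resource-group "$RG" \
  --account-name "$SA" \
  --ip-address 203.0.113.10

# 3. Deny any traffic that doesn't match a network rule.
az storage account update \
  --resource-group "$RG" \
  --name "$SA" \
  --default-action Deny
```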

Validating your endpoints

Let's review the effective routes for the virtual machine before and after the service endpoints were enabled. Effective routes represent the route table used by the virtual machine's network interface.
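One way to dump them is straight from the VM's network interface with the CLI; the NIC and resource group names below are placeholders.

```bash
# NIC and resource group names are placeholders; point this at the NIC
# attached to the demo VM.
az network nic show-effective-route-table \
  --resource-group endpoint-demo-rg \
  --name endpoint-demo-vm-nic \
  --output table
```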

Before Endpoints

After Endpoints

After our endpoints were enabled, two new routes were added for 13.66.176.16/28 and 13.71.200.64/28. These routes tell traffic leaving the virtual machine to use the Next Hop Type of VirtualNetworkServiceEndpoint.

Reviewing the inbound IP after service endpoints are enabled shows that the IP has changed to our internal subnet private IP of 10.200.10.5. The storage network is now aware of the subnet identity, allowing only private IPs from that network to connect.

The final workflow shows the Microsoft.Storage service endpoint enabled in our front subnet and the storage account associated with the virtual network.
