
Three Wins Technique Review - a Simple Productivity Hack to Deliver What Matters


I started using the Three Wins Technique about five years ago. It has proven to be a simple technique to help me focus on, and deliver, what is important.

The Three Wins Technique is very simple to implement: you can do it on paper, sticky notes, OneNote, Evernote, or any other tool for that matter. You can read more about it in this 2014 article. I first learned about the concept from Getting Results the Agile Way, and while I don’t use that whole system, I’ve found that using the idea of three wins is beneficial as a standalone technique.

A Quick Overview of the Three Wins Technique

Essentially you define (and actually write down) the three things you want to get done. It sounds simple, but it can be incredibly powerful when done properly. The wins should not simply be tasks to be ticked off; they should be “wins” – things that give you a sense of accomplishment and progress when you complete them.

How I Use the Three Wins Technique

Wins can be defined for whatever timeframe you want (weekly, monthly, etc.). I define wins at four levels:

  1. What are the 3 wins for this year (I define these in January each year).
  2. What are the 3 wins for this month (defined at the start of each month).
  3. What are the 3 wins for this week (defined on Monday morning).
  4. What are the 3 wins for today.

When defining the daily wins I glance at this week’s wins. When I’m writing down this week’s 3 wins I glance at this month’s wins. When I’m writing down this month’s wins I glance at this year’s 3 wins.

This method means that the overarching wins for the year are always kept in mind when defining more granular wins. The more granular wins (daily, weekly) should usually contribute to progress on the year’s wins. This isn’t always the case though as other things still need to be done that don’t directly contribute to a yearly win.

If you find that your more granular wins are almost never contributing to the yearly wins, then either you have a problem with your focus or your yearly wins are incorrect or unrealistic.

I use OneNote for most of my note taking/planning/etc. Below is a screenshot mock-up of what my 3 wins page looks like; I create a new page at the start of each year so I can look back at all the wins from previous years.

Using OneNote to plan and track your 3 wins

Notice in the preceding screenshot that one of the Jan wins is marked with a “!” – I use this when I fail to achieve a win; it helps me to notice failure patterns.

You could have separate 3 wins pages for work life, personal life, fitness/health, etc. or combine categories – whatever works best for you. I tend to use the technique for work-related things, but given the success I’ve had with it I should probably start using it in other areas of my life. I may also start to create 3 wins for the decade to provide some long-term perspective.

If you give this technique a try and find it useful, please share this page with your fellow developers, friends, and colleagues.


Improving Azure Functions Blob Trigger Performance and Reliability - Part 1: Memory Usage


This is the first part of a series of articles.

When creating blob-triggered Azure Functions there are some memory usage considerations to bear in mind.

“The consumption plan limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used by each concurrently executing function instance and by the Functions runtime itself.” [Microsoft]

A blob-triggered function can execute concurrently and internally uses a queue: “the maximum number of concurrent function invocations is controlled by the queues configuration in host.json. The default settings limit concurrency to 24 invocations. This limit applies separately to each function that uses a blob trigger.” [Microsoft]
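
For blob triggers, the per-function limit is the sum of the batchSize and newBatchThreshold queue settings (16 + 8 = 24 by default). As a sketch, a host.json along the following lines would halve that limit (the values shown are illustrative, not recommendations):

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 8,
      "newBatchThreshold": 4
    }
  }
}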

So, if you have 1 blob-triggered function in a Function App, with the default concurrency setting of 24, you could have a maximum of 24 (1 * 24) concurrently executing function invocations. (The documentation describes this as per-VM concurrency; with 2 VMs you could have 48 (2 * 1 * 24) concurrently executing function invocations.)

If you had 3 blob-triggered functions in a Function App (assuming 1 VM) then you could have 72 (3 * 24) concurrently executing function invocations.

Because the consumption plan “limits a function app on one virtual machine (VM) to 1.5 GB of memory”, if you are processing blobs that are non-trivial in size then you may need to consider overall memory usage.

OutOfMemoryException When Using Azure Functions Blob Trigger

As an example, suppose the following function exists:

public static class BlobPerformanceAndReliability
{
    [FunctionName("BlobPerformanceAndReliability")]
    public static void Run(
        [BlobTrigger("big-blobs/{name}")]string blob, 
        string name, 
        [Blob("big-blobs-out")] out string foundData,
        ILogger log)
    {
        log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {blob.Length} Bytes");

        // Code to find and output a specific line
        foundData = "This line will never be reached if out of memory";
    }
}

The preceding function code is triggered by blobs in the big-blobs container; the omitted code towards the end of the function would find a specific line of text in the blob and output it to big-blobs-out.

We can create a large file (approx. 1.8 GB) with the following code in a console app:

using System.IO;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var sw = new StreamWriter(@"c:\temp\bigblob.txt"))
            {
                for (int i = 0; i < 40_000_000; i++)
                {
                    sw.WriteLine("Some line we are not interested in processing");
                }
                sw.WriteLine("Data: 42");
            }
        }
    }
}

The contents of the last line in the file will be set to “Data: 42”.

If we run the function app locally and upload this big file to the Azure Storage Emulator, the function will trigger and will error with: “System.Private.CoreLib: Exception while executing function: BlobPerformanceAndReliability. Microsoft.Azure.WebJobs.Host: One or more errors occurred. (Exception binding parameter 'blob') (Exception binding parameter 'name'). Exception binding parameter 'blob'. System.Private.CoreLib: Exception of type 'System.OutOfMemoryException' was thrown.”.

The reason for this is that when you bind a blob trigger/input to string or byte[], the entire blob is read into memory. If the blob is too big (and/or other concurrently executing function invocations are also processing big files), the memory limit of the Functions runtime will be exceeded.

Processing Large Blobs with Azure Functions

Instead of binding to string or byte[], you can bind to a Stream. This will not load the entire blob into memory and will allow you to instead process it incrementally.

The function can be re-written as follows:

public static class BlobPerformanceAndReliability
{
    [FunctionName("BlobPerformanceAndReliability")]
    public static void Run(
        [BlobTrigger("big-blobs/{name}")]Stream blob,
        string name,
        [Blob("big-blobs-out/{name}")] out string foundData,
        ILogger log)
    {
        log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {blob.Length} Bytes");

        // Code to find and output a specific line            

        foundData = null; // Don't write an output blob by default

        string line;

        using (var sr = new StreamReader(blob))
        {                
            while (!sr.EndOfStream)
            {
                line = sr.ReadLine();

                if (line.StartsWith("Data"))
                {
                    foundData = line;
                    break;
                }                    
            }
        }            
    }
}

If you’re not familiar with using streams in .NET, check out my Working with Files and Streams in C# Pluralsight course.

If we force the same blob to be reprocessed with this new function code, there will be no error and the output blob containing “Data: 42” will be seen in the big-blobs-out container.

Another thing to bear in mind when processing large files is that there is a timeout on function execution.
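
On the consumption plan the Functions v2 default timeout is 5 minutes, and it can be raised to a maximum of 10 minutes with the functionTimeout setting in host.json, for example:

{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}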

In the next part of this series we’ll look at how to improve the responsiveness of function execution when new blobs are written and also improve the reliability and reduce the chances of blobs being missed.

Improving Azure Functions Blob Trigger Performance and Reliability - Part 2: Processing Delays and Missed Blobs


This is the second part of a series of articles.

When you add a new blob, your blob-triggered function may not be triggered immediately: “If the blob container being monitored contains more than 10,000 blobs, the Functions runtime scans log files to watch for new or changed blobs. This process can result in delays. A function might not get triggered until several minutes or longer after the blob is created.” [Microsoft]

Also when scanning log files to find new blobs that need processing, there’s “no guarantee that all events are captured. Under some conditions, logs may be missed.” [Microsoft]

This means that it is possible for some new blobs to be missed and not processed.

Using a Storage Queue to Trigger Processing of New Blobs

One alternative that reduces the likelihood of missed blobs and also improves the responsiveness of blob processing is to use a slightly more complex (but still relatively straightforward) approach.

Essentially this alternative approach has the following workflow:

  1. New blob written to blob storage
  2. Write message to storage queue containing new blob path
  3. Queue-triggered function gets message from step 2
  4. Blob processing occurs

(Note that this alternative approach may not suit all situations, depending on how new blobs make their way into blob storage – whoever or whatever writes the blob in step 1 also needs to be able to write a queue message.)

Blob Writing

This approach requires that when a blob is written, a queue message is also written.

As a simple example, this could be from client code as follows:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System.IO;
using System.Threading.Tasks;

namespace AddNewBlob
{
    class Program
    {
        static async Task Main(string[] args)
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
            CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
            CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("food-in");
            CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference("recipe1.txt");

            await WriteBlob();
            await WriteMessage();

            async Task WriteBlob()
            {
                using (var stream = await cloudBlockBlob.OpenWriteAsync())
                using (var sw = new StreamWriter(stream))
                {
                    await sw.WriteLineAsync("carrot");
                    await sw.WriteLineAsync("steak");
                    await sw.WriteLineAsync("apple");
                }
            }

            async Task WriteMessage()
            {
                var queueClient = storageAccount.CreateCloudQueueClient();
                var queue = queueClient.GetQueueReference("food-in");
                await queue.AddMessageAsync(new Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage("recipe1.txt"));
            }
        }

        
    }
}

Or perhaps the blob data comes in via an HTTP-triggered function as follows:

public static class AddRecipe
{
    [FunctionName("AddRecipe")]
    [return: Queue("food-in")]
    public static async Task<string> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,            
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");
        string ingredients = await new StreamReader(req.Body).ReadToEndAsync();

        // validation/error code omitted for demo purposes

        var blobName = Guid.NewGuid().ToString();

        await WriteBlob(); // ensure blob is written *before* the function returns and adds the message to the queue

        return blobName; // write to queue


        async Task WriteBlob()
        {
            var account = CloudStorageAccount.DevelopmentStorageAccount; // In real app load this from secure config location
            var blobClient = account.CreateCloudBlobClient();
            var blobContainer = blobClient.GetContainerReference("food-in");
            var cloudBlockBlob = blobContainer.GetBlockBlobReference(blobName);
            await cloudBlockBlob.UploadTextAsync(ingredients);
        }
    }
}

Notice in the preceding code that the blob is written explicitly in code to ensure that the queue message isn’t added until the blob is definitely available to be processed by the next function in the chain. (See this related GitHub issue.)

More Reliable Blob Processing

The next function is where the actual processing of the new blob is carried out; it is, however, triggered from a queue rather than relying on a blob trigger:

public static class ProcessFoodBlobs
{
    private static readonly string[] _meats = { "steak", "chicken", "venison" };      

    [FunctionName("ProcessFoodBlobs")]
    public static void Run(
        [QueueTrigger("food-in")]string newBlobPath, 
        [Blob("food-in/{queueTrigger}")] string foods,
        [Blob("food-out/{queueTrigger}.vegetarian")] out string vegetarian,
        [Blob("food-out/{queueTrigger}.nonvegetarian")] out string nonVegetarian,
        ILogger log)
    {
        vegetarian = null;
        nonVegetarian = null;

        string[] foodLines = foods.Split(new[] {"\r\n", "\n"  }, StringSplitOptions.RemoveEmptyEntries);


        foreach (var food in foodLines)
        {
            var isMeat = _meats.Contains(food);

            if (isMeat)
            {
                nonVegetarian += food + Environment.NewLine;
            }
            else
            {
                vegetarian += food + Environment.NewLine;
            }
        }    
    }
}

In the preceding code we’re making use of automatic input blob binding.

Summary

This approach may offer some benefits at the cost of some additional complexity if you have a lot of blobs being written/stored/processed. It also has some other considerations to bear in mind, such as: what happens if the blob is deleted or changed before the message is picked up off the queue? As with all things, you should consider your own requirements and ensure you do thorough testing, including performance/load/stress testing.

Improving Azure Functions Blob Trigger Performance and Reliability - Part 3: Using Event Grid to Respond to New Blobs


In the previous part of the series we saw how to improve the reliability of responding to new blobs by introducing a queue. This required introducing a Storage Queue into the solution, and also required the writer of new blobs to write a queue message.

In this article, instead of manually writing messages to a queue on blob creation, we use Event Grid events.

Azure Event Grid has support for Blob Storage, meaning that when a new blob is written, Event Grid will notice this. We can then trigger an Azure Function from this Event Grid event.

This approach can improve the reliability and responsiveness compared to using a simple blob trigger: “Blob storage events are reliably sent to the Event grid service which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery.” [Microsoft]

Creating an Event Grid Triggered Function

The following Azure Function code is a modified version of the code used in the previous article:

public static class ProcessFoodBlobsEventGrid
{
    private static readonly string[] _meats = { "steak", "chicken", "venison" };

    [FunctionName("ProcessFoodBlobsEventGrid")]
    public static void Run(
     [EventGridTrigger]EventGridEvent blobCreatedEvent,
     [Blob("{data.url}")] string foods, // assumes small blob size so using string not stream
     [Blob("{data.url}.vegetarian")] out string vegetarian,
     [Blob("{data.url}.nonvegetarian")] out string nonVegetarian,
     ILogger log)
    {
        log.LogInformation("Processing a blob created event");

        StorageBlobCreatedEventData createdEvent = ((JObject)blobCreatedEvent.Data).ToObject<StorageBlobCreatedEventData>();

        log.LogInformation($"Blob: {createdEvent.Url}");
        log.LogInformation($"Api operation: {createdEvent.Api}");

        vegetarian = null;
        nonVegetarian = null;

        string[] foodLines = foods.Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries);


        foreach (var food in foodLines)
        {
            var isMeat = _meats.Contains(food);

            if (isMeat)
            {
                nonVegetarian += food + Environment.NewLine;
            }
            else
            {
                vegetarian += food + Environment.NewLine;
            }
        }
    }
}

In the preceding code, the [EventGridTrigger]EventGridEvent blobCreatedEvent will cause the function to be triggered based on an Event Grid event being directed to the function.

The input blob binding [Blob("{data.url}")] string foods uses a binding expression and accesses the data.url property from the JSON data that’s contained in the event (this comes from the event schema for Blob Storage). The 2 output bindings also use the original blob path/name and append .vegetarian or .nonvegetarian. This implementation writes output blobs to the same container as the input blob. You could also use dynamic binding in Azure Functions with imperative runtime bindings to just extract the filename from the blob and write the output blobs to a different container.
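
The following is a rough sketch of that imperative approach (the food-out container name, the output naming, and the omitted processing are assumptions for illustration; Binder is supplied by the Functions runtime and needs no binding attribute of its own):

public static class ProcessFoodBlobsImperative
{
    [FunctionName("ProcessFoodBlobsImperative")]
    public static async Task Run(
     [EventGridTrigger]EventGridEvent blobCreatedEvent,
     [Blob("{data.url}")] string foods,
     Binder binder, // lets us construct binding attributes at runtime
     ILogger log)
    {
        StorageBlobCreatedEventData createdEvent = ((JObject)blobCreatedEvent.Data).ToObject<StorageBlobCreatedEventData>();

        // Extract just the file name from the blob URL,
        // e.g. "https://account.blob.core.windows.net/food-in/recipe1.txt" -> "recipe1.txt"
        string fileName = Path.GetFileName(new Uri(createdEvent.Url).LocalPath);

        // Bind an output blob in a different container ("food-out" is an assumed name)
        using (var writer = await binder.BindAsync<TextWriter>(
            new BlobAttribute($"food-out/{fileName}.vegetarian", FileAccess.Write)))
        {
            await writer.WriteAsync(foods); // real processing omitted for brevity
        }
    }
}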

Creating an Event Subscription for New Blobs

The function needs an event subscription to be created in Azure to recognize when new blobs are written and invoke the function. This can be done by navigating to the storage account (v2) in the Azure Portal and clicking the Events link. You can then add a new event subscription as the following screenshot shows (note the Defined Event Types is set to Blob Created):

Creating a new Azure Event Grid Subscription to trigger an Azure Function

You can also specify subject filters to limit the event to a specific container and/or file type as the following screenshot shows:

Configuring Azure Event Grid subscription to filter on blob storage containers

You could also specify dead-lettering and retry policies in case the Function App is unable to respond.

Now when a blob is added, the event subscription will notice it and invoke the function.

Ultimately “Use the Event Grid trigger instead of the Blob storage trigger for blob-only storage accounts, for high scale, or to reduce latency.” [Microsoft]

Improving Azure Functions Blob Trigger Performance and Reliability - Part 4: Periodically Checking for Unprocessed Blobs


In this final part of the series we wrap up by briefly discussing some ways to check for blobs that have not been processed correctly.

When using Azure Functions, a timer trigger can be used to periodically execute a function automatically based on a CRON expression. The following code is an example of a timer-triggered function:

public static class CheckBlobs
{
    [FunctionName("CheckBlobs")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, 
        ILogger log)
    {
        log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

        // Blob checking logic here
    }
}

There are a number of ways to check for unprocessed blobs depending on the solution you are building, some examples:

  • Check that an output blob exists for every input blob (a sketch of this follows the list)
  • Use a database to keep track of blobs that were uploaded and compare this to actual output blobs
  • If blobs are deleted after they have been processed, check there are no blobs in the container
  • Etc.
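
The following is a minimal sketch of the first option, assuming the food-in/food-out containers and the .vegetarian output naming used earlier in this series (adjust to your own naming, and load the storage account from secure configuration in a real app):

public static class CheckForMissingOutputs
{
    [FunctionName("CheckForMissingOutputs")]
    public static async Task Run(
        [TimerTrigger("0 */5 * * * *")]TimerInfo myTimer,
        ILogger log)
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount; // assumption: local dev storage
        var client = account.CreateCloudBlobClient();
        var inContainer = client.GetContainerReference("food-in");
        var outContainer = client.GetContainerReference("food-out");

        BlobContinuationToken token = null;

        do
        {
            var segment = await inContainer.ListBlobsSegmentedAsync(token);

            foreach (var inputBlob in segment.Results.OfType<CloudBlockBlob>())
            {
                // The earlier examples wrote outputs named <blob>.vegetarian
                var expectedOutput = outContainer.GetBlockBlobReference(inputBlob.Name + ".vegetarian");

                if (!await expectedOutput.ExistsAsync())
                {
                    log.LogWarning($"No output blob found for input: {inputBlob.Name}");
                }
            }

            token = segment.ContinuationToken;
        } while (token != null);
    }
}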

Some things to bear in mind if implementing this kind of checking include:

  • How often/when to run the function?
  • How long after a blob is written should you give it to be processed normally?
  • Will running this function interfere with any other processing in the system?
  • What if a blob is due to be processed (e.g. message sitting in a queue but not yet processed)? Could this create false positives or cause duplication of processing?
  • How long does the checking function take to execute? Will it take too long as the number of blobs increases and will the function be terminated by the runtime?
  • How/who do you notify of missed blobs (email, SMS, create ticket in CRM/bug system, etc.)?
  • Do you try to perform auto-retry of processing? Again, could this cause duplication, errors, etc.?

You could also use logging/Application Insights to provide you with information, or write every incoming blob name to a database and update that record when a blob has been processed; this way unprocessed blobs can be found with a simple “IF NOT PROCESSED” query.

Creating Custom Azure Functions Bindings


(This article refers to Azure Functions v2)

Out of the box, Azure Functions comes with a range of triggers, input bindings, and output bindings to work with blobs, queues, HTTP, etc.

You can also create your own input and/or output bindings.

Overview of Custom Azure Function Bindings

The general process to create a custom binding is:

  1. Create a class library (e.g. .NET Standard)
  2. Create a class that implements IAsyncCollector<T>
  3. Implement the AddAsync method from the interface and add code to perform some output
  4. Create a custom C# attribute (check out my course if you’ve never created custom attributes before) to represent the binding attribute that will be used in functions
  5. Create a class that implements IExtensionConfigProvider that wires up the new binding
  6. In the Function App, create a startup class to register your custom extension

An Example Scenario – Integrating with Pushover

Pushover is a service (with an accompanying phone app) that lets you send notifications to your phone. There is a 7-day free trial to start with, and the simplest way to get started is to search for the Pushover app in the app store on your phone. Once registered you can follow the instructions to set up your application/device that push notifications can be sent to. At the end of this process you will end up with a user key and an application API token. Both of these are needed when calling the Pushover API.

In the following example, a custom Azure Function output binding will be created that can be used from any Azure Function to send notifications. The output could be anything, however; for example, you could replace the Pushover API call with a call to Twitter, LinkedIn, etc.

The following example creates an output binding, but you can also create bindings that get input and pass it to a function.

Creating a POCO to Represent a Notification Message

In the new class library project, add a class:

namespace PushoverBindingExtensions
{
    public class PushoverNotification
    {
        public string Title { get; set; }
        public string Message { get; set; }
    }
}

Implementing an IAsyncCollector

The next step is to create a class that implements IAsyncCollector<T>, where T is the “data” that we want to pass to the output binding; in our case it’s the custom POCO class we just created, but this could be a primitive type such as a string.

using Microsoft.Azure.WebJobs;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

namespace PushoverBindingExtensions
{
    internal class PushoverNotificationAsyncCollector : IAsyncCollector<PushoverNotification>
    {
        private static readonly HttpClient _httpClient = new HttpClient();

        private PushoverAttribute _pushoverAttribute;

        public PushoverNotificationAsyncCollector(PushoverAttribute attribute)
        {
            _pushoverAttribute = attribute;
        }

        public async Task AddAsync(PushoverNotification notification, CancellationToken cancellationToken = default(CancellationToken))
        {
            await SendNotification(notification);
        }

        public Task FlushAsync(CancellationToken cancellationToken = default(CancellationToken))
        {
            return Task.CompletedTask;
        }

        private async Task SendNotification(PushoverNotification notification)
        {
            var parameters = new Dictionary<string, string>
                {
                    { "token", _pushoverAttribute.AppToken },
                    { "user", _pushoverAttribute.UserKey },
                    { "title", notification.Title },
                    { "message", notification.Message }
                };

            var response = await _httpClient.PostAsync("https://api.pushover.net/1/messages.json", new FormUrlEncodedContent(parameters));
            response.EnsureSuccessStatusCode();
        }
    }
}

The key point in the preceding code is the implemented AddAsync method. This method gets called by your function when you use the binding; in this example it calls into the SendNotification method that talks to the Pushover API.

Notice that the constructor takes an instance of a PushoverAttribute, which we’ll define next.

Defining a Custom Binding Attribute for Azure Functions

The following code defines a .NET attribute that will be used to decorate parameters in function run methods:

using Microsoft.Azure.WebJobs.Description;
using System;

namespace PushoverBindingExtensions
{
    [Binding]
    [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
    public class PushoverAttribute : Attribute
    {
        public PushoverAttribute(string appToken, string userKey)
        {
            AppToken = appToken;
            UserKey = userKey;
        }

        [AutoResolve]        
        public string AppToken { get; set; }

        [AutoResolve]       
        public string UserKey { get; set; }
    }
}

Note in the preceding code that the [Binding] attribute has been applied and the attribute has been limited to use on parameters and return values. (You can learn more about how to create custom attributes in my Pluralsight course).

This attribute definition has a couple of string properties to represent the Pushover user and app tokens. Notice these properties have been decorated with the [AutoResolve] attribute. This enables binding expressions for the properties and allows them to be read from AppSettings with %% syntax.

Creating the Azure Function Binding Custom Extension

The next step is to create a class that implements the IExtensionConfigProvider interface. This class will define the rules that are applicable to the custom binding:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Description;
using Microsoft.Azure.WebJobs.Host.Config;

namespace PushoverBindingExtensions
{
    [Extension("PushoverExtensions")]
    public class PushoverExtensions : IExtensionConfigProvider
    {
        public void Initialize(ExtensionConfigContext context)
        {
            var rule = context.AddBindingRule<PushoverAttribute>();            

            rule.BindToCollector<PushoverNotification>(BuildCollector);            
        }

        private IAsyncCollector<PushoverNotification> BuildCollector(PushoverAttribute attribute)
        {            
            return new PushoverNotificationAsyncCollector(attribute);
        }
    }
}

The preceding class is decorated with the [Extension] attribute to mark the class as an extension and the Initialize method is where the binding rules are defined. In this method we can add binding rules and also custom convertors (for example if we wanted to be able to bind to IAsyncCollector<string> and this be automatically converted to IAsyncCollector<PushoverNotification>). In this example we’re not defining any such convertors.

Once the binding rule has been added (for the PushoverAttribute we created earlier) it can be configured as an output binding by calling the BindToCollector method. This method is used to create an instance of the PushoverNotificationAsyncCollector we created earlier; in this example this is done by calling the BuildCollector method, which returns an IAsyncCollector<PushoverNotification> instance.

Hopefully you’re still with me; at this point we have the custom binding created in the class library project, and we can now actually use it in our functions.

Using a Custom Binding in an Azure Function

In the function app project, add a reference to the class library project containing the custom binding.

We can now use the custom binding just as we would any of the pre-supplied ones as the following function code demonstrates:

[FunctionName("SendPushoverNotification")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [Pushover("%appkey%", "%userkey%")] IAsyncCollector<PushoverNotification> notifications,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // validation omitted for demo purposes
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    await notifications.AddAsync(new PushoverNotification { Title = data.Title, Message = data.Message });

    return new OkResult();
}

The preceding function code just happens to have an HTTP trigger but the custom binding can be used in functions with other triggers as well.

Notice the [Pushover] custom binding attribute being applied. The attribute decorates the notifications parameter that is of type IAsyncCollector<PushoverNotification>. (Hopefully it’s a bit clearer now how all the parts fit together…).

Now in the function body code, a notification can be sent by calling the AddAsync method and passing the PushoverNotification to be sent.

Also notice in the [Pushover] attribute the binding expressions to the app key and user key stored in app settings (or local.settings.json in the development environment). This means these sensitive keys do not need to be hard-coded.
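
For example, when running locally the two settings referenced by %appkey% and %userkey% can be added to local.settings.json (the values are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "appkey": "YOUR-PUSHOVER-APP-TOKEN",
    "userkey": "YOUR-PUSHOVER-USER-KEY"
  }
}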

There are a few final steps to getting this all to work.

The first is to register the custom binding extension by creating the following Startup class in the function app project:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using PushoverBindingExtensions;

namespace DontCodeTiredDemosV2
{
    public class Startup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddExtension<PushoverExtensions>();
        }
    }
}

And in the AssemblyInfo.cs (which you can create manually) add the following:

using Microsoft.Azure.WebJobs.Hosting;

// Register custom extension of Function App startup
[assembly: WebJobsStartup(typeof(DontCodeTiredDemosV2.Startup))]

This points to the Startup class we just created.

Testing It All Out

Now just run the function app and post the following JSON to the function address (e.g. http://localhost:7071/api/SendPushoverNotification):

{
    "Title" : "Functions App",
    "Message" : "I like cheese!"
}

This will result in the Azure Function executing and sending a request to the Pushover API, which will result in a notification arriving on your phone as the following screenshot shows:

Pushover notification via Azure Functions

I hope that’s help clarify the process a  little on how to create custom binding expressions in Azure Functions. If you like me to throw the code up on GitHub let me know :)

Testing EventGridTrigger Azure Functions Locally (Without Using ngrok)


(This post refers to Azure Functions v2)

One way to test Azure Functions that use Event Grid triggers is to run the Function App locally and then get Azure in the cloud to invoke the function running on the local machine. As an example, suppose you want to use Event Grid to improve the reliability and responsiveness of Blob Storage processing. To do this the documentation suggests the use of ngrok. Now when a blob is added to a container in the cloud, the locally running function on the dev machine will be invoked via ngrok.

There is a somewhat simpler solution that allows you to invoke the Event Grid triggered function locally.

This approach bypasses Event Grid completely, so it is not a substitute for proper end-to-end testing; it’s more a development-time testing and debugging tool.

Manually Running Non HTTP-Triggered Azure Functions

You can manually trigger a non HTTP-triggered function (such as a timer triggered or Event Grid triggered function) via a special HTTP endpoint.

The endpoint is of the format: {host}/admin/functions/{function name}

For example, take the following function (which was also used in the post Improving Azure Functions Blob Trigger Performance and Reliability - Part 3: Using Event Grid to Respond to New Blobs):

public static class ProcessFoodBlobsEventGrid
{
    private static readonly string[] _meats = { "steak", "chicken", "venison" };

    [FunctionName("ProcessFoodBlobsEventGrid")]
    public static void Run(
     [EventGridTrigger]EventGridEvent blobCreatedEvent,
     [Blob("{data.url}")] string foods, // assumes small blob size so using string not stream
     [Blob("{data.url}.vegetarian")] out string vegetarian,
     [Blob("{data.url}.nonvegetarian")] out string nonVegetarian,
     ILogger log)
    {
        log.LogInformation("Processing a blob created event");

        StorageBlobCreatedEventData createdEvent = ((JObject)blobCreatedEvent.Data).ToObject<StorageBlobCreatedEventData>();

        log.LogInformation($"Blob: {createdEvent.Url}");
        log.LogInformation($"Api operation: {createdEvent.Api}");

        vegetarian = null;
        nonVegetarian = null;

        string[] foodLines = foods.Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries);


        foreach (var food in foodLines)
        {
            var isMeat = _meats.Contains(food);

            if (isMeat)
            {
                nonVegetarian += food + Environment.NewLine;
            }
            else
            {
                vegetarian += food + Environment.NewLine;
            }
        }
    }
}

The preceding function when running locally in development would have the special URL: http://localhost:7071/admin/functions/ProcessFoodBlobsEventGrid

If you had a timer-triggered function called HerdCats that you wanted to manually invoke (so you didn’t have to wait for the next timed invocation), the special URL would be: http://localhost:7071/admin/functions/HerdCats

Note: when running locally in development you do not have to authenticate. If you wanted to manually invoke a deployed function in Azure, you need to provide an x-functions-key header that contains the function master key.
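
For example, manually invoking the deployed HerdCats function might look something like the following (the host name and key value are placeholders):

POST https://your-function-app.azurewebsites.net/admin/functions/HerdCats
Content-Type: application/json
x-functions-key: <master key>

{
    "input": ""
}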

Manually Invoking an Event Grid Triggered Azure Function

When using the special URL to invoke a function, you can also provide data to be passed to the function. The type of data passed will depend on the trigger type of the function that you are invoking.

To provide data to the function, a JSON payload can be posted to the special URL. The data that is passed to the function is contained in a JSON property called “input”:

{
    "input": "trigger data goes here"
}

If the Event Grid triggered function will be invoked by a new blob event, the contents of this input property must match the event schema for an Azure Blob Storage event.

An example of event JSON (taken from the Microsoft documentation):

[{
  "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount",
  "subject": "/blobServices/default/containers/testcontainer/blobs/testfile.txt",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-06-26T18:41:00.9584103Z",
  "id": "831e1650-001e-001b-66ab-eeb76e069631",
  "data": {
    "api": "PutBlockList",
    "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
    "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
    "eTag": "0x8D4BCC2E4835CD0",
    "contentType": "text/plain",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "https://example.blob.core.windows.net/testcontainer/testfile.txt",
    "sequencer": "00000000000004420000000000028963",
    "storageDiagnostics": {
      "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
    }
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]

When testing the function outlined earlier, the first thing to do is ensure that there is a blob in the local blob container that will be read by the function by way of the blob input binding: [Blob("{data.url}")] string foods.

For example, in the Storage Emulator a blob called in.txt can be uploaded to the food-in container.

Now the new blob event data JSON needs to be modified, specifically the data.url property needs to contain the URL to the local blob: http://127.0.0.1:10000/devstoreaccount1/food-in/in.txt

A modified version with updated data.url would be as follows:

[{
  "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount",
  "subject": "/blobServices/default/containers/testcontainer/blobs/testfile.txt",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-06-26T18:41:00.9584103Z",
  "id": "831e1650-001e-001b-66ab-eeb76e069631",
  "data": {
    "api": "PutBlockList",
    "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
    "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
    "eTag": "0x8D4BCC2E4835CD0",
    "contentType": "text/plain",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "http://127.0.0.1:10000/devstoreaccount1/food-in/in.txt",
    "sequencer": "00000000000004420000000000028963",
    "storageDiagnostics": {
      "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
    }
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]

The next step is to remove the surrounding [] and replace the double quotes (") with single quotes ('). Then paste the resulting JSON into the input property:

{
    "input": "
  {
    'topic': '/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount',
    'subject': '/blobServices/default/containers/oc2d2817345i200097container/blobs/oc2d2817345i20002296blob',
    'eventType': 'Microsoft.Storage.BlobCreated',
    'eventTime': '2017-06-26T18:41:00.9584103Z',
    'id': '831e1650-001e-001b-66ab-eeb76e069631',
    'data': {
      'api': 'PutBlockList',
      'clientRequestId': '6d79dbfb-0e37-4fc4-981f-442c9ca65760',
      'requestId': '831e1650-001e-001b-66ab-eeb76e000000',
      'eTag': '0x8D4BCC2E4835CD0',
      'contentType': 'application/octet-stream',
      'contentLength': 524288,
      'blobType': 'BlockBlob',
      'url': 'http://127.0.0.1:10000/devstoreaccount1/food-in/in.txt',
      'sequencer': '00000000000004420000000000028963',
      'storageDiagnostics': {
        'batchId': 'b68529f3-68cd-4744-baa4-3c0498ec19f0'
      }
    },
    'dataVersion': '',
    'metadataVersion': '1'
  }
"
}

Now this JSON can be POSTed to the special URL; in the case of the example in this post the URL would be: http://localhost:7071/admin/functions/ProcessFoodBlobsEventGrid

The following screenshot shows posting using Postman:


Using Postman to post to Event Grid triggered Azure Function

Posting will cause the Event Grid triggered function to be invoked and the JSON contained inside the input property will be passed to the trigger input EventGridEvent blobCreatedEvent object. The function will execute and read in the blob called “in.txt”.

Using the Azure SignalR Service Bindings in Azure Functions to Create Real-time Serverless Applications


The Azure SignalR Service is a serverless offering from Microsoft to facilitate real-time communications without having to manage the infrastructure yourself.

SignalR itself has been around for a while; now the hosted/serverless version makes it even easier to consume.

There are also Azure Functions bindings available that make it easy to integrate SignalR with Azure Functions and end clients.

This means that any Azure Function with any trigger type (e.g. Azure Cosmos DB changes, queue messages, HTTP requests,  blob triggers, etc.) can push out a notification to clients via Azure SignalR Service.

In this article we’ll build a simple example that simulates the “someone in Austin just bought a Surface Laptop” kind of messages that you see on some shopping websites.

Azure SignalR Service Demo

Creating an Azure SignalR Service Instance

After logging into the Azure Portal, create a new SignalR Service instance (search for “SignalR Service”) – you can currently choose a free pricing tier.

Give the new instance a name; in this example the instance was called “dctdemosorderplaced”. Once you’ve filled out the info for the new instance (such as resource group and location), hit the create button and wait for Azure to create the new instance.

Once the instance has been created, open it and head into the settings and change the service mode to Serverless and save the changes, as the following screenshot shows:

Setting SignalR Service to serverless mode

You will need to copy the connection string to be used in the function app later; you can get this from the Keys tab – copy the Primary connection string as shown in the following screenshot:

Copying the Primary connection string from the SignalR Service Keys tab

Creating the Azure Functions App

In Visual Studio (or Code) create a new Azure Functions project.

Open the local.settings.json file and make the following changes:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureSignalRConnectionString": "PASTE YOUR SIGNALR SERVICE CONNECTION STRING HERE"
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "http://localhost:3872",
    "CORSCredentials": true
  }
}

In the preceding code notice 2 things:

  1. The AzureSignalRConnectionString setting: paste your connection string here
  2. The CORS entries to configure the local environment (http://localhost:3872 is whatever local IIS Express port you will run the test website from)

Add a couple of classes to represent an incoming order and an order stored in table storage (we could also have used Cosmos DB or another storage mechanism – it isn’t really relevant to the SignalR demo):

public class OrderPlacement
{
    public string CustomerName { get; set; }
    public string Product { get; set; }
}


// Simplified for demo purposes
public class Order
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string CustomerName { get; set; }
    public string Product { get; set; }
}

We can now add a function that will allow a new order to be placed:

public static class PlaceOrder
{
    [FunctionName("PlaceOrder")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] OrderPlacement orderPlacement,
        [Table("Orders")] IAsyncCollector<Order> orders, // could use cosmos db etc.
        [Queue("new-order-notifications")] IAsyncCollector<OrderPlacement> notifications,
        ILogger log)
    {
        await orders.AddAsync(new Order
        {
            PartitionKey = "US",
            RowKey = Guid.NewGuid().ToString(),
            CustomerName = orderPlacement.CustomerName,
            Product = orderPlacement.Product
        });

        await notifications.AddAsync(orderPlacement);

        return new OkResult();
    }
}

The preceding function allows a new OrderPlacement to be HTTP POSTed to the function. The function saves it in table storage and also adds it to a storage queue.

Creating The Azure Functions with the SignalR Bindings

The first SignalR-related function is the function that a client calls to get itself wired up to the SignalR service in the cloud:

[FunctionName("negotiate")]
public static SignalRConnectionInfo GetOrderNotificationsSignalRInfo(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
    [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
{
    return connectionInfo;
}

This function will be automatically called from the SignalR client code as we’ll see later in this article. Notice the function returns a SignalRConnectionInfo object to the client. This data contains the SignalR URL and an access token which the client can use. Also notice that the SignalRConnectionInfo binding allows the SignalR hub name to be specified, in this case “notifications”.
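
The serialized response the client receives has a shape along the following lines (the values are placeholders and the exact URL format is determined by the service):

{
  "url": "https://dctdemosorderplaced.service.signalr.net/client/?hub=notifications",
  "accessToken": "<JWT access token>"
}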

Now the client can negotiate a connection to the Azure SignalR Service via our Function App.

Now that the client can get itself wired up, we need to push some data to it:

[FunctionName("PlacedOrderNotification")]
public static async Task Run(
    [QueueTrigger("new-order-notifications")] OrderPlacement orderPlacement,
    [SignalR(HubName = "notifications")] IAsyncCollector<SignalRMessage> signalRMessages,
    ILogger log)
{
    log.LogInformation($"Sending notification for {orderPlacement.CustomerName}");

    await signalRMessages.AddAsync(
        new SignalRMessage
        {
            Target = "productOrdered",
            Arguments = new[] { orderPlacement }
        });
}

The preceding function is triggered from the queue (“new-order-notifications”) that the initial HTTP function wrote to, but you can send SignalR messages from any triggered function.

Notice the [SignalR(HubName = "notifications")] binding specifies the same hub name as in the negotiate function.

To send notifications, you can simply add one or more SignalRMessage messages to the IAsyncCollector<SignalRMessage>. In this function, the arguments contain the OrderPlacement object. The client will then be able to access the customer name and product and display them on the website as a notification.

Creating an Azure SignalR Service JavaScript Client

Create a new empty ASP.NET Core project and add a default.html file under wwwroot. This example does not require any server-side code (controllers, etc.) in the demo web site, so a simple static HTML file is sufficient.

In Startup.cs, configure default and static files:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseDefaultFiles();
    app.UseStaticFiles();
}

Add the following content to the default.html file:

<html>
<head>
    <!-- Adapted (aka hacked) from: https://azure-samples.github.io/signalr-service-quickstart-serverless-chat/demo/chat-v2/ -->
    <title>SignalR Demo</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.1.3/dist/css/bootstrap.min.css">
    <style>
        .slide-fade-enter-active, .slide-fade-leave-active {
            transition: all 1s ease;
        }

        .slide-fade-enter, .slide-fade-leave-to {
            height: 0px;
            overflow-y: hidden;
            opacity: 0;
        }
    </style>
</head>
<body>
    <div id="app" class="container">
        <h3>Recent orders</h3>
        <div class="row" v-if="!ready">
            <div class="col-sm">
                <div>Loading...</div>
            </div>
        </div>
        <div v-if="ready">
            <transition-group name="slide-fade" tag="div">
                <div class="row" v-for="order in orders" v-bind:key="order.id">
                    <div class="col-sm">
                        <hr />
                        <div>
                            <div style="display: inline-block; padding-left: 12px;">
                                <div><span class="text-info small"><strong>{{ order.CustomerName }}</strong> just ordered a</span></div>
                                <div>{{ order.Product }}</div>
                            </div>
                        </div>
                    </div>
                </div>
            </transition-group>
        </div>
    </div>
    <script src="https://cdn.jsdelivr.net/npm/vue@2.5.17/dist/vue.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/@aspnet/signalr@1.1.2/dist/browser/signalr.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/axios@0.18.0/dist/axios.min.js"></script>
    <script>
        const data = {
          orders: [],
          ready: false
        };

        const app = new Vue({
          el: '#app',
          data: data,
          methods: {
          }
        });

        const connection = new signalR.HubConnectionBuilder()
          .withUrl('http://localhost:7071/api')
          .configureLogging(signalR.LogLevel.Information)
          .build();

        connection.on('productOrdered', productOrdered);
        connection.onclose(() => console.log('disconnected'));

        console.log('connecting...');
        connection.start()
            .then(() => data.ready = true)
            .catch(console.error);

        let counter = 0;

        function productOrdered(orderPlacement) {
            orderPlacement.id = counter++; // vue transitions need an id
            data.orders.unshift(orderPlacement);
        }
    </script>
</body>
</html>

The .withUrl('http://localhost:7071/api') line is the root of your locally running Azure Function development runtime environment.

When this page loads, the negotiate function will be called in the Function App and will return the Azure SignalR Service connection details so the client can connect to the hub in the cloud and send/receive messages (in this example we are just receiving messages in the JavaScript client).

Running the Demo Application

Right-click the solution in Visual Studio and select “Set Startup Projects…” and choose Multiple startup projects and set the Function App project to Start and the ASP.NET core project to Start without debugging. This just means you can hit F5 and both projects will run.

Once both projects are running you should be able to post some JSON to the PlaceOrder function HTTP endpoint (for example using Postman) containing the CustomerName and Product:

{
    "CustomerName" : "Amrit",
    "Product" : "Surface Book"
}

This will save the order to table storage and add a message to the queue.

The queue-triggered function will pick up this message and send a notification to the client, and you should then see “Amrit just ordered a Surface Book” appear on the web site.

Note that this demo application allows anonymous/unauthenticated clients to access the functions and the SignalR Service; in a real-world production app you should ensure you secure the entire system appropriately – see the docs for more info.

You can find this entire sample app on GitHub.


Different Ways to Parse Http Request Data in Http-triggered Azure Functions


(This post refers to Azure Functions v2)

There are different ways to access both the request data and the request metadata when an HTTP-triggered Azure Function is executed.

Getting Query String Data in Azure Functions

Suppose we have the following class (e.g. in table storage):

public class PhotoMetadata
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string FileName { get; set; }
    public string Keywords { get; set; }
}

We could write an Azure Function triggered by a HTTP GET that returns an item from a database by a querystring parameter called “id”:

[FunctionName("GetPhotoMetadata")]
public static IActionResult GetPhotoMetadata(
    [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string id = req.Query["id"];

    if (string.IsNullOrWhiteSpace(id))
    {
        return new NotFoundResult();
    }

    PhotoMetadata metadata = LoadFromDatabase(id);

    return new OkObjectResult(metadata);
}

In the preceding code, to access querystring parameters use req.Query and specify the key you are looking for, in this example “id”.

If there is no value in the incoming request, id will be null and we return a NotFoundResult (404).

Getting HTTP POST JSON Request Data in Azure Functions

When it comes to accessing POSTed data, there are a number of options.

Manually Convert JSON Request Strings

The first option is to take control of the process at a lower level and read the posted data from the request body and parse the JSON into a dynamic C# object. [If you’re not familiar with dynamic C# check out my Dynamic C# Fundamentals Pluralsight course]

First we define a model that will represent the posted data (we don’t want to use the PhotoMetadata class as we don’t want clients specifying partition and row keys):

public class PhotoMetadataAdditionRequest
{
    public string FileName { get; set; }
    public string Keywords { get; set; }
}

Next we can write a function that will parse this incoming data:

[FunctionName("AddPhotoMetadata")]
public static async Task<IActionResult> AddPhotoMetadata(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");


    log.LogInformation("You can get additional information about the request such as:");
    log.LogInformation($" length : {req.ContentLength}");
    log.LogInformation($" type   : {req.ContentType}");
    log.LogInformation($" https  : {req.IsHttps}");
    log.LogInformation($" host   : {req.Host}");


    // read the contents of the posted data into a string
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

    // use Json.NET to deserialize the posted JSON into a C# dynamic object
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    // data validation omitted for demo purposes

    // extract data from the dynamic object into strongly typed object
    PhotoMetadata metadata = new PhotoMetadata
    {
        FileName = data.fileName, // notice the camel case (lowercase f)
        PartitionKey = "landscapes",
        RowKey = Guid.NewGuid().ToString(),
        Keywords = data.keywords // notice the camel case (lowercase k)
    };

    SaveToDatabase(metadata);

    return new OkObjectResult(metadata.RowKey);
}

Notice in the preceding code that you can also access information about the request such as req.ContentLength. Also note the lowercase f and k in data.fileName and data.keywords.

We can post the following JSON to the function:

{
    "fileName": "IMG0382435.jpg",
    "keywords": "landscape, sky, sunset"
}

Automatically Bind to Strongly Typed POCOs in Azure HTTP Functions

You can also let the runtime auto-convert the POSTed JSON into a specified C# type:

[FunctionName("AddPhotoMetadata")]
public static IActionResult AddPhotoMetadata(
    [HttpTrigger(AuthorizationLevel.Function, "post")] PhotoMetadataAdditionRequest metadataAdditionRequest,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    log.LogInformation($" FileName : {metadataAdditionRequest.FileName}");
    log.LogInformation($" Keywords : { metadataAdditionRequest.Keywords}");

    PhotoMetadata metadata = new PhotoMetadata
    {
        FileName = metadataAdditionRequest.FileName,
        PartitionKey = "landscapes",
        RowKey = Guid.NewGuid().ToString(),
        Keywords = metadataAdditionRequest.Keywords
    };

    SaveToDatabase(metadata);

    return new OkObjectResult(metadata.RowKey);
}

In the preceding code, instead of binding to an HttpRequest object, we bind to the PhotoMetadataAdditionRequest. Behind the scenes the JSON will be automatically deserialized into a PhotoMetadataAdditionRequest object.

Note that if you have malformed JSON you may get errors. For example, if the “fileName” item in the JSON were misspelt as “file”, the FileName property of the PhotoMetadata would end up being set to null but the function body would still execute. If you had an int in the POCO but the POSTed JSON had a string (e.g. “hello”) instead of a number, the runtime cannot bind “hello” to an int; in this case your function body code will not even execute and you get an error from the runtime such as: “System.Private.CoreLib: Exception while executing function: AddPhotoMetadata. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'metadataAdditionRequest'. System.Private.CoreLib: Input string was not in a correct format.” (and a 500 status will be returned to the client).

If you were handling things at a lower level (e.g. with the dynamic approach) you could perhaps provide a default value, do some extra logging, etc.
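
As a minimal sketch of that defensive handling using the dynamic approach from earlier (the default file name and the log message are illustrative):

string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);

// With the dynamic approach a missing property comes back as null rather than failing binding,
// so a default can be supplied and the problem logged instead of returning a 500
string fileName = data.fileName ?? "unknown.jpg"; // hypothetical default value

if (data.keywords == null)
{
    log.LogWarning("No keywords supplied; defaulting to an empty string.");
}

string keywords = data.keywords ?? string.Empty;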

Accessing HTTP Request Metadata When Auto-binding to POCOs

If you want to do automatic binding and also want to get request metadata, you can simply add an extra parameter of type HttpRequest:

[FunctionName("AddPhotoMetadata")]
public static IActionResult AddPhotoMetadata(
    [HttpTrigger(AuthorizationLevel.Function, "post")] PhotoMetadataAdditionRequest metadataAdditionRequest,
    HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    log.LogInformation("You can get additional information about the request such as:");
    log.LogInformation($" length : {req.ContentLength}");
    log.LogInformation($" type   : {req.ContentType}");
    log.LogInformation($" https  : {req.IsHttps}");
    log.LogInformation($" host   : {req.Host}");

    log.LogInformation($" FileName : {metadataAdditionRequest.FileName}");
    log.LogInformation($" Keywords : { metadataAdditionRequest.Keywords}");

    PhotoMetadata metadata = new PhotoMetadata
    {
        FileName = metadataAdditionRequest.FileName,
        PartitionKey = "landscapes",
        RowKey = Guid.NewGuid().ToString(),
        Keywords = metadataAdditionRequest.Keywords
    };

    SaveToDatabase(metadata);

    return new OkObjectResult(metadata.RowKey);
}

Posting Form Data to Azure Functions

In addition to POSTing JSON content to an Azure Function, you can also POST form data and access the HttpRequest.Form property:

[FunctionName("AddPhotoMetadata")]
public static IActionResult AddPhotoMetadata(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    log.LogInformation("You can get additional information about the request such as:");
    log.LogInformation($" length : {req.ContentLength}");
    log.LogInformation($" type   : {req.ContentType}");
    log.LogInformation($" https  : {req.IsHttps}");
    log.LogInformation($" host   : {req.Host}");

    PhotoMetadata metadata = new PhotoMetadata
    {
        FileName = req.Form["fileName"], // access form data
        PartitionKey = "landscapes",
        RowKey = Guid.NewGuid().ToString(),
        Keywords = req.Form["keywords"]  // access form data
    };

    SaveToDatabase(metadata);

    return new OkObjectResult(metadata.RowKey);
}
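
Form data can be POSTed to the function in a similar way; a sketch of a C# test client (URL and port again assumed from the default local runtime):

using System.Collections.Generic;
using System.Net.Http;

// inside an async method
using (var client = new HttpClient())
{
    // application/x-www-form-urlencoded content
    var formData = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["fileName"] = "IMG0382435.jpg",
        ["keywords"] = "landscape, sky, sunset"
    });

    HttpResponseMessage response = await client.PostAsync(
        "http://localhost:7071/api/AddPhotoMetadata", formData);
}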

Returning HTTP Status Codes from Azure Functions


(This post refers to Azure Functions v2)

When creating HTTP-triggered Azure Functions there are a number of ways to indicate results back to the calling client.

Returning HTTP Status Codes Manually

To return a specific status code to the client you can create an instance of one of the …Result classes and return that from the function body.

The following example returns an instance of an OkResult or a BadRequestResult:

[FunctionName("AddActor1")]
public static async Task<IActionResult> AddActor1(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    string name = data.actorName; // get name from dynamic/JSON object

    if (name == null)
    {
        // Return a 400 bad request result to the client
        return new BadRequestResult();
    }

    // Do some processing
    char firstLetter = name[0];

    // Return a 200 OK to the client
    return new OkResult();                
}

If you wanted to provide additional success/failure information you could use the OkObjectResult and BadRequestObjectResult classes instead; these allow you to provide additional contextual information to the client:

[FunctionName("AddActor2")]
public static async Task<IActionResult> AddActor2(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    string name = data.actorName; // get name from dynamic/JSON object

    if (name == null)
    {
        // Return a 400 bad request result to the client with additional information
        return new BadRequestObjectResult("Please pass an actorName in the request body");
    }

    // Do some processing
    char firstLetter = name[0];

    // Return a 200 OK to the client with additional information
    return new OkObjectResult($"Actor {name} was added");
}

Automatically Returning Status Codes

In addition to manually returning status code instances, you can let the functions runtime take care of this for you.

For example, the following code will automatically return a “204 no content” if the function executes without throwing an exception, or a “500 internal server error” if an exception was thrown:

[FunctionName("AddActor3")]
public static async Task AddActor3(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    string name = data.actorName; // get name from dynamic/JSON object

    // Do some processing
    char firstLetter = name[0]; // 500 internal server error if name is null

    // Auto return a 204 no content if no exception was thrown
}

In the preceding code, if the client fails to provide an actorName in the JSON, rather than getting a more helpful “400 bad request” (with an optional additional message), they instead get a less useful “500 internal server error” status code, with no idea of what may have gone wrong or how to resolve it.

That said, automatic status codes can be helpful if you want to write less code, or perhaps when you use the return value of the function in a binding, as in the following example:

[FunctionName("AddActor4")]
[return: Queue("new-actor-first-letter")]
public static async Task<string> AddActor4(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    string name = data.actorName;

    return name.Substring(0,1); // add a new message to the queue containing the first letter of the name
}

Once again, the preceding function will return a 500 if there is an exception (e.g. actorName not provided in JSON) but will return a “200 OK” if no exception occurs (rather than the “204 no content” in the earlier example).

Learning .NET Unit Testing the Easy Way


Knowing what you need to know is hard. Sometimes harder than the learning itself.

Many years ago I was getting started with .NET v1 and .NET unit testing; Agile had recently been “invented” and I had a printout of the agile manifesto on the office wall. I was also learning Test Driven Development (TDD), mocking, unit testing frameworks, assertions, and data driven testing.

I remember it being a little overwhelming at times, so much to learn with fragments of information scattered around but no clearly defined path to follow to get where I knew I wanted to be: proficient and efficient in writing high quality, tested and testable code.

Today things are a little easier, but there can still be the problem of “I don’t know what I need to know”.

This is where skills paths from Pluralsight can be super helpful. I wish I had had them all those years ago.

A path is a curated collection of courses in a specific order to get you to where you need to be for a specific learning goal.

I’m super proud to have contributed to the C# Unit Testing with NUnit Pluralsight path which at the time of writing you can start to watch for free with a Pluralsight free trial.

While it’s certainly possible that you could find the information and learn the topics yourself, you could also waste a lot of time gathering the information from disparate sources and trying to “meta learn” what it is you don’t know. Ultimately it depends on how much free time you have and how efficient you want to be at learning. You should always keep the end goal in mind and weigh up the costs/benefits/risks of the different ways of getting to that goal. If you want to learn “how to write clean, testable code, all the way from writing your first test to mocking out dependencies to developing a pragmatic suite of unit tests for your application” then the C# Unit Testing with NUnit path may be your most efficient approach to get to your goal.

How To Notify Clients of Cosmos DB Changes with Azure SignalR and Azure Functions


This is the fourth part in a series of articles.

The Cosmos DB Azure Functions trigger can be used in conjunction with the Azure SignalR Service to create real-time notifications of changes made to data in Cosmos DB, all in a serverless way.

Azure Cosmos DB Change Notifications with Azure Functions and SignalR

If you want to learn more about how to setup the SignalR Service, check out this previous article.

In this series we’ve been using the domain of pizza delivery. In this article we’ll see how updates in Cosmos DB can trigger a notification on an HTML client – you may have seen this if you’ve ordered pizza online and the delivery driver is tracked by GPS on a map.

The first thing to do is set up an Azure SignalR Service in Azure and add the SignalR Service connection string to the local.settings.json file, as was covered in this article.
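
For local development the relevant settings live in local.settings.json; a sketch might look like the following (AzureSignalRConnectionString is the default setting name the SignalR Service binding looks for, and pizzaConnection matches the Cosmos DB trigger used below):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureSignalRConnectionString": "<SignalR Service connection string>",
    "pizzaConnection": "<Cosmos DB connection string>"
  }
}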

We can now add the negotiate function:

[FunctionName("negotiate")]
public static SignalRConnectionInfo GetDriverLocationUpdatesSignalRInfo(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
    [SignalRConnectionInfo(HubName = "driverlocationnotifications")] SignalRConnectionInfo connectionInfo)
{
    return connectionInfo;
}

The preceding code sets the SignalR hub to be used in this example as “driverlocationnotifications”; we’ll use the same hub name in the function that actually sends updates to clients.

The next thing to do is add a function that is triggered when changes are made to the location of pizza delivery drivers (note the trigger will also fire for new documents that are added):

[FunctionName("PizzaDriverLocationUpdated")]
public static async Task Run([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    [SignalR(HubName = "driverlocationnotifications")] IAsyncCollector<SignalRMessage> signalRMessages,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        log.LogInformation($"Total modified drivers: {modifiedDrivers.Count}");

        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");
            var driverLat = modifiedDriver.GetPropertyValue<double>("Latitude");
            var driverLong = modifiedDriver.GetPropertyValue<double>("Longitude");

            log.LogInformation($"Driver {modifiedDriver.Id} {driverName} was updated (lat,long) {driverLat}, {driverLong}");

            var message = new DriverLocationUpdatedMessage
            {
                DriverId = modifiedDriver.Id,
                DriverName = driverName,
                Latitude = driverLat,
                Longitude = driverLong
            };

            await signalRMessages.AddAsync(new SignalRMessage
            {
                Target = "driverLocationUpdated",
                Arguments = new[] { message }
            });
        }
    }
}

The preceding code uses the [CosmosDBTrigger] to fire the function when there are changes made to the “driver” collection in the “pizza” database.

The [SignalR] binding attribute allows outgoing SignalR messages to be sent to connected clients by adding messages to the IAsyncCollector<SignalRMessage> signalRMessages object. Notice that the type of the Cosmos DB trigger is IReadOnlyList<Document>, to get the actual data items of the document we use the GetPropertyValue method, for example to get the driver name: modifiedDriver.GetPropertyValue<string>("Name").
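
The DriverLocationUpdatedMessage class isn’t shown above; an assumed shape that matches its usage is:

public class DriverLocationUpdatedMessage
{
    public string DriverId { get; set; }
    public string DriverName { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}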

In the HTML, we can connect to the “driverlocationnotifications” SignalR hub and then get notifications whenever any driver location changes. In reality we would probably only want messages for the driver delivering our pizza, but for simplicity we won’t worry about that in this demo code.

The following is the HTML to make this work:

<html>
<head>
    <!-- Adapted from: https://azure-samples.github.io/signalr-service-quickstart-serverless-chat/demo/chat-v2/ -->
    <title>SignalR Demo</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.1.3/dist/css/bootstrap.min.css">
    <style>
        .slide-fade-enter-active, .slide-fade-leave-active {
            transition: all 1s ease;
        }

        .slide-fade-enter, .slide-fade-leave-to {
            height: 0px;
            overflow-y: hidden;
            opacity: 0;
        }
    </style>
</head>
<body>
    <p>&nbsp;</p>
    <div id="app" class="container">
        <h3>Your Pizza Is On Its Way!!</h3>
        <div class="row" v-if="!ready">
            <div class="col-sm">
                <div>Loading...</div>
            </div>
        </div>
        <div v-if="ready">
            <transition-group name="slide-fade" tag="div">
                <div class="row" v-for="driver in drivers" v-bind:key="driver.DriverId">
                    <div class="col-sm">
                        <hr />
                        <div>
                            <div style="display: inline-block; padding-left: 12px;">
                                <div>
                                    <span class="text-info small"><strong>{{ driver.DriverName }}</strong> just changed location:</span>
                                </div>
                                <div>{{driver.Latitude}},{{driver.Longitude}}</div>
                            </div>
                        </div>
                    </div>
                </div>
            </transition-group>
        </div>
    </div>
    <script src="https://cdn.jsdelivr.net/npm/vue@2.5.17/dist/vue.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/@aspnet/signalr@1.1.2/dist/browser/signalr.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/axios@0.18.0/dist/axios.min.js"></script>
    <script>
        const data = {
          drivers: [],
          ready: false
        };

        const app = new Vue({
          el: '#app',
          data: data,
          methods: {
          }
        });

        const connection = new signalR.HubConnectionBuilder()
          .withUrl('http://localhost:7071/api')
          .configureLogging(signalR.LogLevel.Information)
          .build();

        connection.on('driverLocationUpdated', driverLocationUpdated);
        connection.onclose(() => console.log('disconnected'));

        console.log('connecting...');
        connection.start()
            .then(() => data.ready = true)
            .catch(console.error);

        let counter = 0;

        function driverLocationUpdated(driverLocationUpdatedMessage) {
            driverLocationUpdatedMessage.id = counter++; // vue transitions need an id
            data.drivers.unshift(driverLocationUpdatedMessage);
        }
    </script>
</body>
</html>

Now when changes are made to the documents in the driver collection, the Azure Function will execute and send out Azure SignalR Service messages to clients. In this example we’re just writing out the messages in the page, but you can imagine the latitude and longitude being used to draw a little car image on top of a map, etc.

Processing a Single Azure Cosmos DB Document at a Time With Azure Functions


This is the fifth part in a series of articles.

In previous parts of this series the Azure Function code receives one or more Cosmos DB documents as part of the trigger.

The Azure Functions Cosmos DB trigger uses the change feed from Azure Cosmos to determine when to trigger the function and also what changes to send to the function invocation. If there is only a single updated (or inserted) document then the function will be called and the single document passed to it.

Notice in the following code that the document(s) that need processing are contained in the IReadOnlyList<Document> modifiedDrivers collection:

[FunctionName("PizzaDriverLocationUpdated")]
public static async Task Run([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,            
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        log.LogInformation($"Total modified drivers: {modifiedDrivers.Count}");

        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");
            var driverLat = modifiedDriver.GetPropertyValue<double>("Latitude");
            var driverLong = modifiedDriver.GetPropertyValue<double>("Longitude");

            log.LogInformation($"Driver {modifiedDriver.Id} {driverName} was updated (lat,long) {driverLat}, {driverLong}");                    
        }
    }
}

The preceding code just outputs what documents changed to the log/console.

The reason that the foreach loop is required is that the function may receive multiple documents from the change feed.

If you only want to work with a single document per invocation of the function you can restrict the number of documents. To do this the MaxItemsPerInvocation property of the trigger attribute can be set as the following code demonstrates:

[FunctionName("PizzaDriverLocationUpdated")]
public static async Task Run([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    MaxItemsPerInvocation = 1,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,            
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        log.LogInformation($"Total modified drivers: {modifiedDrivers.Count}");

        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");
            var driverLat = modifiedDriver.GetPropertyValue<double>("Latitude");
            var driverLong = modifiedDriver.GetPropertyValue<double>("Longitude");

            log.LogInformation($"Driver {modifiedDriver.Id} {driverName} was updated (lat,long) {driverLat}, {driverLong}");                    
        }
    }
}

Now there will only ever be at most one document passed to each invocation, so the function can be simplified to the following (removing the foreach loop):

[FunctionName("PizzaDriverLocationUpdated")]
public static async Task Run([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    MaxItemsPerInvocation = 1,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null && modifiedDrivers.Any())
    {
        var modifiedDriver = modifiedDrivers[0];

        var driverName = modifiedDriver.GetPropertyValue<string>("Name");
        var driverLat = modifiedDriver.GetPropertyValue<double>("Latitude");
        var driverLong = modifiedDriver.GetPropertyValue<double>("Longitude");

        log.LogInformation($"Driver {modifiedDriver.Id} {driverName} was updated (lat,long) {driverLat}, {driverLong}");                
    }
}

Now if 2 updates are made to the database, the function will execute twice, once per changed document:

Executing 'PizzaDriverLocationUpdated' (Reason='New changes on collection driver at 2019-05-28T05:42:25.2143459Z', Id=06d1e318-ab26-4f9b-ac94-3c95fba2dc5c)
Driver 1 Amrit was updated (lat,long) 31.2, 300.3
Executed 'PizzaDriverLocationUpdated' (Succeeded, Id=06d1e318-ab26-4f9b-ac94-3c95fba2dc5c)
Executing 'PizzaDriverLocationUpdated' (Reason='New changes on collection driver at 2019-05-28T05:42:25.5350736Z', Id=34629f4a-66f4-4bbc-85d6-8c3d8c17ee62)
Driver 2 Sarah was updated (lat,long) 222.1, 31.2
Executed 'PizzaDriverLocationUpdated' (Succeeded, Id=34629f4a-66f4-4bbc-85d6-8c3d8c17ee62)

In the next part of this series we’ll look at how to execute multiple different functions when a document changes.

Executing Multiple Azure Functions When Azure Cosmos DB Documents Are Created or Modified


This is the sixth part in a series of articles.

Sometimes you may want more than one Azure Function to execute when a document is changed or inserted in Cosmos DB.

You could just use one function that performs multiple logical operations on the changed document but there are some things to consider when doing this:

  • What if the function throws an exception during the first logical operation? (Operation 2 may not be executed.)
  • What about scaling? You (and Azure) won’t be able to scale the two logical operations independently.
  • How long will the function execute for if it performs multiple operations – will you risk function timeouts?
  • How will you monitor the operations when they are all contained in a single function?
  • How will you update the code/fix bugs? You will have to update the entire function even if the bug relates to only one operation.
  • How will you write automated tests? They will be more complex if there are multiple operations in a single function.

In some cases you may decide the preceding points don’t matter, but if they do you will need to split the operations into multiple separate Azure Functions.

As an example, the following function contains two logical operations in a single function:

[FunctionName("PizzaDriverLocationUpdated")]
public static void RunMultipleOperations([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            // Simulate running logical operation 1
            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");

            // Simulate running logical operation 2
            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

The preceding function could be separated into two separate functions, each one containing only a single logical operation:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

If you try to run the function app (for example in the local development functions runtime) you will see errors such as the following:

Unhealthiness detected in the operation AcquireLease for localhost_...==_...=..1 Owner='626d5aec...' Continuation="49" Timestamp(local)=...
Unhealthiness detected in the operation AcquireLease for localhost_...==_...=..0 Owner='626d5aec... Continuation="586" Timestamp(local)=...

If you then make updates/inserts you may see that only one of the two functions is executed, rather than both of them. This is due to change feed leases.

Understanding Azure Cosmos DB Change Feed Leases

The Azure Functions Cosmos DB trigger knows when documents are changed/inserted by way of the Cosmos DB change feed.

The change feed at a simple level listens for changes made in a collection and allows these changes to be passed to other processes (such as Azure Functions) to do work on.

Without a way to keep track of which changes in the underlying collection have already been “fed” out, there would be no way to know which changed documents have been passed to the external process(es). This is where the lease collection comes in.

The lease collection stores a “checkpoint” for an Azure Function that is using the Cosmos DB trigger. Without this checkpoint, the function would not know if it has processed changed documents or not.

When only one function exists for a Cosmos DB collection there is no problem, as only a single checkpoint needs to be stored.

When more than one function exists, there needs to be a way to store a different checkpoint for each function.

One way to do this is to use lease prefixes.

Sharing a Single Lease Collection Across Multiple Azure Functions

To use a single lease collection when you have multiple Azure Functions, you can use the LeaseCollectionPrefix property of the [CosmosDBTrigger] attribute. The value for this property needs to be unique for every function, as the following code demonstrates:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionPrefix = "PizzaDriverLocationUpdated1",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionPrefix = "PizzaDriverLocationUpdated2",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

In the preceding code, notice LeaseCollectionPrefix = "PizzaDriverLocationUpdated1", and LeaseCollectionPrefix = "PizzaDriverLocationUpdated2".

If the function app is run now there is no startup error and changes made to a document trigger both functions:

Executing 'PizzaDriverLocationUpdated2' (Reason='New changes on collection driver at 2019-05-31T02:49:55.6946671Z', Id=b1476848-7f98-4362-a25f-69beb714c379)
Executing 'PizzaDriverLocationUpdated1' (Reason='New changes on collection driver at 2019-05-31T02:49:55.6946679Z', Id=366fa257-3b94-4d41-94f2-2777e0b8249a)
Running operation 2 for driver 1 Amrit
Running operation 1 for driver 1 Amrit
Executed 'PizzaDriverLocationUpdated2' (Succeeded, Id=b1476848-7f98-4362-a25f-69beb714c379)
Executed 'PizzaDriverLocationUpdated1' (Succeeded, Id=366fa257-3b94-4d41-94f2-2777e0b8249a)

If you check the lease collection behind the scenes notice the lease prefixes in use as the following screenshot shows:

Azure Functions Lease Prefixes


Using Multiple Azure Cosmos DB Lease Collections with Azure Functions

Rather than sharing a single lease collection, you can instead specify completely separate collections with the LeaseCollectionName property:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated1",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated2",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

Notice in the preceding code LeaseCollectionName = "PizzaDriverLocationUpdated1", and LeaseCollectionName = "PizzaDriverLocationUpdated2". Also notice CreateLeaseCollectionIfNotExists = true; as its name suggests, this will create the lease collections if they don’t already exist.

Running the function app once again and changing a document will result in both functions executing.

Because there are now two separate collections being used for leases, there will be a cost associated with having them. Also be sure to read up on request units (RUs) for your lease collections; especially if you are sharing a lease collection and using lease prefixes, keep an eye on the metrics and make sure you are not getting throttled requests on your lease collection(s).

How to Schedule Cosmos DB Data Processing With Azure Functions


This is the seventh part in a series of articles.

You can perform scheduled/batch processing of Azure Cosmos DB data by making use of timer triggers in Azure Functions.

Timer triggers allow you to set up a function to execute periodically based on a set schedule.

For example, the following attribute will cause the function to be executed once per day at 22:00 hours:

[TimerTrigger("0 0 22 * * *")]

As an example, in this series we’ve been using the domain of pizza delivery. Suppose that once per day the manager wanted an SMS with the total sales for the day.

To accomplish this we can combine a timer trigger, a Cosmos DB input binding, and a Twilio SMS output binding. In this example, the Cosmos DB input binding is bound to an instance of DocumentClient. This allows us to perform more complex queries against Cosmos DB, as we saw in part 2 of this series.

Once we have a DocumentClient instance, we can use LINQ to query Cosmos DB:

DateTime startOfToday = DateTime.Today;
DateTime endOfToday = startOfToday.AddDays(1).AddTicks(-1); 

decimal totalValueOfTodaysOrders = 
    client.CreateDocumentQuery<Order>(ordersCollectionUri, options)
         .Where(order => order.OrderDate >= startOfToday && order.OrderDate <= endOfToday)
         .Sum(order => order.OrderTotal);

The preceding query gets the total value of orders that have today’s date, where order documents look like the following:

[
    {
        "id": "3",
        "StoreId": 2,
        "OrderTotal": 10.25,
        "OrderDate": "2019-06-06T17:17:17.7251173Z",
        "_rid": "Vg08AKOQeVQBAAAAAAAAAA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQBAAAAAAAAAA==/",
        "_etag": "\"00000000-0000-0000-1c2d-b2a092fd01d5\"",
        "_attachments": "attachments/",
        "_ts": 1559801067
    },
    {
        "id": "4",
        "StoreId": 2,
        "OrderTotal": 10.25,
        "OrderDate": "2019-06-06T18:18:18.7251173Z",
        "_rid": "Vg08AKOQeVQCAAAAAAAAAA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQCAAAAAAAAAA==/",
        "_etag": "\"00000000-0000-0000-1c37-77f607ea01d5\"",
        "_attachments": "attachments/",
        "_ts": 1559805263
    },
    {
        "id": "1",
        "StoreId": 1,
        "OrderTotal": 100,
        "OrderDate": "2019-06-06T14:14:14.7251173Z",
        "_rid": "Vg08AKOQeVQBAAAAAAAACA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQBAAAAAAAACA==/",
        "_etag": "\"00000000-0000-0000-1c2d-a87985f501d5\"",
        "_attachments": "attachments/",
        "_ts": 1559801050
    },
    {
        "id": "2",
        "StoreId": 1,
        "OrderTotal": 25.87,
        "OrderDate": "2019-06-06T16:16:16.7251173Z",
        "_rid": "Vg08AKOQeVQCAAAAAAAACA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQCAAAAAAAACA==/",
        "_etag": "\"00000000-0000-0000-1c2d-aef57e8c01d5\"",
        "_attachments": "attachments/",
        "_ts": 1559801061
    }
]
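
The Order class used in the LINQ query isn’t shown in the original; a minimal sketch matching these documents might be (using Json.NET’s [JsonProperty] to map the lowercase id):

using System;
using Newtonsoft.Json;

public class Order
{
    [JsonProperty("id")]
    public string Id { get; set; }

    public int StoreId { get; set; }
    public decimal OrderTotal { get; set; }
    public DateTime OrderDate { get; set; }
}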

Once the total order value has been calculated, a Twilio SMS can be created and written to the Twilio output binding.

The complete listing is as follows:

using System;
using System.Linq;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

namespace DontCodeTiredDemosV2.CosmosDemos
{
    public static class DailySales
    {
        [FunctionName("DailySales")]
        public static void Run(
            [TimerTrigger("0 0 22 * * *")]TimerInfo myTimer,            
            [CosmosDB(ConnectionStringSetting = "pizzaConnection")] DocumentClient client,
            [TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "%TwilioFromNumber%")
                ] out CreateMessageOptions messageToSend,
            ILogger log)
        {
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

            Uri ordersCollectionUri = UriFactory.CreateDocumentCollectionUri(databaseId: "pizza", collectionId: "orders");

            var options = new FeedOptions { EnableCrossPartitionQuery = true }; // Enable cross partition query

            DateTime startOfToday = DateTime.Today;
            DateTime endOfToday = startOfToday.AddDays(1).AddTicks(-1); 

            decimal totalValueOfTodaysOrders = 
                client.CreateDocumentQuery<Order>(ordersCollectionUri, options)
                      .Where(order => order.OrderDate >= startOfToday && order.OrderDate <= endOfToday)
                      .Sum(order => order.OrderTotal);


            var messageText = $"Total sales for today: {totalValueOfTodaysOrders}";

            log.LogInformation(messageText);

            var managersMobileNumber = new PhoneNumber(Environment.GetEnvironmentVariable("ManagersMobileNumber"));

            var mobileMessage = new CreateMessageOptions(managersMobileNumber)
            {
                Body = messageText
            };

            messageToSend = mobileMessage;
        }
    }
}

In the preceding code, notice the From = "%TwilioFromNumber%" element of the [TwilioSms] binding; the %% means that the from number will be read from configuration, e.g. local.settings.json in the development environment. Similarly, notice that the phone number the SMS is sent to is read from configuration: var managersMobileNumber = new PhoneNumber(Environment.GetEnvironmentVariable("ManagersMobileNumber"));

Now once per day at 10pm the function will run and send an SMS to the manager giving them the day’s sales figures.


Accessing Cosmos DB JSON Properties in Azure Functions with Dynamic C#


This is the eighth part in a series of articles.

When working with the Cosmos DB Microsoft.Azure.Documents.Document class, if you need to get custom properties from the document you can use the GetPropertyValue method, as we saw in part six of this series and replicated as follows:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated1",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

In the preceding code, the Azure Function is triggered by new/updated documents and the data that needs processing is passed to the function by way of the IReadOnlyList<Document> modifiedDrivers parameter. If you have multiple functions that work with a document type you may end up with many duplicated GetPropertyValue calls, such as repeatedly getting the driver’s name: var driverName = modifiedDriver.GetPropertyValue<string>("Name"); Also notice the use of the magic string “Name” to refer to the document property to retrieve – this is not type safe and will return null at runtime if there is no property with that name.
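
One way to reduce that duplication is to centralise the property access in a single mapping helper; the following is just a sketch, reusing the DriverLocationUpdatedMessage type from earlier in this series:

// sketch: keep the magic strings in one place rather than in every function
private static DriverLocationUpdatedMessage ToDriverLocationMessage(Document doc)
{
    return new DriverLocationUpdatedMessage
    {
        DriverId = doc.Id,
        DriverName = doc.GetPropertyValue<string>("Name"),
        Latitude = doc.GetPropertyValue<double>("Latitude"),
        Longitude = doc.GetPropertyValue<double>("Longitude")
    };
}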

Another option to simplify the code and remove the magic string is to use dynamic as the following code demonstrates:

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated2",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<dynamic> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.Name;

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

Notice in the preceding code that the binding has been changed from IReadOnlyList<Document> modifiedDrivers to IReadOnlyList<dynamic> modifiedDrivers, and the code has changed from var driverName = modifiedDriver.GetPropertyValue<string>("Name"); to var driverName = modifiedDriver.Name;

While this removes the magic string, it is still not type safe: an incorrectly spelled property name will not error at compile time. Furthermore, if the property does not exist, rather than returning null, an exception will be thrown in your function at runtime.

If you’re not familiar with dynamic C# be sure to check out my Dynamic C# Fundamentals Pluralsight course.

Microsoft Feature Toggle Feature Flag Library: A First Look


As the creator of the .NET FeatureToggle library, which has over half a million downloads on NuGet, I recently learned with some interest (thanks @OzBobWA) that Microsoft is working on a feature toggle / feature flag library.

It’s in its infancy at the moment and is currently in early preview on NuGet; as I understand it, the library is being developed by the Azure team and is not currently open sourced/available on GitHub.

Essentially the library allows you to configure whether a feature is on or off depending on a configuration setting, such as defined in the configuration JSON file or in Azure configuration.

Whilst the development of the library currently appears to be focused on enabling feature toggling in ASP.NET Core, I thought it would be interesting to see if I could get it to work in a .NET Core console app. I hacked together the following code to demonstrate:

using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.FeatureManagement;

namespace ConsoleApp1
{
    class Program
    {
        static readonly IFeatureManager FeatureManager;

        static Program()
        {
            // Setup configuration to come from config file 
            IConfigurationBuilder builder = new ConfigurationBuilder();
            builder.AddJsonFile("appsettings.json");
            var configuration = builder.Build();

            // Register services (including feature management)
            var serviceCollection = new ServiceCollection();
            serviceCollection.AddSingleton<IConfiguration>(configuration);
            serviceCollection.AddFeatureManagement();

            // build the service and get IFeatureManager instance
            var serviceProvider = serviceCollection.BuildServiceProvider();
            FeatureManager = serviceProvider.GetService<IFeatureManager>();
        }

        static void Main(string[] args)
        {
            if (FeatureManager.IsEnabled("SayHello"))
            {
                Console.WriteLine("Hello World!");
            }


            Console.WriteLine("Press any key to exit...");
            Console.ReadLine();
        } 
    }
}

This code requires the following NuGets:

<PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="3.0.0-preview6.19304.6" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.2.0" />
<PackageReference Include="Microsoft.FeatureManagement" Version="1.0.0-preview-009000001-1251" />

The console app attempts to read from the appsettings.json file and looks in the FeatureManagement section to see whether features are enabled or not:

{
  "FeatureManagement": {
    "SayHello": true
  }
}

In the code, strings are used to evaluate whether a feature is enabled or not; e.g. the line if (FeatureManager.IsEnabled("SayHello")) looks for a configuration value called “SayHello”.

If this value is false, the Console.WriteLine("Hello World!"); will not execute; if it is true the text “Hello World!” will be output.

Comparing Microsoft.FeatureManagement to FeatureToggle

When compared to the FeatureToggle library there are some interesting differences; for example, the same app created using the FeatureToggle library would look like the following:

using System;
using FeatureToggle;

namespace ConsoleApp2
{
    class SayHelloFeature : SimpleFeatureToggle { }

    class Program
    {
        static void Main(string[] args)
        {
            if (Is<SayHelloFeature>.Enabled)
            {
                Console.WriteLine("Hello World!");
            }

            Console.WriteLine("Press any key to exit...");
            Console.ReadLine();            
        }
    }
}

Notice in the preceding code that there are no magic strings to read the toggle value; instead you define a class that represents the feature and then, by convention, the name of the class is used to locate the toggle value in configuration, which looks like:

{
  "FeatureToggle": {
    "SayHelloFeature": "false"
  }
}

Overall, whilst it is still early days for the library, it is cool that Microsoft may have their own supported feature toggling library in the future, helping to bring the concept of feature toggles as an alternative/adjunct to feature branches in source control to “the masses”.

Understanding Azure Durable Functions - Part 1: Overview


This is the first part in a series of articles.

Durable Functions are built on top of the basic Azure Functions platform and provide the ability to define how multiple individual functions can be set up to work together.

You can accomplish a lot with just the basic Azure Functions, so the durable extensions are not strictly required to implement your solutions. However, if you find yourself needing to make multiple functions work together in some kind of workflow, then durable functions may be able to simplify things and, as a bonus, allow you to document in code how the functions interact.

Using Multiple Functions

Just as you would keep classes and methods functionally cohesive (i.e. doing one, or a small number of related, things), so should your Azure Functions be “specialized”. There are a number of good reasons for breaking down a problem into multiple individual functions, such as:

  • Auto-scaling of functions rather than auto-scaling one massive “god” function
  • Maintainability: individual functions doing one thing are easier to understand/bug fix/test
  • Reusability/composeability: smaller units (functions) of logic could be reused in multiple workflows
  • Time-outs: if an individual function execution exceeds a timeout it will be stopped before finishing

Even if you are not going to use durable functions the above points still make sense.

Common Patterns That Durable Functions Can Help With

There are a number of common logical/architectural/workflow patterns that durable functions can help to orchestrate such as:

  • Chained Functions: the output of one function triggers the next function in the chain (aka functions pipeline)
  • Fan out, fan in: Split data into “chunks” (fanning out), process chunks in parallel, aggregate results (fan-in)
  • Asynchronous HTTP APIs: Co-ordinate HTTP request with other services, client polls endpoint to check if work complete
  • Monitors: recurring process to perform clean-up, monitor (call) APIs to wait for completion, etc.
  • Human interaction: wait for human to make a decision at some part during the workflow

Implementing the preceding patterns without durable functions may prove difficult, complex, or error prone due to the need to manually manage checkpoints (where are we in the process?) in addition to other implementation details.

What Do Durable Functions Provide?

When using durable functions, there are a number of implementation details that are taken care of for you such as:

  • Maintains execution position of the workflow
  • When to execute next function
  • Replaying actions
  • Workflow monitoring/diagnostics
  • Workflow state storage
  • Scalability

Durable functions also provide the ability to specify the workflow (“orchestration”) in code, rather than using a visual flowchart-style tool, for example.

In the next part of this series we’ll see how to create a basic durable orchestration in C#.

Understanding Azure Durable Functions - Part 2: Creating Your First Durable Function


This is the second part in a series of articles.

Before creating durable functions it’s important to understand the types of functions involved. There are essentially 3 logical types:

  • Client function: the entry point function, called by the end client to start the workflow, e.g. an HTTP triggered function
  • Orchestrator function: defines the workflow and what activity functions to call
  • Activity function: the function(s) that do the actual work/processing

When you create a durable function in Visual Studio, the template creates each of these 3 functions for you as a starting point.

Setup Development Environment

The first thing to do is set up your development environment:

  • Install Visual Studio 2019 (e.g. the free community version) – when installing, remember to install the Azure development workload as this enables functions development
  • Install and check that the Azure storage emulator is running – this allows you to run/test functions locally without deploying to Azure in the cloud

Create Azure Functions Project

Next, open Visual Studio 2019 and create a new Azure Functions project as the following screenshot shows:

Creating a new Azure Functions project in Visual Studio 2019

Once the project is created, you add individual functions to it.

At this point you should also manage NuGet packages for the project and update any packages to the latest versions.

Add a Function

Right click the new project and choose Add –> New Azure Function.

Adding a new Azure Function in Visual Studio 2019

Give the function a name (or leave it as the default “Function1.cs”) and click OK – this will open the function template chooser:

Azure Functions template chooser

Select Durable Functions Orchestration, and click OK.

This will create the following starter code:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

namespace DontCodeTiredDemosV2.Durables
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static async Task<List<string>> RunOrchestrator(
            [OrchestrationTrigger] DurableOrchestrationContext context)
        {
            var outputs = new List<string>();

            // Replace "hello" with the name of your Durable Activity Function.
            outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "Tokyo"));
            outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "Seattle"));
            outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "London"));

            // returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
            return outputs;
        }

        [FunctionName("Function1_Hello")]
        public static string SayHello([ActivityTrigger] string name, ILogger log)
        {
            log.LogInformation($"Saying hello to {name}.");
            return $"Hello {name}!";
        }

        [FunctionName("Function1_HttpStart")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]HttpRequestMessage req,
            [OrchestrationClient]DurableOrchestrationClient starter,
            ILogger log)
        {
            // Function input comes from the request content.
            string instanceId = await starter.StartNewAsync("Function1", null);

            log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

            return starter.CreateCheckStatusResponse(req, instanceId);
        }
    }
}

Notice in the preceding code the 3 types of function: client, orchestrator, and activity.

We can make this a bit clearer by renaming a few things:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

namespace DurableDemos
{
    public static class Function1
    {
        [FunctionName("OrchestratorFunction")]
        public static async Task<List<string>> RunOrchestrator(
            [OrchestrationTrigger] DurableOrchestrationContext context)
        {
            var outputs = new List<string>();

            // Replace "hello" with the name of your Durable Activity Function.
            outputs.Add(await context.CallActivityAsync<string>("ActivityFunction", "Tokyo"));
            outputs.Add(await context.CallActivityAsync<string>("ActivityFunction", "Seattle"));
            outputs.Add(await context.CallActivityAsync<string>("ActivityFunction", "London"));

            // returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
            return outputs;
        }

        [FunctionName("ActivityFunction")]
        public static string SayHello([ActivityTrigger] string name, ILogger log)
        {
            log.LogInformation($"Saying hello to {name}.");
            return $"Hello {name}!";
        }

        [FunctionName("ClientFunction")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]HttpRequestMessage req,
            [OrchestrationClient]DurableOrchestrationClient starter,
            ILogger log)
        {
            // Function input comes from the request content.
            string instanceId = await starter.StartNewAsync("OrchestratorFunction", null);

            log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

            return starter.CreateCheckStatusResponse(req, instanceId);
        }
    }
}

There are 3 Azure Functions in this single Function1 class.

First, the “ClientFunction” is what starts the workflow; in this example it’s triggered by an HTTP call from the client, but you could use any trigger here – for example a message on a queue or a timer. When this function is called, it doesn’t do any processing itself but rather creates an instance of the workflow that is defined in the "OrchestratorFunction". The line string instanceId = await starter.StartNewAsync("OrchestratorFunction", null); is what kicks off the workflow: the first argument is a string naming the orchestration to start, the second parameter (in this example null) is any input that needs to be passed to the orchestrator. The final line return starter.CreateCheckStatusResponse(req, instanceId); returns an HttpResponseMessage to the HTTP caller.
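
As a sketch of using a different trigger, the same orchestration could be started from a queue message (the queue name here is an assumption, and the Storage queue extension is assumed to be referenced):

[FunctionName("QueueClientFunction")]
public static async Task QueueStart(
    [QueueTrigger("start-orchestrations")] string queueMessage,
    [OrchestrationClient] DurableOrchestrationClient starter,
    ILogger log)
{
    // pass the queue message content to the orchestration as its input
    string instanceId = await starter.StartNewAsync("OrchestratorFunction", queueMessage);

    log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
}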

The second function "OrchestratorFunction" is what defines the activity functions that will comprise the workflow. In this function the CallActivityAsync method defines what activities get executed as part of the orchestration, in this example the same activity "ActivityFunction" is called 3 times. The CallActivityAsync method takes 2 parameters: the first is a string naming the activity function to execute, and the second is any data to be passed to the activity function; in this case hardcoded strings "Tokyo", "Seattle", and "London". Once these activities have completed execution, the result will be returned – a list: ["Hello Tokyo!", "Hello Seattle!", "Hello London!"].

The third function "ActivityFunction" is where the actual work/processing takes place.

Testing Durable Functions Locally

The project can be run locally by hitting F5 in Visual Studio; this will start the local functions runtime:

                  %%%%%%
                 %%%%%%
            @   %%%%%%    @
          @@   %%%%%%      @@
       @@@    %%%%%%%%%%%    @@@
     @@      %%%%%%%%%%        @@
       @@         %%%%       @@
         @@      %%%       @@
           @@    %%      @@
                %%
                %

Azure Functions Core Tools (2.7.1373 Commit hash: cd9bfca26f9c7fe06ce245f5bf69bc6486a685dd)
Function Runtime Version: 2.0.12507.0
[9/07/2019 3:29:16 AM] Starting Rpc Initialization Service.
[9/07/2019 3:29:16 AM] Initializing RpcServer
[9/07/2019 3:29:16 AM] Building host: startup suppressed:False, configuration suppressed: False
[9/07/2019 3:29:17 AM] Initializing extension with the following settings: Initializing extension with the following settings:
[9/07/2019 3:29:17 AM] AzureStorageConnectionStringName: , MaxConcurrentActivityFunctions: 80, MaxConcurrentOrchestratorFunctions: 80, PartitionCount: 4, ControlQueueBatchSize: 32, ControlQueueVisibilityTimeout: 00:05:00, WorkItemQueueVisibilityTimeout: 00:05:00, ExtendedSessionsEnabled: False, EventGridTopicEndpoint: , NotificationUrl: http://localhost:7071/runtime/webhooks/durabletask, TrackingStoreConnectionStringName: , MaxQueuePollingInterval: 00:00:30, LogReplayEvents: False. InstanceId: . Function: . HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 0.
[9/07/2019 3:29:17 AM] Initializing Host.
[9/07/2019 3:29:17 AM] Host initialization: ConsecutiveErrors=0, StartupCount=1
[9/07/2019 3:29:17 AM] LoggerFilterOptions
[9/07/2019 3:29:17 AM] {
[9/07/2019 3:29:17 AM]   "MinLevel": "None",
[9/07/2019 3:29:17 AM]   "Rules": [
[9/07/2019 3:29:17 AM]     {
[9/07/2019 3:29:17 AM]       "ProviderName": null,
[9/07/2019 3:29:17 AM]       "CategoryName": null,
[9/07/2019 3:29:17 AM]       "LogLevel": null,
[9/07/2019 3:29:17 AM]       "Filter": "<AddFilter>b__0"
[9/07/2019 3:29:17 AM]     },
[9/07/2019 3:29:17 AM]     {
[9/07/2019 3:29:17 AM]       "ProviderName": "Microsoft.Azure.WebJobs.Script.WebHost.Diagnostics.SystemLoggerProvider",
[9/07/2019 3:29:17 AM]       "CategoryName": null,
[9/07/2019 3:29:17 AM]       "LogLevel": "None",
[9/07/2019 3:29:17 AM]       "Filter": null
[9/07/2019 3:29:17 AM]     },
[9/07/2019 3:29:17 AM]     {
[9/07/2019 3:29:17 AM]       "ProviderName": "Microsoft.Azure.WebJobs.Script.WebHost.Diagnostics.SystemLoggerProvider",
[9/07/2019 3:29:17 AM]       "CategoryName": null,
[9/07/2019 3:29:17 AM]       "LogLevel": null,
[9/07/2019 3:29:17 AM]       "Filter": "<AddFilter>b__0"
[9/07/2019 3:29:17 AM]     }
[9/07/2019 3:29:17 AM]   ]
[9/07/2019 3:29:17 AM] }
[9/07/2019 3:29:17 AM] FunctionResultAggregatorOptions
[9/07/2019 3:29:17 AM] {
[9/07/2019 3:29:17 AM]   "BatchSize": 1000,
[9/07/2019 3:29:17 AM]   "FlushTimeout": "00:00:30",
[9/07/2019 3:29:17 AM]   "IsEnabled": true
[9/07/2019 3:29:17 AM] }
[9/07/2019 3:29:17 AM] SingletonOptions
[9/07/2019 3:29:17 AM] {
[9/07/2019 3:29:17 AM]   "LockPeriod": "00:00:15",
[9/07/2019 3:29:17 AM]   "ListenerLockPeriod": "00:00:15",
[9/07/2019 3:29:17 AM]   "LockAcquisitionTimeout": "10675199.02:48:05.4775807",
[9/07/2019 3:29:17 AM]   "LockAcquisitionPollingInterval": "00:00:05",
[9/07/2019 3:29:17 AM]   "ListenerLockRecoveryPollingInterval": "00:01:00"
[9/07/2019 3:29:17 AM] }
[9/07/2019 3:29:17 AM] Starting JobHost
[9/07/2019 3:29:17 AM] Starting Host (HostId=desktopkghqug8-1671102379, InstanceId=015cba37-1f46-41a1-b3c1-19f341c4d3d9, Version=2.0.12507.0, ProcessId=18728, AppDomainId=1, InDebugMode=False, InDiagnosticMode=False, FunctionsExtensionVersion=)
[9/07/2019 3:29:17 AM] Loading functions metadata
[9/07/2019 3:29:17 AM] 3 functions loaded
[9/07/2019 3:29:17 AM] Generating 3 job function(s)
[9/07/2019 3:29:17 AM] Found the following functions:
[9/07/2019 3:29:17 AM] DurableDemos.Function1.SayHello
[9/07/2019 3:29:17 AM] DurableDemos.Function1.HttpStart
[9/07/2019 3:29:17 AM] DurableDemos.Function1.RunOrchestrator
[9/07/2019 3:29:17 AM]
[9/07/2019 3:29:17 AM] Host initialized (221ms)
[9/07/2019 3:29:18 AM] Starting task hub worker. InstanceId: . Function: . HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 1.
[9/07/2019 3:29:18 AM] Host started (655ms)
[9/07/2019 3:29:18 AM] Job host started
Hosting environment: Production
Content root path: C:\Users\Admin\OneDrive\Documents\dct\19\DontCodeTiredDemosV2\DontCodeTiredDemosV2\DurableDemos\bin\Debug\netcoreapp2.1
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.

Http Functions:

        ClientFunction: [GET,POST] http://localhost:7071/api/ClientFunction

[9/07/2019 3:29:23 AM] Host lock lease acquired by instance ID '000000000000000000000000E72C9561'.

Now to start an instance of the workflow, the following PowerShell can be used:

$R = Invoke-WebRequest 'http://localhost:7071/api/ClientFunction' -Method 'POST'

This will result in the following rather verbose output:

[9/07/2019 3:30:55 AM] Executing HTTP request: {
[9/07/2019 3:30:55 AM]   "requestId": "36d9f77f-1ceb-43ec-aa1d-5702b42a8e15",
[9/07/2019 3:30:55 AM]   "method": "POST",
[9/07/2019 3:30:55 AM]   "uri": "/api/ClientFunction"
[9/07/2019 3:30:55 AM] }
[9/07/2019 3:30:55 AM] Executing 'ClientFunction' (Reason='This function was programmatically called via the host APIs.', Id=a014a4ae-77ff-46c7-b812-344bd442da38)
[9/07/2019 3:30:55 AM] f5a38610c07a4c90815f2936451628b8: Function 'OrchestratorFunction (Orchestrator)' scheduled. Reason: NewInstance. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 2.
[9/07/2019 3:30:56 AM] Started orchestration with ID = 'f5a38610c07a4c90815f2936451628b8'.
[9/07/2019 3:30:56 AM] Executed 'ClientFunction' (Succeeded, Id=a014a4ae-77ff-46c7-b812-344bd442da38)
[9/07/2019 3:30:56 AM] Executing 'OrchestratorFunction' (Reason='', Id=d6263f09-2372-4bfb-9473-70f03874cfee)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'OrchestratorFunction (Orchestrator)' started. IsReplay: False. Input: (16 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 3.
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' scheduled. Reason: OrchestratorFunction. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 4.
[9/07/2019 3:30:56 AM] Executed 'OrchestratorFunction' (Succeeded, Id=d6263f09-2372-4bfb-9473-70f03874cfee)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'OrchestratorFunction (Orchestrator)' awaited. IsReplay: False. State: Awaited. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 5.
[9/07/2019 3:30:56 AM] Executed HTTP request: {
[9/07/2019 3:30:56 AM]   "requestId": "36d9f77f-1ceb-43ec-aa1d-5702b42a8e15",
[9/07/2019 3:30:56 AM]   "method": "POST",
[9/07/2019 3:30:56 AM]   "uri": "/api/ClientFunction",
[9/07/2019 3:30:56 AM]   "identities": [
[9/07/2019 3:30:56 AM]     {
[9/07/2019 3:30:56 AM]       "type": "WebJobsAuthLevel",
[9/07/2019 3:30:56 AM]       "level": "Admin"
[9/07/2019 3:30:56 AM]     }
[9/07/2019 3:30:56 AM]   ],
[9/07/2019 3:30:56 AM]   "status": 202,
[9/07/2019 3:30:56 AM]   "duration": 699
[9/07/2019 3:30:56 AM] }
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' started. IsReplay: False. Input: (36 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 6.
[9/07/2019 3:30:56 AM] Executing 'ActivityFunction' (Reason='', Id=d35e9667-f77b-4328-aff9-4ecbc3b66e89)
[9/07/2019 3:30:56 AM] Saying hello to Tokyo.
[9/07/2019 3:30:56 AM] Executed 'ActivityFunction' (Succeeded, Id=d35e9667-f77b-4328-aff9-4ecbc3b66e89)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' completed. ContinuedAsNew: False. IsReplay: False. Output: (56 bytes). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 7.
[9/07/2019 3:30:56 AM] Executing 'OrchestratorFunction' (Reason='', Id=5cc451d2-dd5b-4cb5-b10a-02e7bca71a08)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' scheduled. Reason: OrchestratorFunction. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 8.
[9/07/2019 3:30:56 AM] Executed 'OrchestratorFunction' (Succeeded, Id=5cc451d2-dd5b-4cb5-b10a-02e7bca71a08)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'OrchestratorFunction (Orchestrator)' awaited. IsReplay: False. State: Awaited. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 9.
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' started. IsReplay: False. Input: (44 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 10.
[9/07/2019 3:30:56 AM] Executing 'ActivityFunction' (Reason='', Id=719c797b-9ee1-4167-972c-c0b0c4dd886c)
[9/07/2019 3:30:56 AM] Saying hello to Seattle.
[9/07/2019 3:30:56 AM] Executed 'ActivityFunction' (Succeeded, Id=719c797b-9ee1-4167-972c-c0b0c4dd886c)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' completed. ContinuedAsNew: False. IsReplay: False. Output: (64 bytes). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 11.
[9/07/2019 3:30:56 AM] Executing 'OrchestratorFunction' (Reason='', Id=0b115432-1d9d-43af-b5da-3e3607b808ac)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' scheduled. Reason: OrchestratorFunction. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 12.
[9/07/2019 3:30:56 AM] Executed 'OrchestratorFunction' (Succeeded, Id=0b115432-1d9d-43af-b5da-3e3607b808ac)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'OrchestratorFunction (Orchestrator)' awaited. IsReplay: False. State: Awaited. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 13.
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' started. IsReplay: False. Input: (40 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 14.
[9/07/2019 3:30:56 AM] Executing 'ActivityFunction' (Reason='', Id=2cbd8e65-3e1d-4fbb-8d92-afe4b7e6a012)
[9/07/2019 3:30:56 AM] Saying hello to London.
[9/07/2019 3:30:56 AM] Executed 'ActivityFunction' (Succeeded, Id=2cbd8e65-3e1d-4fbb-8d92-afe4b7e6a012)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'ActivityFunction (Activity)' completed. ContinuedAsNew: False. IsReplay: False. Output: (60 bytes). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 15.
[9/07/2019 3:30:56 AM] Executing 'OrchestratorFunction' (Reason='', Id=10215167-9de6-4197-bc65-1d819b8471cb)
[9/07/2019 3:30:56 AM] f5a38610c07a4c90815f2936451628b8: Function 'OrchestratorFunction (Orchestrator)' completed. ContinuedAsNew: False. IsReplay: False. Output: (196 bytes). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 16.
[9/07/2019 3:30:56 AM] Executed 'OrchestratorFunction' (Succeeded, Id=10215167-9de6-4197-bc65-1d819b8471cb)

A few key things to notice:

Executing 'ClientFunction' – this is PowerShell calling the HTTP trigger function.

Started orchestration with ID = 'f5a38610c07a4c90815f2936451628b8' – the HTTP client function has started an instance of the orchestration.

And three Executing 'ActivityFunction'… entries – the 3 activity calls defined in the orchestrator function.

If we modify the "ActivityFunction" to introduce a simulated processing time:

[FunctionName("ActivityFunction")]
public static string SayHello([ActivityTrigger] string name, ILogger log)
{
    Thread.Sleep(5000); // simulate longer processing delay

    log.LogInformation($"Saying hello to {name}.");
    return $"Hello {name}!";
}

If we now run the project again and once more make the request from PowerShell, the client function returns a result to PowerShell “immediately”:

StatusCode        : 202
StatusDescription : Accepted
Content           : {"id":"5ed815a8fe3d497993266d49213a7c09","statusQueryGetUri":"http://localhost:7071/runtime/webhook
                    s/durabletask/instances/5ed815a8fe3d497993266d49213a7c09?taskHub=DurableFunctionsHub&connection=Sto
                    ra...
RawContent        : HTTP/1.1 202 Accepted
                    Retry-After: 10
                    Content-Length: 1232
                    Content-Type: application/json; charset=utf-8
                    Date: Tue, 09 Jul 2019 03:38:46 GMT
                    Location: http://localhost:7071/runtime/webhooks/durab...
Forms             : {}
Headers           : {[Retry-After, 10], [Content-Length, 1232], [Content-Type, application/json; charset=utf-8],
                    [Date, Tue, 09 Jul 2019 03:38:46 GMT]...}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : mshtml.HTMLDocumentClass
RawContentLength  : 1232

So even though the HTTP request is completed (with a 202 Accepted HTTP code), the orchestration is still running.
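
If we want to actually observe the orchestration finishing, one option is to poll the statusQueryGetUri returned in the 202 response. As a minimal sketch (not part of the original demo, and assuming the Newtonsoft.Json package is available), a simple C# console client could do this:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class OrchestrationStatusPoller
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Start the orchestration via the HTTP client function.
            var startResponse = await client.PostAsync("http://localhost:7071/api/ClientFunction", null);
            var payload = JObject.Parse(await startResponse.Content.ReadAsStringAsync());
            string statusUri = (string)payload["statusQueryGetUri"];

            // Poll until the orchestration reaches a terminal state.
            while (true)
            {
                var statusResponse = await client.GetAsync(statusUri);
                var status = JObject.Parse(await statusResponse.Content.ReadAsStringAsync());
                string runtimeStatus = (string)status["runtimeStatus"];
                Console.WriteLine($"runtimeStatus: {runtimeStatus}");

                if (runtimeStatus == "Completed" || runtimeStatus == "Failed" || runtimeStatus == "Terminated")
                {
                    Console.WriteLine(status["output"]); // e.g. ["Hello Tokyo!","Hello Seattle!","Hello London!"]
                    break;
                }

                await Task.Delay(TimeSpan.FromSeconds(2)); // fine here; this is not an orchestrator function
            }
        }
    }
}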

Later in this series of articles we’ll learn more and dig into more detail about what is going on behind the scenes.

Understanding Azure Durable Functions - Part 3: What Is Durability?


This is the third part in a series of articles.

Durable Functions make it easier to organize (orchestrate) multiple individual Azure Functions working together.

In addition to simplifying this orchestration, Durable Functions (as the name suggests) provides a level of “durability”. So what is this durability and how does it work?

Durable Functions provides “reliable execution”. What this means is that the framework takes care of a number of things for us; one of these is the management of orchestration history.

"Orchestrator functions and activity functions may be running on different VMs within a data center, and those VMs or the underlying networking infrastructure is not 100% reliable. In spite of this, Durable Functions ensures reliable execution of orchestrations." [Microsoft]

One important thing to note is that this “durability” is not meant to automatically retry operations or execute compensation logic; later in the series we’ll look at error handling.

Behind the Scenes

To better explain how this works, let’s take the following functions:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace DurableDemos
{
    public static class Function1
    {
        [FunctionName("OrchestratorFunction")]
        public static async Task<List<string>> RunOrchestrator(
            [OrchestrationTrigger] DurableOrchestrationContext context)
        {
            var outputs = new List<string>();

            // Call the activity function 3 times with different inputs.
            outputs.Add(await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "Tokyo"));
            outputs.Add(await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "Seattle"));
            outputs.Add(await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "London"));

            // returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
            return outputs;
        }

        [FunctionName("ActivityFunction")]
        public static string SayHello([ActivityTrigger] string name, ILogger log)
        {
            Thread.Sleep(5000); // simulate longer processing delay

            log.LogInformation($"Saying hello to {name}.");
            return $"Hello {name}!";
        }

        [FunctionName("ClientFunction")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]HttpRequestMessage req,
            [OrchestrationClient]DurableOrchestrationClient starter,
            ILogger log)
        {
            // Function input comes from the request content.
            string instanceId = await starter.StartNewAsync("ReplayExample", null);

            log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

            return starter.CreateCheckStatusResponse(req, instanceId);
        }
    }
}

If we execute the orchestration (as we did in part two of this series) with some PowerShell: $R = Invoke-WebRequest 'http://localhost:7071/api/ReplayExample_HttpStart' -Method 'POST'

If we then take a look in the Azure storage account associated with the function app (in this example the local storage emulator), we can see that a number of storage queues and tables have been created, as the following screenshot shows:

Durable Functions tables and queues

Whilst it is not necessary to understand how Durable Functions works behind the scenes, it is useful to know that these queues and tables exist and are used.

For example if we open the DurableFunctionsHubInstances table, we see a single row representing the execution of the orchestration we just initiated with PowerShell:

Azure Durable Functions behind the scenes table

Notice in the preceding screenshot, the Name field contains the name of the orchestration we just executed: [FunctionName("ReplayExample")]

If we execute the orchestration again, we get a second row, and so on.

The DurableFunctionsHubHistory table is more complex and has more data per invocation:

DurableFunctionsHubHistory Table

Notice in the previous screenshot that the data in this table keeps track of where the orchestrator function is up to, for example which activity functions have been executed so far.

The storage queues behind the scenes are used by the framework to drive function execution.
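
Although it’s possible to inspect these tables directly, the same state can also be read programmatically. As a rough illustration (this "StatusFunction" is hypothetical and not part of the original sample), an HTTP-triggered function could use DurableOrchestrationClient.GetStatusAsync to look up an instance (an additional using System.Net; is needed for HttpStatusCode):

[FunctionName("StatusFunction")]
public static async Task<HttpResponseMessage> GetStatus(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "status/{instanceId}")] HttpRequestMessage req,
    [OrchestrationClient] DurableOrchestrationClient client,
    string instanceId)
{
    // Reads the instance state that the framework persists in table storage.
    DurableOrchestrationStatus status = await client.GetStatusAsync(instanceId);

    if (status == null)
    {
        return req.CreateResponse(HttpStatusCode.NotFound); // unknown instance ID
    }

    return req.CreateResponse(HttpStatusCode.OK, new
    {
        status.Name,
        status.RuntimeStatus,
        status.CreatedTime,
        status.LastUpdatedTime
    });
}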

How Durable Functions Work – Checkpoints

The orchestrator function instance may be removed from memory while it is waiting for activity functions to execute, namely at the await context.CallActivityAsync calls. Because these calls are awaited, the orchestrator function may be temporarily unloaded from memory/paused until the activity function completes.

Because orchestrator functions may be removed from memory, there needs to be some way for the framework to keep track of which activity functions have completed and which ones have yet to be executed. The framework does this by creating “checkpoints” and using the DurableFunctionsHubHistory table to store them.

“Azure Storage does not provide any transactional guarantees between saving data into table storage and queues. To handle failures, the Durable Functions storage provider uses eventual consistency patterns. These patterns ensure that no data is lost if there is a crash or loss of connectivity in the middle of a checkpoint.” [Microsoft]

Essentially what this means is that the code in the orchestrator function may execute multiple times per invocation (be replayed), and the checkpoint data ensures that a given activity is not actually called multiple times.

At a high level, at each checkpoint (each await) the execution history is saved into table storage and messages are added to storage queues to trigger other functions.

Replay Considerations

Because the code in an orchestrator function can be replayed, there are a few restrictions on the code you write inside them.

Firstly, code in an orchestrator function should be deterministic; that is, if the code is replayed multiple times it should produce the same result. Some examples of non-deterministic code include random number generation, GUID generation, current date/time calls, and calls to non-deterministic code/APIs/endpoints.

Secondly, code inside orchestrator functions should not block; for example, do not perform I/O operations or call Thread.Sleep.

Thirdly, code inside orchestrator functions should not call into async code except by way of the DurableOrchestrationContext object that is passed into the orchestrator function (e.g. public static async Task RunOrchestrator([OrchestrationTrigger] DurableOrchestrationContext context)), for example initiating an activity function via context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "Tokyo"). So do not use Task.Run, Task.Delay, HttpClient.SendAsync, etc. inside an orchestrator function.

Inside an orchestrator function, if you want this kind of logic, use the substitute APIs recommended in the Microsoft docs, for example:

Current date/time – use context.CurrentUtcDateTime instead of DateTime.Now/DateTime.UtcNow.

New GUIDs – use context.NewGuid() instead of Guid.NewGuid().

Delays – use a durable timer via context.CreateTimer instead of Thread.Sleep or Task.Delay.

A sketch pulling these substitutes together appears after the note below.

Note: these restrictions apply to the orchestrator function, not to the activity function(s) or client function.
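
To make these restrictions concrete, here is a minimal sketch (the "DeterministicOrchestrator" name is made up for illustration) showing the replay-safe substitutes in use:

[FunctionName("DeterministicOrchestrator")]
public static async Task RunDeterministicOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    // Replay-safe current time: returns the same value on every replay,
    // unlike DateTime.Now/DateTime.UtcNow.
    DateTime startedAt = context.CurrentUtcDateTime;

    // Replay-safe GUID: the same value is produced on every replay,
    // unlike Guid.NewGuid().
    Guid correlationId = context.NewGuid();

    // Durable, replay-safe delay, unlike Thread.Sleep or Task.Delay.
    await context.CreateTimer(startedAt.AddSeconds(30), CancellationToken.None);

    // Async work goes through the context (e.g. calling an activity),
    // never through Task.Run or HttpClient.SendAsync directly.
    await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", correlationId.ToString());
}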

Seeing Replay in Action

As a quick example, we can modify the code in the orchestrator function to add a log message:

[FunctionName("ReplayExample")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context, ILogger log)
{
    log.LogInformation($"************** RunOrchestrator method executing ********************");
    await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "Tokyo");            
    await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "Seattle");
    await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "London");
}

Now when the client function is called and the orchestration started, the following simplified log output can be seen:

Executing HTTP request: {
  "requestId": "c7edb6b0-e947-4347-9f0e-fc46a2bdeefe",
  "method": "POST",
  "uri": "/api/ReplayExample_HttpStart"
}
Executing 'ReplayExample_HttpStart' (Reason='This function was programmatically called via the host APIs.', Id=ca981987-5ea3-4c76-8934-09fef757cd6a)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample (Orchestrator)' scheduled. Reason: NewInstance. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 2.
Started orchestration with ID = '75dce58e3be440e98dcd3c99112c9bbf'.
Executed 'ReplayExample_HttpStart' (Succeeded, Id=ca981987-5ea3-4c76-8934-09fef757cd6a)
Executing 'ReplayExample' (Reason='', Id=fb5e9945-3f9e-4fc6-bda3-9028fa6d8f04)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample (Orchestrator)' started. IsReplay: False. Input: (16 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 3.
************** RunOrchestrator method executing ********************
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' scheduled. Reason: ReplayExample. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 4.
Executed 'ReplayExample' (Succeeded, Id=fb5e9945-3f9e-4fc6-bda3-9028fa6d8f04)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample (Orchestrator)' awaited. IsReplay: False. State: Awaited. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 5.
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' started. IsReplay: False. Input: (36 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 6.
Executing 'ReplayExample_ActivityFunction' (Reason='', Id=e93e770c-a15b-47b5-b8e3-e0db081cf44b)
Saying hello to Tokyo.
Executed 'ReplayExample_ActivityFunction' (Succeeded, Id=e93e770c-a15b-47b5-b8e3-e0db081cf44b)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' completed. ContinuedAsNew: False. IsReplay: False. Output: (56 bytes). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 7.
Executing 'ReplayExample' (Reason='', Id=0542a8de-d9e5-46d0-a5a5-d94ea5040202)
************** RunOrchestrator method executing ********************
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' scheduled. Reason: ReplayExample. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 8.
Executed 'ReplayExample' (Succeeded, Id=0542a8de-d9e5-46d0-a5a5-d94ea5040202)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample (Orchestrator)' awaited. IsReplay: False. State: Awaited. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 9.
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' started. IsReplay: False. Input: (44 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 10.
Executing 'ReplayExample_ActivityFunction' (Reason='', Id=9b408473-3224-4968-8d7b-1b1ec56d9359)
Saying hello to Seattle.
Executed 'ReplayExample_ActivityFunction' (Succeeded, Id=9b408473-3224-4968-8d7b-1b1ec56d9359)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' completed. ContinuedAsNew: False. IsReplay: False. Output: (64 bytes). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 11.
Executing 'ReplayExample' (Reason='', Id=040e7026-724c-4b07-9e4f-46ed52278785)
************** RunOrchestrator method executing ********************
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' scheduled. Reason: ReplayExample. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 12.
Executed 'ReplayExample' (Succeeded, Id=040e7026-724c-4b07-9e4f-46ed52278785)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample (Orchestrator)' awaited. IsReplay: False. State: Awaited. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 13.
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' started. IsReplay: False. Input: (40 bytes). State: Started. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 14.
Executing 'ReplayExample_ActivityFunction' (Reason='', Id=03a0b729-d8e5-4e42-8b7a-28ea974bb1a6)
Saying hello to London.
Executed 'ReplayExample_ActivityFunction' (Succeeded, Id=03a0b729-d8e5-4e42-8b7a-28ea974bb1a6)
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample_ActivityFunction (Activity)' completed. ContinuedAsNew: False. IsReplay: False. Output: (60 bytes). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 15.
Executing 'ReplayExample' (Reason='', Id=02f307bc-c63c-4803-80f8-acfae72c3577)
************** RunOrchestrator method executing ********************
75dce58e3be440e98dcd3c99112c9bbf: Function 'ReplayExample (Orchestrator)' completed. ContinuedAsNew: False. IsReplay: False. Output: (null). State: Completed. HubName: DurableFunctionsHub. AppName: . SlotName: . ExtensionVersion: 1.8.2. SequenceNumber: 16.
Executed 'ReplayExample' (Succeeded, Id=02f307bc-c63c-4803-80f8-acfae72c3577)

Notice in the preceding log output, there are 4 instances of the log message ************** RunOrchestrator method executing ********************

The final instance of the message occurs before the Function 'ReplayExample (Orchestrator)' completed message, so even though we only have 3 activity calls, the orchestrator function method itself was executed 4 times: once when first triggered, and then replayed once after each of the 3 activity functions completed.
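
As an aside, if the duplicated log messages caused by replay are unwanted, the context exposes an IsReplaying property that can be checked before logging. A small sketch adapting the orchestrator above:

[FunctionName("ReplayExample")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context, ILogger log)
{
    // Only log during "live" execution, not when replaying history.
    if (!context.IsReplaying)
    {
        log.LogInformation("************** RunOrchestrator method executing ********************");
    }

    await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "Tokyo");
    await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "Seattle");
    await context.CallActivityAsync<string>("ReplayExample_ActivityFunction", "London");
}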
