# Background jobs guidance

## Overview

Many types of applications require background tasks that run independently of the user interface (UI). Examples include batch jobs, intensive processing tasks, and long running processes such as workflows. Background jobs can be executed without requiring user interaction; the application can start the job and then continue to process interactive requests from users. This can help to minimize the load on the application UI, which can improve availability and reduce interactive response times.

For example, if an application is required to generate thumbnails of images uploaded by users, it can do this as a background job and save the thumbnail to storage when complete without the user needing to wait for the process to complete. In the same way, a user placing an order can initiate a background workflow that processes the order, while the UI allows the user to continue browsing the website. When the background job is complete, it can update the stored orders data and send an email to the user confirming the order.

When considering whether to implement a task as a background job, the main criterion is whether the task can run without user interaction and without the UI needing to wait for the job to complete. Tasks that require the user or the UI to wait while they are completed may not be appropriate as background jobs.

## Types of background jobs

Background jobs typically have one or more of the following characteristics:

  • CPU intensive jobs such as mathematical calculations, structural model analysis, and more.
  • I/O intensive jobs such as executing a series of storage transactions or indexing files.
  • Batch jobs such as nightly data updates or scheduled processing.
  • Long running workflows such as order fulfillment or provisioning services and systems.
  • Sensitive data processing where the task is handed off to a more secure location for processing. For example, you may not want to process sensitive data within a web role, and instead use a pattern such as Gatekeeper to transfer the data to an isolated background role that has access to protected storage.

## Triggers

Background jobs can be initiated in several different ways. Effectively, all of them fall into one of the following categories:

  • Event-driven triggers. The task is started in response to an event, typically an action taken by a user or a step in a workflow.
  • Schedule-driven triggers. The task is invoked on a schedule based on a timer. This may be a recurring schedule, or a one-off invocation specified for a later time.

### Event-driven triggers

Event-driven invocation uses a trigger to start the background task. Examples of using event-driven triggers include:

  • The UI or another job places a message in a queue. The message contains data about an action that has taken place, such as the user placing an order. The background task listens on this queue and detects the arrival of a new message. It reads the message and uses the data in it as the input to the background job.
  • The UI or another job saves or updates a value in storage. The background task monitors the storage and detects changes. It reads the data and uses it as the input to the background job.
  • The UI or another job makes a request to an endpoint, such as an HTTPS URI, or an API exposed as a web service. It passes the data required to complete the background task as part of the request. The endpoint or web service invokes the background task, which uses the data as its input.

Typical examples of tasks suited to event-driven invocation include image processing, workflows, sending information to remote services, sending email messages, provisioning new users in multi-tenant applications, and more.
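
As a sketch of the first of these options (a message placed in a queue that the background task listens on), the UI can enqueue a message to an Azure storage queue and the background task can poll that queue. The queue name, connection string setting, and GenerateThumbnail method are illustrative assumptions, not part of the guidance above.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class ThumbnailTrigger
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("thumbnail-requests");
        queue.CreateIfNotExists();

        // UI side: enqueue a request describing the work to be done.
        queue.AddMessage(new CloudQueueMessage("images/user42/photo.jpg"));

        // Background task side: poll for messages and use each one as the job input.
        while (true)
        {
            CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
            if (message == null) { System.Threading.Thread.Sleep(1000); continue; }

            GenerateThumbnail(message.AsString);   // hypothetical processing step
            queue.DeleteMessage(message);          // remove only after successful processing
        }
    }

    static void GenerateThumbnail(string blobPath) { /* ... */ }
}
```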

### Schedule-driven triggers

Schedule-driven invocation uses a timer to start the background task. Examples of using schedule-driven triggers include:

  • A timer running locally within the application or as part of the application’s operating system invokes a background task on a regular basis.
  • A timer running in a different application, or a timer service such as Azure Scheduler, sends a request to an API or web service on a regular basis. The API or web service invokes the background task.
  • A separate process or application starts a timer that causes the background task to be invoked once after a specified time delay, or at a specific time.

Typical examples of tasks suited to schedule-driven invocation include batch processing routines such as updating related products lists for users based on their recent behavior, routine data processing tasks such as updating indexes or generating accumulated results, analyzing data for daily reports, data retention cleanup, data consistency checks, and more.
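
As a sketch of the first of these options (a timer running locally within the application), a job host can use System.Threading.Timer; the PurgeExpiredRecords method and the interval are illustrative.

```csharp
using System;
using System.Threading;

class ScheduledCleanup
{
    static void Main()
    {
        // Run the cleanup task every hour, starting one minute after the host starts.
        using (var timer = new Timer(_ => PurgeExpiredRecords(),
                                     null,
                                     TimeSpan.FromMinutes(1),
                                     TimeSpan.FromHours(1)))
        {
            Console.ReadLine();   // keep the host process alive
        }
    }

    static void PurgeExpiredRecords()
    {
        // Hypothetical data retention cleanup. In a real job, add error handling so that
        // one failed run does not prevent subsequent scheduled runs.
    }
}
```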

If you use a schedule-driven task that must run as a single instance, be aware of the following:

  • If the compute instance that is running the scheduler (such as a Virtual Machine using Windows Scheduled Tasks) is scaled out, you will have multiple instances of the scheduler running, and these could each start an instance of the task.
  • If tasks run for longer than the period between scheduler events, the scheduler may start another instance of the task while the previous one is still running.

## Returning results

Background jobs execute asynchronously in a separate process, or even a separate location, from the UI or the process that invoked the background task. Ideally, background tasks are “fire and forget” operations, and their execution progress has no impact on the UI or the calling process. This means that the calling process does not wait for completion of the tasks, and therefore cannot automatically detect when the task ends. If you require a background task to communicate with the calling task to indicate progress or completion, you must implement a mechanism for this. Some examples are:

  • Write a status indicator value to storage that is accessible to the UI or caller task, which can monitor or check this value when required. Other data that the background task must return to the caller can be placed into the same storage.
  • Establish a reply queue that the UI or caller listens on. The background task can send messages to the queue indicating status and completion. Data that the background task must return to the caller can be placed into the messages. If you are using Azure Service Bus, you can use the ReplyTo and CorrelationId properties to implement this capability (a sketch of this approach follows this list). For more information, see Correlation in Service Bus Brokered Messaging.
  • Expose an API or endpoint from the background task that the UI or caller can access to obtain status information. Data that the background task must return to the caller can be included in the response.
  • Have the background task call back to the UI or caller through an API to indicate status at predefined points or on completion. This might be through events raised locally, or through a publish and subscribe mechanism. Data that the background task must return to the caller can be included in the request or event payload.
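
The following sketch shows the reply-queue option using the Service Bus ReplyTo and CorrelationId properties mentioned above. It assumes the classic Microsoft.ServiceBus.Messaging client and two illustrative queues named "requests" and "replies".

```csharp
using Microsoft.ServiceBus.Messaging;

class ReplyQueueExample
{
    // Caller: send a request and record where the reply should be sent.
    static void SendRequest(string connectionString, string orderId)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "requests");
        var message = new BrokeredMessage("process-order")
        {
            CorrelationId = orderId,   // lets the caller match the reply to this request
            ReplyTo = "replies"        // queue on which the caller listens for status
        };
        client.Send(message);
    }

    // Background task: process the request, then reply to the queue named in ReplyTo.
    static void ProcessAndReply(string connectionString, BrokeredMessage request)
    {
        // ... perform the background work here ...

        var replyClient = QueueClient.CreateFromConnectionString(connectionString, request.ReplyTo);
        var reply = new BrokeredMessage("completed")
        {
            CorrelationId = request.CorrelationId   // echoed so the caller can correlate
        };
        replyClient.Send(reply);
    }
}
```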

## Hosting environment

You can host background tasks using a range of different Azure platform services:

  • Azure Web Sites. You can use WebJobs to execute custom jobs based on a range of different types of script or executable program within the context of the website.
  • Azure Cloud Services web and worker roles. You can write code within a role that executes as a background task.
  • Azure Virtual Machines. If you have a Windows service or you want to use the Windows Task Scheduler, it is common to host your background tasks within a dedicated virtual machine.

The following sections describe each of these options in more detail, and include considerations to help you choose the appropriate option.

### Azure Web Sites and WebJobs

You can use Azure WebJobs to execute custom jobs as background tasks within an Azure Web Sites hosted application. WebJobs can run scripts or executable programs within the context of your website as a continuous process, or in response to a trigger event from Azure Scheduler or external factors such as changes to storage blobs and message queues. Jobs can be started and stopped on demand, and shut down gracefully. If a continuously running WebJob fails, it is automatically restarted. Retry and error actions are configurable.

When configuring a WebJob:

  • If you want the job to respond to an event-driven trigger, it should be configured as Run continuously. The script or program is stored in the folder named site/wwwroot/app_data/jobs/continuous.
  • If you want the job to respond to a schedule-driven trigger, it should be configured as Run on a schedule. The script or program is stored in the folder named site/wwwroot/app_data/jobs/triggered.
  • If you choose the Run on demand option when you configure a job, it will execute the same code as the Run on a schedule option when you start it.

Azure WebJobs run within the sandbox of the website, which means they can access environment variables, and share information such as connection strings with the website. The job has access to the unique identifier of the machine running the job. The connection string named AzureJobsStorage provides access to Azure storage queues, blobs, and tables for application data, and Service Bus for messaging and communication. The connection string named AzureJobsDashboard provides access to the job action log files.
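
For example, a continuously running WebJob built with the WebJobs SDK uses these connection strings through the JobHost class. The following is a minimal sketch; the "orders" queue name and the ProcessOrder function are illustrative.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Program
{
    // Invoked by the SDK whenever a new message appears on the "orders" queue.
    public static void ProcessOrder([QueueTrigger("orders")] string message, TextWriter log)
    {
        log.WriteLine("Processing order: " + message);
        // ... background processing here ...
    }

    static void Main()
    {
        // JobHost reads the storage and dashboard connection strings from configuration
        // (AzureJobsStorage/AzureJobsDashboard as described above; later SDK releases use
        // AzureWebJobsStorage/AzureWebJobsDashboard).
        var host = new JobHost();
        host.RunAndBlock();   // keeps the continuous WebJob alive and dispatches triggers
    }
}
```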

Azure WebJobs have the following characteristics:

  • Security: WebJobs are protected by the deployment credentials of the website.
  • Supported file types: WebJobs can be defined using command scripts (.cmd), batch files (.bat), PowerShell scripts (.ps1), bash shell scripts (.sh), PHP scripts (.php), Python scripts (.py), JavaScript code (.js), and executable programs (.exe, .jar, and more).
  • Deployment: Scripts and executables can be deployed by using the Azure portal, by using the WebJobsVs add-in for Visual Studio or the tooling in Visual Studio 2013 Update 4, by using the Azure WebJobs SDK, or by copying them directly to the following locations:
    • for triggered execution: site/wwwroot/app_data/jobs/triggered/{job name}
    • for continuous execution: site/wwwroot/app_data/jobs/continuous/{job name}
  • Logging: Console.Out is treated (marked) as INFO and Console.Error as ERROR. Monitoring and diagnostics information can be accessed using the Azure portal, and log files can be downloaded directly from the site. They are saved in the following locations:
    • for triggered execution: Vfs/data/jobs/triggered/{job name}
    • for continuous execution: Vfs/data/jobs/continuous/{job name}
  • Configuration: WebJobs can be configured using the portal, the REST API, and PowerShell. A configuration file named settings.job in the same root directory as the job script can be used to provide configuration information for a job. For example:
    • { "stopping_wait_time": 60 }
    • { "is_singleton": true }

#### Considerations

  • By default, WebJobs scale with the website. However, jobs can be configured to run on a single instance by setting the is_singleton configuration property to true. Single instance WebJobs are useful for tasks that you do not want to scale or run as multiple simultaneous instances, such as re-indexing, data analysis, and similar tasks.
  • To minimize the impact of jobs on the performance of the website, consider creating an empty Azure Web Sites instance to host WebJobs that may be long running or resource intensive.

#### More information

### Azure Cloud Services web and worker roles

Background tasks can be executed within a web role or in a separate worker role. The decision whether to use a worker role should be based on consideration of scalability and elasticity requirements, task lifetime, release cadence, security, fault tolerance, contention, complexity, and the logical architecture. For more information, see Compute Resource Consolidation Pattern.

There are several ways to implement background tasks within a Cloud Services role:

  • Create an implementation of the RoleEntryPoint class in the role and use its methods to execute background tasks. The tasks run in the context of WaIISHost.exe, and can use the GetSetting method of the CloudConfigurationManager class to load configuration settings. For more information, see Lifecycle (Cloud Services).
  • Use startup tasks to execute background tasks when the application starts. To force the tasks to continue to run in the background, set the taskType property to background (if you do not do this, the application startup process will halt and wait for the task to finish). For more information, see Run Startup Tasks in Azure.
  • Use the WebJobs SDK to implement background tasks as WebJobs that are initiated as a startup task. For more information, see Get Started with the Azure WebJobs SDK.
  • Use a startup task to install a Windows service that executes one or more background tasks. You must set the taskType property to background so that the service executes in the background. For more information, see Run Startup Tasks in Azure.
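
For reference, a startup task is declared in the ServiceDefinition.csdef file for the role; the following is a minimal sketch (the role name and script name are illustrative), with taskType="background" preventing role startup from blocking on the task.

```xml
<WorkerRole name="BackgroundWorker">
  <Startup>
    <!-- taskType="background" lets role startup continue while the task runs -->
    <Task commandLine="install-background-service.cmd"
          executionContext="elevated"
          taskType="background" />
  </Startup>
</WorkerRole>
```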

#### Running background tasks in the web role

The main advantage of running background tasks in the web role is the saving in hosting costs because there is no requirement to deploy additional roles.

#### Running background tasks in a worker role

Running background tasks in a worker role has several advantages:

  • It allows you to manage scaling separately for each type of role. For example, you may need more instances of a web role to support the current load, but fewer instances of the worker role that executes background tasks. Scaling background task compute instances separately from the UI roles can reduce hosting cost, while maintaining acceptable performance.
  • It offloads the processing overhead for background tasks from the web role. The web role that provides the UI can remain responsive, and it may mean fewer instances are required to support a given volume of requests from users.
  • It allows you to implement separation of concerns. Each role type can implement a specific set of clearly defined and related tasks. This makes designing and maintaining the code easier because there is less interdependence of code and functionality between each role.
  • It can help to isolate sensitive processes and data. For example, web roles that implement the UI do not need to have access to data that is managed and controlled by a worker role. This can be useful in strengthening security, especially when using a pattern such as the Gatekeeper Pattern.

#### Considerations

Consider the following points when choosing how and where to deploy background tasks when using Cloud Services web and worker roles:

  • Hosting background tasks in an existing web role can save the cost of running a separate worker role just for these tasks, but it is likely to affect the performance and availability of the application if there is contention for processing and other resources. Using a separate worker role protects the web role from the impact of long running or resource intensive background tasks.
  • If you host background tasks using the RoleEntryPoint class, you can easily move this to another role. For example, if you create the class in a web role and later decide you need to run the tasks in a worker role, you can move the RoleEntryPoint class implementation into the worker role.
  • Startup tasks are designed to execute a program or a script. Deploying a background job as an executable program may be more difficult, especially if it also requires deployment of dependent assemblies. It may be easier to deploy and use a script to define a background job when using startup tasks.
  • Exceptions that cause a background task to fail have a different impact depending on the way that they are hosted:
    • If you use the RoleEntryPoint class approach, a failed task will cause the role to restart so that the task automatically restarts. This can affect availability of the application. To prevent this, ensure that you include robust exception handling within the RoleEntryPoint class and all the background tasks. Use code to restart tasks that fail where this is appropriate, and throw the exception to restart the role only if you cannot gracefully recover from the failure within your code.
    • If you use startup tasks, you are responsible for managing the task execution and checking if it fails.
  • Managing and monitoring startup tasks is more difficult than using the RoleEntryPoint class approach. However, the Azure WebJobs SDK includes a dashboard to make it easier to manage WebJobs that you initiate through startup tasks.

#### More information

### Azure Virtual Machines

Background tasks may be implemented in a way that prevents them from being deployed to Azure Web Sites or Cloud Services, or this may not be convenient. Typical examples are Windows services, and third party utilities and executable programs. Other examples include programs written for an execution environment different from the one that hosts the application; for example, a Unix or Linux program that you want to execute from a Windows or .NET application. You can choose from a range of operating systems for an Azure virtual machine, and run your service or executable on that virtual machine.

To help you choose when to use Virtual Machines, see Azure Websites, Cloud Services and Virtual Machines comparison. For information about the options for Virtual Machines, see Virtual Machine and Cloud Service Sizes for Azure. For more information about the operating systems and pre-built images available for Virtual Machines, see Azure Virtual Machines Gallery.

To initiate the background task in a separate virtual machine, you have a range of options:

  • You can execute the task on demand directly from your application by sending a request to an endpoint that the task exposes, passing in any data that the task requires. This endpoint invokes the task.
  • You can configure the task to run on a schedule using a scheduler or timer available in your chosen operating system. For example, on Windows you can use Windows Task Scheduler to execute scripts and tasks or, if you have SQL Server installed on the virtual machine, you can use the SQL Server Agent to execute scripts and tasks.
  • You can use Azure Scheduler to initiate the task by adding a message to a queue that the task listens on, or by sending a request to an API that the task exposes.

See the earlier section Triggers for more information about how you can initiate background tasks.
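
As a sketch of the first of these options (executing the task on demand through an endpoint that it exposes), a task hosted on the virtual machine could listen for HTTP requests; the URI prefix, port, and DoWork method are illustrative, and a production endpoint would add authentication, input validation, and error handling.

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

class OnDemandTaskHost
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/runtask/");   // illustrative endpoint
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();   // blocks until a request arrives
            string input = context.Request.QueryString["input"];   // data passed by the caller

            Task.Run(() => DoWork(input));   // run the work without blocking the listener

            context.Response.StatusCode = 202;   // Accepted: work started, not yet complete
            context.Response.Close();
        }
    }

    static void DoWork(string input) { /* ... */ }
}
```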

#### Considerations

Consider the following points when deciding whether to deploy background tasks in an Azure virtual machine:

  • Hosting background tasks in a separate Azure virtual machine provides flexibility and allows precise control over initiation, execution, scheduling, and resource allocation. However, it will increase runtime cost if a virtual machine must be deployed just to run background tasks.
  • There is no facility to monitor the tasks in the Azure portal, and no automated restart capability for failed tasks, although you can monitor the basic status of the virtual machine and manage it using the Azure Service Management Cmdlets. However, there are no facilities to control processes and threads in compute nodes. Typically, using a virtual machine will require additional effort to implement a mechanism that collects data from instrumentation in the task, and from the operating system in the virtual machine. One solution that may be appropriate is to use the System Center Management Pack for Windows Azure.
  • You might consider creating monitoring probes that are exposed through HTTP endpoints. The code for these probes could perform health checks, collect operational information and statistics, or collate error information, and return it to a management application. For more information, see Health Endpoint Monitoring Pattern.

#### More information

## Design considerations

There are several fundamental factors to consider when designing background tasks. The following sections discuss partitioning, conflicts, and coordination.

### Partitioning

If you decide to include background tasks within an existing compute instance (such as a website, web role, existing worker role, or virtual machine), you must consider how this will affect the quality attributes of the compute instance and the background task itself. These factors will help you to decide whether to co-locate the tasks with the existing compute instance, or move them into a separate compute instance:

  • Availability: Background tasks may not need to have the same level of availability as other parts of the application, in particular the UI and other parts directly involved in user interaction. Background tasks may be more tolerant of latency, retried connection failures, and other factors that affect availability because the operations can be queued. However, there must be sufficient capacity to prevent backing up of requests that could block queues and affect the application as a whole.
  • Scalability: Background tasks are likely to have a different scalability requirement from the UI and the interactive parts of the application. Scaling the UI may be necessary to meet peaks in demand, while outstanding background tasks could be completed during less busy times by fewer compute instances.
  • Resiliency: Failure of a compute instance that just hosts background tasks may not fatally affect the application as a whole if the requests for these tasks can be queued or postponed until the task is available again. If the compute instance and/or tasks can be restarted within an appropriate interval, users of the application may not be affected.
  • Security: Background tasks may have different security requirements or restrictions than the UI or other parts of the application. By using a separate compute instance, you can specify a different security environment for the tasks. You can also use patterns such as Gatekeeper to isolate the background compute instances from the UI in order to maximize security and separation.
  • Performance: You can choose the type of compute instance for background tasks to specifically match the performance requirements of the tasks. This may mean using a less expensive compute option if the tasks do not require the same processing capabilities as the UI, or a larger instance if they require additional capacity and resources.
  • Manageability: Background tasks may have a different development and deployment rhythm from the main application code or the UI. Deploying them to a separate compute instance can simplify updates and versioning.
  • Cost: Adding compute instances to execute background tasks increases hosting costs. You should carefully consider the trade-off between additional capacity and these extra costs.

For more information, see Leader Election pattern and Competing Consumers pattern.

### Conflicts

If you have multiple instances of a background job, it is possible that they will compete for access to resources and services such as databases and storage. This concurrent access can result in resource contention, which may cause conflicts in availability of the services and in the integrity of data in storage. Resource contention can be resolved by using a pessimistic locking approach to prevent competing instances of a task from concurrently accessing a service, or corrupting data.
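
One possible sketch of such a pessimistic lock uses an Azure blob lease: an instance acquires a lease before doing the work and skips the run if another instance already holds it. The container name, blob name, and RunExclusively method are illustrative, and this is only one of several locking options (see also the Leader Election pattern referenced earlier).

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class LeaseLockExample
{
    static void RunExclusively(string connectionString, Action work)
    {
        var container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("locks");
        container.CreateIfNotExists();

        var blob = container.GetBlockBlobReference("reindex-lock");
        if (!blob.Exists())
        {
            blob.UploadText(string.Empty);   // the blob exists only to carry the lease
        }

        string leaseId;
        try
        {
            // Short fixed lease; long-running work would need to renew the lease periodically.
            leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), null);
        }
        catch (StorageException)
        {
            return;   // another instance holds the lease; skip this run
        }

        try
        {
            work();
        }
        finally
        {
            blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
        }
    }
}
```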

Another approach to resolve conflicts is to define background tasks as a singleton, so that there is only ever one instance running. However, this eliminates the reliability and performance benefits that a multiple-instance configuration could provide, especially if the UI can supply sufficient work to keep more than one background task busy. It is vital to ensure that the background task can automatically restart, and that it has sufficient capacity to cope with peaks in demand. This may be achieved by allocating a compute instance with sufficient resources, by implementing a queueing mechanism that can store requests for later execution when demand decreases, or by a combination of these techniques.

### Coordination

The background tasks may be complex, and require multiple individual tasks to execute to produce a result or to fulfill all the requirements. It is common in these scenarios to divide the task into smaller discrete steps or subtasks that can be executed by multiple consumers. Multi-step jobs can be more efficient and more flexible because individual steps may be reusable in multiple jobs. It is also easy to add, remove, or modify the order of the steps.

Coordinating multiple tasks and steps can be challenging, but there are three common patterns you can use to guide your implementation of a solution:

  • Decomposing a task into multiple reusable steps. An application may be required to perform a variety of tasks of varying complexity on the information that it processes. A straightforward but inflexible approach to implementing this application could be to perform this processing as a monolithic module. However, this approach is likely to reduce the opportunities for refactoring the code, optimizing it, or reusing it if parts of the same processing are required elsewhere within the application (a sketch of this decomposition follows this list). For more information, see Pipes and Filters Pattern.
  • Managing execution of the steps for a task. An application may perform tasks that comprise a number of steps, some of which may invoke remote services or access remote resources. The individual steps may be independent of each other, but they are orchestrated by the application logic that implements the task. For more information, see Scheduler Agent Supervisor Pattern.
  • Managing recovery for steps of a task that fail. An application may need to undo the work performed by a series of steps, which together define an eventually consistent operation, if one or more of the steps fail. For more information, see Compensating Transaction Pattern.
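
As a minimal sketch of the first point above (decomposing a task into reusable steps), the processing can be expressed as small functions composed into a pipeline. The ImageJob type and the step names are illustrative.

```csharp
using System;
using System.Collections.Generic;

class ImageJob
{
    public string SourcePath;
    public byte[] Data;
}

static class Pipeline
{
    // Individual, reusable steps; each one is small and independently testable.
    static ImageJob Load(ImageJob job)      { /* read job.SourcePath into job.Data */ return job; }
    static ImageJob Resize(ImageJob job)    { /* resize job.Data */ return job; }
    static ImageJob Watermark(ImageJob job) { /* stamp job.Data */ return job; }
    static ImageJob Save(ImageJob job)      { /* write job.Data to storage */ return job; }

    static ImageJob Run(ImageJob job, IEnumerable<Func<ImageJob, ImageJob>> steps)
    {
        foreach (var step in steps)
        {
            job = step(job);   // each step's output is the next step's input
        }
        return job;
    }

    static void Main()
    {
        // The same steps can be recombined for other jobs (for example, omit Watermark).
        var thumbnailPipeline = new Func<ImageJob, ImageJob>[] { Load, Resize, Watermark, Save };
        Run(new ImageJob { SourcePath = "images/photo.jpg" }, thumbnailPipeline);
    }
}
```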

## Lifecycle (Cloud Services)

If you decide to implement background jobs for Cloud Services applications that use web and worker roles by using the RoleEntryPoint class, it is important to understand the lifecycle of this class in order to use it correctly.

Web and worker roles go through a set of distinct phases as they start, run, and stop. The RoleEntryPoint class exposes a series of methods that indicate when these stages are occurring. You use these to initialize, run, and stop your custom background tasks. The complete cycle is:

  • Azure loads the role assembly and searches it for a class that derives from RoleEntryPoint.
  • If it finds this class, it calls RoleEntryPoint.OnStart(). You override this method to initialize your background tasks.
  • After the OnStart method completes, Azure calls Application_Start() in the application’s Global file if this is present (for example, Global.asax in a web role running ASP.NET).
  • Azure calls RoleEntryPoint.Run() on a new foreground thread that executes in parallel with OnStart(). You override this method to start your background tasks.
  • When the Run method ends, Azure first calls Application_End() in the application’s Global file if this is present, and then calls RoleEntryPoint.OnStop(). You override the OnStop method to stop your background tasks, clean up resources, dispose of objects, and close connections that the tasks may have used.
  • The Azure worker role host process is stopped. At this point, the role will be recycled and will restart.

For more details and an example of using the methods of the RoleEntryPoint class, see Compute Resource Consolidation Pattern.
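
The following sketch shows the shape of such a RoleEntryPoint implementation: background tasks are started from Run, monitored, and stopped in OnStop. The DoBackgroundWork method and the timings are illustrative; the considerations below expand on this structure.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
    private Task backgroundTask;

    public override bool OnStart()
    {
        // Initialize connections and configuration before traffic is routed to the instance.
        return base.OnStart();
    }

    public override void Run()
    {
        var token = cancellationTokenSource.Token;
        backgroundTask = Task.Run(() => DoBackgroundWork(token), token);

        // Do not return from Run unless you intend the role instance to be recycled.
        while (!token.IsCancellationRequested)
        {
            if (backgroundTask.IsFaulted)
            {
                // Restart a failed task; a real role would also log the failure and back off.
                backgroundTask = Task.Run(() => DoBackgroundWork(token), token);
            }
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    public override void OnStop()
    {
        // Signal the tasks to stop and give them a bounded time to finish cleanly.
        cancellationTokenSource.Cancel();
        if (backgroundTask != null)
        {
            try { backgroundTask.Wait(TimeSpan.FromMinutes(1)); }
            catch (AggregateException) { /* task was cancelled or faulted */ }
        }
        base.OnStop();
    }

    private static void DoBackgroundWork(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // ... process queue messages or other background work here ...
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}
```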

### Considerations

Consider the following points when planning how you will run background tasks in a web or worker role:

  • The default Run method implementation in the RoleEntryPoint class contains a call to Thread.Sleep(Timeout.Infinite) that keeps the role alive indefinitely. If you override the Run method (which is typically necessary to execute background tasks) you must not allow your code to exit from the method unless you want to recycle the role instance.

  • A typical implementation of the Run method includes code to start each of the background tasks, and a loop construct that periodically checks the state of all the background tasks. It can restart any that fail, or monitor for cancellation tokens that indicate jobs have completed.

  • If a background task throws an unhandled exception, that task should be recycled while allowing any other background tasks in the role to continue running. However, if the exception is caused by corruption of objects outside the task, such as shared storage, the exception should be handled by your RoleEntryPoint class, all tasks should be cancelled, and the Run method allowed to end. Azure will then restart the role.

  • Use the OnStop method to pause or kill background tasks and clean up resources. This may involve stopping long-running or multi-step tasks, and it is vital to consider how this can be done to avoid data inconsistencies. If a role instance stops for any reason other than a user-initiated shutdown, the code running in the OnStop method must complete within five minutes before it is forcibly terminated. Ensure that your code can complete in that time, or can tolerate not running to completion.

  • The Azure load balancer starts directing traffic to the role instance when the RoleEntryPoint.OnStart method returns true. Therefore, consider putting all your initialization code in the OnStart method so that role instances that do not successfully initialize will not receive any traffic.

  • You can use startup tasks in addition to the methods of the RoleEntryPoint class. You should use startup tasks to initialize any settings you need to change in the Azure load balancer because these tasks will execute before the role receives any requests. For more information, see Run Startup Tasks in Azure.

  • If there is an error in a startup task, it may force the role to continually restart. This can prevent you from performing a VIP swap back to a previously staged version because the swap requires exclusive access to the role, and this cannot be obtained while the role is restarting. To resolve this:

    • Add the following code to the beginning of the OnStart and Run methods in your role:

      var freeze = CloudConfigurationManager.GetSetting("Freeze");
      if (freeze != null)
      {
          if (Boolean.Parse(freeze))
          {
              Thread.Sleep(System.Threading.Timeout.Infinite);
          }
      }
    • Add the definition of the Freeze setting as a Boolean value to the ServiceDefinition.csdef and ServiceConfiguration.*.cscfg files for the role and set it to false. If the role goes into a repeated restart mode, you can change the setting to true to freeze role execution and allow it to be swapped with a previous version.
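
A sketch of how the Freeze setting might be declared in the two files (placement within your existing role elements will vary):

```xml
<!-- ServiceDefinition.csdef: inside the <WebRole> or <WorkerRole> element -->
<ConfigurationSettings>
  <Setting name="Freeze" />
</ConfigurationSettings>

<!-- ServiceConfiguration.*.cscfg: inside the corresponding <Role> element -->
<ConfigurationSettings>
  <Setting name="Freeze" value="false" />
</ConfigurationSettings>
```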

## Resiliency considerations

Background tasks must be resilient in order to provide reliable services to the application. When planning and designing background tasks, consider the following points:

  • Background tasks must be able to gracefully handle role or service restarts without corrupting data or introducing inconsistency into the application. For long-running or multi-step tasks, consider using checkpointing: save the state of jobs in persistent storage, or as messages in a queue if this is appropriate. For example, you can persist state information in a message in a queue and incrementally update this state information with the task progress so that the task can be processed from the last known good checkpoint instead of restarting from the beginning. When using Azure Service Bus queues, you can use message sessions to enable the same scenario. Sessions allow you to save and retrieve the application processing state by using the SetState and GetState methods. For more information about designing reliable multi-step processes and workflows, see Scheduler Agent Supervisor Pattern.
  • When using web or worker roles to host multiple background tasks, design your override of the Run method to monitor for failed or stalled tasks, and restart them. Where this is not practical, and you are using a worker role, force the worker role to restart by exiting from the Run method.
  • When using queues to communicate with background tasks, the queues can act as a buffer to store requests sent to the tasks while the application is under higher than usual load. This allows the tasks to catch up with the UI during less busy periods. It also means that recycling the role will not block the UI. For more information, see Queue-Based Load Leveling Pattern. If some tasks are more important than others, consider implementing the Priority Queue Pattern to ensure that these tasks run before less important ones.
  • Background tasks that are initiated by messages, or that process messages, must be designed to handle inconsistencies such as messages arriving out of order, messages that repeatedly cause an error (often referred to as poison messages), and messages that are delivered more than once. Consider the following:
    • Messages that must be processed in a specific order, such as those that change data based on its existing value (for example, adding a value to an existing value), may not arrive in the original order they were sent. Alternatively, they may be handled by different instances of a background task in a different order due to varying loads on each instance. Messages that must be processed in a specific order should include a sequence number, key, or some other indicator that background tasks can use to ensure they are processed in the correct order. If you are using Azure Service Bus, you can use message sessions to guarantee the order of delivery. However, it is usually more efficient where possible to design the process so that the message order is not important.
    • Typically, a background task will peek messages in the queue, which temporarily hides them from other message consumers, and then delete the messages after they have been successfully processed. If a background task fails when processing a message, that message will reappear on the queue after the peek timeout expires, and will be processed by another instance of the task or during the next processing cycle of this instance. If the message consistently causes an error in the consumer, it will block the task, the queue, and eventually the application itself when the queue becomes full. Therefore, it is vital to detect and remove poison messages from the queue. If you are using Azure Service Bus, messages that cause an error can be moved automatically or manually to an associated dead letter queue.
    • Queues provide at-least-once delivery, which means they may deliver the same message more than once. In addition, if a background task fails after processing a message but before deleting it from the queue, the message will become available for processing again. Background tasks should be idempotent, which means that processing the same message more than once does not cause an error or inconsistency in the application’s data. Some operations are naturally idempotent, such as setting a stored value to a specific new value. However, operations such as adding a value to an existing stored value without checking that the stored value is still the same as when the message was originally sent will cause inconsistencies. Azure Service Bus queues can be configured to automatically remove duplicate messages.
    • Some messaging systems, such as Azure storage queues and Azure Service Bus queues, support a de-queue count property that indicates the number of times a message has been read from the queue. This can be useful in handling repeated and poison messages. For more information, see Asynchronous Messaging Primer and Idempotency Patterns.
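
As a sketch of these points (poison-message handling and idempotent processing with an Azure storage queue), a consumer can inspect the DequeueCount property and move repeatedly failing messages to a separate queue. The queue names, the threshold, and the ProcessMessage method are illustrative.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class PoisonMessageHandling
{
    const int MaxAttempts = 5;

    static void ProcessNextMessage(string connectionString)
    {
        var client = CloudStorageAccount.Parse(connectionString).CreateCloudQueueClient();
        var queue = client.GetQueueReference("work-items");
        var poisonQueue = client.GetQueueReference("work-items-poison");
        queue.CreateIfNotExists();
        poisonQueue.CreateIfNotExists();

        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
        if (message == null) return;

        if (message.DequeueCount > MaxAttempts)
        {
            // The message has repeatedly failed; move it aside so it does not block the queue.
            poisonQueue.AddMessage(new CloudQueueMessage(message.AsString));
            queue.DeleteMessage(message);
            return;
        }

        ProcessMessage(message.AsString);   // should be idempotent: it may run more than once
        queue.DeleteMessage(message);       // delete only after successful processing
    }

    static void ProcessMessage(string body) { /* ... */ }
}
```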

## Scaling and performance considerations

Background tasks must offer sufficient performance to ensure they do not block the application, or cause inconsistencies due to delayed operation when the system is under load. Typically, performance is improved by scaling the compute instances that host the background tasks. When planning and designing background tasks, consider the following points around scalability and performance:

  • Azure supports autoscaling (both scaling out and scaling back in) based on current demand and load, or on a predefined schedule, for Web Sites, Cloud Services web and worker roles, and deployments hosted in Virtual Machines. Use this feature to ensure that the application as a whole has sufficient performance capabilities while minimizing runtime costs.
  • Where background tasks have a different performance capability from the other parts of a Cloud Services application (for example, the UI or components such as the data access layer), hosting the background tasks together in a separate worker role allows the UI and background task roles to scale independently to manage the load. If multiple background tasks have significantly different performance capabilities from each other, consider dividing them into separate worker roles and scaling each role type independently, but note that this may increase runtime costs compared to combining all the tasks into fewer roles.
  • Simply scaling the roles may not be sufficient to prevent loss of performance under load. You may also need to scale storage queues and other resources to prevent a single point of the overall processing chain becoming a bottleneck. Also, consider other limitations, such as the maximum throughput of storage and other services the application and the background tasks rely on.
  • Background tasks must be designed for scaling. For example, they must be able to dynamically detect the number of storage queues in use in order to listen on or send messages to the appropriate queue.
  • By default, WebJobs scale with their associated Azure Web Sites instance. However, if you want a WebJob to run as only a single instance, you can create a settings.job file containing the JSON data { "is_singleton": true }. This forces Azure to run only one instance of the WebJob, even if there are multiple instances of the associated website, which can be a useful technique for scheduled jobs that must run as only a single instance.

## Related patterns

## More information