Salesforce Architects | Record-Triggered Automation (2023)

Content last updated Nov 2022. Roadmap reflects October 2022 (Winter '23) forecasts.
Our forward-looking statement applies to any roadmap projections.

Guide overview

This guide provides tooling recommendations for various triggered automation use cases, along with the rationale behind those recommendations. It also explains how Flow automatically handles bulkification and recursion control on your behalf, and offers some performance and automation design guidance.

Here are the main takeaways:

  • Takeaway #1: Flow and Apex are the preferred no-code and pro-code solutions for triggered automation on the platform.
  • Takeaway #2: Stop implementing same-record field updates in Workflow Rules and Process Builder. Instead, start implementing same-record field updates in before-save flow triggers.
  • Takeaway #3: Where possible, start implementing your use cases in after-save flow triggers rather than in Process Builder and Workflow Rules (except for same-record field updates, in which case see Takeaway #2).
  • Takeaway #4: Use Apex when you need high-performance batch processing or sophisticated implementation logic. (See Well-Architected – Transaction Processing for more information.)
  • Takeaway #5: You don't need to package all of your record-triggered automation into a single "megaflow" per object, but it's worth thinking about how to organize and manage your automation over the long term. (See Well-Architected – Composable for more information.)

This document focuses on record-triggered automation. For a similar review of Salesforce's form-building tools, see the Architect's Guide to Designing Forms in Salesforce.

Low-Code --------------------------------------------> Pro-Code

| | Before-Save Flow Trigger | After-Save Flow Trigger | After-Save Flow Trigger + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Same-Record Field Updates | Available | Not Ideal | Not Ideal | Available |
| High-Performance Batch Processing | Not Ideal | Not Ideal | Not Ideal | Available |
| Cross-Object CRUD | Not Available | Available | Available | Available |
| Asynchronous Processing | Not Available | Available | Available | Available |
| Complex List Processing | Not Available | Not Ideal | Available | Available |
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |
  • Available = should work fine, with basic considerations.
  • Not Ideal = possible, but with important and potentially limiting considerations.
  • Not Available = not supported, with no plans to support in the next twelve months.

The table above shows the most common trigger use cases and the tools we've found work well for each.

In the case where multiple tools are available for a use case, we recommend choosing the tool that allows you to implement and manage the use case at the lowest cost. This depends a lot on your team composition.

For example, if your team includes Apex developers and you already have a well-established CI/CD pipeline and a well-managed framework for handling Apex triggers, it will likely be cheaper to continue down that path. In this case, the cost of changing your organization's operating model to adopt flow development would be significant. On the other hand, if your team doesn't have consistent access to developer resources or a strong, institutionalized culture of code quality, you're likely better served by triggered flows that more people can maintain than by code that only a few people can maintain.

For a team with mixed skills or a heavy admin presence, flow triggers provide a compelling option that is more powerful and easier to debug, maintain, and extend than any no-code offering of the past. If you have limited developer resources, you can use flow triggers to delegate business process implementation and focus those developer resources on the projects and tasks that make the best use of their skills.

Goodbye, Process Builder and Workflow Rules

While the road to retirement for Process Builder and Workflow Rules may be a long one, we encourage you to begin implementing all of your go-forward low-code automation in Flow. Flow is best architected to meet the growing functionality and extensibility needs of today's Salesforce customers.

  • The vast majority of workflow rules are used to perform same-record field updates. While workflow rules have a reputation for being fast, they still cause a recursive save and will always be significantly slower and more resource-intensive than a functionally equivalent single before-save flow trigger.
    • Also, workflow rules run on an entirely different workflow system, with different metadata and a different runtime. Any improvements Salesforce makes to Flow (not just performance improvements, but debugging, manageability, and CI/CD improvements) will never benefit workflow rules, and vice versa.
  • Process Builder is both less powerful and harder to debug than Flow. Furthermore, it has a hard-to-read list view.
    • Process Builder runs on the flow runtime, but there is a significant gap between Process Builder's human-centric design-time model and the flow runtime's metadata model. Because of this abstraction, the underlying metadata of a process resolves to a mangled, less powerful, and often incomprehensible flow definition. And a mangled flow definition is much harder to debug than an unmangled one.
  • In addition to the shortcomings above, both Process Builder and Workflow Rules rely on a highly inefficient initialization phase that adds processing time to every save-order execution they participate in. In practice, we've found that most Process Builder processes and workflow rules resolve to no-ops at runtime (that is, their criteria are not met, so no operation is performed), and thus gain nothing from that initialization. Unfortunately, this part of the implementation lives at a foundational layer of the code, where changing it is extremely risky.
    • This costly initialization phase has been eliminated in the new flow trigger architecture.

For these reasons, Salesforce will continue to focus its investments on Flow. We recommend building in Flow wherever possible, and using Process Builder or Workflow Rules only when necessary.

At this point, Flow has closed all the major functional gaps we've identified between it and Workflow Rules and Process Builder. We continue to invest in closing small remaining gaps, including improved formulas and entry conditions, as well as usability improvements to simplify areas where Flow is more complex.

A note on nomenclature

Flow introduced a new concept to low-code automation by splitting its record triggers into before-save and after-save positions within the trigger order of execution. This matches the equivalent functionality available in Apex and allows significantly better performance for same-record field updates. However, it adds complexity to the Flow user experience, and users unfamiliar with triggers found the terminology confusing. In this guide we will keep referring to the two options as "before save" and "after save," but note that in Flow Builder they have been renamed "Fast Field Updates" and "Actions and Related Records."

Use case considerations

Field updates for the same record

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Same-Record Field Updates | Available | Not Ideal | Not Ideal | Available |

Of all the recommendations in this guide, this is the one we urge most strongly: take steps to minimize the number of same-record field updates that occur after the save. Put more simply, stop implementing same-record field update actions in workflow rules or Process Builder processes! And don't start implementing same-record field updates in after-save flow triggers either! Instead, start implementing same-record field update actions in before-save flow triggers or before-save Apex triggers. There are two main reasons for this:

  1. The record's field values are already loaded into memory and do not need to be reloaded.
  2. The update is accomplished by changing the record's values in memory and relying on the original underlying DML operation to save the change to the database. This avoids both an expensive DML operation and the entire recursive save that would come with it (see the Apex sketch below).
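
To make the mechanics concrete, here is a minimal sketch of the pro-code equivalent: a before-save Apex trigger that performs a same-record field update entirely in memory. The object, fields, and defaulting rule are illustrative, not taken from the guide.

```apex
// Hypothetical example: default a case's Priority from its Origin as it is saved.
// Because this runs before save, assigning the field in memory is enough:
// no explicit DML statement, and no recursive save is triggered.
trigger CaseBeforeSave on Case (before insert, before update) {
    for (Case c : Trigger.new) {
        if (c.Priority == null && c.Origin == 'Phone') {
            c.Priority = 'High';
        }
    }
}
```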

Well, that's the theory anyway; what happens in practice?

Our tests (see Performance Discussion: Field updates for the same record) offer an empirical taste. In our experiments, bulk same-record updates ran between 10 and 20 times faster when implemented with before-save flow triggers than when implemented with workflow rules or Process Builder. So while before-save flow triggers still theoretically trail Apex triggers on raw speed, we don't believe performance should be considered a blocker to adopting before-save flow triggers, except perhaps in the most extreme scenarios.

The main limitation of before-save flow triggers is that they are functionally sparse: you can query records, loop, evaluate formulas, assign variables, make decisions (think Switch statements), and make updates only to the triggering record. You cannot extend a before-save flow trigger with invocable Apex actions or subflows. Meanwhile, you can do just about whatever you want in a before-save Apex trigger (except explicit DML on the triggering records). We intentionally limited before-save flow triggers to the set of operations that preserves the performance gains described above.

We know that same-record field updates account for the majority of workflow rule actions executed across the platform, and that they are also a major contributor to Process Builder's problematic runtime performance. Moving all of these "recursive saves" out of the after-save phase and into before-save triggers should lead to very meaningful performance improvements.

High performance batch processing

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| High-Performance Batch Processing | Not Ideal | Not Ideal | Not Ideal | Available |

If you're looking for highly performant evaluation of complex logic in batch scenarios, Apex, with its configurability and extensive debugging features and tooling, is the way to go. Here are some examples of what we mean by "complex logic" and why we recommend Apex for it.

  • Defining and evaluating complicated logical expressions or formulas.
    • Flow's formula engine sporadically performs poorly when resolving extremely complex formulas. This issue is exacerbated in batch use cases, because formulas are currently compiled and resolved serially at runtime. We are actively evaluating batch-friendly options for formula compilation, but formula resolution will always be serial. We have not yet identified the root cause of the poor formula resolution performance.
  • Complex list processing, such as loading and transforming data from large numbers of records, or loops within loops within loops.
    • See Complex list processing for the current limitations of working directly with lists in Flow.
  • Anything that requires map-like or set-like functionality.
    • Flow does not support the Map data type. Likewise, if an invocable Apex action passes an Apex object to Flow and that object contains a member variable of type Map, you will not be able to access that member variable in Flow. The member variable does persist at runtime, however, so if Flow passes that Apex object to another invocable Apex action, the receiving action can access the member variable (see the sketch after this list).
    • Map data type support is not on Flow's one-year roadmap.
  • Transaction savepoints.
    • Transaction savepoints are not supported in flow triggers and will likely never be supported in flow triggers.
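
To illustrate the Map limitation, here is a hedged sketch (all class and member names are hypothetical) assuming the Apex-defined data type pattern: @AuraEnabled members are visible to Flow, while a Map member is invisible in Flow Builder but still travels with the object when Flow hands it to another invocable action.

```apex
// Hypothetical Apex-defined type; in practice each top-level class lives in its own file.
public class ScoreBundle {
    // Visible to Flow: @AuraEnabled primitive members can be read in Flow Builder.
    @AuraEnabled public Id recordId;
    @AuraEnabled public Decimal totalScore;

    // Not visible to Flow: Map members can't be accessed in Flow Builder,
    // but the data is preserved at runtime when the object is passed along
    // to another invocable Apex action.
    public Map<String, Decimal> scoresByCategory = new Map<String, Decimal>();
}

public class ScoreActions {
    @InvocableMethod(label='Build Score Bundles')
    public static List<ScoreBundle> build(List<Id> recordIds) {
        List<ScoreBundle> results = new List<ScoreBundle>();
        for (Id recordId : recordIds) {
            ScoreBundle bundle = new ScoreBundle();
            bundle.recordId = recordId;
            bundle.scoresByCategory.put('engagement', 42);
            bundle.totalScore = 42;
            results.add(bundle);
        }
        return results;
    }
}
```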

While before-save flow triggers are not quite as performant as before-save Apex triggers in raw speed contests, the overhead matters less in the context of the larger transaction. Before-save flow triggers should still be fast enough for the vast majority of non-complex (as enumerated above) batch scenarios involving same-record field updates. And since they are consistently around 10 times faster than workflow rules, you can safely use them anywhere you currently use workflow rules.

For batch processing that doesn't need to run immediately in the original transaction, Flow has some options, though they remain more limited and less feature-rich than Apex. Currently, scheduled flows can run a batch operation on up to 250,000 records per day and can be used for use cases unlikely to hit that limit. Scheduled paths in record-triggered flows also support configurable batch sizes, so admins can change the batch size from the default (200) when needed, for example for external callouts that can't handle the default batch size. (See Well-Architected – Transaction Processing for more information.)

Cross Object CRUD

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Cross-Object CRUD | Not Available | Available | Available | Available |

Creating, updating, or deleting any record other than the original record that triggered the transaction requires a database operation, no matter which tool you use. The only tool that does not currently support cross-object "crupdeletes" (a portmanteau of the create, update, and delete operations) is the before-save flow trigger.

Apex currently outperforms Flow in raw database operation speed; that is, the Apex runtime takes less time to prepare, execute, and process the result of a given database call (such as a call to create a case) than the Flow runtime does. In practice, however, if you're after big performance improvements, you'll get more of them by identifying and fixing inefficient user implementations than by optimizing lower-level operations. Executing the user-defined logic on the application server often takes far longer than handling the database operations.


The most inefficient user implementations tend to issue many DML statements where fewer would suffice. For example, here is a flow trigger implementation that updates two fields on a case's parent account using two separate Update Records elements.

[Figure: a record-triggered flow that updates the case's parent account with two separate Update Records elements]

This is a suboptimal implementation because it causes two DML operations (and two additional save-order executions) at runtime. Combining the two field updates into a single Update Records element results in only one DML operation at runtime.
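
The same consolidation principle applies on the pro-code side. Below is a hedged Apex sketch of the case-to-parent-account example; the specific fields and values are illustrative.

```apex
trigger CaseAfterSave on Case (after insert) {
    // Anti-pattern: issuing one update per field (or per element) would send the
    // parent account through its save order twice.
    //
    // Better: collect both field changes for each parent account, then issue
    // a single bulk update for the whole batch.
    Map<Id, Account> parentUpdates = new Map<Id, Account>();
    for (Case c : Trigger.new) {
        if (c.AccountId != null) {
            parentUpdates.put(c.AccountId, new Account(
                Id = c.AccountId,
                Rating = 'Hot',
                Description = 'Has open cases'
            ));
        }
    }
    if (!parentUpdates.isEmpty()) {
        update parentUpdates.values();
    }
}
```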

Workflow rules have earned a reputation for being quite performant. Part of this can be attributed to how workflow rules limit the amount of DML executed during a save:

  1. All immediate, same-record field update actions across all of an object's workflow rules are automatically consolidated into a single DML statement at runtime (provided their criteria are met).
  2. A similar runtime consolidation happens for immediate cross-object, detail-to-master field update actions across all of an object's workflow rules.
  3. Beyond that, cross-object DML support in workflow rules is very limited.

When it comes to cross-object DML, then, the name of the game is minimizing unnecessary DML from the start:

  1. Before you start optimizing, it's important to understand where all the DML is happening. This step is easier if you've spread your logic across fewer triggers and have fewer places to look (this is one of the motivations for the widely promoted one-or-two-triggers-per-object pattern), but you can also address it with good documentation practices, whether that means maintaining object-centric subflows or establishing your own design patterns that make DML easy to discover at design time.
  2. Once you know where all the DML is happening, try to consolidate any DML that targets the same record into the fewest Update Records elements needed.
  3. For more complex use cases that require conditional and/or sequential manipulation of multiple fields on a related record, consider creating a record variable that serves as a temporary, in-memory container for the related record's data. Update the data in this variable throughout the flow's logic using Assignment elements, then perform a single explicit record update at the end of the flow to commit the data to the database (see the Apex sketch below for the equivalent pro-code pattern).
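
As a pro-code analogue of item 3, here is a sketch of the temporary in-memory container pattern: load the related record once, apply conditional changes to the in-memory copy, and commit everything with a single DML statement at the end. The fields and conditions are illustrative.

```apex
trigger CaseScoring on Case (after insert) {
    // Collect the parent account ids for the whole batch.
    Set<Id> accountIds = new Set<Id>();
    for (Case c : Trigger.new) {
        if (c.AccountId != null) {
            accountIds.add(c.AccountId);
        }
    }
    if (accountIds.isEmpty()) {
        return;
    }

    // One query loads the related records into memory (the "temporary container").
    Map<Id, Account> parents = new Map<Id, Account>(
        [SELECT Id, Rating, Description FROM Account WHERE Id IN :accountIds]
    );

    // Conditional, sequential changes touch only the in-memory copies.
    for (Case c : Trigger.new) {
        Account parent = parents.get(c.AccountId);
        if (parent == null) {
            continue;
        }
        if (c.Priority == 'High') {
            parent.Rating = 'Hot';
        }
        if (c.Origin == 'Web') {
            parent.Description = 'Latest case came from the web';
        }
    }

    // A single explicit DML statement commits all of the changes at the end.
    update parents.values();
}
```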

Sometimes that's easier said than done - unless you're having performance issues, you may find that this optimization isn't worth the investment.

Complex list processing

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Complex List Processing | Not Available | Not Ideal | Available | Available |

There are currently some major list processing limitations in Flow.

  1. Flow offers a limited set of basic list-processing operations out of the box.
  2. There is no way to reference an item in a Flow collection by index, not even with Flow's Loop element (at runtime, each iteration of a Loop element simply assigns the next value in the collection to the loop variable, and this assignment is by value, not by reference). So you can't do anything in Flow that would require mylist[myindexvariable] in Apex.
  3. Loops execute serially at runtime, even during batch processing. Because of this, any SOQL or DML operation inside a loop is not bulkified across the batch, which increases the risk of exceeding the relevant per-transaction governor limits.

The combination of these limitations makes some common list-processing tasks, such as in-place data transforms, sorts, and filters, too convoluted to achieve in Flow, while they remain much easier (and more performant) to achieve in Apex.

This is where extending flows with invocable Apex can really shine. Apex developers can, and have, created efficient, modular, object-agnostic list-processing methods in Apex. When these methods are declared as invocable methods, they automatically become available to Flow users. It's a great way to keep business logic implemented in a tool that business-minded users can work with, without forcing developers to implement functional logic in a tool that isn't well suited to it.

When creating invocable Apex, keep these considerations in mind:

  • It is the developer's responsibility to ensure that their invocable Apex method is properly bulkified. Invocable methods can be invoked from within a trigger context, for example from a process or an after-save flow, so they need to handle being invoked for an entire batch. At runtime, Flow gathers the inputs from every applicable flow interview in the batch into a list and passes that list to a single bulk invocation of the method.
  • Invocable Apex methods can now be declared with generic sObject inputs. While this functionality is more abstract, it allows you to implement and maintain a single invocable Apex method that can be reused by record-triggered flows on many different sObjects. Combining the generic sObject pattern with dynamic Apex enables some very elegant and reusable implementations (see the sketch following this list).
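
Here is a hedged sketch combining both considerations: an invocable list-processing action that is bulkified (it receives one request per flow interview in the batch) and operates on generic sObject records with dynamic field access so it can be reused across objects. All names are hypothetical.

```apex
public with sharing class FilterByFieldValue {
    public class Request {
        @InvocableVariable(required=true) public List<SObject> records;
        @InvocableVariable(required=true) public String fieldName;
        @InvocableVariable(required=true) public String matchValue;
    }

    public class Result {
        @InvocableVariable public List<SObject> matches;
    }

    // Flow passes one Request per interview in the batch; the method handles
    // all of them in a single bulk invocation.
    @InvocableMethod(label='Filter Records By Field Value')
    public static List<Result> filter(List<Request> requests) {
        List<Result> results = new List<Result>();
        for (Request req : requests) {
            Result res = new Result();
            res.matches = new List<SObject>();
            for (SObject record : req.records) {
                // Dynamic Apex: read the field by name so this works for any object.
                Object value = record.get(req.fieldName);
                if (value != null && String.valueOf(value) == req.matchValue) {
                    res.matches.add(record);
                }
            }
            results.add(res);
        }
        return results;
    }
}
```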

Since this guide was originally written, Flow has added more list processing capabilities, including filtering and sorting. However, it still doesn't have all of Apex's list processing capabilities, so the advice about using Apex or modularizing individual components also applies to more complex use cases.

asynchronous processing

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Fire-and-Forget Asynchronous Processing | Not Available | Available | Available | Available |
| Other Asynchronous Processing | Not Available | Available | Available | Available |

Asynchronous processing has many meanings in the programming world, but when it comes to record triggers, a few common themes come up. It is usually framed in contrast to the default option of making changes synchronously during the trigger order of execution. Let's explore why you may, or may not, want to leave synchronous processing behind.

Advantages of synchronous processing

  • Minimal database transactions: Record-change triggers usually run during the initial transaction in order to optimize database work. As we saw with same-record field updates, you already know the triggering record is being saved, so before-save automation can fold additional updates into that same database operation.
  • Consistent rollbacks: Likewise, keeping updates to other records inside the original transaction means the overall database change is atomic from a data-integrity standpoint, and rollbacks are handled collectively. If a record-triggered flow on Account updates all related contacts, but a separate automation later in the transaction throws an error that invalidates the whole transaction, those contacts will not be updated, because the original account update does not go through either.

Disadvantages of synchronous processing

  • A finite time window: A record save opens a database transaction that cannot be committed until every step in the trigger order of execution has run. That means there is only a finite window in which additional synchronous automation can run, because the transaction cannot be held open indefinitely. And when the record change is user-initiated, we don't want the user waiting a long time after every edit.
  • Governor limits: Because of the timing constraints above, Apex and Flow enforce tighter limits on synchronous processing than on asynchronous processing, to keep performance consistent.
  • Support for external objects and callouts: In general, any access to an external system that has to wait for a response (for example, to update the triggering record with a new value) takes too long to complete within the original open transaction. Some invocable actions work around this by implementing custom logic that queues their own execution until after the original transaction completes. Email alerts and outbound messages do this, which is why you can invoke an outbound message from an after-save flow but not an External Services action. That is not the case for the vast majority of callouts, though, and we recommend breaking them out into their own asynchronous processes whenever possible.
  • Mixed DML: Occasionally you may want to perform cross-object CRUD on both setup and non-setup objects, for example updating a User and a related Contact after a specific change. Due to security restrictions, this cannot be done in a single transaction, so these use cases require spinning up a separate transaction via a second, asynchronous process.

With these considerations in mind, both Flow and Apex provide ways to execute logic asynchronously, to accommodate use cases that require separate transactions, external callouts, or simply more time. For Apex, we recommend implementing asynchronous processing in a Queueable Apex class. For Flow, we recommend using the Run Asynchronously path in after-save flows to achieve a similar result in a low-code way. (See Well-Architected – Throughput Optimization to learn more about synchronous and asynchronous processing.)

When deciding between low code and pro code here, an important consideration is how much control you need over your callouts. Flow offers a fixed number of retries and basic error handling through its fault paths, while Apex offers more direct control. For a mixed approach, you can call System.enqueueJob against an Apex Queueable inside an invocable Apex method, then invoke that method from Flow through the invocable action framework (see the sketch below).
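
A hedged sketch of that mixed pattern, with hypothetical names and endpoint: an invocable method that enqueues a Queueable, which then makes the callout with its own error handling.

```apex
public with sharing class CreditCheckLauncher {
    // Invocable from a record-triggered flow via the invocable action framework.
    @InvocableMethod(label='Request Credit Check')
    public static void requestCreditCheck(List<Id> opportunityIds) {
        System.enqueueJob(new CreditCheckJob(opportunityIds));
    }

    public class CreditCheckJob implements Queueable, Database.AllowsCallouts {
        private List<Id> opportunityIds;

        public CreditCheckJob(List<Id> opportunityIds) {
            this.opportunityIds = opportunityIds;
        }

        public void execute(QueueableContext context) {
            try {
                HttpRequest req = new HttpRequest();
                req.setEndpoint('callout:Credit_Bureau/check'); // hypothetical named credential
                req.setMethod('POST');
                req.setBody(JSON.serialize(opportunityIds));
                HttpResponse res = new Http().send(req);
                // Apply the response, e.g. update a status field on the opportunities.
            } catch (Exception e) {
                // Apex gives direct control over failure handling: re-enqueue,
                // log, or flag the records for review as your use case requires.
                System.debug(LoggingLevel.ERROR, 'Credit check failed: ' + e.getMessage());
            }
        }
    }
}
```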

When designing a solution, especially one that makes callouts, it's important to think through what happens if a particular step fails, times out, or returns bad data. Asynchronous processing is generally more powerful, but it requires the designer to handle these edge cases more carefully, especially when the process is part of a larger solution that depends on a specific value. For example, if your quoting automation requires a callout to a credit-check bureau, what state is the quote left in if that credit-check system is down for maintenance? What if it returns an invalid value? What state will your opportunity or lead sit in in the interim, and what downstream automation is waiting on that outcome? Apex allows more sophisticated error-handling customization than Flow, including the ability to intentionally throw errors, and that can be a deciding factor between the two.

What about the other solutions?

To date, low-code admins have used various approaches (or "hacks") to achieve asynchronous processing. One was to create a time-based workflow action (in Workflow Rules), a scheduled action (in Process Builder), or a scheduled path (in Flow) that runs 0 minutes after the trigger fires. This effectively did the same thing the Run Asynchronously path does today, but the dedicated path has some advantages, including how quickly it runs. A 0-minute scheduled action can take a minute or more to actually start, while Run Asynchronously is optimized to be enqueued and executed as quickly as possible. Run Asynchronously may also support more stateful functionality in the future, for example access to the prior values of the triggering record, although that is not possible today. It also performs special caching to improve performance.

The other "hack" used was to add a pause element using an autostarted subflow that waited zero minutes and then called that process builder flow. This "zero wait pause" will effectively pause the transaction and schedule the remaining automation to run in its own transaction, but the mechanisms it uses don't scale well because they weren't designed for that purpose. As a result, increased usage leads to performance issues and flow interview limits. Also, the stream becomes more brittle and harder to debug. Customers who have used this approach have often had to abandon it after reaching scale. We don't recommend starting this path (pun intended), which is why it's not available for subflows called record-driven flows.

Transferring data or state between processes

One of the appeals of the zero-minute pause was the perceived persistence of state between the synchronous and asynchronous parts. In that particular hack, a flow variable persists across the pause, even if the pause lasts weeks or months. This may look attractive from an initial design perspective, but it runs counter to the underlying programming principles that asynchronous processing is meant to model. Separating processes so they run asynchronously gives them more flexibility and better control over performance, but the data they operate on generally needs to be treated as independent: that data can change in the time between two independent processes, even if only milliseconds separate them, and almost certainly over longer gaps. Flow variables, like local variables in code, are designed to last only as long as the individual process that owns them is running. If a separate process needs that information, even one that runs asynchronously right after the first completes, it should be stored somewhere persistent. Most often that means a custom field on the object that triggered the flow, since that record is automatically loaded as $Record on every path of a record-triggered flow. For example, if you use Get Records to retrieve a name from a related contact and want to reuse that name on an asynchronous path, you must either call Get Records again on that path or save the related name back to a field on $Record. If you need sophisticated caching or alternative data storage beyond Salesforce objects and records, we recommend using Apex. (See Well-Architected – State Management for more on state management.)

Summary

When asynchronous processing is involved, designing your record-triggered automation can take extra care and thought, especially if you need to make callouts to external systems or maintain state between processes. The Run Asynchronously path in Flow should meet many of your low-code needs, but some complex requirements, such as custom errors or configurable retries, call for Apex.

Custom validation errors

| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
| --- | --- | --- | --- | --- |
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |

Flow does not currently provide a way to prevent DML operations from being committed or to throw custom errors; the addError() Apex method is not supported when invoked from Flow through an invocable Apex method.
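
For completeness, here is what the Apex-only option looks like: a before-save trigger that blocks the save with a custom validation error. The specific rule is illustrative.

```apex
trigger OpportunityValidation on Opportunity (before insert, before update) {
    for (Opportunity opp : Trigger.new) {
        // addError() prevents this record from being committed and surfaces
        // the message to the user or API caller.
        if (opp.StageName == 'Closed Won' && opp.Amount == null) {
            opp.addError('A Closed Won opportunity must have an Amount.');
        }
    }
}
```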

Designing your record-triggered automation

There are countless debates in the community about best practices for designing record-triggered automation. You may have heard some of them already:

  • Use only one tool or framework per object
  • Put all your automation in a single process builder or parent flow and use subflows for all logic
  • Don't put all your automation in one place, break your flows into the smallest parts possible
  • There is an ideal number of flows you should have per object (and that number is 1...or 2...or 5...or 1000).

The truth is, there is a kernel of truth in all of this advice, but none of it addresses every org's unique challenges or needs. There will always be exceptions, and rules that apply in some cases but not others. This section lays out the specific problems each piece of advice addresses, so you can make your own decisions.

What problem are you trying to solve?

Performance

When building automation in Process Builder, performance was one of the main reasons to recommend one process per object per trigger type. Process Builder has a high startup cost, so every Process Builder process that ran during a record save hurt performance, and since Process Builder had no gating entry conditions, those hits happened on every edit. Flow works differently from Process Builder, so its startup cost is not nearly as high, though it still has some. Raw speed tests of Flow versus Apex on identical use cases generally show Apex ahead, at least in theory, since Flow's low-code benefits add at least one layer of abstraction; but from a performance standpoint this small difference is not a major differentiator for most use cases.


Flow also provides entry conditions, which can dramatically reduce performance impact when used to exclude a flow from running on a given record edit. Most changes to a record probably don't require your automation to make further changes; if someone fixes a typo in a description, you don't need to rerun your owner-assignment automation. You can configure entry conditions so that automation runs whenever a certain conditional state is met, or only when a specific defined change occurs. That is the difference between running an automation whenever an opportunity is closed versus only on the specific edit that changed its stage from open to closed. Either option is more efficient than running the automation on every update to an already-closed opportunity.

Summary

Getting your record-triggered automation to perform well is a multidimensional problem, and no single design rule covers every factor. For Flow, there are two main points to keep in mind as you design:

  1. Consolidating your automation into a single flow does not yield a huge performance benefit compared to splitting it across multiple flows.
  2. Entry conditions can deliver significant performance improvements when used to exclude record changes that don't affect a given use case.

This guide covers a number of performance considerations and recommendations, including using before-save flows for same-record field updates and eliminating excessive or repetitive DML wherever possible. These are the areas where we most commonly see performance issues in real customer scenarios, and they should be addressed first.

Troubleshooting

As architects, we'd love to never have to troubleshoot automation issues, but it comes up from time to time. While spreading your automation across multiple tools can work during initial development, it often creates headaches over time as changes have to be made in different places. Hence the advice to consolidate an object's automation into either Apex or Flow. There is currently no unified troubleshooting experience spanning all Salesforce tools, so depending on the complexity of your org and your anticipated debugging needs, you may choose to standardize on a single tool for your automation. Some customers make this a hard-and-fast rule because of their environment or the skill sets of their admins and developers. Others find it useful to split their automation between Flow and Apex, for example using invocable actions for the parts that are too complex or need careful handling, and calling them from Flow to keep them accessible to admins.

Summary

It can be advisable to consolidate an object's automation into a single tool when maintenance, debugging, or conflicts (for example, different people editing the same field) are likely to be an issue. Other approaches, such as using invocable actions to encapsulate the more complex, non-admin-friendly functionality, can also work well.

Ordering

For many years, the main reason to consolidate automation into a single process or flow was to guarantee ordering. The only way to keep two pieces of automation separate, yet run them in a guaranteed sequence, was to join them together. This quickly led to scaling problems: as organizations became more dynamic and needed to adapt to business changes, these "mega-flows" became cumbersome and difficult to update, even for small changes.

With flow trigger ordering, introduced in Spring '22, admins can assign a priority value to their flows and guarantee their execution order. The priority value is not absolute, so the values don't need to be numbered sequentially as 1, 2, 3, and so on. Instead, flows run in ascending order of their value, with a tie-breaker applied to duplicate values (for example, if there are two priority 1s, they run in alphabetical order) to minimize disruption from other automation, managed packages, or moves between orgs. All flows without a trigger order (any legacy or already-active flows) run between the values 1000 and 1001 for backwards compatibility. If you want to leave your existing flows running as they are, you can start your ordering at 1001 for any new flows you want to run after them. As a best practice, leave gaps between numbered flows; for example, use 10, 20, and 30 as values instead of 1, 2, and 3. That way, when you add a flow later, you can number it 15 to slot it between the first and second without having to deactivate and edit the flows that are already running.

Summary

In the past, the need for ordering drove recommendations to consolidate all automation into a single flow. With flow trigger ordering, this is no longer necessary. (See Well-Architected – Transaction Processing for more transaction-processing best practices.)

Organizational considerations

It's tempting to dwell on the technical rationale behind many best practices, but it's no less important to think about your business and the people who build and maintain the automation. Some customers want their admins to build all of their automation in subflows, with one lead admin responsible for consolidating everything into a single flow to keep change tracking manageable. Some prefer to build in Apex because they have developers who can move faster that way. Others want more functionality in flow entry conditions, such as being able to filter on record type, so that multiple groups can build automation that won't conflict in production (we're working on that record type request!). We recommend organizing around your business first and grouping flows functionally, based on what they automate and who should own them; but this looks different for every organization.

It can be incredibly difficult to make sense of an org that carries years of automation built by admins who no longer work in it. Best practices and documented design standards for your org, put in place early, help with long-term maintenance. Salesforce continues to invest in this space, with new features like Flow Trigger Explorer to help you understand which triggered automation is already in place and running today. It's always worth thinking about what benefits the long-term health and maintainability of your automation. If you're still stuck, we recommend reaching out to the Trailblazer Community; many Trailblazers have walked this path and can advise on the human side of building automation as well as the technical details. Best practices come from everyone!

Remember that documentation is just as important as the automation itself! When documenting your work, give things meaningful, unique names and use the description box on every element in Flow to explain your intent. Comment your code. Every architect who has been around long enough has skipped this step in a rush to meet a deadline, and likewise, every seasoned architect has ended up scratching their head over some undocumented piece of automation.

Summary

Ultimately, the best approach is the one that works well for your business and your org. If you're feeling a little lost, there's plenty of advice on managing a complex org in the Trailblazer Community, so dig in and ask questions as you work out how best to map your unique business and management setup onto the product. And remember: write things down!

Triggered flow runtime behavior

The remainder of this document describes technical details about the flow runtime.

Performance Discussion: Field updates for the same record

Approximately 150 billion actions were executed by workflow rules, Process Builder processes, and flows in the month of April 2020, including record updates, email notifications, outbound messages, and invocable actions. About 100 billion of those 150 billion actions were same-record field updates. Keep in mind that before-save flow triggers had launched only one release earlier, which means 100 billion same-record field updates were executed after the save (the equivalent of 100 billion recursive saves) in a single month. Imagine how much time could have been saved with before-save flow triggers.

Caution: Architects should be critical of all performance claims, even if they come from Salesforce. Results in your organization are likely to be different than results in our organizations.

Earlier in this guide we claimed that, while workflow rules have a reputation for being fast, they are always slower and more resource-intensive than a functionally equivalent single before-save flow trigger. The theoretical basis for that claim is that before-save flow triggers cause no DML operations and no subsequent recursive firing of the save order, whereas workflow rules do (because they execute after the save).

But what happens in practice? We did some experiments to find out.

[Experiment 1] single trigger; single record created through the user interface; Apex debug log duration

How much longer does an end user have to wait for a record to be saved?

For each of the different automation tools that can be used to automate a same-record field update, we spun up a new org, plus one more new org to serve as the baseline.

We then did the following for each org:

  1. In every org except the baseline, implemented the simplest possible trigger on Opportunity create that sets Opportunity.NextStep = Opportunity.Amount (sketched below).
  2. Enabled Apex debug logging, with every debug level set to None except Workflow.Info and ApexCode.Debug.
  3. Manually created a new opportunity record 25 times through the UI, with Amount populated.
  4. Calculated the average log duration across the 25 transactions.
  5. Subtracted the average log duration in the baseline org from the average duration in #4.

This gave us the average overhead that each trigger added to the log duration.
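
For reference, the Apex-trigger variant of that same-record field update would look roughly like this (a sketch; the experiment's exact code isn't published):

```apex
trigger OpportunityNextStep on Opportunity (before insert) {
    // Before save: assign in memory, no DML, no recursive save.
    for (Opportunity opp : Trigger.new) {
        if (opp.Amount != null) {
            opp.NextStep = String.valueOf(opp.Amount);
        }
    }
}
```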

[Figure: average save overhead added by each trigger tool in Experiment 1]

[Experiment 2] 50 triggers; 50,000 records inserted via the Bulk API (200-record batches); internal tooling

What about the other end of the spectrum: high-volume batch processing?


We borrowed some of our performance team's internal environments to get an idea of how well the various trigger tools scale.

The configuration was:

  • 1 org with 50 active before-save flow triggers on Account create, each updating Account.ShippingPostalCode
  • 1 org with 50 active before-save Apex triggers on Account create, each updating Account.ShippingPostalCode
  • 1 org with 50 active workflow rules on Account create, each updating Account.ShippingPostalCode
  • 1 org with 50 active after-save flow triggers on Account create, each updating Account.ShippingPostalCode
  • 1 org with 50 active Process Builder processes on Account create, each updating Account.ShippingPostalCode

Then, every Tuesday for the past 12 weeks, we loaded 50,000 accounts into each org via the Bulk API, with a batch size of 200 records.

Fortunately, our internal environments can profile the trigger runtime directly without the need for Apex debug logging or extrapolation from a baseline.

Because our internal environments are not representative of production, we are sharing only relative performance times, not raw times.

[Figure: relative bulk-processing performance by trigger tool in Experiment 2]

In both the single-save and the bulk use case, the before-save flow trigger performs excellently. As much as we'd like to take credit for the results, most of the performance savings come simply from the enormous benefit of running before the save.

So go forth and stop implementing same-record field updates in Workflow Rules and Process Builder!

Bulkification and recursion control

This section exists to help you better understand how Flow consumes governor limits at runtime. It contains technical discussion of the flow runtime's bulkification and recursion-control behavior.

We will focus primarily on how Flow affects these governor limits:

  • Total number of SOQL queries issued (100)
  • Total number of DML statements issued (150)
  • Total number of records processed due to DML statements (10,000)
  • Max CPU Time on Salesforce Servers (10,000ms)

We assume the reader has a basic understanding of what these limits represent, and we encourage a refresher on the content and terminology in How DML Works and Triggers and Order of Execution.
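
If you want to watch these counters yourself, the Apex Limits class exposes the current consumption; here is a quick anonymous Apex sketch:

```apex
// Execute Anonymous: print current consumption against the per-transaction limits
// discussed in this section.
System.debug('SOQL queries:   ' + Limits.getQueries()       + ' / ' + Limits.getLimitQueries());
System.debug('DML statements: ' + Limits.getDmlStatements() + ' / ' + Limits.getLimitDmlStatements());
System.debug('DML rows:       ' + Limits.getDmlRows()       + ' / ' + Limits.getLimitDmlRows());
System.debug('CPU time (ms):  ' + Limits.getCpuTime()       + ' / ' + Limits.getLimitCpuTime());
```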

Before we dig into the specifics of triggered-flow runtime behavior, it's important to establish a shared mental model of the save order for the rest of this discussion. We believe a tree model provides a very accurate abstraction.

  • Each node n_i in a save-order tree corresponds to a single Salesforce record R_a and a timestamped DML operation DML_i that processed that record.
    • It is possible for the same Salesforce record R_a to be processed by multiple timestamped DML operations {DML_i, DML_j, DML_k, ...} over the lifetime of a transaction.
    • It is possible for a single timestamped DML operation DML_i to process multiple Salesforce records {R_a, R_b, R_c, ...} if it is a bulk DML operation.
    • It is possible for the same Salesforce record to be processed by DML multiple times during a transaction, whether intentionally or unintentionally through recursion.
  • The root node of a save-order tree, n_0, is created when an end user performs a save. When a user creates a single record through the UI, a single save-order tree is created; when a user submits a batch of 200 record updates through the API, up to 200 independent save-order trees are created, one for each record update in the batch. Save-order trees can be skipped if the save is rejected by before-save validations.
  • At runtime, the save-order tree is populated as the save order resolves. For a given node n_i = (R_a, DML_i) in the save-order tree:
    • For each trigger that fires in response to the DML operation DML_i on R_a:
      • For each DML operation DML_j that the trigger executes at runtime:
        • For each individual Salesforce record R_b processed by that DML operation DML_j:
          • The trigger generates a child node n_j = (R_b, DML_j) under the original node n_i = (R_a, DML_i).
  • The subtree rooted at a node n_i represents the entire cumulative set of records processed as a result of DML_i at runtime.
  • The entire tree, that is, the subtree rooted at the root node n_0, thus represents all the records processed by DML in response to a single top-level DML operation DML_0 on a single record. For a 200-record batch, the corresponding 200 trees represent all the records processed by DML in the transaction.

Since each node in a save-order tree corresponds to a single DML-processed record, and the number of DML-processed records per transaction is capped at 10,000, there cannot be more than 10,000 nodes across all the save-order trees in a transaction.

Likewise, there cannot be more than 150 unique timestamped DML operations {DML_0, DML_1, ..., DML_149} across all the save-order trees in the transaction.

Now, let's revisit our earlier example of the suboptimal cross-object record-triggered flow implementation:

[Figure: the suboptimal flow with two separate Update Records elements targeting the case's parent account]

Suppose there are no other triggers in the org, and a user creates a single new case, Case 005, whose parent account is Acme Corp. The corresponding save-order tree is quite simple:

[Figure: the save-order tree for the creation of Case 005]
  • There are a total of three DML-processed records.
  • Each record was processed by its own dedicated DML statement, for a total of three DML statements issued.

Now suppose the user creates two new cases, Case 006 and Case 007, in a single DML statement. You get two save-order trees, each with three nodes, for a total of six DML-processed records. Thanks to Flow's automatic cross-batch bulkification logic (flow bulkification), the six nodes are still covered by a total of three issued DML statements:

[Figure: two save-order trees sharing three bulkified DML statements]

Still not bad, right? In real life, though, you'd probably expect a variety of triggers to run on account update, so each individual save-order tree would look more like this (for discussion's sake, assume there are 3 triggers on Account):

[Figure: a save-order tree expanded by three triggers on the account update]

And in a scenario where you batch 200 cases, there would be 200 corresponding save-order trees, sharing a total limit of 10,000 nodes and a total limit of 150 issued DML statements. The math starts to bring bad news.

However, by combining the flow's two original Update Records elements into a single Update Records element, the entire right subtree of n_0 can be eliminated.

[Figures: the flow with a single merged Update Records element, and the resulting pruned save-order tree]

This is an example of what we'll call functional bulkification, one of two types of bulkification practice that can reduce the number of DML statements needed to process all of the DML rows in a batch.

  1. Functional bulkification tries to minimize the number of unique DML statements required to process all the records in a single save-order tree.

    The example above achieves functional bulkification by effectively merging two functionally distinguishable DML nodes (and their respective save-order subtrees) on Acme Corp into a single, functionally equivalent, merged DML node and save-order subtree. This not only reduces the number of DML statements issued, it also saves CPU time: all the non-DML trigger logic runs once instead of twice.


  2. Cross-batch bulkification tries to maximize the number of DML statements that can be shared across all the save-order trees in a batch.

    An example of perfect cross-batch bulkification: if a single record's save-order tree requires 5 DML statements to be issued, a batch of 200 such records still requires only 5 DML statements.

    In the example above, cross-batch bulkification is handled automatically by the flow runtime.

Recursion control, on the other hand, improves processing efficiency by pruning functionally superfluous subtrees.

Flow bulkification

The flow runtime automatically performs cross-batch bulkification on the user's behalf. It does not, however, perform functional bulkification.

The following flow elements consume DML and SOQL in a triggered flow:

  1. Create / Update / Delete Records: Each element consumes 1 DML for the entire batch, not counting downstream DML caused by triggers on the target object.
  2. Get Records: Each element consumes 1 SOQL for the entire batch.
  3. Action calls: This depends on how the action is implemented. At runtime, the flow runtime builds a list containing the inputs from all the relevant flow interviews in the batch, then passes that list to a single bulk invocation of the action. From that point on, it's up to the action's developer to make sure the action is properly bulkified.
  4. Loop: Consumes no DML or SOQL directly, but overrides rules 1-3 above by executing every element inside the loop serially, for each flow interview in the batch, one at a time.
    1. This effectively "defeats" Flow's automatic cross-batch bulkification: no DML or SOQL inside a loop is shared across the batch's save-order trees, so the number of records in the batch has a multiplicative effect on the amount of DML and SOQL consumed.

As an example, consider the following triggered flow implementation: whenever an account is updated, it updates all of the account's related contracts and attaches a child contract change log record to each updated contract.

[Figure: a record-triggered flow that loops over related contracts, updating each contract and creating a contract change log record inside the loop]

Now suppose 200 accounts are updated in bulk. At runtime:

  1. The Get Related Contracts element adds +1 SOQL for the entire batch of 200 accounts.
  2. Then, for each of the 200 accounts:
    1. For each contract related to that account:
      1. The Update Contract element adds +1 DML to update the contract, not counting any downstream DML caused by triggers on the contract update.
      2. The Create Contract Change Log Record element adds +1 DML to create the corresponding child contract change log record, not counting any downstream DML caused by triggers on its creation.

For this reason, we strongly discourage putting DML and SOQL inside loops. This closely mirrors best practice #2 in Apex Code Best Practices (see the Apex sketch below). Users are warned when they attempt to do this while building in Flow Builder.
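
The Apex analogue of that best practice, for the same account-to-contracts example (a sketch; Contract_Change_Log__c is a hypothetical custom object):

```apex
trigger AccountContractSync on Account (after update) {
    // Anti-pattern to avoid: a SOQL query and DML statements inside a loop would
    // consume limits once per account, and per contract, in the batch.

    // Bulkified: one query and two DML statements for the entire batch.
    List<Contract> contractsToUpdate =
        [SELECT Id, Status FROM Contract WHERE AccountId IN :Trigger.newMap.keySet()];
    List<Contract_Change_Log__c> changeLogs = new List<Contract_Change_Log__c>();
    for (Contract con : contractsToUpdate) {
        con.Status = 'Draft';                                             // illustrative field update
        changeLogs.add(new Contract_Change_Log__c(Contract__c = con.Id)); // hypothetical child object
    }
    update contractsToUpdate;
    insert changeLogs;
}
```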

Flow recursion control

Triggered flows follow the recursive-save behavior described on the Triggers and Order of Execution page of the Apex Developer Guide.

[Figure: excerpt on recursive saves from the Triggers and Order of Execution documentation]

What does that actually mean? Let's return to the tree model we built earlier and revisit this tree-population property:

  • At runtime, the save-order tree is populated as the save order resolves. For a given node n_i = (R_a, DML_i) in the save-order tree:
    • For each trigger that fires in response to the DML operation DML_i on R_a:
      • For each DML operation DML_j that the trigger executes at runtime:
        • For each individual Salesforce record R_b processed by that DML operation DML_j:
          • The trigger generates a child node n_j = (R_b, DML_j) under the original node n_i = (R_a, DML_i).

The guarantee, "During a recursive save, Salesforce skips...' adds a little more magic:

  • At runtime, the save-order tree is populated as the save order resolves. For a given node n_i = (R_a, DML_i) in the save-order tree:
    • For each trigger that would normally fire in response to the DML operation DML_i on R_a:
      • If the trigger belongs to steps 9-18 of the save order:
        • If the trigger has previously fired in response to an earlier DML operation on R_a that began the chain of DML operations leading to the current DML operation DML_i on R_a:
          • The trigger does not fire.
      • Otherwise, the trigger fires:
        • For each DML operation DML_j that the trigger executes at runtime:
          • For each individual Salesforce record R_b processed by that DML operation DML_j:
            • The trigger generates a child node n_j = (R_b, DML_j) under the original node n_i = (R_a, DML_i).

This has some important implications:

[Implication #1] A flow trigger can fire multiple times on the same record during a transaction.

[Figure: the suboptimal Case Create flow trigger]

For example, suppose that in addition to the suboptimal Case Create flow trigger shown above, the org also has an active flow trigger on Account Update.

For simplicity, let's assume the Account Update flow trigger is a no-op. Now suppose we create a new case, Case #007, whose parent account is Bond Brothers.

The save order would then unfold like this:

  1. Case #007 is created.
  2. The save order for Case Create is entered for Case #007.
    1. Save-order steps 1-16 execute. Since there are no triggers on Case other than the flow trigger above, these are no-ops.
    2. Step 17 executes. (Our public documentation hasn't been updated yet, but after-save flow triggers are the new step 17; the current step 17, roll-up summaries, and everything below it shift down one step.)
      1. The Case Create flow trigger fires.
        1. The flow trigger updates the Bond Brothers account's rating.
          1. The save order for Account Update is entered for Bond Brothers.
          2. Save-order steps 1-16 execute. No-ops.
          3. Step 17 executes.
            1. The Account Update flow trigger fires. // First execution on Bond Brothers.
              1. Since we defined the Account Update flow trigger to be a no-op, nothing happens.
            2. Since no other flow triggers are active on Account Update, step 17 ends.
          4. Steps 18-22 execute. No-ops.
          5. The save order for Account Update on Bond Brothers ends.
        2. The flow trigger updates the Bond Brothers account's propensity to pay.
          1. The save order for Account Update is entered for Bond Brothers.
          2. Save-order steps 1-16 execute. No-ops.
          3. Step 17 executes.
            1. The Account Update flow trigger fires. // Second execution on Bond Brothers. // Not a recursive execution!
              1. Since we defined the Account Update flow trigger to be a no-op, nothing happens.
            2. Since no other flow triggers are active on Account Update, step 17 ends.
          4. Steps 18-22 execute. No-ops.
          5. The save order for Account Update on Bond Brothers ends.
      2. Since no other flow triggers are active on Case Create, step 17 ends.
    3. Steps 18-22 execute. No-ops.
    4. The save order for Case Create on Case #007 ends.
  3. The transaction closes.

If the two Update Records elements had been merged into a single Update Records element, the save order would instead unfold like this:

  1. Case #007 is created.
  2. The save order for Case Create is entered for Case #007.
    1. Save-order steps 1-16 execute. Since there are no triggers on Case other than the flow trigger above, these are no-ops.
    2. Step 17 executes. (Our public documentation hasn't been updated yet, but after-save flow triggers are the new step 17; the current step 17, roll-up summaries, and everything below it shift down one step.)
      1. The Case Create flow trigger fires.
        1. The flow trigger updates the Bond Brothers account's rating and propensity to pay in a single update.
          1. The save order for Account Update is entered for Bond Brothers.
          2. Save-order steps 1-16 execute. No-ops.
          3. Step 17 executes.
            1. The Account Update flow trigger fires. // First execution on Bond Brothers.
              1. Since we defined the Account Update flow trigger to be a no-op, nothing happens.
            2. Since no other flow triggers are active on Account Update, step 17 ends.
          4. Steps 18-22 execute. No-ops.
          5. The save order for Account Update on Bond Brothers ends.
      2. Since no other flow triggers are active on Case Create, step 17 ends.
    3. Steps 18-22 execute. No-ops.
    4. The save order for Case Create on Case #007 ends.
  3. The transaction closes.

[Implication #2] A flow trigger will never cause itself to fire again on the same record.

[Implication #3] While flow triggers (and all the other triggers in steps 9-18 of the v48.0 save order) get this kind of recursion control for free, steps 1-8 and 19-21 do not. So if a flow trigger performs a same-record update after the save, a new save order is entered and steps 1-8 and 19-21 run again. This behavior is why it's so important to move same-record updates into before-save flow triggers!

Final thoughts

You made it! Have a great day, and thanks for reading. We hope you learned something valuable.

Tell us what you think

Help us make sure we're publishing what is most relevant to you: take our survey to provide feedback on this content and let us know what you'd like to see next.



Introduction: My name is Carmelo Roob, I am a modern, handsome, delightful, comfortable, attractive, vast, good person who loves writing and wants to share my knowledge and understanding with you.