Unleashing Unica’s Triad of Data Processing Powers

In the ever-evolving landscape of data management, understanding the tools at your disposal can make all the difference. Welcome to a deep dive into Unica’s data processing capabilities: advanced Derived Fields and the <TempTable>, <OutputTempTable>, and <Extract> tokens.

Advanced Use of Derived Fields

“Derived Fields” are variables that do not originally exist within a data source but are instead constructed from one or more existing fields, possibly drawn from different data sources. Within processes, the configuration window offers a “Derived Fields” button, which lets you create new variables for querying, segmenting, sorting, calculating, or generating output to a table.

If you want a Derived Field’s values to remain available to downstream processes, enable the “Make persistent” option when creating it. Derived Fields available within a process are listed under its “Derived fields” folder. Note that these Derived Fields are accessible only within the process in which they were created; if a process has no derived fields, the “Derived fields” folder does not appear in the list.

For situations requiring the use of a derived field in subsequent processes or across various flowcharts, you have the option of storing its expression in the “Stored derived fields” list, making it accessible for all processes and flowcharts.

Beyond creating individual Derived Fields, you can also create nested Derived Fields. This approach lets you define variables within a Derived Field, so that all calculations are executed inside that same derived field and multiple derived fields are no longer needed. To illustrate, consider a scenario where a Derived Field must calculate a single value based on several intermediate calculations.

Traditionally, multiple Derived Fields would be created in sequence and then used in the final Derived Field. A more efficient approach is to create variables within the Derived Field itself: each calculation is carried out individually and then incorporated into the final expression.

This approach is best illustrated through a concrete example:

[Screenshot: a single Derived Field that consolidates several date and amount calculations using variables]

In this use case, a Derived Field is crafted to display the total amount spent by customers within a specific date range. The process entails reformatting the transaction date, calculating a reference date (e.g., 14 days ago) from the current date, and then capturing the amount spent for transactions falling between those two dates. Rather than creating a multitude of Derived Fields, all these operations can be consolidated into a single Derived Field using variables, as depicted in the screenshot.
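
The exact Derived Field expression appears only in the screenshot, so as a rough illustration of the same logic, here is a minimal SQL sketch. It assumes a hypothetical TRANSACTIONS table with CustomerID, TxnDate, and Amount columns and uses dialect-dependent date arithmetic; it is not the Unica macro expression itself.

  -- Hypothetical SQL equivalent of the Derived Field logic described above.
  -- Table and column names (TRANSACTIONS, TxnDate, Amount) are illustrative,
  -- and the date arithmetic syntax varies by database.
  SELECT
      CustomerID,
      SUM(CASE
              -- keep only transactions between the reference date (14 days ago) and today
              WHEN CAST(TxnDate AS DATE)
                   BETWEEN CURRENT_DATE - INTERVAL '14' DAY AND CURRENT_DATE
              THEN Amount
              ELSE 0
          END) AS TotalSpentLast14Days
  FROM TRANSACTIONS
  GROUP BY CustomerID;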

The use of variables in a Derived Field can be expanded as needed, depending on the complexity of the calculations or operations involved. Variables provide the flexibility to break complex tasks into manageable components, improving both the clarity and efficiency of the Derived Field and making it easier to build robust, versatile solutions tailored to your data processing requirements.

<TempTable>, <OutputTempTable> and <Extract> Tokens

Temporary tables serve as a valuable workspace for storing intermediate results during data processing or transfer. Once their purpose is fulfilled, these temporary tables are automatically discarded, optimizing the efficiency of data operations.

For optimal performance, use the <TempTable> token in raw SQL queries, particularly when working with large datasets. This improves the overall performance of your queries and ensures smoother execution.

In the context of in-database optimization, the use of the <TempTable> token becomes essential when specifying a raw SQL query within a Select process that includes an input cell. This token guarantees correct behavior in this scenario, as explained below.

Here is an example query:

[Screenshot: a raw SQL query in a Select process that joins against the <TempTable> token]
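
Since the screenshot is not reproduced here, the following is a minimal sketch of what such a query might look like. The table and column names (CUSTOMER, CustomerID, Status) are hypothetical; the key point is the join against <TempTable>, which Unica Campaign replaces at run time with the temporary table holding the input cell’s audience IDs.

  -- Hypothetical example: an intermediate Select process (with an input cell),
  -- where the raw SQL joins against <TempTable> so only IDs present in the
  -- input cell are returned. CUSTOMER, CustomerID, and Status are illustrative.
  SELECT c.CustomerID
  FROM   CUSTOMER c,
         <TempTable> t
  WHERE  c.CustomerID = t.CustomerID
    AND  c.Status = 'ACTIVE'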

Furthermore, when in-database optimization is enabled, employing the <OutputTempTable> token is also recommended. This token prevents the unnecessary copying of audience IDs from the database to the Unica Campaign server, thereby optimizing processing efficiency.
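
As a sketch only, one way the <OutputTempTable> token might appear in raw SQL is shown below. It assumes a database that supports CREATE TABLE ... AS SELECT and reuses the hypothetical table names from the previous example; Campaign substitutes <OutputTempTable> with the name of an output temporary table so the resulting IDs stay in the database.

  -- Hypothetical example: with in-DB optimization on, write the result set
  -- directly into the database-side output temp table so audience IDs are not
  -- copied back to the Campaign server. Assumes CREATE TABLE ... AS is supported.
  CREATE TABLE <OutputTempTable> AS
  SELECT DISTINCT c.CustomerID
  FROM   CUSTOMER c,
         <TempTable> t
  WHERE  c.CustomerID = t.CustomerID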

The behavior of raw SQL queries within a Select process with an input cell depends on whether in-database optimization is active. The control of in-database optimization can be managed globally through the “Use In-DB Optimization” configuration setting, or it can be specified on an individual flowchart basis using the “Use In-DB Optimization during Flowchart Run” option in the Admin menu.

Specifically:

  1. When in-database optimization is turned off: The list of IDs generated by the raw SQL query is automatically matched against the ID list from the incoming cell. The resulting list of IDs remains a subset of the input cell, as expected.
  2. When in-database optimization is turned on: Unica Campaign assumes that the ID list produced by the Select process is the final list. There is no comparison made against the ID list of any incoming cell. Therefore, for raw SQL queries within an intermediate Select process (a Select process with an input cell), the use of the <TempTable> token is crucial. This token facilitates proper joining against the input cell, ensuring accurate results and optimizing performance by avoiding unnecessary processing of audience IDs that do not exist in the input cell.

You can leverage the <Extract> token in downstream processes to reference an Extract table via raw SQL queries. This token allows you to specify subsets of data for subsequent processing, which can significantly enhance performance when dealing with large tables.

Here’s an illustrative example:

[Screenshot: a raw SQL query that selects customer IDs from the <Extract> table for customers whose account balance exceeds $1,000]

In this example, the raw SQL query runs against an Extract table to retrieve the customer IDs of all customers whose account balance exceeds $1,000.
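
A minimal sketch of such a query follows; the column names (CustomerID, AcctBalance) are hypothetical, while <Extract> is resolved by Campaign to the Extract table available to the process.

  -- Hypothetical example: referencing the Extract table in a downstream process.
  -- <Extract> resolves to the latest available Extract table; column names are illustrative.
  SELECT CustomerID
  FROM   <Extract>
  WHERE  AcctBalance > 1000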

It’s important to note that in flowcharts containing multiple Extract processes, the <Extract> token always refers to the latest available Extract table.

Additionally, please be aware that after a Merge operation, the validity of the <Extract> token may vary. It’s advisable to perform a test run of the flowchart to confirm whether the token functions as expected in such cases.

Conclusion

Mastering Unica’s data processing capabilities empowers your data journey with efficiency and customization. These tools, demonstrated here through practical examples, streamline complex tasks and help keep your data solutions robust and adaptable in an ever-evolving landscape. Stay tuned for more insights on achieving data mastery with Unica.

Maximize your MarTech Investment!

With our expertise in Unica since 2005, Munvo has extensive experience with complex MarTech stacks and upgrades. Contact us to unlock the full potential of Unica's data processing powers.


