Continuous models are executed automatically when a profile is loaded (via web, mobile, or CTV) or when relevant profile properties are updated (from any source). All outputs defined by the model are stored in profile properties on the profile.
Use continuous models to power use cases such as:
Look-alike modeling: identify new high-value audiences based on real-time behavior.
Propensity modeling: predict the likelihood to buy, churn, or click in the moment.
Before you begin
Before you upload or create a continuous model, confirm the following:
You have access to the AI Workbench in BlueConic.
Your model is in ONNX format, or you have the source model ready to convert.
The profile properties you plan to use as model inputs exist in BlueConic.
The profile properties where model outputs should be stored exist in BlueConic.
Continuous model format
Continuous models are stored in the ONNX format. The input of the model is a profile vector, and the model can have one or more outputs. Each output corresponds to a profile property that should be updated by the model.
Inputs
profile (optional float[]): a float tensor containing the vectorized profile, based on the profileProperties and featureNames metadata. If no profile input is defined, the model will be executed without any profile input.
Outputs
The model does not have any fixed outputs. Instead, each output corresponds to a profile property to be updated by the model. Outputs can be scalar values or one-dimensional tensors. Multi-dimensional tensors are not supported because they do not map directly to the profile data model.
Metadata
Each continuous model requires the following metadata:
| Metadata field | Python API name | Description |
| --- | --- | --- |
| profileProperties | profile_property_ids | The profile property IDs that are used as input features for the model. |
| featureNames | feature_names | The names of the features that are relevant for the model. The profile input will be filled based on these feature names. |
| segmentId | segment_id | (Optional) A segment ID that restricts model execution to profiles that are members of this segment. If not provided, the model is executed for all profiles. |
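Expressed through the Python API names, the metadata could look like the following dictionary. Only the key names come from the table above; the property IDs and feature names are hypothetical placeholders:

```python
# Hypothetical metadata values; only the key names follow the documented fields.
model_metadata = {
    # profile property IDs used to build the profile input vector
    "profile_property_ids": ["purchase_count", "country"],
    # feature names the profile input is filled from (one per vector position)
    "feature_names": ["purchase_count", "country=NL", "country=US"],
    # optional: restrict execution to one segment; omit to run for all profiles
    "segment_id": "high_value_customers",
}
```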
Upload a continuous model
You can upload a continuous model to BlueConic using the UI, the AI Workbench Python API, or the REST API.
Using the UI
1. Log into BlueConic and navigate to More.
2. Select AI Workbench from the drop-down menu.
3. Go to the Models tab in the AI Workbench.
4. Click Add Model.
5. Set the model type to Continuous.
6. Upload your ONNX model file.
7. Select one or more Profile properties to use as inputs for the model.
8. Copy the Feature names.
9. (Optional) Select a Segment to restrict model execution to specific profiles.
10. Click Save.
Using the AI Workbench (Python notebook)
If you plan to run the AI Workbench notebook on a schedule to regularly retrain your model, we recommend using the update_model method in combination with a model parameter.
First, define your parameters:
```python
import blueconic

bc = blueconic.Client()

# the profile properties to train the model on
PROFILE_PROPERTY_IDS = bc.get_blueconic_parameter_values(
    "Profile properties", "profile_property"
)
if not PROFILE_PROPERTY_IDS:
    raise ValueError("Please configure profile properties to train model on")

# where the model should be stored
MODEL_ID = bc.get_blueconic_parameter_value("Model", "model")
if not MODEL_ID:
    raise ValueError("Please configure a model")

# restrict scoring to this specific segment
SCORING_SEGMENT_ID = bc.get_blueconic_parameter_value("Scoring segment", "segment")

# the profile property that should contain the score from the model
OUTPUT_PROPERTY_ID = bc.get_blueconic_parameter_value(
    "Score property", "profile_property"
)
if not OUTPUT_PROPERTY_ID:
    raise ValueError("Please configure a Score property")
```
Then, once you've trained a vectorizer (in this example called my_dict_vectorizer) and a model (in this example called my_onnx_model), call the update_model method to store the model and its metadata in BlueConic.
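A minimal sketch of that call, wrapped in a helper so the pieces are explicit. The keyword names passed to update_model here are assumptions, not the confirmed signature; check the Models Python API documentation for the exact form:

```python
def save_continuous_model(bc, model_id, onnx_model, vectorizer,
                          property_ids, segment_id=None):
    """Hypothetical helper: push a trained ONNX model plus its metadata
    to BlueConic. The update_model keyword names are assumptions."""
    bc.update_model(
        model_id,
        model=onnx_model.SerializeToString(),
        profile_property_ids=property_ids,
        # the vectorizer knows the exact feature names the model was trained on
        feature_names=list(vectorizer.get_feature_names_out()),
        segment_id=segment_id,
    )
```

In the notebook, this would be called as `save_continuous_model(bc, MODEL_ID, my_onnx_model, my_dict_vectorizer, PROFILE_PROPERTY_IDS, SCORING_SEGMENT_ID)`.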
Note: For more information, see the Models Python API documentation. For details on how to create my_onnx_model, see Converting the model to ONNX format below.
Using the REST API
Continuous models can also be pushed to the CDP through the REST API, for example as part of an external model pipeline.
Note: For more information, see the Models REST API documentation.
Create a new continuous model
To build a continuous model from scratch, follow these steps:
1. Convert profiles to feature dictionaries and labels.
2. Train a DictVectorizer and convert feature dictionaries to feature vectors.
3. Train a model on the resulting data.
4. Convert the model to ONNX format.
Step 1: Convert profiles to feature dictionaries
You can use the profile_to_feature_dict method to convert a BlueConic Profile object to a feature dictionary. We recommend using this method to ensure that the data your model is trained on matches the data that will be passed to the model when it is executed.
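A hedged sketch of this step. The method names and signatures used for the client (get_profiles, profile_to_feature_dict, get_value) are assumptions based on this guide; check the Python API documentation for the exact forms:

```python
def collect_training_data(bc, segment_id, property_ids, label_property_id):
    """Hypothetical sketch: turn the profiles in a segment into
    feature dictionaries plus labels."""
    features, labels = [], []
    for profile in bc.get_profiles(
        segment_id=segment_id, properties=property_ids + [label_property_id]
    ):
        # profile_to_feature_dict keeps the training data consistent with
        # the data the model will see when it is executed
        features.append(bc.profile_to_feature_dict(profile, property_ids))
        labels.append(profile.get_value(label_property_id))
    return features, labels
```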
Note: Storing all data in memory, as in the example above, only works for small datasets. For larger segments you will likely need to store data on disk, for example using sqlite.
Step 2: Convert feature dictionaries to feature vectors
Once you have transformed profiles into feature dictionaries, the next step is to turn those dictionaries into feature vectors. You can typically use the standard scikit-learn DictVectorizer for this purpose:
Step 3: Convert the model to ONNX format
Once trained, convert your model to ONNX format. BlueConic supports two approaches.
Converting a scikit-learn model to ONNX format
Assuming you have trained your model using the scikit-learn package, you can use sklearn-onnx to convert the model to ONNX. After conversion, make the following adjustments:
1. Change the input name to profile.
2. Disable returning a dictionary by setting the zipmap option to False (for classifiers).
3. Remove the class label output (for classifiers).
4. Rename the output to the profile property ID where the value should be stored.
Create an ONNX model from scratch
You can also use the onnx package to directly define your computational graph. The example below builds a simple RFM model that, based on pre-calculated thresholds, calculates a score between 1 and 5 for recency, frequency, and monetary value.
Note: When creating your own ONNX model, it's important to set the ir_version and the opset ID to ensure your model is compatible with the platform.
Step 4: Unit test your model
It's good practice to validate the output of your ONNX model to ensure the output matches your expectations. You can use the onnxruntime package to execute ONNX models.
First, import the required packages and set up a unit testing framework, such as Python's built-in unittest.
Then add your unit tests. For example, you can check that the RFM model described above maps known recency, frequency, and monetary values to the expected 1-5 scores.
FAQ
When does a continuous model run?
A continuous model runs automatically whenever a profile is loaded via web, mobile, or CTV, or when relevant profile properties are updated from any source.
Can I restrict a model to a specific segment?
Yes. Set the segmentId (segment_id in the Python API) metadata field to a segment ID. The model only runs for profiles that are members of that segment.
What output types does a continuous model support?
Outputs can be scalar values or one-dimensional tensors. Multi-dimensional tensors are not supported because they don't map directly to the profile data model.
Is the profile input required?
No. The profile input is optional. If you don't define it, the model runs without any profile data as input.
Which ONNX conversion tools does BlueConic support?
BlueConic works with any valid ONNX model. Common conversion tools include:
sklearn-onnx for scikit-learn models.
onnxmltools for a broader range of model frameworks.

