
Dialogflow CX

Openlayer integrates with Dialogflow CX to help you monitor, evaluate, and improve your conversational agents. The integration reads your agent’s runtime logs from a BigQuery dataset (populated by a Cloud Logging sink in your GCP project), automatically discovers your agents, and periodically syncs every interaction so you can trace LLM calls, RAG retrievals, tool invocations, and token counts in Openlayer.

How it works

The data path is Dialogflow → Cloud Logging → BigQuery → Openlayer:
  1. Your Dialogflow CX agent writes runtime logs to Cloud Logging.
  2. A Cloud Logging sink streams those logs into a BigQuery dataset in your GCP project.
  3. Openlayer reads from that dataset on your behalf using a service account you provide.
Once connected, Openlayer:
  1. Discovers your agents: finds every Dialogflow CX agent that has logs in the configured dataset.
  2. Creates projects and data sources: each agent gets its own Openlayer project and data source, with no manual setup required.
  3. Syncs interactions: periodically (every 15 minutes) pulls new conversation traces and deduplicates against existing rows.
  4. Enriches traces: extracts LLM models, token counts, prompts, and RAG context so quality metrics work out of the box.
All access is read-only. Openlayer queries BigQuery and never writes to your GCP project. The service account you provide only needs read access to a single dataset.
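The periodic sync described above boils down to a read-query-deduplicate loop. The sketch below is illustrative only, not Openlayer's actual implementation; the row shape and the use of `responseId` as the unique interaction key are assumptions for the example.

```python
# Illustrative sketch of the sync loop: take newly exported rows from
# BigQuery, skip interactions already imported, and keep the rest.
# NOT Openlayer's actual code; field names are assumptions.

def sync_interactions(new_rows, seen_ids):
    """Return the rows not yet imported, updating seen_ids in place."""
    imported = []
    for row in new_rows:
        interaction_id = row["responseId"]  # assumed unique per interaction
        if interaction_id in seen_ids:
            continue  # deduplicate against existing rows
        seen_ids.add(interaction_id)
        imported.append(row)
    return imported

# A second sync over an overlapping window imports only the new row,
# which is why overlapping syncs and backfills are safe to re-run.
seen = set()
first = sync_interactions([{"responseId": "a"}, {"responseId": "b"}], seen)
second = sync_interactions([{"responseId": "b"}, {"responseId": "c"}], seen)
```

Because deduplication happens on the interaction key rather than on sync windows, manual syncs and backfills can overlap the periodic schedule without producing duplicate rows.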

Prerequisites

The integration needs three pieces of GCP setup, all in the same project as your Dialogflow CX agent.

1. Enable agent logging

Dialogflow CX produces runtime logs only when logging is explicitly enabled on the agent. In the Dialogflow CX console:
  1. Open your agent.
  2. Click the gear icon → Agent settings → Logging tab.
  3. Enable both:
    • Cloud Logging (enableStackdriverLogging): sends logs to Cloud Logging
    • Conversation history (enableInteractionLogging): includes the conversation content (without this, message text is stripped)
  4. Click Save.
If enableInteractionLogging is off, logs are still written but message text and trace details are stripped. Openlayer will be unable to ingest meaningful trace data.

2. Create a Cloud Logging sink to BigQuery

A Cloud Logging sink streams matching log entries into a BigQuery dataset in real time. Openlayer reads from that dataset.
Replace PROJECT_ID and DATASET with your values throughout.
# 1. Create the BigQuery dataset that will receive Dialogflow logs.
#    Pick US or EU based on your data residency requirements.
bq --location=US mk -d PROJECT_ID:DATASET

# 2. Create the Cloud Logging sink. The --use-partitioned-tables flag is
#    required for cost-efficient queries. See:
#    https://cloud.google.com/logging/docs/export/bigquery#partitioned-tables
gcloud logging sinks create openlayer-dialogflow-sink \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET \
  --log-filter='logName:"dialogflow-runtime.googleapis.com/requests"' \
  --use-partitioned-tables

# 3. The sink runs as its own service account. Grant it write access
#    to the dataset so it can stream logs in.
SINK_SA=$(gcloud logging sinks describe openlayer-dialogflow-sink \
  --format="value(writerIdentity)")

bq add-iam-policy-binding \
  --member="$SINK_SA" \
  --role=roles/bigquery.dataEditor \
  PROJECT_ID:DATASET
The sink does not create the destination table until the first matching log arrives. Right after running these commands the dataset will be empty. Send a message to your agent (e.g. via the Test Agent simulator in the Dialogflow console) and wait 1-2 minutes for the table to appear.
The sink writes to a table named after the log filter. For dialogflow-runtime.googleapis.com/requests the table is:
dialogflow_runtime_googleapis_com_requests
You will paste this table name into the Openlayer connect form below.
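The table name comes from the log ID: characters that are invalid in a BigQuery table name (hyphens, dots, slashes) are replaced with underscores. A small sketch of that mapping, so you can derive the table name for any log filter:

```python
import re

def sink_table_name(log_id: str) -> str:
    """Map a Cloud Logging log ID to the BigQuery table name the sink
    writes to: characters invalid in table names become underscores."""
    return re.sub(r"[^a-zA-Z0-9_]", "_", log_id)

print(sink_table_name("dialogflow-runtime.googleapis.com/requests"))
# dialogflow_runtime_googleapis_com_requests
```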

3. Create a service account with read access

Openlayer authenticates as a service account that you create and own. The required BigQuery IAM roles are:
Role | Scope | Purpose
roles/bigquery.jobUser | Project | Run BigQuery queries
roles/bigquery.dataViewer | Dataset | Read the Dialogflow logs table
# 1. Create the service account.
gcloud iam service-accounts create openlayer-reader \
  --display-name="Openlayer Dialogflow Reader" \
  --project=PROJECT_ID

SA_EMAIL="openlayer-reader@PROJECT_ID.iam.gserviceaccount.com"

# 2. Grant project-level role to run BQ jobs.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:$SA_EMAIL" \
  --role="roles/bigquery.jobUser"

# 3. Grant dataset-level role to read the Dialogflow logs table.
bq add-iam-policy-binding \
  --member="serviceAccount:$SA_EMAIL" \
  --role=roles/bigquery.dataViewer \
  PROJECT_ID:DATASET

# 4. Generate a JSON key. You'll paste the contents into Openlayer.
gcloud iam service-accounts keys create ./openlayer-key.json \
  --iam-account="$SA_EMAIL"

Setup guide

Step 1: Navigate to the integration

In your Openlayer workspace, go to Settings → Integrations and find the Dialogflow CX card. Click Enable.

Step 2: Provide your credentials

Fill in the connect form using the values from the prerequisites:
  • Service account JSON: paste the entire contents of openlayer-key.json
  • GCP project ID: the project hosting your BigQuery dataset
  • BigQuery dataset: e.g. dialogflow_logs
  • BigQuery table: dialogflow_runtime_googleapis_com_requests
  • BigQuery job location: US or EU (match your dataset’s location)
  • Parameter allowlist (optional): see PII control
Your Dialogflow agent region (e.g. global, us-central1, europe-west1) is separate from the BigQuery job location (US, EU, us-central1, …). The form’s BigQuery job location refers to where the BigQuery dataset lives, not where the agent runs. They can differ.

Step 3: Test the connection

Before saving, click Test connection. Openlayer verifies that your service account can read the configured table. On success you’ll see the service account email confirmed inline. If it fails, common causes are:
  • Table not found: the sink has not created the table yet. Send a message via the Test Agent simulator in Dialogflow and wait 1-2 minutes.
  • Permission denied: IAM bindings have not propagated. Wait 30 seconds and retry, or re-run the dataset binding step.
  • Invalid JSON: the service account JSON paste was incomplete or malformed.

Step 4: Connect and discover agents

Click Connect. Openlayer immediately discovers every Dialogflow CX agent that has logs in the dataset. They appear in the Agents section identified by their agent ID.

Step 5: Configure sync settings

Once connected, configure automatic syncing:
  • Periodic sync: toggle on to enable automatic syncing every 15 minutes
  • Initial sync range: how far back to read on the first sync of a newly enabled agent
  • Parameter allowlist: see PII control
You can also click Sync Now at any time to trigger an immediate sync.

Step 6: Enable agents

Under the Agents section, click Enable on each agent you want to monitor. The Enable dialog lets you set a display name (Dialogflow CX runtime logs only carry agent UUIDs, so this is the cleanest place to give the agent a friendly label in Openlayer) and choose between Create new project (default) or Map to existing project. When you click Enable, Openlayer:
  • Creates a new project named Dialogflow - <display name>
  • Creates a default inference pipeline and links it to the agent
  • Queues an immediate sync so data starts flowing
Initial interactions may take 1-2 minutes to appear after enabling. Cloud Logging takes a few minutes to export new logs to BigQuery.

PII control for queryParams.parameters

Dialogflow CX request logs contain the full queryParams.parameters object, a free-form bag of keys and values that often holds personally identifiable information (email, name, IP address, customer IDs, internal session tokens, etc). By default, Openlayer drops every key in queryParams.parameters. To allow specific keys into trace metadata, list them in the Parameter allowlist field on either the connect form or the Settings panel:
chattype, lg, env
Anything not in the allowlist is dropped before the row is written to Openlayer. You can update the allowlist at any time from the Settings panel; the change applies to subsequent syncs (existing rows are not retroactively modified).
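The allowlist is a key filter over queryParams.parameters. An illustrative sketch of the documented behavior (not Openlayer's actual code; the example keys and values are hypothetical):

```python
def filter_parameters(parameters: dict, allowlist: set) -> dict:
    """Keep only allowlisted keys; every other key is dropped before
    the row is written. Default allowlist is empty, so by default
    everything in queryParams.parameters is dropped."""
    return {k: v for k, v in parameters.items() if k in allowlist}

params = {
    "chattype": "support",      # allowlisted: kept
    "lg": "en",                 # allowlisted: kept
    "customer_email": "a@b.c",  # not allowlisted: dropped (PII)
}
print(filter_parameters(params, {"chattype", "lg", "env"}))
# {'chattype': 'support', 'lg': 'en'}
```

Note the filtering is deny-by-default: a key absent from the allowlist never reaches Openlayer, even if it appears in later logs.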

Monitoring in Openlayer

Once agents are enabled and the first sync completes, conversations automatically appear in their respective Openlayer projects.

Conversation traces

Each Dialogflow CX interaction is converted into a trace that captures the user query, the agent’s response, latency, and the full execution tree: LLM calls, knowledge-base retrievals, webhook tool calls, and intent or playbook orchestration.

Backfill historical data

To import interactions that occurred before the integration was connected (or before an agent was enabled), click Backfill in the agent’s options menu. In the dialog, choose:
  • All available history: re-fetch every interaction the dataset still retains
  • Custom start date: fetch interactions from a specific date forward
Duplicate interactions are automatically skipped, so it’s safe to run a backfill at any time.
Backfill is bounded by your BigQuery dataset’s retention. If you need to pull data from before the dataset existed, that data simply isn’t there to pull. The Cloud Logging → BigQuery export only starts once the sink is created.

Run evaluations

With interactions flowing into Openlayer, you can:
  • Create tests to score response quality
  • Detect hallucinations and measure faithfulness against retrieved RAG context
  • Track safety and compliance metrics
  • Monitor latency and token-usage trends
  • Compare agent performance across versions

Disconnecting

To disconnect the Dialogflow CX integration:
  1. Go to Settings → Integrations → Dialogflow CX
  2. Click Remove Dialogflow from workspace
This stops all syncs and deletes the connection and discovered-agent records. Existing data already imported into Openlayer (projects, pipelines, traces) is preserved. To fully tear down the GCP-side resources:
gcloud logging sinks delete openlayer-dialogflow-sink
gcloud iam service-accounts delete openlayer-reader@PROJECT_ID.iam.gserviceaccount.com
bq rm -r -d PROJECT_ID:DATASET

Troubleshooting

Symptom | Likely cause | Fix
Test connection: Table not found | Sink hasn’t created the table yet | Send a message via the Test Agent simulator, wait 1-2 minutes
Test connection: Permission denied | IAM bindings haven’t propagated, or wrong roles | Wait 30 seconds; verify bigquery.jobUser (project) and bigquery.dataViewer (dataset)
Test connection: Invalid JSON | Service account JSON paste was incomplete | Paste the entire file, including the leading { and trailing }
Connect succeeds but 0 agents discovered | No logs in the table yet | Generate a few messages via the Test Agent simulator, wait, click Refresh agents
0 interactions synced after enabling an agent | Logs exist but none are response logs yet | Send another message and wait. Response logs land within seconds of the request log
model and tokens are always null on synced rows | Agent is classic-flow, not generative | Expected. Only generative (Playbook / Generator-driven) agents emit LLM telemetry. Classic flows produce intent-match and webhook traces only
Sync fails with Quota exceeded or bytes billed exceeded | Backfill range is too wide for the dataset | Use Custom start date instead of All history
Costs higher than expected | Sink wasn’t created with --use-partitioned-tables | Recreate the sink with the flag set; existing rows in the unpartitioned table are scanned in full on every query