Re-labeling Conversations

Re-labeling customer conversations and notes #

This guide explains how to reanalyze (re-label) an individual conversation/note or conversations/notes within a time range.

Individual conversations/notes #

You’ll need the following IDs to proceed:

  • Conversation FSID: The FSID of the conversation that needs to be reanalyzed.

OR

  • Note ID: The ID of the note that needs to be reanalyzed.
  • Note Version: The latest version of the note that needs to be reanalyzed.

Step 1: Trigger the re-labeling process #

We need to publish a pubsub message to start the re-labeling process. Run the following fetch with the relevant IDs, depending on whether you’re reanalyzing a conversation or a note.

fetch('/api/internal/pubsub/publish', {method: "POST", body: JSON.stringify({
  topic: "conversation_note_analysis",
  key: {
      // Provide either conversation_fs_id, or note_id + note_version
      conversation_fs_id: "<conversation_fs_id here>",
      note_id: "<note_id here>",
      note_version: "<note_version here>"
  },
  data: {}
})})

Step 2: Force run the runners #

We also need to tick the runners so that the pubsub message with topic conversation_note_analysis gets picked up for processing. Run the below fetch:

fetch('/api/tick/runners', {method: "POST"})

This should reanalyze the appropriate conversation/note.
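The two steps above can be wrapped in a small helper. The payload shape mirrors the fetch call in Step 1; the helper names (`buildRelabelKey`, `relabel`) and their option names are our own, not part of the API.

```javascript
// Build the pubsub key for the conversation_note_analysis topic.
// Pass either conversationFsId, or noteId together with noteVersion
// (field names match the payload shown in Step 1).
function buildRelabelKey({ conversationFsId, noteId, noteVersion }) {
    if (conversationFsId) {
        return { conversation_fs_id: conversationFsId };
    }
    return { note_id: noteId, note_version: noteVersion };
}

// Publish the re-labeling message, then tick the runners so it gets processed.
async function relabel(ids) {
    await fetch('/api/internal/pubsub/publish', {
        method: "POST",
        body: JSON.stringify({
            topic: "conversation_note_analysis",
            key: buildRelabelKey(ids),
            data: {}
        })
    });
    await fetch('/api/tick/runners', { method: "POST" });
}
```

For example, `relabel({ conversationFsId: "<conversation_fs_id here>" })` for a conversation, or `relabel({ noteId: "<note_id here>", noteVersion: "<note_version here>" })` for a note.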

Conversations/notes within a time range #

We can also bulk process conversations/notes within a given time range.

Step 1: Trigger the reanalysis process #

We need to call the /api/internal/labels/reanalysis endpoint with the appropriate query params. The supported query params are:

  1. from: The oldest timestamp to reanalyze back to. Defaults to 30 days ago.
  2. until: The most recent timestamp to reanalyze up to. Defaults to the current time.
  3. type: The type of entity to reanalyze. Possible values are conversations and notes. By default, both are processed.

Run the below fetch to trigger the reanalysis:

fetch('/api/internal/labels/reanalysis', {method: "POST"})

Add the appropriate query params as needed.
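As a sketch, the query string can be built from the documented params. The helper name `buildReanalysisUrl` is our own, and the ISO 8601 timestamp format is an assumption; confirm what format the endpoint actually expects.

```javascript
// Build the reanalysis URL from the supported query params (from, until, type).
// Omitted params fall back to the endpoint's defaults.
function buildReanalysisUrl({ from, until, type } = {}) {
    const params = new URLSearchParams();
    if (from) params.set("from", from);
    if (until) params.set("until", until);
    if (type) params.set("type", type);
    const query = params.toString();
    return '/api/internal/labels/reanalysis' + (query ? `?${query}` : '');
}

// Example: conversations only, starting 7 days ago (ISO 8601 assumed).
const from = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString();
const url = buildReanalysisUrl({ from, type: "conversations" });
// Then trigger it with: fetch(url, { method: "POST" })
```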

Step 2: Force run the runners #

We also need to tick the runners so that the pubsub message with topic conversation_note_analysis gets picked up for processing. Since we’re doing bulk reanalysis and there might be a huge number of conversations/notes to process, we need a script that keeps ticking the runners sequentially until all conversations/notes have been processed.

Run the below script:

async function runFetchSequentially() {
    const totalCalls = 1000;
    let successCount = 0;
    let errorCount = 0;

    console.log(`Starting ${totalCalls} sequential fetch calls...`);

    for (let i = 1; i <= totalCalls; i++) {
        try {
            console.log(`Making call ${i}/${totalCalls}...`);

            const response = await fetch('/api/tick/runners', {
                method: "POST"
            });

            if (response.ok) {
                successCount++;
                console.log(`✅ Call ${i} succeeded (${response.status})`);
            } else {
                errorCount++;
                console.log(`❌ Call ${i} failed with status: ${response.status}`);
            }

        } catch (error) {
            errorCount++;
            console.log(`❌ Call ${i} failed with error:`, error.message);
        }
    }

    console.log('\n--- Summary ---');
    console.log(`Total calls: ${totalCalls}`);
    console.log(`Successful: ${successCount}`);
    console.log(`Failed: ${errorCount}`);
}

// Run the function
runFetchSequentially();

Note: set totalCalls based on the expected number of conversations/notes to be processed.