Support for COMET models

I am trying to use NNsight for interpretability of machine translation evaluation metrics. Specifically, I am interested in the COMET metric (Unbabel/COMET on GitHub, "A Neural Framework for MT Evaluation").

The metric has its own model-loading and inference functions; it doesn't use the standard Hugging Face Transformers API.

Where should I go from here?

Hi @Wafaa, NNsight supports any PyTorch model in theory! Looking at the COMET code, I'd try something like this:

from comet import download_model, load_from_checkpoint
from nnsight import NNsight

# Download and load the COMET model checkpoint
model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire."
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens were open",
        "ref": "Schools and kindergartens opened"
    }
]

# Wrap the COMET model with NNsight
model = NNsight(model)

# Trace the model's own predict method and save its result
with model.predict(data, batch_size=8, gpus=1) as tracer:
    model_output = tracer.result.save()

print(model_output)
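For reference, plain COMET usage from their README looks like this, so the traced predict above mirrors the library's own API (the .scores / .system_score attributes are how recent COMET releases structure the result, if I remember right):

# Plain COMET usage for comparison; run this on the raw COMET model,
# before wrapping it with NNsight.
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output.scores)        # segment-level scores
print(model_output.system_score)  # corpus-level score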

Please report back if this works!


It works! Thank you so much!


@JadenFK A follow-up question: I am now trying to change the values of some weights. For example, let's say I want to set some weights to zero.
Here is a simple code snippet I tried:

with model.predict(data) as tracer:
    # Save the layer-0 dense output before editing
    L0_output_before = model.encoder.model.encoder.layer[0].output.dense.output[0].save()
    # Clone it, zero out the first column, and write it back
    edited_tensor = model.encoder.model.encoder.layer[0].output.dense.output[0].clone().save()
    edited_tensor[:, 0] = 0
    model.encoder.model.encoder.layer[0].output.dense.output[0] = edited_tensor
    # Save the layer-0 dense output after editing
    L0_output_after = model.encoder.model.encoder.layer[0].output.dense.output[0].save()
print("\n")
print("############")
print("L0 before: ", L0_output_before)
print("L0 after: ", L0_output_after)
print("############")

When I try to run it, I get this error:

RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed.You can make a clone to get a normal tensor before doing inplace update.

I suspect this is a simple fix, some default setting I need to change. I would appreciate the help.

@Wafaa The predict method of that model is wrapped in torch.inference_mode (see the PyTorch inference_mode documentation), which prevents in-place updates to tensors, like edited_tensor[:, 0] = 0. But you are cloning it first, so that line should be fine.
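For context, the restriction is plain PyTorch behavior and reproduces without COMET or NNsight. One caveat I'd flag: a clone made while inference mode is active is itself an inference tensor, so whether the clone helps depends on where NNsight actually runs it:

import torch

with torch.inference_mode():
    t = torch.ones(2, 3)   # created inside inference mode -> an "inference tensor"
    c = t.clone()          # a clone made inside inference mode is still an inference tensor

# Both of the following raise:
# RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed.
# t[:, 0] = 0
# c[:, 0] = 0

n = t.clone()   # a clone made outside inference mode is a normal tensor
n[:, 0] = 0     # in-place update succeeds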
I also think it may be blocking this line:

model.encoder.model.encoder.layer[0].output.dense.output[0] = edited_tensor

If it is that line, you need to replace the whole tensor rather than assigning into an index of it (I don't know what .output is here):

model.encoder.model.encoder.layer[0].output.dense.output = edited_tensor

If it's still a problem, I'd try to disable inference mode somehow.
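Alternatively, you could sidestep the in-place write entirely by building the edited tensor out-of-place. A minimal sketch, assuming the saved activation is 2D so that [:, 0] indexes the second dimension, and that the proxy exposes .shape / .device / .dtype like a tensor; adjust the mask's broadcasting to the real shape:

import torch

with model.predict(data) as tracer:
    out = model.encoder.model.encoder.layer[0].output.dense.output[0]
    # Build a 0/1 mask that zeroes column 0; multiplying is out-of-place,
    # so no inference-tensor restriction is triggered.
    keep = (torch.arange(out.shape[1], device=out.device) != 0).to(out.dtype)
    model.encoder.model.encoder.layer[0].output.dense.output[0] = out * keep
    result = tracer.result.save()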