Hi @Wafaa, NNsight supports any PyTorch model in theory! Looking at the COMET code, I'd try something like this:
from comet import download_model, load_from_checkpoint
from nnsight import NNsight

# Download and load the COMET checkpoint
model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire."
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens were open",
        "ref": "Schools and kindergartens opened"
    }
]

# Wrap the COMET model so NNsight can trace it
model = NNsight(model)

with model.predict(data, batch_size=8, gpus=1) as tracer:
    model_output = tracer.result.save()

print(model_output)
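For context, NNsight(model) wraps the underlying torch modules so you can reach into them during the trace, and .save() is what keeps a value like tracer.result around after the with block exits.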
@JadenFK a follow-up question: I am now trying to change the values of some weights. For example, let's say I want to set some weights to zero.
Here is a simple code snippet I tried:
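# continuing from the setup above (the COMET model wrapped with NNsight)
with model.predict(data, batch_size=8, gpus=1) as tracer:
    # clone the layer output so I can modify it
    edited_tensor = model.encoder.model.encoder.layer[0].output.dense.output[0].clone()
    # set the first column to zero
    edited_tensor[:, 0] = 0
    # write the edited values back
    model.encoder.model.encoder.layer[0].output.dense.output[0] = edited_tensor
    model_output = tracer.result.save()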
When I try to run it, I get this error:

RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed. You can make a clone to get a normal tensor before doing inplace update.
I think it is probably a simple fix of some default setting I need to change. I would appreciate the help.
That error comes from PyTorch's inference mode, which prevents in-place updates to inference tensors like the one in edited_tensor[:,0] = 0, but you are cloning it first, so that line should be fine.
I also think it's preventing this line (maybe): model.encoder.model.encoder.layer[0].output.dense.output[0] = edited_tensor
If it is that line, you need to replace the whole tensor instead of indexing into it (I don't know what .output is here): model.encoder.model.encoder.layer[0].output.dense.output = edited_tensor
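Putting those two together, here's a rough sketch of what I'd try, reusing the module path and predict call from your snippet (I'm assuming .output here behaves like NNsight's usual module output proxy):

with model.predict(data, batch_size=8, gpus=1) as tracer:
    # clone first so the in-place edit happens on a normal tensor
    edited_tensor = model.encoder.model.encoder.layer[0].output.dense.output.clone()
    edited_tensor[:, 0] = 0
    # assign the whole tensor back instead of indexing into .output
    model.encoder.model.encoder.layer[0].output.dense.output = edited_tensor
    model_output = tracer.result.save()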
If things are still a problem, I'd try to disable inference mode somehow.
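For example, PyTorch lets you temporarily leave inference mode with torch.inference_mode(mode=False). A rough sketch of the idea (edit_activation is just a hypothetical helper name, and where exactly to hook it in depends on the COMET internals):

import torch

def edit_activation(inference_tensor):
    # temporarily disable inference mode so clone() returns a normal tensor
    with torch.inference_mode(mode=False):
        edited = inference_tensor.clone()
        edited[:, 0] = 0  # the in-place update is now allowed on the normal clone
    return edited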