NNsight 0.5 Prerelease: Feedback Requested

Thanks, that’s very useful! Nitpick: it would be nice to have a better exception than NNsightException: f-string expression part cannot include a backslash when there is no forward.

Also, the error message when accessing a value that doesn’t exist in a source could be made more explicit:

---------------------------------------------------------------------------
NNsightException                          Traceback (most recent call last)
Cell In[10], line 6
      4 mname = "bigscience/bigscience-small-testing"
      5 model = StandardizedTransformer(mname, )
----> 6 with model.trace(["hello", "hello the fox is jumping"], output_attentions=True):
      7     print(model.model.layers[0].source.self_input_layernorm_1)
      8     # print(model.attentions[0].source)#.attention_interface_0.output[1])
      9     # print([t.shape if t is not None else "None" for t in model.attentions[0].output])

File ~/.venv/lib/python3.10/site-packages/nnsight/intervention/tracing/base.py:387, in Tracer.__exit__(self, exc_type, exc_val, exc_tb)
    383 # Suppress the ExitTracingException but let other exceptions propagate
    384 if exc_type is ExitTracingException:
    385 
    386     # Execute the traced code using the configured backend
--> 387     self.backend(self)
    389     return True

File ~/.venv/lib/python3.10/site-packages/nnsight/intervention/backends/execution.py:24, in ExecutionBackend.__call__(self, tracer)
     21     tracer.execute(fn)
     22 except Exception as e:
---> 24     raise wrap_exception(e, tracer.info) from None
     25 finally:
     26     Globals.exit()

NNsightException: 

Traceback (most recent call last):
  File "/tmp/ipykernel_34500/4127555823.py", line 7, in <module>
    print(model.model.layers[0].source.self_input_layernorm_1)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/envoy.py", line 1340, in __getattr__
    return super().__getattr__(name)

AttributeError: 'super' object has no attribute '__getattr__'

One thing that was unclear to me until now is that in source you can only access intermediate values that are the results of function calls.

from nnsight import NNsight
import torch as th
import torch.nn as nn

def add(a, b):
    return a + b

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 10)

    def forward(self, x):
        foo = x + 2  # can't access from source
        foo_func = add(x, 2)  # can access as go through the artificial `add` function
        foo_trap = self.layer(x) + 1  # accesses `self.layer(x)` but not `self.layer(x) + 1`
        return self.layer(x)


model = NNsight(MyModel())
with model.trace(th.randn(10, 10)):
    print(model.source)

prints:

* def forward(self, x):
                  0     foo = x + 2  # can't access from source
 add_0        ->  1     foo_func = add(x, 2)  # can access as go through the artificial `add` function
 self_layer_0 ->  2     foo_trap = self.layer(x) + 1  # accesses `self.layer(x)` but not `self.layer(x) + 1`
 self_layer_1 ->  3     return self.layer(x)
                  4

I naively thought that self_layer_0 would be the value of foo_trap, but it’s not.
I guess this is related to limitations on what nnsight can trace, and the naming does imply that this is the layer’s output rather than foo_trap. Just thought it might be worth emphasizing in the docs.
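For instance (a minimal sketch reusing the model above; the .source.<name>.output access pattern mirrors the snippets later in this thread, so the exact attribute path is an assumption):

with model.trace(th.randn(10, 10)):
    # value of the bare self.layer(x) call on the foo_trap line, i.e. foo_trap - 1
    layer_out = model.source.self_layer_0.output.save()
print(layer_out.shape)  # only self.layer(x); foo_trap itself (layer_out + 1) is never exposed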

You introduced the .skip() method, which is a great addition!

Nonetheless, I was wondering how to use it in practice. Often, we might want to skip all layers before a given one. How should we proceed in this case?

What I imagine would be something like tracer.skip_till(model.path.to.the.layer). Would that be possible?

Because going through all layers manually and skipping them one by one seems error-prone!

@Antonin_Poche
I’m doing this as part of nnterp 1.0.0, which will be compatible with nnsight 0.5.x


def skip_layers(
    nn_model: LanguageModel,
    start_layer: int,
    end_layer: int,
    skip_with: TraceTensor | None = None,
):
    """
    Skip all layers between start_layer and end_layer (inclusive). Equivalent to:
    ```py
    set_layer_output(nn_model, end_layer, get_layer_input(nn_model, start_layer))
    ```
    But skips the useless computation in between.

    Args:
        nn_model: The NNSight model
        start_layer: The layer to start skipping from
        end_layer: The layer to stop skipping at
        skip_with: The tensor to use as the skipped layers' output; defaults to the input of start_layer
    """
    if skip_with is None:
        skip_with = get_layer_input(nn_model, start_layer)
    for layer in range(start_layer, end_layer + 1):
        get_layer(nn_model, layer).skip((skip_with,))
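For reference, a usage sketch (get_layer and get_layer_input are nnterp helpers assumed from the signature above, and the layer indices are arbitrary):

with nn_model.trace("The quick brown fox"):
    # route layer 2's input straight to layer 5's output, skipping layers 2..5
    skip_layers(nn_model, start_layer=2, end_layer=5)
    out = nn_model.output.save()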

gemma-3 generate seems to be broken in 0.5 (trace still works in 0.5, and generate works in 0.4.8)

from nnsight import LanguageModel
gemma3 = LanguageModel("axolotl-ai-co/gemma-3-34M", device_map="auto", dispatch=True)
with gemma3.generate("Hello, world!", max_length=10):
    print(gemma3.output.logits.save())
tensor([[[ 0.0000, -0.3108, -0.0311,  ..., -0.0029,  0.1946,  0.3862]]],
       device='cuda:1')
Traceback (most recent call last):
  File "/workspace/nnterp/a.py", line 3, in <module>
    with gemma3.generate("Hello, world!", max_length=10):
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/tracing/base.py", line 387, in __exit__
    self.backend(self)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/backends/execution.py", line 24, in __call__
    raise wrap_exception(e, tracer.info) from None
nnsight.NNsightException: 

Traceback (most recent call last):
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/backends/execution.py", line 21, in __call__
    tracer.execute(fn)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/tracing/tracer.py", line 331, in execute
    self.model.interleave(interleaver, self.fn, *args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/modeling/mixins/meta.py", line 76, in interleave
    return super().interleave(interleaver, fn, *args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/envoy.py", line 705, in interleave
    interleaver(fn, *args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/interleaver.py", line 312, in __call__
    fn(*args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/modeling/language.py", line 145, in __nnsight_generate__
    output = self._model.generate(*args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2623, in generate
    result = self._sample(
  File "/root/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 3607, in _sample
    outputs = model_forward(**model_inputs, return_dict=True)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
    return fn(*args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__
    return self._torchdynamo_orig_callable(
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 598, in __call__
    return _compile(
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1059, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
  File "/root/.venv/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
    return function(*args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 761, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
    out_code = transform_code_object(code, transform)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
    transformations(instructions, code_options)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 257, in _fn
    return fn(*args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in transform
    tracer.run()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
    super().run()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
    while self.step():
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
    return inner_fn(self, inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2168, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
    self.push(fn.call_function(self, args, kwargs))  # type: ignore[arg-type]
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 926, in call_function
    return super().call_function(tx, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 404, in call_function
    return super().call_function(tx, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 185, in call_function
    return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1187, in inline_user_function_return
    return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3726, in inline_call
    return tracer.inline_call_()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3905, in inline_call_
    self.run()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
    while self.step():
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
    return inner_fn(self, inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2168, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
    self.push(fn.call_function(self, args, kwargs))  # type: ignore[arg-type]
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 926, in call_function
    return super().call_function(tx, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 404, in call_function
    return super().call_function(tx, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 185, in call_function
    return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1187, in inline_user_function_return
    return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3726, in inline_call
    return tracer.inline_call_()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3905, in inline_call_
    self.run()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
    while self.step():
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
    return inner_fn(self, inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2168, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
    self.push(fn.call_function(self, args, kwargs))  # type: ignore[arg-type]
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 926, in call_function
    return super().call_function(tx, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 404, in call_function
    return super().call_function(tx, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 185, in call_function
    return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1187, in inline_user_function_return
    return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3726, in inline_call
    return tracer.inline_call_()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3905, in inline_call_
    self.run()
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
    while self.step():
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1688, in SETUP_WITH
    self.setup_or_before_with(inst)
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2960, in setup_or_before_with
    unimplemented_v2(
  File "/root/.venv/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 517, in unimplemented_v2
    raise Unsupported(msg)

Unsupported: Unsupported context manager
  Explanation: Dynamo does not know how to enter a `lock` context manager.
  Hint: Avoid using the unsupported context manager.
  Hint: File an issue to PyTorch. Simple context managers can potentially be supported, but note that context managers can't be supported in general

  Developer debug context: Attempted SETUP_WITH/BEFORE_WITH on UserDefinedObjectVariable(lock)


from user code:
   File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/interleaver.py", line 122, in inner
    inputs = self.handle(self.iterate(f"{provider}.input"), (args, kwargs))
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/interleaver.py", line 356, in handle
    mediator.handle(provider)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/interleaver.py", line 521, in handle
    process = not self.event_queue.empty()
  File "/usr/lib/python3.10/queue.py", line 108, in empty
    with self.mutex:

0.5 can’t be used in the interactive Python terminal:

>>> from nnsight import LanguageModel; model = LanguageModel("gpt2")

>>> with model.trace("a"):
...     pass
... 
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/.venv/lib/python3.10/site-packages/nnsight/modeling/mixins/remoteable.py", line 31, in trace
    return super().trace(
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/envoy.py", line 409, in trace
    return InterleavingTracer(fn, self, *args, **kwargs)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/tracing/tracer.py", line 261, in __init__
    super().__init__(*args, backend=backend)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/tracing/base.py", line 117, in __init__
    self.capture()
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/tracing/base.py", line 157, in capture
    source_lines, offset = inspect.getsourcelines(frame)
  File "/usr/lib/python3.10/inspect.py", line 1121, in getsourcelines
    lines, lnum = findsource(object)
  File "/usr/lib/python3.10/inspect.py", line 958, in findsource
    raise OSError('could not get source code')
OSError: could not get source code

from nnsight import LanguageModel
model = LanguageModel("gpt2", device_map="cuda", attn_implementation="eager")
with model.scan("a"):  # same error with trace
    print(model.transformer.h[0].attn.source.attention_interface_0.source)
print("success")

printing this attention source fails with this weird error:

                             * def eager_attention_forward(module, query, key, value, attention_mask, head_mask=None, **kwargs):
 key_transpose_0         ->  0     attn_weights = torch.matmul(query, key.transpose(-1, -2))
 torch_matmul_0          ->  +     ...
                             1 
                             2     if module.scale_attn_weights:
 torch_full_0            ->  3         attn_weights = attn_weights / torch.full(
 value_size_0            ->  4             [], value.size(-1) ** 0.5, dtype=attn_weights.dtype, device=attn_weights.device
                             5         )
                             6 
                             7     # Layer-wise attention scaling
                             8     if module.scale_attn_by_inverse_layer_idx:
 float_0                 ->  9         attn_weights = attn_weights / float(module.layer_idx + 1)
                            10 
                            11     if not module.is_cross_attention:
                            12         # if only "normal" attention layer implements causal mask
 query_size_0            -> 13         query_length, key_length = query.size(-2), key.size(-2)
 key_size_0              ->  +         ...
                            14         causal_mask = module.bias[:, :, key_length - query_length : key_length, :key_length]
 torch_finfo_0           -> 15         mask_value = torch.finfo(attn_weights.dtype).min
                            16         # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`.
                            17         # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device`
 torch_full_1            -> 18         mask_value = torch.full([], mask_value, dtype=attn_weights.dtype, device=attn_weights.device)
 attn_weights_to_0       -> 19         attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
 torch_where_0           ->  +         ...
                            20 
                            21     if attention_mask is not None:
                            22         # Apply the attention mask
                            23         causal_mask = attention_mask[:, :, :, : key.shape[-2]]
                            24         attn_weights = attn_weights + causal_mask
                            25 
 nn_functional_softmax_0 -> 26     attn_weights = nn.functional.softmax(attn_weights, dim=-1)
                            27 
                            28     # Downcast (if necessary) back to V's dtype (if in mixed-precision) -- No-Op otherwise
 attn_weights_type_0     -> 29     attn_weights = attn_weights.type(value.dtype)
 module_attn_dropout_0   -> 30     attn_weights = module.attn_dropout(attn_weights)
                            31 
                            32     # Mask heads if we want to
                            33     if head_mask is not None:
                            34         attn_weights = attn_weights * head_mask
                            35 
 torch_matmul_1          -> 36     attn_output = torch.matmul(attn_weights, value)
 attn_output_transpose_0 -> 37     attn_output = attn_output.transpose(1, 2)
                            38 
                            39     return attn_output, attn_weights
                            40 
Traceback (most recent call last):
  File "/workspace/nnterp/a.py", line 3, in <module>
    with model.scan("a"):
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/tracing/base.py", line 387, in __exit__
    self.backend(self)
  File "/root/.venv/lib/python3.10/site-packages/nnsight/intervention/backends/execution.py", line 24, in __call__
    raise wrap_exception(e, tracer.info) from None
nnsight.NNsightException: <exception str() failed>

If I try to get an output, I get the same error message:

from nnsight import LanguageModel
model = LanguageModel("gpt2", device_map="cuda", attn_implementation="eager")
with model.trace("a"): 
    print(model.transformer.h[0].attn.source.attention_interface_0.source.float_0.output)
print("success")

Btw, should I post this kind of issue on GitHub directly instead of here?

Python 3.12.*:
The basic nnsight “Hello World” example throws Segmentation fault (core dumped) (it was working on 0.4.8).

from nnsight import LanguageModel
model = LanguageModel("gpt2")
with model.trace("Hello World"):
    model.transformer.h[0].attn.output[0][:] = 0
    output = model.output.save()
print(output)

Hey I really appreciate you going through all these!! I had this working in 0.4 and just ported over (and made more robust) similar functionality to 0.5.

When a module defines a submodule at .output, .inputs, or .input, nnsight value access will be mounted at .nns_output etc. .output will just return the normal sub-Envoy for the module (so to access its output you would do .output.output). This also raises a warning informing of this change for that module.
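For example, a toy sketch of the behavior described above (the Wrapper module and names are made up for illustration):

import torch
import torch.nn as nn
from nnsight import NNsight

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.output = nn.Linear(10, 10)  # submodule that shadows the .output accessor

    def forward(self, x):
        return self.output(x)

model = NNsight(Wrapper())
with model.trace(torch.randn(2, 10)):
    value = model.nns_output.save()     # the module's own output value, remounted
    inner = model.output.output.save()  # .output is the Linear's sub-Envoy; .output.output is its value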


can’t reproduce with:

Using Python 3.10.12 environment at: /root/.venv
Name: nnsight
Version: 0.5.0.dev5
Location: /root/.venv/lib/python3.10/site-packages
Requires: accelerate, astor, dill, ipython, pydantic, python-socketio, toml, torch, transformers
Required-by: nnterp
---
Name: torch
Version: 2.7.0
Location: /root/.venv/lib/python3.10/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-cufile-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-cusparselt-cu12, nvidia-nccl-cu12, nvidia-nvjitlink-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: accelerate, compressed-tensors, nnsight, outlines, torchaudio, torchvision, vllm, xformers, xgrammar
---
Name: transformers
Version: 4.53.0
Location: /root/.venv/lib/python3.10/site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by: compressed-tensors, nnsight, vllm, xgrammar

The problem occurred when a module was called inside a function accessed with .source (in this case, dropout was called inside attention_interface). Fixed in 0.5.0.dev6


Should work in 0.5.0.dev6


Yeah, it wasn’t working with torch.compile’d modules. Fixed in 0.5.0.dev6


Can you make sure you’re on the latest nnsight version? 0.5.0.dev6


Yup, it works now! Thanks :slight_smile:

For people using 0.5 with pytest, to avoid headaches:

  • Run tests with --cache-clear, otherwise you might get weird errors when you change your code
  • If you get some OSError: could not get source code because e.g. you moved your test to another directory, clear all your Python caches:
    find . -name "*.pyc" -delete && find . -name "__pycache__" -type d -exec rm -rf {} + 2>/dev/null || true && rm -rf .pytest_cache

Great to hear everyone’s feedback so far! Don’t forget that we’re having a live feedback session tomorrow at 12 PM EDT. See below for more information:

Live Feedback Session

Join us for a real-time discussion about NNsight 0.5.

I was surprised that you need to save leaf values like integers or None. This is another breaking change compared to 0.4:

from nnsight import LanguageModel

model = LanguageModel("gpt2")
# a = "default"
with model.trace("hi") as tracer:
    a = None
print(f"a: {a}")
> prints "default" (if the commented line is uncommented) or fails with NameError: name 'a' is not defined

This happened to me, with this code failing:

    if cache_inputs:
        input_ids = model.input_ids.save()
    else:
        input_ids = None

I will just initialize input_ids to None outside the trace (sketched after the next snippet), but this could be surprising to new users. I could also just do:

    if cache_inputs:
        input_ids = model.input_ids
    else:
        input_ids = None
    input_ids.save()
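For reference, the initialize-outside version would just be (prompt and cache_inputs stand in for the surrounding code):

input_ids = None  # initialized outside the trace so the name always exists
with model.trace(prompt):
    if cache_inputs:
        input_ids = model.input_ids.save()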

Maybe saving the non-tracing values could be a better default for 0.5? Happy to hear your thoughts on this, but e.g.

my_var = 1
my_var = my_var.save()

looks quite verbose, and it can only be shortened to my_var = int(1).save(); there is no equivalent for None AFAIK.

For None, an alternative is:

def none():
    return None

with model.trace("hi"):
    a = none().save()
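Presumably the same trick generalizes to any leaf value with a passthrough function:

def identity(x):
    return x

with model.trace("hi"):
    a = identity(None).save()
    b = identity(1).save()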