Library Documentation
dnbflib is a Python library for old .NET BinaryFormatter files. These files are
packed binary snapshots of .NET objects. dnbflib helps you inspect them, change supported
values, and write a new DNBF binary back out.
Think of a BinaryFormatter file as a saved object graph: objects contain fields, fields can contain numbers or strings, and some fields point to other objects. dnbflib gives you two ways to work with that graph.
1. Python objects: open a file with `DNBFDocument`, find an object by class name, follow fields, edit a value, and write a new binary.
2. YAML export: export the binary into a folder of editable YAML and raw byte files, then rebuild the folder back into a BinaryFormatter file when you are done.
When dnbflib does not understand a record deeply, it keeps the original bytes. That makes no-edit rebuilds and small edits much safer.
Start with the workflow that matches how you want to edit the file.
dnbflib requires Python 3.11 or newer. It depends on pydatastreams, which
installs the Python module named datastream.
When running from a checkout without installing the package, point Python at the local
src folder first.
```powershell
# Run from a checkout without installing (PowerShell):
$env:PYTHONPATH = "src"
python -c "from dnbflib import DNBFRecordStore; print(DNBFRecordStore)"
```

```shell
# Editable install from a checkout:
python -m pip install -e .

# Or install the released package:
pip install dnbflib
```
A BinaryFormatter file is not one big object. It is a list of smaller chunks called records. One record might say "this is a string," another might say "this object has these fields," and another might say "this field points to object 42." dnbflib scans those records in order.
BinaryFormatter often gives objects numeric IDs. A field may not contain the object directly;
it may contain a reference to an ID instead. When you call member.deref(),
dnbflib follows that reference and gives you the target object.
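To make the indirection concrete, here is a toy model of ID references (purely illustrative; this is not dnbflib's internal representation):

```python
# Toy model: objects live in a table keyed by numeric ID, and a field may
# hold a reference marker instead of the value itself.
objects = {
    1: {"class": "Life", "fields": {"Name": "Alex", "Finances": ("ref", 42)}},
    42: {"class": "Game.Finances", "fields": {"BankBalance": 5000}},
}

def deref(field):
    """Follow a ('ref', object_id) marker to the target object."""
    if isinstance(field, tuple) and field[0] == "ref":
        return objects[field[1]]
    return field  # already an inline value

life = objects[1]
finances = deref(life["fields"]["Finances"])
print(finances["fields"]["BankBalance"])  # → 5000
```

In dnbflib the same lookup happens behind `member.deref()`, so you rarely see the raw IDs.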
Every record has original bytes in the source file. dnbflib keeps those bytes around. If a record is not edited, those exact bytes can be written back out unchanged.
A decoded view is dnbflib's best human-readable explanation of a record. For example, a class record might decode into a class name and a list of fields. If a decoded field is marked editable, dnbflib knows how to rebuild that part safely.
This script checks the safest promise first: export a binary, rebuild it without edits, and verify that the rebuilt bytes match the input exactly.
```python
from pathlib import Path

from dnbflib import export_dnbf_to_yaml, rebuild_yaml_export

input_path = Path("src/test.bin")
export_dir = Path("parsed_yaml")

export_dnbf_to_yaml(input_path, export_dir)
rebuilt = rebuild_yaml_export(export_dir)
print("roundtrip:", rebuilt == input_path.read_bytes())
```
If this prints `roundtrip: True`, the export folder can rebuild the original file exactly.
After that, make one small edit at a time and rebuild again.
The included example script does the same thing from PowerShell or a terminal.
```shell
python examples/export_to_yaml.py save.dat save_export --verify
```
DNBFRecordStore is a SQLite-backed table of records. You usually do not need it
for simple edits, but it is useful when you want to inspect a file by offsets and IDs.
| Column | Meaning |
|---|---|
| `sequence` | The record number in file order. |
| `offset`, `size` | Where the record starts in the binary and how many bytes it occupies. |
| `record_type`, `record_type_name` | The kind of record, such as string, array, class, reference, or message end. |
| `object_id`, `library_id`, `reference_id`, `metadata_id` | IDs used to connect objects and references when the binary contains them. |
| `raw` | The original bytes for this record, used for lossless rebuilds. |
```python
from dnbflib import DNBFRecordStore

store = DNBFRecordStore("parsed.dnbf.sqlite3")
print("source sha256:", store.get_metadata("source_sha256"))

for record in store.iter_records():
    print(record.sequence, record.record_type.name, record.offset, record.size)
```
This is low-level. For normal object navigation, prefer DNBFDocument.
```python
stored = store.get_by_object_id(42)
print(stored.record_type.name)
print(stored.raw.hex())
```
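One handy low-level task is tallying record types. The helper below works on anything that yields records with a `record_type.name`, so you could pass it `store.iter_records()`; the stand-in records exist only so the example runs on its own:

```python
from collections import Counter
from types import SimpleNamespace

def count_record_types(records):
    """Tally how many records of each type an iterable yields."""
    return Counter(r.record_type.name for r in records)

# With a real store you would call count_record_types(store.iter_records()).
# Stand-in records for demonstration:
fake = [
    SimpleNamespace(record_type=SimpleNamespace(name="BinaryObjectString")),
    SimpleNamespace(record_type=SimpleNamespace(name="ClassWithMembersAndTypes")),
    SimpleNamespace(record_type=SimpleNamespace(name="BinaryObjectString")),
]
print(count_record_types(fake))  # Counter({'BinaryObjectString': 2, ...})
```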
There are two editing styles. Use DNBFDocument when you know the object and field
names. Use YAML export when you want to inspect records by hand before deciding what to change.
This example finds one Life object, follows its Finances field, changes
BankBalance, and writes a new file.
```python
from pathlib import Path

from dnbflib import DNBFDocument

input_path = Path("save.dat")
output_path = Path("edited.bin")

with DNBFDocument.open(input_path) as doc:
    life = doc.one(class_name="Life", where=lambda obj: obj.member("Name").value == "Alex")
    finances = life.member("Finances").deref()
    finances.member("BankBalance").set(123456)
    doc.write(output_path)
```
If more than one object shares `class_name="Life"`, use a `where` filter
so dnbflib knows which instance you mean.
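The strictness of `one()` is easy to mimic in plain Python, which shows why the filter matters (hypothetical stand-in objects, not dnbflib internals):

```python
def one(items, where=lambda x: True):
    """Return exactly one match, like doc.one(); raise on zero or many."""
    matches = [x for x in items if where(x)]
    if len(matches) != 1:
        raise LookupError(f"expected exactly 1 match, found {len(matches)}")
    return matches[0]

lives = [{"Name": "Alex"}, {"Name": "Sam"}]  # two Life objects
alex = one(lives, where=lambda obj: obj["Name"] == "Alex")
print(alex["Name"])  # → Alex
```

Without the filter, `one(lives)` would find two matches and raise instead of guessing.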
In an exported record file, `raw.bin` is the original record bytes and
`record.yaml` describes how to rebuild that record. If a decoded value is editable,
set `rebuild` to `decoded` and change the decoded field. Edit only `value`,
`values`, string text, or other fields explicitly marked `editable: true`.
Do not change structural fields such as `record_type`, `sequence`,
`offset`, `size`, `value_offset`, `value_size`,
`raw_path`, `decoded_path`, object IDs, reference IDs, library IDs, or
metadata IDs unless you are intentionally changing the BinaryFormatter graph and know how to
update every related reference.
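If you script edits to exported records, it is safer to enforce that rule in code. This sketch changes a member's `value` only when the member is marked `editable: true`; the dict shape follows the examples in this document, and the helper is illustrative, not part of dnbflib:

```python
def set_member_value(decoded, member_name, new_value):
    """Change one member's 'value', refusing members not marked editable."""
    for member in decoded["fields"]["members"]:
        if member["name"] == member_name:
            if not member.get("editable"):
                raise ValueError(f"{member_name} is not marked editable")
            member["value"] = new_value
            return
    raise KeyError(member_name)

# Decoded shape taken from the Game.Finances example in this document.
decoded = {
    "type": "ClassWithMembersAndTypes",
    "fields": {
        "class_name": "Game.Finances",
        "members": [
            {"name": "<BankBalance>k__BackingField", "binary_type": "Primitive",
             "editable": True, "primitive_type": "Int64", "value": 5000,
             "value_offset": 184, "value_size": 8},
        ],
    },
}
set_member_value(decoded, "<BankBalance>k__BackingField", 123456)
print(decoded["fields"]["members"][0]["value"])  # → 123456
```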
A reference member includes a `ref_path`.
Open the file at `ref_path` to inspect or edit the referenced object. For example,
a `Life` field named `Finances` may point to a separate
`Game.Finances` record file rather than containing the balance fields directly.
In a Life record, the member might look like this:
```yaml
{
  "name": "<Finances>k__BackingField",
  "binary_type": "Class",
  "editable": true,
  "record_type": "MemberReference",
  "ref_id": 42,
  "ref_path": "records/objects/Game.Finances/000128_ClassWithMembersAndTypes/record.yaml"
}
```
That means the field does not contain the finances data here. It points to object
42. Open the referenced record.yaml and edit the value fields there:
```yaml
{
  "record_type": "ClassWithMembersAndTypes",
  "decoded": {
    "type": "ClassWithMembersAndTypes",
    "fields": {
      "class_name": "Game.Finances",
      "members": [
        {
          "name": "<BankBalance>k__BackingField",
          "binary_type": "Primitive",
          "editable": true,
          "primitive_type": "Int64",
          "value": 123456,
          "value_offset": 184,
          "value_size": 8
        }
      ]
    }
  }
}
```
dnbflib cannot yet safely add new objects. It currently supports editing values inside records that already exist; it does not provide a high-level API for adding a brand-new object to the BinaryFormatter graph.
For now, use dnbflib to change existing objects and existing fields. If you need insertion support, it should be implemented as a dedicated graph-editing feature that allocates IDs, writes the required records, and updates every referencing record together.
This changes one standalone primitive integer record from its old value to 250.
```yaml
{
  "record_type": "MemberPrimitiveTyped",
  "rebuild": "decoded",
  "decoded": {
    "editable": true,
    "type": "MemberPrimitiveTyped",
    "fields": {
      "primitive_type": "Int32",
      "value": 250
    }
  }
}
```
For primitive arrays, edit the numbers in `decoded.fields.values`.
```yaml
{
  "record_type": "ArraySinglePrimitive",
  "rebuild": "decoded",
  "decoded": {
    "editable": true,
    "type": "ArraySinglePrimitive",
    "fields": {
      "object_id": 55,
      "primitive_type": "Int16",
      "values": [1, 9, 3]
    }
  }
}
```
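When scripting array edits on the decoded dict, keeping the element count unchanged is the safest assumption, since the raw record was sized for the original array. A small illustrative helper (not part of dnbflib):

```python
# Decoded shape taken from the ArraySinglePrimitive example above.
decoded = {
    "editable": True,
    "type": "ArraySinglePrimitive",
    "fields": {"object_id": 55, "primitive_type": "Int16", "values": [1, 9, 3]},
}

def replace_values(decoded, new_values):
    """Swap the array contents; a changed length may not rebuild cleanly."""
    old = decoded["fields"]["values"]
    if len(new_values) != len(old):
        raise ValueError("length change may not rebuild cleanly")
    decoded["fields"]["values"] = list(new_values)

replace_values(decoded, [1, 9, 250])
print(decoded["fields"]["values"])  # → [1, 9, 250]
```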
For class records, primitive fields appear as member entries. The `value_offset` and
`value_size` fields tell dnbflib exactly where the old value lives inside the original
raw record, so it can patch that value while preserving the rest.
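Mechanically, that patch is a byte splice: encode the new value and overwrite `value_size` bytes at `value_offset` inside the raw record. A self-contained sketch for a 4-byte little-endian Int32 (toy bytes, not a real record):

```python
import struct

def patch_int32(raw: bytes, value_offset: int, new_value: int) -> bytes:
    """Return raw with a 4-byte little-endian Int32 spliced in at value_offset."""
    packed = struct.pack("<i", new_value)
    return raw[:value_offset] + packed + raw[value_offset + 4:]

record = bytes(range(16))             # pretend this is a raw class record
patched = patch_int32(record, 6, 31)  # write Age = 31 at offset 6
print(struct.unpack_from("<i", patched, 6)[0])  # → 31
```

Everything outside the four patched bytes is untouched, which is why an unedited member rebuilds byte-for-byte.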
```yaml
{
  "record_type": "ClassWithMembersAndTypes",
  "rebuild": "decoded",
  "decoded": {
    "type": "ClassWithMembersAndTypes",
    "fields": {
      "class_name": "Person",
      "members": [
        {
          "name": "Age",
          "binary_type": "Primitive",
          "editable": true,
          "primitive_type": "Int32",
          "value": 31,
          "value_offset": 42,
          "value_size": 4
        }
      ]
    }
  }
}
```
Use the `DNBFDocument` API for Python-only object traversal and editing.
```python
from dnbflib import DNBFDocument

with DNBFDocument.open("save.dat") as doc:
    life = doc.one(class_name="Life", where=lambda obj: obj.member("Name").value == "Alex")
    finances = life.member("Finances").deref()
    finances.member("BankBalance").set(123456)
    doc.write("edited.dat")
```
| Member | Description |
|---|---|
| `DNBFDocument.open(path)` | Open a BinaryFormatter file and build a lightweight record offset index. |
| `objects(class_name=None)` | Return decoded objects, optionally filtered by class name. |
| `one(class_name=..., where=...)` | Return exactly one matching object. Raises an error if there are zero or many. |
| `node.member(name)` | Find a field/member on an object. Backing-field names are simplified when possible. |
| `member.deref()` | Follow a reference field to the object it points at. |
| `member.set(value)` | Change a supported decoded value and mark its record for rebuild. |
| `write(path)` | Write the edited binary to disk. Prefer this over `to_bytes()` for large files. |
Use the `DNBFRecordStore` API when you want the low-level record table.
| Member | Description |
|---|---|
| `add_record(record, offset, size, raw)` | Store a writable record object plus its original bytes. |
| `add_raw_record(...)` | Store a record when you already know its type, offset, IDs, and bytes. |
| `iter_records()` | Loop over stored records in the same order as the source file. |
| `get_by_object_id(object_id)` | Find the first record that defines the requested object ID. |
| `to_bytes()` | Rebuild bytes by joining all stored raw records in order. |
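The reason `to_bytes()` can be lossless is worth seeing in miniature: concatenating the stored raw chunks in sequence order reproduces the file. A toy illustration of that invariant (not the actual implementation; the byte values are made up):

```python
# Each stored record remembers its position and its original bytes.
stored = [
    (2, b"\x0b"),        # sequence 2: final marker bytes (example)
    (0, b"\x00header"),  # sequence 0: stream header bytes (example)
    (1, b"\x06string"),  # sequence 1: a string record (example)
]

def rebuild(records):
    """Join raw chunks in sequence order to recover the original file."""
    return b"".join(raw for _, raw in sorted(records))

print(rebuild(stored))  # → b'\x00header\x06string\x0b'
```

Any record that was not edited contributes its original bytes unchanged, which is what makes no-edit round trips exact.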
Use the `DNBFWriter` API when you already have raw records or stored records and want bytes back.
| Member | Description |
|---|---|
| `DNBFWriter(records=None)` | Create a writer from writable record objects, stored records, or raw byte chunks. |
| `DNBFWriter.from_record_store(store)` | Create a writer from all records in a `DNBFRecordStore`. |
| `to_bytes()` | Return the rebuilt BinaryFormatter bytes. |
| `write_path(path)` | Write the rebuilt BinaryFormatter bytes to a path. |
| Name | Description |
|---|---|
| `export_dnbf_to_yaml`, `export_record_store_to_yaml`, `rebuild_yaml_export` | Export a binary or record store to editable YAML, then rebuild it. |
| `RecordTypeEnumeration`, `BinaryTypeEnumeration`, `PrimitiveTypeEnumeration` | Named constants for BinaryFormatter record, member, and primitive value types. |
| `BinaryObjectString` | Small helper for constructing a writable BinaryFormatter string record. |
| Traversal errors | Errors raised when object or member lookup is missing or ambiguous. |
If you see `ModuleNotFoundError: No module named 'datastream'`, install pydatastreams
or activate the environment where it is already installed.
The package name is pydatastreams, but the Python import name is `datastream`.
First test a no-edit YAML round trip with `rebuild_yaml_export(...)`. If an untouched
export does not rebuild exactly, stop and inspect the record mentioned by the error before editing.
The error should include a record type or offset. Use that to inspect the exported
record.yaml and raw.bin. The scanner or decoded rebuild code may need
support for that record shape.
References usually point to object IDs, not byte offsets. If you edit a value while keeping the same object ID, references should still point to it. Be careful when changing object IDs or adding new objects.
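If you do change object IDs, it helps to check for dangling references afterward. The helper below is plain set arithmetic over ID collections you would gather yourself from the record store (the ID values here are made up):

```python
def dangling_references(defined_ids, referenced_ids):
    """Return IDs that are referenced somewhere but never defined."""
    return sorted(set(referenced_ids) - set(defined_ids))

# Object 42 is defined, so references to it are fine; 7 is dangling.
print(dangling_references({1, 42}, [42, 42, 7]))  # → [7]
```

With a real file you could collect defined IDs from records that carry an `object_id` and referenced IDs from reference records, then confirm the result is empty before writing the edited binary.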