I'm attempting to do a bulk update with SQLAlchemy. What does work is selecting the objects to update and then setting the attributes inside a with session.begin_nested(): block; the actual save, however, is slow.
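For context, here is a minimal sketch of that working-but-slow path. Widget is a hypothetical model standing in for my real schema.table, and I'm using an in-memory SQLite database instead of the real one; in my actual code the loop body sits inside with session.begin_nested():, which I've left out here since pysqlite's savepoint support is quirky.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):  # hypothetical stand-in for schema.table
    __tablename__ = "widget"
    table_id = Column(Integer, primary_key=True)
    updated_col = Column(String)

engine = create_engine("sqlite://")  # in-memory stand-in for the real database
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([Widget(table_id=i, updated_col="old") for i in (1, 2, 3)])
session.commit()

# Load every object and mutate it; at flush time the ORM emits one
# UPDATE per dirty object, which is what makes the save slow.
for obj in session.query(Widget):
    obj.updated_col = "some_val"
session.commit()
```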
When I attempt to use a bulk operation instead, via session.bulk_save_objects or session.bulk_update_mappings, I get the following exception:
A value is required for bind parameter 'schema_table_table_id'
[SQL: u'UPDATE schema.table SET updated_col=%(updated_col)s
WHERE schema.table.table_id = %(schema_table_table_id)s']
[parameters: [{'updated_col': 'some_val'}]]
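Here is roughly how I'm triggering that error, again sketched with a hypothetical Widget model on SQLite rather than my real table. The mappings carry only the changed column, with no primary key, so the generated WHERE clause has no value to bind; I can't rerun 1.0.12 here, but on a current SQLAlchemy the same call also fails for want of the primary key.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):  # hypothetical stand-in for schema.table
    __tablename__ = "widget"
    table_id = Column(Integer, primary_key=True)
    updated_col = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Widget(table_id=1, updated_col="old"))
session.commit()

# No primary key in the mapping, so the generated
# "WHERE widget.table_id = :widget_table_id" has nothing to bind.
try:
    session.bulk_update_mappings(Widget, [{"updated_col": "some_val"}])
    session.commit()
except Exception as exc:
    session.rollback()
    print(type(exc).__name__)
```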
It looks like bulk_save_objects uses the same logic path as bulk_update_mappings.
I actually don't even understand how bulk_update_mappings is supposed to work: you provide the updated values and a reference class, but the primary keys associated with those values are missing from the list, and that essentially seems to be the problem here. I tried calling bulk_update_mappings with the generated primary-key parameter name as a dictionary key (schema_table_table_id in my example), and it was simply ignored. If I used the id attribute name instead, the generated SQL updated the primary key in the SET clause but still did not supply the needed parameter in the WHERE clause.
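For what it's worth, my understanding of how bulk_update_mappings is meant to be used is sketched below with the same hypothetical Widget model: each dictionary includes the primary key under its mapped attribute name, and SQLAlchemy routes that value into the WHERE clause rather than the SET clause. Note that here the attribute and column names match; my report above suggests the trouble may be specific to models where they differ.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):  # hypothetical stand-in for schema.table
    __tablename__ = "widget"
    table_id = Column(Integer, primary_key=True)
    updated_col = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Widget(table_id=1, updated_col="old"),
                 Widget(table_id=2, updated_col="old")])
session.commit()

# The primary key is keyed by the attribute name ("table_id"), not by
# the generated bind-parameter name ("schema_table_table_id").
session.bulk_update_mappings(
    Widget,
    [{"table_id": 1, "updated_col": "some_val"},
     {"table_id": 2, "updated_col": "some_val"}],
)
session.commit()
```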
This is using SQLAlchemy 1.0.12, which is the latest version on pip.
My suspicion is that this is a bug.