View Issue Details

ID: 0001112
Project: OpenMPT
Category: [All Projects] File Format Support
View Status: public
Last Update: 2018-04-11 17:40
Reporter: manx
Assigned To: manx
Priority: normal
Severity: major
Reproducibility: have not tried
Status: assigned
Resolution: open
Product Version:
Target Version: libopenmpt 0.4 (goals)
Fixed in Version:
Summary: 0001112: Properly report and handle out-of-memory
Description

Various places in the OpenMPT/libopenmpt code currently handle out-of-memory locally by discarding the small item currently being loaded from disk. As a result, which module data actually gets successfully loaded depends on the current process's memory footprint. The loaded data is corrupted because parts of it are silently discarded, yet loading still reports success.

Out-of-memory must be handled globally at API boundaries (like the file loading code path in OpenMPT or openmpt::module construction in libopenmpt). We already have code in place at these most important locations to handle out-of-memory.
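The boundary-level handling described above can be sketched as follows. This is a minimal illustration, not the actual OpenMPT code; `LoadResult` and `LoadModuleFile` are hypothetical names:

```cpp
#include <cstdint>
#include <new>
#include <vector>

// Hypothetical result code for the loader API boundary.
enum class LoadResult { Ok, OutOfMemory };

// Inner parsing code allocates freely with no local catch blocks;
// std::bad_alloc propagates up and is converted to a failure only at
// the API boundary, so a load either succeeds completely or fails
// completely -- never "succeeds" with silently discarded data.
LoadResult LoadModuleFile(std::size_t sampleBytes, std::vector<std::uint8_t> &sampleData)
{
    try
    {
        sampleData.resize(sampleBytes);
        return LoadResult::Ok;
    }
    catch (const std::bad_alloc &)
    {
        // The whole load fails; no partially loaded, corrupted module.
        sampleData.clear();
        return LoadResult::OutOfMemory;
    }
}
```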

Steps To Reproduce

Repeatedly load a module with a huge amount of sample data. Files opened later will be missing sample data.

Additional Information

The most important change would be to make the sample data RAII-managed. Everything else should already be exception-safe for the most part.
After that, remove all fine-grained out-of-memory exception handling from the various file-loading functions.
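The RAII change could look roughly like this. The class name and layout are illustrative, not the actual OpenMPT sample types:

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>

// Hypothetical RAII owner for raw sample memory. Ownership lives in a
// unique_ptr, so if an allocation later in the load path throws
// std::bad_alloc, stack unwinding releases the sample memory
// automatically -- no manual cleanup paths, no leaked buffers.
class SampleBuffer
{
public:
    SampleBuffer() = default;
    explicit SampleBuffer(std::size_t bytes)
        : m_data(std::make_unique<std::uint8_t[]>(bytes)), m_size(bytes) {}
    std::size_t size() const { return m_size; }
    std::uint8_t *data() { return m_data.get(); }
private:
    std::unique_ptr<std::uint8_t[]> m_data;
    std::size_t m_size = 0;
};
```

With ownership expressed this way, the fine-grained local catch blocks become unnecessary, because unwinding cleans up on its own.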

It might also make sense to look into detecting memory leaks during fuzzing.

Tags: No tags attached.
Has the bug occurred in previous versions?
Tested code revision (in case you know it)

Activities

Saga Musix

2018-04-11 10:59

administrator   ~0003500

I guess it is worth discussing whether this should only generate a warning or fail hard; at least in the context of OpenMPT, I think it can still be desirable to load huge modules in a low-memory situation while receiving a warning from the application. For example, think of a slightly damaged file where the "sample length" field of a single sample was destroyed due to a transfer error and now claims that this one sample is huge. For the sake of rescuing parts of such a file, we'd still want to be able to load it.

manx

2018-04-11 11:38

administrator   ~0003501

For the specific case of sample data, yes, it might make sense to handle that explicitly and report a warning.

However, this probably requires further refactoring compared to just failing the load completely. For libopenmpt, such behaviour would need to be opt-in (i.e. the API user could explicitly set an option to skip out-of-memory sample data after first trying to load without that option set).

manx

2018-04-11 17:10

administrator   ~0003502

The only sane way to implement this is to support setting a maximum allocation size limit upfront, before loading, and to still fail hard on any allocation failure.

Otherwise, minor allocations that happen after a huge one will still cause the load to fail, without any chance of loading this particular file at all. The only way to handle this is to provide a limit for (in particular) sample data and plugin data before starting the loading process, so that huge problematic allocations do not happen in the first place.
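An upfront budget along these lines could be sketched as follows; the struct and function names are hypothetical, not an existing libopenmpt API:

```cpp
#include <cstddef>

// Sketch of allocation limits set before loading starts. Any single
// request above the limit is rejected deterministically before the
// allocation is ever attempted, so a later small allocation never
// fails merely because an earlier huge one exhausted memory.
struct LoadLimits
{
    std::size_t maxSampleBytes;
    std::size_t maxPluginDataBytes;
};

bool SampleAllocationAllowed(const LoadLimits &limits, std::size_t requested)
{
    return requested <= limits.maxSampleBytes;
}
```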

Converting allocation failure to a warning is the wrong thing to do, even for this use case.

manx

2018-04-11 17:38

administrator   ~0003503

Also, there are tons of other places in the various module formats that might cause large allocations (ultimately, any 32-bit length field could have the same issue, iff the allocation is not additionally limited to the remaining file data). [1]

Which is a very important point: iff the file actually is huge, we should try to allocate the huge requested buffer; if it is not, we should only allocate as much as could sanely be needed; and if this is not known upfront, we should just grow a buffer on demand. I fail to see how converting to a warning would be required to handle broken files.
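Limiting an allocation to the remaining file data, as described above, can be sketched like this (the helper name is hypothetical):

```cpp
#include <algorithm>
#include <cstddef>

// Clamp a length field read from a module file to the bytes actually
// remaining in the file, so a corrupted 32-bit length field cannot
// trigger a rogue multi-gigabyte allocation. If the real size is not
// known upfront, allocate at most this much and grow a buffer on
// demand while reading.
std::size_t ClampedAllocationSize(std::size_t claimedLength, std::size_t bytesRemainingInFile)
{
    return std::min(claimedLength, bytesRemainingInFile);
}
```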

This leaves the allocation size limit as only an additional feature which could maybe be useful for very memory-constrained situations in the tracker. I do not see any use for it in libopenmpt any more. In particular, I do not think we should block/delay proper out-of-memory handling because of this potential feature.

[1] Rogue unlimited allocations based on a 32-bit length field in a module file can easily be found with fuzzing (in combination with allocator instrumentation if needed).

manx

2018-04-11 17:40

administrator   ~0003504

It might make sense to implement proper out-of-memory handling in a development branch instead of directly in trunk.

Issue History

Date Modified Username Field Change
2018-04-11 06:36 manx New Issue
2018-04-11 06:36 manx Status new => assigned
2018-04-11 06:36 manx Assigned To => manx
2018-04-11 10:59 Saga Musix Note Added: 0003500
2018-04-11 11:38 manx Note Added: 0003501
2018-04-11 17:10 manx Note Added: 0003502
2018-04-11 17:38 manx Note Added: 0003503
2018-04-11 17:40 manx Note Added: 0003504