View Issue Details

ID: 0000501 | Project: OpenMPT | Category: Feature Request | View Status: public | Last Update: 2022-02-28 15:46
Reporter: coda | Assigned To: manx
Priority: normal | Severity: minor | Reproducibility: have not tried
Status: acknowledged | Resolution: open
Target Version: OpenMPT 1.?? (long term goals)
Summary: 0000501: Audio recording to sample
Description

Non-trivial feature but a long-term goal that would prove very useful.

Rationale: FT2 and Renoise both have various methods to create sample data from within the tracker. They're still definitely trackers and not DAWs, although Renoise gets a bit close with its pattern-syncing. OpenMPT only allows importing external samples or creating sample data with the pencil tool.

My approach would be to minimize the amount of extra UI needed:

  • Samples have an Arm/Disarm toggle.
  • Playing back the song while a sample is armed begins writing audio input to the sample when the sample is triggered within the pattern. That is, a sample armed for recording receives audio instead of playing audio.
  • Hitting stop could optionally disarm the samples so subsequent playbacks do not overwrite them.

This allows for recording live audio along with a pattern at any point, as well as splitting a recording across multiple samples that are all armed. This could also be a starting point for routing parts of the graph back to the input for a Freeze/"Sample VSTi" feature (another useful thing in Renoise, which is made more necessary by its utter lack of tracker-style pitch control on VSTis).
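The arm/record-on-trigger idea above can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`SampleSlot`, `ProcessSlot`), not OpenMPT's actual data structures; it only shows the core behavior that an armed, triggered slot consumes input frames instead of producing output, growing on demand much like the `InsertSilence` call in the attached patch:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Hypothetical sketch: an armed sample slot consumes input frames
// instead of producing output frames.
struct SampleSlot
{
    std::vector<int16_t> data;   // 16-bit mono, as in the proof-of-concept patch
    std::size_t writePos = 0;
    bool armed = false;          // the Arm/Disarm toggle from the proposal
    bool triggered = false;      // set when the pattern triggers the sample
};

// Called once per mixer chunk with the sound card's input frames.
// Returns the number of frames actually recorded into the slot.
std::size_t ProcessSlot(SampleSlot &slot, const int16_t *input, std::size_t frames)
{
    if(!slot.armed || !slot.triggered)
        return 0;  // disarmed slots would play back as usual (not shown here)
    // Grow the sample on demand, mirroring the InsertSilence call in the patch.
    if(slot.writePos + frames > slot.data.size())
        slot.data.resize(slot.writePos + frames);
    for(std::size_t i = 0; i < frames; ++i)
        slot.data[slot.writePos + i] = input[i];
    slot.writePos += frames;
    return frames;
}
```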

I provided a proof-of-concept patch which implements this sort of behavior - with, of course, no error checking or UI (naming a sample starting with '%' arms it, only ASIO is supported, the sample must have some initial data and be triggered at the playback rate and have a 16-bit mono format).

Tags: No tags attached.

Relationships

related to 0001042 (new): Render pattern channels to separate buffers
parent of 0000722 (assigned: manx): Support sound device with only input channels
has duplicate 0001448 (closed): Ability to record sample from audio input in Sample view
related to 0000842 (assigned: Saga Musix): Add plugin delay compensation
Not all the children of this issue are yet resolved or closed.

Activities

coda

2014-04-03 20:31

reporter  

samplerecord.patch (5,940 bytes)   
Index: sounddev/SoundDevice.h
===================================================================
--- sounddev/SoundDevice.h	(revision 3634)
+++ sounddev/SoundDevice.h	(working copy)
@@ -248,6 +248,7 @@
 	uint32 UpdateIntervalMS;
 	uint32 Samplerate;
 	uint8 Channels;
+	uint8 InChannels;
 	SampleFormat sampleFormat;
 	bool ExclusiveMode; // Use hardware buffers directly
 	bool BoostThreadPriority; // Boost thread priority for glitch-free audio rendering
@@ -261,6 +262,7 @@
 		, UpdateIntervalMS(5)
 		, Samplerate(48000)
 		, Channels(2)
+		, InChannels(2)
 		, sampleFormat(SampleFormatFloat32)
 		, ExclusiveMode(false)
 		, BoostThreadPriority(true)
Index: sounddev/SoundDeviceASIO.cpp
===================================================================
--- sounddev/SoundDeviceASIO.cpp	(revision 3634)
+++ sounddev/SoundDeviceASIO.cpp	(working copy)
@@ -322,11 +322,11 @@
 			ASSERT(false);
 		}
 	
-		m_BufferInfo.resize(m_Settings.Channels);
-		for(int channel = 0; channel < m_Settings.Channels; ++channel)
+		m_BufferInfo.resize(m_Settings.Channels + m_Settings.InChannels);
+		for(int channel = 0; channel < m_Settings.Channels + m_Settings.InChannels; ++channel)
 		{
 			MemsetZero(m_BufferInfo[channel]);
-			m_BufferInfo[channel].isInput = ASIOFalse;
+			m_BufferInfo[channel].isInput = channel < m_Settings.Channels ? ASIOFalse : ASIOTrue;
 			m_BufferInfo[channel].channelNum = m_Settings.ChannelMapping.ToDevice(channel);
 		}
 		m_Callbacks.bufferSwitch = CallbackBufferSwitch;
@@ -336,20 +336,20 @@
 		ALWAYS_ASSERT(g_CallbacksInstance == nullptr);
 		g_CallbacksInstance = this;
 		Log(mpt::String::Print("ASIO: createBuffers(numChannels=%1, bufferSize=%2)", m_Settings.Channels, m_nAsioBufferLen));
-		asioCall(createBuffers(&m_BufferInfo[0], m_Settings.Channels, m_nAsioBufferLen, &m_Callbacks));
+		asioCall(createBuffers(&m_BufferInfo[0], m_Settings.Channels + m_Settings.InChannels, m_nAsioBufferLen, &m_Callbacks));
 		m_BuffersCreated = true;
 
-		m_ChannelInfo.resize(m_Settings.Channels);
-		for(int channel = 0; channel < m_Settings.Channels; ++channel)
+		m_ChannelInfo.resize(m_Settings.Channels + m_Settings.InChannels);
+		for(int channel = 0; channel < m_Settings.Channels + m_Settings.InChannels; ++channel)
 		{
 			MemsetZero(m_ChannelInfo[channel]);
-			m_ChannelInfo[channel].isInput = ASIOFalse;
+			m_ChannelInfo[channel].isInput = channel < m_Settings.Channels ? ASIOFalse : ASIOTrue;
 			m_ChannelInfo[channel].channel = m_Settings.ChannelMapping.ToDevice(channel);
 			asioCall(getChannelInfo(&m_ChannelInfo[channel]));
 			ASSERT(m_ChannelInfo[channel].isActive);
 			mpt::String::SetNullTerminator(m_ChannelInfo[channel].name);
 			Log(mpt::String::Print("ASIO: getChannelInfo(isInput=%1 channel=%2) => isActive=%3 channelGroup=%4 type=%5 name='%6'"
-				, ASIOFalse
+				, m_ChannelInfo[channel].isInput
 				, m_Settings.ChannelMapping.ToDevice(channel)
 				, m_ChannelInfo[channel].isActive
 				, m_ChannelInfo[channel].channelGroup
@@ -361,7 +361,7 @@
 		bool allChannelsAreFloat = true;
 		bool allChannelsAreInt16 = true;
 		bool allChannelsAreInt24 = true;
-		for(int channel = 0; channel < m_Settings.Channels; ++channel)
+		for(int channel = 0; channel < m_Settings.Channels + m_Settings.InChannels; ++channel)
 		{
 			if(!IsSampleTypeFloat(m_ChannelInfo[channel].type))
 			{
Index: sounddev/SoundDeviceASIO.h
===================================================================
--- sounddev/SoundDeviceASIO.h	(revision 3634)
+++ sounddev/SoundDeviceASIO.h	(working copy)
@@ -30,12 +30,15 @@
 {
 	friend class TemporaryASIODriverOpener;
 
+public:
+	std::vector<ASIOBufferInfo> m_BufferInfo;
+	long m_BufferIndex;
 protected:
 
 	IASIO *m_pAsioDrv;
 
 	long m_nAsioBufferLen;
-	std::vector<ASIOBufferInfo> m_BufferInfo;
+	
 	ASIOCallbacks m_Callbacks;
 	static CASIODevice *g_CallbacksInstance; // only 1 opened instance allowed for ASIO
 	bool m_BuffersCreated;
@@ -48,7 +51,7 @@
 
 	bool m_DeviceRunning;
 	uint64 m_TotalFramesWritten;
-	long m_BufferIndex;
+	
 	LONG m_RenderSilence;
 	LONG m_RenderingSilence;
 
Index: soundlib/Sndmix.cpp
===================================================================
--- soundlib/Sndmix.cpp	(revision 3634)
+++ soundlib/Sndmix.cpp	(working copy)
@@ -141,6 +141,20 @@
 }
 
 
+#include "..\sounddev\SoundDevice.h"
+#include "..\sounddev\SoundDevices.h"
+
+#include "..\sounddev\SoundDeviceASIO.h"
+
+#include "../common/misc_util.h"
+#include "../common/StringFixer.h"
+#include "../soundlib/SampleFormatConverters.h"
+
+#include "..\mptrack\MainFrm.h"
+
+#include "modsmp_ctrl.h"
+
+
 CSoundFile::samplecount_t CSoundFile::Read(samplecount_t count, IAudioReadTarget &target)
 //---------------------------------------------------------------------------------------
 {
@@ -225,6 +239,20 @@
 			m_Reverb.Process(MixSoundBuffer, countChunk);
 		#endif // NO_REVERB
 
+
+		for(int chn=0;chn<MAX_CHANNELS;chn++)
+		if(Chn[chn].pCurrentSample && Chn[chn].nInc == 0x10000 && Chn[chn].pModSample && Chn[chn].pModSample->filename[0] == '%') {
+			CASIODevice *sda = dynamic_cast<CASIODevice*>(CMainFrame::GetMainFrame()->gpSoundDevice);
+			if(sda) {
+				if(Chn[chn].nPos >= Chn[chn].pModSample->nLength - countChunk)
+					ctrlSmp::InsertSilence( *Chn[chn].pModSample, Chn[chn].pModSample->nLength, Chn[chn].pModSample->nLength, *this);
+
+
+				CopyInterleavedToChannel<SC::Convert<int16, int32> >(reinterpret_cast<int16*>(const_cast<void*>(Chn[chn].pCurrentSample)) + Chn[chn].nPos, reinterpret_cast<int32*>(sda->m_BufferInfo[2].buffers[1 - sda->m_BufferIndex]) ,  1, countChunk, 0);
+				//CopyInterleavedToChannel<SC::Convert<int16, int32> >(reinterpret_cast<int16*>(Samples[1].pSample) + (m_lTotalSampleCount % (Samples[1].nLength - 1024)), reinterpret_cast<int32*>(sda->m_BufferInfo[2].buffers[1 - sda->m_BufferIndex]) ,  1, countChunk, 0);
+			}
+		}
+
 		if(mixPlugins)
 		{
 			ProcessPlugins(countChunk);
coda

2018-01-29 02:00

reporter   ~0003397

Now that we have builtin plugins and some potential device input support it occurred to me the quickest path to exposing input would probably be something like the Audio version of Midi I/O (minus the O?). Might not need any params at all - just copy input into master/the next plugin in the graph.
There are already VST plugins like Edison that can do host-synced record/playback of their input pins so there would be less immediate need for recording UI.
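The "Audio version of MIDI I/O" idea could be sketched as a built-in plugin whose process callback simply mixes the device's input channels into its output buffer, passing live input down the plugin graph. The function name and signature here are hypothetical; OpenMPT's real built-in plugin API differs:

```cpp
#include <cstddef>

// Hypothetical sketch of an "Audio In" built-in plugin: mix the sound
// device's captured input frames into the plugin's output buffer, so
// live input flows into the master / the next plugin in the graph.
void AudioInPluginProcess(const float *deviceIn, float *pluginOut,
                          std::size_t frames, std::size_t channels)
{
    for(std::size_t i = 0; i < frames * channels; ++i)
        pluginOut[i] += deviceIn[i];  // add input on top of whatever is already there
}
```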

StarWolf3000

2018-01-29 06:20

reporter   ~0003398

So you're looking for the feature commonly known as "bouncing"?

coda

2018-01-29 07:13

reporter   ~0003399

No, this feature request is about audio input. OpenMPT can already bounce tracks.

manx

2018-01-30 16:25

administrator   ~0003402

> Now that we have builtin plugins and some potential device input support it occurred to me the quickest path to exposing input would probably be something like the Audio version of Midi I/O (minus the O?). Might not need any params at all - just copy input into master/the next plugin in the graph.

I do not think that would be easy to implement.
When not connecting audio recording directly to the already used audio output device (i.e. not using the exact same device in full-duplex mode), we would inevitably have to deal with desync between the two audio clocks (which is frankly an enormous nightmare to do properly, implementation-wise). This is no problem in the MIDI case because MIDI is not a monotonic contiguous stream and is only synchronized relatively and loosely to both the wall-clock and the audio clock.
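To illustrate the clock-desync point: a common (much simplified) compensation scheme measures how many frames each device produced over the same wall-clock window and derives a drift ratio, which then drives a variable-rate resampler on the input side. The sketch below only shows the ratio estimate, with hypothetical names; a real implementation would smooth the estimate and do the actual resampling:

```cpp
// Two free-running sample clocks drift apart over time. Comparing the
// frame counts both sides accumulated over the same wall-clock window
// yields the instantaneous drift ratio (1.0 means perfectly in sync).
double EstimateDriftRatio(double inputFramesElapsed, double outputFramesElapsed)
{
    if(outputFramesElapsed <= 0.0)
        return 1.0;  // nothing to compare against yet
    return inputFramesElapsed / outputFramesElapsed;
}
```

A ratio of, say, 1.001 means the input device runs 0.1% fast relative to the output device, and the input stream must be resampled down by that factor to stay aligned.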

manx

2018-01-30 16:36

administrator   ~0003403

What I have in mind is along the lines of the following (just sketching ideas right now; there is no implementation yet, and honestly there are more internal refactorings that I would like to see before even starting an implementation of recording):

  • The recording function will be global, and if globally enabled, it will ALWAYS record and keep a (configurable-length) back-buffer, so that you could even hit record after the fact in case you came up with a brilliant idea while jamming around. This in my opinion is a rather crucial and distinct feature, and we should consider it with high priority when designing a recording feature. In particular, I think we should avoid designing a recording workflow that would prohibit or complicate this use case.
  • Recording would have different, selectable sources. In particular: sound card input, OpenMPT master output, or individual channels or plugins (the latter with less priority, in order to get the feature implemented in a somewhat timely manner). Multiple sources could be selected at the same time, resulting in multiple recorded files, which would be very useful when recording some live performance played along to the playing module.
  • There is no need to select the destination of the recorded data upfront. This can be deferred until after the recording has been stopped (at which point suitable options are: save to file, copy to sample slot, or simply discard).
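The always-on back-buffer in the first bullet is essentially a ring buffer over the most recent N frames of output. A minimal sketch, with illustrative names rather than OpenMPT API:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the "audio history" back-buffer: a fixed-capacity ring
// buffer that always holds the most recent frames, so the user can
// hit "record" after the fact.
class AudioHistory
{
public:
    explicit AudioHistory(std::size_t capacityFrames)
        : m_buf(capacityFrames), m_write(0), m_filled(0) {}

    // Feed every rendered chunk through here; old frames are overwritten.
    void Push(const float *frames, std::size_t count)
    {
        for(std::size_t i = 0; i < count; ++i)
        {
            m_buf[m_write] = frames[i];
            m_write = (m_write + 1) % m_buf.size();
        }
        m_filled = std::min(m_buf.size(), m_filled + count);
    }

    // Oldest-to-newest copy of everything currently remembered.
    std::vector<float> Snapshot() const
    {
        std::vector<float> out(m_filled);
        std::size_t start = (m_write + m_buf.size() - m_filled) % m_buf.size();
        for(std::size_t i = 0; i < m_filled; ++i)
            out[i] = m_buf[(start + i) % m_buf.size()];
        return out;
    }

private:
    std::vector<float> m_buf;
    std::size_t m_write, m_filled;
};
```

When the user decides to keep the audio, `Snapshot()` provides the data that can then be saved to file or copied to a sample slot, matching the deferred-destination idea in the last bullet.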
manx

2022-02-28 15:42

administrator   ~0005114

As this came up in IRC again, I did some digging and found some old notes of mine that I did write up back in 2015 in order to post it to a forum thread (<https://forum.openmpt.org/index.php?topic=5486>), but never actually did. See forum post for context.

Even though use cases 1 and 2 as outlined above can be solved using the recording plugin that Saga Musix linked, I actually do think this would be a genuinely useful feature, but for a somewhat different reason or use case.

If it would not just record when actively told to do so but just always record and remember the last, say 1 minute (configurable of course), this would have saved me a lot of wasted hours in each and every audio application I have used in the past.

When experimenting, it often happens (at least to me) that you accidentally do something that sounds great for whatever reason or aspect. Trying to reproduce or redo what you just did can be quite cumbersome or even impossible. You can really quickly end up asking yourself "How did I just do that?" and even "What did that even actually sound like a few seconds ago?". This can really annoy you and interfere with your creative process in a quite fatal way. Just having the software do the remembering for you gives you the possibility to actually listen again to what you just did, and either take it directly in sampled form or at least try to redo it with a clear reference for how you want the result to sound.

Solving this problem with a plugin remembering the output of the master channel is of course technically possible. But this requires the user to set things up upfront, which is not possible in general because you just do not know in advance that you will need this feature in this session. In a professional studio kind of setup, you can probably do this either using template modules with such a plugin, or some external recording software that monitors everything, or a hardware loop machine, or whatever. The casual user just won't do any of this, despite probably benefiting the most from such an audio history feature.

I imagine such a feature with some kind of button to permanently remember what just came out of the speakers in the last minute, and optionally continue recording from now on.

Now, to the technical side of things.

Technically, implementation wise, both features have in common that they both require some functionality to get live audio data and store it somewhere for later use.

There are 3 basic options in OpenMPT for where to actually store the live audio:

  • A) In a sample slot.
  • B) Directly in a final wave file on disk.
  • C) Some temporary file on disk, and permanently saving or discarding (depending on context and/or user interaction) it later on.

As OpenMPT currently permanently holds all sample data in memory (and this will not change), option A) limits the length of the recorded audio to about 1 hour on 32-bit systems (heavily estimated; there are all kinds of technical and less technical reasons for that). Recording live sessions with such a limit on length does not look particularly useful to me; I'm actually not considering this option anymore.
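The rough arithmetic behind that limit, assuming 16-bit stereo at 48 kHz and roughly 2 GiB of usable 32-bit address space (both figures are assumptions for illustration): the raw capacity works out to about 3 hours, which fragmentation, internal sample formats, and everything else the process needs shrink toward the "heavily estimated" 1 hour quoted above.

```cpp
// Back-of-the-envelope check of the 32-bit memory limit for option A),
// assuming 16-bit stereo at 48 kHz and ~2 GiB of usable address space.
constexpr long long bytesPerSecond = 48000LL * 2 /*channels*/ * 2 /*bytes/sample*/;  // 192,000 B/s
constexpr long long usableBytes = 2LL * 1024 * 1024 * 1024;                          // ~2 GiB
constexpr long long seconds = usableBytes / bytesPerSecond;                          // raw capacity in seconds
constexpr long long hours = seconds / 3600;                                          // ~3 h before overhead
```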

Option B) has the implicit requirement to ask the user upfront where he actually wants to store the data at the moment he hits some kind of record button.

Option C) lends itself very well to the audio history feature I outlined above. When recording is stopped, the audio data can be either discarded, saved to a file or moved directly into a sample slot, depending on user input or preconfigured behaviour. This option also has some disadvantages though: OpenMPT would have to copy the data to the final location where the user wants it to be. This will generally not be that slow but will require twice the disk space of the audio data temporarily while saving it to a file.

Now, some even more technical aspects related to the pattern-synchronized record-to-sample feature from the bug tracker:

Even though I do see the use of pattern-triggered sample destinations for live input audio that gets played alongside the actual pattern being played, implementing this in a proper way in the OpenMPT code base would turn out to be rather difficult. What do I mean by "proper way"? In order to actually stitch the recorded sample data in at the right place, OpenMPT would need to account for the latency of the recorded audio relative to the actual position where it currently generates audio and the audio that actually gets played back. Just stitching it in at the exact position the player currently plays (which would be the trivial thing to do and the simplest to implement) would be wrong. The recorded audio will be offset by AT LEAST the sound card's output latency that way (not taking into account sound card input latency here, which gets added on top). Played back directly afterwards, this will NOT match what you actually heard when doing the recording.
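The offset being described reduces to simple frame arithmetic: audio captured "now" corresponds to a mixer position behind the current render position by the output latency (what the user is currently hearing) plus the input latency (how long the capture took to arrive). A minimal sketch with hypothetical names:

```cpp
#include <cstdint>

// Where recorded input should be stitched into the sample, relative to
// the mixer's current render position. Both latency figures would come
// from the sound device; they are placeholders here.
std::int64_t CompensatedWritePos(std::int64_t renderPosFrames,
                                 std::int64_t outputLatencyFrames,
                                 std::int64_t inputLatencyFrames)
{
    return renderPosFrames - (outputLatencyFrames + inputLatencyFrames);
}
```

The hard part, as described below, is not this subtraction but plumbing the latency figures to whichever layer (mixer, pattern logic, or application) actually does the stitching.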

Latency can be compensated for. OpenMPT does latency compensation for the pattern display and sample position display in the sample editor, as well as latency compensation for VST plugins that want to relate non-audio output data to the audio output (which is mainly useful for any MIDI output plugins). As far as I know, OpenMPT currently does not even compensate for the delay any VST plugin can introduce and report, which would be a more important feature to implement than input latency compensation.

Doing latency compensation for input audio could be done at different layers in OpenMPT. Either right down in the mixer code that actually touches sample data during the playback, or in the module playback pattern logic code that does all the note dispatching to the mixer or VST instruments or even at a higher level in the application playback/display logic (which is currently done for the pattern and sample editor display).

I frankly do not see input latency compensation coming right down to the mixer in the codebase in its current form (and even for the next years to come). I also have no idea how to implement latency compensation at the pattern interpretation level. Doing it at the application level would require some way to transfer the sample destination location from the mixer up to the application layer. That's certainly possible (and done currently for the sample editor display). It would not be trivial, as we would have to introduce another position notification buffer for the input latency (that code currently only handles sound card output latency in relation to mixer playback position).

I'm not sure if any of this is actually worth doing at all for the first implementation of some kind of recording functionality. In most cases the user probably needs to edit the resulting sample anyway, cutting out audio at the beginning or end and synchronizing it to the pattern the way it was intended (even humans playing live music do not have perfect timing either :). Thus, I'm not sure if this pattern-triggered sample slot idea would actually be that useful in practice.

Personally I would favour the rather simple solution of a single global record button/buttons (with history remember functionality) that the user can just click on any time and decide where to put the audio after recording is done (i.e. to a sample slot or a file on disk). This would also work for recording the output of any VST instruments (which is the second point in the bug tracker issue) trivially. Just solo the VST instrument channel and hit the record button.

Issue History

Date Modified Username Field Change
2014-04-03 20:31 coda New Issue
2014-04-03 20:31 coda File Added: samplerecord.patch
2014-08-14 10:49 manx Assigned To => manx
2014-08-14 10:49 manx Status new => acknowledged
2014-08-14 10:50 manx Target Version => OpenMPT 1.24.01.00 / libopenmpt 0.2-beta8 (upgrade first)
2014-12-12 00:02 Saga Musix Target Version OpenMPT 1.24.01.00 / libopenmpt 0.2-beta8 (upgrade first) => OpenMPT 1.?? (long term goals)
2015-11-07 17:18 manx Relationship added parent of 0000722
2018-01-29 02:00 coda Note Added: 0003397
2018-01-29 06:20 StarWolf3000 Note Added: 0003398
2018-01-29 07:13 coda Note Added: 0003399
2018-01-30 16:25 manx Note Added: 0003402
2018-01-30 16:36 manx Note Added: 0003403
2018-05-18 07:11 manx Relationship added related to 0001042
2021-04-16 08:40 Saga Musix Relationship added has duplicate 0001448
2022-02-28 15:42 manx Note Added: 0005114
2022-02-28 15:46 manx Relationship added related to 0000842