Wednesday, 25 September 2024

Diploma thesis

In June 2020, my fellow student Maria and I made the recordings for our diploma thesis at the tone-art studio.
Total time of work:
30 minutes for set-up,
4 hours for recording,
4 hours for editing, mixing and mastering.

Supervision and evaluation: Herbert  

We also had to come up with a rough plan of how to do the recording (planning, talking to bands/artists, ...). The general idea was to get a taste of the experience of "studio time is expensive and therefore short", to get used to working under pressure and delivering results with a strictly limited amount of time.

Artist: Roman List (vocals, gittern, bombard, flute, tenor drum)
Song: Neidhart

This is a medieval song. The text tells the story of a party, where it was, who arrived there, what happened at the party, ending with the main narrator, Neidhart, hiding in a barrel of wine so as not to get hit by an angry drunk. Sounds like it was a good party.

You can listen to the recording here. Enjoy!

Roman List performs medieval music in Vienna once in a while, sometimes telling stories about other famous medieval people, such as Count Dracula. It is probably easiest to take a look at his Facebook page for upcoming events.

My next band-related project will probably be mixing recordings for CrashMan's Kapelle, a trio (guitar/vocs, bass, drums) from Vienna playing pop/blues/psychedelic improvisations - in exchange for drum lessons. Looking forward to it!

Monday, 12 August 2024

Sending two XLR out signals to two old loudspeakers

Draft by Gwendolin Korinek, Festspiele Reichenau. CC-BY-SA

This design is a draft which I did not put in practice, but may be useful for theatre purposes.

If you have a pre-installed network cable connection to attach a stagebox (which only provides XLR outs) e. g. on stage left or right, but your FOH is miles away because the theatre hall is quite big, you can save a lot of cable metres with this approach.

The reason you may want to use oldschool loudspeakers is that they could be a functioning prop, part of a play that takes place in, say, the 1970s.

This draft turns two balanced signals into two unbalanced signals.

You need:

- a stagebox 

- two XLR cables

- a special adapter (which you need to solder yourself)

- a 3.5 mm jack cable

- an old household stereo player (which acts as the "amp" for the two unbalanced signals) with a 3.5 mm jack aux input and two oldschool loudspeaker outs

- four loudspeaker cables (2x signals, 2x minus/sleeve)

- two oldschool loudspeakers

- one socket for the stereo player, ideally completely separate from lighting apparatus sockets to avoid hum or issues related to dimming

- stage camouflage to hide the stereo player

Here's how it works, in words:

You are sending two signals (double mono, or stereo) to two stagebox XLR outs. Every parameter, including SPL and EQ, can be adjusted for each channel individually, which is very useful if e. g. you want to pre-program a panoramic fade in QLab, or one speaker is in a very different acoustic position from the other.

Next comes the cool adapter you need to solder together as shown in the draft. It uses the classic headphones TRS configuration. One signal to T, the other signal to R, everything else to S.
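Since the pinout is easy to get wrong at the soldering iron, here is the adapter wiring written out as a small table (a Python dict, for lack of a schematic). It assumes the common convention of unbalancing by tying XLR pin 3 (cold) together with pin 1 (shield) to the sleeve; check your gear's manual, as some outputs do not like pin 3 being grounded.

```python
# Adapter wiring for the draft above: two balanced XLR ins to one
# unbalanced 3.5 mm TRS out. Assumes pin 2 hot; tying pin 3 to the
# sleeve is a common way to unbalance, but not universally safe.
ADAPTER_WIRING = {
    ("XLR A", "pin 2 (hot)"):    "TRS tip (left)",
    ("XLR B", "pin 2 (hot)"):    "TRS ring (right)",
    ("XLR A", "pin 1 (shield)"): "TRS sleeve",
    ("XLR A", "pin 3 (cold)"):   "TRS sleeve",
    ("XLR B", "pin 1 (shield)"): "TRS sleeve",
    ("XLR B", "pin 3 (cold)"):   "TRS sleeve",
}
```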

The two signals now travel to the 3.5 mm jack output which is fed to an old stereo player with a 3.5 mm jack input. Adjust stereo player settings as needed.

The stereo player is the amp for our signals. Next, the signals are fed to two oldschool loudspeaker cables, which feed the two loudspeakers.

For easier use, the stereo player can be hidden in a prop that can be moved around, and only supplied with electricity when the play that needs it is actually running. Proper cable management (e.g. figure-of-eight coiling) and cutting cables to the required length (no excess metres of speaker cable) make life easier.

Enjoy!

If you use this design, please credit me (Gwen Korinek) and Festspiele Reichenau.


Saturday, 16 December 2023

The 'fake acoustic' synth

This is an idea for a specific synth sound setup for e.g. orchestra ensembles in classical music halls. I had this idea while working at the Musikverein (as a stagehand, to avoid any confusion) at night once. I was wondering what it would be like to play a piece for synth and orchestra there someday. A breath of fresh air, so to speak.

This idea may have been around, and perhaps even been put into practice in some keyboard models, for a while, but maybe not in good enough quality, and I haven't seen it in use in top performing orchestras or chamber music ensembles yet. It could be worthwhile to give it a new shot with better gear.

Standard settings:
Synth to DI to mixing board to PA, end of story.

'Fake acoustic' synth:

Imagine that a synth player wants to perform a modern classical symphony with an orchestra ensemble in a concert hall with marvellous acoustics.

Rather than wiring the synth to a PA, I think it would make more sense to equip the synth with loudspeakers directly on the instrument in such a way that it mimics e. g. a piano, with sound levels being similar to a classical piano. The sound would come from two loudspeakers, L and R, that are installed at either end of what would be the area where the wing of a classical piano is opened and the strings of the piano can be seen (Fig. 1).

Fig. 1. The 'fake acoustic' synth (doodle). Keys, patch section and pitch bend wheel (or whatever you like: drumpad keys, etc. Here it is just plain chaos because I couldn't be bothered to draw it realistically; it is only a generic representation and could also look totally different). In the area where a classical piano would have its wing and strings: L and R panning. Signal flow: from keys/controls to a (very fast) FFT analyser (because you can pitch bend...) to ensure correct panning at the loudspeakers, then to the speakers. For noise samples, positioning at the predominant frequency may make sense; alternatively, the panning parameters for complex samples could already be saved in the database and then played back accordingly.

Each key of the synth would be panned to the correct position (continuously, of course, not in discrete steps) so that each frequency it can produce is heard at the same position as it would be on a classical piano covering the same octave range.
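As a rough sketch of the panning logic, assuming a simple linear key layout from A0 on the left to C8 on the right (real grands cross their bass strings, so this is an approximation), the position and speaker gains could be computed like this; all names and constants are my own illustration, not an existing product:

```python
import math

# Sketch: map a frequency to an L/R pan position so each note appears
# roughly where its string would sit on a piano, on a log-frequency
# scale from A0 (27.5 Hz, left) to C8 (4186 Hz, right).
F_LOW, F_HIGH = 27.5, 4186.0  # A0 and C8 in Hz

def pan_position(freq_hz):
    """Return pan in [0.0, 1.0]; 0 = full L, 1 = full R."""
    x = (math.log2(freq_hz) - math.log2(F_LOW)) / (math.log2(F_HIGH) - math.log2(F_LOW))
    return min(max(x, 0.0), 1.0)

def pan_gains(pan):
    """Equal-power gains (gain_L, gain_R) for the two built-in speakers."""
    theta = pan * math.pi / 2
    return math.cos(theta), math.sin(theta)
```

A 440 Hz A4, for example, lands slightly right of centre, as it does on a real keyboard.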

Advantages of this setup:

1. The instrument would fit in well with orchestral works from previous centuries, when PA systems did not yet exist. It would feel more 'natural'.

2. Easy localisation of the instrument by other players and the audience.

3. By emitting sound this way, the instrument could use the naturally good acoustics of good concert halls, like the rest of the orchestra does. For example, the Musikverein in Vienna.

4. It could also work better in a 'chamber music' setting with e. g. theremin (why not, while we're at it) or string instruments.

The quality would, of course, strongly depend on the quality of the installed speakers, which could be limited by speaker size. Probably some sort of studio monitors with soft membranes and neutral sound would be best.

Please note that this would only make sense in rooms and concert halls with decent acoustics.

As a follow-up thought, I was wondering how to write this sort of thing in a sheet music version. As far as I know, no one has really bothered to come up with a standardised notation yet. It would also require a total absence of "colouring" by various synth models, to reproduce everything the same way every time, over the centuries. That might only be possible to do with a software emulating synths, like puredata. Could be one hell of a nightmare to even get started. BUT. If we want synth music to last, it would be worth the effort. Synth models may go out of production, reproductions may be inaccurate. But if people find a workaround (like just writing down mathematical operations, like in programming code), the sound will stay the same over centuries. Might be worth investing more time into thinking about methods of accurately reproducing synth sounds, by introducing a unified sheet music notation system and maybe software standard, just like MIDI facilitates communication and control.


Wednesday, 27 September 2023

Left handed piano

 All my piano life, I have been learning new songs mostly in the following "setup" (let's call it setup 1):

Left hand: Chords, arpeggios, small movements

Right hand: Melody, improvisations

Very rarely, however, have I encountered songs written the other way round (let's call it setup 2):

Left hand: Melody, improvisations

Right hand: Chords, arpeggios, small movements

Conclusion: the piano, as it is taught in many places around the world, is a concept shaped by a right-handed majority.

However, to obtain, practice and maintain full dexterity in both hands, I would suggest that any and all piano lessons should be structured the following way:

Setup 1 - Setup 2 - repeat (50/50 balance)

(Later perhaps: two melodies, like in canons, etc. - this is just a suggestion for early practice)

Of course, you shouldn't break your arms, but it is perfectly possible to play bass lines as individual melodies with the left hand. In jazz songs, for example, I think it would be easy to switch hands because the accompanying chords of the voicings are very free, anyway, so you can rotate them any way you like.

The result of the current way of teaching piano is:

Most people, at a beginner or intermediate level, are much more agile in their right hands.

Only if they practice switching "chord hand" and "melody hand" will they practice "ambidexterity". I think that, especially for styles like jazz, this should give any piano students more flexibility in playing e.g. impros and increase overall piano "fluidity".


Sunday, 18 June 2023

TRS - unbalanced stereo input in mixing boards

 Classic situation:

Spontaneous decision to feed music into a mixing board via a laptop.

In this situation, most people do not have a USB interface at hand to route the signal via two balanced outs to a mixing board.

What happens instead? They use the headphones out, which classically comes in a TRS configuration - unbalanced L and R out.

Once you feed this into a balanced line in of the mixing board, the differential input stage subtracts one channel from the other: instead of a mono sum you get L minus R, and anything panned to the centre cancels out.

Instead, there should be at least two line ins for a TRS configuration where L and R are recognised as unbalanced signals and routed to the correct channels. (Two line ins because you may want to stream sound from multiple devices, and one line in should be a backup.)

Right now, I have this exact issue working with an Allen & Heath SQ-16. It does not have cinch (RCA) connectors, so unbalanced tape ins via cinch are not a workaround option here. Cinch-to-6.3 mm jack adapters might work, but that again requires a bunch of adapters, and the more adapters, the higher the risk of equipment failure.

Saturday, 22 April 2023

Dwarf keyboard for jazz musicians with tiny hands

 I would like to have a keyboard with

- white keys half the size 
and
- black keys 2/3 the size

of a Novation UltraNova keyboard. It should span 5 octaves and have both MIDI and 6.3 mm balanced L+R jack outputs.

Case in point:

E7b9.

With my tiny hands, I cannot play this on an UltraNova without playing at least one unintended extra key and risking hand damage (see picture). In my opinion, this is yet another case of gender bias in product design.

I am the T-Rex of jazz music

To those who claim that this is doable with more practice: it is not really practical, since I do not have the required anatomy! Everything on a keyboard dimensioned for standard male hands is easier if you have big hands and becomes very tedious when you have lady hands. On the UltraNova, my hand spans a ninth without cramps; a tenth is only possible with cramps and will not sound good because I cannot play it with ease. However, if I intend to use all five fingers at once, the maximum interval span gets shorter, because I need to crook all my fingers to reach all five keys. To be limited by design flaws is not amusing.

I have been told that some Casio models have smaller keys. I have yet to find out if any keyboard according to my wishes exists. If not, I would like to see it made, to make jazz music life for children and people with small hands easier.


Off switch for household intercom speakers or doorbells

 All household intercoms should be made with an off switch for the speakers. Alternatively, a potentiometer for the volume would not only allow one to switch off the speakers at any given time, but also adjust the volume to a lower level suitable for cat and sound tech ears.

I manually installed two off switches, for my own and a neighbour's intercom. Nowadays, I usually leave mine on when I have ordered takeaway food or expect a mail delivery, but keep it turned off at all other times. I also want to add a series resistor to decrease the volume to a fixed level, since installing a potentiometer would require modifications on the motherboard, which might make it harder to restore the intercom's default state should I ever let the flat to someone else.
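As a back-of-envelope check for the resistor idea: a series resistor and the speaker form a voltage divider, so the attenuation is 20·log10(Z/(R+Z)). The 8 ohm speaker impedance and 22 ohm resistor below are illustrative values, not measurements of my intercom, and real speaker impedance varies with frequency:

```python
import math

# A series resistor R in front of a speaker of impedance Z attenuates
# the level by 20*log10(Z / (R + Z)). Values are illustrative.
def attenuation_db(r_series, z_speaker=8.0):
    return 20 * math.log10(z_speaker / (r_series + z_speaker))
```

With these example values, a 22 ohm resistor in front of an 8 ohm speaker gives roughly -11.5 dB.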

Advantages of an off switch:

- Allows you to sleep in peace after a night shift. Postal workers will usually ring doorbells in the morning hours, hoping that one of a package recipient's neighbours is at home. The intercom at my flat has also had periods of malfunctioning, and there have been weirdos ringing doorbells in the middle of the night, repeatedly.

- Cats that are easily scared by strangers and doorbells, like mine, have a less stressful life. This is also why I would like to install the resistor.

- No disruptions during rehearsals and/or home studio time.

- The sound is not very appealing, so less of it increases one's overall quality of life.

The technical heart of our intercom. A moment of triumph as I detached the loudspeaker from the wiring. All of a sudden, my quality of life had increased significantly.

Intercom off-switch that I installed for my neighbour. Off-switch kindly provided by my workmate Elmar, who is a genius at playing around with various Raspberry Pi smart home applications that he designed by himself - for watering plants, switching off lights etc.

Saturday, 1 April 2023

Plugin to convert (mono and poly) synths with mono out to stereo

 As someone who enjoys playing old synths, you may have run across this problem before. Your synth only comes with an audio mono out (like the Casio VL-tone).

For monophonic synths, the conversion should be quite easy to write e. g. in Max4Live or puredata etc.

Values that need to be set before use:

1) Highest note of synth (100 % R)

2) Lowest note of synth (0 % R)

These are the endpoints of a linear graph y = k·x on the Oct/Volt (i.e. log-frequency) scale.

If you play a note, an analyser (e.g. FFT) reads the Hz value, which is then fed into a sidechain for the panning. Depending on where you want to hear the synth, you may want to create the illusion that the synth player stands e.g. to the right, so you would use the entire synth note range in 50-100% R of your soundscape. Or, if you want to make the synth appear very broad or only e.g. use one of three octaves, you may want to pan the synth up to 0-300% width, for example.

This (monophonic) solution works for any frequency within the specified range, which is nice if you have specific, or not well-tempered, tuning settings, like in DAF's 'Der Räuber und der Prinz'.
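A minimal block-based sketch of the monophonic approach, assuming numpy and illustrative endpoint notes (C2 to C7): estimate the pitch of each block from the FFT peak, map it between the endpoints on a log2 (Oct/Volt-style) scale, and apply equal-power panning.

```python
import numpy as np

# Sketch of the mono-to-stereo pan idea: per audio block, estimate the
# pitch with an FFT, map it linearly in octaves between the configured
# lowest note (0 % R) and highest note (100 % R), and pan equal-power.
# Sample rate and note range are illustrative; set them per synth.
SR = 48000
F_LOW, F_HIGH = 65.41, 2093.0  # C2 .. C7

def autopan_block(mono, sr=SR, f_low=F_LOW, f_high=F_HIGH):
    """mono: 1-D float array (one block). Returns (left, right) arrays."""
    spectrum = np.abs(np.fft.rfft(mono * np.hanning(len(mono))))
    freqs = np.fft.rfftfreq(len(mono), 1.0 / sr)
    peak = freqs[np.argmax(spectrum)]            # crude pitch estimate
    pos = (np.log2(max(peak, 1.0)) - np.log2(f_low)) / (np.log2(f_high) - np.log2(f_low))
    pos = float(np.clip(pos, 0.0, 1.0))          # 0 = full L, 1 = full R
    theta = pos * np.pi / 2
    return mono * np.cos(theta), mono * np.sin(theta)
```

A real plugin would need overlapping blocks and smoothing to avoid zipper noise between pan positions, but the mapping itself is this simple.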

For polyphonic synths, the conversion gets more complicated. The general settings are the same as previously.

Ideally, you would then need to make a sample dump from your synth onto your DAW of preference or, if the synth only has a 3.5 mm jack out, find e. g. a MIDI instrument that is an exact copy of your synth's sound. In that case, the commands would need to be translated e. g. to MIDI CC messages.

When a chord is played, you would send it to an FFT (this function is available e. g. in puredata). The Hz value is then used for calculating the panning for each note, as above. The amplitude is used to recreate the correct intensity. Each signal triggers the corresponding sound from the sample dump. Latency may be an issue, but could be fixed after playing by moving the sidechain track just a tiny bit forward (which an algorithm would be smart enough to do at the precise position).

The polyphonic solution would be synth-specific and limited to the discrete Hz values of the synth you are playing, unless you recreate the entire s o u n d, e. g. in puredata, without any limitations on which Hz you feed the algorithm.

There may be better solutions if the synth can speak MIDI. However, there are a few synths out there which are polyphonic with an audio-only mono out that are covered by the outlined strategy. Another issue may be that some synths existed before the dawn of MIDI. In this case, sample dumping may prove tedious.

If you would like to start working on this plugin, I won't stop you.

Of course, there are less elegant ways to create an illusion of stereo. For just a few notes, another idea is to record each note separately and then arrange them spatially wherever you like. But there is some joy in working in a 'stilren' way, as the Swedes would say.

Four-stroke engine with organ pipes as an analog sequencer

 This may be a totally impracticable idea, but here goes:

I recently listened to an old Steyr-Puch tractor from the 1930s. It provided a good four-stroke metronome. The strokes originate from the cylinders. Organ pipes resemble cylinders to some extent, so perhaps it should be possible to run an engine whose cylinders generate notes. Then, as you would in a sequencer, perhaps you could "program" / "set" gradual differences, e.g. in pitch, over time.

Obviously, not the most environmentally friendly idea. Just a nerdy thought for some industrial band out there like Die Krupps...

Adjust EQ settings by drawing EQ curves by hand on a touchpad

 Currently, many mixing boards work like this:

You manually select some frequencies, then adjust the filters at said frequencies (shelf, notch, dynamic EQing, bell, whatever...). This can be tedious.

To speed things up, if you know more or less how your instrument / mic / ... behaves and what you would like to do, it could be useful to use a pen and a touchpad as input devices, and set a new EQ curve by simply d r a w i n g the entire graph at once. For a bell curve or a shelf, the exact position matters less than for a notch filter. Still, it should be possible to draw a notch filter by hand and finetune the frequency of the absolute minimum later. Since there may be at least 4-ish annoying frequencies you need to take care of, if you know the behaviour of your gear, this could save some time.
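As a sketch of what such a board would have to do first, here is how a series of drawn pen samples (frequency/gain points) could be resampled onto the log-spaced grid an EQ display uses; the function name and grid size are my own assumptions, and fitting the result to actual filter bands would be the next step:

```python
import numpy as np

# Sketch: turn a hand-drawn EQ curve (pen/touchpad samples as (Hz, dB)
# points) into a gain curve on a log-spaced frequency grid, which a
# board could then fit to its filter bands.
def drawn_curve_to_grid(points, n_bins=128, f_min=20.0, f_max=20000.0):
    """points: list of (freq_hz, gain_db) pen samples.
    Returns (freqs, gains_db) on a log-spaced grid."""
    pts = sorted(points)                       # sort by frequency
    f = np.array([p[0] for p in pts])
    g = np.array([p[1] for p in pts])
    grid = np.geomspace(f_min, f_max, n_bins)  # log-spaced, like an EQ display
    # interpolate in log-frequency so every octave is treated evenly
    gains = np.interp(np.log10(grid), np.log10(f), g)
    return grid, gains
```

A sketched notch, say a quick dip around 250 Hz, survives this resampling well enough that the exact minimum can then be fine-tuned afterwards, as described above.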

Use a balanced XLR out as an unbalanced L/R TRS out

 The following may be a familiar situation:

You are running out of outputs and the camera person / streaming person / ... asks you for a main L/R sum out.

One 'dirty' way to achieve this when space on the mixing board is scarce would be a balanced XLR out that can be switched to an unbalanced L/R TRS out, as in old headphones, where T/S is L and R/S is R. It is not a perfect solution, but it could be useful to have, say, four of these as optional extensions on medium budget mixing boards with limited space. 16 outs, for example, can become too few pretty fast; 4 switchable outs would give a total of 12 balanced outs plus 8 unbalanced outs.

Microports with positioning info and automatic panning

 For use in theatres:

To be able to locate actors' positions by ear, actors could wear microports that send their position on a linear Stage Left - Stage Right axis to the mixing board, where this geographical information is directly linked to the panning settings. Panning follows the actor automatically wherever they walk so that, if they were at the very left from the audience's view, their voice would only come out of the left Main PA speaker.

For settings with more loudspeakers and actors moving on a two-dimensional plane, it should mathematically be possible to construct a similar setup. Here, the actor's movement could probably be described with the same algorithms used to predict the movement of planets (some differential equations). According to where the actor is on the two-dimensional plane, the acoustic signal is mixed to match the actor's position in the room. If the audio signal does not follow the actor but remains in one set position at all times, using microports in e.g. a room with the audience watching from three or four sides would greatly confuse the audience. In bigger settings (like a stadium or amphitheatre), amplification of the voice may become absolutely necessary.
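A minimal sketch of the two-dimensional case, assuming simple inverse-distance weighting with equal-power normalisation (a stand-in for a proper panning law such as VBAP; the coordinates and the minimum distance below are illustrative):

```python
import math

# Sketch: derive one gain per PA speaker from a tracked actor position.
# Each speaker is weighted by the inverse of its distance to the actor,
# then the gains are normalised for constant overall level. The 0.1 m
# minimum distance just avoids a blow-up when the actor stands on top
# of a speaker; all values are illustrative.
def speaker_gains(actor_xy, speakers_xy, min_dist=0.1):
    """Return one gain per speaker for an actor at actor_xy (metres)."""
    dists = [max(math.dist(actor_xy, s), min_dist) for s in speakers_xy]
    weights = [1.0 / d for d in dists]
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]
```

With two main speakers at (-1, 0) and (1, 0), an actor in the middle gets equal gains on both, and drifting towards one side shifts the image accordingly, which is exactly the autofollow behaviour described above.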

Additional thoughts:

In larger venues, one may want to achieve panning effects by other means than having different SPL in both speakers, such as playing with delay or other ideas, to avoid a situation in which people on the e. g. right side cannot hear a voice panned 100% to the left anymore. This could also be developed in an automated way, where the panning is an autofollow parameter determined by the current position of the actor.

DNA as a means of song data storage

DNA could be used as a long-lasting alternative to encoding music in 0s and 1s. It would provide a base-4 data storage system: encode the music to ATGC sequences, 48 kHz, 32 bit, and freeze at -80 °C.

To restore the information, thaw the DNA, amplify it with PCR, send it for sequencing, decode the ATGC sequence back to audio, use an AI algorithm to correct possible PCR and sequencing errors, and play.
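The digital half of this is straightforward base-4 coding: two bits per nucleotide, four bases per byte. A minimal sketch (the base-to-bits assignment is arbitrary; synthesis, storage and error correction are the hard parts and are not covered here):

```python
# Sketch of the base-4 idea: two bits per nucleotide, so one byte
# becomes four bases. The A/T/G/C ordering below is an arbitrary
# choice, not a standard.
BASES = "ATGC"  # 00, 01, 10, 11
DECODE = {b: i for i, b in enumerate(BASES)}

def bytes_to_dna(data: bytes) -> str:
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for b in seq[i:i + 4]:
            byte = (byte << 2) | DECODE[b]
        out.append(byte)
    return bytes(out)
```

A round trip through these two functions returns the original audio bytes unchanged; everything between them (synthesis, freezing, PCR, sequencing) is where the real errors would creep in.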