Niiiice! Hilda is a strange beast... I really like your syncopated rhythms. The visuals are cool too! Cheers,
Heeyoooo! Thanks, that's nice to hear 😁 I've started studying VS (Visual Synthesizer) with quite a bit of enthusiasm lately, and between that and the OBS mastery I'm developing, it's really cool to see how it impacts my content! So, when are you going to crank out a video for us LOL?! 😅
@@Silent_Stillness I figured it was VS. I don't have it, but it's a pretty cool app. Yeah... I'm dragging my feet on the videos... I've been wanting to do Jamuary for years, and something else always ends up coming up. Hopefully this year!
@@mrtinacrispoti Lol, I didn't know about Jamuary, but after looking it up the concept seems familiar! If I hadn't gotten VS for so little on iPad (at one point they ran an 80% off sale and I bought it as a precaution, including the 2 IAPs, even though it didn't interest me at all lol - guess I have good intuition!), I would have bought it when the PC version went on sale (knowing what I know now that I have experience with the app). The iPad version was EXTREMELY affordable, but I'm still stuck having to extract the audio from my raw video file recorded in OBS on my PC, send it via Dropbox to my iPad, and then connect my iPad to my PC over USB so I can use iTunes to transfer the file back to my PC. Even a small video of less than 5 minutes like this one generates a 1.5GB+ file. The comfort of having the PC version of VS would really simplify things, but I'm sure there must ultimately be something better on PC anyway. In the meantime, my money stays in my pocket!!
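(Side note for anyone wanting to script the extraction step I just described: the PC side can at least be automated. A minimal sketch, assuming ffmpeg is installed and on PATH; the file names are placeholders, adjust to your OBS output settings:)

```python
# Minimal sketch of the PC-side audio extraction step.
# Assumes ffmpeg is installed and on PATH; file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "obs_recording.mkv",  # raw OBS capture
        "-vn",                      # drop the video stream
        "-acodec", "pcm_s16le",     # decode audio to 16-bit PCM WAV
        "extracted_audio.wav",
    ],
    check=True,
)
```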
@@Silent_Stillness For me, that kind of thing - 10 operations to get one result - often demotivates me from doing anything at all. With time, I'm getting lazy. ;) On Mac, and very recently on PC, Ebosuite is hard to beat, but it requires Ableton Live.
@@mrtinacrispoti For sure! Against all expectations, there's still an immersive charm to tinkering on the iPad, even for video stuff like this, so curiously it makes the process strangely intriguing despite it clearly being mega time consuming - so yeah, it definitely takes motivation haha! I'm slowly getting a taste for VS; it lets me do something different too 🤓
Hey, I was wondering: do you still find yourself using a desktop DAW much, or at all?
I have an Ableton license that I don't seem to use anymore since getting the iPad, so I'm seriously considering selling it.
Hi :) This is going to be a two-part reply (brace yourself!):
PART 1:
Before I started focusing on somewhat spontaneous OBS video recordings of performances like the one in this video (which also serve as a very powerful self-analysis tool for improving my skill at "performing" electronic music in real time), my default was to record my iPad's Main Out in Ableton on my PC, leveraging Ableton Link so that the recording starts almost perfectly aligned with the beginning of the first bar coming from my iPad; as a result, re-recording multiple takes over and over is VERY much simplified and improved. I never spoke about this on my channel before, but all this infrastructure is there as part of my setup. In my original 1.5+ hour video I mention that I leave considerations regarding the inclusion of my PC in my setup hidden, due to the complexity involved.
There's also a different set of implications that come with having Ableton's screen open as I record multiple takes: I can quickly switch between recorded takes for auditioning purposes, visually compare the length of each take, and look at each take's waveform (none of which I can do when reviewing my video-recorded performances in VLC, for example). I can also easily review the loudness of what I recorded, and gain access to certain analysis tools that can help me elevate my mix directly at the source (on my iPad). Even though I have Pro-L 2 on my iPad, I never use it because of the added latency, which is incompatible with my live performance (low latency) requirements. Being able to test my recorded takes with a limiter on Ableton's end on my PC in post is a massive flexibility advantage as well.
Ableton Link is a very powerful technology that alleviates (but doesn't fully solve, since I involve elektron boxes in my pipeline) the colossal shittiness of MIDI clock technology over USB connectivity, considering we're in 2024. Even though I now focus almost exclusively on live execution/performance of arrangements, that does restrict the complexity of what I can generate (again, in terms of arrangement) to some degree. I still view preserving access to Ableton as a valuable tool, even though the likelihood (over the next year or so) that I'll need to run a linear arrangement in Ableton and on my iPad in parallel seems rather slim, as "actively" involving my PC in my composition process (as opposed to keeping it as a technical sidekick) would significantly damage the immersive quality of my workflow.
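Just to quantify how coarse MIDI clock is (rough back-of-the-envelope numbers I'm adding here, nothing AUM- or elektron-specific):

```python
# Rough illustration of why raw MIDI clock is coarse: the standard sends
# 24 pulses per quarter note (PPQN), so tempo is inferred from tick spacing.
def midi_clock_tick_ms(bpm: float, ppqn: int = 24) -> float:
    """Interval between consecutive MIDI clock ticks, in milliseconds."""
    quarter_note_ms = 60_000 / bpm
    return quarter_note_ms / ppqn

for bpm in (120, 140, 174):
    print(f"{bpm} BPM -> one clock tick every {midi_clock_tick_ms(bpm):.2f} ms")
# 120 BPM -> one clock tick every 20.83 ms
# Any jitter on those ticks (USB buffering, app scheduling) skews the
# follower's tempo estimate, which is why Link's shared timeline feels tighter.
```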
My go-to if I wanted to come up with a linear arrangement would first be Xequence 2, but that software definitely has some shortcomings, especially irritants around automation, so in most cases my output would likely still include some degree of live performance actions as a result.
From my perspective, if I want to retain access to the AUM environment, I have to sacrifice high resolution AU parameter automation "across AUv3s". As a result, in that context, running Ableton in parallel with potentially even just ONE synth (hosted in Ableton) under properly micromanaged high resolution automation over a linear arrangement is a valuable asset.
This all boils down to the problem of AUv3 sequencers in this current meta.
PART 2:
Helium doesn't offer the capacity to manipulate blocks of MIDI data to make the arrangement process more convenient (as one can in Xequence or in Ableton), but it's an AUv3, so timing/accuracy-wise it's much more desirable than Xequence in that sense.
However, any sequencer that is external to AUM's "native" code (AUM, of course, has no native sequencer) is not going to have access to high resolution AU parameter control of the AUv3s hosted in AUM for automation purposes, which is a massive design flaw of the whole platform.
While both Xequence AND Helium offer the capacity to control parameters over MIDI at 14 bit resolution (Xequence's NRPN functionality is a paid add-on which I don't have, since there's effectively no use for it), AUM doesn't offer the option of controlling AU parameters via custom user-defined MIDI mappings at 14 bit resolution, which makes that functionality effectively useless for now.
And because almost no AUv3 apps have NRPN "hard wired" into their MIDI implementation, users are limited to only 128 possible positions when controlling an AU parameter in the context of AUM, which is REALLY problematic.
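To put a number on that (a quick illustration I'm adding here, not anything AUM-specific): a single 7-bit CC gives 128 steps, while a 14-bit CC pair (MSB on CC n, LSB on CC n+32) gives 16,384, and the coarse version is what produces audible "zipper" stepping on slow sweeps:

```python
# Minimal sketch of 7-bit vs 14-bit MIDI CC resolution. Standard CCs carry
# 7 bits (0-127); a 14-bit pair sends the MSB on CC n and the LSB on CC n+32,
# for 128 * 128 = 16384 distinct positions.
def encode_14bit(value: int) -> tuple[int, int]:
    """Split a 0-16383 value into (MSB, LSB) 7-bit bytes."""
    assert 0 <= value <= 16383
    return (value >> 7) & 0x7F, value & 0x7F

def to_normalized(msb: int, lsb: int) -> float:
    """Recombine MSB/LSB into a 0.0-1.0 parameter position."""
    return ((msb << 7) | lsb) / 16383

msb, lsb = encode_14bit(8192)      # mid-position
print(to_normalized(msb, lsb))     # ~0.5000
print(1 / 127, 1 / 16383)          # smallest step: ~0.0079 vs ~0.000061
# With plain 7-bit control, a filter cutoff can only land on 128 positions,
# which is where the audible stepping on slow filter sweeps comes from.
```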
Beyond that, I can easily make the argument that the entire structure of my setup is fundamentally incompatible with writing linear arrangements, as basically everything I use is loop/pattern based for easy ON/OFF toggle control. In that context, Ableton's clip-based system IS compatible with that approach while preserving access to high resolution, flexible VST/AU parameter automation (for the plugins it hosts), and on iPad Imaginando's LK Matrix could potentially become a much more relevant tool than Xequence or Helium for that use case, since LK Matrix is also clip based and offers automation capability. The problem with all these low cost sequencer apps is that they are very likely NOT to offer the level of reliability and polish one is accustomed to in Ableton.
There's also another approach, which would consist of running Drambo as a standalone component in parallel with AUM (in that scenario Drambo would NOT be hosted in AUM as an AUv3), using Ableton Link to sync everything and having Drambo send its Main Out audio into an audio track in AUM. In this scenario, Drambo (which IS a host application, and which btw also happens to be usable as a hosted AUv3, both as a sound generator and as a MIDI sequencer) WOULD potentially host some AUv3 synths whose AU parameters it COULD control at high resolution, because as a host app it has a NATIVE sequencer and an automation system that IS clip based AND includes HIGHLY desirable bezier curves. The problem in that context is that this would potentially be quite taxing on the iPad's resources, on top of being inherently less reliable until thoroughly tested, and again we run into the problem of low cost apps that don't have the level of polish and reliability needed to prevent distractions from technical issues, which Ableton has achieved.
What should have come across from all of the above is that there is a MAJOR flaw on this platform when it comes to sequencing and automation, and until this topic gets addressed once and for all, there's a strong incentive to keep access to different tools and find creative ways of achieving desirable results that involve automation, as that's the gateway to the future of electronic music.
No automation basically equals poor expressiveness, which translates into music that's just not compelling, as everything tends to sound somewhat static and easily becomes excessively repetitive. There NEEDS to be uniqueness, clear events that happen only once in a piece of music, for it to realistically aspire to be perceived as compelling. A track like Staying Alive by the Bee Gees perfectly illustrates the power of uniqueness to generate compelling results: the expressiveness is clearly focused on the singer's voice, and while there are elements of repetition in the vocals, there are also clear elements of uniqueness that only happen once in the track. That's the "key" thing I'm trying to point out with this example, to bring forward the importance of having a convenient/viable/polished/reliable system for drawing automation. Food for thought hopefully!
@@Silent_Stillness Great response and info.
So, on the topic of automation as it relates to these performances you've been doing: Ableton is not able to automate the AUv3 parameters to bring about uniqueness in that way. So when you mentioned using an Ableton synth, does that mean that, to bring uniqueness to your performances, you tend to program a "backing" sound in Ableton, with automation, and then play over top of it with your iPad/Elektron setup?
@@DJ_Personal Yes, you understood right, that WOULD in theory be the idea (Ableton controlling a directly hosted plugin - VST3 etc. - and summing that audio with what's coming out of the iPad), even though I haven't felt compelled to do it yet, due to the unquestionably significant degradation in the immersive quality of my setup that such an inclusion would imply. There's also the problem/challenge of recording all this audio tightly synced within OBS once Ableton is involved, which is another can of worms that could require a significant time investment to sort out while preserving certain key attributes of my setup (such as staying compatible with potential real-time streaming on Twitch/YT, for example). Especially once you start factoring in things like stacking transients from sound sources coming from Ableton and from the iPad directly (plus considerations like needing "brick wall" limiting to preserve loudness, small discrepancies in latencies, and highly technical - but potentially VERY important - things of this nature that are hard to account for before actually testing/experimenting in the real world), this could really be a problem.
I haven't really experimented to see how far I can go with Helium, LK Matrix, or the "Drambo in parallel" strategy I described earlier, but intuitively I have a negative bias towards all of these avenues, each of them being underdeveloped to some (potentially deal-breaking) degree. These are all uncharted territories where I'm pretty much acting as a pioneer, as there's no obvious resource online that I'm aware of where people report thoroughly testing these kinds of workflows (with a mature, discerning eye and a conscious emphasis on preserving the integrity of the setup as a truly immersive, creativity-inducing pipeline) and explicitly demonstrate whether they're viable/reliable or not.
This branches into somewhat of a frustration I'm experiencing at the moment, which is the realization that by now I've identified critical areas of "work" I could be doing that could have a very significant positive impact for the community at large, but I'm not being compensated financially for it, which poses a major problem for sustainability in the long run. Being truly fully functional in the iPad environment while enjoying the flexibility of AUM is like solving a puzzle that no one with actual merit has advertised having "fully solved". An argument could be made that, with my original guide video leveraging elektron boxes as part of my custom iPad setup, I'm the only resource on YouTube that has "advertised" something even remotely resembling a solution, but given the context of our conversation you probably understand that elektron boxes are flawed just like every other sequencer in the context of AUv3s, because of the low resolution problem, on top of the fact that they don't allow the automation freedom that something like Drambo would: they're limited to parameter locks and LFO shapes to bring movement, which is very powerful but doesn't encompass the full scope of what is needed.
In this very video where we're having this conversation, on 2 occasions you can see me doing substantial things with FabFilter Volcano 3 (especially at the end), all actions that I simply could not automate with any of the tools at my disposal in the context of AUM. I would still take this any day of the week over something like Logic Pro, which I haven't even bothered installing or trying, because it being subscription based basically equals it not existing in my eyes (I don't think I'd mentioned that before). Oh, and by the way, Cubasis 3 is incompatible with elektron boxes: it cannot clock them properly, and even though Cubasis includes Ableton Link functionality, in the context of my elektron boxes it cannot coexist with AUM running in parallel, as it causes audio freezing issues whenever I switch between apps in real time while music is playing. If that worked, it would have solved the whole issue a long time ago, but of course it does not. Cubasis is viable purely as a standalone, which there's no way I'm ever going to use considering I have elektron boxes.
@@Silent_Stillness I wonder if a hardware sequencer like the Squarp Hapax could prove to be somewhat of a solution.
My understanding is that the Hapax allows you to draw automation, and I'm assuming that if you assigned an AUv3's parameter to a particular CC on a particular MIDI channel, you could then link the drawn automation on the Hapax to that.