No One To Hear Your Screams

  • Published: 20 Dec 2023
  • Welcome to Xinferis TV, your gateway to television from another reality.
    🔮 If you enjoyed this video, don't forget to give it a thumbs up, share your thoughts in the comments, and hit that subscribe button to join our community.
    👉 Know someone who would appreciate my music? Feel free to share this video with them.
    🎶 Listen to my music on:
    🎧 Spotify: open.spotify.com/artist/1hQWO...
    🍎 Apple Music: / xinferis
    🔗 Explore more of my content and stay up-to-date with my latest projects via my Linktree: linktr.ee/xinferis
    🎹 Curious about my creative process? I create the music using the following software tools:
    Logic Pro
    Native Instruments Komplete 14 CE
    Arturia V Collection X
    Arturia FX Collection 4
    Arturia Pigments 4
    u-he Repro-5
    u-he Repro 1
    u-he Diva
    u-he Zebra Legacy
    Ozone 11 Advanced
    IK Multimedia Total Studio 3.5 MAX
    🎛️ And the audio hardware below:
    Native Instruments Kontrol S61 MK3
    Akai MPK-49
    Arturia KeyStep
    Arturia BeatStep Pro
    Behringer DeepMind 12
    Behringer Model D
    Behringer Neutron
    Arturia MiniBrute
    Korg Minilogue
    MOTU 828es
    Yamaha HS8 Studio Monitors
    Yamaha HS8S Studio Subwoofer
    🎨 I use the following creative tools to make the art:
    Midjourney
    Draw Things (which uses Stable Diffusion)
    Pixelmator Pro
    Apple Photos
    🎬 And these editing tools to create the videos:
    Apple Final Cut Pro
    Apple Compressor
    Apple Motion
    Thank you for visiting the worlds of Xinferis TV! Your continued support helps drive me to continue creating more videos. 🚀🌌
    🚫 Copyright Notice: All of the music and art on this channel are created by me. The content of this video is not royalty-free, and I reserve all rights to the video, music, and art. Any unauthorized use or reproduction of my content is strictly prohibited.
    #synth #soundtrack #aiart

Comments • 46

  • @johnferry7778
    @johnferry7778 6 months ago +8

    Sinister machinations from beyond Yuggoth. Invocations of baleful beauty powered by unknown radiation.(Probably.)

    • @XinferisTV
      @XinferisTV  6 months ago +1

      Everyone's personal interpretation of my videos is the real story behind each one. :)

  • @craigpayne5500
    @craigpayne5500 6 months ago +2

    You are one incredibly talented artist. Please never stop. Can you add these all together and tell a story? Hehe. Brilliant

    • @XinferisTV
      @XinferisTV  6 months ago +1

      Thank you! The video below, The Android Uprising, tells a story, but it got a lot fewer views than my other videos. :) I am proud of it though.
      ruclips.net/video/Rl1TdriCMbQ/видео.html

  • @ss-oq9pc
    @ss-oq9pc 6 months ago +3

    Some truly disturbing sh!t in here. I love it. More please.

    • @XinferisTV
      @XinferisTV  6 months ago +1

      Thank you! I have a lot more to come! :) I have more videos on my channel too if you haven't looked around yet.

  • @wimvanderstraeten6521
    @wimvanderstraeten6521 6 months ago +3

    This video has a rather Lovecraftian vibe. Love the music, btw.

    • @XinferisTV
      @XinferisTV  6 months ago

      Definitely was aiming for that. :)

  • @modolief
    @modolief 6 months ago +3

    Yes! A new Xinferis
    Such perfect timing for me

    • @XinferisTV
      @XinferisTV  6 months ago +1

      I try to have a release each week. Currently I release at midnight Friday my time. That way if someone doesn't need to work on the weekend, they might have the time to watch a fresh video if they want to.

    • @modolief
      @modolief 6 months ago

      @XinferisTV Amigo your stuff is sooo cool, I just keep watching and re-watching. You do amazing work, thanks so much! 🙏🙏🙏🙏

  • @jamesvetromila6068
    @jamesvetromila6068 6 months ago +2

    Sorta reminds me of a few places I've worked at over the years.

    • @XinferisTV
      @XinferisTV  6 months ago

      Lol that doesn't sound good. :)

    • @modolief
      @modolief 6 months ago

      Loooooooolll !!!!

    • @unspeakableoaf
      @unspeakableoaf 6 months ago

      Frickin' Walmart, right?

  • @AntoineVanGeyseghem
    @AntoineVanGeyseghem 6 months ago +2

    Hypnotizing...

  • @F.R.B.
    @F.R.B. 6 months ago +2

    🖤🖤

    • @XinferisTV
      @XinferisTV  6 months ago

      Thank you for watching!

  • @cujoedaman
    @cujoedaman 6 months ago +1

    I see a lot of DOOM, Metroid, Alien (or Giger in general), and basic 60-70's sci-fi references in this :D

    • @XinferisTV
      @XinferisTV  6 months ago

      I am definitely into all of those things! :)

    • @cujoedaman
      @cujoedaman 5 months ago +1

      @@XinferisTV It's just our brains trying to sort out the chaos to make sense of the images based on things we've already seen... and I love it!

  • @DavidBrown-in8hi
    @DavidBrown-in8hi 6 months ago +2

    :34 eerie

  • @johnferry7778
    @johnferry7778 6 months ago +2

    Aha! First here again. I'm a bit of a sad case, I suppose.

    • @XinferisTV
      @XinferisTV  6 months ago +2

      I appreciate the enthusiasm. :) I had a lot of fun making this one.

  • @youMEtubeUK
    @youMEtubeUK 5 months ago +1

    Could I ask for an example of a prompt that you used? I’ve tried other image generators like DALL·E, but it looks like MJ is head and shoulders above.

    • @XinferisTV
      @XinferisTV  5 months ago

      I don't usually share specific prompts because the information goes out of date very quickly and is tied to very specific models. For example, for this video I used Midjourney 5.2. Midjourney 6 is superior, and I would highly recommend starting with that model. I do have recommendations, tips and tricks on how to create prompts in the comments of my videos. They are usually relatively generic and would apply to any model, including Stable Diffusion, which I also use.
      I describe my workflow and how to construct prompts in the comments of the following video:
      studio.ruclips.net/user/videoufD1TizuMBc
      My workflow always evolves and has changed quite a bit, but the workflow in the comments of that video is a good starting point. It is also the workflow I used for this video.

  • @ua4339
    @ua4339 5 months ago +1

    Are these pictures on Google Images or online somewhere? I want to make some of them my screen saver.

    • @XinferisTV
      @XinferisTV  5 months ago

      Not at the moment. I may create a Patreon at some point and offer the images from my videos as a reward (for personal use) if there is enough interest.

  • @DavidBrown-in8hi
    @DavidBrown-in8hi 6 months ago +1

    1:44, 4:40

    • @XinferisTV
      @XinferisTV  6 months ago

      Those are interesting ones!

  • @user-gb1mp5wb8n
    @user-gb1mp5wb8n 5 months ago

    There's an Arcadian in there... ㅡ..ㅡ

    • @XinferisTV
      @XinferisTV  5 months ago

      Thank you for watching my video.

  • @blockhead1899
    @blockhead1899 6 months ago +1

    The AI is really good now

    • @XinferisTV
      @XinferisTV  6 months ago +1

      It is definitely getting better. I still had to generate thousands of images to get the ones I used in this video.

    • @modolief
      @modolief 6 months ago +1

      @@XinferisTV Do you have some kind of feedback process you use? Like: Have the AI enhance or rework some section of the image, or feed it comparison images for different portions?

    • @XinferisTV
      @XinferisTV  6 months ago

      @modolief I have outlined my process in a previous comment but I will include it here to make it easier for you to find. I really should just make videos about how I make art with AI.
      I use Midjourney to create the art on this channel, but I also use Stable Diffusion (through Draw Things) quite a lot. In general, Midjourney tends to create art I find more aesthetically pleasing, though everyone has different preferences on aesthetics. Stable Diffusion has a much deeper feature set that goes well beyond just generating images.
      In my art workflow as a whole I use Midjourney, Apple Photos, Pixelmator Pro and Final Cut Pro. My workflow changes continuously as new tools come out and I learn new ways to improve the way I do things.
      I start by creating prompts. The longest part of the prompt creation process for me is coming up with general styles I like. These styles are usually a combination of a prompt, negative prompt and in the case of Midjourney tweaking the style settings. Once I have a style I like, I can add themes and ideas of what I actually want in the scene to the prompts.
      At the point where I have a prompt that creates something within the ballpark of what I envisioned, I generate a lot of images with that prompt. When I say a lot, I mean hundreds or even thousands. During this process of creating thousands of images, I may tweak the prompts, either to guide the AI closer to what I want, or to explore different variations and ideas that come to me along the way.
      Once I have this large number of images, I rate them in Midjourney. Luckily Midjourney has keyboard support, so I can just use the cursor keys to move through images and press 1 if it's a bad image or 4 if it's a keeper. This rating process eliminates the majority of the images I created. It really depends on the prompt and my mood, but after generating 1000 images, I might be left with 20 - 100 images. I try to move fast with this process and show no mercy!
      After that, I download the high rated images and import them into Apple Photos. I have albums for various projects and themes. At the point when I am actually creating a video, I might have hundreds of images, or even over 1000 images that I rated highly to pick from. From those images I pick the best or most fitting ones for that video. A video will typically have about 50 images.
      Now this is the fun part. Since Pixelmator Pro is highly integrated with Apple Photos, I can open an image in Pixelmator Pro and make edits instantly without even needing to save. This is a major time saver. I open each image in Pixelmator Pro and do whatever edits are necessary. A lot of the editing I do is with the healing brush, to remove things that look off. Sometimes I have to go further and redraw details. Occasionally I will composite together elements from multiple AI images. On my other YouTube channel I also do color correction. I don't typically need to with the images for this channel because the art already looks the way I want it to color-wise.
      After all the images have been edited, I will export them from Apple Photos. I take these exported images and use a Shortcut I created in Apple Shortcuts to automate Pixelmator Pro upscaling the images to 4K. For this channel I use bilinear filtering because the images are already higher than 720p to begin with, and I like how the slightly fuzzier look combines with the film grain I add in Final Cut Pro. With my other channel, which has more traditional hand painted styles of art, I use Pixelmator Pro's Super Resolution. It does take away a slight amount of the texture in the images, but overall it looks better than using bilinear filtering for that style of image.
      After all of this, I assemble the images and music in Final Cut Pro and export the video. So much for AI art being low effort. :)
      Probably a bit more detail than you were asking for, but I think this might give you a starting point if you want to explore AI art, at least in the form I do it. Maybe I should create a new YouTube channel that talks about creating AI art. :)
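The triage funnel described in this comment (generate hundreds or thousands of images, a quick 1-or-4 rating pass, then a final pick of roughly 50 for the video) can be sketched as plain bookkeeping. This is only an illustration of the numbers involved; the function names and the rating/score data shapes are invented for the sketch and are not part of any Midjourney or Apple Photos API.

```python
def triage(ratings: dict[str, int], keeper_rating: int = 4) -> list[str]:
    """First pass: keep only images rated as keepers (the '4' key press);
    everything rated lower is discarded without mercy."""
    return [name for name, r in ratings.items() if r >= keeper_rating]

def pick_for_video(keepers: list[str], scores: dict[str, float],
                   slots: int = 50) -> list[str]:
    """Final pass: rank keepers by some editorial score (thumbnail
    potential, story, 'X factor') and keep only enough to slightly
    overfill the track, forcing hard cuts during editing."""
    ranked = sorted(keepers, key=lambda n: scores.get(n, 0.0), reverse=True)
    return ranked[:slots]

# Example funnel: 1000 generated -> ~50 keepers -> 50 final picks
ratings = {f"img_{i:04}.png": (4 if i % 20 == 0 else 1) for i in range(1000)}
keepers = triage(ratings)
```

The point of the sketch is the ratio: the rating pass removes the vast majority of generated images before any editorial judgment happens.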

    • @XinferisTV
      @XinferisTV  6 months ago +1

      @modolief So to answer your question more directly, I currently don't use inpainting or comparison images for my videos. I have experimented with inpainting quite a bit, especially for fixing hands, and although it can make things look a little better, it still looks a bit wonky no matter how many iterations I do. I have also experimented with using image prompts and img2img. I was able to get some interesting results but nothing that is in my normal workflow.
      My real focus is just making sure I have the best prompts (and negative prompts) possible for what I am trying to do. In general, part of my prompt defines the style and part of it defines the themes and what is happening. I can spend an extremely long time creating the style part. And even once I have a style I like, I keep trying to refine it to get better, more consistent results. It's all about finding a way to light up the correct virtual neural pathways that create what I want. :)
      I am pretty meticulous with this process, and even do experiments with the same seeds with different parts of prompts removed or added to see the results. It's much easier to do this with SDXL than Midjourney, because SDXL will typically subtly change things if you remove something from a prompt using the same seed. Midjourney on the other hand completely changes the image, even when using the same seed. So with the initial experiments with prompt tweaking I may use SDXL first and then test those concepts with Midjourney. The results will never be the same between the two, but I do find that there is at least some overlap in how they interpret things.
      The image generation and selection process is pretty important too. I might have a prompt that consistently creates good results, but I am not looking for good; I am looking for images that blow my mind and don't have any issues I cannot correct with editing. So I need to create an insanely large number of images. For example, for "The Horrific Empires of the Orion Nebula" I created well over 7000 images, yet I used fewer than 50 of them.
      I am still trying to refine how I pick the images I use, but I tend to get stricter and stricter and reject more and more images over time. As long as the image doesn't have something messed up in it, like wonky hands, I start to think about the following things. Would this image make a great YouTube Short? Could this image be the YouTube thumbnail for this video? Does it have some X factor to it? Does it tell a story? Could it add something special to the video?
      That is just the initial selection process and only defines what I download to my machine from Midjourney. I get maybe 500 to 1000 images out of the 7000. When I actually start making the video itself I pick 50 or so that I feel are the best for the video I am making. Then once I get those images into my video, I usually have too many images to fit the length of the music track. This is done intentionally. It makes it so I have to make hard choices about which images stay. There are always a few images that I selected that I probably shouldn't have, and it helps force me to remove them.
      I have been playing with Midjourney 6 a lot, and it's amazing. It's going to give me a lot more options for what I will be able to create. My next video for this channel will likely still use Midjourney 5.2, which definitely creates amazing results. I am fairly certain my next video for my other YouTube channel, Xinferis, will use Midjourney 6 though. It's going to allow me to create something slightly more narrative-based, which I am really excited about.
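The seed-locked ablation experiments described in this reply (same seed, one prompt clause removed or added at a time, tested in SDXL before trying the idea in Midjourney) amount to building a small experiment grid. This sketch only plans the runs; the clause-list convention and the function name are assumptions for illustration, and the actual generation calls are deliberately omitted.

```python
def ablation_runs(clauses: list[str], seed: int) -> list[tuple[int, str]]:
    """One baseline run with the full prompt, plus one run per removed
    clause, all sharing the same seed so any visual difference comes
    from the prompt change alone."""
    runs = [(seed, ", ".join(clauses))]  # baseline: full prompt
    for i in range(len(clauses)):
        reduced = clauses[:i] + clauses[i + 1:]  # drop one clause
        runs.append((seed, ", ".join(reduced)))
    return runs

# Hypothetical style prompt split into clauses
style = ["dark retrofuturistic city", "film grain", "volumetric light"]
runs = ablation_runs(style, seed=1234)
```

With a model like SDXL that changes only subtly under a fixed seed, comparing each ablated run against the baseline shows what each clause actually contributes; as noted above, Midjourney reshuffles the whole image even with the seed held constant, so the same grid is harder to read there.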

    • @blockhead1899
      @blockhead1899 6 months ago

      @XinferisTV Wow, I appreciated that. I wish more YouTubers spent that much time explaining stuff as in depth as you do.