Is Defragmenting Useless Now?

  • Published: Jan 23, 2025
  • Science

Comments • 1.4K

  • @alejandroespinoza5680 1 year ago +1896

    Therapist: "The Intel CPU in an AMD socket doesn't exist and can't hurt you"
    Intel CPU in an AMD socket: 2:26

    • @doctordothraki4378 1 year ago +164

      No wonder it's working harder than needed

    • @TalonosPretereo 1 year ago +99

      Even worse, it's an LGA CPU on a PGA socket...

    • @fro248 1 year ago +81

      I guess it was done on purpose to troll

    • @SmallSpoonBrigade 1 year ago +31

      That used to be a common thing back before they started changing socket standards just about every iteration.

    • @TalonosPretereo 1 year ago +4

      @@fro248 definitely

  • @pyromethious 1 year ago +346

    Defragmenting was almost calming sometimes. It was cool to watch it move stuff around.

    • @robsku1 1 year ago +6

      Well, it wouldn't be difficult at all to create a program, possibly a screensaver, that emulates just its visual look.
      I liked the look of the text-based defragment applications better, though (they used basically the same visual representation, just with block-graphics characters instead of pixel-graphics squares), but you could just as easily add an option to choose either look ;)

    • @Replyingtoclowns 1 year ago +1

      It's wild they released this video now. Just a few weeks ago I was thinking in the shower about how we used to defrag drives and now it's practically obsolete: waiting hours just to use the PC again, setting up automatic times for a defrag, or running it before you went to bed.

    • @Torthrodhel 1 year ago +1

      @@robsku1 while a part of the satisfaction of it would absolutely be lost once you know it's not actually solving a problem, there still could be something to that idea nonetheless.

    • @xanxenon1934 1 year ago +3

      I understand you. You can still use Hard Disk Sentinel to test drive health. Select a reading test and you will have a similar view. :)

    • @andrewsparkes6275 3 months ago

      @@robsku1 It was actually just a set of stock animations (on Windows, at least) in the first place. As in, yes, while it was doing a certain process, it showed certain colors of blocks moving around, and then switched to the next animation in the sequence as the next process started. But each individual block didn't correlate to an actual patch of data on your device. That was just meaningless 'theatrics'.
      If you actually paid attention, it was always the same blocks moving to the same places every time you ran the program. And when the animations switched to the next process, the blocks all suddenly jumped places, since it was just a new animation starting fresh from the start (which would always also be the same each time you ran it).
      So you got Animation A followed by Animation B followed by Animation C; it's just that sometimes Animation A, B, or C ran a little longer or shorter, etc.

  • @northwestrepair 1 year ago +1558

    I remember there was a time when you had to manually disable automatic defrag because it was causing issues for SSD drives

    • @bgezal 1 year ago +140

      It was the early days of Windows 7. Later on they made it automatically detect and disable for SSD.

    • @killertruth186 1 year ago +67

      @@bgezal And most users didn't adopt SSDs until a few years after Windows 10 was released.

    • @bgezal 1 year ago +18

      @@killertruth186 Well, I was a late adopter, buying my first SSD in 2011: an OCZ Vertex 2, which I got replaced with a Vertex 3 the same year after it bricked from the SandForce bug.

    • @rightwingsafetysquad9872 1 year ago +70

      In 2011 you were still a very early adopter. Perhaps not very early measured in years, but you were in probably the first 5% of customers to adopt an SSD. SSDs did not surpass HDDs as the most common main system storage for consumer devices until around 2016. And that's just for new devices, it didn't surpass for total systems in use until 2020.

    • @lawrencedoliveiro9104 1 year ago +19

      Meanwhile, Linux developers figured out how to avoid the need for a defragger in the first place.

  • @taznz1 1 year ago +93

    The trick back in the day, before SSDs became the norm, was to make a small partition at the end of the drive, say 4-8 GB, then move the paging file (swap file) to this partition. Before we all had many gigabytes of RAM, the constant expansion and contraction of the paging file caused a huge percentage of disk fragmentation; moving it to its own partition meant it no longer affected files saved to the rest of the drive. This one trick dropped fragmentation by 80%.
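    The mechanism described above (a pagefile that repeatedly grows and shrinks, punching holes between newly written files) can be sketched with a toy first-fit allocator. All sizes and names here are invented for illustration; real NTFS allocation is far more involved.

```python
def allocate(disk, owner, nblocks):
    """First-fit: claim the first free blocks found, wherever they are."""
    placed = 0
    for i, block in enumerate(disk):
        if block is None:
            disk[i] = owner
            placed += 1
            if placed == nblocks:
                return
    raise MemoryError("disk full")

def release(disk, owner):
    for i, block in enumerate(disk):
        if block == owner:
            disk[i] = None

def fragments(disk, owner):
    """Number of contiguous runs belonging to `owner`."""
    runs, prev = 0, None
    for block in disk:
        if block == owner and prev != owner:
            runs += 1
        prev = block
    return runs

def fragmented_files(dedicated_swap_partition):
    disk = [None] * 200
    swap = [None] * 40 if dedicated_swap_partition else disk
    for n in range(10):
        # Pagefile expands (by a varying amount), a file is written,
        # then the pagefile contracts again, leaving a hole behind.
        allocate(swap, "page", 10 if n % 2 == 0 else 6)
        allocate(disk, f"file{n}", 8)
        release(swap, "page")
    return sum(fragments(disk, f"file{n}") > 1 for n in range(10))

print("fragmented files, pagefile on same disk:", fragmented_files(False))
print("fragmented files, pagefile on own partition:", fragmented_files(True))
```

    In this toy run, half the files end up split in two when the pagefile shares the disk, and none do once the pagefile lives in its own partition, which is exactly the effect the comment describes.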

    • @davidtauriainen9116 1 year ago +15

      I still remember the worst fragmented file I'd ever seen in the early 2000's. A pagefile that was in more than 16000 discontiguous locations. If you couldn't use another partition, statically sizing the pagefile was a good second option.

    • @termitreter6545 1 year ago +2

      I think Windows at some point actually switched to just preferring a constant big pagefile rather than one that dynamically changed in size. I always wondered why the pagefile became static, always taking up a few GB. Probably to solve that issue?

    • @malcomreynolds4103 1 year ago +2

      @@davidtauriainen9116 You were always supposed to statically size page files on memory-intensive servers, even if the page file was on a dedicated partition (on its own volume or not); the I/O to grow them on slow disk (including some early SSDs) was crippling. Not statically sizing them would also typically cause Windows to do a full core dump if there was a crash, and on a machine with a lot of memory that could turn an outage that would otherwise last a couple of minutes at most into tens of minutes while it writes the core dump.

    • @krashd 1 year ago +2

      @@davidtauriainen9116 Thankfully defragmenting the page file was as simple as turning off paging, defragmenting the drive if you needed to make space for a contiguous page file, and then turning paging back on. Had to use that trick a few times on relative's PCs as they all had a tendency to fill drives to the brim with crap.

    • @truearmy1953 11 months ago

      The biggest problem with fragmentation is on Android. More than 15 years since it first came to mobile phones, and still no manual defragmentation feature has been implemented. Fragmentation makes flash storage space unusable and makes the "free space" appear smaller and smaller until about 1 GB remains. And users keep clearing that ~1 GB free portion, which in turn puts a heavy erasure load on that same 1 GB of free space, which is being used all the time for app caches (YouTube, Facebook, Instagram, etc.), and then we keep clearing the cache. So not providing a defragmentation option is the worst of it, and it causes flash storage to die. Google is a very slow company: only now, at the start of 2024, did portrait-orientation dark mode come to YouTube, while YouTube for Android was first released around 2010. What takes Microsoft 6 months to develop properly takes Google 20 years in OS-related software development. "Windows for Smartphones" could have been many times better if users had not adopted Android in such high market share.

  • @voidmayonnaise 1 year ago +1524

    Defragmenting was so satisfying, though 😢

    • @SeñorDossierOficial 1 year ago +19

      🥺

    • @ZTenski 1 year ago +140

      I remember sitting there watching the bits moving on all 1 GB of the drive in the mid-90s, wishing it would be done because I wanted to play my Crayola Crayola ROCK! game, but dad said not to touch it until it was done or it would be broken lol.

    • @ozmosis0074 1 year ago +9

      It was..... same with using tweak ui 😢

    • @Choralone422 1 year ago +15

      It was something that was really satisfying, and basically required every so often, in the pre-XP days.

    • @SpeedRebirth 1 year ago +14

      #ItWasNotBetterBefore

  • @egoheals 1 year ago +573

    Apple's claim about never needing to defragment usually butted up against an odd, somewhat edge case: having to resize a drive. If you needed to resize a partition and there was data in the area that needed to be unallocated, OS X would simply refuse to do it. I ran across this problem quite often; the only solution in that case would be to do a full defragmentation, but you usually needed a third-party tool for this as OS X never really had a nice GUI tool that people could easily use to do the job.

    • @AlanTheBeast100 1 year ago +20

      Partitioning is another thing that has been rare in my life since about pre-2000 ...

    • @FaZekiller-qe3uf 1 year ago +36

      @@AlanTheBeast100 I do it often, but I'm a programmer who enjoys low-level software.

    • @AlanTheBeast100 1 year ago +9

      @@FaZekiller-qe3uf
      I fail to see the relationship between "low level software" and de-fragging - but then I program so deep in the weeds that fragmentation is the least of my worries.

    • @jdilksjr 1 year ago +43

      @@AlanTheBeast100 Wake up Alan, he was responding to your comment about "Partitioning".

    • @SterkeYerke5555 1 year ago +5

      @@AlanTheBeast100 Nearly every mac I've ever had had at least two partitions for Bootcamp (or Mac OS 9 on PowerPC macs)

  • @smileymattj 1 year ago +645

    There were some important key points this video missed.
    Ever since Windows 7, there has been a scheduled task built into Windows that automatically defrags drives. So the biggest reason you don't need to do it manually is that Windows already does it.
    Sometime in Windows 8 or 10, defrag.exe was given SSD support. It will recognize that a drive is an SSD and not defrag it; instead it will perform TRIM on the SSD. The above-mentioned scheduled task still exists, so no matter which drive type you have, or mixture, defrag.exe will maintain each drive using the proper method for its type.

    • @the_hamrat 1 year ago +22

      Came here to say just this

    • @Kualinar 1 year ago +40

      Another great improvement : Since Windows 7, defrag.exe can work properly on drives that are filled to more than 90% of their capacity.
      Before that, it looked as if it worked, but didn't do anything at all in those cases.

    • @Mandrag0ras 1 year ago +1

      Partly right. Windows' scheduled task DOES defrag your SSD once every month if volume snapshots are enabled. Google search "scott hanselman Does Windows defragment your SSD?"

    • @thewiirocks 1 year ago +38

      #3 is that the larger sizes of hard drives compared to the file sizes means that OSes can more easily allocate contiguous space without needing to rely on fragmentation. (People always forget that fragmentation was originally a *feature* meant to make the entire space of a drive usable.)

    • @bilateralrope8643 1 year ago +15

      Yes. It's all automated now.
      Just like my taxes.

  • @DoctorX17 1 year ago +242

    2:27 not sure if the editor put an Intel chip on AM4 to trigger someone, or just a silly mistake

    • @Sparta_I 1 year ago +7

      I thought the same, either way it gave me anxiety 😂

    • @2ELI0 1 year ago +21

      was thinking the same, maybe it was done on purpose to see who noticed

    • @DoctorX17 1 year ago +1

      @@2ELI0 at least 3 of us I guess XD

    • @TheRealSkeletor 1 year ago +43

      It was put there to make someone comment this, which increases engagement (and thereby the amount of money they make from this video).

    • @ddevin 1 year ago +13

      @@TheRealSkeletor 200 IQ move

  • @nicholas_scott 1 year ago +35

    I remember fragmentation was mostly an issue if you ran out of space on the hard drive; then you knew the fragmentation would be terrible. An old trick was to delete some huge files, then move the files from one drive to another and back. That would help defrag a chunk without jumping through the hoops.

  • @jh441 1 year ago +44

    I've been migrating servers to VMs for customers over the last 10 years. I use Defraggler constantly to move sectors to the front and shrink the disk to save VM space if necessary. Nowadays backup and restore applications can help with that, but defragging was helpful most of the time.

  • @bumbalaaa 1 year ago +84

    2:26 Intel CPU in an AM4 socket is one of the more cursed things I’ve seen so far this week

    • @ZepG 1 year ago

      I agree, why would you put a good CPU in a crappy AMD MB?

  • @paineretlaw3344 1 year ago +272

    I know that it's not a big deal, but when it showed the CPU sweating, that was an Intel chip in an AMD socket lol

    • @nicolasrodriguez5054 1 year ago

      lmaoo

    • @HeresorLegacy 1 year ago +3

      I was about to point that out as well ^^ completely inconsequential, but still... how?

    • @paulforvance2028 1 year ago +14

      Average intel cpu in Ohio 💀

    • @HyperX_D 1 year ago

      ​@@paulforvance2028 🎉👍👍

    • @Splarkszter 1 year ago +2

      The quality control at LMG is null 💀

  • @janno288 1 year ago +379

    I would like to see Linus do a performance comparison between drives at different file-fragmentation percentages (1%, 5%, 10%, 15%, etc.) and a drive at 0% fragmentation.

    • @jakobole 1 year ago +20

      Yes. I never noticed a difference back in 1998....

    • @Lofote 1 year ago +14

      When commonly accessed files were fragmented, like the pagefile or the registry, the performance drop was huge. In pre-Vista days you used a special Sysinternals tool called PageDefrag that fixed that on every bootup.
      If your program or document files were fragmented, the performance drop was negligible.

    • @franchocou 1 year ago

      ​@@busimagen😮

    • @Mr.Morden 1 year ago +4

      TBH I never noticed a difference even in Windows 95/98, before NTFS fixed **A LOT** of fundamental file corruption problems.

    • @Mr.Morden 1 year ago +1

      @@Lofote I remember the XP page file only being in like 2-4 locations max. That's not going to cause any seek delays, besides next time you boot up it'll just fragment it again. I think the virtual memory was split up and managed in such a way it wasn't an issue.

  • @Anacronian 1 year ago +10

    Really appreciate that Anthony put on some nice nail polish for those close-ups of the M.2's.

  • @JGreen-le8xx 1 year ago +196

    Watching that old defrag going is ASMR for geeks. ❤

    • @mattk6827 1 year ago +10

      I used to enjoy watching it sometimes. lol.

    • @thebasketballhistorian3291 1 year ago +2

      Always made me feel cool when some computer newb was looking over my shoulder and I told them "I'm cleaning the mainframe". 😂

    • @davidpardy 1 year ago +4

      I hate ASMR, but it was so satisfying seeing all those little squares move into their rightful places. It was even better in DOS defrag.

  • @reallybigjohnson 1 year ago +27

    I have to admit that I actually enjoyed watching the defragger that I used. I forgot the name but it had a circle and it was kind of relaxing to watch the little blocks organize themselves.

    • @BeamDeam 1 year ago +5

      Defraggler?

    • @krashd 1 year ago +1

      UltimateDefrag? I use that because I like how it uses a platter to display the contents of the drive rather than a box.

    • @vlefteris 9 months ago

      Diskeeper was the best!

  • @ErrorMessageNotFound 1 year ago +68

    It might be worth mentioning (because I've never seen anyone talk about it) that windows has integrated "TRIM" into "Defragment and Optimize Drives" such that if you run it on an SSD it will TRIM the drive instead.

    • @YounesLayachi 1 year ago

      @@asificam1 USB SSDs do support TRIM commands, you just have to make sure with a google search that it mentions "UASP+TRIM" (and SMART) before buying, and avoid black box "portable SSDs" since they hide their components. SSD enclosures typically use standard bridges that are better supported and can be firmware updated. You can also lookup the bridge by model number directly instead of the enclosure model

    • @castonyoung7514 1 year ago +1

      But what the heck is TRIM?

    • @YounesLayachi 1 year ago

      @@castonyoung7514 Google it

    • @termitreter6545 1 year ago

      The first time I actually got an SSD for Windows, I did a deep dive and checked out all the tips and things I should consider when using an SSD. I knew about the accidental SSD defrag stuff.
      But apparently, even with Windows 7 IIRC, at that point everything was figured out and I had to do nothing. Just put it in, transfer the Windows partition, and that's it.

    • @termitreter6545 1 year ago +2

      @@castonyoung7514 IIRC, TRIM is just an optional feature to make SSD memory management a bit more efficient. Idk how exactly it works.

  • @AlanTheBeast100 1 year ago +7

    I don't recall the last time I defragged a disk; probably pre-2000.
    Once drives reach a certain huge size it just doesn't matter much: a rewrite of a file will leave a small free block, but eventually these small blocks become adjacent and make room for a larger file.

  • @IgabodDobagi 1 year ago +24

    I literally haven't used defrag since like 2003. I actually kinda forgot it existed for a while there. But for the most part I just don't experience the same slowdown as my computers in the 90s did. This is probably just due to the sheer increase in computer speeds since those days. Back then, when you had a 10% increase in load times it was very noticeable because load times by default were pretty slow. But these days 10% may only be a second or less.

    • @gabrielandy9272 1 year ago +7

      Nowadays Windows is set to automatic defrag anyway; just check your defrag settings, it should be set to run once a week or month depending on the PC. So even if you don't do it, Windows is ALREADY doing it in the background automatically. Windows is a much more automated system nowadays.

    • @Ikouy 1 year ago

      A lot of it also has to do with hard drive space and the density of the data. Say a 1GB Caviar with Windows 9x: you would easily use up half the disk with the OS alone, so a lot of time was spent seeking for files. A Windows XP installation on a 40GB drive only uses roughly 5% (not including RAM-related files), so the amount of time the hard drive has to seek is significantly shorter.

    • @dogtrollololl 1 year ago

      Windows defrag does SSD TRIM automatically, so remember not to disable auto defrag like in the old days

    • @krashd 1 year ago

      @@Ikouy There is also the fact that the bigger a hard drive the faster it seeks, reads and writes since in order to go from 1GB to 100GB to 10TB the data on the eight or so platters has to become denser while the drive still rotates at the same speed and the heads still move at the same speed. So while it may have taken 4 seconds to read 100MB in 1998 most drives can read 10GB in the same time today. My 8TB is faster than my 6TB which in turn is faster than my 4TB, only my 1TB C: drive is faster and that is because last year I gave in to the temptation of fast boot up speeds and bought an SSD for Windows. 🤩

    • @Ikouy 1 year ago

      @@krashd It's what I said, just more technical.

  • @mo_mo1995 1 year ago +11

    Another reason fragmentation is not much of an issue nowadays is that hard drives are crazy large today compared with the era when defragmentation mattered. Most home users can hardly fill up their drives, meaning there is always fresh space for new files to be written into, instead of fitting into fragmented gaps.
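    A quick way to see why mostly empty drives fragment less: randomly fill a simulated disk to different levels and look at the largest contiguous free extent left for a new file. All numbers here are invented toy values.

```python
import random

def largest_free_extent(disk_blocks, fill, seed=7):
    """Largest run of free blocks after filling `fill` of the disk at random."""
    rng = random.Random(seed)
    disk = [False] * disk_blocks                          # False = free block
    for i in rng.sample(range(disk_blocks), int(fill * disk_blocks)):
        disk[i] = True
    best = run = 0
    for used in disk:
        run = 0 if used else run + 1                      # extend or reset run
        best = max(best, run)
    return best

for fill in (0.3, 0.6, 0.9):
    print(f"{fill:.0%} full -> largest free extent: "
          f"{largest_free_extent(4096, fill)} blocks")
```

    Any file bigger than the largest free extent has to fragment; at 90% full that happens even for small files, while a one-third-full drive almost always has a contiguous run to write into.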

    • @termitreter6545 1 year ago +1

      I mean, I got 3 TB of SSD storage and that's full, with 150 GB free. With some games over 100 GB and tons of games on Steam/Game Pass/Epic, it's not that hard to fill up.

    • @KasumiRINA 1 year ago

      If you do YouTube, recording, say, one game's playthrough is hundreds of gigs already. I need dozens of terabytes; most space is in use at all times and constantly gets written, rewritten, overwritten: games installed, copied, backed up, modded, recorded, edited, various temp files Photoshop makes, etc. I have NO IDEA how people live with a 1-2 TB SSD and nothing else! Reminds me of a PC I used as a kid with like 2 GB: it meant one game installed and nothing else. And if someone is into graphics, wanna look at Daz3D or Stable Diffusion folder sizes? My friend films street videos herself, and at a 27 Mbps bitrate, 1080p footage is either deleted or hogs the drive. And that's not even 4K or higher!

    • @kashiichan 1 year ago

      Neither of you are "most home users" though, who generally use computers largely for work or study purposes (document editing etc).

  • @innxir 1 year ago +19

    Can we talk about how cursed the graphic at 2:25 is? The 9th Gen Intel® Core™ i5 in a "SOCKET AM4" is def not a thing I expected today lul

  • @Vospi 1 year ago

    Why are the colors in the 3:43 segment SO satisfying to me? The combinations with metallic, and grays, and violet background, even though I'm not into the cyan-violet color scheme... mmmm, chef's kiss.

  • @facundosoler2200 1 year ago +14

    Anthony is great, he's awesome as a host, very knowledgeable and a nice guy overall. 10/10 guys :)

  • @JodyBruchon 1 year ago +5

    *There IS a reason to defragment modern drives, even fast SSDs.* All modern filesystems are extent-based, and all of them generally treat the disk's free space as just another file of a special reserved type. Free space can become fragmented, sometimes severely so. I just ran _defrag c: /a /v_ and my free space is in 7,000 fragments. It might not sound like a lot, but it means the free space is broken into that many pieces, all of different sizes, and every single file that's created or appended will require a scan and modification of that free list to allocate the new space. Even though the free space information is stored in indirect blocks that compose a binary tree (thus it doesn't actually touch 7,000 items), there are still up to 13 levels of tree nodes to search through (assuming it's perfectly balanced, which most are not).
    A defrag consolidates free space, reducing the size of the free space file metadata. The same applies to file fragmentation: SSDs have no mechanical latency, so jumping between fragments incurs basically zero penalty, but there is still more overhead to read and process the metadata for more fragments, and there is often a very small speed drop when a sequential read has to end and resume at a different location on the SSD (this is why random 4K reads in CrystalDiskMark are so much worse). On Windows, file fragmentation seems to increase more aggressively on SSDs, and running defrag on a drive with a free space count in the hundreds of thousands or even millions *will improve the performance of the filesystem.*
    The effect might be far less pronounced than on a spinning disk, but it's by no means small enough to ignore completely. You should clean out all the temp files you can and then defragment your SSDs about once a month. If you're worried about the added writes from a defrag burning out your SSD, you're worrying about nothing, because any remotely decent SSD controller chip automatically deduplicates writes: data being written that's identical to a block already stored doesn't cost a new flash block for every piece of data moved; the controller only writes new filesystem- and device-level metadata, and that device-level metadata points to the already-existing block. _If you encrypt your drive then this is no longer true, though, so maybe defrag encrypted volumes a little less often._
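    The free-space-fragment count reported by `defrag c: /a /v` can be modeled directly: count the runs of free blocks, then see how a full consolidation collapses them into one. A minimal sketch with an invented block layout:

```python
def free_extents(disk):
    """Count runs of free blocks (None): the 'free space fragments' figure."""
    count, prev_free = 0, False
    for block in disk:
        is_free = block is None
        if is_free and not prev_free:   # a new free run starts here
            count += 1
        prev_free = is_free
    return count

def consolidate(disk):
    """Naive full defrag: slide every used block to the front, in order."""
    used = [b for b in disk if b is not None]
    return used + [None] * disk.count(None)

disk = ["a", None, "b", "b", None, None, "c", None, "a", None]
print("free-space fragments before:", free_extents(disk))                # 4
print("free-space fragments after: ", free_extents(consolidate(disk)))   # 1
```

    After consolidation the allocator only ever has one free extent to consider, which is the metadata-shrinking effect the comment describes.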

  • @thephantom1492 1 year ago +5

    I will add that the FAT specification said to write to the first available cluster, starting from the start of the disk, thereby encouraging fragmentation. Newer OSes violated the spec (for a good reason) to avoid doing this, and attempt to continue writing in a continuous block. And, instead of writing at the first available cluster, they try to find a chunk that is more appropriate in size (not a single sector...), which reduces the amount of fragmentation.
    The use of disk caching also helps a lot. By buffering the writes, the OS has an idea of the size of the data to be written to the disk. Instead of writing cluster by cluster, it can delay the write and then have a few hundred clusters to write at once. Now it can find a chunk big enough for that amount, instead of just an empty cluster that might be surrounded by used ones...
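    The two allocation policies contrasted above, writing at the first free cluster (the FAT-spec behavior) versus looking for a single free run big enough for the whole file, can be compared on a toy disk (layout invented for illustration):

```python
def first_fit(disk, owner, n):
    """FAT-style: take the first n free clusters, wherever they are."""
    taken = 0
    for i, b in enumerate(disk):
        if b is None:
            disk[i] = owner
            taken += 1
            if taken == n:
                return

def chunk_fit(disk, owner, n):
    """Newer-style: prefer one free run big enough to hold the whole file."""
    start = run = 0
    for i, b in enumerate(disk):
        if b is None:
            if run == 0:
                start = i
            run += 1
            if run == n:                       # found a large-enough run
                for j in range(start, start + n):
                    disk[j] = owner
                return
        else:
            run = 0
    first_fit(disk, owner, n)                  # no single run fits: fragment

def fragments(disk, owner):
    runs, prev = 0, None
    for b in disk:
        if b == owner and prev != owner:
            runs += 1
        prev = b
    return runs

# Small scattered holes up front, one big free region at the end.
layout = ["x", None, "x", None, None, "x", None, "x"] + [None] * 8
fat, modern = list(layout), list(layout)
first_fit(fat, "f", 6)
chunk_fit(modern, "f", 6)
print("first free cluster:", fragments(fat, "f"), "fragments")
print("sized chunk:       ", fragments(modern, "f"), "fragment")
```

    On this layout the FAT policy scatters the file across four pieces, while the chunk-seeking policy writes it in one, which is exactly the spec violation the comment credits newer OSes with.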

  • @trogdor8764 1 year ago +1

    Man, I was just thinking I hadn't seen you on LTT in a while. Glad you're still here.

  • @omegadecisive 1 year ago +4

    I remember watching my pc defrag on both 98 and XP and I'm not afraid to say I enjoyed it, so satisfying. Took hours though, and I wondered why I didn't have many friends when I was younger...

  • @sysbofh 1 year ago +1

    Another thing that affects fragmentation is the way the position of files is chosen.
    With "best fit", a file is written as close as possible to another. If the file is later modified, there is a chance of fragmentation, as it will no longer fit in the existing space.
    With "worst fit", exactly the opposite is done: the file is written where it will have the most free space around it. This way it can be modified later, and having space available to grow means it will not fragment.
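    The best-fit versus worst-fit trade-off described above shows up as soon as a placed file grows. A toy sketch, with hole sizes invented so the effect is visible:

```python
def holes(disk):
    """(start, length) of every free run."""
    out, start = [], None
    for i, b in enumerate(disk):
        if b is None and start is None:
            start = i
        if b is not None and start is not None:
            out.append((start, i - start))
            start = None
    if start is not None:
        out.append((start, len(disk) - start))
    return out

def place(disk, owner, n, policy):
    """Best fit picks the smallest hole that fits; worst fit the largest."""
    fits = [h for h in holes(disk) if h[1] >= n]
    pick = min if policy == "best" else max
    start, _ = pick(fits, key=lambda h: h[1])
    for i in range(start, start + n):
        disk[i] = owner

def grow(disk, owner, n):
    """Append n blocks: extend in place while possible, then spill elsewhere."""
    end = max(i for i, b in enumerate(disk) if b == owner) + 1
    while n and end < len(disk) and disk[end] is None:
        disk[end] = owner
        end += 1
        n -= 1
    for i, b in enumerate(disk):        # overflow lands wherever it fits
        if n and b is None:
            disk[i] = owner
            n -= 1

def fragments(disk, owner):
    runs, prev = 0, None
    for b in disk:
        if b == owner and prev != owner:
            runs += 1
        prev = b
    return runs

for policy in ("best", "worst"):
    disk = [None] * 4 + ["x"] + [None] * 20 + ["x"]
    place(disk, "f", 4, policy)   # best fit fills the small hole exactly...
    grow(disk, "f", 3)            # ...so later growth has nowhere to extend
    print(f"{policy} fit -> {fragments(disk, 'f')} fragment(s)")
```

    Best fit leaves the grown file in two pieces; worst fit placed it next to room to grow, so it stays contiguous, which is the trade-off the comment describes.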

  • @nadeemhussain4780 1 year ago +5

    I like how at 2:27 an Intel cpu is placed on an AMD motherboard

  • @carlosreyesf19 1 year ago

    0:40 Sad story: my SSD did Kapoom before reaching the write limit. Kapoom like in, completely dead. Unrecoverable.
    Moral of the story: write on your SSD to your heart's content haha

  • @Randmagnum69 1 year ago +7

    Always enjoy Anthony. He is a great presenter.

  • @florianrueth1412 1 year ago +1

    The i5 on the AM4 socket was a really nice touch @2:26

  • @SingleRacerSVR 1 year ago +11

    Video suggestion: you mentioned COMPRESSION from the 1:50 mark. If you haven't done a video on it yet, could you do a short one where you ask your LTT staff in what circumstance or situation they themselves might use HDD compression (e.g. a photographer with lots of images, or a gamer to get more games loaded, etc.)?

    • @zyeborm 1 year ago +13

      Using drive compression on anything with "media" as the main use (photos and videos etc) is generally not going to come out in front.

    • @SingleRacerSVR 1 year ago +1

      @@zyeborm thanks. And as a PC user from the original DOS days, that was always my own belief too. But that's why I'm so curious what reason someone would ever use it in the first place. (unless most people avoid it like the plague, anyway)

    • @zyeborm 1 year ago +4

      @@SingleRacerSVR If you have compressible data then it's useful; code, text, stuff like that compresses heaps.
      For general business data with ZFS, people get like 1.2x-1.4x compression (i.e. a 20% reduction), but hey, a free 20% is nice, right? Performance is generally increased, as disks are slower than CPUs.
      On a log partition I was getting 40-60x compression, which was nice.
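      The "text compresses heaps, media doesn't" point is easy to reproduce with zlib: repetitive log-style text shrinks dramatically, while random bytes (a stand-in for already-compressed photos and video) barely move. The sample data is invented.

```python
import os
import zlib

samples = {
    "log-style text": b"2024-01-01 12:00:00 INFO GET /index.html 200 OK\n" * 2000,
    "random bytes (proxy for jpeg/mp4)": os.urandom(90_000),
}
for name, data in samples.items():
    compressed = zlib.compress(data, 6)            # default-ish level 6
    print(f"{name:34s} {len(data) / len(compressed):7.1f}x")
```

      Synthetic repeated lines compress far beyond the ZFS ratios quoted above; real business data sits much lower, and random/media data gains essentially nothing, which is why compressing a media drive rarely pays off.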

    • @Pro_Triforcer 1 year ago +2

      Slow HDDs could be a good use case for compression. Smaller files are faster to read, and decompression latency should be negligible even on slower processors.
      Games definitely aren't. If there is a patch that changes a few bytes in a 30GB .pak file, you'll have to recompress the entire thing, which takes ages. The downloads folder should be uncompressed for the same reason, especially if you use torrents.

    • @SingleRacerSVR 1 year ago

      @@Pro_Triforcer thanks for the interesting info, guys

  • @malcomreynolds4103 1 year ago +1

    NTFS doesn't need allocate on flush. It writes two different ways: in streams when the disk subsystem can perform contiguous writes synchronously. It is not limited to small writes only that fit within a single i/o, but larger writes as well depending on how the OS was able to buffer the writes. Otherwise it writes in blocks, optimizing the write of an entire file, though this would normally only happen in large writes and more commonly when the size of the write is known before the write starts.

  • @doodskie999 1 year ago +3

    I remember doing this in the late 2000s.
    I would grab some snacks or watch TV while doing a defrag.
    What humble times we used to live in.

  • @allanrichardson3135 1 year ago

    Extents are older than PCs. In both DOS/360, where they were allocated manually, and all versions of OS/360, where file extents and free space are managed automatically, every file consists of one or more extents!

  • @Paul_VB 1 year ago +8

    I like the image of an Intel CPU on an AM4 socket at 2:26

  • @TheHitmanAgent 1 year ago +2

    2:26 I see what you did there! "9th Gen Intel Core i5" in a "Socket AM4"

  • @kekistanifreedomfighter4197 1 year ago +3

    props to the editor for sneaking in 2:25 lol

  • @hallcrash 1 year ago +1

    I have very large files on 16TB mechanical HDs configured in a raid1 [btrfs], and they get fragmented when writing new files. We are talking 50 gigabyte files. I've used the filefrag command every now and again, but I often use e4defrag to simplify workloads. They DO fragment when writing, but if managed regularly, the data will be much more responsive.

    • @KasumiRINA 1 year ago

      I am not running with 50Gb files but at constant video recording and writing, I just stopped bothering with fragmentation now. I just keep getting more hard drives lol.

  • @richardshaw7595
    @richardshaw7595 1 year ago +38

    Always enjoy content from Anthony. More please!

    • @dernthehermit3541
      @dernthehermit3541 1 year ago +2

      Yeah, Anthony is one of the top 3 LTT employees, alongside Alex and Riley. IMO anyway.

    • @ArniesTech
      @ArniesTech 1 year ago +3

      My favourite host as well 💪😎

  • @teomanefeaycan4854
    @teomanefeaycan4854 1 year ago +1

    This video getting uploaded just as I am defragmenting the HDDs on both my PCs is quite the coincidence

  • @IIGrayfoxII
    @IIGrayfoxII 1 year ago +58

    Here is why defragging was more or less necessary on older HDDs.
    We did not have SATA; we had PATA.
    That standard used a wide ribbon cable to transmit data.
    PATA had several modes of communication.
    Programmed IO, or PIO, came first, and it needed to involve the CPU for every data request sent or received.
    Since CPUs in the 90s were much slower, being single-core and under 800MHz or so, this had a noticeable impact.
    There were 7 versions of PIO, with these maximum data rates:
    Mode 0: 3MB/s
    Mode 1: 5MB/s
    Mode 2: 8MB/s
    Mode 3: 11MB/s
    Mode 4: 16MB/s
    Mode 5: 20MB/s
    Mode 6: 25MB/s
    So these were slow and consumed CPU cycles; if the drive was defragged, the data could be read faster.
    We then got something called Direct Memory Access, which allowed the drive to send data to RAM directly instead of waiting on the CPU.
    DMA was way faster than PIO after version 2.
    The most common speeds were:
    66MB/s
    100MB/s
    133MB/s
    167MB/s
    While 167MB/s was not bad, real progress was made with SATA, as it had a feature called Native Command Queuing.
    So instead of the HDD pulling a file in order 12345, it could pull data in any order, like 24153; the file was rebuilt correctly, so it did not really matter if the drive was fragmented.
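
To put rough numbers on the seek cost the thread is describing, here is a back-of-the-envelope Python sketch. The figures (9 ms per seek including rotational latency, 100 MB/s sustained throughput) are illustrative assumptions, not measurements of any particular drive:

```python
# Toy model: time to read a file from an HDD, contiguous vs. fragmented.
# Assumed figures for illustration only: ~9 ms per head seek (including
# rotational latency), 100 MB/s sustained sequential throughput.

SEEK_MS = 9.0
THROUGHPUT_MB_S = 100.0

def read_time_ms(file_mb: float, fragments: int) -> float:
    """Sequential transfer time plus one head seek per fragment."""
    transfer_ms = file_mb / THROUGHPUT_MB_S * 1000.0
    return transfer_ms + fragments * SEEK_MS

# A 500 MB file in one piece vs. shattered into 2000 fragments:
print(round(read_time_ms(500, 1)))     # 5009 ms: transfer dominates
print(round(read_time_ms(500, 2000)))  # 23000 ms: seeks dominate
```

Faster buses and NCQ shrink the transfer term and hide some latency, but the per-fragment seek term is mechanical; only defragmentation (or an SSD) removes it.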

    • @N0zer0
      @N0zer0 1 year ago

      also HDD cache sizes grew bigger

    • @IIGrayfoxII
      @IIGrayfoxII 1 year ago

      @@N0zer0 Those normally help when writing data rather than reading data.
      But even then, you're still limited by the bus speed and by features like NCQ being present.

    • @Terry2020
      @Terry2020 1 year ago

      thx for sharing.

    • @Daggett1122
      @Daggett1122 1 year ago +2

      It's not throughput, it's head seek time. The head takes a few milliseconds to jump to a new spot on the platter. If your file has 15k fragments, that's a lot of waiting for the head to seek.

    • @IIGrayfoxII
      @IIGrayfoxII 1 year ago

      @@Daggett1122 But throughput also helps, not to mention CPU and RAM.
      Newer CPUs and RAM are faster, so data can be processed and loaded into RAM faster.
      I found that I needed to defrag less and less with newer systems because everything was that much faster overall.

  • @mwbgaming28
    @mwbgaming28 1 year ago +1

    Delayed allocation has one HUGE issue: unexpected power loss.
    If your data is sitting in RAM waiting for space to be allocated on the drive, and the computer loses power for whatever reason, that data is GONE FOREVER unless you're using a laptop with a working battery, or you have a UPS.
    I'll take drive fragmentation over having data sit in RAM when it doesn't need to.

  • @krunkske5733
    @krunkske5733 1 year ago +5

    2:30 sees Intel chip in AM4 socket: **sigh** *goes to comments*

  • @scbtripwire
    @scbtripwire 1 year ago

    Watching that defragmenting back in the day, whether in DOS or windows, was *so* hypnotic! I would watch that for ages, it was like watching a lit and crackling fireplace.

  • @austininmedford
    @austininmedford 1 year ago +62

    As long as Anthony is making videos, I will continue to watch indefinitely.

    • @bt4670
      @bt4670 1 year ago +2

      Got news for ya

    • @Laurabeck329
      @Laurabeck329 1 year ago +2

      @@bt4670 I hope she comes back after the backlash has cooled down

  • @HalfBloodCreeGames
    @HalfBloodCreeGames 1 year ago +1

    I remember my dad had a computer business when I was a kid, and he got this computer from someone because it was "slow and obsolete" and they wanted my dad to look at it. After we set it up, our first thought was "Wow, this is slow. Let's try defrag." And it was nothing but a sea of red. It took multiple passes just to start seeing a change in the grid color. It was crazy.

  • @yuriserigne5524
    @yuriserigne5524 1 year ago +3

    The Windows defragmentation tool does not only defragment: it "optimizes", and says that even SSDs need to be optimized. I think it just TRIMs when you do that and does not defragment the SSDs, which could be necessary on old SSDs which don't TRIM automatically.

    • @fqdn
      @fqdn 1 year ago

      Correct, Microsoft just never created a separate tool for trimming SSDs. You can tell that it will trim a drive by the fact that "Analyze" is greyed out and that it says "x days since last retrim". Edit: Actually, you can just look at the "Media type" section; if it says "Solid state drive", it'll trim it.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @@fqdn It does defrag periodically, but only when fragmentation gets really high, to the point of losing meaningful amounts of storage and the controller getting overwhelmed having to pick up too many fragments.

  • @Wild_Sheep_Chase
    @Wild_Sheep_Chase 1 year ago +2

    I've heard that it's usually a good idea to defragment your drive (both HDD and SSD) before messing around with any partitions--the concept being that you want to compact the space being used before partitioning off a chunk of the storage.
    Curious to find out if it's actually necessary or if modern conveniences make that step irrelevant.

    • @KasumiRINA
      @KasumiRINA 1 year ago

      I only partitioned freshly formatted drives; it's possible to back up all data and just wipe the drive before partitioning.

    • @fireztonez-teamepixcraft3993
      @fireztonez-teamepixcraft3993 1 year ago

      This is not totally true. It is not good to defrag an SSD, and in fact most good defrag utilities, including the one included in Windows, will not let you defrag an SSD.
      When reducing the size of a partition, any written data outside the new partition space is usually moved automatically, so defragmentation is not really necessary even on a hard drive, though it might help the job go faster. In any case, when you reduce a partition's size you should back up your data just in case; partition corruption or data loss may happen.
      Defragmenting an SSD is bad: it will reduce the lifespan of the SSD and can only reduce its overall performance, not improve it.

  • @Mephy.
    @Mephy. 1 year ago +3

    I have zero idea what "Defragmenting" is but that was my icon for my "Games" folder back in the XP times.

    • @lasarith2
      @lasarith2 1 year ago

      Imagine a book, but all the pages are mixed up: instead of page 1,2,3,4 etc. it's 4,2,3,1.

  • @jabezhane
    @jabezhane 1 year ago

    I really don't worry about write cycles anymore. I did in 2012, when a 120GB SSD cost a fortune. Now they get bought, used and replaced before they've seen much wear.

  • @rootatlogic5216
    @rootatlogic5216 1 year ago +3

    These videos help so much with keeping up with everything

  • @rimzul9466
    @rimzul9466 1 year ago +1

    I remember having a 32GB SATA 2 SSD just for the OS when it was bleeding-edge tech. It was the single most dramatic upgrade I ever felt on my PC. Now I haven't used a spinning drive in any of my PCs in more than a decade.

  • @FPVwineUK
    @FPVwineUK 1 year ago +4

    Video suggestion: SSD lifetime and why/how to back up on other storage spaces with longer life cycles.

  • @Killertamagotchi
    @Killertamagotchi 1 year ago +1

    For this reason, with SSDs, the defrag tool also speaks of optimization instead of defragmentation: the memory cells are refreshed instead of the data being pushed back and forth until it is properly sorted, as happens with defragmentation.
    Since Windows Vista this happens automatically in the background, at intervals, when the PC isn't being used much.

  • @lukas_ls
    @lukas_ls 1 year ago +4

    Fun fact: Windows still defragments SSDs due to limitations of the file system, and it does so once a month. There is a weekly task that optimizes SSDs (manual TRIM), but it still does a full defragmentation once a month.

    • @TalesOfWar
      @TalesOfWar 1 year ago +1

      This, among other reasons, is why Microsoft have been promising a new file system since at least the XP days as a replacement for NTFS, which at this point is almost 30 years old. Even by XP it was using a bunch of shims and duct-tape-like patching to do things that more modern, or at least more robust, file systems like ext were already doing. I remember WinFS being a HUGE marketing point of the original release of Vista (when it was known as Longhorn), before they had to go back and basically rewrite half the security in XP SP2 after it turned out "security" wasn't really a thing. That delayed Vista and made them use Windows Server 2003 as the code base rather than XP, which meant they had to restart development. WinFS, and a lot of other really interesting things like an arguably far, far better UI than the glass thing we ended up with, were thrown out of the window...s (sorry, I had to).

    • @MrRedRye
      @MrRedRye 1 year ago +1

      I was looking for a comment saying this as it's not widely known. You can see when windows runs a defrag in Event Viewer or if you monitor total bytes written on the drive itself. I've excluded my SSDs from the monthly optimisation and made scheduled ReTRIM commands instead to avoid it running unnecessary defrags. I manually run a defrag every 12 months or so to sort out any file system issues.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @@TalesOfWar No, MS has not been promising a new file system. Even ext4 is absolutely primitive compared to even early NTFS.
      WinFS was never going to be a file system. It was a planned storage management system for tiering and grouping data, and it would have used NTFS or FAT as its file system. As a tool for optimizing file system performance, it became irrelevant as files written as binary blobs pretty much went away in the early 2000s, negating the need for a separate system to manage writes.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @Zaydan Alfariz you are referring to ReFS. It's not quite the same thing. ReFS is not really useful for desktops or workstations. An edge case may be to use several cheap SSDs in a cheap desktop with software RAID 5 to improve performance and reliability, but most of the time you will be better off buying a single better SSD.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @Zaydan Alfariz ReFS came out about 10-12 years ago, but it isn't used for the same things ext, NTFS or FAT would be used for; it is more for storage applications. They added it to desktop Windows. I have no idea what purpose it serves on a desktop, but it's probably just as easy to include it as to exclude it, since it is in Windows Server already.

  • @Sonny_McMacsson
    @Sonny_McMacsson 1 year ago +1

    When not fragmented, even non-mechanical persistent storage can be faster, because fewer read/write commands have to be sent to the controller, which can take a while to execute. Maybe that's improved lately through a different interface, or not; I don't know.

  • @MMWielebny
    @MMWielebny 1 year ago +3

    Extents are a pretty modern method. Older filesystems like JFS tried to spread files across different random parts of the disk, while FAT wrote one file after another. Thanks to this, if you do not use a large part of the total storage, you will not hit problems from fragmentation.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      extents are not modern; HPFS and NTFS implemented them over 30 years ago

    • @MMWielebny
      @MMWielebny 1 year ago

      @@malcomreynolds4103 NTFS is not 30, more like 20 years old. Perhaps you were thinking about inodes or bitmaps; those are even more than 30 years old. That's why I used the term "pretty modern".

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @@MMWielebny NTFS was created in 1992

    • @MMWielebny
      @MMWielebny 1 year ago

      OK, my bad, but I still stand by my opinion that it was pretty modern ;)

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @@MMWielebny I was more pointing out that Anthony was wrong in what he said about it. In NTFS I think they call them clusters, but I always get confused, as there are several similar terms that mean different things between vendors, and similar but different things in the database world that I primarily work in.

  • @cliffontheroad
    @cliffontheroad 1 year ago

    I would love to hear about how unused "frames" are stored and allocated. Hard drives use/address a track and sector. I've surmised deleted data stays on the disk but is marked. When there is no more disk space available, does the machine fix itself by overwriting those "deleted" items? In my mind, if I deleted items/files/folders, there would not be a "disk full" condition.
    From my knowledge of the Pick Multivalue OS: it had an Overflow Table. Delete an item/folder and the frames were put back in the overflow table. When you needed a block of 10K frames, you got it. If you needed one, you got the lowest frame number.

  • @BrainiacManiac142
    @BrainiacManiac142 1 year ago +2

    2:27 How did you manage to put an intel CPU into an AM4 socket? Impressive!

  • @graxxon
    @graxxon 1 year ago +1

    An SSD upgrade can help speed up a 10-year-old tiny PC.
    Boot time: 5 minutes --> 1 minute.

  • @MrRedRye
    @MrRedRye 1 year ago +4

    Windows actually still defragments SSDs despite the UI suggesting otherwise. When it optimises an SSD it sends a ReTRIM command, but it still runs monthly defrags anyway due to a limitation with the file system. This can be verified by looking in Event Viewer, where it will mention "defrag" and not "optimisation". Alternatively, you can see that it is moving large quantities of files by monitoring total bytes written on your drive. If I understand correctly, the filesystem has no visibility of the flash storage, with the controller effectively making it opaque, as if it were an HDD. While the SSD and controller have no problem dealing with fragmented files, Windows runs into some kind of character limit on the file location database if fragmentation gets too out of hand. It's dumb, but that's how it will be until Microsoft pull their fingers out.

    • @Olivyay
      @Olivyay 1 year ago

      This is by design and not due to a limitation: you still lose performance on SSDs if the file system gets heavily fragmented, especially on large files, as random reads are still slower than sequential ones.

    • @gabrielandy9272
      @gabrielandy9272 1 year ago

      It's set to defrag every week or month, but even if you disable it, the system is much better than in the past and it's unlikely you will have huge issues. And you don't need to wait for Microsoft to change this, because the interface that controls it is fully visible to the user; no commands or anything needed. Just open the Windows drive optimization settings in the Control Panel and you can enable or disable this automatic defrag on any drive you wish.

    • @MrRedRye
      @MrRedRye 1 year ago

      @@gabrielandy9272 The full defrag isn't run on the same user-selectable schedule from the optimisation menu; it runs when the drive gets above a threshold of fragmentation (~20%). But you're correct that it is disabled if you totally disable optimisation for a given drive. The downside is that if you disable optimisation for SSDs it no longer sends ReTRIM commands. If you disable it then it's best to set up a scheduled ReTRIM in Task Scheduler every day or so.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @@MrRedRye nothing you wrote was even somewhat accurate

    • @MrRedRye
      @MrRedRye 1 year ago

      @@malcomreynolds4103 please elaborate

  • @PSjustanormalguy
    @PSjustanormalguy 1 year ago +2

    I'm Windows 3.1 old, and you could specify the order of the files at the start. I once specified the first 50 or so files, and the old 386 never booted so fast! Not a single track seek.

  • @HarpaxA
    @HarpaxA 1 year ago +2

    I feel you Anthony, it's tax season here, and as a guy who's in charge of 4 reports, I am so sad ... 😭

  • @henriproos
    @henriproos 1 year ago +3

    2:25 I like the Intel CPU in an AM4 Socket

  • @vasudevmenon2496
    @vasudevmenon2496 1 year ago +1

    I do a defrag once with /D, and with /X for free-space consolidation, after a fresh install, and then just TRIM the SSD on the automatic schedule set by Windows. Even modern HDD performance isn't affected without defragging unless the drive is nearly full. After defragging an SSD, your disk benchmark won't improve. Would love to see Anthony explaining the benefits of dedicated hardware for accelerating video, encryption, decryption etc. against the software versions, for example dTPM, fTPM, SED, non-SED, Intel TME, AMD SME etc.

  • @explosivedude8295
    @explosivedude8295 1 year ago +50

    In short:
    SSD (no)
    HDD (after a while)

  • @KyriosHeptagrammaton
    @KyriosHeptagrammaton 1 year ago

    I remember doing the full sweep: antivirus, defragger, temporary files, disable-on-startup, all of it. It would take hours. And at the end of it the computer was twice as fast. Very satisfying.

  • @JB-fh1bb
    @JB-fh1bb 1 year ago +4

    Faster files closer to the edge: I'm sure someone has mentioned it already, but when platter drives were really slow there was actually a speed boost for files that were at the edge of the platter. Some utilities let you defragment so that you could choose which files got placed there 🚀

    • @CultOfTheGlenda
      @CultOfTheGlenda 1 year ago +2

      You have that backwards my dude.

    • @JB-fh1bb
      @JB-fh1bb 1 year ago +1

      @@CultOfTheGlenda Not anymore lol. Thanks for being correct
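
For anyone curious about the physics behind this sub-thread: at a fixed spindle RPM, the linear velocity under the head grows with radius, so with roughly constant bit density along the track the outer edge reads fastest. A quick Python sanity check; the RPM, radii and density below are made-up illustrative numbers, not specs of a real drive:

```python
import math

RPM = 7200                 # fixed angular speed of the spindle
MB_PER_CM_OF_TRACK = 1.2   # assumed linear bit density along the track

def data_rate_mb_s(radius_cm: float) -> float:
    """MB/s passing under the head at a given platter radius."""
    track_cm_per_s = 2 * math.pi * radius_cm * (RPM / 60)
    return track_cm_per_s * MB_PER_CM_OF_TRACK

inner = data_rate_mb_s(2.5)   # near the spindle
outer = data_rate_mb_s(4.5)   # near the platter edge
print(f"{inner:.0f} MB/s inner vs {outer:.0f} MB/s outer")
```

The ratio is simply the ratio of the radii (4.5 / 2.5 = 1.8x here), which is why old edge-placement and "short-stroking" defrag tricks worked: the outermost tracks move more bits past the head per revolution.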

  • @KlsonBob
    @KlsonBob 1 year ago +2

    I never knew Scuba wasn't a word by itself

    • @TalesOfWar
      @TalesOfWar 1 year ago

      @@busimagen Seeing laser spelled as "lazer" especially irks me because of this fact lol.

  • @ailivac
    @ailivac 1 year ago +3

    It's even more pointless on SSDs because of wear leveling. The logical blocks get assigned to physical blocks in an arbitrary order anyway. Trying to defragment at the filesystem level just moves it from one random arrangement to another, which doesn't help anything, since latency is constant and, as you said, it just uses up P/E cycles for no reason.

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 1 year ago +1

      Wear-levelling within the SSD firmware can only achieve so much. It’s running this amazingly complex emulation layer, just to make the drive behave like a magnetic disk to the OS. It would make much more sense to expose the flash storage layer directly, and have the OS implement a purpose-built filesystem designed around the characteristics of the hardware, with wear-levelling built into the block allocation strategy.
      Linux has JFFS2, LogFS etc, precisely for this purpose. It would be so much more efficient and reliable to use these.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @@lawrencedoliveiro9104 That isn't how file systems or storage controllers work.

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 1 year ago

      @@malcomreynolds4103 Go look up the details of those filesystems, before opening your mouth.

    • @malcomreynolds4103
      @malcomreynolds4103 1 year ago

      @@lawrencedoliveiro9104 already read enough of your other comments, you proved you have absolutely no idea what you are talking about

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 1 year ago

      @@malcomreynolds4103 Don’t take my word for it, it’s all there in the Linux source code, for everyone to see. Go educate yourself.
      Because if you don’t want to, well, that’s pretty clear, isn’t it?

  • @lGuileWilliamsl
    @lGuileWilliamsl 1 year ago +2

    For some reason as a kid I used to love watching the Windows 98 defrag work. As the little blips were moved and organized I could hear the HDD cracking away (mechanical drives were much louder back then). Ahhhh, the nostalgia. 😊

    • @KasumiRINA
      @KasumiRINA 1 year ago

      I dreaded their sound; the clicking usually meant a drive was dying. I had a computer load for many minutes, clicking on and on, and blocking off the faulty part helped save the storage, but before I removed the drive that had bad sectors completely, it slowed down the entire system. Right now I use tons of HDDs of various ages on multiple PCs to hoard data and I barely hear anything... but a click still gives me anxiety.

    • @lGuileWilliamsl
      @lGuileWilliamsl 1 year ago

      @@KasumiRINA Yeah, the click of death on the old HDDs was no fun. I just meant the sound they made during standard operation.

  • @raychat2816
    @raychat2816 1 year ago +2

    Speaking of mechanical hard drives, I’d like to see a tech quickie on what bad sectors are

    • @stephensnell5707
      @stephensnell5707 1 year ago

      The word Techquickie is all one word; it is not split up.

  • @ricahrdb
    @ricahrdb 1 year ago

    I stopped defragmenting my drives in the late 90's or the early 00's. At the time it did not feel like a performance issue anymore.

  • @deusexaethera
    @deusexaethera 1 year ago +9

    Defragmenting is still useful on SSDs *_occasionally._* Even though SSDs can access any byte with the same near-zero access time, as file fragments accumulate, the _filesystem_ will bog down from keeping track of them all. Access times will slow down as the filesystem has to send multiple read requests to the SSD for fragmented files, and in an extreme scenario the filesystem can even reach its record limit before all the space is actually used. Because the write-abstraction layer in SSDs is hidden from the filesystem, the filesystem can't tidy up its records of file locations with the help of the SSD; the only way to do it is to run a defragmenter. Some filesystems do this automatically, but it's still a good idea to do it manually once a year or so.

    • @fqdn
      @fqdn 1 year ago +3

      I don't think it should be ever necessary to defragment an SSD under normal circumstances, so I wouldn't bother doing it at any interval, even yearly. But yes, non-sequential data has an impact on SSDs as well. Happy to see someone mentioning it.

    • @TalesOfWar
      @TalesOfWar 1 year ago +2

      This is why TRIM is a thing.

    • @fqdn
      @fqdn 1 year ago +2

      @@TalesOfWar Trim is an entirely different thing and doesn’t do anything for fragmentation. It just tells the SSD which blocks are unused so they can be erased for faster writes and to be used for wear leveling. Fragmentation is a filesystem thing, Trim is a block device thing.

    • @Olivyay
      @Olivyay 1 year ago

      Also, it is a special mode that is different than the full defrag on a HDD, and doesn't waste SSD life by moving the actual files around, it just defrags the filesystem.
      (Edit for clarification following replies to this: what I meant is that it won't move files which are not actually fragmented, which *does* happen when defragmenting a HDD, it only moves the actually fragmented files to avoid having to keep metadata on too many fragments in the file system table)

    • @deusexaethera
      @deusexaethera 1 year ago +2

      @@TalesOfWar : TRIM erases unused blocks on the SSD so they don't have to be erased in realtime the next time those pages are allocated to store data. Erasing takes by far the most amount of time of any SSD write operation. TRIM is not involved with defragmentation in any way, it's just often confused as being the SSD equivalent because it's a required maintenance task like defragmentation was for HDDs.
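
The filesystem bookkeeping cost described in this sub-thread is easy to picture: each contiguous run of blocks is one extent the filesystem must record, so scattered allocations inflate the metadata even when the flash itself doesn't care. A minimal, purely illustrative Python sketch (this is a generic extent count, not how NTFS actually stores its records):

```python
def count_extents(blocks: list[int]) -> int:
    """Count contiguous runs in a file's block list (in file-offset order)."""
    extents = 0
    prev = None
    for b in blocks:
        if prev is None or b != prev + 1:
            extents += 1  # every gap starts a new extent the filesystem must track
        prev = b
    return extents

print(count_extents([10, 11, 12, 13]))       # 1: fully contiguous
print(count_extents([10, 11, 40, 41, 90]))   # 3: two gaps, three extents
```

Defragmenting merges those runs back together, shrinking the per-file metadata regardless of where the SSD's wear-leveling layer physically puts the data.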

  • @Ikouy
    @Ikouy 1 year ago +1

    Also, one major point to consider is that defragmenting a disk does not account for context. A really good example: say you're running Windows 10 on an HDD, which I still do on systems that aren't daily drivers. I have a 500GB drive in my test system; I installed Windows 10, used up 400GB, and then upgraded to Windows 11.
    Because Windows puts the new install down without touching the existing one, the whole new OS is located near the center of the drive, so it takes about 90 minutes to reach idle once the system boots. Norton used to take care of this back on Win9x with certain settings, such as moving infrequently used files to the back/center of the disk. A contextual defragging of a modern hard drive would take a few days.

    • @KasumiRINA
      @KasumiRINA 1 year ago

      Hmm, so it's still a good idea to format and install the system on a fresh drive if possible; it will put it on the fast sectors. I had the system load forever, but now it's mostly fixed... I still have a weird issue where Firefox, of all things, takes ages to load. Other browsers don't. The cache was cleared. I feel it's probably cookies spread over the entire drive; whoever installed my OS partitioned it to 100GB, which is barely enough for Windows 10. I literally had to hardlink game caches to another drive and install all Adobe programs on a separate HDD.
      I mean, I could buy an SSD, but not during the full-scale war. Not the highest priority right now.

    • @Ikouy
      @Ikouy 1 year ago

      @@KasumiRINA The best option would be to run multiple partitions. I don't remember exactly, but I know that on Windows XP, a single-partition disk would place all files after a huge chunk of files that cannot be moved, after the first tenth or so.
      You could create three partitions and put Windows 10 on the middle one. It is not something I've tried but have always thought about. We're at the point that Windows 10 doesn't have any major feature updates, but for cumulative updates it might help performance on a 100GB disk.
      Partition 0 would be 16GB for swap, with no drive letter. Partition 1 would be 40 or so GB for Windows, and the rest data. If the drive is 1TB, partition 0 could be 32GB or 64GB, for example, while partition 1 is 256GB.

  • @AgWhatsUp
    @AgWhatsUp 1 year ago +7

    Wow so much nostalgia just hit me with the word defragmenting… I was a nerdy 90s kid

  • @johnny5805
    @johnny5805 1 year ago

    Those jokers at Condusiv (Diskeeper) told me that of course SSDs need defragmenting. They told me like I was the most stupid person on the planet.
    So I am glad Anthony put the matter to rest.

  • @ramenramune
    @ramenramune 1 year ago +4

    2:26 Intel CPU in an AM4 socket.

  • @Psycandy
    @Psycandy 1 year ago

    Defrag is only needed when creating or removing large numbers of small files, which can massively slow an external HDD until the defrag is run. Otherwise, I periodically format and reinstall the system, which pretty much optimizes everything and keeps old hardware running properly and quickly.

  • @matsv201
    @matsv201 1 year ago +3

    There are hardly any files that are not compressed already.
    MPEG... compressed.
    JPEG... compressed.
    MP3... compressed.
    PDF... compressed.
    If it's any large file, it's probably compressed.

    • @chrisrib05
      @chrisrib05 1 year ago

      Word and Excel files are compressed too (you can open them with your fav' archive tool)

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 1 year ago

      Compression has nothing to do with fragmentation.

  • @rosiefay7283
    @rosiefay7283 1 year ago

    1:50 Yes, whatever information content a file contains, file compression replaces it with that same content in another format, which it would've been better to store it in in the first place.
    2:07 How come? Compressing a file's contents *creates* a gap.

  • @BeeWaifu
    @BeeWaifu 1 year ago +3

    2:11 uhh...

  • @jordytje92
    @jordytje92 1 year ago

    I always found it satisfying to see the blocks getting shuffled around. Nowadays I use a classic defrag tool to see all the blocks being moved. Only on my hard drive, though.

  • @Mette_Bus
    @Mette_Bus 1 year ago +4

    2:26 A 9th gen i5 in an am4 socket? 😂

  • @TimDoherty
    @TimDoherty 1 year ago

    I remember in university days in the year 2000 some friends coming to my hostel room and a few of us just watching the defrag. So satisfying to watch

  • @danielnetz5173
    @danielnetz5173 1 year ago +4

    Watching Anthony explain things is therapeutic.

  • @Marc_Fuchs_1985
    @Marc_Fuchs_1985 1 year ago

    The only mechanical drive I keep using is for data backup storage. Every time I make a new backup, the whole drive gets erased and written anew. And even if I don't renew the main drive for too long, fragmentation shouldn't be an issue, since no system files or work files (video data, which should be handled fast) come from this drive.

  • @LaughingOrange
    @LaughingOrange 1 year ago +5

    I heard somewhere that the NT in NTFS actually doesn't stand for anything, instead being a reference to Windows NT, where NT actually stands for New Technology.

    • @mrkitty777
      @mrkitty777 1 year ago

      Windows NT is based on VMS, another operating system. If you shift the letters of VMS by one, you'll get WNT. The name Vista came about when one of the Microsoft employees looked out of the window and saw "Vista" on a building. 🤔

    • @FlyboyHelosim
      @FlyboyHelosim 1 year ago +1

      So it does stand for something then.

    • @shanent5793
      @shanent5793 1 year ago

      The NT stands for N10 (ie. N-Ten) which was a revision of Intel's i860, the processor used in the original Windows NT development system

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 1 year ago +1

      Windows NT was masterminded by Dave Cutler, who was responsible for VMS at DEC (and also RSX-11 about a decade before that). He left that company after management cancelled his projects for creating successors to the VAX hardware and VMS OS. This was in 1988. Then a few months later he turned up at Microsoft.
      Unfortunately, he was one of those at DEC who hated the Unix way of doing things. Imagine how differently things might have turned out otherwise ...

    • @mrkitty777
      @mrkitty777 1 year ago

      @@lawrencedoliveiro9104 shift VMS by one letter and you get WNT; saying Cutler disliked it is an understatement

  • @jk-mm5to
    @jk-mm5to 1 year ago +1

    Haven't defragged since I stopped spinning rust.

  • @Ultrajamz
    @Ultrajamz 1 year ago +8

    I find it odd how Linux with ext4 and such claims not to require defragging on HDDs.

    • @skelebro9999
      @skelebro9999 1 year ago +4

      maybe because their entire file system, and the way they compress files, is way different from Windows

    • @Ultrajamz
      @Ultrajamz 1 year ago +5

      @@skelebro9999 it just seems unbelievable: I can believe it's "less" of an issue, but not that it "isn't" an issue, unless it's basically defragging automatically.

    • @skelebro9999
      @skelebro9999 1 year ago +1

      @@Ultrajamz that could be something.
      You should check out Btrfs, which is kinda weird and quite interesting.

    • @killertruth186
      @killertruth186 1 year ago +3

      @@Ultrajamz Yeah, Linux will get a bad reputation if the fanboys claim it can do something that is impossible for any distro in that regard.

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 1 year ago +2

      Devs and sysadmins have been running high-performance Linux systems for years--decades, even. Nobody feels the need to build a defragger process into the system, particularly because of the fun problems that happened when Microsoft tried adding one to Windows.

  • @kjamison5951
    @kjamison5951 a year ago

    I remember running Norton Utilities on Macs… the defrag and optimisation was so satisfying to watch! Almost mesmeric!

  • @thehristokolev
    @thehristokolev a year ago +5

    9th-gen Intel in an AM4 socket?

  • @glenjo0
    @glenjo0 a year ago +1

    I have to admit, I sorta like my very old spinning hard drives. I just checked, and I have one that has been in use for 12 years, and SMART is reporting no, none, nada errors. Great for backing up the NVMe data.

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 a year ago

      SMART is known to pick up only about a third of incipient disk failures. I don't bother with it. Instead, I use the badblocks tool to do a full sector-level scan every couple of months.
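[Editor's note] The read-only pass that badblocks performs can be sketched in a few lines: read the whole device in fixed-size chunks and record every offset where a read fails. This is an illustrative sketch only; the real badblocks tool also offers non-destructive and destructive write-pattern tests, and scanning a raw device requires root. The device path in the usage comment is an example.

```python
# Minimal read-only surface scan in the spirit of `badblocks -sv`.
import os

CHUNK = 1024 * 1024  # read 1 MiB at a time

def surface_scan(path):
    """Return byte offsets of chunks that failed to read."""
    bad = []
    with open(path, "rb", buffering=0) as dev:
        # Seeking to the end gives the size for both regular files
        # and block devices (stat's st_size is 0 for block devices).
        size = dev.seek(0, os.SEEK_END)
        offset = 0
        while offset < size:
            dev.seek(offset)  # reposition explicitly, past any bad region
            try:
                dev.read(min(CHUNK, size - offset))
            except OSError:
                bad.append(offset)  # unreadable region starts in this chunk
            offset += CHUNK
    return bad

# Usage on a real disk (needs root; device name is an example):
#   surface_scan("/dev/sda")
```

A clean scan returns an empty list; anything else is a good reason to start migrating data off the drive.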

  • @arithex
    @arithex a year ago +2

    I think Win10+ does the right thing (defrag or trim) for the physical drive type, on a scheduled basis, by default.
    Would be nice to do a follow-up vid on how often SSDs need to be TRIMmed. I have no idea.

  • @DezzieYT
    @DezzieYT a year ago

    Thanks for the info. I still remember doing monthly defrags. And if a non-tech friend called with an issue: Me: "Have you tried defragging the drive?" Them: "De-what now?"

  • @ruddyhell7800
    @ruddyhell7800 a year ago +13

    Anthony is great. I hope he never changes.

  • @billy65bob
    @billy65bob a year ago +1

    I have a weird memory of Windows 2000, using NTFS.
    I had a chronic issue where the OS would just lock up after about 15-20 minutes of use, and it had been going on for months.
    I don't know what possessed me to defrag it, but as soon as I did, all the lock-ups just stopped.

  • @source_engine_wizard
    @source_engine_wizard a year ago +17

    RIP Anthony

    • @JoseBSL
      @JoseBSL 9 months ago +1

      Iooooo nooo

    • @thaddeus5944
      @thaddeus5944 8 months ago +2

      I completely forgot about him, good, I don't like transformers

    • @maxttk97
      @maxttk97 8 months ago

      @@thaddeus5944 I pity him.