Talk:Dual-boot Gentoo and Windows 7 with BIOS-powered software raid
Before creating a discussion or leaving a comment, please read about using talk pages. To create a new discussion, click here. Comments on an existing discussion should be signed using
~~~~
A comment [[User:Larry|Larry]] 13:52, 13 May 2024 (UTC)
: A reply [[User:Sally|Sally]] 11:31, 8 December 2024 (UTC)
:: Your reply ~~~~
Software & Hardware RAID Clarification
Just a quick note on the differences between software and hardware RAID -- which is already covered quite a bit elsewhere.
If I'm not mistaken, many motherboards, including recent ones from the past five or so years, do have RAID controller chips on the motherboard; hence, they're hardware RAID.
Software RAID is something implemented within the kernel, at the software level, to use two or more disks in a RAID configuration. The big negative of this software RAID implementation is that it consumes other resources, such as CPU time, to compensate for the missing dedicated RAID chip. I'd guess software RAID is probably best used exclusively for mirroring or backup rather than for any performance improvements.
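For reference, the kernel software RAID mentioned here (md) is driven from userspace with mdadm. A minimal sketch of creating a RAID1 mirror -- the partition names are hypothetical, and this needs root and destroys data on the named partitions, so it is not something to run verbatim:

```shell
# Hypothetical device names (/dev/sda2, /dev/sdb2); needs root and
# destroys data on the named partitions. Creates a kernel md RAID1 mirror.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# The kernel's view of all md arrays, including resync progress:
cat /proc/mdstat
```

The resync you would see in /proc/mdstat is exactly the CPU-and-I/O work mentioned above that a dedicated hardware controller would do on its own.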
Now, regarding what you write in your opening paragraph about seeing the BIOS RAID setting and assuming software RAID: I'm guessing you're implying the BIOS RAID setting is for a software RAID implemented within the BIOS firmware, or within the EFI/UEFI BIOS. Some motherboards seem to have hardware (or fake) RAID chips, which are likely far superior to software RAID implemented within the kernel.
I'm just piping in here as I noticed some missing detail on discerning the differences. As for my opinion on RAID: eh, as you already mentioned in your last paragraph, I too think RAID is a waste for most users. I think only data centers can sensibly implement and maintain such redundancy and avoid the risk; it wouldn't be sensible or cost-effective for normal users. I still see it done, but I can't justify the cost here even after 15+ years of having a computer around. ---Roger 21:24, 28 October 2012 (UTC)
- The firmware/driver-based RAID controllers on motherboards are NOT hardware RAID controllers. Like software RAID, they offload the processing to the CPU. See Wikipedia.
- These fake controllers are the worst solution, because you're bound to the motherboard and limited to the features of the fake controller. Only your vendor is in charge of performance and stability. A fake RAID is only a solution if your OS doesn't have a software RAID implementation, like Windows XP.
- -Astaecker 07:58, 30 October 2012 (UTC)
The mainboard I have is an Asus M5A97. Its manual claims to have support for RAID, but as far as I know there isn't any RAID chip on it, and they don't write that there is. Instead, they write: "The motherboard comes with an AMD SB850 chipset that allows you to configure Serial ATA hard disk drives as RAID sets." And I guess that is all the BIOS does: make a note that the drives you configure are a RAID set. If the driver (under Linux) does not honor it, then all it sees are the physical disks, and it does not matter at all. The Linux logical disk, which is configured as RAID1 in the BIOS, does not conform to RAID1: it has different data inside the partitions, and the BIOS does not complain about it. The only thing I did do is make sure that the partition tables of the two disks are identical.
The phrase I chose for it (I read it somewhere), "BIOS-powered software RAID", should reflect that it is like a standalone software RAID solution (md RAID, as an example) with additional support from the BIOS, which lets you configure the drives and makes a note about it. The term may be misleading, but I like it more than "fake RAID".
What is the use of RAID? If you go back in time and have a look at an old DEC Alpha server, they often had at most 4 GB SCSI disk drives in them. (No, I never had to work with one; I just bought a used one for fun once they got cheap. And it is sold already; I don't have it anymore.) RAID was used to make logical disks which were fault tolerant, had better I/O performance, and made it possible to create storage beyond 4 GB. As disks grew, the same applied to, let's say, a Dell server around 2000: with the RAID controller one was able to build a disk of almost 300 GB out of five 72 GB disks with RAID 5. This way one did not have to spend the money on the expensive 144 GB disks.

And today? Well, I don't have any need to build a bigger disk out of physical disks, so this pro is gone. Fault tolerance? My box does not have to be up 24 hours a day, so no pro for me either (for companies, I think, a must-have). And data security against loss is done with backup; it never had anything to do with RAID.

But RAID can still be a win if you have a PostgreSQL database, or any other program which reads and writes to the same disk: with RAID the load is spread among the physical drives. But at least PostgreSQL now lets you configure more, so that you can spread a database over more than one disk. IIRC, it is now possible to put the space for sorting on a different disk than the database, so this also becomes less important. For me, there actually is no need for RAID; I did it just for fun. -- Grashopper 03.11.2012
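For reference, the PostgreSQL feature alluded to above is tablespaces: temporary/sort files can be directed to a different disk than the database itself via temp_tablespaces. A configuration sketch -- the tablespace name and path are made up for illustration:

```
-- Run once as a database superuser; /mnt/disk2/pg_temp is a hypothetical
-- directory on the second physical disk, owned by the postgres user.
CREATE TABLESPACE fasttemp LOCATION '/mnt/disk2/pg_temp';

-- Then, in postgresql.conf (or per session with SET), direct sort/temp
-- files to it:
--   temp_tablespaces = 'fasttemp'
```

With that in place, large sorts and temporary tables land on the second disk, spreading the I/O without any RAID layer underneath.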