[ILUG] Highpoint Sata RAID
john.coleman at gmail.com
Fri Mar 25 13:00:06 GMT 2005
On Wed, 23 Mar 2005 12:00:45 -0800, Rick Moen <rick at linuxmafia.com> wrote:
> Quoting Paul Jakma (paul at clubi.ie):
> > - The Promise TX4 is *not* the same card as the Promise SX4.
> /me checks online.
> SATA150 SX4 appears to have the _same_ design limitation as the TX4: It
> has a chip devoted to XOR calculations only. Striping, error-handling,
> etc., are offloaded. (In a further display of cheapness, I notice that
> they also use a Marvell 88i8030-TBC PATA/SATA converter chip on each
> channel, like a lot of the other first-generation jobs.)
> This is not necessarily a bad thing: It makes the cards a lot cheaper
> than, say, 3Ware's and Areca's. But the system load problem is real.
> > - If you're using Linux MD, as you should be, (be it with a promise
> > TX4 or SX4 or Highpoint or whatever) then you're
> > obviously not using proprietary on-disk RAID formats..
> Which, you will note, is my recommendation for those determined to save
> a few extra Euros. (Addendum: People looking over _new_-generation
> SATA-II cards might want to look at the Tekrams: They closely imitate
> the extremely good but high-priced Areca cards using the same chipsets
> and basic designs.)
LSI's SATA MegaRAID 300 series also looks interesting.
> > - On-disk RAID superblock format is *not* the cause of performance
> > problems with soft-raid.
> And of course I _didn't say it did_.
> > - The primary reason soft-raid can be slower is because it requires
> > extra bus bandwidth - instead of writing a block once and letting
> > the controller write a copy to each disk you have to send each
> > block across the bus for each disk (for RAID-1). For RAID-5 the XOR
> > doesnt cost much on a modern CPU (which are *vastly* superior in
> > speed and RAM bandwidth to the CPUs and RAM used on "embedded
> > computer on a PCI card" RAID controllers), however it is an extra
> > block of data to pass across the bus.
> That is correct, with exception noted below.
> > Ie, extra bus bandwidth of software RAID is the primary bottleneck
> > of soft RAID
> Except during restriping, when the calculation overhead becomes
> absolutely brutal, and the system is often effectively nonfunctional.
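To put a rough number on the bus-bandwidth point quoted above, here's a back-of-envelope sketch in Python. The 133 MB/s figure is the theoretical peak of an assumed 32-bit/33 MHz PCI bus, purely for illustration; the per-level multipliers follow the reasoning in the quote:

```python
# Back-of-envelope: host-bus traffic per logical byte written with
# software RAID, following the quoted reasoning.  Illustrative only.

PCI_BUS_MBPS = 133  # assumed 32-bit/33 MHz PCI bus, ~133 MB/s peak


def bus_bytes_per_logical_byte(level, disks):
    """Bytes crossing the host bus per byte the application writes,
    for software RAID.  A hardware controller only sees each block
    once, so its factor is always 1."""
    if level == 1:
        return disks                # one full copy per mirror
    if level == 5:
        # data is spread over (disks - 1) drives plus one parity
        # block, so parity adds 1/(disks - 1) extra traffic
        return 1 + 1 / (disks - 1)
    raise ValueError("unhandled RAID level")


# Two-way software RAID-1: every write crosses the bus twice,
# halving the usable write bandwidth of the bus.
print(PCI_BUS_MBPS / bus_bytes_per_logical_byte(1, 2))  # ~66.5 MB/s

# Three-drive software RAID-5: 1.5x bus traffic per byte written.
print(PCI_BUS_MBPS / bus_bytes_per_logical_byte(5, 3))  # ~88.7 MB/s
```

Which is why the XOR itself is cheap on a modern host CPU but the extra bus crossings are not.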
I'm trying to find a source for the 3Ware 8506-8 / 9500S-8 and the LSI
SATA MegaRAID 150-6 / Intel SRCS16 (an OEM clone: same BIOS, firmware
and on-card utilities) after losing out on an eBay bid by a smidgen at 2am.
These are 'proper' RAID5 cards: onboard XOR, striping and stream
management, array configuration and management, the works. Both have
native kernel support; LSI's card uses the megaraid/megaraid2
driver and presents logical drives to the OS as SCSI devices.
Not requiring OS-based array management and configuration, along with
the performance and the lack of host-CPU dependence, made them the
only choices I've been considering.
Whichever card I buy, I'm going to put 3x 400GB drives in it as a
single RAID5 array, one logical drive, formatted with XFS. I intend
to expand this array with more drives down the road, but I have a
query regarding array expansion: after plugging in a new drive, I
expect the controller to integrate it into the current array, and
that the existing logical drive will then need to be enlarged to take
advantage of the new space.
I have 2 concerns regarding this:
Firstly, ignoring downtime, will the existing partition on the logical
drive remain untouched, with the additional space showing up as
unpartitioned drive space?
Secondly, I have never had to change the size of a partition holding
critical data. I assume there are methods in place to expand the
existing partition to use the whole logical drive, and that XFS
supports being grown in this way?
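For what it's worth, XFS does support growing while mounted, via xfs_growfs. A rough sketch of the steps, assuming the controller has already folded the new drive into the array and the logical drive shows up as an ordinary SCSI disk (the mount point and device layout here are assumptions, not anything specific to these cards):

```shell
# The MegaRAID driver presents the logical drive as a normal SCSI
# disk, e.g. /dev/sda.  After the controller expands the array, the
# kernel may need a rescan or reboot before it sees the new capacity.

# If the filesystem sits inside a partition, that partition must be
# enlarged first (e.g. with parted).  Putting XFS directly on the
# whole logical drive avoids this step entirely.

# Grow the mounted XFS filesystem to fill the device -- xfs_growfs
# operates on a *mounted* filesystem, so no unmount is needed:
xfs_growfs /mnt/array

# Verify the new size:
df -h /mnt/array
```

So the safest layout for planned expansion is probably no partition table at all: mkfs.xfs on the bare logical drive, then xfs_growfs after each controller-side expansion.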
NUIG, Computer Society