
ubuntu ibm serveraid

Posted on: 2008-11-21 15:01:39 by yehlu
http://ubuntuforums.org/showthread.php? ... ost4464944

As promised:

Acknowledgements:

Much of this procedure is based on Andrew Kutz's article, Managing hardware RAIDs with Adaptec Storage Manager and Ubuntu. IBM's ServeRAID cards and software (or at least, all the ones I've worked with) are just rebadged versions of the Adaptec kit. Andrew's article covers the original Adaptec kit on Feisty. I found I needed a couple of extra tweaks to get the rebadged stuff working on Gutsy (whether this is down to differences between the Adaptec and IBM rebadged versions, or differences between Feisty and Gutsy, I'm not sure).


Pre-Requisites:

Below is the list of hardware and software I'm using. I'm reasonably confident that these steps should work on other versions of ServeRAID hardware and software (e.g. the ServeRAID 8.40 software), but for reference this is the kit I used:

* Machine: IBM System x3500 7797-G2G
* OS: Ubuntu 7.10 Gutsy Gibbon amd64
* Software: IBM ServeRAID 9.00 for Linux x86_64
* RAID Card: IBM ServeRAID 8k (SATA/SAS)



Obtaining the ServeRAID Software:

Your server should have shipped with one or two ServeRAID CDs (e.g. "ServeRAID Support" and "ServeRAID Application"). The Support CD is usually a bootable live CD which you can use to configure the RAID setup prior to installing an OS. The Application CD (sometimes labelled "ServeRAID Management") contains the software for installation. However, it's probably wise to check the IBM System x Support site for the latest versions of these CDs, and for any firmware updates for your card. Select your product family, type, model, and a Linux OS from the lists, and filter the results for RAID. For example, this is the page for my particular box (an x3500 7797-G2G). I strongly recommend checking for any firmware updates, especially those labelled "Critical update" (for example, on my box the latest critical firmware update "fixed a problem where a reboot would occur when all drives would spin up at the same time" ... ouch!).


Procedure for 64-bit users:

1. As ServeRAID is distributed as RPMs, we'll need alien and fakeroot to convert them, so install these packages:

Code:

$ sudo apt-get install alien fakeroot

2. We also need the IA-32 compatibility libraries, as the x86_64 / amd64 version of ServeRAID is still a 32-bit application:

Code:

$ sudo apt-get install ia32-libs

3. Something in the ServeRAID agent expects sort to be in /bin. However, in Ubuntu it's only in /usr/bin, so we'll create a symbolic link to fix this:

Code:

$ sudo ln -s /usr/bin/sort /bin/sort
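
If you're worried about clobbering an existing /bin/sort (or just want this step to be safely re-runnable), a small guard like the following should work:

Code:

$ [ -e /bin/sort ] || sudo ln -s /usr/bin/sort /bin/sort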

4. From the ServeRAID Application CD (aka the ServeRAID Management CD), copy the RPM containing the software for your architecture to your home directory. For example:

Code:

$ cp /media/cdrom/linux_x86_64/manager/RaidMan-9.00.x86_64.rpm .

5. Even though the RPM is allegedly for x86_64 / amd64, internally it appears to be for i386. Therefore, alien will refuse to convert it for us due to the different architectures. So, we need to tweak alien to make it think the architecture of the RPM is really amd64. Open /usr/share/perl5/Alien/Package/Deb.pm in your favourite editor with root privileges:

Code:

$ sudo vim /usr/share/perl5/Alien/Package/Deb.pm

On (or near) line 351 you should find the following line:

Code:

print OUT "Architecture: ".$this->arch."\n";

Comment out the original line with a # prefix (so you can switch it back easily later), and add a modified copy below it, as follows:

Code:

# print OUT "Architecture: ".$this->arch."\n";
print OUT "Architecture: amd64\n";
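
Alternatively, if you'd rather not edit the file by hand, a sed one-liner along these lines should make the same change, assuming the line appears exactly as shown above (the -i.bak flag keeps a backup you can restore from when you're done):

Code:

$ sudo sed -i.bak 's/Architecture: ".$this->arch."/Architecture: amd64/' /usr/share/perl5/Alien/Package/Deb.pm
$ sudo mv /usr/share/perl5/Alien/Package/Deb.pm.bak /usr/share/perl5/Alien/Package/Deb.pm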

6. Now, use alien in a fakeroot to convert the RPM package to a debian package:

Code:

$ fakeroot alien -c RaidMan-9.00.x86_64.rpm

7. You should get a message stating raidman_9.00-1_i386.deb created; however, the resulting file will actually be called raidman_9.00-1_amd64.deb (this discrepancy is due to the tweak we made earlier).
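
Before going any further you can verify the architecture recorded in the converted package; dpkg -I prints the control information, and the Architecture field should now read amd64:

Code:

$ dpkg -I raidman_9.00-1_amd64.deb | grep Architecture
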
8. Next, we need to tweak some of the scripts inside the package to work properly on Ubuntu. Make a directory structure to extract the debian package into, and extract the package along with its control scripts:

Code:

$ mkdir -p raidman_9.00-1_amd64/DEBIAN
$ dpkg -x raidman_9.00-1_amd64.deb raidman_9.00-1_amd64/
$ dpkg -e raidman_9.00-1_amd64.deb raidman_9.00-1_amd64/DEBIAN

9. Open the post-install (postinst) script with your favourite editor:

Code:

$ vim raidman_9.00-1_amd64/DEBIAN/postinst

Remove the following line:

Code:

chkconfig --add raid_agent

10. Open the post-remove (postrm) script with your favourite editor:

Code:

$ vim raidman_9.00-1_amd64/DEBIAN/postrm

Remove the following line:

Code:

chkconfig --del raid_agent
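
For reference, chkconfig is a Red Hat tool that doesn't exist on Ubuntu, which is why these lines have to go. This does mean the package no longer registers the init script for boot itself; if you later find the agent doesn't come back after a reboot, the Debian/Ubuntu equivalent is update-rc.d, which you could run once after installing the package (with the matching command to deregister it again):

Code:

$ sudo update-rc.d raid_agent defaults
$ sudo update-rc.d -f raid_agent remove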

11. Now we need to make sure the Java launchers can find the correct 32-bit libraries. The existing scripts assume 32-bit libraries live under /usr/lib, which is incorrect on 64-bit Ubuntu (here they live under /usr/lib32). Open the agent launcher script with your favourite editor:

Code:

$ vim raidman_9.00-1_amd64/usr/RaidMan/RaidAgnt.sh

Duplicate the following lines (they should start at or near line 141) and change the second copy to reference /usr/lib32 instead of /usr/lib:

Code:

if [ -f /usr/lib/libstdc++.so.5 ]
then
LD_PRELOAD=/usr/lib/libstdc++.so.5
fi

These should then become:

Code:

if [ -f /usr/lib/libstdc++.so.5 ]
then
LD_PRELOAD=/usr/lib/libstdc++.so.5
fi
if [ -f /usr/lib32/libstdc++.so.5 ]
then
LD_PRELOAD=/usr/lib32/libstdc++.so.5
fi

12. Now do the same for the manager launcher script:

Code:

$ vim raidman_9.00-1_amd64/usr/RaidMan/RaidMan.sh

Duplicate the following lines (they should start at or near line 141 again) and change the second copy to reference /usr/lib32 instead of /usr/lib:

Code:

if [ -f /usr/lib/libstdc++.so.5 ]
then
LD_PRELOAD=/usr/lib/libstdc++.so.5
fi

These should then become:

Code:

if [ -f /usr/lib/libstdc++.so.5 ]
then
LD_PRELOAD=/usr/lib/libstdc++.so.5
fi
if [ -f /usr/lib32/libstdc++.so.5 ]
then
LD_PRELOAD=/usr/lib32/libstdc++.so.5
fi
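
Before repackaging, it's worth a quick check that both launcher scripts picked up the new lib32 block; each of these should print the two lines referencing /usr/lib32:

Code:

$ grep -n lib32 raidman_9.00-1_amd64/usr/RaidMan/RaidAgnt.sh
$ grep -n lib32 raidman_9.00-1_amd64/usr/RaidMan/RaidMan.sh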

13. Now repackage the extracted content back into the debian package:

Code:

$ dpkg -b raidman_9.00-1_amd64/ raidman_9.00-1_amd64.deb
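
If you like, you can also double-check the rebuilt package before installing it, e.g. confirm the patched launcher scripts made it in:

Code:

$ dpkg -c raidman_9.00-1_amd64.deb | grep RaidMan.sh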

14. Install the new package:

Code:

$ sudo dpkg -i raidman_9.00-1_amd64.deb

15. With a bit of luck the install should go smoothly and the background RAID agent ought to start automatically. You can control the RAID agent with the /etc/init.d script like so:

Code:

$ sudo /etc/init.d/raid_agent stop
$ sudo /etc/init.d/raid_agent start
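
If you want to confirm the agent really is running, you can look for its process and (assuming it listens on the 3457x ports described in the firewall note further down) its listening sockets; for example:

Code:

$ ps -ef | grep -i raid
$ sudo netstat -tlnp | grep 3457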

16. Should you need to perform this installation on any other servers, simply copy and re-use the debian package.


Procedure for 32-bit users:

1. As ServeRAID is distributed as RPMs, we'll need alien and fakeroot to convert them, so install these packages:

Code:

$ sudo apt-get install alien fakeroot

2. We also need the libstdc++5 compatibility library:

Code:

$ sudo apt-get install libstdc++5

3. Something in the ServeRAID agent expects sort to be in /bin. However, in Ubuntu it's only in /usr/bin, so we'll create a symbolic link to fix this:

Code:

$ sudo ln -s /usr/bin/sort /bin/sort

4. From the ServeRAID Application CD (aka the ServeRAID Management CD), copy the RPM containing the software for your architecture to your home directory. For example:

Code:

$ cp /media/cdrom/linux/manager/RaidMan-9.00.i386.rpm .

5. Now, use alien in a fakeroot to convert the RPM package to a debian package:

Code:

$ fakeroot alien -c RaidMan-9.00.i386.rpm

6. Next, we need to tweak some of the scripts inside the package to work properly on Ubuntu. Make a directory structure to extract the debian package into, and extract the package along with its control scripts:

Code:

$ mkdir -p raidman_9.00-1_i386/DEBIAN
$ dpkg -x raidman_9.00-1_i386.deb raidman_9.00-1_i386/
$ dpkg -e raidman_9.00-1_i386.deb raidman_9.00-1_i386/DEBIAN
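
As a quick sanity check, the extracted DEBIAN directory should now contain the package's control files (including the postinst and postrm scripts we're about to edit):

Code:

$ ls raidman_9.00-1_i386/DEBIAN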

7. Open the post-install (postinst) script with your favourite editor:

Code:

$ vim raidman_9.00-1_i386/DEBIAN/postinst

Remove the following line:

Code:

chkconfig --add raid_agent

8. Open the post-remove (postrm) script with your favourite editor:

Code:

$ vim raidman_9.00-1_i386/DEBIAN/postrm

Remove the following line:

Code:

chkconfig --del raid_agent

9. Now repackage the extracted content back into the debian package:

Code:

$ dpkg -b raidman_9.00-1_i386/ raidman_9.00-1_i386.deb

10. Install the new package:

Code:

$ sudo dpkg -i raidman_9.00-1_i386.deb

11. With a bit of luck the install should go smoothly and the background RAID agent ought to start automatically. You can control the RAID agent with the /etc/init.d script like so:

Code:

$ sudo /etc/init.d/raid_agent stop
$ sudo /etc/init.d/raid_agent start

12. Should you need to perform this installation on any other servers, simply copy and re-use the debian package.


Configuration with RAID manager (local and remote):

If you have an X environment installed on your server you should be able to launch the RAID manager interface by using something like this (I haven't tried this as I don't have X on any servers, although I have tried something similar on a 32-bit Gentoo client and it worked nicely):

Code:

$ gksu /usr/RaidMan/RaidMan.sh

NOTE: The RAID manager must be run as root (hence the gksu in the command above), even if you're running it on a separate client. You can also use RAID manager on a separate machine to configure the agent on the server. For this to work you must ensure that TCP ports 34571 to 34575 are open if you're running a firewall (the RAID manager only listens on 34571 and 34572 for connections, but for some reason it's necessary to have the others open after a connection is established - I'm not sure why yet).
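
If you're managing the firewall with iptables directly, a minimal rule to open that whole port range might look like this (adapt it to your own chains and policy):

Code:

$ sudo iptables -A INPUT -p tcp --dport 34571:34575 -j ACCEPT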

When using the RAID manager on a separate client to configure the server (e.g. if you don't have X on your server) you'll find you can't alter most settings unless you log into the server as root (from within the RAID manager interface). Unfortunately, for this to be possible you'll have to give your root user a password:

Code:

$ sudo passwd root

To re-disable the root account after you've finished the configuration, use the following:

Code:

$ sudo passwd -l root
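
You can confirm the account's lock state afterwards with passwd's status flag (the second field of the output shows whether the account is locked):

Code:

$ sudo passwd -S root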

See the excellent RootSudo page for more information on these commands and why you should generally leave the root account disabled. Some other things to note when trying to use RAID manager on a separate machine:

* When adding a new server to the managed list (from the menu: Remote | Add) it can take a very long time to appear. If it's still sitting at the hourglass after 2 or 3 minutes, don't worry - that seems to be perfectly normal!
* In Andrew Kutz's article (see top) he walks through the process of running the manager on the server and forwarding the display to a different client. This is another good method of remote management, but I'd only recommend it for clients with a very fast, low-latency link to the server. The "pretty" (aka downright silly) interface needs an awful lot of network roundtrips to draw, update, etc. A simple mouse click on a button, or a menu selection, took 5 or more seconds to register when I tried this over a reasonably high-speed (8Mbit down, 800Kbit up) broadband connection with SSH. In the end I got so annoyed with it that I just used the RAID manager on a Windows server via remote desktop to configure it.
* Sometimes the results of an operation don't immediately appear in the interface. For example, when adding new entries for e-mail notification of errors, the users didn't appear in the list until I closed and re-opened the configuration window (not the whole application, mind). This only seems to apply when using RAID manager on a separate machine from the server (i.e. not with a forwarded display, but via the remote configuration facility).




Please let me know about any errors or corrections!


Cheers,

Dave.