I’ve always been interested in military equipment – just take a look at this cluster bomb presentation:
BLU-108 submunition Infospot - real presentation, not a simulation
Some time ago I noticed huge corruption on / on one of my servers: /sbin/reboot was gone, along with /sbin/init, /sbin/poweroff and so on.
How to shut down the machine remotely? Simple:
echo 1 > /proc/sys/kernel/sysrq
echo o > /proc/sysrq-trigger
( this requires SysRq support compiled into the kernel; most distributions have it compiled into their kernels )
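If what you need is a reboot rather than a power-off ( as in my case, with /sbin/reboot gone ), the same trick works with the b key – note that b resets immediately, without syncing filesystems:
echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger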
A mostly English site dedicated to benchmarking clustered filesystems: DistributedMassStorage
A quick how-to on "exporting" a simple file via iSCSI from a Linux target (ietd) to the Solaris OS:
The Linux target is running Debian 4.0 with a 2.6.18 kernel and iSCSI target version 0.4.14. I wish it were a Solaris box, but my very old home SCSI controllers ( DELL MegaRAID 428 – PERC2 and InitIO ) aren't supported by Solaris – there are some drivers, but only for Solaris 2.7-2.8, and after a small war with them I must admit I failed… even after playing with hardcore stuff in /etc/driver_aliases.
Installing the iSCSI target on Debian is discussed here: Unofficial ISCSI target installation. Some checks:
rac3:/etc/init.d# cat /proc/net/iet/volume
tid:2 name:iqn.2001-04.com.example:storage.disk2.sys1.xyz
lun:0 state:0 iotype:fileio iomode:wt path:/u01/iscsi.target
rac3:/etc/init.d# cat /proc/net/iet/session
tid:2 name:iqn.2001-04.com.example:storage.disk2.sys1.xyz
As you can see, /u01/iscsi.target is a normal file ( created with dd(1) ) on the MegaRAID RAID0 array.
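For reference, a volume like this is defined in /etc/ietd.conf roughly as follows – a sketch, since the actual config isn't shown here and the backing-file size below is only an example:
dd if=/dev/zero of=/u01/iscsi.target bs=1M count=1024
Target iqn.2001-04.com.example:storage.disk2.sys1.xyz
    Lun 0 Path=/u01/iscsi.target,Type=fileio,IOMode=wt
We will use this file to do some testing from Solaris: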
root@opensol:~# iscsiadm add static-config iqn.2001-04.com.example:storage.disk2.sys1.xyz,10.99.1.25
root@opensol:~# iscsiadm modify discovery --static enable
root@opensol:~# devfsadm -i iscsi
root@opensol:~# iscsiadm list target
Target: iqn.2001-04.com.example:storage.disk2.sys1.xyz
Alias: -
TPGT: 1
ISID: 4000002a0000
Connections: 1
root@opensol:~# format
Searching for disks...done
/pci@0,0/pci1000,30@10/sd@0,0
/iscsi/disk@0000iqn.2001-04.com.example%3Astorage.disk2.sys1.xyzFFFF,0
Specify disk (enter its number): CTRL+C
Okay, so we are now sure that iSCSI works. In several days I'm going to test exporting a SONY SDT-9000 ( an old tape drive ) via iSCSI.
Samba never interested me much ( since I was never interested in sharing disk space with Windows boxes ), but clustered Samba looks promising.
On the CCIE.PL forum there is a very interesting post ( author: pjeter ) that describes, more or less, grabbing the picture from a cable-TV tuner and throwing it onto multicast in real time ( with a codec ) on an ordinary Linux PC, so that the TV picture is available to the other computers on the local network…
SAN architectures interest me ( e.g. link ) … but unfortunately they are out of my budget's reach:
IPMP – IP MultiPathing is an HA ( High Availability ) technique that keeps a Solaris server connected to the world despite the failure of one or more of its network links. So much for the introduction :)
xeno – a computer connected to the switch with a single network card
10.99.1.20 – Solaris 10 with IPMP ( 2 network cards to the same switch as xeno )
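The IPMP group itself is set up via the hostname files. A minimal link-based configuration could look like this ( a sketch – the actual interface names aren't given here ):
# /etc/hostname.bge0
10.99.1.20 netmask + broadcast + group ipmp0 up
# /etc/hostname.bge1
group ipmp0 up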
xeno:~# ping -i 0.5 10.99.1.20
PING 10.99.1.20 (10.99.1.20) 56(84) bytes of data.
64 bytes from 10.99.1.20: icmp_seq=1 ttl=255 time=0.318 ms
64 bytes from 10.99.1.20: icmp_seq=2 ttl=255 time=0.308 ms
64 bytes from 10.99.1.20: icmp_seq=3 ttl=255 time=0.293 ms
64 bytes from 10.99.1.20: icmp_seq=4 ttl=255 time=0.325 ms
64 bytes from 10.99.1.20: icmp_seq=5 ttl=255 time=0.312 ms
64 bytes from 10.99.1.20: icmp_seq=6 ttl=255 time=0.308 ms
64 bytes from 10.99.1.20: icmp_seq=7 ttl=255 time=0.342 ms
64 bytes from 10.99.1.20: icmp_seq=8 ttl=255 time=0.325 ms
# pulling the cable out...
64 bytes from 10.99.1.20: icmp_seq=19 ttl=255 time=0.267 ms
64 bytes from 10.99.1.20: icmp_seq=20 ttl=255 time=0.271 ms
64 bytes from 10.99.1.20: icmp_seq=21 ttl=255 time=0.338 ms
64 bytes from 10.99.1.20: icmp_seq=22 ttl=255 time=0.280 ms
64 bytes from 10.99.1.20: icmp_seq=23 ttl=255 time=0.254 ms
So packets 9-18 got lost, which translates to (19-8)*0.5 = 5.5 seconds of machine unavailability with the entry below in /etc/default/mpathd:
# 2s
FAILURE_DETECTION_TIME=2000
And now a few undocumented flags for in.mpathd:
SQLPLUS: COPY FROM … TO … Something I had no idea about until now – I didn't even know such a thing existed…
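For example ( the connect strings and table names here are made up for illustration ):
SQL> COPY FROM scott/tiger@db1 TO scott/tiger@db2 CREATE emp_copy USING SELECT * FROM emp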
A nice text about MySpace and their "scaling evolution".
Post from: 07/03/2007
Exterminating a datafile by the forces of nature, AKA /dev/chaos:
oracle@xeno:/u04/data/vn2$ dd if=/dev/urandom of=users02.dbf bs=1M count=5
After trying an INSERT into the table that lived in that file, the alert.log reads:
Wed Mar 7 20:36:08 2007
Hex dump of (file 5, block 10) in trace file /u04/admin/vn2/udump/vn2_ora_6088.trc
Corrupt block relative dba: 0x0140000a (file 5, block 10)
Bad header found during buffer read
[..]
Reread of rdba: 0x0140000a (file 5, block 10) found same corrupted data
Wed Mar 7 20:36:08 2007
Corrupt Block Found
TSN = 4, TSNAME = USERS
RFN = 5, BLK = 10, RDBA = 20971530
OBJN = 10235, OBJD = 10235, OBJECT = DANE1, SUBOBJECT =
SEGMENT OWNER = VNULL, SEGMENT TYPE = Table Segment
Down to business ( integrity check ):
oracle@xeno:/u04/data/vn2$ dbv file=users02.dbf
[..lots of junk..]
Total Pages Examined : 639
[..]
Total Pages Marked Corrupt : 639
But of course the other tablespaces, and whatever is in the SGA, keep working fine.
Shutdown immediate doesn't work, so abort it is:
SQL> shutdown abort;
SQL> startup;
[..]
ORA-01122: database file 5 failed verification check
ORA-01110: data file 5: '/u04/data/vn2/users02.dbf'
ORA-01251: Unknown File Header Version read for file number 5
So now the database is in mount mode; we take the corrupted file offline:
SQL> alter database datafile '/u04/data/vn2/users02.dbf' offline;
SQL> alter database open;
At this point users can log in and create new tables ( these go into datafile users01.dbf ), but the data from our table is still unavailable, since it lived in users02.dbf!
And now we restore from the hot backup by hand, since I don't "feel" RMAN yet and there's too much magic in it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database datafile '/u04/data/vn2/users02.dbf' offline drop;
oracle@xeno:/u04/data/vn2$ cp /backup/users02.dbf .
SQL> alter database open;
SQL> recover datafile '/u04/data/vn2/users02.dbf';
SQL> alter database datafile '/u04/data/vn2/users02.dbf' online;
Hooray, the data is back, although the remaining recovery steps should still be performed.
BTW: this query displays the files holding the objects of a given user that we are interested in:
SELECT segment_name, d.name AS datafile FROM dba_segments s JOIN v$datafile d ON (s.header_file=file#) WHERE s.owner='<schema>';
Post from: 06/03/2007
ORACLE_SID is set to vn2
# simulating a total failure of the disk holding the spfile
oracle@xeno: dbs$ dd if=/dev/urandom of=spfilevn2.ora bs=1k count=3
3+0 records in
3+0 records out
3072 bytes (3.1 kB) copied, 0.001162 seconds, 2.6 MB/s
# kill -9 <oracle_pids> would be too much typing, and the effect is almost the same – i.e. the database goes down
SQL> shutdown abort;
ORACLE instance shut down.
# the power company turned the power back on…
SQL> startup;
ORA-00600: internal error code, arguments: [733], [1157335612], [pga heap], [],[], [], [], []
And if there were no backup copy of the spfile, or a pfile created with CREATE PFILE FROM SPFILE, it would be quite a "ride"… but luckily there is one:
SQL> startup pfile='/u01/product/10.2.0/db_1/dbs/initvn2.ora';
ORACLE instance started.
[...]
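The pfile fallback used above is worth creating ahead of time, while the spfile is still healthy – standard syntax:
SQL> CREATE PFILE='/u01/product/10.2.0/db_1/dbs/initvn2.ora' FROM SPFILE;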
Post from: 05/03/2007
I just found out about write-intent bitmaps in Linux software RAID:
http://www.gentoo-wiki.info/HOWTO_Install_on_Software_RAID#Write-intent_bitmap
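A write-intent bitmap can be added to an existing md array with mdadm ( /dev/md0 is an example name ):
mdadm --grow /dev/md0 --bitmap=internal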
Post from: 01/03/2007
In case anyone didn't know: starting with version 0.99.5 you can execute many Quagga commands in a single shell command, which is pretty neat:
ZZZ:/etc# VTYSH_PAGER=/bin/cat vtysh -c 'show ip rip status' -c 'conf t' -c 'router rip' -c 'passive-interface eth0'
Routing Protocol is "rip"
Sending updates every 30 seconds with +/-50%, next due in -1173105357 seconds
Timeout after 180 seconds, garbage collect after 120 seconds
Outgoing update filter list for all interface is rip_lan
Incoming update filter list for all interface is rip_lan
Default redistribution metric is 1
[etc...]
Or, to quickly save the configuration ( with one file per routing protocol; by default vtysh wants to save everything to a single shared file, Quagga.conf ):
ZZZ:/etc# VTYSH_PAGER=/bin/cat /usr/bin/vtysh -c 'conf t' -c 'no service integrated-vtysh-config' -c 'end' -c 'write'
Building Configuration...
Configuration saved to /etc/quagga/zebra.conf
Configuration saved to /etc/quagga/ripd.conf
Configuration saved to /etc/quagga/ospfd.conf
[OK]
ZZZ:/etc#
Post from: 26/02/2007
A quick check that filesystemio_options=SETALL really enables direct I/O ( O_DIRECT ) on the datafiles:
alter system set filesystemio_options=SETALL scope=spfile;
shutdown;
startup;
oracle@xeno:~$ strace -ff -e open sqlplus / as sysdba 2> open2.log
[..]
SQL> startup;
[..]
SQL> CTRL+Z
[1]+ Stopped strace -ff -e open sqlplus / as sysdba 2>open2.log
oracle@xeno:~$ grep O_DIRECT open2.log | grep 'users01\.dbf'
[pid 7194] open("/u03/product/10.2.0/oradata/vn1/users01.dbf", O_RDONLY|O_DIRECT|O_LARGEFILE) = 22
[pid 7194] open("/u03/product/10.2.0/oradata/vn1/users01.dbf", O_RDWR|O_SYNC|O_DIRECT|O_LARGEFILE) = 22
[pid 7216] open("/u03/product/10.2.0/oradata/vn1/users01.dbf", O_RDWR|O_SYNC|O_DIRECT|O_LARGEFILE) = 18
[pid 7196] open("/u03/product/10.2.0/oradata/vn1/users01.dbf", O_RDWR|O_SYNC|O_DIRECT|O_LARGEFILE) = 25
[pid 7200] open("/u03/product/10.2.0/oradata/vn1/users01.dbf", O_RDWR|O_SYNC|O_DIRECT|O_LARGEFILE) = 19
[pid 7206] open("/u03/product/10.2.0/oradata/vn1/users01.dbf", O_RDWR|O_SYNC|O_DIRECT|O_LARGEFILE) = 22
Post from: 23/02/2007
This is poorly documented ( actually not at all; p as in persistent ):
route -p
This adds the route to /etc/inet/static_routes; there is no longer any need to modify scripts in /etc/init.d/ – SMF itself will bring the route back up at reboot.
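For example ( the gateway address here is made up ):
route -p add default 10.99.1.1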