Search Results

Search found 1900 results on 76 pages for 'xserve raid'.

Page 41 of 76

  • Issue with broken disk on Solaris with raidctl - how to proceed

    - by weismat
    I have a SunFire T2000 server with two mirrored disk pairs. The server needed its system battery replaced. After swapping the battery, no disks were found at first. After booting from CD we managed to find the disks, but one disk is now broken and raidctl reports a failed synchronisation. The boot process now stops when trying to mount the file systems, and the power light of the broken drive is not even blinking. What is the best way to proceed? Fortunately I could live with losing the data on that drive, as it is backed up, but I would like to keep the rest of the data (it contains /etc) and get the server booting again.
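
    A hedged first step (exact raidctl behaviour varies between Solaris 10 updates, and the OBP alias below is only an example, not taken from this system): confirm what the controller still thinks about the volume, and try to boot single-user from the surviving half of the mirror before changing anything.

        # Show hardware RAID volume status as the controller sees it
        raidctl
        # From the OBP 'ok' prompt, try booting single-user off the surviving half
        # of the mirror (the alias 'disk1' is an example; check devalias output first)
        boot disk1 -s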

    Read the article

  • Server drives: 2.5" SCSI less reliable than 3.5" ?

    - by Bill
    Just had an HP 2.5" SAS 10k drive fail on a RAID5 array after about 2.5 years. It made me wonder if this was a fluke or an indication that 2.5" drives are less reliable than 3.5" SAS drives. I've had many 3.5" SAS drives running for many years without any issues (knock on wood). I would think that smaller drives would generate less heat and therefore be more reliable, but couldn't find any evidence of this. I realize all drives will eventually fail and that it's a crap shoot with any particular model, but was hoping someone could point out some related studies or comment on the SCSI drive sizes they've found to be most reliable in servers. Thanks.

    Read the article

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this, so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events:

    1) A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in degraded state.
    2) While the pool is in degraded state, the pool is mutated (data is written and/or changed).
    3) The connectivity issue is physically repaired such that device D is reliable again.
    4) Knowing that most data on D is valid, and not wanting to stress the pool with a resilver needlessly, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected.

    I've read that zpool clear only clears the error counter and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device - however, the pool status will not reflect this issue! I would at least assume, based on ZFS' robust integrity mechanisms, that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems: reads are not guaranteed to hit all mutations unless a scrub is done; and once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures. Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state, and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.
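
    For reference, a minimal command sequence for the scenario described (using the question's own placeholder names, pool and D):

        # After the transient fault is fixed, clear the errors on device D
        zpool clear pool D
        # Check whether ZFS schedules a resilver or simply reports the pool healthy
        zpool status -v pool
        # Force a full verification of all data, repairing anything stale on D
        zpool scrub pool
        zpool status -v pool    # watch scrub progress and repaired bytes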

    Read the article

  • How is it possible for SSD drives to have such good latency?

    - by tigrou
    The first time I read about SSDs, I was surprised to learn they internally use NAND flash chips. This kind of memory is generally slow (low bandwidth) and has high latency, while SSDs are just the opposite. But here is how it works: SSD drives increase their bandwidth by using several NAND flash chips in parallel. In other words, they do data striping (like RAID 0) across several chips, handled by the controller. What I don't understand is how SSD drives have such low latency while they are using NAND chips (or at least much better latency than a typical single NAND chip would manage). EDIT: I think I under-estimated NAND chip capabilities. USB drives, while powered by NAND, are mostly limited by the USB protocol (which has pretty high latency) and the USB controller. That explains their poor performance in some cases.

    Read the article

  • Installing Solaris 10 on a Sun T5220 - ZFS/UFS RAID 10?

    - by Matthew
    I am in a bit of a time crunch and need to get two T5220s built. We were very happy to see two boxes in our aged inventory which had 8 HDDs each, but didn't think to check whether they were running hardware RAID or not. It turns out they aren't. When we install, we are given the option to use UFS or ZFS, but when we select a place to install we're only given the option of installing on one single disk. Is it possible to create a software RAID 10 across all of the disks and install the OS on that? Sorry if any lingo is wrong; I'm not really a Sun guy and our guru is out of town right now. Any help would be really appreciated! Note: most of the guides I've found on Google entail installing the OS on a single disk and then creating a separate RAID 10 on other disks. We would actually like the OS to reside on the RAID 10. Hope that clarifies things.
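
    A hedged sketch of the usual workaround (the c0tXd0 disk names are placeholders, not taken from this box): as far as I know the Solaris 10 installer will only put a ZFS root pool on a single disk or a mirror, so one common layout is a two-disk mirrored rpool for the OS plus a separate striped-mirror (RAID 10-style) data pool on the remaining six disks.

        # After installing to the first disk, attach a mirror to the root pool
        zpool attach rpool c0t0d0s0 c0t1d0s0
        # (a boot block also needs to be installed on the second disk, e.g. with installboot)
        # Build a RAID 10-style data pool from the remaining six disks
        zpool create datapool mirror c0t2d0 c0t3d0 \
                              mirror c0t4d0 c0t5d0 \
                              mirror c0t6d0 c0t7d0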

    Read the article

  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point. I have been looking for a "big storage" solution for a while on my own and I can't find anything that would suit my needs, but now push has come to shove. Current situation: I have about 6TB of data storage (already full) - a Drobo. Yesterday the Drobo died on me and put me in a bad situation - I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back, but I don't want to be in the same situation later, and continuing to use Drobo invites this event to re-occur. What I am looking for:

    1) Inexpensive setup.
    2) Dynamically extendable - add more drives and/or replace a drive with one of bigger capacity.
    3) Redundant - protection against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives one should be able to fail without data loss.
    4) Easy data recovery - let's say the unforeseen happens; I would like to be able to recover the information without buying new tools or replacements - example: a new Drobo.
    5) Should be USB or network-attached storage.
    6) No demand on speed. It doesn't have to be fast, as I am not doing video editing on the setup. However, if the option exists, a decent speed would be nice to have.

    After thoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't have #2 - dynamic extendability. There are workarounds with pools, but it seems a bit complicated and unnecessary. Moreover, it seems like data safety is a big question - I saw some horror stories. Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software that has to run on top, but a simpler solution is more attractive. Thank you! P.S: Feel free to ignore "After thoughts".

    Read the article

  • How to figure out disks performance in Xen?

    - by cpt.Buggy
    So, I have a Dell R710 with a PERC 6/i Integrated controller and six 450GB Seagate 15k SAS disks in RAID10, with 30 Xen VPSes running on it. Now I need to deploy a second server with the same hardware for the same tasks, and I want to work out whether it's a good idea to use RAID5 instead of RAID10, because we have a lot of "free" memory on the first server and not so much free space. How do I measure disk performance on the first server and find out whether I could move to RAID5 without slowing down the whole system?
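
    One hedged way to gather the numbers (the device name below is an example; tools like fio or bonnie++ give more detail, but iostat alone already shows how busy the spindles are under the real VPS workload):

        # Per-device utilisation, queue size and await while the VPSes run their normal load
        iostat -xm 5
        # Rough sequential-read figure for the array (read-only, but still best run off-peak)
        dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct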

    Read the article

  • How can one implement RAID1 with a Dell Latitude laptop containing one normal hard drive, and one hard drive in an external bay?

    - by user12583188
    OS: Win7 Professional. Laptop: Latitude E6420. The answer to this question should address how to set up RAID1 in software on a Dell Latitude E6420. I have two Hitachi Z5K500 320GB drives (new). There is one hard drive (320GB capacity) in the system now, which contains the current installation that I would prefer to keep. The drive currently inside the laptop will be replaced with one of the Hitachi drives, and the other Hitachi drive will be fitted into the laptop by way of a Dell hard drive "caddy" enclosure, which inserts into the media bay of the laptop (you remove the CD-ROM module and insert the HD bay).

    Read the article

  • How do SSD drives reduce their latency?

    - by tigrou
    The first time I read about SSDs, I was surprised to learn they internally use NAND flash chips. This kind of memory is generally slow (low bandwidth) and has high latency, while SSDs are just the opposite. But here is how it works: SSD drives increase their bandwidth by using several NAND flash chips in parallel. In other words, they do data striping (like RAID 0) across several chips, handled by the controller. What I don't understand is how SSD drives manage to reduce latency (or at least do much better than a single NAND chip without any controller could).

    Read the article

  • HP DL380 G5 Predictive failure of a new drive

    - by CharlieJ
    Consolidated Error Report:

        Controller: Smart Array P400 in slot 3
        Device: Physical Drive 1I:1:1
        Message: Predictive failure.

    We have an HP DL380 G5 server with two 72GB 15k SAS drives configured in RAID1. A couple of weeks ago, the server reported a drive failure on Drive 1. We replaced the drive with a brand new HDD -- same spares number. A few days ago, the server started reporting a predictive drive failure on the new drive, in the same bay. Is it likely the new drive is bad, or is it more likely that we have a bay failure problem? This is a production server, so any advice would be appreciated. I have another spare drive, so I can hot swap it if this is a fluke and the new drive is just bad. THANKS! CharlieJ

    Read the article

  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have - physical disks:

    4x 3TB
    2x 2TB
    2x 1TB

    What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I just get two more 3TB drives and create two 3x3TB raidz2 storage pools? Or create a single 4x3TB raidz2 vdev? Can I put redundancy at the pool level, create individual vdevs for each drive, and then add 2x 1TB+2TB striped vdevs to keep all vdevs the same size? Keep in mind I do need to migrate data from the smaller drives and am planning on adding more 3TB drives later on. What do you think?
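
    For illustration only (device names are placeholders, and note that in ZFS redundancy lives at the vdev level, not the pool level), one layout that fits these disks is a raidz2 of the four 3TB drives plus mirrored pairs of the matching smaller drives:

        zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0
        # Mixing vdev types triggers a replication-level warning; -f overrides it
        zpool add -f tank mirror c0t4d0 c0t5d0   # the 2TB pair
        zpool add -f tank mirror c0t6d0 c0t7d0   # the 1TB pair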

    Read the article

  • When using RAID10 + BBWC why is it better to separate PostgreSQL data files from OS and transaction logs than to keep them all on the same array?

    - by Vlad
    I've seen the advice everywhere (including here and here): keep your OS partition, DB data files and DB transaction logs on separate discs/arrays. The general recommendation is to use RAID1 for OS, RAID10 for data (or RAID5 if load is very read-biased) and RAID1 for transaction logs. However, considering that you will need at least 6 or 8 drives to build this setup, wouldn't a RAID10 over 6-8 drives with BBWC perform better? What if the drives are SSDs? I'm talking here about internal server drives, not SAN.
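
    For context, "separate transaction logs" in PostgreSQL usually just means relocating the WAL directory onto the log array, for example via a symlink (the paths and service name here are illustrative, and pg_xlog is called pg_wal on newer releases):

        # Stop the server, move the WAL directory to the log array, symlink it back
        service postgresql stop
        mv /var/lib/pgsql/data/pg_xlog /mnt/wal-array/pg_xlog
        ln -s /mnt/wal-array/pg_xlog /var/lib/pgsql/data/pg_xlog
        service postgresql start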

    Read the article

  • Is it possible to create a Mirror or Stripe volume for the boot partition in Windows 2008/R2?

    - by Georgios
    Hello, I have a server with two identical disks, and I have installed Windows Server 2008 R2 on C:, which is a 60GB volume on Disk 0. Using Disk Management, I have attempted to create both a mirrored and a striped volume onto Disk 1, but every time I get the same error: "No extents found in the plex". This error occurs after Windows has converted both disks to dynamic. The fact that the manager lets me attempt this suggests that it should be possible, yet I have been unable to find any solutions to this error. Any ideas on how to solve this? Thanks, Georgios

    Read the article

  • Can I split one RAID1 partition in two?

    - by Prosys
    I have a Linux box with CentOS 6.2 and a RAID1 (2x 2TB) configuration:

        /dev/md1 -> /      (10G)
        /dev/md2 -> /home  (1.9T)

    I want to split md2 into two different partitions, so I can get the following configuration:

        /dev/md1 -> /         (10G)
        /dev/md2 -> /home     (1T)
        /dev/md3 -> /example  (900G)

    How can I achieve this? I already know that I can resize the partition, but that doesn't alter the real partition table (only the md device), so how can I do this?
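
    A rough, untested outline of one way to do it (assuming ext4 on /dev/md2, that both backing partitions can be shrunk from their ends, and that the partition names in step 3 are examples; every step should be rehearsed against backups first):

        # 1. Shrink the filesystem, then the md device itself
        umount /home
        e2fsck -f /dev/md2
        resize2fs /dev/md2 1000G
        mdadm --grow /dev/md2 --size=1048576000   # new per-device size in KiB (~1000G)
        # 2. Shrink the two underlying partitions with parted/fdisk and create new
        #    ~900G partitions on both disks in the space that was freed
        # 3. Build the new mirror and filesystem on those new partitions
        mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
        mkfs.ext4 /dev/md3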

    Read the article

  • RAID1: Which disk will be mirrored?

    - by tmelen
    How does a RAID1 system determine which disk to use as the source and which disk to use as the destination when mirroring? Assume for instance the following scenario: A RAID1 array is created with two disks, A and B. A is replaced by disk C, which is added to the array. Files are being modified as time goes by. Now B is removed and A is reinserted. Will the RAID1 system realize that A and C are out of sync, and that C is more up-to-date than A? And if not, is there a safe way to prevent the mirroring process from starting immediately when disk A is inserted?
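
    If this happens to be Linux mdadm (the question is generic, so treat this as just one concrete case), the per-member event counters decide: the member with the higher Events count is the fresher copy, and a stale member is normally rejected as "non-fresh" and resynced from the fresh one once it is re-added.

        # Compare event counters and update times on the two members (names are examples)
        mdadm --examine /dev/sda1 | grep -E 'Events|Update Time'
        mdadm --examine /dev/sdc1 | grep -E 'Events|Update Time'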

    Read the article

  • How does a Promise FastTrak 133 interleave a striped array?

    - by Jemenake
    A co-worker had two drives configured as a stripe on a motherboard with an on-board Promise FastTrak 133. The motherboard failed, and we've been unable to find any others with an on-board Promise controller which can recognize the array. With Linux or some disk editors, I can see data on both drives, and I want to see if I can combine the data from both drives onto a single, larger drive. But I need to know how that information is interleaved on the drives. I've tried dmraid on Linux, but that doesn't recognize the drives as an array. I guess I could just try combining alternating blocks from the drives, starting with a block size of 256B and doubling until I get a result that looks intact. But I'd like to avoid that if someone already knows how Promise controllers spread the data over a striped array.
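
    One non-destructive experiment, assuming the drives are readable under Linux (the device names and the 64KiB chunk size are guesses to iterate over): mdadm can assemble a superblock-less legacy stripe with --build, so you can try chunk sizes and drive orders until a filesystem appears, mounting read-only so nothing is written back to the members.

        mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb /dev/sdc
        blkid /dev/md0                # does a known filesystem signature appear?
        mount -o ro /dev/md0 /mnt
        # If not, tear it down and retry with another chunk size or the drives swapped
        mdadm --stop /dev/md0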

    Read the article

  • Have two partitions in RAID1

    - by mateikav
    The answers are unclear wherever I look. I have two 2TB drives for a RAID1 and I want to mirror them while having two partitions on the drives. One partition will be 100GB and contain programs, the other partition will be 1.8TB and contain personal files. Some may ask why? The answer is that my programs are currently on another older drive and I want to save time and pain uninstalling and re-installing critical programs while merely copying them to the new drives via Shadowcopy. When I create the RAID1, will both partitions be mirrored? Is this possible? I am sorry if I am being confusing or unclear.

    Read the article

  • Rebuild mdadm RAID5 array with fewer disks

    - by drjeep
    I have a 4-disk RAID5 array, one disk of which is starting to fail according to smartd. However, since I'm using less than half the space on /dev/md0, I'd like to rebuild the array without the failing disk. The closest scenario I've been able to find online is this post; however, it contains bits that don't apply to me (LVM volumes) and also doesn't explain how to go about resizing the partition after I'm done. Please note I have backups of the important data, but I'd like to avoid rebuilding the array from scratch if possible.
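
    A hedged outline of the usual shrink-then-reshape path (assuming ext3/ext4 sits directly on /dev/md0 and that /dev/sdd1 is the failing member; sizes are left as placeholders to be worked out against the array, and the reshape is slow):

        # 1. Shrink the filesystem so it fits on two data disks' worth of RAID5 space
        umount /dev/md0
        e2fsck -f /dev/md0
        resize2fs /dev/md0 <target size>
        # 2. Shrink the array to match, then reshape down to 3 devices
        mdadm --grow /dev/md0 --array-size=<target size in KiB>
        mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-reshape.backup
        # 3. When the reshape finishes, one member is left as a spare; check which one,
        #    then fail/remove the disk smartd is complaining about
        mdadm --detail /dev/md0
        mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
        # 4. Let the filesystem grow back into whatever space remains
        resize2fs /dev/md0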

    Read the article

  • SCSI Windows Setup on Dell Precision 670 Workstation...please help.

    - by sweetcoder
    Windows Setup error: "Setup did not find any hard disk drives installed in your computer". This is not exactly a programming question, but I thought you guys might be able to help. I just received a Dell Precision 670 workstation. Windows is not recognizing the hard drive, and I have experienced this before with other computers. I would usually just go into the BIOS and set the configuration to compatibility mode, but I have no idea how to do this on this machine. There is an Adaptec SCSI HostRAID BIOS v4.30.4S5 screen on startup. It says to press CTRL+A for the SCSISelect utility. It shows a Maxtor ATLAS10K5_73WLS for the drive. I was wondering if anyone out there knew how to configure this thing so that Windows Setup will recognize the hard drive? Any advice is very much appreciated, and if you need further information please let me know. RAID was turned off in the BIOS for this device. Thanks.

    Read the article

  • Why is my RAID /dev/md1 showing up as /dev/md126? Is mdadm.conf being ignored?

    - by mmorris
    I created a RAID with:

        sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
        sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2

    sudo mdadm --detail --scan returns:

        ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    which I appended to /etc/mdadm/mdadm.conf, see below:

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #
        # by default (built-in), scan all partitions (/proc/partitions) and all
        # containers for MD superblocks. alternatively, specify devices to scan, using
        # wildcards if desired.
        #DEVICE partitions containers
        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes
        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>
        # instruct the monitoring daemon where to send mail alerts
        MAILADDR root
        # definitions of existing MD arrays
        # This file was auto-generated on Mon, 29 Oct 2012 16:06:12 -0500
        # by mkconf $Id$
        ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sdb2[0] sdc2[1]
              208629632 blocks super 1.2 [2/2] [UU]
        md1 : active raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]
        unused devices: <none>

    ls -la /dev | grep md returns:

        brw-rw---- 1 root disk 9, 1 Oct 30 11:06 md1
        brw-rw---- 1 root disk 9, 2 Oct 30 11:06 md2

    So I think all is good and I reboot. After the reboot, /dev/md1 is now /dev/md126 and /dev/md2 is now /dev/md127????? sudo mdadm --detail --scan returns:

        ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc2[1] sdb2[0]
              208629632 blocks super 1.2 [2/2] [UU]
        md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]
        unused devices: <none>

    ls -la /dev | grep md returns:

        drwxr-xr-x 2 root root 80 Oct 30 11:18 md
        brw-rw---- 1 root disk 9, 126 Oct 30 11:18 md126
        brw-rw---- 1 root disk 9, 127 Oct 30 11:18 md127

    All is not lost, I:

        sudo mdadm --stop /dev/md126
        sudo mdadm --stop /dev/md127
        sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
        sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2

    and verify everything: sudo mdadm --detail --scan returns:

        ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sdb2[0] sdc2[1]
              208629632 blocks super 1.2 [2/2] [UU]
        md1 : active raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]
        unused devices: <none>

    ls -la /dev | grep md returns:

        brw-rw---- 1 root disk 9, 1 Oct 30 11:26 md1
        brw-rw---- 1 root disk 9, 2 Oct 30 11:26 md2

    So once again, I think all is good and I reboot. Again, after the reboot, /dev/md1 is /dev/md126 and /dev/md2 is /dev/md127????? sudo mdadm --detail --scan returns:

        ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc2[1] sdb2[0]
              208629632 blocks super 1.2 [2/2] [UU]
        md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]
        unused devices: <none>

    ls -la /dev | grep md returns:

        drwxr-xr-x 2 root root 80 Oct 30 11:42 md
        brw-rw---- 1 root disk 9, 126 Oct 30 11:42 md126
        brw-rw---- 1 root disk 9, 127 Oct 30 11:42 md127

    What am I missing here?
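
    One commonly suggested check for this symptom (hedged, since it can't be verified against this exact system): on Ubuntu the initramfs carries its own copy of mdadm.conf, so after editing the real file the initramfs has to be rebuilt, otherwise the arrays keep coming up under the fallback md126/md127 names at boot.

        # Rebuild the initramfs so it picks up the updated /etc/mdadm/mdadm.conf
        sudo update-initramfs -u
        # Optionally confirm the copy embedded in the initramfs afterwards
        lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf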

    Read the article

  • What tells initramfs or the Ubuntu Server boot process how to assemble RAID arrays?

    - by Brad
    The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup? My problem: I boot my server and get:

        Gave up waiting for root device.
        ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell!

    This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. What I get is /dev/md0 isn't assembled at all. /dev/md1 is assembled, but instead of using /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2, it uses /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd. To fix this and boot my server I do:

        $(initramfs) mdadm --stop /dev/md1
        $(initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        $(initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
        $(initramfs) exit

    And it boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to manually assemble them. I've checked /etc/mdadm/mdadm.conf and the UUIDs of the two arrays listed in that file match the UUIDs from $ mdadm --detail /dev/md[0,1]. Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1

    UPDATE: I have a feeling it has to do with superblocks. $ mdadm --examine /dev/sda outputs the same thing as $ mdadm --examine /dev/sda2. $ mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it seems to fit with /dev/md1 getting assembled with /dev/sd[abcd] instead of /dev/sd[abcd]2. I tried zeroing the superblock on /dev/sd[abcd]. This removed the superblock from /dev/sd[abcd]2 as well and prevented me from being able to assemble /dev/md1 at all. I had to $ mdadm --create to get it back. This also put the super blocks back to the way they were.
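
    One hedged suggestion that fits the symptom (the whole-disk devices appearing to carry the same superblock as their last partitions): tell mdadm to scan only partitions, and then rebuild the initramfs so the boot-time copy of the config matches the one on disk. The DEVICE pattern below is an example, not taken from this system.

        # In /etc/mdadm/mdadm.conf, limit scanning to partitions so /dev/sda itself
        # is never treated as an array member, e.g.:
        #   DEVICE /dev/sd*[0-9]
        # Then regenerate the initramfs, which embeds its own mdadm.conf:
        sudo update-initramfs -u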

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the Grub2 bootloader. The motivation behind doing this: I have 4x 1TB 7200rpm disks. Two are newer and faster (up to 200MB/sec) and the other two are slower (up to 140MB/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a combined speed of up to ~480MB/sec. That is roughly 4*120MB/sec, so the RAID-0 works at the speed of the slowest device. My idea is to create a separate RAID-0 md0 device from 500GB partitions on the slower hard disks. Theoretically, this md0 device will have a speed of roughly 240~280MB/sec (about 2*140). After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200 = 600MB/sec. The stripe width for this RAID will be 2x bigger than for the underlying RAID with the slow disks. My questions are: is it possible, or am I missing something? Will it work as expected? Can I boot from such a consolidated RAID device? Any better ideas? Any pitfalls? I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters and so on). PS: Speed is needed for a home virtualization server and just for experience/fun. Reliability is provided via regular automatic backups to a separate device. PPS: I also considered using different stripe widths for hard disks with different speeds in a single RAID, but mdraid does not seem to support that.
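
    A minimal sketch of the nesting itself (device and partition names are examples, and this does not address the Grub2 boot question): mdraid does accept an md device as a member of another array.

        # Inner stripe across the two slower disks' 500GB partitions
        sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
        # Outer stripe across the two fast disks and the inner array
        sudo mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/md0
        cat /proc/mdstat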

    Read the article

  • How likely is it that my data can be recovered after Windows CHKDSK ran on a degraded RAID 5 array?

    - by chrisling106
    Hello there. We have a RAID 5 setup with 3 SATA disks; #2 went down, as reported on the pre-POST screen. Unfortunately, for some reason out of my control, the system was rebooted with the RAID degraded. Windows XP (64-bit) loaded, and CHKDSK ran automatically and did its recovery! From that point onwards, the following error appears every time, even in Safe Mode: "lsass.exe - The endpoint format is invalid". I took those 3 disks to a data recovery expert and need to wait at least 2-4 days for results. There are 2 VMs, stored across multiple files in this RAID 5 array, and there's no backup! Sorry, I just inherited the system from an ex-staff member who left the company 2 months before I joined. How likely is it that the data can be recovered?

    Read the article

  • How to determine if a CentOS system is using RAID-1?

    - by Tedd Johnson
    I've tried searching for this answer, but haven't found anything elegant. I have numerous servers in a colo that is in another state. I need to find a way to check that the servers have RAID-1 on them, so that I can determine if they were set up correctly by my colo. df -h shows:

        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  442G  1.5G  418G   1% /
        /dev/sda1                         99M   19M   75M  20% /boot
        tmpfs                            4.0G     0  4.0G   0% /dev/shm

    However, as CentOS uses LVM by default, this doesn't indicate whether a RAID-1 is present. It is supposed to be a software RAID, so I'm pretty sure there should be a way to check. Thanks
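
    A few quick checks that cover the common cases (the /dev/md0 name is only an example of what might exist, and pvs assumes the LVM tools are installed, which they are on a default CentOS install):

        # Kernel's view of any md (software RAID) arrays
        cat /proc/mdstat
        # Details for a specific array, if one exists
        mdadm --detail /dev/md0
        # See which block device backs the LVM volume group; if it is an md device,
        # the root filesystem is sitting on software RAID
        pvs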

    Read the article
