Search Results

Search found 30801 results on 1233 pages for 'hard link'.

  • HD latency measurement using bonnie++ on machines with different RAM sizes

    - by j0nes
    Hello, I have run bonnie++ v1.96 on two different servers without any additional load. One server is a "physical" Dell server with 32GB RAM; the other is a virtual instance with 14GB RAM. The bonnie++ manual says to use a test size of twice the RAM, so I used 64GB on the physical machine and 28GB on the virtual machine. Now I want to compare the results, and I am wondering whether they are comparable at all. The most interesting part is the latency: on the physical machine the values are about 10 times higher than on the virtual machine! Can I take these results seriously (i.e. the virtual machine's disk is much, much faster), or does the different RAM size skew the results? Thanks! Jonas
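
    A minimal sketch of how the two runs could be kept directly comparable, assuming bonnie++ 1.96 and a scratch directory on the disk under test (paths and the nobody user are illustrative):

        # Physical box: 32 GB RAM, so a 64 GB data set (-s and -r are in MB)
        bonnie++ -d /mnt/test -s 65536 -r 32768 -u nobody

        # Virtual instance: 14 GB RAM, so a 28 GB data set
        bonnie++ -d /mnt/test -s 28672 -r 14336 -u nobody

    Keeping the test-size-to-RAM ratio identical (2:1 here) removes one variable; the absolute latency figures will still be influenced by caching and by the virtualisation layer, so large differences are best confirmed with a second tool such as fio or plain dd.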

  • Which HDD brand do you trust?

    - by Shiki
    Okay, it says it's 'subjective', but I believe it's not. Basically I want to ask the community about your preference - not really 'preference', but actual experience. For example, if you have never had a problem with Western Digital, write that in an answer, or if there is already an answer for WD, just vote it up. And so on. I have heard so many stories and experiences. I have only had Samsung, Maxtor, WD and Seagate HDDs. The Samsung died with bad blocks and had anomalies. The Maxtor died so fast I couldn't really even try it, and it ran really hot and loud. The Seagate is about as loud as a jet plane and moderately hot. My WD (Green) is quiet, really cool and reasonably fast. That's all the experience I have, so I would say Western Digital in an answer (or Hitachi - I have never had one yet, but every expert I know says I should get one, since they have had problems even with WD while Hitachi seems to be OK; my laptop came with a Hitachi HDD, but I don't think that's really relevant). Basically I mean desktop 7200RPM HDDs here. Notebook HDDs are OK too, but no Raptor/SCSI/server drives. Hope you get what I mean and it won't get closed.

  • Managing disks in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following: I would like to have a FreeBSD file server with ZFS managing the disks, but I also need other VMs, e.g. a shell/git server, a mail server, etc. I'm wondering how to deal with the following issues:
    1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system via PCI passthrough? I want to maximize the reliability of the setup.
    2. On which disks should I install the hypervisor and keep the server system disks?
    For (2) I have the option of having a RAID setup on the SAS controller and using that as the system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but that could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.
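
    If the HBA does get passed through to the FreeBSD guest, a minimal sketch of letting ZFS manage the raw disks from inside that guest could look like this (pool name, layout and device names are illustrative assumptions, not a sizing recommendation):

        # Inside the FreeBSD file-server VM, with the SAS controller passed through:
        # build a raidz2 pool directly on whole disks, no hardware RAID involved.
        zpool create tank raidz2 da0 da1 da2 da3 da4 da5

        # A dataset exported over NFS could hold the mail spool for the mail-server VM.
        zfs create -o mountpoint=/export/mail tank/mail

        # Verify that all members are ONLINE.
        zpool status tank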

  • Can't initialize drive in Windows 7

    - by Abe
    I have a 2.5" laptop hard drive that I plugged into my laptop through a SATA-to-USB cable. It's powered by an adapter from a normal wall plug. I want to use it as an external hard drive. It wouldn't initialize ("Device is not ready"), so, as suggested in some instructions I found, I uninstalled the drivers, unplugged it and plugged it back in. That worked, and I initialized the drive and formatted it. However, I unplugged it after that, and a few days later the same strategy does not work and I can't figure out how to use the drive. I can't initialize it and it doesn't show up in My Computer.
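
    For reference, once the drive shows up in Disk Management again, the same initialize-and-format sequence can also be done from an elevated command prompt with diskpart; a minimal sketch, assuming the external drive appears as disk 2 (check the output of "list disk" carefully first, because "clean" wipes the selected disk):

        diskpart
        DISKPART> rem identify the external drive by its size before selecting it
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> rem "clean" removes all partition information from the selected disk
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=ntfs quick
        DISKPART> assign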

  • Does WD Drive Lock encrypt the data?

    - by ssg
    I wonder whether WD Drive Lock indeed encrypts the data on a Western Digital My Book Essential device or just puts a firmware-level password on it. If it's just a password, the data could surely be retrieved by a third party. I could not find anything about that in the user manuals. I found a blog post saying "data is secured with AES256" and so on, but that doesn't say anything about whether the protection could be bypassed or not, and I don't see any delay when I add or remove the password. By contrast, when I enable BitLocker, it takes hours before everything is encrypted with my password.
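
    One hedged way to probe this empirically (the device name /dev/sdX below is a placeholder): take the bare drive out of the enclosure, attach it directly via SATA to a Linux box, and look at the raw sectors. If the USB bridge in the enclosure is doing the encryption, the on-platter data will look like noise when the bridge is bypassed, even with no password set:

        # Dump the first megabyte of the bare drive and eyeball it.
        # A readable partition table / NTFS boot sector suggests no real encryption;
        # uniform random-looking bytes suggest the bridge encrypts transparently.
        sudo dd if=/dev/sdX bs=1M count=1 | hexdump -C | less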

  • Partition table corrupted on a Windows 7 machine after conversion to simple dynamic partitions

    - by raki
    I have Windows 7 installed on my system. When I originally partitioned the disk I made three partitions and left 16 GB as unallocated space. Later I tried to create a partition from this free space using the Disk Management tool. It showed the free space as unusable, and the only option available was to make it a simple partition. Unfortunately I made it a simple partition, and all my partitions were converted to simple dynamic partitions. Now, after a reboot, the OS is not loading. I tried to reinstall the OS by formatting the C drive, but it doesn't work; I still can't load the OS properly. How can I install Windows 7 on my system without losing the data on the other two drives?
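
    One way to at least see what the conversion did, before attempting repairs (a read-only sketch from a Linux live CD; /dev/sda is an assumption): dynamic disks are normally marked with MBR partition type 0x42 instead of the usual NTFS type 0x07, and recovery tools generally need to see basic partitions again.

        # List the partition table and note the type ("Id") column.
        # Type 42 (labelled "SFS") means the partitions were converted to dynamic.
        sudo fdisk -l /dev/sda
        # Changing the type byte back to 7 is possible with fdisk's interactive
        # 't' command, but image the disk first - this is not risk-free.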

  • Why does an unpartitioned Hitachi HDS5C3020 drive start consuming 50% more power 15 minutes after boot?

    - by Pro Backup
    A Debian 6.0.6 system contains 74 2TB Toshiba DT01ABA200 drives. These drives identify themselves as Hitachi HDS5C3020BLE630 drives running firmware revision MZ4OAAB0. 64 drives are attached via HP SAS expander cards to an LSI 2008 SAS controller, another 5 drives are connected directly to the mainboard, 4 drives are connected to a Sil-based PCI controller, and the last drive is only powered and has no data cable connected. The onboard BIOS of both the LSI and the Sil card is disabled, and the mpt2sas and sata_sil modules are removed from the Debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux kernel; the mpt2sas module is loaded after boot using a modprobe command in /etc/rc.local. The 74 drives are not partitioned, not formatted and not mounted. The system consumes:
    with 0 drives: 70.6 - 70.9 W (also 15 minutes after boot);
    with 74 drives: 330 - 360 W just after boot (equivalent to 3.5 - 3.9 W per drive at idle);
    with 74 drives: 420 - 466 W, each time in the 15th minute of uptime (equivalent to 4.7 - 5.3 W per drive at idle).
    The drive specification lists 4.7 W for read/write and 3.3 W for idle power consumption. The increased power consumption is most likely on the 5V line, because after roughly one minute the over-current protection (OCP) of the power supply (PSU) shuts the power down. The PSU is a single-rail model with an OCP of 122 A on the 12 V line and 55 A on the 5 V line. Further observations: it doesn't matter whether the drives' APM value is set to disabled or to 1 (maximum power saving), and the operating system records no read/write activity in /proc/diskstats; the values there are identical (28 reads, 0 writes) to those immediately after the modprobe. I can't test what happens when staying in the mainboard's BIOS - to exclude any OS intervention - because the Super Micro X8SI6-F mainboard running firmware 06/27/12 has a bug that incorrectly reads a +74.0 C CPU sensor temperature as "High" in BIOS mode and shuts down the power after one minute. What might be causing the drive activity on all drives in the 15th minute after boot, and how can I prevent it?
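
    A hedged diagnostic sketch for narrowing down what wakes the drives around the 15th minute (device names are placeholders; blktrace needs the blktrace package and a mounted debugfs):

        # Confirm the APM setting and power state of one suspect drive.
        hdparm -B /dev/sdb              # read the current APM level
        smartctl -a /dev/sdb | less     # check load-cycle counts and power mode

        # Around minute 15, watch whether any block I/O is actually issued.
        watch -n 5 'grep " sdb " /proc/diskstats'

        # If I/O does show up, trace which process or kernel thread issues it.
        blktrace -d /dev/sdb -o - | blkparse -i -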

  • Hard drive in the freezer: has it ever worked for you?

    - by Stefan Thyberg
    Once upon a time, the little 10 GB drive in my webserver failed and of course I had no backup, teaching me to immediately set up an automatic backup job afterwards. Anyhow, this drive refused to start, and as a last-ditch effort I put it in a plastic bag and left it in the freezer overnight, since I had heard somewhere that it might work and I really didn't have any other options. The next day I took it out, immediately plugged it in outside the case and, lo and behold, the drive worked long enough for me to copy my data off it. Have you ever had a similar experience with this method?

  • Is it possible to create a 4TB bootable partition in the x86 edition of Windows Server 2003 Enterprise?

    - by Giffyguy
    I'd like to find out if there is any way to accomplish this, since it would benefit my storage server greatly. I am using a Promise FastTrak 8660 and five Seagate ST31000340NS 1TB drives in a RAID 5 array. I figure that if the x86 Enterprise edition of Server 2003 can handle 64GB of RAM, it should have no problem supporting larger HDD volumes as well. I've read (somewhere...) that the Windows Server operating systems are not limited to the standard 2TB like Windows XP and 2000 are. I'm hoping it's something that just needs to be turned on, similar to the way PAE works around the 4GB RAM limit on x86 servers.

  • Using symbolic links with git

    - by Alfredo Palhares
    I used to have all my system configuration files in one directory for easier management, but now I need to put them under version control. The problem is that git doesn't handle symbolic links that point outside of the repository, and I can't invert the roles (keeping the real files in the repository and symbolic links at their proper paths), since some of these files are read before the kernel loads. I think I could use unison to sync the files between the repo and their real paths, but that's just not practical, and hard links would probably end up broken. Any ideas?
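
    One commonly used workaround is to drop the links entirely and track the real files in place, using a bare repository whose work tree is the filesystem root; a minimal sketch (the repository path and the cfg alias are arbitrary choices, not anything git requires):

        # Create a bare repository somewhere out of the way.
        git init --bare /root/sysconfig.git

        # Convenience alias: run git against / using that repository.
        alias cfg='git --git-dir=/root/sysconfig.git --work-tree=/'

        # Only explicitly added files are tracked; hide the rest from status output.
        cfg config status.showUntrackedFiles no
        cfg add /etc/fstab
        cfg commit -m "initial system config import"

    Because the files never move, anything that is read before the kernel or early userspace comes up still lives at its real path.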

  • Create a partition table on a hardware RAID1 drive with [c]fdisk

    - by Lev Levitsky
    My question is: is there a reason for this not to work? Details: I have two 500 Gb drives, and my motherboard has RAID support, so I created a RAID1 array and booted from a Linux live medium. I then listed the disks and, apart from the obvious /dev/sda, /dev/sdb, etc., there was /dev/md126, which, I figured, was the mirrored "virtual" drive. Its size was 475 Gb; I had seen that the size of the array would be smaller than 500 Gb when I was creating it, so no surprise there. I ran cfdisk /dev/md126, created the necessary partitions and chose write. It's been about half an hour now, I think, and it doesn't seem like it's ever going to finish. The only thing about cfdisk in dmesg is that it is "blocked for more than 120 seconds". Running fdisk -l /dev/md126 in another terminal, I see all three partitions I created and a note that "Partition 1 does not start on a physical sector boundary". The table is lost after a reboot, though. I tried to partition /dev/sda individually, and it worked; the table was written in about a second. The "not on a physical sector boundary" message is there, too.
    EDIT: I tried fdisk on /dev/sda; then there were no messages about sector boundaries. After a reboot, I am able to use mkfs on /dev/md126p1, etc. fdisk shows that /dev/md126 has the same partitions as /dev/sda (but /dev/sdb doesn't have any). But at some point ("writing superblock and filesystem accounting information") mkfs also blocks. Using it on sda1 results in a "partition is used by the system" error. What can the problem be?
    EDIT 2: I booted a freshly updated system from a pendrive and was able to create the partition table and filesystems on /dev/md126 without any apparent problems. Was it an issue with the support of the hardware? My MB is an Asus P9X79.
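
    For anyone debugging a similar fakeraid setup, a hedged sketch of the usual checks from the live environment (the md126 name suggests the BIOS RAID is being assembled by mdadm):

        # Is the array assembled and in sync?
        cat /proc/mdstat
        mdadm --detail /dev/md126

        # Firmware RAID (Intel Matrix / IMSM) details and per-disk metadata.
        mdadm --detail-platform
        mdadm --examine /dev/sda /dev/sdb

        # Compare logical vs. physical sector sizes; a mismatch is what triggers the
        # "does not start on a physical sector boundary" warning.
        blockdev --getss --getpbsz /dev/md126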

  • OS X: MySQL not dealing properly with data directory via soft link

    - by GJ
    I am trying to get a MacPorts-installed MySQL to use a data directory stored inside my FileVault-protected home directory. I used
        sudo cp -a /opt/local/var/db/mysql5 ~/db/
    (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link:
        sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5
    However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log, which gets this appended to it:
        110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5
        110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead.
        110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
        110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive
        110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled.
        110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled.
        /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332
        110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13)
        110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied
        110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended
    I can't see why MySQL can't create the PID file, since manually creating it as the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5). Any ideas how to resolve this?
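
    A hedged guess worth checking first: Errcode 13 here is often not about ~/db/mysql5 itself but about the _mysql user lacking execute (traverse) permission on one of the parent directories, typically the home directory. A minimal sketch of verifying and loosening that (paths and username as in the question; the chmod choice is illustrative):

        # Can _mysql even reach the data directory through $HOME?
        sudo -u _mysql ls ~/db/mysql5

        # If that fails, grant traverse permission on the parent directories
        # (o+x lets a process pass through a directory without listing it).
        chmod o+x ~ ~/db

        # The copied data directory should be owned by _mysql.
        sudo chown -R _mysql ~/db/mysql5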

  • Issues with non-HP hard disks in an HP ML 350 server?

    - by Torben Warberg Rohde
    I'm looking to buy some extra disk space for an HP ML 350 G5 server. It is for simple file serving, not OS/system stuff. HP hard disks are insanely expensive, so I'm tempted to buy some other brand instead. I have heard that they sometimes use special firmware on their disks, but I suspect that might just be HP spreading rumors to sell disks. Does anyone have experience using non-HP disks? Any features not working, or not being able to build the RAID at all? I'm looking at 2.5" SAS Seagate drives: a Constellation 500 GB (7.2k) or a Savvio 600 GB (10k).

  • RAID 0 data recovery?

    - by Fred
    Hi all, I have two identical Seagate 7200.9 500GB drives configured as a RAID 0 spanned disk in Windows. One of the drives has lost power and won't spin up at all. I know this normally means death for the data on both drives, but I have a cunning plan:
    DISK 1 - no power, RAID 0 disk
    DISK 2 - fully functional RAID 0 disk
    DISK 3 - fully functional spare disk
    Copy the working drive's (disk 2) data to a third 500GB disk (disk 3), remove the logic board from the working disk (disk 2) and swap it onto the broken drive (disk 1), then hopefully recreate the RAID 0 with disk 1 and disk 3 just long enough to get the data off it. Hope this makes sense; here are my questions:
    1. Windows Disk Management currently recognises disk 2 but won't let me access it in any way, so copying the data off it (or taking a disk image) can't be done in Windows. Does anyone know of any software (in Linux or self-booting) that would allow me to access this disk?
    2. Does anyone know of any software that will recreate the spanned drive from two disk images?
    3. Am I missing any key information that means I definitely shouldn't even bother starting this? I know it's a long shot anyway, but it's worth a try unless I definitely can't do it.
    The irritating thing is that I am sure it's a logic board failure on disk 1, as it simply won't power up at all - suddenly no signs of life - so I am sure the data is intact! Any help would be really appreciated! Thanks
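
    On the imaging part of the question, a minimal sketch of taking a raw image of the readable member from a Linux live environment (device and file names are assumptions; always read from the failing setup, never write to it):

        # Identify the surviving RAID member first.
        lsblk -o NAME,SIZE,MODEL,SERIAL

        # Preferred: GNU ddrescue keeps a map file, so the copy can be resumed
        # and problem areas retried later.
        ddrescue /dev/sdb /mnt/storage/disk2.img /mnt/storage/disk2.map

        # Plain dd also works if the drive reads cleanly.
        dd if=/dev/sdb of=/mnt/storage/disk2.img bs=1M conv=noerror,sync status=progress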

  • ospfd over an OpenVPN link - strange error in logs

    - by Alex
    I am trying to set up Quagga ospfd on two hosts connected by an OpenVPN link. These hosts have VPN IPs 10.31.0.1 and 10.31.0.13. The ospfd config is pretty simple:
        hostname bizon
        password xxxxxxxxx
        enable password xxxxxxxxx
        !
        log file /var/log/quagga/ospfd.log
        !
        interface lo
        !
        interface tun0
         ip ospf network point-to-point
         ip ospf mtu-ignore
         ip ospf cost 10
        interface tun1
         ip ospf network point-to-point
         ip ospf mtu-ignore
         ip ospf cost 10
        interface tun2
         ip ospf network point-to-point
         ip ospf mtu-ignore
         ip ospf cost 10
        !
        router ospf
         ospf router-id 10.31.0.1
         network 10.31.0.0/16 area 0.0.0.0
         network 10.119.2.0/24 area 0.0.0.0
         redistribute connected
         area 0.0.0.0 range 10.0.0.0/8
        !
        line vty
        !
        debug ospf event
        debug ospf packet all
    I am getting the following error in ospfd.log (the log is from 10.31.0.13):
        2012/10/05 01:25:28 OSPF: ip_v 4
        2012/10/05 01:25:28 OSPF: ip_hl 5
        2012/10/05 01:25:28 OSPF: ip_tos 192
        2012/10/05 01:25:28 OSPF: ip_len 64
        2012/10/05 01:25:28 OSPF: ip_id 64666
        2012/10/05 01:25:28 OSPF: ip_off 0
        2012/10/05 01:25:28 OSPF: ip_ttl 1
        2012/10/05 01:25:28 OSPF: ip_p 89
        2012/10/05 01:25:28 OSPF: ip_sum 0xe5d1
        2012/10/05 01:25:28 OSPF: ip_src 10.31.0.1
        2012/10/05 01:25:28 OSPF: ip_dst 224.0.0.5
        2012/10/05 01:25:28 OSPF: Packet from [10.31.0.1] received on link tun1 but no ospf_interface
    I'm not sure what to do next. I have set up ospfd over OpenVPN several times before, but I used Debian then and I am on CentOS 6 now. The Quagga version is 0.99.15. Should I try to get a more recent version?
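
    A hedged first diagnostic step: ask ospfd which interfaces it actually attached to, since the "no ospf_interface" message generally means the tun device the packet arrived on is not one that ospfd bound an address/network to (interface names as in the question; vtysh ships with the quagga packages):

        # Which interfaces does ospfd think it is running OSPF on, and with which addresses?
        vtysh -c 'show ip ospf interface'
        vtysh -c 'show ip ospf interface tun1'

        # Compare with the addresses actually configured on the tunnels.
        ip addr show tun0; ip addr show tun1; ip addr show tun2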

  • If I partition a drive connected via eSATA, will it show different partitions when connected via USB?

    - by jeffreypriebe
    I have an odd problem with an external drive. I'm formatting it connected to my laptop prior to connecting it to my router. The HDD enclosure has both eSATA and USB connections; generally, I connect it to my laptop via eSATA. I created my partitions and connected the drive to the router, but I saw partition information different from what I had created. After chasing leads concerning large HDD sizes, I mindlessly connected the HDD to my laptop with USB. Lo! I see the same partitions as the router. Here is what the same partitioning program reports for the same HDD; the only difference is the connection. For the first reading I connected via eSATA and hit "refresh" in the partition program; then I turned off the HDD, disconnected the eSATA cable, connected via USB, powered on and refreshed again.
    eSATA: reports a total HDD size of 2328 GB, with four partitions (the third being 1.96 TB)
    USB: reports a total HDD size of 280 GB, with three partitions (the third being 279 GB)
    Any idea why this is happening? It looks like it is an issue of the 4K sector size not playing nicely with the USB enclosure. I tried eSATA and USB in both Windows and Linux, and it consistently appears that eSATA reports correctly and USB incorrectly.
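
    One hedged way to confirm the sector-size theory from the Linux side (the device name is a placeholder): compare the logical and physical sector sizes the drive reports over each connection. Many USB enclosures for large drives present 4096-byte logical sectors, so a partition table written over eSATA (512-byte sectors) reads completely differently over USB:

        # Run once while connected via eSATA, once via USB, and compare the numbers.
        sudo blockdev --getss --getpbsz --getsize64 /dev/sdb
        sudo parted /dev/sdb unit s print    # sector size plus partition layout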

  • What's the point of 6.0 Gb/s SATA hard drives?

    - by earlz
    So I've recently been seeing SATA 6.0 Gb/s ports on the higher-grade motherboards. That's all fine and dandy - extra room for expansion. Now, my question is: why are people already selling hard drives with SATA 6.0 Gb/s ports when it is well known that hard drives don't even saturate 3.0 Gb/s (not even server-grade ones)? What is the point of this?
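
    For a concrete feel for the gap, a rough way to compare a drive's sustained read rate with the interface limit (6 Gb/s works out to roughly 600 MB/s of payload after 8b/10b encoding, 3 Gb/s to roughly 300 MB/s):

        # Rough sequential-read benchmark of the drive itself.
        sudo hdparm -tT /dev/sda

        # A typical 7200 RPM disk reports on the order of 100-150 MB/s of buffered
        # disk reads here, well below what either interface generation can carry.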

  • Spanned volumes on new install

    - by Noio
    My Windows 7 Release Candidate is about to expire, so I'm going to do a clean install of a retail version. I have two volumes, on four physical drives, as follows:
    Disk 0: Spanned Volume (D:)
    Disk 1: Primary Partition, Boot/Windows Install (C:)
    Disk 2: Spanned Volume (D:)
    Disk 3: Spanned Volume (D:)
    If I install Windows to a formatted drive 1, will it still recognize the spanned volume in Disks 0, 2, and 3? The spanned volume is not redundant in any way, so the volume is 1.5TB consisting of three 500GB disks. I don't have the space to do an external backup, and I thought it was impossible to convert a spanned volume back to a basic volume.

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR: If Intel RST has marked a drive other than my RAID set as the system drive, how can I get the RAID set seen as the system drive again, and catch it up to the drive I'm running from now?
    What happened (note: some perhaps unwise decisions are ahead; this is the order of things as best as I can recall it):
    1. I had a 2x1TB RAID1 configuration. I bought the drives around the same time, and they started to die around the same time.
    2. I replaced the first drive with a 2TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the second drive with a manufacturer's replacement.
    3. I got a second manufacturer's replacement drive and used it as a spare, so I now had a 1TB/2TB RAID1 and another 1TB as a spare.
    4. The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stuff, I bought another 2TB drive and upped the configuration to a 2x2TB RAID1, with the other, functioning manufacturer's drive as a spare.
    5. I made the mistake of trying to bring the other drive online to clean it out, and the signature clash killed my machine. When the machine rebooted, that drive was marked as the system drive.
    So I now have a 2x2TB RAID1 that is apparently offline, and everything is being run from the one spare 1TB refurbished drive. Not a great idea.
    Options I'm considering:
    1. Bring the 2x2TB array back online, and then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!). Anything that isn't is likely trivial enough that I'm willing to take the risk. One downside here is that if the 2TB array doesn't have data on it for some reason, I could be screwed trying to put the other drive back in, no?
    2. Try to somehow get the RAID1 updated with the data from the current system drive.
    3. Option 3?

  • Resize a RAID 1 volume on OS X Snow Leopard - how? (Note: software RAID)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OS X-related topics, I often don't find any deep-dive technical explanations detailed enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro running OS X 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable), and when it synced I removed the 640GB disk and replaced it with a second 1 TB disk, then synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror. Actually, no - there's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:
        -> diskutil list
        /dev/disk0
           #:                     TYPE NAME          SIZE       IDENTIFIER
           0:    GUID_partition_scheme              *1.0 TB     disk0
           1:                      EFI               209.7 MB   disk0s1
           2:               Apple_RAID               999.9 GB   disk0s2
           3:               Apple_Boot Boot OSX      134.2 MB   disk0s3
        /dev/disk1
           #:                     TYPE NAME          SIZE       IDENTIFIER
           0:    GUID_partition_scheme              *1.0 TB     disk1
           1:                      EFI               209.7 MB   disk1s1
           2:               Apple_RAID               999.9 GB   disk1s2
           3:               Apple_Boot Boot OSX      134.2 MB   disk1s3
        /dev/disk2
           #:                     TYPE NAME          SIZE       IDENTIFIER
           0:    GUID_partition_scheme              *640.1 GB   disk2
           1:                      EFI               209.7 MB   disk2s1
           2:                Apple_HFS Mac Disk 2    536.7 GB   disk2s2
           3:     Microsoft Basic Data BOOTCAMP      103.1 GB   disk2s3
        /dev/disk3
           #:                     TYPE NAME          SIZE       IDENTIFIER
           0:                Apple_HFS Mirror1      *639.8 GB   disk3

        -> diskutil appleraid list
        AppleRAID sets (1 found)
        ===============================================================================
        Name:            Macintosh HD
        Unique ID:       1953F864-B474-4EB6-8E69-41834EBD0247
        Type:            Mirror
        Status:          Online
        Size:            639.8 GB (639791038464 Bytes)
        Rebuild:         manual
        Device Node:     disk3
        -------------------------------------------------------------------------------
        #  Device Node   UUID                                   Status
        -------------------------------------------------------------------------------
        0  disk1s2       25109BAE-5697-40EA-B612-0217851444F7   Online
        1  disk0s2       11B83AB0-8148-4DB6-8761-DEF08C855F8D   Online
        ===============================================================================
    Thanks in advance.

  • Western Digital Caviar Black: EXT4-fs error

    - by azat
    I recently upgraded the HDD in my desktop machine and bought a WD Caviar Black. But after I formatted it, copied my data over to it (using dd) and fixed the partition sizes, I get the following errors in kern.log:
        Aug 27 16:04:35 home-spb kernel: [148265.326264] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9054, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:07:11 home-spb kernel: [148421.493483] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9045, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:17 home-spb kernel: [148546.481693] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10299, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:17 home-spb kernel: [148546.487147] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.258711] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4345, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.277591] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.278202] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4344, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.284760] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.291983] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9051, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.297495] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.297916] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9050, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.297940] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.303213] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4425, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.312127] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.312487] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4424, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.317858] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.322231] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4336, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.326250] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.326599] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4335, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.332397] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.341957] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 5764, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.350709] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.351127] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 5763, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.355916] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:43 home-spb kernel: [148572.401055] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10063, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:43 home-spb kernel: [148572.404357] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:43 home-spb kernel: [148572.414699] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10073, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:43 home-spb kernel: [148572.420411] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:43 home-spb kernel: [148572.493933] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9059, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:43 home-spb kernel: [148572.493956] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    At one point the machine rebooted (not manually); when I turned it on, it ran fsck on /dev/sdc2, fixed some errors, and some files are now missing on /dev/sdc2. I've checked /dev/sdc2 for bad blocks (using e2fsck -c /dev/sdc2) and it doesn't have any. Here is the output of fsck: http://pastebin.com/D5LmLVBY
    What else can I do to understand what's wrong here? BTW, for /dev/sdc1 there are no messages like that in kern.log. Linux version: 3.3.0. Distribution: Debian wheezy.
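
    A hedged sketch of further checks that might narrow this down (read-only where possible; device names as in the question):

        # Drive health: reallocated or pending sectors, UDMA CRC errors (cabling).
        smartctl -a /dev/sdc

        # Run a long surface self-test and read the result afterwards.
        smartctl -t long /dev/sdc
        smartctl -l selftest /dev/sdc

        # Forced read-only filesystem check: reports problems without touching the disk.
        e2fsck -fn /dev/sdc2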
