Search Results

Search found 10693 results on 428 pages for 'raw disk'.


  • Routing and Remote Access Service won't start after the disk filled up

    - by NKCSS
    The HDD of the server ran out of disk space, and after a reboot RRAS won't start anymore on my 2008 R2 server. Error details:

    Log Name: System
    Source: RemoteAccess
    Date: 2/5/2012 9:39:52 PM
    Event ID: 20153
    Task Category: None
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: Windows14111.<snip>
    Description: The currently configured accounting provider failed to load and initialize successfully. The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error.

    Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="RemoteAccess" /> <EventID Qualifiers="0">20153</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2012-02-05T20:39:52.000Z" /> <EventRecordID>12148869</EventRecordID> <Channel>System</Channel> <Computer>Windows14111.<snip></Computer> <Security /> </System> <EventData> <Data>The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error.</Data> <Binary>2C030000</Binary> </EventData> </Event>

    I think it has something to do with a corrupt config file, but I am unsure of what to do. I removed the RRAS role, rebooted, and re-added it, but it keeps failing with the same error. Thanks in advance. [UPDATE] If I set the accounting provider from 'Windows' to '' the service starts, but VPN won't work. Any ideas how this can be repaired?
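
    A sketch of one diagnostic avenue (the netsh ras aaaa subcommands here are from memory, so treat them as an assumption and verify with netsh ras aaaa /? first):

      rem Back up the RRAS configuration before touching anything
      reg export "HKLM\SYSTEM\CurrentControlSet\Services\RemoteAccess" rras-backup.reg

      rem Inspect the configured accounting/authentication providers
      netsh ras aaaa show accounting
      netsh ras aaaa show authentication

    Removing the role does not always clean the RemoteAccess service's registry keys, so stale provider configuration may survive a reinstall; comparing the exported keys against a healthy server (if one is available) can pinpoint the corrupt value.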


  • Server with 3 disks: what's the best HD configuration?

    - by aleroot
    I have an HP server with a quad-core Opteron and three 250 GB SATA disks, and I'm thinking about the best disk configuration for performance and reliability. There are mainly two scenarios:
    - RAID 5 with all three disks (on the array, a 100 GB partition for the OS, the remaining space as a data partition)
    - RAID 1 + 1 disk for the OS (single-disk OS installation, RAID 1 array for a data partition)
    Which is the best configuration? The server runs MySQL and a small document file server; the OS to be installed is Windows Server 2008.


  • Windows 8 disk errors

    - by wrongusername
    So yesterday, I forcibly restarted my Windows 8 PC. VMWare Workstation was having some trouble with the guest Linux Mint OS. It hadn't been responding for some time; I had tried suspending it on September 28th or perhaps even before. It wouldn't suspend -- I forgot what the window looked like, but all options in the power menu were disabled (i.e. "Shutdown," "Power Off," and options like that were all disabled). I eventually killed the VMWare application through Task Manager, though I was too lazy to hunt down the running virtual machine itself, and decided to kill it by just shutting down my PC entirely. The PC wouldn't shut down for quite some time after the monitor went blank, so I did a cold reset by holding the power button. I then powered it on again and Windows briefly gave me some message like "Search for KERNEL_STACK_INPAGE_ERROR." Windows then started diagnosing some problems and gave me the message, "Repairing disk errors. This might take over an hour to complete." That was yesterday night, and I went to sleep without waiting for it to finish. This morning, it said that the repair failed, and that the log was at C:\windows\system32\LogFiles\srt\srtTrail.txt (as I remember it -- I don't have the exact path I wrote down with me right now). It gave me some other options to troubleshoot, such as resetting Windows (files and settings still intact, but programs not installed through the app store will be erased). That didn't work (no error message given, I was just told it didn't work). When I tried rebooting in safe mode, the same diagnosis process began, except that this time it didn't bother with the automatic repairs again. So I used the command prompt to see if my files are at least still there. I was on the X drive, and I couldn't cd to the C drive. I couldn't find my folder under Users (of course?), and couldn't find the srt folder under LogFiles either. I am not sure what to try next. I have backed up everything, but to the cloud, so if absolutely necessary I can start off with a fresh copy of Windows and restore all my data, though it would be a hassle. Any thoughts on what might be wrong or what I can try? My computer was purchased just this June, so the hard drive should still be pretty new.
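
    One hedged first step from the recovery environment's command prompt (the Windows volume may be mapped to a letter other than C: there, so check with dir first):

      chkdsk C: /f /r

    /f fixes filesystem errors, and /r additionally scans for bad sectors and recovers readable data; on a large drive /r can take hours. If chkdsk keeps failing, checking the drive's SMART status with the vendor's bootable diagnostic is worth doing before trusting the disk again.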


  • DOSBox: booting from an HDD image (FreeDOS image created with qemu-img)

    - by TechZilla
    I'm having trouble booting an HDD image with DOSBox; I've only gotten either read errors or boot failures. The HDD image is a verified working FreeDOS installation, created with qemu-img. The image has been formatted FAT32, and it works as expected with QEMU. The image is only 1 GB in size and is a flat raw image. I have been able to mount it with Linux for ease of file transfer, and I was even able to boot it with DOSEMU after mounting the image under Linux. I would love to somehow just boot from the raw image file, but I would have no problem booting from a mount. I just can't get anything to happen, and I have read the documentation over. I have verified DOSBox is working as expected with its built-in DOS-like environment. I would appreciate any help, as I don't have much DOSBox experience.
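
    For reference, the usual recipe inside DOSBox is to mount the image with explicit geometry and then boot from it. This is a sketch, and the -size values are an assumption for a 1 GB qemu-img disk (512 bytes/sector, 63 sectors, 16 heads; cylinders = image size / (512*63*16), roughly 2080 here):

      imgmount 2 freedos.img -size 512,63,16,2080 -t hdd -fs none
      boot -l c

    Note that stock DOSBox 0.74 is generally reported not to understand FAT32 volumes, which could explain the read errors; trying a FAT16-formatted FreeDOS image is a worthwhile experiment.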


  • Is it possible to change the RAID 5 chunk size of an existing device?

    - by AlexCombas
    I have an existing RAID 5 device which I created using mdadm on Linux. When I created the device I set the chunk size to 64, and I would now like to test the performance of various sizes, but I don't want to have to rebuild my entire system to do so. If it is not possible to do it live, is it possible to do this by booting with a rescue disk? Any advice on the steps to do this, either live or with a rescue disk, will be greatly appreciated.
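
    Recent mdadm can reshape the chunk size of a live array. A sketch, assuming the array is /dev/md0 and you have mdadm 3.1 or later on a 2.6.31+ kernel (back up first; a chunk-size reshape rewrites every stripe and can take many hours):

      mdadm --grow /dev/md0 --chunk=128 --backup-file=/root/md0-reshape.bak
      cat /proc/mdstat    # watch the reshape progress

    The array stays usable during the reshape, so a rescue disk should only be needed if your installed mdadm is too old to support --grow --chunk.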


  • TFS Disk Structure - and "Add new folder" vs "Add solution"

    - by NealWalters
    Our organization recently got TFS 2008 set up and ready for our use. I have a practice team project available to play with. To simplify slightly, we previously organized our code on disk like this:

    - EC
      - Main
        - Database
          - someScript1.sql
          - someScript2.sql
        - Documents
          - ReleaseNotes_V1.doc
        - Source
          - Common
            - Company.EC.Common.Biztalk.Artifacts [folder]
            - Company.EC.Common.BizTalk.Components [folder]
            - Company.EC.Common.Biztalk.Deployment [folder]
            - Company.EC.BookTransfer.BizTalk.sln
          - BookTransfer
            - Company.EC.BookTransfer.BizTalk.Artifacts [folder]
            - Company.EC.BookTransfer.BizTalk.Components [folder]
            - Company.EC.BookTransfer.BizTalk.Components.UnitTest [folder]
            - Company.EC.BookTransfer.BizTalk.Deployment [folder]
            - Company.EC.BookTransfer.BizTalk.sln

    I'm trying to decide: do I want to check in the entire C:\EC directory, or do I want to open each solution and check it in? What are the pros and cons of each? It seems like by using the "Add Files/Folder" option, I could check in everything at once and it would match the disk structure. It also looks like checking in each solution separately creates another working folder in my workspace. I think if I check in by "Add Files/Folder", I will have one workspace, and that would be better. But most of the books and samples I see talk about checking in projects and solutions; see the sketch below. P.S. I know I need to add more to my disk structure in accordance with the branch/merge guidelines, but that is not the question I'm asking here. Thanks, Neal Walters
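
    For what it's worth, a single mapping at the root gives the one-workspace layout; a sketch with placeholder project and workspace names (tf.exe ships with Team Explorer):

      tf workfold /map "$/PracticeProject/EC" C:\EC /workspace:MyWorkspace

    With that mapping in place, "Add Files/Folder" on C:\EC checks in the whole tree under one working folder, and solutions opened later reuse the same mapping instead of creating new ones.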


  • Script to copy files that are on CD but not on the hard disk to a new directory

    - by John22
    I need to copy files from a set of CDs that have a lot of duplicate content, both with each other and with what's already on my hard disk. The file names of identical files are not the same, and they are in sub-directories with different names. I want to copy the non-duplicate files from the CDs into a new directory on the hard disk. I don't care about the sub-directories - I will sort that out later - I just want the unique files. I can't find software to do that - see my post at SuperUser: http://superuser.com/questions/129944/software-to-copy-non-duplicate-files-from-cd-dvd. Someone at SuperUser suggested I write a script using GNU's "find" and the Win32 version of some checksum tools. I glanced at that, but have not done anything like it before, so I'm hoping something exists that I can modify. I found a good program to delete duplicates, Duplicate Cleaner (it compares checksums), but it won't help me here, as I'd have to copy all the CDs to disk first, and each is probably about 80% duplicates, and I don't have room to do that - I'd have to cycle through a few at a time, copying everything, then turning around and deleting 80% of it, working the hard drive a lot. Thanks for any help.
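
    In the absence of a ready-made tool, here is a sketch of the suggested find + checksum approach (GNU tools, e.g. from Cygwin; all paths are placeholders):

      # 1) index the checksums of everything already on the hard disk (run once)
      find /cygdrive/c/Archive -type f -exec md5sum {} + | awk '{print $1}' | sort -u > known.md5

      # 2) copy each CD file whose checksum is not yet known (run once per CD)
      find /cygdrive/d -type f | while read -r f; do
        sum=$(md5sum "$f" | awk '{print $1}')
        if ! grep -q "$sum" known.md5; then
          cp "$f" /cygdrive/c/NewFiles/       # flat copy; rename on name collisions if needed
          echo "$sum" >> known.md5            # also skips duplicates within the CD set
        fi
      done

    Only unique content accumulates in NewFiles, and nothing is ever copied twice, so the hard drive does the minimum possible work.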


  • How to load/save C++ class instance (using STL containers) to disk

    - by supert
    I have a C++ class representing a hierarchically organised data tree which is very large (~GB, basically as large as I can get away with in memory). It uses an STL list to store the nodes, and each node holds iterators to other nodes. Each node has only one parent, but 0-10 children. Abstracted, it looks something like:

      struct node;  // forward declaration
      typedef list<node>::iterator node_list_iterator;

      struct node {
      public:
          node_list_iterator parent;              // iterator to the single parent node
          double node_data_array[X];
          map<int, node_list_iterator> children;  // iterators to child nodes, keyed by int
      };

      class strategy {
      private:
          list<node> tree;        // hierarchically linked list of nodes
          struct some_other_data;
      public:
          void build();   // build the tree
          void save();    // save the tree to disk
          void load();    // load the tree from disk
          void use();     // use the tree
      };

    I would like to implement load() and save(), and they should be fairly fast, however the obvious problems are: I don't know the size in advance; the data contains iterators, which are volatile; my ignorance of C++ is prodigious. Could anyone suggest a pure C++ solution please?
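
    The standard trick is to serialize positions, not iterators: number the nodes 0..n-1 in list order, write each node's links as indices, and rebuild the iterators on load. A sketch of save()/load() under that scheme (not the poster's code; raw binary format, so not portable across architectures, and error handling is omitted):

      #include <fstream>
      #include <list>
      #include <map>
      #include <vector>
      using namespace std;

      void strategy::save() {
          map<const node*, size_t> index;              // node address -> position
          size_t i = 0;
          for (list<node>::const_iterator it = tree.begin(); it != tree.end(); ++it)
              index[&*it] = i++;
          ofstream out("tree.bin", ios::binary);
          size_t n = tree.size();
          out.write((const char*)&n, sizeof n);
          for (list<node>::const_iterator it = tree.begin(); it != tree.end(); ++it) {
              size_t parent = index[&*it->parent];     // assumes the root's parent points at itself
              out.write((const char*)&parent, sizeof parent);
              out.write((const char*)it->node_data_array, sizeof it->node_data_array);
              size_t nc = it->children.size();
              out.write((const char*)&nc, sizeof nc);
              for (map<int, node_list_iterator>::const_iterator c = it->children.begin();
                   c != it->children.end(); ++c) {
                  size_t child = index[&*c->second];
                  out.write((const char*)&c->first, sizeof(int));
                  out.write((const char*)&child, sizeof child);
              }
          }
      }

      void strategy::load() {
          ifstream in("tree.bin", ios::binary);
          size_t n = 0;
          in.read((char*)&n, sizeof n);
          tree.resize(n);                              // n default nodes, fixed up below
          vector<node_list_iterator> at;               // position -> iterator
          at.reserve(n);
          for (node_list_iterator it = tree.begin(); it != tree.end(); ++it)
              at.push_back(it);
          for (node_list_iterator it = tree.begin(); it != tree.end(); ++it) {
              size_t parent, nc;
              in.read((char*)&parent, sizeof parent);
              it->parent = at[parent];
              in.read((char*)it->node_data_array, sizeof it->node_data_array);
              in.read((char*)&nc, sizeof nc);
              for (size_t k = 0; k < nc; ++k) {
                  int key; size_t child;
                  in.read((char*)&key, sizeof key);
                  in.read((char*)&child, sizeof child);
                  it->children[key] = at[child];
              }
          }
      }

    The address map costs O(n log n) to build, but the file itself is written and read sequentially, so the run should stay close to I/O bound.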


  • TCP checksum and TCP offloading

    - by scatman
    I am using raw sockets to craft my own packets, and I need to set the TCP checksum. I have tried a lot of references, but none are working (I am using Wireshark for testing). Could you help me please? By the way, I read somewhere that if you set tcp_checksum=0, the hardware will calculate the checksum automatically for you. Is this true? I tried it, but in Wireshark the TCP checksum shows a value of 0x0000 and says TCP offload. I also read about TCP checksum offloading and didn't understand it: is it only that Wireshark cannot check an offloaded TCP checksum, but there is a correct one on the wire?
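
    For reference, the TCP checksum is the standard Internet checksum (RFC 1071) computed over a pseudo-header (source/destination IP, protocol, TCP length) followed by the TCP header and payload, with the checksum field zeroed first. A sketch, assuming the caller lays pseudo-header + segment out contiguously in network byte order:

      #include <stdint.h>
      #include <stddef.h>

      struct pseudo_header {        /* prepended in front of the TCP segment */
          uint32_t src_addr;        /* from the IP header */
          uint32_t dst_addr;
          uint8_t  zero;
          uint8_t  protocol;        /* IPPROTO_TCP == 6 */
          uint16_t tcp_length;      /* TCP header + payload, network order */
      };

      uint16_t tcp_checksum(const uint16_t *buf, size_t len)
      {
          uint32_t sum = 0;
          while (len > 1) {         /* sum 16-bit words */
              sum += *buf++;
              len -= 2;
          }
          if (len == 1)             /* odd trailing byte, implicitly zero-padded */
              sum += *(const uint8_t *)buf;
          while (sum >> 16)         /* fold carries back into 16 bits */
              sum = (sum & 0xffff) + (sum >> 16);
          return (uint16_t)~sum;    /* one's complement of the sum */
      }

    On the offload question: with checksum offload, the kernel hands the packet to the NIC with the field unfilled, and Wireshark captures it before the NIC fills it in, which is why transmitted packets show 0x0000 even though the wire carries a correct checksum. That only applies to the kernel's own TCP stack, though; for hand-crafted raw-socket packets you generally must compute the checksum yourself.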


  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (but the same happened on 11.10 before upgrading). WD MyBook, 2TB, no RAID (or RAID0, not completely sure; in any case no mirroring, both 1TB disks are in use, mounted as a single device). Formatted to XFS, normally used for big movie files. Connected via FireWire 800. At some point the LED started going up and down as when constantly reading/writing, and the device gives an access error. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected, it becomes available again. xfs_check reports no problems. xfs_repair did something, but it looks like it didn't fix any error. Then, after a massive read (checking a 1.5GB torrent file for integrity), it becomes unavailable again. Any ideas what's wrong? Drives? Cables? Motherboard? OS? UPD: not sure how relevant this is, but here is the dmesg output:

    [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
    [14380.633356] SGI XFS Quota Management subsystem
    [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
    [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
    [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
    [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries
    [14453.904818] scsi6 : SBP-2 IEEE-1394
    [14453.905014] scsi7 : SBP-2 IEEE-1394
    [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries)
    [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4
    [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0
    [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries)
    [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off
    [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00
    [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4
    [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13
    [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable
    [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through
    [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable
    [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through
    [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4
    [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable
    [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through
    [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk
    [14454.272149] ses 7:0:0:1: Attached Enclosure device
    [14606.090024] XFS (sdc3): Mounting Filesystem
    [14612.048343] XFS (sdc3): Starting recovery (logdev: internal)
    [14620.697636] XFS (sdc3): Ending recovery (logdev: internal)
    [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
    [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
    [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled
    [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled
    [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled
    [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled
    [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled
    [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled
    [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff])
    [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
    [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
    [14752.680170] ehci_hcd 0000:00:1a.7: PME# disabled
    [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18
    [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64
    [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64
    [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
    [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64
    [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22
    [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64
    [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
    [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64


  • How to implement bridging/NAT on Linux? [closed]

    - by mikepurvis
    What I have is a network topology which looks like this: ------ PC --- IP Camera The PC has two ethernet interfaces, and is hosting a small webserver providing some auxiliary data. The issue is that the server on the PC runs on port 80, and the IP Camera is also running on port 80. Currently, we are bridging them, so that the PC's server is accessible at 192.168.0.2 and the camera at 192.168.0.3. However, what I'm trying to explore is the feasibility of using the PC to expose them both on the PC's IP, ideally both on port 80. Can this be done with regular sockets, or will it be necessary to use raw sockets?
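
    Two services can't both bind port 80 on the same IP, so with plain sockets the realistic options are a different port or an HTTP-aware proxy. A hedged sketch of the port-forwarding route (interface names and addresses are assumptions for this topology):

      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.0.3:80
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    The PC's own server keeps port 80, the camera appears at the PC's IP on port 8080, and no raw sockets are needed. Serving both literally on port 80 of one IP would instead require a reverse proxy (e.g. nginx routing by path or hostname).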


  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and I are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new and exciting technology; on top of that, I'll be talking about SQL Server disk IO - well, "disk" might not be relevant anymore, because I'll be talking about SSDs and FusionIO - basically, I'll be talking about the underpinnings: making sure you understand them and get them right, how to monitor, etc. If you've any specific problems or questions, just ping me an email: [email protected]. To register for the event, see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 SQL Server and Disk IO
    Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter)
    In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement like FusionIO. He will look at the effect fragmentation has and how to minimise its impact, and at the file structure of a database, covering the benefits that multiple files and file groups bring. We will also touch on database mirroring and its effect on throughput, and how to get a feeling for the throughput you should expect.

    19:15 Break

    19:45 Complex Event Processing (CEP)
    Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis)
    StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful. I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.


  • No Rush, Defragging that Drive can Wait [Humorous Image]

    - by Asian Angel
    That drive is only fragmented a little bit…nothing to worry about there. "You should defragment this volume." Ya think?! [via Fail Desk]


  • How can I prevent [flush-8:16] and [jbd2/sdb2-8] from causing GUI unresponsiveness?

    - by ændrük
    Approximately twice a week, the entire graphical interface will lock up for about 10-20 seconds without warning while I am doing simple tasks such as browsing the web or writing a paper. When this happens, GUI elements do not respond to mouse or keyboard input, and the System Monitor applet displays 100% IOWait processor usage. Today, I finally happened to have GNOME Terminal already open when the problem started. Despite other applications such as Google Chrome, Firefox, GNOME Do, and GNOME Panel being unresponsive, the terminal was usable. I ran iotop and observed that commands named [flush-8:16] and [jbd2/sdb2-8] were alternately using 99.99% IO. What are these, and how can I prevent them from causing GUI unresponsiveness? Details $ mount | grep ^/dev /dev/sda1 on / type ext4 (rw,noatime,discard,errors=remount-ro,commit=0) /dev/sdb2 on /home type ext4 (rw,commit=0) /dev/sda is an OCZ-VERTEX2 and /dev/sdb is a WD10EARS. Here is dumpe2fs /dev/sdb2, if it's relevant.
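
    For context, flush-8:16 is the kernel writeback thread for block device 8:16 (/dev/sdb), and jbd2/sdb2-8 is the ext4 journal thread for /dev/sdb2, so the stalls are bursts of writeback to /home. A hedged mitigation is to make writeback start earlier and in smaller batches (these values are illustrative, not tuned):

      sudo sysctl -w vm.dirty_background_ratio=5
      sudo sysctl -w vm.dirty_ratio=10

    Add the same keys to /etc/sysctl.conf if they help. The WD10EARS is also a 4096-byte-sector drive, and a misaligned sdb2 partition makes every writeback burst far more expensive; sudo parted /dev/sdb align-check optimal 2 is worth a look.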


  • How do I free up more space in /boot?

    - by user6722
    My /boot partition is nearly full, and I get a warning every time I reboot my system. I already deleted old kernel packages (linux-headers...); actually, I did that to install a newer kernel version that came with the automatic updates. After installing that new version, the partition is nearly full again. So what else can I delete? Are there other files associated with the old kernel images? Here is a list of the files on my /boot partition:

    :~$ ls /boot/
    abi-2.6.31-21-generic lost+found abi-2.6.32-25-generic memtest86+.bin abi-2.6.38-10-generic memtest86+_multiboot.bin abi-2.6.38-11-generic System.map-2.6.31-21-generic abi-2.6.38-12-generic System.map-2.6.32-25-generic abi-2.6.38-8-generic System.map-2.6.38-10-generic abi-3.0.0-12-generic System.map-2.6.38-11-generic abi-3.0.0-13-generic System.map-2.6.38-12-generic abi-3.0.0-14-generic System.map-2.6.38-8-generic boot System.map-3.0.0-12-generic config-2.6.31-21-generic System.map-3.0.0-13-generic config-2.6.32-25-generic System.map-3.0.0-14-generic config-2.6.38-10-generic vmcoreinfo-2.6.31-21-generic config-2.6.38-11-generic vmcoreinfo-2.6.32-25-generic config-2.6.38-12-generic vmcoreinfo-2.6.38-10-generic config-2.6.38-8-generic vmcoreinfo-2.6.38-11-generic config-3.0.0-12-generic vmcoreinfo-2.6.38-12-generic config-3.0.0-13-generic vmcoreinfo-2.6.38-8-generic config-3.0.0-14-generic vmcoreinfo-3.0.0-12-generic extlinux vmcoreinfo-3.0.0-13-generic grub vmcoreinfo-3.0.0-14-generic initrd.img-2.6.31-21-generic vmlinuz-2.6.31-21-generic initrd.img-2.6.32-25-generic vmlinuz-2.6.32-25-generic initrd.img-2.6.38-10-generic vmlinuz-2.6.38-10-generic initrd.img-2.6.38-11-generic vmlinuz-2.6.38-11-generic initrd.img-2.6.38-12-generic vmlinuz-2.6.38-12-generic initrd.img-2.6.38-8-generic vmlinuz-2.6.38-8-generic initrd.img-3.0.0-12-generic vmlinuz-3.0.0-12-generic initrd.img-3.0.0-13-generic vmlinuz-3.0.0-13-generic initrd.img-3.0.0-14-generic vmlinuz-3.0.0-14-generic

    Currently, I'm using the 3.0.0-14-generic kernel.
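
    The bulk of that listing is old vmlinuz-* and initrd.img-* files, and purging the matching linux-image packages removes them along with the abi, config, System.map and vmcoreinfo leftovers. A sketch using the versions shown above (package names are assumed to match the file names; confirm with the dpkg output, and never purge the kernel that uname -r reports):

      dpkg -l 'linux-image-*' | grep ^ii
      sudo apt-get purge linux-image-2.6.31-21-generic linux-image-2.6.32-25-generic
      sudo apt-get purge linux-image-2.6.38-{8,10,11,12}-generic
      sudo apt-get purge linux-image-3.0.0-12-generic   # consider keeping 3.0.0-13 as a fallback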


  • Cannot delete .Trash-503 directory, returns a $RECYCLE.BIN.trashinfo: Input/output error

    - by Parto
    I cannot delete the .Trash-503 folder via the GUI or the terminal; it returns a $RECYCLE.BIN.trashinfo: Input/output error. Not even sudo rm -r or a simple ls works in that trash directory. Check the terminal output below:

    subroot@subroot:~$ cd /media/xxxxx/
    subroot@subroot:/media/xxxxx$ rm .Trash-503/
    rm: cannot remove `.Trash-503/': Is a directory
    subroot@subroot:/media/xxxxx$ rm -r .Trash-503/
    rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error
    rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error
    rm: cannot remove `.Trash-503/info': Directory not empty
    subroot@subroot:/media/xxxxx$ sudo rm -r .Trash-503/
    [sudo] password for subroot:
    rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error
    rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error
    subroot@subroot:/media/xxxxx$ cd .Trash-503/
    subroot@subroot:/media/xxxxx/.Trash-503$ ls
    info
    subroot@subroot:/media/xxxxx/.Trash-503$ cd info/
    subroot@subroot:/media/xxxxx/.Trash-503/info$ ls
    ls: cannot access $RECYCLE.BIN.trashinfo: Input/output error
    ls: cannot access found.000.trashinfo: Input/output error
    found.000.trashinfo $RECYCLE.BIN.trashinfo
    subroot@subroot:/media/xxxxx/.Trash-503/info$

    What's going on here, and how can I delete this folder?

    EDIT: I tried checking and repairing the partition using GParted, only to get this error message:

    ERROR: Filesystem check failed!
    ERROR: 264 clusters are referenced multiple times.
    NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE! The usage of the /f parameter is very IMPORTANT! No modification was and will be made to NTFS by this software until it gets repaired.

    I don't have Windows installed; how can I run chkdsk /f from Ubuntu?
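
    Ubuntu has no native chkdsk, but the ntfs-3g suite ships ntfsfix, which repairs some common inconsistencies and otherwise marks the volume dirty so Windows would check it. A sketch (the device name is an assumption; find the real one with mount or sudo blkid):

      sudo umount /media/xxxxx
      sudo ntfsfix /dev/sdb1

    For genuine corruption like multiply-referenced clusters, though, ntfsfix may not be enough; running chkdsk /f from a Windows install/recovery disc, or on a friend's Windows machine with the drive attached, remains the reliable fix.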


  • Where did my free space go?

    - by Ari B. Friedman
    I have a storage drive (2TB) and an OS drive (90GB SSD). I've run out of space on the OS drive:

    /$ df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/sdb1 72G 72G 0 100% /
    udev 5.9G 12K 5.9G 1% /dev
    tmpfs 2.4G 1.2M 2.4G 1% /run
    none 5.0M 0 5.0M 0% /run/lock
    none 5.9G 428K 5.9G 1% /run/shm
    /dev/sda1 1.9T 639G 1.2T 37% /media/StorageDrive

    So be it. But when I attempt to figure out where the space has gone, I cannot find anything remotely approaching the capacity of the drive:

    /$ sudo du -h -d 1
    du: cannot access `./media/StorageDrive/home/ari/.gvfs': Permission denied
    675G ./media
    2.3G ./var
    0 ./proc
    7.0M ./tmp
    27M ./boot
    4.0K ./lib64
    12K ./dev
    44M ./home
    16K ./lost+found
    8.0M ./sbin
    223M ./lib
    4.0K ./selinux
    1.4M ./run
    140K ./root
    8.8M ./bin
    4.0K ./mnt
    38M ./etc
    8.0K ./srv
    4.8G ./usr
    65M ./opt
    0 ./sys
    682G .

    Note that the difference between the total (682G) and the mounted drives in /media (675G) is only about 7G. How are 72G being used? Where is this dark matter hiding?
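
    When df and du disagree like this, two usual suspects are deleted files that some process still holds open (df counts them, du cannot see them) and files written under /media/StorageDrive at a time when the 2TB drive was not mounted, which now sit invisibly beneath the mount point. Hedged checks for both (/mnt/rootonly is a placeholder):

      sudo lsof +L1                        # deleted-but-open files
      sudo mkdir -p /mnt/rootonly
      sudo mount --bind / /mnt/rootonly    # a view of / with nothing mounted over it
      sudo du -sh /mnt/rootonly/media      # anything hiding under the mount point?
      sudo umount /mnt/rootonly

    Also note ext4 reserves about 5% of blocks for root by default (visible via tune2fs -l /dev/sdb1), which is why Avail can hit 0 while Used still looks short of Size.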


  • /tmp shows 690 MB full, actual size 72 KB. Why?

    - by Ankit
    Why is the /tmp directory on my system showing 690 MB full, whereas du -sh /tmp shows only 72K?

    drwxrwxrwt 2 lightdm lightdm 4096 Aug 29 21:49 at-spi2
    drwx------ 2 ankit ankit 4096 Aug 29 21:50 keyring-0JTfoY
    drwx------ 2 ankit ankit 4096 Aug 29 21:44 keyring-rChLLL
    drwx------ 2 root root 16384 Jul 22 02:10 lost+found
    drwx------ 2 ankit ankit 4096 Jan 1 1970 orbit-ankit
    drwx------ 2 lightdm lightdm 4096 Aug 29 21:50 pulse-2L9K88eMlGn7
    drwx------ 2 root root 4096 Aug 29 21:44 pulse-PKdhtXMmr18n
    drwx------ 2 ankit ankit 4096 Aug 29 21:50 pulse-zR1TZUAZfmQW
    drwx------ 2 ankit ankit 4096 Aug 29 21:44 ssh-dlslOXOq2203
    drwx------ 2 ankit ankit 4096 Aug 29 21:50 ssh-MrQQVRyy3316
    -rw------- 1 ankit ankit 0 Aug 29 21:45 tmp0qnNG4
    -rw------- 1 ankit ankit 0 Aug 29 21:50 tmpVvSMt6
    -rw------- 1 ankit ankit 0 Aug 29 21:49 tmpy9Gadz
    -rw-rw-r-- 1 lightdm lightdm 0 Aug 29 21:44 unity_support_test.0

    ankit@duster:/tmp$ df -h
    df: `/home/ankit/.gvfs': Transport endpoint is not connected
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda1 79G 11G 65G 14% /
    udev 2.9G 4.0K 2.9G 1% /dev
    tmpfs 1.2G 868K 1.2G 1% /run
    none 5.0M 0 5.0M 0% /run/lock
    none 2.9G 220K 2.9G 1% /run/shm
    /dev/sda7 38G 690M 35G 2% /tmp
    /dev/sda5 93G 26G 63G 30% /home
    /dev/sda6 93G 1.6G 87G 2% /boot
    /dev/sda3 154G 69G 78G 48% /home/mount_150

    ankit@duster:/tmp$ sudo du -sh /tmp/
    72K
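
    df asks the filesystem how many blocks are allocated, while du totals the files it can reach, so a gap like this usually means allocated blocks with no visible file: either deleted files still held open by some process, or plain filesystem overhead such as the ext4 journal on this dedicated 38G partition. Hedged checks:

      sudo lsof +L1 /tmp                            # deleted-but-open files on /tmp
      sudo tune2fs -l /dev/sda7 | grep -i journal   # the journal also counts as Used

    Restarting whichever process shows up in the lsof output (or simply rebooting) releases the space held by deleted files.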


  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and in each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and were created from snapshots of the same original template. Three of them run perfectly, but one (called s-ub-02) keeps being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis:

    e2fsck 1.41.14 (22-Dec-2010)
    /dev/domu/s-ub-02-root contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Inode 525418 has an invalid extent (logical block 8959, invalid physical block 0, len 0)
    Clear<y>? yes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks

    The console typically shows the following errors for the system disk (xvda2):

    [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0)
    [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only

    I have created new versions of the system disk; the same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to preclude a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this:

    /exports/users 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
    /exports/media/music 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
    /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
    /exports/opt 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)

    /exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk. /exports/media is an EXT2 volume. (There is an issue where clients see /exports/media/pictures as a read-only volume, which I mention for completeness.) With the exception of the read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log; all of a sudden, no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve


  • How do you force Ubuntu to unmount a disk when you press the eject button on an optical drive?

    - by Michael Curran
    When upgrading my hardware, I also upgraded to Ubuntu 10.10. On my previous system (with 10.04 and earlier), when I ejected a disk from the optical drive, the subfolder in the /media directory was automatically removed. On my new 10.10 system, if I don't eject the disk using the "eject" command within the system, the disk remains mounted, even after a new disk is inserted. The new drive is a Blu-ray drive, but I haven't noticed any other problems from it. Normally this isn't a problem, but it makes installing applications that are spread over multiple CDs more difficult in many cases (i.e. Wine). Any advice?
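
    As a stopgap, the command-line tools unmount before ejecting, which keeps /media clean; a hedged sketch (the device node is an assumption; check yours with mount):

      eject /dev/sr0                 # unmounts the disc first, then ejects
      # or, via the desktop's disk service on 10.10:
      udisks --unmount /dev/sr0 && udisks --eject /dev/sr0

    The udisks flags are from memory of the 10.10-era udisks v1 CLI; udisks --help will confirm them.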


  • What is Camera Raw, and Why Would a Professional Prefer it to JPG?

    - by Eric Z Goodnight
    A common setting on many digital cameras, RAW is a filetype option many professional photographers prefer over JPG, despite a huge disparity in filesize. Find out why, what RAW is, and how you can benefit from using this professional-quality filetype.


  • Oh that XML - did you ever try to read a raw file?

    - by GGBlogger
    If you've ever looked at a raw XML file - even a very simple one - you'll understand. XML files are nearly impossible to read in raw format. That's where various tools come in, and there are a bunch of them, including some very simple ones. If, however, you need some horsepower, one of the best tools on the planet is LiquidXML! LiquidXML is a developer's tool. It's also an analyst's tool, a tester's tool and a designer's tool. Did I mention that it is compatible with Visual Studio? Once again, I will be following up on this as time permits. But if this sounds like something you can use, just visit http://www.liquid-technologies.com/. You will find a very complete description plus high-quality training videos that will help you decide if this is a tool you can use.

