Search Results

Search found 19975 results on 799 pages for 'disk queue length'.

Page 13/799 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Windows 7 disk check error

    - by crazybmanp
    I have gotten the error described in Microsoft KB article 975778 (http://support.microsoft.com/kb/975778/en-us). Luckily, I was able to get onto my computer, but when I went to install the hotfix referenced in the article, running it gave me this error: "The update is not applicable to your computer." Is there any way to make the disk check error stop, or to get this fix to work? I will answer any questions on hardware if you post them in the comments. Answers to questions: this is a full version of Windows 7 that I got from ASUS as an upgrade disk from Windows Vista (hopefully more specific than you were asking).

    Read the article

  • Need help solving "Disk Error Press any key to restart" after blue screen in XP.

    - by Dennis
    I have had a couple of BSODs lately, and after the last one I got the disk error message. The disk shows up during the BIOS portion of the boot. It is in a Dell Precision 690, so the F12 key gives me a boot menu. I can boot from the utility partition, and if I specifically select the hard drive from the boot menu it boots fine. Any ideas why, if I just try to do an unattended boot, it gives the error listed in the title?

    Read the article

  • Altq limits not being applied to UDP transfers

    - by overkordbaever
    I have an OpenBSD server acting as a router/firewall with the packet filter ruleset shown below, plus a Linux server and a Linux client. When transferring files (using netcat) over TCP, the limits are applied (for example the 100mbit limit in the example), but when transferring data over UDP the limits aren't applied; the file always takes the same amount of time no matter what queue bandwidth limit I set (I can even turn off the queues completely and still get the same result). Why aren't the queuing rules applied to UDP packets? The rules used:

        #queue rules
        altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
        queue def bandwidth 0Mb cbq(default)
        queue low bandwidth 100Mb cbq

        #Passrules test
        pass out quick from $int_if to $ext_if queue low
        pass in quick from $ext_if to $int_if queue low
        pass out quick from $ext_if to $int_if queue low
        pass in quick from $int_if to $ext_if queue low

    I suppose this may be related to a question I've previously asked, but since it's more of a separate question, I'm asking it separately.

    Read the article

  • Can I make my drives visible and change their partition type without losing my data?

    - by user165408
    I have made a lot of mistakes and now I cannot see my hard disk, nor can I start my operating system on my laptop. All my passwords and important files are on my HDD, without any backup. I followed this course of action:

    1. Changed my hard disk partitions to dynamic, just to get a 5th partition. (1st mistake)
    2. Decreased the partitions to 4 again.
    3. Backed up the operating system from the 4th to the 3rd partition with Norton Ghost.
    4. Booted from a live CD for Windows XP.
    5. Formatted the 4th partition and moved all my important data from the 1st and 2nd partitions to the 4th partition.
    6. Deleted the 1st and 2nd partitions and created one partition from half of the empty space. So I have just 3 partitions, and empty space between the 1st and 2nd partitions.
    7. Tried to install Windows 8 to the first partition, but it did not allow it because the partition is dynamic. It did not allow installing to the other partitions either.
    8. Tried to install Windows XP to the 1st partition, but it said that if I continued I could not use the other drives, so I backed out of installing it.
    9. Booted from the Windows XP live CD, then enlarged the 1st partition to within less than 400 MB of the empty space. I thought it would become adjacent, but it was shown as 2 partitions. In My Computer I see just 3 drives.
    10. Using Norton Ghost I recovered my OS to the 1st partition. (2nd mistake; it was on the 4th partition originally)
    11. Booted from a Windows XP live CD and tried to install bcdedit to the live CD, but it did not work. Then I tried to install EaseUS Partition Master Home Edition. It installed with errors; when I started it, it showed me an error along the lines of "there is no hard disk". I looked at my PC and my drives were not there.
    12. Booted from the Norton Ghost CD, and it did not show me my drives either, although before I was able to see them. I checked the partition numbers shown by the Norton Ghost utility and they still have the same numbers, so I should be able to see my drives, but I cannot see them now.

    My hard disk is shown as external dynamic now, so I cannot see any drive on my PC in the live Windows XP. There are two options: the first is to import the external disk, and the second is to convert the disk to basic. Will they delete my data? I am afraid of booting from CDs like the Windows XP live CD, the Norton Ghost CD, and the operating system CD/DVD, because they may overwrite a few MB of their data over my data. These recovery tools already exist on the Windows XP live CD (The Ultimate Boot CD for Windows): CompuAppa SwissKnife V3, DBXtract, Disk Investigator, Fab's AutoBackup 2.0, FileRecovery, Floppy Repair, Free Undelete, Handy Recovery, Recovery Manager, Restoration, Restoration Help File by UBCD4Win, UnChk, Unstoppable Copier. Can any of them help me? Finally: how can I make my drives visible again without losing my data? And how can I convert my dynamic partitions to basic without losing my data?

    Read the article

  • Hyper-V Virtual Disk Creation Taking Forever

    - by mnemosyn
    After some struggle, I finally managed to set up Hyper-V 2008 R2 on our server. I connected to it using the Hyper-V Manager from a Windows 7 client and used the "New Virtual Machine Wizard", where I set up a 350 GB virtual hard disk. I hit the "Finish" button and the Hyper-V Manager has been working for 24 hours now, showing merely a "Creating Disk" dialog. A console on the Hyper-V host still reports 99.9% free space on the HD, but the machine's HD LED flashes from time to time (making a rather idle impression; it's not flashing frenetically). Does this usually take this long? Is there a way to find out whether it's still working or just idling? Should I repeat the process? Guides on the net tell me to be patient, but one day seems a bit extreme!?

    Read the article

  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got my new computer with a much bigger hard disk. I think I copied all important files over, but just to be sure, I'd like to keep a disk image of my old disk. To save space, I'd like to compress it, but I didn't find an option to mount a compressed image. My goals:

    - Result must be easy to access
    - No need to decompress the whole thing before I can access anything
    - Files should be quick to locate - no TAR/CPIO archive
    - Necessary space should be less than just copying the files over

    So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.

    Read the article

  • Check the disk for problems on Debian Lenny

    - by Equ
    Hi guys! I just bought a VPS hosting plan with Debian Lenny (I'm new to all this). I've managed to install and set up everything I need pretty well. My test website works fast, as expected, most of the time, but sometimes it is really slow (response time is about 5-10 seconds). I checked everything, and it seems there may be some disk issues. How can I check the disk for problems and measure its performance? What else could possibly cause such behaviour? Thank you!
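
    A quick way to sanity-check write latency on a VPS like this is to time small synchronous writes. A minimal Python sketch (the probe path and sample count are arbitrary choices, and on a VPS the numbers also reflect the host's load):

        import os, time

        # Time a series of small fsync'd writes; slow or erratic numbers
        # point at disk (or virtualization host) I/O trouble.
        SAMPLES = 50
        times = []
        with open("/tmp/disk_latency_probe", "wb") as f:
            for _ in range(SAMPLES):
                start = time.perf_counter()
                f.write(b"x" * 4096)        # one 4 KiB block
                f.flush()
                os.fsync(f.fileno())        # force it through to the disk
                times.append(time.perf_counter() - start)
        os.remove("/tmp/disk_latency_probe")
        print("avg %.2f ms, worst %.2f ms" % (
            sum(times) / len(times) * 1000, max(times) * 1000))

    Consistently high "worst" values during the slow periods would point away from the web stack and at the disk or the virtualization host.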

    Read the article

  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk which contains programs by copying all the disk contents (preserving file attributes) from the source to the destination disk, then unplugging the source disk and changing the drive letter of the destination disk to match that of the source?

    Context: I have a two-disk Windows 8 system with a system drive and a data drive. Recently, the data drive developed a number of bad sectors leading to I/O errors. I have been sent a replacement drive, so I simply need to clone the contents of the data drive onto the replacement. The drive contents include documents & media, user folders (My Documents and related), and some programs (games etc).

    Problem: the bad sectors on the source disk cause most disk cloning tools to fail with read errors. Attempted approaches include:

    - Disk clone from a live boot environment with Acronis True Image: fails due to read errors.
    - Disk clone from a live boot environment with Clonezilla: fails due to read errors.
    - Disk clone using Roadkil's Unstoppable Copier: fails due to hardware timeouts in the HDD (the application hangs indefinitely).
    - A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata): this succeeds.

    So at the moment I have a replacement disk which contains all of the data from the original disk. Now all I need to do is somehow get Windows to replace all references to the old disk with the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!
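
    For the file-level copy route that succeeded, a hedged sketch of the same idea in Python (drive letters are placeholders; shutil.copy2 preserves timestamps and attributes, and the error handler just records files that bad sectors make unreadable):

        import shutil

        def copy_with_skips(src, dst):
            """Copy a tree, preserving metadata, skipping unreadable files."""
            errors = []
            def _copy(s, d):
                try:
                    shutil.copy2(s, d)     # data plus attributes/timestamps
                except OSError as e:       # read error from a bad sector
                    errors.append((s, e))
            # dirs_exist_ok requires Python 3.8+
            shutil.copytree(src, dst, copy_function=_copy, dirs_exist_ok=True)
            return errors

        for path, err in copy_with_skips("E:/", "F:/"):
            print("skipped:", path, err)

    As for the letter swap itself: most per-user settings reference drive letters rather than disks, so reassigning the old letter to the new disk in Disk Management is the usual final step - but it is worth testing before wiping the old drive.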

    Read the article

  • Disk fragmentation when dealing with many small files

    - by Zorlack
    On a daily basis we generate about 3.4 million small JPEG files. We also delete about 3.4 million 90-day-old images. To date, we've dealt with this content by storing the images in a hierarchical manner. The hierarchy is something like this: /Year/Month/Day/Source/. This hierarchy allows us to effectively delete a day's worth of content across all sources. The files are stored on a Windows 2003 server connected to a 14-disk SATA RAID6. We've started having significant performance issues when writing to and reading from the disks. This may be due to the performance of the hardware, but I suspect that disk fragmentation may be a culprit as well. Some people have recommended storing the data in a database, but I've been hesitant to do this. Another thought was to use some sort of container file, like a VHD or something. Does anyone have any advice for mitigating this kind of fragmentation?
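
    As a sketch of the hierarchy described above (the root path is a placeholder), the daily purge reduces to removing one directory per source:

        import shutil
        from datetime import date, timedelta
        from pathlib import Path

        ROOT = Path("D:/images")    # hypothetical storage root

        def day_dir(d: date, source: str) -> Path:
            # the /Year/Month/Day/Source/ layout from the question
            return ROOT / f"{d.year:04d}" / f"{d.month:02d}" / f"{d.day:02d}" / source

        def purge(sources, keep_days=90):
            cutoff = date.today() - timedelta(days=keep_days)
            for source in sources:
                shutil.rmtree(day_dir(cutoff, source), ignore_errors=True)

    One property worth noting: deletes are whole-directory operations under this layout, but the daily churn of ~3.4 million small files still fragments free space, which is exactly the concern raised here.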

    Read the article

  • Testing a Non-blocking Queue

    - by jsw
    I've ported the non-blocking queue pseudocode here to C#. The code below is meant as a near-verbatim copy of the paper. What approach would you take to test the implementation? Note: I'm running in VS2010, so I don't have CHESS support yet.

        using System.Threading;
        #pragma warning disable 0420

        namespace ConcurrentCollections {
            class QueueNodePointer<T> {
                internal QueueNode<T> ptr;
                internal QueueNodePointer() : this(null) { }
                internal QueueNodePointer(QueueNode<T> ptr) { this.ptr = ptr; }
            }

            class QueueNode<T> {
                internal T value;
                internal QueueNodePointer<T> next;
                internal QueueNode() : this(default(T)) { }
                internal QueueNode(T value) {
                    this.value = value;
                    this.next = new QueueNodePointer<T>();
                }
            }

            public class ConcurrentQueue<T> {
                private volatile int count = 0;
                private QueueNodePointer<T> qhead = new QueueNodePointer<T>();
                private QueueNodePointer<T> qtail = new QueueNodePointer<T>();

                public ConcurrentQueue() {
                    var node = new QueueNode<T>();
                    node.next.ptr = null;
                    this.qhead.ptr = this.qtail.ptr = node;
                }

                public int Count {
                    get { return this.count; }
                }

                public void Enqueue(T value) {
                    var node = new QueueNode<T>(value);
                    node.next.ptr = null;
                    QueueNodePointer<T> tail;
                    QueueNodePointer<T> next;
                    while (true) {
                        tail = this.qtail;
                        next = tail.ptr.next;
                        if (tail == this.qtail) {
                            if (next.ptr == null) {
                                var newtail = new QueueNodePointer<T>(node);
                                if (Interlocked.CompareExchange(ref tail.ptr.next, newtail, next) == next) {
                                    Interlocked.Increment(ref this.count);
                                    break;
                                } else {
                                    Interlocked.CompareExchange(ref this.qtail, new QueueNodePointer<T>(next.ptr), tail);
                                }
                            }
                        }
                    }
                    Interlocked.CompareExchange(ref this.qtail, new QueueNodePointer<T>(node), tail);
                }

                public T Dequeue() {
                    T value;
                    while (true) {
                        var head = this.qhead;
                        var tail = this.qtail;
                        var next = head.ptr.next;
                        if (head == this.qhead) {
                            if (head.ptr == tail.ptr) {
                                if (next.ptr == null) {
                                    return default(T);
                                }
                                Interlocked.CompareExchange(ref this.qtail, new QueueNodePointer<T>(next.ptr), tail);
                            } else {
                                value = next.ptr.value;
                                var newhead = new QueueNodePointer<T>(next.ptr);
                                if (Interlocked.CompareExchange(ref this.qhead, newhead, head) == head) {
                                    Interlocked.Decrement(ref this.count);
                                    break;
                                }
                            }
                        }
                    }
                    return value;
                }
            }
        }
        #pragma warning restore 0420
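
    In the absence of CHESS, one common approach is a brute-force stress harness: many producers enqueue unique items while many consumers dequeue, then invariants are checked afterwards (nothing lost, nothing duplicated, per-producer FIFO order preserved within each consumer). A sketch of that harness in Python, against any queue object exposing enqueue/dequeue where dequeue returns None when empty (mirroring Dequeue returning default(T) above); the same structure ports directly to C# threads, where it will also get real contention rather than GIL-serialized execution:

        import threading

        def stress(queue, producers=4, consumers=4, per_thread=50_000):
            done = threading.Event()
            results = [[] for _ in range(consumers)]

            def produce(pid):
                for i in range(per_thread):
                    queue.enqueue((pid, i))     # every item is unique

            def consume(out):
                while True:
                    item = queue.dequeue()      # assumed: None when empty
                    if item is not None:
                        out.append(item)
                        continue
                    if done.is_set():
                        while True:             # producers done: final drain
                            item = queue.dequeue()
                            if item is None:
                                return
                            out.append(item)

            prods = [threading.Thread(target=produce, args=(p,)) for p in range(producers)]
            cons = [threading.Thread(target=consume, args=(r,)) for r in results]
            for t in prods + cons:
                t.start()
            for t in prods:
                t.join()
            done.set()
            for t in cons:
                t.join()

            items = [x for r in results for x in r]
            assert len(items) == producers * per_thread    # nothing lost
            assert len(set(items)) == len(items)           # nothing duplicated
            for r in results:                              # FIFO per producer,
                last = {}                                  # within one consumer
                for pid, i in r:
                    assert last.get(pid, -1) < i
                    last[pid] = i

    Run it in a loop with varying thread counts; lock-free bugs tend to show up as an assertion failure or a hang only after many iterations.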

    Read the article

  • Lockless queue implementation ends up having a loop under stress

    - by Fozi
    I have lockless queues written in C in the form of a linked list that contains requests from several threads, posted to and handled in a single thread. After a few hours of stress I end up with the last request's next pointer pointing to itself, which creates an endless loop and locks up the handling thread. The application runs (and fails) on both Linux and Windows. I'm debugging on Windows, where my COMPARE_EXCHANGE_PTR maps to InterlockedCompareExchangePointer. This is the code that pushes a request to the head of the list, and is called from several threads:

        void push_request(struct request * volatile * root, struct request * request)
        {
            assert(request);
            do
            {
                request->next = *root;
            } while(COMPARE_EXCHANGE_PTR(root, request, request->next) != request->next);
        }

    This is the code that gets a request from the end of the list, and is only called by the single thread that is handling them:

        struct request * pop_request(struct request * volatile * root)
        {
            struct request * volatile * p;
            struct request * request;
            do
            {
                p = root;
                while(*p && (*p)->next) p = &(*p)->next; // <- loops here
                request = *p;
            } while(COMPARE_EXCHANGE_PTR(p, NULL, request) != request);
            assert(request->next == NULL);
            return request;
        }

    Note that I'm not using a tail pointer because I wanted to avoid the complication of having to deal with the tail pointer in push_request. However, I suspect that the problem might be in the way I find the end of the list. There are several places that push a request into the queue, but they all look generally like this:

        // device->requests is defined as
        // struct request * volatile requests;

        struct request * request = malloc(sizeof(struct request));
        if(request)
        {
            // fill out request fields
            push_request(&device->requests, request);
            sem_post(device->request_sem);
        }

    The code that handles the request does more than that, but in essence does this in a loop:

        if(sem_wait_timeout(device->request_sem, timeout) == sem_success)
        {
            struct request * request = pop_request(&device->requests);
            // handle request
            free(request);
        }

    I also just added a function that checks the list for duplicates before and after each operation, but I'm afraid that this check will change the timing so that I will never encounter the point where it fails. (I'm waiting for it to break as I'm writing this.) When I break into the hanging program, the handler thread loops in pop_request at the marked position. I have a valid list of one or more requests, and the last one's next pointer points to itself. The request queues are usually short; I've never seen more than 10, and only 1 and 3 the two times I could take a look at this failure in the debugger. I thought this through as much as I could, and I came to the conclusion that I should never be able to end up with a loop in my list unless I push the same request twice. I'm quite sure that this never happens. I'm also fairly sure (although not completely) that it's not the ABA problem. I know that I might pop more than one request at the same time, but I believe this is irrelevant here, and I've never seen it happen. (I'll fix this as well.) I thought long and hard about how I could break my function, but I don't see a way to end up with a loop. So the question is: can someone see a way this can break? Can someone prove that it cannot?
    Eventually I will solve this (maybe by using a tail pointer or some other solution - locking would be a problem because the threads that post must not block; I do have a RW lock at hand, though), but I would like to make sure that changing the list actually solves my problem (as opposed to just making it less likely because of different timing).

    Read the article

  • Why is the root partition on my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After having run apt-get purge and apt-get autoremove and emptied the Trash can, I still have this problem, as shown by a screenshot of GParted: the disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04 I didn't configure my disks well, and the disk /dev/sda6 is not properly mounted as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you, everybody):

    - My home directory is not encrypted.
    - The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself and manually.)
    - After I mount /dev/sda6, the command df -h gives:

          Filesystem      Size  Used Avail Use% Mounted on
          /dev/sda7       244G  221G   12G  96% /
          udev            3,9G  4,0K  3,9G   1% /dev
          tmpfs           1,6G  904K  1,6G   1% /run
          none            5,0M     0  5,0M   0% /run/lock
          none            3,9G  164K  3,9G   1% /run/shm
          /dev/sda6       653G  189G  433G  31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.

    Read the article

  • How can I get the identity of a disk?

    - by sxingfeng
    I want to identify a disk in C++ in my Windows application. For example: I have a disk mounted at E:\. Then I change the disk and replace it with another one; the name is still E:\. How can I know that the disk has changed and is not the original one? And if I do not have administrator privileges on Windows 7, can I still use some method to identify different disks? Many thanks!
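
    One low-privilege illustration of the idea (shown via Python's ctypes for brevity; the same Win32 call is available directly from C++): every formatted volume carries a serial number that GetVolumeInformation returns without administrator rights. Note this is the volume serial, which changes if the disk is reformatted; a hardware serial number would need IOCTL_STORAGE_QUERY_PROPERTY instead.

        import ctypes

        def volume_serial(root="E:\\"):
            """Return the volume serial number for a drive root, or None."""
            serial = ctypes.c_ulong(0)
            ok = ctypes.windll.kernel32.GetVolumeInformationW(
                ctypes.c_wchar_p(root),
                None, 0,               # skip the volume name
                ctypes.byref(serial),
                None, None,            # max component length, FS flags
                None, 0)               # skip the file system name
            return serial.value if ok else None

        # Remember the value; a different serial later means a different volume.
        print(hex(volume_serial("E:\\") or 0))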

    Read the article

  • Variable-length Blob in Hibernate?

    - by Seth
    I have a byte[] member in one of my persistable classes. Normally, I'd just annotate it with @Lob and @Column(name="foo", size=). In this particular case, however, the length of the byte[] can vary a lot (from ~10KB all the way up to ~100MB). If I annotate the column with a size of 128MB, I feel like I'll be wasting a lot of space for the small and mid-sized objects. Is there a variable-length blob type I can use? Will Hibernate take care of all of this for me behind the scenes without wasting space? What's the best way to go about this? Thanks!

    Read the article

  • IIS7 - Specifying content-length header in ASP causes "connection reset" error

    - by MisterZimbu
    I'm migrating a series of websites from an existing IIS5 server to a brand-new IIS7 web server. One of the pages pulls a data file from a blob in the database and serves it to the end user:

        Response.ContentType = rs("contentType")
        Response.AddHeader "Content-Disposition", "attachment;filename=" & Trim(rs("docName")) & rs("suffix") ' let the browser know the file name
        Response.AddHeader "Content-Length", CStr(rs("docsize")) ' let the browser know the file size

    Testing this on the new IIS7 install, I get a "Connection Reset" error in both Internet Explorer and Firefox. The document is served up correctly if the Content-Length header is removed (but then the user won't get a useful progress bar). Any ideas on how to correct this, whether it be a server configuration option or via code? Thanks.
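
    Not the IIS7-specific fix, but a general point worth checking first: browsers can abort a download when the Content-Length header disagrees with the number of bytes actually sent, so a stale docsize column that no longer matches the blob can produce exactly this symptom. A minimal sketch of a correct attachment response (Python used for illustration; the file name is a placeholder):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class AttachmentHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                with open("example.pdf", "rb") as f:   # stands in for the blob
                    data = f.read()
                self.send_response(200)
                self.send_header("Content-Type", "application/pdf")
                self.send_header("Content-Disposition",
                                 'attachment; filename="example.pdf"')
                # Length computed from the actual payload, not a stored value
                self.send_header("Content-Length", str(len(data)))
                self.end_headers()
                self.wfile.write(data)

        HTTPServer(("localhost", 8080), AttachmentHandler).serve_forever()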

    Read the article

  • Unable to format 16 GB USB disk

    - by akshay.is.gr8
    Whenever I try to format my 16 GB USB disk using GParted, it does the formatting, but when it refreshes it then shows "unknown". I tried Disk Utility as well; Disk Utility was able to format it to FAT, but the files vanish when the disk is removed and attached again. Edit: the disk format completes every time, but afterwards GParted immediately shows an "Unknown" file system type while Disk Utility shows FAT; either way, when the disk is unplugged and then reconnected the files are not there, so it is unusable.

    Read the article

  • Oracle Database 12c New Feature: ASM Scrubbing Disk Groups

    - by Liu Maclean(???)
    In 12.1, Oracle ASM introduces a feature for detecting and repairing logical data corruption: Scrubbing Disk Groups. Disk Scrubbing checks for logical corruption and repairs it automatically, using the mirror copies available in Normal and High Redundancy disk groups. Disk Scrubbing is combined with disk group rebalancing to reduce the additional I/O it requires, so its impact on regular I/O is minimal. Scrubbing can be run at different granularities - a whole disk group, a single file, or a single disk - using the ALTER DISKGROUP statement:

        SQL> ALTER DISKGROUP data SCRUB POWER LOW;
        SQL> ALTER DISKGROUP data SCRUB FILE '+DATA/ORCL/ASKMACLEAN/example.266.806582193' REPAIR POWER HIGH FORCE;
        SQL> ALTER DISKGROUP data SCRUB DISK DATA_0005 REPAIR POWER HIGH FORCE;

    Notes on the SCRUB clause:

    - The REPAIR keyword is optional; without REPAIR, SCRUB only checks for and reports logical corruption, it does not repair it.
    - POWER can be set to AUTO, LOW, HIGH or MAX; if POWER is omitted, it defaults to AUTO.
    - With the WAIT option the command returns only after the scrubbing completes; without WAIT, the operation is placed on the scrubbing queue and the command returns immediately.
    - With the FORCE option the command is processed even if the system load is high and scrubbing would negatively impact I/O performance.

    Read the article

  • Disk operations freeze Debian

    - by Grzenio
    Hi, I have just installed Debian testing on my new desktop and I am not very happy with the performance - when I perform a disk-intensive operation, e.g. upgrading packages in the system, everything seems to freeze; for example, changing tabs in Iceweasel takes 3 seconds. I run Debian on my 3-year-old Thinkpad X60 ultra-portable and I don't have these issues (every single parameter of the laptop is much worse than the desktop's). I am using the default packaged kernel and scripts. I ran hdparm -t /dev/sda1 and got around 96 MB/s, which is expected. What else can I try to make it work better?

    EDIT:

        grzes:/home/ga# hdparm -i /dev/sda

        /dev/sda:
         Model=WDC WD15EARS-00Z5B1, FwRev=80.00A80, SerialNo=WD-WMAVU1362357
         Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
         RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
         BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
         CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=2930277168
         IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
         PIO modes: pio0 pio3 pio4
         DMA modes: mdma0 mdma1 mdma2
         UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
         AdvancedPM=no WriteCache=enabled
         Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

         * signifies the current active mode

    EDIT2: Even my wife said "on this new computer I can't do anything when I copy the photos from the camera, and it's much worse than on the old one". So it must be serious.

    EDIT3: Updated to 2.6.32, but still no improvement.

    EDIT4: I forgot to mention that the new disk is ext4; the old one was ext3.

    EDIT5: Still not solved. I have a P43 ASUS P5QL-E board. Lines from dmesg that seem relevant:

        [ 0.370850] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
        [ 0.370852] io scheduler noop registered
        [ 0.370853] io scheduler anticipatory registered
        [ 0.370854] io scheduler deadline registered
        [ 0.370876] io scheduler cfq registered (default)
        ...
        [ 0.908233] ata_piix 0000:00:1f.2: version 2.13
        [ 0.908243] ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [ 0.908246] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ]
        [ 0.908275] ata_piix 0000:00:1f.2: setting latency timer to 64
        [ 0.908316] scsi0 : ata_piix
        [ 0.908374] scsi1 : ata_piix
        [ 0.909180] ata1: SATA max UDMA/133 cmd 0xa000 ctl 0x9c00 bmdma 0x9480 irq 19
        [ 0.909183] ata2: SATA max UDMA/133 cmd 0x9880 ctl 0x9800 bmdma 0x9488 irq 19
        [ 0.909199] ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [ 0.909202] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ]
        [ 0.909228] ata_piix 0000:00:1f.5: setting latency timer to 64
        [ 0.909279] scsi2 : ata_piix
        [ 0.909326] scsi3 : ata_piix
        [ 0.910021] ata3: SATA max UDMA/133 cmd 0xb000 ctl 0xac00 bmdma 0xa480 irq 19
        [ 0.910024] ata4: SATA max UDMA/133 cmd 0xa880 ctl 0xa800 bmdma 0xa488 irq 19
        [ 0.915575] FDC 0 is a post-1991 82077
        ...
        [ 1.716062] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [ 1.716074] ata1.01: SATA link down (SStatus 0 SControl 300)
        [ 1.724318] ata1.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133
        [ 1.724322] ata1.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32)
        [ 1.740339] ata1.00: configured for UDMA/133
        [ 1.740428] scsi 0:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5
        [ 1.746788] scsi 6:0:0:0: CD-ROM ASUS DRW-1608P 1.17 PQ: 0 ANSI: 5
        ...
        [ 1.925981] sd 0:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
        [ 1.926005] sd 0:0:0:0: [sda] Write Protect is off
        [ 1.926007] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [ 1.926020] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 1.926092] sda:
        sr0: scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray
        [ 1.931106] Uniform CD-ROM driver Revision: 3.20
        [ 1.931191] sr 6:0:0:0: Attached scsi CD-ROM sr0
        ...
        [ 1.941936] sda1 sda2 sda3 sda4 < sda5 sda6 >
        [ 1.967691] sd 0:0:0:0: [sda] Attached SCSI disk
        [ 1.970938] sd 0:0:0:0: Attached scsi generic sg0 type 0
        [ 1.970959] sr 6:0:0:0: Attached scsi generic sg1 type 5
        ...
        [ 2.500086] EXT4-fs (sda3): mounted filesystem with ordered data mode
        ...
        [ 7.150468] EXT4-fs (sda6): mounted filesystem with ordered data mode

    Read the article

  • Windows 7 disk errors after a few hours of runtime

    - by GFK
    I'm having trouble understanding what is going on with my work PC. Whenever I boot it, it runs fine for a while, then starts to randomly show disk errors. The displayed error often contains the message "not enough storage is available to process this command", although depending on the application that fails it can be different. This has happened for weeks now and is getting worse. This is what troubles me:

    - It never seems to impact critical parts of the system (no BSOD, no freeze).
    - Only some applications seem impacted, refusing to function correctly after a while: Outlook 2010 cannot download RSS feeds anymore, Firefox 6 or IE9 cannot download anything bigger than 3 MB without failing, Windows Update fails, all MSI installers fail, Visual Studio 2010 starts failing in weird manners...
    - It only happens after a while of using it (typically 3 hours, but it seems that installing a program or compiling several times makes it shorter).
    - Rebooting solves it (temporarily).

    The system: the OS is Windows 7 Pro Spanish SP1, 32-bit. The system is an HP Compaq 6000 Pro with 4 GB memory (only 3.4 GB usable since the system is 32-bit) and one 500 GB hard drive. Installed applications include Visual Studio 2010, SQL Server 2008 R2, VMware Workstation 7, Microsoft Security Essentials and Office 2010. Shutting down all related services and processes doesn't seem to change anything. The diagnostics I've run so far:

    - Hard drive: 465 GB, 165 GB free.
    - Process Explorer: physical and virtual memory seem OK (pagefile is 5.3 GB, physical memory usage 70%, system commit 39%).
    - Windows Memory Diagnostic tool: OK.
    - CHKDSK returned:

          488282111 KB total disk space.
          281668248 KB in 265779 files.
          150188 KB in 62949 indexes.
          0 KB in bad sectors.
          571755 KB in use by the system.
          The log file has occupied 65536 kilobytes.
          205891920 KB available on disk.

      For non-Spanish speakers, that means all OK.
    - SMART diagnostic tools (DiskCheckup) report all values normal; temperatures are in the normal range (HWinfo).
    - The Event Viewer doesn't seem to contain any significant message.
    - Ran CCleaner 3, without any noticeable effect.

    I was thinking about some file-number limit (between Visual Studio projects and other applications, there are around 300,000 files on the hard drive), but I couldn't find any. It's possible there is something related to the use of the temporary folders (it's the only explanation I have for why applications fail but Windows doesn't), but I cannot confirm that. The only thing I cannot find out is whether chkdsk reporting 65 MB for the log is normal; it seems since Vista it always reports this. Any other cleaning/diagnostic tool you might know of?

    Edit: I ran several other tools since I first published the question:

    - Seagate SeaTools (the HD manufacturer's analysis tool): complete test run OK.
    - Intel Rapid 10.1 (the HD controller manufacturer's troubleshooting tool): the HD is OK.
    - Microsoft Desktop Heap Monitor:

          Desktop Heap Information Monitor Tool (Version 8.1.2925.0)
          Copyright (c) Microsoft Corporation. All rights reserved.

          Session ID:    1 Total Desktop: (  46464 KB -   11 desktops)

          WinStation\Desktop                       Heap Size(KB)  Used Rate(%)
          WinSta0\Winlogon (s1)                    128            3.6
          WinSta0\Disconnect (s1)                  64             3.8
          WinSta0\Default (s1)                     20480          3.0
          msswindowstation\mssrestricteddesk (s0)  1024           0.2
          __X78B95_89_IW__A8D9S1_42_ID (s0)        1024           0.2
          Service-0x0-3e5$\Default (s0)            1024           0.6
          Service-0x0-3e4$\Default (s0)            1024           0.3
          Service-0x0-3e7$\Default (s0)            1024           2.1
          WinSta0\Winlogon (s0)                    128            1.9
          WinSta0\Disconnect (s0)                  64             3.8
          WinSta0\Default (s0)                     20480          0.0

      All OK; desktop heap usage is under 5%.

    Edit 2: I tried totally resetting my account by creating a new one, logging in under the new one, deleting the first one (local rights and files), then logging back in with the deleted account (it is a domain account). No luck. Also, I found out the error is often "not enough storage is available to process this command". Searching the internet, I found an old troubleshooting tip (setting a registry key to raise the IRP stack limit, whatever that is) which did not change anything.

    Read the article

  • Cash Application Work Queue in Oracle Receivables Release 12.1.1

    - by Robert Story
    Upcoming Webcast
    Title: Cash Application Work Queue in Oracle Receivables Release 12.1.1
    Date: March 24, 2010
    Time: 10:00 am EDT, 7:00 am PDT, 14:00 GMT
    Product Family: E-Business Suite Receivables 12.1.1 Receipts

    Summary: Understand the setups and processes for the Cash Application Work Queue in Release 12.1.1 and learn how to diagnose basic functional issues. This one-hour session is recommended for technical and functional users. We will be covering topics related to processing receipts efficiently, managing the workload of cash application owners and diagnosing issues. Topics will include:

    - Description of Cash Application Work Queue
    - Setup and Work Queue Process
    - Dependencies and Interactions
    - Basic Troubleshooting Steps

    A short, live demonstration (only if applicable) and a question and answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Occasional disk I/O errors in SQLite

    - by Alix Axel
    I have a very simple website running PHP and SQLite 3.7.9 (with PDO). After establishing the SQLite connection I immediately execute the following queries:

        PRAGMA busy_timeout=0;
        PRAGMA cache_size=8192;
        PRAGMA foreign_keys=ON;
        PRAGMA journal_size_limit=67110000;
        PRAGMA legacy_file_format=OFF;
        PRAGMA page_size=4096;
        PRAGMA recursive_triggers=ON;
        PRAGMA secure_delete=ON;
        PRAGMA synchronous=NORMAL;
        PRAGMA temp_store=MEMORY;
        PRAGMA journal_mode=WAL;
        PRAGMA wal_autocheckpoint=4096;

    This website only has one writer and a few occasional readers, so I don't expect any concurrency problems (and I'm even using WAL). Every couple of days, I've seen this error reported by PHP:

        Fatal error: Uncaught exception 'PDOException' with message
        'SQLSTATE[HY000]: General error: 10 disk I/O error' in ...
        Stack trace: #0 ...: PDO->exec('PRAGMA cache_si...')

    There are several things that make this error very weird to me:

    - It's not a transient problem - no matter how many times I refresh the page, it won't go away.
    - The database file is not corrupted - the sqlite3 executable can open the database without problems.

    If the following pragmas are commented out, PHP stops throwing the disk I/O exception:

        PRAGMA cache_size=8192;
        PRAGMA synchronous=NORMAL;
        PRAGMA journal_mode=WAL;

    Then, after successfully reconnecting to the database, I'm able to reintroduce these pragmas and the code will run smoothly for days - until eventually the same error occurs without any apparent reason. I wasn't able to reproduce this error so far, so I'm clueless about its origin. I'm really curious what may be causing this problem... Any ideas?

    Environment:
    - Ubuntu Server 12.04 LTS
    - PHP 5.4.15
    - SQLite 3.7.9
    - Database size: ≈ 10 MiB
    - Transaction (write) size: ≈ 1 KiB

    EDIT: Might these symptoms have something to do with busy_timeout?
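
    For anyone trying to reproduce this setup outside PHP, a minimal sketch of the same per-connection configuration (Python's sqlite3 shown; the database path is a placeholder). One detail that matters when reasoning about errors like the one above: journal_mode=WAL is persistent in the database file, while most of the other pragmas must be re-applied on every new connection.

        import sqlite3

        conn = sqlite3.connect("site.db")   # placeholder path
        for pragma in (
            "busy_timeout=0",
            "cache_size=8192",
            "foreign_keys=ON",
            "synchronous=NORMAL",
            "temp_store=MEMORY",
            "journal_mode=WAL",
            "wal_autocheckpoint=4096",
        ):
            conn.execute("PRAGMA %s;" % pragma)

        print(conn.execute("PRAGMA journal_mode;").fetchone())  # ('wal',)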

    Read the article

  • Get Python to raise MemoryError instead of eating all my disk space

    - by asmeurer
    If I run a Python program with a memory leak, I would normally expect the program to eventually die with MemoryError. But instead, what happens is that all the virtual memory is used until my disk runs out of space. I am running Mac OS X 10.8 on a retina MacBook Pro. My computer generally has between 10 GB and 20 GB free. Mac OS X is smart enough not to die completely when the disk runs out of space (rather, it gives me a dialog letting me force-quit my GUI programs). Is there a way to make Python just die when it runs out of real memory, or some reasonable amount of virtual memory? This is what happens on Linux, as far as I can tell. I guess Mac OS X is more generous than Linux with virtual memory (the fact that I have an SSD might be part of this; I don't know just how smart OS X is with this stuff). Maybe there's a way to tell the Mac OS X kernel to never use so much virtual memory that it leaves less than, say, 5 GB free on the hard drive?
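
    On Linux this can be done directly with the resource module: cap the process's address space, and allocations beyond the cap raise MemoryError instead of growing swap until the disk fills. Whether RLIMIT_AS is actually enforced on Mac OS X is a known soft spot, so treat this as a sketch to test there rather than a guaranteed fix:

        import resource

        def cap_memory(max_bytes):
            """Limit this process's address space; allocations beyond the
            cap fail with MemoryError instead of eating swap and disk."""
            soft, hard = resource.getrlimit(resource.RLIMIT_AS)
            resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))

        cap_memory(2 * 1024**3)    # e.g. 2 GB

        try:
            hog = []
            while True:
                hog.append(bytearray(100 * 1024 * 1024))   # leak 100 MB chunks
        except MemoryError:
            print("allocation failed at the cap, as desired")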

    Read the article

  • Tool to Save a Range of Disk Clusters to a File

    - by Synetech inc.
    Hi, yesterday I deleted a (fragmented) archive file, only to find that it did not extract correctly, so I was left stranded. Fortunately there was not much space free on the drive, so most of the space marked as free was from the now-deleted archive. I pulled up a disk editor and - painfully - managed to get a list of cluster ranges from the FAT that were marked as unused. My task then was to save these ranges of clusters to files so that I could examine them, try to determine which parts were from the archive, and recombine them to attempt to restore the deleted file. This turned out to be a huge pain in the butt because the disk editor did not have the ability to select a range of clusters, so I had to navigate to the start of each cluster and hold down Ctrl+Shift+PgDn until I reached the end of the range (which usually took forever!). I did a quick Google search to see if I could find a command-line tool (preferably with Windows and DOS versions) that would allow me to issue commands such as:

        SAVESECT -c 0xBEEF 0xCAFE FOO.BAR   ::save clusters 0xBEEF-0xCAFE to FOO.BAR
        SAVESECT -s 1111 9876 BAZ.BIN       ::save sectors 1111-9876 to BAZ.BIN

    Sadly my search came up empty. Any ideas? Thanks!
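
    I don't know of such a tool either, but the core operation is small enough to sketch. Python shown; the sector/cluster sizes and the data-area offset are illustrative assumptions - on FAT the byte offset of a cluster depends on where the data area starts, which a disk editor will report - and reading a raw volume needs administrator rights:

        SECTOR_SIZE = 512                  # assumed
        CLUSTER_SIZE = 8 * SECTOR_SIZE     # assumed: 8 sectors per cluster
        DATA_START = 0x10000               # assumed byte offset of cluster 2

        def save_clusters(device, first, last, outfile):
            """Dump clusters first..last (inclusive) from a raw device."""
            with open(device, "rb") as dev, open(outfile, "wb") as out:
                # FAT numbers data clusters starting at 2
                dev.seek(DATA_START + (first - 2) * CLUSTER_SIZE)
                remaining = (last - first + 1) * CLUSTER_SIZE
                while remaining > 0:
                    chunk = dev.read(min(remaining, 1 << 20))  # 1 MiB at a time
                    if not chunk:
                        break
                    out.write(chunk)
                    remaining -= len(chunk)

        # e.g. save_clusters(r"\\.\E:", 0xBEEF, 0xCAFE, "FOO.BAR")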

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >