Search Results

Search found 14653 results on 587 pages for 'disk cache'.

  • Producer / Consumer - I/O Disk

    - by Pedro Magalhaes
    Hi, I have a compressed file on disk that is partitioned into blocks. I read a block from disk, decompress it into memory, and then read the data. Is it possible to set up a producer/consumer, with one thread that reads compressed blocks from disk and puts them in a queue, and another thread that decompresses and reads the data? Would the performance be better? Thanks!
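
    A minimal sketch of the pattern being described, assuming (since the post names no language or format) zlib-compressed blocks stored with a 4-byte length prefix; the file name and block layout are placeholders:

        import queue
        import struct
        import threading
        import zlib

        SENTINEL = None  # tells the consumer the producer is finished

        def producer(path, q):
            """Read length-prefixed compressed blocks from disk and queue them."""
            with open(path, "rb") as f:
                while True:
                    header = f.read(4)
                    if len(header) < 4:
                        break
                    (size,) = struct.unpack("<I", header)
                    q.put(f.read(size))
            q.put(SENTINEL)

        def consumer(q, handle_data):
            """Decompress queued blocks and hand the data to a callback."""
            while True:
                block = q.get()
                if block is SENTINEL:
                    break
                handle_data(zlib.decompress(block))

        q = queue.Queue(maxsize=8)  # bounded, so reading cannot run far ahead of decompression
        reader = threading.Thread(target=producer, args=("blocks.dat", q))
        worker = threading.Thread(target=consumer, args=(q, lambda data: None))
        reader.start(); worker.start()
        reader.join(); worker.join()

    Whether this is faster depends on whether decompression (CPU) and reading (disk I/O) genuinely overlap; if either side completely dominates, the queue only adds overhead.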

  • Creating GPT partitions on EFI raw disk using diskpart

    - by kafka
    I've got a raw, blank GPT disk for use in a UEFI system. I need to create the partitions on it using diskpart. The only tutorial I've found so far is for diskpart.efi, which I believe is slightly different from the command-line diskpart: MS guide to GPT partitions with diskpart.efi. Also, the guide says to create an MSR of 32 MB, but for a disk >= 16 GB I know it needs to be 128 MB. I'm happy doing it with diskpart; I just want to be sure I understand the fundamentals. I'm planning on creating, in this order: an ESP partition, size 102 MB (create partition esp size=102); an MSR partition, size 128 MB (create partition msr size=128); a data partition taking the remaining space (approx 460 GB). Is this the correct thing to do, or is there anything I'm missing?
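
    For reference, a sketch of that layout as a command-line diskpart sequence, assuming the target is disk 0; the labels are placeholders and the whole thing wipes the disk:

        select disk 0
        clean
        convert gpt
        create partition efi size=102
        format quick fs=fat32 label="System"
        create partition msr size=128
        create partition primary
        format quick fs=ntfs label="Data"

    The same commands can be saved to a text file and run unattended with diskpart /s script.txt.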

  • Xen disk mapping problem under OpenSolaris

    - by Louis
    I have a system with two hard disks. I wanted to use the simplicity of ZFS for my file server, and I also need to run Linux, so I chose Xen virtualization, which is supported by both systems. My GRUB is well configured and I can boot both systems. What I would like is to run both, with Solaris as dom0 and the Debian installed on the 2nd hard disk as a virtual machine. My problem is that I want to use the partitions of my 1st hard disk (sda1 under Linux) and it does not work; I didn't find my use case on the web. The OpenSolaris device name of this partition is /dev/rdsk/c7d0p1, but when I use disk = [ 'phy:rdsk/c7d0p1,sda1,w' ] as a disk mapping in my Xen configuration file, I get the error: Error: Device 2049 (vbd) could not be connected. error: "rdsk/c7d0p1" is not a valid block device. I am lost.
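
    A guess at the direction of a fix, not a confirmed answer: the phy: backend wants a block device given by its full path, so the mapping would point at the /dev/dsk node rather than the raw /dev/rdsk node, roughly:

        # hypothetical mapping in the domU configuration file:
        # full path to the *block* device node for the same partition
        disk = [ 'phy:/dev/dsk/c7d0p1,sda1,w' ]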

  • Resetting Mac OS X administrator password without a disk

    - by Simon Sheehan
    I'm currently in possession of an eMac G4 running OS X 10.4. I went to install some software and found I didn't actually have the password. No one seems to know it, and it's not written down anywhere. These were purchased by the school many years ago and are not really maintained, since people mostly just used GarageBand. I went to look for the restore disk, and it's nowhere to be found. How can I reset the password without a disk?
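
    One commonly described disk-free route on 10.4 is single-user mode; a hedged sketch only (hold Cmd-S during boot, then at the root prompt), where someuser is a placeholder for the short name of the admin account:

        /sbin/fsck -fy
        /sbin/mount -uw /
        sh /etc/rc            # bring up enough of the system for passwd to work
        passwd someuser       # set a new password, then reboot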

  • HP ProBook 4540s CD/DVD Drive cannot Read Disks

    - by DavidB
    I seem to have a problem with the CD/DVD drive on my HP ProBook 4540s laptop: I cannot get it to read any disks. Normally I would say that this is a hardware issue, but whenever I put a disk that could previously be read into the drive, it starts to make noise as if it is trying to read the disk but cannot, and AnyDVD HD seems to be able to retrieve disk information with some struggle. Any ideas on what the problem could be?

  • Strip my Windows NTFS disk of all ACLs

    - by Alain Pannetier
    When you purchase a Windows PC nowadays, you don't actually "own" the whole disk... There are so many ACLs on each folder that there are portions of it you can actually access only through a complex sequence of actions requiring skills well beyond the average PC user: you have to drill down to deeply buried dialog boxes accessible through concealed buttons, and you have to understand at which level of the hierarchy you have to take ownership, remove ACLs, etc. Yet when you think of it, that's your PC; that's what the "P" of PC originally stood for. So I'm toying with the idea of just stripping the disk I just purchased of all ACLs and letting standard file protections do the basic protection work, just like previous-century Windows used to do (before I chmod -R 777 ;-). Has anybody done that already and nevertheless survived in reasonably good shape for a reasonable amount of time? Any technical advice on how to do it? A PowerShell script? A basic script using icacls?
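
    A hedged sketch of the usual takeown/icacls pair for one tree, with D:\ as a placeholder target; running this against C:\ on a live system is very likely to break Windows, so it is shown for illustration only:

        rem take ownership of the whole tree, answering Yes for folders that deny access
        takeown /F D:\ /R /D Y
        rem reset ACLs on everything to the inherited defaults, continue on errors, quietly
        icacls D:\ /reset /T /C /Q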

  • How to configure nginx to serve static contents from RAM?

    - by Vijayendra Tripathi
    I want to set up nginx as my web server. I want to have image files cached in memory (RAM) rather than read from disk. I am serving a small page and want a few images always served from RAM. I don't wish to use Varnish (or any other such tool) for this, as I believe nginx has the capability to cache contents in RAM. I am not sure how I should configure nginx for this; I did try a few combinations, but they didn't work and nginx uses the disk all the time to get images. For example, when I ran Apache Bench with the following command: ab -c 500 -n 1000 http://localhost/banner.jpg I got the following error: socket: Too many open files (24). I guess this means nginx is trying to open too many files simultaneously from the disk and the OS is not allowing this operation. Can anyone please suggest a correct configuration? Thanks for considering this message.
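
    For what it's worth, nginx does not hold file contents in a RAM cache of its own; it leans on the OS page cache (or you can point a location at a tmpfs mount). What it can cache are open file descriptors and file metadata, and the descriptor limit can be raised. A sketch of the relevant directives, with placeholder values:

        # main context: raise the per-worker file descriptor limit
        worker_rlimit_nofile 10240;

        http {
            # keep handles, sizes and mtimes of frequently hit files cached
            open_file_cache          max=1000 inactive=30s;
            open_file_cache_valid    60s;
            open_file_cache_min_uses 2;
            open_file_cache_errors   on;
        }

    The "socket: Too many open files (24)" message, incidentally, is produced by ab itself when it exceeds the local descriptor limit (see ulimit -n), so it may say nothing about nginx at all.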

  • When and Where does Wubi mount its virtual disks

    - by TuxPotato
    My use of the wubi-new-virtual-disk script made Wubi start mounting this new virtual home disk over my /home folder. After the script failed, I am left with Wubi constantly remounting an empty virtual disk over my /home folder. I followed the instructions on the Ubuntu website to revert the change, but the mounting continues. Where did Wubi put the mount operation, and how can I remove it? Thanks in advance!
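
    A guess, not a confirmed answer: the extra virtual disks that the Wubi how-tos create are usually loop-mounted from /etc/fstab inside the Wubi install, so the entry to look for (the file name here is a placeholder) would be something like:

        # in /etc/fstab -- comment this line out or delete it, then reboot
        /host/ubuntu/disks/home.disk  /home  ext4  loop  0  0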

  • Run Wave Trusted Drive Manager from a bootable CD, recover crashed encrypted SSD?

    - by TigerInCanada
    Is there a way to run Wave Trusted Drive Manager from a live CD to access a non-bootable SSD with full disk encryption? http://www.wave.com/products/tdm.asp The crashed disk is a Samsung SSD PB22-JS3, 128 GB. It has bad blocks at 128-block intervals. If the SSD password could be unset, would sending the unit out for disaster recovery be possible? What might cause a nearly new SSD to crash in this way, and what is the probability of it happening again? We have other units in service, and I could do without every laptop disk in the company crashing...

  • Linux kernel option to set SATA disk to UDMA/133, 1.5 Gbps

    - by John Doe
    Hi, I am trying to speed up the boot time of my Linux server box, which uses removable HDD racks. The current boot time is around 2 minutes, but if I connect the HDDs directly to the mainboard it is about 2 seconds. The problem is that the kernel's AHCI implementation hits a timeout of around 30 seconds for each disk during boot, which originates from the HDD rack; after the timeout the kernel prints that the disk is limited in speed to 1.5 Gbps and UDMA/133 is used. So my question is: how can I set this in GRUB as a boot option, so the kernel doesn't have to wait for a timeout and simply hard-codes the speed limit for the disks? I read about a few options like pci=nomsi and such, which don't work; that's why I'm asking specifically about limiting the disks during boot. Thanks.
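
    A hedged pointer rather than a verified fix: libata exposes a libata.force= parameter (see the kernel's kernel-parameters.txt) that can pin link speed and transfer mode globally or per port; the exact value names should be checked against the running kernel. Roughly:

        # appended to the kernel line in GRUB's menu.lst / grub.cfg;
        # prefixing a value with "1:" would restrict it to port ata1 only
        libata.force=1.5Gbps,udma/133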

  • Windows 2008 additional disk going offline with reboots on Amazon EC2

    - by Ernest Mueller
    OK, so I took the stock Windows 2008 64-bit Amazon AMI and wanted to add a D: drive for page file space and crash dumps. I launched the instance with a second EBS volume attached as xvdf, went into Disk Management, set it online, and added the page file and crash dump settings, and all that works. But when I reboot, the box comes back up with that second drive as "Offline." How do I get that disk to automatically come online on reboot (or, most notably, when I turn this into an AMI and launch more instances off it - I've tried that too, and it's the same deal with the D: drive)?
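
    One commonly blamed culprit for secondary disks staying offline is the Windows SAN policy; a sketch of inspecting and changing it from an elevated diskpart session (disk 1 is assumed to be the EBS volume):

        rem show the current policy, then bring newly discovered disks online by default
        san
        san policy=OnlineAll
        select disk 1
        attributes disk clear readonly
        online disk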

  • Complete Apple Powerbook G4 format without disk

    - by Sam
    I have gone through many sites to look for the exact answer, but I have failed. What I want: to completely restore my PowerBook G4 to factory condition, as if brand new, without any DVD/CD or disk/disc. I am not worried about the ethical or unethical way; I just need to reset the entire PowerBook G4 to factory settings WITHOUT ANY DVD/CD/DISC/DISK, so that it's like a brand-new one. I am ready to do anything, but please don't advise me to buy Mac OS X 10.5 Leopard or to download it from a torrent or the like.

  • Backup and rescue disk creation

    - by Polppan
    I am in the process of backing up my PC using "Macrium backup and restore". I have successfully backed up my PC (both the C and D drives) to an external hard disk. I have a question regarding creating rescue disks: I am following the steps mentioned in this document, and if I create an ISO file based on the document, how does it relate to the backup I have taken to my external disk? I see no relation between creating rescue disks and the backup data, or am I missing something obvious? Any insight will be highly appreciated...

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200 GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 array (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the I/O speed is about 200 MB/s, stable for the first 18 GB, but then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100 MB/s, but just for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?
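
    One pattern that fits these symptoms, offered as a guess: the first ~18 GB lands in the page cache at network speed, then the dirty-page limits are reached and every write stalls behind the slower, parity-bound RAID 5 writeback, which also starves stat() calls. A sketch of inspecting and tightening the writeback knobs so flushing starts earlier and in smaller bursts; the values are only illustrative:

        # show the current thresholds (percent of RAM that may be dirty)
        sysctl vm.dirty_background_ratio vm.dirty_ratio
        # start background writeback sooner, and block writers at a lower ceiling
        sysctl -w vm.dirty_background_ratio=1
        sysctl -w vm.dirty_ratio=5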

  • script Disk Management configuration

    - by Joseph
    I have 10 workstations with large monitors that have USB slots and several card readers built in. The card readers cannot be disabled and map to drive letters when I image the computers. I go into Disk Management, delete the drive-letter mappings, and add mappings to a single folder in C:\ with a folder for each slot. I have to do this because scripts that run are expecting specific drive letters to map to network resources. Is there a way to script the deleting and adding of drive mappings instead of having to use the Disk Management GUI manually on each workstation? The workstations are running XP Professional.
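
    A sketch of doing the same thing non-interactively with a diskpart script (run with diskpart /s); the volume numbers come from list volume, and the mount-point folders, which must already exist and be empty, are placeholders:

        rem remap-readers.txt
        select volume 4
        remove letter=E
        assign mount=C:\CardReaders\Slot1
        select volume 5
        remove letter=F
        assign mount=C:\CardReaders\Slot2

    On XP, mountvol can do the same per volume (mountvol E:\ /D to drop a letter, then mountvol C:\CardReaders\Slot1 \\?\Volume{GUID}\ to mount it at a folder).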

  • Prevent Mac OS X from prompting disk initialization/formatting

    - by Just-A-User.A-Superuser
    I have a TrueCrypt partition. When I insert it in Mac OS X, it always prompts me to initialize the hard disk. Is there a way to prevent Mac OS X from detecting an uninitialized hard disk? [UPDATE] By the way, as TrueCrypt suggested while I'm in Windows, I had to create partitions so the OS won't detect the hard drive as uninitialized. Windows respects that the drive already has contents by the mere fact that it has partitions, while Mac OS X thinks that it is still uninitialized. I think Mac OS X is trying to be smart by detecting whether each partition has a valid filesystem ID/marker.

  • Windows 2000 freezing during large disk write

    - by robert
    We have a Windows 2000 SP4 server which freezes up for about a minute while its web app does a ~500 MB write operation. I can see the web app start the I/O activity (through Process Explorer), then the RDP session becomes unresponsive: you can click on windows and buttons but nothing happens. When the disk write finally finishes, the session 'catches up' on all the mouse clicks you did during the freeze in a mad flurry of window activity, and the server returns to normal. During the freeze the web app stops as well. The same behaviour happens on the console of the server (so I know it's not a network thing). Nothing appears in the event logs; it's like nothing happened. I have upgraded all the HP hardware drivers to the latest ProLiant support pack and also run the HP hardware diagnostics, which found nothing wrong. What would cause a disk write to lock up the rest of the OS?

  • Gzip compression using Varnish Cache

    - by Ali Raza
    I'm trying to provide gzip compression using Varnish Cache, but when I set Content-Encoding to gzip using the configuration below for Varnish (default.vcl), the browser fails to download the content for which I set Content-Encoding to gzip. Varnish configuration file:

        backend default  { .host = "127.0.0.1"; .port = "9000"; }
        backend socketIO { .host = "127.0.0.1"; .port = "8083"; }

        acl purge { "127.0.0.1"; "192.168.15.0"/24; }

        sub vcl_fetch {
            /* If the request is for pictures, javascript, css, etc */
            if (req.url ~ "^/public/" || req.url ~ "\.js") {
                unset req.http.cookie;
                set beresp.http.Content-Encoding = "gzip";
                set beresp.ttl = 86400s;
                set beresp.http.Cache-Control = "public, max-age=3600";
                /* set the expires time to response header */
                set beresp.http.expires = beresp.ttl;
                /* marker for vcl_deliver to reset Age: */
                set beresp.http.magicmarker = "1";
            }
            if (!beresp.cacheable) {
                return (pass);
            }
            return (deliver);
        }

        sub vcl_deliver {
            if (resp.http.magicmarker) {
                /* Remove the magic marker */
                unset resp.http.magicmarker;
                /* By definition we have a fresh object */
                set resp.http.age = "0";
            }
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
            return (deliver);
        }

        sub vcl_recv {
            if (req.http.x-forwarded-for) {
                set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip;
            } else {
                set req.http.X-Forwarded-For = client.ip;
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                /* Non-RFC2616 or CONNECT which is weird. */
                return (pipe);
            }
            # Pass requests that are not GET or HEAD
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            # pipe websocket connections directly to Node.js
            if (req.http.Upgrade ~ "(?i)websocket") {
                set req.backend = socketIO;
                return (pipe);
            }
            # Properly handle different encoding types
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|js|css)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } elsif (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } elsif (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            # allow PURGE from localhost and 192.168.15...
            if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                    error 405 "Not allowed.";
                }
                return (lookup);
            }
            return (lookup);
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                purge_url(req.url);
                error 200 "Purged.";
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                purge_url(req.url);
                error 200 "Purged.";
            }
        }

        sub vcl_pipe {
            if (req.http.upgrade) {
                set bereq.http.upgrade = req.http.upgrade;
            }
        }

    Response header:

        Cache-Control: public, max-age=3600
        Connection: keep-alive
        Content-Encoding: gzip
        Content-Length: 11520
        Content-Type: application/javascript
        Date: Fri, 06 Apr 2012 04:53:41 GMT
        ETag: "1330493670000--987570445"
        Last-Modified: Wed, 29 Feb 2012 05:34:30 GMT
        Server: Play! Framework;1.2.x-localbuild;dev
        Via: 1.1 varnish
        X-Varnish: 118464579 118464571
        X-Varnish-Cache: HIT
        age: 0
        expires: 86400.000

    Any suggestion on how to fix this and how to provide gzip compression using Varnish?
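
    For context on the failure mode: setting beresp.http.Content-Encoding by hand only labels the response as gzipped; Varnish 2.x never compresses anything itself, so clients receive uncompressed bytes advertised as gzip and abort the download. A hedged sketch of the usual alternatives, assuming either an upgrade to Varnish 3.0+ (which can gzip on its own) or compression at the backend:

        # Varnish 3.0+ only: have Varnish compress cacheable static responses itself
        sub vcl_fetch {
            if (req.url ~ "^/public/" || req.url ~ "\.js$") {
                set beresp.do_gzip = true;   # stored and served gzipped, headers set correctly
            }
        }
        # On Varnish 2.x: drop the manual Content-Encoding line entirely and enable
        # gzip in the backend (the Play application or a front-end web server) instead.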

  • Reclaiming deleted disk space from file vault

    - by cbrulak
    I have my main user account encrypted with FileVault. After deleting some data (about 20 GB), the free space on the hard drive hasn't changed (yes, I emptied the trash, confirmed that the files are actually gone, etc.). I also tried "erasing free space" in the Disk Utility app. I logged off and rebooted, and so far that space hasn't been reclaimed. I'm assuming FileVault or Disk Utility has some method of reclaiming it, but I can't find it. Any ideas?

  • Run disk error check on NTFS file?

    - by paulius_l
    I have a feeling that my system hard drive is dying, and a benchmark kind of reinforces it. Here is the benchmark of my system hard drive during low system activity: And here is the benchmark of the backup drive: Furthermore, there are some files which I just can't touch, because I get CRC errors and the hard drive activity spikes to 100% with operating speeds of less than 1 MB/s while working with such files. I haven't yet tried swapping the SATA cable, as I have read this might cause the problems. Anyway, I would like to run some tests on the specific clusters where those files I am interested in are stored. I don't want to do a full chkdsk because it takes a very long time. I would like to find either a utility that executes the disk check directly on the clusters where a file belongs, or a couple of utilities where one tells me the cluster locations and another can check just those locations. How do I check, and possibly fix, disk errors where the files I am interested in are stored? Edit: S.M.A.R.T. info:
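
    Two hedged pointers, not a complete answer: fsutil can at least map a suspect cluster back to the file that owns it, and on NTFS chkdsk has switches that skip parts of a full check to shorten the run (the cluster number below is a placeholder):

        rem which file owns cluster 123456 on C:?
        fsutil volume querycluster C: 123456
        rem shorter NTFS check: skip index-entry (/i) and cycle (/c) verification
        chkdsk C: /f /i /c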

  • Hard Drive Bad Sector marking utility

    - by Kevin Boyd
    I already have Windows XP. While installing Ubuntu (dual boot), the disk drive just got stuck at one place and doesn't seem to move ahead. Is there a bad-sector marking utility that just marks these sectors so that the disk doesn't seek them later? I tried running Seagate SeaTools on the drive, but both the short test and the long test fail even before they start, and even chkdsk /f /r doesn't seem to work, as the system locks up at stage four.
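
    A sketch of the Linux-side equivalents, assuming the drive shows up as /dev/sda in the Ubuntu live environment (the partition number is a placeholder); note these only hide bad sectors from the filesystem, they don't repair a failing drive:

        # read-only scan of the whole drive, printing progress and any unreadable blocks
        sudo badblocks -sv /dev/sda
        # when formatting the Ubuntu partition, let mkfs test for and blacklist bad blocks
        sudo mkfs.ext4 -c /dev/sda5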
