Search Results

Search found 14653 results on 587 pages for 'disk cache'.

Page 39/587 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • What's the best way to version CSS and JS URLs?

    - by David Eyk
    As per Yahoo's much-ballyhooed Best Practices for Speeding Up Your Site, we serve static content from a CDN with far-future cache expiration headers. Of course, we occasionally need to update these "static" files, so we currently add an infix version to the filename (based on the SHA1 sum of the file contents). Thus styles.min.css becomes styles.min.abcd1234.css. However, managing the versioned files can become tedious, and I was wondering if a GET argument notation might be cleaner and better: styles.min.css?v=abcd1234. Which do you use, and why? Are there browser- or proxy/cache-related considerations I should be aware of?
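
    For illustration, here is a minimal Python sketch of the content-hash step described above; the file paths, the helper names, and the choice to truncate the SHA1 digest are assumptions for the example, not details from the question.

        import hashlib
        import shutil
        from pathlib import Path

        def versioned_copy(path, digest_len=8):
            """Copy e.g. styles.min.css to styles.min.<hash>.css, where <hash>
            comes from the file contents, and return the new path."""
            src = Path(path)
            digest = hashlib.sha1(src.read_bytes()).hexdigest()[:digest_len]
            dst = src.with_name(f"{src.stem}.{digest}{src.suffix}")
            shutil.copyfile(src, dst)
            return dst

        def versioned_url(path, digest_len=8):
            """The query-string alternative keeps the filename stable."""
            digest = hashlib.sha1(Path(path).read_bytes()).hexdigest()[:digest_len]
            return f"{Path(path).name}?v={digest}"

    Either form changes whenever the contents change; one frequently cited consideration is that some proxies and CDNs are conservative about caching URLs that carry query strings, which is the usual argument for baking the version into the filename.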

    Read the article

  • Seriousness of a "Smart" disk error. How long will it last?

    - by Workshop Alex
    I have a 1 TB data disk, and the BIOS and Windows are reporting a SMART error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about 6 months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second 1 TB disk which I could use to make a full backup. The most important data on this disk is safe, but there's a lot of work data which can be regenerated, though that would cost a lot of time. So I ordered a 1 TB USB disk, which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash. But will it live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week, or could it die any moment?

    Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB disk. It survived that. I then received a new disk, since the old one was still under warranty, and replaced the hard disk. Then I had to spend most of a day again putting the backup back. I need to send back the faulty disk and now have an additional external disk, which could always be practical. :-) The SMART error did not cause any failures on the original disk. I won't advise ignoring these warnings, but the disk still had enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have on one disk together.)

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run nginx as a webserver. Now the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files, so that php-fpm / MySQL don't get hit that much. Am I correct, or am I missing something here?

    UPDATE: I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, it can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks on different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ Nginx is actually faster at serving your static content, so it makes more sense to just let it pass. Nginx works great with static files. Apart from that, most of the time static content is not even on the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3 or something like that. I think the Varnish cache is the last place where you want to have your static content stored.

    Read the article

  • Graphical disk surface check tool?

    - by sbergeron
    I need a program that can scan my hard drive for read and write errors so I can partition around them. I REALLY don't do well with numbers, but if there is something that shows output like the graphical display in GParted, that would be perfect. I know a lot of people would recommend replacing the disk, but right now I can't, as I NEED this laptop for school and can't wait for a hard drive to arrive. (I have ordered one, yes, but I don't expect it to arrive for another couple of weeks, as I only found out afterwards that they still have to manufacture it.)

    Read the article

  • zfs raidz1-0 corrupted data, all disks are online

    - by gkcr2d2
    My zfs pool "datas" is unavailable, but all my disks are online. Do you know how the problem can be fixed? Thank you for your help.

        root@oxygen:~# zpool status
          pool: datas
         state: UNAVAIL
          scan: none requested
        config:

            NAME                                    STATE    READ WRITE CKSUM
            datas                                   UNAVAIL     0     0     0  insufficient replicas
              raidz1-0                              UNAVAIL     0     0     0  corrupted data
                scsi-SATA_ST3000DM001-9YN_W1F0LBVX  ONLINE      0     0     0
                scsi-SATA_ST3000DM001-9YN_W1F0KYX9  ONLINE      0     0     0
                scsi-SATA_ST3000DM001-9YN_W1F0LCC8  ONLINE      0     0     0
                scsi-SATA_ST3000DM001-9YN_W1F0LBXZ  ONLINE      0     0     0

    Read the article

  • Nginx + PHP-FPM ignores no-cache headers

    - by Eric Winchell
    I'm using the following headers on a PHP page:

        // Prevent page caching.
        header('Expires: Tue, 20 Oct 1981 05:00:00 GMT');
        header('Cache-Control: no-store, no-cache, must-revalidate');
        header('Cache-Control: post-check=0, pre-check=0', FALSE);
        header('Pragma: no-cache');

    I'm also using rand=999999999 (with a real random number) in the URLs. But pages are still being cached: a reload works, but the first load is served from cache. Anyone know where I can change this?

    Read the article

  • I get an "serious errors while checking the disk drives for /boot" error while booting

    - by Vaios Argiropoulos
    I have just set up my new system by creating separate partitions on a whole hard disk (/boot, /home, swap and / (root)). I saw this article, and now I am getting those errors, and the system gives me some options ('I' to ignore, 'S' to skip mounting, or 'M' for manual recovery). I am forced to choose skip mounting because I don't know what manual recovery is. Despite the error, I think the system boots okay, because I can use it, but the error is very annoying.

    Read the article

  • Image caching when rendering the same images on different pages

    - by HelpNeeder
    I'm told to think about caching the images that will be displayed on the page. The images will be repeated throughout the website on different pages, and I'm told to figure out the best way to cache them. There could be a few to a dozen images on a single page. Here are a few questions: Will browser caching work to display the same images across different web pages? Should I instead store images in stringified form in memory, using JavaScript arrays? Or store them on the hard drive using localStorage? What would be the easiest yet best option? Are there any other alternatives? How do I force caching? Any other information would be greatly appreciated...
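
    As a point of reference, the simplest of the options mentioned above, plain browser caching, only requires that the images be served with long-lived caching headers; the browser cache is keyed by URL, so an image URL reused across pages is fetched once. A minimal Python sketch of such a server (the port and max-age value are assumptions, not from the question):

        from http.server import HTTPServer, SimpleHTTPRequestHandler

        class CachingHandler(SimpleHTTPRequestHandler):
            """Serve files from the current directory, marking them cacheable."""
            def end_headers(self):
                # Tell browsers (and intermediate proxies) to reuse the response
                # for up to a year instead of re-requesting it on every page.
                self.send_header("Cache-Control", "public, max-age=31536000")
                super().end_headers()

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8000), CachingHandler).serve_forever()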

    Read the article

  • Hook Response.Cache to memcache

    - by dvr
    Has anyone done this before? I have a 32-bit Win 2003 server running 2.0, and having read the MS engineers' blog about the min(60%, 1800 MB) cache limit, our site (ASP.NET 2.0 / 3.5) is caching a lot. It throws OutOfMemory exceptions when the worker process is around 1.3 GB (unfortunately it is the 2.0 apps), and I would like to push a lot over to memcache, but I'm worried because at the moment the site uses Response.Cache efficiently as is (though memory is an issue). I want to move most items over to memcache and have concerns about a) how to do this (an implementation of Response.Cache that reads/writes from memcache) and b) what performance will be like. Before I commit to this and possibly spend a few days running tests, I would like to hear from you if this has been done already and get some feedback. (And please don't tell me to buy an x64 machine – I have already requested this!) By the way, I ran a test requesting a single image 1000 times, and Response.Cache was over 50% quicker than using application cache. Does Response.Cache bypass the page lifecycle?
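
    For what it's worth, the usual shape of such a move is a cache-aside wrapper: look the item up in the external cache and only rebuild (and store) it on a miss. A rough Python sketch of the pattern, with a plain dict standing in for a memcached client; the function and key names are illustrative, not from the question.

        import time

        # Stand-in for a memcached client; a real client exposes roughly the
        # same get/set-with-expiry operations over the network.
        _store = {}

        def cache_get_or_set(key, builder, ttl_seconds=300):
            """Return the cached value for `key`, rebuilding it via `builder()`
            on a miss or after the entry has expired."""
            entry = _store.get(key)
            now = time.time()
            if entry is not None and entry[1] > now:
                return entry[0]                       # hit
            value = builder()                         # miss: rebuild
            _store[key] = (value, now + ttl_seconds)
            return value

        # Usage: replace a direct Response.Cache-style lookup with something like
        # cache_get_or_set("product-page-42", render_product_page, ttl_seconds=600)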

    Read the article

  • Simple VM That allows booting from folder or disk

    - by None
    I was wondering if there is a very simple, free virtual machine that would allow me to boot from a folder or disk image without any risk of damaging my hard disk. I am using a MacBook and am looking into operating system programming. I found a tutorial on the internet that looked promising (http://www.viralpatel.net/taj/tutorial/hello_world_bootloader.php). I want to try this, but using a VM instead of actually booting from a disk. If I made a folder or disk image containing the boot.bin file and wanted to try the OS I made (booting from the folder or disk image, not a physical disk), is there a VM that would let me do it? I have no previous experience with virtual machines. I also want to be sure my hard disk would not be damaged.

    Read the article

  • Cache Wrapper with expressions

    - by Fujiy
    I don't know if this is possible. I want a class to encapsulate all caching on my site, and I'm thinking about the best way to do this while avoiding conflicts between keys. My first idea is something like this:

        public static TResult Cachear<TResult>(this Cache cache, Expression<Func<TResult>> funcao)
        {
            string chave = funcao.ToString();
            if (!(cache[chave] is TResult))
            {
                cache[chave] = funcao.Compile()();
            }
            return (TResult)cache[chave];
        }

    Is this the best way? Thanks.
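
    The idea above, deriving the cache key from the expression itself so that distinct computations cannot collide, has a straightforward analogue in other languages: build the key from the function's qualified name plus its arguments. A hedged Python sketch of the same pattern (the names are illustrative):

        import functools

        _cache = {}

        def cached(func):
            """Memoize `func`, keying entries by its qualified name and arguments
            so different functions (or different arguments) never share a key."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                # Arguments must be hashable for this simple key scheme.
                key = (func.__module__, func.__qualname__, args,
                       tuple(sorted(kwargs.items())))
                if key not in _cache:
                    _cache[key] = func(*args, **kwargs)
                return _cache[key]
            return wrapper

        @cached
        def expensive_report(year):
            # ... imagine a slow database query here ...
            return f"report for {year}"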

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, “sync” option, Direct Memory Access

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are: (1) I/O size: the database engine's and the disk's native sizes should either match, or the database's native I/O size should be a multiple of the disk's native I/O size. (2) Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it. (3) When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it. I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.
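
    On the third point, durability is usually enforced at the application level by flushing explicitly rather than trusting default buffering: write, then fsync the file and, for new files, the containing directory. A small Python sketch of that discipline (the paths and helper name are illustrative):

        import os

        def durable_write(path, data: bytes):
            """Write `data` to `path` and return only once the kernel reports it
            has been flushed to the device, not merely to the page cache."""
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
            try:
                os.write(fd, data)
                os.fsync(fd)                  # flush file contents and metadata
            finally:
                os.close(fd)
            # For a newly created file, also fsync the parent directory so the
            # directory entry itself survives a crash.
            dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
            try:
                os.fsync(dir_fd)
            finally:
                os.close(dir_fd)

    Note that fsync can only be as honest as the drive: if the disk's write cache acknowledges data it has not persisted, which is exactly the third concern above, no application-level call can compensate; that has to be addressed at the drive or controller level.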

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an Elastic Load Balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. When we point the distribution at a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie, and the subsequent header that gets added by it, could be an issue?

        Cache-Control: no-cache="set-cookie"   // Added by load balancer

    Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close
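
    One quick way to see whether the load balancer is injecting the extra Cache-Control value is to fetch the asset from the origin and from the distribution and compare the cache-related headers. A small Python sketch (the URL is the one quoted in the question, used purely as a placeholder):

        import urllib.request

        def cache_headers(url):
            """Fetch `url` and return the headers that influence CDN caching."""
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req) as resp:
                names = ("Cache-Control", "Set-Cookie", "Vary", "X-Cache", "Age")
                return {n: resp.headers.get_all(n)
                        for n in names if resp.headers.get_all(n)}

        print(cache_headers("http://static.quick-cdn.com/css/9850999.css"))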

    Read the article

  • Empty $upstream_http_location variable if response was cached

    - by Ivaldi
    I would like to cache the response of a redirect (cache the request to some site which returns a redirect, and also cache the second request which returns the actual content). So far my config looks like this:

        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_client_abort on;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

        location @redir {
            resolver 8.8.8.8;
            # we need to assign $upstream_http_location to another var in order to use it with proxy_pass
            set $target $upstream_http_location;
            proxy_pass $target;
            proxy_cache predirects;
            proxy_cache_key $upstream_http_location;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

    It works for the first request, or without the 30x codes in proxy_cache_valid in the /proxy part, but $target and $upstream_http_location are empty if the response was cached. Is there a nice solution to cache both requests? Thanks!

    Read the article

  • How to Use Offline Files in Windows to Cache Your Networked Files Offline

    - by Taylor Gibb
    The problem with storing all your files on a file server or networked machine is: how do you access them once you leave the network? Instead of using a VPN or Dropbox, you can use the Offline Files feature built into Windows. Note: you should probably not use this guide to make your 2-terabyte movie collection available offline. While it may work, it is not recommended, because the Offline Files feature isn't made for storing massive amounts of data offline.

    Read the article

  • Why does 12.04 upgrade abort with out of space error when I have lots of it?

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" opens. At the top it says:

        Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)

        Folder   Usage    Size
        /        100 %    12.5 GB
        usr      44.8 %   5.6 GB
        home     30.3 %   3.8 GB
        lib      13.0 %   1.6 GB
        var      9.1 %    1.1 GB
        boot     2.5 %    309.5 MB

    and a lot of small contributors like etc, opt, sbin and bin. I do not really understand this problem, since the analyser says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:

        Filesystem     Inodes  IUsed   IFree IUse% Mounted on
        /dev/sda7      610800 576874   33926   95% /
        udev           213451    563  212888    1% /dev
        tmpfs          218524    486  218038    1% /run
        none           218524      3  218521    1% /run/lock
        none           218524      7  218517    1% /run/shm
        /dev/sda8     2264752  16371 2248381    1% /home

    The output of df -h:

        Filesystem               Size  Used Avail Use% Mounted on
        /dev/sda7                9,3G  7,8G  1,1G  88% /
        udev                     993M  4,0K  993M   1% /dev
        tmpfs                    401M  884K  400M   1% /run
        none                     5,0M     0  5,0M   0% /run/lock
        none                    1003M  152K 1002M   1% /run/shm
        /dev/sda8                 35G  4,0G   29G  13% /home
        /dev/sda2                101G   64G   37G  64% /media/A2C8E28BC8E25CD3

    Running sudo fdisk -l gives:

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000080

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1               63      96389      48163+ de  Dell Utility
        /dev/sda2  *         98304  210434488  105168092+  7  HPFS/NTFS/exFAT
        /dev/sda3        210436094  312576704   51070305+  f  W95 Ext'd (LBA)
        /dev/sda5        306279288  312576704    3148708+ dd  Unknown
        /dev/sda6        210436096  214341631    1952768  82  Linux swap / Solaris
        /dev/sda7        214343680  233873407    9764864  83  Linux
        /dev/sda8        233875456  306278399   36201472  83  Linux

        Partition table entries are not in disk order

    Read the article

  • Microsoft gives you your cache back

    - by Dave Ballantyne
    The system works, and it's called Microsoft Connect. Who would have thought it? :) Following on from my previous blog post, MicroSoft – Follow best practices, the follow-up on the Connect item stated that changes had been made in 2008. I genuinely thought that a change would take an age to trickle through to the customer. But after firing up 2008 R2 RTM and examining the SQL Agent traffic with Profiler, where before I would see non-parameterized SQL, I now see RPC calls. Excellent, I get my cache back.

    Read the article

  • KVM guest disk performance

    - by Alex
    My KVM guest does at most 200 MB/s, although the host easily does 700 MB/s (RAID 0 with 4 SSDs). Configuration: file-based storage (raw), cache=none. Host: 24 cores, 96 GB RAM, Ubuntu 12.04.1 LTS and virt-manager. I suspect the CPU is the bottleneck (one core spikes during hdparm). Has anyone experienced the same, or does anyone have an explanation? Edit: one more piece of info: the guest is the same as the host (Ubuntu 12). The same poor disk performance was observed with Windows 2008 R2 and SUSE Enterprise Linux (9 or 10, I think). At most one guest running.

    Read the article

  • Read only file system error on ubuntu after partitioning

    - by Ranjith R
    I am not sure if I am the root cause of this problem, but this is what I did: I wanted the latest Ubuntu and the latest Linux Mint together on my ThinkPad laptop. Windows 7 was already there, and I already had Mint. So I put in the USB stick with the Ubuntu image and started installing Ubuntu, choosing to install side by side. It was taking a long time to finish defragmenting and partitioning, I became a little impatient, and I pressed the skip button. After skipping, I realized that the partitioning was complete and went ahead with installing Ubuntu. Now the Linux Mint OS reports the file system as read-only at least once every day, and I have to restart and tell the OS to fix errors on the hard disk. After I press the F key, the system fixes the issues, restarts, and all is well again. Is there some way to fix the issue permanently? I think reinstalling would solve it, but I can't do that, as I have a lot of data and would have to reinstall and configure a lot of software that I use daily. I checked the SMART status in Disk Utility and the hard disk seems to be fine. I also checked both partitions for errors with Disk Utility, and the report says they are fine. Is there something I can do before I reinstall?

    Read the article

  • .bash_history and .cache

    - by John Isaacks
    I have a user whose home directory is a Mercurial repository. Mercurial notified me that there were 2 new unversioned files in my repository: .bash_history and .cache/motd.legal-displayed. I assume .bash_history is the history of bash commands for my user; I have no idea what the other is. I don't want these files to be versioned by Mercurial. Are they safe to just delete, or will they come back, or will deleting them mess something up? Can they be moved somewhere else? Or do I have to add them to my .hgignore file?
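
    If the .hgignore route is taken, the entries would look roughly like the sketch below. This is only a sketch assuming glob-style patterns are switched on; the exact patterns may need adjusting to the repository layout.

        syntax: glob
        # files the shell and login machinery create in the home directory
        .bash_history
        .cache/*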

    Read the article

  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert on a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that it rendered my computer unusable while it was running (for about 15 minutes). So is it possible to throttle the command by limiting the resources (processor and memory) available to it, directly from the command line? This can only work if I can add something to the same line before pressing Enter, because once I start the process the computer slows down so much that it is impossible, for example, to switch to System Monitor and reduce the priority.

    Edit: top and iotop results. I managed to run top and sudo iotop > iotop.txt while doing one of these convert operations (the iotop.txt file produced is difficult to read). Results of top:

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        14275 username 20   0 4043m 3.0g 1448 D  7.0 80.4  0:16.45 convert

    Results of iotop:

        Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER     DISK READ   DISK WRITE  SWAPIN    IO    COMMAND
         2516  be/4  username 350.08 K/s    0.00 B/s   0.00 %  0.00 % zeitgeist-datahub
         7394  be/4  username 568.88 K/s    0.00 B/s  77.41 %  0.00 % --rendere~.530483991
        14275  idle  username 350.08 K/s    0.00 B/s  37.49 %  0.00 % convert S~f test.pdf
         2048  be/4  root       0.00 B/s    0.00 B/s   0.00 %  0.00 % [kworker/3:2]
            1  be/4  root       0.00 B/s    0.00 B/s   0.00 %  0.00 % init

    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.

    Read the article

  • Random Cache Expiry

    - by mahemoff
    I've been experimenting with random cache expiry times to avoid situations where an individual request forces multiple things to update at once. For example, a web page might include five different components. If each is set to time out in 30 minutes, the user will have a long wait time every 30 minutes. So instead, you set them all to a random time between 15 and 45 minutes, making it likely that at most one component will reload for any given page load. I'm trying to find any research or guidelines on this topic, e.g. optimal variance parameters. I do recall seeing one article about how Google (?) uses this technique, but can't locate it, and there doesn't seem to be much written about the topic.
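
    For concreteness, the scheme described above amounts to jittering each TTL around a base value. A minimal Python sketch (the 15-45 minute spread mirrors the example in the question; the function name is illustrative):

        import random

        def jittered_ttl(base_seconds=1800, spread=0.5):
            """Return a TTL drawn uniformly from base*(1-spread)..base*(1+spread).

            With base=1800 s (30 min) and spread=0.5 this gives 15-45 minutes,
            so several components cached this way are unlikely to all expire
            on the same page load."""
            return base_seconds * random.uniform(1 - spread, 1 + spread)

        # e.g. cache.set("sidebar", html, timeout=jittered_ttl())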

    Read the article

  • Implementing cache system in Java Web Application

    - by TGM
    I have worked with JPA (the EclipseLink implementation) and Hibernate. As I understand it, these two have great caching systems. I am interested in caching in a web application, and in order to better understand the process I'm trying to implement something on my own. Sadly, I cannot find any in-depth documentation on this subject. I'm interested in things like high scalability, sharing memory across different machines, and other important theoretical matters. Is there any tutorial or open project I could check out? Thank you! LE: I want to cache DB information in POJOs, just like JPA or EclipseLink do.
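
    As a starting point, the core of what an entity cache does can be sketched in a few lines: a bounded map from entity key to object, with a per-entry expiry, consulted before the database. A rough Python sketch of that idea; the loader callback, the sizes, and db_load_user are illustrative assumptions, and sharing across machines would mean backing this with an external store such as memcached rather than a local dict.

        import time
        from collections import OrderedDict

        class EntityCache:
            """A tiny read-through cache: bounded size, per-entry expiry."""
            def __init__(self, loader, max_entries=1024, ttl_seconds=60):
                self._loader = loader          # e.g. a function that runs the DB query
                self._max = max_entries
                self._ttl = ttl_seconds
                self._entries = OrderedDict()  # key -> (value, expires_at)

            def get(self, key):
                entry = self._entries.get(key)
                if entry and entry[1] > time.time():
                    self._entries.move_to_end(key)        # keep LRU order
                    return entry[0]
                value = self._loader(key)                 # miss: hit the database
                self._entries[key] = (value, time.time() + self._ttl)
                self._entries.move_to_end(key)
                if len(self._entries) > self._max:
                    self._entries.popitem(last=False)     # evict least recently used
                return value

        # users = EntityCache(loader=lambda user_id: db_load_user(user_id))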

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the "lsusb" command, but I do not exactly understand how to mount it. This is the output for my lsusb command. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This the relevant output of my "lsusb -v" : Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc. Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x174c ASMedia Technology Inc. idProduct 0x5106 bcdDevice 0.01 iManufacturer 2 ASMedia iProduct 3 AS2105 iSerial 1 00000000000000000000 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.35-28-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:04:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0503 highspeed power enable connect Port 4: 0000.0503 highspeed power enable connect Device Status: 0x0003 Self Powered Remote Wakeup Enabled This is the error given when I try to mount the hard drive: shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt [sudo] password for shinso: mount: /dev/sdb: unknown device This the output of "dmesg|tail": [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. 
Plaintext passthrough mode is not enabled; returning -EIO [30535.800977] usb 9-4: USB disconnect, address 3 [30659.237342] Valid eCryptfs headers not found in file header region or xattr region [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [31259.268310] Valid eCryptfs headers not found in file header region or xattr region [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [31860.059058] Valid eCryptfs headers not found in file header region or xattr region [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [32465.220590] Valid eCryptfs headers not found in file header region or xattr region [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO I am using Ubuntu 10.10 (64 bit). Any help is appreciated.

    Read the article

  • Not enough free disk space

    - by carmatt95
    I'm new to Ubuntu and I'm getting an error in Software Updater. When I try to do my daily updates, it says:

        The upgrade needs a total of 25.3 M free space on disk /boot. Please free at least an
        additional 25.3 M of disk space on /boot. Empty your trash and remove temporary packages
        of former installations using 'sudo apt-get clean'.

    I tried typing sudo apt-get clean into the terminal, but I still get the message. All of the pages I read seem to be for experienced Ubuntu users. Any help would be appreciated. I'm running Ubuntu 12.10. I want to upgrade to 13.04, but understand I have to finish these updates first.

    EDIT: @Alaa, this is the output from typing cat /etc/fstab into the terminal:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        /dev/mapper/ubuntu-root   /              ext4   errors=remount-ro        0 1
        # /boot was on /dev/sda1 during installation
        UUID=fa55c082-112d-4b10-bcf3-e7ffec6cebbc /boot ext2 defaults            0 2
        /dev/mapper/ubuntu-swap_1 none           swap   sw                       0 0
        /dev/fd0                  /media/floppy0 auto   rw,user,noauto,exec,utf8 0 0

    matty@matty-G41M-ES2L:~$ df -h:

        Filesystem               Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu-root  915G   27G  842G   4% /
        udev                     984M  4.0K  984M   1% /dev
        tmpfs                    397M  1.1M  396M   1% /run
        none                     5.0M     0  5.0M   0% /run/lock
        none                     992M  1.8M  990M   1% /run/shm
        none                     100M   52K  100M   1% /run/user
        /dev/sda1                228M  222M     0 100% /boot

    matty@matty-G41M-ES2L:~$ dpkg -l | grep linux-image:

        ii  linux-image-3.5.0-17-generic  3.5.0-17.28  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-18-generic  3.5.0-18.29  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-19-generic  3.5.0-19.30  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-21-generic  3.5.0-21.32  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-22-generic  3.5.0-22.34  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-23-generic  3.5.0-23.35  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-24-generic  3.5.0-24.37  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-25-generic  3.5.0-25.39  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        ii  linux-image-3.5.0-26-generic  3.5.0-26.42  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP
        iF  linux-image-3.5.0-28-generic  3.5.0-28.48  i386  Linux kernel image for version 3.5.0 on 32 bit x86 SMP

    Read the article
