Search Results

Search found 25196 results on 1008 pages for 'hard drive cache'.


  • Rails' FileStore with Linux Disk Caching or RAMdisk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (Rails file-system cache). I was thinking about changing to a memcached store, but a short test showed no big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there are around 40-45 GB of RAM left that the application doesn't need, which could be used by the Linux disk cache for this Rails file cache store. The disk is a RAID 10 system with almost 120 MB/s of throughput. How can I tell Linux to use the free RAM more deliberately and not to be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I get a performance advantage from putting the file store's root directory on a ramdisk? (Losing the cache during a reboot wouldn't be a problem.)
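
    A minimal sketch of the two approaches, assuming the Rails cache lives under /path/to/app/tmp/cache (the path and the tmpfs size are placeholders): the kernel already gives any otherwise-free RAM to the page cache, so the sysctl knobs only influence how long cached data is kept, while the tmpfs mount takes the disk out of the picture entirely.

      # keep dentry/inode caches around longer instead of reclaiming them eagerly
      sudo sysctl vm.vfs_cache_pressure=50
      # prefer dropping page cache over swapping out the application
      sudo sysctl vm.swappiness=10
      # alternative: back the Rails file store with a ramdisk (contents vanish on reboot)
      sudo mount -t tmpfs -o size=40g tmpfs /path/to/app/tmp/cache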

    Read the article

  • Clone a 2TB WD Green internal drive with bad sectors to a 3TB partitioned external

    - by ron
    I have a 2TB WD Black drive and would like to simply do a straight clone from a failing 3TB drive to it. Both are SATA. Will I be able to just install the new drive alongside the faulty one and then do the clone/rescue attempt with ddrescue, or is there a better method? The faulty internal drive mentioned has bad sectors, although I am usually able to boot into Windows 7 Ultimate with it and navigate and access all my programs. I have been attempting some trials with an Ubuntu Live CD using ddrescue but am not sure I'm doing it right. I have a 3TB WD My Book Essential external drive which is GPT, and I have created a separate 2TB partition on it which I am trying to clone to. I assume I need to format the new drive first to NTFS? Can I do that via the Ubuntu Rescue Remix 12.04 live DVD that I've been booting with?
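
    A minimal ddrescue sketch, assuming the failing disk shows up as /dev/sda and the target partition as /dev/sdb2 (both names are placeholders; confirm them with sudo fdisk -l before running anything, since cloning onto the wrong device destroys its contents). Note that ddrescue copies the source filesystem byte for byte, so the target does not need to be formatted to NTFS first; the clone overwrites whatever is there.

      # first pass: grab the easy data, skip the slow/damaged areas, keep a log for resuming
      sudo ddrescue -f -n /dev/sda /dev/sdb2 /root/rescue.log
      # second pass: go back and retry the bad areas up to three times
      sudo ddrescue -f -r3 /dev/sda /dev/sdb2 /root/rescue.log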

    Read the article

  • Varnish with multiple sites/boxes

    - by jerhinesmith
    Is it possible for Varnish to direct traffic to different IPs based on the URL? For example, is the following setup feasible (and if so, what would the VCL look like): *.example.com points to the Varnish IP address. When a request is made to foo.example.com, Varnish checks the cache and sends the request to Server1's IP address on a cache miss. When a request is made to bar.example.com, Varnish checks the cache and sends the request to Server2's IP address on a cache miss. foo and bar are (for the most part) completely unrelated sites. They use the same engine, but have different content and their own distinct databases. Since there was previously no penalty for doing so (other than cost), we split them onto two separate boxes so that a ton of traffic to foo won't have a negative impact on visitors browsing around bar. I could set up two instances of Varnish and have one serve up foo's static content and the other serve up bar's, but as there doesn't seem to be much overhead to running Varnish, I think (perhaps mistakenly) that it would make more sense to go with one Varnish server that directs the traffic to the appropriate box on a cache miss.
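
    A sketch of what the VCL for this host-based routing could look like, assuming Varnish 2.x/3.x syntax (set req.backend; newer releases use req.backend_hint instead) and placeholder backend IPs:

      backend foo { .host = "10.0.0.10"; .port = "80"; }
      backend bar { .host = "10.0.0.20"; .port = "80"; }

      sub vcl_recv {
          # pick a backend by Host header; anything else falls through to the default backend
          if (req.http.host ~ "^foo\.example\.com$") {
              set req.backend = foo;
          } elsif (req.http.host ~ "^bar\.example\.com$") {
              set req.backend = bar;
          }
      }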

    Read the article

  • Map a drive to root of a server (\\server) in Vista

    - by Andy T
    Hi, in Windows XP I can very easily map a network drive to the root of my NAS server. I browse to it in Explorer (\\192.168.1.70), choose "Map Network Drive", choose the drive letter, done. In Vista, this does not seem possible. I have to go to "Map Network Drive" from 'Computer', then enter the address, but it will only let me map to specific shares (sub-folders off of the server root) and NOT to the server root itself. Since my NAS has built-in shares (music, photo, video, etc.), I would have to have drive letters for all of these, which I absolutely don't want. Can anyone tell me why I can easily map to the server root from XP, but not in Vista? Is there something fundamentally different in the networking between the two OSes? Or do I just need to do things a different way? Hope someone can help. Thanks, AT

    Read the article

  • HTTP Caching Server that supports POST

    - by Jeroen
    I am hosting a REST service which sends appropriate Cache-Control headers. I use Varnish as a caching server in front of my webserver. However, a limitation of Varnish is that it doesn't support caching HTTP POST and HTTP PUT. Is there any alternative caching server that will be able to cache these requests? I understand that caching POST is a bit tricky because you cannot just cache based on the URL as a key like for GET; it needs to actually inspect the request body. In the case of multipart/form-data requests, there should probably be a limit on the size of the request body for it to be cached (so that big file uploads, etc. won't be cached). Nevertheless I really want to be able to cache short HTTP POST requests, or at least the application/x-www-form-urlencoded ones.

    Read the article

  • Is it normal for a SAS drive to have a few bad blocks, or should I replace my drive ASAP?

    - by Nate
    I have a drive (part of a RAID 1 mirror) that has two bad blocks. Adaptec Storage Manager e-mailed me when it detected the blocks. It shows 4 medium errors for that drive, but the state is still "Optimal". This is my first time using Adaptec RAID controllers. I don't know if an occasional bad block is normal, or if I should immediately replace that drive. Update: The drive failed later the same day! The disk subsystem is an Adaptec 6405 with ZMM and two Seagate near-line SAS drives (ST31000424SS). The other drive hasn't reported any bad blocks yet. I am running a consistency check.
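
    For keeping an eye on the surviving disk from the OS side, a hedged sketch (assuming a Linux host where the SAS disks are visible as SCSI generic pass-through devices; /dev/sg1 is a placeholder): for SCSI/SAS drives, smartctl reports the grown defect list and the error counter log pages, which is roughly what the controller counts as medium errors.

      # list the SCSI generic devices so the right /dev/sgN can be picked (sg3_utils package)
      sudo sg_scan -i
      # full health dump, including "Elements in grown defect list"
      sudo smartctl -a /dev/sg1
      # SCSI error counter log (corrected and uncorrected read/write/verify errors)
      sudo smartctl -l error /dev/sg1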

    Read the article

  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update although I have over 100 updates to do I get a error message like this: installArchives() failed: Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 189751 files and directories currently installed.) Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ... apport stop/waiting Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already apport start/running Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%%3a2.6.4-1ubuntu1.1_amd64.deb) ... Unpacking replacement libglade2-0 ... Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ... De-configuring libv4l-0:i386 ... Unpacking replacement libv4l-0 ... Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ... Unpacking replacement libv4l-0:i386 ... Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ... De-configuring libv4lconvert0 ... Unpacking replacement libv4lconvert0:i386 ... 
Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ... Unpacking replacement libv4lconvert0 ... Errors were encountered while processing: /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb /var/cache/apt/archives/oneconf_0.2.8.1_all.deb /var/cache/apt/archives/software-center_5.2.2.2_all.deb Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ... dpkg: error processing gnome-orca (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. dpkg: error processing python-problem-report (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4lconvert0 (0.8.6-1ubuntu2) ... Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ... dpkg: error processing python-piston-mini-client (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4l-0 (0.8.6-1ubuntu2) ... Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ... dpkg: dependency problems prevent configuration of python-apport: python-apport depends on python-problem-report (>= 0.94); however: Package python-problem-report is not configured yet. dpkg: error processing python-apport (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of software-center: software-center depends on python-piston-mini-client (>= 0.1+bzr29); however: Package python-piston-mini-client is not configured yet. dpkg: error processing software-center (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of oneconf: oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however: Package python-piston-mini-client is not configured yet. dpkg: error processing oneconf (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of apport: apport depends on python-apport (>= 2.0.1-0ubuntu7); however: Package python-apport is not configured yet. dpkg: error processing apport (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin ... ldconfig deferred processing now taking place This has been going on for two weeks now and I cannot get any updates. Any help would be great.
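
    A hedged recovery sketch, based on the repeated "ValueError: bad marshal data" coming from pyclean/pycompile: that error usually points at corrupt byte-compiled Python files rather than at the packages being upgraded, so clearing the stale .pyc files and letting dpkg/apt finish is a common way out. The paths below are assumptions; back up before deleting anything.

      # drop the byte-compiled files that pyclean/pycompile choke on (they get rebuilt later)
      sudo find /usr/lib/python2.7 /usr/share/python /usr/share/pyshared -name '*.pyc' -delete
      # let dpkg finish configuring the half-installed packages
      sudo dpkg --configure -a
      # pull in or fix anything still missing, then retry the upgrade
      sudo apt-get install -f
      sudo apt-get update && sudo apt-get dist-upgrade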

    Read the article

  • Unable to inject Seam cache provider

    - by Joshua
    Env: Seam 2.2, ehcache-core 2.1.0. I tried injecting the CacheProvider using the following call in my session-scoped bean: @In CacheProvider cacheProvider; WEB-INF\components.xml contains the following line to enable the cache provider: <cache:eh-cache-provider/> The above configuration seems to return a null value for the cache provider. Using the cache provider like this instead: CacheProvider cacheProvider = CacheProvider.instance(); throws the following warning: 15:29:27,586 WARN [CacheManager] Creating a new instance of CacheManager using the diskStorePath "C:\DOCUME~1\user5\LOCALS~1\Temp\" which is already used by an existing CacheManager. The source of the configuration was net.sf.ehcache.config.generator.ConfigurationSource$DefaultConfigurationSource@15ed0f9. The diskStore path for this CacheManager will be set to C:\DOCUME~1\user5\LOCALS~1\Temp\\ehcache_auto_created_1276682367586. To avoid this warning consider using the CacheManager factory methods to create a singleton CacheManager or specifying a separate ehcache configuration (ehcache.xml) for each CacheManager instance. What am I missing here?

    Read the article

  • Grails / GORM, Disable First-level Cache

    - by Stephen Swensen
    Suppose I have the following domain class mapping to a legacy table, utilizing a read-only second-level cache and having a transient field: class DomainObject { static def transients = ['userId'] Long id Long userId static mapping = { cache usage: 'read-only' table 'SOME_TABLE' } } I have a problem: references to DomainObject are being shared due to first-level caching, and thus transient fields are writing over each other. For example, def r1 = DomainObject.get(1) r1.userId = 22 def r2 = DomainObject.get(1) r2.userId = 34 assert r1.userId == 34 That is, r1 and r2 are references to the same instance. This is undesirable; I would like to cache the table data without sharing references. Any ideas? [Edit] Understanding the situation better now, I believe my question boils down to the following: Is there any way to disable the first-level cache for a specific domain class while still using the second-level cache?

    Read the article

  • Second level cache for entities with where clause

    - by bertolami
    I am wondering whether the Hibernate second-level cache works as expected if I put a where clause in the hbm.xml class definition: <hibernate-mapping> <class name="com.clazzes.A" table="TABLE_A" mutable="false" where="xyz=5" > <cache usage="read-only"/> <id name="id" /> ... Will Hibernate still put the id as key into the cache, or do I have to enable the query cache? E.g. when I then execute an HQL query like from A where id=2, that results in SQL similar to select * from TABLE_A where id=2 and (xyz=5). If I execute this query twice, will it use the second-level cache, or will it nevertheless execute the SQL twice?

    Read the article

  • USB Hardware vs. Software Write Lock

    - by TreyK
    I'm in the market for a USB flash drive, and remember this cool feature a tiny 32MB flash drive of mine had: a write lock switch. This seemed like it would be an amazing feature to have as a shield against any nastiness happening to the drive on an unfamiliar computer. However, very few drives on the market offer this feature. Instead, it seems that forms of software protection are the more prominent method. This software protection causes me a bit of uneasiness, as it seems like this software wouldn't be nearly as bulletproof as a physical switch. Also, levels of protection seem to vary from product to product. Being able to protect certain folders from reading and/or writing would be nice, but is the security trade-off worth it? Just how effective can this software protection be? Wouldn't a simple format be able to clean any drive with software protection? My drive must also be compatible with Windows XP, Vista, and 7, as well as Linux and Mac. What would be the best way forward for getting a well-sized (~8GB) flash drive with a strong write protection implementation, for little or no more than a regular drive? Thanks.

    Read the article

  • How to install GIT on an offline RHEL?

    - by Stijn Vanpoucke
    I'm using the following commands from the manual to install Git: $ tar -zxf git-1.7.2.2.tar.gz $ cd git-1.7.2.2 $ make prefix=/usr/local all $ sudo make prefix=/usr/local install but I'm receiving the following errors ... cache.h: At top level: cache.h:746: error: expected declaration specifiers or '...' before 'time_t' cache.h:889: warning: 'struct timeval' declared inside parameter list cache.h:895: warning: 'struct timeval' declared inside parameter list cache.h:970: error: expected specifier-qualifier-list before 'off_t' cache.h:979: error: expected specifier-qualifier-list before 'off_t' cache.h:997: error: expected specifier-qualifier-list before 'off_t' cache.h:1057: error: expected declaration specifiers or '...' before 'off_t' cache.h:1063: error: expected declaration specifiers or '...' before 'uint32_t' cache.h:1064: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'nth_packed_object_offset' cache.h:1065: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'find_pack_entry_one' cache.h:1067: error: expected declaration specifiers or '...' before 'off_t' cache.h:1069: error: expected declaration specifiers or '...' before 'off_t' cache.h:1070: error: expected declaration specifiers or '...' before 'off_t' cache.h:1094: error: expected specifier-qualifier-list before 'off_t' cache.h:1168: error: expected ')' before '*' token cache.h:1177: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'read_in_full' cache.h:1178: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_in_full' cache.h:1179: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_str_in_full' cache.h:1252: error: expected declaration specifiers or '...' before 'FILE' In file included from credential-store.c:2: credential.h:28: error: expected declaration specifiers or '...' before 'FILE' credential.h:29: error: expected declaration specifiers or '...' before 'FILE' In file included from credential-store.c:4: parse-options.h:115: error: expected specifier-qualifier-list before 'intptr_t' credential-store.c: In function 'parse_credential_file': credential-store.c:13: error: 'FILE' undeclared (first use in this function) credential-store.c:13: error: 'fh' undeclared (first use in this function) credential-store.c:17: warning: implicit declaration of function 'fopen' credential-store.c:19: error: 'errno' undeclared (first use in this function) credential-store.c:19: error: 'ENOENT' undeclared (first use in this function) credential-store.c:24: error: too many arguments to function 'strbuf_getline' credential-store.c:24: error: 'EOF' undeclared (first use in this function) credential-store.c:39: warning: implicit declaration of function 'fclose' credential-store.c: In function 'print_entry': credential-store.c:44: warning: implicit declaration of function 'printf' credential-store.c:44: warning: incompatible implicit declaration of built-in function 'printf' credential-store.c: In function 'main': credential-store.c:132: warning: implicit declaration of function 'umask' credential-store.c:144: error: 'stdin' undeclared (first use in this function) credential-store.c:144: error: too many arguments to function 'credential_read' credential-store.c:147: warning: implicit declaration of function 'strcmp' Is this because I didn't install the dependencies? apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev How do I install them offline?
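
    The apt-get package names in the question are Debian/Ubuntu ones, so as a hedged sketch for the offline part on RHEL (package names are the usual RHEL equivalents and may need adjusting to the exact release; the 'off_t'/'time_t'/'FILE' errors above also suggest the basic C headers may be missing, hence glibc-devel in the list): download the RPMs with their dependencies on an internet-connected RHEL machine of the same release and architecture, carry them over, and install locally.

      # on a connected RHEL box of the same release/arch
      sudo yum install -y yum-utils
      yumdownloader --resolve glibc-devel curl-devel expat-devel gettext-devel openssl-devel zlib-devel perl-ExtUtils-MakeMaker
      # copy the downloaded .rpm files to the offline machine (USB stick, scp, ...), then there:
      sudo rpm -Uvh *.rpm
      # and rebuild git
      make prefix=/usr/local all && sudo make prefix=/usr/local install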

    Read the article

  • Proxy cache zone static is unknown

    - by AnApprentice
    I'm working on setting up a reverse proxy cache. In nginx.conf I added the following: location /blog { # Reverse Proxy # Cache the Blog Pages from Heroku proxy_cache STATIC; proxy_cache_valid 200 10m; proxy_cache_valid 404 1m; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; rewrite ^/blog$ /; rewrite ^/blog/(.*)$ /$1; proxy_pass http://whispering-retreat-1.herokuapp.com; break; } However, when trying to restart nginx I received the following error: $ /opt/nginx/sbin/nginx -s stop nginx: [emerg] "proxy_cache" zone "STATIC" is unknown in /opt/nginx/conf/nginx.conf:182 Any idea what the problem is with using STATIC? I just want to cache the blog pages so it doesn't hit Heroku on every request, which is horribly slow. Thanks
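
    A hedged sketch of the usual fix: a proxy_cache zone has to be declared with a matching proxy_cache_path directive in the http{} block before any location can reference it. The cache path and sizes below are placeholders.

      # create the on-disk cache directory
      sudo mkdir -p /var/cache/nginx/static
      # then add this single line inside the http{} block of /opt/nginx/conf/nginx.conf:
      #   proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=STATIC:10m max_size=1g inactive=60m;
      # check the config and reload
      sudo /opt/nginx/sbin/nginx -t && sudo /opt/nginx/sbin/nginx -s reload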

    Read the article

  • How to delete the history and cache in Opera Mobile (10.1)

    - by Mathias Lin
    I run Opera Mobile 10.1 on Android. My device is rooted. How can I clear the browser's history and cache via the shell? As su, removing /data/data/com.opera.browser/opera/profiles/smartphone/cookies4.dat /data/data/com.opera.browser/opera/profiles/smartphone/cache /data/data/com.opera.browser/opera/profiles/smartphone/cacheO and then a /system/xbin/busybox killall -9 com.opera.browser doesn't seem to do the job: afterwards, bookmarks etc. are still there. In Opera Mini I found it easy to just delete /data/data/com.opera.mini.android/cache/webviewCache /data/data/com.opera.mini.android/databases but unfortunately, Opera Mini in its current version has a bug and doesn't work on most devices.

    Read the article

  • local cache for NAS or network folder

    - by HugoRune
    I am planning to build a network attached storage (NAS) server. Is there a way to cache frequently accessed files from the remote storage automatically on the local PC? (I am not looking for a way to sync whole folders like rsync, but rather something that automatically and transparently caches the last accessed 50 GB of files.) Ideally I am searching for something that caches writes as well as reads, since only one PC will be accessing the server (and one day of lost changes would be acceptable if the local cache is damaged). I looked into Windows Offline Files, but as far as I could tell this requires manual interaction to disconnect the server or go into offline mode in order to use the cache. The server would probably be running Linux or FreeNAS; the PC runs Windows XP, but could be upgraded to 7 if required.

    Read the article

  • SAN cache memory upgrade

    - by Scott Lundberg
    We currently have an IBM DS4300 dual-controller Fibre Channel SAN. It is a good box, but getting pretty old. It came with 256MB of cache per controller. Recently we replaced the batteries in one of the controllers and noticed that the cache is a DDR PC2100 ECC DIMM. Of course, we are now thinking about how cheap this RAM is and whether there is any good reason we can't upgrade it. IBM used to offer a "Turbo" upgrade for this box that doubled the cache and added a bunch of software features for about 10K USD. Since that product has been end-of-lifed, I don't think we can get that upgrade, and we don't need the software features (FlashCopy, StorageCopy, etc.). Besides the obvious potential warranty issue, what issues, if any, would we expect to see if we attempted to put two 1GB DIMMs in this unit? Anything else I am missing here? EDIT: Memory label: Samsung CN 0433 PC2100U-25331-A1 M381L3223ETM-CB0 256MB DDR PC2100 CL2.5 ECC

    Read the article

  • Cherokee high virtual memory usage even after disabling I/O Cache

    - by nidheeshdas
    I have Ubuntu 10.04 LTS 64-bit running in an OpenVZ container and Cherokee 1.0.8 compiled from source. The virtual memory usage of cherokee-worker is around 430 MB even after disabling the I/O cache via Advanced - I/O Cache - NOT enabled. Is this issue particular to OpenVZ? I ask because many people report having successfully reduced virtual memory usage by disabling the I/O cache. htop output: http://imgur.com/z5JEL.jpg (newbies not allowed to post images.) Thanks in advance. nidheeshdas

    Read the article

  • APC has no system cache entries

    - by lazzio
    I have two web servers providing PHP websites. One server is Apache + PHP-FPM + APC; the other is Apache with MPM-itk + APC. On both of these servers, APC has no system cache entries but only user cache entries, as you can see in the screenshot. (Screenshot: APC with only user cache entries.) The APC configuration is: apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 2 apc.mmap_file_mask apc.num_files_hint 1000 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.shm_segments 1 apc.shm_size 256 apc.stat 1 apc.stat_ctime 0 apc.ttl 7200 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 7200 apc.write_lock 1 Does anyone know why APC acts like this and how to make it work properly? Thank you for your help!

    Read the article

  • Boot failure on installation from a burned ISO image

    - by jdamae
    I'm encountering boot failure while trying to install a Linux distro from a CD. I'm using an older PC; here are its specs: HP Pavilion a255c, 2.66GHz CPU, 512MB RAM, with a BIOS revision of 6/30/2003. I reclaimed an older drive (Seagate ST340810A) that seems to be working, as it's recognized in the BIOS (auto-detected). So this is not the original HDD, but a replacement. I downloaded a mini.iso of Ubuntu 10.10 that I want to install, and burned the image to a CD for the install. My boot sequence is: First Boot Device [CDROM]. I disabled devices 2-4 so I can force it to read first from the CD-ROM. This old PC also has a separate CD writer, which is the Sec. Slave. The Sec. Master is the Toshiba DVD/ROM DSM-171 drive where I placed the burned Linux CD. With these settings I cannot get it to boot. I get the message "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER" when I start the PC with the CD (burned ISO image) in the drive. Would I be able to boot off a USB flash drive instead? Would that work?
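
    On the USB question, a hedged sketch for writing the same mini.iso to a USB stick from another Linux machine (/dev/sdX is a placeholder; double-check the device name with sudo fdisk -l first, since dd overwrites it completely). Whether the 2003-era BIOS can boot from USB at all depends on whether it offers a USB-HDD/USB-ZIP boot option.

      # if the stick doesn't boot, the image may not be a hybrid ISO; isohybrid (from the
      # syslinux package) can convert it in place before writing:
      #   isohybrid mini.iso
      sudo dd if=mini.iso of=/dev/sdX bs=4M
      # flush buffers before pulling the stick
      sync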

    Read the article

  • DVD won't mount on Ubuntu 12.04

    - by CyborgGold
    I can't seem to be able to mount my optical drive. I have tried numerous solutions from this site with no results. I am not able to see the device inside the file browser either. There is a DVD in the drive. I am running 12.04 on an HP g60-235dx portable. I have a link below to the specs. I will also list what I have tried (that I can find back right now.) I know the drive is functioning, because just before Windows 7 crashed and my MBR went fubar I was watching movies just fine. I am fairly new to linux, so don't assume I know anything. Ok, so here is what I have tried: sudo wget --output-document=/etc/apt/sources.list.d/medibuntu.list http://www.medibuntu.org/sources.list.d/$(lsb_release -cs).list sudo apt-get --quiet update sudo apt-get --yes --quiet --allow-unauthenticated install medibuntu-keyring sudo apt-get --quiet update sudo apt-get install libdvdcss2 dmesg | grep sr0 (no output) apt-get install libdvdnav4 (already installed, and up to date) sudo /usr/share/doc/libdvdread4/install-css.sh ls -l /dev/cdrom /dev/cdrw /dev/dvd /dev/dvdrw /dev/scd0 /dev/sr0 ls: cannot access /dev/scd0: No such file or directory lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/cdrom -> sr0 lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/cdrw -> sr0 lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/dvd -> sr0 lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/dvdrw -> sr0 brw-rw----+ 1 root cdrom 11, 0 Sep 10 03:51 /dev/sr0 wodim --devices wodim: Overview of accessible drives (1 found) : ------------------------------------------------------------------------- 0 dev='/dev/sg1' rwrw-- : 'TSSTcorp' 'CDDVDW TS-L633M' ------------------------------------------------------------------------- sudo lshw optical *-cdrom description: DVD-RAM writer product: CDDVDW TS-L633M vendor: TSSTcorp physical id: 1 bus info: scsi@1:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/sr0 version: 0200 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc sudo lshw | grep cdrom *-cdrom logical name: /dev/cdrom Spec sheet for portable: http://www.cnet.com/laptops/hp-g60-235dx/4507-3121_7-33496192.html If you need any more information than all of that... please let me know.
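
    One hedged diagnostic step not tried above: mount the disc by hand and watch the kernel log, so an unreadable disc shows up as an explicit I/O or filesystem error rather than just a missing icon (the mount point is a placeholder). The lshw output above also reports status=nodisc, so it may be worth re-checking with a different, known-good pressed DVD.

      sudo mkdir -p /media/dvd
      # try both common DVD filesystems, read-only; errors land in the kernel log
      sudo mount -t udf,iso9660 -o ro /dev/sr0 /media/dvd
      dmesg | tail -n 20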

    Read the article

  • Dismount USB External Drive using PowerShell

    - by JC
    Hello, I am attempting to dismount an external USB drive using PowerShell and I cannot successfully do this. The following script is what I use: #get the Win32Volume object representing the volume I wish to eject $drive = Get-WmiObject Win32_Volume -filter "DriveLetter = 'F:'" #call dismount on that object, thereby ejecting the drive $drive.Dismount($Force , $Permanent) I then check my computer to see whether the drive is unmounted, but it is not. The boolean parameters $force and $permanent have been tried in different permutations to no avail. The exit code returned by the dismount command changes when the params are toggled. (0,0) = exit code 0 (0,1) = exit code 2 (1,0) = exit code 0 (1,1) = exit code 2 The documentation for exit code 2 indicates that existing mount points are the reason it cannot dismount. However, I am trying to dismount the only mount point that exists, so I am unsure what this exit code is trying to tell me. Having already trawled the web for people experiencing similar problems, I have only found one additional command to try, and that is the following: # executed after the .Dismount() command $drive.Put() This additional command does not help. I am running out of things to try, so any assistance anyone can give me would be greatly appreciated. Thanks.

    Read the article

  • LTO 2 tape performance in LTO 3 drive

    - by hmallett
    I have a pile of LTO 2 tapes, and both an LTO 2 drive (HP Ultrium 460e), and an autoloader with an LTO 3 drive in (Tandberg T24 autoloader, with a HP drive). Performance of the LTO 2 tapes in the LTO 2 drive is adequate and consistent. HP L&TT tells me that the tapes can be read and written at 64 MB/s, which seems in line with the performance specifications of the drive. When I perform a backup (over the network) using Symantec Backup Exec, I get about 1700 MB/min backup and verify speeds, which is slower, but still adequate. Performance of the LTO 2 tapes in the LTO 3 drive in the autoloader is a different story. HP L&TT tells me that the tapes can be read at 82 MB/s and written at 49 MB/s, which seems unusual at the write speed drop, but not the end of the world. When I perform a backup (over the network) using Symantec Backup Exec though, I get about 331 MB/min backup speed and 205 MB/min verify speeds, which is not only much slower, but also much slower for reads than for writes. Notes: The comparison testing was done on the same server, SCSI card and SCSI cable, with the same backup data set and the same tape each time. The tape and drives are error-free (according to HP L&TT and Backup Exec). The SCSI card is a U160 card, which is not normally recommended for LTO 3, but we're not writing to LTO 3 tapes at LTO 3 speeds, and a U320 SCSI card is not available to me at the moment. As I'm scratching my head to determine the reason for the performance drop, my first question is: While LTO drives can write to the previous generation LTO tapes, does doing so normally incur a performance penalty?

    Read the article

  • Using smartctl to get vendor-specific attributes from an SSD behind a Smart Array P410 controller

    - by Lairsdragon
    Recently I deployed some HP servers with SSDs behind a Smart Array P410 controller. While not officially supported by HP, the servers work well so far. Now I would like to get wear-level info, error statistics, etc. from the drives. While the P410 supports passing the SMART command through to a single drive in the array, I was not able to get the interesting values out of the output. In this case the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present if the drive is directly attached to a SATA controller. smartctl on a directly connected SSD: # smartctl -A /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 5 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 3 Spin_Up_Time 0x0000 100 000 000 Old_age Offline In_the_past 0 4 Start_Stop_Count 0x0000 100 000 000 Old_age Offline In_the_past 0 5 Reallocated_Sector_Ct 0x0002 100 100 000 Old_age Always - 0 9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 8561 12 Power_Cycle_Count 0x0002 100 100 000 Old_age Always - 55 192 Power-Off_Retract_Count 0x0002 100 100 000 Old_age Always - 29 232 Unknown_Attribute 0x0003 100 100 010 Pre-fail Always - 0 233 Unknown_Attribute 0x0002 088 088 000 Old_age Always - 0 225 Load_Cycle_Count 0x0000 198 198 000 Old_age Offline - 508509 226 Load-in_Time 0x0002 255 000 000 Old_age Always In_the_past 0 227 Torq-amp_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 228 Power-off_Retract_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 smartctl on a P410-connected SSD: # ./smartctl -A -d cciss,0 /dev/cciss/c1d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net (Right, it is completely empty.) smartctl on a P410-connected HDD: # ./smartctl -A -d cciss,0 /dev/cciss/c0d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Current Drive Temperature: 27 C Drive Trip Temperature: 68 C Vendor (Seagate) cache information Blocks sent to initiator = 1871654030 Blocks received from initiator = 1360012929 Blocks read from cache and sent to initiator = 2178203797 Number of read and write commands whose size <= segment size = 46052239 Number of read and write commands whose size > segment size = 0 Vendor (Seagate/Hitachi) factory information number of hours powered up = 3363.25 number of minutes until next internal SMART test = 12 Am I hunting a bug here, or is this a limitation of the P410's SMART command pass-through?

    Read the article

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine and Disk Utility on the mac reports no issues with the volume. I can read/write all data on the drive. However when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says: [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices [ 2752.335040] usb-storage: device found at 3 [ 2752.335044] usb-storage: waiting for device to settle before scanning [ 2757.330301] usb-storage: device scan complete [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0 [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0 [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB) [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00 [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through [ 2757.367631] sdb: sdb1 [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk [ 2822.866228] FAT: bogus number of reserved sectors [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1. When I run fsck.vfat -a /dev/sdb1 I get: root@cartman:~# fsck.vfat -a /dev/sdb1 dosfsck 3.0.3, 18 May 2009, FAT32, LFN Logical sector size is zero. Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
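
    A hedged diagnostic sketch before reformatting anything: "Logical sector size is zero" often means dosfsck is being pointed at something that is not the start of the FAT filesystem, for example when the filesystem was created on the whole device rather than inside the partition, or when the partition table is off. These are read-only checks, but double-check the device name first.

      # where does the partition actually start, and what type is it?
      sudo fdisk -l /dev/sdb
      # peek at the first sector of sdb1: a FAT boot sector shows a jump instruction,
      # an OEM name, and a non-zero "bytes per sector" field
      sudo dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head
      # try mounting the whole device instead of the partition, read-only
      sudo mkdir -p /mnt/usb && sudo mount -t vfat -o ro /dev/sdb /mnt/usb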

    Read the article
