Search Results

Search found 8997 results on 360 pages for 'apt cache'.

Page 63/360 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • Unknown http requests of type http://<domain>/cache/<32-digit-alphanumeric-key>

    - by Siva Bathula
    I am getting a lot of incoming requests with this structure: //domain_name/cache/22092e9b25c40809dfb94b6179166b26. I am running a .NET 4.0 website served from IIS 7.5. Many of these URLs have no referrer and arrive at random, each with a different 32-character alphanumeric key, and I do not have any resource like '.../cache/...' on my website. I just want to block such requests and understand where they are coming from at all. Any help would be appreciated.

    Read the article

  • SharePoint publishing cache counter missing on WFE (Object Caching)

    - by Ryan
    I want to tune object caching in my SharePoint environment. The way to do this is to check the SharePoint Publishing Cache counters through perfmon on the farm. I have one application server and two WFEs. When I try to create a counter on a WFE, perfmon shows the SharePoint Publishing Cache object, but I am not able to add any instance of it; when I select my application server, I can see all the instances. If I want to check the publishing cache hit ratio, I need to run this on the WFEs as well, correct me if I'm wrong? How do I resolve this issue? Also, how can I check the hit ratio given that our site has not gone live yet, so we don't get enough users hitting the site to measure it? Does that mean I can only tune it once the site goes live and real load hits it? Thanks, Amit

    Read the article

  • vs2010 Cache SQL data incorrect fields

    - by mickartz
    OK, I found a walkthrough on MSDN for what I was after (an offline database cache). However, when I let the wizard create a local database from my online SQL Server, the timespan fields are converted to strings. Now, I know the suggestion was to create my own local database and then use the Microsoft Sync Framework... however, this claims to do it "out of the box". Now I have a dataset which I have no idea how to use, and a newly created database (for the synced cache) that I will presumably have to use LINQ to Entities with, and meanwhile I have this weird timespan-to-string conversion. Should I give up now or push on? Can I overwrite the .designer.cs, changing typeof(string) to typeof(TimeSpan)? Damn wizards!

    Read the article

  • Force Windows 7 to store thumbnails locally

    - by kotekzot
    I want Windows 7 to store thumbnail cache files in the same folder as the files themselves (thumbs.db) instead of using the centralized location for all thumbnails (by default %userprofile%\AppData\Local\Microsoft\Windows\Explorer). How would one achieve this? Alternatively, if the former is not possible, I'd settle for no thumbnail caching at all, forcing Windows to regenerate thumbnails each time a folder is accessed.

    Read the article

  • Asking browsers to cache as aggressively as possible

    - by balpha
    This is about a web app that serves images. Since the same request will always return the same image, I want the accessing browsers to cache the images as aggressively as possible. I pretty much want to tell the browser: "Here's your image. Go ahead and keep it; it's really not going to change for the next couple of days. No need to come back. Really. I promise." So far I set Cache-Control: public, max-age=86400; Last-Modified: (some time ago); Expires: (two days from now); and of course I return a 304 Not Modified if the request carries an appropriate If-Modified-Since header. Is there anything else I can do (or anything I should do differently) to get my message across to the browsers? The app is hosted on the Google App Engine, in case that matters.
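
    A quick way to see exactly what the browser is being told is to inspect the response headers and replay a conditional request from the command line. This is only a sketch for verifying the behaviour described above; the URL is a placeholder:

        # First request: check which caching headers are actually sent
        curl -sI https://example.appspot.com/images/12345 | grep -iE 'cache-control|expires|last-modified|etag'

        # Replay as a conditional request; a correctly configured app should answer "304 Not Modified"
        curl -sI -H 'If-Modified-Since: Mon, 01 Mar 2010 00:00:00 GMT' \
             https://example.appspot.com/images/12345 | head -n 1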

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have the data cached, which is something YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate the JavaScript, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:
    * All JavaScript files are compressed and loaded at the bottom of the page.
    * All JavaScript files have far-future cache expiration dates, so they will remain (for most users) in the cache for a long time.
    * Pages only load the JavaScript files that they require, rather than loading one monolithic file, most of which would not be needed.
    Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, the cached version is used immediately; no HTTP request is sent to the server at all. If this is correct, I would assume that having multiple <script> tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition, not loading the unneeded JS means the browser doesn't have to interpret or execute all the additional code it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, they would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LABjs to allow parallel loading of the JS when it's not cached. So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS: is monolithic better?

    Read the article

  • Jsp cache problem

    - by idiotgenius
    I use JavaScript and CSS to build a multi-level drop-down menu with the following markup: <ul> <li>menu item 1</li> <ul> <li><a href="#">sub menu item 1</a></li> ................. This markup is generated by a custom JSTL tag <mui:menu .../> which loads the menu data from a database. I would like my JSP page to behave like this: if the menu data has not changed since the last time I visited the page, just use the browser's cache; otherwise load it from the database. How can I do this? I don't know much about caching mechanisms.

    Read the article

  • Delete cache when web browser is closed

    - by Edy Cu
    Hi all. I have an issue with multiple logins in ASP.NET. This is the case: user X logs in as "user1" in a web browser. Then user Y also logs in as "user1" in another web browser. User Y gets the error message "Another user is logged in to this account". That works as expected. But if X closes their web browser and then tries to log in again as "user1", X also gets "Another user is logged in to this account". I tried to debug this and found that the session is removed when the web browser is closed, but the cache entry still remains. Does anyone have an idea how to clear this cache entry when the user closes their browser (not just a tab)? Regards.

    Read the article

  • Offline Database Write Cache in C#

    - by Todd Gardner
    I have a windows service that receives a large amount of data that needs to be transformed and persisted to a database. To ensure that we do not lose data, I want to create a "Write cache" for the data that will continue regardless if the database is online. Once the database becomes available again, I would want it to flush the content of the cache back into the database. I've seen some articles indicating that I might be able to do this with NHibernate, but I haven't found it conclusively. What options exist for this, and is NHibernate the appropriate direction?

    Read the article

  • Design guidelines for cache mechanism

    - by Delashmate
    Hi all, I have been given an assignment to write the design for a cache mechanism (this is a work assignment, not homework), and it is my first time writing a design document. Our program displays images for doctors, and we want to reduce the parsing time of the images, so we want to save the parsed data in advance (in files or in a database). Currently I have several key design ideas:
    * Handle locks - every shared data structure, and the files, should be protected.
    * Tests - add tests to verify that the data from the cache is equal to the data from the files.
    * Decouple the connection to the database - do not call the database directly.
    * Cleanup mechanism - delete old files if the cache directory exceeds a configurable threshold.
    * Support a config file.
    * Support a performance tool in the future.
    I will also add a class diagram, data flow charts, and a workflow. What do you think I should add to these key ideas? Do you know of good links to articles about design? Thanks in advance, Dan

    Read the article

  • PeerApp Scalability

    - by ChaosFreak
    William, In response to a question on P2P caching, you answered "PeerApp can do that but probably doesn't suit the scale you are looking at." PeerApp is the most scalable P2P cache in the world, and can handle hundreds of Gb per second of bandwidth. Their largest deployment in Taiwan handles 120Gbps with no problem. The next largest competitor, OverSi, can barely handle a tenth of that. Where do you get your information that PeerApp "doesn't suit scale"?

    Read the article

  • How to check total cache size using a program

    - by user1888541
    So I'm having some trouble creating a program to measure cache size in C. I understand the basic concept, but I'm still having trouble figuring out exactly what I am doing wrong. Basically, I create an array of varying length (going by powers of 2), access each element in the array, and put it in a dummy variable. I go through the array and do this around 1000 times to average out the "noise" that would otherwise occur if I only did it once. Then I look for the array size that causes a big jump in access time. Unfortunately, this is where I am having my problem: I don't see this jump using my code, so clearly I am doing something wrong. Another thing is that I used /proc/cpuinfo to check the cache and it said the size was 6114, which is not a power of 2. I was told to go by powers of 2 to figure out the cache size; can anyone explain why that is? Here is the gist of my code (I will post the rest if need be):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>

        int main(void)
        {
            struct timeval start;
            struct timeval end;

            int n = 1;                        /* change this to test different sizes */
            int array_size = 1048576 * n;     /* 1 MiB * n */

            /* 'volatile' keeps the compiler from optimising the dummy reads away */
            volatile char x = 0;
            int i = 0, j = 0;
            char *a;

            a = malloc(sizeof(char) * array_size);

            gettimeofday(&start, NULL);
            for (i = 0; i < 1000; i++) {
                for (j = 0; j < array_size; j += 1) {
                    x = a[j];
                }
            }
            gettimeofday(&end, NULL);

            /* use a long so the microsecond count cannot overflow */
            long timeTaken = (end.tv_sec * 1000000L + end.tv_usec)
                           - (start.tv_sec * 1000000L + start.tv_usec);

            printf("Time Taken: %ld \n", timeTaken);
            printf("Average: %f \n", (double)timeTaken / (double)array_size);

            free(a);
            return 0;
        }
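
    As a cross-check on the size reported by /proc/cpuinfo, Linux also exposes the per-level cache sizes through getconf and sysfs; a quick sketch (the paths are the standard ones, but verify on your machine):

        # Per-level cache and line sizes as glibc sees them
        getconf -a | grep -i cache

        # The same information straight from the kernel, one index directory per cache level
        grep . /sys/devices/system/cpu/cpu0/cache/index*/{level,type,size}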

    Read the article

  • how to save downloaded files in cache android

    - by madcoderz
    Hi, I'm streaming video from a website in my Android application. I have a history option showing the last viewed videos. I wonder if I can use a cache so that when the user opens the history, the video plays faster (and is not downloaded again). When you use a cache in Android, does that mean the whole video is downloaded and saved somewhere, or is only some of the data saved (not the whole video)? Any help would be appreciated. Thanks.

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer's blog. You can also find the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call invokes the back-end service to get the result; this result is then stored in the cache for future requests to access. I'm thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result caching in a dedicated JVM

    This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large, and you may want to protect your OSB Java heap. In this example, the client calls an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when the service is called again, the respective results are retrieved from the cache rather than from the external system.

    Step 1 – Set up your Coherence server

    Via the OSB Administration Server Console, create the Coherence server to be used as the results cache. Here are the configured Coherence server arguments from the Server Start tab. Note that I'm using the default Cache Config and Override files in the domain:

        -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m
        -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml
        -Dtangosol.coherence.cluster=OSB-cluster
        -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence server classpath:

        /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:
        /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:
        /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:

        -Dtangosol.coherence.distributed.localstorage=false
        -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.

    Step 2 – Configure your Business Service

    Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.

    The results

    As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see that the Business Service was only invoked once, on the first request; all subsequent requests used the results cache.

    Read the article

  • How can I back up my ubuntu system?

    - by Eloff
    I'm sure there's a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space - unless compressed somehow.) Preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running ubuntu as my main development machine and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it. So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to backup your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, ides, compilers, graphics drivers, etc.) So at the very least, if I do not back up the operating system itself I need to grab all config files, all program binaries, all created but required files, etc. I'd rather backup too much than too little - an ubuntu install is tiny anyway. Also this drastically reduces the restore time, which would cost me more in my time than the extra storage space. I tried using Deja Dup to backup the root partition, excluding some things like /mnt /media /dev /proc etc. Although many websites assured me you can backup a running linux system this way - that seems to be false as it complained that it could not backup the following files: /boot/System.map-3.0.0-17-generic /boot/System.map-3.2.0-22-generic /boot/vmcoreinfo-3.0.0-17-generic /boot/vmlinuz-3.0.0-17-generic /boot/vmlinuz-3.2.0-22-generic /etc/.pwd.lock /etc/NetworkManager/system-connections/LAN Connection /etc/apparmor.d/cache/lightdm-guest-session /etc/apparmor.d/cache/sbin.dhclient /etc/apparmor.d/cache/usr.bin.evince /etc/apparmor.d/cache/usr.lib.telepathy /etc/apparmor.d/cache/usr.sbin.cupsd /etc/apparmor.d/cache/usr.sbin.tcpdump /etc/apt/trustdb.gpg /etc/at.deny /etc/ati/inst_path_default /etc/ati/inst_path_override /etc/chatscripts /etc/cups/ssl /etc/cups/subscriptions.conf /etc/cups/subscriptions.conf.O /etc/default/cacerts /etc/fuse.conf /etc/group- /etc/gshadow /etc/gshadow- /etc/mtab.fuselock /etc/passwd- /etc/ppp/chap-secrets /etc/ppp/pap-secrets /etc/ppp/peers /etc/security/opasswd /etc/shadow /etc/shadow- /etc/ssl/private /etc/sudoers /etc/sudoers.d/README /etc/ufw/after.rules /etc/ufw/after6.rules /etc/ufw/before.rules /etc/ufw/before6.rules /lib/ufw/user.rules /lib/ufw/user6.rules /lost+found /root /run/crond.reboot /run/cups/certs /run/lightdm /run/lock/whoopsie/lock /run/udisks /var/backups/group.bak /var/backups/gshadow.bak /var/backups/passwd.bak /var/backups/shadow.bak /var/cache/apt/archives/lock /var/cache/cups/job.cache /var/cache/cups/job.cache.O /var/cache/cups/ppds.dat /var/cache/debconf/passwords.dat /var/cache/ldconfig /var/cache/lightdm/dmrc /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash /var/lib/apt/lists/lock /var/lib/dpkg/lock /var/lib/dpkg/triggers/Lock /var/lib/lightdm /var/lib/mlocate/mlocate.db /var/lib/polkit-1 /var/lib/sudo /var/lib/urandom/random-seed /var/lib/ureadahead/pack /var/lib/ureadahead/run.pack /var/log/btmp /var/log/installer/casper.log /var/log/installer/debug /var/log/installer/partman 
/var/log/installer/syslog /var/log/installer/version /var/log/lightdm/lightdm.log /var/log/lightdm/x-0-greeter.log /var/log/lightdm/x-0.log /var/log/speech-dispatcher /var/log/upstart/alsa-restore.log /var/log/upstart/alsa-restore.log.1.gz /var/log/upstart/console-setup.log /var/log/upstart/console-setup.log.1.gz /var/log/upstart/container-detect.log /var/log/upstart/container-detect.log.1.gz /var/log/upstart/hybrid-gfx.log /var/log/upstart/hybrid-gfx.log.1.gz /var/log/upstart/modemmanager.log /var/log/upstart/modemmanager.log.1.gz /var/log/upstart/module-init-tools.log /var/log/upstart/module-init-tools.log.1.gz /var/log/upstart/procps-static-network-up.log /var/log/upstart/procps-static-network-up.log.1.gz /var/log/upstart/procps-virtual-filesystems.log /var/log/upstart/procps-virtual-filesystems.log.1.gz /var/log/upstart/rsyslog.log /var/log/upstart/rsyslog.log.1.gz /var/log/upstart/ureadahead.log /var/log/upstart/ureadahead.log.1.gz /var/spool/anacron/cron.daily /var/spool/anacron/cron.monthly /var/spool/anacron/cron.weekly /var/spool/cron/atjobs /var/spool/cron/atspool /var/spool/cron/crontabs /var/spool/cups
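
    For the nightly-incremental-with-rotation requirement described above, one commonly used approach is an rsync snapshot script driven by cron. The sketch below is only a starting point, not a drop-in solution; the /mnt/backup mount point and the 14-snapshot retention are assumptions, and it must run as root:

        #!/bin/bash
        # Nightly snapshot of / using rsync hard links; unchanged files cost no extra space.
        DEST=/mnt/backup            # assumed mount point of the backup drive
        TODAY=$(date +%F)

        rsync -aAXH --delete \
              --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
              --link-dest="$DEST/latest" \
              / "$DEST/$TODAY/"

        # Point "latest" at the new snapshot and keep only the 14 most recent snapshots
        rm -f "$DEST/latest" && ln -s "$DEST/$TODAY" "$DEST/latest"
        ls -1d "$DEST"/20??-??-?? | head -n -14 | xargs -r rm -rf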

    Read the article

  • update-apt-xapian-index uses 100% CPU, even when Update Manager is set to not check for updates

    - by Dave M G
    I have a slightly older laptop running Ubuntu 11.10. It runs fine, but frequently, when I start it up, the CPU monitor in my GNOME panel shows 100% usage for what can be up to five minutes or so. It seems that the offending process is update-apt-xapian-index, which, if I understand correctly, is the update manager checking for updates. I have gone into the Update Manager settings and selected to never check for updates; I'll do that manually when I feel like I have the time to leave the laptop running for that. However, despite my selection, this still happens: roughly 50% of the time or more, when I start my laptop, it runs update-apt-xapian-index. How can I get the update manager to respect my settings, or at least get this process to stop eating my CPU cycles?
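
    For what it's worth, update-apt-xapian-index is normally triggered by a weekly cron job belonging to the apt-xapian-index package (it rebuilds the package search index used by tools such as Synaptic's quick search), rather than by Update Manager's check-for-updates setting. One commonly suggested workaround is to disable or remove that job; a sketch, assuming the stock package layout:

        # Confirm the weekly job exists, then stop it from running
        ls -l /etc/cron.weekly/apt-xapian-index
        sudo chmod -x /etc/cron.weekly/apt-xapian-index

        # Or remove the index entirely if you never use Synaptic's quick search
        sudo apt-get remove apt-xapian-index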

    Read the article

  • Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) after apt-get upgrade

    - by Edward
    I'm using Ubuntu 9.1 server edition, and I get this error at boot time after I ran sudo apt-get upgrade. When checking my kernel version, uname -r returns 2.6.31-14-generic, but when I run dpkg -l 'linux-image*' | grep ^.i I cannot find 2.6.31-14 (the list only contains 2.6.32*). Following the solution in the thread "Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)" doesn't work for me. I'm running the commands in rescue mode after booting from the Ubuntu 9.1 installation disc. Do I need to update my kernel and run update-initramfs and update-grub again? If so, how can I update the kernel? Running apt-get install for any linux-headers/linux-image package does not change the uname -r value. Thanks!
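
    A rough sketch of the usual recovery sequence from a rescue environment: chroot into the installed system, reinstall the kernel package that dpkg reports as present, and regenerate the initramfs and the GRUB configuration. The /target mount point and the kernel version below are placeholders; substitute whatever your rescue shell and dpkg -l actually show:

        # Bind the virtual filesystems into the installed system, then enter it
        sudo mount --bind /dev  /target/dev
        sudo mount --bind /proc /target/proc
        sudo mount --bind /sys  /target/sys
        sudo chroot /target

        # Inside the chroot: reinstall the listed kernel, rebuild its initramfs
        # and refresh the boot menu (replace the version with the one dpkg lists)
        apt-get install --reinstall linux-image-2.6.32-21-generic
        update-initramfs -u -k 2.6.32-21-generic
        update-grub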

    Read the article

  • E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution)

    - by B Jo
    I have been trying to upgrade to Ubuntu 13.04 for almost a month now. I am a novice in Linux and in software in general. My /boot partition is full:

        bijo@bijo-AMILO-Xi-2428:~$ df -h
        Filesystem               Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu-root  228G  7.7G  208G   4% /
        udev                    1001M  4.0K 1001M   1% /dev
        tmpfs                    404M  836K  403M   1% /run
        none                     5.0M     0  5.0M   0% /run/lock
        none                    1008M  156K 1008M   1% /run/shm
        none                     100M   48K  100M   1% /run/user
        /dev/sda1                228M  222M     0 100% /boot

    I tried:

        sudo apt-get purge $( dpkg --list | grep -P -o "linux-image-\d\S+" | grep -v $(uname -r | grep -P -o ".+\d") )

    but I got this reply:

        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    In fact, I'm going round in circles. Can someone guide me through this, please? Thanks in advance for your time.
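
    The usual way out of this chicken-and-egg situation (apt needs space in /boot to repair itself, but cleaning up needs apt) is to purge one or two old kernel images directly with dpkg and then let apt finish. A sketch; the version numbers below are placeholders only, and you should never purge the kernel reported by uname -r:

        # See what is installed and what is currently running
        uname -r
        dpkg --list 'linux-image*' | grep ^ii

        # Free space in /boot by purging an old kernel directly with dpkg
        sudo dpkg --purge linux-image-3.5.0-17-generic linux-image-extra-3.5.0-17-generic

        # Now apt has room to repair itself and clean up the rest
        sudo apt-get -f install
        sudo apt-get autoremove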

    Read the article

  • How to make the apt autocompletion work in a minimal system (in an LXC container)?

    - by Adam Ryczkowski
    When I work inside a thin LXC container on 12.04, I have only a very basic system. In particular, /etc/bash_completion.d is missing most entries, e.g. the one for apt, which I find particularly useful. Is there a standard package that installs the completion for apt, or should I copy the file manually? Just copying the files into /etc/bash_completion.d manually doesn't seem to work. I use bash as my command interpreter. What am I missing here?
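
    A minimal sketch of the usual fix, assuming the container simply lacks the stock bash-completion package and that the main completion script is never sourced in such a stripped-down image:

        # Inside the container: install the package that ships the apt completions
        sudo apt-get install bash-completion

        # Completions only load if /etc/bash_completion is sourced; a minimal image
        # may skip this, so add it to ~/.bashrc (or enable it in /etc/bash.bashrc)
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi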

    Read the article

  • Do I need to do an "apt-get update" after adding a PPA?

    - by Sat93
    After adding a new PPA to the repository list, is it necessary to update the whole database? By "whole database" I mean: is it necessary to update the indexes of each and every package? If it's not necessary, then how can I update only the index for the PPA I have just added? For example, if I add a PPA by typing the following in a terminal,

        sudo add-apt-repository ppa:tiheum/equinox

    then we normally run the following command afterwards:

        sudo apt-get update

    But how can I update only the package list associated with the above PPA, instead of updating the whole database?
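
    apt can be told to refresh a single source file instead of everything in /etc/apt; a sketch of the usual trick (the .list filename is a guess based on the PPA name; check ls /etc/apt/sources.list.d/ for the real one on your system):

        # Refresh only the index for the newly added PPA
        sudo apt-get update \
            -o Dir::Etc::sourcelist="sources.list.d/tiheum-equinox-precise.list" \
            -o Dir::Etc::sourceparts="-" \
            -o APT::Get::List-Cleanup="0"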

    Read the article

  • How do I install obsolete packages with apt-get?

    - by naman
    I need to install old, vulnerable packages on Ubuntu to make my own version of Metasploitable; it's part of a project. I am trying to do it manually, but it's difficult to install and run the vulnerable programs. So my question is: can I install these old vulnerable packages with apt-get? Also, if I install a program from source code, how do I start it afterwards? For example, if I manually install telnetd (configure, make, make install), I do not find its service in /etc/init.d, as I do when I install it with apt-get. Please help me. Thanks in advance.
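
    A sketch of the apt side of this; the package name and version strings below are only illustrative, and very old versions usually have to be fetched as .deb files by hand (e.g. from Launchpad or old-releases.ubuntu.com):

        # See which versions apt currently knows about for a package
        apt-cache policy telnetd

        # Install a specific (older) version if the archive still carries it
        sudo apt-get install telnetd=<old-version>

        # Otherwise install a manually downloaded old .deb, then let apt fix dependencies
        sudo dpkg -i telnetd_<old-version>_i386.deb
        sudo apt-get -f install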

    Read the article

  • Adaptec 5805 does not start after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller does not start; it only works if the computer is shut down and powered off completely. Later I updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How can I fix the problem? The configuration summary log is added below:

        Configuration summary
        Server name.....................raid_test
        Adaptec Storage Manager agent...7.31.00 (18856)
        Adaptec Storage Manager console.7.31.00 (18856)
        Number of controllers...........1
        Operating system................Windows

        Configuration information for controller 1
        -------------------------------------------------------
        Type............................Controller
        Model...........................Adaptec 5805
        Controller number...............1
        Physical slot...................2
        Installed memory size...........512 MB
        Serial number...................8C4510C6C9E
        Boot ROM........................5.2-0 (18948)
        Firmware........................5.2-0 (18948)
        Device driver...................5.2-0 (16119)
        Controller status...............Optimal
        Battery status..................Charging
        Battery temperature.............Normal
        Battery charge amount (%).......37
        Estimated charge remaining......0 days, 16 hours, 12 minutes
        Background consistency check....Disabled
        Copy back.......................Disabled
        Controller temperature..........Normal (40C / 104F)
        Default logical drive task priority...High
        Performance mode................Dynamic
        Number of logical devices.......1
        Number of hot-spare drives......0
        Number of ready drives..........0
        Number of drive(s) assigned to MaxCache cache...0
        Maximum drives allowed for MaxCache cache.......8
        MaxCache Read Cache Pool Size...0 GB
        NCQ status......................Enabled
        Stay awake status...............Disabled
        Internal drive spinup limit.....0
        External drive spinup limit.....0
        Phy 0...........................No device attached
        Phy 1...........................No device attached
        Phy 2...........................No device attached
        Phy 3...........................1.50 Gb/s
        Phy 4...........................No device attached
        Phy 5...........................No device attached
        Phy 6...........................No device attached
        Phy 7...........................No device attached
        Statistics version..............2.0
        SSD Cache size..................0
        Pages on fetch list.............0
        Fetch list candidates...........0
        Candidate replacements..........0
        69319...........................31293

        Logical device..................0
        Logical device name.............
        RAID level......................Simple volume
        Data space......................148,916 GB
        Date created....................09/19/2012
        Interface type..................Serial ATA
        State...........................Optimal
        Read-cache mode.................Enabled
        Preferred MaxCache read cache setting...Enabled
        Actual MaxCache read cache setting......Disabled
        Write-cache mode................Enabled (write-back)
        Write-cache setting.............Enabled (write-back)
        Partitioned.....................Yes
        Protected by hot spare..........No
        Bootable........................Yes
        Bad stripes.....................No
        Power Status....................Disabled
        Power State.....................Active
        Reduce RPM timer................Never
        Power off timer.................Never
        Verify timer....................Never
        Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT
        Overall host IOs................99075
        Overall MB......................4411203
        DRAM cache hits.................71929
        SSD cache hits..................0
        Uncached IOs....................29239
        Overall disk failures...........0
        DRAM cache full hits............71929
        DRAM cache fetch / flush wait...0
        DRAM cache hybrid reads.........3476
        DRAM cache flushes..............--
        Read hits.......................0
        Write hits......................0
        Valid Pages.....................0
        Updates on writes...............0
        Invalidations by large writes...0
        Invalidations by R/W balance....0
        Invalidations by replacement....0
        Invalidations by other..........0
        Page Fetches....................0
        0...............................0
        73..............................10822
        8...............................3
        46138...........................4916
        27184...........................15226
        20875...........................323
        16982...........................1771
        1563............................5317
        1948............................2969

        Serial attached SCSI
        -----------------------
        Type............................Disk drive
        Vendor..........................Unknown
        Model...........................ST3160815AS
        Serial Number...................9RX3KZMT
        Firmware level..................3.AAD
        Reported channel................0
        Reported SCSI device ID.........0
        Interface type..................Serial ATA
        Size............................149,05 GB
        Negotiated transfer speed.......1.50 Gb/s
        State...........................Optimal
        S.M.A.R.T. error................No
        Write-cache mode................Write back
        Hardware errors.................0
        Medium errors...................0
        Parity errors...................0
        Link failures...................0
        Aborted commands................0
        S.M.A.R.T. warnings.............0
        Solid-state disk (non-spinning).false
        MaxCache cache capable..........false
        MaxCache cache assigned.........false
        NCQ status......................Enabled
        Phy 0...........................1.50 Gb/s
        Power State.....................Full rpm
        Supported power states..........Full rpm, Powered off
        0x01............................113
        0x03............................98
        0x04............................99
        0x05............................100
        0x07............................83
        0x09............................75
        0x0A............................100
        0x0C............................99
        0xBB............................100
        0xBD............................100
        0xBE............................61
        0xC2............................39
        0xC3............................69
        0xC5............................100
        0xC6............................100
        0xC7............................200
        0xC8............................100
        0xCA............................100
        Aborted commands................0
        Link failures...................0
        Medium errors...................0
        Parity errors...................0
        Hardware errors.................0
        SMART errors....................0

        End of the configuration information for controller 1

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >