Search Results

Search found 8703 results on 349 pages for 'cache money'.

Page 72/349 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • How do I configure Grails 1.2 with OSCache?

    - by wavelet
    I do this in DataSource.groovy:

        hibernate {
            cache.use_second_level_cache = true
            cache.use_query_cache = true
            cache.provider_class = 'com.opensymphony.oscache.hibernate.OSCacheProvider'
        }

    and in BuildConfig.groovy:

        inherits("global") {
            // uncomment to disable ehcache
            excludes 'ehcache'
        }
        runtime("opensymphony:oscache:2.4.1") {
            excludes 'jms', 'commons-logging', 'servlet-api'
        }

    but I only get this error:

        commons.DefaultGrailsApplication The class [com.ai.scenter.service.reschange.ResourceChangeAdapter] was not found when attempting to load Grails application. Skipping.
        commons.DefaultGrailsApplication The class [com.ai.scenter.service.reschange.ResourceChangeService] was not found when attempting to load Grails application. Skipping.
        hibernate.ConfigurableLocalSessionFactoryBean There was an error configuring the Hibernate second level cache: could not instantiate CacheProvider [com.opensymphony.oscache.hibernate.OSCacheProvide]
        hibernate.ConfigurableLocalSessionFactoryBean This is normally due to one of two reasons. Either you have incorrectly specified the cache provider class name in [DataSource.groovy] or you do not have the cache provider on your classpath (eg. runtime ("net.sf.ehcache:ehcache:1.6.1"))

    How can I fix this?

    Read the article

  • Java memory mapped files and swap

    - by MarkS
    I'm looking at memory-mapped files in Java. Let's say I have a heap size set to 2 GB, and I memory-map a file that is 50 GB, far more than the physical memory on the machine. The OS will cache parts of that 50 GB file in the OS file cache, and the Java process will have 2 GB of heap space. What I'm curious about is: how does the OS decide how much of the 50 GB file to cache? For instance, if I have another Java process, also with a 2 GB heap size, will that 2 GB be swapped out to allow the OS to cache parts of the memory-mapped file? Will parts of the heap space of the first process be swapped out to allow the OS to cache? Is there any way to tell the OS not to swap heap space in favor of OS caching? If the OS doesn't swap out main processes, how does it determine how big its file cache should be?
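    The question is about Java, but the mechanics are OS-level and visible from any runtime with memory-mapped file support. Below is a minimal C# sketch of the same setup (the file path and the 1 GB offset are placeholders); the point is that mapped pages live in the OS file cache, not in the process heap, and are faulted in lazily as they are touched:

        using System;
        using System.IO.MemoryMappedFiles;

        class MapLargeFile
        {
            static void Main()
            {
                // Mapping only reserves address space; physical pages enter the
                // OS file cache lazily, one page fault at a time, and the OS is
                // free to evict them again under memory pressure.
                using (var mmf = MemoryMappedFile.CreateFromFile(@"C:\data\huge.bin"))
                using (var view = mmf.CreateViewAccessor())
                {
                    byte b = view.ReadByte(1L << 30); // fault in a single page at the 1 GB mark
                    Console.WriteLine(b);
                }
            }
        }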

    Read the article

  • Assistance with classifying tests

    - by amateur
    I have a .NET C# library for which I am currently creating unit tests. At present I am writing tests for a cache provider class I created. Being new to writing unit tests, I have two questions:

    1. My cache provider class is the abstraction layer over my distributed cache, AppFabric. Testing aspects of the class such as adding to and removing from the cache involves communicating with AppFabric. Are such tests still categorised as unit tests, or are they integration tests?

    2. Because the methods under test interact with AppFabric, I would like to time them; if they take longer than a specified benchmark, the tests fail. Again I ask: can such a performance benchmark test be classified as a unit test?

    The way I have my tests set up, I want to group all unit tests together, all integration tests together, and so on, so I would appreciate input on these questions.
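    For the timing question, one common arrangement keeps the wall-clock assertion in an explicitly categorised test so it never runs with the fast unit suite. A minimal sketch, assuming NUnit; the CacheProvider type, its Add method, and the 200 ms benchmark are placeholders from the question's description, not real APIs:

        using System.Diagnostics;
        using NUnit.Framework;

        [TestFixture]
        [Category("Integration")] // excluded from the plain unit-test run by category filter
        public class CacheProviderTimingTests
        {
            [Test]
            public void Add_CompletesWithinBenchmark()
            {
                var provider = new CacheProvider(); // hypothetical class under test
                var timer = Stopwatch.StartNew();

                provider.Add("key", "value");

                timer.Stop();
                Assert.Less(timer.ElapsedMilliseconds, 200, "Add exceeded the 200 ms benchmark");
            }
        }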

    Read the article

  • Moved Magento to a new server; images of existing products are not shown in the frontend

    - by forrest gump
    I have moved a Magento website to a new server to optimize speed. All images appear to be replaced by image placeholders. I tried setting permissions on the media folder to 777; when I delete the media cache folder and refresh, a new cache folder is created automatically, but only with placeholder images inside. I flushed the image cache and reindexed in the Magento backend, but still no luck. What should I do now? Some things that might be helpful:

    - It's fine to create a new product with images.
    - On a cache flush, only placeholder images are created.
    - Tried magento-cleanup.php, no luck.
    - Tried removing the .htaccess in the media folder.
    - The product images are there in the media folder, just not in the cache folder, and Magento only looks at the cache folder for images.

    Read the article

  • Difference between redirecting to a page and coming to the same page after pressing the back button

    - by Mac
    I have a page on which I disable caching with this code:

        HttpContext.Current.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
        HttpContext.Current.Response.Cache.SetValidUntilExpires(false);
        HttpContext.Current.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
        HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        HttpContext.Current.Response.Cache.SetNoStore();

    Now I want to know: is there any difference between coming to this page through a proper link and coming back to it with the browser's back button? Is there any way to detect this?
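    With no-store in effect, both navigations hit the server, so the incoming request looks the same either way. One common way to tell them apart, sketched below, is a one-time token; the session key and the hidden-field/querystring wiring are assumptions for illustration, not from the question:

        // On render: issue a fresh token, remember it server-side, and emit it
        // into the page (hidden field or link querystring; wiring not shown).
        Guid token = Guid.NewGuid();
        HttpContext.Current.Session["pageToken"] = token.ToString();

        // On the next request: a matching token means the user followed the
        // link we just rendered; a missing or stale token suggests the back
        // button or a refresh.
        string submitted = HttpContext.Current.Request["token"];
        bool followedLink = submitted != null
            && submitted == (string)HttpContext.Current.Session["pageToken"];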

    Read the article

  • Output caching in HTTP Handler and SetValidUntilExpires

    - by mayor
    I'm using output caching in my custom HTTP handler in the following way:

        public void ProcessRequest(HttpContext context)
        {
            TimeSpan freshness = new TimeSpan(0, 0, 0, 60);
            context.Response.Cache.SetExpires(DateTime.Now.Add(freshness));
            context.Response.Cache.SetMaxAge(freshness);
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetValidUntilExpires(true);
            ...
        }

    It works, but the problem is that refreshing the page with F5 leads to page regeneration (instead of cache usage) despite the last code line:

        context.Response.Cache.SetValidUntilExpires(true);

    Any suggestions?

    Read the article

  • Is NSManagedObjectContext autosaved or am I looking at NSFetchedResultsController's cache?

    - by Andreas
    I'm developing an iPhone app where I use an NSFetchedResultsController in the main table view controller. I create it like this in the viewDidLoad of the main table view controller:

        NSSortDescriptor *sortDescriptorDate = [[NSSortDescriptor alloc] initWithKey:@"date" ascending:YES];
        NSSortDescriptor *sortDescriptorTime = [[NSSortDescriptor alloc] initWithKey:@"start" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptorDate, sortDescriptorTime, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];
        [sortDescriptorDate release];
        [sortDescriptorTime release];
        [sortDescriptors release];
        controller = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:context sectionNameKeyPath:@"date" cacheName:nil];
        [fetchRequest release];
        NSError *error;
        BOOL success = [controller performFetch:&error];

    Then, in a subsequent view, I create a new object on the context:

        TestObject *testObject = [NSEntityDescription insertNewObjectForEntityForName:@"TestObject" inManagedObjectContext:context];

    The TestObject has several related objects which I create in the same way and add to the testObject using the provided add...Objects methods. Then, if, before saving the context, I press cancel and go back to the main table view, nothing is shown, as expected. However, if I restart the app, the object I created on the context shows up in the main table view. How come? At first I thought the NSFetchedResultsController was reading from its cache, but as you can see I set the cache name to nil just to test. Also, [context hasChanges] returns true after I restart. What am I missing here?

    Read the article

  • How to flush DNS cache in Windows Mobile programmatically?

    - by Bounded
    Hello,

    My Windows Mobile application (written in C# with the Compact Framework) needs to know whether a particular machine is active or not. To achieve this I thought I would use a ping mechanism, so I tried the Ping class implemented in the OpenNETCF framework (the System.Net.NetworkInformation.Ping class of the full .NET Framework is not part of the Compact Framework). Because I give the Ping.Send function a host name, it first tries to resolve that name to an IP address. I observe the following problem: if the first DNS resolution fails (because the network is down at that moment), and the application immediately tries to send the ping again, it fails too, even if the network is no longer down. I checked with a well-known network protocol analyzer and saw that only the requests for the first DNS resolution are sent; the requests for the DNS resolution of the second ping are never sent. Why is the second DNS request not sent? Is there a DNS cache mechanism on such Windows Mobile devices? If yes, can this mechanism be flushed programmatically?

    EDIT: I gave up finding a solution to this DNS flush and chose to ping an IP address instead of a machine name. The problem with pinging a hard-coded IP address is that we have to be 100% sure that this IP will not change. The gateway IP can be used because it's always reachable (if it is not, the network is down).
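    For reference, on the full .NET Framework (not the Compact Framework, where the OpenNETCF equivalent would be needed) the approach from the edit looks like the sketch below; pinging an IP address directly means no DNS lookup can fail or be cached. The gateway address and timeout are placeholders:

        using System;
        using System.Net;
        using System.Net.NetworkInformation;

        class GatewayCheck
        {
            static void Main()
            {
                using (var ping = new Ping())
                {
                    // An IPAddress argument skips host-name resolution entirely.
                    PingReply reply = ping.Send(IPAddress.Parse("192.168.0.1"), 2000); // placeholder gateway, 2 s timeout
                    Console.WriteLine(reply.Status == IPStatus.Success ? "network up" : "network down");
                }
            }
        }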

    Read the article

  • How do I programmatically access the face cache in Windows Live Photo Gallery?

    - by acorderob
    I'm not talking about the "people tags" embedded in the XMP packets of JPEGs; I'm talking about the face database used to recognize new faces. I want to add to my program the option to recognize faces using the already trained database of WLPG. I managed to use the API (a type library DLL) to detect faces, but to recognize them it needs an Exemplar Cache object that is not available in the same API. I could create my own object, but I want to use the already existing one to avoid duplicate training for the user. I know the database is in C:\Users\\AppData\Local\Microsoft\Windows Live Photo Gallery and that it is in SQL Server Compact format. I tried to open the database with Visual Studio 2010, but it says that it is an older version (pre-3.5) and needs to be upgraded. I don't want to change the database, just read it. I don't know how WLPG reads it, since apparently I don't have the correct OLE DB provider version. I would also prefer to read it without accessing the database directly, but I don't see any DLL that exports that functionality. BTW, I'm using Delphi 2010. Any ideas?

    Read the article

  • Ruby On Rails with HTML5 offline apps - Firefox does not cache the application.manifest but Safari does

    - by hoitomt
    I'm working off of this Railscast tutorial: episode 247. I'm up to this point in the tutorial: added the rack-offline gem, added the application.manifest route, and added a reference to the manifest in the html tag, right before it starts talking about problems with caching.

    Safari works as intended: when the server is running, the page is served. From the server logs I can see that Safari makes a single request to the server every time for the items page. When I turn off the server, the page displays as well, even after shutting down the browser and restarting. It appears to be pulling from the application.manifest (cache manifest).

    Firefox does not work as intended: when accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, and I allow. After clicking allow, Firefox makes 5 requests to the server for the page (from the server log). The hash is different in every request. Is it possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)? Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn't caching the application.manifest. Firefox also gives you a way to see which sites are storing things locally by going to Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on Apple). I see localhost there, but the size is 0 bytes. So for some reason, Firefox is not downloading my application.manifest along with the files.

    Read the article

  • Help implementing an object-oriented ANSI C approach

    - by No Money
    Hey there, I am an intermediate programmer in Java and know some of the basics in C++. I recently started to skim over the C language [please note that I emphasized C and want to stick with C, as I found it to be a perfect tool, so no need for suggestions focusing on why I should move back to C++ or Java]. Moving on, I am coding an object-oriented approach in C but kind of struggle with the pointers part. Please understand that I am just a noob trying to extend my knowledge beyond what I learned in high school. Here is my code (just to warn you, this program has several errors, and since I am a beginner I couldn't help but post this thread as a question):

        #include <stdio.h>

        typedef struct per{
            int privateint;
            char *privateString;
            struct per (*New) ();
            void (*deleteperOBJ) (struct t_person *);
            void (*setperNumber) ((struct*) t_person,int);
            void (*setperString) ((struct*) t_person,char *);
            void (*dumpperState) ((struct*) t_person);
        }t_person;

        void setperNumber(t_person *const per,int num){
            if(per==NULL) return;
            per->privateint=num;
        }

        void setperString(t_person *const per,char *string){
            if(per==NULL) return;
            per->privateString=string;
        }

        void dumpperState(t_person *const per){
            if(per==NULL) return;
            printf("value of private int==%d\n", per->privateint);
            printf("value of private string==%s\n", per->privateString);
        }

        void deleteperOBJ(struct t_person *const per){
            free((void*)t_person->per);
            t_person ->per = NULL;
        }

        main(){
            t_person *const per = (struct*) malloc(sizeof(t_person));
            per = t_person -> struct per -> New();
            per -> setperNumber (t_person *per, 123);
            per -> setperString(t_person *per, "No money");
            dumpperState(t_person *per);
            deleteperOBJ(t_person *per);
        }

    I am looking forward to assistance. Thanks in advance.

    Read the article

  • Anyone doing php-fpm and APC? Cache files missing in /tmp

    - by Vangel
    I have been using XCache for a long time. Recently I put together php-fpm and nginx. I see APC is installed and enabled in the configuration. I was assuming that APC would automatically opcode-cache the files and store them somewhere. According to the config it should be in /tmp/apx.xxxx, but there is no such thing there. What am I missing? Any clues to investigate would be of much help. Please note I am using PHP 5.3 FPM. Thanks, mates.

    UPDATE: I looked at apc.php and it says things are fine. Will just have to take its word for it.

    Read the article

  • How much HDD space would I need to cache the web while respecting robots.txt?

    - by Koning Baard XIV
    I want to experiment with creating a web crawler. I'll start with indexing a few medium-sized websites like Stack Overflow or Smashing Magazine. If it works, I'd like to start crawling the entire web. I'll respect robots.txt. I save all HTML, PDF, Word, Excel, PowerPoint, Keynote, etc. documents (not exes, dmgs, etc., just documents) in a MySQL DB. Next to that, I'll have a second table containing all results and descriptions, and a table with words and on what page to find those words (a.k.a. an index). How much HDD space do you think I need to save all the pages? Is it as low as 1 TB, or is it about 10 TB, 20? Maybe 30? 1000? Thanks
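    A back-of-the-envelope calculation frames the range; the page count and average document size below are assumptions, not measurements, and real figures vary by orders of magnitude:

        using System;

        class CrawlEstimate
        {
            static void Main()
            {
                const double pages = 1e9;       // assume one billion documents
                const double avgKbPerPage = 25; // assume ~25 KB average (compressed HTML; PDFs and Office files push this up)

                double totalTb = pages * avgKbPerPage / 1024 / 1024 / 1024; // KB -> MB -> GB -> TB
                Console.WriteLine("~{0:F1} TB", totalTb);                   // ~23.3 TB under these assumptions
            }
        }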

    Read the article

  • Squid not caching files (Randomly)

    - by Heinrich
    I want to use an intercepting squid server to cache specific large zip files that users in my network download frequently. I have configured squid on a gateway machine, and caching is working for "static" zip files that are served from an Apache web server outside our network. The files that I want to have cached by squid are zip files of ~100 MB which are served from a Heroku-hosted Rails application. I set an ETag header (SHA hash of the zip file on the server) and a Cache-Control: public header. However, these files are not cached by squid. This, for example, is a request that is not cached:

        $ curl --no-keepalive -v -o test.zip --header "X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a" --header "X-Customer: 5" "http://MY_APP.herokuapp.com/api/device/v1/media/download?version=latest"
        * Adding handle: conn: 0x7ffd4a804400
        * Adding handle: send: 0
        * Adding handle: recv: 0
        ...
        > GET /api/device/v1/media/download?version=latest HTTP/1.1
        > User-Agent: curl/7.30.0
        > Host: MY_APP.herokuapp.com
        > Accept: */*
        > X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a
        > X-Customer: 5
        >
        0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0
        < HTTP/1.1 200 OK
        * Server Cowboy is not blacklisted
        < Server: Cowboy
        < Date: Mon, 18 Aug 2014 14:13:27 GMT
        < Status: 200 OK
        < X-Frame-Options: SAMEORIGIN
        < X-Xss-Protection: 1; mode=block
        < X-Content-Type-Options: nosniff
        < ETag: "95e888938c0d539b8dd74139beace67f"
        < Content-Disposition: attachment; filename="e7cce850ae728b81fe3f315d21a560af.zip"
        < Content-Transfer-Encoding: binary
        < Content-Length: 125727431
        < Content-Type: application/zip
        < Cache-Control: public
        < X-Request-Id: 7ce6edb0-013a-4003-a331-94d2b8fae8ad
        < X-Runtime: 1.244251
        < X-Cache: MISS from AAA.fritz.box
        < Via: 1.1 vegur, 1.1 AAA.fritz.box (squid/3.3.11)
        < Connection: keep-alive

    In the logs squid is reporting a TCP_MISS. This is the relevant excerpt from my squid file:

        # Squid normally listens to port 3128
        http_port 3128
        http_port 3129 intercept

        # Uncomment and adjust the following to add a disk cache directory.
        maximum_object_size 1000 MB
        maximum_object_size_in_memory 1000 MB
        cache_dir ufs /usr/local/var/cache/squid 10000 16 256
        cache_mem 2000 MB

        # Leave coredumps in the first cache dir
        coredump_dir /usr/local/var/cache/squid
        cache_store_log daemon:/usr/local/var/logs/cache_store.log

        #refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern -i .(zip) 525600 100% 525600 override-expire ignore-no-cache ignore-no-store
        refresh_pattern . 0 20% 4320

        ## DNS Configuration
        dns_nameservers 8.8.8.8 8.8.4.4

    After trying around for some time I realized that squid is sometimes deciding that my file is cacheable, sometimes not, depending on whether and when I enable/disable the dns_nameservers directive. What could be wrong here?

    Read the article

  • What's wrong with this vcl config for varnish-cache as load balancer?

    - by dabito
    I have the current configurations active on my default.vcl varnish file on the machine that balances the load for other two machines (the other two machines also have varnish active). My intention is to have this server do only the load balancing and the other machines do the processing and also their own caching. My problem is that even with the config testing (not even a stress test or anything, just a few requests a minute) I get the guru meditation error and have to restart varnish. This is the default.vcl for the load balancing server: backend vader { .host = "app1.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } backend malgus { .host = "app2.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } director dooku round-robin { { .backend = vader; } { .backend = malgus; } } sub vcl_recv { if (req.http.host ~ "^balancer.server.com$") { set req.backend = dooku; } } Am I doing something wrong or missing something? EDIT: This is varnishlog's output: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839995 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839998 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840001 1.0 0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876 0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840004 1.0 14 SessionOpen c 10.150.7.151 38272 :80 14 ReqStart c 10.150.7.151 38272 458200540 14 RxRequest c GET 14 RxURL c / 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c / 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT 14 TxHeader c X-Varnish: 458200540 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663 14 SessionClose c error 14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418 14 SessionOpen c 10.150.7.151 38273 :80 14 ReqStart c 10.150.7.151 38273 458200541 14 RxRequest c GET 14 RxURL c /favicon.ico 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c Accept: */* 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: 
ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c /favicon.ico 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT 14 TxHeader c X-Varnish: 458200541 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796 14 SessionClose c error 14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418

    Read the article

  • Does Exchange Cached Mode affect email marker refresh times in other Outlook clients?

    - by David
    We have users who share a single email account by using the Additional Email option under their accounts. Now they want to assign emails to one another using the markers alongside the emails. We noticed that when changing the color of a marker, one Outlook client updated immediately but another did not. It looked like they were both set to Cached Mode. Is it likely that caching affected the refresh of the client? Would it be better to turn off Cached Mode if we are using Outlook this way?

    Read the article

  • How can I set up an nginx cache strategy that tries Amazon S3 first, then memcached, with a fallback on miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcached servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that barely ever change, I want to upload them to Amazon S3 and shut down one memcached server. Here is my conf:

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcached;
        }

        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 @fallback;
        }

        location @fallback {
            try_files $uri $uri/index.html;
        }

    I don't know why, but the config doesn't work on the final fallback. If I get an Amazon S3 hit it works; if I get an Amazon S3 miss and a memcached hit it works; but if I get an Amazon S3 miss and then a memcached miss, it fails when it tries to resolve the last fallback. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?

    Read the article

  • Making a Login Work After Cache, Cookies, etc. Have Been Cleared

    - by John
    Hello,

    I am using the code below for a user login. The first time I try to log in after the cache, cookies, etc. have been cleared, the browser refreshes and the user is not logged in. After that, logging in works fine. Any idea how I can make it work the first time? Thanks in advance, John

    index.php:

        <?php
        if($_SERVER['REQUEST_METHOD'] == "POST"){header('Location: http://www...com/.../index.php?username='.$username.'&password='.$password.'');}
        require_once "header.php";
        include "login.php";
        require_once "footer.php";
        ?>

    login.php:

        <?php
        if (!isLoggedIn()) {
            if (isset($_POST['cmdlogin'])) {
                if (checkLogin($_POST['username'], $_POST['password'])) {
                    show_userbox();
                } else {
                    echo "Incorrect Login information !";
                    show_loginform();
                }
            } else {
                show_loginform();
            }
        } else {
            show_userbox();
        }
        ?>

    show_loginform function:

        function show_loginform($disabled = false)
        {
            echo '<form name="login-form" id="login-form" method="post" action="./index.php?'.$_SERVER['QUERY_STRING'].'">
                <div class="usernameformtext"><label title="Username">Username: </label></div>
                <div class="usernameformfield"><input tabindex="1" accesskey="u" name="username" type="text" maxlength="30" id="username" /></div>
                <div class="passwordformtext"><label title="Password">Password: </label></div>
                <div class="passwordformfield"><input tabindex="2" accesskey="p" name="password" type="password" maxlength="15" id="password" /></div>
                <div class="registertext"><a href="http://www...com/.../register.php" title="Register">Register</a></div>
                <div class="lostpasswordtext"><a href="http://www...com/.../lostpassword.php" title="Lost Password">Lost password?</a></div>
                <p class="loginbutton"><input tabindex="3" accesskey="l" type="submit" name="cmdlogin" value="Login" ';
            if ($disabled == true) {
                echo 'disabled="disabled"';
            }
            echo ' /></p></form>';
        }

    Read the article

  • Suggestions In Porting ASP.NET to MVC.NET - Is storing SiteConfiguration in Cache RESTful?

    - by DaveDev
    I've been tasked with porting/refactoring a web application platform that we have from ASP.NET to MVC.NET. Ideally I could use all the existing platform's configuration to determine the properties of the site that is presented. Is it RESTful to keep a SiteConfiguration object, which contains all of our various page configuration data, in the System.Web.Caching.Cache? There are a lot of settings that need to be loaded when the user accesses our site, so it's inefficient for each user to have to load the same settings every time they access it. Some of the data the SiteConfiguration object contains is as follows; it determines what master page / site configuration / style / UserControls are available to the client:

        public string SiteTheme { get; set; }
        public string Region { private get; set; }
        public string DateFormat { get; set; }
        public string NumberFormat { get; set; }
        public int WrapperType { private get; set; }
        public string LabelFileName { get; set; }
        public LabelFile LabelFile { get; set; }
        // the following two are the heavy ones
        // PageConfiguration contains lots of configuration data for each panel on the page
        public IList<PageConfiguration> Pages { get; set; }
        // This contains all the configurations for the factsheets we produce
        public List<ConfiguredFactsheet> ConfiguredFactsheets { get; set; }

    I was thinking of having a URL structure like this: www.MySite1.com/PageTemplate/UserControl/ where the domain determines the SiteConfiguration object that is created (MySite1.com is SiteId = 1, MySite2.com is SiteId = 2, and in turn style, configurations for various pages, etc.), and PageTemplate is the View that will be rendered; it simply defines a layout into which I'm going to inject the UserControls. Can somebody please tell me if I'm completely missing the RESTful point here? I'd like to refactor the platform into MVC because it's better to work in, and I want to do it right, but with a minimum of reinventing-the-wheel, because otherwise it won't get approval. Any suggestions otherwise? Thanks
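    Whatever the REST verdict, caching per-site settings is a common pattern; a minimal sketch of a keyed lookup against System.Web.Caching.Cache, where SiteConfiguration is the question's class and LoadFromStore is a hypothetical loader:

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class SiteConfigurationCache
        {
            public static SiteConfiguration GetOrLoad(int siteId)
            {
                string key = "SiteConfiguration:" + siteId;
                Cache cache = HttpContext.Current.Cache;

                var config = cache[key] as SiteConfiguration;
                if (config == null)
                {
                    // Benign race: two concurrent misses both load; the last
                    // Insert wins, which is acceptable for read-mostly config.
                    config = LoadFromStore(siteId); // hypothetical loader
                    cache.Insert(key, config, null,
                        Cache.NoAbsoluteExpiration,
                        TimeSpan.FromMinutes(30)); // sliding expiration, chosen arbitrarily
                }
                return config;
            }

            private static SiteConfiguration LoadFromStore(int siteId)
            {
                // Read from the existing platform's configuration store.
                throw new NotImplementedException();
            }
        }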

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases.

    The Rise of Laptops

    It's impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops: laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power (nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble) and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you're buying a laptop, you can't really build your own. You can't just buy a laptop case and start plugging components into it; even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops.

    Benefits to PC Building

    The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC's case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money: you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You'd often even end up with better components; you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn't have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable: a prebuilt PC may have a sealed case and be constructed in such a way as to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you've built on your own. If you want to upgrade your CPU or replace your graphics card, it's a definite benefit.

    Downsides to Building Your Own PC

    It's important to remember there are downsides to building your own PC, too. For one thing, it's just more work; sure, if you know what you're doing, building your own PC isn't that hard, but even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer's manufacturer and have them deal with it. You don't need to worry about what's wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What's malfunctioning: the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you'll have to determine which component is malfunctioning before you can send it off for replacement.

    Should You Still Build Your Own PC?

    Let's say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own Windows license. This is all going to wipe out the cost savings you'll see; with everything all told, you'll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you're an average PC user that uses your desktop for the typical things, there's no money to be saved from building your own PC.

    But maybe you're looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap, but you may not. Let's say you wanted to blow thousands of dollars on a gaming PC. If you're looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell's $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you'd pay an additional $600 on Alienware's website. The same graphics card costs $650 on Amazon or Newegg, so you'd be spending more money building the system yourself. Why? Dell's Alienware gets bulk discounts you can't get. And this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn't build their own.

    Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users; most people, even average gamers, would be fine going with a prebuilt system. If you're an average person or even an average gamer, you'll likely find that it's cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money.

    In summary, you probably shouldn't build your own PC. If you're an enthusiast, you may want to, but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper.

    Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

  • Nginx compiled --with-http_spdy_module yet raises errors complaining about ngx_http_spdy_module

    - by c19
    [emerg] 21101#0: the "spdy" parameter requires ngx_http_spdy_module in /etc/nginx/conf.d/cc.conf

    Isn't it the same module? It also causes a multi-redirection error. I have no idea what is going on. Full configure args:

        nginx version: nginx/1.4.2
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-pcre --with-http_ssl_module --with-http_spdy_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=/usr/local/src/openssl-1.0.1e

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking. The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading in almost all situations. In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary: would it have the same performance benefits as the ConcurrentQueue did? Well, after running some tests and multiple tweaks and tunes, I have good and bad news.

    But first, let's look at the tests. Obviously there are many things we can do with a dictionary. One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache. So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor. This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit"). However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss"). So here's the Accessor that will run the tests:

        internal class Accessor
        {
            public int Hits { get; set; }
            public int Misses { get; set; }
            public Func<int, string> GetDelegate { get; set; }
            public Action<int, string> AddDelegate { get; set; }
            public int Iterations { get; set; }
            public int MaxRange { get; set; }
            public int Seed { get; set; }

            public void Access()
            {
                var randomGenerator = new Random(Seed);

                for (int i = 0; i < Iterations; i++)
                {
                    // give a wide spread so will have some duplicates and some unique
                    var target = randomGenerator.Next(1, MaxRange);

                    // attempt to grab the item from the cache
                    var result = GetDelegate(target);

                    // if the item doesn't exist, add it
                    if (result == null)
                    {
                        AddDelegate(target, target.ToString());
                        Misses++;
                    }
                    else
                    {
                        Hits++;
                    }
                }
            }
        }

    Note that, so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:

    - Dictionary with mutex - just your standard generic Dictionary with a simple lock construct on an internal object.
    - Dictionary with ReaderWriterLockSlim - same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access.
    - ConcurrentDictionary - the new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

    So the approach to each of these is also fairly straightforward. Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

        Action<int, string> addDelegate = (key, val) =>
        {
            lock (_mutex)
            {
                _dictionary[key] = val;
            }
        };
        Func<int, string> getDelegate = key =>
        {
            lock (_mutex)
            {
                string val;
                return _dictionary.TryGetValue(key, out val) ? val : null;
            }
        };

    Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

        Action<int, string> addDelegate = (key, val) =>
        {
            _readerWriterLock.EnterWriteLock();
            _dictionary[key] = val;
            _readerWriterLock.ExitWriteLock();
        };
        Func<int, string> getDelegate = key =>
        {
            string val;
            _readerWriterLock.EnterReadLock();
            if (!_dictionary.TryGetValue(key, out val))
            {
                val = null;
            }
            _readerWriterLock.ExitReadLock();
            return val;
        };

    And finally the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple:

        Action<int, string> addDelegate = (key, val) =>
        {
            _concurrentDictionary[key] = val;
        };
        Func<int, string> getDelegate = key =>
        {
            string s;
            return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
        };

    Then I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete. Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances. All times are in milliseconds.

        Dictionary with Mutex Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            7916             3285
        10           8293             3481
        100          8799             3532
        1000         8815             3584

        Dictionary with ReaderWriterLockSlim Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            8445             3624
        10           11002            4119
        100          11076            3992
        1000         14794            4861

        Concurrent Dictionary
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            17443            3726
        10           14181            1897
        100          15141            1994
        1000         17209            2128

    The first test I did across the board is the Mostly Misses category. The mostly misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend: in both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution. But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds". So since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results. I tweaked the data so that the number of keys was much smaller than the number of iterations, giving me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache as to need to add it). And yes, indeed here we see that the ConcurrentDictionary is faster than the standard Dictionary. I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well. This makes sense, since the ConcurrentDictionary is read-optimized.

    Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement. I think this is largely because on the 10,000,000-hit test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run.

    So what does this tell us? Well, as in all things, ConcurrentDictionary is not a panacea. It won't solve all your woes and it shouldn't be the only Dictionary you ever use. So when should we use each?

    Use System.Collections.Generic.Dictionary when:

    - You need a single-threaded Dictionary (no locking needed).
    - You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
    - You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).

    And use System.Collections.Concurrent.ConcurrentDictionary when:

    - You need a multi-threaded Dictionary where reads are far more prevalent than writes.
    - You need to be able to iterate over the collection without locking it, even if it's being modified.

    Both Dictionaries have their strong suits; I have a feeling this is just one where you need to know from the design what you hope to use it for and make your decision based on those criteria.
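    One footnote to the Accessor's check-then-add pattern above: TryGetValue followed by an add is not atomic, so two threads can both miss and both insert. ConcurrentDictionary's GetOrAdd collapses the two steps into one call; a small sketch using the article's field name:

        // Returns the existing value on a hit, or runs the factory and stores
        // its result on a miss. The factory may execute more than once under
        // contention, but only one produced value is ever stored.
        string val = _concurrentDictionary.GetOrAdd(key, k => k.ToString());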

    Read the article

  • External HDD USB 3.0 failure

    - by Philip
    [ 2560.376113] usb 9-1: new high-speed USB device number 2 using xhci_hcd [ 2560.376186] usb 9-1: Device not responding to set address. [ 2560.580136] usb 9-1: Device not responding to set address. [ 2560.784104] usb 9-1: device not accepting address 2, error -71 [ 2560.840127] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2561.080182] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2566.096163] usb 10-1: device descriptor read/8, error -110 [ 2566.200096] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2571.216175] usb 10-1: device descriptor read/8, error -110 [ 2571.376138] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2571.744174] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2576.760116] usb 10-1: device descriptor read/8, error -110 [ 2576.864074] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2581.880153] usb 10-1: device descriptor read/8, error -110 [ 2582.040123] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2582.224139] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2582.464177] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2587.480122] usb 10-1: device descriptor read/8, error -110 [ 2587.584079] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2592.600150] usb 10-1: device descriptor read/8, error -110 [ 2592.760134] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2593.128175] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2598.144183] usb 10-1: device descriptor read/8, error -110 [ 2598.248109] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2603.264171] usb 10-1: device descriptor read/8, error -110 [ 2603.480157] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2608.496162] usb 10-1: device descriptor read/8, error -110 [ 2608.600091] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2613.616166] usb 10-1: device descriptor read/8, error -110 [ 2613.832170] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2618.848135] usb 10-1: device descriptor read/8, error -110 [ 2618.952079] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2623.968155] usb 10-1: device descriptor read/8, error -110 [ 2624.184176] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2629.200124] usb 10-1: device descriptor read/8, error -110 [ 2629.304075] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2634.320172] usb 10-1: device descriptor read/8, error -110 [ 2634.424135] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2634.776186] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2639.792105] usb 10-1: device descriptor read/8, error -110 [ 2639.896090] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2644.912172] usb 10-1: device descriptor read/8, error -110 [ 2645.128174] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2650.144160] usb 10-1: device descriptor read/8, error -110 [ 2650.248062] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2655.264120] usb 10-1: device descriptor read/8, error -110 [ 2655.480182] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2660.496121] usb 10-1: device descriptor read/8, error -110 [ 2660.600086] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2665.616167] usb 10-1: device descriptor read/8, error -110 [ 2665.832177] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2670.848110] usb 10-1: device 
descriptor read/8, error -110 [ 2670.952066] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2675.968081] usb 10-1: device descriptor read/8, error -110 [ 2676.072124] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2786.104531] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.104546] usb usb10: USB disconnect, device number 1 [ 2786.104686] xHCI xhci_drop_endpoint called for root hub [ 2786.104692] xHCI xhci_check_bandwidth called for root hub [ 2786.104942] xhci_hcd 0000:02:00.0: USB bus 10 deregistered [ 2786.105054] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.105065] usb usb9: USB disconnect, device number 1 [ 2786.105176] xHCI xhci_drop_endpoint called for root hub [ 2786.105181] xHCI xhci_check_bandwidth called for root hub [ 2786.109787] xhci_hcd 0000:02:00.0: USB bus 9 deregistered [ 2786.110134] xhci_hcd 0000:02:00.0: PCI INT A disabled [ 2794.268445] pci 0000:02:00.0: [1b73:1000] type 0 class 0x000c03 [ 2794.268483] pci 0000:02:00.0: reg 10: [mem 0x00000000-0x0000ffff] [ 2794.268689] pci 0000:02:00.0: PME# supported from D0 D3hot [ 2794.268700] pci 0000:02:00.0: PME# disabled [ 2794.276383] pci 0000:02:00.0: BAR 0: assigned [mem 0xd7800000-0xd780ffff] [ 2794.276398] pci 0000:02:00.0: BAR 0: set to [mem 0xd7800000-0xd780ffff] (PCI address [0xd7800000-0xd780ffff]) [ 2794.276419] pci 0000:02:00.0: no hotplug settings from platform [ 2794.276658] xhci_hcd 0000:02:00.0: enabling device (0000 -> 0002) [ 2794.276675] xhci_hcd 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 2794.276762] xhci_hcd 0000:02:00.0: setting latency timer to 64 [ 2794.276771] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.276913] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 9 [ 2794.395760] xhci_hcd 0000:02:00.0: irq 16, io mem 0xd7800000 [ 2794.396141] xHCI xhci_add_endpoint called for root hub [ 2794.396144] xHCI xhci_check_bandwidth called for root hub [ 2794.396195] hub 9-0:1.0: USB hub found [ 2794.396203] hub 9-0:1.0: 1 port detected [ 2794.396305] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.396371] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 10 [ 2794.396496] xHCI xhci_add_endpoint called for root hub [ 2794.396499] xHCI xhci_check_bandwidth called for root hub [ 2794.396547] hub 10-0:1.0: USB hub found [ 2794.396553] hub 10-0:1.0: 1 port detected [ 2798.004084] usb 1-3: new high-speed USB device number 8 using ehci_hcd [ 2798.140824] scsi21 : usb-storage 1-3:1.0 [ 2820.176116] usb 1-3: reset high-speed USB device number 8 using ehci_hcd [ 2824.000526] scsi 21:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2824.002263] sd 21:0:0:0: Attached scsi generic sg2 type 0 [ 2824.003617] sd 21:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2824.005139] sd 21:0:0:0: [sdb] Write Protect is off [ 2824.005149] sd 21:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2824.009084] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.009094] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.011944] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.011952] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.049153] sdb: sdb1 [ 2824.051814] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.051821] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.051825] sd 21:0:0:0: [sdb] Attached SCSI disk [ 2839.536624] usb 1-3: USB disconnect, device number 8 [ 2844.620178] usb 10-1: new SuperSpeed USB device number 2 using xhci_hcd [ 2844.640281] scsi22 : usb-storage 10-1:1.0 [ 
2850.326545] scsi 22:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2850.327560] sd 22:0:0:0: Attached scsi generic sg2 type 0 [ 2850.329561] sd 22:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2850.329889] sd 22:0:0:0: [sdb] Write Protect is off [ 2850.329897] sd 22:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2850.330223] sd 22:0:0:0: [sdb] No Caching mode page present [ 2850.330231] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.331414] sd 22:0:0:0: [sdb] No Caching mode page present [ 2850.331423] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.384116] usb 10-1: USB disconnect, device number 2 [ 2850.392050] sd 22:0:0:0: [sdb] Unhandled error code [ 2850.392056] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2850.392061] sd 22:0:0:0: [sdb] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 2850.392074] end_request: I/O error, dev sdb, sector 0 [ 2850.392079] quiet_error: 70 callbacks suppressed [ 2850.392082] Buffer I/O error on device sdb, logical block 0 [ 2850.392194] ldm_validate_partition_table(): Disk read failed. [ 2850.392271] Dev sdb: unable to read RDB block 0 [ 2850.392377] sdb: unable to read partition table [ 2850.392581] sd 22:0:0:0: [sdb] READ CAPACITY failed [ 2850.392584] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2850.392588] sd 22:0:0:0: [sdb] Sense not available. [ 2850.392613] sd 22:0:0:0: [sdb] Asking for cache data failed [ 2850.392617] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.392621] sd 22:0:0:0: [sdb] Attached SCSI disk [ 2850.732182] usb 10-1: new SuperSpeed USB device number 3 using xhci_hcd [ 2850.752228] scsi23 : usb-storage 10-1:1.0 [ 2851.752709] scsi 23:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2851.754481] sd 23:0:0:0: Attached scsi generic sg2 type 0 [ 2851.756576] sd 23:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2851.758426] sd 23:0:0:0: [sdb] Write Protect is off [ 2851.758436] sd 23:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2851.758779] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.758787] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.759968] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.759977] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.817710] sdb: sdb1 [ 2851.820562] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.820568] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.820572] sd 23:0:0:0: [sdb] Attached SCSI disk [ 2852.060352] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.076533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.076538] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.196329] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.212593] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.212599] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.456290] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.472402] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.472408] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.624304] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.640531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.640536] xhci_hcd 0000:02:00.0: 
xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.772296] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.788536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.788541] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.920349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.936536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.936540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2853.072287] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2853.088565] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2853.088570] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.176339] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.192561] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.192567] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.320349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.336526] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.336531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.468344] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.484551] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.484556] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.612349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.628540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.628545] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.756350] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.772528] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.772533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.848116] usb 10-1: USB disconnect, device number 3 [ 2884.851493] scsi 23:0:0:0: [sdb] killing request [ 2884.851501] scsi 23:0:0:0: [sdb] killing request [ 2884.851699] scsi 23:0:0:0: [sdb] Unhandled error code [ 2884.851702] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2884.851708] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2b ee 00 00 3e 00 [ 2884.851721] end_request: I/O error, dev sdb, sector 6237166 [ 2884.851726] Buffer I/O error on device sdb1, logical block 6237102 [ 2884.851730] Buffer I/O error on device sdb1, logical block 6237103 [ 2884.851738] Buffer I/O error on device sdb1, logical block 6237104 [ 2884.851741] Buffer I/O error on device sdb1, logical block 6237105 [ 2884.851744] Buffer I/O error on device sdb1, logical block 6237106 [ 2884.851747] Buffer I/O error on device sdb1, logical block 6237107 [ 2884.851750] Buffer I/O error on device sdb1, logical block 6237108 [ 2884.851753] Buffer I/O error on device sdb1, logical block 6237109 [ 2884.851757] Buffer I/O error on device sdb1, logical block 6237110 [ 2884.851807] scsi 23:0:0:0: [sdb] Unhandled error code [ 2884.851810] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2884.851813] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2c 2c 00 00 3e 00 [ 2884.851824] end_request: I/O error, dev sdb, 
sector 6237228 [ 2885.168190] usb 10-1: new SuperSpeed USB device number 4 using xhci_hcd [ 2885.188268] scsi24 : usb-storage 10-1:1.0 Please help me with my problem. I got this after running dmesg.

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video

    - by pinaldave
    During performance tuning conversations, the very first question people often ask is: which queries are offending the server? In other words, let us identify the queries which are the most resource intensive. The resources are often described as memory, CPU, or IO. Queries doing lots of reads or writes are for sure resource intensive, as are queries taking maximum CPU time. Performance tuning is a very deep subject, and we all have our own preferences regarding what the first step of tuning should be and what should be taken with a grain of salt. Though there is no denying that a query which uses more resources than it should for sure requires tuning.

    There are many ways to identify queries using intensive resources (e.g. Extended Events), but in this one we will go by a simple DMV. There is a small gotcha we all have to remember about the usage of DMVs: they only bring back results from the existing cache. So if you have a query which is very resource intensive but is not cached, or if you have explicitly removed the query from the cache, it will not be part of the result returned by this DMV. It is quite possible that a query has aged and been removed from the cache if your cache is not huge. If your cache is large, you may want to be careful about running this query during business hours, as this query itself can be resource intensive. Get the script to identify resource intensive queries from Here.

    Related Tips in SQL in Sixty Seconds:

    - SQL SERVER – Find Most Expensive Queries Using DMV
    - Simple Example to Configure Resource Governor – Introduction to Resource Governor
    - SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer
    - SQL SERVER – Wait Stats – Wait Types – Wait Queues – Day 0 of 28

    Reference: Pinal Dave (http://blog.sqlauthority.com)
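    The linked script is not reproduced here, but a minimal sketch of the same idea is below, querying sys.dm_exec_query_stats ordered by CPU time (one choice among several; ordering by total_logical_reads finds IO-heavy queries instead). The connection string is a placeholder, and the login needs VIEW SERVER STATE permission:

        using System;
        using System.Data.SqlClient;

        class TopQueries
        {
            const string Sql = @"
                SELECT TOP 10
                       qs.total_worker_time, qs.total_logical_reads,
                       qs.execution_count, st.text
                FROM sys.dm_exec_query_stats AS qs
                CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
                ORDER BY qs.total_worker_time DESC;";

            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Integrated Security=true")) // placeholder
                using (var cmd = new SqlCommand(Sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0,15} cpu  {1,12} reads  {2,8} execs",
                                reader.GetInt64(0), reader.GetInt64(1), reader.GetInt64(2));
                }
            }
        }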

    Read the article
