Search Results

Search found 7323 results on 293 pages for 'cache manifest'.


  • Moved Magento to a new server; images of existing products are not shown in the frontend

    - by forrest gump
    I have moved a Magento website to a new server to optimize speed. All images appear to be replaced by image placeholders. I tried setting permissions on media to 777; when I delete the cache folder under media and refresh, a new cache folder is created automatically, but it contains only placeholder images. I flushed the image cache and reindexed in the Magento backend, but still no luck. What should I do now? Some things that might be helpful:
    - It's fine to create a new product with images.
    - After flushing the cache, only placeholder images are created.
    - Tried magento-cleanup.php, no luck.
    - Tried removing .htaccess in the media folder.
    - The product images are there in the media folder, just not in the cache folder, and Magento only looks at the cache folder for images. :(
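
    Since Magento regenerates the cache folder on page load, a quick shell check is worth doing first (a sketch; the www-data user and default Magento 1 layout are assumptions):

        # Hypothetical check: make sure the web server user owns media and can
        # recreate the image cache (paths assume a default Magento install).
        chown -R www-data:www-data media/
        rm -rf media/catalog/product/cache/
        # reload a product page; resized images should reappear under cache/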


  • Difference between redirecting to a page and coming back to the same page after pressing the back button

    - by Mac
    I have a page on which I disable caching with this code:

        HttpContext.Current.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
        HttpContext.Current.Response.Cache.SetValidUntilExpires(false);
        HttpContext.Current.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
        HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        HttpContext.Current.Response.Cache.SetNoStore();

    Now I want to know: is there any difference between arriving at this page through a proper link and coming back to it with the browser's back button? And is there any way to detect which one happened?
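
    For the detection part: the request a browser sends after Back looks like any other GET, so there is no header to test. One common workaround (a sketch of the general idea, not from the post; names are made up) is to issue a fresh token with every render and compare it on the next request:

        // Hypothetical back-button heuristic: every render gets a new token,
        // and links on the page carry the token that was current when the
        // page was produced. A request arriving with an older token most
        // likely came from a page restored via the back button.
        protected void Page_Load(object sender, EventArgs e)
        {
            string current = (string)Session["NavToken"];
            string arrived = Request.QueryString["t"];

            bool cameFromStalePage = arrived != null && arrived != current;

            // issue a new token for the page we are about to render
            string next = Guid.NewGuid().ToString("N");
            Session["NavToken"] = next;
            // links rendered into this page would append "?t=" + next
        }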


  • Output caching in HTTP Handler and SetValidUntilExpires

    - by mayor
    I'm using output caching in my custom HTTP handler in the following way:

        public void ProcessRequest(HttpContext context)
        {
            TimeSpan freshness = new TimeSpan(0, 0, 0, 60);
            context.Response.Cache.SetExpires(DateTime.Now.Add(freshness));
            context.Response.Cache.SetMaxAge(freshness);
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetValidUntilExpires(true);
            ...
        }

    It works, but the problem is that refreshing the page with F5 regenerates the page (instead of using the cache) despite the last code line: context.Response.Cache.SetValidUntilExpires(true); Any suggestions?
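
    One detail that may matter here: SetValidUntilExpires governs ASP.NET's server-side output cache, while F5 makes the browser revalidate, and a plain IHttpHandler runs on every request regardless. A handler can short-circuit that revalidation itself (a sketch; the ETag value is a made-up placeholder):

        // Hypothetical revalidation sketch: answer conditional requests with
        // 304 so an F5 does not force full page regeneration.
        public void ProcessRequest(HttpContext context)
        {
            const string etag = "\"handler-v1\"";

            if (context.Request.Headers["If-None-Match"] == etag)
            {
                context.Response.StatusCode = 304;   // browser reuses its copy
                return;
            }

            context.Response.Cache.SetETag(etag);
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            // ... write the actual response body here ...
        }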


  • 7u10: JavaFX packaging tools update

    - by igor
    The last few weeks were very busy here at Oracle. JavaOne 2012 is next week, so come see us there! Meanwhile, I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening and have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in JIRA. In this post I'll focus on several new packaging features added in JDK 7 update 10:
    - Self-Contained Applications: Select Java Runtime to bundle
    - Self-Contained Applications: Create Package without Java Runtime
    - Self-Contained Applications: Package non-JavaFX application
    - Option to disable proxy setup in the JavaFX launcher
    - Ability to specify codebase for WebStart application
    - Option to update existing jar file
    - Self-Contained Applications: Specify application icon
    - Self-Contained Applications: Pass parameters on the command line
    All these features and a number of other important bug fixes are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback!

    Self-Contained Applications: Select Java Runtime to bundle

    The packager tools in 7u6 assume the current JDK (based on the java.home property) is the source for the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify explicitly what to embed is handy. For example, an IDE may be using a fixed JDK to build the project, and this is not the version you want to bundle into your application. To make this more flexible, we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible). A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is OK to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of the JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task):

        <fx:platform basedir="${java.home}"/>
        <fx:platform basedir="c:\tools\jdk7"/>

    Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with the bundled FCS version of JRE 7 update 6.

    Self-Contained Applications: Create Package without Java Runtime

    This may sound a bit puzzling at first glance. A package without an embedded Java runtime is not really self-contained and obviously will not help with:
    - Deployment on fresh systems. The JRE needs to be installed separately (and this step will require admin permissions).
    - Possible compatibility issues due to updates of the system runtime.
    However, these packages are much, much smaller in size. If download size matters and you are confident that users have the recommended system JRE installed, then this may be a good option to consider to improve the user experience for install and launch. Technically, this is implemented as an extension of the previous feature. Pass an empty string as the value of the 'basedir' attribute and this will be treated as a request not to bundle the Java runtime, e.g.
        <fx:platform basedir=""/>

    Self-Contained Applications: Package non-JavaFX application

    One of the popular questions people ask about self-contained applications is: can I package my Java application as a self-contained application? Absolutely. This is true even for the tools shipped with JDK 7 update 6. Simply follow the steps for creating a package for a Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with that? Well, there are a few caveats:
    - The bundle size is larger, because JavaFX is bundled while it is not really needed.
    - The main application jar needs to be packaged to comply with JavaFX packaging requirements (and this may not be trivial to achieve in your existing build scripts).
    - The JavaFX application launcher may not work well with the startup logic of your application (for example, the launcher will initialize the networking stack, and this may void custom networking settings in your application code).
    In JDK 7 update 10, <fx:deploy> was updated to accept an arbitrary executable jar as input. The self-contained application package will be created preserving the input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with the first point above but resolves the other two. More formally, the following assertions must be true for packaging to succeed:
    - The application can be launched as "java -jar YourApp.jar" from the command line.
    - The mainClass attribute of <fx:application> refers to the application main class.
    - <fx:resources> lists all resources needed for the application.
    To give you an example, let's assume we need to create a bundle for an application consisting of 3 jars:

        dist/javamain.jar
        dist/lib/somelib.jar
        dist/morelibs/anotherlib.jar

    where javamain.jar has a manifest with

        Main-Class: app.Main
        Class-Path: lib/somelib.jar morelibs/anotherlib.jar

    Here is sample ant code to package it:

        <target name="package-bundle">
          <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
                   uri="javafx:com.sun.javafx.tools.ant"
                   classpath="${javafx.tools.ant.jar}"/>
          <fx:deploy nativeBundles="all" width="100" height="100"
                     outdir="native-packages/" outfile="MyJavaApp">
            <info title="Sample project" vendor="Me"
                  description="Test built from Java executable jar"/>
            <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/>
            <fx:resources>
              <fx:fileset dir="dist">
                <include name="javamain.jar"/>
                <include name="lib/somelib.jar"/>
                <include name="morelibs/anotherlib.jar"/>
              </fx:fileset>
            </fx:resources>
          </fx:deploy>
        </target>

    Option to disable proxy setup in the JavaFX launcher

    Since JavaFX 2.2 (part of JDK 7u6), properly packaged JavaFX applications have their proxy settings initialized according to the Java Runtime configuration settings. This is handy for most applications accessing the network, with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost) then they must be set before the networking stack is initialized. Proxy detection will initialize the networking stack, and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if the network is misconfigured) but not really user friendly. Now proxy setup will be disabled if the manifest of the main application jar has a "JavaFX-Feature-Proxy" entry with the value "None".
    Here is a simple example of adding this entry using the <fx:jar> task:

        <fx:jar destfile="dist/sampleapp.jar">
          <fx:application refid="myapp"/>
          <fx:resources refid="myresources"/>
          <fileset dir="build/classes"/>
          <manifest>
            <attribute name="JavaFX-Feature-Proxy" value="None"/>
          </manifest>
        </fx:jar>

    Ability to specify codebase for WebStart application

    JavaFX applications do not need to specify a codebase (i.e. the absolute location where the application code will be deployed) for most real-world deployment scenarios. This is convenient, as the application does not need to be modified when it is moved from the development to the deployment environment. However, some developers want to ensure that copies of their application's JNLP file will redirect to the master location. This is where a codebase is needed. To avoid the need to edit the JNLP file manually, the <fx:deploy> task now accepts an optional codebase attribute. If the attribute is not specified, the packager will generate the same no-codebase files as before. If a codebase value is explicitly specified, then the generated JNLP files (including JNLP content embedded into the web page) will use it. Here is an example:

        <fx:deploy width="600" height="400"
                   outdir="Samples" codebase="http://localhost/codebaseTest"
                   outfile="TestApp">
          ....
        </fx:deploy>

    Option to update existing jar file

    The JavaFX packaging procedures are optimized for new applications that can use ant or the command-line javafxpackager utility. This may lead to some redundant steps when you add them to your existing build process. One typical situation is that you might already have a build procedure that produces an executable jar file with a custom manifest. To properly package it as a JavaFX executable jar you would need to unpack it and then use javafxpackager or <fx:jar> to create the jar again (and you need to make sure you pass along all the important details from your custom manifest). We added an option to pass a jar file as input to javafxpackager and <fx:jar>. This simplifies integration of the JavaFX packaging tools into an existing build process as a postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions! On the technical side this works as follows. Both <fx:jar> and javafxpackager will attempt to update an existing jar file if it is the only input file. The update process will add the JavaFX launcher classes and update the jar manifest with JavaFX attributes. Your custom attributes will be preserved. The update can be performed in place, or the result can be saved to a different file. The Main-Class and Class-Path elements (if present) of the manifest of the input jar file will be used for the JavaFX application unless they are explicitly overridden in the packaging command you use. E.g. the mainClass attribute of <fx:application> (or -appclass in the javafxpackager case) overrides an existing Main-Class in the jar manifest. Note that the class specified in the Main-Class attribute can either extend JavaFX Application or provide a static main() method.
    Here are examples of updating a jar file using javafxpackager. Create a new JavaFX executable jar as a copy of a given jar file:

        javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar

    Update an existing jar file to be a JavaFX executable jar, using test.Fish as the main application class:

        javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar

    And here is an example of using <fx:jar> to create a new JavaFX executable jar from the existing fish_proto.jar:

        <fx:jar destfile="dist/fish.jar">
          <fileset dir="dist">
            <include name="fish_proto.jar"/>
          </fileset>
        </fx:jar>

    Self-Contained Applications: Specify application icon

    The only way to specify an application icon for a self-contained application using the tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved and you can also specify the icon using the <fx:icon> tag. Here is an example:

        <fx:deploy ...>
          <fx:info>
            <fx:icon href="default.png"/>
          </fx:info>
          ...
        </fx:deploy>

    A few things to keep in mind:
    - Only the default kind of icon is applicable to self-contained applications (as of now).
    - The icon should follow platform-specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac).

    Self-Contained Applications: Pass parameters on the command line

    JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param>, and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes, including standalone applications. It is also possible to pass parameters to a JavaFX application from a web page that hosts it, using <fx:htmlParam>. Prior to JavaFX 2.2, this was only supported for embedded applications. Starting from JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications too. See the JavaFX deployment guide for more details on this. However, there was no way to pass dynamic parameters to a self-contained application. This has been improved, and now the native launchers will delegate parameters from the command line to the application code. I.e., to pass a parameter to the application you simply need to run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue".
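
    For completeness, a minimal sketch of the receiving side — a hypothetical Application subclass reading that unnamed parameter (the class name and window handling are illustrative, not from the post):

        // Hypothetical JavaFX application reading the first unnamed parameter
        // passed on the command line ("myapp.exe somevalue").
        import javafx.application.Application;
        import javafx.stage.Stage;

        public class MyApp extends Application {
            @Override
            public void start(Stage stage) {
                java.util.List<String> args = getParameters().getUnnamed();
                String first = args.isEmpty() ? "(none)" : args.get(0);
                System.out.println("First unnamed parameter: " + first);
                stage.setTitle(first);
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);
            }
        }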


  • Is NSManagedObjectContext autosaved or am I looking at NSFetchedResultsController's cache?

    - by Andreas
    I'm developing an iPhone app where I use an NSFetchedResultsController in the main table view controller. I create it like this in viewDidLoad of the main table view controller:

        NSSortDescriptor *sortDescriptorDate = [[NSSortDescriptor alloc] initWithKey:@"date" ascending:YES];
        NSSortDescriptor *sortDescriptorTime = [[NSSortDescriptor alloc] initWithKey:@"start" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptorDate, sortDescriptorTime, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];
        [sortDescriptorDate release];
        [sortDescriptorTime release];
        [sortDescriptors release];
        controller = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                         managedObjectContext:context
                                                           sectionNameKeyPath:@"date"
                                                                    cacheName:nil];
        [fetchRequest release];
        NSError *error;
        BOOL success = [controller performFetch:&error];

    Then, in a subsequent view, I create a new object in the context:

        TestObject *testObject = [NSEntityDescription insertNewObjectForEntityForName:@"TestObject"
                                                               inManagedObjectContext:context];

    The TestObject has several related objects, which I create in the same way and add to the testObject using the provided add...Objects methods. Then, if, before saving the context, I press cancel and go back to the main table view, nothing is shown, as expected. However, if I restart the app, the object I created in the context shows up in the main table view. How come? At first I thought the NSFetchedResultsController was reading from its cache, but as you can see I set the cache name to nil just to test. Also, [context hasChanges] returns true after I restart. What am I missing here?
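
    For reference, one way a cancel path can explicitly discard inserted objects instead of leaving them in the context is a rollback (a sketch; the handler name and navigation call are assumptions about the app, not from the post):

        // Hypothetical cancel handler: discard every unsaved insert/update/delete
        // so nothing lingers in the managed object context.
        - (IBAction)cancel:(id)sender {
            [context rollback];
            [self.navigationController popViewControllerAnimated:YES];
        }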


  • How to flush DNS cache in Windows Mobile programmatically?

    - by Bounded
    Hello, my Windows Mobile application (written in C# with the Compact Framework) needs to know whether a particular machine is reachable. To achieve this I thought I would use a ping mechanism, so I tried the Ping class implemented in the OpenNETCF framework (the System.Net.NetworkInformation.Ping class from the full .NET Framework is not part of the Compact Framework). Because I give the Ping.Send function a host name, it first tries to resolve this host name to an IP address. I observe the following problem: if the first DNS resolution fails (because the network is down at that moment), and the application immediately tries to send the ping again, it fails too, even if the network is no longer down. I checked with a well-known network protocol analyzer and saw that only the requests for the first DNS resolution are sent; the requests for the DNS resolution of the second ping are never sent. Why is the second DNS request not sent? Is there a DNS cache mechanism on such Windows Mobile devices? If yes, can this mechanism be flushed programmatically? EDIT: I gave up finding a solution for this DNS flush and chose to ping an IP address instead of a machine name. The problem with pinging a hard-coded IP address is that we have to be 100% sure that this IP will not change. The gateway IP can be used because it's always reachable (if it is not, it means the network is down).


  • How do I programmatically access the face cache in Windows Live Photo Gallery?

    - by acorderob
    I'm not talking about the "people tags" embedded in the XMP packets of JPEGs; I'm talking about the face database used to recognize new faces. I want to add to my program the option to recognize faces using the already-trained database of WLPG. I managed to use the API (a type library DLL) to detect faces, but to recognize them it needs an Exemplar Cache object that is not available in the same API. I could create my own object, but I want to use the already existing one to avoid duplicate training for the user. I know the database is in C:\Users\\AppData\Local\Microsoft\Windows Live Photo Gallery and that it is in SQL Server Compact format. I tried to open the database with Visual Studio 2010, but it says it is an older (pre-3.5) version and needs to be upgraded. I don't want to change the database, just read it. I don't know how WLPG reads it, since apparently I don't have the correct OLEDB provider version. I would also prefer to read it without accessing the database directly, but I don't see any DLL that exports that functionality. BTW, I'm using Delphi 2010. Any ideas?


  • Anyone doing php-fpm and APC? Cache files missing in /tmp

    - by Vangel
    I have been using xcache for a long time. Recently I put together php-fpm and nginx, and I see APC is installed and enabled in the configuration. I assumed that APC would automatically compile the files to opcodes and store them somewhere. According to the config they should be in /tmp/apx.xxxx, but there is no such thing there. What am I missing? Any clues to investigate would be much appreciated. Please note I am using PHP 5.3 FPM. Thanks, mates. UPDATE: I looked at apc.php and it says things are fine. I will just have to take its word for it.


  • Windows Feature List blank, Updates fail, system readiness tool says files are corrupt

    - by Chris T
    I tried to install IIS and to my surprise the feature/components list was blank =[. I tried the System Update Readiness Tool and it creates the following log:

        =================================
        Checking System Update Readiness.
        Binary Version 6.1.7100.4104
        Package Version 5.0
        2009-09-30 23:38

        Checking Deployment Packages

        Checking Package Manifests and catalogs.
        (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_1_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
        (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_1_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.cat
        (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_for_KB973540_RTM~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
        (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_for_KB973540_RTM~31bf3856ad364e35~amd64~~6.1.1.0.cat
        (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
        (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.cat

        Checking package watchlist.
        Checking component watchlist.
        Checking packages.
        Checking component store
        (f) CSI Catalog Corrupt 0x800B0003 winsxs\Catalogs\efdfd17ac9909b9d81e1455d9abf291319237877c23df8a67a3f5a1f2f9e034f.cat 5fbf0b9691b..6772f1b0a58_31bf3856ad364e35_6.1.7100.4127_c2160c1f90006ee6
        (f) CSI Manifest All Zeros 0x00000000 WinSxS\Manifests\amd64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_35ba254677b2a294.manifest amd64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_35ba254677b2a294
        (f) CSI Manifest All Zeros 0x00000000 WinSxS\Manifests\wow64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_400ecf98ac13648f.manifest wow64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_400ecf98ac13648f

        Summary:
        Seconds executed: 240
        Found 9 errors
        CSI Manifest All Zeros Total Count: 2
        CSI Catalog Corrupt Total Count: 1
        CBS MUM Corrupt Total Count: 3
        CBS Catalog Corrupt Total Count: 3

    How can I fix this?


  • Is a larger hard drive with the same cache, rpm, and bus type faster?

    - by Joel Coehoorn
    I recently heard that, all else being equal, larger hard drives are faster than smaller ones. It has to do with more bits passing under the read head as the drive spins: since a large drive packs the bits more tightly, the same amount of rotation presents more data to the read head. I had not heard this before, and was inclined to believe that the read heads expect bits at a specific rate and the drives would instead stagger the data, so that the two drives would be the same speed. I now find myself looking at purchasing one of two computer models for the school where I work. One model has an 80 GB drive, the other a 400 GB (for ~$13 more). The size of the drive is immaterial, since users will keep their files on a file server where they can be backed up. But if the 400 GB drive will really deliver a performance boost, the extra money is probably worth it. Thoughts?
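
    A rough way to frame the claim (illustrative numbers, not from the post): sequential throughput is approximately (rotations per second) × (bytes per track under the head). At 7200 rpm that is 120 rotations per second, so a track holding 1 MB streams about 120 MB/s while a track holding 0.5 MB streams about 60 MB/s at the same rpm. Denser platters raise bytes per track, which is why the larger drive can be faster for sequential reads; seek-dominated random access changes very little.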


  • How much HDD space would I need to cache the web while respecting robots.txt?

    - by Koning Baard XIV
    I want to experiment with creating a web crawler. I'll start by indexing a few medium-sized websites like Stack Overflow or Smashing Magazine. If it works, I'd like to start crawling the entire web. I'll respect robots.txt. I'll save all html, pdf, word, excel, powerpoint, keynote, etc. documents (not exes, dmgs, etc., just documents) in a MySQL DB. Next to that, I'll have a second table containing all results and descriptions, and a table with words and on which pages to find those words (a.k.a. an index). How much HDD space do you think I need to save all the pages? Is it as low as 1 TB, or is it about 10 TB, 20? Maybe 30? 1000? Thanks
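
    A back-of-the-envelope check (assumed numbers, not from the question): if the average stored document is around 50 KB, then 1 TB holds roughly 20 million documents, 10 TB about 200 million, and a billion-page crawl lands near 50 TB before compression or deduplication.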


  • Squid not caching files (Randomly)

    - by Heinrich
    I want to use an intercepting Squid server to cache specific large zip files that users in my network download frequently. I have configured Squid on a gateway machine, and caching works for "static" zip files that are served from an Apache web server outside our network. The files that I want Squid to cache are zip files of about 100 MB served from a Heroku-hosted Rails application. I set an ETag header (a SHA hash of the zip file on the server) and a Cache-Control: public header. However, these files are not cached by Squid. This, for example, is a request that is not cached:

        $ curl --no-keepalive -v -o test.zip --header "X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a" --header "X-Customer: 5" "http://MY_APP.herokuapp.com/api/device/v1/media/download?version=latest"
        * Adding handle: conn: 0x7ffd4a804400
        * Adding handle: send: 0
        * Adding handle: recv: 0
        ...
        > GET /api/device/v1/media/download?version=latest HTTP/1.1
        > User-Agent: curl/7.30.0
        > Host: MY_APP.herokuapp.com
        > Accept: */*
        > X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a
        > X-Customer: 5
        >
        < HTTP/1.1 200 OK
        * Server Cowboy is not blacklisted
        < Server: Cowboy
        < Date: Mon, 18 Aug 2014 14:13:27 GMT
        < Status: 200 OK
        < X-Frame-Options: SAMEORIGIN
        < X-Xss-Protection: 1; mode=block
        < X-Content-Type-Options: nosniff
        < ETag: "95e888938c0d539b8dd74139beace67f"
        < Content-Disposition: attachment; filename="e7cce850ae728b81fe3f315d21a560af.zip"
        < Content-Transfer-Encoding: binary
        < Content-Length: 125727431
        < Content-Type: application/zip
        < Cache-Control: public
        < X-Request-Id: 7ce6edb0-013a-4003-a331-94d2b8fae8ad
        < X-Runtime: 1.244251
        < X-Cache: MISS from AAA.fritz.box
        < Via: 1.1 vegur, 1.1 AAA.fritz.box (squid/3.3.11)
        < Connection: keep-alive

    In the logs Squid reports a TCP_MISS. This is the relevant excerpt from my squid file:

        # Squid normally listens to port 3128
        http_port 3128
        http_port 3129 intercept

        # Uncomment and adjust the following to add a disk cache directory.
        maximum_object_size 1000 MB
        maximum_object_size_in_memory 1000 MB
        cache_dir ufs /usr/local/var/cache/squid 10000 16 256
        cache_mem 2000 MB

        # Leave coredumps in the first cache dir
        coredump_dir /usr/local/var/cache/squid
        cache_store_log daemon:/usr/local/var/logs/cache_store.log

        #refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern -i \.(zip) 525600 100% 525600 override-expire ignore-no-cache ignore-no-store
        refresh_pattern . 0 20% 4320

        ## DNS Configuration
        dns_nameservers 8.8.8.8 8.8.4.4

    After trying around for some time I realized that Squid sometimes decides that my file is cacheable and sometimes not, depending on whether and when I enable/disable the dns_nameservers directive. What could be wrong here?


  • What's wrong with this vcl config for varnish-cache as a load balancer?

    - by dabito
    I have the following configuration active in the default.vcl varnish file on the machine that balances load for two other machines (the other two machines also run varnish). My intention is to have this server do only the load balancing and have the other machines do the processing and also their own caching. My problem is that even with light config testing (not even a stress test, just a few requests a minute) I get the guru meditation error and have to restart varnish. This is the default.vcl for the load-balancing server:

        backend vader {
          .host = "app1.server.com";
          .probe = {
            .url = "/";
            .interval = 10s;
            .timeout = 4s;
            .window = 5;
            .threshold = 3;
          }
        }
        backend malgus {
          .host = "app2.server.com";
          .probe = {
            .url = "/";
            .interval = 10s;
            .timeout = 4s;
            .window = 5;
            .threshold = 3;
          }
        }
        director dooku round-robin {
          { .backend = vader; }
          { .backend = malgus; }
        }
        sub vcl_recv {
          if (req.http.host ~ "^balancer.server.com$") {
            set req.backend = dooku;
          }
        }

    Am I doing something wrong or missing something? EDIT: This is varnishlog's output:

        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345839995 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345839998 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345840001 1.0
        0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876
        0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345840004 1.0
        14 SessionOpen c 10.150.7.151 38272 :80
        14 ReqStart c 10.150.7.151 38272 458200540
        14 RxRequest c GET
        14 RxURL c /
        14 RxProtocol c HTTP/1.1
        14 RxHeader c Host: dooku-dev.excelsior.com
        14 RxHeader c Connection: keep-alive
        14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        14 RxHeader c Accept-Encoding: gzip,deflate,sdch
        14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4
        14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33
        14 VCL_call c recv pass
        14 VCL_call c hash
        14 Hash c /
        14 Hash c dooku-dev.excelsior.com
        14 VCL_return c hash
        14 VCL_call c pass pass
        14 FetchError c no backend connection
        14 VCL_call c error deliver
        14 VCL_call c deliver deliver
        14 TxProtocol c HTTP/1.1
        14 TxStatus c 503
        14 TxResponse c Service Unavailable
        14 TxHeader c Server: Varnish
        14 TxHeader c Content-Type: text/html; charset=utf-8
        14 TxHeader c Retry-After: 5
        14 TxHeader c Content-Length: 418
        14 TxHeader c Accept-Ranges: bytes
        14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT
        14 TxHeader c X-Varnish: 458200540
        14 TxHeader c Age: 0
        14 TxHeader c Via: 1.1 varnish
        14 TxHeader c Connection: close
        14 Length c 418
        14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663
        14 SessionClose c error
        14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418
        14 SessionOpen c 10.150.7.151 38273 :80
        14 ReqStart c 10.150.7.151 38273 458200541
        14 RxRequest c GET
        14 RxURL c /favicon.ico
        14 RxProtocol c HTTP/1.1
        14 RxHeader c Host: dooku-dev.excelsior.com
        14 RxHeader c Connection: keep-alive
        14 RxHeader c Accept: */*
        14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        14 RxHeader c Accept-Encoding: gzip,deflate,sdch
        14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4
        14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33
        14 VCL_call c recv pass
        14 VCL_call c hash
        14 Hash c /favicon.ico
        14 Hash c dooku-dev.excelsior.com
        14 VCL_return c hash
        14 VCL_call c pass pass
        14 FetchError c no backend connection
        14 VCL_call c error deliver
        14 VCL_call c deliver deliver
        14 TxProtocol c HTTP/1.1
        14 TxStatus c 503
        14 TxResponse c Service Unavailable
        14 TxHeader c Server: Varnish
        14 TxHeader c Content-Type: text/html; charset=utf-8
        14 TxHeader c Retry-After: 5
        14 TxHeader c Content-Length: 418
        14 TxHeader c Accept-Ranges: bytes
        14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT
        14 TxHeader c X-Varnish: 458200541
        14 TxHeader c Age: 0
        14 TxHeader c Via: 1.1 varnish
        14 TxHeader c Connection: close
        14 Length c 418
        14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796
        14 SessionClose c error
        14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418


  • Does Exchange Cached Mode affect email marker refresh time in other Outlook clients?

    - by David
    We have users who share a single email account by using the Additional Email option under their accounts. Now they want to assign emails to one another using the markers alongside the emails. We noticed that when changing the color of a marker, one Outlook client updated immediately but another did not. It looked like they were both set to "Cached Mode". Is it likely that caching affected the refresh of the client? Would it be better to turn off cached mode if we are using Outlook this way?


  • How can I set up an nginx cache strategy that first tries Amazon S3, then memcache, and falls back on a miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcache servers (Amazon ElastiCache), but this is really expensive. That's why, for the files that almost never change, I want to upload them to Amazon S3 and shut down one memcache server. Here is my conf:

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcached;
        }
        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 @fallback;
        }
        location @fallback {
            try_files $uri $uri/index.html;
        }

    I don't know why, but the config doesn't work for the final fallback: if I get an Amazon S3 hit it works, and if I get an Amazon S3 miss and a memcache hit it works, but on an Amazon S3 miss followed by a memcache miss, resolving the last fallback fails. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?
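
    One detail worth noting (an observation about the config as posted, not a confirmed fix): error_page points at @fallback_memcached while the named location is @fallback_memcache. If that mismatch is real and not a transcription slip, aligning the two names would be the first thing to try:

        # Hypothetical fix sketch: make the error_page target and the named
        # location agree on one spelling.
        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcache;
        }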


  • 7u45 Caller-Allowable-Codebase and Trusted-Library

    - by costlow
    Java 7 update 45 (October 2013) changed the interactions between JavaScript and Java Applets made through LiveConnect. The 7u45 update is a critical patch update that has also raised the security baseline, and users are strongly recommended to upgrade. Versions below the security baseline used to apply the Trusted-Library Manifest attribute to call between sandboxed code and higher-privileged code. The Trusted-Library value was a Boolean true or false. Security changes for the current security baseline (7u45) introduced a different Caller-Allowable-Codebase attribute that indicates precisely where these LiveConnect calls can originate. For example, LiveConnect calls should not necessarily originate from 3rd-party components of a web page or other DOM-based browser manipulations (pdf). Additional information about these can be found at "JAR File Manifest Attributes for Security." The workaround for end-user dialogs is described in the 7u45 release notes, which explain removing the Trusted-Library attribute for LiveConnect calls in favor of Caller-Allowable-Codebase. This provides the necessary protections (without warnings) for all users at or above the security baseline. Client installations automatically detect updates to the secure baseline and prompt users to upgrade.

    Warning dialogs above or below

    Both of these attributes should work together to support the various versions of client installations. We are aware of the issue that modifying the Manifest to use the newer Caller-Allowable-Codebase causes warnings for users below the security baseline, and that not doing so displays a warning for users above it.

        Manifest Attribute                7u45                 7u40 and below
        Only Caller-Allowable-Codebase    No dialog            Displays prompt
        Only Trusted-Library              Displays prompt      No dialog
        Both                              Displays prompt (*)  No dialog

    This will be fixed in a future release so that both attributes can co-exist. The current workaround would be to favor Caller-Allowable-Codebase over the old Trusted-Library call.

    For users who need to stay below the security baseline

    System Administrators who schedule software deployments across managed computers may consider applying a Deployment Rule Set as described in Option 1 of "What to do if your applet is blocked or warns of mixed code." System Administrators may also sign up for email notifications of Critical Patch Updates.
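
    For reference, a minimal manifest carrying the newer attribute might look like the following (the codebase value is an invented example; list the real host(s) your JNLP or HTML lives on):

        Manifest-Version: 1.0
        Permissions: all-permissions
        Caller-Allowable-Codebase: www.example.com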


  • Making a Login Work After Cache, Cookies, etc. Have Been Cleared

    - by John
    Hello, I am using the code below for a user login. The first time I try to log in after the cache, cookies, etc. have been cleared, the browser refreshes and the user name is not logged in. After that, logging in works fine. Any idea how I can make it work the first time? Thanks in advance, John


  • Suggestions In Porting ASP.NET to MVC.NET - Is storing SiteConfiguration in Cache RESTful?

    - by DaveDev
    I've been tasked with porting/refactoring a web application platform we have from ASP.NET to MVC.NET. Ideally I could use all the existing platform's configuration to determine the properties of the site that is presented. Is it RESTful to keep a SiteConfiguration object, which contains all of our various page configuration data, in the System.Web.Caching.Cache? There are a lot of settings that need to be loaded when a user accesses our site, so it's inefficient for each user to load the same settings every time they access it. Some of the data the SiteConfiguration object contains is as follows; it determines what master page / site configuration / style / UserControls are available to the client:

        public string SiteTheme { get; set; }
        public string Region { private get; set; }
        public string DateFormat { get; set; }
        public string NumberFormat { get; set; }
        public int WrapperType { private get; set; }
        public string LabelFileName { get; set; }
        public LabelFile LabelFile { get; set; }
        // the following two are the heavy ones
        // PageConfiguration contains lots of configuration data for each panel on the page
        public IList<PageConfiguration> Pages { get; set; }
        // This contains all the configurations for the factsheets we produce
        public List<ConfiguredFactsheet> ConfiguredFactsheets { get; set; }

    I was thinking of having a URL structure like this: www.MySite1.com/PageTemplate/UserControl/ — the domain determines the SiteConfiguration object that is created (MySite1.com is SiteId = 1, MySite2.com is SiteId = 2, which in turn determines the style, configurations for various pages, etc.), and PageTemplate is the View that will be rendered; it simply defines a layout into which I'm going to inject the UserControls. Can somebody please tell me if I'm completely missing the RESTful point here? I'd like to refactor the platform to MVC because it's better to work in, and I want to do it right, but with a minimum of reinventing the wheel, because otherwise it won't get approval. Any suggestions otherwise? Thanks
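
    A minimal sketch of the pattern being described (helper and loader names are hypothetical), loading the configuration once per site and serving later requests from the ASP.NET cache:

        // Hypothetical helper: one cached SiteConfiguration per site id,
        // loaded once and then served from System.Web.Caching.Cache.
        // Requires the System.Web and System.Web.Caching namespaces.
        public static SiteConfiguration GetSiteConfiguration(int siteId)
        {
            string key = "SiteConfiguration:" + siteId;
            var config = HttpRuntime.Cache[key] as SiteConfiguration;
            if (config == null)
            {
                config = LoadConfigurationFromStore(siteId);   // assumed loader
                HttpRuntime.Cache.Insert(key, config, null,
                    DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
            }
            return config;
        }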


  • Nginx compiled --with-http_spdy_module yet raises errors complaining about ngx_http_spdy_module

    - by c19
    [emerg] 21101#0: the "spdy" parameter requires ngx_http_spdy_module in /etc/nginx/conf.d/cc.conf — isn't that the same module? It causes a multi-redirection error too. I have no idea what is going on. Full configure args:

        nginx version: nginx/1.4.2
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-pcre --with-http_ssl_module --with-http_spdy_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=/usr/local/src/openssl-1.0.1e
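
    A common cause of this symptom (an assumption, not a diagnosis from the post) is that the nginx binary actually being run is not the one whose configure arguments are shown. A quick way to check:

        # Compare the binary the init script runs against the one just built.
        which nginx            # e.g. /usr/sbin/nginx
        /usr/sbin/nginx -V     # does this output include --with-http_spdy_module?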


  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking. The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading in most situations. In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary — would it have the same performance benefits as the ConcurrentQueue did? Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests. Obviously there are many things we can do with a dictionary. One of the most notable uses, of course, in a multi-threaded environment is a small, local in-memory cache. So I set about to write a very simple simulation of a cache using a test class that I'll just call an Accessor. This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit"). However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss"). So here's the Accessor that will run the tests:

        internal class Accessor
        {
            public int Hits { get; set; }
            public int Misses { get; set; }
            public Func<int, string> GetDelegate { get; set; }
            public Action<int, string> AddDelegate { get; set; }
            public int Iterations { get; set; }
            public int MaxRange { get; set; }
            public int Seed { get; set; }

            public void Access()
            {
                var randomGenerator = new Random(Seed);

                for (int i = 0; i < Iterations; i++)
                {
                    // give a wide spread so will have some duplicates and some unique
                    var target = randomGenerator.Next(1, MaxRange);

                    // attempt to grab the item from the cache
                    var result = GetDelegate(target);

                    // if the item doesn't exist, add it
                    if (result == null)
                    {
                        AddDelegate(target, target.ToString());
                        Misses++;
                    }
                    else
                    {
                        Hits++;
                    }
                }
            }
        }

    Note that, so I could test different implementations, I defined a GetDelegate and an AddDelegate that call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:
    - Dictionary with mutex: just your standard generic Dictionary with a simple lock construct on an internal object.
    - Dictionary with ReaderWriterLockSlim: the same Dictionary, but now using a lock designed to let multiple readers access simultaneously, locking only when a writer needs access.
    - ConcurrentDictionary: the new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.
    The approach to each of these is fairly straightforward. Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

        Action<int, string> addDelegate = (key, val) =>
        {
            lock (_mutex)
            {
                _dictionary[key] = val;
            }
        };
        Func<int, string> getDelegate = key =>
        {
            lock (_mutex)
            {
                string val;
                return _dictionary.TryGetValue(key, out val) ? val : null;
            }
        };

    Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary.
    Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

        Action<int, string> addDelegate = (key, val) =>
        {
            _readerWriterLock.EnterWriteLock();
            _dictionary[key] = val;
            _readerWriterLock.ExitWriteLock();
        };
        Func<int, string> getDelegate = key =>
        {
            string val;
            _readerWriterLock.EnterReadLock();
            if (!_dictionary.TryGetValue(key, out val))
            {
                val = null;
            }
            _readerWriterLock.ExitReadLock();
            return val;
        };

    And finally the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple:

        Action<int, string> addDelegate = (key, val) =>
        {
            _concurrentDictionary[key] = val;
        };
        Func<int, string> getDelegate = key =>
        {
            string s;
            return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
        };

    Then I set up a test harness that would simply ask the user for the number of concurrent Accessors that should attempt to access the cache (as specified in Accessor.Access() above), let them fly, and see how long it took them all to complete. Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances. All times are in milliseconds.

        Dictionary with Mutex Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            7916             3285
        10           8293             3481
        100          8799             3532
        1000         8815             3584

        Dictionary with ReaderWriterLockSlim Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            8445             3624
        10           11002            4119
        100          11076            3992
        1000         14794            4861

        Concurrent Dictionary
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            17443            3726
        10           14181            1897
        100          15141            1994
        1000         17209            2128

    The first test I did across the board is the Mostly Misses category. The mostly-misses case (more adds, because the requested data was usually not in the dictionary) shows an interesting trend: in both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution. But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds". Since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2-to-1 ratio of hits to misses (twice as likely to find the item already in the cache as to need to add it). And yes, indeed, here we see that the ConcurrentDictionary is faster than the standard Dictionary. I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well. This makes sense, since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement; I think this is largely because on the 10,000,000-hit test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run. So what does this tell us? Well, as in all things, ConcurrentDictionary is not a panacea. It won't solve all your woes and it shouldn't be the only Dictionary you ever use. So when should we use each? Use System.Collections.Generic.Dictionary when:
    - You need a single-threaded Dictionary (no locking needed).
    - You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
    - You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).
    And use System.Collections.Concurrent.ConcurrentDictionary when:
    - You need a multi-threaded Dictionary where reads are far more prevalent than writes.
    - You need to be able to iterate over the collection without locking it, even if it's being modified.
    Both Dictionaries have their strong suits. I have a feeling this is just one of those cases where you need to know from the design what you hope to use it for, and make your decision based on that criteria.
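
    As an aside, ConcurrentDictionary can collapse the get-then-add pattern used in the Accessor above into a single call with GetOrAdd (a brief sketch reusing the same field names):

        // GetOrAdd looks the key up and, on a miss, inserts the value produced
        // by the factory; under contention the factory may run more than once,
        // but only one result wins and is stored.
        string value = _concurrentDictionary.GetOrAdd(target, key => key.ToString());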


  • External HDD USB 3.0 failure

    - by Philip
    [ 2560.376113] usb 9-1: new high-speed USB device number 2 using xhci_hcd [ 2560.376186] usb 9-1: Device not responding to set address. [ 2560.580136] usb 9-1: Device not responding to set address. [ 2560.784104] usb 9-1: device not accepting address 2, error -71 [ 2560.840127] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2561.080182] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2566.096163] usb 10-1: device descriptor read/8, error -110 [ 2566.200096] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2571.216175] usb 10-1: device descriptor read/8, error -110 [ 2571.376138] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2571.744174] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2576.760116] usb 10-1: device descriptor read/8, error -110 [ 2576.864074] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2581.880153] usb 10-1: device descriptor read/8, error -110 [ 2582.040123] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2582.224139] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2582.464177] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2587.480122] usb 10-1: device descriptor read/8, error -110 [ 2587.584079] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2592.600150] usb 10-1: device descriptor read/8, error -110 [ 2592.760134] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2593.128175] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2598.144183] usb 10-1: device descriptor read/8, error -110 [ 2598.248109] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2603.264171] usb 10-1: device descriptor read/8, error -110 [ 2603.480157] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2608.496162] usb 10-1: device descriptor read/8, error -110 [ 2608.600091] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2613.616166] usb 10-1: device descriptor read/8, error -110 [ 2613.832170] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2618.848135] usb 10-1: device descriptor read/8, error -110 [ 2618.952079] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2623.968155] usb 10-1: device descriptor read/8, error -110 [ 2624.184176] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2629.200124] usb 10-1: device descriptor read/8, error -110 [ 2629.304075] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2634.320172] usb 10-1: device descriptor read/8, error -110 [ 2634.424135] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2634.776186] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2639.792105] usb 10-1: device descriptor read/8, error -110 [ 2639.896090] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2644.912172] usb 10-1: device descriptor read/8, error -110 [ 2645.128174] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2650.144160] usb 10-1: device descriptor read/8, error -110 [ 2650.248062] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2655.264120] usb 10-1: device descriptor read/8, error -110 [ 2655.480182] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2660.496121] usb 10-1: device descriptor read/8, error -110 [ 2660.600086] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2665.616167] usb 10-1: device descriptor read/8, error -110 [ 2665.832177] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2670.848110] usb 10-1: device 
descriptor read/8, error -110 [ 2670.952066] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2675.968081] usb 10-1: device descriptor read/8, error -110 [ 2676.072124] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2786.104531] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.104546] usb usb10: USB disconnect, device number 1 [ 2786.104686] xHCI xhci_drop_endpoint called for root hub [ 2786.104692] xHCI xhci_check_bandwidth called for root hub [ 2786.104942] xhci_hcd 0000:02:00.0: USB bus 10 deregistered [ 2786.105054] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.105065] usb usb9: USB disconnect, device number 1 [ 2786.105176] xHCI xhci_drop_endpoint called for root hub [ 2786.105181] xHCI xhci_check_bandwidth called for root hub [ 2786.109787] xhci_hcd 0000:02:00.0: USB bus 9 deregistered [ 2786.110134] xhci_hcd 0000:02:00.0: PCI INT A disabled [ 2794.268445] pci 0000:02:00.0: [1b73:1000] type 0 class 0x000c03 [ 2794.268483] pci 0000:02:00.0: reg 10: [mem 0x00000000-0x0000ffff] [ 2794.268689] pci 0000:02:00.0: PME# supported from D0 D3hot [ 2794.268700] pci 0000:02:00.0: PME# disabled [ 2794.276383] pci 0000:02:00.0: BAR 0: assigned [mem 0xd7800000-0xd780ffff] [ 2794.276398] pci 0000:02:00.0: BAR 0: set to [mem 0xd7800000-0xd780ffff] (PCI address [0xd7800000-0xd780ffff]) [ 2794.276419] pci 0000:02:00.0: no hotplug settings from platform [ 2794.276658] xhci_hcd 0000:02:00.0: enabling device (0000 -> 0002) [ 2794.276675] xhci_hcd 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 2794.276762] xhci_hcd 0000:02:00.0: setting latency timer to 64 [ 2794.276771] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.276913] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 9 [ 2794.395760] xhci_hcd 0000:02:00.0: irq 16, io mem 0xd7800000 [ 2794.396141] xHCI xhci_add_endpoint called for root hub [ 2794.396144] xHCI xhci_check_bandwidth called for root hub [ 2794.396195] hub 9-0:1.0: USB hub found [ 2794.396203] hub 9-0:1.0: 1 port detected [ 2794.396305] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.396371] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 10 [ 2794.396496] xHCI xhci_add_endpoint called for root hub [ 2794.396499] xHCI xhci_check_bandwidth called for root hub [ 2794.396547] hub 10-0:1.0: USB hub found [ 2794.396553] hub 10-0:1.0: 1 port detected [ 2798.004084] usb 1-3: new high-speed USB device number 8 using ehci_hcd [ 2798.140824] scsi21 : usb-storage 1-3:1.0 [ 2820.176116] usb 1-3: reset high-speed USB device number 8 using ehci_hcd [ 2824.000526] scsi 21:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2824.002263] sd 21:0:0:0: Attached scsi generic sg2 type 0 [ 2824.003617] sd 21:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2824.005139] sd 21:0:0:0: [sdb] Write Protect is off [ 2824.005149] sd 21:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2824.009084] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.009094] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.011944] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.011952] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.049153] sdb: sdb1 [ 2824.051814] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.051821] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.051825] sd 21:0:0:0: [sdb] Attached SCSI disk [ 2839.536624] usb 1-3: USB disconnect, device number 8 [ 2844.620178] usb 10-1: new SuperSpeed USB device number 2 using xhci_hcd [ 2844.640281] scsi22 : usb-storage 10-1:1.0 [ 
2850.326545] scsi 22:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6
[ 2850.327560] sd 22:0:0:0: Attached scsi generic sg2 type 0
[ 2850.329561] sd 22:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB)
[ 2850.329889] sd 22:0:0:0: [sdb] Write Protect is off
[ 2850.329897] sd 22:0:0:0: [sdb] Mode Sense: 1f 00 00 08
[ 2850.330223] sd 22:0:0:0: [sdb] No Caching mode page present
[ 2850.330231] sd 22:0:0:0: [sdb] Assuming drive cache: write through
[ 2850.331414] sd 22:0:0:0: [sdb] No Caching mode page present
[ 2850.331423] sd 22:0:0:0: [sdb] Assuming drive cache: write through
[ 2850.384116] usb 10-1: USB disconnect, device number 2
[ 2850.392050] sd 22:0:0:0: [sdb] Unhandled error code
[ 2850.392056] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 2850.392061] sd 22:0:0:0: [sdb] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
[ 2850.392074] end_request: I/O error, dev sdb, sector 0
[ 2850.392079] quiet_error: 70 callbacks suppressed
[ 2850.392082] Buffer I/O error on device sdb, logical block 0
[ 2850.392194] ldm_validate_partition_table(): Disk read failed.
[ 2850.392271] Dev sdb: unable to read RDB block 0
[ 2850.392377] sdb: unable to read partition table
[ 2850.392581] sd 22:0:0:0: [sdb] READ CAPACITY failed
[ 2850.392584] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 2850.392588] sd 22:0:0:0: [sdb] Sense not available.
[ 2850.392613] sd 22:0:0:0: [sdb] Asking for cache data failed
[ 2850.392617] sd 22:0:0:0: [sdb] Assuming drive cache: write through
[ 2850.392621] sd 22:0:0:0: [sdb] Attached SCSI disk
[ 2850.732182] usb 10-1: new SuperSpeed USB device number 3 using xhci_hcd
[ 2850.752228] scsi23 : usb-storage 10-1:1.0
[ 2851.752709] scsi 23:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6
[ 2851.754481] sd 23:0:0:0: Attached scsi generic sg2 type 0
[ 2851.756576] sd 23:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB)
[ 2851.758426] sd 23:0:0:0: [sdb] Write Protect is off
[ 2851.758436] sd 23:0:0:0: [sdb] Mode Sense: 1f 00 00 08
[ 2851.758779] sd 23:0:0:0: [sdb] No Caching mode page present
[ 2851.758787] sd 23:0:0:0: [sdb] Assuming drive cache: write through
[ 2851.759968] sd 23:0:0:0: [sdb] No Caching mode page present
[ 2851.759977] sd 23:0:0:0: [sdb] Assuming drive cache: write through
[ 2851.817710] sdb: sdb1
[ 2851.820562] sd 23:0:0:0: [sdb] No Caching mode page present
[ 2851.820568] sd 23:0:0:0: [sdb] Assuming drive cache: write through
[ 2851.820572] sd 23:0:0:0: [sdb] Attached SCSI disk
[ 2852.060352] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2852.076533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2852.076538] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2852.196329] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2852.212593] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2852.212599] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2852.456290] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2852.472402] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2852.472408] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2852.624304] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2852.640531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2852.640536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2852.772296] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2852.788536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2852.788541] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2852.920349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2852.936536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2852.936540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2853.072287] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2853.088565] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2853.088570] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2884.176339] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2884.192561] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2884.192567] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2884.320349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2884.336526] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2884.336531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2884.468344] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2884.484551] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2884.484556] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2884.612349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2884.628540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2884.628545] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2884.756350] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd
[ 2884.772528] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060
[ 2884.772533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c
[ 2884.848116] usb 10-1: USB disconnect, device number 3
[ 2884.851493] scsi 23:0:0:0: [sdb] killing request
[ 2884.851501] scsi 23:0:0:0: [sdb] killing request
[ 2884.851699] scsi 23:0:0:0: [sdb] Unhandled error code
[ 2884.851702] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 2884.851708] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2b ee 00 00 3e 00
[ 2884.851721] end_request: I/O error, dev sdb, sector 6237166
[ 2884.851726] Buffer I/O error on device sdb1, logical block 6237102
[ 2884.851730] Buffer I/O error on device sdb1, logical block 6237103
[ 2884.851738] Buffer I/O error on device sdb1, logical block 6237104
[ 2884.851741] Buffer I/O error on device sdb1, logical block 6237105
[ 2884.851744] Buffer I/O error on device sdb1, logical block 6237106
[ 2884.851747] Buffer I/O error on device sdb1, logical block 6237107
[ 2884.851750] Buffer I/O error on device sdb1, logical block 6237108
[ 2884.851753] Buffer I/O error on device sdb1, logical block 6237109
[ 2884.851757] Buffer I/O error on device sdb1, logical block 6237110
[ 2884.851807] scsi 23:0:0:0: [sdb] Unhandled error code
[ 2884.851810] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 2884.851813] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2c 2c 00 00 3e 00
[ 2884.851824] end_request: I/O error, dev sdb, sector 6237228
[ 2885.168190] usb 10-1: new SuperSpeed USB device number 4 using xhci_hcd
[ 2885.188268] scsi24 : usb-storage 10-1:1.0
Please help me with my problem; this is what I got after running dmesg.

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video

    - by pinaldave
During performance tuning conversations, the very first question people ask is: which queries are offending the server? In other words, let us identify the most resource-intensive queries. The resources in question are usually described as memory, CPU, or IO, and the same classification applies to queries: a query doing lots of reads or writes is certainly resource intensive, as is a query taking maximum CPU time. Performance tuning is a very deep subject, and we all have our own preferences about what the first tuning step should be and what should be taken with a grain of salt. There is no denying, though, that a query which uses more resources than it should certainly requires tuning. There are many ways to identify queries using intense resources (e.g. Extended Events), but in this one we will go with a simple DMV. There is a small gotcha we all have to remember about DMV usage: it only brings back results from the existing cache. So if you have a query that is very resource intensive but not cached, or if you have explicitly removed the query from the cache, it will not be part of the result returned by this DMV. It is quite possible that a query has aged out and been removed from the cache if your cache is not huge. If your cache is large, you may want to be careful about running this query during business hours, as this query itself can be resource intensive. Get the script to identify resource-intensive queries from Here.
Related Tips in SQL in Sixty Seconds:
SQL SERVER – Find Most Expensive Queries Using DMV
Simple Example to Configure Resource Governor – Introduction to Resource Governor
SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer
SQL SERVER – Wait Stats – Wait Types – Wait Queues – Day 0 of 28
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
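Since the actual script sits behind the link above and is not reproduced in this excerpt, here is a rough sketch (my own, not the article's script) of the kind of query one writes over sys.dm_exec_query_stats, ranking cached statements by total logical reads:

-- Sketch only: top 10 cached statements by total logical reads.
-- Remember: sys.dm_exec_query_stats covers only plans still in the cache.
SELECT TOP 10
    qs.execution_count,
    qs.total_logical_reads,            -- IO pressure
    qs.total_worker_time,              -- CPU time, in microseconds
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;

Swapping the ORDER BY column for total_worker_time or total_logical_writes ranks by CPU or write load instead.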

    Read the article

  • First round playing with Memcached

    - by Shaun
To be honest I had not been very interested in caching until I joined a project that uses multi-site deployment, handles high connection counts and concurrency, and is very sensitive to user experience. That means we must cache the output data for better performance. After looking around the Internet I finally settled on Memcached. What's Memcached? I think the description on its main site gives us a very good and simple explanation. Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages. The original Memcached was built for *nix systems and is widely used in the PHP world. Although it is no problem to use a Memcached instance installed on a *nix system, fortunately there are some Windows ports available. Since we are WISC (Windows – IIS – SQL Server – C#, the opposite of LAMP) it is much easier for us to run Memcached on Windows rather than *nix. I'm using the Memcached Win X64 version provided by NorthScale; there are also an x86 version and versions for other operating systems.   Install Memcached Unpack the Memcached archive to a folder on the machine where you want it installed, let's say "C:\Memcached\". We can see that there are only 3 files, and the main one is "memcached.exe". Memcached runs on the server as a Windows service. To install the service, open a command window, navigate to the folder containing "memcached.exe", and type "memcached.exe -d install". On Windows Vista and Windows 7, be sure to execute the command with administrator rights: right-click the Command Prompt item in the Start menu and choose "Run as Administrator", otherwise Memcached will not install successfully. Once it is installed successfully we can type "memcached.exe -d start" to launch the service. Now it's ready to be used. The default port of Memcached is 11211, but you can change it through a command argument. You can find the help by typing "memcached -h".   Using Memcached Memcached has many good, ready-to-use client providers for various programming languages. After comparing and reviewing them I chose Memcached Providers. It is built on top of another 3rd-party Memcached client named enyim.com Memcached Client. Memcached Providers makes it very simple to set/get cached objects through the Memcached servers and is easy to configure through the application configuration file (aka web.config and app.config). Let's create a console application for the demonstration and add the 3 DLL files from the Memcached Providers package to the project references. Then we need to add the configuration for the Memcached server. Create an App.config file and first declare the sections at the top of it. Here we need three sections: one for Memcached Providers, one for the enyim.com Memcached client, and one for log4net.
<configSections>
  <section name="cacheProvider"
           type="MemcachedProviders.Cache.CacheProviderSection, MemcachedProviders"
           allowDefinition="MachineToApplication"
           restartOnExternalChanges="true"/>
  <sectionGroup name="enyim.com">
    <section name="memcached"
             type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching"/>
  </sectionGroup>
  <section name="log4net"
           type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/>
</configSections>
Then we add the configuration for all three of them in the App.config file. The Memcached server information is defined under the enyim.com section, since that component is responsible for connecting to the Memcached servers. Assuming I installed Memcached on two servers with the default port, the configuration would look like this:
<enyim.com>
  <memcached>
    <servers>
      <!-- put your own server(s) here-->
      <add address="192.168.0.149" port="11211"/>
      <add address="10.10.20.67" port="11211"/>
    </servers>
    <socketPool minPoolSize="10" maxPoolSize="100" connectionTimeout="00:00:10" deadTimeout="00:02:00"/>
  </memcached>
</enyim.com>
Memcached supports multi-server deployment, which means you can install Memcached on as many servers as you need. The Memcached protocol is responsible for routing cached objects to the proper server, so it is very easy to scale out your system with Memcached. Next, define the Memcached Providers configuration. The defaultExpireTime indicates how long an object cached in Memcached lives before it expires; the default value is 2000 ms.
<cacheProvider defaultProvider="MemcachedCacheProvider">
  <providers>
    <add name="MemcachedCacheProvider"
         type="MemcachedProviders.Cache.MemcachedCacheProvider, MemcachedProviders"
         keySuffix="_MySuffix_"
         defaultExpireTime="2000"/>
  </providers>
</cacheProvider>
The last configuration is for log4net.
<log4net>
  <!-- Define some output appenders -->
  <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
    </layout>
  </appender>
  <!--<threshold value="OFF" />-->
  <!-- Setup the root category, add the appenders and set the default priority -->
  <root>
    <priority value="WARN"/>
    <appender-ref ref="ConsoleAppender">
      <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="WARN"/>
        <levelMax value="FATAL"/>
      </filter>
    </appender-ref>
  </root>
</log4net>
  Get, Set and Remove the Cached Objects Once we finish the configuration it is very simple to consume the Memcached servers. Memcached Providers gives us a static class named DistCache that can be used to operate on the Memcached servers. Get<T>: retrieves a cached object from the Memcached servers; if it fails it returns null or the default value. Add: adds an object with a unique key into the Memcached servers. Assume we have an operation that retrieves the email for a given name and is time consuming; this is the kind of operation that should be cached. The method would look like this (I used Thread.Sleep to simulate the long-running operation):
static string GetEmailByNameSlowly(string name)
{
    // Simulate an expensive lookup, e.g. a database call.
    Thread.Sleep(2000);
    return name + "@ethos.com.cn";
}
Then in the real retrieving method we will first check whether the name/email pair has been searched previously and cached.
If yes, we just return it from Memcached; otherwise we invoke the slow method to retrieve it and then cache it.
static string GetEmailByName(string name)
{
    var email = DistCache.Get<string>(name);
    if (string.IsNullOrEmpty(email))
    {
        Console.WriteLine("==> The name/email is not in memcached, so slow loading is needed. (name = {0})==>", name);
        email = GetEmailByNameSlowly(name);
        DistCache.Add(name, email);
    }
    else
    {
        Console.WriteLine("==> The name/email was already in memcached. (name = {0})==>", name);
    }
    return email;
}
Finally, let's finish the calling method and run it:
static void Main(string[] args)
{
    var name = string.Empty;
    while (name != "q")
    {
        Console.Write("==> Please enter the name to find the email: ");
        name = Console.ReadLine();

        var email = GetEmailByName(name);
        Console.WriteLine("==> The email of {0} is {1}.", name, email);
    }
}
The first time I entered "ziyanxu" it took about 2 seconds to get the email, since nothing was cached. But the next time I entered "ziyanxu" it returned very quickly from Memcached.   Summary In this post I explained a bit about why we need caching, what Memcached is, and how to use it from a C# application. The example is fairly simple but hopefully demonstrates how to use it. Memcached is very easy and simple to use, since it gives you full control over what, when and how to cache objects. And when using Memcached you don't need to worry about which cache server holds a given object: Memcached behaves like one huge object pool in front of you. The next steps I'm thinking about now are: What kind of data should be cached, and how should the key be determined? How to implement the cache as a layer on top of the business layer, so that the application does not notice the cache is there. How to implement the cache with AOP, so that the business logic need not consider the cache at all. I will investigate them in the future and share my thoughts and results.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
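One gap in the walkthrough above: the heading promises Get, Set and Remove, but no removal example is shown. As a minimal sketch, assuming the DistCache class exposes a Remove method taking the same string key (verify the exact signature against the Memcached Providers API):
// Sketch only: evicting a cached entry by key.
// Assumes DistCache.Remove(string key) exists in Memcached Providers; check before use.
static void InvalidateEmail(string name)
{
    DistCache.Remove(name); // the next GetEmailByName(name) call falls back to the slow lookup
}
Explicit eviction like this is how you keep the cache consistent when the underlying data changes before the entry's expire time elapses.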

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1) seems to be choking on kde-runtime-data version issue

    - by BMT
    12.04 LTS, on a dell mini 10. Install stable until about a week ago. Updated about 1x a week, sometimes more often. Several days ago, I booted up and the system was no longer working correctly. All these symptoms occurred simultaneously: Cannot run (exit on opening, every time): Update manager, software center, ubuntuOne, libreOffice. Vinagre autostarts on boot, no explanation, not set to startup with Ubuntu. Using apt-get to fix install results in the following: maura@pandora:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following package was automatically installed and is no longer required: libtelepathy-farstream2 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: gwibber gwibber-service kde-runtime-data software-center Suggested packages: gwibber-service-flickr gwibber-service-digg gwibber-service-statusnet gwibber-service-foursquare gwibber-service-friendfeed gwibber-service-pingfm gwibber-service-qaiku unity-lens-gwibber The following packages will be upgraded: gwibber gwibber-service kde-runtime-data software-center 4 upgraded, 0 newly installed, 0 to remove and 39 not upgraded. 20 not fully installed or removed. Need to get 0 B/5,682 kB of archives. After this operation, 177 kB of additional disk space will be used. Do you want to continue [Y/n]? debconf: Perl may be unconfigured (Can't locate Scalar/Util.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/perl/5.14/Hash/Util.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/Hash/Util.pm line 9. Compilation failed in require at /usr/share/perl/5.14/fields.pm line 122. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at (eval 1) line 4. BEGIN failed--compilation aborted at (eval 1) line 4. ) -- aborting (Reading database ... 242672 files and directories currently installed.) Preparing to replace gwibber 3.4.1-0ubuntu1 (using .../gwibber_3.4.2-0ubuntu1_i386.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gwibber-service 3.4.1-0ubuntu1 (using .../gwibber-service_3.4.2-0ubuntu1_all.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace kde-runtime-data 4:4.8.3-0ubuntu0.1 (using .../kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb) ... Unpacking replacement kde-runtime-data ... dpkg: error processing /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb (--unpack): trying to overwrite '/usr/share/sounds', which is also in package sound-theme-freedesktop 0.7.pristine-2 dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe) dpkg-deb: error: subprocess <decompress> returned error exit status 2 Preparing to replace python-crypto 2.4.1-1 (using .../python-crypto_2.4.1-1_i386.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2.2 (using .../software-center_5.2.4_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/software-center_5.2.4_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace xdiagnose 2.5 (using .../archives/xdiagnose_2.5_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/xdiagnose_2.5_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb 
/var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb /var/cache/apt/archives/software-center_5.2.4_all.deb /var/cache/apt/archives/xdiagnose_2.5_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) maura@pandora:~$ ^C maura@pandora:~$

    Read the article

  • How to recover dpkg from corrupted downloads?

    - by rocker9455
I just upgraded to the 10.10 RC earlier and had a few problems with graphics drivers (X didn't start), but I have remedied that now. When I run 'sudo apt-get install -f' I get this:
will@UbuntuBox:/mnt/slax$ sudo apt-get install -f
[sudo] password for will:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  libmono-wcf3.0-cil openoffice.org-calc openoffice.org-core
The following NEW packages will be installed:
  libmono-wcf3.0-cil
The following packages will be upgraded:
  openoffice.org-calc openoffice.org-core
2 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
16 not fully installed or removed.
Need to get 0B/32.5MB of archives.
After this operation, 1,929kB disk space will be freed.
Do you want to continue [Y/n]? Y
(Reading database ... 201565 files and directories currently installed.)
Preparing to replace openoffice.org-calc 1:3.2.1-6ubuntu2~10.04.1 (using .../openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb) ...
Unpacking replacement openoffice.org-calc ...
xz: (stdin): Compressed data is corrupt
dpkg-deb: subprocess <decompress> returned error exit status 1
dpkg: error processing /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
 short read on buffer copy for backend dpkg-deb during `./usr/lib/openoffice/basis3.2/program/libscfiltli.so'
dpkg: regarding .../openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb containing openoffice.org-core:
 openoffice.org-core conflicts with openoffice.org-calc (<< 1:3.2.1-7ubuntu1)
 openoffice.org-calc (version 1:3.2.1-6ubuntu2~10.04.1) is present and installed.
dpkg: error processing /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
 conflicting packages - not installing openoffice.org-core
Unpacking libmono-wcf3.0-cil (from .../libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb) ...
dpkg-deb (subprocess): data: internal gzip read error: '<fd:0>: data error'
dpkg-deb: subprocess <decompress> returned error exit status 2
dpkg: error processing /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb (--unpack):
 subprocess dpkg-deb --fsys-tarfile returned error exit status 2
Errors were encountered while processing:
 /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb
 /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb
 /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Any idea how I can get the broken packages fixed? Cheers, Will
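The excerpt ends with the question still open. For context, a common recovery path for corrupted downloads (a general sketch, not the thread's accepted answer) is to discard the damaged archives from apt's cache and let it fetch fresh copies:
# Sketch of a common fix for corrupted .deb downloads:
sudo apt-get clean          # empties /var/cache/apt/archives of downloaded .deb files
sudo apt-get update         # refresh the package lists
sudo apt-get install -f     # re-download and retry the failed packages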

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >