Search Results

Search found 9311 results on 373 pages for 'cache dependency'.

Page 27/373

  • Maven/Ivy: Identical artifact with different name in dependency

    - by ThiamTeck
    Currently I am using Ivy for dependency management, and quite often I come across the problem of getting identical jar files with different names due to transitive dependencies. Example:

        <dependency>
            <groupId>javax.mail</groupId>
            <artifactId>mail</artifactId>
            <version>1.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.geronimo.specs</groupId>
            <artifactId>geronimo-javamail_1.4_spec</artifactId>
            <version>1.4</version>
        </dependency>

    I am thinking of trying out Maven as well. Any best practice to eliminate these identical artifacts in either Ivy or Maven?
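
    A common Maven fix is to add an <exclusion> on whichever dependency drags in the duplicate spec jar transitively. A hedged sketch, where the some.group:needs-mail coordinates are illustrative placeholders for the artifact that pulls in the Geronimo spec:

        <dependency>
            <groupId>some.group</groupId>          <!-- illustrative -->
            <artifactId>needs-mail</artifactId>    <!-- illustrative -->
            <version>1.0</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.geronimo.specs</groupId>
                    <artifactId>geronimo-javamail_1.4_spec</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

    Ivy has an equivalent <exclude> element on a dependency. In both tools the exclusion must name the unwanted artifact explicitly, because the two jars have different coordinates and the resolver cannot know they contain the same classes.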

    Read the article

  • Invalidating Memcached Keys on save() in Django

    - by Zack
    I've got a view in Django that uses memcached to cache data for the more highly trafficked views that rely on a relatively static set of data. The key word is relatively: I need to invalidate the memcached key for that particular URL's data when it's changed in the database. To be as clear as possible, here's the meat an' potatoes of the view (Person is a model, cache is django.core.cache.cache):

        def person_detail(request, slug):
            if request.is_ajax():
                cache_key = "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)
                # Check the cache to see if we've already got this result made.
                json_dict = cache.get(cache_key)
                # Was it a cache hit?
                if json_dict is None:
                    # That's a negative Ghost Rider
                    person = get_object_or_404(Person, display=True, slug=slug)
                    json_dict = {
                        'name': person.name,
                        'bio': person.bio_html,
                        'image': person.image.extra_thumbnails['large'].absolute_url,
                    }
                    cache.set(cache_key, json_dict)
                # json_dict will now exist, whether it's from the cache or not
                response = HttpResponse()
                response['Content-Type'] = 'text/javascript'
                # Make sure it's all properly formatted for JS by using simplejson
                response.write(simplejson.dumps(json_dict))
                return response
            else:
                # This is where the fully templated response is generated
                ...

    What I want to do is get at that cache_key variable in its "unformatted" form, but I'm not sure how to do this, if it can be done at all. Just in case there's already something to do this, here's what I want to do with it (this is from the Person model's hypothetical save method):

        def save(self):
            # If this is an update, the key will be cached, otherwise it
            # won't; let's see if we can't find me.
            try:
                old_self = Person.objects.get(pk=self.id)
                cache_key = ...  # Voodoo magic to get that variable
                # Generate the key currently cached:
                old_key = cache_key.format(settings.SITE_PREFIX, old_self.slug)
                cache.delete(old_key)  # Hit it with both barrels of rock salt
            except DoesNotExist:
                # Turns out this doesn't already exist; let's make that first
                # request even faster by making this cache right now.
                # I haven't gotten to this yet.
                pass
            super(Person, self).save()

    I'm thinking about making a view class for this sort of stuff, and having functions in it like remove_cache or generate_cache, since I do this sort of thing a lot. Would that be a better idea? If so, how would I call the views in the URLconf if they're in a class?
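
    One way to avoid the "voodoo magic" entirely is to move the key construction into a single module-level helper that both the view and the model import. A minimal sketch under that assumption (the helper name is my own):

        from django.conf import settings
        from django.core.cache import cache

        def person_cache_key(slug):
            # Single source of truth for the per-person AJAX cache key.
            return "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)

        # In the view:
        #     json_dict = cache.get(person_cache_key(slug))
        #     ...
        #     cache.set(person_cache_key(slug), json_dict)

        # In Person.save():
        #     old_self = Person.objects.get(pk=self.id)
        #     cache.delete(person_cache_key(old_self.slug))

    This sidesteps the need to recover the key template from the view at all.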

    Read the article

  • Dependency Replication with TFS 2010 Build

    - by Jakob Ehn
    Some time ago, I wrote a post about how to implement dependency replication using TFS 2008 Build. We use this for library builds, where we set up a build definition for a common library and have the build check the resulting assemblies back into source control. The folder is then branched to the applications that need to reference the common library. See the above post for more details. Of course, we have reimplemented this feature in TFS 2010 Build, which results in a much nicer experience for the developer who wants to set up a new library build. Here is how it looks: there is a separate build process template for library builds registered in all team projects. The following properties are used to configure the library build:

        - Deploy Folder in Source Control: the server path where the assemblies should be checked in.
        - DeploymentFiles: a list of files and/or extensions specifying which files to check in. The default is *.dll;*.pdb, which means that all assemblies and debug symbols will be checked in. We can also type, for example, CommonLibrary.*;SomeOtherAssembly.dll in order to exclude other assemblies.

    You can also see that we are versioning the assemblies as part of the build. This is important, since the resulting assemblies will be deployed together with the referencing application. When the build executes, it will check whether the matching assemblies exist in source control; if not, it will add the files automatically. After the build has finished, we can see in the history of the TestDeploy folder that the build service account has in fact checked in a new version. Nice!
    The implementation of the library build process template is not very complicated; it is a combination of customization of the build process template and some custom activities. We use the generic TFActivity (http://geekswithblogs.net/jakob/archive/2010/11/03/performing-checkins-in-tfs-2010-build.aspx) to check in and out files, but for the part that checks if a file exists and adds it to source control, it was easier to do this in a custom activity:

        public sealed class AddFilesToSourceControl : BaseCodeActivity
        {
            // Files to add to source control
            [RequiredArgument]
            public InArgument<IEnumerable<string>> Files { get; set; }

            [RequiredArgument]
            public InArgument<Workspace> Workspace { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                foreach (var file in Files.Get(context))
                {
                    if (!File.Exists(file))
                    {
                        throw new ApplicationException("Could not locate " + file);
                    }
                    var ws = this.Workspace.Get(context);
                    string serverPath = ws.TryGetServerItemForLocalItem(file);
                    if (!String.IsNullOrEmpty(serverPath))
                    {
                        if (!ws.VersionControlServer.ServerItemExists(serverPath, ItemType.File))
                        {
                            TrackMessage(context, "Adding file " + file);
                            ws.PendAdd(file);
                        }
                        else
                        {
                            TrackMessage(context, "File " + file + " already exists in source control");
                        }
                    }
                    else
                    {
                        TrackMessage(context, "No server path for " + file);
                    }
                }
            }
        }

    This build template is a very nice tool that makes it easy to do dependency replication with TFS 2010. Next, I will add functionality for automatically merging the assemblies (using ILMerge) as part of the build; we do this to keep the number of references to a minimum.

    Read the article

  • How Do I Cache Just the Homepage with Apache .htaccess?

    - by Volomike
    This config is close...

        <FilesMatch "\.(php)$">
            Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    ...but it applies to all PHP pages, not just the home page like I want. Basically, the developer said he wants http://example.com/ to be cached, while http://example.com/electronics/ would not be cached. Note the developer is using pretty URLs with an MVC framework that routes everything through index.php.
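
    A hedged sketch of one way to scope the header to the root URL only, assuming mod_setenvif and mod_headers are available (the (index\.php)? alternative also covers the rewritten form, since the homepage is served by index.php):

        # Flag only requests whose URI is the site root.
        SetEnvIf Request_URI "^/(index\.php)?$" CACHE_HOME
        Header set Cache-Control "max-age=7200, must-revalidate" env=CACHE_HOME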

    Read the article

  • Is "Turn Off Windows write-cache buffer flushing" safe on a laptop?

    - by Earlz
    My laptop's internal hard drive is a bit slow. I looked at the drive properties and there are two options:

        [X] Enable write caching on the device
        [ ] Turn off Windows write-cache buffer flushing on the device

    As you can see, the first option is already checked, but the second option isn't. I've heard the second option can really speed things up, but it also sounds very risky. Is it safe on a laptop that is rarely off AC power (but still has a battery as well)?

    Read the article

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the http server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the http headers I send. I tried with this config (taken from the docs) but it didn't work:

        # /etc/nginx/nginx.conf:
        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        # /etc/nginx/sites-available/website:
        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings; I need to control them from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h because of those directives. I was thinking whether I should try proxy_cache instead; I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this: "The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented." nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turned off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
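
    For what it's worth, two hedged observations that commonly explain a permanent MISS in this kind of setup: by default nginx does not cache responses that carry a Set-Cookie header (PHP sessions are a frequent cause), and the backend can also steer the cache with an X-Accel-Expires header, which nginx honors over Cache-Control/Expires. A minimal sketch of the relevant location block, with the socket path as an assumption:

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm.sock;  # illustrative socket path
            fastcgi_cache website;                     # zone from nginx.conf above
            # No fastcgi_cache_valid lines: rely on the backend's
            # Cache-Control/Expires (or X-Accel-Expires) for lifetimes.
            add_header X-Cache $upstream_cache_status;
        }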

    Read the article

  • Why is apt-cache so slow?

    - by Damn Terminal
    After an upgrade to Trusty (14.04) from Saucy (13.10), all apt operations are very slow, even those that do not include downloading anything or connecting to any servers. For example, displaying the apt policy:

        # time apt-cache policy
        [...]
        real    0m8.951s
        user    0m5.069s
        sys     0m3.861s

    takes almost ten seconds! Mostly a weird lag right after issuing the command, and it's the same even if I issue the same command again. On another system it doesn't take a tenth of a second:

        real    0m0.096s
        user    0m0.070s
        sys     0m0.023s

    The other system is a little beefier, but there was no noticeable difference before the upgrade. It's the same with apt-get and anything apt-related. How do I find out the source of this lag and fix it? Additional info:

        # cat /etc/nsswitch.conf
        # /etc/nsswitch.conf
        #
        # Example configuration of GNU Name Service Switch functionality.
        # If you have the `glibc-doc-reference' and `info' packages installed, try:
        # `info libc "Name Service Switch"' for information about this file.
        passwd:         compat
        group:          compat
        shadow:         compat
        hosts:          files dns
        networks:       files
        protocols:      db files
        services:       db files
        ethers:         db files
        rpc:            db files
        netgroup:       nis

    BTW, is my understanding of how apt-cache works correct? It doesn't make any network connections when I run apt-cache policy, right? In case I'm wrong and it matters, here are my sources: https://gist.github.com/anonymous/02920270ff68e23fc3ec
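
    A hedged diagnostic sketch: apt-cache answers from the binary caches under /var/cache/apt and the package lists under /var/lib/apt/lists, so oversized or stale files there are the usual suspects, and no network access should be involved:

        strace -c apt-cache policy > /dev/null   # syscall summary: where is the time spent?
        ls -lh /var/cache/apt/*.bin              # pkgcache.bin / srcpkgcache.bin unusually large?
        sudo rm /var/cache/apt/pkgcache.bin /var/cache/apt/srcpkgcache.bin
        sudo apt-get check                       # any harmless apt command rebuilds the caches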

    Read the article

  • How do I fix dependency problems with the kernel in apt?

    - by Jon
    When trying to install new packages, either manually or with muon, I get these errors:

        jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get install kupfer
        [sudo] password for jon:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         kupfer : Depends: python-keybinder but it is not going to be installed
                  Recommends: python-wnck but it is not going to be installed
         linux-headers-generic : Depends: linux-headers-3.2.0-20-generic but it is not installable
         linux-image-generic : Depends: linux-image-3.2.0-20-generic but it is not installable
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

        jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          linux-generic linux-headers-generic linux-image-generic
        The following packages will be upgraded:
          linux-generic linux-headers-generic linux-image-generic
        3 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/6,658 B of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]?
        dpkg: dependency problems prevent configuration of linux-image-generic:
         linux-image-generic depends on linux-image-3.2.0-20-generic; however:
          Package linux-image-3.2.0-20-generic is not installed.
        dpkg: error processing linux-image-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        dpkg: dependency problems prevent configuration of linux-generic:
         linux-generic depends on linux-image-generic (= 3.2.0.20.22); however:
          Package linux-image-generic is not configured yet.
        dpkg: error processing linux-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        dpkg: dependency problems prevent configuration of linux-headers-generic:
         linux-headers-generic depends on linux-headers-3.2.0-20-generic; however:
          Package linux-headers-3.2.0-20-generic is not installed.
        dpkg: error processing linux-headers-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         linux-image-generic
         linux-generic
         linux-headers-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    As indicated above, I ran sudo apt-get -f install but it still tells me there are dependency issues.
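
    "not installable" usually means the 3.2.0-20 packages have been superseded on the mirror by a newer kernel ABI, so the indexes no longer offer them. A hedged recovery sketch along those lines:

        sudo apt-get update        # refresh the package indexes first
        sudo apt-get install linux-image-generic linux-headers-generic
        sudo dpkg --configure -a   # finish any half-configured packages
        sudo apt-get -f install    # then retry the dependency repair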

    Read the article

  • Single database, multiple system dependency

    - by davenewza
    Consider an environment where we have a single, core database, with many separate systems using this one database. This gives all of those systems a common dependency, which ultimately introduces coupling between them. It means we cannot always evolve systems independently of each other: structural changes to the database (even if only intended for one particular system) require a full sweep test of ALL systems, and may require that other systems be 'patched' and subsequently released. This is especially tricky when you want separate teams working on different projects. What is a good 'pattern' to help avoid such coupling? I would imagine that a database should be depended on exclusively by one system. If other systems require data for whatever reason, they should request it from an API service of some kind. A drawback of this approach which comes to mind is performance: routing data between high-throughput systems through service calls is much slower than through a database connection.

    Read the article

  • Dependency problems: now unable to install or remove any package

    - by Manish gour
    I was installing wvdial. I no longer need it, but it left the machine with dependency problems, and because of them I am unable to install or remove any package. Please help. Below is my stack trace:

        The following packages have unmet dependencies:
         libqt3-mt:i386: Depends: libjpeg62 but it is not installed
         libusb-dev: Depends: libusb-0.1-4 (= 2:0.1.12-14) but 2:0.1.12-20 is installed
         wvdial:i386: Depends: libc6 (= 2.4) but 2.15-0ubuntu10.4 is installed
                      Depends: libuniconf4.4 but it is not installed
                      Depends: libwvstreams4.4-base but it is not installed
                      Depends: libwvstreams4.4-extras but it is not installed
                      Depends: libxplc0.3.13 but it is not installed
                      Depends: ppp (= 2.3.0) but it is not installed
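
    A hedged sketch of the usual way out: remove the half-installed package that owns the unmet dependencies, then let apt repair whatever remains.

        sudo dpkg --remove --force-depends wvdial   # drop the broken package despite its deps
        sudo apt-get -f install                     # let apt clean up the rest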

    Read the article

  • How do I inject test objects when the real objects are created dynamically?

    - by JW01
    I want to make a class testable using dependency injection. But the class creates multiple objects at runtime and passes different values to their constructor. Here's a simplified example:

        public abstract class Validator {
            private ErrorList errors;

            public abstract void validate();

            public void addError(String text) {
                errors.add(new ValidationError(text));
            }

            public int getNumErrors() {
                return errors.count();
            }
        }

        public class AgeValidator extends Validator {
            public void validate() {
                addError("first name invalid");
                addError("last name invalid");
            }
        }

    (There are many other subclasses of Validator.) What's the best way to change this so I can inject a fake object instead of ValidationError? I can create an AbstractValidationErrorFactory and inject the factory instead. This would work, but it seems like I'll end up creating tons of little factories and factory interfaces for every dependency of this sort. Is there a better way?
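
    For reference, a minimal sketch of the factory seam the question already mentions, reusing the question's own class names; a test would then inject a factory that returns fakes:

        public interface ValidationErrorFactory {
            ValidationError create(String text);
        }

        public abstract class Validator {
            private final ErrorList errors;
            private final ValidationErrorFactory errorFactory;

            protected Validator(ErrorList errors, ValidationErrorFactory errorFactory) {
                this.errors = errors;
                this.errorFactory = errorFactory;
            }

            public abstract void validate();

            public void addError(String text) {
                // The seam: production code injects a real factory,
                // tests inject one that returns fake ValidationErrors.
                errors.add(errorFactory.create(text));
            }

            public int getNumErrors() {
                return errors.count();
            }
        }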

    Read the article

  • Using <= for every dependency when following the semantic versioning idea

    - by zerkms
    As Semantic Versioning (and common sense) declares, the major version is incremented when a non-backward-compatible change is introduced. Now let's assume we have a project called Project, currently at version 1.0.42, and a library Lib it depends on, currently at version 2.1.3. Does that mean that, following the semver ideology, we should constrain the dependency of the Project to be Depends: Lib (< 3)? From my experience, no one does that, but I find it semantically correct and very self-descriptive. What do you think of this?
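
    For what it's worth, several packaging ecosystems encode exactly this constraint as their default idiom (spellings below are illustrative of the idea, not of any one manifest file):

        npm / Cargo:      "lib": "^2.1.3"                  # caret range: >=2.1.3 <3.0.0
        Python, PEP 440:  lib ~= 2.1                       # compatible release: >=2.1, <3
        Debian-style:     Depends: lib (>= 2.1.3), lib (<< 3)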

    Read the article

  • Result class dependency

    - by Stefano Borini
    I have an object containing the results of a computation. This computation is performed in a function which accepts an input object and returns the result object. The result object has a print method. This print method must print out the results, but in order to perform this operation it needs the original input object. I cannot pass the input object at printing time because that would violate the signature of the print function. One solution I am using right now is to have the result object hold a pointer to the original input object, but I don't like this dependency between the two, because the input object is mutable. How would you design for such a case?
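
    One hedged alternative to holding the live input: have the computation copy just the immutable facts the printer needs into the result at construction time, so the result never aliases the mutable input object. A sketch with illustrative field and method names:

        class Result:
            def __init__(self, value, input_snapshot):
                self.value = value
                # A frozen copy of the inputs, not a reference to the live object.
                self.input_snapshot = input_snapshot

            def print_report(self):
                print("inputs %r produced %r" % (self.input_snapshot, self.value))

        def compute(inp):
            snapshot = (inp.a, inp.b)   # illustrative fields, copied by value
            return Result(inp.a + inp.b, snapshot)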

    Read the article

  • Installing chrome gives an error: "dependency is not satisfiable"

    - by Sled
    I just installed Ubuntu on my laptop. Everything works fine, but I'd like to use Chrome instead of Firefox. I downloaded the .deb file from the Chrome website, but when I open it, the install button inside the Software Center is inactive (I can't click it) and it's telling me dependency is not satisfiable: libcurl3. I did a search for libcurl3 in the Software Center; the three results I'm getting are already installed. Any ideas how to fix this? I also tried installing chromium-browser, but that's not working out either. I'm getting Package dependencies not resolved and this details block:

        The following packages have unmet dependencies:
         chromium-browser: Depends: libgcc1 (>= 1:4.1.1) but 1:4.5.2-8ubuntu4 is to be installed
                           Depends: libxdamage1 (>= 1:1.1) but 1:1.1.3-1ubuntu1 is to be installed
                           Depends: zlib1g (>= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu3 is to be installed
                           Depends: libnss3-1d (>= 3.12.3) but it is not going to be installed
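
    When the Software Center button is greyed out, a hedged command-line sketch often surfaces a clearer error message (the .deb filename is illustrative):

        sudo dpkg -i google-chrome-stable_current_i386.deb   # reports the exact missing deps
        sudo apt-get -f install                              # pulls in whatever dpkg listed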

    Read the article

  • Book: Dependency Injection in .NET

    - by CoffeeAddict
    Does anyone else find it odd that this is a book from mid-2010 on a pretty popular topic, and yet there is no "see inside" and, even worse, no reviews? I want to buy it, but it's extremely odd that for such a popular topic there isn't at least one or two reviews; I'd expect a ton of reviews on a book on a subject such as this: Dependency Injection in .NET (Manning). Does anyone have this book and can tell me whether it's worth my money? The date incorrectly states 2001 on Amazon, and I've notified the author about that.

    Read the article

  • Dependency is not satisfiable: lib32gcc1 in Google Chrome

    - by user2287892
    I've tried to install the 32-bit .deb package of Google Chrome, but it gave me the following errors:

        depends: lib32gcc1 (= 1:4.1.1) but it is not installable
        depends: lib32stdc++6 (= 4.6) but it is not installable
        depends: libc6-i386 (= 2.11) but it is not installable
        depends: libxss1 but it is not installable

    Currently, I'm using Ubuntu 13.10 (32-bit, i386), and when I try to install the dependencies, it says they cannot be installed. I'm totally lost; I don't know what I should do. Thank you.

    Read the article

  • Cinnamon cannot install due to libgjs0 dependency

    - by Kin.
    I was following How do I install the Cinnamon Desktop?, but when I install, it goes like this:

        locahost@locahost:~$ sudo apt-get install cinnamon
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         cinnamon : Depends: libgjs0-
        E: Unable to correct problems, you have held broken packages.

    How can I install the libgjs0- package?

    Read the article

  • How to get out of dependency hell

    - by maiios
    I am writing this from memory, away from the machine at work. Basically, I have libjpeg-turbo8 installed, but libjpeg-turbo8:i386 is at version xxx4.4 while libjpeg-turbo8:amd64 is at version xxx4.3. I am not sure how this mismatch happened. I believe that 4.3 is the right version, so I would like to roll the 32-bit version back. apt-get install libjpeg-turbo8:i386xxxxxx4.3 did not work (it still complained that it couldn't do anything because of the mismatch). Basically, I can't do anything with apt-get because this mismatch is causing dependency hell. What is the proper way to resolve this? The box is 64-bit 12.04.
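
    One detail worth knowing here: apt-get accepts an explicit pkg:arch=version form, which is the sanctioned way to roll a single architecture back. A hedged sketch (the placeholder must be replaced with the real version string reported by apt-cache):

        apt-cache policy libjpeg-turbo8:amd64 libjpeg-turbo8:i386   # list the exact versions
        sudo apt-get install libjpeg-turbo8:i386=<version-matching-amd64>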

    Read the article

  • Dependency Management tool for REST endpoints

    - by ShaggyInjun
    I work in a REST-oriented environment. The number of endpoints is quite large and spans multiple applications. The dependencies between the endpoints are numerous as well and not very well planned; applications have cyclic dependencies amongst each other. Unfortunately, there is no central location where all the endpoints are documented and declare their dependencies (the endpoints that they in turn call). Is there a tool that will help with such dependency management? I tried searching for one online, but not knowing what such a thing would be called, I am unable to find anything. P.S. Google only helps those who know what they need help with. :(

    Read the article

  • Page cache flushing behavior under heavy append load

    - by Bryce
    I'm trying to understand the behavior of the Linux pdflush daemon when:

        1. The page cache is initially pretty much empty
        2. There is a large amount of free memory
        3. The system starts undergoing heavy write load

    My understanding right now is that the vm.dirty_ratio and vm.dirty_background_ratio settings that control page cache flushing behavior are relative to the present size of the page cache, which would mean that my writes will flush earlier than they would if the page cache were pre-populated (even with dummy data from some random file), and thus throughput will be lower. Is this accurate?
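
    A hedged way to watch this live while the write load runs:

        sysctl vm.dirty_ratio vm.dirty_background_ratio            # the two thresholds in question
        watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'    # dirty pages vs. writeback in flight

    For what it's worth, the kernel documentation describes these ratios as percentages of available memory (free plus reclaimable pages), not of the current page cache size, which bears directly on the premise of the question.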

    Read the article

  • Why does Tomcat try to use the cache when compilation failed?

    - by etheros
    For some reason, it appears Tomcat is trying to hit its compilation cache even when compilation failed. For example, if I create a JSP containing nothing but Hello, <%=world%>!, predictably I get an error: org.apache.jasper.JasperException: Unable to compile class for JSP. Subsequent requests, however, alternate between this and org.apache.jasper.JasperException: org.apache.jasper.JasperException: Unable to load class for JSP. Further, if I create a JSP containing Hello!, it of course works just fine. If I then modify it to contain Hello, <%=name%>!, the response alternates between the previously mentioned compilation error and the cached Hello!. What's going on?

    Read the article

  • How can I diagnose cache misses when using Apache as a reverse proxy?

    - by johnstok
    I have set up Apache 2.2 as a reverse proxy with the following configuration:

        # jBoss proxying
        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /foo http://localhost:9080/foo
        ProxyPassReverse /foo http://localhost:9080/foo
        ProxyPassReverseCookiePath /foo /foo

        # Reverse proxy caching
        CacheEnable disk /foo

        # Compression
        SetOutputFilter DEFLATE
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE\s(7|8) !no-gzip !gzip-only-text/html
        DeflateCompressionLevel 9
        Header append Vary User-Agent env=!dont-vary

    However, in a number of cases where I expect a cached response to be returned, the request is sent through to the origin server at localhost:9080. Responses have an HTTP Vary header of 'Accept-Encoding,User-Agent', which is to be expected given the mod_deflate configuration. How can I determine why Apache is unable to serve a response from the cache?
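
    A hedged diagnostic sketch for Apache 2.2: mod_cache explains each hit-or-miss decision in the error log at debug level, usually naming what blocked caching:

        # In the vhost, temporarily:
        LogLevel debug
        # Then watch the decisions as requests come in:
        #   tail -f /var/log/apache2/error.log | grep -i cache

    Two plausible culprits with this particular config: Vary: User-Agent partitions the cache by the full User-Agent string, so near-identical browsers each take their own miss, and mod_disk_cache generally declines responses lacking explicit freshness information (an Expires or Cache-Control max-age from the origin) unless directives such as CacheIgnoreNoLastMod loosen the rules.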

    Read the article

  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression, and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage system is slow, it is possible that ZFS would perform better with compression turned on. This can easily be tested when writing files, by measuring CPU and disk usage and throughput (latency may increase, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then this is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?

    Read the article

  • Any way to get back Chrome's Dialog box for cache clearing instead of the new tab?

    - by Stuart P.
    As of today's release of Chrome (Tuesday, March 8, 2011) on both Mac & PC, the settings are now in a tab (chrome://settings/advanced). Needless to say, when you're clearing your cache very frequently (Cmd-Shift-Delete on Mac, Ctrl+Shift+Delete on PC), it's quite tedious going back and forth between tabs. The Click&Clean Chrome extension doesn't have a Mac counterpart (plus I like the keyboard much more than the mouse). I've searched and have yet to find a way to get a dialog box instead of the new tab.

    Read the article
