Search Results

Search found 7960 results on 319 pages for 'distributed cache'.

Page 68/319 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • How do I fix a permissions problem with MS Distributed File System?

    - by charlesrandall
    I have a new Windows 7 computer that is supposed to have access to particular network resources on a Distributed File System. However, despite all permissions being set correctly, I have consistent trouble accessing them. For instance, I'm supposed to be able to reach \\company.org\main\subdir. All the permissions have been granted, but when I try to access it by name, it tells me I don't have permission to access \main. This is where the fun starts: if I ping company.org, get the IP, and replace company.org with the IP, I can then access \\IP\main\subdir without any problems at all. However, we have a ton of scripts and build tools that access the network resource by name. My sysadmin has found that, using MS's dfsutil.exe, we can fix it temporarily with this sequence of commands:

        C:\dfsutil.exe /pktinfo
        C:\dfsutil.exe /PktFlush
        C:\dfsutil.exe /SpcFlush
        C:\dfsutil.exe /PurgeMupCache
        C:\dfsutil.exe /pktinfo

    After that, everything is great... until I reboot, or until some unspecified time later when suddenly I don't have access to \main\ anymore. I'm hoping to find a more permanent solution than waiting for it to break and running a batch file.
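
    In the meantime, one stopgap is to script that flush sequence and run it at logon or on a schedule rather than by hand. A rough sketch in Python (my own wrapper, not an official tool; it assumes dfsutil.exe is on the PATH and the script runs with sufficient privileges, and it only automates the exact commands listed above):

        import subprocess

        # The dfsutil switches from the workaround above, in order.
        FLUSH_SEQUENCE = [
            ["dfsutil.exe", "/pktinfo"],        # show the referral cache before flushing
            ["dfsutil.exe", "/PktFlush"],       # flush the DFS referral (PKT) cache
            ["dfsutil.exe", "/SpcFlush"],       # flush the SPC cache
            ["dfsutil.exe", "/PurgeMupCache"],  # purge the MUP cache
            ["dfsutil.exe", "/pktinfo"],        # show the cache again to confirm
        ]

        def flush_dfs_caches():
            for cmd in FLUSH_SEQUENCE:
                # check=False: keep going even if one step reports an error
                subprocess.run(cmd, check=False)

        if __name__ == "__main__":
            flush_dfs_caches()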

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Organizations see the need for computing infrastructures that they can "rent" or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:

    - Private data (they do not want to send or store sensitive data off-site)
    - A high-dollar investment in current infrastructure
    - Applications that currently run well but may need additional periodic capacity
    - Current applications not designed in a stateless fashion

    In these situations, a "hybrid" approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping the majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A "hybrid" architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

    Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction between the local infrastructure and Windows or SQL Azure to a great deal of it. The patterns fall into two broad schemas, and even these can be mixed.

    1. Client-Centric Hybrid Patterns. In this pattern, programs are coded so that the client system sends queries or compute requests to multiple systems. The "client" in this case might be a web-based codeset actually stored on another system (which acts as the client, with the user's device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it is also the most brittle: any change in the architecture must be reflected on each client, though this can be mitigated by using a centralized system as the client, as in the web scenario.

    2. System-Centric Hybrid Patterns. Another approach is to create a distributed architecture by turning on-site systems into "services" that can be called from Windows Azure using the Service Bus or the Access Control Service (ACS) capabilities. The code calls then come from a series of in-process client applications. In this pattern you move the "client" interface into the server application logic. If you do not wish to change the application itself, you can "layer" the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Description Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service-Oriented Architecture (SOA) environment, and it has the advantage of decoupling your computing architecture: if each system offers a "service" of the results of some software processing, the operating system or platform becomes immaterial, as long as it adheres to the service contract.

    There are important considerations when you federate a system, whether to Windows or SQL Azure or to any other distributed architecture. While these considerations apply to coding any application for distributed computing, they are especially important for a hybrid application.

    - Connection resiliency: On-premise applications normally have low latency and good connection properties, something you are not always guaranteed in a distributed, hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic.
    - Authorization and access: In a single authorization environment such as an Active Directory domain, security is handled at a user/password level. In a distributed computing environment you have more options. You can mitigate this by using the Access Control Service (ACS) feature of the Windows Azure Application Fabric to make the Azure application aware of an ADFS provider. However, a claims-based authentication structure is often a superior choice.
    - Consistency and concurrency: With a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle.

    Resources:
    - How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    - General architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
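
    The connection resiliency point is the one that most often needs actual code. As a rough, generic illustration of "extended retry logic" (not from the original post; sketched in Python, with the attempt count and backoff values chosen arbitrarily):

        import random
        import time

        def call_with_retry(operation, max_attempts=5, base_delay=1.0):
            """Run a remote call, retrying transient failures with backoff.

            operation is any zero-argument callable that performs the call.
            Exponential backoff plus a little jitter keeps clients from
            hammering the remote service in lockstep.
            """
            for attempt in range(1, max_attempts + 1):
                try:
                    return operation()
                except (ConnectionError, TimeoutError):
                    if attempt == max_attempts:
                        raise  # out of retries: surface the failure to the caller
                    delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
                    time.sleep(delay)

        # Example usage (fetch_orders_from_azure is a hypothetical remote call):
        # result = call_with_retry(lambda: fetch_orders_from_azure(customer_id=42))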

    Read the article

  • Can I configure a DNS cache not to forward AAAA queries?

    - by itsadok
    I'm setting up an internal DNS cache because my firewall is having trouble handling all the sessions created by DNS requests. I tried using bind9, dnsmasq and DJB dnscache; they all help reduce the number of requests leaving my network, but there are still a lot of requests being made. Looking at the log files, and at tcpdump and dnstop output, it seems that requests that return SERVFAIL do not get cached at all. A lot of those failed requests are AAAA requests, which is a shame, because I do not have IPv6 enabled on any server. I've looked at several ways to help the situation, and I think that if I could somehow prevent AAAA record requests from being forwarded by the DNS cache, it would reduce the number of requests significantly. The closest thing I found was the filter-aaaa-on-v4 option in BIND9. However, this only removes the record from the server response; it does not prevent the query from being forwarded. Any help would be appreciated.

    Read the article

  • How to disable proxy cache when query string is empty?

    - by chx
    With nginx I have:

        server {
            listen 1.2.3.4:80;
            proxy_cache_valid 200 302 5m;
            location / {
                try_files $uri @upstream;
                root $root;
            }
        }

    When I go to http://example.com/foobar it generates a redirect to http://example.com/foobar?filter_distance=50&... which is visitor-dependent, so I would like not to cache that redirect. I need to bypass the cache when the query string is empty. I am a bit lost, because location /foobar will match both.

    Read the article

  • Does Intel Smart Response provide any statistics on the cache usage?

    - by Tom Seddon
    I've set up my Z68-based Core i7 PC with a 60GB SSD dedicated as a Smart Response cache drive. Is there any way I can get any statistics out of it? It would be nice to have some information on how much cache space is actually being used, maybe how much of it was actually accessed recently, and how many reads in general are coming from the SSD rather than from the mechanical disk. These statistics might help to quickly provide some evidence for or against the use of Smart Response, without my having to reinstall Windows on the SSD (etc.) to find out. The Windows ReadyBoost feature has some performance counters you can access via the Windows 7 perfmon tool, for example, which is the kind of thing I'm hoping is somehow available. Smart Response provides no perfmon counters, though, and the Intel Rapid Storage Utility tells you pretty much nothing except that Smart Response is switched on.

    Read the article

  • How Can I Edit Google Chrome's 'History Provider Cache' File?

    - by dissolved
    I'm interested in editing (not completely deleting) the contents of some of Google Chrome's cache files. In particular, the 'History Provider Cache' (found in ~/Library/Application Support/Google/Chrome/Default on Mac). As this other question suggests, it appears to simply be a SQLite file. Unfortunately, when I try to open it using a SQLite browser (MesaSQLite) I'm asked for an encryption key. So I'd welcome any suggestions on how to either (1) determine the encryption key, or (2) find an alternate way to edit this file. The end goal is to be able to remove specific annoying suggestions in the Omnibar. I've read countless other techniques, but none seem to remove suggestions that have the clock icon next to them. Some say deleting this file entirely will do the trick (and I imagine it will), but I don't wish to trash my entire browsing history. I find most of the suggestions useful and helpful, and I'd like to preserve that.
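
    If the file really is a plain (unencrypted) SQLite database, Python's built-in sqlite3 module may open it without any key prompt, which would sidestep the MesaSQLite issue entirely. A speculative sketch that only inspects the schema, so you can see what the file contains before editing anything (work on a copy; the table and column names inside the file are unknown here):

        import shutil
        import sqlite3

        # Work on a copy so Chrome's original file is never touched.
        SRC = "History Provider Cache"        # path under the Chrome profile directory
        COPY = "History Provider Cache.copy"
        shutil.copyfile(SRC, COPY)

        conn = sqlite3.connect(COPY)
        try:
            # List the tables so we can see what is actually stored in the file.
            tables = conn.execute(
                "SELECT name FROM sqlite_master WHERE type = 'table'"
            ).fetchall()
            print("tables:", tables)
        finally:
            conn.close()

    If sqlite3 raises "file is not a database", the file is not a standard SQLite database after all and a different approach is needed.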

    Read the article

  • How to Cache an image when the src is some action which returns an image?

    - by Bipul
    There are lots of questions about how to force the browser to cache or not to cache an image, but I am facing a slightly different situation. In several places on my web page, I am using the following code for images:

        <img title="<%= Html.Encode(Model.title)%>" src="<%= Url.Action(MVC.FrontEnd.Actions.RetrieveImage(Model.SystemId))%>"/>

    So in the generated HTML it looks like:

        <img title="blahblah" src="http://xyz.com/FrontEnd/Actions/RetrieveImage?imageId=X">

    where X is some integer. I have seen that although the browser (IE or Mozilla) caches images by default, it is not caching the images generated by the above method. Is there any way I can tell the browser to cache images of this type? Thanks in advance.
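
    What usually decides this is not the <img> tag but the HTTP headers the image action sends: without explicit Cache-Control/Expires (or validators such as Last-Modified/ETag), browsers tend to treat a dynamic URL like RetrieveImage?imageId=X as uncacheable. Just to show the headers involved, here is a sketch in Python/Flask rather than ASP.NET MVC (the route, file layout and max-age are all made up for illustration):

        from flask import Flask, send_file

        app = Flask(__name__)

        @app.route("/images/<int:image_id>")
        def retrieve_image(image_id):
            # Serve the image bytes, then tell the browser it may reuse them.
            response = send_file(f"store/{image_id}.png", mimetype="image/png")
            # Cache publicly for one day; tune max-age to how often images change.
            response.headers["Cache-Control"] = "public, max-age=86400"
            return response

    The same idea applies whatever framework serves the image: make the action that returns the image emit caching headers.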

    Read the article

  • Does the `Expires` HTTP header need to be consistent across multiple cold-cache requests?

    - by chakrit
    I'm implementing a custom web server of a kind, and am looking into adding Expires header support. However, I'm a little unsure of exactly how to implement it. If multiple cold-cache requests are made to the same unchanged resource on the server, and the server returns a different Expires header for each (say it uses a time relative to the request to calculate the exact value of the Expires date, e.g. +6 hours from the request time), does that invalidate the cache on all the proxy servers in between as well? Or is that impossible (per the spec)? In short: does the Expires HTTP header need to be consistent across multiple cold-cache requests?
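
    For what it's worth, the two policies the question contrasts can be sketched side by side (Python; the function names are mine, and the six-hour lifetime is taken from the example above):

        from datetime import datetime, timedelta, timezone
        from email.utils import format_datetime

        LIFETIME = timedelta(hours=6)  # the "+6 hours" example from the question

        def expires_relative_to_request():
            # Sliding policy: every response gets "request time + 6h", so two
            # cold-cache requests made at different times carry different values.
            now = datetime.now(timezone.utc)
            return format_datetime(now + LIFETIME, usegmt=True)

        def expires_from_resource(last_modified):
            # Resource-anchored policy: the value only changes when the resource
            # does, so every cache sees the same Expires for the same unchanged
            # resource.
            utc_modified = last_modified.astimezone(timezone.utc)
            return format_datetime(utc_modified + LIFETIME, usegmt=True)

    The sliding behaviour is essentially what Cache-Control: max-age expresses directly (a lifetime relative to the response), which is one reason many servers prefer max-age over hand-computed Expires values.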

    Read the article

  • Why does Cache.Add return an object that represents the cached item?

    - by Pure.Krome
    From MSDN, about the differences between Adding or Inserting an item into the ASP.NET Cache:

        Note: The Add and Insert methods have the same signature, but there are subtle differences between them. First, calling the Add method returns an object that represents the cached item, while calling Insert does not. Second, their behavior is different if you call these methods and add an item to the Cache that is already stored there. The Insert method replaces the item, while the Add method fails.

    [emphasis mine] The second part is easy; no question about that. But with the first part, why would it want to return an object that represents the cached item? If I'm trying to Add an item to the cache, I already have/know what that item is. I don't get it. What is the reasoning behind this?
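
    One way to make sense of that return value (this is my reading, not something stated in the excerpt): an Add that hands back whatever is already stored under the key when the add fails gives callers an atomic "get or add" in a single call, which matters when several requests race to populate the same key. A generic sketch of that pattern in Python, not the ASP.NET implementation itself:

        import threading

        class SimpleCache:
            """Tiny thread-safe cache whose add() returns the already-cached item, if any."""

            def __init__(self):
                self._lock = threading.Lock()
                self._items = {}

            def add(self, key, value):
                """Return None if value was added, or the item already stored under key."""
                with self._lock:
                    existing = self._items.get(key)
                    if existing is not None:
                        return existing          # the add "fails"; caller learns what won the race
                    self._items[key] = value
                    return None

        # Example usage (load_widget is a hypothetical expensive loader):
        # cache = SimpleCache()
        # widget = load_widget()
        # already_there = cache.add("widget:42", widget)
        # if already_there is not None:
        #     widget = already_there   # another caller cached it first; reuse that copy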

    Read the article

  • Ubuntu 13.10 Symfony installation date time issue

    - by Sambo
    I'm installing Symfony on my Ubuntu system. Everything was going fine until the very last moment, when I was met with a screen that said:

        ContextErrorException: Warning: date_default_timezone_get(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected the timezone 'UTC' for now, but please set date.timezone to select your timezone. in /var/www/symfony-test/app/cache/dev/classes.php line 5107
        in /var/www/symfony-test/app/cache/dev/classes.php line 5107
        at ErrorHandler->handle('2', 'date_default_timezone_get(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected the timezone 'UTC' for now, but please set date.timezone to select your timezone.', '/var/www/symfony-test/app/cache/dev/classes.php', '5107', array('level' => '100', 'message' => 'Notified event "kernel.exception" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelException".', 'context' => array()))
        at date_default_timezone_get() in /var/www/symfony-test/app/cache/dev/classes.php line 5107
        at Logger->addRecord('100', 'Notified event "kernel.exception" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelException".', array()) in /var/www/symfony-test/app/cache/dev/classes.php line 5193
        at Logger->debug('Notified event "kernel.exception" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelException".') in /var/www/symfony-test/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Debug/TraceableEventDispatcher.php line 246
        at TraceableEventDispatcher->preListenerCall('kernel.exception', array(object(ProfilerListener), 'onKernelException')) in /var/www/symfony-test/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Debug/TraceableEventDispatcher.php line 448
        at TraceableEventDispatcher->Symfony\Component\HttpKernel\Debug\{closure}(object(GetResponseForExceptionEvent))
        at call_user_func(object(Closure), object(GetResponseForExceptionEvent)) in /var/www/symfony-test/app/cache/dev/classes.php line 1667
        at EventDispatcher->doDispatch(array(object(Closure), object(Closure)), 'kernel.exception', object(GetResponseForExceptionEvent)) in /var/www/symfony-test/app/cache/dev/classes.php line 1600
        at EventDispatcher->dispatch('kernel.exception', object(GetResponseForExceptionEvent)) in /var/www/symfony-test/app/cache/dev/classes.php line 1764
        at ContainerAwareEventDispatcher->dispatch('kernel.exception', object(GetResponseForExceptionEvent)) in /var/www/symfony-test/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Debug/TraceableEventDispatcher.php line 139
        at TraceableEventDispatcher->dispatch('kernel.exception', object(GetResponseForExceptionEvent)) in /var/www/symfony-test/app/bootstrap.php.cache line 2870
        at HttpKernel->handleException(object(ContextErrorException), object(Request), '1') in /var/www/symfony-test/app/bootstrap.php.cache line 2823
        at HttpKernel->handle(object(Request), '1', true) in /var/www/symfony-test/app/bootstrap.php.cache line 2947
        at ContainerAwareHttpKernel->handle(object(Request), '1', true) in /var/www/symfony-test/app/bootstrap.php.cache line 2249
        at Kernel->handle(object(Request)) in /var/www/symfony-test/web/app_dev.php line 28

    After many hours of trying ideas from other threads, editing php.ini and classes.php to something that might work, I have gotten absolutely nowhere! Has anyone else had this problem?

    Read the article

  • C#: Is it possible to use expressions or functions as keys in a dictionary?

    - by Svish
    Would it work to use Expression<Func<T>> or Func<T> as keys in a dictionary? For example to cache the result of heavy calculations. For example, changing my very basic cache from a different question of mine a bit:

        public static class Cache<T>
        {
            // Alternatively using Expression<Func<T>> instead
            private static Dictionary<Func<T>, T> cache;

            static Cache()
            {
                cache = new Dictionary<Func<T>, T>();
            }

            public static T GetResult(Func<T> f)
            {
                if (cache.ContainsKey(f))
                    return cache[f];
                return cache[f] = f();
            }
        }

    Would this even work?

    Edit: After a quick test, it seems like it actually works. But I discovered that it could probably be more generic, since it would now be one cache per return type... not sure how to change it so that wouldn't happen though... hmm

    Edit 2: Noo, wait... it actually doesn't. Well, for regular methods it does. But not for lambdas. They get various random method names even if they look the same. Oh well c",)
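
    The behaviour described in Edit 2 has a close analogue in other languages: structurally identical lambdas created at different points are still distinct objects, don't compare equal, and therefore miss a dictionary keyed on them. A quick Python illustration of the same pitfall (not C#, just the concept):

        def make_key():
            # A new function object is created on every call, even though
            # the body is identical each time.
            return lambda: 40 + 2

        cache = {}

        f1 = make_key()
        f2 = make_key()

        cache[f1] = f1()      # cache the "heavy" result under the function object
        print(f1 == f2)       # False: identical-looking lambdas are different objects
        print(f2 in cache)    # False: so the second lambda misses the cache
        print(f1 in cache)    # True: only the exact same object hits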

    Read the article

  • cached schwartzian transform

    - by davidk01
    I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform", and after it sunk in I started to wonder why the transform doesn't use a cache. In lists that have several repeated values, the transform recomputes the value for each one, so I thought: why not use a hash to cache results? Here's some code:

        # a place to keep our results
        my %cache;

        # the transformation we are interested in
        sub foo {
            # expensive operations
        }

        # some data
        my @unsorted_list = ....;

        # sorting with the help of the cache
        my @sorted_list = sort {
            ($cache{$a} or $cache{$a} = &foo($a))
                <=>
            ($cache{$b} or $cache{$b} = &foo($b))
        } @unsorted_list;

    Am I missing something? Why isn't the cached version of the Schwartzian transform listed in books, and in general just better circulated? At first glance I think the cached version should be more efficient.
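
    For comparison, here are the same two ideas in Python terms, where sorted(key=...) plays the role of the Schwartzian Transform (the key function runs once per element) and a memoizing wrapper plays the role of the %cache hash for lists with many repeated values (foo below is just a stand-in for the expensive transformation):

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def foo(item):
            # stand-in for the expensive transformation; cached per distinct input
            return sum(ord(c) for c in item)

        unsorted_list = ["pear", "apple", "pear", "fig", "apple", "pear"]

        # sorted(key=...) computes foo once per element (the Schwartzian idea);
        # the lru_cache means repeated values pay for the computation only once.
        sorted_list = sorted(unsorted_list, key=foo)
        print(sorted_list)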

    Read the article

  • alias_method and class_methods don't mix?

    - by Daniel
    Greetings, I've been trying to tinker with a global Cache module, but I can't figure out why this isn't working. Does anyone have any suggestions? This is the error produced for the code below:

        NameError: undefined method `get' for module `Cache'
            from (irb):21:in `alias_method'

        module Cache
          def self.get
            puts "original"
          end
        end

        module Cache
          def self.get_modified
            puts "New get"
          end
        end

        def peek_a_boo
          Cache.module_eval do
            # make :get_not_modified
            alias_method :get_not_modified, :get
            alias_method :get, :get_modified
          end

          Cache.get

          Cache.module_eval do
            alias_method :get, :get_not_modified
          end
        end

        # test first round
        peek_a_boo

        # test second round
        peek_a_boo

    TIA! -daniel

    Read the article

  • How do I temporarily monkey with a global module constant?

    - by Daniel
    Greetings, I want to tinker with the global memcache object, and I found the following problems: Cache is a constant, and Cache is a module. I only want to modify the behavior of Cache globally for a small section of code, for a possible major performance gain. Since Cache is a module, I can't re-assign it or encapsulate it. I would like to do this, deep in a controller method:

        code code code...
        old_cache = Cache
        Cache = MyCache.new
        code code code...
        Cache = old_cache
        code code code...

    However, since Cache is a constant, I'm forbidden to change it. Threading is not an issue at the moment. :) Would it be "good manners" for me to just alias_method the special code I need for a small section of code and then later unalias it again? That doesn't pass the smell test IMHO. Does anyone have any ideas? TIA, -daniel

    Read the article

  • Instance caching in Objective C

    - by zoul
    Hello! I want to cache the instances of a certain class. The class keeps a dictionary of all its instances, and when somebody requests a new instance, the class tries to satisfy the request from the cache first. There is a small problem with memory management, though: the dictionary cache retains the inserted objects, so they never get deallocated. I do want them to get deallocated, so I had to overload the release method, and when the retain count drops to one I can remove the instance from the cache and let it get deallocated. This works, but I am not comfortable mucking around with the release method and find the solution overly complicated. I thought I could use some hashing class that does not retain the objects it stores. Is there such a thing? The idea is that when the last user of a certain instance releases it, the instance would automatically disappear from the cache. NSHashTable seems to be what I am looking for, but the documentation talks about "supporting weak relationships in a garbage-collected environment." Does it also work without garbage collection?

    Clarification: I cannot afford to keep the instances in memory unless somebody really needs them; that is why I want to purge an instance from the cache when its last "real" user releases it.

    Better solution: This was on the iPhone; I wanted to cache some textures, and on the other hand I wanted to free them from memory as soon as the last real holder released them. The easier way to code this is through another class (let's call it TextureManager). This class manages the texture instances and caches them, so that subsequent calls for a texture with the same name are served from the cache. There is no need to purge the cache immediately as the last user releases a texture. We can simply keep the texture cached in memory, and when the device gets short on memory we receive the low-memory warning and can purge the cache. This is a better solution, because the caching stuff does not pollute the Texture class, we do not have to mess with release, and there is even a higher chance of cache hits. The TextureManager can be abstracted into a ResourceManager, so that it can cache other data, not only textures.
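
    The "cache that does not retain its entries" idea exists in other runtimes too; for instance, Python's weakref.WeakValueDictionary is essentially the non-retaining hash being asked about, with entries vanishing automatically once the last real user drops its reference. A small sketch of the pattern (not Objective-C, just the concept):

        import weakref

        class Texture:
            """Stand-in for the expensive-to-load object being cached."""
            def __init__(self, name):
                self.name = name

        _cache = weakref.WeakValueDictionary()

        def get_texture(name):
            # Serve from the cache while someone still holds the instance alive;
            # otherwise load it and remember it weakly.
            tex = _cache.get(name)
            if tex is None:
                tex = Texture(name)   # pretend this is the expensive load
                _cache[name] = tex    # weak entry: does not keep tex alive by itself
            return tex

        a = get_texture("grass")
        b = get_texture("grass")
        print(a is b)        # True: the second request was served from the cache
        del a, b             # once the last strong reference goes away...
        print(len(_cache))   # ...the entry disappears from the cache (typically 0 here)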

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            """Not found in cache"""
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # (2,592,000 seconds = 30 days)
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I get many records from a few tables (relationships). In my view I get records from one table, and in the template I show them together with related info from the others. Generating the page takes a few seconds for very small tables (<100 records). Is there some easy way to cache queries from templates? Or do I have to build some big structure in my view (with all the related tables), cache it, and send that to the template?
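
    On the last question: one common pattern is to do the related-table work in the view with select_related (so the page is not issuing one query per row) and cache the fully evaluated result, rather than trying to cache from the template. A rough sketch under assumed names (Book with a ForeignKey to Author here; substitute the real models):

        from django.core.cache import cache
        from myapp.models import Book   # hypothetical app/model with a ForeignKey "author"

        def get_books_with_authors():
            cache_key = 'books_with_authors'
            books = cache.get(cache_key)
            if books is None:
                # select_related pulls the related Author rows in the same query,
                # and list() forces evaluation so the cached object is plain data.
                books = list(Book.objects.select_related('author'))
                cache.set(cache_key, books, 600)  # cache for 10 minutes
            return books

    Django also ships a template-fragment caching tag ({% load cache %} / {% cache %}), which is the closest thing to caching directly "from templates".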

    Read the article

  • Fix N+1 query in "declarative_authorization" gem using gem "bullet"

    - by makaroni4
    Currently I am working on one big web application, and to make it faster I decided to refactor all N+1 queries (to decrease the number of requests to the database, http://rails-bestpractices.com/posts/29-fix-n-1-queries). So I installed the "bullet" gem, which doesn't work with Rails 3.1.1 right now (you can use the fork from https://github.com/flyerhzm/bullet). When using the declarative_authorization gem, on each page I get the same alerts:

        N+1 Query detected
          Role => [:permissions]
          Add to your finder: :include => [:permissions]
        N+1 Query detected
          Permission => [:permission_rules]
          Add to your finder: :include => [:permission_rules]

        CACHE (0.0ms) SELECT "roles".* FROM "roles"
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 1
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 2
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 3
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 4
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 6
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 7
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 8
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 30
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 31
        ...

    Could you please help me with that and make these queries faster?

    Read the article

  • Hibernate 3.5.1 - can I just drop in EHCache 2.0.1?

    - by caerphilly
    I'm using Hibernate 3.5.1, which comes with EHCache 1.5 bundled. If I want to use the latest EHCache release (2.0.1), is it just a matter of removing ehcache-1.5.jar from my project and replacing it with ehcache-core-2.0.1.jar? Any issues to be aware of? Also - is a cache "region" in the Hibernate mapping file the same as a cache "name" in the ehcache configuration XML? What I want to do is define two named cache regions - one for read-only reference entities that won't change (lookup lists etc.), and one for all other entities. So in ehcache I want to define two elements:

        <cache name="readonly"> ... </cache>
        <cache name="mutable"> ... </cache>

    And then in my Hibernate mapping files, I will specify the cache to be used for each entity:

        <hibernate-mapping>
          <class name="lookuplist">
            <cache region="readonly" usage="read-only"/>
            <property> ... </property>
          </class>
        </hibernate-mapping>

    Will that work? Some of the documentation seems to imply that a separate region/cache gets created for each mapped class... Thanks.

    Read the article

  • Speed up this query joining to a table multiple times

    - by Mongus Pong
    Hi, I have this query that (stripped right down) goes something like this:

        SELECT
            [Person_PrimaryContact].[LegalName],
            [Person_Manager].[LegalName],
            [Person_Owner].[LegalName],
            [Person_ProspectOwner].[LegalName],
            [Person_ProspectBDM].[LegalName],
            [Person_ProspectFE].[LegalName],
            [Person_Signatory].[LegalName]
        FROM [Cache]
        LEFT JOIN [dbo].[Person] AS [Person_Owner] WITH (NOLOCK)
            ON [Person_Owner].[PersonID] = [Cache].[ClientOwnerID]
        LEFT JOIN [dbo].[Person] AS [Person_Manager] WITH (NOLOCK)
            ON [Person_Manager].[PersonID] = [Cache].[ClientManagerID]
        LEFT JOIN [dbo].[Person] AS [Person_Signatory] WITH (NOLOCK)
            ON [Person_Signatory].[PersonID] = [Cache].[ClientSignatoryID]
        LEFT JOIN [dbo].[Person] AS [Person_PrimaryContact] WITH (NOLOCK)
            ON [Person_PrimaryContact].[PersonID] = [Cache].[PrimaryContactID]
        LEFT JOIN [dbo].[Person] AS [Person_ProspectOwner] WITH (NOLOCK)
            ON [Person_ProspectOwner].[PersonID] = [Cache].[ProspectOwnerID]
        LEFT JOIN [dbo].[Person] AS [Person_ProspectBDM] WITH (NOLOCK)
            ON [Person_ProspectBDM].[PersonID] = [Cache].[ProspectBDMID]
        LEFT JOIN [dbo].[Person] AS [Person_ProspectFE] WITH (NOLOCK)
            ON [Person_ProspectFE].[PersonID] = [Cache].[ProspectFEID]

    Person is a huge table, and each join to it has a pretty significant hit in the execution plan. Is there any way I can adjust this query so that I am only linking to it once, or at least get SQL Server to scan through it only once?

    Read the article

  • GAE JCache NumberFormatException, will I need to write Java to avoid?

    - by Jasper
    The code below produces a NumberFormatException on this line:

        val cache = cf.createCache(Collections.emptyMap())

    Do you see any errors? Will I need to write a Java version to avoid this, or is there a Scala way?

        ...
        import java.util.Collections
        import net.sf.jsr107cache._

        object QueryGenerator extends ServerResource {
          private val log = Logger.getLogger(classOf[QueryGenerator].getName)
        }

        class QueryGenerator extends ServerResource {

          def getCounter(cache: Cache): Long = {
            if (cache.containsKey("counter")) {
              cache.get("counter").asInstanceOf[Long]
            } else {
              0L
            }
          }

          @Get("html")
          def getHtml(): Representation = {
            val cf = CacheManager.getInstance().getCacheFactory()
            val cache = cf.createCache(Collections.emptyMap())
            val counter = getCounter(cache)
            cache.put("counter", counter + 1)

            val q = QueueFactory.getQueue("query-generator")
            q.add(TaskOptions.Builder.url("/tasks/query-generator").method(Method.GET).countdownMillis(1000L))

            QueryGenerator.log.warning(counter.toString)

            new StringRepresentation("QueryGenerator started!", MediaType.TEXT_HTML)
          }
        }

    Thanks!

    Read the article

  • How to use jQuery UI in my Custom Function? (Autocomplete)

    - by bakazero
    I want to create a function to simplify configuration of the jQuery UI Autocomplete. Here is my function code:

        (function($) {
            $.fn.myAutocomplete = function() {
                var cache = {};
                var dataUrl = args.dataUrl;
                var dataSend = args.dataItem;
                $.autocomplete({
                    source: function(request, response) {
                        if (cache.term == request.term && cache.content) {
                            response(cache.content);
                        }
                        if (new RegExp(cache.term).test(request.term) && cache.content && cache.content.length < 13) {
                            var matcher = new RegExp($.ui.autocomplete.escapeRegex(request.term), "i");
                            response($.grep(cache.content, function(value) {
                                return matcher.test(value.value)
                            }));
                        }
                        $.ajax({
                            url: dataUrl,
                            dataType: "json",
                            type: "POST",
                            data: dataSend,
                            success: function(data) {
                                cache.term = request.term;
                                cache.content = data;
                                response(data);
                            }
                        });
                    },
                    minLength: 2,
                });
            }
        })(jQuery);

    But when I use this function like:

        $("input#tag").myAutocomplete({
            dataUrl: "/auto_complete/tag",
            dataSend: {
                term: request.term,
                category: $("input#category").val()
            }
        });

    it gives me an error:

        Uncaught ReferenceError: request is not defined

    Read the article

  • Caching strategies for entities and collections

    - by Rob West
    We currently have an application framework in which we automatically cache both entities and collections of entities at the business layer (using .NET cache). So the method GetWidget(int id) checks the cache using a key GetWidget_Id_{0} before hitting the database, and the method GetWidgetsByStatusId(int statusId) checks the cache using GetWidgets_Collections_ByStatusId_{0}. If the objects are not in the cache they are retrieved from the database and added to the cache. This approach is obviously quick for read scenarios, and as a blanket approach is quick for us to implement, but requires large numbers of cache keys to be purged when CRUD operations are carried out on entities. Obviously as additional methods are added this impacts performance and the benefits of caching diminish. I'm interested in alternative approaches to handling caching of collections. I know that NHibernate caches a list of the identifiers in the collection rather than the actual entities. Is this an approach other people have tried - what are the pros and cons? In particular I am looking for options that optimise performance and can be implemented automatically through boilerplate generated code (we have our own code generation tool). I know some people will say that caching needs to be done by hand each time to meet the needs of the specific situation but I am looking for something that will get us most of the way automatically.
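
    The NHibernate-style approach mentioned above, caching a collection as a list of entity identifiers and resolving each id against the per-entity cache, can be sketched generically. Illustrated here in Python rather than .NET; "cache" stands for any get/set store (ASP.NET cache, memcached, ...), and the key formats and loader functions are placeholders:

        def get_widget(cache, widget_id, load_widget_from_db):
            """Per-entity cache: one key per widget, shared by every collection."""
            key = "widget:%d" % widget_id
            widget = cache.get(key)
            if widget is None:
                widget = load_widget_from_db(widget_id)
                cache.set(key, widget)
            return widget

        def get_widgets_by_status(cache, status_id, load_ids_from_db, load_widget_from_db):
            """Collection cache: store only the list of ids, not the entities.

            Updating or evicting a single widget then only touches its own key;
            the collection entry stays valid as long as membership hasn't changed.
            """
            key = "widgets:by_status:%d" % status_id
            ids = cache.get(key)
            if ids is None:
                ids = load_ids_from_db(status_id)   # e.g. SELECT id FROM widgets WHERE status = ...
                cache.set(key, ids)
            return [get_widget(cache, wid, load_widget_from_db) for wid in ids]

    The trade-off versus caching whole collections is an extra round of per-entity lookups on a collection hit, in exchange for far fewer keys to purge on writes.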

    Read the article

  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
    I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup which works really well, but I'd like to handle a few URLs more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie, to force nginx to cache them (nginx doesn't cache responses with Set-Cookie, as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers). So for these locations, all I'd like to do is set:

        proxy_ignore_headers Set-Cookie;

    I've tried everything I could think of short of setting up and duplicating every config value, but nothing works. I tried:

    - Nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case (it seemed to ignore anything set in the outer location, most notably proxy_pass, and I end up with a 404).
    - Specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here).
    - Specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx doesn't allow variables for this case and throws up.

    It should be so simple - modifying/appending a single directive for some requests - and yet I haven't been able to make nginx do that. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you.

    Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI.

        location / {
            # Setup var defaults
            set $no_cache "";

            # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
            if ($request_method !~ ^(GET|HEAD)$) {
                set $no_cache "1";
            }

            if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') {
                set $mobile_request '1';
                set $no_cache "1";
            }

            # feed crawlers, don't want these to get stuck with a cached version,
            # especially if it caches a 302 back to themselves (infinite loop)
            if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') {
                set $no_cache "1";
            }

            # Drop no cache cookie if need be
            # (for some reason, add_header fails if included in prior if-block)
            if ($no_cache = "1") {
                add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/";
                add_header X-Microcachable "0";
            }

            # Bypass cache if no-cache cookie is set, these are absolutely critical
            # for Wordpress installations that don't use JS comments
            if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") {
                set $no_cache "1";
            }

            if ($request_uri ~* wpsf-(img|js)\.php) {
                proxy_ignore_headers Set-Cookie;
            }

            # Bypass cache if flag is set
            proxy_no_cache $no_cache;
            proxy_cache_bypass $no_cache;

            # under no circumstances should there ever be a retry of a POST request,
            # or any other request for that matter
            proxy_next_upstream off;
            proxy_read_timeout 86400s;

            # Point nginx to the real app/web server
            proxy_pass http://localhost;

            # Set cache zone
            proxy_cache microcache;

            # Set cache key to include identifying components
            proxy_cache_key $scheme$host$request_method$request_uri$mobile_request;

            # Only cache valid HTTP 200 responses for this long
            proxy_cache_valid 200 15s;
            #proxy_cache_min_uses 3;

            # Serve from cache if currently refreshing
            proxy_cache_use_stale updating timeout;

            # Send appropriate headers through
            proxy_set_header Host $host; # no need for this
            proxy_set_header X-Real-IP $remote_addr; # no need for this
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Set files larger than 1M to stream rather than cache
            proxy_max_temp_file_size 1M;

            access_log /var/log/nginx/androidpolice-microcache.log custom;
        }

    Read the article

  • How do I cache WCF REST web service in IIS7?

    - by foosnazzy
    When I turn on output caching for my service it doesn't appear to be cache-worthy in IIS. It really should be since I'm returning the same JSON content over and over. The varyByQueryString option seems like it would do the trick, but since my resources are URI based, there really isn't a query string, just a path to a resource. Has anyone successfully gotten IIS to output cache a WCF REST service?

    Read the article

  • Accessing the ASP.NET Cache from a Separate Thread?

    - by maxp
    Normally I have a static class that reads and writes to HttpContext.Current.Cache. However, since adding threading to my project, the threads all get null reference exceptions when trying to retrieve this object. Is there any other way I can access it, a workaround, or another cache I can use?

    Read the article
