Search Results

Search found 2338 results on 94 pages for 'donut caching'.

Page 14 of 94

  • Complex knowledge management system with CRM, written internally

    - by JonH
    We've all heard of Salesforce, SugarCRM, and systems like them. Unfortunately, my workplace has asked us to write a similar system (rather than license or purchase one). The database is fairly large; think of modules such as corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one-to-many customers, a program has one or more projects, a project has one or more sub-projects, and an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple. In any event, the system in its current state has only two people working on it, so we basically have to do it all: CSS, database, jQuery, ASP.NET, and C#. We've started off well by defining the UI master and footer pages so that we can reuse them across all of our pages. Now comes the hard part. The system will have about 4k end users, with perhaps 5-10% of them concurrent. We are wondering whether it makes sense to cache our database data (for, say, 5-10 minutes) rather than continuously hit our database. The reason is that some of these pages may have 5-10 search filters each; imagine how many database hits every selection in a search box costs. Some of these search fields also cascade, so selecting from an initial drop-down may repopulate several drop-down boxes beneath it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember, the system is similar to a CRM system in which we manage our various customers, projects, sub-projects, issues, etc.
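
    What is being described is a read-through cache with a short TTL. The asker's stack is ASP.NET/C#, but the pattern itself is stack-neutral; here is a minimal sketch in PHP against memcached (keeping with the PHP examples elsewhere on this page), where get_filter_options() is a hypothetical stand-in for the real database query:

        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);

        function cached_filter_options(Memcached $mc, string $filterKey): array {
            $key = 'filters:' . $filterKey;
            $options = $mc->get($key);
            if ($options === false) {
                // Miss: run the real query, then keep the result for 10 minutes
                // so cascading filter selections stop hammering the database.
                $options = get_filter_options($filterKey); // hypothetical query
                $mc->set($key, $options, 600);
            }
            return $options;
        }

    The trade-off is exactly the one the asker suspects: results can be up to 10 minutes stale, which is usually fine for filter lists but may not be for issue status.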

    Read the article

  • Changing frontend cache

    - by Utsav
    Our architecture consists of a front-end cache that most read-only users obtain their data from directly. The front-end cache sits in front of a farm of web servers that serve pages written in PHP. We need to be able to detect certain conditions at the front-end cache level and pass those values through to the back-end via HTTP headers. For example, we would like to manually tag the carrier network based on the IP address: for incoming traffic from an IP address in the range 41.202.192.0/19, we would tag the user as an Orange Cameroon user by setting the appropriate HTTP request header, e.g., X-Carrier = "Orange Cameroon". Based on the setting of this header we would like to vary the cache and serve a different banner to the end user. How would you go about doing this? Keep in mind that we don't want to pollute the cache, and we also don't want to create too many small cache segments. Assumptions: you can assume that X-Carrier has already been detected in our cache, so for the purposes of your test you can just set this value manually in your example script.
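
    Since the pages are served by PHP, a minimal sketch of the back-end half might look like the following. As stated above, the assumption is that the cache layer has already set X-Carrier on the forwarded request; banner_for() is a hypothetical lookup from carrier name to banner markup:

        $carrier = $_SERVER['HTTP_X_CARRIER'] ?? 'default';

        // Vary on X-Carrier so the cache keeps one variant per carrier
        // (a small, bounded set of segments) rather than one per client.
        header('Vary: X-Carrier');
        header('Cache-Control: public, max-age=300');

        echo banner_for($carrier); // hypothetical carrier-to-banner lookup

    Varying on a header with only a handful of possible values keeps the segment count bounded, which addresses the cache-pollution worry.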

    Read the article

  • Writing low-latency Java

    - by user997112
    Are there any Java-specific techniques (things which wouldn't apply to C++) for writing low-latency code in Java? I often see low-latency Java roles asking for experience writing low-latency Java, which sometimes seems a little bit of an oxymoron. The only thing I could think of is experience with JNI, outsourcing I/O calls to native code. Possibly also the Disruptor pattern, but that's not an actual technology. Are there any Java-specific tips for writing low-latency code? I am aware there is a Real-Time Java spec, but I have been warned that real-time is not the same as low latency...

    Read the article

  • Interesting links week #9

    - by erwin21
    Below is a list of interesting links that I found this week:
    Frontend: Subway Map Visualization jQuery Plugin; Internet Explorer 9 Guide for Developers
    Development: Html Agility Pack; Cache Integration - Building and Using Custom OutputCache Providers in ASP.NET
    Marketing: A/B testing applications
    Other: Top 10 Reasons Web Developers Should Avoid Flash
    Interested in more links? Follow me on Twitter: http://twitter.com/erwingriekspoor

    Read the article

  • Intel programming "performance" books? [closed]

    - by user997112
    I vaguely remember seeing that Intel has produced a few good books, especially with regard to low-latency programming, but I cannot remember the titles. Could people suggest the titles of Intel books (or ones relating to Intel products)? Examples include books on:
    - the Intel compiler
    - the Intel assembler
    - any low-level programming in Intel assembly
    - the Intel CPU architecture
    - the Intel Threading Building Blocks library

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) in the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, the BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason.

    Edit: Please make sure you understand these points before answering:

    Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle day, and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :)

    Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself: when it hits its own storage limit, it just starts dropping the lowest-priority items to make space for new ones. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which has nothing to do with the max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point, the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
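
    To make the first point concrete, the long-lifetime pattern the asker refers to is usually implemented by fingerprinting the asset's URL with a hash of its content, so the URL itself changes whenever the asset changes. A generic illustration in PHP (not Google's actual setup):

        // Build a fingerprinted URL: new file content yields a new URL,
        // so the old copy can be cached for a year without ever going stale.
        $logoUrl = '/static/logo.' . substr(md5_file('logo.png'), 0, 8) . '.png';

        // When serving /static/logo.<hash>.png:
        header('Cache-Control: public, max-age=31536000, immutable');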

    Read the article

  • Android application Database Framework

    - by Marek Sebera
    When creating mobile (especially Android) applications, I usually run into a similar pattern of working with data. Usually I need to fetch some remote data (behind an authorization process) into a local cache, and on each subsequent request:
    1. Check networking
    2. Check presence of the cache file
    3. Check the version of the cache file (if networking)
    4. Get the new version and save the cache (if networking, and the file is not in the cache or is outdated)
    The data store is a NoSQL, JSON document-based store (and yes, I know about the Android version of CouchDB, but it doesn't fit my needs yet). The process of authorizing against the data source and the code for checking the version of the local cache are specific to each application, but the other code (handling the network, saving the cache, handling exceptions, ...) is always the same. Is there any data-store helper I can use which provides the functions I described above?

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • Why is Facebook's cache buggy?

    - by IAdapter
    I just started using Facebook, and I see that many times when I add something to my profile and visit it later, it's not there. I bet the reason is that the page is cached and not updated very often. Is this on purpose, or is it a bug? P.S. For example, I added the music I like and later saw that it appeared not to have been added, but the next day when I visited again it was there. I saw this in two web browsers, so it's a Facebook bug. Does it have something to do with scalability?

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as the request for the jQuery .js file. Since these files are already used on so many sites, don't modern browsers keep a cache for them? I am thinking of a system where the browser has a cached copy of a very frequently used .js file. On a new request for the .js file, it sends the server a request for a hash of the .js file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.
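
    What the asker sketches is essentially what HTTP already standardises as entity tags: the server labels a response with a hash-like ETag, and the browser revalidates its cached copy with If-None-Match. A minimal sketch of the server side in PHP (assuming jquery.js is a local file):

        $etag = '"' . md5_file('jquery.js') . '"';
        header('ETag: ' . $etag);

        if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
            // The browser's cached copy matches: send headers only, no body.
            http_response_code(304);
            exit;
        }

        header('Content-Type: application/javascript');
        readfile('jquery.js');

    The cross-site half of the idea is also why the common advice of the era was to load jQuery from a shared CDN URL: every site referencing the same URL shares one cached copy.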

    Read the article

  • Distributed cache and improvement

    - by philipl
    I have this question from an interview:

        // Web service function, given x
        static HashMap map;            // singleton, created once
        if (!map.containsKey(x)) {
            // perform some function to retrieve result y
            map.put(x, y);
        }
        return y;

    The interviewer asked general questions, such as what is wrong with this distributed cache implementation, and then asked how to improve on it, given that distributed servers will each hold different key/value pairs in their maps. There are simple mistakes to point out about synchronization and the key object, but what really startled me was that this interviewer thought that moving to a database implementation solves the problem of different servers having different map contents, i.e., the situation where value x is not on server A but is on server B, so redundant data has to be retrieved on server A. Does his thinking make any sense? (As I understand it, this is the basic trade-off of a distributed cache versus a database model; he seemed not to understand it at all.) And what are the typical solutions for the cache-growth issue (weak references?) and the sync issue (not knowing which server has already cached a key; use load balancing)? Thanks

    Read the article

  • Is the UX affected negatively by fully cacheable pages?

    - by ChocoDeveloper
    I want to have fully cacheable pages on my websites, but one cannot do that if the pages contain user-specific data, like the user bar or UI elements that change depending on the permissions the user has. So I was thinking about pulling everything that is user-specific via ajax and updating the UI accordingly. But I'm worried that this might be annoying for the user, and it also might be difficult to develop. What do you think? Is there a pattern or something I can follow to deal with this?
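
    The usual shape of this split is to serve the page itself with a public cache lifetime and expose the user-specific parts through a small endpoint that shared caches are told never to store. A sketch of such an endpoint in PHP, where current_user_bar() is a hypothetical renderer:

        // The page that embeds this fragment can stay fully cacheable;
        // only this endpoint is personal, so only it opts out of caching.
        header('Cache-Control: private, no-store');
        header('Content-Type: application/json');

        echo json_encode(['userbar' => current_user_bar()]);

    On the client, a single ajax call on page load fills in the user bar; everything else comes straight from the cache.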

    Read the article

  • Random Cache Expiry

    - by mahemoff
    I've been experimenting with random cache expiry times to avoid situations where an individual request forces multiple things to update at once. For example, a web page might include five different components. If each is set to time out in 30 minutes, the user will have a long wait every 30 minutes. So instead, you set them all to a random time between 15 and 45 minutes, making it likely that at most one component will reload for any given page load. I'm trying to find any research or guidelines on this topic, e.g. optimal variance parameters. I do recall seeing one article about how Google (?) uses this technique, but I can't locate it, and there doesn't seem to be much written about the topic.
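
    For reference, the technique itself is only a couple of lines. A sketch in PHP with memcached, using the 15-45 minute spread from the question (the key name and render_component() are illustrative):

        $base   = 30 * 60;                              // 30-minute nominal TTL
        $jitter = 15 * 60;                              // plus or minus 15 minutes
        $ttl    = $base + random_int(-$jitter, $jitter);

        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);
        $mc->set('component:header', render_component('header'), $ttl);

    The same idea appears in the literature under cache stampede (or dog-piling) mitigation, which may be a more productive search term than "random expiry".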

    Read the article

  • IE browser caching and the jQuery Form Plugin

    - by Harfleur
    Like so many lost souls before me, I'm floundering in the snake pit that is Ajax form submission and IE browser caching. I'm trying to write a simple script using the jQuery Form Plugin to Ajaxify WordPress comments. It's working fine in Firefox, Chrome, Safari, et al., but in IE the response text is cached, with the result that Ajax pulls in the wrong comment.

        jQuery(this).ajaxSubmit({
            success: function(data) {
                var response = $("<ol>" + data + "</ol>");
                response.find('.commentlist li:last').hide()
                        .appendTo(jQuery('.commentlist'))
                        .slideDown('slow');
            }
        });

    ajaxSubmit sends the comment to wp-comments-post.php, which inelegantly spits back the entire page as a response. So, despite the fact that it's ugly as toads, I'm sticking the response text in a variable, using :last to isolate the most recent comment, and sliding it down into place. IE, however, returns the cached version of the page, which doesn't include the new comment, so ".commentlist li:last" selects the previous comment, a duplicate of which then uselessly slides down beneath the original. I've tried setting "cache: false" in the ajaxSubmit options, but it has no effect. I've tried setting a url option and tacking on a random number or timestamp, but it winds up attached to the POST that submits the comment to the server rather than the GET that returns the response, and so has no effect. I'm not sure what else to try. Everything works fine in IE if I turn off browser caching, but that's obviously not something I can expect anyone viewing the page to do. Any help will be hugely appreciated. Thanks in advance!

    EDIT WITH A PROGRESS REPORT: A couple of people have suggested using PHP headers to prevent caching, and this does indeed work. The trouble is that wp-comments-post spits back the entire page when a new comment is submitted, and the only way I can see to add headers is to put them in the WordPress post template, which disables caching on all posts at all times, not quite the behavior I'm looking for. Is there a way to set a PHP conditional, "if is_ajax" or something like that, that would keep the headers from being applied during regular page loads but plug them in if the page was called by an Ajax GET?
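
    There is a reasonable hook for exactly that conditional: jQuery marks its ajax calls with an X-Requested-With: XMLHttpRequest request header, so the anti-cache headers can be limited to those requests. A sketch (where this runs, e.g. early in the theme template, is up to the integration):

        $isAjax = strtolower($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '')
                  === 'xmlhttprequest';

        if ($isAjax) {
            // Only ajax-initiated loads get the anti-cache headers,
            // so normal page views remain cacheable.
            header('Cache-Control: no-cache, no-store, must-revalidate');
            header('Pragma: no-cache');
            header('Expires: 0');
        }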

    Read the article

  • How to use data caching with a SQL database and ASP.NET/C#

    - by subash
    I have a large database which is updated every now and then. The application has been developed in ASP.NET and C#. I need to fetch data from the database into a GridView on a button click event. I am planning to use data caching so that I can improve the performance of the application. How can I use the data caching mechanism so that the GridView still shows the updated results from the database?

    Read the article

  • NFS caching on Ubuntu

    - by stream
    We run a bunch of Ubuntu servers (mostly 8.04 LTS) which all mount an NFS share at /nfs. We use the NFS share primarily for two purposes: symlinking config files (such as Apache vhosts), and reading and writing uploaded files. This all works great, except that it makes us fully dependent on the central NFS server (which is a DRBD cluster with heartbeat failover from primary to secondary, but we've still seen issues). What we'd like is to mount the NFS share through some local caching layer which would keep any file that had previously been read available even when /nfs isn't; writes could be disabled for this period. Searching around, it looks like cachefilesd may be an option. Unfortunately, it appears to be packaged only for Ubuntu 9.10 and 10.04. I was also looking for a FUSE-based solution which might fit the bill, but haven't found anything yet. Any suggestions would be greatly appreciated!

    Read the article

  • Bind9 as a caching resolver fails with ID mismatch on localhost but not on an external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places, and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and started using it. Initially I thought it was working OK, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing over to TCP mode.

    So: I'm trying to find out why bind fails when I query for domain info and the result is larger than 512 bytes (causing a truncation and retry over TCP). Specifically, it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be the reverse of the problem most people have when using bind (fails on the external IP, works on localhost).

    If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings and plenty of output:

        $ dig @localhost google.com
        ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com
        [snip lots of output]
        ;; Query time: 39 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:08:34 2013
        ;; MSG SIZE rcvd: 495

    If I do dig @localhost play.google.com (which has a larger response) then I get back something like:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.
        ;; ERROR: ID mismatch: expected ID 3696, got 27130

    This seems to be standard, documented behaviour: when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected, though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works:

        $ dig @192.168.0.2 play.google.com
        ;; Truncated, retrying in TCP mode.
        ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com
        [snip most of the output]
        ;; Query time: 5 msec
        ;; SERVER: 192.168.0.2#53(192.168.0.2)
        ;; WHEN: Thu Oct 17 23:05:55 2013
        ;; MSG SIZE rcvd: 521

    At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard; I've got the following set:

        options {
            directory "/var/cache/bind";
            allow-query { 192.168/16; 127.0.0.1; };
            forwarders { 8.8.8.8; 8.8.4.4; };
            dnssec-validation auto;
            edns-udp-size 4096;
            allow-transfer { any; };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    And my /etc/resolv.conf is just:

        nameserver 127.0.0.1
        search .local

    The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard-looking result. To be honest, if there were a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096, and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely, since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works, with no truncation/TCP warning and a full result. I've also tried changing the servers used in the forwarders (e.g. to OpenDNS), but that makes no difference.
    There's one last data point: if I repetitively do dig @localhost play.google.com I don't always get an ID mismatch; sometimes I get a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.
        ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;play.google.com.    IN    A

        ;; Query time: 4 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:20:13 2013
        ;; MSG SIZE rcvd: 33

    Any insights or things to try would be much appreciated.

    Read the article

  • Chrome browser caching

    - by Kyle B.
    I do a lot of development on my local machine and would like to start using Chrome. However, I cannot seem to do a hard refresh (Ctrl+F5) or find any other key combination to get my browser to forcibly refresh all content at http://localhost. I change projects frequently in IIS, and this presents a problem because I see stylesheet and image data from my previous project, with no way to get the page to reload without forcibly dumping all cache data from the settings menu. Is there another key combination I am missing, or is there a place where I can turn off caching on a site-by-site basis? I prefer not to have to clear out my temporary files in the browser settings as I switch projects frequently. Thanks, Kyle

    Read the article

  • How can I exceed the 60% memory limit of IIS7?

    - by evilknot
    Pardon if this is more Stack Overflow vs. Server Fault; it seems to be on the border. We have an application that caches a large amount of product data for an e-commerce application using ASP.NET caching. This is a dictionary object with 65K elements, and our calculations put the object's size at ~10GB.

    Problem: the amount of memory the object consumes seems to be far in excess of our 10GB calculation. Biggest concern: we can't seem to use over 60% of the 32GB in the server.

    What we've tried so far, in machine.config/system.web:

        <processModel autoConfig="true" memoryLimit="80" />

    And in web.config/system.web/caching/cache:

        <cache privateBytesLimit="20000000000" percentagePhysicalMemoryUsedLimit="90" />

    (privateBytesLimit was also tried at 0, the default, of course.)

    Environment: Windows 2008 R2 x64, 32GB RAM, IIS7. Nothing seems to allow us to exceed the 60% value. See attached screenshot of Task Manager.

    Read the article

  • Caching of the PATH environment variable on Windows?

    - by jwir3
    I'm assisting one of our testers in troubleshooting a configuration problem on a Windows XP SP3 system. Our application uses an environment variable, called APP_HOME, to refer to the directory where our application is installed. When the application is installed, we set up the following environment variables:

        APP_HOME = C:\application\
        PATH = %PATH%;%APP_HOME%bin

    Now, the problem comes in because she's working with multiple versions of the same application. So, in order to switch between versions 7.0 and 8.1, for example, she might use:

        APP_HOME = C:\application_7.0\    (for 7.0)

    and then change it to:

        APP_HOME = C:\application_8.1\    (for 8.1)

    The problem is that once this change is made, the PATH environment variable is apparently still looking at the old expansion of the APP_HOME variable. So, for example, after she has changed APP_HOME, PATH still refers to the 7.0 bin directory. Any thoughts on why this might be happening? It looks to me like the PATH variable is caching the expansion of the APP_HOME environment variable. Is there any way to turn this behavior off?

    Read the article

  • Caching without file extensions

    - by Sigurs
    I'm trying to use Varnish to show non-logged-in users a cached version of my website. I'm able to detect perfectly whether the user is logged in or out, but I can't cache pages without extensions. There is no file extension because nginx rewrites the URL to a PHP script (so caching on .php does not work). For example, I'd like Varnish to cache:

        example.com
        example.com/forum/
        example.com/contact/

    I have tried:

        if (req.request == "GET" && req.url ~ "^/") { return(lookup); }
        if (req.request == "GET" && req.url ~ "") { return(lookup); }
        if (req.request == "GET" && req.url ~ "/") { return(lookup); }

    but nothing seems to work... any help?

    Read the article

  • How can I affect DNS caching in a PHP/Memcache application?

    - by Niro
    On a very highly loaded Ubuntu/PHP web server, I found that this PHP line:

        $memcache->connect("int-aws_ec2.memcached.myapp.net", 11211);

    sometimes takes ~5 seconds. Replacing the URL with the IP address decreases the server load from ~20 to 0. My question is: where are the settings that affect the DNS caching for this? Are they at the server level or in the memcache library, and how can I change them?

    Additional info: Ubuntu 10.04 Lucid; PHP 5.3.2-1ubuntu4.10; Apache/2.2.14 (Ubuntu); Amazon EC2.

    Even more info, per Celada's comment: the DNS handling for the memcache server is done by Scalr (the platform I use to manage the cloud resources). They have a client located on the instances and their own DNS servers.

        /etc/nsswitch.conf:  hosts: files dns
        /etc/resolv.conf:
            nameserver 172.16.0.23
            domain ec2.internal
            search ec2.internal

    The domain is not in hosts.conf. To check whether I run nscd I used /etc/init.d/nscd stop and received 'no such file', so I guess I don't run nscd. Thanks!
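
    One way to take the resolver out of the hot path, regardless of where the slow lookups come from, is to resolve once and cache the address per server. A sketch using APCu as that per-server cache (assumes the APCu extension is installed; the hostname is the one from the question):

        $host = 'int-aws_ec2.memcached.myapp.net';

        $ip = apcu_fetch('memcache_ip');
        if ($ip === false) {
            $ip = gethostbyname($host);          // one DNS lookup...
            apcu_store('memcache_ip', $ip, 300); // ...reused for 5 minutes
        }

        $memcache = new Memcache();
        $memcache->connect($ip, 11211);

    This trades the ~5-second stalls for at most one lookup per TTL window, at the cost of up to 5 minutes of staleness if the memcached server's address changes.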

    Read the article

  • http proxy caching headers

    - by David Hagan
    I have a service for which I'm about to upgrade the authentication. However, I'm trying to ensure that I make the right decision about where the encryption algorithms run. I currently have two options: option 1) the authentication module is deployed to the client as a JavaScript library over HTTPS and executes client-side, so that the client can POST back an encrypted string; option 2) the authentication module is kept server-side, so that the client need only POST back an unencrypted string. I know that many HTTP proxies cache/log the query string (and therefore any query parameters), but does anyone know of any HTTP proxies that cache the headers as well? If the headers are being cached, then I'll clearly want to encrypt the password inside the SSL encryption, because to my understanding the headers of an HTTPS request may not always be encrypted (depending on the capabilities of the browser, etc.). Can anyone shed any light on the caching of headers by HTTP proxies? Do you have one that does, or know of one that does?

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing that the cache directory is filling up very quickly, with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync causes the p4 proxy cache to grab these users' sandboxes when I'll never need them. I would create a symbolic link from the sandbox directory to /dev/null, but then I wouldn't be caching my own sandbox, which I am interested in. Is there any way to tell the Perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it"?

    Read the article

  • Rails' FileStore with Linux disk caching or a RAM disk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (the Rails file-store cache). I was thinking about changing to the memcached store, but a short test shows it isn't a big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there would be around 40-45GB of RAM left over which isn't needed by the application and which could be used by the Linux disk cache for this Rails file cache store. The disk is a RAID 10 system with almost 120MB/s of disk performance. How can I tell Linux to use the free RAM more deliberately and not to be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I get a performance advantage from putting the file store's root directory on a RAM disk? (Losing the cache during a reboot wouldn't be a problem.)

    Read the article
