Search Results

Search found 7913 results on 317 pages for 'resource caching'.

Page 23/317

  • IE browser caching and the jQuery Form Plugin

    - by Harfleur
    Like so many lost souls before me, I'm floundering in the snake pit that is Ajax form submission and IE browser caching. I'm trying to write a simple script using the jQuery Form Plugin to Ajaxify WordPress comments. It works fine in Firefox, Chrome, Safari, et al., but in IE the response text is cached, with the result that Ajax pulls in the wrong comment.

        jQuery(this).ajaxSubmit({
            success: function(data) {
                var response = $("<ol>" + data + "</ol>");
                response.find('.commentlist li:last').hide().appendTo(jQuery('.commentlist')).slideDown('slow');
            }
        });

    ajaxSubmit sends the comment to wp-comments-post.php, which inelegantly spits back the entire page as a response. So, despite the fact that it's ugly as toads, I'm sticking the response text in a variable, using :last to isolate the most recent comment, and sliding it down into place. IE, however, returns the cached version of the page, which doesn't include the new comment, so ".commentlist li:last" selects the previous comment, a duplicate of which then uselessly slides down beneath the original. I've tried setting "cache: false" in the ajaxSubmit options, but it has no effect. I've tried setting a url option and tacking on a random number or timestamp, but it winds up attached to the POST that submits the comment to the server rather than the GET that returns the response, so it also has no effect. I'm not sure what else to try. Everything works fine in IE if I turn off browser caching, but that's obviously not something I can expect anyone viewing the page to do. Any help will be hugely appreciated. Thanks in advance!

    EDIT WITH A PROGRESS REPORT: A couple of people have suggested using PHP headers to prevent caching, and this does indeed work. The trouble is that wp-comments-post spits back the entire page when a new comment is submitted, and the only way I can see to add headers is to put them in the WordPress post template, which disables caching on all posts at all times--not quite the behavior I'm looking for. Is there a way to set a PHP conditional--"if is_ajax" or something like that--that would keep the headers from being applied during regular page loads, but plug them in if the page was called by an Ajax GET?
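
    One common approach (a sketch, not part of the original question; the helper name is_ajax is made up) is to key the headers off the X-Requested-With header that jQuery adds to its Ajax requests, so caching is only disabled for Ajax calls:

        <?php
        // Hypothetical helper: jQuery sends X-Requested-With: XMLHttpRequest
        // with its Ajax requests, so this is true only for Ajax-initiated loads.
        function is_ajax() {
            return isset($_SERVER['HTTP_X_REQUESTED_WITH'])
                && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest';
        }

        if (is_ajax()) {
            // Only Ajax responses get the no-cache headers; normal page loads
            // keep whatever caching behaviour the template already has.
            header('Cache-Control: no-cache, no-store, must-revalidate');
            header('Pragma: no-cache');
            header('Expires: 0');
        }
        ?>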

    Read the article

  • How to use data caching with a SQL database in ASP.NET and C#

    - by subash
    I have a large database which is updated every now and then. The application has been developed in ASP.NET and C#. I need to fetch data from the database into a GridView on a button click event. I am planning to use data caching to improve the performance of the application. How can I use the data caching mechanism and still see the updated results from the database in the GridView?
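
    A minimal sketch of one way to approach this (not from the original question; the cache key, connection-string name, and table are placeholders): cache the DataTable with a short absolute expiration, or remove the cache entry whenever the data is written, then bind the GridView from the cached copy.

        // Requires System.Data, System.Data.SqlClient, System.Web.Caching,
        // System.Configuration; assumes this lives in a Page code-behind.
        private DataTable GetProducts()
        {
            var table = Cache["ProductsCacheKey"] as DataTable;
            if (table == null)
            {
                table = new DataTable();
                using (var conn = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString))
                using (var adapter = new SqlDataAdapter("SELECT * FROM Products", conn))
                {
                    adapter.Fill(table);
                }
                // The cached copy lives at most five minutes, so updates show up
                // after the entry expires (or call Cache.Remove on writes).
                Cache.Insert("ProductsCacheKey", table, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            return table;
        }

        protected void ShowButton_Click(object sender, EventArgs e)
        {
            GridView1.DataSource = GetProducts();
            GridView1.DataBind();
        }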

    Read the article

  • Strange VS2005 compile errors: unable to copy resource file

    - by Velika
    I am getting the following error in a very simple class library:

        Error 1: Unable to copy file "obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources" to
        "obj\Debug\SMIT.SysAdmin.BusinessLayer.SMIT.SysAdmin.BusinessLayer.Resources.resources".
        Could not find file 'obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources'.
        (Project: SMIT.SysAdmin.BusinessLayer)

    Going to the project's Properties > Resources tab, I see that I have defined no resources. Still, I tried to delete the resource file and recreate it from the Resources tab. When I recompile, I still get the same error. Any suggestions of things to try?

    Read the article

  • Building URL or link in REST ASP.NET MVC

    - by AWC
    I want to build a URL at runtime when a resource is rendered to either XML or JSON. I can do this easily when the view is HTML and is rendering only parts of the resource, but when I render a resource that contains a link to another resource, I want to dynamically generate the correct URL according to the host (site) and the resource-specific URI part.

        <components>
          <component id="1234" name="component A" version="1.0">
            <link rel="/component" uri="http://localhost:8080/component/1234" />
          </component>
        </components>

    How do I make sure the 'uri' value is correct?
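
    One possibility (a sketch, not from the original question; the Component controller, Get action, and route values are hypothetical) is to let UrlHelper build the absolute URL from the current request, so the host and port always match the site serving the response:

        // Inside a controller action or view in ASP.NET MVC.
        // Passing a protocol to Url.Action forces an absolute URL that
        // includes the scheme, host, and port of the current request.
        string uri = Url.Action("Get", "Component",
                                new { id = component.Id },
                                Request.Url.Scheme);
        // e.g. "http://localhost:8080/Component/Get/1234", depending on routes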

    Read the article

  • understanding silverlight resource file format

    - by Quincy
    I'm trying to understand the format of Silverlight resource files. There are 4 bytes of data that come after the PAD; I'd like to know what these values are and how they are generated. Here is the hex dump of a .g.resources file. Here is what I know: there is 0xbeefcace at the beginning, then the dependencies, then padding. After that is the great unknown (but I'd really like to know). After 4 null bytes come the file name and size of the resource, and after that is the content of the said file. I'm not that familiar with .NET and Silverlight resource management. Would someone please tell me what the mystical 4 bytes are, or point me to a specification doc or something? Any help would be appreciated!
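
    As an aside (a sketch, not from the original question; the file path is a placeholder): a .g.resources file is an ordinary .NET binary resource stream, so its entries can be enumerated with System.Resources.ResourceReader instead of decoding the header by hand.

        using System;
        using System.Collections;
        using System.Resources;

        class DumpResources
        {
            static void Main()
            {
                // Hypothetical path to the extracted .g.resources stream.
                using (var reader = new ResourceReader("MyApp.g.resources"))
                {
                    foreach (DictionaryEntry entry in reader)
                    {
                        Console.WriteLine("{0}: {1}", entry.Key, entry.Value);
                    }
                }
            }
        }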

    Read the article

  • how to remove the mapping resource property from hibernate.cfg file

    - by Mrityunjay
    Hi, I am currently working on a project with many entity/POJO files. Currently I am using a simple hibernate.cfg.xml to add all the mapping files to the configuration, like so:

        <mapping resource="xml/ClassRoom.hbm.xml"/>
        <mapping resource="xml/Teacher.hbm.xml"/>
        <mapping resource="xml/Student.hbm.xml"/>

    I have a huge number of mapping files, which makes my hibernate.cfg.xml look a bit messy. Is there any way to avoid adding each mapping to hibernate.cfg.xml, or some other way to achieve the same result? Please help.
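
    One possibility (a sketch, not from the original question; it assumes Hibernate 3.x and that every *.hbm.xml file lives under a single xml/ directory) is to register the mappings programmatically instead of listing them in hibernate.cfg.xml:

        import java.io.File;

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        public class HibernateUtil {
            public static SessionFactory buildSessionFactory() {
                // configure() still loads hibernate.cfg.xml (connection settings
                // etc.), but the <mapping> entries can be dropped because
                // addDirectory() picks up every .hbm.xml file in the directory.
                Configuration cfg = new Configuration().configure();
                cfg.addDirectory(new File("xml"));
                return cfg.buildSessionFactory();
            }
        }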

    Read the article

  • best way to add route under resource in Laravel 4

    - by passingby
    I would like to know if there is a better way to add an additional route alongside the defaults of a resource in Laravel 4. I have the code below, which works fine; it just seems longer than it needs to be:

        <?php
        Route::group(array('before' => 'auth'), function() {
            # API
            Route::group(array('prefix' => 'api'), function() {
                Route::resource('projects', 'ProjectsController');
                Route::resource('projects.groups', 'GroupsController');
                Route::post('/projects/{projects}/groups/{groups}/reorder', 'GroupsController@reorder');
            });
        });

    For comparison, in Rails it would be:

        Rails.application.routes.draw do
          # API
          namespace :api, defaults: { format: 'json' } do
            scope module: :v1 do
              resources :projects do
                resources :groups do
                  member do
                    post :reorder
                  end
                end
              end
            end
          end
        end

    Read the article

  • Error safe/correcting resource identifier

    - by Martin
    The receiver is my website, the sender is the same, but the medium is noisy: a user. The user will read an alphanumeric code of length 6 and later input the same code to identify a resource. A good use for an error-correcting code, I thought, and rather than do the research I thought I'd just put the question out there. Or I might be going about it the wrong way, since the situation is rather like sending a perfect dictionary along with every transmission. The requirements on the code are simple: 6 alphanumeric characters (to start with, until I run out, anyway); if the user gets it wrong, I should still be able to identify the right resource (no resource is preferable to the wrong one); and it should be easy to code, or there should be free libraries for .NET. Any suggestions?
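
    A minimal sketch of one direction (not from the original question): since "no resource is preferable to the wrong one", error detection may be enough, e.g. spending the sixth character as a checksum over the first five. A plain mod-36 check character catches any single-character substitution (though not transpositions):

        using System;
        using System.Linq;

        static class CodeCheck
        {
            const string Alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

            // Appends a check character to a 5-character body drawn from Alphabet.
            public static string AddCheckChar(string body)
            {
                body = body.ToUpperInvariant();
                int sum = body.Sum(c => Alphabet.IndexOf(c));
                return body + Alphabet[sum % Alphabet.Length];
            }

            // True only if the 6-character code is internally consistent.
            public static bool IsValid(string code)
            {
                return code.Length == 6 &&
                       AddCheckChar(code.Substring(0, 5)) == code.ToUpperInvariant();
            }
        }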

    Read the article

  • Download and replace Android resource files

    - by Casebash
    My application will have some customisation for each company that uses it. Up until now, I have been loading images and strings from resource files. The idea is that the default resources will be distributed with the application and company specific resources will be loaded from our server after they click on a link from an email to launch the initialisation intent. Does anyone know how to replace resource files? I would really like to keep using resource files to avoid rewriting a lot of code/XML. I would distribute the application from our own server, rather than through the app store, so that we could have one version per company, but unfortunately this will give quite nasty security warnings that would concern our customers.

    Read the article

  • Version resource in DLL not visible with right-click

    - by abunetta
    I'm trying to do something which is very easy to do in the regular MSVC but not easily supported in VC++ Express: there is no resource editor in VC++ Express, so I added a file named version.rc to my DLL project. The file has the content below, which is compiled by the resource compiler and added to the final DLL. The resource is viewable in the DLL using ResHacker, but not when right-clicking the DLL in Windows Explorer. What is missing from my RC file to make it appear when right-clicking?

        VS_VERSION_INFO VERSIONINFO
         FILEVERSION 1,0,0,1
         PRODUCTVERSION 1,0,0,1
         FILEFLAGSMASK 0x17L
        #ifdef _DEBUG
         FILEFLAGS 0x1L
        #else
         FILEFLAGS 0x0L
        #endif
         FILEOS 0x4L
         FILETYPE 0x1L
         FILESUBTYPE 0x0L
        BEGIN
            BLOCK "StringFileInfo"
            BEGIN
                BLOCK "040904b0"
                BEGIN
                    VALUE "FileDescription", "something Application"
                    VALUE "FileVersion", "1, 0, 0, 1"
                    VALUE "InternalName", "something"
                    VALUE "LegalCopyright", "Copyright (C) 2008 Somebody"
                    VALUE "OriginalFilename", "something.exe"
                    VALUE "ProductName", "something Application"
                    VALUE "ProductVersion", "1, 0, 0, 1"
                END
            END
            BLOCK "VarFileInfo"
            BEGIN
                VALUE "Translation", 0x409, 1200
            END
        END

    Read the article

  • using context resource in gwt 2 hosted mode

    - by rafael
    Hello all, I am moving a web app from GWT 1.5 to GWT 2.0. I am trying to connect to a database resource I have defined in my context.xml file. In GWT 1.5 I had set up ROOT.xml in tomcat/conf/gwt/localhost. I have no idea where to set up the resource in GWT 2.0. I tried placing my context.xml file in war/META-INF with no luck. Does anyone have an idea where to place the context.xml file to be able to use a JNDI database resource in GWT 2.0? Thanks in advance.

    Read the article

  • problem loading resource from class library

    - by mishal153
    I have a class library (mylibrary) which has a resource called "close.png". I used Red Gate's Reflector to confirm that the resource is actually present in the DLL. Now I use mylibrary.dll in a project where I attempt to load this "close.png" resource like this:

        BitmapImage crossImage = new BitmapImage();
        crossImage.BeginInit();
        crossImage.UriSource = new Uri(@"/mylibrary;component/Resources/close.png", UriKind.RelativeOrAbsolute);
        crossImage.EndInit();

    The BitmapImage crossImage is then used like this:

        Button closeButton = new Button()
        {
            Content = new System.Windows.Controls.Image() { Source = crossImage },
            MaxWidth = 20,
            MaxHeight = 20
        };

    No exceptions are thrown, but the button shows no image. I also see some exception info if I inspect the button's Content in the debugger.
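
    For comparison, a sketch of the fully qualified pack URI form (an assumption about the likely fix, not from the original question; it also assumes close.png has its Build Action set to Resource in mylibrary):

        // The absolute pack URI names the application authority explicitly,
        // which removes any ambiguity about how the relative form is resolved.
        var crossImage = new BitmapImage();
        crossImage.BeginInit();
        crossImage.UriSource = new Uri(
            "pack://application:,,,/mylibrary;component/Resources/close.png",
            UriKind.Absolute);
        crossImage.EndInit();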

    Read the article

  • Resource File Links

    - by EzaBlade
    When you create a Silverlight Business Application you get a Silverlight application and a Web application. In the Web/Resources folder of the Silverlight app there are links to the files in the Resources folder of the Web app. These links are exactly like the files they link to in that they are hierarchical, with the Resource.Designer.cs file shown as a code-behind file for the Resource.resx file. When I try to link to a resource file in this way I only get the .resx file unless I link to the .Designer.cs file separately; however, in that case the Designer.cs file is shown as a standard code file and is not related to the .resx file. Does anyone know how to do this linking correctly?

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I worked on an issue with the list view threshold. The problem is that when the user navigates to a view of the document library, it displays the error message "list view threshold is exceeded", even though the view contains no data. The list view threshold is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result requires reading more than 5000 items in the database. To fix the issue, you need to create an index for the column you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold for the web application from 5000 to 2000, so the problem can be shown without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the error message shown below, although the view does not return any data.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as this field is used as the filter for the view.
    7. After the index is created, the approved view can be accessed; as you can see, it does not return any data.

    Notes: List view threshold: specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References: SharePoint lists V: Techniques for managing large lists; Manage large SharePoint lists for better performance; http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx

    Read the article

  • NFS caching on Ubuntu

    - by stream
    We run a bunch of Ubuntu servers (mostly 8.04 LTS) which all mount an NFS share at /nfs. We use the NFS share primarily for two purposes: symlinking config files (such as Apache vhosts), and reading and writing uploaded files. This all works great, except that it makes us fully dependent on the central NFS server (which is a DRBD cluster with heartbeat failover from primary to secondary, but we've still seen issues). What we'd like is to mount the NFS share through some local caching layer which would keep any file that had previously been read available even if /nfs isn't; writes could be disabled for this period. Searching around, it looks like cachefilesd may be an option. Unfortunately, it appears to be packaged only for Ubuntu 9.10 and 10.04. I was also looking for a FUSE-based solution which might fit the bill, but haven't found anything yet. Any suggestions would be greatly appreciated!
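
    For reference, a minimal sketch of what the cachefilesd route looks like on a release that does package it (the cache directory and server name below are placeholders, not from the original question):

        # /etc/cachefilesd.conf -- where the persistent cache lives
        dir /var/cache/fscache
        tag mycache

        # /etc/fstab -- the 'fsc' mount option opts this NFS mount into FS-Cache
        nfsserver:/export  /nfs  nfs  defaults,fsc  0  0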

    Read the article

  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver.

    I installed bind, configured it, and started using it. Initially I thought it was working OK, but then I found some sites weren't being resolved. I've pinned it down to the size of the result and bind failing over to TCP mode. So: I'm trying to find out why bind is failing when I query for domain info and the result is 512 bytes (causing a truncation and retry over TCP). Specifically, it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards from the problem most people have with bind (fails on the external IP, works on localhost).

    If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings, and plenty of output.

        $ dig @localhost google.com
        ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com
        [snip lots of output]
        ;; Query time: 39 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:08:34 2013
        ;; MSG SIZE rcvd: 495

    If I do dig @localhost play.google.com (which has a larger response) then I get back something like:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.
        ;; ERROR: ID mismatch: expected ID 3696, got 27130

    This seems to be standard, documented behaviour: when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected, though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works:

        $ dig @192.168.0.2 play.google.com
        ;; Truncated, retrying in TCP mode.
        ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com
        [snip most of the output]
        ;; Query time: 5 msec
        ;; SERVER: 192.168.0.2#53(192.168.0.2)
        ;; WHEN: Thu Oct 17 23:05:55 2013
        ;; MSG SIZE rcvd: 521

    At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard; I've got the following set:

        options {
            directory "/var/cache/bind";
            allow-query { 192.168/16; 127.0.0.1; };
            forwarders { 8.8.8.8; 8.8.4.4; };
            dnssec-validation auto;
            edns-udp-size 4096;
            allow-transfer { any; };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    And my /etc/resolv.conf is just:

        nameserver 127.0.0.1
        search .local

    The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard-looking result. To be honest, if there were a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096, and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works: no truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarder (e.g. to OpenDNS), but that makes no difference.

    There's one last data point: if I repeatedly do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.
        ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;play.google.com.    IN    A
        ;; Query time: 4 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:20:13 2013
        ;; MSG SIZE rcvd: 33

    Any insights or things to try would be much appreciated.

    Read the article

  • Chrome browser caching

    - by Kyle B.
    I do a lot of development on my local machine and would like to start using Chrome; however, I cannot seem to do a hard refresh (Ctrl+F5) or find any other key combination to force the browser to reload all content at http://localhost. I change projects frequently in IIS, and this presents a problem because I see stylesheet and image data from my previous project, with no way to get the page to reload without forcibly dumping all cached data from the settings menu. Is there another key combination I am missing, or is there a place I can turn off caching on a site-by-site basis? I prefer not to have to clear out my temporary files in the browser settings as I switch projects frequently. Thanks, Kyle

    Read the article

  • How can I exceed the 60% Memory Limit of IIS7

    - by evilknot
    Pardon if this is more Stack Overflow vs. Server Fault; it seems to be on the border. We have an application that caches a large amount of product data for an e-commerce application using ASP.NET caching. This is a dictionary object with 65K elements, and our calculations put the object's size at ~10GB. Problem: the amount of memory the object consumes seems to be far in excess of our 10GB calculation. Biggest concern: we can't seem to use over 60% of the 32GB in the server. What we've tried so far: in machine.config, under system.web:

        <processModel autoConfig="true" memoryLimit="80" />

    In web.config, under system.web/caching/cache:

        <cache privateBytesLimit="20000000000" percentagePhysicalMemoryUsedLimit="90" />

    (and privateBytesLimit="0", the default, of course). Environment: Windows 2008 R2 x64, 32GB RAM, IIS7. Nothing seems to allow us to exceed the 60% value. See attached screenshot of Task Manager.

    Read the article

  • Caching of path environment variable on windows?

    - by jwir3
    I'm assisting one of our testers in troubleshooting a configuration problem on a Windows XP SP3 system. Our application uses an environment variable, called APP_HOME, to refer to the directory where our application is installed. When the application is installed, we utilize the following environment variables:

        APP_HOME = C:\application\
        PATH = %PATH%;%APP_HOME%bin

    Now, the problem comes in that she's working with multiple versions of the same application. So, in order to switch between version 7.0 and 8.1, for example, she might use:

        APP_HOME = C:\application_7.0\   (for 7.0)

    and then change it to:

        APP_HOME = C:\application_8.1\   (for 8.1)

    The problem is that once this change is made, the PATH environment variable apparently still is looking at the old expansion of the APP_HOME variable. So, for example, after she has changed APP_HOME, PATH still refers to the 7.0 bin directory. Any thoughts on why this might be happening? It looks to me like the PATH variable is caching the expansion of the APP_HOME environment variable. Is there any way to turn this behavior off?

    Read the article

  • Caching without file extensions

    - by Sigurs
    I'm trying to use Varnish to show non-logged-in users a cached version of my website. I'm able to detect perfectly whether the user is logged in or out, but I can't cache pages without extensions. There is no file extension because nginx is rewriting the URL to a PHP script (so caching .php does not work). For example, I'd like Varnish to cache:

        example.com
        example.com/forum/
        example.com/contact/

    I have tried:

        if (req.request == "GET" && req.url ~ "^/") {
            return(lookup);
        }
        if (req.request == "GET" && req.url ~ "") {
            return(lookup);
        }
        if (req.request == "GET" && req.url ~ "/") {
            return(lookup);
        }

    but nothing seems to work... any help?
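
    For what it's worth, a sketch of the usual shape of this in VCL (an assumption, not the original config; the cookie name logged_in is made up). Whether a URL has a file extension doesn't matter to Varnish; what usually blocks caching is the Cookie header, which Varnish passes to the backend by default:

        sub vcl_recv {
            # Logged-in users go straight to the backend, uncached.
            if (req.http.Cookie ~ "logged_in=") {
                return(pass);
            }
            # Everyone else: drop cookies so the request is cacheable,
            # regardless of whether the URL ends in .php, /, or nothing.
            unset req.http.Cookie;
            return(lookup);
        }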

    Read the article

  • How can I affect DNS caching in a PHP/Memcache application

    - by Niro
    On a very heavily loaded Ubuntu/PHP web server I found that the PHP line:

        $memcache->connect("int-aws_ec2.memcached.myapp.net", 11211);

    sometimes takes ~5 seconds. Replacing the hostname with the IP address decreases the server load from ~20 to 0. My question is: where are the settings that affect DNS caching for this? Are they at the server level or in the memcache library? How can I change them?

    Additional info: Ubuntu 10.04 Lucid; PHP 5.3.2-1ubuntu4.10; Apache/2.2.14 (Ubuntu); Amazon EC2.

    Even more info, per Celada's comment: the DNS handling for the memcache server is done by Scalr (the platform I use to manage the cloud resources). They have a client located on the instances and their own DNS servers.

        /etc/nsswitch.conf:  hosts: files dns
        /etc/resolv.conf:    nameserver 172.16.0.23
                             domain ec2.internal
                             search ec2.internal

    The domain is not in hosts.conf. To check if I run nscd I used /etc/init.d/nscd stop and received 'no such file', so I guess I don't run nscd. Thanks!
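
    A sketch of two common mitigations (assumptions, not from the original question; the IP address below is a placeholder): run a local name-service cache daemon, or pin the memcached hostname in /etc/hosts so no lookup leaves the box.

        # Option 1: cache lookups locally with nscd
        sudo apt-get install nscd

        # Option 2: pin the name locally (placeholder IP)
        echo "10.0.0.12 int-aws_ec2.memcached.myapp.net" | sudo tee -a /etc/hosts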

    Read the article

  • http proxy caching headers

    - by David Hagan
    I have a service for which I'm about to upgrade the authentication. However, I'm trying to ensure that I make the right decision about where the encryption algorithms occur. I currently have two options: option 1) the authentication module is deployed to the client as a javascript library over https and executes client-side, so that the client can POST back an encrypted string. option 2) the authentication module is kept server-side so that the client need only POST back an unencrypted string. I know that many http proxies cache/log the query-string (and therefore any query parameters), but does anyone know of any http proxies that cache the headers as well? If the headers are being cached, then I'll clearly want to encrypt the password inside the SSL encryption, because to my understanding the headers of an HTTPS request may not always be encrypted (depending on the capabilities of the browser etcetera). Can anyone shed any light on the caching of headers by http proxies? Do you have one that does, or know of one that does?

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing the cache directory is filling up very quickly -- with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync is causing the p4 proxy cache to grab these user's sandboxes when I'll never need them. I would create a symbolic link for the sandbox directory to /dev/null but then I wouldn't be caching my sandbox, which I am interested in. Is there any way to tell the perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it?"

    Read the article

  • Rail's FileStore with Linux Disk Caching or RAMdisk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (Rails file-system cache). I was thinking about changing to the memcached store, but a short test shows it doesn't make a big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there would be around 40-45GB of RAM left which isn't needed by the application and which could be used by the Linux disk cache for this Rails file cache store. The disk is a RAID10 system with almost 120MB/s disk performance. How can I tell Linux to use the free RAM more deliberately and not to be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I get a performance advantage from putting the file store root directory on a ramdisk? (Losing the cache during a reboot wouldn't be a problem.)
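
    A small sketch of the usual knobs (the values are assumptions, not from the original question); note that Linux already uses otherwise-idle RAM for the page cache automatically, so these only nudge its reclaim behaviour:

        # Prefer keeping cached dentries/inodes around longer (default is 100).
        sysctl -w vm.vfs_cache_pressure=50

        # Discourage swapping application memory out in favour of more page cache.
        sysctl -w vm.swappiness=10

        # To persist across reboots, add to /etc/sysctl.conf:
        #   vm.vfs_cache_pressure = 50
        #   vm.swappiness = 10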

    Read the article

  • How do I turn off caching in IIS7?

    - by jammus
    Hello. I'm developing a classic ASP site under Windows 7 (form a queue, ladies). The problem is that IIS seems to be making heavy use of its cache for both static and dynamic content, which really conflicts with my 'make a small change, alt-tab, hit Ctrl+F5' development style. Changes made to .asp files may take two or three refreshes to show up, whereas changes to .js files can take 20 times as many. How do I go about turning off caching on my development machine? Cheers. (in b4 stop using ASP classic)
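
    A sketch of one per-site approach in web.config (an assumption, not from the original question; it turns off IIS output caching and tells clients not to cache static files such as .js):

        <configuration>
          <system.webServer>
            <!-- Disable IIS user-mode and kernel-mode output caching -->
            <caching enabled="false" enableKernelCache="false" />
            <!-- Send Cache-Control: no-cache for static content -->
            <staticContent>
              <clientCache cacheControlMode="DisableCache" />
            </staticContent>
          </system.webServer>
        </configuration>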

    Read the article
