Search Results

Search found 7217 results on 289 pages for 'jboss cache'.

Page 38/289 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • javax.servlet.ServletException - how could I get to the cause?

    - by Michal
    Hi, I'm getting a very strange error while opening one of the pages in my web app. The application is built on Seam 2.2 and uses JSF (RichFaces) as the view technology. I run it on Tomcat 6. The error doesn't occur on my machine (Mac OS X), but it does on my client's dev machines (Windows) and on the server (Linux Debian). I'm sure I'm running the same version on each system, and I have tried connecting to the same database. In the logs everything looks fine - each JSF phase executes normally, and after the last one the request starts processing for the Seam debug page. This is the stack trace I see on the debug page (nothing is logged):

        Exception during request processing:
        Caused by javax.servlet.ServletException with message: "Servlet execution threw an exception"
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:313)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83)
        org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178)
        org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290)
        org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388)
        org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515)
        org.jboss.seam.web.Ajax4jsfFilter.doFilter(Ajax4jsfFilter.java:56)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)
        org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
        org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        pl.mgibowski.alterium.util.LoggingFilter.doFilter(LoggingFilter.java:18)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:465)
        org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
        org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        java.lang.Thread.run(Thread.java:619)

    An exception without any cause. I was trying to catch the exception with my custom filter (LoggingFilter.java, which you can see in the stack trace), using this code:

        try {
            filterChain.doFilter(servletRequest, servletResponse);
        } catch (Throwable e) {
            e.printStackTrace();
            System.out.println("Stack trace:");
            System.out.println(e.getStackTrace());
            System.out.println("Cause:");
            System.out.println(e.getCause());
        }

    But it doesn't catch anything. Line 18 from the stack trace is this one: filterChain.doFilter(servletRequest, servletResponse); - nothing gets caught by the try block. Does anybody have any ideas about how I could get closer to the real cause?
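
    A hedged note, not from the original post: javax.servlet.ServletException predates Java's standard exception chaining, so the wrapped exception is often stored in its legacy "root cause" field rather than as the cause - which would explain why e.getCause() prints null. A small unwrapping helper (hypothetical class name), a minimal sketch for use in the logging filter:

        import javax.servlet.ServletException;

        public final class Exceptions {
            // walk both getCause() and ServletException's legacy getRootCause()
            // until the innermost throwable is reached
            public static Throwable unwrap(Throwable t) {
                while (true) {
                    Throwable next = t.getCause();
                    if (next == null && t instanceof ServletException) {
                        next = ((ServletException) t).getRootCause();
                    }
                    if (next == null || next == t) {
                        return t;
                    }
                    t = next;
                }
            }
        }

    As for why the try block catches nothing: in the chain above, Seam's ExceptionFilter sits inside the custom LoggingFilter, so if Seam handles the exception and renders the debug page itself, nothing ever propagates out to the outer filter; raising the log level for org.jboss.seam.exception might surface the original error.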

    Read the article

  • Proxy cache zone static is unknown

    - by AnApprentice
    I'm working to set up a reverse proxy cache. In nginx.conf I added the following:

        location /blog {
            # Reverse proxy - cache the blog pages from Heroku
            proxy_cache STATIC;
            proxy_cache_valid 200 10m;
            proxy_cache_valid 404 1m;
            proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
            rewrite ^/blog$ /;
            rewrite ^/blog/(.*)$ /$1;
            proxy_pass http://whispering-retreat-1.herokuapp.com;
            break;
        }

    However, when trying to restart nginx I received the following error:

        $ /opt/nginx/sbin/nginx -s stop
        nginx: [emerg] "proxy_cache" zone "STATIC" is unknown in /opt/nginx/conf/nginx.conf:182

    Any idea what the problem is with using STATIC? I just want to cache the blog pages so every request doesn't hit Heroku, which is horribly slow. Thanks
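
    A likely fix, sketched as an assumption (the path and sizes here are hypothetical): proxy_cache STATIC refers to a shared-memory zone that must first be declared with proxy_cache_path at the http level, e.g.:

        http {
            # declares the cache zone named STATIC that the location block refers to
            proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=STATIC:10m
                             max_size=1g inactive=60m;
            ...
        }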

    Read the article

  • How to delete the history and cache in Opera Mobile (10.1)

    - by Mathias Lin
    I run Opera Mobile 10.1 on Android. My device is rooted. How can I clear the history and cache of the browser via the shell? As su, removing

        /data/data/com.opera.browser/opera/profiles/smartphone/cookies4.dat
        /data/data/com.opera.browser/opera/profiles/smartphone/cache
        /data/data/com.opera.browser/opera/profiles/smartphone/cacheO

    and then running /system/xbin/busybox killall -9 com.opera.browser doesn't seem to do the job; afterwards, bookmarks etc. are still there. In Opera Mini I found it easy to just delete

        /data/data/com.opera.mini.android/cache/webviewCache
        /data/data/com.opera.mini.android/databases

    but unfortunately, Opera Mini in its current version has a bug and doesn't work on most devices.
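
    One hedged idea, assuming Opera flushes its profile back to disk when it exits: kill the process first and delete afterwards, so the files are not recreated on shutdown. History may also live in other .dat files in the same profile directory, not only in the ones listed above.

        # stop the browser first so it cannot rewrite its in-memory state to disk
        /system/xbin/busybox killall -9 com.opera.browser
        cd /data/data/com.opera.browser/opera/profiles/smartphone
        rm -f cookies4.dat
        rm -rf cache cacheO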

    Read the article

  • local cache for NAS or network folder

    - by HugoRune
    I am planning to build a network-attached storage (NAS) server. Is there a way to cache frequently accessed files from the remote storage automatically on the local PC? (I am not looking for a way to sync whole folders like rsync, but rather something that automatically and transparently caches the last-accessed 50 GB of files.) Ideally I am searching for something that caches writes as well as reads, since only one PC will be accessing the server (and one day of lost changes, if the local cache is damaged, would be acceptable). I looked into Windows Offline Files, but as far as I could tell this requires manual interaction to disconnect the server or go into offline mode in order to use the cache. The server would probably be running Linux or FreeNAS; the PC runs Windows XP, but could be upgraded to 7 if required.

    Read the article

  • SAN cache memory upgrade

    - by Scott Lundberg
    We currently have an IBM DS4300 dual-controller Fibre SAN. It is a good box, but getting pretty old. It came with 256 MB of cache per controller. Recently we replaced the batteries in one of the controllers and noticed that the cache is a DDR PC2100 ECC DIMM. Of course, we are thinking about how cheap this RAM is now and whether there is any good reason we can't upgrade it. IBM used to have a "Turbo" upgrade for this box that doubled the cache and added a bunch of software features for about 10K USD. Since that product has been end-of-lifed, I don't think we can get that upgrade, and we don't need the software features (FlashCopy, StorageCopy, etc.). Besides the obvious potential warranty issue, what issues, if any, would we expect to see if attempting to put two 1 GB DIMMs in this unit? Anything else I am missing here? EDIT: Memory label: Samsung CN 0433 PC2100U-25331-A1 M381L3223ETM-CB0 256MB DDR PC2100 CL2.5 ECC

    Read the article

  • APC has no system cache entries

    - by lazzio
    I have two web servers providing PHP websites. One server is Apache + PHP-FPM + APC; the other is Apache with MPM-ITK + APC. On both of these servers APC has no system cache entries, only user cache entries, as the screenshot showed ("APC with only users cache entries"). The APC configuration is:

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           2
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.shm_segments            1
        apc.shm_size                256
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     7200
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                7200
        apc.write_lock              1

    Does anyone know why APC acts like this and how to make it work properly? Thank you for your help!
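
    Two hedged things worth checking, not confirmed from the thread: apc.max_file_size is listed as 2 - if that really means 2 bytes rather than 2M, every script would be too large for the opcode cache, which matches the symptom exactly. And since apc.enable_cli is 0, any check run from the command line would always show an empty system cache, so a quick look has to go through the web server:

        <?php
        // minimal diagnostic page - serve it through Apache, not the CLI
        var_dump(apc_cache_info());       // system (opcode) cache
        var_dump(apc_cache_info('user')); // user cache, for comparison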

    Read the article

  • Cherokee high virtual memory usage even after disabling I/O Cache

    - by nidheeshdas
    I have Ubuntu 10.04 LTS 64-bit running in an OpenVZ container, and Cherokee 1.0.8 compiled from source. The virtual memory usage of cherokee-worker is around 430 MB even after disabling the I/O cache (Advanced > I/O Cache > not enabled). Is this issue particular to OpenVZ? Many people have reported successfully reducing virtual memory usage by disabling the I/O cache. htop output: http://imgur.com/z5JEL.jpg Thanks in advance. nidheeshdas

    Read the article

  • No improvement in speed when using Ehcache with Hibernate

    - by paddydub
    I'm getting no improvement in speed when using Ehcache with Hibernate. Here are the results I get when I run the test below. The test reads 80 Stop objects and then the same 80 Stop objects again using the cache. On the second read it is hitting the cache, but there is no improvement in speed. Any ideas on what I'm doing wrong?

    Speed test:

        First read:  reading stops 1-80 : 288ms
        Second read: reading stops 1-80 : 275ms

    Cache info:

        elementsInMemory: 79
        elementsInMemoryStore: 79
        elementsInDiskStore: 0

    JunitCacheTest:

        public class JunitCacheTest extends TestCase {

            static Cache stopCache;

            public void testCache() {
                ApplicationContext context = new ClassPathXmlApplicationContext("beans-hibernate.xml");
                StopDao stopDao = (StopDao) context.getBean("stopDao");
                CacheManager manager = new CacheManager();
                stopCache = (Cache) manager.getCache("ie.dataStructure.Stop.Stop");

                // First read
                for (int i = 1; i < 80; i++) {
                    Stop toStop = stopDao.findById(i);
                }
                // Second read
                for (int i = 1; i < 80; i++) {
                    Stop toStop = stopDao.findById(i);
                }

                System.out.println("elementsInMemory " + stopCache.getSize());
                System.out.println("elementsInMemoryStore " + stopCache.getMemoryStoreSize());
                System.out.println("elementsInDiskStore " + stopCache.getDiskStoreSize());
            }

            public static Cache getStopCache() {
                return stopCache;
            }
        }

    HibernateStopDao:

        @Repository("stopDao")
        public class HibernateStopDao implements StopDao {

            private SessionFactory sessionFactory;

            @Transactional(readOnly = true)
            public Stop findById(int stopId) {
                Cache stopCache = JunitCacheTest.getStopCache();
                Element cacheResult = stopCache.get(stopId);
                if (cacheResult != null) {
                    return (Stop) cacheResult.getValue();
                } else {
                    Stop result = (Stop) sessionFactory.getCurrentSession().get(Stop.class, stopId);
                    Element element = new Element(result.getStopID(), result);
                    stopCache.put(element);
                    return result;
                }
            }
        }

    ehcache.xml:

        <cache name="ie.dataStructure.Stop.Stop"
               maxElementsInMemory="1000"
               eternal="false"
               timeToIdleSeconds="5200"
               timeToLiveSeconds="5200"
               overflowToDisk="true">
        </cache>

    stop.hbm.xml:

        <class name="ie.dataStructure.Stop.Stop" table="stops" catalog="hibernate3" mutable="false">
            <cache usage="read-only"/>
            <comment></comment>
            <id name="stopID" type="int">
                <column name="STOPID" />
                <generator class="assigned" />
            </id>
            <property name="coordinateID" type="int">
                <column name="COORDINATEID" not-null="true">
                    <comment></comment>
                </column>
            </property>
            <property name="routeID" type="int">
                <column name="ROUTEID" not-null="true">
                    <comment></comment>
                </column>
            </property>
        </class>

    Stop:

        public class Stop implements Comparable<Stop>, Serializable {
            private static final long serialVersionUID = 7823769092342311103L;
            private Integer stopID;
            private int routeID;
            private int coordinateID;
        }
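
    A hedged observation, not from the original thread: the DAO above manages Ehcache by hand instead of letting Hibernate's second-level cache (which the <cache usage="read-only"/> mapping is meant for) handle it, and every findById still opens a transactional session, so most of the ~3.5 ms per read is likely session and Spring proxy overhead rather than database time. The more usual setup, assuming Hibernate 3 with the Ehcache-supplied provider on the classpath, would enable the second-level cache in hibernate.cfg.xml and keep the DAO a plain getCurrentSession().get() call:

        <property name="hibernate.cache.use_second_level_cache">true</property>
        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>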

    Read the article

  • Is there a way to ignore Cache errors in Django?

    - by Josh Smeaton
    I've just set our development Django site to use Redis for a cache backend and it was all working fine. I brought Redis down to see what would happen, and sure enough Django 404s due to cache backend behaviour - either the connection was refused, or various other errors. Is there any way to instruct Django to ignore cache errors and continue processing the normal way? It seems weird that caching is a performance optimization, but can bring down an entire site if it fails. I tried to write a wrapper around the backend like so:

        class CacheClass(redis_backend.CacheClass):
            """ Wraps the desired Cache, and falls back to global_settings default on init failure """
            def __init__(self, server, params):
                try:
                    super(CacheClass, self).__init__(server, params)
                except Exception:
                    from django.core import cache as _
                    _.cache = _.get_cache('locmem://')

    But that won't work, since I'm trying to set the cache type in the call that sets the cache type. It's all a very big mess. So, is there any easy way to swallow cache errors? Or to set the default cache backend on failure?
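
    A hedged sketch of one alternative (class name hypothetical; signatures assumed from the Django 1.x backend API, which exposes get/set on the backend class): rather than swapping backends in __init__, wrap each operation so a dead Redis just behaves like a cache miss:

        # minimal failsafe wrapper - on any backend error, get() degrades to a
        # cache miss and set() becomes a no-op
        class FailsafeCacheClass(redis_backend.CacheClass):
            def get(self, key, default=None):
                try:
                    return super(FailsafeCacheClass, self).get(key, default)
                except Exception:
                    return default

            def set(self, key, value, timeout=None):
                try:
                    super(FailsafeCacheClass, self).set(key, value, timeout)
                except Exception:
                    pass

    The connection made in __init__ may still fail, so the constructor might need the same treatment; whether a silent cache outage is acceptable is a separate question.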

    Read the article

  • prototype.js equivalent to jQuery ajaxSettings.cache = true (AddThis plugin)

    - by openstepmedia
    I need help from a prototype.js expert: I'm trying to achieve the following (taken from the addthis forum) and port the solution from jQuery to prototype.js (I'm using Magento). The original post is here: http://www.addthis.com/forum/viewtopic.php?f=3&t=22217 For the getScript() function I can create a custom function to load the remote JS; however, I want to load the JS file via the Prototype Ajax call while avoiding the cache-busting query string that would stop the script from being cached in the browser.

        <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            $("#changeURL").click(function() {
                $(".addthis_button").attr("addthis:url", "http://www.example.com");
                window.addthis.ost = 0;
                window.addthis.ready();
            });
        });

        // prevent jQuery from appending a cache-busting string to the end of the URL
        var cache = jQuery.ajaxSettings.cache;
        jQuery.ajaxSettings.cache = true;
        jQuery.getScript('http://s7.addthis.com/js/250/addthis_widget.js');
        // restore jQuery caching setting
        jQuery.ajaxSettings.cache = cache;
        </script>

        <p id="changeURL">Change URL</p>
        <a class="addthis_button" addthis:url="http://www.google.com"></a>
        <script type="text/javascript" src="http://s7.addthis.com/js/250/addthis_widget.js#username=rahf"></script>
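
    A hedged Prototype-side sketch: as far as I know Prototype's Ajax.Request does not append a cache-busting parameter the way jQuery's getScript() does, but the simplest equivalent that leaves the browser cache in charge is plain DOM script injection, which works the same with or without Prototype loaded (function name hypothetical):

        // inject a script tag; the URL is untouched, so normal HTTP caching
        // applies (no "_=" timestamp parameter is added)
        function getScriptCached(url) {
            var s = document.createElement('script');
            s.type = 'text/javascript';
            s.src = url;
            document.getElementsByTagName('head')[0].appendChild(s);
        }
        getScriptCached('http://s7.addthis.com/js/250/addthis_widget.js');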

    Read the article

  • Why does Hibernate 2nd level cache only cache within a session?

    - by Synesso
    Using a named query in our application, with ehcache as the provider, it seems that the query results are tied to the session. Any attempt to access the value from the cache a second time results in a LazyInitializationException. We have set lazy="true" for the following collection mapping because this object is also used by another part of the system which does not require the reference, and we want to keep it lean.

        <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false">
            <cache usage="read-only"/>
            <id name="code" type="long" column="ad_point_id">
                <generator class="assigned" />
            </id>
            <property name="name" column="ad_point_description" type="string"/>
            <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true">
                <cache usage="read-only"/>
                <key column="ad_point_id" />
                <element type="string" column="synonym_description" />
            </set>
        </class>

        <query name="find.adpoints.by.heading">from ReferenceAdPoint adpoint left outer join fetch adpoint.synonyms where adpoint.adPointField.headingCode = ?</query>

    Here's a snippet from our hibernate.cfg.xml:

        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
        <property name="hibernate.cache.use_query_cache">true</property>

    It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?
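
    A hedged explanation, not from the thread: the query cache does not store entities - it stores identifiers, and on a hit the entities are re-assembled from the second-level cache with lazy collections left uninitialized. The join fetch only applies when the query actually runs against the database, not on a cache hit, so touching synonyms after the session closes throws LazyInitializationException. One workaround is to force initialization while the session is still open; a minimal sketch (getSynonyms() is assumed from the "synonyms" mapping):

        List results = session.getNamedQuery("find.adpoints.by.heading")
                              .setString(0, headingCode)
                              .setCacheable(true)
                              .list();
        for (Object o : results) {
            // load the collection inside the open session
            Hibernate.initialize(((ReferenceAdPoint) o).getSynonyms());
        }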

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: with limited disk space (in my configuration, 50 MB is the upper bound), how should a browser cache its responses? I can think of two approaches. One is to cache each response object whole, one after another; but this is inefficient and introduces external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50 MB) as one contiguous file split into fixed-length slots; incoming response objects are stored as blocks of data the same length as the slots. We can fill slots until the whole file is used up, then some replacement algorithm can swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion is easier to implement and maintain than the first strategy. But when I look inside Firefox's Cache directory, I find it (maybe) uses a different method: a lot of variable-length files reside in that directory, all filled with non-displayable characters. I really want to know what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.
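
    For concreteness, a toy sketch of the second design (hypothetical names; real browser caches are far more elaborate - the Firefox disk cache of this era reportedly mixed strategies, packing small objects into a few fixed-block files and storing large objects as separate files, which would explain the directory contents):

        // fixed-slot store: every object occupies a whole number of slots,
        // wasting up to SLOT-1 bytes in its last slot (internal fragmentation)
        class SlotCache {
            static final int SLOT = 4096;
            final byte[] store = new byte[50 * 1024 * 1024]; // one contiguous region

            int slotsNeeded(int objectSize) {
                return (objectSize + SLOT - 1) / SLOT; // round up to whole slots
            }
        }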

    Read the article

  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately I am stuck at this point now. Running apt-get dist-upgrade:

        root@X100e:/var/cache/apt/archives# apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Calculating upgrade... Done
        The following NEW packages will be installed:
          file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python
          python-minimal python2.6 python2.6-minimal readline-common
        0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/5,204kB of archives.
        After this operation, 19.7MB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        (Reading database ... 6108 files and directories currently installed.)
        Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
        new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory
        which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
        please find the package shipping files in /usr/lib/python2.6/site-packages and file a
        bug report to ship these in /usr/lib/python2.6/dist-packages instead
        aborting installation of python2.6-minimal
        dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        Errors were encountered while processing:
         /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Running dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb, with and without --force-depends, fails the same way: the pre-installation script aborts with "/usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages" and dpkg returns error exit status 1. Any clues how to fix this?
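
    A hedged fix, reading the pre-installation script's complaint literally (back up first - the directory may contain locally installed packages): make /usr/lib/python2.6/site-packages the symlink the package expects, then retry.

        # move the offending directory aside and create the expected symlink
        mv /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages.old
        mkdir -p /usr/local/lib/python2.6/dist-packages
        ln -s /usr/local/lib/python2.6/dist-packages /usr/lib/python2.6/site-packages
        apt-get -f install   # or re-run: apt-get dist-upgrade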

    Read the article

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ There's a Firefox behavior (and that of some other browsers, I guess) I'd like to understand: if I take any webpage and try to insert the same image multiple times in JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use the cache" (see the article). I know there are workarounds (like adding query strings to the ends of URLs, etc.), but why does Firefox act like that? If I say that an image does not have to be cached, why is the image still taken from the cache when I try to re-insert it? Also, what cache is used for this? (I guess it's the memory cache.) Is this behavior the same for dynamic script inclusion, for example? THE ANSWER IS NO: I just tested it, and with the same headers a JS script will be redownloaded each time you append it to the DOM. PS: I know you're wondering WHY I need to do that (appending the same image multiple times and forcing a redownload), but this is the way our app works. Thank you. The good answer is: Firefox will store images for the current page load in the memory cache even if you specify that it doesn't have to cache them. You can't change this behavior, which is odd because it's not the same for JavaScript files, for example. Could someone explain, or link to a document describing, how the Firefox cache works?
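
    For reference, a minimal sketch of the query-string workaround mentioned above (helper name hypothetical) - each insertion gets a unique URL, so the per-page image cache can never reuse an earlier copy:

        // force a fresh download on every insertion
        function appendFreshImage(container, src) {
            var img = document.createElement('img');
            var sep = (src.indexOf('?') === -1) ? '?' : '&';
            img.src = src + sep + 'nocache=' + new Date().getTime();
            container.appendChild(img);
        }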

    Read the article

  • Serialization for memcached

    - by Ram
    I have a huge domain object (say, parent) which contains other domain objects. It takes a lot of time to "create" this parent object by querying a DB (OK, we are optimizing the DB), so we decided to cache it using memcached (with NorthScale, to be specific). I have gone through my code and marked all the classes (I think) as [Serializable], but when I add the object to the cache, I see a serialization exception thrown in my VS.NET output window:

        var cache = new NorthScaleClient("MyBucket");
        cache.Store(StoreMode.Set, key, value);

    This is the exception:

        A first chance exception of type 'System.Runtime.Serialization.SerializationException' occurred in mscorlib.dll

    So my guess is that I have not marked all classes as [Serializable]. I am not using any third-party libraries and can mark any class as [Serializable], but how do I find out which class is failing when the cache is trying to serialize the object? Edit1: casperOne's comments make me think. I was able to cache these domain objects with the Microsoft Cache Application Block without marking them [Serializable], but not with NorthScale memcached. It makes me think there might be something to do with their implementation, but just out of curiosity, I am still interested in finding where it fails when trying to add the object to memcached.
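
    A hedged diagnostic sketch (AssertSerializable is a hypothetical helper): binary-serialize the object graph yourself and read the exception, which normally names the first offending type ("Type '...' is not marked as serializable"):

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        static class SerializationProbe
        {
            // throws SerializationException identifying the non-serializable type
            public static void AssertSerializable(object graph)
            {
                using (var ms = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(ms, graph);
                }
            }
        }

    Running this under the debugger with break-on-thrown enabled for SerializationException (Debug > Exceptions in Visual Studio) stops on the exact object being serialized.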

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow in size by about 5000 rows an hour. For a key that I would be querying by, the result set will grow by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try and implement memcache to keep database activity low for reads. If I run a query and create a cache result for each page of 50 results, that would work until a new entry is added. At that time, the page of latest results gets the new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cache result. It seems like a poor design. I could build the cache pages backwards; then for each page requested I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely acceptable way? What's the best method of doing this? EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity in invalidation. Given the fact that I have about 5000 updates before a query on a key should need to be invalidated, it seems that the database query cache would not be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not against a single table with TOP N. One version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? Is the best advice really to just allow a website query to go through all the layers and hit the database every request?
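
    One hedged pattern that avoids the cascade (key names hypothetical; any memcached client with get/set works): cache each row once under its own key and keep a single "latest ids" list per query key, so an insert refreshes exactly one small key and pages are rebuilt from cheap lookups:

        # minimal sketch in Python with a python-memcached-style client
        PAGE = 50

        def on_insert(mc, key, row):
            mc.set('row:%d' % row['id'], row)
            ids = mc.get('latest:%s' % key) or []
            # prepend the new id and keep enough for 10 pages
            mc.set('latest:%s' % key, ([row['id']] + ids)[:PAGE * 10])

        def get_page(mc, key, page):
            ids = (mc.get('latest:%s' % key) or [])[page * PAGE:(page + 1) * PAGE]
            return [mc.get('row:%d' % i) for i in ids]

    Whether to also cache the rendered HTML per page then becomes a separate, shorter-TTL layer on top.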

    Read the article

  • Rolling File appender usage

    - by Anand
    What is a rolling file appender? I want JBoss to delete logs that either exceed a maximum size or exceed a certain age. People on this forum have suggested I use a rolling file appender. How do I configure it in the jboss-log4j.xml file?
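
    A sketch for jboss-log4j.xml, assuming stock log4j classes: RollingFileAppender rolls by size, and MaxBackupIndex caps how many rolled files are kept, so total disk use is bounded. DailyRollingFileAppender rolls by date instead, but note it does not delete old files on its own.

        <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
            <param name="File" value="${jboss.server.log.dir}/server.log"/>
            <param name="MaxFileSize" value="10MB"/>
            <param name="MaxBackupIndex" value="5"/> <!-- keep at most 5 rolled files -->
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
            </layout>
        </appender>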

    Read the article

  • How can I test caching and cache busting?

    - by Nathan Long
    In PHP, I'm trying to steal a page from the Rails playbook (see 'Using Asset Timestamps' here): By default, Rails appends assets' timestamps to all asset paths. This allows you to set a cache-expiration date for the asset far into the future, but still be able to instantly invalidate it by simply updating the file (and hence updating the timestamp, which then updates the URL as the timestamp is part of that, which in turn busts the cache). It's the responsibility of the web server you use to set the far-future expiration date on cached assets that you need to take advantage of this feature. Here's an example for Apache:

        # Asset Expiration
        ExpiresActive On
        <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
            ExpiresDefault "access plus 1 year"
        </FilesMatch>

    If you look at the source for a Rails page, you'll see what they mean: the path to a stylesheet might be "/stylesheets/scaffold.css?1268228124", where the numbers at the end are the timestamp when the file was last updated. So it should work like this:

        1. The browser says 'give me this page'.
        2. The server says 'here, and by the way, this stylesheet called scaffold.css?1268228124 can be cached for a year - it's not gonna change.'
        3. On reloads, the browser says 'I'm not asking for that css file, because my local copy is still good.'
        4. A month later, you edit and save the file, which changes the timestamp, so the file is no longer called scaffold.css?1268228124.
        5. When the browser sees that, it says 'I've never seen that file! Give me a copy, please.' The cache is 'busted.'

    I think that's brilliant. So I wrote a function that spits out stylesheet and javascript tags with timestamps appended to the file names, and I configured Apache with the statement above. Now: how do I tell if the caching and cache busting are working? I'm checking my pages with two plugins for Firebug: YSlow and Google Page Speed. Both seem to say that my files are caching: "Add expires headers" in YSlow and "leverage browser caching" in Page Speed are both checked. But when I look at the Page Speed Activity, I see a lot of requests and waiting and no 'cache hits'. If I change my stylesheet and reload, I do see the change immediately. But I don't know if that's because the browser never cached in the first place or because the cache is busted. How can I tell?
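
    A hedged way to check from the command line, assuming curl is available: request an asset and read the caching headers directly, then watch what the browser sends on reload.

        # the far-future policy is working if Expires/Cache-Control show up here
        curl -s -D - -o /dev/null "http://yoursite/stylesheets/scaffold.css?1268228124" \
          | grep -iE 'expires|cache-control|last-modified'

    On a normal reload, a browser that merely re-validates sends If-Modified-Since and gets 304 Not Modified; a true cache hit generates no request at all, which is also why a cold-cache Page Speed Activity run shows only requests and no 'cache hits'. Firefox's about:cache page lists what is actually stored.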

    Read the article

  • Why reduce the size of the Java JVM thread stack?

    - by djangofan
    I was reading an article on handling out-of-memory error conditions in Java (on the JBoss platform) and saw a suggestion to reduce the size of the thread stack. Can anyone explain how "reducing" the size of the thread stack helps with a max-memory error condition? http://community.jboss.org/wiki/OutOfMemoryExceptions
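
    The usual reasoning, offered as a hedged summary rather than from the linked article: each thread reserves its own stack (set by -Xss) outside the Java heap, and on a 32-bit JVM the limited process address space has to hold the heap, permgen, native code, and every thread stack. A few hundred threads at a large default stack size can exhaust address space and surface as OutOfMemoryError ("unable to create new native thread"), so shrinking -Xss leaves room for more threads or a bigger heap:

        # hypothetical values - too small an -Xss trades the OOM for StackOverflowError
        java -Xss128k -Xmx1024m -jar server.jar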

    Read the article

  • Which is better: JNP or CORBA?

    - by Juvinao
    Hello. I have tried EJB 3.0 with a Swing client, using both GlassFish and JBoss as the application server. With GlassFish, the communication is via CORBA; with JBoss, it is via JNP. In my tests, JNP was faster than CORBA. Has anyone seen something like this? And can anyone tell me which is better, JNP or CORBA, and why? Thanks

    Read the article

  • Seam EJB3 in an EAR - is it usable by another app?

    - by Jim Ward
    Seam 2.1 and JBoss 4.2.2. I have set up the first app to have the EJB in the EAR with a local interface. The second app can look up the JNDI name "ear-name/ejbname/local", but fails with NoClassDefFoundError. Does the EJB .jar need to be outside of the EAR? Is this a classloader visibility issue, a JBoss version issue, or something else? Thank you for your thoughts.

    Read the article

  • How to recover a website's lost robots.txt?

    - by Jessica
    I found my website in the Wayback Machine a few months ago, but today I've tried again and now it tells me it can't find robots.txt. My old webhost stopped paying for their servers back in August without any notice. I was going to do a backup the day it happened. Is there a way just to find the text? I have the old IP, images, but nothing else. None of the big search engines have caches anymore, and I already looked in the cache of three of my Macs with nothing to be found.

    Read the article

  • How to ensure apache2 reads htaccess for custom expiry?

    - by tzot
    I have a site with Apache 2.2.22. I have enabled the mod_expires and mod_headers modules, seemingly correctly:

        $ apachectl -t -D DUMP_MODULES
        …
        expires_module (shared)
        headers_module (shared)
        …

    Settings include:

        ExpiresActive On
        ExpiresDefault "access plus 10 minutes"
        ExpiresByType application/xml "access plus 1 minute"

    Checking response headers, I see that max-age is set correctly both for the generic case and for XML files (which are auto-generated, but mostly static). I would like a different expiry for XML files in one directory (e.g. /data), so that http://site/data/sample.xml expires 24 hours later. I put the following in data/.htaccess:

        ExpiresByType application/xml "access plus 24 hours"
        Header set Cache-control "max-age=86400, public"

    but it seems that Apache ignores this. How can I ensure Apache reads the .htaccess directives? I can provide further information if requested.
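
    A likely cause, offered as an assumption: .htaccess files are only consulted where AllowOverride permits it, and by Apache's classification the mod_expires directives need the Indexes override while Header needs FileInfo. A sketch for the main config (the path is hypothetical):

        <Directory /var/www/site/data>
            AllowOverride Indexes FileInfo
        </Directory>

    AllowOverride itself is only valid in <Directory> sections of the main configuration, not in .htaccess, and a restart or graceful reload is needed after changing it.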

    Read the article
