Search Results

Search found 27152 results on 1087 pages for 'google cache'.

Page 433/1087

  • Do I need to enable DRS to use Dynacache in Websphere Application Server Cluster

    - by rabs
    We are running a WebSphere Commerce application with several WebSphere application servers configured in a cluster. We are using DynaCache, so each server in the cluster has its own cached objects in its own JVM, and we use CACHEIVL with database triggers for all cache invalidations. I was reading http://www.ibm.com/developerworks/websphere/library/techarticles/0603_crick/0603_crick.html and found an interesting sentence: "Furthermore, cache replication is necessary to ensure that invalidation messages are shared between the servers in a cluster." Thinking about this, it makes sense that for invalidation to work it would need to be triggered on all the servers in the cluster, but I couldn't find confirmation of this in the mountains of IBM doco. Does anyone know whether trigger-based cache invalidation (through CACHEIVL) works when several clustered application servers each have their own cache and DRS is not turned on, or do I need DRS for this to work?
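
    For context, a programmatic invalidation against the local DynaCache instance looks roughly like the sketch below (the JNDI name is the assumed default for the base cache and the class name is made up; neither is taken from the application above). Whichever mechanism raises it, the invalidation event starts out in a single JVM's cache instance, which is presumably why the article ties sharing it to replication.

        import javax.naming.InitialContext;
        import com.ibm.websphere.cache.DistributedMap;

        public class CacheInvalidator {
            public void invalidate(String cacheId) throws Exception {
                // "services/cache/distributedmap" is the assumed default JNDI name;
                // substitute the cache instance actually configured for the application.
                InitialContext ctx = new InitialContext();
                DistributedMap cache = (DistributedMap) ctx.lookup("services/cache/distributedmap");
                // The invalidation happens in this JVM's cache only; propagating it to
                // the other cluster members is what replication (DRS) is there for.
                cache.invalidate(cacheId);
            }
        }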

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but if you read the Javadoc:

        /**
         * We want fast startup times at the expense of runtime performance and some up front error
         * checking.
         */
        DEVELOPMENT,

        /**
         * We want to catch errors as early as possible and take performance hits up front.
         */
        PRODUCTION

    Assume a scenario where you have a stateless call to an application server and the initial receiving method (or thereabouts) creates the injector anew on every call. If not all of the module bindings are needed in a given call, it would seem better to use the DEVELOPMENT stage (which is the default) and not take the performance hit up front, because you may never take it at all; here the distinction between "up front" and "runtime performance" is kind of moot, as it is one call. Of course, the downside appears to be that you lose the error checking, so problematic code paths can surprise you. So the question boils down to: are the assumptions above correct? Will you save performance on a large set of modules when the lifetime of an injector is one call?
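
    For concreteness, a small sketch of where the stage actually gets chosen (MyAppModule is a made-up placeholder for the real bindings, not something from the question):

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.Injector;
        import com.google.inject.Stage;

        public class StageDemo {
            // Hypothetical module standing in for the real application bindings.
            static class MyAppModule extends AbstractModule {
                @Override protected void configure() { /* bind(...) calls go here */ }
            }

            public static void main(String[] args) {
                // DEVELOPMENT defers eager singleton construction and some checks to first use;
                // PRODUCTION pays that cost (and does more validation) inside createInjector.
                Injector devInjector = Guice.createInjector(Stage.DEVELOPMENT, new MyAppModule());
                Injector prodInjector = Guice.createInjector(Stage.PRODUCTION, new MyAppModule());
                System.out.println(devInjector + " / " + prodInjector);
            }
        }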

    Read the article

  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/ The "weirdness" in part 7 is what caught my interest. My first thought was "That's just C# being weird". It's not. I wrote the following C++ code:

        volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 );
        memset( (void*)p, 0, sizeof( int ) * 8 );

        double dStart = t.GetTime();

        for (int i = 0; i < 200000000; i++)
        {
            //p[0]++;p[1]++;p[2]++;p[3]++;   // Option 1
            //p[0]++;p[2]++;p[4]++;p[6]++;   // Option 2
            p[0]++;p[2]++;                   // Option 3
        }

        double dTime = t.GetTime() - dStart;

    The timings I get on my 2.4 GHz Core 2 Quad are as follows:

        Option 1 = ~8 cycles per loop.
        Option 2 = ~4 cycles per loop.
        Option 3 = ~6 cycles per loop.

    Now this is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (this is pure guesswork on my part). On that basis, in Option 1 it will increment p[0] (1 cycle), then increment p[2] (1 cycle), then it has to wait 1 cycle (for cache), then p[1] (1 cycle), then wait 1 cycle (for cache), then p[3] (1 cycle). Finally, 2 cycles for increment and jump (though it's usually implemented as decrement and jump). This gives a total of 8 cycles. In Option 2 it can increment p[0] and p[4] in one cycle, then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on cache. Total: 4 cycles. In Option 3 it can increment p[0], then has to wait 2 cycles, then increment p[2], then subtract and jump. The problem is, if you set case 3 to increment p[0] and p[4] it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water).

    So ... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also, I'd love to know what I've got wrong in my thinking above, as I obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it as well!

    Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine, so forgive the change in timings) with and without nops and with different counts of nops:

        case 2 - 0.46   00401ABD jne (401AB0h)

        0 nops - 0.68   00401AB7 jne (401AB0h)
        1 nop  - 0.61   00401AB8 jne (401AB0h)
        2 nops - 0.636  00401AB9 jne (401AB0h)
        3 nops - 0.632  00401ABA jne (401AB0h)
        4 nops - 0.66   00401ABB jne (401AB0h)
        5 nops - 0.52   00401ABC jne (401AB0h)
        6 nops - 0.46   00401ABD jne (401AB0h)
        7 nops - 0.46   00401ABE jne (401AB0h)
        8 nops - 0.46   00401ABF jne (401AB0h)
        9 nops - 0.55   00401AC0 jne (401AB0h)

    I've included the jump statements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16 ... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit); however, something IS going on. I'm more and more intrigued to try and figure out what it is now. It appears to be some sort of memory-alignment oddity rather than an instruction-throughput oddity. Anyone want to explain this for an inquisitive mind? :D

    Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop in to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. It's interesting that I need 6 nops to improve performance, though. I wonder how many nops the processor can issue per cycle? If it's 3, then that accounts for the cache write latency ... but if that's it, why is the latency occurring? Curiouser and curiouser ...

    Read the article

  • How to (legitimately) access files after putting self into chrooted sandbox?

    - by unknown google user
    I'm changing a Linux C++ program which gives the user limited file access; the program therefore chroots itself into a sandbox containing the files the user can get at. All worked well. Now, however, the program needs to access some files for its own needs (not the user's), but they are outside the sandbox. I know chroot allows access to files opened before the chroot, but in this case the needed files could be a few among many hundreds, so it is obviously impractical to open them all just for the couple that might be required. Is there any way to get at the files?

    Read the article

  • Writing a batch file to delete files with wildcards

    - by iamthejeff
    I have multiple websites set up under the same folder, and I want to create a batch file that will delete the cache in each of them without having to add a new line for each site. For example, I am currently using this:

        del /S /Q D:\www\site-name\cache\*

    which works, but I have to add a new line for every site in D:\www. The del command doesn't support a wildcard in the middle of the path:

        del /S /Q D:\www\*\cache\*

    So what is a better way to do this?

    Read the article

  • final transient fields and serialization

    - by doublep
    Is it possible to have final transient fields that are set to any non-default value after deserialization in Java? My use case is a cache variable — that's why it is transient. I also have a habit of making Map fields that won't be reassigned (i.e. the contents of the map change, but the object itself remains the same) final. However, these attributes seem to be contradictory: while the compiler allows such a combination, I cannot get the field set to anything but null after deserialization. I tried the following, without success:

    - simple field initialization (shown in the example): this is what I normally do, but the initialization doesn't seem to happen after deserialization;
    - initialization in the constructor (I believe this is semantically the same as above, though);
    - assigning the field in readObject() — cannot be done, since the field is final.

    In the example, cache is public only for testing.

        import java.io.*;
        import java.util.*;

        public class test
        {
            public static void main (String[] args) throws Exception
            {
                X x = new X ();
                System.out.println (x + " " + x.cache);

                ByteArrayOutputStream buffer = new ByteArrayOutputStream ();
                new ObjectOutputStream (buffer).writeObject (x);
                x = (X) new ObjectInputStream (new ByteArrayInputStream (buffer.toByteArray ())).readObject ();

                System.out.println (x + " " + x.cache);
            }

            public static class X implements Serializable
            {
                public final transient Map <Object, Object> cache = new HashMap <Object, Object> ();
            }
        }

    Output:

        test$X@1a46e30 {}
        test$X@190d11 null
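
    One workaround worth noting, offered only as a sketch: it works cleanly here because X carries no other state, and with non-transient fields you would have to copy them across yourself. Deserialization bypasses field initializers, so readResolve() can swap the instance read from the stream for a freshly constructed one whose initializer has run:

        public static class X implements Serializable
        {
            public final transient Map<Object, Object> cache = new HashMap<Object, Object> ();

            // The stream's X arrives with cache == null because field initializers
            // don't run during deserialization; returning a new instance here makes
            // the caller see an object with a freshly built cache instead.
            private Object readResolve ()
            {
                return new X ();
            }
        }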

    Read the article

  • What's the difference between DI and factory patterns?

    - by Anthony Short
    I have a class which depends on 3 classes, all 3 of which have other classes they rely on. Currently, I'm using a container class to build up all the required classes, inject them into one another and return the application. The simplified version of the container looks something like this:

        class Builder
        {
            private $_options;

            public function __construct($options)
            {
                $this->_options = $options;
            }

            public function build()
            {
                $cache = $this->getCache();
                $response = $this->getResponse();
                $engine = $this->getEngine();
                return new Application($cache, $response, $engine);
            }

            public function getResponse()
            {
                $encoder = $this->getResponseEncoder();
                $cache = $this->getResponseCache();
                return new Response($encoder, $cache);
            }

            // Methods for building each object
        }

    I'm not sure whether this would be classified as a Factory Method or a DI container. They both seem to solve the same problem in the same way: they build objects and inject dependencies. This container has some more complicated building methods, like loading observers and attaching them to observable objects. Should factories be doing all the building (loading extensions, etc.) and the DI container use those factories to inject dependencies? That way the sub-packages, like Cache, Response, etc., can each have their own specialised factories.

    Read the article

  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache of frequently used files (images, data records, etc.) with a TTL of about one week and a size limit (100 MB). There are sometimes more than 15000 files in a directory. On application exit the cache writes a control file with the current cache size along with other useful information. If the application crashes for some reason (sh.. happens sometimes), I then have to calculate the size of all files on application start to make sure I know the cache size. My app crashes at this point because of low memory and I have no clue why. The memory leak detector does not show any leaks at all, and I don't see any either. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on the iPhone, maybe without enumerating the whole contents of the directory? The code is executed on the main thread.

        NSUInteger result = 0;
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain];

        int i = 0;
        while ([dirEnum nextObject])
        {
            NSDictionary *attributes = [dirEnum fileAttributes];
            NSNumber *fileSize = [attributes objectForKey:NSFileSize];
            result += [fileSize unsignedIntValue];

            if (++i % 500 == 0)  // I tried lower values too
            {
                [pool drain];
            }
        }

        [dirEnum release];
        dirEnum = nil;

        [pool release];
        pool = nil;

    Thanks, MacTouch

    Read the article

  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
    I'm benchmarking the performance gains from using a second-level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity
        {
            @Id
            @GeneratedValue
            private long id;

            @Column(length = 5000)
            private String data;

            //---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities, plus: -->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding the cache, the query above to list all entities took an average of 65 ms. After adding the cache, it takes an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done to improve performance?
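
    One detail worth checking, offered as a likely culprit rather than a definitive diagnosis: the entity-level second-level cache only answers lookups by identifier, so a Criteria list() keeps going to the database unless the query cache is switched on as well, along the lines of the sketch below (which reuses the session and MyEntity from the question):

        // hibernate.cfg.xml also needs the query cache enabled:
        //     <property name="hibernate.cache.use_query_cache">true</property>

        List<MyEntity> results = session.createCriteria(MyEntity.class)
                .setCacheable(true)   // opt this particular query into the query cache
                .list();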

    Read the article

  • Ajax Request using jQuery in Rails

    - by Steve
    Hi... I am sending an Ajax request using jQuery, and I am getting a "405 Method Not Allowed" error. I am just posting a form, which takes the details from the form and inserts them into the DB. Just the usual stuff. I am using WEBrick, which comes as the default with the Rails package. Can somebody please tell me how to fix this? This is the code that triggers the Ajax request:

        $.post($(this).attr("action") + ".js", $(this).serialize(), null, "script");

    Response headers:

        Cache-Control    no-cache
        Allow            GET, PUT, DELETE
        Content-Type     text/html; charset=utf-8
        Content-Length   9502
        Server           WEBrick/1.3.1 (Ruby/1.9.1/2009-12-07)
        Date             Wed, 02 Jun 2010 20:41:33 GMT
        Connection       Keep-Alive

    Request headers:

        Host              localhost:3000
        User-Agent        Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept            application/json, text/javascript, */*
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        115
        Connection        keep-alive
        Content-Type      application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With  XMLHttpRequest
        Referer           http://localhost:3000/viewspot/3
        Content-Length    141
        Pragma            no-cache
        Cache-Control     no-cache

    Read the article

  • Is there a way to specify wildcarded region names when using ehcache with hibernate?

    - by bkent314
    When using Ehcache with Hibernate, is there a way to specify region names with wildcards in the ehcache.xml file? For example, to allow for cache settings at the package level (with * as a wildcard indicator):

        <cache name="com.example.my.package1.*" ... />
        <cache name="com.example.my.package2.*" ... />

    (Note: the package-level distinction is just an example. My question is about wildcards in the general case.)
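
    If wildcards turn out not to be supported in ehcache.xml, one possible fallback (a rough sketch assuming Ehcache 2.x's programmatic API, a hypothetical list of region names, and that Hibernate and the application share the same CacheManager) is to register the regions in code at startup and let anything unregistered fall through to defaultCache:

        import net.sf.ehcache.Cache;
        import net.sf.ehcache.CacheManager;
        import net.sf.ehcache.config.CacheConfiguration;

        public class RegionSetup {
            public static void main(String[] args) {
                CacheManager manager = CacheManager.getInstance();
                // Hypothetical region names; in practice these are the entity class names.
                String[] regions = { "com.example.my.package1.Foo", "com.example.my.package2.Bar" };
                for (String region : regions) {
                    if (!manager.cacheExists(region)) {
                        // 10000 in-memory entries, 1 hour TTL: illustrative numbers only.
                        CacheConfiguration config = new CacheConfiguration(region, 10000)
                                .timeToLiveSeconds(3600);
                        manager.addCache(new Cache(config));
                    }
                }
            }
        }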

    Read the article

  • Will MySql caching cause performance problems?

    - by Camran
    I am about to upload my website onto a VPS. It is a classifieds website, where all data is stored in MySQL and Solr. I wonder whether using MySQL's query cache will slow the server down. That is, if somebody makes a search for the first time and MySQL has to cache the query, will the caching make the server slower than if it did not cache anything? I know things will improve in terms of performance once the caching is done, but I would like to know whether I should even use the cache or not. What do you think? Thanks

    Read the article

  • ASP.net web page still displaying cached versions

    - by user279521
    My web page is still displaying a previously cached version of the page. I have this in the Page_Load event:

        Response.Clear();
        Response.Buffer = true;
        Response.ExpiresAbsolute = DateTime.Now.AddDays(-1d);
        Response.Expires = -1;
        Response.CacheControl = "no-cache";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);

    I have this in Page_Init:

        protected void Page_Init(object Sender, EventArgs e)
        {
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetExpires(DateTime.Now.AddDays(-1));
        }

    Any idea what I might be missing?

    Read the article

  • Is a Multi-DAL Approach the way to go here?

    - by Krisc
    Working on the data access / model layer in this little MVC2 project and trying to think things out for future projects. I have a database with some basic tables and I have classes in the model layer that represent them. I obviously need something to connect the two. The easiest is to provide some sort of 'provider' that can run operations on the database and return objects. But this is for a website that would potentially be used "a lot" (I know, very general), so I want to cache results from the data layer and keep the cache updated as new data is generated.

    This question deals with how best to approach this problem of dual DALs: one that returns cached data when possible and goes to the data layer when there is a cache miss. But more importantly, how to integrate the core provider (the thing that goes to the database) with the caching layer so that it too can rely on cached objects rather than creating new ones. Right now I have the following interfaces.

    IDataProvider is used to reach the database. It doesn't concern itself with the meaning of the objects it produces, but simply the way to produce them.

        interface IDataProvider
        {
            // Select, Update, Create, et cetera access
            IEnumerable<Entry> GetEntries();
            Entry GetEntryById(int id);
        }

    IDataManager is a layer that sits on top of the IDataProvider layer and manages the cache.

        interface IDataManager : IDataProvider
        {
            void ClearCache();
        }

    Note that in practice the IDataManager implementation will have useful helper functions to add objects to their related cache stores. (In the future I may define other functions on the interface.) I guess what I am looking for is the best way to approach a loop back from the IDataProvider implementations so that they can access the cache. Or is a different approach entirely in order? I am not very interested in 3rd-party products at the moment, as I am interested in the design of these things much more than this specific implementation.

    Edit: I realize the title may be a bit misleading. I apologize for that... not sure what to call this question.
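
    One common shape for that loop-back, offered only as a sketch (written in Java rather than C#, but it maps directly onto the IDataProvider interface above; Entry and all class names here are stand-ins, not from the project): make the caching layer a decorator that implements the same provider contract and falls through to the real provider on a miss, so callers never need to know which one they hold.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Minimal stand-in for the question's Entry entity.
        class Entry {
            private final int id;
            Entry(int id) { this.id = id; }
            int getId() { return id; }
        }

        // Same contract as IDataProvider.
        interface DataProvider {
            List<Entry> getEntries();
            Entry getEntryById(int id);
        }

        class CachingDataProvider implements DataProvider {
            private final DataProvider inner;                          // the database-backed provider
            private final Map<Integer, Entry> byId = new HashMap<Integer, Entry>();

            CachingDataProvider(DataProvider inner) { this.inner = inner; }

            public List<Entry> getEntries() {
                List<Entry> entries = inner.getEntries();
                for (Entry e : entries) byId.put(e.getId(), e);        // keep the id cache warm
                return entries;
            }

            public Entry getEntryById(int id) {
                Entry cached = byId.get(id);
                if (cached != null) return cached;                     // cache hit
                Entry loaded = inner.getEntryById(id);                 // miss: fall through to the DB
                if (loaded != null) byId.put(id, loaded);
                return loaded;
            }

            public void clearCache() { byId.clear(); }
        }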

    Read the article

  • JavaScript doesn't parse when mod-rewrited through a PHP file?

    - by Newbtophp
    If I do the following (this is the actual/direct path to the JavaScript file):

        <script href="http://localhost/tpl/blue/js/functions.js" type="text/javascript"></script>

    it works fine, and the JavaScript parses as it's meant to. However, I want to shorten the path to the JavaScript file (as well as do some caching), which is why I'm rewriting all JavaScript files via .htaccess to cache.php (which handles the caching). The .htaccess contains the following:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^js/(.+?\.js)$ cache.php?file=$1 [NC]
        </IfModule>

    cache.php contains the following PHP code:

        <?php
        if (extension_loaded('zlib')) {
            ob_start('ob_gzhandler');
        }

        $file = basename($_GET['file']);

        if (file_exists("tpl/blue/js/".$file)) {
            header("Content-Type: application/javascript");
            header('Cache-Control: must-revalidate');
            header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
            echo file_get_contents("tpl/blue/js/".$file);
        }
        ?>

    and I'm calling the JavaScript file like so:

        <script href="http://localhost/js/functions.js" type="text/javascript"></script>

    But doing that, the JavaScript doesn't parse (if I call the functions defined in functions.js later on in the page, they don't work), so there is a problem either with cache.php or with the rewrite rule, because the file by itself works fine. If I access the rewritten file, http://localhost/js/functions.js, directly, it prints the JavaScript code, as any JavaScript file would, so I'm confused as to what I'm doing wrong... All help is appreciated! :)

    Read the article

  • How to trigger code on two different servers in a WAS cluster?

    - by Dean J
    I have an administrative page in a web application that resets the cache, but it only resets the cache on the current JVM. The web application is deployed as a cluster to two WAS servers. Is there any way I can elegantly have the "clear cache" button on each server trigger the method on both JVMs?

    Edit: The original developer just wrote a singleton holding a HashMap to be the cache in question. It is lightweight and (previously) worked just fine for the requirements. It caches content pulled from six or seven web services for specified amounts of time.

    Edit: The entire application in question is three pages, so the elegant solution might well be the lightest solution.
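
    One lightweight possibility that fits the singleton-HashMap design, offered purely as a sketch (the class below and the idea of a shared version stamp are not from the application; the stamp could live in a single database row that the admin page increments): each JVM compares the shared counter on lookup and empties its own map when the counter has moved.

        import java.util.HashMap;
        import java.util.Map;

        public final class LocalCache {
            private static final LocalCache INSTANCE = new LocalCache();
            private final Map<String, Object> entries = new HashMap<String, Object>();
            private long seenVersion;

            private LocalCache() { }

            public static LocalCache getInstance() { return INSTANCE; }

            // currentVersion is read from shared storage (e.g. one database row) on each
            // request or on a timer; when any server bumps it, this JVM drops its entries.
            public synchronized Object get(String key, long currentVersion) {
                if (currentVersion != seenVersion) {
                    entries.clear();
                    seenVersion = currentVersion;
                }
                return entries.get(key);
            }

            public synchronized void put(String key, Object value) { entries.put(key, value); }
        }

    The "clear cache" button then only has to bump the shared counter once, and both JVMs empty themselves on their next lookup.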

    Read the article

  • Please explain syntax rules and scope for "typedef"

    - by unknown google user
    What are the rules? OTOH, the simple case seems to imply the new type name is the last thing on the line; here, Uchar is the new type:

        typedef unsigned char Uchar;

    But a function pointer is completely different. Here the new type is pFunc:

        typedef int (*pFunc) (int);

    I can't think of any other examples offhand, but I have come across some very confusing usages. So are there rules, or are people just supposed to know from experience that this is how it is done because they have seen it done this way before? Also: what is the scope of a typedef? Thanks to everyone.

    Read the article

  • Chain of Responsibility Pattern: is it a good practice to have interdependent handlers?

    - by wei
    I have this scenario: I have a chain of query handlers. The first queries the cache; if the cache can't answer or the answer is stale, the next hits a database; if that can't find the answer or the answer is stale again, the last queries a remote web service. But I am not sure this is the right way to use the pattern, since the workflow is pretty much fixed, and the cache and database handlers depend on the next step's return result to refresh their records.
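
    For reference, a minimal sketch of the chain as described, with the refresh-on-return behaviour made explicit (all class and method names are made up for illustration, not taken from the project):

        // Each handler answers if it can; otherwise it delegates to the next one and
        // may refresh its own store from whatever the rest of the chain returns.
        abstract class QueryHandler {
            private QueryHandler next;

            QueryHandler linkTo(QueryHandler next) { this.next = next; return next; }

            String handle(String query) {
                String answer = tryAnswer(query);            // null means miss or stale
                if (answer != null) return answer;
                String fromBelow = (next != null) ? next.handle(query) : null;
                if (fromBelow != null) refresh(query, fromBelow);
                return fromBelow;
            }

            protected abstract String tryAnswer(String query);
            protected void refresh(String query, String answer) { /* no-op by default */ }
        }

        class CacheHandler extends QueryHandler {
            protected String tryAnswer(String query) { return null; /* look up in-memory cache */ }
            protected void refresh(String query, String answer) { /* store answer in the cache */ }
        }

        class DatabaseHandler extends QueryHandler {
            protected String tryAnswer(String query) { return null; /* query the database */ }
            protected void refresh(String query, String answer) { /* update the database row */ }
        }

        class WebServiceHandler extends QueryHandler {
            protected String tryAnswer(String query) { return "remote result for " + query; }
        }

    Wiring is then "QueryHandler chain = new CacheHandler(); chain.linkTo(new DatabaseHandler()).linkTo(new WebServiceHandler());" and callers only ever see chain.handle(...). Whether the back-propagating refresh() belongs inside the handlers or outside the chain is exactly the judgment call the question raises.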

    Read the article

  • Why does Google Page-Speed say that elements need compressing, when they already are compressed?

    - by Peter Snow
    My page is compressed using the following in .htaccess:

        <ifModule mod_gzip.c>
            mod_gzip_on Yes
            mod_gzip_dechunk Yes
            mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
            mod_gzip_item_include handler ^cgi-script$
            mod_gzip_item_include mime ^text/.*
            mod_gzip_item_include mime ^application/x-javascript.*
            mod_gzip_item_exclude mime ^image/.*
            mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

    YSlow says that the page, and specifically the elements Page-Speed is complaining about, are compressed, and it gives the page an overall score of 90/100. Why, then, does Page-Speed say "Compressing the following resources with gzip could reduce their transfer size by 118.8KiB (70% reduction)" and give the page an overall score of 33/100?

    Read the article

  • Undefined javascript function?

    - by user74283
    Working on a Google Maps project and stuck on what seems to be a minor issue. When I call the displayMarkers function, Firebug returns:

        ReferenceError: displayMarkers is not defined
        [Break On This Error] displayMarkers(1);

        <script type="text/javascript">
        function initialize() {
            var center = new google.maps.LatLng(25.7889689, -80.2264393);
            var map = new google.maps.Map(document.getElementById('map'), {
                zoom: 10,
                center: center,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            });

            //var data = [[25.924292, -80.124314], [26.140795, -80.3204049], [25.7662857, -80.194692]]
            var data = {"crs": {"type": "link", "properties": {"href": "http://spatialreference.org/ref/epsg/4326/", "type": "proj4"}}, "type": "FeatureCollection", "features": [{"geometry": {"type": "Point", "coordinates": [25.924292, -80.124314]}, "type": "Feature", "properties": {"industry": [2], "description": "hosp", "title": "shaytac hosp2"}, "id": 35}, {"geometry": {"type": "Point", "coordinates": [26.140795, -80.3204049]}, "type": "Feature", "properties": {"industry": [1, 2], "description": "retail", "title": "shaytac retail"}, "id": 48}, {"geometry": {"type": "Point", "coordinates": [25.7662857, -80.194692]}, "type": "Feature", "properties": {"industry": [2], "description": "hosp2", "title": "shaytac hosp3"}, "id": 36}]}

            var markers = [];
            for (var i = 0; i < data.features.length; i++) {
                var latLng = new google.maps.LatLng(data.features[i].geometry.coordinates[0], data.features[i].geometry.coordinates[1]);
                var marker = new google.maps.Marker({
                    position: latLng,
                    title: console.log(data.features[i].properties.industry[0]),
                    map: map
                });
                marker.category = data.features[i].properties.industry[0];
                marker.setVisible(true);
                markers.push(marker);
            }

            function displayMarkers(category) {
                var i;
                for (i = 0; i < markers.length; i++) {
                    if (markers[i].category === category) {
                        markers[i].setVisible(true);
                    } else {
                        markers[i].setVisible(false);
                    }
                }
            }
        }

        google.maps.event.addDomListener(window, 'load', initialize);
        </script>

        <div id="map-container">
            <div id="map"></div>
        </div>

        <input type="button" value="Retail" onclick="displayMarkers(1);">

    Read the article

  • Coldfusion 8 and HTTP PUT - is there a way to PUT an object?

    - by ciaranarcher
    Hi all,

    We are using EHCache with CF 8 to cache stuff on a central server using a RESTful interface over HTTP. I am trying to cache a cfquery object to the cache server. I can get this to work if I call EHCache directly (i.e. store it in a local cache), but if I try to cache on a remote server over HTTP I run into problems. The code I am using is as follows:

        <cfhttp url="http://localhost:8080/myCache/myKey" method="put" result="r" timeout="2" throwonerror="true">
            <cfhttpparam type="body" value="#ARGUMENTS.item#" />
        </cfhttp>

    CF doesn't like the reference to #ARGUMENTS.item# and complains: "Complex object types cannot be converted to simple values." Can anyone give me an example of how to PUT an object over HTTP using CF? If this is not possible with CF, then a Java example would be the next best thing. Many thanks in advance!

    PS: I do not want to serialize to text/JSON etc., as this approach has issues with data integrity and, most importantly, it's not fast enough.

    Read the article
