Search Results

Search found 7920 results on 317 pages for 'drupal cache'.


  • How to specify HTTP expiration header? (ASP.NET MVC+IIS)

    - by Marek
    I am already using output caching in my ASP.NET MVC application. PageSpeed tells me to specify HTTP cache expiration for CSS and images in the response header. I know that the Response object contains properties that control cache expiration, and that these can be used to control HTTP caching for responses that I serve from my own code: Response.Expires, Response.ExpiresAbsolute, Response.CacheControl, or alternatively Response.AddHeader("Expires", "Thu, 01 Dec 1994 16:00:00 GMT"); The question is: how do I set the Expires header for resources that are served automatically, e.g. images, CSS and such?
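    A hedged sketch of one common approach: for static files that IIS 7+ serves itself (integrated pipeline), client caching can be set in web.config rather than in code. The 30-day lifetime below is an arbitrary example value.

        <!-- web.config: far-future client caching for static content served by IIS itself -->
        <system.webServer>
          <staticContent>
            <!-- Sends Cache-Control: max-age for CSS, images and other static files -->
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
          </staticContent>
        </system.webServer>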

    Read the article

  • Caching stored procedure results in LINQ

    - by itdebeloper
    In our web application we have a lot of stored procedures that look like this one: getSomeData(/* 7 different params */) This stored procedure doesn't make any updates. We are using LINQ. I know that the data changes no more often than once per day, so the results for the same sets of parameter values will be the same. Does LINQ have a simple caching solution? I know how to write a cache mechanism 'manually' in .NET, but I supposed that in LINQ this problem was already solved. I'm a lazy guy :) so I'm looking for something really simple like: Linqu_global_store_procedure_configuration.CacheDuration="600" Linqu_global_store_procedure_configuration.CacheVaryByParam="*" I'm using .NET 3.5, but it's not a problem to move to 4.0.

    Read the article

  • static images aren't caching with php-generated page

    - by scootklein
    Our website was just converted to being generated by mod_rewrite and PHP scripts. Images aren't caching in browsers when they seemingly should be. All images follow this format: <img src="/images/header.png" /> I must prevent the script output itself from being cached, because the PHP parser needs to handle each page dynamically on each request; however, the download overhead of the large images is cumbersome on every single page load. I would ideally send "Cache-Control: no-cache, must-revalidate" and "Expires: some_date_in_the_past" headers to force revalidation of the PHP script. Why isn't the browser caching static images with consistent src values across all pages?
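    One way to split the caching policy, sketched below on the assumption that the site runs on Apache with mod_expires and mod_headers enabled: keep the no-cache headers on the PHP output, but let the static image files carry long-lived caching headers of their own. The one-week lifetime is an arbitrary example.

        # .htaccess (or vhost config): cache static images aggressively,
        # independently of the dynamically generated PHP pages.
        <FilesMatch "\.(png|jpe?g|gif)$">
            ExpiresActive On
            ExpiresDefault "access plus 1 week"
            Header set Cache-Control "public, max-age=604800"
        </FilesMatch>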

    Read the article

  • Dot Net Nuke module works in "Edit" mode but not for "View": cache problem?

    - by Godeke
    I have a DNN module that simply runs some JavaScript to compute a price based on a few input fields. This module works fine on our production site, but we had a company do a skin for us to improve the look of the site and the module fails under this new system. (DNN 05.06.00 (459) although it was 5.5 prior... I updated in a futile hope that it was a bug in the old revision.) What is incredibly odd about this is that the module works fine when I'm logged in to DNN and using the Edit mode as an administrator. In this case the small snippet of JavaScript loads fine and filling the fields results in a price. On the other hand, if I click "View" (or more importantly, if I'm not logged in at all) the page loads a cached copy. Even odder, I have found the cache files in \Portals\2\Cache\Pages are generated and then only the cached data is being used. When the cached copy is loaded, the JavaScript doesn't appear (it is normally created via a Page.ClientScript.RegisterClientScriptBlock()). Additionally, the button which posts the data to the server doesn't execute any of the server-side code (confirmed with a debugger) but instead just reloads the cached copy. If I manually delete the files in \Portals\2\Cache\Pages then everything works properly, but I have to do so after every page load: failing to do so simply loads the page as it was last generated, repeatedly. Resetting the application (either via the UI or editing web.config) doesn't change this, and clearing the cache from the Host Settings page doesn't actually clear these cached pages. I'm guessing that Edit mode bypasses the cache in some way, but I have gone as far as turning off all caching on the site (which is horrible for performance) and the cached version is still loaded. Has anyone seen anything like this? Shouldn't clearing the cache clear the files (I'm using the File provider for caching)? Shouldn't even a cached page go back to the server if the user posts back? EDIT: I should point out that permissions don't appear to be a problem on the cache directory... other pages' cached output is deleted from this folder; only this page has this issue. EDIT 2: Clarifying some settings and conditions which I didn't provide. First, this module works fine in production under DNN 5.6.0. In our test environment with the consulting company's changes it fails (the changes are skin and page layout only in theory: the module source itself verifies as unchanged). All cache settings and the like have been verified the same between the two, and we only resorted to setting the module cache to 0 and -1 (and disabling the test site's cache entirely) when we couldn't find another cause for the problem. I have watched the cache work correctly on many other pages in test: there is something about this page that is causing the problem. We have punted and are creating an installable skin based on the consultant's work, as I suspect they have somehow corrupted the DNN install (database side, I think).

    Read the article

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, have some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache). Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing. It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read the entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.
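    A minimal sketch of the "query the database for keys, read the values through the cache" pattern described above. OrderDao, loadKeysFromDatabase and the "orders" cache name are hypothetical stand-ins for illustration, not part of the Coherence API.

        import java.util.Map;
        import java.util.Set;

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class OrderQueryThroughCache {

            /** Hypothetical DAO that pushes the query down to the database and returns only keys. */
            public interface OrderDao {
                Set loadKeysFromDatabase(String region);
            }

            // The bulk getAll() returns hits from the cache and lets read-through load any misses.
            public Map findOrders(OrderDao dao, String region) {
                Set keys = dao.loadKeysFromDatabase(region);
                NamedCache cache = CacheFactory.getCache("orders");
                return cache.getAll(keys);
            }
        }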

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer. I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's cache, but due to having limited control over how long the data is stored for, it's not working ideally. The main advantages I can see of using something like memcache: no blocking on my correlated table after the query has been cached; greater flexibility in sharing the collected data between the backend collector and front-end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which then gets shared with anyone who wants to see the data of this report); redundancy and scalability if we start sharing this data with a large number of customers. The main disadvantage I can see of using something like memcache: data is not persistent if the machine is rebooted / the cache is flushed. The main advantages of using MySQL: persistent data; fewer code changes (although adding something like memcache is trivial anyway). The main disadvantages of using MySQL: I have to define table templates every time I want to store a new set of grouped data; I have to write a program which loops through the correlated data and fills these new tables; and it will potentially still grow slower as the data continues to be filled. Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan

    Read the article

  • ??1???????????????!???????????????|Oracle Coherence|??????

    - by ???02
    ????????????3????????????????????????????3??????????????????????????(?????????1?????????????????)????????????????????????RDBMS1?????????RDBMS??????????????????????????RDBMS????????????????RDBMS????????????????????????????????????SQL ????????????????????????????????????????RDBMS?????????????????????????????·???????????·??????????????????·???????????????????????????????(?????)????????????????????????????????????????????????????????????????????????????2??????????????????????????????????·??????????2?????????????????????????????·??????????????????????????????????????(?????????)???????????????????????????(????????)???????????????????·????????????????JBoss Cache??????????1????Java Map API????put/get?????????????????Java ?????????????????·??????????????????????????????????????????????1:JBoss Cache 3.0???????????????????????(Java Map API???????Java???????????????????????????)// ??????????Person????????????????// CacheFactory?DefaultCacheFactory?Cache?Fqn?Node?JBoss Cache????CacheFactory factory = new DefaultCacheFactory();// ????????Cache cache = factory.createCache();Fqn personData = Fqn.fromString("/person");// ???????????(?????·???)???Person???Node personNode = cache.getRoot().addChild(personData);// ??Person?????????????Person p1 = new Person(1234, "??", "??", "?????");// ?????·???????????personNode.put(1234, p1);?????·???·?????????·????????????????????????????????????·???·??????????????????????????·??????????????????2?????????????????????????????????????????????????????????????????????????????????????????????????????1????????RDBMS??????????????????????????2??????????????????Java API???????????????????????????????????????????????????????????·???·????????????????????????????·?????????·????????????????????????????????????????????·???·?????????????????????MapReduce???????????????????????????????????????????(?????)????????????????????·???·????????????????????????????????????????????????????????????·???????????????????????????????????????IT??????????????Web 2.0??????????????????????????????????????????????????????????????????????????????????????????????????????????????????2:??????????·???·???????Oracle Coherence?????????????????????(??????·?????JBoss Cache?????)// ??????????Person????????????????// CacheFactory?Oracle Coherence????// Person????????Map personCache = CacheFactory.getCache("person");// ??Person?????????????Person p1 = new Person(1234, "??", "??", "?????");// ?????·??????????personCache.put(1234, p1);??????????????????????3 ???????????????????????????????????·???????????????????????????·???·??????????????????????????????????·???????????????????12

    Read the article

  • What are these CPU cache settings? Snoop Filter, ACL prefetch, HW prefetch

    - by eater
    I was in my BIOS setup turning on VT-x support today and saw these other settings. A little googling indicates that they each seem to turn on some sort of optimization related to the CPU's L2 cache. They were all turned off by default. The processor in question is an Intel Xeon quad-core 3.4GHz (X5492). My OS is Linux 2.6.35.10-74.fc14.x86_64 #1 SMP Thu Dec 23 16:04:50 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. I have 4GB of RAM if that matters. Here's what the BIOS manufacturer has to say: Snoop Filter Enabling the snoop filter typically improves performance by reducing snoop traffic on the frontside bus in dual processor configurations. Well, I like the sound of improved performance. Why would the BIOS have this off by default? Or by "dual processor" do they not mean multi-core? Regardless, is there a downside if this is on? ACL Prefetch When enabled, the Adjacent Cache Line Prefetcher fetches both cache lines that comprise a cache line pair when it determines required data is not currently in its cache. When disabled, the processor will only fetch the cache line required by the processor. HW Prefetch Fetches an extra line of data into L2 from external memory. Both of these sound like optimizations that have some drawbacks. What are the reasons to turn them on? What are the reasons to leave them off? Why is the default off?

    Read the article

  • JPA Entity Manager resource handling

    - by chiragshahkapadia
    Every time I call a JPA method it rebinds the entities and named queries. My persistence properties are: <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/> <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider"/> <property name="hibernate.cache.use_second_level_cache" value="true"/> <property name="hibernate.cache.use_query_cache" value="true"/> I am creating the entity manager as shown below: emf = Persistence.createEntityManagerFactory("pu"); em = emf.createEntityManager(); em = Persistence.createEntityManagerFactory("pu").createEntityManager(); Is there a nicer way to manage the entity manager resource instead of creating a new one every time, or a property I can set in persistence.xml? Remember, it's JPA. See the binding log below, produced every time: 15:35:15,527 INFO [AnnotationBinder] Binding entity from annotated class: * 15:35:15,527 INFO [QueryBinder] Binding Named query: * = * 15:35:15,527 INFO [QueryBinder] Binding Named query: * = * 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [EntityBinder] Bind entity com.* on table * 15:35:15,542 INFO [HibernateSearchEventListenerRegister] Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. 15:35:15,542 INFO [NamingHelper] JNDI InitialContext properties:{} 15:35:15,542 INFO [DatasourceConnectionProvider] Using datasource: 15:35:15,542 INFO [SettingsFactory] RDBMS: and Real Application Testing options 15:35:15,542 INFO [SettingsFactory] JDBC driver: Oracle JDBC driver, version: 9.2.0.1.0 15:35:15,542 INFO [Dialect] Using dialect: org.hibernate.dialect.Oracle10gDialect 15:35:15,542 INFO [TransactionFactoryFactory] Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory 15:35:15,542 INFO [TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 15:35:15,542 INFO [SettingsFactory] Automatic flush during beforeCompletion(): disabled 15:35:15,542 INFO [SettingsFactory] Automatic session close at end of transaction: disabled 15:35:15,542 INFO [SettingsFactory] JDBC batch size: 15 15:35:15,542 INFO [SettingsFactory] JDBC batch updates for versioned data: disabled 15:35:15,542 INFO [SettingsFactory] Scrollable result sets: enabled 15:35:15,542 INFO [SettingsFactory] JDBC3 getGeneratedKeys(): disabled 15:35:15,542 INFO [SettingsFactory] Connection release mode: auto 15:35:15,542 INFO [SettingsFactory] Default batch fetch size: 1 15:35:15,542 INFO [SettingsFactory] Generate SQL with comments: disabled 15:35:15,542 INFO [SettingsFactory] Order SQL updates by primary key: disabled 15:35:15,542 INFO [SettingsFactory] Order SQL inserts for batching: disabled 15:35:15,542 INFO [SettingsFactory] Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 15:35:15,542 INFO [ASTQueryTranslatorFactory] Using ASTQueryTranslatorFactory 15:35:15,542 INFO [SettingsFactory] Query language substitutions: {} 15:35:15,542 INFO [SettingsFactory] JPA-QL strict compliance: enabled 15:35:15,542 INFO [SettingsFactory] Second-level cache: enabled 15:35:15,542 INFO [SettingsFactory] 
Query cache: enabled 15:35:15,542 INFO [SettingsFactory] Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge 15:35:15,542 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider 15:35:15,542 INFO [SettingsFactory] Optimize cache for minimal puts: disabled 15:35:15,542 INFO [SettingsFactory] Structured second-level cache entries: disabled 15:35:15,542 INFO [SettingsFactory] Query cache factory: org.hibernate.cache.StandardQueryCacheFactory 15:35:15,542 INFO [SettingsFactory] Statistics: disabled 15:35:15,542 INFO [SettingsFactory] Deleted entity synthetic identifier rollback: disabled 15:35:15,542 INFO [SettingsFactory] Default entity-mode: pojo 15:35:15,542 INFO [SettingsFactory] Named query checking : enabled 15:35:15,542 INFO [SessionFactoryImpl] building session factory 15:35:15,542 INFO [SessionFactoryObjectFactory] Not binding factory to JNDI, no JNDI name configured 15:35:15,542 INFO [UpdateTimestampsCache] starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache 15:35:15,542 INFO [StandardQueryCache] starting query cache at region: org.hibernate.cache.StandardQueryCache
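    The expensive step is Persistence.createEntityManagerFactory, which is what triggers the annotation and named-query binding shown in the log; EntityManagers themselves are cheap. A minimal sketch of one common pattern, keeping a single factory for the application and opening a short-lived EntityManager per unit of work (the class and method names here are illustrative):

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public final class JpaUtil {

            // Built once per application: this is the call that does all the binding work.
            private static final EntityManagerFactory EMF =
                    Persistence.createEntityManagerFactory("pu");

            private JpaUtil() { }

            // Open a lightweight EntityManager per request/unit of work and close it when done.
            public static EntityManager newEntityManager() {
                return EMF.createEntityManager();
            }

            // Call once on application shutdown.
            public static void shutdown() {
                EMF.close();
            }
        }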

    Read the article

  • DAL, Session, Cache architecture

    - by subt13
    I'm attempting to create a Data Access Layer for my web application. Currently, all DataTables are stored in the session. When I am finished, the DAL will populate and return DataTables. Is it a good idea to store the returned DataTables in the session? A distributed/shared cache? Or just ping the database each time? Note: generally the number of rows in a DataTable will be small (< 2000). Thanks

    Read the article

  • Android drawing cache

    - by Seva Alekseyev
    Please explain how the drawing cache works in Android. I'm implementing a custom View subclass and I want my drawing to be cached by the system. In the View constructor, I call setDrawingCacheEnabled(true); Then in draw(Canvas c), I do: Bitmap cac = getDrawingCache(); if(cac != null) { c.drawBitmap(cac, 0, 0, new Paint()); return; } Yet getDrawingCache() returns null. My draw() is not called by either setDrawingCacheEnabled() or getDrawingCache(). Please, what am I doing wrong?
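    A hedged sketch of one workaround: the cache bitmap is produced by drawing the view, so asking for it from inside the draw pass tends to yield null (and would be circular). Building and reading the cache outside the draw pass, after the view has been laid out and drawn, usually works; the snapshot() method name below is just illustrative.

        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.util.AttributeSet;
        import android.view.View;

        public class CachedView extends View {

            public CachedView(Context context, AttributeSet attrs) {
                super(context, attrs);
                setDrawingCacheEnabled(true);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                // Normal drawing only; do not read the drawing cache from inside the draw pass.
            }

            // Call this after layout/drawing has happened (e.g. from a click handler).
            public Bitmap snapshot() {
                buildDrawingCache();               // populate the cache on demand
                Bitmap cache = getDrawingCache();
                return cache == null ? null : Bitmap.createBitmap(cache); // copy before the cache is recycled
            }
        }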

    Read the article

  • Squid parent cache for text/html only

    - by Salvador
    How do I configure Squid to request only text/html from the parent cache? Right now I am using: cache_peer 127.0.0.1 parent 8080 0 no-query no-digest On the other hand, I get a lot of direct requests that do not use the parent proxy: some requests are logged as FIRST_UP_PARENT and some as DIRECT. How do I tell Squid to always use the parent for text/html? By the way, it is a transparent proxy. I have tried: cache_peer 127.0.0.1 parent 8080 0 no-query no-digest acl elhtml req_mime_type -i ^text/html$ acl elhtml req_mime_type -i text/html cache_peer_access 127.0.0.1 allow elhtml cache_peer_access 127.0.0.1 deny all and it does not work. Thanks in advance for the help.

    Read the article

  • Removing an HTTP 301 redirect from a client's cache

    - by ChessWhiz
    Hi, I have a server/client architecture where the client hits the ASP.NET server's service at a certain host name, IP address, and port. Without thinking, I logged on to the server and set up a permanent HTTP 301 redirection through IIS from that service to another URL that the machine handles via IIS (same IP and port), mistakenly thinking it was another site hosted there. When the client hit the server at the old host name, it cached the permanent redirect. Now, even though I have removed the redirection, the client no longer uses the old address. How can I clear the client's cache so that it no longer stores the redirect? I have read about how permanent HTTP 301 can be, but in this case it should be possible to reset a single client's knowledge of the incorrectly-learned host name. Any ideas?

    Read the article

  • Ivy deletion of unwanted (older) artifacts from Ivy cache

    - by John M
    I have a local Artifactory repository in which I have two jars for commons-logging: one for version 1.0.4 and one for version 1.1.1. I'm experimenting with using Ivy to download the older one with an ant task (with the proper dependency tag in ivy.xml), and then I change the "rev" attribute of this dependency tag to 1.1.1. When using ivy:resolve in ant, this newer jar gets successfully downloaded to my cache, but the older one is not deleted automatically, and I'd like to make this happen. I can't figure out how to do so after looking at the Ivy documentation; does anyone know how to get Ivy to delete old versions of artifacts when newer ones are downloaded, either with the resolve task or something else?
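    As far as I know, Ivy does not prune superseded revisions from the cache on resolve. One blunt option is to clean the whole cache and let the next resolve repopulate it with only the revisions ivy.xml currently references; a sketch, assuming Ivy 2.x with its Ant tasks bound to the usual xmlns:ivy="antlib:org.apache.ivy.ant" namespace:

        <!-- build.xml fragment: drop the Ivy cache, then re-resolve current revisions only -->
        <target name="refresh-deps" description="Wipe the Ivy cache and re-resolve">
            <ivy:cleancache/>
            <ivy:resolve/>
        </target>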

    Read the article

  • fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files. I need the fastest possible performance from XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import works only with files). Alternatively, I can cache many instances of XslCompiledTransform, one per template. Is that reasonable? Are there other ways? Which is best? What other tips are there for improving the performance of the MS XSLT processor?

    Read the article

  • Page doesn't un-cache itself in ASP.NET C#

    - by waqasahmed
    Hi, I sometimes find that I need to press Ctrl+Refresh (or simply Refresh) in order for pages to be updated. I thought this might have been a problem with using an AJAX UpdatePanel and the like, but it also happens on pages where there is no AJAX partial rendering. I have also removed if (!IsPostBack), and yet I still need to refresh the page for the contents to be updated. Is it to do with the cache? Does anyone know of a fix for this? I believe it only happens with IE 7 (which I am using). I tried the same feature with Chrome, and it worked as it is supposed to.

    Read the article

  • Image not loading from cache after it's loaded in an iframe

    - by Amir
    I'm loading an image in an iframe and then (once the iframe is loaded) loading the image on the page. But most browsers seem to be loading the image twice. Why isn't the img tag being loaded from the cache? Something like this: var loader = $('<iframe />').appendTo('body')[0]; loader.onload = function() { $('body').append('<img src="' + imgsrc + '" />'); }; loader.src = imgsrc; http://jsfiddle.net/amirshim/na3UA/ I'm using Fiddler2 to see the network traffic. In case you want to know why I want to do this, check out this question.

    Read the article

  • Pre-cache site as user visits

    - by strager
    I am making a static site which is 'forced' to be cached via Cache-Control, etc. When a user visits my site, I want the browser to crawl my site, caching pages, so that when the user navigates to a page, the load is almost instant. (I do not need a recursive crawl, as that will probably happen as the user navigates between pages. I just need to crawl the links on the current page, and of course not re-cache a page which has already been cached.) (Also, I am not changing pages using Ajax-like techniques. These are essentially normal flat HTML files with normal links.) How can I do this pre-caching using JavaScript? (I am using jQuery.)

    Read the article

  • Storing objects in a cache using LINQ classes and Velocity

    - by Arun
    I created a couple of LINQ classes and marked the DataContext as unidirectional. Out of four classes, one is the main class while the other three have a one-to-many relationship with the first one. When I load an object of the main class and put it into memory OR serialize it into an XML file, I never get the child class data, even though it is marked with DataContractAttribute. How can I force the object to include the child class data in the XML file or in the cache?

    Read the article

  • write cache and write sequence order

    - by excanoe
    OK, here I have a weird question: let's say we have a binary file (.log) and a sequence of write operations, for example log1, log2, log3, each with some block size n (raw data). Question: can I be sure that the log1, log2 and log3 sequences will be written in the correct order in ONE file, even if I have several cache levels (disk hardware and OS level)? Update: I am very interested in what happens to the record order (not the records themselves) if we have a software or hardware failure (reboot or another reason). Update: there can be some percentage of write failures, but the main question is: will the write order stay correct?
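    For illustration, a minimal Java sketch of the usual belt-and-braces approach: if the order must survive crashes, force each record past the OS cache before writing the next one, rather than relying on the write caches to preserve ordering on their own. The class and method names are made up for the example.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;
        import java.util.List;

        public class OrderedLogWriter {

            // Appends log1, log2, log3... and forces each one to stable storage before the next.
            public static void append(Path logFile, List<byte[]> records) throws IOException {
                try (FileChannel ch = FileChannel.open(logFile,
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
                    for (byte[] record : records) {
                        ByteBuffer buf = ByteBuffer.wrap(record);
                        while (buf.hasRemaining()) {
                            ch.write(buf);
                        }
                        ch.force(false);   // flush file data (not metadata) past the OS cache before continuing
                    }
                }
            }
        }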

    Read the article

  • hibernate distributed 2nd level cache options

    - by ishmeister
    Not really a question, but I'm looking for comments/suggestions from anyone who has experience using one or more of the following: EhCache with RMI, EhCache with JGroups, EhCache with Terracotta, Gigaspaces Data Grid. A bit of background: our application is read-only for the most part, but there is some user data that is read-write and some that is only written (and can also be reasonably inaccurate). In addition, it would be nice to have tools that enable us to flush and fill the cache at intervals or by admin intervention. Regarding the first option - are there any concerns about the overhead of RMI and the performance of Java serialization?
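    For what it's worth, the first option is configured in ehcache.xml rather than in code. A rough sketch of RMI replication with automatic (multicast) peer discovery follows; the cache name, sizes, addresses and property values are placeholders, and the exact property strings should be checked against the EhCache RMI replication documentation.

        <!-- ehcache.xml fragment: RMI replication with multicast peer discovery -->
        <cacheManagerPeerProviderFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
            properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                        multicastGroupPort=4446, timeToLive=1"/>

        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="port=40001, socketTimeoutMillis=2000"/>

        <cache name="userData" maxElementsInMemory="10000" eternal="false" timeToLiveSeconds="600">
            <cacheEventListenerFactory
                class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
                properties="replicateAsynchronously=true"/>
        </cache>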

    Read the article

  • APC cache compression

    - by Massimo
    I want to store some key/value data. I see the memcache API supports on-the-fly compression: memcache_set( obj, var, value, MEMCACHE_COMPRESSED, ttl ) What about APC? I cannot find any documentation. My goal, for example in PHP: function cache( $key, $value, $ttl ) { $data = serialize( $value ); if ( strlen( $data ) >= 1024 ) $data = 'z' . gzcompress( $data, 1 ); else $data = '=' . $data; return apc_store( $key, $data, $ttl ); }

    Read the article

  • Cache an FTP connection for use via AJAX?

    - by Chad Johnson
    I'm working on a Ruby web application that uses the Net::FTP library. One part of it allows users to interact with an FTP site via AJAX. When the user does something, an AJAX call is made, and then Ruby reconnects to the FTP server, performs an action, and outputs information. Every time the AJAX call is made, Ruby has to reconnect to the FTP server, and that's slow. Is there a way I could cache this FTP connection? I've tried caching it in the session hash, but "We're sorry, but something went wrong" is displayed, and a TCP dump is output in my logs whenever I attempt to store it in the session hash. I haven't tried memcache yet. Any suggestions?

    Read the article

  • Strange "cache" effect between client and server

    - by mark
    I use a socket-based connection between client and server with ObjectOutputStream. The objects serialized and exchanged have this structure: public class RichiestaSalvataggioArticolo implements Serializable { private ArticoloDati articolo; public RichiestaSalvataggioArticolo(ArticoloDati articolo) { this.articolo = articolo; } @Override public void ricevi(GestoreRichieste gestore) throws Exception { gestore.interpreta(this); } public ArticoloDati getArticolo() { return articolo; } } The issue is that when I exchange messages between client and server whose encapsulated content is very similar (ArticoloDati instances that differ in only 2 fields out of 10), the client sends an ArticoloDati, but the server receives the previous one. Does ObjectOutputStream implement some kind of cache or memory between calls that fails to recognize that my two objects are different because they are so similar?
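    This is the usual suspect: ObjectOutputStream keeps a handle table of objects it has already written, so writing the same (reference-equal) instance again, or an object graph that still references an already-written ArticoloDati, goes out as a back-reference to the earlier data. A minimal sketch of the common fix is to reset the stream between messages; RequestSender is just an illustrative wrapper, not part of the code above.

        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;

        public class RequestSender {

            private final ObjectOutputStream out;

            public RequestSender(ObjectOutputStream out) {
                this.out = out;
            }

            // reset() clears the stream's handle table, so previously written instances
            // are serialized again with their current field values instead of being
            // replaced by a back-reference to the earlier copy.
            public void send(Serializable request) throws IOException {
                out.reset();
                out.writeObject(request);
                out.flush();
            }
        }

    writeUnshared() is a narrower alternative, but it only affects the top-level object, not nested references, so reset() is the more thorough option when reusing object instances.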

    Read the article

  • Hibernate + Spring + session + cache

    - by andromeda
    We are using Hibernate with Spring for our Java application. We have found that when one session updates something in the database, other sessions cannot see the update. For example, user1 gets the account balance from the database, then user2 increases the balance; if user1 gets the object again, he sees the account balance from before the update (it seems the session uses the value from its cache), but we want to get the updated object with the new account balance. User1 uses one session for all his activity, which is different from user2's session. Is there any configuration that forces it to get the updated object from the database, or any other help?
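    A small sketch of one way to force a fresh read for a specific object: refresh() re-selects the row and overwrites the copy held in the Session's first-level cache. The Account entity here is a hypothetical stand-in for the mapped class.

        import java.math.BigDecimal;

        import org.hibernate.Session;

        public class BalanceReader {

            /** Hypothetical mapped entity, stripped down for the example. */
            public static class Account {
                private Long id;
                private BigDecimal balance;
                public BigDecimal getBalance() { return balance; }
            }

            // Re-reads the row from the database instead of returning the possibly
            // stale copy cached in this Session since the first load.
            public BigDecimal currentBalance(Session session, Long accountId) {
                Account account = (Account) session.get(Account.class, accountId);
                session.refresh(account);
                return account.getBalance();
            }
        }

    Evicting the object (session.evict(account)) before re-loading it, or using shorter-lived sessions (session-per-request), are other common ways to avoid serving stale first-level-cache state.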

    Read the article
