Search Results

Search found 7960 results on 319 pages for 'distributed cache'.


  • Google Chrome && (cache || memory leaks).

    - by Alexey Ogarkov
    Hello all, I have a big problem with Google Chrome and its memory. My app displays several image charts to the user and reloads them every 10 s. In the interval I have code like this:

        var image = new Image();
        var src = 'myurl/image' + new Date().getTime();
        image.onload = function() {
            document.getElementById('myimage').src = src;
            image.onload = image.onabort = image.onerror = null;
        };
        image.src = src;

    I have no memory leaks in Firefox or IE. Here are the response headers for the images:

        Server: Apache-Coyote/1.1
        Vary: *
        Cache-Control: no-store   (I have also tried no-cache, must-revalidate and so on here)
        Content-Type: image/png
        Content-Length: 11131
        Date: Mon, 31 May 2010 14:00:28 GMT

    (The Vary: * header was taken from here.) The about:cache page shows none of my cached images. Enabling the purge-memory button for Chrome (the --purge-memory-button parameter) does not help. The images are PNG-24, so I think the problem is not the cache; maybe Google Chrome is not releasing the memory for old images. Please help. Any suggestions? Thanks.

    Read the article

  • Experience with MooseFS?

    - by brown.2179
    Anyone have any experience using MooseFS? I want an easy distributed storage platform to store a static data archive of about 10 TB and serve it to 20-40 nodes. I also want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow; I just want it to be simple and stable. Basically, from what I can see, for OS X it's between MooseFS and Gluster. Any other suggestions?

    Read the article

  • Oracle Fusion Distributed Order Orchestration

    Designed from the ground up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Fusion Applications are 100 percent open standards-based business applications that set a new standard for the way we innovate, work and adopt technology. Delivered as a complete suite of modular applications, Fusion Applications work with your existing portfolio to evolve your business to a new level of performance. In this AppCast, part of a special series on Fusion Applications, you will learn how Oracle Fusion Distributed Order Orchestration can help companies improve customer service, reduce fulfillment costs, and optimize fulfillment decision making. Supporting a strategy for improving operational efficiency and boosting customer satisfaction, Fusion Distributed Order Orchestration alleviates or tempers critical production challenges many organizations face today by consolidating order information into a central location. You'll also discover how Fusion Distributed Order Orchestration works with your existing order management solutions.

    Read the article

  • Distributed cache and improvement

    - by philipl
    I have this question from an interview. A web service function is given x, and there is a static HashMap map (created as a singleton):

        if (!map.containsKey(x)) {
            // perform some function to retrieve result y
            map.put(x, y);
        }
        return y;

    The interviewer asked general questions such as what is wrong with this distributed cache implementation, then asked how to improve on it, given that the distributed servers will have different cached key pairs in their maps. There are simple mistakes to point out about synchronization and the key object, but what really startled me was that he thinks moving to a database implementation solves the problem that different servers have different map content, i.e. the situation where value x is cached on server B but not on server A, so server A has to retrieve the data redundantly. Does his thinking make any sense? (As I understand it, this is the basic trade-off of a distributed cache versus a database model; it seems he does not understand it at all.) What is the typical solution for the cache growth issue (weak references?) and for the sync issue (not knowing which server already has the key cached; use load balancing?)? Thanks
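
    A minimal sketch of the single-JVM part of the fix, in Java since the question uses HashMap; the class and method names (LookupService, retrieve) are hypothetical. It replaces the unsynchronized singleton map with ConcurrentHashMap.computeIfAbsent, which addresses the synchronization mistake mentioned above. It does not address the cross-server duplication the interviewer raised; that typically needs a shared cache tier (memcached, Redis and the like) or key-affinity routing so that each key is served by a predictable node.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class LookupService {
            // Replaces the unsynchronized static HashMap from the question.
            private static final ConcurrentMap<String, Object> MAP = new ConcurrentHashMap<>();

            public Object lookup(String x) {
                // Atomic per key: concurrent requests for the same x will not both
                // run the expensive retrieval, and no explicit locking is needed.
                return MAP.computeIfAbsent(x, key -> retrieve(key));
            }

            private Object retrieve(String key) {
                // Stand-in for the "perform some function to retrieve result y" step.
                return "y-for-" + key;
            }
        }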

    Read the article

  • Tuning Distributed Applications to Access Big Data

    Distributed applications are just that: distributed across one or more hardware platforms throughout the enterprise. The database administrator (DBA) has the unenviable task of monitoring these environments and configuring and tuning the database server to meet multiple needs. When multiple distributed applications require access to a very large data store, what tuning options are available to help?

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data sources

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. The requirements are:

    - Each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
    - Other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of the flashcard files they want to subscribe to; the URLs and imported flashcards are saved in IsolatedStorage.
    - The flashcards.txt file has the following simple format: a title, then blocks of question/answer pairs:

        Jim Smith's flashcards from English class 53-222, winter semester 2009
        ==fla
        Das kann nicht sein.
        That can't be.
        ==fla
        Es sei denn, er kommt nicht.
        Unless he doesn't come.

    The user then makes the URL to his flashcard file public and other readers begin reading in his flashcards. To lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, publish it and distribute the URL. The flashcard readers will then recognize that it is a Google document and perform the necessary screen scraping to get at the raw text. I have two technical questions about this approach:

    1. What is the best way to plan now for scalability issues? E.g. if your reader is subscribed to 10 flashcard files that are each 200 K, it has to download 2 MB of text just to find out whether any new flashcards are available. Or can I somehow accurately and consistently get at the last-update date/time of text files on servers and of published Google docs?

    2. Each reader will let the person test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this metadata, e.g. so that when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard; the best approach seems to be URL + position-in-file, or better, URL + the original text of both question and answer fields, but both of these have obvious drawbacks.

    The main requirement is that the bar for participation is kept as low as possible: type text into a Google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions to compensate for the lack of a database, lack of unique IDs, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips can you share on the above two points?

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof of concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available to a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached; however, that particular one doesn't fulfil all our requirements, which are listed below:

    - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1 GB max), so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients, when a node goes down.
    - Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    - There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    - Price: obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24-hour support contract is a must.
    - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider them if no other options were available.
    - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.
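
    As one illustration only (the post asks for ideas, not an endorsement): an in-memory data grid such as Hazelcast is the kind of product that at least advertises the items above: a Java API, nodes that can join a running cluster, and a configurable backup count, which maps to the "degree of data overlapping" requirement. A rough sketch of what the single writer node might look like; the map name "frontend-data", the class name and the payload type are assumptions, and exact package names vary between Hazelcast versions.

        import java.util.Map;

        import com.hazelcast.config.Config;
        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.HazelcastInstance;

        public class DsmWriterNode {
            public static void main(String[] args) {
                Config config = new Config();
                // "Degree of data overlapping": keep 2 backup copies of every entry.
                config.getMapConfig("frontend-data").setBackupCount(2);

                HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
                Map<String, byte[]> cache = hz.getMap("frontend-data");

                // The single (redundant) writer churns the external input and publishes
                // results; frontends read the same named map from their own members.
                cache.put("latest-result", new byte[0]);
            }
        }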

    Read the article

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting to use a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen much that's compelling in terms of managing software releases. Here's our release process:

    1. Discover what changes are available for merging.
    2. Run a query to find the defects/tickets associated with these changes.
    3. Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state in order to be merged with a release branch.
    4. Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes. If a change isn't absolutely necessary, it doesn't get merged.
    5. Merge available changes, preferably in chronological order. We group changes together if they're associated with the same ticket.
    6. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again.

    Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with git, mercurial and plastic, and as far as I can tell none of them addresses this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them for juggling multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in mercurial (you have to use the 'transplant' command). After you cherry-pick a change into a branch it still shows up as an available integration. Cherry-picking breaks the mercurial way of working. A DVCS seems to be better suited to feature branches. There's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? And how do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos. I'm torn, because the coder in me wants a DVCS for day-to-day work. I really want it. But I fear the day when I have to put on the release manager hat and sort out what needs to be merged and what doesn't. I want to write code; I don't want to be a merge monkey.

    Read the article

  • How to invalidate a single data item in the .net cache in VB

    - by Craig
    I have the following .NET VB code to set and read objects in the cache on a per-user basis (i.e. a bit like session):

        Public Shared Sub CacheSet(ByVal Key As String, ByVal Value As Object)
            Dim userID As String = HttpContext.Current.User.Identity.Name
            HttpContext.Current.Cache(Key & "_" & userID) = Value
        End Sub

        Public Shared Function CacheGet(ByVal Key As Object)
            Dim returnData As Object = Nothing
            Dim userID As String = HttpContext.Current.User.Identity.Name
            returnData = HttpContext.Current.Cache(Key & "_" & userID)
            Return returnData
        End Function

    I use these functions to hold user data for which I don't want to hit the DB all the time. However, when the data is updated, I want the cached item to be removed so it gets created again. How do I make an item I set disappear, or set it to Nothing/null? Craig

    Read the article

  • Ajax cache control

    - by Brian
    Hello, I am having a problem with Ajax requests in Internet Explorer and in Chrome: I cannot bust the cache. Normal pages don't have the problem; it's just the Ajax requests. I know that one workaround is to append a random query string variable to the end of the URL. However, I don't want to lose all the benefits of caching; I just want the browser to pick up the new file if the version on the server is different from the cached version. I have tried manually setting the Ajax POST header, to no avail:

        xmlHttp.setRequestHeader("Cache-Control", "must-revalidate");

    Adding this to my .htaccess file doesn't work either:

        <FilesMatch "\.(js|css).*">
            Header set Cache-Control: "max-age=172800, public, must-revalidate"
        </FilesMatch>

    Any help would be greatly appreciated. Thanks, Brian

    Read the article

  • AppFabric Cache errors

    - by Joseph
    The AppFabric Cache in our production environment crashes almost every day and is highly unstable. The errors below are logged:

        Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode:SubStatus:There is a temporary failure. Please retry later. (Sufficient secondaries not present or they are in throttled state.)

        Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode:SubStatus:There is a temporary failure. Please retry later. (The request did not find the primary.)

        AppFabric Caching service crashed.{Lease with external store expired: Microsoft.Fabric.Federation.ExternalRingStateStoreException: Lease already expired
            at Microsoft.Fabric.Data.ExternalStoreAuthority.UpdateNode(NodeInfo nodeInfo, TimeSpan timeout)
            at Microsoft.Fabric.Federation.SiteNode.PerformExternalRingStateStoreOperations(Boolean& canFormRing, Boolean isInsert, Boolean isJoining)}

    Could someone please give me some input? This is an HA-enabled cache environment with 3 cache hosts. All of them are running Windows Server 2008 Enterprise Edition, and SQL Server is used for the configuration store.

    Read the article

  • Full page reload on Post/Redirect/Get ignoring cache control

    - by Kristof Neirynck
    I have a page that loads a lot of images, CSS and JavaScript. I've added a far-future Expires header and set Cache-Control to public on these external dependencies, so they should be cached. But every time I do a Post/Redirect/Get, Chrome tries to load them all again. This behavior is very similar to reloading the page. I've added ETags and handle the If-None-Match header, which helps a bit, but it still generates too many useless requests. How do I tell Chrome and Safari to get the files from the cache? Chrome: not OK. Safari: not OK. Firefox: OK. IE: OK. Also see "Full page reload on Post/Redirect/Get ignoring cache control" on the Google support forum.

    Read the article

  • HTML5 Cache manifest file itself is not cached, and called at each resource load

    - by Mic
    We have a web app that runs on the iPhone. The manifest file is OK, and the resources (HTML, CSS, JS) are cached correctly. The page sits on the home screen. The trouble is, when the page loads a resource from the cache, there is also a GET call to the server to read the cache manifest file. The server is configured to send the correct header (max-age=31536000; public, etc.) and caches all other files well, except the cache manifest itself. Is this normal behavior? There seems to be a slight lag, because of that call, for each resource load. Any idea whether these multiple calls can get a 304 status or, even better, be avoided?

    Read the article

  • Make page to tell browser not to cache/preserve input values

    - by queen3
    Most browsers cache form input values, so when the user refreshes a page, the inputs keep the same values. Here's my problem: when the user clicks Save, the server validates the POSTed data (e.g. checked products) and, if it is not valid, sends it back to the browser. However, as stated above, even if the server clears the selection for some values, they may still be selected because of the browser cache! My data has checkboxes that are invisible until the parent item is selected, so the user may not even be aware that some previous value is still selected, until he clicks Save again and gets an error message, even though he thinks nothing is selected. Which is irritating. This can be worked around with Ctrl-F5, but that's not really a solution. Is there an automatic/programmatic way to tell the browser not to cache form input data on a given form/page?

    Read the article

  • cache and web-farm

    - by user285336
    I need to deploy my web application on a web farm. The application has the following code:

        public static X509Certificate2 GetIdCertificate()
        {
            string cacheKey = "Neogov.Insight.IdentityProvider.PrivateKey";
            if (HttpContext.Current.Cache[cacheKey] == null)
            {
                // Load new.
                HttpContext.Current.Cache[cacheKey] = new X509Certificate2(
                    System.Web.HttpContext.Current.Server.MapPath("~/") + "\\ID\\" + Neogov.Insight.IdentityProvider.BLL.IdConfig.Instance.IdPKeyFile,
                    Neogov.Insight.IdentityProvider.BLL.IdConfig.Instance.IdPKeyPassword,
                    X509KeyStorageFlags.MachineKeySet);
            }
            return (X509Certificate2)HttpContext.Current.Cache[cacheKey];
        }

    Will it work or not? If not, how do I solve it, and what is the solution? Thanks

    Read the article

  • How to use Zend Cache with SimpleXML objects?

    - by Jeremy Hicks
    I'm trying to cache the user timeline of a Twitter feed using Zend_Service_Twitter, which returns its results as a SimpleXML object. Unfortunately the regular serialize functions (which Zend Cache uses) don't play nice with SimpleXML objects. I found this: http://www.mail-archive.com/[email protected]/msg18133.html. So it looks like I'll need to create some kind of custom frontend for Zend Cache to be able to change the serialize function used. Has anybody done this already, or can you point me to where to start looking?

    Read the article

  • How to remove flash cache on a web site periodically

    - by user320166
    I'm using a rotating Flash banner on my website which takes images and descriptions from an XML file. I change my XML very often, but on my local machine the banner takes a day or two to get updated. Although I can clear my local machine's cache, the problem remains for other users who visit my web page. Is there a programmatic way in Flash or in HTML to overcome this problem? Maybe a server configuration? Please help me with this. PS: the code below works fine, but it clears out the cache completely; I need the XML cache to expire after a specific time period.

        var timestamp:Date = new Date();
        xmlData.load("/flash/images.xml?cachebuster=" + timestamp.getTime());

    Read the article

  • storing huge amount of records into classic asp cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the Application cache object. But that's not the killer: before storing them, it converts the ADO stream to XML and then stores that. This conversion of the huge record set to XML spikes the CPU and causes all kinds of havoc for users while it's happening. Unfortunately we also do this XML conversion whenever we read the cache, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out. I obviously need to use caching, but in this case the caching is hurting instead of helping. Is there a more efficient way to store this data instead of doing this XML conversion to/from every time we read/update the cache?

    Read the article

  • C read part of file into cache

    - by Pete Jodo
    I have to write a program (for Linux) where there's an extremely large index file and I have to search and interpret the data from the file. The catch is that I'm only allowed to have x bytes of the file cached at any time (determined by an argument), so I have to remove certain data from the cache if it's not what I'm looking for. If my understanding is correct, fopen(r) doesn't put anything in the cache; only when I call getc or fread (specifying a size) does the data get cached. So my question is: let's say I use fread and read 100 bytes, but after checking them, only 20 of the 100 bytes contain the data I need; how would I remove the useless 80 bytes from the cache (or overwrite them) in order to read more from the file?

    Read the article

  • Force a page cache of Ajax content

    - by Webnet
    I have a search page where the results are loaded via Ajax. It lists products on a page and you can click to view each product. I'd like to change this page so that after you view a product, if you click "back" in your browser it loads the cached results instead of forcing the user to search again. How can I achieve this? I currently have

        header('Cache-Control: private, max-age:3600');
        header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T', time() + 3600));

    and it doesn't load from the cache.

    Read the article

  • mod_cache serving the wrong content

    - by J. Pablo Fernández
    I'm trying to use mod_disk_cache to speed up a web site running on WordPress. Whenever I enable it with CacheEnable disk /, with the rest being the stock Ubuntu configuration, I start to get the wrong results. The main page is fine, but when I go to a specific post I get an RSS feed instead, as if the cache is returning the wrong content. I've disabled my RewriteRules, as it seems mod_cache doesn't work with them. I'm not even sure where to start debugging such a thing. Any ideas?

    Read the article

  • Changing frontend cache

    - by Utsav
    Our architecture consists of a front-end cache from which most read-only users obtain their data directly. The front-end cache sits in front of a farm of web servers that serve pages written in PHP. We need to be able to detect certain conditions at the front-end cache level and pass those values through to the back end via HTTP headers. For example, we would like to manually tag the carrier network based on the IP address. So, for incoming traffic, if the user is coming from an IP address in the range 41.202.192.0/19, we would tag them as an Orange Cameroon user by setting the appropriate HTTP request header, e.g. X-Carrier = "Orange Cameroon". Based on this header we would like to vary the cache and serve a different banner to the end user. How would you go about doing this? Keep in mind that we don't want to pollute the cache, and we also don't want to create too many small cache segments. Assumptions: you can assume that X-Carrier has already been detected in our cache, so for the purposes of your test you can just set this value manually in your example script.

    Read the article

  • JBossCacheService: exception occurred in cache put error occurred after changing cache mode to REPL_SYNC

    - by logoin
    Hi, we have a horizontal cluster set up on JBoss 4.2. Session replication worked fine until we changed the cache mode from REPL_ASYNC to REPL_SYNC to fix an issue. We started to see warnings for some session failovers:

        [org.jboss.web.tomcat.service.session.InstantSnapshotManager.ROOT] Failed to replicate session java.lang.RuntimeException
        bc [local7.warning] JBossCacheService: exception occurred in cache put ...
        org.jboss.web.tomcat.service.session.JBossCacheWrapper.put(JBossCacheWrapper.java:147)
        org.jboss.web.tomcat.service.session.JBossCacheService.putSession(JBossCacheService.java:315)
        org.jboss.web.tomcat.service.session.JBossCacheClusteredSession.processSessionRepl(JBossCacheClusteredSession.java:125)

    Does anyone have any idea why this happens and how to fix it if we still want to use REPL_SYNC? Any help is appreciated. Thanks!

    Read the article

  • How can I use System.Web.Caching.Cache in a Console application?

    - by Ron Klein
    Context: .NET 3.5, C#. I'd like to have a caching mechanism in my console application. Instead of reinventing the wheel, I'd like to use System.Web.Caching.Cache (and that's a final decision; I can't use another caching framework, don't ask why). However, it looks like System.Web.Caching.Cache is supposed to run only in a valid HTTP context. My very simple snippet looks like this:

        using System;
        using System.Web.Caching;
        using System.Web;

        Cache c = new Cache();
        try
        {
            c.Insert("a", 123);
        }
        catch (Exception ex)
        {
            Console.WriteLine("cannot insert to cache, exception:");
            Console.WriteLine(ex);
        }

    and the result is:

        cannot insert to cache, exception:
        System.NullReferenceException: Object reference not set to an instance of an object.
            at System.Web.Caching.Cache.Insert(String key, Object value)
            at MyClass.RunSnippet()

    So obviously I'm doing something wrong here. Any ideas? Update: +1 to most answers; getting the cache via static methods is the correct usage, namely HttpRuntime.Cache and HttpContext.Current.Cache. Thank you all!

    Read the article
