Search Results

Search found 14653 results on 587 pages for 'disk cache'.


  • Any ideas for developing a RISC-processor-friendly string allocator?

    - by Richard Fabian
    I'm working on some tools to enable high-throughput data-oriented development, and one thing I don't have an immediate answer for is how to allocate strings quickly. RISC processors add a further implementation problem: the CPU doesn't like branching, which is what I'm trying to minimise or avoid. Cache coherence is also important on most CPUs, so that has to influence the design too. So, how would you go about reducing the overhead of a generic string allocator? Sometimes it's easier to solve a more explicit problem, so any ideas for string sizes of 5-30?
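
    For what it's worth, one common direction is a size-class pool: round each string up to one of a few fixed sizes and hand out slots from per-class slabs, so allocation is mostly arithmetic and the strings of one class stay contiguous. Below is a rough sketch of the bookkeeping in Python; all names are invented for illustration, and the cache-friendliness only materialises when the same layout is built over raw memory in a systems language.

        # Illustrative sketch only: a size-class pool for short strings (5-30 chars).
        class SmallStringPool:
            CLASSES = (8, 16, 32)                     # size classes covering 5-30 bytes

            def __init__(self, slots_per_class=1024):
                self.slots_per_class = slots_per_class
                self.slabs = {c: bytearray(c * slots_per_class) for c in self.CLASSES}
                self.next_slot = {c: 0 for c in self.CLASSES}
                self.free_list = {c: [] for c in self.CLASSES}

            def _size_class(self, n):
                # in C this would be branch-light bit math; a tiny loop is fine here
                for c in self.CLASSES:
                    if n <= c:
                        return c
                raise ValueError("string too long for this pool")

            def alloc(self, s):
                data = s.encode("utf-8")
                c = self._size_class(len(data))
                if self.free_list[c]:
                    slot = self.free_list[c].pop()
                else:
                    slot = self.next_slot[c]
                    if slot >= self.slots_per_class:
                        raise MemoryError("size class %d is full" % c)
                    self.next_slot[c] += 1
                offset = slot * c
                self.slabs[c][offset:offset + len(data)] = data
                return (c, slot, len(data))           # handle: (class, slot, length)

            def read(self, handle):
                c, slot, length = handle
                offset = slot * c
                return self.slabs[c][offset:offset + length].decode("utf-8")

            def free(self, handle):
                c, slot, _ = handle
                self.free_list[c].append(slot)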

    Read the article

  • Caching HTML output with PHP

    - by Mohamed Amine
    Hi! I would like to create a cache for the PHP pages on my site. I have found many solutions, but what I want is a script that can generate an HTML page from my database. For example: I have a page for categories which grabs all the categories from the DB, so the script should be able to generate an HTML page of the sort my-categories.html. Then if I choose a category I should get a my-x-category.html page, and so on and so forth for other categories and sub-categories. I can see that some web sites have URLs like www.the-web-site.com/the-page-ex.html even though they are dynamic. Thanks a lot for the help.

    Read the article

  • How to get Apache mod_cache to work with mod_wsgi (Django)?

    - by harmv
    I thought I'd speed up my Django projects by letting Apache do some caching for me. Unfortunately I see that Apache never caches my dynamic pages. Does mod_cache have problems with code served by mod_wsgi?

    My Apache config:

        <VirtualHost *:80>
            ServerName myserver.com
            CacheEnable mem /
            # for testing only
            CacheIgnoreQueryString On
            CacheIgnoreCacheControl On
            WSGIDaemonProcess aname processes=1 threads=25
            WSGIProcessGroup aname
            Alias /media/ /home/harm/projects/test/media/
            WSGIScriptAlias / /home/harm/projects/test/wsgi.py

    The response does have the correct caching headers:

        Content-Length    2647
        Content-Encoding  gzip
        Vary              Accept-Encoding
        Cache-Control     public, max-age=3600
        Keep-Alive        timeout=15, max=100
        Connection        Keep-Alive
        Content-Type      application/x-javascript

    Am I missing something?

    Read the article

  • How do you make your Java application memory efficient?

    - by Boune
    How do you optimize the heap size usage of an application that has a lot (millions) of long-lived objects? (A big cache, loading lots of records from a db.)

    - Use the right data type
    - Avoid java.lang.String to represent other data types
    - Avoid duplicated objects
    - Use enums if the values are known in advance
    - Use object pools
    - String.intern() (good idea?)
    - Load/keep only the objects you need

    I am looking for general programming or Java-specific answers. No funky compiler switches.

    Edit: optimize the memory representation of a POJO that can appear millions of times in the heap. Use cases:

    - Load a huge CSV file into memory (converted into POJOs)
    - Use Hibernate to retrieve millions of records from a database

    Summary of answers:

    - Use the flyweight pattern
    - Copy on write
    - Instead of loading 10M objects with 3 properties, is it more efficient to have 3 arrays (or another data structure) of size 10M? (Could be a pain to manipulate the data, but if you are really short on memory... see the sketch below.)
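
    On that last point, here is a rough sketch of the column-oriented layout using Python's array module, purely for illustration (the record fields are invented); in Java the equivalent would be parallel int[], double[] and byte[] arrays instead of one object per record:

        from array import array

        # Hypothetical record with three properties: id, price, flag.
        # One primitive array per property instead of one object per record.
        ids    = array('i')    # 32-bit signed ints
        prices = array('d')    # 64-bit floats
        flags  = array('b')    # signed bytes

        def add_record(record_id, price, flag):
            ids.append(record_id)
            prices.append(price)
            flags.append(flag)

        def get_record(i):
            # a "record" is just an index into the parallel arrays
            return ids[i], prices[i], flags[i]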

    Read the article

  • Silverlight caching HTTP POST which results in a 404

    - by Steav
    Situation: I am developing a Silverlight application which needs a component based on a local HTTP listener. The HTTP listener can't be required to be installed and running when the application starts, so the application does the following: a handshake via HTTP POST, and if the connection fails, a ClickOnce setup is opened to install the component. So far so good... now the problem is: if the HTTP POST for the handshake fails because the listener is not running, the POST is cached, and the connection attempts that follow once the service is running also fail, because the HTTP POST is still in the cache after the first try. NOTE: this is NOT a policy problem. I'm using SL4. PS: I already tried adding a random parameter to the URL like First try: Second try: didn't work :-(

    Read the article

  • Apache alias and .htaccess: trying to understand the configuration

    - by sushil bharwani
    On our local dev environment we had just one server, and to add far-future Expires and Cache-Control headers to static images we kept a .htaccess file in the root of the application; things worked fine. But on our prod environment we have multiple Apache servers with aliases pointing to a code base on a different server. In this case I am not sure where to keep the .htaccess file. Should I keep it in the code base or on the individual Apache servers? And how can I write the same rules that I have in the .htaccess file in the httpd.conf file?

    Read the article

  • Persisting sensitive data in asp.net, odd implementation

    - by rawsonstreet
    For reasons not in scope of this question, I have implemented a .NET project in an iframe which runs from a classic ASP page. The classic ASP site persisted a few sensitive values by hitting the db on each page. I have passed these variables as XML to the aspx page; now I need to make these values available on any page of this .NET site. I've looked into the Cache object, but we are on a web farm so I am not sure it would work. Is there a way I can instantiate an object in a base page class and have other pages inherit from the base page to access these values? What is the best way to persist these values? A few more points to consider: the site runs in https mode, I cannot use session variables, and I would like to avoid cookies if possible.

    Read the article

  • Image expire time

    - by Jens
    The Google Page Speed tool recommends that I set 'Expires' headers for images etc. But what is the most efficient way to set an Expires header for an image? I now redirect all image requests to an imagehandler.php using .htaccess:

        /* HTTP/1.1 404 Not Found, HTTP/1.1 400 Bad Request and content type detection stuff ... */
        header( "Content-Type: " . $content_type );
        header( "Cache-Control: public" );
        header( "Last-Modified: " . gmdate("D, d M Y H:i:s", filemtime($path)) . " GMT" );
        header( "Expires: " . date("r", time() + (60*60*24*30)) );
        readfile( $path );

    But of course this adds extra loading time for my images on the first request, and I was wondering if there is a better solution for this.

    Read the article

  • JavaScript not working on localhost

    - by Adam McMahon
    OK, so I'm lost here, frustrated and pulling my hair out. Plus probably about to be fired or take a pay cut. I moved files from a development server to my local machine. The files are consistent (checked with a diff tool) and all the dependencies are there. It works for the most part. The problem is that some of the JavaScript (not all) is just not working. We're using jQuery and a lot of plugins for it. I've checked with the Web Developer plugin in Firefox and all the JS files are loading. I've cleared the cache in both Firefox and Chrome multiple times, to no avail. The development server is a Windows server running WAMP. My local machine is running Ubuntu. Somebody tell me what I missed.

    Read the article

  • Most efficient method of generating PNG as HTTP response

    - by awj
    I've built an ASP.NET page whose output stream is a dynamically-generated PNG image containing only text on a transparent background. The text is based upon database IDs contained in the querystring. There will be a limited number of variations. Which one of the following would be the most efficient means of returning the image to the client?

    - Store each variation upon the first generation, and thenceforth retrieve this from the drive.
    - Simply generate the image each time.
    - Cache the output response based upon the querystring.

    Read the article

  • Drupal 6: moving localhost to server | clear cached data

    - by artmania
    Hi friends, I worked on localhost to build my Drupal site, then I moved it to the server. Steps:

    - Clear the cache tables (Site Configuration > Performance > Clear Cached Data button) on localhost
    - Export the db and then import it on the live server
    - Move files/folders to the live server
    - Edit settings.php to reflect the live server config

    Everything is working great until I click Clear Cached Data on the server. Then my custom theme, custom front page, etc. all mess up :( What can be the problem? I appreciate any help so much!! Thanks!

    !!SORTED!! I used a subtheme for Zen, named mgf. I had taken a backup of it as mgf- and somehow, after I clear cached data, Drupal links to this mgf- copy, which is old and doesn't have the latest stylings. I just removed this folder from themes, and it linked to the right, latest mgf.

    Read the article

  • require.js - How can I set a version on required modules as part of the URL?

    - by Ovesh
    I am using require.js to require JS modules in my application. I need a way to bust client cache on new JS modules, by way of a different requested URL. i.e., if the file hello/there.js has already been cached on the client, I can change the file name to force the browser to get the new file. In other words, for the module hello/there, I'd like require.js to request the url hello/there___v1234___.js (the file name can look different, it's just an example), according to a version string which is accessible on the client. What is the best way to achieve that?

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you'll lose a few hours of work. By enabling drive mirroring in Windows 7, you'll always have an up-to-date copy of your data.

    Windows 7's mirroring, which is only available in the Professional, Enterprise, and Ultimate editions, is a software implementation of RAID 1, which means that two or more disks hold exactly the same data. The files are constantly kept in sync, so that if one of the disks fails, you won't lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it's gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to "dynamic disks," which can only be read by modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information. You will need at least one empty disk to set up disk mirroring. We'll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go.

    Mirroring an Existing Drive

    Click on the Start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you've disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc

    The Disk Management window will appear. We've got a small disk, labeled OldData, that we want to mirror in a second disk of the same size. Note: the disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive.

    Right-click on the existing disk that you want to mirror and select Add Mirror…. Select the disk that you want to use to mirror the existing disk's data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk! The new disk will be marked as a mirror, and it will start copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives.

    Setting Up Two New Drives as Mirrored

    If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they're not, and you don't need the data on either of them, right-click and select Delete Volume….

    Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up; click Next. Click on the drives you want to hold the mirrored data and click Add (note that you can add any number of drives), then click Next. Assign it a drive letter that makes sense, and then click Next. You're limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks.

    And that's it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them.
    Conclusion

    While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.

    Read the article

  • HashMap key problems

    - by Peterdk
    I'm profiling some old Java code and it appears that my caching of values using a static HashMap and an access method does not work.

    Caching code (a bit abstracted):

        static HashMap<Key, Value> cache = new HashMap<Key, Value>();

        public static Value getValue(Key key) {
            System.out.println("cache size=" + cache.size());
            if (cache.containsKey(key)) {
                System.out.println("cache hit");
                return cache.get(key);
            } else {
                System.out.println("no cache hit");
                Value value = calcValue();
                cache.put(key, value);
                return value;
            }
        }

    Profiling code:

        for (int i = 0; i < 100; i++) {
            getValue(new Key());
        }

    Result output:

        cache size=0
        no cache hit
        (..)
        cache size=99
        no cache hit

    It looked like a standard error in Key's hashing code or equals code. However:

        new Key().hashCode() == new Key().hashCode()   // TRUE
        new Key().equals(new Key())                    // TRUE

    What's especially weird is that cache.put(key, value) just adds another value to the HashMap, instead of replacing the current one. So, I don't really get what's going on here. Am I doing something wrong?

    Read the article

  • Disk Search / Sort Algorithm

    - by AlgoMan
    Given a range of numbers, say 1 to 10,000, with the input in random order.

    Constraint: at any point only 1,000 numbers can be loaded into memory.
    Assumption: the numbers are unique.

    I propose the following efficient "when-required sort" algorithm. We write the numbers into files which are designated to hold a particular range of numbers: for example, file1 will hold 0-999, file2 will hold 1000-1999, and so on, in random order. If a particular number, say 2535, is being searched for, then we know that the number is in file3 (a binary search over the ranges finds the file). File3 is then loaded into memory and sorted using, say, quicksort (optimized to switch to insertion sort when the array is small), and we search for the number in this sorted array using binary search. When the search is done, we write the sorted file back. So in the long run all the numbers will be sorted. Please comment on this proposal.
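
    A rough sketch of the proposal in Python; the on-disk layout and function names are invented for illustration, and the bucket is picked by arithmetic (rather than a binary search over the ranges) since the ranges are uniform:

        import os
        from bisect import bisect_left

        BUCKET = 1000                              # numbers per file, matching the memory limit

        def bucket_path(n, base="buckets"):
            # file1 holds 0-999, file2 holds 1000-1999, and so on
            return os.path.join(base, "file%d" % (n // BUCKET + 1))

        def write_numbers(numbers, base="buckets"):
            os.makedirs(base, exist_ok=True)
            for n in numbers:
                with open(bucket_path(n, base), "a") as f:
                    f.write("%d\n" % n)

        def search(n, base="buckets"):
            # load only the one bucket that can contain n (at most 1,000 numbers)
            path = bucket_path(n, base)
            if not os.path.exists(path):
                return False
            with open(path) as f:
                nums = sorted(int(line) for line in f)
            i = bisect_left(nums, n)               # binary search in the sorted bucket
            found = i < len(nums) and nums[i] == n
            with open(path, "w") as f:             # write back sorted, so later searches skip the sort
                f.writelines("%d\n" % x for x in nums)
            return found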

    Read the article

  • Does Django cache url regex patterns somehow?

    - by Emre Sevinç
    I'm a Django newbie who needs help: even though I change some URLs in my urls.py, I keep on getting the same error message from Django.

    Here is the relevant line from my settings.py:

        ROOT_URLCONF = 'mydjango.urls'

    Here is my urls.py:

        from django.conf.urls.defaults import *

        # Uncomment the next two lines to enable the admin:
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('',
            # Example:
            # (r'^mydjango/', include('mydjango.foo.urls')),

            # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
            # to INSTALLED_APPS to enable admin documentation:
            #(r'^admin/doc/', include(django.contrib.admindocs.urls)),

            # (r'^polls/', include('mydjango.polls.urls')),
            (r'^$', 'mydjango.polls.views.homepage'),
            (r'^polls/$', 'mydjango.polls.views.index'),
            (r'^polls/(?P<poll_id>\d+)/$', 'mydjango.polls.views.detail'),
            (r'^polls/(?P<poll_id>\d+)/results/$', 'mydjango.polls.views.results'),
            (r'^polls/(?P<poll_id>\d+)/vote/$', 'mydjango.polls.views.vote'),
            (r'^polls/randomTest1/', 'mydjango.polls.views.randomTest1'),
            (r'^admin/', include(admin.site.urls)),
        )

    So I expect that whenever I visit http://mydjango.yafz.org/polls/randomTest1/ the mydjango.polls.views.randomTest1 function should run, because in my polls/views.py I have the relevant function:

        def randomTest1(request):
            # mainText = request.POST['mainText']
            return HttpResponse("Default random test")

    However I keep on getting the following error message:

        Page not found (404)
        Request Method: GET
        Request URL: http://mydjango.yafz.org/polls/randomTest1

        Using the URLconf defined in mydjango.urls, Django tried these URL patterns, in this order:
        1. ^$
        2. ^polls/$
        3. ^polls/(?P<poll_id>\d+)/$
        4. ^polls/(?P<poll_id>\d+)/results/$
        5. ^polls/(?P<poll_id>\d+)/vote/$
        6. ^admin/
        7. ^polls/randomTest/$

        The current URL, polls/randomTest1, didn't match any of these.

    I'm surprised, because again and again I check urls.py and there is no ^polls/randomTest/$ in it, but there is ^polls/randomTest1/. It seems like Django is somehow storing the previous contents of urls.py and I just don't know how to make my latest changes effective. Any ideas? Why do I keep on seeing some old version of the regexes when I try to load that page, even though I changed my urls.py?

    Read the article

  • Where does lucene .net cache the search results?

    - by Lanceomagnifico
    Hi, I'm trying to figure out where Lucene stores its cached query results, how it's configured to do so, and how long it caches for. This is for an ASP.NET 3.5 solution. I'm getting this problem: if I run a search and sort the result by a particular product field, it seems to work the very first time each search-and-sort combination is used. If I then go in and change some product attributes, reindex, and run the same search and sort, I get the products returned in the same order as the very first result.

    Example:

        Product A is named: foo
        Product B is named: bar

    For the first search, sort by name descending. This results in:

        Product A
        Product B

    Now mix up the data a bit. Change the names to:

        Product A named: bar
        Product B named: foo

    Reindex, verify that the index contains the changes for these two products, and search again. Result:

        Product A
        Product B

    Since I changed the alphabetical order of the names, I expected:

        Product B
        Product A

    So I think that Lucene is caching the search results (which, by the way, is a very good thing). I just need to know where/how to clear these results. I've tried deleting the index files and doing an iisreset to clear the memory, but it seems to have no effect. So I'm thinking there is another set of Lucene files outside of the indexes that Lucene uses for caching.

    EDIT: I just found out that the field you wish to sort on must be indexed un-tokenized. I had the field as tokenized, so sorting didn't work.

    Read the article

  • Disable cache in Silverlight HttpWebRequest

    - by synergetic
    My Silverlight 4 app is hosted in an ASP.NET MVC 2 web application. I make web requests through the HttpWebRequest class, but it gives back a previously cached result. How do I disable this caching behavior? There are some links which talk about HttpWebRequest in .NET, but the Silverlight HttpWebRequest is different. Someone suggested adding a unique dummy query string to every web request, but I'd prefer a more elegant solution. I also tried the following, but it didn't work:

        _myHttpWebRequest.BeginGetRequestStream(new AsyncCallback(BeginRequest), new Guid());

    In fact, by changing the browser history settings it is possible to disable caching; see the following link: http://stackoverflow.com/questions/3027145/asp-net-mvc-with-sql-server-backend-returns-old-data-when-query-is-executed But asking a user to change browser settings is not an option for me.

    Read the article

  • StringTemplate: Loading a Template from disk?

    - by Amitabh
    I am using StringTemplate in C# and the following code to load a template from a subdirectory of my application:

        StringTemplateGroup group = new StringTemplateGroup("myGroup", "/tmp");
        StringTemplate query = group.GetInstanceOf("Sample");
        query.SetAttribute("column", "name");
        Console.WriteLine(query);

    I have a template file Sample.st in the tmp directory of my application. I am getting the following error:

        Unhandled Exception: System.ArgumentException: Can't find template Sample.st; group hierarchy is [myGroup]

    Does anyone know what is wrong here?

    Read the article

  • flushing database cache in SWI-Prolog

    - by JPro
    We are using SWI-Prolog to run our test cases. Whenever a test starts, I open a connection to the MySQL database, store the name of the test that is being run, and then close the DB. These tests run for about 2 days continuously. After the tests are done, the results get stored in a folder on the server. There is a predicate in another Prolog file that is called to update the results in the MySQL database. The code is simple: I use the ODBC library and just call odbc_* predicates to connect and update MySQL by issuing direct queries. The actual problem is: if I try to call the predicate from the same Prolog window where the test just completed, I get an error when updating the DB server, although I do not get any error on the connection. If I close that Prolog session with halt, close all the open Prolog windows, and then open a completely new instance of Prolog and run the predicate, the update goes well. I have a feeling that there is some connection reference to the MySQL DB held in the Prolog database. Is there any way to clear the database in Prolog so that I can run the same predicate without closing any existing Prolog windows? Any ideas appreciated. Thanks.

    Read the article

  • Cache the result of a MySQLdb database query in memory

    - by ensnare
    Our application fetches the correct database server from a pool of database servers. So each query is really 2 queries, and they look like this:

    1. Fetch the correct DB server
    2. Execute the query

    We do this so we can take DB servers online and offline as necessary, as well as for load-balancing. But the first query seems like it could be cached in memory, so it only actually queries the database every 5 or 10 minutes or so. What's the best way to do this? Thanks.
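
    A minimal sketch of one way to do this, assuming a hypothetical lookup_db_server() function that performs query #1 against the pool: keep the result in a small time-to-live cache and only re-run the lookup once the entry is older than the TTL. Note that in a multi-process deployment each worker process holds its own copy, so a process can be up to one TTL behind when servers change.

        import time

        class TTLCache:
            """Cache a single computed value (e.g. the current DB server) for ttl seconds."""

            def __init__(self, fetch, ttl=300):
                self.fetch = fetch          # callable that does the real pool query
                self.ttl = ttl
                self.value = None
                self.fetched_at = 0.0

            def get(self):
                now = time.time()
                if self.value is None or now - self.fetched_at > self.ttl:
                    self.value = self.fetch()
                    self.fetched_at = now
                return self.value

        # usage (lookup_db_server is a hypothetical function that runs query #1):
        # db_server = TTLCache(lookup_db_server, ttl=600)
        # conn_params = db_server.get()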

    Read the article

  • Telerik RadAlert back button cache problem

    - by Michael VS
    Hi, I'm sure you have had this one before, so if you could point me to something similar... I have a server-side creation of a RadAlert window using the usual Sys.Application.remove_load and add_load procedure; however, the alert keeps popping up, as it seems to be cached when the user hits the back button after it has been activated. I have tried putting an onclick event on a button to clear the function using remove_load before it moves to the next page, but it still doesn't seem to clear it. It's used in validation, so if a user's input fails validation it pops up. If they then enter input that passes validation, it moves on to the next page. If they then use the back button, this is where it pops up again. Any ideas?

    Server side:

        private void Page_Load(object sender, System.EventArgs e)
        {
            if (!IsPostBack)
            {
                btnSearch.Attributes.Add("onclick", "Sys.Application.remove_load(f);");
            }
        }

        private void btnSearch_Click(object sender, System.EventArgs e)
        {
            string radalertscript = "(function(){var f = function(){radalert('Welcome to RadWindow Prometheus!', 330, 210); Sys.Application.remove_load(f);};Sys.Application.add_load(f);})()";
            RadAjaxManager1.ResponseScripts.Add(radalertscript);
        }

    I've also tried calling RadAjaxManager1.ResponseScripts.Clear(); before it moves on to the next page in the postback event.

    Read the article

  • Why does Perl's readdir() cache directory entries?

    - by Frank Straetz
    For some reason Perl keeps caching the directory entries I'm trying to read using readdir:

        opendir(SNIPPETS, $dir_snippets); # or die...
        while ( my $snippet = readdir(SNIPPETS) ) {
            print ">>>" . $snippet . "\n";
        }
        closedir(SNIPPETS);

    Since my directory contains two files, test.pl and test.man, I'm expecting the following output:

        .
        ..
        test.pl
        test.man

    Unfortunately Perl returns a lot of files that have since vanished, for example because I tried to rename them. After I move test.pl to test.yeah, Perl will return the following list:

        .
        ..
        test.pl
        test.yeah
        test.man

    What's the reason for this strange behaviour? The documentation for opendir, readdir and closedir doesn't mention any sort of caching mechanism. "ls -l" clearly lists only two files.

    Read the article
