Search Results

Search found 6528 results on 262 pages for 'appfabric cache'.

Page 44/262 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far, though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;
        myThread() { print(sharedVar); }
        main() {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above).

    Question: Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)
    - Shared Memory Consistency Models: A Tutorial (hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf)
    - Memory Barriers: a Hardware View for Software Hackers (rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf)
    - Linux Kernel Memory Barriers (kernel.org/doc/Documentation/memory-barriers.txt)
    - Computer Architecture: A Quantitative Approach (amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk)
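    The question as posed is about hardware guarantees, but it is worth noting that high-level languages pin this down at the language level. A minimal Java sketch of the same scenario (names invented for illustration): the Java memory model specifies that everything a thread does before calling Thread.start() is visible to the started thread, which is exactly the guarantee being asked about.

        public class StartVisibility {
            static int sharedVar = 0; // deliberately not volatile

            public static void main(String[] args) {
                sharedVar = 1; // this write happens-before the start() call below
                // JLS 17.4.5: a call to start() happens-before every action in the started
                // thread, so the new thread is guaranteed to observe sharedVar == 1.
                new Thread(() -> System.out.println(sharedVar)).start();
            }
        }

    Whether a given C or C++ environment makes the same promise depends on the threading API's documented memory-synchronization semantics (POSIX, for instance, lists pthread_create() among the functions that synchronize memory), so the hardware-level question above still matters in the general case.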


  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation:
    - .NET 3.5 WinForms client app accessing SQL Server 2008
    - Some queries returning a relatively large amount of data are used quite often by a form
    - Users are using local SQL Express and restarting their machines at least daily
    - Other users are working remotely over slow network connections
    The problem is that after a restart, the first time users open this form the queries are extremely slow and take roughly 15s to execute even on a fast machine. Afterwards the same queries take only 3s. Of course this comes from the fact that no data is cached and must be loaded from disk first. My question: would it be possible to force the required data to be loaded into the SQL Server cache in advance? Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries are already cached and execute quickly. However, I don't want to pull the query results over to the client, as some users are working remotely or otherwise have slow networks. So I thought of executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other ideas?
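    One hedged sketch of the background warm-up idea, written here in Java/JDBC purely for illustration (the connection string and the list of queries are assumptions, and the asker's app is .NET): wrapping each expensive query in a COUNT(*) makes SQL Server read the relevant pages into its buffer cache while only a single row ever crosses the network, which sidesteps the slow-connection concern without needing temp tables.

        import java.sql.*;
        import java.util.List;
        import java.util.concurrent.Executors;

        public class CacheWarmer {
            public static void warmUp(String connectionString, List<String> expensiveQueries) {
                Executors.newSingleThreadExecutor().submit(() -> {
                    try (Connection con = DriverManager.getConnection(connectionString);
                         Statement st = con.createStatement()) {
                        for (String q : expensiveQueries) {
                            // COUNT(*) over the original query touches the data server-side
                            // but returns just one row to the client.
                            try (ResultSet rs = st.executeQuery(
                                    "SELECT COUNT(*) FROM (" + q + ") AS warm")) {
                                rs.next();
                            }
                        }
                    } catch (SQLException e) {
                        // warm-up is best-effort; failures can simply be logged
                    }
                });
            }
        }

    Queries that take parameters or end in ORDER BY would need slight adjustment before being wrapped this way.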


  • Rate Limit Calls To Api Using Cache

    - by namtax
    Hi, I am using ColdFusion to call the last.fm API, using a cfc bundle sourced from here. I am concerned about going over the request limit, which is 5 requests per originating IP address per second, averaged over a 5-minute period. The cfc bundle has a central component which calls all the other components, which are split up into sections like "artist", "track", etc. This central component, "lastFmApi.cfc", is initiated in my application and persisted for the lifespan of the application:

        // Application.cfc example
        <cffunction name="onApplicationStart">
            <cfset var apiKey = '[your api key here]' />
            <cfset var apiSecret = '[your api secret here]' />
            <cfset application.lastFm = CreateObject('component', 'org.FrankFusion.lastFm.lastFmApi').init(apiKey, apiSecret) />
        </cffunction>

    Now if I want to call the API through a handler/controller, for example my artist handler, I can do this:

        <cffunction name="artistPage" cache="5 mins">
            <cfset qAlbums = application.lastFm.user.getArtist(url.artistName) />
        </cffunction>

    I am a bit confused about caching. I am caching each call to the API in this handler for 5 minutes, but does this make any difference? Each time someone hits a new artist page, won't this still count as a fresh hit against the API? Wondering how best to tackle this. Thanks
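    The per-handler cache only helps for repeat visits to the same artist; a new artist page is still a fresh API call. What usually works is a single application-scoped throttle that every outbound API call passes through. A rough sketch of such a limiter in Java (class and method names are made up for illustration): 5 requests/second averaged over 5 minutes amounts to 1500 requests per rolling 5-minute window.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Application-scoped: one instance shared by every call to the API.
        public class ApiThrottle {
            private static final int WINDOW_MS = 5 * 60 * 1000;   // 5-minute averaging window
            private static final int MAX_IN_WINDOW = 5 * 60 * 5;  // 5 req/s averaged -> 1500
            private final Deque<Long> stamps = new ArrayDeque<>();

            // Returns true if the call may proceed, false if it should wait or be served from cache.
            public synchronized boolean tryAcquire() {
                long now = System.currentTimeMillis();
                while (!stamps.isEmpty() && now - stamps.peekFirst() > WINDOW_MS) {
                    stamps.pollFirst();                            // drop timestamps outside the window
                }
                if (stamps.size() >= MAX_IN_WINDOW) {
                    return false;
                }
                stamps.addLast(now);
                return true;
            }
        }

    Combined with the 5-minute page cache (which still cuts repeat traffic), checking tryAcquire() before each outbound request keeps the aggregate rate under the limit no matter how many distinct artists are requested.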


  • Radio buttons being reset in FF on cache-refresh

    - by Andrew Song
    (This is technically an addendum to an earlier StackOverflow question I had posted, but my original post asked a different question which doesn't really cover this topic -- I don't want to edit my older question as I feel this is different enough to merit its own page) While browsing my website in Firefox 3.5 (and only FF3.5), I come across a page with two radio buttons that have the following HTML code: <input id="check1" type="radio" value="True" name="check" checked="checked"/> <input id="check2" type="radio" value="False" name="check"/> This page renders as expected, with 'check1' checked and 'check2' unchecked. When I then go to refresh the page by pressing Control + R, the two radio buttons render, but they are both unchecked even though the raw HTML code is the same (as above). If I do a cache-miss refresh (via Control + F5 or Control + Shift + R), the page returns back to the way you'd expect it. This is not a problem in any other browser I've tried except FF3.5. What is causing these radio buttons to be reset on a normal refresh? How can I avoid this?


  • Gem Load Error about whois command and removed cache

    - by Puru puru rin..
    Hello, I am having serious trouble with RubyGems. After executing this command:

        rm -f /usr/local/lib/ruby/gems/1.9.1/cache/*

    I cannot do anything. If I try, for instance, gem cleanup, I get this kind of answer:

        /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `require': no such file to load -- rubygems/commands/whois (LoadError)
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `<top (required)>'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `require'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `<top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `load'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `block in <top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `each'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `<top (required)>'
            from <internal:gem_prelude>:235:in `require'
            from <internal:gem_prelude>:235:in `load_full_rubygems_library'
            from <internal:gem_prelude>:334:in `const_missing'
            from /usr/local/bin/gem:12:in `<main>'

    It's the same for gem -v, or just the gem command by itself. I'm working on Snow Leopard. What do you think the best solution would be? Thanks a lot!


  • Database cache that I'm not aware of?

    - by Martin
    I'm using ASP.NET MVC, LINQ to SQL, IIS7 and SQL Server Express 2008. I get these intermittent server errors, primary key conflicts on insertion. I'm using a different setup on my development computer so I can't debug. After a while they go away. Restarting IIS helps. I'm getting the feeling there is a cache somewhere that I'm not aware of. Can somebody help me sort out these errors?

        Cannot insert duplicate key row in object 'dbo.EnquiryType' with unique index 'IX_EnquiryType'.

    Edits regarding Venemo's answer:
    - Is it possible that another application is also accessing the same database simultaneously? Yes there is, but not this particular table, and no inserts or updates. There is one other table with which I experience the same problem, but it has to do with a different part of the model.
    - How often and in what context do you create a new DataContext instance? Only once, using the singleton pattern.
    - Are the primary keys generated by the database or by the application? Database.
    - Which version of ASP.NET MVC and which version of .NET are you using? RC2 and 3.5.


  • Clear tableView cell cache (or remove an entry)

    - by ManniAT
    Hi, I have the same problem as described here: http://stackoverflow.com/questions/2286669/iphone-how-to-purge-a-cached-uitableviewcell. But my problem can't be solved with "resetting content". To be precise, I use a custom cell (my own class). While running the application it is possible that I have to use a different "cell type". It's the same class, but it has (a lot of) different attributes. Of course I could always reset all of these in "prepareForReuse", but I guess that's not a good idea (there are a lot of things to reset). My idea: I do all these things in the constructor of the cell, and all the rows use this "type of cell" later. When the (seldom) situation comes that I have to change the look of all rows, I create a new instance of this kind of cell with different settings, and now I want to replace the queued cell with this new one. I tried simply calling the constructor with the same cell identifier (in the hope it would replace the existing one), but that doesn't work. I also didn't find a "ClearReusableCells" or anything like it. Is there a way to clear the cache, or to remove/replace a specific item? Manfred


  • Cache layer for MVC - Model or controller?

    - by Industrial
    Hi everyone, I am having some second thoughts about where to implement the caching part. Where do you think is the most appropriate place to implement it: inside every model, or in the controller?

    Approach 1 (pseudo-code):

        // mycontroller.php
        MyController extends Controller_class {
            function index () {
                $data = $this->model->getData();
                echo $data;
            }
        }

        // myModel.php
        MyModel extends Model_Class {
            function getData() {
                $data = memcached->get('data');
                if (!$data) { $query->SQL_QUERY("Do query!"); }
                return $data;
            }
        }

    Approach 2:

        // mycontroller.php
        MyController extends Controller_class {
            function index () {
                $dataArray = $this->memcached->getMulti('data','data2');
                foreach ($dataArray as $key) {
                    if (!$key) {
                        $data = $this->model->getData();
                        $this->memcached->set($key, $data);
                    }
                }
                echo $data;
            }
        }

        // myModel.php
        MyModel extends Model_Class {
            function getData() {
                $query->SQL_QUERY("Do query!");
                return $data;
            }
        }

    Thoughts:
    - Approach 1: no multi-get/multi-set, so if a high number of keys is needed there is extra overhead; easier to maintain, since all database/cache handling lives in each model.
    - Approach 2: better performance-wise, since multi-set/multi-get is used; more code required; harder to maintain.
    Tell me what you think!
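    For what it's worth, a common way to get the best of both is to keep caching in the model layer but expose batch lookups, so the controller stays cache-agnostic and multi-get is still possible. A rough Java sketch of that shape (all names here are invented for illustration, and the cache client is assumed to offer bulk get/set the way memcached clients typically do):

        import java.util.*;

        interface KeyValueCache {                          // stand-in for a memcached client
            Map<String, Object> getBulk(Collection<String> keys);
            void set(String key, Object value);
        }

        class ProductModel {
            private final KeyValueCache cache;
            ProductModel(KeyValueCache cache) { this.cache = cache; }

            // Batch, cache-aside lookup: one multi-get, then one DB query per miss.
            Map<String, Object> getData(List<String> keys) {
                Map<String, Object> found = new HashMap<>(cache.getBulk(keys));
                for (String key : keys) {
                    if (!found.containsKey(key)) {
                        Object fresh = queryDatabase(key);  // hypothetical DB call
                        cache.set(key, fresh);
                        found.put(key, fresh);
                    }
                }
                return found;                               // the controller just renders this
            }

            private Object queryDatabase(String key) { return "row for " + key; }
        }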


  • How to cache code in PHP?

    - by Janis Peisenieks
    I am creating a custom form building system, which includes various tokens. These tokens are found using regular expressions and, depending on the type of token, parsed. Some require simple replacement, some require loops, and so forth. Now I know that regular expressions are quite resource- and time-consuming, so I would like to parse the code for the form once, generate PHP code from it, and then save that PHP code for later use. How would I go about doing this? So far I have only seen output caching. Is there a way to cache commands like echo and loops like foreach()? Because of misunderstandings, I'll create an example.

    Unparsed template data:

        Thank You for Your interest, [*Title*] [*Firstname*] [*Lastname*].
        Here are the details of Your order!
        [*KeyValuePairs*]
        Here is the link to Your request: [*LinkToRequest*].

    Parsed template:

        Thank You for Your interest, <?php echo $data->title;?> <?php echo $data->firstname;?> <?php echo $data->lastname;?>.
        Here are the details of Your order!
        <?php foreach($data->values as $key=>$value){ echo $key."-".$value; }?>
        Here is the link to Your request: <?php echo $data->linkToRequest;?>.

    I would then save the parsed template and, instead of parsing the template every time, just pass the $data variable to the already parsed one, which would generate the output.
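    The general trick, independent of PHP, is to run the regular expressions once, keep the compiled result (either generated source on disk, as described above, or an in-memory list of segments), and render from that on every subsequent request. A small Java sketch of the in-memory variant, with all names invented for illustration:

        import java.util.*;
        import java.util.regex.*;

        class CompiledTemplate {
            private static final Pattern TOKEN = Pattern.compile("\\[\\*(\\w+)\\*\\]");
            private final List<String> literals = new ArrayList<>();
            private final List<String> fields = new ArrayList<>();  // fields.get(i) follows literals.get(i), null at the tail

            // Expensive step: run the regex once and remember the segments.
            CompiledTemplate(String source) {
                Matcher m = TOKEN.matcher(source);
                int last = 0;
                while (m.find()) {
                    literals.add(source.substring(last, m.start()));
                    fields.add(m.group(1));
                    last = m.end();
                }
                literals.add(source.substring(last));
                fields.add(null);
            }

            // Cheap step: executed on every request, no regex involved.
            String render(Map<String, String> data) {
                StringBuilder out = new StringBuilder();
                for (int i = 0; i < literals.size(); i++) {
                    out.append(literals.get(i));
                    if (fields.get(i) != null) out.append(data.getOrDefault(fields.get(i), ""));
                }
                return out.toString();
            }
        }

    Compiled templates can then be held in a map keyed by form id (or serialized to disk), which plays the same role as the saved PHP file in the question.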


  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on for a connected node graph. The nodes are classed into two types, "networks" and "routers." In the picture (not reproduced here), the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice-versa. Routers cannot connect directly to other routers, and networks cannot connect directly to other networks:
    - Networks list which routers they're connected to
    - Routers do the same
    I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, however it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get the path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient? As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Paths which are not possible can also occur, and at any time the network/router graph topology can change, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
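    Since every hop has the same cost (one network crossed), a breadth-first search from each source network gives shortest paths without the exponential blow-up of the brute-force DFS; for 55 networks and about 20 routers that is effectively instant, so the whole cache can simply be rebuilt whenever the topology changes. A sketch in Java, with the graph represented as adjacency lists between network and router ids (all names assumed for illustration):

        import java.util.*;

        class PathCache {
            // network id -> routers attached to it, and router id -> networks attached to it
            Map<Integer, List<Integer>> networkToRouters = new HashMap<>();
            Map<Integer, List<Integer>> routerToNetworks = new HashMap<>();

            // BFS over "network -> router -> network" hops; returns a predecessor map
            // from which the path from 'source' to any reachable network can be read back.
            Map<Integer, Integer> bfsFrom(int source) {
                Map<Integer, Integer> previous = new HashMap<>();
                Deque<Integer> queue = new ArrayDeque<>();
                previous.put(source, source);
                queue.add(source);
                while (!queue.isEmpty()) {
                    int network = queue.poll();
                    for (int router : networkToRouters.getOrDefault(network, List.of())) {
                        for (int next : routerToNetworks.getOrDefault(router, List.of())) {
                            if (!previous.containsKey(next)) {   // first visit = fewest networks crossed
                                previous.put(next, network);
                                queue.add(next);
                            }
                        }
                    }
                }
                return previous;                                  // unreachable networks simply aren't present
            }

            // Rebuild the whole cache: one BFS per source network.
            Map<Integer, Map<Integer, Integer>> rebuild() {
                Map<Integer, Map<Integer, Integer>> cache = new HashMap<>();
                for (int network : networkToRouters.keySet()) cache.put(network, bfsFrom(network));
                return cache;
            }
        }

    Reconstructing an actual route is just walking the predecessor map backwards from the destination; impossible paths show up as missing keys.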


  • Ideal HTTP cache control headers for different types of resources

    - by chris_l
    I want to find a minimal set of headers that work with "all" caches and browsers (also when using HTTPS!). On my (GWT-based) web site, I'll have three kinds of resources:
    1. Forever cacheable (public / equal for all users): these files don't ever change, and they get a filename based on the MD5 of their contents (this is GWT's approach). They should get cached as much as possible, even when using HTTPS (so I assume I should set Cache-Control: public, especially for Firefox?).
    2. Changing for every new version of the site (public / equal for all users): these files can be cached, but probably need to be revalidated every time.
    3. Individual for each request (private / user specific): these resources (e.g. JSON responses) should never be cached unencrypted to disk under any circumstances. (Maybe I'll have a few specific requests that could be cached.)
    I have a general idea of which headers I would probably use for each type, but there's always something I could be missing.
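    As a starting point (hedged: the exact max-age values and the servlet-style wrapper below are assumptions, not something taken from the question), one common mapping of those three resource classes to Cache-Control headers looks like this:

        import javax.servlet.http.HttpServletResponse;

        final class CacheHeaders {
            enum Kind { IMMUTABLE, PER_RELEASE, PRIVATE }

            static void apply(HttpServletResponse resp, Kind kind) {
                switch (kind) {
                    case IMMUTABLE:    // MD5-named GWT artifacts: cache aggressively, even over HTTPS
                        resp.setHeader("Cache-Control", "public, max-age=31536000");
                        break;
                    case PER_RELEASE:  // e.g. GWT's .nocache.js bootstrap file: always revalidate
                        resp.setHeader("Cache-Control", "no-cache, must-revalidate");
                        break;
                    case PRIVATE:      // user-specific JSON: never stored in a shared or disk cache
                        resp.setHeader("Cache-Control", "private, no-store");
                        break;
                }
            }
        }

    For the second class, an ETag or Last-Modified header is what actually makes the revalidation cheap (a 304 instead of a full response).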


  • How to force client browser to download images from server rather than using its cache

    - by anonim.developer
    Assume a simple aspx data entry page in which an admin user can upload an image as well as some other data. They are stored in the database, and the next time the admin visits that page to edit the record, the image data is fetched, a preview is generated and saved to disk (using GDI+), and the preview is shown in an image control. This procedure works fine the first time; however, if the image changes (a new one is uploaded), the next time the page is visited it shows the previously uploaded image. I debugged the application and everything works correctly: the new image data is in the database and the new preview is stored in the temp location, yet the page shows the previous one. If I refresh the page it shows the new image preview. I should mention that the preview is always saved to disk under the same name (the id of each record as the name). I think this is because IE and other browsers use the client cache instead of loading images each time a page is visited. I wonder if there is a way to force the client browser to refresh itself so the newly uploaded image is shown without user intervention. Thanks and appreciation in advance,
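    Because the preview always keeps the same filename, the browser has no reason to re-request it. The usual fix is to make the URL change whenever the content changes, for example by appending a version or timestamp query parameter; the browser then treats it as a new resource, while old versions age out of the cache harmlessly. A tiny illustration in Java (using the preview file's last-modified time as the version is just one possible choice):

        import java.io.File;

        final class ImageUrls {
            // e.g. "/previews/42.png" -> "/previews/42.png?v=1718031562000"
            static String withCacheBuster(String baseUrl, File previewFile) {
                return baseUrl + "?v=" + previewFile.lastModified();
            }
        }

    Embedding a content hash in the filename itself achieves the same effect without query strings.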


  • How to use Android's CacheManager?

    - by punnie
    I'm currently developing an Android application that fetches images using HTTP requests. It would be quite swell if I could cache those images in order to improve performance and bandwidth use. I came across the CacheManager class in the Android reference, but I don't really know how to use it or what it really does. I already looked through this example, but I need some help understanding it: /core/java/android/webkit/gears/ApacheHttpRequestAndroid.java. Also, the reference states: "Network requests are provided to this component and if they can not be resolved by the cache, the HTTP headers are attached, as appropriate, to the request for revalidation of content." I'm not sure what this means or how it would work for me, since CacheManager's getCacheFile accepts only a String URL and a Map containing the headers. Not sure what the attachment mentioned means. An explanation or a simple code example would really make my day. Thanks! Update: here's what I have right now. I am clearly doing it wrong, I just don't know where.

        public static Bitmap getRemoteImage(String imageUrl) {
            URL aURL = null;
            URLConnection conn = null;
            Bitmap bmp = null;
            CacheResult cache_result = CacheManager.getCacheFile(imageUrl, new HashMap());
            if (cache_result == null) {
                try {
                    aURL = new URL(imageUrl);
                    conn = aURL.openConnection();
                    conn.connect();
                    InputStream is = conn.getInputStream();
                    cache_result = new CacheManager.CacheResult();
                    copyStream(is, cache_result.getOutputStream());
                    CacheManager.saveCacheFile(imageUrl, cache_result);
                } catch (Exception e) {
                    return null;
                }
            }
            bmp = BitmapFactory.decodeStream(cache_result.getInputStream());
            return bmp;
        }


  • ASP.NET CacheDependency out of ThreadPool

    - by Stephen
    In an async HTTP handler, we add items to the ASP.NET cache with dependencies on some files. If the async method executes on a thread from the ThreadPool, all is fine:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoProcessRequest), result);

    But as soon as we try to execute on a thread outside the ThreadPool:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        Runner runner = new Runner(result);
        Thread thread = new Thread(new ThreadStart(runner.Run));

    ... where Runner.Run just invokes DoProcessRequest, the dependencies trigger right after the thread exits. I.e. the items are immediately removed from the cache, the reason being the dependencies. We want to use an out-of-pool thread because the processing might take a long time, so obviously something's missing when we create the thread. We might need to propagate the call context, the HTTP context... Has anybody already encountered this issue? Note: off-the-shelf custom thread pools probably solve this. Writing our own thread pool is probably a bad idea (think NIH syndrome). Yet I'd like to understand this in detail.


  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                      <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.
    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message
    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers
    The pseudocode of this approach would be as follows:

        for each message in message_sequence                      <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                          <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.
    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater
    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)
    The pseudocode is as follows:

        parallel_for each message in message_sequence             <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose, and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
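    One middle ground worth mentioning: parallelize across messages (the second approach) but route every message sharing a key (its type, or its source) to the same single-threaded worker. That keeps ordering deterministic per key while still using all cores across keys. A compact Java sketch of the idea, with the key choice and pool size being assumptions:

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.function.Consumer;

        class KeyedDispatcher<M> {
            private final ExecutorService[] lanes;

            KeyedDispatcher(int laneCount) {
                lanes = new ExecutorService[laneCount];
                for (int i = 0; i < laneCount; i++) lanes[i] = Executors.newSingleThreadExecutor();
            }

            // Messages sharing a key are handled FIFO on one lane; different keys run in parallel.
            void dispatch(Object key, M message, List<Consumer<M>> handlers) {
                ExecutorService lane = lanes[Math.floorMod(key.hashCode(), lanes.length)];
                lane.submit(() -> {
                    for (Consumer<M> handler : handlers) handler.accept(message); // sequential per message
                });
            }
        }

    In TBB terms the same shape falls out of a pipeline or of concurrent queues feeding per-key serial stages; the sketch above is only meant to show where the ordering guarantee comes from.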


  • How do I stop js files being cached in IE?

    - by DoctaJonez
    Hello stackers! I've created a page that uses the CKEditor javascript rich edit control. It's a pretty neat control, especially seeing as it's free, but I'm having serious issues with the way it allows you to add templates. To add a template you need to modify the templates js file in the CKEditor templates folder. The documentation page describing it is here. This works fine until I want to update a template or add a new one (or anything else that requires me to modify the js file). Internet Explorer caches the js file and doesn't pick up the update. Emptying the cache allows the update to be picked up, but this isn't an acceptable solution. Whenever I update a template I do not want to tell all of the users across the organisation to empty their IE cache. There must be a better way! Is there a way to stop IE caching the js file? Or is there another solution to this problem?


  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple other tables that define what the question text is for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, its working... The problem is that it seems slow and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out what respondents are being exported and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table, RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!


  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application that is running on the site. If we put a few static file maps (for .js, .css, .png, etc.) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching integrated mode in terms of performance? The thing I'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow the output cache to handle non-ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically I've found that the two configurations operate on a par when things go well, but if the page references resources that are not available, integrated mode tends to fail much more quickly than classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.


  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. I have seemingly everything set up; cachefilesd starts normally, but caching isn't functioning. Here is the output of the commands, run in this order:

        1. sudo mount
           ... cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc) ...
        2. lsmod | grep cachefiles
           cachefiles 40555 1
           fscache 57430 4 nfs,cifs,cachefiles,nfsv4
        3. [edited - deleted]
        4. uname -r
           3.8.0-34-generic
        5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic
           CONFIG_NFS_FSCACHE=y
        6. lsb_release -a
           No LSB modules are available.
           Distributor ID: Ubuntu
           Description: Ubuntu 13.04
           Release: 13.04
           Codename: raring
        7. sudo service cachefilesd restart
           * Restarting FilesCache daemon cachefilesd [ OK ]
        8. dmesg
           [6211206.141781] FS-Cache: Withdrawing cache "mycache"
           [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
           [6211210.135242] CacheFiles: File cache on sdb1 registered
           [6214644.348929] CacheFiles: File cache on sdb1 unregistering
           [6214644.348935] FS-Cache: Withdrawing cache "mycache"
           [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
           [6214654.575915] CacheFiles: File cache on sdb1 registered
        9. ps aux | grep cachefilesd
           root 65399 0.0 0.0 4460 540 ? Ss 23:14 0:00 /sbin/cachefilesd
           1000 65464 0.0 0.0 8160 916 pts/0 S+ 23:16 0:00 grep --color=auto cachefilesd

    Finally, the biggest problem:

        10. cat /proc/fs/nfsfs/volumes
            NV SERVER   PORT DEV  FSID             FSC
            v3 64476645 801  0:24 233e020f0da07a93 no

    tl;dr: I think I configured everything properly, but the fsc mount option + cachefilesd don't seem to work.


  • Chef: nested data bag data to template file returns "can't convert String into Integer"

    - by Dalho Park
    I'm creating a simple test recipe with a template and data bag. What I'm trying to do is create a config file from a data bag that has simple nested information, but I receive the error "can't convert String into Integer". Here are my files.

    1) recipe/default.rb:

        data1 = data_bag_item( 'mytest', 'qa' )['test']
        data2 = data_bag_item( 'mytest', 'qa' )

        template "/opt/env/test.cfg" do
          source "test.erb"
          action :create_if_missing
          mode 0664
          owner "root"
          group "root"
          variables({
            :pepe1 => data1['part.name'],
            :pepe2 => data2['transport.tcp.ip2']
          })
        end

    2) My data bag named "mytest":

        $ knife data bag show mytest qa
        id:   qa
        test:
          part.name:          L12
          transport.tcp.ip:   111.111.111.111
          transport.tcp.port: 9199
          transport.tcp.ip2:  222.222.222.222

    3) Template file test.erb:

        part.name=<%= @pepe1 %>
        transport.tcp.binding=<%= @pepe2 %>

    The error is returned when I run chef-client on my server:

        [2013-06-24T19:50:38+00:00] DEBUG: filtered backtrace of compile error:
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]',
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file',
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'
        [2013-06-24T19:50:38+00:00] DEBUG: backtrace entry for compile error: '/var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]''
        [2013-06-24T19:50:38+00:00] DEBUG: Line number of compile error: '19'

        Recipe Compile Error in /var/chef/cache/cookbooks/config_test/recipes/default.rb

        TypeError
        can't convert String into Integer

        Cookbook Trace:
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'

        Relevant File Content:
        /var/chef/cache/cookbooks/config_test/recipes/default.rb:
        12: template "/opt/env/test.cfg" do
        13:   source "test.erb"
        14:   action :create_if_missing
        15:   mode 0664
        16:   owner "root"
        17:   group "root"
        18:   variables({
        19:     :pepe1 => data1['part.name'],
        20:     :pepe2 => data2['transport.tcp.ip2']
        21:   })
        22: end
        23:

    I have tried many things, and if I comment out ":pepe1 => data1['part.name']," then ":pepe2 => data2['transport.tcp.ip2']" works fine. Only the nested data "part.name" cannot be set into @pepe1. Does anyone know why I receive this error? Thanks,


  • body onload cache not clearing

    - by Mad Cow
    I'm using an image swapping function generated by Dreamweaver to allow an image to change when moused over. The images are small. I have a problem because the images are getting stored in cache and without clearing it out I cant get the new images to show. It works on some browsers, but unfortunately not on all... I've read about putting "a random query" into the javascript to force the page to reload, but I dont know where to put it (the code was generated for me by dreamweaver). A subset of my code is : <script type="text/javascript"> function MM_preloadImages() { //v3.0 var d=document; if(d.images){ if(!d.MM_p) d.MM_p=new Array(); var i,j=d.MM_p.length,a=MM_preloadImages.arguments; for(i=0; i<a.length; i++) if (a[i].indexOf("#")!=0){ d.MM_p[j]=new Image; d.MM_p[j++].src=a[i];}} } function MM_swapImgRestore() { //v3.0 var i,x,a=document.MM_sr; for(i=0;a&&i<a.length&&(x=a[i])&&x.oSrc;i++) x.src=x.oSrc; } function MM_findObj(n, d) { //v4.01 var p,i,x; if(!d) d=document; if((p=n.indexOf("?"))>0&&parent.frames.length) { d=parent.frames[n.substring(p+1)].document; n=n.substring(0,p);} if(!(x=d[n])&&d.all) x=d.all[n]; for (i=0;!x&&i<d.forms.length;i++) x=d.forms[i][n]; for(i=0;!x&&d.layers&&i<d.layers.length;i++) x=MM_findObj(n,d.layers[i].document); if(!x && d.getElementById) x=d.getElementById(n); return x; } function MM_swapImage() { //v3.0 var i,j=0,x,a=MM_swapImage.arguments; document.MM_sr=new Array; for(i=0;i<(a.length-2);i+=3) if ((x=MM_findObj(a[i]))!=null){document.MM_sr[j++]=x; if(!x.oSrc) x.oSrc=x.src; x.src=a[i+2];} } </script> </head> <body onload="MM_preloadImages('../images/navigation/social-about-us-over.jpg','../images/navigation/social-about-us.jpg','../images/navigation/social-activities-over.jpg','../images/navigation/social-ourservices-over.jpg','../images/navigation/social-howwework-over.jpg','../images/navigation/social-fundraising-over.jpg','../images/navigation/social-howtohelp-over.jpg','../images/navigation/social-contactus-over.jpg')"> My website is http://www.clockhouse.org.uk/ I'm sure there is a better way i could have written this, but if anyone can help me fix this code I'd be very grateful Many thanks


  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class), which has zero or one parent Category and many child Categories -- it's a tree structure. The Category data is stored in a RDBMS, so for better performance, I want to load all categories and cache them in memory while launching the applicaiton. Our system can have plugins, and we allow the plugin authors to access the Category Tree, but they should not modify the cached items and the tree(I think a non-readonly design might cause some subtle bugs in this senario), only the system knows when and how to refresh the tree. Here are some demo codes: public interface ITreeNode<T> where T : ITreeNode<T> { // No setter T Parent { get; } IEnumerable<T> ChildNodes { get; } } // This class is generated by O/R Mapping tool (e.g. Entity Framework) public class Category : EntityObject { public string Name { get; set; } } // Because Category is not stateless, so I create a cleaner view class for Category. // And this class is the Node Type of the Category Tree public class CategoryView : ITreeNode<CategoryView> { public string Name { get; private set; } #region ITreeNode Memebers public CategoryView Parent { get; private set; } private List<CategoryView> _childNodes; public IEnumerable<CategoryView> ChildNodes { return _childNodes; } #endregion public static CategoryView CreateFrom(Category category) { // here I can set the CategoryView.Name property } } So far so good. However, I want to make ITreeNode interface reuseable, and for some other types, the tree should not be readonly. We are not able to do this with the above readonly ITreeNode, so I want the ITreeNode to be like this: public interface ITreeNode<T> { // has setter T Parent { get; set; } // use ICollection<T> instead of IEnumerable<T> ICollection<T> ChildNodes { get; } } But if we make the ITreeNode writable, then we cannot make the Category Tree readonly, it's not good. So I think if we can do like this: public interface ITreeNode<T> { T Parent { get; } IEnumerable<T> ChildNodes { get; } } public interface IWritableTreeNode<T> : ITreeNode<T> { new T Parent { get; set; } new ICollection<T> ChildNodes { get; } } Is this good or bad? Are there some better designs? Thanks a lot! :)


  • Having trouble with JPA, looks to be reading from cache when told to ignore

    - by jeff
    i'm using jpa and eclipselink. it says version 2.0.2.v20100323-r6872 for eclipselink. it's an SE application, small. local persistence unit. i am using a postgres database. i have some code the wakes up once per second and does a jpa query on a table. it's polling to see whether some fields on a given record change. but when i update a field in sql, it keeps showing the same value each time i query it. and this is after waiting, creating a new entitymanager, and giving a query hint. when i query the table from sql or python, i can see the changed field. but from within my jpa query, once i've read the value, it never changes in later executions of the query -- even though the entity manager has been recreated. i've tried telling it to not look in the cache. but it seems to ignore that. very puzzling, can you help please? here is the method. i call this once every 3 seconds. the tgt.getTrlDlrs() println shows me i get the same value for this field on every call of the method ("value won't change"). even if i change the field in the database. when i query the record from outside java, i see the change right away. also, if i stop the java program and restart it, i see the new value printed out immediately: public void exitChk(EntityManagerFactory emf, Main_1 mn){ // this will be a list of the trade close targets that are active List<Pairs01Tradeclosetgt> mn_res = new ArrayList<Pairs01Tradeclosetgt>(); //this is a list of the place number in the array list above //of positions to close ArrayList<Integer> idxsToCl = new ArrayList<Integer>(); EntityManager em = emf.createEntityManager(); em.getTransaction().begin(); em.clear(); String qryTxt = "SELECT p FROM Pairs01Tradeclosetgt p WHERE p.isClosed = :isClosed AND p.isFilled = :isFilled"; Query qMn = em.createQuery(qryTxt); qMn.setHint(QueryHints.CACHE_USAGE, CacheUsage.DoNotCheckCache); qMn.setParameter("isClosed", false); qMn.setParameter("isFilled", true); mn_res = (List<Pairs01Tradeclosetgt>) qMn.getResultList(); // are there any trade close targets we need to check for closing? 
if (mn_res!=null){ //if yes, see whether they've hit their target for (int i=0;i<mn_res.size();i++){ Pairs01Tradeclosetgt tgt = new Pairs01Tradeclosetgt(); tgt = mn_res.get(i); System.out.println("value won't change:" + tgt.getTrlDlrs()); here is my entity class (partial): @Entity @Table(name = "pairs01_tradeclosetgt", catalog = "blah", schema = "public") @NamedQueries({ @NamedQuery(name = "Pairs01Tradeclosetgt.findAll", query = "SELECT p FROM Pairs01Tradeclosetgt p"), @NamedQuery(name = "Pairs01Tradeclosetgt.findByClseRatio", query = "SELECT p FROM Pairs01Tradeclosetgt p WHERE p.clseRatio = :clseRatio")}) public class Pairs01Tradeclosetgt implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @Column(name = "id") @SequenceGenerator(name="pairs01_tradeclosetgt_id_seq", allocationSize=1) @GeneratedValue(strategy=GenerationType.SEQUENCE, generator = "pairs01_tradeclosetgt_id_seq") private Integer id; and my persitence unit: <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="testpu" transaction-type="RESOURCE_LOCAL"> <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider> <class>mourv02.Pairs01Quotes</class> <class>mourv02.Pairs01Pair</class> <class>mourv02.Pairs01Trderrs</class> <class>mourv02.Pairs01Tradereq</class> <class>mourv02.Pairs01Tradeclosetgt</class> <class>mourv02.Pairs01Orderstatus</class> <properties> <property name="javax.persistence.jdbc.url" value="jdbc:postgresql://192.168.1.80:5432/blah"/> <property name="javax.persistence.jdbc.password" value="secret"/> <property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/> <property name="javax.persistence.jdbc.user" value="noone"/> </properties> </persistence-unit> </persistence>
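    For what it's worth, EclipseLink keeps a shared (second-level) cache per EntityManagerFactory, so creating a new EntityManager or calling clear() does not discard it, and the DoNotCheckCache hint used above only controls lookups; already-cached objects can still be returned stale. Two commonly used knobs, hedged here because they come from the EclipseLink documentation rather than from the question itself, are a refresh hint on the query and disabling the shared cache for the unit. A sketch of the query-level variant, meant to sit inside the existing exitChk method:

        // Force EclipseLink to rebuild the results from the database row instead of the shared cache.
        Query qMn = em.createQuery(
            "SELECT p FROM Pairs01Tradeclosetgt p WHERE p.isClosed = :isClosed AND p.isFilled = :isFilled");
        qMn.setHint("eclipselink.refresh", "true");
        qMn.setHint("eclipselink.cache-usage", "DoNotCheckCache");
        qMn.setParameter("isClosed", false);
        qMn.setParameter("isFilled", true);
        List<?> mn_res = qMn.getResultList();

    Alternatively, adding the persistence-unit property eclipselink.cache.shared.default = false (or marking the entity @Cacheable(false) under JPA 2.0) turns the shared cache off entirely, at the cost of re-reading every entity.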


  • IIS7 Network BIOS command limit and disabling file change notifications

    - by meandmycode
    Having experienced IIS for many years I've hit this error various times in the past, and understand it, the gist being IIS is attempting to setup so many file change notifications over a UNC share that it is hitting a specified limit of active commands. However, whilst I understand this is a registry change on the endpoints, in our internal network we're starting to hit this when running certain domains that have large content (and the content is then stored out of source control and instead on a dumb network store). Given this is a testing environment and rather than changing registry settings (which may not even work for client (dev) machines that run Vista IIS), I'd like to stop IIS setting up all these file change notifications. Is this possible to do on a per site basis? additionally the configuration can't be in the web.config, if this means having to disable FCN's for the entire IIS server this would be preferred. Thanks in advance.


  • If-Modified-Since vs If-None-Match

    - by Roger
    This question is based on this article.

    Response header:

        HTTP/1.1 200 OK
        Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
        ETag: "10c24bc-4ab-457e1c1f"
        Content-Length: 12195

    Request header:

        GET /i/yahoo.gif HTTP/1.1
        Host: us.yimg.com
        If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
        If-None-Match: "10c24bc-4ab-457e1c1f"

        HTTP/1.1 304 Not Modified

    In this case the browser is sending both If-None-Match and If-Modified-Since. My question is: on the server side, do I need to match BOTH the ETag and If-Modified-Since before I send a 304? Or should I just look at the ETag and send a 304 if the ETag is a match, ignoring If-Modified-Since?
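    For reference, the HTTP specification resolves this ordering itself: when a request carries If-None-Match, the server evaluates it and ignores If-Modified-Since (RFC 7232 describes this precedence), since the entity tag is the stronger validator. A hedged servlet-style sketch of that decision, where the arguments holding the resource's current ETag and modification time are invented names:

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        final class ConditionalGet {
            // Returns true if a 304 was sent and the response body should be skipped.
            static boolean handle(HttpServletRequest req, HttpServletResponse resp,
                                  String currentETag, long lastModifiedMillis) {
                String ifNoneMatch = req.getHeader("If-None-Match");
                if (ifNoneMatch != null) {
                    // ETag present: it alone decides; If-Modified-Since is ignored.
                    if (ifNoneMatch.equals(currentETag) || "*".equals(ifNoneMatch)) {
                        resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
                        return true;
                    }
                    return false;
                }
                long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 if absent
                if (ifModifiedSince != -1 && lastModifiedMillis / 1000 <= ifModifiedSince / 1000) {
                    resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
                    return true;
                }
                return false;
            }
        }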

