Search Results

Search found 35121 results on 1405 pages for 'object cache'.

  • alert the object

    - by user295189
    I am setting up an object like this: n.name = n.name.join(String.fromCharCode(255)); n.description = n.description.join(String.fromCharCode(255)); I want to be able to alert(n); but it just tells me [Object]. Is there a way to alert the complete object? Thanks.
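
    A minimal sketch of one way to see the whole object (assuming n holds only plain data, as above): serialize it before alerting, or log it to the console, which renders objects natively.

        // alert() stringifies plain objects as "[object Object]", so serialize first.
        // `n` here is a stand-in built the same way as in the question.
        var n = {
            name: ["a", "b"].join(String.fromCharCode(255)),
            description: ["c", "d"].join(String.fromCharCode(255))
        };

        alert(JSON.stringify(n, null, 2)); // shows every enumerable property
        console.log(n);                    // browser consoles render the object itself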

  • How to iterate over numerically named object properties

    - by Scott Schluer
    So I have a horribly designed class that I can't change that has properties like this: object.Color1 object.Color2 object.Color3 etc... How can I iterate through those with a for loop? In other words, something like this: for (int i = 0; i <= 40; i++) { string PropertyName = "Color" + i; if (object.PropertyName != "") { // do something } } Obviously this code wouldn't work, but it gives you an idea of what I'm after. I have to do some processing on each property and I don't want to repeat my code 40 times. :) A loop would be perfect, I'm just not sure how to create the name of the property on the fly.
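
    A minimal reflection-based sketch (the Legacy class and its values are stand-ins for the real one): build the property name as a string and look it up with GetProperty at run time.

        using System;
        using System.Reflection;

        class Legacy
        {
            public string Color1 { get; set; }
            public string Color2 { get; set; }
            // ... Color3 through Color40 in the real class
        }

        class Program
        {
            static void Main()
            {
                var obj = new Legacy { Color1 = "red", Color2 = "" };

                for (int i = 1; i <= 40; i++)
                {
                    PropertyInfo prop = obj.GetType().GetProperty("Color" + i);
                    if (prop == null) continue;                      // no such property

                    string value = prop.GetValue(obj, null) as string;
                    if (!string.IsNullOrEmpty(value))
                    {
                        Console.WriteLine("{0} = {1}", prop.Name, value);  // do something
                    }
                }
            }
        }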

  • Reference an object, based on a variable with its name in it

    - by James G
    I'm looking for a way to reference an object, based on a variable with its name in it. I know I can do this for properties and sub-properties: var req = {body: {jobID: 12}}; console.log(req.body.jobID); //12 var subProperty = "jobID"; console.log(req.body[subProperty]); //12 var property = "body"; console.log(req[property][subProperty]); //12 Is it possible for the object itself? var req = {body: {jobID: 12}}; var object = "req"; var property = "body"; var subProperty = "jobID"; console.log([object][property][subProperty]); //12 or console.log(this[object][property][subProperty]); //12 Note: I'm doing this in node.js, not a browser. Here is an excerpt from the function: if(action.render){ res.render(action.render,renderData); }else if(action.redirect){ if(action.redirect.args){ var args = action.redirect.args; res.redirect(action.redirect.path+req[args[0]][args[1]]); }else{ res.redirect(action.redirect.path); } } I could work around it by changing it to this, but I was looking for something more dynamic. if(action.render){ res.render(action.render,renderData); }else if(action.redirect){ if(action.redirect.args){ var args = action.redirect.args; if(args[0]==="req"){ res.redirect(action.redirect.path+req[args[1]][args[2]]); }else if(args[0]==="rows"){ rows.redirect(action.redirect.path+rows[args[1]][args[2]]); } }else{ res.redirect(action.redirect.path); } }
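
    A minimal sketch of the usual workaround: local variables can't be looked up by name in Node, so register the candidate objects in an explicit lookup table and index into that (the names below mirror the excerpt, but the table itself is an addition).

        // An explicit map from name -> object replaces
        // "find the variable whose name is stored in args[0]".
        var req  = { body: { jobID: 12 } };
        var rows = [ { id: 7 } ];

        var sources = { req: req, rows: rows };       // lookup table, added for this

        var args  = ["req", "body", "jobID"];         // e.g. action.redirect.args
        var value = sources[args[0]][args[1]][args[2]];
        console.log(value);                           // 12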

  • Joomla 2.5 disable and remove Smart Cache

    - by WooDzu
    I am maintaining a Joomla 2.5 based magazine website with 3-4 new, long articles every day. Smart Search was enabled by default and now I've got a few "finder" tables full of indexed phrases and terms. I wonder if there are any disadvantages if I disable the Smart Search plugin and remove these 'finder' tables completely. Also, we're using a search field, which works fine, but I'm not sure what's going to happen if I disable the plugin and remove these tables. Will it then search for phrases in the content Joomla tables, or simply break without the 'finder' tables? Has anyone tried this before?

  • Rails - before_save that includes updated object

    - by Sam
    I have a before_save that calculates a percentage that needs to include the object that is being updated. Is there a one-liner in Rails that takes care of this? For example (and this is totally made up): Object.find(:all, :include => :updated_object) Currently I'm sending the object that is getting updated to the method that calculates the percentage, and that works, but it's making things messy.
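
    A minimal sketch of the usual pattern (Item, amount and percentage are made-up names): inside a before_save callback, self already is the updated-but-unsaved object, so it can take part in the calculation without being passed around.

        class Item < ActiveRecord::Base
          before_save :calculate_percentage

          private

          def calculate_percentage
            # `amount` already holds the incoming (not yet saved) value here.
            # Sum the persisted rows, swap in this record's new value, then divide.
            others = Item.sum(:amount) - (new_record? ? 0 : amount_was.to_i)
            total  = others + amount.to_i
            self.percentage = total.zero? ? 0 : (amount.to_f / total) * 100
          end
        end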

  • Serialize object from within

    - by Maximus
    I have the class testClass, which has the method save. This method saves the object to a database, but it needs to serialize the object before saving. How can I serialize the object from within the class to do that? class testClass { private $prop = 777; public function save() { $serializedObject = serialize(self); DB::insert('objects', array('id', 'object')) ->values(array(1, $serializedObject)) ->execute(); } } serialize(self) obviously doesn't work.
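
    A minimal sketch of the fix: pass $this (the current instance) to serialize() rather than self, which refers to the class, not the object; the DB helper call is kept exactly as in the question and is assumed to exist.

        <?php
        class testClass
        {
            private $prop = 777;

            public function save()
            {
                // serialize the current instance, not the class name
                $serializedObject = serialize($this);

                DB::insert('objects', array('id', 'object'))
                    ->values(array(1, $serializedObject))
                    ->execute();
            }
        }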

  • Finding the type of an object in C++

    - by lemnisca
    I have a class A and another class that inherits from it, B. I am overriding a function that accepts an object of type A as a parameter, so I have to accept an A. However, I later call functions that only B has, so I want to return false and not proceed if the object passed is not of type B. What is the best way to find out which type the object passed to my function is?
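
    A minimal sketch of the standard approach: dynamic_cast on a pointer (or reference) to the parameter, which requires A to be polymorphic, i.e. to have at least one virtual function; names here are illustrative.

        #include <iostream>

        struct A { virtual ~A() {} };                 // polymorphic base
        struct B : A { void onlyInB() const { std::cout << "B-specific work\n"; } };

        bool handle(const A& obj)
        {
            const B* b = dynamic_cast<const B*>(&obj);
            if (!b)
                return false;      // not actually a B: stop here
            b->onlyInB();          // safe: obj really is a B
            return true;
        }

        int main()
        {
            A a; B b;
            std::cout << handle(a) << ' ' << handle(b) << '\n';   // prints: 0 1
        }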

  • C# Deserializing to a Dictionary<string, Object>

    - by lovecraft
    I'm writing a C#/VB application to connect to a database and do stuff with the data. I was given this code to take a serialized byte array and deserialize it, which is then written to a Dictionary. The line of code is: Dictionary<string, Object> DictionaryEmployee = (Dictionary<string, Object> Deserializer(byteArrayEmp)); The errors I'm getting are exceedingly unhelpful: "Only assignment, call, increment, decrement, await, and new object expressions can be used as a statement" if I mouse over Object, and "Using the generic type 'System.Collections.Generic.Dictionary' requires 2 type arguments" if I mouse over Dictionary.
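
    A likely fix, sketched below: the cast's closing parenthesis is misplaced, so the compiler reads Dictionary<string, Object> as the start of a declaration rather than a cast; the parenthesis has to close before the call. The Serializer/Deserializer helpers here are BinaryFormatter-based stand-ins added only to make the snippet self-contained.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        class Program
        {
            // stand-in for the helper the question was given
            static object Deserializer(byte[] data)
            {
                using (var ms = new MemoryStream(data))
                    return new BinaryFormatter().Deserialize(ms);
            }

            static byte[] Serializer(object o)
            {
                using (var ms = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(ms, o);
                    return ms.ToArray();
                }
            }

            static void Main()
            {
                byte[] byteArrayEmp = Serializer(new Dictionary<string, object> { { "name", "Ada" } });

                // the fix: close the cast before calling Deserializer
                Dictionary<string, object> dictionaryEmployee =
                    (Dictionary<string, object>)Deserializer(byteArrayEmp);

                Console.WriteLine(dictionaryEmployee["name"]);   // Ada
            }
        }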

  • Create Object using ObjectBuilder

    - by dhinesh
    I want to create objects using ObjectBuilder or ObjectBuilder2. I do not want to use StructureMap. I was able to create an object with a parameterless constructor using the code mentioned below. public class ObjectFactory : BuilderBase<BuilderStage> { public static T BuildUp<T>() { var builder = new Builder(); var locator = new Locator { { typeof(ILifetimeContainer), new LifetimeContainer() } }; var buildUp = builder.BuildUp<T>(locator, null, null); return buildUp; } } To create a Customer you just call ObjectFactory.BuildUp<Customer>(). However, this only works for classes that have a parameterless constructor; I need to create objects whose constructors take parameters.

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I have recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance difference is severe (I suspect it is because of the latter). When I am running my virtual machine the system load monitor graph generally shows less than 50% memory usage. The system load indicator shows that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison:
    BEFORE
    - Cache was very sparingly used; pretty much none of my memory usage was the cache.
    - Swappiness was 0 (caused my memory to be used first, then swap only if needed).
    - Performance was quite good and logical: RAM was used fully first, caching was minimal. I could run enough software to utilize my full 4GB of RAM without any performance degradation whatsoever.
    - Swap space was then used as needed, which was obviously slower (I am on a HDD) but was still usable once the current program was loaded into memory.
    AFTER
    - Cache is used to fill the full 4GB as soon as my virtual machine is run.
    - Swappiness is 0 (same behaviour as before, but the cache takes the full memory straight away).
    - Performance is terrible and unusable while running Ubuntu software.
    - Basic things like changing windows take 2+ minutes.
    - Changing screens happens frame by frame, sometimes over up to 5 minutes.
    - I cannot run an IDE and a VM together like I could with ease before.
    So basically, any suggestions on how to take my performance back to how it was before while keeping my current setup? My suspicion is VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks
    EDIT: It may also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, then the RAM will lock at about 50% used and 50% cache and go into the complete lock-up mentioned above. Contrastingly, if VMware is closed (after being open), then the RAM will continue to rise as needed, the cache will stay as the complete remaining memory, and there is no noticeable performance degradation.

  • How clean is deleting a computer object?

    - by Kevin
    Though quite skilled at software development, I'm a novice when it comes to Active Directory. I've noticed that AD seems to have a lot of stuff buried in the directory and schema which does not appear superficially when using simplified tools such as Active Directory Users and Computers. It kind of feels like the Windows registry, where COM classes have all kinds of intertwined references, many of which are purely by GUID, such that it's not enough to just search for anything referencing "GadgetXyz" by name in order to cleanly remove GadgetXyz. This occasionally leads to the uneasy feeling that I may have useless garbage building up in there which I have no idea how to weed out.
    For instance, I made the mistake a while back of trying to rename a DC, figuring I could just do it in the usual manner from Control Panel. I found references to the old name buried all over the place, which made it impossible to reuse that name without considerable manual cleanup. Even long after I got it all working, I've stumbled upon the old name hidden away in LDAP. (There were no other DCs left in the picture at that time, so I don't think it was a tombstone issue.)
    More specifically, I'm worried about the case of just outright deleting a computer from AD. I understand the cleanest way to do it is to log into the computer itself and tell it to leave the domain. (As an aside, doing this in Windows 8 seems to only disable the computer object and not delete it outright!) My concern is cases where this is not possible, for instance because the computer was on an already-deleted VM image. I can simply go into Active Directory Users and Computers, find the computer object, click it, and press Delete, and it seems to go away. My question is: is it totally, totally gone, or could this leave hanging references in some Active Directory nook or cranny I won't know to look in? (Excluding, of course, the expected tombstone records, which expire after a set time.) If so, is there any good way to clean up the mess?
    Thank you for any insight! Kevin
    P.S. It was over a year ago so I don't remember the exact details, but here's the gist of the DC renaming issue. I started with a single 2008 DC named ABC on a physical machine and wanted to end up instead with a DC of the same name running in a vSphere VM. Not wanting to mess with imaging the physical machine, my plan instead was:
    1. Rename ABC to XYZ.
    2. Fresh-install 2008 on a VM, name it ABC, and join it to the domain. (I may have done the latter in the same step as promoting to DC; I don't recall.)
    3. dcpromo the new ABC as a 2nd DC, including GC.
    4. Make sure the new ABC replicated correctly from XYZ, then transfer the FSMO roles from XYZ to it.
    5. Once everything was confirmed to work with the new ABC alone, demote XYZ, remove the AD role, and remove it from the domain.
    Eventually I managed to do this, but it was a much bumpier ride than expected. In particular, I got errors trying to join the new ABC to the domain. These included "The pre-Windows 2000 name is already in use" and "No mapping between account names and security IDs was done." I eventually found that the computer object for XYZ had attributes that still referred to it as ABC. Among these were servicePrincipalName, msDS-AdditionalDnsHostName, and msDS-AdditionalSamAccountName. The latter I could not edit via Attribute Editor and instead had to run this against XYZ: NETDOM computername <simple-name> /add:<FQDN> There were some other hitches I don't remember exactly.

  • How to handle media kept on a separate server (PHP)

    - by Sandman
    So, I have three servers, and the idea was to keep all media (images, files, movies) on a media server. I never got around to doing it, but I think I probably should. These are the three servers: a WWW server, a DB server, and a media server. Visitors obviously connect to the WWW server, and currently image resizing and caching is done on the WWW server, as the original files are kept there. So the idea is that the image functions I have, which do all the image compositing, resizing and caching, would just pipe the command over to the media server, which would return the path to the finished file. What I don't know is how to handle functions such as file_exists() and figuring out image dimensions when needed, before any image management even comes into play. Do I pipe all these commands to the other server via HTTP? I was thinking along the lines of doing it this way: function image(##ARGS##){ if ($GLOBALS["media_host"] != "localhost"){ list ($src, $width, height) = file('http://$GLOBALS[media_host]/imgfunc.php?args=##ARGS##'); return "<img src='$src' height and width >"; } .... do other stuff here } Am I approaching this the wrong way? Is there a better way to do this?
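
    A minimal sketch of the HTTP approach under discussion, assuming a small JSON endpoint on the media server (imgmeta.php and its response format are made up): the WWW server asks it for existence and dimensions instead of calling file_exists()/getimagesize() on files it no longer holds locally.

        <?php
        function remote_image_info($path)
        {
            $host = $GLOBALS['media_host'];
            if ($host === 'localhost') {
                // single-server fallback: inspect the local file directly
                $size = @getimagesize($path);
                return $size ? array('exists' => true, 'width' => $size[0], 'height' => $size[1])
                             : array('exists' => false);
            }
            // ask the media server over HTTP; it replies with JSON such as
            // {"exists":true,"width":800,"height":600}
            $json = @file_get_contents('http://' . $host . '/imgmeta.php?path=' . urlencode($path));
            return $json ? json_decode($json, true) : array('exists' => false);
        }

        $info = remote_image_info('photos/cat.jpg');
        if ($info['exists']) {
            printf('<img src="http://%s/%s" width="%d" height="%d">',
                   $GLOBALS['media_host'], 'photos/cat.jpg', $info['width'], $info['height']);
        }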

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages. The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (therefore avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage. However, one issue that I am having is that I can't figure out how to programmatically check when the version of code uploaded to App Engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe that it is not the final solution since I really only need to re-generate HTML when the code is updated (as opposed to every time the memcache expires). Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex
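
    A minimal sketch of one way to detect "the code was updated", assuming the Python runtime and the old db datastore API: App Engine exposes the deployed version in the CURRENT_VERSION_ID environment variable, and the minor part after the dot changes on every deployment, so it can serve as the cache key for the stored HTML.

        import os

        from google.appengine.ext import db


        class CachedPage(db.Model):
            path = db.StringProperty()
            html = db.TextProperty()
            code_version = db.StringProperty()   # CURRENT_VERSION_ID at render time


        def get_page(path, render):
            version = os.environ.get('CURRENT_VERSION_ID', 'dev')
            page = CachedPage.all().filter('path =', path).get()
            if page and page.code_version == version:
                return page.html                 # code unchanged: serve the cached copy
            html = render()                      # regenerate only after a new deployment
            CachedPage(path=path, html=html, code_version=version).put()
            return html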

  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (server reboot usually). These "prime" SQL Server's page cache with the common core working set of the data so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4Gb in the machine, the DB is under 1.5Gb currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects & sysindexes and running SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are out of normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so such a process slowing users down (until it completes) more than an unprimed working set would is not an issue.
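
    A minimal T-SQL sketch of the scripted approach described above, using the 2005+ catalog views instead of sysobjects/sysindexes: it generates one covering scan per index so every page of every index is read into the buffer pool (review the generated statements before running this anywhere important).

        DECLARE @sql nvarchar(max);
        SET @sql = N'';

        -- build one "scan this index" statement per table/index pair
        SELECT @sql = @sql
            + N'SELECT COUNT_BIG(*) FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
            + N' WITH (INDEX(' + QUOTENAME(i.name) + N')); '
        FROM sys.indexes AS i
        JOIN sys.tables  AS t ON t.object_id = i.object_id
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        WHERE i.type > 0;                 -- skip heaps (no named index to hint)

        EXEC sys.sp_executesql @sql;      -- forces every index page into RAM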

  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have some project where I have a single producer thread which writes events into a buffer, and an additional single consumer thread which takes events from the buffer. My goal is to optimize this thing for a single machine to achieve maximum throughput. Currently, I am using some simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread and therefore the pointers are only updated by a single thread). #define BUF_SIZE 32768 struct buf_t { volatile int writepos; volatile void * buffer[BUF_SIZE]; volatile int readpos; }; void produce (buf_t *b, void * e) { int next = (b->writepos+1) % BUF_SIZE; while (b->readpos == next); // queue is full. wait b->buffer[b->writepos] = e; b->writepos = next; } void * consume (buf_t *b) { while (b->readpos == b->writepos); // nothing to consume. wait int next = (b->readpos+1) % BUF_SIZE; void * res = b->buffer[b->readpos]; b->readpos = next; return res; } buf_t *alloc () { buf_t *b = (buf_t *)malloc(sizeof(buf_t)); b->writepos = 0; b->readpos = 0; return b; } However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speed-up. Additionally, I've moved writepos before the buffer and readpos after the buffer to ensure that both variables are on different cache lines, which also resulted in some speedup. What I need is a speedup of about 400%. Do you have any ideas how I could achieve this using things like padding etc.?
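
    A minimal sketch of the padding idea raised at the end (64-byte cache lines assumed): give each thread's index its own cache line and keep a thread-local cached copy of the other index, so the shared variables are only re-read when the cached value says the queue looks full or empty. It keeps the question's volatile-only model; a production version would also want explicit memory barriers.

        #include <stdlib.h>

        #define BUF_SIZE   32768
        #define CACHE_LINE 64

        struct buf_t {
            volatile int writepos;                     /* written by producer only */
            int          cached_readpos;               /* producer's stale copy    */
            char         pad1[CACHE_LINE - 2 * sizeof(int)];

            volatile int readpos;                      /* written by consumer only */
            int          cached_writepos;              /* consumer's stale copy    */
            char         pad2[CACHE_LINE - 2 * sizeof(int)];

            void * volatile buffer[BUF_SIZE];
        };

        void produce(struct buf_t *b, void *e)
        {
            int next = (b->writepos + 1) % BUF_SIZE;
            if (b->cached_readpos == next)             /* looks full: refresh once */
                while ((b->cached_readpos = b->readpos) == next)
                    ;                                  /* really full: spin        */
            b->buffer[b->writepos] = e;
            b->writepos = next;
        }

        /* consume() mirrors produce(): check cached_writepos first, re-read
         * b->writepos only when the queue looks empty, then advance readpos. */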

  • bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC. This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it:
    - Eliminates web application dependencies on the server's Global Assembly Cache.
    - Allows me (the developer) to have control of what goes on inside my application.
    - Enables the application to be deployed as a "package".
    - Removes the application deployment burden from the server administrators.
    Now, here are my questions. From a security perspective, what are the advantages of using the GAC vs. bin-deployment? Is it possible to host multiple versions of the same DLL in the GAC? Has anyone run into similar restrictions?

  • Why is my JavaScript file sometimes compressed and sometimes not? (IIS gzip problem)

    - by Kevin Yang
    I enabled gzip for JavaScript files in my IIS settings; here's the corresponding config section. <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="8" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/soap+msbin1" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> Currently, when I download my JS file, it seems that sometimes the server returns the gzipped one and sometimes not. I don't know why, or how to debug it. If a file has already been gzipped, it should be cached on local disk, and the next time someone visits that file IIS should return the cached gzipped copy directly without compressing it again. Is that right?
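
    One likely explanation, offered as an assumption to verify rather than a confirmed diagnosis: IIS 7.x only static-compresses a file once it is requested "frequently" (by default 2 hits within 10 seconds), so early or infrequent requests go out uncompressed until the gzipped copy lands in the compression directory. The serverRuntime settings below control that heuristic; check the attribute names against your IIS version.

        <!-- applicationHost.config / web.config sketch: compress from the first hit -->
        <system.webServer>
          <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:10" />
        </system.webServer>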

  • non-cached RSLs in Flex?

    - by SteMa
    I have a project that is used by several customers; the only difference is in the DB, everything else looks the same, except for the main page's text, which is loaded from an external SWF file. I created a library, compiled it as an SWC, imported it, and am using it as an RSL. The problem is that once I've opened the page and afterwards update the RSL (because changes to the text are needed), it's already cached by the browser (not the Flash Player's cache, but we shouldn't discuss this, please!) and the updated SWF won't be loaded. If I use it as an external, the page won't even start up (the browser says it's loaded, but it's blank; not even Flex's loading progress bar appears). <local:MainPage includeIn="default" currentState="{MainPageState}" id="Page" width="100%" height="100%" /> This is the code on the main page; if I comment it out, then the whole thing loads, even with the "external" link-type. If it helps, in design view I see the component, but I get a warning for the library: Design mode could not load MainPage.swc. It may be incompatible with this SDK, or invalid. (DesignAssetLoader.CompleteTimeout)

  • HTML5 Local Storage of audio element source - is it possible?

    - by andrewdotcom
    Hi Stack Overflow experts. I've been experimenting with the audio and local storage features of HTML5 of late and have run into something that has me stumped. I'd like to be able to cache or store the source of the audio element locally to enable speedier and offline playback. The problem is I can't see how this is possible with the current implementation. I have tried the following using WebKit:
    - Creating a manifest file to set up local caching, but the audio file appears not to be a cacheable item, maybe due to the way it is streamed or something.
    - Using JavaScript to put an audio object into local storage, but the size of the MP3 makes this impossible due to memory issues (I think).
    - Using a base64 data URI as an audio transport that can be cached, but again the file size makes this prohibitive. Also, the audio element does not seem to like this in WebKit (it works fine in Mozilla).
    - Several methods of putting the data into the local database store, which suffer the same issues as the other cases.
    I'd love to hear any other ideas anyone may have as to how I could achieve my goal of offline playback using caching/local storage in WebKit.
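
    One further idea, sketched under the assumption that the browser supports XHR2 blob responses and IndexedDB (which sidesteps the localStorage size limit): download the file as a Blob, keep it in IndexedDB, and point the audio element at an object URL for the local copy.

        // Hedged sketch: cache an audio file as a Blob in IndexedDB and play it
        // from the local copy. Database/store names are made up for illustration.
        function cacheAndPlay(url, audioEl) {
          var xhr = new XMLHttpRequest();
          xhr.open('GET', url, true);
          xhr.responseType = 'blob';
          xhr.onload = function () {
            var blob = xhr.response;
            var open = indexedDB.open('audio-cache', 1);
            open.onupgradeneeded = function () { open.result.createObjectStore('files'); };
            open.onsuccess = function () {
              var db = open.result;
              db.transaction('files', 'readwrite').objectStore('files').put(blob, url);
              audioEl.src = URL.createObjectURL(blob);   // plays without re-fetching
              audioEl.play();
            };
          };
          xhr.send();
        }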
