Search Results

Search found 18811 results on 753 pages for 'dynamic memory allocation'.


  • IIS Application Pool Memory Size Problem

    - by Roni
    I increased my application pool memory size from the default to 500 MB, and I have IIS 7.5. My server sometimes goes down (service unavailable) and I don't know the reason. I made a couple of changes on the same day that I changed the memory size in IIS, and since that day I have been getting this problem on one of my servers. Can anybody tell me the right way to increase memory, and what the problems could be? Thanks, Roni

    Read the article

  • How does Azure memory usage work?

    - by Jed Grant
    I have a Windows Azure website. In the dashboard it shows me that I have used 1.51 GB of the 2 GB available per hour. I keep increasing the number of instances available in the shared node so the site doesn't shut down. After each hour finishes, the memory usage still shows 1.51 GB used. I assumed this would start at zero and then grow as time goes on, but that doesn't appear to be the case. How does server memory work? What are some reasons my application is using this much memory? (I use no output caching and have generally just built off of the basic MVC templates provided in Visual Studio.) What other considerations should I be making to decrease the amount of memory needed?

    Read the article

  • Server memory issues, and expected level of service from hosting company

    - by Greg
    I'm involved in maintaining an Ubuntu VPS which runs our django websites (nginx/apache/mod_wsgi) and we've been having some memory spikes which have either caused the database to die, or induced kernel panic when the memory management system can't find any killable processes. I'm working on fixing the memory spikes, but I'm wondering whether there's anything I can do to better deal with the problem if it occurs again. Are there any tools I could use to detect the memory spikes and then, say, kill the offending process and email the server admin to fix it up? Killing off one website so that the server can remain operational is certainly preferable to the whole thing falling over. Also, we were charged $600 for after-hours service because we had to get the hosting company to restart the server - is this standard practice among hosting companies? Another provider I work with provides a panel with which I can stop and start the server myself, and given that a restart was all that was needed, $600 seems mightily excessive. (That's NZD, it's around $445 USD)
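
    One option for the detect-and-notify part (my suggestion; the question doesn't name a tool) is a watchdog such as monit, which can restart a process and send mail when its memory crosses a threshold. A rough sketch for /etc/monit/monitrc - the pidfile path, limits, and address are placeholder assumptions:

        set mailserver localhost
        set alert admin@example.com                # hypothetical admin address

        check process apache with pidfile /var/run/apache2.pid
            start program = "/etc/init.d/apache2 start"
            stop program  = "/etc/init.d/apache2 stop"
            if totalmem > 512 MB for 3 cycles then restart

        check system myvps
            if memory usage > 90% then alert

    Restarting just the offending service this way keeps the box reachable, which also avoids the after-hours call-out fee in the first place.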

    Read the article

  • Orange Brightbox and NO-IP.com

    - by JSweete
    Strange one here. I didn't know where to ask, and I know this is a developer resource, but I was hoping that with everyone's tech know-how someone might have a solution for my problem. I had an Orange Livebox before, and in its dynamic DNS settings it had no-ip.com as a drop-down option, with login fields to update my account with a dynamic IP address. This worked great for years. However, my Livebox died and I now have an Orange Brightbox, and this doesn't have no-ip.com as a login update option for dynamic DNS on the router. Does anyone have any idea how I can get my domain to point to my home server with a dynamic IP address, ideally for free? This is merely for testing and to have a backup server for my main remote server.
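
    One workaround (my suggestion, nothing Brightbox-specific): skip the router entirely and run an update client on the home server itself. No-IP ships a Linux client (noip2), or a cron job can call the standard NIC update endpoint directly. A sketch - the hostname and credentials are placeholders:

        # cron: every 5 minutes, report the current public IP to No-IP
        */5 * * * * curl -s -u 'user:password' 'https://dynupdate.no-ip.com/nic/update?hostname=myhost.no-ip.org'

    When no explicit IP parameter is given, the update service uses the source address of the request, which is the router's public IP.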

    Read the article

  • Understanding ESXi and Memory Usage

    - by John
    Hi, I am currently testing VMware ESXi on a test machine. My host machine has 4 GB of RAM. I have three guests, and each is assigned a memory limit of 1 GB (with only 512 MB reserved). The host summary screen shows a memory capacity of 4082.55 MB and a usage of 2828 MB with two guests running. This seems to make sense: two gigs for the two VMs plus an overhead for the host. 800 MB seems high, but that is still reasonable. But on the Resource Allocation screen I see a memory capacity of 2356 MB and an available capacity of 596 MB. Under the configuration tab, memory link, I see a physical total of 4082.5 MB, System of 531.5 MB and VM of 3551.0 MB. I have only allocated a gig to each VM, yet with two VMs running they are taking up almost twice the amount of RAM allocated. Why is this, and why does the Resource Allocation screen short-change me so much?

    Read the article

  • Separation of memory-oriented processes and CPU-oriented processes

    - by Jeevan Dongre
    I am a DevOps guy working for an e-commerce company. I am running my e-commerce application, built with Ruby on Rails (Spree Commerce), on 2 medium instances in production. One is a high-memory instance with 3.8 GB of RAM and a single-core CPU, and the other is a high-CPU instance with a dual-core CPU. AWS calls them m1.medium and c1.medium instances respectively. My question: is it possible to separate the processes according to whether they are CPU-intensive or memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is there any tool available to identify those processes? Kindly give me some heads up. Thank you
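
    Before splitting anything, standard Linux tooling is usually enough to see which processes are memory-heavy versus CPU-heavy (generic advice, not Spree-specific):

        # top memory consumers
        ps aux --sort=-%mem | head -n 10
        # top CPU consumers
        ps aux --sort=-%cpu | head -n 10

    In a typical Rails deployment the app server workers tend to dominate memory while background jobs dominate CPU, so a common split is app servers on the high-memory box and workers on the high-CPU box - but verify with the numbers above first.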

    Read the article

  • Page pool memory

    - by legiwei
    I'm currently using Windows XP SP3 32-bit, with a C2D E6320 and 2 GB of RAM. When I am playing StarCraft 2, I encounter an error saying my system is running low on paged pool memory. StarCraft's graphics settings suggested high settings for me, so I don't think it has to do with my graphics card, but with my RAM. I then searched for a way to rectify the problem. Apparently it's something to do with my virtual memory. I proceeded to try the suggested solution, which is to edit the registry and set the paged pool memory to 384 MB. However, having done so, I still could not achieve it. I've seen screenshots of Windows XP setups with 2 GB having 384 MB of paged pool memory. My default setting puts it at 195 MB, whereas when I try to increase the pool limit, it can only go to a max of 229 MB. I tried increasing my RAM capacity to 3 GB, but the pool limit remains the same. I'd like to know how to increase my paged pool memory. I've tried searching for a solution, but to no avail other than the one that I've mentioned above (which didn't solve my problem completely).
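
    For reference, the registry value usually cited for this (presumably the same tweak the question refers to; edit at your own risk and reboot afterwards) is PagedPoolSize, in bytes, under the Memory Management key - 0x18000000 is 384 MB:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
        "PagedPoolSize"=dword:18000000

    Note that on 32-bit Windows the kernel carves the pools out of a shared virtual address range, so this value is a request, not a guarantee - which may be why the observed limit stops short of 384 MB.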

    Read the article

  • Increase in Virtual Memory [closed]

    - by Philly
    If we increase the virtual memory of Windows Server 2008, will it improve the performance of the application? Up to what size can we increase the virtual memory on a 32-bit server? What are the disadvantages and advantages of it? We are running bulk jobs (all imports and exports) every day at 3pm; because of that we are getting some out-of-memory exceptions and the application goes down. Any suggestions? Thanks

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently updated it from 11.04 to 11.10. There are some things I don't like in the new version of the Unity desktop interface. I don't actually know whether it is hard to restore the previous behaviour or not, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings.

    I used to have 3-6 terminal windows and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open a web page with some instructions in Firefox, press Alt+Tab and type commands in a console window, while still being able to recognize the text on the web page under it. Now I have problems with my usual work style because of the following.

    List of "negative" changes:

    1. Alt+Tab shows just one icon for all console windows. When I wait some time it does show all windows, but I don't like to wait. I prefer to remember the order of windows and press Alt+Tab as many times as I need to switch to the right window.
    2. Alt+Shift+Tab to switch in reverse order doesn't work now.
    3. Console windows are not transparent any more.
    4. When I don't wait, and switch to the console icon, it shows all console windows together. So even if they were transparent, I wouldn't be able to see anything below them (I can read something only from the window directly under the current one, not a few levels under).
    5. When I run a few console windows in Unity, I had 740 MB used on Ubuntu 11.04, but I have 1050 MB now. The question is how to get back to 750 or less. I really need my memory, since I use my computer to work with 1512 MB of data and I try to save every 10 MB possible (if it doesn't take too much machine time and, more importantly, my time).
    6. When I press the Super key I get a field to type the name of the program I want to run. But now it sometimes shows this field, yet when I'm trying to type nothing happens. Probably focus is not on the right field.

    I don't really mean to restore exactly the same behaviour; I just want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are some ways to accomplish that.

    What have I tried: I installed the CompizConfig Settings Manager and read this question. However, enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it, it warns about key-binding conflicts with the "Ubuntu Unity Plugin"; Alt+Tab switching doesn't change, but Shift+Alt+Tab now works and shows all windows; and memory usage increases. I tried turning off the Ubuntu Unity Plugin, but that doesn't seem to be the right thing to do, since it appears to turn off all menus, a lot of keystrokes, and the app launcher that usually activates with the Super key. I found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin from Accessibility. However, I don't know whether enabling it is the right thing to do (at least it increases memory usage).

    Update: everything is solved but #3; see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • Why do browsers leak memory?

    - by Dane Balia
    A colleague and I were speaking about browsers (we use a browser control object in a project), and it appears as plain as day that all browsers (Firefox, Chrome, IE, Opera) display the same characteristic, or side effect, of their usage: leaking memory. Can someone explain why that is the case? Surely, as with any form of code, there should be proper garbage collection? PS. I've read about some defensive patterns for why this can happen from a developer's perspective, and I am aware of an article Crockford wrote on IE; but why is the problem symptomatic of every browser? Thanks

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, in JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.

    But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - Requirement for the host application to have a reference to JSON.NET or else get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. Specifically, it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the type from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;
            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;
                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }
            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling =
                    (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the file serialization looks like:

        /// <summary>
        /// Serializes an object instance to a JSON file.
        /// </summary>
        /// <param name="value">the value to serialize</param>
        /// <param name="fileName">Full path to the file to write out with JSON.</param>
        /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
        /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
        /// <returns>true or false</returns>
        public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
        {
            dynamic writer = null;
            FileStream fs = null;
            try
            {
                Type type = value.GetType();
                var json = CreateJsonNet(throwExceptions);
                if (json == null)
                    return false;

                fs = new FileStream(fileName, FileMode.Create);
                var sw = new StreamWriter(fs, Encoding.UTF8);

                writer = Activator.CreateInstance(JsonTextWriterType, sw);

                if (formatJsonOutput)
                    writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

                writer.QuoteChar = '"';
                json.Serialize(writer, value);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return false;
            }
            finally
            {
                if (writer != null)
                    writer.Close();
                if (fs != null)
                    fs.Close();
            }
            return true;
        }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable. For full-circle operation, here's the DeserializeFromFile() version:

        /// <summary>
        /// Deserializes an object from file and returns a reference.
        /// </summary>
        /// <param name="fileName">name of the file to serialize to</param>
        /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
        /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param>
        /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
        /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
        public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
        {
            dynamic json = CreateJsonNet(throwExceptions);
            if (json == null)
                return null;

            object result = null;
            dynamic reader = null;
            FileStream fs = null;
            try
            {
                fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                var sr = new StreamReader(fs, Encoding.UTF8);
                reader = Activator.CreateInstance(JsonTextReaderType, sr);
                result = json.Deserialize(reader, objectType);
                reader.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return null;
            }
            finally
            {
                if (reader != null)
                    reader.Close();
                if (fs != null)
                    fs.Close();
            }
            return result;
        }

    This code is a little more compact since there are no prettifying options to set. Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and Writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration: AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the Configuration file where configuration settings are
            /// stored in. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Return
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to XmlConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);
                bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);
                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already there are several other places in some other tools where I use JSON serialization where this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing, it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For some operations that are not pivotal to a component or application and only used under certain circumstances, though, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET, C#

    Read the article

  • mp3 file streaming/download - apache server memory issue

    - by Manolis
    I have a website on which users can upload mp3 files (uploadify), stream them using an HTML5 player (jPlayer) and download them using a PHP script (www.zubrag.com/scripts/). When a user uploads a song, the path to the audio file is saved in the database, and I'm using that data in order to play the song and show a download link for it. The problem that I'm experiencing is that, according to my host, this method is using a lot of memory on the server, which is dedicated. Link to the script: http://pastebin.com/Vus8SRa7 How should I handle the script properly? And what would be the best way to track down the problem? Any ideas on cleaning up the code? Any help much appreciated.

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have 2 approaches for the client:

    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.

    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks if the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.

    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have short spates of low latency but would often be between 4,000-7,000us! Through inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?

    Read the article

  • How can I estimate the memory usage of std::map?

    - by Drakosha
    For example, I have a std::map<A,B> with known sizeof(A) and sizeof(B), and the map has N entries in it. How would you estimate its memory usage? I'd say it's something like (sizeof(A) + sizeof(B)) * N * factor. But what is the factor? Or maybe a different formula? Update: Maybe it's easier to ask for an upper bound?
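
    As a back-of-the-envelope answer (my estimate for a typical 64-bit red-black-tree implementation such as libstdc++'s, not a guarantee of the standard): the overhead is additive per node rather than a multiplicative factor, because each node carries fixed-size bookkeeping next to the stored pair:

        per node ≈ sizeof(A) + sizeof(B)   // the stored pair
                 + 3 * 8                   // parent/left/right pointers
                 + 8                       // red/black color flag, padded
                 + ~16                     // allocator bookkeeping/rounding

        total    ≈ N * (sizeof(A) + sizeof(B) + 48)

    So for small A and B the effective "factor" can easily be 3-5x, while for large values it approaches 1. For a real upper bound, measure with your actual allocator, since rounding policies differ.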

    Read the article

  • Dell whitepaper on PowerEdge R810 R910 and M910 Memory Architecture

    - by jchang
    The Dell PowerEdge 11th Generation Servers: R810, R910 and M910 Memory Guidance whitepaper seems to have caused some confusion. I believe the source is an error in the paper. In the section on FlexMem Bridge Technology, the Dell whitepaper says this applies to the R810 and the M910. The Dell M910 is a 4-way blade server for the Xeon 7500 series processor line. First, a brief recap: the R810 is a 2-way server, by which I mean it has two sockets, regardless of the number of cores on each processor....(read more)

    Read the article

  • Oracle Database In-Memory: Launch in Frankfurt

    - by Carsten Czarski
    This time there is something old ... and something new. First, the new: On June 11, Larry Ellison will unveil the new, groundbreaking Oracle Database In-Memory functionality in Redwood Shores. With this new technology, customers benefit from accelerated database performance for analytics, data warehousing, reporting and online transaction processing (OLTP). Just 6 days later - on June 17 - the only European launch event takes place in Frankfurt. Alongside technical sessions, a panel discussion and demos, a talk by Andy Mendelsohn, Head of Database Product Development, is planned. Register today. And here is the old: Who still remembers HTML DB...? In the archives of the APEX Community site we found a video showing how pages in HTML DB could be locked against other developers. That still exists today, by the way - it just looks a little different. Enjoy watching.

    Read the article

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    - by Damian
    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014). On the Codeplex site you can find the newest Hekaton samples - https://msftdbprodsamples.codeplex.com/releases/view/114491. The previous samples were related to the CTP2 version, but the newest work with the RTM version. Some issues you might have hit when running the previous samples on the RTM version have been fixed: Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we deserialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small, "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large-object-heap fragmentation when we deserialize the objects, and I expect there are also other problems with large-object-heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:
    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?
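
    One commonly suggested approach (a sketch of mine, not code from the question, and it requires SQL Server 2005+ because it relies on UPDATE ... .WRITE for varbinary(max)) is a minimal write-only Stream that appends each buffer the serializer hands it straight to the BLOB column, so only one chunk is ever in memory. The ModelState table and column names here are hypothetical:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        // Write-only Stream: each Write() appends its buffer to a varbinary(max)
        // column via UPDATE ... .WRITE, so no full serialized copy is ever built.
        // Assumes an open connection, and that the row exists with the column
        // initialized to 0x (not NULL), since .WRITE cannot target a NULL value.
        class SqlBlobStream : Stream
        {
            readonly SqlCommand cmd;
            long position;

            public SqlBlobStream(SqlConnection conn, int id)
            {
                cmd = new SqlCommand(
                    "UPDATE ModelState SET State.WRITE(@chunk, @offset, @count) WHERE Id = @id",
                    conn);
                cmd.Parameters.Add("@chunk", SqlDbType.VarBinary, -1);
                cmd.Parameters.Add("@offset", SqlDbType.BigInt);
                cmd.Parameters.Add("@count", SqlDbType.BigInt);
                cmd.Parameters.AddWithValue("@id", id);
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                var chunk = new byte[count];
                Buffer.BlockCopy(buffer, offset, chunk, 0, count);
                cmd.Parameters["@chunk"].Value = chunk;
                cmd.Parameters["@offset"].Value = position;  // append at current end
                cmd.Parameters["@count"].Value = (long)count;
                cmd.ExecuteNonQuery();
                position += count;
            }

            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { return position; } }
            public override long Position
            {
                get { return position; }
                set { throw new NotSupportedException(); }
            }
            public override void Flush() { }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    Wrapping it in a BufferedStream (e.g. 80 KB) keeps the chunks a sane size before handing it to bin.Serialize(). SQL Server 2000 has no .WRITE, so that version would still need a separate path (e.g. the older UPDATETEXT on an image column).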

    Read the article

  • Acceptable GC frequency for a SlimDX/Windows/.NET game?

    - by Rei Miyasaka
    I understand that the Windows GC is much better than the Xbox/WP7 GC, being that it's generational and multithreaded -- so I don't need to worry quite as much about avoiding memory allocation. SlimDX even has some unavoidable functions that generate some amount of garbage (specifically, MapSubresource creates DataBoxes), yet people don't seem to be too upset about it. I'd like to use some functional paradigms to write my code too, which also means creating objects like closures and monads. I know premature optimization isn't a good thing, but are there rules of thumb or metrics that I can follow to know whether I need to cut down on allocations? Is, say, one gen 0 GC per frame too much? One thing that has me stumped is object promotions. Gen 0 GCs will supposedly finish within a millisecond or two, but if I'm understanding correctly, it's the gen 1 and 2 promotions that start to hurt. I'm not too sure how I can predict/prevent these.
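
    There's no hard published threshold, so rather than guessing, it's cheap to measure how often each generation actually collects while the game runs. A minimal sketch using GC.CollectionCount - the frame-loop hook-up is assumed, and the once-per-second sampling interval is arbitrary:

        using System;
        using System.Diagnostics;

        // Prints per-generation GC counts once a second; call Sample() every frame.
        static class GcMeter
        {
            static readonly int[] last = new int[3];
            static readonly Stopwatch clock = Stopwatch.StartNew();

            public static void Sample()
            {
                if (clock.ElapsedMilliseconds < 1000)
                    return;
                clock.Restart();
                for (int gen = 0; gen <= 2; gen++)
                {
                    int now = GC.CollectionCount(gen);   // collections since process start
                    Console.WriteLine("gen {0}: {1}/sec", gen, now - last[gen]);
                    last[gen] = now;
                }
            }
        }

    As a rough rule of thumb (my reading, not an official number): if gen 2 stays near zero and gen 0 collections stay in the low tens per second, closures and the odd DataBox are unlikely to be felt; sustained gen 1/2 growth is the signal that allocations are being promoted and worth hunting down.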

    Read the article

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL's program objects occupy only an insignificant amount of memory. How much is this actually? I don't know whether the struct I looked up in Mesa is actually the one I was looking for, but it requires 16 KB [Edit: false - confusing struct names; less than 1 KB immediately, some more behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object in the scene? Or to share a single program object and set the scene object's custom variables just before its draw call?

    Read the article

  • Grails application hogging too much memory

    - by RN
    Tomcat 5.5.x and 6.0.x
    Grails 1.6.x
    Java 1.6.x
    OS: CentOS 5.x (64-bit)
    VPS server with 384M of memory
    export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
    I have created a blank Grails application, i.e. simply by giving the command grails create-app, and then WARed it. I am running Tomcat on a VPS server. When I simply start Tomcat, with no apps deployed, the free memory is about 236M and the used memory is about 156M. When I deploy my "blank" application, the memory consumption spikes to 360M, and finally the Tomcat instance is killed as soon as it takes up all free memory. As you can see, my app is as light as it can be. Not sure why the memory consumption is as high as it is. I am actually troubleshooting a real application, but have narrowed it down to this scenario, which is easier to share and explain.
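
    One mismatch stands out immediately (my observation, not from the post): the JVM is allowed to commit far more than the VPS has. -Xmx512M plus -XX:MaxPermSize=1024m permits roughly 1.5G of heap and permgen against 384M of physical memory, so the kernel kills Tomcat once usage grows into those limits. Something along these lines would at least fit the box - the exact numbers are guesses to tune:

        export JAVA_OPTS='-Xms64M -Xmx192M -XX:MaxPermSize=96m'

    A stock Grails app may still be tight in ~300M of JVM, but capping the JVM below physical memory is the first step to stopping the OOM kills.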

    Read the article

  • How can I give eclipse more memory than 512M?

    - by newbie
    I have the following setup, but when I replace all the 512 values with 1024, Eclipse won't start at all. How can I give my Eclipse JVM more than 512M of memory?

        -startup
        plugins/org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.0.200.v20090519
        -product
        com.springsource.sts.ide
        --launcher.XXMaxPermSize
        512M
        -vm
        C:\Program Files (x86)\Java\jdk1.6.0_18\bin\javaw
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Xms512m
        -Xmx512m
        -XX:MaxPermSize=512m
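
    A plausible cause (my guess; the question doesn't confirm it): this is a 32-bit JVM (the "x86" launcher and javaw under Program Files (x86)), and heap plus PermGen must be reserved as contiguous address space, so 1024m of heap next to 1024m of PermGen simply cannot be mapped and the JVM refuses to start. Raising only the heap and keeping PermGen modest usually works on 32-bit Windows:

        --launcher.XXMaxPermSize
        256M
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Xms512m
        -Xmx1024m
        -XX:MaxPermSize=256m

    Much beyond -Xmx1200m or so, a 32-bit Windows JVM tends to fail with "Could not reserve enough space for object heap"; a 64-bit JDK and Eclipse build is the way past that ceiling.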

    Read the article
