Search Results

Search found 13889 results on 556 pages for 'scratch memory'.

Page 24/556 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, motherboard) can manage. This led me to wonder what kind of raw memory bandwidth modern commodity machines can muster. My aging Core2Duo gives me 1.5 GB/s if I use one thread to memcpy() (and, understandably, less if I memcpy() with both cores simultaneously). While 1.5 GB/s is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second per cycle, which at that rate means about 30 MB. Basically, almost nothing. And perhaps worst of all, as I add cores, I can process a lot more data without any increase in performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth of current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it gets? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
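
    A rough way to put a number on a particular box is a minimal single-threaded memcpy() probe like the sketch below (an estimate only: it ignores NUMA placement, cache effects and timer resolution, and the 256 MB buffer size and iteration count are arbitrary choices):

        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <cstring>
        #include <vector>

        int main() {
            const std::size_t size = 256 * 1024 * 1024;      // 256 MB per buffer (arbitrary)
            const int iterations = 10;
            std::vector<char> src(size, 1), dst(size, 0);

            auto start = std::chrono::steady_clock::now();
            for (int i = 0; i < iterations; ++i)
                std::memcpy(dst.data(), src.data(), size);   // the copy being measured
            auto stop = std::chrono::steady_clock::now();

            double seconds = std::chrono::duration<double>(stop - start).count();
            double gigabytes = double(size) * iterations / (1024.0 * 1024.0 * 1024.0);
            std::printf("copied %.1f GB in %.3f s -> %.2f GB/s (dst[0]=%d)\n",
                        gigabytes, seconds, gigabytes / seconds, int(dst[0]));
            return 0;
        }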

    Read the article

  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in .NET (Out of Memory does not refer to physical memory, 597499), I thought I understood why a C# app would throw an out-of-memory exception -- until I started experimenting with two servers, both with 2.5 GB of RAM, Windows Server 2003, and identical programs running. The only significant difference between the two is that one has 7% of hard drive storage left and the other more than 50%. The server with 7% storage left consistently throws an out-of-memory exception while the other performs consistently well. My app is a C# web application that processes hundreds of MB of String objects. Why would this difference occur, given that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space? What solutions do you propose, and what do you say about the following: 1. turn on the /3GB switch to increase the virtual address space; 2. instead of using one giant string object, break it up into smaller pieces and collect them in a jagged array (here I have to find a way to return the result to the caller in some other way, as right now the return type is a string). Thanks, SO
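
    On point 2, the underlying idea is that many small allocations never need one huge contiguous stretch of address space the way a single giant string does. A language-neutral sketch of that chunking idea (written here in C++ purely for illustration, not in C#; the 64 KB chunk size is an arbitrary choice):

        #include <algorithm>
        #include <cstddef>
        #include <string>
        #include <vector>

        // Append-only text storage built from fixed-size chunks instead of one giant string.
        class ChunkedText {
        public:
            void append(const std::string& piece) {
                std::size_t offset = 0;
                while (offset < piece.size()) {
                    if (chunks_.empty() || chunks_.back().size() == kChunkSize)
                        chunks_.emplace_back();                      // start a fresh chunk
                    std::string& chunk = chunks_.back();
                    std::size_t room = kChunkSize - chunk.size();
                    std::size_t take = std::min(room, piece.size() - offset);
                    chunk.append(piece, offset, take);               // copy into the current chunk
                    offset += take;
                }
            }
            const std::vector<std::string>& chunks() const { return chunks_; }

        private:
            static constexpr std::size_t kChunkSize = 64 * 1024;     // arbitrary chunk size
            std::vector<std::string> chunks_;
        };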

    Read the article

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I have some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on an (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory. The question is: are there any good best practices for securing the key in memory? A few ideas: keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?); splitting the key over multiple memory locations; encrypting the key (with what, and how to keep the... key key... secure?); loading the key from a file each time it's required (slow, and if the evildoer can read our memory, he can probably read our files too). Some scenarios for why the key might leak: the evildoer getting hold of a memory dump/core dump; bad bounds checking in code leading to information leakage. The first one seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!
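
    A minimal sketch of the first idea - pinning the key so it can never be written to swap - using plain POSIX mlock() rather than SysV SHM_LOCK (error handling is reduced to the bare minimum, and the 32-byte key size is just an example):

        #include <cstdio>
        #include <cstring>
        #include <sys/mman.h>

        int main() {
            unsigned char key[32];                       // example key size

            if (mlock(key, sizeof key) != 0) {           // pin the page(s) holding the key
                std::perror("mlock");                    // may require CAP_IPC_LOCK / RLIMIT_MEMLOCK headroom
                return 1;
            }

            // ... load or derive the key and use it here ...

            std::memset(key, 0, sizeof key);             // wipe; real code should prefer a
                                                         // non-optimizable wipe such as explicit_bzero
            munlock(key, sizeof key);
            return 0;
        }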

    Read the article

  • How are external memory, internal memory, and cache organized?

    - by goldenmean
    Consider a system as follows: a hardware board with, say, an ARM Cortex-A8 and a NEON vector coprocessor, and embedded Linux running on the Cortex-A8. In this environment, if some application - say, a video decoder - is executing, then: How is it decided which buffers go in external memory and which ones are allocated in internal SRAM, etc.? When one calls calloc/malloc on such a system, the pointer returned points into which memory: internal or external? Can a user have buffers allocated in the memory of his choice (internal/external)? In ARM architectures there is another memory called tightly coupled memory (TCM). What is it, how can a user enable and use it, and can I declare buffers in this memory? Do I need to look at the memory map (if any) of the hardware board to understand all these different physical memories present on a typical board? How much of a role does the OS play in distinguishing these different memories? Sorry for the multiple questions, but I think they are all interlinked.
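
    For the "buffers in a memory of my choice" question, one user-space approach on embedded Linux is to map a physical region (for example on-chip SRAM, if the kernel has not claimed it) through /dev/mem. The sketch below assumes a purely hypothetical SRAM base address and size; in reality they come from the SoC's memory map. TCM is different again: it normally has to be enabled by boot code or a kernel driver via the ARM CP15 registers and is not usually reachable this way.

        #include <cstdio>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main() {
            const off_t  sram_phys = 0x40200000;     // hypothetical on-chip SRAM base address
            const size_t sram_size = 64 * 1024;      // hypothetical 64 KB window

            int fd = open("/dev/mem", O_RDWR | O_SYNC);
            if (fd < 0) { std::perror("open /dev/mem"); return 1; }

            void* sram = mmap(nullptr, sram_size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, sram_phys);
            if (sram == MAP_FAILED) { std::perror("mmap"); close(fd); return 1; }

            // sram now behaves like an ordinary buffer but lives in internal memory.
            munmap(sram, sram_size);
            close(fd);
            return 0;
        }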

    Read the article

  • How to use most of the memory available on MySQL

    - by Zilvinas
    I've got a MySQL server with both InnoDB and MyISAM tables. The InnoDB tablespace is quite small, under 4 GB. MyISAM is big, ~250 GB in total, of which 50 GB is indexes. Our server has 32 GB of RAM but it usually uses only ~8 GB. Our key_buffer_size is only 2 GB, yet our key cache hit ratio is ~95%. I find that hard to believe... Here are our key statistics:

        | Key_blocks_not_flushed | 1868        |
        | Key_blocks_unused      | 109806      |
        | Key_blocks_used        | 1714736     |
        | Key_read_requests      | 19224818713 |
        | Key_reads              | 60742294    |
        | Key_write_requests     | 1607946768  |
        | Key_writes             | 64788819    |

    key_cache_block_size is at the default of 1024. We have 52 GB of index data, and a 2 GB key cache is enough to get a 95% hit ratio. Is that possible? On the other hand, the data set is 200 GB, and since MyISAM relies on OS (CentOS) caching I would expect it to use a lot more memory to cache accessed MyISAM data. But at this stage I see that the key_buffer is completely used, and our InnoDB buffer pool size of 4 GB is also completely used; that adds up to 6 GB. Does that mean data is cached using just 1 GB? My question is: how can I check where all the free memory could be used? How can I check whether MyISAM data reads hit the OS cache instead of disk?
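
    A quick sanity check with the usual MyISAM formula, hit ratio = 1 - Key_reads / Key_read_requests, applied to the counters quoted above:

        1 - 60742294 / 19224818713  ≈  1 - 0.0032  ≈  0.9968   (about 99.7%)

    So the ~95% figure is plausible, if anything conservative: only the most frequently touched index blocks have to stay resident, so a 2 GB key cache over ~50 GB of indexes can still satisfy almost all index reads from memory.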

    Read the article

  • Where has the memory gone? (no, not buffers or cache)

    - by Marki
    Can anyone tell me where the memory has gone? (No, this time it is neither buffers nor cache.)

        # free
                     total       used       free     shared    buffers     cached
        Mem:       3928200    3868560      59640          0       2888      92924
        -/+ buffers/cache:    3772748     155452
        Swap:      4192956     226352    3966604

    top, sorted by memory, descending:

        top - 13:42:06 up 1 day, 3:47, 2 users, load average: 0.08, 0.12, 0.36
        Tasks: 228 total, 1 running, 227 sleeping, 0 stopped, 0 zombie
        Cpu0 : 2.0%us, 4.0%sy, 0.0%ni, 90.1%id, 0.0%wa, 0.0%hi, 4.0%si, 0.0%st
        Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 0.0%id, 100.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  3928200k total, 3868020k used, 60180k free, 2896k buffers
        Swap: 4192956k total, 226048k used, 3966908k free, 82068k cached

          PID USER    PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         3863 root    20  0  902m 199m 3296 S    7  5.2 99:08.77 ndsd
        21906 root    20  0  138m 9076 2988 S    0  0.2  0:00.02 sfcbd
         2332 root    20  0  126m 4660 1332 S    0  0.1  0:17.72 mono
         4243 wwwrun  20  0  683m 4468  668 S    0  0.1  0:07.38 java
         2994 root    20  0  202m 2288 1660 S    0  0.1  6:10.02 httpstkd
         4338 root    20  0  184m 2240 1112 S    0  0.1  0:00.52 namcd
        21898 root    20  0 32368 1832 1256 R    1  0.0  0:00.08 top

    In fact, some time ago the OOM killer kicked in and crashed the system (kernel panic), and I'm afraid we're not far from that point again....

    UPDATE

        # cat /proc/meminfo
        MemTotal:        3928200 kB
        MemFree:           51336 kB
        Buffers:            2964 kB
        Cached:            72876 kB
        SwapCached:        29128 kB
        Active:           233440 kB
        Inactive:          88040 kB
        Active(anon):     188920 kB
        Inactive(anon):    56752 kB
        Active(file):      44520 kB
        Inactive(file):    31288 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:       4192956 kB
        SwapFree:        3966824 kB
        Dirty:                32 kB
        Writeback:             0 kB
        AnonPages:        225112 kB
        Mapped:            11356 kB
        Shmem:                32 kB
        Slab:            1624080 kB
        SReclaimable:      13740 kB
        SUnreclaim:      1610340 kB
        KernelStack:        4176 kB
        PageTables:        10500 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     6157056 kB
        Committed_AS:    2397684 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      441372 kB
        VmallocChunk:   34359246755 kB
        HardwareCorrupted:     0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:       10240 kB
        DirectMap2M:     4184064 kB

    slabtop:

        Active / Total Objects (% used)    : 9041019 / 9207548 (98.2%)
        Active / Total Slabs (% used)      : 401132 / 401156 (100.0%)
        Active / Total Caches (% used)     : 91 / 159 (57.2%)
        Active / Total Size (% used)       : 1491537.88K / 1519791.56K (98.1%)
        Minimum / Average / Maximum Object : 0.02K / 0.17K / 4096.00K

           OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
        4240470 4240319  99%    0.12K 141349       30    565396K pid
        2245140 2219675  98%    0.25K 149676       15    598704K size-256
        2238090 2210087  98%    0.12K  74603       30    298412K size-128
        ...

    Read the article

  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running Apache, PHP and MySQL on CentOS, with a single Drupal website installed. The VPS has 256 MB of RAM (which could be the root cause of all my problems... maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8... not 800) all at once, Apache crashes! I have this in the log:

        [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx

    I have read many, many posts here, but I think there is something fundamental that I'm missing. If I understand correctly, some PHP script tried to allocate 200 KB after having already allocated 28 MB, and failed to do so. The first question is: should this cause Apache to crash? Next, I watched the 'top' command while running my little test. Indeed, I see 7 httpd processes, each reserving about 30 MB, which explains why my RAM runs out. How do I prevent Apache from creating new processes until it runs out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this:

        <IfModule prefork.c>
            StartServers          1
            MinSpareServers       1
            MaxSpareServers       1
            ServerLimit           1
            MaxClients            1
            MaxRequestsPerChild 100
        </IfModule>

    But I got the exact same result! What am I missing? Thanks a lot!

    Read the article

  • Updated my WAMP server and MySQL is eating up 580 MB of memory

    - by Jon
    I updated my dev box's WAMPSERVER, and along with PHP and Apache, MySQL was updated to 5.6.12. After doing that, I copied the data folder from my old (5.1.36) install to the new one, and now MySQL takes up 580 MB, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have MEMORY tables. How can I get this down to a decent amount? My my.ini:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Database info:

        Storage Engine   Data Size   Index Size   Total Size
        InnoDB           48.00 KB    0.00 B       48.00 KB
        MEMORY           0.00 B      0.00 B       0.00 B
        MyISAM           163.64 MB   122.49 MB    286.13 MB
        Total            163.69 MB   122.49 MB    286.18 MB

    Read the article

  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the possibilities of VMware ESXi memory overcommitment. I've read the paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure. I am deploying a virtual test lab comprising four identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines, with Windows XP, Windows 7 and Ubuntu for the workstation hosts, as well as CentOS and Windows Server 2008 instances for the servers. The problem, however, is that the host machine only has 12 GB of RAM and I don't have the option to stuff in more. I would like to know the best way to configure the virtual machines in order to achieve reasonable performance within these constraints. I see these options: allocate as little RAM as possible to each virtual machine; allocate an extraordinary amount (such as 4 GB per instance) and let the balloon driver do the rest; or something else? Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.

    Read the article

  • Windows 2008 R2 IIS worker process memory usage increase

    - by nLL
    I have a web site written in C#, with around 400-500 users online at any time. It ran on a Windows 2008 32-bit machine before and never locked up or slowed down due to increased memory consumption, up until I upgraded its server to Windows 2008 R2 64-bit. The old server had only 4 GB of RAM and a quad-core CPU at 2 GHz, and the site worked just fine. Since I upgraded the server I've noticed (twice within 10 days) that it has started to eat RAM; last night it went up to 4 GB. As RAM usage increases, response slows down quite a lot. Recycling the app pool doesn't help; I have to restart its worker process to recover. I've noticed this usually happens if there are continuous errors. As I didn't change anything in the code, am I safe to assume it is not related to a memory leak in the code? Has anyone come across something like this? The same thing happens if I generate continuous errors with classic ASP. Thanks.

    Read the article

  • 4GB Memory Upgrade for Acer Aspire 5102WLMi

    - by Richard Slater
    I bought a 4 GB memory upgrade (2x 2GB PC2-5300 SODIMM) for my Acer Aspire 5102WLMi (Aspire 5100 Series) laptop. I installed the two memory modules correctly; however, with 4 GB installed the laptop refuses to POST. I have tried the following: each 2GB SODIMM without the other (worked fine); the original 512MB SODIMMs (worked fine); one original 512MB SODIMM with one new 2GB SODIMM (worked fine); swapping the two 2GB SODIMMs between slots (didn't boot); leaving the computer for 10 minutes with both 2GB SODIMMs installed (didn't boot); and checking that the latest BIOS is installed (no change). The Crucial website says the laptop supports 4 GB of RAM, as do several other sites found through Google, so up until now I was fairly confident this would work. A couple of questions it would be good to have answered: Question: Has anyone got an Acer Aspire 5100 Series running with 4 GB of RAM? Answer: Yes, I now have one working with 3.75 GB usable; the rest is utilized by the graphics card. Question: Any tips on getting this to work; is there a CMOS reset switch? Answer: Yes, there is: if both SODIMMs are removed, two very small interlocking PCB tracks are revealed. If these are shorted together with a screwdriver, the BIOS will be reset. Thanks.

    Read the article

  • Allocating More Than 4 GB Of Memory

    - by TPatti
    I am facing an issue with memory allocation. I have: Host OS: Microsoft Windows XP Professional x64 Edition, Version 2003, Service Pack 2. Host physical memory: 8 GB. Guest OS: Red Hat Enterprise Linux WS release 4 (Nahant Update 5); I am not sure whether it is 32- or 64-bit. The lsb_release -a command reports LSB Version: core-3.0-ia32, so I guess that would be 32-bit... VMware Player version: 2.5.2 build-156735. I would like VMware Player to be able to allocate more than 4 GB, but when I go to the settings, it only lists 4 GB. If I choose the "About" option, it actually says that I have 8 GB installed in the host machine. This VMware image was created by someone else and provided to me, apparently with VMware Workstation 5. Why can't I allocate 8 GB? Where is the problem: in the VMware Player version, the guest OS, or the host OS? How can I solve this? I understand that for this version of Player there isn't one version for 32-bit and another for 64-bit.

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET'S Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • How to solve QPixmap::fromImage memory leak?

    - by dodoent
    Hello everyone! I have a problem with Qt. Here is the part of the code that troubles me:

        void FullScreenImage::QImageIplImageCvt(IplImage *input)
        {
            help = cvCreateImage(cvGetSize(input), input->depth, input->nChannels);
            cvCvtColor(input, help, CV_BGR2RGB);
            QImage tmp((uchar *)help->imageData, help->width, help->height,
                       help->widthStep, QImage::Format_RGB888);
            this->setPixmap(QPixmap::fromImage(tmp).scaled(this->size(),
                            Qt::IgnoreAspectRatio, Qt::SmoothTransformation));
            cvReleaseImage(&help);
        }

        void FullScreenImage::hideOnScreen()
        {
            this->hide();
            this->clear();
        }

        void FullScreenImage::showOnScreen(IplImage *slika, int delay)
        {
            QImageIplImageCvt(slika);
            this->showFullScreen();
            if (delay > 0)
                QTimer::singleShot(delay * 1000, this, SLOT(hideOnScreen()));
        }

    The method showOnScreen uses the private method QImageIplImageCvt to create a QImage from an IplImage (used by OpenCV), which is then used to create a QPixmap in order to show the image in full screen. The FullScreenImage class inherits QLabel. After some delay the full-screen picture should be hidden, so I use a QTimer to trigger an event; the event handler is the hideOnScreen method, which hides the label and should clear the memory. The problem is the following: whenever I call QPixmap::fromImage, it allocates memory for the pixmap data and copies the data from the QImage buffer to the QPixmap buffer. After the label is hidden, the QPixmap data remains allocated, and even worse, after the next QPixmap::fromImage call a new chunk of memory is allocated for the new picture while the old data is not freed. This causes a memory leak (circa 10 MB per call with my test pictures). How can I solve that leak? I've even tried creating a private QPixmap member, storing the pixmap created by QPixmap::fromImage in it, and then calling its destructor in the hideOnScreen method, but it didn't help. Is there a non-static way to create a QPixmap from a QImage? Or even better, is there a way to create a QPixmap directly from an IplImage*? Thank you in advance for your answers.

    Read the article

  • Does a CPU assign a value to memory atomically?

    - by Poni
    Hi! A quick question I've been wondering about for some time: does the CPU assign values atomically, or is it bit by bit (say, for example, a 32-bit integer)? If it's bit by bit, could another thread accessing that exact location get a "part" of the to-be-assigned value? Think of this: I have two threads and one shared "unsigned int" variable (call it "g_uiVal"). Both threads loop. One prints "g_uiVal" with printf("%u\n", g_uiVal). The second just increments this number. Will the printing thread ever print something that is not, or is only part of, "g_uiVal"'s value? In code:

        unsigned int g_uiVal;

        void thread_writer() { g_uiVal++; }

        void thread_reader() { while (1) printf("%u\n", g_uiVal); }
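
    For reference, a hedged sketch (not from the original post) of the same reader/writer pair using C++11 std::atomic, which removes both the tearing question and the data race - loads and stores of std::atomic<unsigned int> are atomic regardless of what the underlying hardware guarantees for plain unsigned int:

        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<unsigned int> g_uiVal{0};

        void thread_writer() {
            for (int i = 0; i < 1000; ++i)
                g_uiVal.fetch_add(1, std::memory_order_relaxed);      // atomic increment
        }

        void thread_reader() {
            for (int i = 0; i < 1000; ++i)
                std::printf("%u\n", g_uiVal.load(std::memory_order_relaxed));  // atomic read
        }

        int main() {
            std::thread w(thread_writer), r(thread_reader);
            w.join();
            r.join();
            return 0;
        }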

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates Memory requirement at compile time, when stored procedure or other plan caching mechanisms like sp_executesql or prepared statement are used, the memory requirement is estimated based on first set of execution parameters. This is a common reason for spill over tempdb and hence poor performance. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article which covers an introduction and Query memory for Sort. In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a query does not change significantly based on predicates.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts  Let’s create a Customer’s State table that has 99% of customers in NY and the rest 1% in WA.Customers table used in Part I of this article is also used here.To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'. --Example provided by www.sqlworkshops.com drop table CustomersState go create table CustomersState (CustomerID int primary key, Address char(200), State char(2)) go insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers update CustomersState set State = 'NY' where CustomerID % 100 != 1 update CustomersState set State = 'WA' where CustomerID % 100 = 1 go update statistics CustomersState with fullscan go   Let’s create a stored procedure that joins customers with CustomersState table with a predicate on State. --Example provided by www.sqlworkshops.com create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1) end go  Let’s execute the stored procedure first with parameter value ‘WA’ – which will select 1% of data. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' goThe stored procedure took 294 ms to complete.  The stored procedure was granted 6704 KB based on 8000 rows being estimated.  The estimated number of rows, 8000 is similar to actual number of rows 8000 and hence the memory estimation should be ok.  There was no Hash Warning in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Now let’s execute the stored procedure with parameter value ‘NY’ – which will select 99% of data. 
-Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  The stored procedure took 2922 ms to complete.   The stored procedure was granted 6704 KB based on 8000 rows being estimated.    The estimated number of rows, 8000 is way different from the actual number of rows 792000 because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘WA’ in our case. This underestimation will lead to spill over tempdb, resulting in poor performance.   There was Hash Warning (Recursion) in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Let’s recompile the stored procedure and then let’s first execute the stored procedure with parameter value ‘NY’.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts, www.sqlworkshops.com/webcasts for further details.   exec sp_recompile CustomersByState go --Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  Now the stored procedure took only 1046 ms instead of 2922 ms.   The stored procedure was granted 146752 KB of memory. The estimated number of rows, 792000 is similar to actual number of rows of 792000. Better performance of this stored procedure execution is due to better estimation of memory and avoiding spill over tempdb.   There was no Hash Warning in SQL Profiler.   Now let’s execute the stored procedure with parameter value ‘WA’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go  The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms.    This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) based on parameter value ‘NY’ for estimation (792000 rows) instead of parameter value ‘WA’ for estimation (8000 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘NY’ in this case. This overestimation leads to poor performance of this Hash Match operation, it might also affect the performance of other concurrently executing queries requiring memory and hence overestimation is not recommended.     The estimated number of rows, 792000 is much more than the actual number of rows of 8000.  Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined data range.Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, recompile) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 297 ms and 1102 ms in line with previous optimal execution times.   The stored procedure with parameter value ‘WA’ has good estimation like before.   Estimated number of rows of 8000 is similar to actual number of rows of 8000.   
The stored procedure with parameter value ‘NY’ also has good estimation and memory grant like before because the stored procedure was recompiled with current set of parameter values.  Estimated number of rows of 792000 is similar to actual number of rows of 792000.    The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.   There was no Hash Warning in SQL Profiler.   Let’s recreate the stored procedure with optimize for hint of ‘NY’. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, optimize for (@State = 'NY')) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 353 ms with parameter value ‘WA’, this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The stored procedure with parameter value ‘NY’ has optimal execution time like before.   The stored procedure with parameter value ‘WA’ has overestimation of rows because of optimize for hint value of ‘NY’.   Unlike before, more memory was estimated to this stored procedure based on optimize for hint value ‘NY’.    The stored procedure with parameter value ‘NY’ has good estimation because of optimize for hint value of ‘NY’. Estimated number of rows of 792000 is similar to actual number of rows of 792000.   Optimal amount memory was estimated to this stored procedure based on optimize for hint value ‘NY’.   There was no Hash Warning in SQL Profiler.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. 
When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.  Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Why does jQuery leak memory so badly?

    - by Thomas Lane
    This is kind of a follow-up to a question I posted last week: http://stackoverflow.com/questions/2429056/simple-jquery-ajax-call-leaks-memory-in-ie I love the jquery syntax and all of its nice features, but I've been having trouble with a page that automatically updates table cells via ajax calls leaking memory. So I created two simple test pages for experimenting. Both pages do an ajax call every .1 seconds. After each successful ajax call, a counter is incremented and the DOM is updated. The script stops after 1000 cycles. One uses jquery for both the ajax call and to update the DOM. The other uses the Yahoo API for the ajax and does a document.getElementById(...).innerHTML to update the DOM. The jquery version leaks memory badly. Running in drip (on XP Home with IE7), it starts at 9MB and finishes at about 48MB, with memory growing linearly the whole time. If I comment out the line that updates the DOM, it still finishes at 32MB, suggesting that even simple DOM updates leak a significant amount of memory. The non-jquery version starts and finishes at about 9MB, regardless of whether it updates the DOM. Does anyone have a good explanation of what is causing jquery to leak so badly? Am I missing something obvious? Is there a circular reference that I'm not aware of? Or does jquery just have some serious memory issues? Here is the source for the leaky (jquery) version: <html> <head> <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load('jquery', '1.4.2'); </script> <script type="text/javascript"> var counter = 0; leakTest(); function leakTest() { $.ajax({ url: '/html/delme.x', type: 'GET', success: incrementCounter }); } function incrementCounter(data) { if (counter<1000) { counter++; $('#counter').text(counter); setTimeout(leakTest,100); } else $('#counter').text('finished.'); } </script> </head> <body> <div>Why is memory usage going up?</div> <div id="counter"></div> </body> </html> And here is the non-leaky version: <html> <head> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/yahoo/yahoo-min.js"></script> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/event/event-min.js"></script> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/connection/connection_core-min.js"></script> <script type="text/javascript"> var counter = 0; leakTest(); function leakTest() { YAHOO.util.Connect.asyncRequest('GET', '/html/delme.x', {success:incrementCounter}); } function incrementCounter(o) { if (counter<1000) { counter++; document.getElementById('counter').innerHTML = counter; setTimeout(leakTest,100); } else document.getElementById('counter').innerHTML = 'finished.' } </script> </head> <body> <div>Memory usage is stable, right?</div> <div id="counter"></div> </body> </html>

    Read the article

  • Is special memory required for a MacBook Pro?

    - by user38900
    I have a MacBook Pro (MacBookPro5,2 / 2.8 GHz) with 4 GB of RAM (2x2GB). I'm looking to upgrade to 8 GB. The memory in it now is DDR3 PC3-8500 1067. Checking prices for 4 GB sticks of PC3-8500, there is about a $100 difference for "Apple certified" RAM. Will any DDR3 PC3-8500 module work, or is there really a difference?

    Read the article

  • How memory hungry is Jetty?

    - by Sanoj
    I am planning to use Jetty + MySQL on a small VPS with just 256 or 512 MB of memory, for serving a few websites. I haven't used Jetty before, only PHP on shared hosting. Is 256 or 512 MB too limited for a Jetty server, or should I go with an nginx + PHP + PHP-FPM setup instead? The websites will not have much traffic; they are just small sites.

    Read the article

  • MySQL: Load database to memory

    - by Adam Matan
    Hi, is there a way to load an entire MySQL database into RAM, especially on an EC2 server? The database is quite small (~500 MB), I have enough memory, and speed is crucial - the resulting queries are used to serve a dynamic web page. Thanks, Adam

    Read the article
