Search Results

Search found 10215 results on 409 pages for 'ram usage'.


  • iPhone - Memory Management - Using Leaks tool and getting some bizarre readings.

    - by Robert
    Hey all, I'm putting the finishing touches on a project of mine, so I figured I would run through it and see if and where I had any memory leaks. I found and fixed most of them, but there are a couple of things regarding the leaks and object allocations that I am confused about.

    1) There are two memory leaks that do not show me as responsible. Eight leaks are attributed to AudioToolbox, with the function being RegisterEmbeddedAudioCodecs(); this accounts for about 1.5 KB of leaks. The other one is detected immediately when the app begins: Core Graphics is responsible, with the extra info being open_handle_to_dylib_path. For the audio leak I have looked over my audio code, and to me it seems OK:

        self.musicPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:songFilePath] error:NULL];
        [musicPlayer prepareToPlay];

    [musicPlayer play] is called later on in a function.

    2) Is it normal for there to be a spike in object allocation whenever a new view or view controller is presented? My total memory usage is very, very low except for whenever I present a view controller: it spikes, then immediately goes back down. I am guessing that this is just the phone handling all the information for the switch. Thanks in advance to anyone who helps!


  • Can the Singleton be replaced by Factory?

    - by lostiniceland
    Hello everyone. There are already quite a few posts about the Singleton pattern, but I would like to start another one on this topic, since I would like to know whether the Factory pattern is the right approach for removing this "anti-pattern". In the past I used the singleton quite a lot, as did my fellow colleagues, since it is so easy to use. The Eclipse IDE, for example - or rather its workbench model - makes heavy use of singletons as well. It was some posts about E4 (the next big Eclipse version) that made me start to rethink the singleton; the bottom line was that because of these singletons the dependencies in Eclipse 3.x are tightly coupled.

    Let's assume I want to get rid of all singletons completely and use factories instead. My thoughts were as follows:

    - a factory hides complexity
    - there is less coupling
    - I have control over how many instances are created (just store the reference in a private field of the factory)
    - the factory can be mocked for testing (with dependency injection) when it sits behind an interface
    - in some cases one factory can make more than one singleton obsolete (depending on business logic / component composition)

    Does this make sense? If not, please give good reasons why you think so. An alternative solution is also appreciated. Thanks, Marc
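
    A minimal sketch of the factory-instead-of-singleton idea (all type names here are hypothetical, chosen only to illustrate the shape - C# syntax, but the same structure applies in Java):

        public interface IConfigurationService
        {
            string Get(string key);
        }

        public class ConfigurationServiceFactory
        {
            // The factory, not the class itself, decides how many instances
            // exist: here it caches exactly one, giving singleton *behaviour*
            // without a global static accessor. (Not thread-safe as written;
            // a real factory would guard this with a lock.)
            private IConfigurationService _instance;

            public virtual IConfigurationService Create()
            {
                if (_instance == null)
                {
                    _instance = new FileConfigurationService(); // hypothetical implementation
                }
                return _instance;
            }
        }

    Because clients receive the factory (or the interface it returns) via dependency injection rather than calling a static Instance property, a test can substitute a mock factory that returns a stub, and the "exactly one instance" policy stays a private detail of the factory.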


  • Book recommendation for Silverlight

    - by Mathias Weyel
    Hi there, yet another request for recommendations for a book on Silverlight. I'm looking for a book that covers the UI and styling and, if possible, custom drawing and graphics. Very important for me is the style of the book: it should focus on the actual programming and not on where to click in Visual Studio to get things done. Let's take a fictional example of proper usage of the DataGrid control:

    Bad: "To use the data grid, drag it from the toolbox onto the control. You can change the background color by clicking on 'Background' in the properties. To define custom columns, click on columns and edit them in the configuration window that opens."

    Good: "To use the DataGrid, you need a reference to the blah DLL and must declare the namespace in the XAML like this (blah); the data model should look like blah, and if you want to define what the columns look like, you need to define them like this (more blah). And if you want to do this in C# because for whatever reason you aren't able or willing to use XAML, it would look like blah."

    Bonus points for coverage of topics like how to manage resources (images/fonts) and internationalization. There are quite a few snippets on how to do that on the internet, but somehow each of them looks like it works without being a proper way of doing it. Cheers, Mathias


  • Which language should I use to program a GUI application?

    - by Roman
    I would like to write a GUI application for managing information (text documents). In more detail, it should be similar to TiddlyWiki. I would like to have some good visual effects in there (like a nice representation of tree structures that you can rotate, and some sound). I would also like to include some communication via the Internet (for sharing and collaboration). It should include some features of applications such as a web browser, a word processor, and Skype.

    Which programming language should I use? I like the idea of using JavaScript (like TiddlyWiki). The good thing about that is that users do not have to install anything: they open a file in a browser and it works! The bad thing is that JavaScript cannot communicate over the Internet with other applications.

    I think the choice of programming language, in my case, is conditioned by two things:

    1. What can be done with the language (what restrictions there are).
    2. How easy it is to program in. I would like to have "blocks" that can do a lot of things (rather than having to program them myself and, in this way, reinvent the wheel).

    ADDED: I would like to make it platform independent.


  • Splitting a build across network machines?

    - by Dandikas
    Is there a known solution for splitting the build process across machines on the network?

    Use case: We are an average software development company. We own around 50 development workstations (Quad Core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any given moment not every machine is loaded to the max, and there are 5 to 15 projects running simultaneously at any given moment. Obviously all of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes.

    The problem: Whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!" Yeah, right (damn them)!

    Anyway, what about splitting the build among developer workstations? Say, whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded (see the sketch below). The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me.

    Has anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .NET C#.)
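
    A rough sketch of the "pick the idle workstations" step, assuming the workstations expose the standard Windows performance counters over the network and the caller has permission to read them (the 50% threshold and all names are made up for illustration):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Threading;

        static class BuildAgentPicker
        {
            // Returns the workstations whose CPU load is currently below the threshold.
            public static List<string> FindIdleAgents(IEnumerable<string> machines, float maxCpuPercent)
            {
                var idle = new List<string>();
                foreach (string machine in machines)
                {
                    using (var cpu = new PerformanceCounter(
                        "Processor", "% Processor Time", "_Total", machine))
                    {
                        cpu.NextValue();          // the first sample of a rate counter is always 0
                        Thread.Sleep(1000);       // sample over one second
                        if (cpu.NextValue() < maxCpuPercent)
                            idle.Add(machine);
                    }
                }
                return idle;
            }
        }

    In practice a dedicated tool would handle the scheduling, cancellation, and artifact collection; a CI server that supports multiple build agents (TeamCity, for example) covers much of this scenario out of the box.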


  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS packages are slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore no foreign keys), indexes (clustered or otherwise), or constraints on this warehouse. In other words, it is 100% efficiency free.

    We are going to add indexes based on usage, by analyzing new queries and current query performance. So, instead of doing it our old-fashioned sweat-and-grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package, ran a "Tuning" trace, saved it to a table, and analyzed the output in the Tuning Advisor. Most of the lookups are done as:

        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4

    and when analyzed, these statements come back with the reason "Event does not reference any tables". Huh? Does it not see the FROM [dbo].[Company]?! What is going on here?

    So, I have multiple questions:

    1. How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch?
    2. Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?


  • jmap -histo is missing a lot of memory

    - by ripper234
    I have a JVM with 12 GB of total RAM, of which 7 GB is allocated to the old generation. There seems to be some memory leak, because almost the entire old gen is full and is not released when I trigger a GC (the process is not doing anything else at that time). A jmap -histo dump reveals less than 1 GB worth of objects. Where are the missing 6 GB? What better tool would you propose for diagnosing this?

    Here is the top of the jmap output:

         num   #instances      #bytes  class name
        ------------------------------------------------------
           1:      429853    68725736  <constMethodKlass>
           2:      429853    51594040  <methodKlass>
           3:       37503    49611368  <constantPoolKlass>
           4:       37503    31109576  <instanceKlassKlass>
           5:      191716    28019968  [C
           6:       32573    26933152  <constantPoolCacheKlass>
           7:       86158    13789560  [I
           8:       53532    11244232  [B
           9:         284    10507216  [J
          10:      137608     7210664  <symbolKlass>
          11:      203072     6498304  java.lang.String
          12:       10132     5219512  <methodDataKlass>
          13:       39694     4128176  java.lang.Class
          14:       55713     3792816  [S
          15:       61816     3141936  [[I
          16:       90109     2883488  java.util.HashMap$Entry


  • Lightweight alternative to Manual/AutoResetEvent in C#

    - by sweetlilmre
    Hi, I have written what I hope is a lightweight alternative to using the ManualResetEvent and AutoResetEvent classes in C#/.NET. The reasoning behind this was to have Event-like functionality without the weight of a kernel locking object. Although the code seems to work well in both testing and production, getting this kind of thing right for all possibilities can be a fraught undertaking, and I would humbly request constructive comments and/or criticism from the Stack Overflow crowd. Hopefully (after review) this will be useful to others. Usage should be similar to the Manual/AutoResetEvent classes, with Notify() used in place of Set(). Here goes:

        using System;
        using System.Threading;

        public class Signal
        {
            private readonly object _lock = new object();
            private readonly bool _autoResetSignal;
            private bool _notified;

            public Signal() : this(false, false) { }

            public Signal(bool initialState, bool autoReset)
            {
                _autoResetSignal = autoReset;
                _notified = initialState;
            }

            public virtual void Notify()
            {
                lock (_lock)
                {
                    // first time?
                    if (!_notified)
                    {
                        // set the flag
                        _notified = true;
                        // unblock a thread which is waiting on this signal
                        Monitor.Pulse(_lock);
                    }
                }
            }

            public void Wait()
            {
                Wait(Timeout.Infinite);
            }

            public virtual bool Wait(int milliseconds)
            {
                lock (_lock)
                {
                    bool ret = true;
                    // this check needs to be inside the lock, otherwise you can get
                    // nailed by a race condition where the notify thread sets the flag
                    // AFTER the waiting thread has checked it, then acquires the lock
                    // and does the pulse before the Monitor.Wait below - when this
                    // happens the caller will wait forever, as he "just missed" the
                    // only pulse which is ever going to happen
                    if (!_notified)
                    {
                        ret = Monitor.Wait(_lock, milliseconds);
                    }
                    if (_autoResetSignal)
                    {
                        _notified = false;
                    }
                    return ret;
                }
            }
        }
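
    To make the intended drop-in usage concrete, here is a small hedged example (mine, not the original poster's) of using Signal the way one would use an AutoResetEvent, with Notify() standing in for Set():

        using System.Threading;

        class SignalDemo
        {
            static readonly Signal Done = new Signal(false, true); // not signalled, auto-reset

            static void Main()
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // ... do the background work ...
                    Done.Notify();           // the Set() equivalent
                });

                if (Done.Wait(5000))         // like WaitOne(5000)
                {
                    // work finished within 5 seconds; the auto-reset flag means
                    // the signal is already cleared for the next round
                }
            }
        }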


  • Existing LINQ extension method similar to Parallel.For?

    - by Joel Martinez
    The LINQ extension methods for IEnumerable are very handy... but not that useful if all you want to do is apply some computation to each item in the enumeration without returning anything. So I was wondering whether I was just missing the right method, or whether it truly doesn't exist, as I'd rather use a built-in version if one is available... but I haven't found one :-) I could have sworn there was a .ForEach method somewhere, but I have yet to find it. In the meantime, I did write my own version in case it's useful for anyone else:

        using System.Collections;
        using System.Collections.Generic;

        public delegate void Function<T>(T item);
        public delegate void Function(object item);

        public static class EnumerableExtensions
        {
            public static void For(this IEnumerable enumerable, Function func)
            {
                foreach (object item in enumerable)
                {
                    func(item);
                }
            }

            public static void For<T>(this IEnumerable<T> enumerable, Function<T> func)
            {
                foreach (T item in enumerable)
                {
                    func(item);
                }
            }
        }

    Usage is:

        myEnumerable.For<MyClass>(delegate(MyClass item) { item.Count++; });
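
    For what it's worth, the BCL does ship List<T>.ForEach, but there is no equivalent among the IEnumerable<T> LINQ operators. A slightly leaner variant of the helper above can reuse the built-in Action<T> delegate instead of declaring a custom one, which also lets callers use lambda syntax; this is a sketch of that alternative, not part of the original question:

        using System;
        using System.Collections.Generic;

        public static class EnumerableForEachExtension
        {
            // Same behaviour as For<T> above, but built on Action<T>,
            // so no custom delegate type is needed.
            public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
            {
                foreach (T item in source)
                {
                    action(item);
                }
            }
        }

        // usage: myEnumerable.ForEach(item => item.Count++);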


  • Python - Why use anything other than uuid4() for unique strings?

    - by orokusaki
    I see quite a few implementations of unique-string generation for things like uploaded image names, session IDs, et al., and many of them employ hashes like SHA1 or others. I'm not questioning the legitimacy of using custom methods like this, but rather just the reason. If I want a unique string, I just say this:

        >>> import uuid
        >>> uuid.uuid4()
        07033084-5cfd-4812-90a4-e4d24ffb6e3d

    And I'm done with it. I wasn't very trusting before I read up on uuid, so I did this:

        >>> import uuid
        >>> s = set()
        >>> for i in range(5000000):  # That's 5 million!
        ...     s.add(uuid.uuid4())
        ...
        >>> len(s)
        5000000

    Not one repeat (I didn't expect one, considering the odds are like 1 in 1.108e+50, but it's comforting to see it in action). You could even halve the odds of a collision by combining two uuid4()s into one string. So, with that said, why do people spend time on random() and other approaches for unique strings? Is there an important security issue or other concern regarding uuid?


  • Why Doesn't UIWebView release all of its memory?

    - by Theory
    I have a graphics-intensive iPad app that features a UIWebView. Using the simulator (iOS 4.2.1), I can see Real Mem increase quite a lot as I browse; the more I browse, the more RAM it uses. When I close the UIWebView and release it, some of the memory it used is freed, but not all of it. This is annoying.

    Okay, so maybe it's because it isn't deallocated right away. Fine. But then I would expect the system to do some cleanup when there's a memory warning. However, if I browse around, then close the UIWebView (and release it), then trigger a memory warning in the simulator, Real Mem does not change! WTF?

    So why is this? Why isn't UIWebView better at releasing memory back to the system? And why doesn't it appear to respond to memory warnings? Am I missing something?


  • Freeing memory occupied by std::list, std::vector, std::map, etc.

    - by Graviton
    Coming from a C# background, I have only the vaguest idea of memory management in C++ - all I know is that I have to free memory manually. As a result, my C++ code is written in such a way that objects of type std::vector, std::list, and std::map are freely instantiated and used, but never freed. I didn't realize this until I was almost done with my program; now my code consists of the following kinds of patterns:

        struct Point_2 {
            double x;
            double y;
        };

        struct Point_3 {
            double x;
            double y;
            double z;
        };

        list<list<Point_2>> Computation::ComputationJob(list<Point_3> pts3D, vector<Point_2> vectors)
        {
            map<Point_2, double> pt2DMap = ConstructPointMap(pts3D);
            vector<Point_2> vectorList = ConstructVectors(vectors);
            list<list<Point_2>> faceList2D = ConstructPoints(vectorList, pt2DMap);
            return faceList2D;
        }

    My question is: must I free every single one of the container usages (in the above example, that would mean freeing pt2DMap, vectorList and faceList2D)? That would be very tedious! I might just as well rewrite my Computation class so that it is less prone to memory leaks. Any idea how to fix this?


  • How do I sign requests reliably for the Last.fm api in C#?

    - by Arda Xi
    I'm trying to implement authorization through Last.fm. I'm submitting my arguments as a Dictionary to make the signing easier. This is the code I'm using to sign my calls:

        public static string SignCall(Dictionary<string, string> args)
        {
            IOrderedEnumerable<KeyValuePair<string, string>> sortedArgs = args.OrderBy(arg => arg.Key);
            string signature = sortedArgs.Select(pair => pair.Key + pair.Value)
                                         .Aggregate((first, second) => first + second);
            return MD5(signature + SecretKey);
        }

    I've checked the output in the debugger and it's exactly how it should be; however, I'm still getting WebExceptions every time I try. Here's the code I use to generate the URL, in case it helps:

        public static string GetSignedURI(Dictionary<string, string> args, bool get)
        {
            var stringBuilder = new StringBuilder();
            if (get)
                stringBuilder.Append("http://ws.audioscrobbler.com/2.0/?");
            foreach (var kvp in args)
                stringBuilder.AppendFormat("{0}={1}&", kvp.Key, kvp.Value);
            stringBuilder.Append("api_sig=" + SignCall(args));
            return stringBuilder.ToString();
        }

    And sample usage to get a SessionKey:

        var args = new Dictionary<string, string>
        {
            {"method", "auth.getSession"},
            {"api_key", ApiKey},
            {"token", token}
        };
        string url = GetSignedURI(args, true);

    EDIT: Oh, and the code references an MD5 function implemented like this:

        public static string MD5(string toHash)
        {
            byte[] textBytes = Encoding.UTF8.GetBytes(toHash);
            var cryptHandler = new System.Security.Cryptography.MD5CryptoServiceProvider();
            byte[] hash = cryptHandler.ComputeHash(textBytes);
            return hash.Aggregate("", (current, a) => current + a.ToString("x2"));
        }


  • Why doesn't this jQuery snippet work in IE8 like it does in Firefox or Chrome? (live demo included)

    - by Siracuse
    I asked for help earlier on Stack Overflow with highlighting all spans that share a class when the mouse hovers over any span with that class: http://stackoverflow.com/questions/2709686/how-can-i-add-a-border-to-all-the-elements-that-share-a-class-when-the-mouse-has It is working great:

        $('span[class]').hover(
            function() { $('.' + $(this).attr('class')).css('background-color', 'green'); },
            function() { $('.' + $(this).attr('class')).css('background-color', 'yellow'); }
        );

    Here is an example of it in use: http://dl.dropbox.com/u/638285/0utput.html

    However, it doesn't appear to work properly in IE8, while it DOES work in Chrome/Firefox. I took a screenshot in IE8 with my mouse hovered over the " min) { min" section in the middle. IE8 highlighted the span under the mouse perfectly fine; however, it also highlighted some random spans above and below it that don't have the same class! Only the spans with the same class as the one under the mouse should be highlighted green, so in that screenshot only the middle green section should be green. A screenshot from Firefox/Chrome with the mouse in the exact same position shows it working properly: the span under the mouse (the green section) is the only one in that section that shares its class.

    Why is IE8 green-highlighting spans that don't share the same class when using my little jQuery snippet? Again, if you want to see it live, I have it here: http://dl.dropbox.com/u/638285/0utput.html


  • question about MySQL database migration

    - by WilliamLou
    Hi there. I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration involves some changes to the tables, for example: adding some new columns to several tables, adding some new tables, etc.

    The only method I can think of is to use a PHP or Python script (the two languages I know): connect to both databases, dump the data from the old database, and then write it into the new one. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, where the extra column should have the default value 0 for all the old rows. My script would still need to dump the data row by row and insert each row into the new database.

    Is there any tool, or a better method, than writing such a script yourself? I don't need to worry about concurrent-write problems here - the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!


  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi all, I'm developing an iPhone app using Core Data, and I'm looking for some general advice on whether it's acceptable to pass data between view controllers, versus doing a local fetch in each view controller as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance), but the passing-data approach is so prevalent in my app, and I'm spooked by the stories about Apple rejecting apps for not conforming to their standard guidelines. So let me put it another way: is it non-standard to pass data between VCs?

    The reason I pass data so much is that each view controller is just another view onto data already present in my object model/graph. Once I have a handle on the first object in the first view controller (which I of course do have to fetch), I can use the existing object composition/relationships to drill down into the next level of detail, so I just pass these objects to the next VC.

    Separately, one possible downside of this passing-data-to-each-VC approach is that I don't benefit from the optimisations NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only, but I do have one table with 5000 rows, and I'm curious whether I am missing out on the NSFetchedResultsController benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (given that I have already passed in the data from my previous VC)?

    Thanks a lot.


  • Can a stateless WCF service benefit from built-in database connection pooling?

    - by vladimir
    I understand that a typical .NET application that accesses a SQL Server database doesn't have to do anything in particular to benefit from connection pooling. Even if an application repeatedly opens and closes database connections, they get pooled by the framework (assuming that things such as credentials do not change from call to call).

    My usage scenario seems to be a bit different. When my service gets instantiated, it opens a database connection once, does some work, closes the connection, and returns the result. Then it gets torn down by WCF, and the next incoming call creates a new instance of the service. In other words, my service is instantiated per client call, as in [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]. The service accesses a SQL Server 2008 database, and I'm using .NET Framework 3.5 SP1.

    Does connection pooling still work in this scenario, or do I need to roll my own connection pool in the form of a singleton or by some other means (IInstanceContextProvider?)? I would rather avoid reinventing the wheel, if possible.
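
    For context, a hedged sketch of the per-call pattern in question (service shape, connection string, and table are invented for illustration). ADO.NET pools connections per connection string at the process level, independently of WCF instancing, so Dispose() at the end of each call returns the physical connection to the pool rather than tearing it down:

        using System.Data.SqlClient;
        using System.ServiceModel;

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            string GetCustomerName(int id);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        public class CustomerService : ICustomerService
        {
            private const string ConnectionString =
                "Data Source=.;Initial Catalog=Shop;Integrated Security=True"; // hypothetical

            public string GetCustomerName(int id)
            {
                // A fresh service instance runs this for every call, but the
                // underlying physical connection is reused from the pool.
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Name FROM Customers WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    conn.Open();
                    return (string)cmd.ExecuteScalar();
                }
            }
        }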


  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a web test which makes a simple call to a web service, looking like this:

        MyWebService webService = new MyWebService();
        webService.Timeout = 180000;
        webService.myMethod();

    I am not using think times, and the run duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this:

        Tests Total: 4500
        Network Interface\Bytes Sent (agent machine): 35,500

    Then I ran the same test, but this time simulating 2 users, and got something like this:

        Tests Total: 2225
        Network Interface\Bytes Sent (agent machine): 30,500

    So when I increased the number of users, the tests/sec was about half of what it was with 1 user, and the bytes sent by the agent were also lower. I find this strange, because it doesn't seem that I have a bottleneck on my agent machine: CPU never goes above 30%, I have over 1.5 GB of RAM free, and my network utilization is around 0.5% of its capacity.

    To troubleshoot this, I ran a test using a step pattern, taking the simulated users from 20 to 800. The requests/sec stayed practically constant through the whole test, so clearly something in my test or my environment is preventing the number of requests from getting higher. It would be expected behaviour if the response time were getting higher, because that would tell me the requests weren't being processed properly, but the strange thing is that the response time is practically constant the whole time, and actually pretty low.

    I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.


  • JavaScript: force GC collection / forcefully free an object?

    - by plash
    I have a JS function for playing any given sound using the Audio interface (creating a new instance for every call). This works quite well until about the 32nd call (sometimes fewer). The issue is directly related to the release of the Audio instances: I know this because if I allow time for the GC in Chromium to run, it will let me play another 32 or so sounds again. Here's an example of what I'm doing:

        <html><head>
        <script language="javascript">
        function playSound(url) {
            snd = new Audio(url);
            snd.play();
            delete snd;
            snd = null;
        }
        </script>
        </head>
        <body>
        <a href="#" onclick="playSound('blah.mp3');">Play sound</a>
        </body></html>

    I also have this, which works well for pages that trigger fewer than 32 playSound calls:

        var AudioPlayer = {
            cache: {},
            play: function(url) {
                if (!AudioPlayer.cache[url])
                    AudioPlayer.cache[url] = new Audio(url);
                AudioPlayer.cache[url].play();
            }
        };

    But this will not work for what I want to do (dynamically replace a div with other content from separate files, which has even more sounds on it): 1. memory usage would easily skyrocket, and 2. many sounds would never play. I need a way to release each sound immediately. Is that possible? I have found no free/close/unload method on the Audio interface. (The pages will be viewed locally, so the constant loading of sounds is not a big factor at all, and most sounds are rather short.)


  • Qt/C++, Problems with large QImage

    - by David Günzel
    I'm pretty new to C++/Qt, and I'm trying to create an application with Visual Studio C++ and Qt (4.8.3). The application displays images using a QGraphicsView, and I need to change the images at the pixel level. The basic code is (simplified):

        QImage* img = new QImage(img_width, img_height, QImage::Format_RGB32);
        while (do_some_stuff) {
            img->setPixel(x, y, color);
        }
        QGraphicsPixmapItem* pm = new QGraphicsPixmapItem(QPixmap::fromImage(*img));
        QGraphicsScene* sc = new QGraphicsScene;
        sc->setSceneRect(0, 0, img->width(), img->height());
        sc->addItem(pm);
        ui.graphicsView->setScene(sc);

    This works well for images up to around 12000x6000 pixels. The weird thing happens beyond this size. When I set img_width = 16000 and img_height = 8000, for example, the img = new QImage(...) line returns a null image. The image data should be around 512,000,000 bytes, so it shouldn't be too large, even on a 32-bit system; also, my machine (Win 7 64-bit, 8 GB RAM) should be capable of holding the data. I've also tried this version:

        uchar* imgbuf = (uchar*) malloc(img_width * img_height * 4);
        QImage* img = new QImage(imgbuf, img_width, img_height, QImage::Format_RGB32);

    At first this works: the img pointer is valid, and calling img->width(), for example, returns the correct image width (instead of 0, as it does when the image pointer is null). But as soon as I call img->setPixel(), the pointer becomes null and img->width() returns 0.

    So what am I doing wrong? Or is there a better way of modifying large images at the pixel level? Regards, David


  • How can exec change the behavior of the exec'ed program?

    - by R Samuel Klatchko
    I am trying to track down a very odd crash. What is so odd about it is a workaround that someone discovered and which I cannot explain. The workaround is this small program, which I'll refer to as 'runner':

        #include <stdio.h>
        #include <unistd.h>
        #include <string.h>
        #include <errno.h>

        int main(int argc, char *argv[])
        {
            if (argc == 1) {
                fprintf(stderr, "Usage: %s prog [args ...]\n", argv[0]);
                return 1;
            }

            execvp(argv[1], argv + 1);

            fprintf(stderr, "execv failed: %s\n", strerror(errno));
            // If exec returns because the program is not found or we
            // don't have the appropriate permission
            return 255;
        }

    As you can see, all this program does is use execvp to replace itself with a different program. The program under investigation crashes when it is directly invoked from the command line:

        /path/to/prog args                  # this crashes

    but works fine when it is indirectly invoked via my runner shim:

        /path/to/runner /path/to/prog args  # works successfully

    For the life of me, I can't figure out how having an extra exec can change the behavior of the program being run (as you can see, the runner does not change the environment).

    Some background on the crash: it happens in the C++ runtime. Specifically, when the program does a throw, the crashing version incorrectly thinks there is no matching catch (although there is one) and calls terminate. When I invoke the program via runner, the exception is properly caught. My question is: any idea why the extra exec changes the behavior of the exec'ed program?


  • Do fluent interfaces significantly impact runtime performance of a .NET application?

    - by stakx
    I'm currently occupying myself with implementing a fluent interface for an existing technology, which would allow code similar to the following snippet:

        using (var directory = Open.Directory(@"path\to\some\directory"))
        {
            using (var file = Open.File("foobar.html").In(directory))
            {
                // ...
            }
        }

    In order to implement such constructs, classes are needed that accumulate arguments and pass them on to other objects. For example, to implement the Open.File(...).In(...) construct, you would need two classes:

        // handles 'Open.XXX':
        public static class OpenPhrase
        {
            // handles 'Open.File(XXX)':
            public static OpenFilePhrase File(string filename)
            {
                return new OpenFilePhrase(filename);
            }

            // handles 'Open.Directory(XXX)':
            public static DirectoryObject Directory(string path)
            {
                // ...
            }
        }

        // handles 'Open.File(XXX).XXX':
        public class OpenFilePhrase
        {
            internal OpenFilePhrase(string filename)
            {
                _filename = filename;
            }

            // handles 'Open.File(XXX).In(XXX)':
            public FileObject In(DirectoryObject directory)
            {
                // ...
            }

            private readonly string _filename;
        }

    That is, the more constituent parts statements such as the initial examples have, the more objects need to be created purely for passing arguments on to subsequent objects in the chain, until the actual statement can finally execute.

    Question: I am interested in some opinions. Does a fluent interface implemented using the above technique significantly impact the runtime performance of an application that uses it? By runtime performance I mean both speed and memory usage. Bear in mind that a potentially large number of temporary, argument-saving objects would be created for only very brief timespans, which I assume may put a certain pressure on the garbage collector. If you think there is a significant performance impact, do you know of a better way to implement fluent interfaces?
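
    One commonly suggested way to take the GC out of the picture - offered here as a hedged sketch, not as the original poster's design - is to make the short-lived phrase types value types, so the accumulated arguments live on the stack instead of the heap:

        // A struct-based variant of OpenFilePhrase: creating the phrase object
        // allocates nothing on the heap, so the chain produces no garbage.
        public struct OpenFilePhraseStruct
        {
            private readonly string _filename;

            public OpenFilePhraseStruct(string filename)
            {
                _filename = filename;
            }

            public FileObject In(DirectoryObject directory)
            {
                return new FileObject(directory, _filename); // hypothetical constructor
            }
        }

    The trade-off is the usual one for structs: copies instead of references, and no inheritance, so this only pays off when the phrase types are small and the call chains are hot. For most fluent APIs the tiny, short-lived class instances are gen-0 allocations that the collector reclaims very cheaply, so the impact is rarely significant to begin with.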


  • Netty options for real-time distribution of small messages to a large number of clients?

    - by user439407
    I am designing a (near) real-time Netty server to distribute a large number of very small messages to a large number of clients across the internet. In internal, go-as-fast-as-you-can testing, I found that I could handle 10k clients without breaking a sweat, but now that we are trying to go across the internet, where latency, bandwidth, etc. vary pretty wildly, we are running into the dreaded OutOfMemory issues, even with 2 GB of RAM. I have tried various workarounds (setting the socket stack sizes smaller, setting high and low water marks, cancelling things that are too old), and they help a little, but only a little.

    What would be some good ways to optimize Netty for sending large numbers of small messages without significant delays? Also, the bulk of the traffic consists of one kind of message that I don't particularly care about if it doesn't arrive. I would use UDP, but because we don't control the clients, that's not really a possibility. Is it possible to set a separate timeout solely for this kind of message without affecting the other messages? Any insight you could offer would be greatly appreciated.


  • Better way of looping to detect changes?

    - by Dremation
    As of now I'm using a while (true) loop to detect changes in memory. The problem with this is that it kills the application's performance. I have a list of 30 pointers that need to be checked as rapidly as possible for changes, without a huge performance loss. Does anyone have ideas on this? (A timer-based sketch follows below.)

        memScan = new Thread(ScanMem);

        public static void ScanMem()
        {
            int i = addy.Length;
            while (true)
            {
                Thread.Sleep(30000); // I do this to cut down on cpu usage
                for (int j = 0; j < i; j++)
                {
                    string[] values = addy[j].Split(new char[] { Convert.ToChar(",") });
                    //MessageBox.Show(values[2]);
                    try
                    {
                        if (Memory.Scanner.getIntFromMem(hwnd, (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString() != values[1].ToString())
                        {
                            // Ok, it changed, let's do our work
                            if (Globals.Working) return;
                            SomeFunction("Results: " + values[2].ToString(), "Memory");
                            Globals.Working = true;
                        } // end if
                    } // end try
                    catch { }
                } // end for
            } // end while
        } // end void
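
    One hedged alternative to the dedicated sleep-loop thread is to let a System.Threading.Timer drive the scan from the thread pool; the scan logic stays the same, only the scheduling changes (ScanOnce and the 30-second interval are illustrative):

        using System;
        using System.Threading;

        static class MemWatcher
        {
            private static Timer _scanTimer;

            public static void Start()
            {
                // Fire immediately, then every 30 seconds; no thread is
                // blocked in Sleep between scans.
                _scanTimer = new Timer(_ => ScanOnce(), null,
                                       TimeSpan.Zero, TimeSpan.FromSeconds(30));
            }

            private static void ScanOnce()
            {
                // body of the inner for-loop from ScanMem goes here,
                // minus the while (true) and the Sleep
            }

            public static void Stop()
            {
                _scanTimer.Dispose();
            }
        }

    Note that neither version detects changes "as rapidly as possible": with a 30-second period, a value can change and change back unseen. Shrinking the interval trades CPU for detection latency, which is the real tuning knob here.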


  • Best practice for PHP form actions

    - by Rob
    Hi there. I've built a new script (from scratch, not a CMS) and I've done a lot of work on reducing memory usage and the time it takes for the page to be displayed (caching HTML, etc.). There's one thing I'm not sure about, though.

    Take the simple example of an article with a comments section. If the comment form posts to another page that then redirects back to the article page, I won't have the problem of people clicking refresh and resending the information. However, if I do it that way, I have to load my script twice, use twice as much memory, and take twice as long, while still only displaying the page once. Here's an example from my load log; the first load of the article is from the cache, the second rebuilds the page after the comment is posted.

    Example 1:

        0 queries using 650856 bytes of memory in 0.018667 - domain.com/article/1/my_article.html
        9 queries using 1325723 bytes of memory in 0.075825 - domain.com/article/1/my_article/newcomment.html
        0 queries using 650856 bytes of memory in 0.029449 - domain.com/article/1/my_article.html

    Example 2:

        0 queries using 650856 bytes of memory in 0.023526 - domain.com/article/1/my_article.html
        9 queries using 1659096 bytes of memory in 0.060032 - domain.com/article/1/my_article.html

    Obviously the times fluctuate, so you can't really compare those, but as you can see, with the first method I use more memory and it takes longer to load - BUT the first method avoids the refresh problem. Does anyone have suggestions for the best approach, or for alternative ways to avoid the extra load (admittedly minimal, but I'd still like to avoid it) while also avoiding the refresh problem?

