Search Results

Search found 2575 results on 103 pages for 'deep zoom'.

Page 87/103 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • bad_alloc occurring when allocating small structs

    - by SalamiArmi
    A bad_alloc has started showing up in some code which looks perfectly valid to me and has worked very well in the past. The bad_alloc only occurs once every 50-3000 iterations of the code, which is also confusing. The code itself is from a singly linked list, simply adding a new element to the queue: template<typename T> struct container { inline container() : next(0) {} container *next; T data; }; void push(const T &data) { container<T> *newQueueMember = new container<T>; //... unrelated to crash } Where T is: struct test { int m[256]; }; Changing the size of the allocated array to anything but very small values (1-8 ints) still results in an occasional bad_alloc. A few extra notes about my program:
    - I used Poco::ThreadPool to thread my program. I've only recently added this functionality; before, I had it running with Win32 threads. However, only the main thread ever calls push().
    - I am also occasionally getting other crashes which could be related. However, when I try to debug with Visual Studio 2008, I can't navigate back up the call stack, or the crash happens deep within new().
    Thanks in advance.

    Read the article

  • Can an interface define the signature of a C# constructor?

    - by happyclicker
    I have a .NET app that provides a mechanism to extend the app with plugins. Each plugin must implement a plugin interface and must furthermore provide a constructor that receives one parameter (a resource context). During the instantiation of the plugin class I check via reflection whether the needed constructor exists and, if yes, I instantiate the class (via reflection). If the constructor does not exist, I throw an exception that says the plugin could not be created because the desired constructor is not available. My question is whether there is a way to declare the signature of a constructor in the plugin interface, so that everyone who implements the plugin interface must also provide a constructor with the desired signature. This would ease the creation of plugins. I don't think such a possibility exists, because such a feature does not fall within the main purpose interfaces were designed for, but perhaps someone knows a construct that does this, something like: public interface IPlugin { ctor(IResourceContext resourceContext); int AnotherPluginFunction(); } I want to add that I don't want to change the constructor to be parameterless and then set the resource context through a property, because this would make the creation of plugins much more complicated. The people who write plugins are not people with deep programming experience. The plugins are used to calculate statistical data that will be visualized by the app.
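
    A common workaround (not from the original question, and the interface members here are only the ones sketched above) is to move construction into a separate factory interface, because an interface can enforce a factory method's signature even though it cannot enforce a constructor's. A minimal C# sketch:

        public interface IResourceContext { }

        public interface IPlugin
        {
            int AnotherPluginFunction();
        }

        // The factory, not the plugin, carries the "constructor" signature,
        // so every implementer is forced to provide it.
        public interface IPluginFactory
        {
            IPlugin Create(IResourceContext resourceContext);
        }

        public class StatisticsPlugin : IPlugin
        {
            private readonly IResourceContext _context; // kept for the plugin's calculations
            public StatisticsPlugin(IResourceContext context) { _context = context; }
            public int AnotherPluginFunction() { return 42; }
        }

        public class StatisticsPluginFactory : IPluginFactory
        {
            public IPlugin Create(IResourceContext resourceContext)
            {
                return new StatisticsPlugin(resourceContext);
            }
        }

    The host application then discovers and instantiates factories instead of plugins; a factory that lacks Create(IResourceContext) simply will not compile.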

    Read the article

  • What is the most efficient/elegant way to parse a flat table into a tree?

    - by Tomalak
    Assume you have a flat table that stores an ordered tree hierarchy:

        Id  Name           ParentId  Order
        1   'Node 1'       0         10
        2   'Node 1.1'     1         10
        3   'Node 2'       0         20
        4   'Node 1.1.1'   2         10
        5   'Node 2.1'     3         10
        6   'Node 1.2'     1         20

    What minimalistic approach would you use to output that to HTML (or text, for that matter) as a correctly ordered, correctly indented tree? Assume further that you only have basic data structures (arrays and hashmaps), no fancy objects with parent/children references, no ORM, no framework, just your two hands. The table is represented as a result set, which can be accessed randomly. Pseudo code or plain English is okay, this is purely a conceptual question. Bonus question: Is there a fundamentally better way to store a tree structure like this in an RDBMS? EDITS AND ADDITIONS: To answer one commenter's (Mark Bessey's) question: A root node is not necessary, because it is never going to be displayed anyway. ParentId = 0 is the convention to express "these are top level". The Order column defines how nodes with the same parent are going to be sorted. The "result set" I spoke of can be pictured as an array of hashmaps (to stay in that terminology); for my example it was meant to be already there. Some answers go the extra mile and construct it first, but that's okay. The tree can be arbitrarily deep. Each node can have N children. I did not exactly have a "millions of entries" tree in mind, though. Don't mistake my choice of node naming ('Node 1.1.1') for something to rely on. The nodes could equally well be called 'Frank' or 'Bob', no naming structure is implied; this was merely to make it readable. I have posted my own solution so you guys can pull it to pieces.
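
    For illustration only (this sketch is mine, not an answer from the thread): bucket the rows by ParentId in one pass, sort each bucket by Order, then walk recursively from the ParentId = 0 bucket. In C#:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        record Row(int Id, string Name, int ParentId, int Order);

        class TreePrinter
        {
            static void Main()
            {
                var rows = new List<Row>
                {
                    new(1, "Node 1", 0, 10), new(2, "Node 1.1", 1, 10),
                    new(3, "Node 2", 0, 20), new(4, "Node 1.1.1", 2, 10),
                    new(5, "Node 2.1", 3, 10), new(6, "Node 1.2", 1, 20),
                };

                // One pass: bucket children under their parent id, siblings sorted by Order.
                var children = rows
                    .GroupBy(r => r.ParentId)
                    .ToDictionary(g => g.Key, g => g.OrderBy(r => r.Order).ToList());

                Print(children, parentId: 0, depth: 0);
            }

            static void Print(Dictionary<int, List<Row>> children, int parentId, int depth)
            {
                if (!children.TryGetValue(parentId, out var kids)) return;
                foreach (var kid in kids)
                {
                    Console.WriteLine(new string(' ', depth * 2) + kid.Name);
                    Print(children, kid.Id, depth + 1);
                }
            }
        }

    The walk is O(n) after the sibling sorts, and it needs nothing beyond arrays and hashmaps, matching the constraints above.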

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (HotSpot error). Before I go too deep into figuring it out I'd like to know if I really have a problem or not :-) I have a few thread pools created via: Executors.newFixedThreadPool(10); The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later call Future.get() to get the result, it will then catch the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point in time that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant, but at a higher level. So my question is: is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something, or do I probably have an actual leak that I'll need to track down? Additionally, when Future.get() throws an exception, is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?

    Read the article

  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic: if(!data_structure.exists(element)) { data_structure.insert(element); } The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you advise some good literature on thread-safe design? EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected to safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy the other objects they point to internally. Thank you.
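
    For the check-then-insert case in particular, a common alternative to exposing the raw lock (sketched here as a suggestion, not something from the post) is to make the compound operation itself one API call, and to let callers run arbitrary sequences under the structure's own lock. A C# sketch:

        using System;
        using System.Collections.Generic;

        public class AtomicSet<T>
        {
            private readonly object _gate = new object();
            private readonly HashSet<T> _items = new HashSet<T>();

            // The compound check-then-insert is ONE atomic call:
            // returns false if the element was already present.
            public bool AddIfAbsent(T element)
            {
                lock (_gate) { return _items.Add(element); }
            }

            // For arbitrary multi-call sequences, run the caller's block
            // under the structure's own lock instead of leaking the lock object.
            public TResult WithLock<TResult>(Func<AtomicSet<T>, TResult> body)
            {
                lock (_gate) { return body(this); }
            }
        }

    Monitor locks in C# are reentrant, so code inside WithLock may still call AddIfAbsent safely.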

    Read the article

  • Why doesn't C# do "simple" type inference on generics?

    - by Ken Birman
    Just curious: sure, we all know that the general case of type inference for generics is undecidable. And so C# won't do any kind of subtyping at all: if Foo<T> is a generic, Foo<int> isn't a subtype of Foo<T>, of Foo<Object>, or of anything else you might cook up. And sure, we all hack around this with ugly interface or abstract class definitions. But... if you can't beat the general problem, why not just limit the solution to the cases that are easy? For example, in my list above, it is OBVIOUS that Foo<int> is a subtype of Foo<T>, and it would be trivial to check. Same for checking against Foo<Object>. So is there some other deep horror that would creep forth from the abyss if they were to just say, aw shucks, we'll do what we can? Or is this just some sort of religious purity on the part of the language guys at Microsoft?
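
    For context, here is the standard counterexample (my illustration, not the poster's) showing why the "obvious" subtyping is unsound whenever the generic type is writable; C# 4 later added opt-in variance annotations (in/out) for interfaces and delegates precisely because only read-only or write-only positions are safe:

        using System;
        using System.Collections.Generic;

        class Animal { }
        class Cat : Animal { }
        class Dog : Animal { }

        class VarianceDemo
        {
            static void Main()
            {
                List<Cat> cats = new List<Cat>();

                // If List<Cat> were a subtype of List<Animal>, this would be legal:
                // List<Animal> animals = cats;  // rejected by the compiler, by design
                // animals.Add(new Dog());       // ...because it would put a Dog in a List<Cat>

                // Arrays DO allow this covariance, and pay for it at runtime:
                Animal[] animalArray = new Cat[1];
                animalArray[0] = new Dog();      // compiles; throws ArrayTypeMismatchException

                Console.WriteLine(cats.Count);
            }
        }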

    Read the article

  • Does a CS PhD Help for Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least 4 years' commitment. During the period I won't have a good monetary income. I am also concerned whether the PhD will help my future career development. My career goal is software engineering only. Some of the PhD info: The PhD is CS related. The research area is Information Retrieval, Machine Learning, and Natural Language Processing. More specifically, the research topic is Deep Web search. Some of my background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email describing an interesting project from a professor, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience and their coding skills are really, really bad. By bad I mean they do not know some common patterns, and they do not know the pitfalls of the programming languages or the idioms for doing things right. I guess a commitment of at least 4 years is worth serious consideration. I am 27 at this moment. If I take the offer, that implies I will be 31+ upon graduation. Wah... becoming, what to say, no longer young. Hence, here I am seeking advice on whether it is good or not to take the PhD offer, and whether a CS PhD is good for my future career growth as a software engineer? I do not intend to go into academia.

    Read the article

  • Having trouble with confusing behaviour of error between debug and release modes in Xcode

    - by Cocorico
    Hi guys, I am confused over something (what is new!). I have an iPhone program I am writing, using some SQLite in a certain method, and there is some error which is giving me the message "Program received signal: EXC_BAD_ACCESS". Okay, so I am trying to hunt down why this is happening, and I notice something: When I run the program in debug mode, it gives me this error every single time I access this method (I test on the device). However, when I run the program in release mode, I can access this method 2 times, and then it will give me the error the third time. So I mean, can someone just give me an explanation of what might cause this? I think that maybe deep down I am not that smart on the difference in Xcode between debug and release modes. I think that release mode does optimizing, and I guess the actual assembly machine code comes out different, yes? I AM A BIG NEWBIE USER UNFORTUNATELY! I am not clear on a lot of things like this, or whether I need to remove NSLog calls in the release build and such. Maybe I should just post the actual code in a separate Stack Overflow post, and see if people can spot the error; then maybe this all becomes clear to me.

    Read the article

  • Why does Microsoft advise against readonly fields with mutable values?

    - by Weeble
    In the Design Guidelines for Developing Class Libraries, Microsoft say: Do not assign instances of mutable types to read-only fields. The objects created using a mutable type can be modified after they are created. For example, arrays and most collections are mutable types while Int32, Uri, and String are immutable types. For fields that hold a mutable reference type, the read-only modifier prevents the field value from being overwritten but does not protect the mutable type from modification. This simply restates the behaviour of readonly without explaining why it's bad to use readonly. The implication appears to be that many people do not understand what "readonly" does and will wrongly expect readonly fields to be deeply immutable. In effect it advises using "readonly" as code documentation indicating deep immutability - despite the fact that the compiler has no way to enforce this - and disallows its use for its normal function: to ensure that the value of the field doesn't change after the object has been constructed. I feel uneasy with this recommendation to use "readonly" to indicate something other than its normal meaning understood by the compiler. I feel that it encourages people to misunderstand the meaning of "readonly", and furthermore to expect it to mean something that the author of the code might not intend. I feel that it precludes using it in places it could be useful - e.g. to show that some relationship between two mutable objects remains unchanged for the lifetime of one of those objects. The notion of assuming that readers do not understand the meaning of "readonly" also appears to be in contradiction to other advice from Microsoft, such as FxCop's "Do not initialize unnecessarily" rule, which assumes readers of your code to be experts in the language and should know that (for example) bool fields are automatically initialised to false, and stops you from providing the redundancy that shows "yes, this has been consciously set to false; I didn't just forget to initialize it". So, first and foremost, why do Microsoft advise against use of readonly for references to mutable types? I'd also be interested to know: Do you follow this Design Guideline in all your code? What do you expect when you see "readonly" in a piece of code you didn't write?
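
    To make the guideline's concern concrete (example mine, not Microsoft's):

        public class Schedule
        {
            // readonly prevents reassignment of the field itself...
            private readonly int[] _slots = new int[8];

            public void Mutate()
            {
                // _slots = new int[16];  // does not compile: the field is readonly
                _slots[0] = 42;           // compiles fine: the array's CONTENTS are mutable
            }
        }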

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side-by-side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents. In object-oriented terms, classes which are composed of other classes, for example, a blog post and its comments. In most of the examples of this I can come up with though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Getting a handle on GIS math, where do I start?

    - by Joshua
    I am in charge of a program that is used to create a set of nodes and paths for consumption by an autonomous ground vehicle. The program keeps track of the locations of all items in its map by indicating the item's position as being x meters north and y meters east of an origin point of 0,0. In the real world, the vehicle knows the location of the origin's lat and long, as it is determined by a DGPS system and is accurate down to a couple of centimeters. My program is ignorant of any lat/long coordinates. It is one of my goals to modify the program to keep track of lat/long coords of items in addition to an origin point and items' x,y positions in relation to that origin. At first blush, it seems that I am going to modify the program to allow the lat/long coords of the origin to be passed in, and after that I desire that the program will automatically calculate the lat/long of every item currently in a map. From what I've researched so far, I believe that I will need to figure out the math behind converting to lat/long coords from a UTM-like projection where I specify the origin points and meridians etc., as opposed to whatever is defined already for UTM. I've come to ask you GIS programmers: am I on the right track? It seems to me like there is so much to wrap one's head around, and I'm not sure if the answer isn't something as simple as, "oh yeah, there's a conversion from meters to lat/long, here". Currently, due to the nature of DGPS, the system really doesn't need to care about locations more than, oh, what... a 40 km radius away from the origin? Given this, and the fact that I need to make sure that the error on my coordinates is not greater than .5 meters, do I need anything more complex than a simple lat/long-to-meters conversion constant? I'm knee deep in materials here. I could use some pointers about what concepts to research. Thanks much!
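
    As a starting point only (my sketch, built on the usual local flat-earth assumption, not an answer from the thread): near a fixed origin, meters convert to degrees with a latitude-dependent scale for north-south and a cosine-scaled one for east-west. Whether the residual error (roughly meter-level at the 40 km edge) fits the 0.5 m budget should be verified against a real projection library such as PROJ; a local transverse Mercator is the usual rigorous answer.

        using System;

        static class LocalGeo
        {
            const double Deg = Math.PI / 180.0;

            // Meters per degree of latitude/longitude at a given latitude
            // (standard WGS-84 series approximations).
            static double MetersPerDegLat(double latDeg) =>
                111132.954 - 559.822 * Math.Cos(2 * latDeg * Deg)
                           + 1.175 * Math.Cos(4 * latDeg * Deg);

            static double MetersPerDegLon(double latDeg) =>
                111412.84 * Math.Cos(latDeg * Deg)
                   - 93.5 * Math.Cos(3 * latDeg * Deg);

            // Convert a local (metersNorth, metersEast) offset from a known
            // origin into approximate lat/long. NOT a substitute for a real
            // projection when sub-meter accuracy is required at long range.
            public static (double Lat, double Lon) FromOffset(
                double originLat, double originLon,
                double metersNorth, double metersEast)
            {
                // Latitude first, then use the MIDPOINT latitude for the
                // east-west scale so the cosine is not 40 km stale.
                double lat = originLat + metersNorth / MetersPerDegLat(originLat);
                double midLat = (originLat + lat) / 2.0;
                double lon = originLon + metersEast / MetersPerDegLon(midLat);
                return (lat, lon);
            }
        }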

    Read the article

  • How can I get the HWND of an external application's listview in the Windows API using C++?

    - by Marko29
    So I am trying to make an app to get the content of my explorer listviews and get item text etc. from them, but here are the problems... If I inspect a Windows Explorer folder (using Spy++) with a listview, just for testing purposes I will use a random folder. It shows me that the caption of the window is "FolderView" with class "SysListView32", and the top-level window where this listview is nested is called "reference"; this is also the title of the Windows Explorer folder where all the files are. So what I do is.. HWND hWndLV = FindWindow(NULL, TEXT("reference")); // first I get the hwnd of the main window; this is where the listview window is also nested according to Spy++, that's why I do this first. HWND child = FindWindowEx(hWndLV, NULL, NULL, TEXT("FolderView")); // trying to get the hwnd of the listview here, but it fails; the same happens if I also put the class name along, as HWND child = FindWindowEx(hWndLV, NULL, TEXT("SysListView32"), TEXT("FolderView")); I am using bool test = IsWindow(child); to test for failure; also the VS debugger shows 0x0000000000 each time, so I am sure I am reading the results well. So I am stuck on this probably simple thing for most people:( p.s. I am on Vista 64 (if that matters anyhow). edit: It appears that this function works only if I search the first nested level of the parent window I am searching. So I assume what I need is a way to do a deeply nested search for the handle. I also tried to go step by step by defining the hwnd of every parent and then using FindWindowEx on it, but oh boy, then I get to the point where there are 5 nested windows all with the same name and only one of them contains my listview, so nice, huh?
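
    The edit at the end has the right diagnosis: FindWindowEx searches only direct children, so finding a deeply nested control needs a recursive descent. Here is that recursion sketched in C# with P/Invoke (the question is C++, but the logic transliterates line for line):

        using System;
        using System.Runtime.InteropServices;

        static class WindowFinder
        {
            [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
            static extern IntPtr FindWindowEx(IntPtr parent, IntPtr childAfter,
                                              string className, string windowName);

            // FindWindowEx covers only DIRECT children, which is exactly the
            // failure described above; recurse to cover the whole subtree.
            public static IntPtr FindDescendant(IntPtr parent, string className)
            {
                IntPtr hit = FindWindowEx(parent, IntPtr.Zero, className, null);
                if (hit != IntPtr.Zero) return hit;

                IntPtr child = IntPtr.Zero;
                while ((child = FindWindowEx(parent, child, null, null)) != IntPtr.Zero)
                {
                    IntPtr found = FindDescendant(child, className);
                    if (found != IntPtr.Zero) return found;
                }
                return IntPtr.Zero;
            }
        }

    Called as WindowFinder.FindDescendant(hWndLV, "SysListView32"), it returns the first descendant of that class anywhere in the subtree.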

    Read the article

  • JQuery fadeIn fadeOut loop issue

    - by Tarun
    I am trying to create a jQuery fadeIn fadeOut effect for my page content using the code below. $(document).ready(function (){ $("#main").click(function(){ $("#content").fadeOut(800, function(){ $("#content").load("main.html", function(){ $("#content").fadeIn(800); }); }); }); $("#gallery").click(function(){ $("#content").fadeOut(800, function(){ $("#content").load("gallery.html", function(){ $("#content").fadeIn(800); }); }); }); }); So whenever a user clicks on either the main link or the gallery link, the old content fades out and the new content fades in. The problem I am facing is that for every link I have to repeat the same code again and again. So I tried to use a loop to simplify this, but it doesn't work. Here is my code: var p = ["#main","#gallery", "#contact"]; var q = ["main.html", "gallery.html", "contact.html"]; for (i=0;i<=(p.length-1);i++){ $(p[i]).click(function(){ $("#content").fadeOut(500, function(){ $("#content").load(q[i], function(){ $("#content").fadeIn(500); }); }); }); } It works fine when I repeat the scripts for each link, but it doesn't work when I combine them in a loop. I only get the fadeOut effect and nothing happens after that. This might be a very simple issue or maybe something deep inside jQuery. Any hint or help is greatly appreciated. TK
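
    Not part of the original question, but the symptom matches the classic closure-over-a-shared-loop-variable bug: every click handler captures the same i, which equals p.length by the time any click fires, so q[i] is undefined. The identical trap, shown in C# for illustration:

        using System;
        using System.Collections.Generic;

        class ClosureTrap
        {
            static void Main()
            {
                var handlers = new List<Action>();

                for (int i = 0; i < 3; i++)
                    handlers.Add(() => Console.Write(i + " "));  // captures the VARIABLE i

                foreach (var h in handlers) h();  // prints "3 3 3", not "0 1 2"

                handlers.Clear();
                for (int i = 0; i < 3; i++)
                {
                    int copy = i;                                // one fresh variable per pass
                    handlers.Add(() => Console.Write(copy + " "));
                }
                foreach (var h in handlers) h();  // prints "0 1 2"
            }
        }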

    Read the article

  • How can I recurse up a DOM tree?

    - by smartdirt
    So I have a series of nested ul elements as part of a tree, like below: <ul> <li> <ul> <li>1.1</li> <li>1.2</li> </ul> <ul> <li>2.1</li> <li> <ul> <li>2.2</li> </ul> </li> </ul> <ul> <li>3.1</li> <li>3.2</li> </ul> </li> </ul> Let's say 3.1 is the selected node; when the user clicks previous, the selected node should then be 2.2. The bad news is that the tree could be nested any number of levels deep. How can I find the previous node (li) in relation to the currently selected node using jQuery?
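
    One way to frame the problem (my framing, not the poster's): "previous" here means the previous leaf in a depth-first flattening of the tree, so flattening and indexing sidesteps the arbitrary nesting depth. A sketch in C# over a generic node type:

        using System.Collections.Generic;
        using System.Linq;

        class Node
        {
            public string Label = "";
            public List<Node> Children = new List<Node>();
        }

        static class TreeNav
        {
            // Leaves of the tree in document order.
            public static IEnumerable<Node> Leaves(Node n) =>
                n.Children.Count == 0 ? new[] { n } : n.Children.SelectMany(Leaves);

            // "Previous" is simply the leaf before the current one in that order.
            public static Node Previous(Node root, Node current)
            {
                var leaves = Leaves(root).ToList();
                int i = leaves.IndexOf(current);
                return i > 0 ? leaves[i - 1] : null;
            }
        }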

    Read the article

  • Using Subsonic 3.0 Advanced Templates

    - by umit
    Hi all, I've been trying to use SubSonic Advanced Templates in a project for a while, but most of the time I find myself writing a stored procedure, as I can't find a proper way of doing it in code. SubSonic created corresponding objects for my DB tables, and for foreign keys it created IQueryable fields inside each object. These fields are not loaded by default, and a new SQL query is executed when you access them. 1- Is there a way to get all the data in one query (deep load)? Also, these fields cannot be assigned, so when I want to create an object in a maintenance page, I can't put all the data into the object before saving it in the DB: Post post = new Post(); //get photos for this post IList<PostPhoto> postPhotos = GetPostPhotos(); post.PostPhotos = postPhotos; 2- Is it possible to have one Post object with all fields set from user input? Think of the Post object above and assume I've successfully assigned its fields. Now I need to save it to the DB. 3- Is using BatchQuery the only way to do it in one query? If I have 4 photos in PostPhotos, 2 of them previously saved and 2 of them new, can I use the Update method to handle both adding and updating these photos? Any ideas or links are appreciated. Cheers...

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If there is indeed copying being done, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it on the heap instead. It seems that in most memory-managed languages everything is a reference, and thus Objective-C is in a somewhat unique situation in having to deal with copying what's on the stack.
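
    On the C# point at the end, the guess is right: captured locals are hoisted into a compiler-generated class on the heap, so closures share the live variable rather than a snapshot. A sketch of roughly what the compiler emits (the class and method names here are illustrative, not the real generated names):

        using System;

        class CaptureDemo
        {
            static Func<int> MakeCounter()
            {
                int count = 0;            // looks like a stack local...
                return () => ++count;     // ...but is captured by the lambda
            }

            // Roughly what the compiler generates instead:
            class DisplayClass            // illustrative name
            {
                public int count;
                public int Bump() => ++count;
            }

            static Func<int> MakeCounterLowered()
            {
                var env = new DisplayClass { count = 0 };  // the "local" lives on the heap
                return env.Bump;
            }

            static void Main()
            {
                var c = MakeCounter();
                Console.WriteLine(c());  // 1
                Console.WriteLine(c());  // 2 -- shared environment, not a copy
            }
        }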

    Read the article

  • How can a SVN::Error callback identify the context from which it was called

    - by Colin Fine
    I've written some fairly extensive Perl modules and scripts using the Perl bindings SVN::Client etc. Since the calls to SVN::Client are all deep in a module, I have overridden the default error handling. So far I have done so by setting $SVN::Error::handler = undef as described in [1], but this makes the individual calls a bit messy because you have to remember to make each call to SVN::Client in list context and test the first value for errors. I would like to switch to using an error handler I would write; but $SVN::Error::handler is global, so I can't see any way that my callback can determine where the error came from, and what object to set an error code in. I wondered if I could use a pool for this purpose: so far I have ignored pools as irrelevant to working in Perl, but if I call a SVN::Client method with a pool I have created, will any SVN::Error object be created in the same pool? Has anybody any knowledge or experience which bears on this? [1]: http://search.cpan.org/~mschwern/Alien-SVN-1.4.6.0/src/subversion/subversion/bindings/swig/perl/native/Core.pm#svn_error_t_-_SVN::Error SVN::Core POD

    Read the article

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&... for Boost unit test to use. This worked: namespace theseus { namespace core { std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) { return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")"); } }} This didn't: std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) { return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")"); } Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code: BOOST_AUTO_TEST_SUITE(core_image) BOOST_AUTO_TEST_CASE(test_output) { using namespace theseus::core; BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< definition inside theseus::core std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition BOOST_CHECK(true); // prevent no-assertion error } BOOST_AUTO_TEST_SUITE_END() For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).

    Read the article

  • Reverse geocode street name and city as text

    - by Taylor Satula
    Hello, I have been having some trouble finding a good way to output just the street name and city as text (Infinite Loop, Cupertino shown here) that can be displayed in my iPhone app. This needs to be able to change dynamically as you change streets and cities. I don't have the slightest idea of how to do this; I hope someone can help. I have attached a very rough image of what I am trying to achieve. I have found this (http://code.google.com/apis/maps/documentation/javascript/services.html#Geocoding) for Google Maps, about how to reverse geocode using JavaScript, but what I do not understand is how this would be done in an iPhone development setting. I work in web design, and I see how it would be done in HTML, but I am very new to iPhone development and don't have the slightest clue of how it would be done here. If someone could spell out how to do this I would be extremely grateful. I cannot seem to find what I am looking for by searching Google. Reference picture: http://www.threepixeldrift.com/images/deep-storage/reversegeocodeiphoneapp.jpg

    Read the article

  • Protocol parsing in C

    - by nomad.alien
    I have been playing around with trying to implement some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving the problem is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it would be via TCP or UDP. Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, as I then know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter that specific packet I then have to read another 8-16 bytes and do another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing. I guess this can be a more general question as well. How much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers? So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there, is this the way to go or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?
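
    One common shape for this kind of decoder (my sketch, not from the post): read from a buffered stream so each one-byte field is not a separate socket read, and replace the nested switch/case blocks with a table of per-type handlers; nested packets then just recurse into the same entry point. Shown in C# for compactness, with made-up packet types 0x01 and 0x02:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class PacketParser
        {
            // One handler per packet type; nested packets call Parse recursively.
            readonly Dictionary<byte, Action<BinaryReader>> _handlers;

            public PacketParser()
            {
                _handlers = new Dictionary<byte, Action<BinaryReader>>
                {
                    [0x01] = r => { byte field = r.ReadByte(); },        // flat packet
                    [0x02] = r => { r.ReadBytes(2); Parse(r); },         // header, then a nested packet
                };
            }

            public void Parse(BinaryReader reader)
            {
                byte type = reader.ReadByte();
                if (!_handlers.TryGetValue(type, out var handler))
                    throw new InvalidDataException($"Unknown packet type 0x{type:X2}");
                handler(reader);
            }
        }

    Wrapping the network stream before handing it in, e.g. new BinaryReader(new BufferedStream(networkStream)), lets the parser keep issuing tiny reads while the OS sees large ones.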

    Read the article

  • How do I make VC++'s debugger break on exceptions?

    - by Mason Wheeler
    I'm trying to debug a problem in a DLL written in C that keeps causing access violations. I'm using Visual C++ 2008, but the code is straight C. I'm used to Delphi, where if an exception occurs while running under the debugger, the program will immediately break to the debugger and it will give you a chance to examine the program state. In Visual C++, though, all I get is a message in the Output tab: First-chance exception at blah blah blah: Access violation reading location 0x04410000. No breaks, nothing. It just goes and unwinds the stack until it's back in my Delphi EXE, which recognizes something's wrong and alerts me there, but by that point I've lost several layers of call stack and I don't know what's going on. I've tried other debugging techniques, but whatever it's doing is taking place deep within a nested loop inside a C macro that's getting called more than 500 times, and that's just a bit beyond my skill (or my patience) to trace through. I figure there has to be some way to get the "first-chance" exception to actually give me a "chance" to handle it. There's probably some "break immediately on first-chance exceptions" configuration setting I don't know about, but it doesn't seem to be all that discoverable. Does anyone know where it is and how to enable it?

    Read the article

  • Boost multi_index_container crash in release mode

    - by Zan Lynx
    I have a program that I just changed to use a boost::multi_index_container collection. After I did that and tested my code in debug mode, I was feeling pretty good about myself. However, then I compiled a release build with NDEBUG set, and the code crashed. Not immediately, but sometimes in single-threaded tests and often in multi-threaded tests. The segmentation faults happen deep inside Boost insert and rotate functions related to the index updates, and they are happening because a node has NULL left and right pointers. My code looks a bit like this: struct Implementation { typedef std::pair<uint32_t, uint32_t> update_pair_type; struct watch {}; struct update {}; typedef boost::multi_index_container< update_pair_type, boost::multi_index::indexed_by< boost::multi_index::ordered_unique< boost::multi_index::tag<watch>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::first> >, boost::multi_index::ordered_non_unique< boost::multi_index::tag<update>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::second> > > > update_map_type; typedef std::vector< update_pair_type > update_list_type; update_map_type update_map; update_map_type::iterator update_hint; void register_update(uint32_t watch_offset, uint32_t update_offset); void do_updates(uint32_t start, uint32_t end); }; void Implementation::register_update(uint32_t watch_offset, uint32_t update_offset) { update_pair_type new_pair( watch_offset, update_offset ); update_hint = update_map.insert(update_hint, new_pair); if( update_hint->second != update_offset ) { bool replaced _unused_ = update_map.replace(update_hint, new_pair); assert(replaced); } }

    Read the article

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping Projects-Folders-Files in a tree view on the left and a details view on the right; you then click on each item to look at the revision history for that configuration item. Assuming that I have all the historical versioning information for a project available from an object-oriented model perspective (e.g. classes - methods - parameters and so on), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate and access both the snapshot view of the project and the historical versioning information? Put yourself in the position that you are using a tool like this every day in your job, as you currently use SVN, SS, Perforce or any VCS system: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming that this is a greenfield project and not restricted to a specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions that you think are valuable. Thanks again to anyone who shares their thoughts.

    Read the article

  • How to reduce the need for IISRESET for developing ASP.NET web app in IIS 5.1

    - by John Galt
    I have a web application project on my dev PC running WinXP and hence IIS 5.1. The changes I'm making to this site seem to "take effect" only after I do IISRESET. That is, I make a source change, Rebuild the project, and then Start without Debugging (or with debugging). The newly changed code is not "visible" or in effect unless I intervene with an IISRESET. BTW, the "web" tab on the Properties display for the web app project is configured to use the local IIS web server at project Url: http://localhost/myVirtualDirectory ... but I've noticed the same issue when using the VStudio Dev Server (i.e. I have to stop it by visiting the taskbar tray area in order to see my source changes take effect). Is this something I can change? EDIT UPDATE: Just wanting to clear this up if possible. Two answers diverge below; not sure how to move forward. One states this is to be expected (a weakness of IIS 5.1, which in turn is the best WinXP can provide). Another states this is not expected behavior (and I tend to agree, since this is the first time I've encountered this on the same old WinXP dev platform I've had a long time). I suspect it may be something "deep inside" the Visual Studio 2008 web app, which was upgraded to this new IDE from VStudio 2002 (ASP.NET 1.1). I've tried to add comment/questions down each answer path. Thanks.

    Read the article
