Search Results

Search found 10428 results on 418 pages for 'places bar'.

  • Silverlight performance with many loaded controls

    - by gius
    I have a Silverlight application with many DataGrids (from the Silverlight Toolkit), each on its own view. If several DataGrids are open, changing between views (TabItems, for example) takes a long time (a few seconds) and freezes the whole application (UI thread). The more DataGrids are loaded, the longer the change takes. The DataGrids that slow the UI change might be in other places in the app and not even visible at that moment, but once they are opened (and loaded with data), they slow the showing of other DataGrids. Note that the DataGrids are NOT disposed and recreated again; they remain in memory, and only their parent control is hidden and made visible again.

    I have profiled the application. It shows that agcore.dll's SetValue function is the bottleneck. Unfortunately, debug symbols are not available for this Silverlight native library, which is responsible for drawing. The problem is not in the DataGrid control itself - I tried replacing it with XCeed's grid and the performance when changing views is even worse. Do you have any idea how to solve this problem? Why do more open controls slow down other controls? I have created a sample that shows the issue: http://cenud.cz/PerfTest.zip

    UPDATE: Using the VS11 profiler on the sample suggests that the problem could be MeasureOverride being called many times (for each DataGridCell, I guess). But still, why does it get slower as more controls are loaded elsewhere? Is there a way to improve the performance?

  • AnyCPU/x86/x64 for C# application and its C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1 on a 64-bit machine. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#, so I also developed a small managed DLL in C++/CLI. Both projects are in the same solution.

    My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understand from googling, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64, and everything works now. However, AFAIK everything will stop working as soon as a client installs my product on a 32-bit OS. I have to support Windows Vista and 7, both the 32-bit and 64-bit versions of each.

    I don't want to fall back to 32 bits. Those 250 lines of C++ code in my DLL are only 2% of my codebase, and that DLL is only used in a few places, so in the typical usage scenario it isn't even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting.

    Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .EXE is starting up, it uses either the 32-bit DLL or the 64-bit DLL depending on the OS? Or maybe there's a better solution I'm not aware of? Thanks in advance!
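
    One approach that fits this setup is to keep the .exe as "Any CPU", build the C++/CLI project twice (Win32 and x64), ship both DLLs in subfolders, and hook AppDomain.AssemblyResolve so the copy matching the current process is loaded at startup. Below is a minimal C# sketch; the assembly name NativeWrapper and the x86\ / x64\ subfolder layout are assumptions for illustration, not details from the question.

        using System;
        using System.IO;
        using System.Reflection;

        static class NativeWrapperLoader
        {
            // Call once, before any type from the C++/CLI assembly is used,
            // e.g. at the top of Main().
            public static void Install()
            {
                AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
                {
                    // Only handle the mixed-mode wrapper; let everything else resolve normally.
                    if (!args.Name.StartsWith("NativeWrapper", StringComparison.OrdinalIgnoreCase))
                        return null;

                    // IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit one (.NET 2.0+).
                    string arch = (IntPtr.Size == 8) ? "x64" : "x86";
                    string path = Path.Combine(
                        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, arch),
                        "NativeWrapper.dll");

                    return File.Exists(path) ? Assembly.LoadFile(path) : null;
                };
            }
        }

    The handler only fires when normal probing fails, so the two builds have to live in the subfolders rather than next to the .exe, and any method that touches wrapper types should be kept out of Main() so the JIT does not trigger the load before Install() runs.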

  • Recursive QuickSort suffering a StackOverflowException -- Need fresh eyes

    - by jon
    I am working on a recursive QuickSort method implementation in a GenericList class. I will have a second method that accepts a compareDelegate to compare different types, but for development purposes I'm sorting a GenericList<int>. I am receiving stack overflow errors in different places depending on the list size. I've been staring at and tracing through this code for hours and probably just need a fresh pair of (more experienced) eyes. I definitely want to learn why it is broken, not just how to fix it.

        public void QuickSort()
        {
            int i, j, lowPos, highPos, pivot;
            GenericList<T> leftList = new GenericList<T>();
            GenericList<T> rightList = new GenericList<T>();
            GenericList<T> tempList = new GenericList<T>();

            lowPos = 1;
            highPos = this.Count;

            if (lowPos < highPos)
            {
                pivot = (lowPos + highPos) / 2;
                for (i = 1; i <= highPos; i++)
                {
                    if (this[i].CompareTo(this[pivot]) <= 0)
                        leftList.Add(this[i]);
                    else
                        rightList.Add(this[i]);
                }

                leftList.QuickSort();
                rightList.QuickSort();

                for (i = 1; i <= leftList.Count; i++)
                    tempList.Add(leftList[i]);
                for (i = 1; i <= rightList.Count; i++)
                    tempList.Add(rightList[i]);

                this.items = tempList.items;
                this.count = tempList.count;
            }
        }
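
    One detail the code makes easy to miss: the element at the pivot position also satisfies CompareTo(...) <= 0, so whenever the pivot value happens to be the largest in the list, every element lands in leftList. That recursive call then receives a list of the same size as the original, the recursion never shrinks, and the result is a StackOverflowException. Below is a hedged sketch of a three-way partition for the body of the if block, written against the question's own 1-based GenericList<T> (so not runnable on its own), which guarantees each recursive call gets a strictly smaller list:

        // Split into strictly-less, equal, and strictly-greater lists. Elements equal
        // to the pivot are set aside, so neither recursive call can ever be handed a
        // list as large as the one we started with.
        T pivotValue = this[(1 + this.Count) / 2];
        GenericList<T> lessList = new GenericList<T>();
        GenericList<T> equalList = new GenericList<T>();
        GenericList<T> greaterList = new GenericList<T>();

        for (int k = 1; k <= this.Count; k++)
        {
            int cmp = this[k].CompareTo(pivotValue);
            if (cmp < 0)       lessList.Add(this[k]);
            else if (cmp == 0) equalList.Add(this[k]);
            else               greaterList.Add(this[k]);
        }

        lessList.QuickSort();     // strictly smaller than 'this'
        greaterList.QuickSort();  // strictly smaller than 'this'
        // ...then copy lessList, equalList, and greaterList back into this.items in that order.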

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture, as in ColorSplash or other apps like iSteam? I started learning OpenGL ES about four days ago (I'm a total rookie) and tried the following approach:

    1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask.

    2) I also created a texture2D for the brush, which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now.

    3) I searched for masking techniques on the internet and found a tutorial on masking (ZeusCMD - Design and Development Tutorials: OpenGL ES Programming Tutorials - Masking). The tutorial tells me to use blending to achieve masking: first draw the colored texture, then the mask with glBlendFunc(GL_DST_COLOR, GL_ZERO), and then the grayscale with glBlendFunc(GL_ONE, GL_ONE). This gives me something close to what I want, but not exactly: the result is masked, but it's somehow overbrightened.

    4) For drawing to the mask texture I used an extra frame buffer object (FBO).

    I'm not really happy with the resulting (overbrightened) image, nor with the speed achieved with this method. I think the normal way would be to draw directly to the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop, I could draw just the colored texture and blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES, and it's driving me nuts because I can't get it to work properly. Advice or a link to a tutorial would be much appreciated.

  • UIImageView Centering Problem.

    - by sagar
    To understand my problem, follow these steps:

    1) Open Xcode - New Project - View-based application.
    2) Open the ViewController.xib file.
    3) Drag an image view into the view controller's view.
    4) Place that image view at the center of the view controller's view.
    5) Set the image view's size to 125x125.
    6) Take any image with dimensions of 320 width by 460 height and drag it into your application's resources directory.
    7) Back in the .xib file, select the image view and set its image to the one in your resources.
    8) Set the image view's mode to Center.

    Now let me explain the situation. After performing all the above steps, the image view has an image that is larger than the image view's size. The image view places that image at the center, and in the view controller the image view looks correct (not scaled). But when you run the application, the image view is displayed at 320x460 (though it is not actually scaled). I can't figure out why Interface Builder displays one thing and the simulator/iPhone displays something else.

    My query: I want the image's center portion within the image view, no matter what the image size is. Thanks in advance for sharing your knowledge. Sagar.

  • Dynamically invoke web service at runtime

    - by Ulrik Rasmussen
    So, our application needs support for dynamically calling web services that are unknown at compile time. The user should therefore be able to specify a URL to a WSDL and specify some data bindings for the request and reply parameters. When googling for answers, it seems the usual way to do this is to actually compile a web service proxy class at runtime, load it, and invoke its methods using reflection. That seems like a rather clunky approach, given that I don't really need a strongly typed set of classes when I'm going to cast my data dynamically anyway. Dynamically compiling code for something this simple also just seems like The Wrong Way To Do It. Restricting ourselves to the SOAP protocol, is there any library for C# that implements the protocol for dynamic use? I can imagine that it would be possible to generate runtime key/value data structures from the WSDL, which could be used to specify the request messages as well as to read the replies. The library should then be able to send well-formed SOAP messages to the server and parse the replies, without the programmer having to generate the XML manually (at least not the headers and other plumbing). I can't seem to find any library that actually does this. Is what I want to do really that esoteric, or have I just searched the wrong places? Thanks, Ulrik
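
    For the SOAP-only case, one possibility worth sketching (an assumption on my part, not something the question confirms fits its endpoints) is WCF's untyped message layer: build the request body as XML from the key/value bindings, push it through a ChannelFactory<IRequestChannel>, and read the reply body back as XML, with no generated proxy and no runtime compilation involved. A minimal sketch, assuming a plain BasicHttpBinding endpoint and a SOAPAction taken from the WSDL:

        using System.IO;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Xml;

        static class DynamicSoapCaller
        {
            // endpointUrl and soapAction would come from the parsed WSDL;
            // bodyXml is the request element built from the user's data bindings.
            public static XmlDocument Call(string endpointUrl, string soapAction, string bodyXml)
            {
                BasicHttpBinding binding = new BasicHttpBinding();   // SOAP 1.1 over HTTP
                ChannelFactory<IRequestChannel> factory =
                    new ChannelFactory<IRequestChannel>(binding, new EndpointAddress(endpointUrl));
                IRequestChannel channel = factory.CreateChannel();
                try
                {
                    channel.Open();

                    // WCF wraps the supplied reader in a SOAP envelope and writes the headers.
                    using (XmlReader bodyReader = XmlReader.Create(new StringReader(bodyXml)))
                    {
                        Message request = Message.CreateMessage(binding.MessageVersion, soapAction, bodyReader);
                        Message reply = channel.Request(request);

                        XmlDocument result = new XmlDocument();
                        result.LoadXml(reply.GetReaderAtBodyContents().ReadOuterXml());
                        reply.Close();
                        return result;
                    }
                }
                finally
                {
                    channel.Close();
                    factory.Close();
                }
            }
        }

    Faults, WS-Addressing, and anything beyond basic HTTP bindings complicate this quickly, which is why most tooling falls back to the metadata-import route (WsdlImporter plus generated code) instead.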

  • How do I use Core Data with the Cocoa Text Input system?

    - by the Joel
    Hobbyist Cocoa programmer here. I have been looking around all the usual places, but this seems relatively under-explained: I am writing something a little out of the ordinary. It is much simpler than, but similar to, a desktop publishing app. I want editable text boxes on a canvas, arbitrarily placed. This is document-based and I'd really like to use Core Data. Now, the Cocoa text-handling system seems to deal with a four-class structure: NSTextStorage, NSLayoutManager, NSTextContainer and finally NSTextView. I have looked into these and know how to use them, sort of. I have been making some prototypes and it works for simple apps. The problem arrives when I get into persistence. I don't know how to, by way of Cocoa Bindings or something else, store the contents of NSTextStorage (that is, the actual text) in my managed object context. I have considered overriding method pairs like -words, -setWords: in these objects. This would let me link the words to a string, which I know how to store in Core Data. However, I'd have to override any method that affects the text, and that seems a little much. Thankful for any insights.

  • An online php debugger/code editor

    - by Zirak
    It's a simple deal: I'm sometimes in places where I don't have my laptop, and find myself with spare time and an idea for a project. But unfortunately, I can't do anything about it. I have tried a variety of solutions, including running IDEs (like PhpStorm or Aptana) from a disk-on-key or CD (very slow and unappealing) and trying several online solutions (like http://phpanywhere.net), and found that all of them are either buggy, overloaded or underloaded with features, just difficult to use, require FTP, etc. All that is required here is syntax highlighting and debugging alerts; no actual running of code. So the question is split into two:

    1) Do you know of a good online PHP editor that you've used and enjoyed?

    2) If not, how would you go about making one?

    The second one seems a bit general, so I'll try to expand: it might be a good idea that if you can't find one, you make one. The question is about the concept of making a syntax highlighter (which shouldn't be too difficult), and the harder part of catching PHP errors WITHOUT executing any PHP code. Thank you in advance.

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads that wrap around from the end to the beginning of the buffer are handled automatically, so all reads are contiguous. However, I'm a bit unsure about using it straight away, as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it maps a shared memory file, the size of the buffer, into memory twice. Then, whenever data is written into the buffer, it appears in memory in two places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking, for example, that the file in /dev/shm used for the shared memory always stays in RAM, or could it get written to the hard drive (a performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it would be good to understand this and have it in my toolbox for the future. Thanks in advance for your time.

  • What is a fast way to set debugging code at a given line in a function?

    - by Josh O'Brien
    Preamble: R's trace() is a powerful debugging tool, allowing users to "insert debugging code at chosen places in any function". Unfortunately, using it from the command line can be fairly laborious. As an artificial example, let's say I want to insert debugging code that will report the between-tick interval calculated by pretty.default(). I'd like to insert the code immediately after the value of delta is calculated, about four lines up from the bottom of the function definition. (Type pretty.default to see where I mean.) To indicate that line, I need to find which step in the code it corresponds to. The answer turns out to be step list(c(12, 3, 3)), which I zero in on by running through the following steps:

        as.list(body(pretty.default))
        as.list(as.list(body(pretty.default))[[12]])
        as.list(as.list(as.list(body(pretty.default))[[12]])[[3]])
        as.list(as.list(as.list(body(pretty.default))[[12]])[[3]])[[3]]

    I can then insert debugging code like this:

        trace(what = 'pretty.default',
              tracer = quote(cat("\nThe value of delta is: ", delta, "\n\n")),
              at = list(c(12,3,3)))

        ## Try it
        a <- pretty(c(1, 7843))
        b <- pretty(c(2, 23))

        ## Clean up
        untrace('pretty.default')

    Questions: So here are my questions: Is there a way to print out a function (or a parsed version of it) with the lines nicely labeled by the steps to which they belong? Alternatively, is there another easier way, from the command line, to quickly set debugging code for a specific line within a function?

    Addendum: I used the pretty.default() example because it is reasonably tame, but with real/interesting functions, repeatedly using as.list() quickly gets tiresome and distracting. Here's an example:

        as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(body(
            model.frame.default))[[26]])[[3]])[[2]])[[4]])[[3]])[[4]])[[4]])[[4]])[[3]]

  • Fast sketching tools for drawing C/C++ structs, pointers, etc...

    - by tomasorti
    Hi. I would like to know what you use to sketch relations between different entities in C/C++. This can be a very broad issue, so I'll try to clarify my question a bit and give an example. I'm looking for something that is simple enough as a user and lets me easily sketch containers, pointers, etc. in an informal way. The aim is to document some struct relations to pass them on to junior developers; a look at the drawings is supposed to accelerate their understanding of the code. My solutions at this moment are:

    1) Paper & pencil.
    2) Microsoft PowerPoint/Word AutoShapes.
    3) The freeware Dia.

    Other options could be:

    4) Microsoft Visio, but my company does not own licenses.
    5) UML tools. I don't want to go this way; this is what I mean by a more formal solution. I know tools like Rational Rose are great, and I tried BoUML and Violet and they are fine in some respects, but I prefer the flexibility of options 2) or 3).

    Finally, let me write down a more concrete example. Let's say I want to sketch a map that contains another map as the mapped value, and that one contains a struct as the mapped value, which holds a vector of pointers to one type and a pointer to another type. Also, there exist other structs that hold pointers to the objects pointed to by the previous map, so there are objects pointed to from different places. This is just one example I have, but you can easily come up with one from your experience. What would you use to sketch this example, or another similar one you have dealt with? Best regards, Tomas.

  • NSNotification vs. Delegate Protocols?

    - by jr
    I have an iPhone application which basically gets information from an API (in XML, but maybe JSON eventually). The resulting objects are typically displayed in view controllers (tables, mainly). Here is the architecture right now: I have NSOperation classes which fetch the different objects from the remote server. Each of these NSOperation classes takes a custom delegate, which will receive the resulting objects as they are parsed and then a final call when no more results are available. So the protocol for the delegates will be something like:

        - (void)ObjectTypeResult:(ObjectType *)result;
        - (void)ObjectTypeNoMoreResults;

    I think the solution works well, but I do end up with a bunch of delegate protocols around, and then my view controllers have to implement all these delegate methods. I don't think it's that bad, but I'm always on the lookout for a better design. So, I'm thinking about using NSNotifications to remove the use of the delegates. I could include the object in the userInfo part of the notification and just post objects as they are received, and then a final notification when no more are available. Then I could have just one method in each view controller to receive all the data, even when using multiple object types in one controller. So, can someone share some pros/cons of each approach? Should I consider refactoring my code to use events rather than delegates? Is one better than the other in certain situations? In my scenario I'm really not looking to receive notifications in multiple places, so maybe the protocol-based delegates are the way to go. Thanks!

  • What are the arguments against the inclusion of server side scripting in JavaScript code blocks?

    - by James Wiseman
    I've been arguing for some time against embedding server-side tags in JavaScript code, but was put on the spot today by a developer who seemed unconvinced. The code in question was a legacy ASP application, although this is largely unimportant, as it could equally apply to ASP.NET or PHP (for example). The example in question revolved around the use of a constant that they had defined in server-side code:

        'VB
        Const MY_CONST: MY_CONST = 1

        If sMyVbVar = MY_CONST Then
            'Do Something
        End If

        //JavaScript
        if (sMyJsVar === "<%= MY_CONST%>"){
            //DoSomething
        }

    My standard arguments against this are:

    1) Script injection: the server-side tag could include code that breaks the JavaScript code.
    2) Unit testing: it is harder to isolate units of code for testing.
    3) Code separation: we should keep web page technologies apart as much as possible.

    The reason for doing this was so that the developer did not have to define the constant in two places. They reasoned that, as it was a value they controlled, it wasn't subject to script injection. This reduced my justification for (1) to "We're trying to keep the standards simple, and defining exception cases would confuse people." The unit testing and code separation arguments did not hold water either, as the page itself was a horrible amalgam of HTML, JavaScript, ASP.NET, CSS, XML... you name it, it was there. No code that was ever going to be included in this page could possibly be unit tested. So I found myself feeling like a bit of a pedant insisting that the code be changed, given the circumstances. Are there any further arguments that might support my reasoning, or am I in fact being a bit pedantic in this insistence?

  • Good way to allow people to select a lot of things?

    - by jphenow
    I'm using jQuery, ASP.NET, SQL Server, and the other usual suspects to design a company CRM. After users put in contact info, notes, dates, places and so forth, they have to be able to select many different people to be "CC'ed". A group of people will be required to be either "CC'ed" or "ToDo"; the rest of the people can be nothing, "CC", or "ToDo". Currently we have it set up as a huge databind to templates with radio buttons for each option. Looks like shit. Anyone have any suggestions? I'd like to use a template with a datasource and have a good way to retrieve their answers and use them. I'm leaning in the jQuery direction, but like I said, I'll need up to three possible options for each person. This is going to be all opinion, so I'm just looking for options. Just to re-clarify, this concept is similar to email, but I don't want them to have to type anything in, as it is a set group of names that they're allowed to select from. Looking for quick, simple, and pretty; somewhere in the range of 120 names.

  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems in a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with Flex Builder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top four methods were:

    - [enterFrameEvent] - 84% cumulative, 32% self time
    - [reap] - 20% cumulative and self time
    - [tincan] - 8% cumulative and self time
    - global.isNaN - 4% cumulative and self time

    All other methods had less than 1% for both cumulative and self time. From what I've found online, the [bracketed methods] are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me any results that I was expecting. My next step is going to be to start adding debug trace statements in various places to try to track down what's actually happening, but I feel like there has to be a better way.

  • ADO.NET DataTable/DataRow Thread Safety

    - by Allen E. Scharfenberg
    Introduction: A user reported to me this morning that he was having an issue with inconsistent results (namely, column values sometimes coming out null when they should not be) from some parallel execution code that we provide as part of an internal framework. This code has worked fine in the past and has not been tampered with lately, but it got me thinking about the following snippet.

    Code sample:

        lock (ResultTable)
        {
            newRow = ResultTable.NewRow();
        }

        newRow["Key"] = currentKey;

        foreach (KeyValuePair<string, object> output in outputs)
        {
            object resultValue = output.Value;
            newRow[output.Name] = resultValue != null ? resultValue : DBNull.Value;
        }

        lock (ResultTable)
        {
            ResultTable.Rows.Add(newRow);
        }

    (No guarantees that that compiles; it was hand-edited to mask proprietary information.)

    Explanation: We have this cascading type of locking code in other places in our system, and it works fine, but this is the first instance of cascading locking code that I have come across that interacts with ADO.NET. As we all know, members of framework objects are usually not thread safe (which is the case in this situation), but the cascading locking should ensure that we are not reading from and writing to ResultTable.Rows concurrently. We are safe, right?

    Hypothesis: Well, the cascading lock code does not ensure that we are not reading from or writing to ResultTable.Rows at the same time that we are assigning values to columns in the new row. What if ADO.NET uses some kind of buffer for assigning column values that is not thread safe, even when different object types are involved (DataTable vs. DataRow)? Has anyone run into anything like this before? I thought I would ask here at Stack Overflow before beating my head against this for hours on end :)

    Conclusion: Well, the consensus appears to be that changing the cascading lock to a full lock has resolved the issue. That is not the result that I expected, but the full-lock version has not produced the issue after many, many, many tests. The lesson: be wary of cascading locks used on APIs that you do not control. Who knows what may be going on under the covers!
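
    For illustration only (this is a sketch of the "full lock" the conclusion describes, not the poster's actual fix): holding the table lock across row creation, column assignment, and insertion removes the window in which row values are written while another thread is using the same table's internal structures.

        // A minimal sketch, assuming the same ResultTable, currentKey, and outputs
        // variables as the snippet above. The original's output.Name is assumed to
        // mean the pair's key, so output.Key is used here.
        lock (ResultTable)
        {
            DataRow newRow = ResultTable.NewRow();
            newRow["Key"] = currentKey;

            foreach (KeyValuePair<string, object> output in outputs)
            {
                object resultValue = output.Value;
                // Column writes stay inside the lock as well, since DataRow values are
                // held by the owning DataTable rather than by the row in isolation.
                newRow[output.Key] = resultValue != null ? resultValue : DBNull.Value;
            }

            ResultTable.Rows.Add(newRow);
        }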

  • Refactoring or Rewriting Monolithic PHP Spaghetti Codebase

    - by nategood
    I've inherited a really poorly designed PHP spaghetti-code project. It's been gaining a good bit of traffic recently and is starting to have performance issues on top of the poor monolithic code base. It's maxing out a chunky 16GB dedicated machine when it really shouldn't be. I'm planning on doing some performance tweaks right off the bat to help the performance issue, but this still won't really help the horrible code base. The team is small but expecting to grow very soon. I've read Joel's article on the troubles of doing a complete rewrite and see the concerns. But how bad does the code base have to be before you consider a rewrite? There is PHP handling logic interjected into what one would usually consider a "view". Even worse, in some places SQL statements are in these same files! The only real separation of presentation and logic is a few PHP scripts that serve as function libraries. These scripts do most of the ORM stuff... if you can even call it that. Trying to slowly refactor this seems like a nightmare. Open to your thoughts and opinions... however, I'm not interested in hearing "Run away, run away!".

  • Difference in behaviour (GCC and Visual C++)

    - by Prasoon Saurav
    Consider the following code:

        #include <stdio.h>
        #include <vector>
        #include <iostream>

        struct XYZ { int X, Y, Z; };

        std::vector<XYZ> A;

        int rec(int idx)
        {
            int i = A.size();
            A.push_back(XYZ());
            if (idx >= 5)
                return i;
            A[i].X = rec(idx + 1);
            return i;
        }

        int main()
        {
            A.clear();
            rec(0);
            puts("FINISH!");
        }

    I couldn't figure out the reason why the code gives a segmentation fault on Linux (IDE used: Code::Blocks) whereas on Windows (IDE used: Visual C++) it doesn't. When I used Valgrind just to check what the problem actually was, I got this output: Invalid write of size 4 at four different places. Then why didn't the code crash when I used Visual C++? Am I missing something?

  • Cannot open ports in iptables on CentOS 5

    - by abszero
    I am trying to open up ports in CentOS's firewall and am having a terrible go at it. I have followed the "HowTo" here: http://wiki.centos.org/HowTos/Network/IPTables as well as a few other places on the Net but I still can't get the bloody thing to work. Basically I wanted to get two things working: VNC and Apache over the internal network. The problem is that the firewall is blocking all attempts to connect to these services. Now if I issue service iptables stop and then try to access the server via VNC or hit the webserver everything works as expected. However the moment I turn iptables back on all of my access is blocked. Below is a truncated version of my iptables file as it appears in vi -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5801 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 6001 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT Really I would just be happy if I could get port 80 opened up for Apache since I can do most stuff via putty but if I could figure out VNC as well that would be cool. As far as VNC goes there is just a single/user desktop that I am trying to connect to via: [ipaddress]:1 Any help would be greatly appreciated!

  • MS Access SQL - where clause to get year from date based on the year - data located in MS Access form

    - by primus285
    OK, back again. I have a problem getting a drop-down list to populate based on information in two fields. I have got the SQL correct to select just one year if I put DateValue('01/01/2001') in both places, but I am now trying to get it to grab the year from the MS Access form, specifically another drop-down named "cboYear". I'd hate to have to do something in VB unless necessary. So far I have gotten this to pull up something (it's always incorrect):

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' & [cboYear])
          AND Database_New.Date <= DateValue('12/31/' & [cboYear]);

    and

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' + [cboYear])
          AND Database_New.Date <= DateValue('12/31/' + [cboYear]);

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' AND [cboYear])
          AND Database_New.Date <= DateValue('12/31/' AND [cboYear]);

    Both of these give errors saying that the expression is too complex to compute. It's probably something simple, but where do I go from here?

  • How to see external libraries code when debugging

    - by Sanva
    Hello!! First of all, this is my message #1 in this place, so please be nice with me ;) I recently started studying GNOME apps/libraries, and I found that debuggers are an excellent way to learn, because seeing the code running helps a lot in understanding the structure of the program. But I have a problem. For example, while debugging gnome-panel I found a lot of calls to external functions (basically the GTK+ functions), and although expecting to see the code of every function an application like this calls would be crazy, there are a lot that would be very interesting to see in action. The problem is that the debugger doesn't have the source code of those libraries loaded and can't show it to me; at most it shows the line number where execution currently is. I'm using Nemiver, and when it tries to enter an external function it complains that it can't find a file that is supposed to be somewhere. For example, trying to step into gtk_window_set_default_icon_name it tries to load /build/buildd/gtk+2.0-2.16.1/gtk/gtkwindow.c, and for XSetIOErrorHandler, ../../src/ErrHndlr.c. So now I think that I'm doing something wrong... Why is Nemiver looking for those source files in those places? My system does not even have the /build/buildd/ folders, and I don't know if I'm doing something wrong or need to install something or what. Any suggestions? How do you debug this kind of application? Best regards and thanks a lot for your time, and forgive me if my English is bad.

  • Can't get KnownType to work with WCF

    - by Kelly Cline
    I have an interface and a class defined in separate assemblies, like this:

        namespace DataInterfaces
        {
            public interface IPerson
            {
                string Name { get; set; }
            }
        }

        namespace DataObjects
        {
            [DataContract]
            [KnownType(typeof(IPerson))]
            public class Person : IPerson
            {
                [DataMember]
                public string Name { get; set; }
            }
        }

    This is my service interface:

        public interface ICalculator
        {
            [OperationContract]
            IPerson GetPerson();
        }

    When I update my service reference for my client, I get this in the Reference.cs:

        public object GetPerson()
        {
            return base.Channel.GetPerson();
        }

    I was hoping that KnownType would give me IPerson instead of "object" here. I have also tried [KnownType(typeof(Person))] with the same result. I have control of both client and server, so I have my DataObjects (where Person is defined) and DataInterfaces (where IPerson is defined) assemblies in both places. Is there something obvious I am missing? I thought KnownType was the answer to being able to use interfaces with WCF.

    ----- FURTHER INFORMATION -----

    I removed the KnownType from the Person class and added [ServiceKnownType(typeof(Person))] to my service interface, as suggested by Richard. The client-side proxy still looks the same:

        public object GetPerson()
        {
            return base.Channel.GetPerson();
        }

    but now it doesn't blow up. The client just has an "object", though, so it has to cast it to IPerson before it is useful:

        var person = client.GetPerson();
        Console.WriteLine(((IPerson)person).Name);
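
    One pattern worth noting here (a sketch, not something the question states was tried): since both ends share the DataInterfaces and DataObjects assemblies, the generated proxy can be bypassed entirely with ChannelFactory<ICalculator>. The client then compiles against the original contract, so GetPerson() keeps its IPerson signature, while ServiceKnownType tells the serializer which concrete type travels on the wire. The address and binding below are placeholders.

        using System;
        using System.ServiceModel;
        using DataInterfaces;
        using DataObjects;

        [ServiceContract]
        public interface ICalculator
        {
            [OperationContract]
            [ServiceKnownType(typeof(Person))]   // concrete type; interfaces carry no data contract
            IPerson GetPerson();
        }

        static class CalculatorClientDemo
        {
            public static void Run()
            {
                ChannelFactory<ICalculator> factory = new ChannelFactory<ICalculator>(
                    new BasicHttpBinding(),
                    new EndpointAddress("http://localhost:8080/calculator"));   // placeholder

                ICalculator client = factory.CreateChannel();
                IPerson person = client.GetPerson();   // typed as IPerson, no cast needed
                Console.WriteLine(person.Name);

                ((IClientChannel)client).Close();
                factory.Close();
            }
        }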

  • Serializable object in intent returning as String

    - by B_
    In my application, I am trying to pass a serializable object through an intent to another activity. The intent is not entirely created by me; it is created and passed through a search suggestion. In the content provider for the search suggestion, the object is created and placed in the SUGGEST_COLUMN_INTENT_EXTRA_DATA column of the MatrixCursor. However, when in the receiving activity I call getIntent().getSerializableExtra(SearchManager.EXTRA_DATA_KEY), the returned object is of type String and I cannot cast it to the original object's class. I tried making a parcelable wrapper for my object that calls out.writeSerializable(...) and using that instead, but the same thing happened. The string that is returned looks like a generic Object toString(), i.e. com.foo.yak.MyAwesomeClass@4350058, so I'm assuming that toString() is being called somewhere where I have no control. Hopefully I'm just missing something simple. Thanks for the help!

    Edit: Some of my code. This is in the content provider that acts as the search authority:

        //These are the search suggestion columns
        private static final String[] COLUMNS = {
            "_id", // mandatory column
            SearchManager.SUGGEST_COLUMN_TEXT_1,
            SearchManager.SUGGEST_COLUMN_INTENT_EXTRA_DATA
        };

        //This places the serializable or parcelable object (and other info) into the search suggestion
        private Cursor getSuggestions(String query, String[] projection) {
            List<Widget> widgets = WidgetLoader.getMatches(query);
            MatrixCursor cursor = new MatrixCursor(COLUMNS);
            for (Widget w : widgets) {
                cursor.addRow(new Object[] {
                    w.id,
                    w.name,
                    w.data   //This is the MyAwesomeClass object I'm trying to pass
                });
            }
            return cursor;
        }

    This is in the activity that receives the search suggestion:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Object extra = getIntent().getSerializableExtra(SearchManager.EXTRA_DATA_KEY);

            //extra.getClass() returns String, when it should return MyAwesomeClass, so this next
            //line throws a ClassCastException and causes a crash
            MyAwesomeClass mac = (MyAwesomeClass)extra;
            ...
        }

  • How to use named_scope to incrementally construct a complex query?

    - by wbharding
    I want to use Rails named_scope to incrementally build complex queries that are returned from external methods. I believe I can get the behavior I want with anonymous scopes, like so:

        def filter_turkeys
          Farm.scoped(:conditions => {:animal => 'turkey'})
        end

        def filter_tasty
          Farm.scoped(:conditions => {:tasty => true})
        end

        turkeys = filter_turkeys
        is_tasty = filter_tasty
        tasty_turkeys = filter_turkeys.filter_tasty

    But say that I already have a named_scope that does what these anonymous scopes do: can I just use that, rather than having to declare anonymous scopes? Obviously one solution would be to pass my growing query to each of the filter methods, a la:

        def filter_turkey(existing_query)
          existing_query.turkey # turkey is a named_scope that filters for turkey
        end

    But that feels like an un-DRY way to solve the problem. Oh, and if you're wondering why I would want to return named_scope pieces from methods rather than just build all the named scopes I need and concatenate them together, it's because my queries are too complex to be handled nicely with named_scopes themselves, and the queries get used in multiple places, so I don't want to manually build the same big hairy concatenation multiple times.
