Search Results

Search found 9889 results on 396 pages for 'behind the compiler'.

  • WPF validation red border doesn't show if UserControl collapsed first

    - by Creepy Gnome
    There seems to be a bug in WPF 3.5, and I was hoping someone may have found a workaround. Basically, if you have a custom UserControl that contains a TextBox, and the control sits in a Window but is initialized as Collapsed (in the XAML or the code-behind), then a validation failure will not show the red border when you make the control visible; the border only appears once validation fails again while the control is visible. This works correctly when Visibility is set to Hidden, just not when it is Collapsed. I am already overriding the ErrorTemplate with a style to work around the adornment issue of the red border staying visible when you collapse the control. Below is my full style for the TextBox. If there are any additional changes or additions that would make it work correctly with collapsed controls, that would be great.

        <Style TargetType="TextBox">
            <Setter Property="Margin" Value="3" />
            <Setter Property="Validation.ErrorTemplate">
                <Setter.Value>
                    <ControlTemplate>
                        <ControlTemplate.Resources>
                            <BooleanToVisibilityConverter x:Key="converter" />
                        </ControlTemplate.Resources>
                        <DockPanel LastChildFill="True">
                            <Border BorderThickness="2" BorderBrush="Red"
                                    Visibility="{Binding ElementName=placeholder, Mode=OneWay,
                                                 Path=AdornedElement.IsVisible,
                                                 Converter={StaticResource converter}}">
                                <AdornedElementPlaceholder x:Name="placeholder" />
                            </Border>
                        </DockPanel>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
            <Style.Triggers>
                <Trigger Property="Validation.HasError" Value="true">
                    <Setter Property="ToolTip"
                            Value="{Binding RelativeSource={RelativeSource Self},
                                    Path=(Validation.Errors)[0].ErrorContent}" />
                </Trigger>
            </Style.Triggers>
        </Style>
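
    A workaround that is sometimes suggested for stale validation adorners is to force the binding to re-validate once the control becomes visible again. The following is only a sketch, not a verified fix for the 3.5 bug; it assumes the UserControl's TextBox is named "valueTextBox" (a hypothetical name) and that validation runs on the Text binding:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        public partial class MyUserControl : UserControl
        {
            public MyUserControl()
            {
                InitializeComponent();
                IsVisibleChanged += OnIsVisibleChanged;
            }

            // When the control transitions to visible, re-evaluate the source
            // binding so Validation.HasError (and the ErrorTemplate adorner)
            // is refreshed now that the adorner layer can actually draw it.
            private void OnIsVisibleChanged(object sender, DependencyPropertyChangedEventArgs e)
            {
                if ((bool)e.NewValue)
                {
                    BindingExpression be = valueTextBox.GetBindingExpression(TextBox.TextProperty);
                    if (be != null)
                        be.UpdateSource();
                }
            }
        }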

  • What Scheme Does Ghuloum Use?

    - by Don Wakefield
    I'm trying to work my way through Compilers: Backend to Frontend (and Back to Front Again) by Abdulaziz Ghuloum. It seems abbreviated from what one would expect of a full course/seminar, so I'm trying to fill in the pieces myself. For instance, I have tried to use his testing framework in the R5RS flavor of DrScheme, but it doesn't seem to like the macro stuff:

        src/ghuloum/tests/tests-driver.scm:6:4: read: illegal use of open square bracket

    I've read his intro paper on the course, An Incremental Approach to Compiler Construction, which gives a great overview of the techniques used and mentions a couple of Schemes with features one might want to implement for 'extra credit', but he doesn't mention the Scheme he uses in the course.

    Update: I'm still digging into the original question (investigating options such as Petit Scheme, suggested by Eli below), but I found an interesting link relating to Ghuloum's work, so I am including it here. [Ikarus Scheme](http://en.wikipedia.org/wiki/Ikarus_(Scheme_implementation)) is the actual implementation of Ghuloum's ideas and appears to have been part of his Ph.D. work. It's supposed to be one of the first implementations of R6RS. I'm trying to install Ikarus now, but the configure script doesn't want to recognize my system's install of libgmp.so, so my problems are still unresolved.

    Example: The following seems to work in PLT 2.4.2 running in DrEd using the Pretty Big language:

        (require lang/plt-pretty-big)
        (load "/Users/donaldwakefield/ghuloum/tests/tests-driver.scm")
        (load "/Users/donaldwakefield/ghuloum/tests/tests-1.1-req.scm")

        (define (emit-program x)
          (unless (integer? x) (error "---"))
          (emit "    .text")
          (emit "    .globl scheme_entry")
          (emit "    .type scheme_entry, @function")
          (emit "scheme_entry:")
          (emit "    movl $~s, %eax" x)
          (emit "    ret"))

    Attempting to replace the require directive with #lang scheme results in the error message

        foo.scm:7:3: expand: unbound identifier in module in: emit

    which appears to be due to a failure to load tests-driver.scm. Attempting to use #lang r6rs disables the REPL, which I'd really like to use, so I'm going to try to continue with Pretty Big. My thanks to Eli Barzilay for his patient help.

  • Block element, horizontal center aligned and minimum width possible

    - by Nazgulled
    Hi, I'm trying to have this block element horizontally aligned in the middle, but at the same time I would like its width to be the minimum possible for the contents inside. I don't think it's possible, or I'm not able to do it myself...

        a#button-link {
            background: url("images/button-link.png") no-repeat scroll center left;
            margin: 12px auto 0 auto;
            display: block;
            width: 125px;
            height: 32px;
            line-height: 32px;
            padding-left: 35px;
            font-weight: bold;
        }

    This is my current code. The idea behind this is that the text for that tag could be slightly bigger or smaller depending on the user's language and how many characters the same sentence needs in that specific language. There's no way I can control that, but I would still like to have this element horizontally centered. Is this possible with CSS?

  • iphone - using NSInvocation: constant value

    - by Mike
    I am dealing with an old iPhone OS 2.x project and I want to keep compatibility while designing for 3.x. I am using NSInvocation, with code like this to call 3.0 APIs on 2.0:

        NSInvocation *invoc = [NSInvocation invocationWithMethodSignature:
            [cell methodSignatureForSelector:@selector(initWithStyle:reuseIdentifier:)]];
        [invoc setTarget:cell];
        [invoc setSelector:@selector(initWithStyle:reuseIdentifier:)];
        int arg2 = UITableViewCellStyleDefault; // ????
        [invoc setArgument:&arg2 atIndex:2];
        [invoc setArgument:&identificadorNormal atIndex:3];
        [invoc invoke];

    I am having a problem on the line I marked with question marks. The problem there is that I am trying to assign to arg2 a constant that has not been defined in OS 2.0. As everything with NSInvocation is about doing things indirectly to avoid compiler errors, how do I set this constant on a variable in an indirect way? Some sort of performSelector "assign value to variable"... is that possible? Thanks for any help.

  • How do I use a custom authentication mechanism for a Java web application with Spring Security?

    - by Adam
    Hi, I'm working on a project to convert an existing Java web application to use Spring Web MVC. As a part of this I will migrate the existing log-on/log-off mechanism to use Spring Security. The idea at this stage is to replicate the existing functionality and replace only the web layer, leaving the service classes and objects in place.

    The required functionality is simple. Access is controlled to URLs, and to access certain pages the user must log on. Authentication is performed with a simple username and password, along with an extra static piece of information that comes from the login page. There is no notion of a role: once a user has logged on, they have access to all of the pages. Behind the scenes, the service layer has a class with a simple authentication method:

        doAuthenticate(String username, String password, String info) throws ServiceException

    An exception is thrown if the login fails. I'd like to leave this existing service object that does the authentication intact but to "plug it into" the Spring Security mechanism. Can somebody suggest the best approach to take for this, please? Naturally, I'd like to take the path of least resistance and leave the work where possible to Spring... Thanks in advance, Adam.

  • question regarding templatization of virtual function

    - by jan
    Hi, I am new to this forum, and sorry if I am repeating this question. I know that you cannot templatize a virtual function, and I do understand the concept behind that. But I still need a way to get around some errors I am getting. I am able to make my stuff work, but it doesn't look right to me. Here's the deal; I have a class called System:

        #include "Vector.h"

        class System {
            virtual void VectorToLocal(Vector<T>& global_dir,
                                       const Vector<T>* global_pos = 0) const = 0;
        };

        class UnresolvedSystem : public System {
            virtual void VectorToLocal(Vector<T>& global_dir,
                                       const Vector<T>* global_pos = 0) const
            {
                // do something
            }
        };

    In Vector.h:

        template<typename T>
        class Vector {
            // some functions
        };

    See, now I want to templatize VectorToLocal in System to take just Vector<T>, but I cannot do it as it is a virtual function. I want a workaround. I know I can have VectorToLocal take Vector<float>, Vector<double>, etc. as arguments, but I do not want to do that. Any help would be really appreciated. Thanks in advance, Jan

  • How to find specific/local files via CMake

    - by Andreas Romeyke
    Hello, I have a problem with a locally installed library. In my project there is the xmlrpc++0.7 library:

        myproject/
        +-- xmlrpc++0.7/
            +-- src/

    I want CMake to fall back to the local xmlrpc++0.7 directory if the library is not found elsewhere. Two problems. The first one: find_path() and find_library() do not work with the local directory. I used a workaround, testing whether the variables produced by find_xxx() are empty or not; if empty, I set them manually. CMake now generates the Makefile without errors, but if I compile the project via make, the C++ compiler returns "error: XmlRpc.h: file not found". The file XmlRpc.h lies in myproject/xmlrpc++0.7/src, and if I compile everything manually it works fine.

    Here is my CMakeLists.txt. I would be very happy if anyone could point me to the right solution for using CMake under the conditions described above.

        project(webservice_tesseract)
        cmake_minimum_required(VERSION 2.6)
        set(CMAKE_INCLUDE_CURRENT_DIR ON)

        # find tesseract
        find_path(TESSERACT_INCLUDE_DIR tesseract/tesseractmain.h
                  /opt/local/include /usr/local/include /usr/include)
        find_library(TESSERACT_LIBRARY_DIR
                     NAMES tesseract_main
                     PATHS /opt/local/lib/ /usr/local/lib/ /usr/lib)
        message(STATUS "looked for tesseract library.")
        message(STATUS "Include file detected: [${TESSERACT_INCLUDE_DIR}].")
        message(STATUS "Lib file detected: [${TESSERACT_LIBRARY_DIR}].")
        add_library(tesseract STATIC IMPORTED)
        set_property(TARGET tesseract
                     PROPERTY IMPORTED_LOCATION ${TESSERACT_LIBRARY_DIR}/libtesseractmain.a)

        # find xmlrpc++
        message(STATUS "cmake home dir: [${CMAKE_HOME_DIRECTORY}].")
        set(LOCAL_XMLRPCPLUSPLUS ${CMAKE_HOME_DIRECTORY}/xmlrpc0.7++/)
        message(STATUS "xmlrpc++ local dir: [${LOCAL_XMLRPCPLUSPLUS}].")
        find_path(XMLRPCPLUSPLUS_INCLUDE_DIR XmlRpcServer.h
                  ${LOCAL_XMLRPCPLUSPLUS}src /opt/local/include /usr/local/include /usr/include)
        find_library(XMLRPCPLUSPLUS_LIBRARY_DIR
                     NAMES XmlRpc
                     PATHS ${LOCAL_XMLRPCPLUSPLUS} /opt/local/lib/ /usr/local/lib/ /usr/lib/)

        # next lines are an ugly workaround because cmake find_xxx() does not find local stuff
        if (XMLRPCPLUSPLUS_INCLUDE_DIR)
        else (XMLRPCPLUSPLUS_INCLUDE_DIR)
            set(XMLRPCPLUSPLUS_INCLUDE_DIR ${LOCAL_XMLRPCPLUSPLUS}src)
        endif (XMLRPCPLUSPLUS_INCLUDE_DIR)
        if (XMLRPCPLUSPLUS_LIBRARY_DIR)
        else (XMLRPCPLUSPLUS_LIBRARY_DIR)
            set(XMLRPCPLUSPLUS_LIBRARY_DIR ${LOCAL_XMLRPCPLUSPLUS})
        endif (XMLRPCPLUSPLUS_LIBRARY_DIR)

        message(STATUS "looked for xmlrpc++ library.")
        message(STATUS "Include file detected: [${XMLRPCPLUSPLUS_INCLUDE_DIR}].")
        message(STATUS "Lib file detected: [${XMLRPCPLUSPLUS_LIBRARY_DIR}].")
        add_library(xmlrpc STATIC IMPORTED)
        set_property(TARGET xmlrpc
                     PROPERTY IMPORTED_LOCATION ${XMLRPCPLUSPLUS_LIBRARY_DIR}/libXmlRpc.a)

        #### link together
        include_directories(${XMLRPCPLUSPLUS_INCLUDE_DIR} ${TESSERACT_INCLUDE_DIR})
        link_directories(${XMLRPCPLUSPLUS_LIBRARY_DIR} ${TESSERACT_LIBRARY_DIR})
        add_library(simpleocr STATIC simple_ocr.cpp)
        add_executable(webservice_tesseract webservice.cpp)
        target_link_libraries(webservice_tesseract xmlrpc tesseract simpleocr)

  • Writing fortran robust and "modern" code

    - by Blklight
    In some scientific environments, you often cannot go without FORTRAN, as most of the developers only know that idiom, and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-starting arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy, but I'm stuck with some F90 projects.

    So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it while staying compatible with most "recent" compilers (Intel ifort, but also the Sun/HP/IBM compilers). So I'm thinking of imposing:

    - global variables forbidden, no gotos, no jump labels, "implicit none", etc.
    - "object-oriented programming" (modules with datatypes + related subroutines)
    - modular/reusable functions, well documented, reusable libraries
    - assertions/preconditions/invariants (implemented using preprocessor statements)
    - unit tests for all (most) subroutines and "objects"
    - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.)
    - uniform and enforced legible coding style, using code-processing tools
    - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.)
    - C/C++ implementation of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.)

    (This may all seem like "evident" modern programming practice, but in a legacy FORTRAN world most of these are big changes to the typical programmer workflow.) The goal of all this is to have trustworthy, maintainable and modular code, whereas in typical FORTRAN, modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm only half joking here.)

    I searched around for references on object-oriented FORTRAN and programming-by-contract (assertions/preconditions/etc.) and found only ugly and outdated documents, syntaxes and papers by people with no large-scale project involvement, and dead projects. Any good URLs, advice, or reference papers/books on the subject?

  • What's the best way to read a UDT from a database with Java?

    - by Lukas Eder
    I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):

        An input stream that contains a stream of values representing an instance of an
        SQL structured type or an SQL distinct type. This interface, used only for custom
        mapping, is used by the driver behind the scenes, and a programmer never directly
        invokes SQLInput methods.

    This is quite the opposite of what I am used to doing (which is also used and stable in production systems, when used with the Oracle JDBC driver): implement SQLData and provide that implementation in a custom mapping to ResultSet.getObject(int index, Map mapping). The JDBC driver then calls back into my custom type using the SQLData.readSQL(SQLInput stream, String typeName) method. I implement this method and read each field from the SQLInput stream. In the end, getObject() returns a correctly initialised instance of my SQLData implementation holding all data from the UDT.

    To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:

    - I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc.
    - I can generate source code from my UDTs, with appropriate getters/setters and other properties

    My questions:

    - What do you think of my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise?
    - What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?

    Addendum: UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code without hiding the underlying database architecture. If you have questions similar to the above, jOOQ might provide an answer for you.

  • Calling a SLSB with Seam security from a servlet

    - by wilth
    Hello, I have an existing application written in Seam that uses Seam Security (http://docs.jboss.org/seam/2.1.1.GA/reference/en-US/html/security.html). In a stateless EJB, I might find something like this:

        @In Identity identity;
        ...
        if (identity.hasRole("admin"))
            throw new AuthException();

    As far as I understand, Seam injects the Identity object from the SessionContext of the servlet that invokes the EJB (this happens "behind the scenes", since Seam doesn't really use servlets) and removes it after the call. Is this correct?

    Is it now possible to access this EJB from another servlet (in this case, that servlet is the server side of a GWT application)? Do I have to "inject" the correct Identity instance? If I don't do anything, Seam injects an instance but doesn't correctly correlate the sessions and instances of Identity (so the instances of Identity are shared between sessions, and sometimes calls get new instances, etc.). Any help and pointers are very welcome; thanks!

    Technology: EJB3, Seam 2.1.2. The servlets are actually the server side of a GWT app, although I don't think this matters much. I'm using JBoss 5.

  • Controls added in the designer are null during Page_Load

    - by mwright
    All of the names below are generic and not the actual names used. I have a custom UserControl with Panels that contain a couple of Labels, all .aspx controls.

    .aspx:

        <asp:Panel runat="server">
            <asp:Label ID="label1" runat="server"></asp:Label>
        </asp:Panel>
        <asp:Panel runat="server">
            <asp:Label ID="label2" runat="server"></asp:Label>
        </asp:Panel>

    Code-behind:

        private readonly Object object;

        protected void Page_Load(object sender, EventArgs e)
        {
            // These are the lines that are failing:
            // label1 and label2 are null
            label1.Text = object.Value1;
            label2.Text = object.Value2;
        }

        public ObjectRow(Object objectToDisplay)
        {
            object = objectToDisplay;
        }

    On another page, in the code-behind, I create a new instance of the custom user control:

        protected void Page_Load(object sender, EventArgs e)
        {
            CustomControl control = new CustomControl(object);
        }

    The user control takes the parameter and attempts to set the labels based on the object passed in. The labels it tries to assign the values to are, however, null. Is this an ASP.NET lifecycle issue that I'm not understanding? My understanding, based on the Microsoft ASP.NET lifecycle page, was that page controls are available after page initialization. What is the proper way to do this? Is there a better way?
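
    For reference, the usual cause of this symptom is that new CustomControl(...) only constructs the class; it never runs the .ascx markup through the page parser, so the child controls declared there (label1, label2) are never created. The documented pattern is to load the control with LoadControl and pass data through a property rather than a constructor. A minimal sketch, in which the .ascx path and the property name are hypothetical:

        // In the hosting page's code-behind. LoadControl parses the .ascx,
        // which is what instantiates label1/label2; the control must also be
        // added to the page's control tree to take part in the lifecycle.
        protected void Page_Load(object sender, EventArgs e)
        {
            CustomControl control = (CustomControl)LoadControl("~/Controls/CustomControl.ascx");
            control.ObjectToDisplay = myObject;   // hypothetical property instead of a constructor arg
            placeHolder1.Controls.Add(control);
        }

    Inside the user control, the labels can then be assigned in the property setter or in a later lifecycle event such as PreRender, at which point the child controls exist.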

  • Silverlight databinding error

    - by Petezah
    I found an example online that explains how to perform databinding to a ListBox control using LINQ in WPF. The example works fine, but when I replicate the same code in Silverlight it doesn't work. Is there a fundamental difference between Silverlight and WPF that I'm not aware of?

    Here is an example of the XAML:

        <ListBox x:Name="listBox1">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding Name}" FontSize="18" />
                        <TextBlock Text="{Binding Role}" />
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    Here is an example of my code-behind:

        private void UserControl_Loaded(object sender, RoutedEventArgs e)
        {
            string[] names = new string[] { "Captain Avatar", "Derek Wildstar", "Queen Starsha" };
            string[] roles = new string[] { "Hero", "Captain", "Queen of Iscandar" };

            listBox1.ItemsSource = from n in names
                                   from r in roles
                                   select new { Name = n, Role = r };
        }
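
    A likely culprit, assuming the XAML is wired up correctly: Silverlight's data binding can only reach public properties on public types, and C# anonymous types are compiler-generated as internal, so the {Binding Name} paths silently resolve to nothing, whereas WPF is more permissive about non-public types. A minimal sketch that swaps the anonymous type for a small named class (CrewMember is a name invented for this example; it requires using System.Linq and System.Collections.Generic):

        // Silverlight's binding engine needs a public type with public
        // properties; the property names mirror the bindings in the XAML.
        public class CrewMember
        {
            public string Name { get; set; }
            public string Role { get; set; }
        }

        private void UserControl_Loaded(object sender, RoutedEventArgs e)
        {
            string[] names = { "Captain Avatar", "Derek Wildstar", "Queen Starsha" };
            string[] roles = { "Hero", "Captain", "Queen of Iscandar" };

            // Same cross join as the original query, projected into the
            // named class instead of an anonymous type.
            listBox1.ItemsSource = (from n in names
                                    from r in roles
                                    select new CrewMember { Name = n, Role = r }).ToList();
        }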

  • Default template parameters with forward declaration

    - by Seth Johnson
    Is it possible to forward-declare a class that uses default template arguments, without specifying or knowing those arguments? For example, I would like to declare a boost::ptr_list<TYPE> in a traits class without dragging the entire Boost library into every file that includes the traits. I would like to declare

        namespace boost { template<class T> class ptr_list< T >; }

    but that doesn't work, because it doesn't exactly match the true class declaration:

        template <
            class T,
            class CloneAllocator = heap_clone_allocator,
            class Allocator = std::allocator<void*>
        >
        class ptr_list { ... };

    Are my options only to live with it, or to specify boost::ptr_list<TYPE, boost::heap_clone_allocator, std::allocator<void*> > in my traits class? (If I use the latter, I'll also have to forward-declare boost::heap_clone_allocator and include <memory>, I suppose.)

    I've looked through Stroustrup's book, SO, and the rest of the internet and haven't found a solution. Usually people are concerned about not including the STL, and the solution is "just include the STL headers." However, Boost is a much more massive and compiler-intensive library, so I'd prefer to leave it out unless I absolutely have to.

  • Eclipse (Aptana) Typing Lag

    - by Zack
    Hello SO, I've been using Aptana for some time now, and as of recent I've been dealing with files that are really, really big (500+ lines of code, which is huge for me, being a novice developer). Whenever I deal with smaller files, I get that weird sensation that I'm "in front of" what's typing, but now I'm quite sure of it: there is a significant lag between when I type something and when I see the text appear on screen.

    I don't have this issue with Dreamweaver CS3, so I know my computer has the capability to edit these files without this happening, but Eclipse still lags. I also don't see when something is being deleted if I hold down backspace; I see the first few characters get deleted, but then everything "hangs." Once I release the backspace key, the characters that would have been shown deleting instantly vanish all at once. The same thing happens with the forward delete key. I'm beginning to think this is an issue with Java, since I have the same feeling that everything is slightly "behind me" when I'm using any Java application.

    The computer is an Intel Pentium 4 3.2 GHz Prescott, with 2 GB of DDR400 RAM and a Radeon HD3650 graphics card. If anyone knows how to fix this lagging issue, I'm all ears (eyes?); if anyone can recommend a different IDE with capabilities similar to Aptana (I do Python, HTML, CSS and JS; I use Git for SCM), I'd be glad to give it a try. Thanks!

  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference of C++ versus C# as being mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?), plus all the checks C# does. I notice that it is possible to turn a lot of these features off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime.

    Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this, but I don't know C++ or have the tools to compile it. This is my C# version though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;

                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code "unsafe", or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too, then. To avoid subjectivity, I reiterate the specific parts of this question as:

    - Does C# get a performance boost from using unsafe code?
    - Do compile options such as disabling overflow checking boost performance, and do they affect unsafe code?
    - Would the program above be faster in C++, or negligibly different?
    - Is it worth compiling long, intensive number-crunching tasks in a language such as C++, or using /unsafe for a bonus? Less subjectively, could I complete an intensive operation faster by doing this?
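
    One thing worth separating out before any cross-language comparison: the loop above spends its time in Random.NextDouble and, periodically, in console I/O; it contains no array indexing or checked arithmetic, so the unsafe keyword and the overflow-check option have little to act on. A hedged sketch of a fairer measurement (a fixed iteration count, a fixed seed, and the I/O moved outside the timed region):

        using System;
        using System.Diagnostics;

        static class CoinBench
        {
            static void Main()
            {
                const int iterations = 100000000;   // 1e8 flips
                Random rnd = new Random(12345);     // fixed seed for repeatable runs
                int heads = 0, tails = 0;

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    if (rnd.NextDouble() > 0.5) heads++;
                    else tails++;
                }
                sw.Stop();

                // Write once, outside the timed region, so console I/O does
                // not dominate the measurement.
                Console.WriteLine("Heads: {0} Tails: {1} in {2} ms",
                                  heads, tails, sw.ElapsedMilliseconds);
            }
        }

    Timing the equivalent loop in C++ against this gives a like-for-like number; timing the original version mostly measures Console.WriteLine.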

  • Go: using a pointer to array

    - by Sean
    I'm having a little play with Google's Go language, and I've run into something which is fairly basic in C but doesn't seem to be covered in the documentation I've seen so far. When I pass a pointer to an array to a function, I presumed we'd have some way to access it as follows:

        func conv(x []int, xlen int, h []int, hlen int, y *[]int) {
            for i := 0; i < xlen; i++ {
                for j := 0; j < hlen; j++ {
                    *y[i+j] += x[i] * h[j]
                }
            }
        }

    But the Go compiler doesn't like this:

        sean@spray:~/dev$ 8g broke.go
        broke.go:8: invalid operation: y[i + j] (index of type *[]int)

    Fair enough; it was just a guess. I have got a fairly straightforward workaround:

        func conv(x []int, xlen int, h []int, hlen int, y_ *[]int) {
            y := *y_
            for i := 0; i < xlen; i++ {
                for j := 0; j < hlen; j++ {
                    y[i+j] += x[i] * h[j]
                }
            }
        }

    But surely there's a better way. The annoying thing is that googling for info on Go isn't very useful, as all sorts of C/C++/unrelated results appear for most search terms.

  • Am I Writing Assembly Or NASM?

    - by cam
    I'm fed up with this. I've been trying to get a grip on assembly for a while, but I feel like I'm coding towards my compiler rather than a language. I've been using this tutorial, and so far it's giving me hell. I'm using NASM, which may be the problem, but I figured it was the most popular one. I'm simply trying to learn the most general form of assembly, so I decided to learn x86. I keep running into stupid errors, like not being able to increment a variable. Here's the latest one: not being able to use div.

            mov bx, 0
            mov cx, 0
            jmp start

        start:
            inc cx
            mov ax, cx
            div 3               ; <-- invalid combination of opcode and operand
            cmp ah, 0
            jz totalvalue
            mov ax, cx
            div 5               ; <-- invalid combination of opcode and operand
            cmp ah, 0
            jz totalvalue
            cmp cx, 1000
            jz end

        totalvalue:
            add bx, cx
            jmp start
            jmp end

        end:
            mov ah, 4ch
            mov al, 00
            int 21h

    Should I change compilers? It seems like division should be standard. Do I need to read two tutorials (one on NASM and one on x86)? Any specific help on this problem?

  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin. Even the character encoding is set up correctly! There still seems to be something wrong with my configuration: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently, I have the debug settings as follows:

        Debugger:         "Cygwin gdb Debugger"
        GDB debugger:     gdb
        GDB command file: .gdbinit
        Protocol:         Default

    I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

    Debugging: When attempting to terminate the application in debug mode, Eclipse displays the following error: "Target request failed: failed to interrupt." I can't kill the process, either; I have to kill its parent gdb.exe, which in turn kills my application.

    Running: When running it normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the following error: "Terminate failed." I can kill FaceDetector.exe from the task manager.

    Process Explorer: This is what it looks like in Process Explorer (debugging left, running right). [Screenshots not included here.]

  • Noob question about a statement in a Java program

    - by happysoul
    I am a beginner to Java and was trying out this code puzzle from the book Head First Java, which I solved as follows and got the correct output:

        class DrumKit {
            boolean topHat = true;
            boolean snare = true;

            void playSnare() {
                System.out.println("bang bang ba-bang");
            }

            void playTopHat() {
                System.out.println("ding ding da-ding");
            }
        }

        public class DrumKitTestDriver {
            public static void main(String[] args) {
                DrumKit d = new DrumKit();
                if (d.snare == true) {
                    d.playSnare();
                }
                d.playTopHat();
            }
        }

    The output is:

        bang bang ba-bang
        ding ding da-ding

    Now, the problem is that the code puzzle includes one snippet that I did not use. It's the following:

        d.snare = false;

    Even though I did not write it, I got the same output as the book. I am wondering why there is a need to set its value to false when we know the code will run without it. I wonder what the author had in mind; I mean, what could be the possible future use of and motive behind doing this? I am sorry if it's a dumb question; I just want to know whether or not to include that particular statement. It's not like there's a loop or something that we need to come out of. Why is that statement there?

  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index, in a database I maintain, that was of the form:

        CREATE INDEX [IX_Foo] ON [Foo]
        (
            Id ASC
        )
        INCLUDE ( SubId )

    In this particular case, the performance problem I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all, when generally they could simply be part of the index itself. Even if I don't particularly care about the items being in the index itself, is there any downside to having a column in the index rather than merely included?

    After some research, I am aware that there are a number of restrictions on what can go into an indexed column (the maximum width of the index, and some column types that can't be indexed, like 'image'). In those cases I can see that you would be forced to carry the column in the index's page data instead. The only other thing I can think of is that if there are updates on SubId, the row will not need to be relocated if the column is only included (though the value stored in the index would still need to be changed).

    Is there something else that I'm missing? I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome also.

  • In .NET, Why Can I Access Private Members of a Class Instance within the Class?

    - by AMissico
    While cleaning some code today written by someone else, I changed the access modifier on a class variable/member/field from Public to Private. I expected a long list of compiler errors that I could use to "refactor/rework/review" the code that used this variable. Imagine my surprise when I didn't get any errors. After reviewing, it turns out that an instance of a class can access the private members of another instance of that same class, from code declared within the class. Totally unexpected. Is this normal?

    I have been coding in .NET since the beginning and never ran into this issue, nor read about it. I may have stumbled onto it before, but only "vaguely noticed" and moved on. Can anyone explain this behaviour to me? I would like to know the "why" of my being able to do this. Please explain; don't just tell me the rule. Am I doing something wrong? I found this behaviour in both C# and VB.NET. The code seems to take advantage of the ability to access private variables. Sincerely, Totally Confused

        Class Jack
            Private _int As Integer
        End Class

        Class Foo
            Public Property Value() As Integer
                Get
                    Return _int
                End Get
                Set(ByVal value As Integer)
                    _int = value * 2
                End Set
            End Property

            Private _int As Integer
            Private _foo As Foo
            Private _jack As Jack
            Private _fred As Fred

            Public Sub SetPrivate()
                _foo = New Foo
                _foo.Value = 4  ' what you would expect to do because _int is private
                _foo._int = 3   ' TOTALLY UNEXPECTED

                _jack = New Jack
                '_jack._int = 3 ' expected compile error

                _fred = New Fred
                '_fred._int = 3 ' expected compile error
            End Sub

            Private Class Fred
                Private _int As Integer
            End Class
        End Class
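
    For reference: this is by design in .NET (and likewise in C++ and Java); access modifiers are enforced per class, not per instance. One common rationale is that methods such as Equals implementations, comparers and copy constructors could not be written against private state otherwise. A minimal C# illustration of the same rule:

        class Foo
        {
            private int _int;

            public Foo(int value) { _int = value; }

            // Legal: "private" means private to the class Foo, so code in
            // Foo may read the private field of a different Foo instance.
            public bool SameValueAs(Foo other)
            {
                return _int == other._int;
            }
        }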

  • Silverlight - how to render image from non-visible data bound user control?

    - by MagicMax
    Hello! I have the following situation: I'd like to build a timeline control. I have a UserControl with an ItemsControl on it, where every row represents some person. That ItemsControl contains another ItemsControl in its ItemsControl.ItemTemplate, which shows e.g. the events for that person, arranged by event date. So it looks like a kind of grid with dates as column headers and e.g. people as row headers:

        ........................|.2010.01.01.....2010.01.02.....2010.01.03
        Adam Smith....|......[some event#1].....[some event#2]......
        John Dow.......|...[some event#3].....[some event#4].........

    I can have a lot of persons (ItemsControl #1: 100-200 items) and a lot of events occurring on a given day (1-10-30 events per person in one day). The problem is that when the user scrolls ItemsControl #1/#2, it is far too slow, because a lot of elements have to be rendered at one time (I have e.g. a few text boxes and other elements in the description of each particular event).

    Question #1: How can I improve this? Maybe somebody knows a better way to build such a user control? I have to mention that I'm using a custom virtual panel, based on a custom virtual panel implementation found somewhere on the internet...

    Question #2: I'd like to make an image with the help of WriteableBitmap, render the data-bound control into the image, and show the image instead of the many elements. The problem is that I'm trying to render an invisible data-bound control (created in code-behind), and it has ActualWidth/Height equal to zero (so nothing is rendered). How can I solve this? Thank you very much for your help!
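
    Regarding question #2: a control created in code-behind reports ActualWidth/ActualHeight of zero until it has been through a layout pass, so one approach is to force that pass manually before rendering. A hedged sketch for Silverlight 3, in which MyRowControl, the data item and the row size are all hypothetical:

        // Force a layout pass on a control that is not in the visual tree,
        // then snapshot it into a WriteableBitmap.
        MyRowControl row = new MyRowControl();  // hypothetical data-bound control
        row.DataContext = person;               // hypothetical data item

        Size size = new Size(800, 50);          // assumed pixel size of one row
        row.Measure(size);
        row.Arrange(new Rect(0, 0, size.Width, size.Height));
        row.UpdateLayout();

        WriteableBitmap bitmap = new WriteableBitmap((int)size.Width, (int)size.Height);
        bitmap.Render(row, null);               // null transform = identity
        bitmap.Invalidate();                    // flush the rendered visual into the pixel buffer

        rowImage.Source = bitmap;               // "rowImage" is a hypothetical Image element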

  • Why doesn't it work to write this NSMutableArray to a plist?

    - by Emil
    Hey, I am trying to write an NSMutableArray to a plist. The compiler does not show any errors, but it does not write to the plist anyway. I have tried this on a real device too, not just the Simulator. Basically, what this code does is: when you click the accessoryView of a UITableViewCell, it gets the indexPath pressed, edits an NSMutableArray, and tries to write that NSMutableArray to a plist. It then reloads the arrays mentioned (from multiple plists) and reloads the data in a UITableView from the arrays. Code:

        NSIndexPath *indexPath = [table indexPathForRowAtPoint:
            [[[event touchesForView:sender] anyObject] locationInView:table]];

        [arrayFav removeObjectAtIndex:[arrayFav indexOfObject:
            [NSNumber numberWithInt:[[arraySub objectAtIndex:indexPath.row] intValue]]]];

        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
            NSUserDomainMask, YES) objectAtIndex:0];
        NSString *plistPath = [rootPath stringByAppendingPathComponent:@"arrayFav.plist"];
        NSLog(@"%@ - %@", rootPath, plistPath);
        [arrayFav writeToFile:plistPath atomically:YES];

        // Reloads data into the arrays
        [self loadDataFromPlists];

        // Reloads data in tableView from arrays
        [tableFarts reloadData];

  • SFINAE failing with enum template parameter

    - by zeroes00
    Can someone explain the following behaviour? (I'm using Visual Studio 2010.)

    Header:

        #pragma once
        #include <boost/utility/enable_if.hpp>

        using boost::enable_if_c;

        enum WeekDay { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY };

        template<WeekDay DAY>
        typename enable_if_c< DAY == SUNDAY, bool >::type goToWork() { return false; }

        template<WeekDay DAY>
        typename enable_if_c< DAY != SUNDAY, bool >::type goToWork() { return true; }

    Source:

        bool b = goToWork<MONDAY>();

    The compiler gives

        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY!=6,bool>::type goToWork(void)'
        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY==6,bool>::type goToWork(void)'

    But if I change the function template parameter from the enum type WeekDay to int, it compiles fine:

        template<int DAY>
        typename enable_if_c< DAY == SUNDAY, bool >::type goToWork() { return false; }

        template<int DAY>
        typename enable_if_c< DAY != SUNDAY, bool >::type goToWork() { return true; }

    Also, normal function template specialization works fine, no surprises there:

        template<WeekDay DAY> bool goToWork() { return true; }
        template<> bool goToWork<SUNDAY>() { return false; }

    To make things even weirder: if I change the source file to use any other WeekDay than MONDAY or TUESDAY, i.e.

        bool b = goToWork<THURSDAY>();

    the error changes to this:

        error C2440: 'specialization' : cannot convert from 'int' to 'const WeekDay'
        Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)

  • Silverlight - Get the ItemsControl of a DataTemplate

    - by user208662
    Hello, I have a Silverlight application that is using a DataGrid. Inside that DataGrid I have a DataTemplate that is defined like the following:

        <Grid x:Name="myGrid" Tag="{Binding}" Loaded="myGrid_Loaded">
            <ItemsControl ItemsSource="{Binding MyItems}" Tag="{Binding}">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <StackPanel Orientation="Horizontal">
                            <StackPanel Orientation="Horizontal" Width="138">
                                <TextBlock Text="{Binding Type}" />
                                <TextBox x:Name="myTextBox" TextChanged="myTextBox_TextChanged" />
                            </StackPanel>
                        </StackPanel>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </Grid>

    When a user enters text into the TextBox, the myTextBox_TextChanged event fires. When that event fires, I would like to get the ItemsControl element that is the container for this TextBox. How do I get that ItemsControl from my event handler?

    Please note: because the ItemsControl is in the DataTemplate of the DataGrid, I don't believe I can just add an x:Name and reference it from my code-behind. Or is there a way to do that? Thank you!
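
    One approach that should work here, since the handler already receives the TextBox: walk up the visual tree until an ItemsControl appears. A minimal sketch (assumes using System.Windows, System.Windows.Controls and System.Windows.Media):

        // Walk up from the TextBox to the nearest enclosing ItemsControl.
        private void myTextBox_TextChanged(object sender, TextChangedEventArgs e)
        {
            DependencyObject current = (DependencyObject)sender;
            while (current != null && !(current is ItemsControl))
                current = VisualTreeHelper.GetParent(current);

            ItemsControl owner = (ItemsControl)current;  // null if no ancestor was found
            if (owner != null)
            {
                // "owner" is the ItemsControl whose ItemTemplate contains this TextBox.
            }
        }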
