Search Results

Search found 3162 results on 127 pages for 'compiled'.

  • Update Web Reference in Visual Studio

    - by NeilD
    Hi, I have inherited a web site project that makes use of a number of WCF web services hosted on a BizTalk server. We have two environments that I need to deploy this project to, with different URLs for the different BizTalk servers: in the Staging environment I need to point the services at xx.xx.xx.101, and in the Live environment I need to point them at xx.xx.xx.102, or whatever. Currently we've got all of the URLs stored in keys in the web.config file so that we can change them dynamically... Unfortunately this isn't working! If I change the URL in the web.config to something other than what the project was compiled with, I get an error when calling the service:

        Server did not recognize the value of HTTP Header SOAPAction: xx.xx.xx.101\ServiceName\MethodName

    I'm told that the only way they've known to deploy this is to update the web.config URLs, change all of the web references in Visual Studio to match, click "Update Web Reference" for each reference in Visual Studio, and then compile. It's driving me mad! I've written a pre-build NAnt script to go through and replace all instances of the URL found anywhere in the project directory, and even that isn't making any difference. There must be something else being pulled down from the service when I click "Update Web Reference", but I'm new to working with web services, so I'm not sure what. Does anyone have any ideas? Is there a way to do this programmatically? Thanks.

  • Capturing wildcards in Java generics

    - by Rollerball
    From this Oracle Java tutorial: The WildcardError example produces a capture error when compiled:

        import java.util.List;

        public class WildcardError {
            void foo(List<?> i) {
                i.set(0, i.get(0));
            }
        }

    After this error demonstration, they fix the problem by using a helper method:

        public class WildcardFixed {
            void foo(List<?> i) {
                fooHelper(i);
            }

            // Helper method created so that the wildcard can be captured
            // through type inference.
            private <T> void fooHelper(List<T> l) {
                l.set(0, l.get(0));
            }
        }

    First, they say that the list input parameter (i) is seen as an Object: "In this example, the compiler processes the i input parameter as being of type Object." Why then does i.get(0) not return an Object, if it was passed in as such? Furthermore, what is the point of using <?> when you then have to use a helper method with <T>? Would it not be better to use <T> directly, since it can be inferred?
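
    The compiler's complaint makes more sense once you see that each appearance of ? is a distinct unknown type: i.get(0) returns "capture of ?", which is assignable to an Object variable, but set expects exactly that same capture, and Object is not it. The helper gives the capture a name, T, so both calls provably agree. A minimal runnable sketch of the idiom (names are illustrative, not from the tutorial):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class CaptureDemo {
            // Public API stays wildcard-based, so callers can pass a List of anything.
            static void swapFirstTwo(List<?> list) {
                swapFirstTwoHelper(list); // T is inferred as the capture of ?
            }

            // The helper names the unknown element type T, so the compiler can
            // see that get() and set() refer to the same type.
            private static <T> void swapFirstTwoHelper(List<T> list) {
                T tmp = list.get(0);      // tmp is a T, not an Object
                list.set(0, list.get(1));
                list.set(1, tmp);
            }

            public static void main(String[] args) {
                List<String> words = new ArrayList<String>(Arrays.asList("a", "b"));
                swapFirstTwo(words);
                System.out.println(words); // [b, a]
            }
        }

    As for why the public method is not simply declared with <T>: it can be, and it behaves the same; the wildcard version merely keeps the type parameter out of the API surface.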

  • Why does VS2005 skip execution of lines when debugging managed C++ without optimizations?

    - by Sakin
    I ran into a rather odd behavior that I don't even know how to start describing. I wrote a piece of managed C++ code that makes calls to native methods. A (very) simplified version of the code would look like this (I know it looks like a fully native function; just assume there is managed stuff being done all over the place):

        int somefunction(ptrHolder x) {
            // the accessptr method returns a native pointer
            if (x.accessptr() != nullptr) // I tried this with nullptr, NULL, 0
            {
                try {
                    x->doSomeNativeVeryImportantStuff(); // or whatever, doesn't matter
                }
                catch (SomeCustomExceptionClass &) {
                    return 0;
                }
            }
            SomeOtherNativeClass::doStaticMagic();
            return 1;
        }

    I compiled this code without optimizations using the /clr flag (VS.NET 2005, SP2). When running it in the debugger I get to the if statement; since the pointer is actually null, I don't enter the if, but surprisingly, the cursor jumps directly to the return 1 statement, ignoring the doStaticMagic() call completely! When looking at the assembly code, I see that it really jumps directly to that line. If I force the debugger to enter the if block, I also jump to the return 1 statement after I press F10. Any ideas why this is happening? Thanks, Ariel

  • Strange Access Denied warning when running the simplest C++ program.

    - by DaveJohnston
    I am just starting to learn C++ (coming from a Java background) and I have come across something that I can't explain. I am working through the C++ Primer book and doing the exercises. Every time I get to a new exercise I create a new .cpp file and set it up with the main method (and any includes I think I will need), e.g.:

        #include <list>
        #include <vector>

        int main(int argc, char **args)
        {
        }

    Just to make sure, I go to the command prompt and compile and run:

        g++ whatever.cpp
        a.exe

    Normally this works just fine and I start working on the exercise, but I just did it and got a strange error. It compiles fine, but when I run it it says Access Denied, and AVG pops up telling me that a threat has been detected: 'Trojan Horse Generic 17.CKZT'. I tried compiling again using the Microsoft compiler (cl.exe) and it runs fine. So I went back, added #include <iostream>, compiled using g++, and ran. This time it worked fine. So can anyone tell me why AVG would report an empty main method as a trojan horse, but not if the iostream header is included?

  • What is wrong with this Fortran '77 snippet?

    - by notJim
    I've been tasked with maintaining some legacy Fortran code, and I'm having trouble getting it to compile with gfortran. I've written a fair amount of Fortran 95, but this is my first experience with Fortran 77. This snippet of code is the problematic one:

              CHARACTER*22 IFILE, OFILE
              IFILE='TEST.IN'
              OFILE='TEST.OUT'
              OPEN(5,FILE=IFILE,STATUS='NEW')
              OPEN(6,FILE=OFILE,STATUS='NEW')
              common/pabcde/nfghi

    When I compile with gfortran file.FOR, all lines starting with the common statement are errors (e.g. "Error: Unexpected COMMON statement at (1)" for each following line, until it hits the 25-error limit). I compiled with -Wall -pedantic, but fixing the warnings did not fix this problem. The crazy thing is that if I comment out all four lines starting with IFILE='TEST.IN', the program compiles and works as expected, but I must comment out all of them. Leaving any of them uncommented gives me the same errors starting with the common statement. If I comment out the common statement, I get the same errors, just starting on the following line. I am on OS X Leopard (not Snow Leopard) using gfortran. I've used this very system with gfortran extensively to write Fortran 95 programs, so in theory the compiler itself is sane. What the hell is going on with this code?

  • Problem using a COM interface as parameter

    - by Cesar
    I have the following problem: I have two projects, Project1 and Project2. In Project1 I have an interface IMyInterface1. In Project2 I have an interface IMyInterface2 with a method that receives a pointer to IMyInterface1. When I use import "Project1.idl"; in my Project2.idl, a #include "Project1.h" appears in Project2___i.h. But this file does not even exist! What is the proper way to import an interface defined in another library into an IDL file? I tried to replace the #include "Project1.h" with #include "Project1_i.h" or #include "Project1_i.c", but that gave me a lot of errors. I also tried to use importlib("Project1.tlb") and define my interface IMyInterface2 within the library definition, but when I compile the Project2PS project, an error is raised (something like "dlldata.c is not generated if no interface is defined"). I tried to create a dummy Project1.h, but then when Project2___i.h is compiled, the compiler cannot find IMyInterface1. And if I include Project1___i.h I get a lot of errors again! Apparently it is a simple issue, but I don't know how to solve it. I'm stuck with that! By the way, I'm using VS2008 SP1. Thanks in advance.

  • Boost::Thread linking error on OSX?

    - by gct
    So I'm going nuts trying to figure this one out. Here's my basic setup: I'm compiling a shared library with a bunch of core functionality that uses a lot of Boost stuff. We'll call this library libpf_core.so. It's linked with the Boost static libraries, specifically the python, system, filesystem, thread, and program_options libraries. This all goes swimmingly. Now, I have a little test program called test_socketio which is compiled into a shared library (it's loaded as a plugin at runtime). It uses some Boost stuff like boost::bind and boost::thread, and it's linked against libpf_core.so (which has the Boost libraries included, remember). When I go to compile test_socketio, though, out of all my plugins it gives me a linking error:

        [ Building test_socketio ]
        g++ -c -pg -g -O0 -I/usr/local/include -I../include test_socketio.cc -o test_socketio.o
        g++ -shared test_socketio.o -lpy_core -o test_socketio.so
        Undefined symbols:
          "boost::lock_error::lock_error()", referenced from:
              boost::unique_lock<boost::mutex>::lock() in test_socketio.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    And I'm going crazy trying to figure out why this is. I've tried explicitly linking boost::thread into the plugin, to no avail, and tried ensuring that I'm using the Boost headers associated with the libraries linked into libpf_core.so, in case there was a conflict there. Is there something OSX-specific regarding Boost that I'm missing? In my searching on Google I've seen a number of other people get this error, but no one seems to have come up with a satisfactory solution.

  • Using resources with the same name in Xcode

    - by Roberto
    Is there a way to add multiple resources with the same name to an Xcode project and have one of them take priority over the others? Example: I added two files, both called icon.png, to an Xcode project. They are in different folders in the file system (Folder1/icon.png and Folder2/icon.png) and in different groups in Xcode. Is there a way to tell Xcode to have Folder2/icon.png take priority over Folder1/icon.png? And if only one icon.png exists, then use that one. Thanks.

    EDIT (2010-12-23): You can have multiple files with the same name in an Xcode project even if they are not in separate folder references, as long as they are in separate groups. Once compiled, the app bundle (which will be flat, with no folders in it) will only have one copy of the file (icon.png). How do you pick which copy of the file to use? I was told that you can do this for BlackBerry. It works something like this: the compiler goes down the list of files in the project and adds them to the app bundle; if it sees a duplicate, it overwrites it (or not), so the files at the bottom (or at the top) take precedence and end up in the final bundle.

  • How can I copy files in the middle of a build in Team System?

    - by Dana
    I have two solutions that I want to include in a build. Solution two requires the DLLs from solution one to build successfully. Solution two has a Binaries folder where the DLLs from solution one need to be copied before building solution two. I've been trying an AfterBuild target, hoping that it would copy the items after the first SolutionToBuild, but it doesn't fire then. I'm guessing that it would probably fire after both solutions have compiled, but that's not what I want.

        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Main/Framework.sln">
          <Targets>AfterCompileFramework</Targets>
          <Properties></Properties>
        </SolutionToBuild>
        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../../Dashboard/Main/Dashboard.sln">
          <Targets></Targets>
          <Properties></Properties>
        </SolutionToBuild>

        <ItemGroup>
          <FrameworkBinaries Include="$(DropLocation)\$(BuildNumber)\Release\Framework.*.dll"/>
        </ItemGroup>
        <Message Text="FrameworkBinaries: @(FrameworkBinaries)" Importance="high"/>
        <Copy SourceFiles="@(FrameworkBinaries)" DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>
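
    If this is TFS 2008 Team Build, one hook that fires per solution rather than after the whole compile is the AfterCompileSolution extensibility target from Microsoft.TeamFoundation.Build.targets. A hedged sketch for TFSBuild.proj (the wildcard path and the assumption that $(OutDir) already points at this build's binaries folder are mine, not from the question):

        <!-- Runs once after each solution in SolutionToBuild finishes compiling. -->
        <Target Name="AfterCompileSolution">
          <!-- Collect whatever Framework DLLs exist so far: empty during the
               Dashboard pass, populated right after the Framework pass. -->
          <CreateItem Include="$(OutDir)Framework.*.dll">
            <Output TaskParameter="Include" ItemName="FrameworkBinaries"/>
          </CreateItem>
          <Copy SourceFiles="@(FrameworkBinaries)"
                DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"
                Condition=" '@(FrameworkBinaries)' != '' "/>
        </Target>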

  • Programmatically change the icon of the executable

    - by Dennis Delimarsky
    I am developing an application called WeatherBar. Its main functionality is based on its interaction with the Windows 7 taskbar — it changes the icon depending on the weather conditions in a specific location. The icons I am using in the application are all stored in a compiled native resource file (.res) — I am using it instead of the embedded resource manifest for icons only. By default, I modify the Icon property of the main form to change the icons accordingly and it works fine, as long as the icon is not pinned to the taskbar. When it gets pinned, the icon in the taskbar automatically switches to the default one for the executable (with index 0 in the resource file). After doing a little bit of research, I figured that a way to change the icon would be changing the shortcut icon (as all pinned applications are actually shortcuts stored in the user folder). But it didn't work. I assume that I need to change the icon for the executable, and therefore use UpdateResource, but I am not entirely sure about this. My executable is not digitally signed, so it shouldn't be an issue modifying it. What would be the way to solve this issue?
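
    If the goal is to reflect weather state on the pinned taskbar button, Windows 7 overlay icons are worth considering: they badge the button without touching the executable's resources or the shortcut. A minimal native sketch of that route (the window handle and icon loading are assumed to exist elsewhere, and COM must already be initialized on the calling thread); managed apps typically reach the same API through the Windows API Code Pack rather than calling it directly:

        #include <windows.h>
        #include <shobjidl.h>

        // Shows 'hIcon' as a small overlay on the taskbar button of 'hwnd'.
        // Pass NULL as hIcon to remove the overlay again.
        HRESULT SetTaskbarOverlay(HWND hwnd, HICON hIcon, LPCWSTR description)
        {
            ITaskbarList3 *taskbar = NULL;
            HRESULT hr = CoCreateInstance(CLSID_TaskbarList, NULL,
                                          CLSCTX_INPROC_SERVER,
                                          IID_PPV_ARGS(&taskbar));
            if (SUCCEEDED(hr))
            {
                hr = taskbar->HrInit();
                if (SUCCEEDED(hr))
                    hr = taskbar->SetOverlayIcon(hwnd, hIcon, description);
                taskbar->Release();
            }
            return hr;
        }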

  • Visual C++ 2010, rvalue reference bug?

    - by Sergey Shandar
    Is it a bug in Visual C++ 2010, or correct behaviour?

        template<class T> T f(T const &r)
        {
            return r;
        }

        template<class T> T f(T &&r)
        {
            static_assert(false, "no way");
            return r;
        }

        int main()
        {
            int y = 4;
            f(y);
        }

    I thought the function f(T &&) should never be called, but it's called with T = int &. The output:

        main.cpp(10): error C2338: no way
        main.cpp(17) : see reference to function template instantiation 'T f<int&>(T)' being compiled
                with [ T=int & ]

    Update 1: Do you know any C++0x compiler to use as a reference? I've tried the Comeau online test-drive but could not compile r-value references there.

    Update 2: A workaround (using SFINAE):

        #include <boost/utility/enable_if.hpp>
        #include <boost/type_traits/is_reference.hpp>

        template<class T> T f(T &r)
        {
            return r;
        }

        template<class T>
        typename ::boost::disable_if< ::boost::is_reference<T>, T>::type f(T &&r)
        {
            static_assert(false, "no way");
            return r;
        }

        int main()
        {
            int y = 4;
            f(y);
            // f(5); // generates "no way" error, as expected.
        }
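
    For what it's worth, the deduction itself is the documented C++0x rule rather than a bug: for an lvalue argument, T in f(T &&) is deduced as int &, int & && collapses to int &, and that overload beats f(T const &) because it doesn't have to add const. The Boost workaround above can also be written with just <type_traits> on compilers with C++11 library support; a minimal sketch:

        #include <type_traits>

        template<class T>
        T f(T &r) { return r; }  // lvalues land here

        // Participates only when T deduces to a non-reference, i.e. a real rvalue.
        template<class T>
        typename std::enable_if<!std::is_reference<T>::value, T>::type
        f(T &&r) { return r; }

        int main()
        {
            int y = 4;
            f(y);  // first overload, T = int
            f(5);  // second overload, T = int
        }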

  • tcp/ip accept not returning, but client does

    - by paquetp
    Server: VxWorks 6.3, calling the usual socket, bind, listen, then:

        for (;;) {
            client = accept(sfd, NULL, NULL);
            // pass client to worker thread
        }

    Client: the .NET 2.0 TcpClient constructor that takes the string hostname and int port, like:

        TcpClient client = new TcpClient(server_ip, port);

    This works fine when the server is compiled and executed on Windows (native C++). Intermittently, the TcpClient constructor returns the instance without throwing any exception, but the accept call in VxWorks does not return with the client fd; tcpstatShow indicates no accept occurred. What could possibly make the TcpClient constructor (which calls Connect) return the instance, while the accept call on the server does not return? It seems to be related to what the system is doing in the background: the symptom is more likely when the server is busy persisting data to flash or an NFS share as the client attempts to connect, but it can happen when it isn't, too. I've tried adjusting the priority of the thread running accept. I've looked at the size of the queue in listen; there's enough. The total number of file descriptors available should be enough (haven't validated this yet though; first thing in the morning).
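
    One detail that makes this symptom possible at all: the TCP three-way handshake is completed by the server's network stack, and the new connection is queued against the listening socket before accept() is ever called; accept() merely dequeues an already-established connection. So a busy server task can leave clients "connected" while never reaching accept(). A minimal sketch of the server-side setup (plain BSD sockets, error handling omitted):

        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>

        int make_listener(unsigned short port)
        {
            int sfd = socket(AF_INET, SOCK_STREAM, 0);

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            bind(sfd, (struct sockaddr *)&addr, sizeof addr);

            /* Completed connections wait in this queue until accept() runs;
               a backlog of 5 lets five clients finish connecting while the
               server task is busy elsewhere. */
            listen(sfd, 5);
            return sfd;
        }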

  • WinUSB failing on non-development computers

    - by Giawa
    Good afternoon, WinUSB is working well on the development computer that I am using (Win XP SP3). I am able to download new firmware to the Cypress FX2, and then connect to the new USB device once it 'renumerates'. However, when I try the same code with the WinUSB driver on other computers (Win XP SP3, Win7 x64), both return the error "A device attached to the system is not functioning." when trying to use CreateFile to get a handle to the USB device. The devicePath was found successfully, so I'm not sure why it cannot connect to the device. Furthermore, the device manager states that my device is working properly. I'm curious whether I'm missing something when compiling the code. I would guess that my development computer has something installed on it that the other computers do not. Or perhaps it's a power setting and the device is going to sleep (although I've fooled around with the Power Options on each computer to no avail). Does anyone have any ideas? I've compiled under Visual Studio 2008, and have installed the Microsoft C++ 2008 Redistributable Package on the computers that I've tested on. Thanks, Giawa

  • Grails target folder doesn't appear to be on application's classpath

    - by Kobi
    I have a Grails project with some additional Java source files under the src/java folder. When compiling/running the server, the files under that directory get compiled into the project's target folder, together with all other Groovy/Grails classes. So far so good. However, when I try to load one of the Java source files (from src/java) using reflection (Class.forName, to be exact), a ClassNotFoundException gets thrown. What is peculiar about the whole thing is that if I copy that same class from the project's target/ folder into the following location (on Windows):

        <myuser>\.grails\1.2.2\projects\<MyProjectName>\resources

    then no exception gets thrown and the corresponding class is loaded fine. This seems to indicate that the integrated Grails server only looks at classes within the user's dynamically generated project folder, and not the actual project's target folder. Is my understanding correct? Is there a way to force Grails to copy certain class files from the target/ folder into the resources folder within the user dir? Is there a different way to load the class files using reflection? I was looking at using the grailsApplication's classloader, but that didn't seem to work either. Any tips would be more than welcome. Thanks a lot in advance!
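
    One more thing worth ruling out before copying class files around: a bare Class.forName(name) resolves against the classloader of the calling class, which in a running Grails dev server is not necessarily the reloading classloader that sees target/classes. A sketch of loading through the thread context classloader instead (the class name here is hypothetical, and whether this sees the target folder depends on the Grails version):

        // Class.forName(name) alone uses the caller's defining classloader;
        // the context classloader is usually the one Grails set up for the app.
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        Class<?> clazz = Class.forName("com.example.MyJavaClass", true, cl);
        Object instance = clazz.newInstance();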

  • Can I share code & resources between Android projects without using a library?

    - by Tom
    The standard advice for sharing code & resources between Android projects is to use a library. Personally I find this works poorly if (a) the shared code changes a lot, or (b) your computer isn't fast enough. I also don't want to get into deploying multiple APKs, which seems to be necessary when I use dependent projects (i.e. Java Build Path, Projects tab). On the other hand, sharing a folder of source code by using the Eclipse linked source feature works great (Java Build Path, Source tab, Link Source button), but for these two issues: 1) I can't use the same technique to share resources. I can create the link to the resources' parent folder, but then things get wonky and the shared resources don't get compiled (I'm using ADT 21). 2) So then I settle for copying the shared resources into each project, but this doesn't work either: the shared code can't import the copy of its resources, because it doesn't know the package name of the project that uses it. The solution I've been using is to access the resources dynamically, but that has become cumbersome as the number of resources grows. So, I need a solution to either (1) or (2), or I'll have to go back to a library project. (Or maybe there is another option I haven't thought of?)

  • How do I alias the scala setter method 'myvar_$eq(myval)' to something more pleasing when in java?

    - by feydr
    I've been converting some code from Java to Scala lately, trying to teach myself the language. Suppose we have this Scala class:

        class Person() {
          var name: String = "joebob"
        }

    Now I want to access it from Java, so I can't use dot notation like I would in Scala. So I can get my var's contents by issuing:

        Person person = new Person();
        System.out.println(person.name());

    and set it via:

        Person person = new Person();
        person.name_$eq("sallysue");
        System.out.println(person.name());

    This holds true because our Person class looks like this in javap:

        Compiled from "Person.scala"
        public class Person extends java.lang.Object implements scala.ScalaObject{
            public Person();
            public void name_$eq(java.lang.String);
            public java.lang.String name();
            public int $tag() throws java.rmi.RemoteException;
        }

    Yes, I could write my own getters/setters, but I hate filling classes up with that, and it doesn't make a ton of sense considering I already have them -- I just want to alias the _$eq method better. (This actually gets worse when you are dealing with stuff like ANTLR, because then you have to escape it and it ends up looking like person.name_\$eq("newname");.) Note: I'd much rather put up with this than fill my classes with more setter methods. So what would you do in this situation?
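
    One way to get Java-friendly names without hand-writing accessors (a sketch; in Scala 2.7/2.8 the annotation lives at scala.reflect.BeanProperty, in later versions at scala.beans.BeanProperty) is to have the compiler generate bean-style methods next to the Scala ones:

        import scala.reflect.BeanProperty

        class Person {
          // Generates getName()/setName(String) alongside name()/name_$eq().
          @BeanProperty var name: String = "joebob"
        }

    From Java the awkward call then becomes plain person.setName("sallysue"), with no escaping needed from ANTLR-generated code either.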

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder, yes, silly me. After a while, I managed to get all the .jar files where they belong, and it comes to a whopping 12.5 MB at that, and there are more than 30 of them! This concerns me, though it probably and hopefully shouldn't. How does Java operate in terms of these JAR files? They do take up quite a bit of hard drive space, considering that it's compressed, compiled code, so it seems they could quickly fill a lot of RAM, and in an instant. My questions are:

    1. Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used? Do .jars get cached somehow for optimized application performance?

    2. When a single .jar is loaded, I understand that the thing sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance running), unlike PHP where objects are created on the fly with each request. Is this assumption correct?

    3. When using Spring, I'm thinking: I had to include all those fiddly .jars, so wouldn't I be better off just using native Java, with at least an ORM solution like Hibernate? So far, Spring has just taken extra time to configure, extra hard drive space, and extra memory and CPU consumption, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There still has to come an ORM, a unit testing framework, and bits and pieces here and there. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?
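
    On the first question: the JVM does not read whole JARs into the heap. The classloader keeps the archive open and reads class files out of it individually, and only when a class is first referenced; unused classes cost disk space, not memory. A small experiment that makes the laziness visible:

        public class LazyDemo {
            static class Heavy {
                // Runs only when Heavy is first used, not at JVM start and not
                // merely because the containing JAR is on the classpath.
                static { System.out.println("Heavy initialized"); }
                static void touch() {}
            }

            public static void main(String[] args) {
                System.out.println("main started"); // prints first
                Heavy.touch();                      // now "Heavy initialized" prints
            }
        }

    And on the second: yes, once loaded, classes stay with their classloader for the life of the web application, so the per-request cost PHP pays simply doesn't exist here; the 12.5 MB of JARs is mostly a disk-space figure, not a resident-memory one.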

  • Ftp throws WebException only in .NET 4.0

    - by Trakhan
    I have the following C# code. It runs just fine when compiled against .NET Framework 2.0 or 3.5 (I did not test it against 3.0, but it will most likely work too). The problem is that it fails when built against .NET Framework 4.0.

        FtpWebRequest Ftp = (FtpWebRequest)FtpWebRequest.Create(Url_ + '/' + e.Name);
        Ftp.Method = WebRequestMethods.Ftp.UploadFile;
        Ftp.Credentials = new NetworkCredential(Login_, Password_);
        Ftp.UseBinary = true;
        Ftp.KeepAlive = false;
        Ftp.UsePassive = true;

        Stream S = Ftp.GetRequestStream();

        byte[] Content = null;
        bool Continue = false;
        do
        {
            try
            {
                Continue = false;
                Content = File.ReadAllBytes(e.FullPath);
            }
            catch (Exception)
            {
                Continue = true;
                System.Threading.Thread.Sleep(500);
            }
        } while (Continue);

        S.Write(Content, 0, Content.Length);
        S.Close();

        FtpWebResponse Resp = (FtpWebResponse)Ftp.GetResponse();
        if (Resp.StatusCode != FtpStatusCode.CommandOK)
            Console.WriteLine(Resp.StatusDescription);

    The problem is in the call Stream S = Ftp.GetRequestStream();, which throws an instance of WebException with the message "The remote server returned an error: (500) Syntax error, command unrecognized". Does anybody know why? P.S. I am communicating with a virtual FTP server in ECM Alfresco.

  • How do I set up the python/c library correctly?

    - by Bartvbl
    I have been trying to get the Python/C library to like my MinGW compiler. The Python online documentation (http://docs.python.org/c-api/intro.html#include-files) only mentions that I need to import the Python.h file. I grabbed it from the installation directory (as is required on the Windows platform), and tested it by compiling a script containing only #include "Python.h". This compiled fine. Next, I tried out the snippet of code shown a bit lower on the Python/C API page:

        PyObject *t;

        t = PyTuple_New(3);
        PyTuple_SetItem(t, 0, PyInt_FromLong(1L));
        PyTuple_SetItem(t, 1, PyInt_FromLong(2L));
        PyTuple_SetItem(t, 2, PyString_FromString("three"));

    For some reason, the compiler would compile the code if I removed the last four lines (so that only the PyObject variable definition was left), yet calling the actual constructor of the tuple returned errors. I am probably missing something completely obvious here, given I am very new to C, but does anyone know what it is?
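
    Two things the snippet needs before it can build as a program: the statements must sit inside a function, and the interpreter has to be initialized before any Py* call; the result must also be linked against the Python library. A minimal complete sketch against the Python 2 C API (matching the PyInt_/PyString_ names above; the install paths in the build line are illustrative):

        #include <Python.h>

        int main(void)
        {
            Py_Initialize();              /* required before any other API call */

            PyObject *t = PyTuple_New(3);
            PyTuple_SetItem(t, 0, PyInt_FromLong(1L));
            PyTuple_SetItem(t, 1, PyInt_FromLong(2L));
            PyTuple_SetItem(t, 2, PyString_FromString("three"));

            PyObject_Print(t, stdout, 0); /* prints (1, 2, 'three') */
            Py_DECREF(t);

            Py_Finalize();
            return 0;
        }

        /* e.g.: gcc embed.c -IC:/Python26/include -LC:/Python26/libs -lpython26 */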

  • How to get from JRuby a correctly typed ruby implementation of a Java interface?

    - by Guss
    I'm trying to use JRuby (through the JSR-223 interface included in JRuby 1.5) from a Java application to load a Ruby implementation of a Java interface. My sample implementation looks like this. The interface:

        package some.package;

        import java.util.List;

        public interface ScriptDemoIf {
            int fibonacci(int d);
            List<String> filterLength(List<String> source, int maxlen);
        }

    The Ruby implementation:

        require 'java'
        include Java

        class ScriptDemo
          java_implements some.package.ScriptDemoIf

          java_signature 'int fibonacci(int d)'
          def fibonacci(d)
            d < 2 ? d : fibonacci(d-1) + fibonacci(d-2)
          end

          java_signature 'List<String> filterLength(List<String> source, int maxlen)'
          def filterLength(source, maxlen)
            source.find_all { |str| str.length <= maxlen }
          end
        end

    The class loader:

        public ScriptDemoIf load(String filename) throws ScriptException {
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("jruby");
            try {
                engine.eval(new FileReader(filename));
            } catch (FileNotFoundException e) {
                throw new ScriptException("Failed to load " + filename);
            }
            return (ScriptDemoIf) engine.eval("ScriptDemo.new");
        }

    (Obviously the loader is a bit more generic in real life; it doesn't assume that the implementation class name is "ScriptDemo". This is just for simplicity.) Problem: I get a ClassCastException in the last line of the loader. engine.eval() returns a RubyObject, which doesn't cast down nicely to my interface. From stuff I read all over the web, I was under the impression that the whole point of using java_implements in the Ruby section was for the interface implementation to be compiled in properly. What am I doing wrong?
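
    One route that avoids the raw cast entirely (a sketch; it relies on the JRuby JSR-223 engine implementing javax.script.Invocable, which it does): ask the engine to manufacture a typed proxy around the Ruby object.

        import javax.script.Invocable;
        import javax.script.ScriptEngine;
        import javax.script.ScriptException;

        public ScriptDemoIf typedLoad(ScriptEngine engine) throws ScriptException {
            // After engine.eval(new FileReader(filename)) has defined the class:
            Object rubyObj = engine.eval("ScriptDemo.new");
            // Wraps the RubyObject in a proxy that implements the interface.
            return ((Invocable) engine).getInterface(rubyObj, ScriptDemoIf.class);
        }

    Calls such as typedLoad(engine).fibonacci(10) then dispatch to the Ruby methods through the proxy.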

  • Small OpenMP program freezes sometimes (gcc, c, linux)

    - by osgx
    Hello, I just wrote a small OMP test, and it does not work correctly all the time:

        #include <omp.h>

        int main()
        {
            int i, j = 0;
        #pragma omp parallel
            for (i = 0; i < 1000; i++)
            {
        #pragma omp barrier
                j += j ^ i;
            }
            return j;
        }

    The use of j for writing from all threads is incorrect in this example, BUT that should only make the value of j nondeterministic; instead I get a freeze. Compiled with:

        gcc-4.3.1 -fopenmp a.c -o gcc -static

    Run on a 4-core x86 Core2 Linux server:

        $ ./gcc

    and got a freeze (sometimes; roughly one freeze per 4-5 fast runs). strace:

        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] <... futex resumed> ) = 0
        [pid 13119] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13120] futex(0x80cd798, FUTEX_WAIT, 1, NULL <unfinished ...>
        [pid 13109] <... futex resumed> ) = 0
        [pid 13109] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13109] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13119] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13119] futex(0x80d3014, FUTEX_WAKE, 1) = 0
        [pid 13119] futex(0x80d3020, FUTEX_WAIT, 251, NULL <freeze>

    Why do I have a freeze (deadlock)?
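
    The freeze is consistent with the barrier rules: since the pragma is plain parallel (not parallel for), the loop is not divided among threads; each thread runs it while racing on the shared i, so threads can leave the loop after different numbers of iterations and hence encounter different numbers of barriers. A barrier that one thread never reaches deadlocks the rest. A race-free sketch of what the test presumably meant (the result still depends on how iterations are split, because the update reads j):

        #include <omp.h>

        int main(void)
        {
            int i, j = 0;

            /* Work-sharing loop: iterations are divided among the threads,
               i is implicitly private, and the reduction gives each thread
               a private j that is combined once at the end. */
        #pragma omp parallel for reduction(+:j)
            for (i = 0; i < 1000; i++)
                j += j ^ i;

            return j;
        }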

  • Guru of the Week 2: no match for operator==

    - by Adam
    From Guru of the Week 2. We have the function:

        string FindAddr(const list<Employee> l, string name)
        {
            for (list<Employee>::const_iterator i = l.begin(); i != l.end(); i++)
            {
                if (*i == name) // here will be a compilation error
                {
                    return (*i).addr;
                }
            }
            return "";
        }

    I added a dummy Employee class to that:

        class Employee {
            string n;
        public:
            string addr;
            Employee(string name) : n(name) {}
            Employee() {}
            string name() const { return n; }
            operator string() { return n; }
        };

    And got a compilation error:

        error: no match for 'operator==' in 'i.std::_List_iterator<_Tp>::operator* [with _Tp = Employee]() == name'

    It works only if I add operator== to Employee. But Herb Sutter wrote: "The Employee class isn't shown, but for this to work it must either have a conversion to string or a conversion ctor taking a string." And Employee has a conversion function and a conversion constructor as well. GCC version 4.4.3, compiled normally, g++ file.cpp without any flags. There should be an implicit conversion and it should work; why doesn't it?
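
    A plausible resolution of the apparent contradiction (hedged, since it depends on the library implementation): in libstdc++, std::string's operator== is a function template over basic_string, and template argument deduction never considers user-defined conversions, so the Employee-to-string conversion is not even tried. An ordinary (non-template) overload does take part in normal overload resolution, where the conversion applies; a minimal sketch:

        #include <string>

        // An ordinary function, unlike the std:: operator== function template,
        // so implicit conversions (Employee -> string) are considered for it.
        inline bool operator==(const std::string &a, const std::string &b)
        {
            return a.compare(b) == 0;
        }

    With this overload in scope, *i == name compiles via the conversion operator, which matches the behaviour the GotW-era code assumed.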

  • C++ Operator Ambiguity

    - by Scott
    Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiled on my desktop; however, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do. I have made my own vector class called Vector4, which looks something like this:

        class Vector4
        {
        private:
            GLfloat vector[4];
            ...
        };

    Then I have these operators, which are causing the problem:

        operator GLfloat* () { return vector; }
        operator const GLfloat* () const { return vector; }

        GLfloat& operator [] (const size_t i) { return vector[i]; }
        const GLfloat& operator [] (const size_t i) const { return vector[i]; }

    I have the conversion operator so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler:

        enum {x, y, z, w};

        Vector4 v(1.0, 2.0, 3.0, 4.0);
        glTranslatef(v[x], v[y], v[z]);

    Here are the candidates:

        candidate 1: const GLfloat& Vector4::operator[](size_t) const
        candidate 2: operator[](const GLfloat*, int) <built-in>

    Why would it try to convert my Vector4 to a GLfloat* first when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
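
    The built-in candidate exists only because of the implicit conversion: once Vector4 can convert to GLfloat*, the compiler must also consider pointer subscripting, and on a compiler where neither conversion sequence ranks strictly better (enum -> size_t for the member, versus Vector4 -> GLfloat* plus enum -> int for the built-in) the call is ambiguous. One way out that keeps glVertex3fv usable and needs no casts is to drop the implicit conversions in favor of a named accessor, so the built-in candidate disappears entirely; a hedged sketch (on a C++11 compiler, marking the conversion operators explicit achieves the same):

        class Vector4
        {
        public:
            // Named accessors replace 'operator GLfloat*': with no implicit
            // conversion there is no built-in operator[] candidate.
            GLfloat*       data()       { return vector; }
            const GLfloat* data() const { return vector; }

            GLfloat&       operator [] (size_t i)       { return vector[i]; }
            const GLfloat& operator [] (size_t i) const { return vector[i]; }

        private:
            GLfloat vector[4];
        };

        // usage: glVertex3fv(v.data()); glTranslatef(v[x], v[y], v[z]);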

  • Disabling Remote-Debugging Connection on Windows 2000

    - by Kim Leeper
    I have two machines, one running Win 2000 and one running Win XP, both with VC++ 6. I created an application on the Win XP machine (local) and successfully used the Win 2000 machine (remote) as the target for debugging; the code was on a shared drive on the Win 2000 machine. This setup worked well, just like in the movies! However, I now wish to use my Win 2000 machine as a development machine again, and I find I can not. When attempting to execute a natively compiled application on this machine, I get a dialog with the title "Remote Execution Path And Filename" asking me for same. When the dialog is cancelled, as the program attempting to execute is not remote, the program terminates without error. Extra info: on the Win XP machine, in VC++ 6, under Build menu > Debugger Remote Connection > Remote Connection Dialog, the "Connection:" list box has two entries, 'Local' and 'Network'; on the Win 2000 machine the 'Local' entry is not present, only the entry for 'Network'. How do I get back my 'Local' entry on what used to be my target machine?
