Search Results

Search found 16218 results on 649 pages for 'compiler errors'.

  • c++ possible errors

    - by lego69
    Hello, I've got the error "a1 was not declared in this scope". Can somebody please explain my mistake? My header:

        #ifndef QUIZ_H_
        #define QUIZ_H_
        #include "quiz.cpp"

        class A {
        private:
            int player;
        public:
            A(int initPlayer);
            ~A();
            void foo();
        };

        #endif /* QUIZ_H_ */

    The implementation of the class functions:

        #include "quiz.h"
        #include <iostream>
        using std::cout;
        using std::endl;

        A::A(int initPlayer = 0) {
            player = initPlayer;
        }

        A::~A() {
        }

        void A::foo() {
            cout << player;
        }

    My main:

        #include "quiz.h"

        int main() {
            quiz(7);
            return 0;
        }

    The function quiz:

        #include "quiz.h"

        void quiz(int i) {
            A a1(i);
            a1.foo();
        }
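
    For context, a hedged single-file sketch of what these split files amount to (my addition, not from the thread): headers normally declare and never #include a .cpp, and main() needs a declaration of quiz() in scope before calling it.

        // Hedged sketch: the translation units collapsed into one file to show the flow.
        #include <iostream>

        class A {
            int player;
        public:
            // The default argument belongs in the declaration, not the definition.
            A(int initPlayer = 0) : player(initPlayer) {}
            void foo() { std::cout << player << std::endl; }
        };

        void quiz(int i) {   // a declaration of quiz() must precede main()'s call
            A a1(i);
            a1.foo();
        }

        int main() {
            quiz(7);
            return 0;
        }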

  • error - inherited class field undeclared according to g++

    - by infoholic_anonymous
    I have code with the following logic. g++ gives me an error saying that I have not declared n in my iterator2. What could be wrong?

        template <typename T>
        class List {
            template <typename TT> class Node;
            Node<T> *head;
            /* (...) */

            template <bool D>
            class iterator1 {
            protected:
                Node<T> *n;
            public:
                iterator1( Node<T> *nn ) { n = nn; }
                /* (...) */
            };

            template <bool D>
            class iterator2 : public iterator1<D> {
            public:
                iterator2( Node<T> *nn ) : iterator1<D>( nn ) {}
                void fun( Node<T> *nn ) { n = nn; }
                /* (...) */
            };
        };

    EDIT: I attach the actual header file. iterator1 would be iterable_frame and iterator2 would be switchable_frame.

        #ifndef LST_H
        #define LST_H

        template <typename T>
        class List {
        public:
            template <typename TT> class Node;
        private:
            Node<T> *head;
        public:
            List() { head = new Node<T>; }
            ~List() { empty_list(); delete head; }
            List( const List &l );
            inline bool is_empty() const { return head->next[0] == head; }
            void empty_list();

            template <bool DIM>
            class iterable_frame {
            protected:
                Node<T> *head;
                Node<T> **caret;
            public:
                iterable_frame( const List &l ) { head = *(caret = &l.head); }
                iterable_frame( const iterable_frame &i ) { head = *(caret = i.caret); }
                ~iterable_frame() {}
                /* (...) - a few methods follow */
                template <bool _DIM> friend class supervised_frame;
            };

            template <bool DIM>
            class switchable_frame : public iterable_frame<DIM> {
                Node<T> *main_head;
            public:
                switchable_frame( const List& l ) : iterable_frame<DIM>(l) { main_head = head; }
                inline bool next_frame() {
                    caret = &head->next[!DIM];
                    head = *caret;
                    return head != main_head;
                }
            };

            template <bool DIM>
            class supervised_frame {
                iterable_frame<DIM> sentinels;
                iterable_frame<DIM> cells;
            public:
                supervised_frame( const List &l ) : sentinels(l), cells(l) {}
                ~supervised_frame() {}
                /* (...) - a few methods follow */
            };

            template <typename TT>
            class Node {
                unsigned index[2];
                TT num;
                Node<TT> *next[2];
            public:
                Node( unsigned x = 0, unsigned y = 0 ) { index[0]=x; index[1]=y; next[0] = this; next[1] = this; }
                Node( unsigned x, unsigned y, TT d ) { index[0]=x; index[1]=y; num=d; next[0] = this; next[1] = this; }
                Node( const Node &n ) { index[0] = n.index[0]; index[1] = n.index[1]; num = n.num; next[0] = next[1] = this; }
                ~Node() {}
                friend class List;
            };
        };

        #include "List.cpp"
        #endif

    The exact error log is the following:

        In file included from main.cpp:1:
        List.h: In member function ‘bool List<T>::switchable_frame<DIM>::next_frame()’:
        List.h:77: error: ‘caret’ was not declared in this scope
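
    The rule at play is worth illustrating; a minimal self-contained sketch (my addition, not from the thread): in a class template whose base class depends on a template parameter, unqualified names are not looked up in that base, so members inherited from iterator1<D> must be reached through this-> or a using-declaration.

        // Hedged sketch of the dependent-base lookup rule.
        template <typename T>
        struct Base { T *n = nullptr; };

        template <typename T>
        struct Derived : Base<T> {
            void fun(T *nn) {
                // n = nn;       // error: 'n' was not declared in this scope
                this->n = nn;    // OK: lookup is deferred to instantiation
            }
        };

        int main() {
            int x = 0;
            Derived<int> d;
            d.fun(&x);
            return 0;
        }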

  • ILMerge - Unresolved assembly reference not allowed: System.Core

    - by Steve Michelotti
    ILMerge is a utility which allows you to merge multiple .NET assemblies into a single binary for more convenient distribution. Recently we ran into problems when attempting to use ILMerge on a .NET 4 project. We received the error message:

        An exception occurred during merging:
        Unresolved assembly reference not allowed: System.Core.
           at System.Compiler.Ir2md.GetAssemblyRefIndex(AssemblyNode assembly)
           at System.Compiler.Ir2md.GetTypeRefIndex(TypeNode type)
           at System.Compiler.Ir2md.VisitReferencedType(TypeNode type)
           at System.Compiler.Ir2md.GetMemberRefIndex(Member m)
           at System.Compiler.Ir2md.PopulateCustomAttributeTable()
           at System.Compiler.Ir2md.SetupMetadataWriter(String debugSymbolsLocation)
           at System.Compiler.Ir2md.WritePE(Module module, String debugSymbolsLocation, BinaryWriter writer)
           at System.Compiler.Writer.WritePE(String location, Boolean writeDebugSymbols, Module module, Boolean delaySign, String keyFileName, String keyName)
           at System.Compiler.Writer.WritePE(CompilerParameters compilerParameters, Module module)
           at ILMerging.ILMerge.Merge()
           at ILMerging.ILMerge.Main(String[] args)

    It turns out that this issue is caused by ILMerge.exe not being able to find the .NET 4 framework by default. The answer was ultimately found here. You either have to use the /lib option to point to your .NET 4 framework directory (e.g., “C:\Windows\Microsoft.NET\Framework\v4.0.30319” or “C:\Windows\Microsoft.NET\Framework64\v4.0.30319”) or just use an ILMerge.exe.config file that looks like this:

        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/>
          </startup>
        </configuration>

    This successfully resolved my issue.

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:

    1. Used ssh to log in to the stalled system over the network.
    2. Checked the contents of the /var/log/dist-upgrade directory. There was no activity on main.log, apt.log or term.log.
    3. top showed that process 'precise' was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps.
    4. Killed the 'dpkg' and 'precise' processes.
    5. Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but it failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
    6. Tried running sudo apt-get -f install. It fails with similar errors to dpkg.

    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error. Dependency issues, e.g.:

        dpkg: dependency problems prevent configuration of cifs-utils:
         cifs-utils depends on samba-common; however:
          Package samba-common is not configured yet.
        dpkg: error processing cifs-utils (--configure):
         dependency problems - leaving unconfigured

    And resource conflicts, e.g.:

        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable

    Additionally, it seems there's a reference to potential boot problems, so I'm not keen to reboot without fixing the install first:

        dpkg: too many errors, stopping
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab

    So my question is: how do I get a working install when dpkg --configure -a fails?

  • GCC emits extra code for boost::shared_ptr dereference

    - by Checkers
    I have the following code:

        #include <boost/shared_ptr.hpp>

        struct Foo {
            int a;
        };

        static int A;

        void func_shared(const boost::shared_ptr<Foo> &foo) {
            A = foo->a;
        }

        void func_raw(Foo * const foo) {
            A = foo->a;
        }

    I thought the compiler would create identical code, but for the shared_ptr version an extra, seemingly redundant instruction is emitted:

        Disassembly of section .text:

        00000000 <func_raw(Foo*)>:
           0: 55                    push   ebp
           1: 89 e5                 mov    ebp,esp
           3: 8b 45 08              mov    eax,DWORD PTR [ebp+8]
           6: 5d                    pop    ebp
           7: 8b 00                 mov    eax,DWORD PTR [eax]
           9: a3 00 00 00 00        mov    ds:0x0,eax
           e: c3                    ret
           f: 90                    nop

        00000010 <func_shared(boost::shared_ptr<Foo> const&)>:
          10: 55                    push   ebp
          11: 89 e5                 mov    ebp,esp
          13: 8b 45 08              mov    eax,DWORD PTR [ebp+8]
          16: 5d                    pop    ebp
          17: 8b 00                 mov    eax,DWORD PTR [eax]
          19: 8b 00                 mov    eax,DWORD PTR [eax]
          1b: a3 00 00 00 00        mov    ds:0x0,eax
          20: c3                    ret

    I'm just curious: is this necessary, or is it just an optimizer shortcoming? Compiling with g++ 4.1.2, -O3 -NDEBUG.
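
    A minimal sketch of where the second load comes from (my addition; an assumption about the mechanism, not part of the original post): a shared_ptr is itself an object that stores the raw pointer as a member (boost names it px), so foo->a needs one load to fetch the stored pointer from the wrapper and a second load to fetch a through it.

        // Hedged sketch: a shared_ptr-like wrapper forces two dependent loads.
        struct Foo2 { int a; };

        struct SharedPtrLike {
            Foo2 *px;                                  // stored raw pointer (ref-count omitted)
            Foo2 *operator->() const { return px; }    // first load: fetch px from the wrapper
        };

        static int B;

        void func_wrapped(const SharedPtrLike &p) {
            B = p->a;                                  // second load: fetch a through px
        }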

  • Is there a standard lexer/parser tool for Python?

    - by Salim Fadhley
    A volunteer job requires us to convert a large number of LaTeX documents into ePub format. It's a series of open-source fiction books which has so far been produced only on paper via a print-on-demand service. We'd like to be able to offer the books to users of book-reader devices (such as the Kindle) which require the ePub format for best results. Fortunately, ePub is a very simple format; however, there's no trivial way for LaTeX to produce the XHTML output required. We experimented with alternative LaTeX compilers (e.g. plastex), but in the end we figured that it would probably be a lot easier to simply write our own compiler which understands a tiny subset of the LaTeX language and compiles directly to XHTML / ePub.

    Previously I used a tool on Windows called GOLD. This allowed me to go directly from BNF grammars to a stub parser. It also allowed me to implement the parser in any language I liked (I'd choose Python). This product has to work on Linux, so I'm wondering if there's an equivalent toolchain that works as well under Ubuntu / Eclipse / Python.

    The idea is that we will take the grammar of TeX and implement just a teeny subset of it, but we do not want to spend a huge amount of time worrying about grammar and parsing. A parser generator would obviously save us a great deal of time.

    Sal

    UPDATE 1: Bonus marks for a solution with excellent documentation or tutorials.

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR-specific header file for an LPC2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (not at a fixed memory address). The "IAR-safe" macro is:

        #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \
            volatile __no_init ATTRIBUTE union                       \
            {                                                        \
                unsigned long NAME;                                  \
                BIT_STRUCT NAME ## _bit;                             \
            } @ ADDRESS

        // declaration
        // (where __gpio0_bits is a structure that names
        //  each of the 32 bits as P0_0, P0_1, etc.)
        __IO_REG32_BIT(IO0PIN,0xE0028000,__READ_WRITE,__gpio0_bits);

        // usage
        IO0PIN = 0xAA55AA55;
        IO0PIN_bit.P0_5 = 0;

    This is my comparable "hardware independent" code:

        #define __IO_REG32_BIT(NAME, BIT_STRUCT) \
            volatile union                       \
            {                                    \
                unsigned long NAME;              \
                BIT_STRUCT NAME##_bit;           \
            } NAME;

        // declaration
        __IO_REG32_BIT(IO0PIN,__gpio0_bits);

        // usage
        IO0PIN.IO0PIN = 0xAA55AA55;
        IO0PIN.IO0PIN_bit.P0_5 = 1;

    This compiles and works, but quite obviously my "hardware independent" usage does not match the "IAR-safe" usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous-union matter, but multiple attempts and variants have proven unsuccessful. Maybe the IAR GNU compiler supports anonymous unions and VS2008 does not. Thank you.
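
    One conventional workaround (a hedged sketch of my own, not from the post, with a truncated hypothetical bit struct): give the union object a distinct name, then map the IAR-style spellings onto its members with two more macros, so the call sites stay unchanged and no anonymous union is needed.

        // Hedged sketch: hypothetical names; compiles as plain C or C++.
        typedef struct {
            unsigned P0_0 : 1, P0_1 : 1, P0_2 : 1,
                     P0_3 : 1, P0_4 : 1, P0_5 : 1;   /* ... P0_31 omitted */
        } __gpio0_bits;

        #define __IO_REG32_BIT(NAME, BIT_STRUCT) \
            volatile union {                     \
                unsigned long whole;             \
                BIT_STRUCT bits;                 \
            } NAME##_reg

        __IO_REG32_BIT(IO0PIN, __gpio0_bits);

        // Map the IAR-style names onto the named union:
        #define IO0PIN     (IO0PIN_reg.whole)
        #define IO0PIN_bit (IO0PIN_reg.bits)

        // Usage now matches the IAR header:
        //   IO0PIN = 0xAA55AA55;
        //   IO0PIN_bit.P0_5 = 1;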

  • Recognizing terminals in a CFG production previously not defined as tokens.

    - by kmels
    I'm making a generator of LL(1) parsers; my input is a CoCo/R language specification. I've already got a scanner generator for that input. Suppose I've got the following specification:

        COMPILER 1.

        CHARACTERS
          digit = "0123456789".

        TOKENS
          number    = digit {digit}.
          decnumber = digit {digit} "." digit {digit}.

        PRODUCTIONS
          Expression = Term {"+" Term | "-" Term}.
          Term       = Factor {"*" Factor | "/" Factor}.
          Factor     = ["-"] (Number | "(" Expression ")").
          Number     = (number | decnumber).

        END 1.

    So, if the parser generated by this grammar receives the word "1+1", it'd be accepted, i.e. a parse tree would be found. My question is: the character "+" was never defined in a token, but it appears in the non-terminal "Expression". How should my generated scanner recognize it? It would not recognize it as a token. Is this a valid input then? Should I add this terminal in TOKENS and then consider an error routine for the scanner so that it skips it? How do usual language specifications handle this?

  • Raw types and subtyping

    - by Dmitrii
    We have a generic class:

        SomeClass<T> {
        }

    We can write the line:

        SomeClass s = new SomeClass<String>();

    That's OK, because the raw type is a supertype of the generic type. But

        SomeClass<String> s = new SomeClass();

    is correct too. Why is it correct? I thought that type erasure happened before type checking, but that's wrong. From the Hacker's Guide to Javac: when the Java compiler is invoked with the default compile policy, it performs the following passes:

    1. parse: Reads a set of *.java source files and maps the resulting token sequence into AST nodes.
    2. enter: Enters symbols for the definitions into the symbol table.
    3. process annotations: If requested, processes annotations found in the specified compilation units.
    4. attribute: Attributes the syntax trees. This step includes name resolution, type checking and constant folding.
    5. flow: Performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability.
    6. desugar: Rewrites the AST and translates away some syntactic sugar.
    7. generate: Generates source files or class files.

    Generics are syntactic sugar, hence type erasure happens in pass 6, after type checking, which happens in pass 4. I'm confused.

  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled on this quote, which refers to the C/C++ language in general, not just the Mac platform:

        With C and C++ prefer use of int over char and short. The main reason behind this is
        that C and C++ perform arithmetic operations and parameter passing at integer level.
        If you have an integer value that can fit in a byte, you should still consider using
        an int to hold the number. If you use a char, the compiler will first convert the
        values into integer, perform the operations and then convert back the result to char.

    So my question: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit / 64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we may expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?
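
    A small illustration of the two effects in question (my addition, assuming a typical platform with a signed 8-bit char and a 32-bit int): arithmetic on char operands is performed after promotion to int and truncated on assignment, while the array footprint scales directly with the element size.

        #include <cstdio>

        int main() {
            char c1 = 100, c2 = 50;
            // c1 + c2 is computed as int after integer promotion, then the
            // result is converted back to char on assignment; 150 does not
            // fit in a signed 8-bit char, so this typically yields -106.
            char sum = c1 + c2;
            printf("%d\n", (int)sum);

            // The space trade-off from the question: four times the footprint
            // on targets where sizeof(int) == 4.
            static char small_buf[100000];
            static int  big_buf[100000];
            printf("%zu vs %zu bytes\n", sizeof(small_buf), sizeof(big_buf));
            return 0;
        }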

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

  • How would I code a complex formula parser manually?

    - by StormianRootSolver
    Hm, this is language-agnostic; I would prefer doing it in C# or F#, but I'm more interested this time in the question "how would that work anyway". What I want to accomplish is:

    a) I want to LEARN it - it's about my ego this time; it's for a fun project where I want to show myself that I'm really good at this stuff.

    b) I know a tiny little bit about EBNF (although I don't know yet how operator precedence works in EBNF - Irony.NET does it right, I checked the examples, but this is a bit ominous to me).

    c) My parser should be able to take this: 5 * (3 + (2 - 9 * (5 / 7)) + 9), for example, and give me the right results.

    d) To be quite frank, this seems to be the biggest problem in writing a compiler or even an interpreter for me. I would have no problem generating even 64-bit assembler code (I CAN write assembler manually), but the formula parser...

    e) Another thought: even simple computers (like my old Sharp 1246S with only about 2kB of RAM) can do that... it can't be THAT hard, right? And even very, very old programming languages have formula evaluation... BASIC is from 1964 and it could already calculate the kind of formula I presented as an example.

    f) A few ideas, a few inspirations would be really enough - I just have no clue how to do operator precedence and the parentheses. I DO, however, know that it involves an AST and that many people use a stack.

    So, what do you think?
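
    Since the question is literally "how would that work anyway", here is a minimal recursive-descent sketch in C++ (my illustration, not the poster's code): each precedence level gets its own function, higher-precedence levels are called from lower ones, and parentheses simply restart at the lowest level - that is the whole trick behind operator precedence.

        #include <cctype>
        #include <cstdio>

        // Grammar:  expr   = term   { ('+'|'-') term }
        //           term   = factor { ('*'|'/') factor }
        //           factor = number | '-' factor | '(' expr ')'
        // Integers only, no error handling - a sketch, not a product.
        const char *p;        // cursor into the input

        double expr();        // forward declaration for the recursion

        double factor() {
            while (std::isspace((unsigned char)*p)) ++p;
            if (*p == '(') { ++p; double v = expr(); ++p; return v; }  // ++p skips ')'
            if (*p == '-') { ++p; return -factor(); }
            double v = 0;
            while (std::isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
            return v;
        }

        double term() {
            double v = factor();
            for (;;) {
                while (std::isspace((unsigned char)*p)) ++p;
                if      (*p == '*') { ++p; v *= factor(); }
                else if (*p == '/') { ++p; v /= factor(); }
                else return v;
            }
        }

        double expr() {
            double v = term();
            for (;;) {
                while (std::isspace((unsigned char)*p)) ++p;
                if      (*p == '+') { ++p; v += term(); }
                else if (*p == '-') { ++p; v -= term(); }
                else return v;
            }
        }

        int main() {
            p = "5 * (3 + (2 - 9 * (5 / 7)) + 9)";
            printf("%g\n", expr());   // prints 37.8571 (division is floating point here)
            return 0;
        }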

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise?

    I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If there is indeed copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere?

    I think that misunderstanding some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference, and thus Objective-C is a somewhat unique situation with having to deal with copying what's on the stack.
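
    For a concrete flavor of the copy-versus-alias distinction the post circles around, a small C++ lambda sketch (my addition; C++ closures make the choice explicit per capture, which is exactly the "snapshot or not" question):

        #include <cstdio>

        int main() {
            int x = 1;
            auto by_value = [x]  { return x; };   // snapshot: copies x at creation time
            auto by_ref   = [&x] { return x; };   // alias: reads x at call time
            x = 2;
            printf("%d %d\n", by_value(), by_ref());   // prints "1 2"
            return 0;
        }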

  • Enum exceeding the 65535 bytes limit of static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large Enum of so-called Descriptors that I wanted to use as a reference list in my model. But now I've come across a compiler/VM limit for the first time, and so I'm looking for the best solution to handle this. Here is my error:

        The code for the static initializer is exceeding the 65535 bytes limit

    It is clear where this comes from - my Enum simply has far too many elements. But I need those elements - there is no way to reduce that set. Initially I planned to use a single Enum because I want to make sure that all elements within the Enum are unique. It is used in a Hibernate persistence context where the reference to the Enum is stored as a String value in the database. So this must be unique! The content of my Enum can be divided into several groups of elements belonging together. But splitting the Enum would remove the uniqueness safety I get at compile time. Or can this be achieved with multiple Enums in some way? My only current idea is to define some interface called Descriptor and code several Enums implementing it. This way I hope to be able to use the Hibernate Enum mapping as if it were a single Enum. But I'm not even sure if this will work. And I lose that uniqueness safety. Any ideas how to handle that case?

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a Via C3 processor. The Java application has several compiled shared libs that are accessed via JNI. The Via C3 processor is supposed to be i686 compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the lack of some specific and optional instructions of the i686 set in the C3 processor. These instructions, missing in the C3 implementation of the i686 set, are used by default by the GCC compiler when using i686 optimizations. The solution in this case was to go with an i386-compiled version of the Ubuntu distribution. The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, this time an Intel P4. Afterwards, the distribution needed some hacking to get it running, such as replacing some packages (such as the kernel one) with the i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that). My question is: Is there any tool or way to find out which specific architecture a binary file (executable or library) is targeted at, given that "file" does not give that much information?

  • How would I instruct extconf.rb to use additional g++ optimization flags, and which are advisable?

    - by mohawkjohn
    I'm using Rice to write a C++ extension for a Ruby gem. The extension is in the form of a shared object (.so) file. This requires 'mkmf-rice' instead of 'mkmf', but the two (AFAIK) are pretty similar. By default, the compiler uses the flags -g -O2. Personally, I find this kind of silly, since it's hard to debug with any optimization enabled. I've resorted to editing the Makefile to take out the flags I don't like (e.g., removing -fPIC -shared when I need to debug using main() instead of Ruby's hooks). But I figure there's got to be a better way. I know I can just do $CPPFLAGS += " -DRICE" to add additional flags. But how do I remove things without editing the Makefile directly? A secondary question: what optimizations are safe for shared objects loaded by Ruby? Can I do things like -funroll-loops? What do you all recommend? It's a scientific computing project, so the faster the better. Memory is not much of an issue. Many thanks!

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++ which doesn't have it (a pull-style C++ sketch follows after this list).

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
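
    A minimal pull-style lexer sketch in C++ (my addition, not from the post): the parser owns the lexer and calls next() on demand, so no token vector is ever materialized.

        #include <cctype>
        #include <string>

        struct Token {
            enum Kind { Number, Plus, End } kind;
            std::string text;
        };

        class Lexer {
            const char *p_;   // cursor into the source
        public:
            explicit Lexer(const char *src) : p_(src) {}

            // The parser pulls one token at a time; nothing is buffered.
            Token next() {
                while (std::isspace((unsigned char)*p_)) ++p_;
                if (*p_ == '\0') return Token{Token::End, ""};
                if (*p_ == '+') { ++p_; return Token{Token::Plus, "+"}; }
                std::string digits;                      // error handling omitted
                while (std::isdigit((unsigned char)*p_)) digits += *p_++;
                return Token{Token::Number, digits};
            }
        };

        // Typical use from a recursive descent parser:
        //   Lexer lex("1 + 2");
        //   for (Token t = lex.next(); t.kind != Token::End; t = lex.next()) { /* ... */ }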

  • Package names conflicting with getters and setters?

    - by MrKishi
    Hello, folks. So, I came across this compilation error a while ago. As there's an easy fix and I didn't find anything relevant at the time, I eventually let it go. I just remembered it, and I'm now wondering if this is really part of the language grammar (which I highly doubt) or if it's a compiler bug. I'm purely curious about this -- it doesn't really affect development, but it would be nice to see if any of you have seen this already.

        package view {
            import flash.display.Sprite;

            public class Main extends Sprite {
                private var _view:Sprite = new Sprite();

                public function Main() {
                    this.test();
                }

                private function test():void {
                    trace(this.view.x, this.view.y);
                    // 1178: Attempted access of inaccessible property x through a reference with static type view:Main.
                    // 1178: Attempted access of inaccessible property y through a reference with static type view:Main.
                    // Note that I got this due to the package name.
                    // It runs just fine if I rename the package or the getter.
                }

                public function get view():Sprite {
                    return this._view;
                }
            }
        }

  • How do I stop MSYS from transforming my compiler options?

    - by Carl Norum
    Is there a way to stop MSYS/MinGW from transforming what it thinks are paths on my command lines? I have a project that's using nmake & Microsoft Visual Studio 2003 (yeecccch). I have the build system all ported and ready to go for GNU make (and tested with Cygwin). Something weird is happening to my compiler flags when I try to compile in an MSYS environment, though. Here's a simplified example:

        $ cl /nologo
        Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.6030 for 80x86
        Copyright (C) Microsoft Corporation.  All rights reserved.

        /out:nologo.exe
        C:/msys/1.0/nologo
        LINK : fatal error LNK1181: cannot open input file 'C:/msys/1.0/nologo.obj'

    As you can see, MSYS is transforming the /nologo compiler switch into a Windows path and then sending that to the compiler. I really don't want this to happen - in fact, I'd be happy if MSYS never transformed any paths; my build system had to take care of all that when I first ported to Cygwin. Is there a way to make that happen?

    It does work to change the command to:

        $ cl -nologo

    which produces the expected results, but this build system is very large and very painful to update. I really don't want to have to go in and change every use of a / for a flag to a -. In particular, there may be tools that don't support the use of the - at all, and then I'll really be stuck. Thanks for any suggestions!

  • xcode linker error on iPhone app (Only on simulator)

    - by RexOnRoids
    I'm getting this linker error that won't let me compile. It only happens on the simulator.

    KEY POINTS:
    - Happens only in the simulator.
    - Similar to THIS question, but I found no FRAMEWORK_SEARCH_PATHS in my .pbxproj file.
    - Though my OS is 10.6.2, I had to build target 10.5 to avoid other linker errors.
    - libxml2.dylib IS required and is in my Frameworks group.
    - The other cited libraries I have never heard of.
    - Tried bringing in those other libs under Frameworks; didn't solve it.

        Build SpaceTweet of project SpaceTweet with configuration Debug

        Ld build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet normal i386
            cd "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)"
            setenv MACOSX_DEPLOYMENT_TARGET 10.5
            setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 -arch i386 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -L/Users/Scott/Desktop "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/../../libYAJLIPhone-0" -L/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.0.sdk/usr/lib "-F/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -filelist "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/SpaceTweet.build/Debug-iphonesimulator/SpaceTweet.build/Objects-normal/i386/SpaceTweet.LinkFileList" -mmacosx-version-min=10.5 -framework Foundation -framework UIKit -framework CoreGraphics -framework AVFoundation -framework MessageUI -lYAJLIPhone -lxml2 -o "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet"

        ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libxml2.dylib, missing required architecture i386 in file
        ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libSystem.dylib, missing required architecture i386 in file
        ld: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libobjc.A.dylib, missing required architecture i386 in file
        collect2: ld returned 1 exit status
        Command /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1

    CLUE: Again, MY question is very similar to THIS SOLVED QUESTION, except that in my case I did NOT find a FRAMEWORK_SEARCH_PATHS entry in the .pbxproj file in my project bundle and thus could not solve it in the manner in which that question was solved.

  • iPhone app linker error on Snow Leopard (Only on simulator)

    - by RexOnRoids
    I'm getting this linker error that won't let me compile. It only happens on the simulator.

    NOTE:
    - OS X 10.6.2, but I had to use 10.5 as the build target to avoid other linker errors.
    - libxml2.dylib IS required and is in my Frameworks group.
    - The other cited libraries I have never heard of.
    - Tried bringing in those other libs under Frameworks; didn't solve it.

        Build SpaceTweet of project SpaceTweet with configuration Debug

        Ld build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet normal i386
            cd "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)"
            setenv MACOSX_DEPLOYMENT_TARGET 10.5
            setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 -arch i386 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -L/Users/Scott/Desktop "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/../../libYAJLIPhone-0" -L/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.0.sdk/usr/lib "-F/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -filelist "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/SpaceTweet.build/Debug-iphonesimulator/SpaceTweet.build/Objects-normal/i386/SpaceTweet.LinkFileList" -mmacosx-version-min=10.5 -framework Foundation -framework UIKit -framework CoreGraphics -framework AVFoundation -framework MessageUI -lYAJLIPhone -lxml2 -o "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet"

        ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libxml2.dylib, missing required architecture i386 in file
        ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libSystem.dylib, missing required architecture i386 in file
        ld: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libobjc.A.dylib, missing required architecture i386 in file
        collect2: ld returned 1 exit status
        Command /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1

  • steps to fix a project that won't compile

    - by eco_bach
    Hi, I'm pulling my hair out trying to get a simple window-based project to compile. I am running both the 3.2.2 and 3.2.3 versions of Xcode. The latter is set up in a separate folder. Originally I used the latter to create and compile my project against the new 4.0 SDK. It compiled fine. Then I made the mistake of deleting some SDKs I thought I no longer needed. Ever since, I can no longer compile. Right now I get a dozen or so errors similar to the following:

        "_OBJC_CLASS_$_CATransition", referenced from:
            objc-class-ref-to-CATransition in ViewTransitionsAppDelegate.o

    My active executable is the iPhone Simulator 4 and my Base SDK is iPhone Device 3.0. I tried reinstalling the Xcode 3.2.3 installer; no difference. I'm totally stymied, as my project WAS working and compiling fine, both to the simulator and to an external device. Are there any best practices or recommended steps for fixing or rebuilding a project when it won't compile? Any help welcome!

  • Main Function Error C++

    - by Arjun Nayini
    I have this main function:

        #ifndef MAIN_CPP
        #define MAIN_CPP
        #include "dsets.h"
        using namespace std;

        int main() {
            DisjointSets s;
            s.uptree.addelements(4);
            for (int i = 0; i < s.uptree.size(); i++)
                cout << uptree.at(i) << endl;
            return 0;
        }
        #endif

    And the following class:

        class DisjointSets {
        public:
            void addelements(int x);
            int find(int x);
            void setunion(int x, int y);
        private:
            vector<int> uptree;
        };
        #endif

    My implementation is this:

        void DisjointSets::addelements(int x) {
            for (int i = 0; i < x; i++)
                uptree.push_back(-1);
        }

        // Given an int, this function finds the root associated with that node.
        int DisjointSets::find(int x) {
            // need path compression
            if (uptree.at(x) < 0)
                return x;
            else
                return find(uptree.at(x));
        }

        // This function reorders the uptree in order to represent the union
        // of two subtrees.
        void DisjointSets::setunion(int x, int y) {
        }

    Upon compiling main.cpp (g++ main.cpp) I'm getting these errors:

        dsets.h: In function ‘int main()’:
        dsets.h:25: error: ‘std::vector<int> DisjointSets::uptree’ is private
        main.cpp:9: error: within this context
        main.cpp:9: error: ‘class std::vector<int>’ has no member named ‘addelements’
        dsets.h:25: error: ‘std::vector<int> DisjointSets::uptree’ is private
        main.cpp:10: error: within this context
        main.cpp:11: error: ‘uptree’ was not declared in this scope

    I'm not sure exactly what's wrong. Any help would be appreciated.
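
    A hedged sketch of what the first two errors are pointing at (my addition, with hypothetical accessor names not in the original code): uptree is a private member, so main() must go through the public interface of DisjointSets instead of touching the vector directly.

        // Hedged sketch: keep the vector private; expose operations on the class.
        #include <iostream>
        #include <vector>

        class DisjointSets {
        public:
            void addelements(int x) { for (int i = 0; i < x; i++) uptree.push_back(-1); }
            int  at(int i) const    { return uptree.at(i); }          // hypothetical accessor
            int  size() const       { return (int)uptree.size(); }    // hypothetical accessor
        private:
            std::vector<int> uptree;
        };

        int main() {
            DisjointSets s;
            s.addelements(4);                  // call the method on s, not on s.uptree
            for (int i = 0; i < s.size(); i++)
                std::cout << s.at(i) << std::endl;
            return 0;
        }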

  • Conditional references in .NET project, possible to get rid of warning?

    - by Lasse V. Karlsen
    I have two references to a SQLite assembly, one for 32-bit and one for 64-bit, which look like this (this is a test project to try to get rid of the warning, so don't get hung up on the paths):

        <Reference Condition=" '$(Platform)' == 'x64' " Include="System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=AMD64">
          <SpecificVersion>True</SpecificVersion>
          <HintPath>..\..\LVK Libraries\SQLite3\version_1.0.65.0\64-bit\System.Data.SQLite.DLL</HintPath>
        </Reference>
        <Reference Condition=" '$(Platform)' == 'x86' " Include="System.Data.SQLite, Version=1.0.65.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=x86">
          <SpecificVersion>True</SpecificVersion>
          <HintPath>..\..\LVK Libraries\SQLite3\version_1.0.65.0\32-bit\System.Data.SQLite.DLL</HintPath>
        </Reference>

    This produces the following warning:

        Warning 1  The referenced component 'System.Data.SQLite' could not be found.

    Is it possible for me to get rid of this warning? One way I've looked at is to just configure my project to be 32-bit when I develop and let the build machine fix the reference when building for 64-bit, but this seems a bit awkward and probably prone to errors. Any other options? The reason I want to get rid of it is that the warning is apparently being picked up by TeamCity and periodically flagged as something I need to look into, so I'd like to get completely rid of it.
