Search Results

Search found 125 results on 5 pages for 'llvm'.

Page 2/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • LLVM Clang 5.0 explicit in copy-initialization error

    - by kevzettler
    I'm trying to compile an open source project on OS X that has only been tested on Linux.

      $ g++ -v
      Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
      Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
      Target: x86_64-apple-da

    I'm compiling with the following command line options:

      g++ -MMD -Wall -std=c++0x -stdlib=libc++ -Wno-sign-compare -Wno-unused-variable -ftemplate-depth=1024 -I /usr/local/Cellar/boost/1.55.0/include/boost/ -g -O3 -c level.cpp -o obj-opt/level.o

    I am seeing several errors that look like this:

      ./square.h:39:70: error: chosen constructor is explicit in copy-initialization
          int strength = 0, double flamability = 0, map<SquareType, int> constructions = {}, bool ticking = false);

    The project states the following are requirements for the Linux setup. How can I confirm I'm meeting them?

      gcc-4.8.2
      git
      libboost 1.5+ with libboost-serialize
      libsfml-dev 2+ (Ubuntu ppa that contains libsfml 2: )
      freeglut-dev
      libglew-dev
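
    For context, a minimal sketch of the error class involved (not code from the project): copy-list-initialization with = {} is not allowed to pick an explicit constructor, and the libc++ shipped with Apple LLVM 5.0 declared some default constructors explicit (std::map's among them, which is what the diagnostic above points at), while the libstdc++ used by gcc-4.8.2 on Linux did not.

      struct S {
          explicit S() {}   // an explicit default constructor, as libc++'s
      };                    // std::map effectively had at the time

      void f(S s = {});     // error: copy-list-initialization of a default
                            // argument may not use an explicit constructor
                            // ("chosen constructor is explicit in
                            // copy-initialization")
      void g(S s = S());    // OK: direct-initialization is allowed

    So the same default arguments are accepted against one standard library and rejected against the other; matching the Linux toolchain (gcc-4.8.2 with libstdc++) or spelling the defaults as direct-initialization sidesteps the error.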

  • Loading LLVM passes on Cygwin

    - by user666730
    I am trying to write an LLVM pass on Windows using Cygwin. When I make the project, a DLL gets created in the Release/bin directory instead of a .so file in the Release/lib directory; the latter is what is shown in the LLVM documentation. When I try to load this DLL using the -load flag, nothing happens:

      $ opt -load ../../../Release/bin/Pass.dll -help

    The pass that I am trying to load isn't printed after this. How do I get this right?
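
    For reference, a minimal loadable pass of the kind the documentation builds, sketched after the "Writing an LLVM Pass" tutorial (constructor and registration details vary across LLVM versions). If the shared library loads at all, opt -load ... -help lists the pass under its registered name, here "hello":

      #include "llvm/Pass.h"
      #include "llvm/Function.h"
      #include "llvm/Support/raw_ostream.h"
      using namespace llvm;

      namespace {
        // A trivial function pass that prints each function's name.
        struct Hello : public FunctionPass {
          static char ID;
          Hello() : FunctionPass(ID) {}
          virtual bool runOnFunction(Function &F) {
            errs() << "Hello: " << F.getName() << "\n";
            return false;   // the IR is left unmodified
          }
        };
      }
      char Hello::ID = 0;
      // Registration is what makes "opt -hello" (and -help) see the pass.
      static RegisterPass<Hello> X("hello", "Hello World pass");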

  • tail call generated by clang 1.1 and 1.0 (llvm 2.7 and 2.6)

    - by ony
    After compiling the next snippet of code with clang -O2 (or with the online demo):

      #include <stdio.h>
      #include <stdlib.h>

      int flop(int x);

      int flip(int x) {
          if (x == 0) return 1;
          return (x+1)*flop(x-1);
      }

      int flop(int x) {
          if (x == 0) return 1;
          return (x+0)*flip(x-1);
      }

      int main(int argc, char **argv) {
          printf("%d\n", flip(atoi(argv[1])));
      }

    I'm getting the next snippet of LLVM assembly in flip:

      bb1.i:                                            ; preds = %bb1
        %4 = add nsw i32 %x, -2                         ; <i32> [#uses=1]
        %5 = tail call i32 @flip(i32 %4) nounwind       ; <i32> [#uses=1]
        %6 = mul nsw i32 %5, %2                         ; <i32> [#uses=1]
        br label %flop.exit

    I thought that a tail call means dropping the current stack frame (i.e. the return will go to the upper frame, so the next instruction should be ret %5), but according to this code it will do a mul after the call. And in the native assembly there is a simple call without tail optimisation (even with the appropriate flag for llc). Can somebody explain why clang generates such code? Also, I can't understand why LLVM has a tail call marker at all, when it could simply check that the next ret uses the result of the previous call and then do the appropriate optimisation or generate the native equivalent of a tail-call instruction.
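
    One distinction worth separating out (hedged, but consistent with the LLVM language reference): the tail marker only asserts that the call does not access the caller's stack, which makes tail-call lowering permissible; it does not promise a jump, and none is possible here because the mul still runs after the call, so the call is not in tail position. A sketch of a call that is:

      int flop(int x);

      // Nothing executes after this call; its result is returned directly.
      // The call is in tail position, so the backend may lower it to a jump.
      int flip_tail(int x) {
          return flop(x - 1);
      }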

  • c++ g++ llvm-clang compiler profiling

    - by anon
    Note, my question is not: how do I tell my compiler to compile with profiling on. I want to profile the compile process itself. For each file, I'd like to know how much time is spent on each line of the program. I'm working on a project where some files have huge compile times, and I'm trying to figure out why. Is there any way to do this with g++ or llvm-clang? Thanks!
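
    Not per-line, but one hedged starting point that both compilers genuinely support: -ftime-report prints a per-phase breakdown of where compile time went, which is often enough to separate, say, template instantiation from optimisation (big_file.cpp stands in for whichever file is slow):

      g++ -ftime-report -c big_file.cpp     # per-pass timing from GCC
      clang -ftime-report -c big_file.cpp   # per-phase timing from clang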

  • llvm preprocessor g++ passes

    - by anon
    Suppose I want to write my own preprocessor, so I want something like this: all *.cpp and *.hpp files (even the included ones), before they go to g++, go: file --> my preprocessor --> g++. Is there an easy way to do this in the LLVM framework? I.e., to add in a stage that says: "after you load up the source file, pipe it through this program before compiling it"? Thanks!
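
    Outside LLVM entirely, one crude sketch for a single top-level file (my-preprocessor is a hypothetical filter that reads stdin and writes stdout). Note that headers pulled in by #include bypass the pipe, so the "even the included ones" requirement still needs a real preprocessor hook or pre-filtered copies of the headers:

      my-preprocessor < file.cpp | g++ -x c++ - -c -o file.o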

  • llvm's getelementptr instruction with array types

    - by vava
    I'm trying to use an array type in LLVM and can't get the hang of it yet. As far as I can understand from the documentation, the array should grow all by itself. But how does that happen? Should I just getelementptr with whatever index I have, and it'll grow so that the index will still be in bounds? That's not what happens; I get all sorts of funny problems, which go away the moment I create an array big enough to accommodate all my data. So, should the following code work by itself, or do I have to call something else for the array to increase its size?

      %stack = alloca [0 x i32]                                      ; <[0 x i32]*>
      %"stack[idx]" = getelementptr [0 x i32]* %stack, i32 0, i32 1  ; <i32*>
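
    For what it's worth, a C++ analogy of what those two instructions do (16 is just an arbitrary example size): getelementptr is pure address arithmetic, like &stack[idx]; neither it nor the alloca ever grows the storage, so a [0 x i32] alloca reserves nothing and every index into it is out of bounds.

      int stack[16];          // %stack = alloca [16 x i32]: all storage is
                              // reserved here, once, at a fixed size
      int *slot = &stack[1];  // getelementptr ... i32 0, i32 1: address
                              // computation only; no bounds check, no growth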

  • llvmc problem on ubuntu

    - by simk
    I have the exact problem described in the link below:

      http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=537285

    When I try to run

      llvmc hello.c -o hello

    it gives me this error:

      llvmc: Can't find program 'llvm-gcc'

    Can anybody suggest how to get rid of this issue? I have llvm-gcc installed, and I am using Ubuntu 9.10.

  • Why isn't there a good scheme/lisp on llvm?

    - by anon
    There is Gambit Scheme, MIT Scheme, PLT Scheme, Chicken Scheme, Bigloo, Larceny, ...; then there are all the Lisps. Yet there's not (to my knowledge) a single popular Scheme/Lisp on LLVM, even though LLVM provides lots of nice things, like:

      easier to generate code for than x86
      easy to make C FFI calls
      ...

    So why is it that there isn't a good Scheme/Lisp on LLVM?

  • Is it viable to port a C++ application to Java through LLVM

    - by Javier Mr
    How viable is it to port a C++ application to Java bytecode using LLVM (I guess LLJVM)? The thing is that we currently have a process written in C++, but a new client has made it mandatory to be able to run the program in a multiplatform way, using the Java Virtual Machine with obviously no native code (no JNI). The idea is to be able to take the generated jar, copy it to different systems (Linux, Win, 32-bit, 64-bit), and it should just work. Looking around, it seems possible to compile C++ to LLVM IR code and then compile that code to Java bytecode. There is no need for the generated code to be readable. I have tested a bit with similar things using emscripten, which takes C++ code and compiles it to JavaScript; the result is valid JS but totally unreadable (it looks like assembler). Has anybody done a port of an application from C++ to Java bytecode using this technique? What problems could we face? Is it a valid approach for production code? Note: I am aware that we currently have some non-standard C++ and closed-source libraries; we are looking into removing the non-standard code and all closed-source libraries and using Free Libre Open Source Software, so let's suppose all code is standard C++ with all source available at compile time. Note: It is not an option to write portable C++ code and then compile it for each desired target platform; the compiled program must be multiplatform, thus the use of the JVM (right now we are not looking at similar solutions based on Python or another language, but I would also like to hear about them).

  • Graphics driver being reported as Gallium 0.4 on llvmpipe (LLVM 0x300) instead of intel

    - by schonjones
    I have an integrated Intel 945GM in a Toshiba laptop. Previously the graphics driver was reported correctly, but at some point that changed. I've noticed generally poor performance, and though the machine should meet the minimum requirements for Unity 3D, it is using Unity 2D. Under the Details panel in System Settings it is now reporting Gallium 0.4 on llvmpipe (LLVM 0x300). Any help would be appreciated; I have searched Google for hours trying to find an answer.

  • How to build LLVM using GCC 4 on Windows?

    - by Steve314
    I have been able to build LLVM 2.6 (the llvm-2.6.tar.gz package) using MinGW GCC 3.4.5. I haven't tested it properly, but it seems to work. The trouble is, I have libraries of my own which don't build using GCC 3, but which work fine in GCC 4 (template issues). I believe the first official GCC 4 version for MinGW is GCC 4.4.0. EDIT: Decluttered; everything useful in the "tried this, tried that" info is now in the answer. EDIT: Most of this question/answer is redundant with LLVM 2.7; the standard configure/make routine works fine in MinGW with no hacks or workarounds.

  • LLVM: Passing a pointer to a struct, which holds a pointer to a function, to a JIT function

    - by Rusky
    I have an LLVM (version 2.7) module with a function that takes a pointer to a struct. That struct contains a function pointer to a C++ function. The module function is going to be JIT-compiled, and I need to build that struct in C++ using the LLVM API. I can't seem to get the pointer to the function as an LLVM value, let alone pass a pointer to the ConstantStruct that I can't build yet. I'm not sure if I'm even on the right track, but this is what I have so far:

      void print(char*);

      vector<Constant*> functions;
      functions.push_back(ConstantExpr::getIntToPtr(
          ConstantInt::get(Type::getInt32Ty(context), (int)print),
          /* function pointer type here, FunctionType::get(...) doesn't seem to work */
      ));
      ConstantStruct* printerStruct = cast<ConstantStruct>(ConstantStruct::get(
          cast<StructType>(m->getTypeByName("printer")),
          functions));

      Function* main = m->getFunction("main");
      vector<GenericValue> args(1);
      args[0].PointerVal = /* not sure what goes here */;
      ee->runFunction(main, args);
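
    For the commented-out type, a hedged sketch in LLVM 2.7-era spelling (API details vary by version, and <cstdint> is assumed for the integer types): getIntToPtr wants a pointer-to-function type, built by wrapping a FunctionType in a PointerType, and on a 64-bit target the raw address needs a 64-bit integer rather than int:

      // The type of 'void print(char*)': void (i8*)
      std::vector<const Type*> params;
      params.push_back(PointerType::getUnqual(Type::getInt8Ty(context)));
      const FunctionType *printTy =
          FunctionType::get(Type::getVoidTy(context), params, false);

      // What getIntToPtr expects: a pointer to that function type.
      const Type *printPtrTy = PointerType::getUnqual(printTy);

      // A pointer-sized integer keeps the address intact on x86-64.
      Constant *addr = ConstantInt::get(Type::getInt64Ty(context),
                                        (uint64_t)(uintptr_t)&print);
      Constant *fnPtr = ConstantExpr::getIntToPtr(addr, printPtrTy);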

  • Using GCC 4.2 to compile *.mm files is very slow, but LLVM does a very good job; what's the difference?

    - by jianhua
    My project is an Objective-C and C++ hybrid, containing both *.m and *.mm files. When compiling, if I choose GCC 4.2, the *.m Objective-C source files compile very fast but the *.mm files are very, very slow; LLVM 2.0 does a very good job and is very fast for both *.m and *.mm. My question: is there any difference between LLVM and GCC 4.2 when compiling *.mm files? Why is GCC 4.2 so slow? Any idea or discussion will be appreciated; thanks in advance. ENV: Xcode 4.0.1

  • Aren't there compilers better at telling the programmer what's wrong in the code?

    - by jokoon
    I have worked a little while with the Microsoft compiler from Visual C++, but I worked a long time with G++, and I remember often having a hard time understanding what was wrong in my code with the former. Besides binary code generation and optimisation, I think this is a very important feature of a C++ compiler: giving the programmer a clue that lets him understand as fast as possible what is wrong with his/her code. I can understand that some programmers treat programming as some sort of "competition" to make the fewest errors, but to me that's a counterproductive opinion. I once tried the Clang compiler for C from the LLVM thingie; I didn't use it for long, but I was impressed by how explicit and easy to understand the error messages were. What are your experiences, and how do you think this matters? Some WIP of C++ support in Clang: http://clang.llvm.org/cxx_status.html

  • How to invoke an Objective-C Block via the LLVM C++ API?

    - by smokris
    Say, for example, I have a compiled Objective-C module that contains something like the following:

      typedef bool (^BoolBlock)(void);

      BoolBlock returnABlock(void)
      {
          return Block_copy(^bool(void){
              printf("Block executing.\n");
              return YES;
          });
      }

    ...then, using the LLVM C++ API, I load that module and create a CallInst to call the returnABlock() function:

      Function *returnABlockFunction =
          returnABlockModule->getFunction(std::string("returnABlock"));
      CallInst *returnABlockCall =
          CallInst::Create(returnABlockFunction, "returnABlockCall", entryBlock);

    How can I then invoke the Block returned via the returnABlockCall object?
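
    Background the question turns on, a hedged sketch of Clang's documented Blocks ABI (not code from the question): the value returnABlock() yields is a pointer to a block literal whose invoke field holds the actual function pointer, and invoke receives the block pointer itself as a hidden first argument. The generated IR therefore has to load invoke out of the struct and call it with the block as argument zero.

      // Shape of a block literal per the Clang Blocks ABI (sketch):
      struct BlockDescriptor;
      struct BlockLiteral {
          void *isa;                    // runtime class pointer
          int   flags;
          int   reserved;
          bool (*invoke)(void *self);   // the code; self is the block itself
          BlockDescriptor *descriptor;
          // ...captured variables follow, if any
      };

      // Invoking a BoolBlock by hand, the way the emitted IR must:
      //     bool result = block->invoke(block);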

  • Curious: Could LLVM be used for Infocom z-machine code, and if so how? (in general)

    - by jonhendry2
    Forgive me if this is a silly question, but I'm wondering if/how LLVM could be used to obtain a higher-performance Z-machine VM for interactive fiction. (If it could be used, I'm just looking for some high-level ideas or suggestions, not a detailed solution.) It might seem odd to want higher performance from a circa-1978 technology, but apparently Z-machine games produced by the modern Inform 7 IDE can have performance issues, due to the huge number of rules that need to be evaluated with each turn. Thanks! FYI: The Z-machine architecture was reverse-engineered by Graham Nelson and is documented at http://www.inform-fiction.org/zmachine/standards/z1point0/overview.html

  • Enable LLVM + Clang in Xcode new project causes linking errors

    - by Ger Teunis
    I've done a complete clean uninstall of Xcode, deleted the prefs, deleted the complete /Developer folder, and reinstalled Xcode. I create a new Cocoa application, go over to the target, do a "Get Info" on the target, set "C / C++ compiler version" to "LLVM compiler 1.0.2", and press Build. I get:

      ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/x86_64' following -L not found
      ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/x86_64' following -L not found
      ld: warning: directory '/usr/lib/i686-apple-darwin10/4.2.1' following -L not found
      ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1' following -L not found
      ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1' following -L not found
      ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/../../../i686-apple-darwin10/4.2.1' following -L not found
      ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/../../..' following -L not found
      ld: library not found for -lgcc
      Command /Developer/usr/bin/clang failed with exit code 1

    Anyone able to help me here? LLVM + GCC frontend does work, but I really would like to use Clang (LLVM compiler 1.0.2). A fresh Xcode install and a new Cocoa project still have this issue.

  • Clang LLVM doesn't generate warnings in Xcode

    - by John Gallagher
    I want lots of lovely warnings when compiling. I've set my build configuration to be based on a build config file I have. When I switch to GCC 4.0, it generates all the required warnings. As soon as I change to the Clang LLVM compiler, all the warnings disappear. Every other setting is identical. What am I missing?

  • llvm/clang re-compilation with itself

    - by teppic
    After reading many questions on here, I decided to give clang a go, and installed the svn version on Ubuntu 12.04 (64-bit). I was expecting issues, but it all installed smoothly with no warnings. I noticed, though, that when re-running the configure script, if clang/clang++ is in your path it will choose them over gcc/g++ for its own compilation. Is it a good idea to recompile llvm/clang with itself? I know this is absolutely standard with gcc, but I've read that clang's C++ implementation isn't quite good enough yet (maybe this is out-of-date info...).

  • iPhone static library Clang/LLVM error: non_lazy_symbol_pointers

    - by Bekenn
    After several hours of experimentation, I've managed to reduce the problem to the following example (C++):

      extern "C" void foo();

      struct test {
          ~test() { }
      };

      void doTest() {
          test t;  // 1
          foo();   // 2
      }

    This is being compiled for iOS devices in Xcode 4.2, using the provided Clang compiler (Apple LLVM compiler 3.0) and the iOS 5.0 SDK. The project is configured as a Cocoa Touch Static Library, and "Enable Linking With Shared Libraries" is set to No because I'm building an AIR native extension. The function foo is defined in another external library. (In my actual project, this would be any of the C API functions defined by Adobe for use in AIR native extensions.) When attempting to compile this code, I get back the error:

      FATAL:incompatible feature used: section type non_lazy_symbol_pointers (must specify "-dynamic" to be used)
      clang: error: assembler command failed with exit code 1 (use -v to see invocation)

    The error goes away if I comment out either of the lines marked 1 or 2 above, or if I change the build setting "Enable Linking With Shared Libraries" to Yes. (However, if I change that setting, then I get multiple "ld warning: unexpected srelocation type 9" warnings when linking the library into the final project, and the application crashes when running on the device.) The build error also goes away if I remove the destructor from test. So: is this a bug in Clang? Am I missing some all-important and undocumented build setting? The interaction between an externally-provided function and a struct with a destructor is very peculiar, to say the least.

  • Built in Analyzer in Xcode 3.1.4

    - by Mustafa
    Hi all, I wonder if the built-in analyzer in Xcode 3.1.4 makes it redundant to use the LLVM/Clang Static Analyzer separately? Please refer to the original article here: Finding memory leaks with the LLVM/Clang Static Analyzer. Thanks.

  • Why does Clang/LLVM warn me about using default in a switch statement where all enumerated cases are covered?

    - by Thomas Catterall
    Consider the following enum and switch statement:

      typedef enum {
          MaskValueUno,
          MaskValueDos
      } testingMask;

      void myFunction(testingMask theMask)
      {
          switch (theMask) {
              case MaskValueUno: {}  // deal with it
              case MaskValueDos: {}  // deal with it
              default: {}            // deal with an unexpected or uninitialized value
          }
      }

    I'm an Objective-C programmer, but I've written this in pure C for a wider audience. Clang/LLVM 4.1 with -Weverything warns me at the default line:

      Default label in switch which covers all enumeration values

    Now, I can sort of see why this is there: in a perfect world, the only values entering in the argument theMask would be in the enum, so no default is necessary. But what if some hack comes along and throws an uninitialized int into my beautiful function? My function will be provided as a drop-in library, and I have no control over what could go in there. Using default is a very neat way of handling this. Why do the LLVM gods deem this behaviour unworthy of their infernal device? Should I be preceding this with an if statement to check the argument?
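
    The pattern the warning nudges toward, as a hedged sketch (handleUnexpectedMask is a hypothetical helper): drop default so that -Wswitch keeps warning whenever an enumerator added later goes unhandled, and deal with out-of-range values after the switch instead.

      void handleUnexpectedMask(int value);  /* hypothetical out-of-range handler */

      void myFunction(testingMask theMask)
      {
          switch (theMask) {
              case MaskValueUno: /* deal with it */ return;
              case MaskValueDos: /* deal with it */ return;
          }
          /* Only values outside the enumeration reach this point, and
             -Wswitch still flags any enumerator missing a case above. */
          handleUnexpectedMask((int)theMask);
      }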

  • How do I make compiling code not bring my system to its knees?

    - by Jason Baker
    I have a MacBook with Snow Leopard and 2 gigs of RAM. When I compile C or C++ code, my system becomes all but unusable. For instance, when I compile llvm I notice that there are about 10 or 11 processes (cc1plus) getting launched at a time that suck up my CPU time and memory. Is there any way to make it compile less at one time? I'll gladly wait a while longer to have my system usable while I'm compiling. Or is this something that you just have to live with when compiling C or C++?
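
    If the build is make-driven (llvm's is), the number of simultaneous cc1plus processes is a choice rather than a constant; a sketch of two standard knobs:

      make -j2          # cap the build at two parallel compile jobs
      nice -n 19 make   # or keep the parallelism but at the lowest CPU priority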
