Search Results

Search found 9889 results on 396 pages for 'behind the compiler'.

Page 42/396

  • When compiling programs to run inside a VM, what should march and mtune be set to?

    - by Russ
    With a VM being at the mercy of whatever the host machine provides, what compiler flags should be passed to gcc? I would normally think that -march=native would be what you would use when compiling for a dedicated box, but the level of fine detail that -march=native goes into, as indicated in this article, makes me extremely wary of using it. So... what should -march and -mtune be set to inside a VM? For a specific example: my specific case right now is compiling python (and more) in a Linux guest inside a KVM-based "cloud" host where I have no real control over the host hardware (aside from 'simple' stuff like CPU GHz, CPU count, and available RAM). Currently, cpuinfo tells me I've got an "AMD Opteron(tm) Processor 6176", but I honestly don't know (yet) whether that is reliable or whether the guest can get moved to different architectures underneath me to meet the host's infrastructure-shuffling needs (sounds hairy/unlikely). All I can really guarantee is my OS, which is a 64-bit Linux kernel where uname -m yields x86_64.

    Read the article

  • Find the "name" of a library (-L -l switches)

    - by sebastiangeiger
    Being fairly new to C++ I have a question basically concerning the g++ compiler and especially the inclusion of libraries. Consider the following makefile:

        CPPFLAGS= -I libraries/boost_1_43_0-bin/include/ -I libraries/jpeg-8b-bin/include/
        LDLIBS= libraries/jpeg-8b-bin/lib/libjpeg.a
        # LDLIBS= -L libraries/jpeg-8b-bin/lib -llibjpeg

        all: main

        main: main.o
            c++ -o main main.o $(LDLIBS)

        main.o: main.cpp
            c++ $(CPPFLAGS) -c main.cpp

        clean:
            rm -rf *.o main

    As you can see, I declared the LDLIBS variable twice. My code compiles and works if I use the makefile above. But if I deactivate the first LDLIBS entry and activate the second one, I get "ld: library not found for -llibjpeg". I assume my libjpeg.a is just not called libjpeg but bears some different name. Is there a way to find out the name of a given "library file" libsomething.a or libsomething.dyn?

    Read the article

  • strange results with /fp:fast

    - by martinus
    We have some code that looks like this:

        inline int calc_something(double x) {
            if (x > 0.0) {
                // do something
                return 1;
            } else {
                // do something else
                return 0;
            }
        }

    Unfortunately, when using the flag /fp:fast, we get calc_something(0)==1, so we are clearly taking the wrong code path. This only happens when we use the method at multiple points in our code with different parameters, so I think there is some fishy optimization going on here from the compiler (Microsoft Visual Studio 2008, SP1). Also, the above problem goes away when we change the interface to

        inline int calc_something(const double& x) {

    but I have no idea why this fixes the strange behaviour. Can anyone explain this behaviour? If I cannot understand what's going on, we will have to remove the /fp:fast switch, but this would make our application quite a bit slower.
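
    Not from the question, but a minimal sketch of one workaround sometimes suggested for this class of problem, assuming the cause is the x87 unit keeping x in a register wider than a 64-bit double under /fp:fast (an assumption, not a confirmed diagnosis of the asker's build): forcing the value through memory makes the comparison see a properly rounded double.

        // Hedged sketch: the volatile object must be written to and read back
        // from memory, so the comparison operates on a 64-bit double rather
        // than a possibly wider register value. The function name is illustrative.
        inline int calc_something_sketch(double x) {
            volatile double stored = x;
            return (stored > 0.0) ? 1 : 0;
        }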

    Read the article

  • Getting the errors for code in unopened .aspx pages

    - by Glennular
    Is there a way to check for errors in unopened *.aspx pages? For example, if you change the name of a function, Visual Studio will catch the error on the page and list it in the "Error List" only if the page is open and being validated. I guess the question could be: is there a validation option, as opposed to the compile option, to check for errors? (Yes, I know code should go into the pre-compiled code-behind pages.) How do I find out about the following without running the page through the webserver or opening the page to be validated in VS?

        <script runat="server">
            Public Sub MyFunciton()
                Undefined_FUNCTION()
            End Sub
        </script>

    Read the article

  • A general question about compilation and interpretation.

    - by wucnuc
    Hi stackoverflow, I apologize in advance for the possible stupidity of this question. However, the following has been the source of some confusion for me, and I know the people here will be able to handily clear it up. Basically, I would like to finally understand the relationship between any and all of the following terms. Some of the terms I do actually understand pretty well, but some of them are similar in my mind and I would like, once and for all, to see their relationships/distinctions laid out all at once. They are:

        compiler
        interpreter
        bytecode
        machine code
        assembler
        assembly language
        binary
        object code
        executable

    Ideally, an answer would use examples from Java and C++ and other well-known programming languages that a young-ish student like me would be familiar with. Also, if you want to throw in any other useful terms, that would be fine too :)

    Read the article

  • scala jrebel superclass change

    - by coubeatczech
    Hi, I'm using JRebel with Scala and I quite frequently have to restart the server because JRebel is unable to load a class if its superclass was changed. This happens mainly when I change anonymous functions, as I can deduce from the JRebel error description:

        Class 'mypackage.NewBook$$anonfun$2' superclass was changed from 'scala.runtime.AbstractFunction1' to 'scala.runtime.AbstractFunction2' and could not be reloaded.

    Is there any way I can design my code to avoid this? Does the Scala compiler take the anonymous functions and number them from one in the order they appear in the source code?

    Read the article

  • Tips on using GCC as a new user

    - by ultrajohn
    I am really new to GCC and I don't know how to use it. I already have a copy of the pre-compiled gcc binaries I downloaded from one of the mirror sites on the gcc website. Now, I don't know where to go from here... Please give me some tips on how to proceed. I am sorry for the rather vague question... What I want are tips on how to use GCC... I've programmed in C in the past using the TC compiler... Thanks! I really appreciate all of your suggestions... Thanks again.. :)

    Read the article

  • Error preverifying class java during build (BlackBerry)

    - by Davide Vosti
    I'm trying to build and debug a small project for BlackBerry. During the build I'm getting this error:

        Error preverifying class java ...

    I read on the net that this error could be caused by referencing multiple projects, so I tried moving every package into a single project, but the error is still there. I tried with multiple JDE versions (currently 4.7) and the Java compiler is set to 1.6. The Eclipse version is 3.4.1, as recommended by RIM's documentation. Does anyone have a clue?

    Read the article

  • Why can only constant expressions be used as case expressions in a switch statement?

    - by sinec
    Hi, what is bothering me is that I can't find any info regarding the question in the title. I found that the switch-case statement is compiled (MS VC 2008 compiler) into a bunch of cmp and je instructions:

        0041250C  mov  eax,dword ptr [i]
        0041250F  mov  dword ptr [ebp-100h],eax
        00412515  cmp  dword ptr [ebp-100h],1
        0041251C  je   wmain+52h (412532h)
        0041251E  cmp  dword ptr [ebp-100h],2
        00412525  je   wmain+5Bh (41253Bh)
        00412527  cmp  dword ptr [ebp-100h],3
        0041252E  je   wmain+64h (412544h)
        00412530  jmp  wmain+6Bh (41254Bh)

    In the above code, if the condition (i) is equal to 2, execution jumps to address 41253Bh and continues from there (the code for the 'case 2:' block starts at 41253Bh). What I don't understand is: in the case that, for instance, a function is used as the case expression, why couldn't the function be evaluated first and then its result compared with the condition? Am I missing something? Thank you in advance.
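
    Not part of the question, but a minimal C++ sketch of the constraint being asked about (names are illustrative): case labels have to be constant expressions the compiler can evaluate at compile time, which is what allows it to emit a fixed compare-and-branch sequence or a jump table.

        // 'kTwo' is a compile-time constant, so it is a valid case label; the
        // commented-out line shows what gets rejected, because a function call
        // is only known at run time.
        int classify(int i) {
            enum { kTwo = 2 };
            switch (i) {
                case 1:    return 10;
                case kTwo: return 20;
                // case some_function(): return 30;   // error: not a constant expression
                default:   return 0;
            }
        }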

    Read the article

  • How do I indicate that a class doesn't support certain operators?

    - by romeovs
    I'm writing a class that represents an ordinal scale but has no logical zero point (e.g. time). This scale should permit addition and subtraction (operator+, operator+=, ...) but not multiplication. Yet I have always felt it to be good practice that when one overloads one operator of a certain group (in this case the math operators), one should also overload all the others that belong to that group. In this case that would mean I should also overload the multiplication and division operators, because if a user can use A+B, they would probably expect to be able to use the other operators as well. Is there a method I can use to throw an error for this at compile time? The easiest approach would be simply not to overload operator*, etc., yet it would seem appropriate to give a bit more explanation than "operator* is not known for class 'time'". Or is this something that I really should not care about (RTFM, user)?
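
    Not from the question, but a minimal sketch of one way to express this, assuming a C++11 (or later) compiler is available: explicitly deleting the unwanted operators turns any use of them into a compile-time error, and the diagnostic names the deleted function. The class name is illustrative.

        class OrdinalTime {
        public:
            OrdinalTime& operator+=(const OrdinalTime& rhs);            // allowed
            OrdinalTime  operator+(const OrdinalTime& rhs) const;       // allowed
            OrdinalTime  operator*(const OrdinalTime&) const = delete;  // any use fails to compile
            OrdinalTime  operator/(const OrdinalTime&) const = delete;
        };

    In pre-C++11 code, the closest equivalent is declaring the unwanted operators private and leaving them unimplemented.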

    Read the article

  • LALR(1) or GLR on Windows - Alternatives to Bison++ / Flex++ that are current?

    - by mrjoltcola
    I have been using the same versions of bison++ (1.21-8) and flex++ (2.3.8-7) since 2002. I'm not looking for an alternative to LALR(1) or GLR at this time, just looking for the most current options. Is anyone aware of any ports of these more recent than the originals that aren't Cygwin-dependent? What are other folks using in Windows environments for C++ compiler development (besides ANTLR or Boost.Spirit)? Commercial options are OK if you have firsthand experience. I do need to compile on Linux as well.

    Read the article

  • Warning vs. error

    - by Samuel
    I had an annoying issue: I get a "Possible loss of precision" error when compiling my Java program in BlueJ (though from what I read this isn't tied to a specific IDE). I was surprised that the compiler told me there is a possible loss of precision and wouldn't let me compile/run the program. Why is this an error and not a warning saying "you might lose precision here; if you don't want that, change your code"? The program runs just fine when I drop the fractional values; it wouldn't matter, since there is no point like [143.08, 475.015] on my screen. On the other hand, when I loop through an ArrayList and inside the loop have an if clause that removes elements from the ArrayList, it runs fine: it just throws an error and fails to display the ArrayList [used for drawing circles] for a fraction of a second. That seems to me like a severe error, yet it hardly causes any trouble, even though I wouldn't want such a thing in my code at all. Where is the boundary?

    Read the article

  • Eliminating false dependencies

    - by Klaus
    Hi all, I have a quite general question regarding false dependencies. As the name implies, these are not real dependencies and can be eliminated. I am aware of techniques such as register renaming that eliminate such dependencies at the hardware level. Of course, I could already eliminate them at a "higher" level when writing assembler code that avoids false dependencies. But now I am wondering whether the compiler also provides support for keeping the number of false dependencies low, or whether it relies mostly on the hardware to eliminate them. Thanks
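
    Not part of the question, but a small C++ sketch of what a false dependency looks like at source level (all names are made up): the second write to 'tmp' conflicts with the earlier use only because the same name is reused, not because any value flows between the two computations, and renaming removes the conflict. Both a compiler's register allocator and hardware register renaming perform this kind of renaming.

        // Write-after-read on 'tmp': a false dependency that renaming removes.
        void sums(int a, int b, int c, int d, int* out1, int* out2) {
            int tmp = a + b;
            *out1 = tmp * 2;
            tmp = c + d;        // only the reuse of the name creates the dependency
            *out2 = tmp * 2;
        }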

    Read the article

  • C++ iterators & loop optimization

    - by Quantum7
    I see a lot of c++ code that looks like this:

        for( const_iterator it = list.begin(), const_iterator ite = list.end(); it != ite; ++it)

    As opposed to the more concise version:

        for( const_iterator it = list.begin(); it != list.end(); ++it)

    Will there be any difference in speed between these two conventions? Naively the first will be slightly faster since list.end() is only called once. But since the iterator is const, it seems like the compiler will pull this test out of the loop, generating equivalent assembly for both.

    Read the article

  • What are the disadvantages of targeting the JVM instead of x86?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. After posing this question, I came to doubt this decision even more. I now know some "pro" JVM arguments. The original question was: is targeting the JVM a good idea when creating a compiler for a new language? Updated question: what are the disadvantages of targeting the JVM instead of x86 on Windows?

    Read the article

  • What Should be the Structure of a C++ Project?

    - by Ell
    I have recently started learning C++ and, coming from a Ruby environment, I have found it very hard to structure a project in a way that still compiles correctly. I have been using Code::Blocks, which is brilliant, but a downside is that when I add a new header file or C++ source file, it will generate some code, and even though it is a mere 3 or 4 lines, I do not know what these lines do. First of all I would like to ask: what do these lines do?

        #ifndef TEXTGAME_H_INCLUDED
        #define TEXTGAME_H_INCLUDED

        #endif // TEXTGAME_H_INCLUDED

    My second question is: do I need to #include both the .h file and the .cpp file, and in which order? My third question is: where can I find the GNU GCC Compiler that, I believe, was packaged with Code::Blocks, and how do I use it without Code::Blocks? I would rather develop in a Notepad++ sort of way, because that is what I'm used to from Ruby, but since C++ is compiled, you may think differently (please give advice and views on that as well). Thanks in advance, ell.
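
    Not part of the question, but a short sketch of how those generated lines are normally used (the macro name comes from the question; the class inside is illustrative): they form an include guard, so whatever is declared between them is seen at most once per translation unit, even if the header ends up being #included several times.

        #ifndef TEXTGAME_H_INCLUDED
        #define TEXTGAME_H_INCLUDED

        // Declarations placed between the guard lines are only processed the
        // first time this header is included in a given .cpp file.
        class TextGame {
        public:
            void run();
        };

        #endif // TEXTGAME_H_INCLUDED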

    Read the article

  • Is call to function object inlined?

    - by dehmann
    In the following code, Foo::add calls a function via a function object:

        struct Plus {
            inline int operator()(int x, int y) const {
                return x + y;
            }
        };

        template<class Fct>
        struct Foo {
            Fct fct;
            Foo(Fct f) : fct(f) {}
            inline int add(int x, int y) {
                return fct(x,y); // same efficiency adding directly?
            }
        };

    Is this the same efficiency as calling x+y directly in Foo::add? In other words, does the compiler typically directly replace fct(x,y) with the actual call, inlining the code, when compiling with optimizations enabled?

    Read the article

  • Why doesn't a 32bit .deb package install on 64bit Ubuntu?

    - by codebox_rob
    My .deb package, built on 32-bit Ubuntu and containing executables compiled with gcc, won't install on the 64-bit version of the OS (the error message says 'Wrong architecture i386'). This is confusing to me because I thought that, in general, 32-bit software worked on 64-bit hardware but not vice versa. Will it be possible for me to produce a .deb file that I can install on a 64-bit OS using my 32-bit machine? Is it just a matter of using the appropriate compiler flags to produce the executables (and if so, what are they), or is the .deb file itself somehow specific to one processor architecture?

    Read the article

  • Delphi 2010 - Why can't I declare an abstract method with a generic type parameter?

    - by James
    I am trying to do the following in Delphi 2010:

        TDataConverter = class abstract
        public
          function Convert<T>(const AData: T): string; virtual; abstract;
        end;

    However, I keep getting the following compiler error:

        E2533 Virtual, dynamic and message methods cannot have type parameters

    I don't quite understand the reason why I can't do this. I can do this in C#, e.g.:

        public abstract class DataConverter
        {
            public abstract string Convert<T>(T data);
        }

    Anyone know the reasoning behind this?

    Read the article

  • Beginner C++ Question

    - by Donal Rafferty
    I have followed the code example here (toupper c++ example) and implemented it in my own code as follows:

        void CharString::MakeUpper()
        {
            char* str[strlen(m_pString)];
            int i=0;
            str[strlen(m_pString)]=m_pString;
            char* c;
            while (str[i])
            {
                c=str[i];
                putchar (toupper(c));
                i++;
            }
        }

    But this gives me the following compiler error:

        CharString.cpp: In member function 'void CharString::MakeUpper()':
        CharString.cpp:276: error: invalid conversion from 'char*' to 'int'
        CharString.cpp:276: error: initializing argument 1 of 'int toupper(int)'
        CharString.cpp: In member function 'void CharString::MakeLower()':

    This is line 276:

        putchar (toupper(c));

    I understand that toupper is looking for an int as a parameter and returns an int as well; is that the problem? If so, how does the example work?
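
    Not from the question, but a hedged sketch of what the loop was presumably meant to do, assuming m_pString is an ordinary NUL-terminated char*: toupper takes and returns an int holding a character value, so the character itself (not a pointer) has to be passed. The free function below is illustrative, not the asker's class method.

        #include <cctype>
        #include <cstdio>

        void make_upper_sketch(const char* s)
        {
            for (; *s != '\0'; ++s) {
                // Cast to unsigned char first so that negative char values do
                // not invoke undefined behaviour in toupper.
                std::putchar(std::toupper(static_cast<unsigned char>(*s)));
            }
        }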

    Read the article

  • g++ no matching function call error

    - by gufftan
    I've got a compiler error but I can't figure out why. The .hpp:

        #ifndef _CGERADE_HPP
        #define _CGERADE_HPP

        #include "CVektor.hpp"
        #include <string>

        class CGerade
        {
        protected:
            CVektor o, rv;

        public:
            CGerade(CVektor n_o, CVektor n_rv);
            CVektor getPoint(float t);
            string toString();
        };

    The .cpp:

        #include "CGerade.hpp"

        CGerade::CGerade(CVektor n_o, CVektor n_rv)
        {
            o = n_o;
            rv = n_rv.getUnitVector();
        }

    The error message:

        CGerade.cpp:10: error: no matching function for call to ‘CVektor::CVektor()’
        CVektor.hpp:28: note: candidates are: CVektor::CVektor(float, float, float)
        CVektor.hpp:26: note:                 CVektor::CVektor(bool, float, float, float)
        CVektor.hpp:16: note:                 CVektor::CVektor(const CVektor&)
        CGerade.cpp:10: error: no matching function for call to ‘CVektor::CVektor()’
        CVektor.hpp:28: note: candidates are: CVektor::CVektor(float, float, float)
        CVektor.hpp:26: note:                 CVektor::CVektor(bool, float, float, float)
        CVektor.hpp:16: note:                 CVektor::CVektor(const CVektor&)
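
    Not part of the question, but a hedged sketch of the usual way around this kind of error, assuming CVektor intentionally has no parameterless constructor: initialising the members in the constructor's initialiser list copy-constructs them directly, so no CVektor::CVektor() is ever needed. The sketch reuses the question's own headers.

        #include "CGerade.hpp"

        CGerade::CGerade(CVektor n_o, CVektor n_rv)
            : o(n_o),                      // copy-constructed, not default-constructed then assigned
              rv(n_rv.getUnitVector())
        {
        }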

    Read the article

  • What types of conditions can be used for conditional compilation in C++?

    - by user1002288
    This is an exam question for C++:

        Which of the following statements accurately describe the condition that can be used for conditional compilation in C++?

        A. The condition can depend on the value of environment variables.
        B. The condition can depend on the value of any const variables.
        C. The condition can depend on the value of program variables.
        D. The condition can use the sizeof() operator to make decision about compiler-dependent operations based on the size of standard data type.
        E. The condition must evaluate to either a 0 or 1 during preprocessing.

    I think the answer is E. Is this correct?
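
    Not part of the exam question, but a small illustration of the mechanism being tested (the macro and its value are made up): #if conditions are evaluated by the preprocessor before compilation, so they can only use preprocessor-visible values such as macros, including ones injected from the build command line with -D; they cannot read ordinary program variables.

        // FEATURE_LEVEL could equally be supplied as: g++ -DFEATURE_LEVEL=2 ...
        #ifndef FEATURE_LEVEL
        #define FEATURE_LEVEL 2
        #endif

        #include <cstdio>

        #if FEATURE_LEVEL >= 2
        static const char* const build_mode = "full";   // kept by the preprocessor
        #else
        static const char* const build_mode = "lite";   // discarded before compilation
        #endif

        int main() { std::puts(build_mode); }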

    Read the article

  • variable scope in statement blocks

    - by fearofawhackplanet
        for (int i = 0; i < 10; i++)
        {
            Foo();
        }
        int i = 10; // error, 'i' already exists

        ----------------------------------------

        for (int i = 0; i < 10; i++)
        {
            Foo();
        }
        i = 10; // error, 'i' doesn't exist

    By my understanding of scope, the first example should be fine. The fact that neither of them is allowed seems even more odd. Surely 'i' is either in scope or not. Is there something non-obvious about scope that I don't understand which means the compiler genuinely can't resolve this? Or is it just a case of nanny-state compilerism?

    Read the article

  • Are unspecified and undefined behavior required to be consistent between compiles?

    - by sharptooth
    Let's pretend my program contains a specific construct that the C++ Standard states to be unspecified behavior. This basically means the implementation has to do something reasonable but is allowed not to document it. But is the implementation required to produce the same behavior every time it compiles that specific construct, or is it allowed to produce different behavior in different compiles? What about undefined behavior? Let's pretend my program contains a construct that is UB according to the Standard. The implementation is allowed to exhibit any behavior. But can this behavior differ between compiles of the same program on the same compiler with the same settings in the same environment? In other words, if I dereference a null pointer on line 78 in file X.cpp and the implementation formats the drive in that case, does that mean it will do the same after the program is recompiled?
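
    Not from the question, but a minimal illustration of the kind of construct being discussed (the function names are made up): the order in which the two operands below are evaluated is unspecified, and nothing in the Standard obliges two separate compilations of this file to pick the same order.

        int read_a();   // hypothetical functions with observable side effects
        int read_b();

        int combine()
        {
            // Whether read_a() or read_b() runs first is unspecified.
            return read_a() - read_b();
        }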

    Read the article

  • How does an optimizing compiler react to a program with nested loops?

    - by D.Singh
    Say you have a bunch of nested loops:

        public void testMethod() {
            for (int i = 0; i < 1203; i++) {
                // some computation
                for (int k = 2; k < 123; k++) {
                    // some computation
                    for (int j = 2; j < 12312; j++) {
                        // some computation
                        for (int l = 2; l < 123123; l++) {
                            // some computation
                            for (int p = 2; p < 12312; p++) {
                                // some computation
                            }
                        }
                    }
                }
            }
        }

    When the above code reaches the stage where the compiler will try to optimize it (I believe that's when the intermediate language is converted to machine code?), what will the compiler try to do? Is there any significant optimization that will take place? I understand that the optimizer will break up the loops by means of loop fission. But this is only per loop, isn't it? What I mean with my question is: will it take any action based specifically on seeing the nested loops? Or will it just optimize the loops one by one? If the Java VM complicates the explanation, then please just assume that it's C or C++ code.

    Read the article
