Search Results

Search found 4423 results on 177 pages for 'compiler'.

Page 5/177

  • Java assert nasty side-effect - compiler bug?

    - by Alex
    This:

        public class test {
            public static void main(String[] args) {
                Object o = null;
                assert o != null;
                if (o != null) System.out.println("o != null");
            }
        }

    prints out "o != null" under both JDK 1.5_22 and 1.6_18. Compiler bug? Commenting out the assert fixes it. The bytecode appears to jump directly to the print statement when assertions are disabled:

        public static main(String[]) : void
        L0
          LINENUMBER 5 L0
          ACONST_NULL
          ASTORE 1
        L1
          LINENUMBER 6 L1
          GETSTATIC test.$assertionsDisabled : boolean
          IFNE L2
          ALOAD 1: o
          IFNONNULL L2
          NEW AssertionError
          DUP
          INVOKESPECIAL AssertionError.<init>() : void
          ATHROW
        L2
          LINENUMBER 8 L2
          GETSTATIC System.out : PrintStream
          LDC "o != null"
          INVOKEVIRTUAL PrintStream.println(String) : void
        L3
          LINENUMBER 9 L3
          RETURN
        L4
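    For anyone reproducing this: assertions are off by default in the JVM and are enabled with -ea, so (a quick sketch using the class above) the two runs below should show both behaviours described by the bytecode:

        javac test.java
        java test        # assertions disabled (the default): execution reaches the println
        java -ea test    # assertions enabled: the assert on the null reference throws AssertionError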

    Read the article

  • Delphi Compiler Directive to Evaluate Arguments in Reverse

    - by Peter Turner
    I was really impressed with this Delphi two-liner using the IfThen function from Math.pas. However, it evaluates DB.ReturnFieldI first, which is unfortunate because I need to call DB.First to get the first record.

        DB.RunQuery('select awesomedata1 from awesometable where awesometableid = "great"');
        result := IfThen(DB.First = 0, DB.ReturnFieldI('awesomedata1'));

    Obviously this isn't such a big deal, as I could make it work with five robust lines. But all I need for this to work is for Delphi to evaluate DB.First first and DB.ReturnFieldI second. I don't want to change Math.pas, and I don't think this warrants making an overloaded IfThen because there are something like 16 IfThen overloads. Just let me know what the compiler directive is, if there is an even better way to do this, or if there is no way to do this and anyone whose procedure is to call DB.First and blindly retrieve the first thing he finds is not a real programmer.
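    A minimal sketch of a workaround (assuming the DB methods behave as described above): Delphi does not guarantee argument evaluation order, and IfThen evaluates both of its arguments in any case, so capture DB.First in a local before the call:

        var
          FirstResult: Integer;
        begin
          DB.RunQuery('select awesomedata1 from awesometable where awesometableid = "great"');
          FirstResult := DB.First;  // forced to run before ReturnFieldI is evaluated
          Result := IfThen(FirstResult = 0, DB.ReturnFieldI('awesomedata1'));
        end;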

    Read the article

  • default maven compiler setting

    - by Jeeyoung Kim
    Hello Maven gurus, right now I'm writing a small Java application on my own, with a few Maven pom.xml files. I want all my Maven modules to compile with JDK 1.6, and I can't find a good way to do it without manually setting it in every single POM - I'm sick of copy-and-pasting

        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>

    into every single pom.xml file I generate. Is there a simpler way to resolve this issue?
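    One common approach (a sketch, assuming the modules share a parent POM): set the compiler properties once in the parent and let every child inherit them, since the maven-compiler-plugin reads its source/target defaults from these properties:

        <!-- parent pom.xml -->
        <properties>
          <maven.compiler.source>1.6</maven.compiler.source>
          <maven.compiler.target>1.6</maven.compiler.target>
        </properties>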

    Read the article

  • C# Compiler should give warning but doesn't?

    - by Cristi Diaconescu
    Someone on my team tried fixing a 'variable not used' warning in an empty catch clause. try { ... } catch (Exception ex) { } - gives a warning about ex not being used. So far, so good. The fix was something like this: try { ... } catch (Exception ex) { string s = ex.Message; } Seeing this, I thought "Just great, so now the compiler will complain about s not being used." But it doesn't! There are no warnings on that piece of code and I can't figure out why. Any ideas? PS. I know catch-all clauses that mute exceptions are a bad thing, but that's a different topic. I also know the initial warning is better removed by doing something like this, that's not the point either. try { ... } catch (Exception) { } or try { ... } catch { }
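    A plausible explanation (hedged; not verified against the compiler sources): the "assigned but never used" warning (CS0219) is only reported when the initializer is a compile-time constant, and ex.Message is a property access that could have side effects, so the compiler keeps quiet. A minimal sketch of the difference:

        try { }
        catch (Exception ex)
        {
            string a = "literal";   // warning CS0219: variable is assigned but its value is never used
            string b = ex.Message;  // no warning: the initializer is not a constant expression
        }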

    Read the article

  • Code crashing compiler: main() returning a struct instead of an int

    - by AndrejaKo
    Hi! I'm experimenting with a piece of C code. Can anyone tell me why VC 9.0 with SP1 is crashing for me? Oh, and the code is meant to be an example used in a discussion of why something like void main (void) is evil.

        struct foo { int i; double d; }
        main (double argc, struct foo argv)
        {
            struct foo a;
            a.d = 0;
            a.i = 0;
            return a.i;
        }

    If I put return a; instead, the compiler doesn't crash.

    Read the article

  • How do I use compiler intrinsic __fmul_?

    - by Eric Thoma
    I am writing a massively parallel GPU application. I have been optimizing it by hand. I received a 20% performance increase with __fdividef(x, y), and according to the CUDA C Programming Guide (section C.2.1), using similar functions for multiplication and addition is also beneficial. The function is stated as "__fmul_[rn,rz,ru,rd]", whereas __fdividef(x,y) was not stated with the arguments in brackets. I was wondering, what are those brackets? If I run the simple code int t = __fmul_(5,4); I get a compiler error about how __fmul_ is undefined. I have the CUDA runtime included, so I don't think it is a setup thing; rather it is something to do with those square brackets. How do I correctly use this function? Thank you.
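    If I read the guide's notation correctly, the square brackets list the required rounding-mode suffix, so there is no plain __fmul_; you call one of __fmul_rn, __fmul_rz, __fmul_ru or __fmul_rd (single-precision device functions, so pass floats). A sketch:

        __global__ void scale(float *out)
        {
            /* _rn = round to nearest even; _rz, _ru and _rd are the other rounding modes */
            float t = __fmul_rn(5.0f, 4.0f);
            *out = t;
        }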

    Read the article

  • C++ -malign-double compiler flag

    - by Martin
    I need some help with compiler flags in C++. I'm using a library that is a port to Linux from Windows and that has to be compiled with the -malign-double flag, "for Win32 compatibility". Is my understanding correct that this means I absolutely have to compile my own code with this flag as well? How about other .so shared libraries, do they have to be recompiled with this flag too? If so, is there any way around this? I'm a Linux (and C++) newbie, so even though I tried to recompile all the libraries I'm using for my project, it was just too complicated to recursively find the source for all the libraries and the libraries they depend on, and recompile everything.
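    To illustrate why mixing the flag is risky (a rough sketch; the exact numbers assume the 32-bit x86 ABI): -malign-double changes how doubles are aligned inside structs, so any struct containing a double that crosses a boundary between code built with and without the flag can change size and field offsets:

        struct Sample {
            int    i;
            double d;   /* offset 4 without -malign-double, offset 8 with it */
        };
        /* sizeof(struct Sample) is typically 12 without the flag and 16 with it, so the
           two sides of the boundary disagree about the layout of the same type. */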

    Read the article

  • Possible compiler bug in MSVC12 (VS2013) with designated initializer

    - by diapir
    Using VS2013 Update 2, I've stumbled on a strange error message:

        // test.c
        int main(void)
        {
            struct foo { int i; float f; };
            struct bar { unsigned u; struct foo foo; double d; };

            struct foo some_foo = { .i = 1, .f = 2.0 };

            struct bar some_bar = {
                .u = 3,
                // error C2440 : 'initializing' : cannot convert from 'foo' to 'int'
                .foo = some_foo,
                .d = 4.0
            };

            // Works fine
            some_bar.foo = some_foo;

            return 0;
        }

    Both GCC and Clang accept it. Am I missing something, or does this piece of code expose a compiler bug? EDIT: Duplicate: Initializing struct within another struct using designated initializer causes compile error in Visual Studio 2013

    Read the article

  • C++0x optimizing compiler quality

    - by aaa
    Hello. I do some heavy number crunching, and for me floating-point performance is very important. I like the performance of the Intel compiler very much and am quite content with the quality of the assembly it produces. I am thinking of trying C++0x at some point, mainly for the sugar parts like auto, initializer lists, etc., but also lambdas. At this point I use those features in regular C++ by means of Boost. How good is the assembly code that compilers generate for C++0x, specifically the Intel and gcc compilers? Do they produce SSE code? Is performance comparable to regular C++? Are there any benchmarks? My Google search did not reveal much. Thank you.
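    As a concrete example of the kind of sugar in question (a sketch, not a benchmark): a lambda handed to a standard algorithm is just an anonymous function object, so after inlining a decent optimizer sees the same reduction loop it would see written by hand:

        #include <numeric>
        #include <vector>

        double sum_of_squares(const std::vector<double>& v)
        {
            // The lambda compiles to an ordinary functor; nothing here should prevent
            // the optimizer from treating it like the equivalent explicit loop.
            return std::accumulate(v.begin(), v.end(), 0.0,
                                   [](double acc, double x) { return acc + x * x; });
        }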

    Read the article

  • C++ performance, optimizing compiler, empty function in .cpp

    - by Dodo
    I have a very basic class, call it Basic, used in nearly all other files in a bigger project. In some cases there needs to be debug output, but in release mode this should not be enabled and should be a NOOP. Currently there is a define in the header which switches a macro on or off, depending on the setting. So this is definitely a NOOP when switched off. I'm wondering whether, with the following code, a compiler (MSVS / gcc) is able to optimize out the function call, so that it is again a NOOP. (By doing that, the switch could be in the .cpp and switching would be much faster, compile/link time wise.)

    --Header--

        void printDebug(const Basic* p);

        class Basic {
            Basic() {
                simpleSetupCode;
                // this should be a NOOP in release,
                // but constructor could be inlined
                printDebug(this);
            }
        };

    --Source--

        // PRINT_DEBUG defined somewhere else or here
        #if PRINT_DEBUG
        void printDebug(const Basic* p) {
            // Lengthy debug print
        }
        #else
        void printDebug(const Basic* p) {}
        #endif
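    In general, with the empty body hidden in the .cpp the compiler cannot remove the call without link-time optimization (-flto on gcc, /GL plus /LTCG on MSVC), because at the call site it only sees the declaration. A common compromise (a sketch under that assumption) keeps the release no-op visible in the header so calls compile away, while only the real implementation lives in the .cpp:

        // Header
        #if PRINT_DEBUG
        void printDebug(const Basic* p);          // lengthy version defined in the .cpp
        #else
        inline void printDebug(const Basic*) {}   // empty inline body: calls optimize to nothing
        #endif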

    Read the article

  • Compiler: Translation to assembly

    - by sub
    I've written an interpreter for my experimental language and now I want to move on and write a small compiler for it. It will probably take the source, go through the same steps as the interpreter (tokenizer, parser) and then translate the source to assembly. Now my questions: Can I expect that every command in my language can be translated 1:1 to a bunch of assembly instructions? What I mean is whether I will have to transform the whole input program as a unit, or whether it can simply be translated to assembly line by line. Which assembler should I target as the output format?
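    To give a feel for the mapping (a hand-written sketch in NASM-style x86, not generated output): a single source statement usually expands to a short, fixed sequence of instructions, so translation can proceed construct by construct, although such naive output leaves plenty of room for later optimization:

        ; x = a + b * c;   with 32-bit integer variables in memory
        mov  eax, [b]
        imul eax, [c]
        add  eax, [a]
        mov  [x], eax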

    Read the article

  • C# logic order and compiler behavior

    - by Terrapin
    In C# (and feel free to answer for other languages), in what order does the runtime evaluate a logic statement? Example:

        DataTable myDt = new DataTable();
        if (myDt != null && myDt.Rows.Count > 0)
        {
            //do some stuff with myDt
        }

    Which expression does the runtime evaluate first - myDt != null or myDt.Rows.Count > 0? Is there a case where the compiler would ever evaluate the statement backwards? Perhaps when an "OR" operator is involved?
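    For what it's worth, && and || in C# are short-circuit operators: the left operand is evaluated first, and the right operand is skipped whenever the left one already decides the result. A small sketch (assumes System.Data for DataTable):

        DataTable myDt = null;
        // myDt != null is evaluated first; it is false, so myDt.Rows.Count is never
        // touched and no NullReferenceException is thrown.
        if (myDt != null && myDt.Rows.Count > 0) { }

        // With ||, the right operand is skipped when the left one is already true.
        bool empty = (myDt == null) || (myDt.Rows.Count == 0);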

    Read the article

  • Source-to-source compiler framework wanted

    - by cheungcc_2000
    Dear all, I used to use OpenC++ (http://opencxx.sourceforge.net/opencxx/html/overview.html) to perform code generation like this:

    Source:

        class MyKeyword A {
        public:
            void myMethod(inarg double x, inarg const std::vector<int>& y, outarg double& z);
        };

    Generated:

        class A {
        public:
            void myMethod(const string& x, double& y);

            // generated method below:
            void _myMethod(const string& serializedInput, string& serializedOutput)
            {
                double x;
                std::vector<int> y;
                // deserialize x and y from serializedInput
                double z;
                myMethod(x, y, z);
            }
        };

    This kind of code generation directly matches the use case in the OpenC++ tutorial (http://www.csg.is.titech.ac.jp/~chiba/opencxx/tutorial.pdf), by writing a meta-level program that handles "MyKeyword", "inarg" and "outarg" and performs the code generation. However, OpenC++ is somewhat out of date and inactive now; my code generator only works with g++ 3.2 and triggers errors when parsing header files from newer g++ versions. I have looked at VivaCore, but it does not provide the infrastructure for compiling a meta-level program. I'm also looking at LLVM, but I cannot find documentation that walks me through my source-to-source compilation use case. I'm also aware of the ROSE compiler framework, but I'm not sure whether it suits my usage, whether its proprietary C++ front-end binary can be used in a commercial product, or whether a Windows version is available. Any comments and pointers to specific tutorials/papers/documentation are much appreciated.

    Read the article

  • Strange VS2005 compile errors: unable to locate resource file (because the compiler keeps deleting it)

    - by Velika
    I am getting the following error in a very simple class library:

        Error 1  Unable to copy file "obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources" to "obj\Debug\SMIT.SysAdmin.BusinessLayer.SMIT.SysAdmin.BusinessLayer.Resources.resources". Could not find file 'obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources'.  SMIT.SysAdmin.BusinessLayer

    Going to the Project Properties > Resources tab, I see that I defined no resources. Still, I tried to delete the resource file and recreate it by going to the Resources tab. When I recompile, I still get the same error. Why is it even looking for a resource file? I define no resources on the project properties tab and added no new resource file items. Any suggestions of things to try? Update: I found the missing file in an old backup. I copied it to the location where the compiler expected it, and then successfully recompiled the project that previously had compile-time errors. However, when I rebuild the entire solution, it deletes the file that I previously restored and I'm back where I started. My solution contains several projects (maybe 10 or so). Could VS 2005 be having a problem determining dependencies and the proper order in which to compile these projects?

    Read the article

  • Specific compiler flags for specific files in Xcode

    - by Jasarien
    I've been tasked to work on a project that has some confusing attributes. The project won't compile for the iPhone Simulator and the iPhone device with the same compile settings. I think it has to do with needing to be compiled specifically for x86 or armv6/7 depending on the target platform. The project's build settings, when viewed in Xcode's Build Settings view, don't let me set specific compiler flags for specific files. However, the previous developer that worked on this project somehow declared the line:

        CE7FEB5710F09234004DE356 /* MyFile.m in Sources */ = {isa = PBXBuildFile; fileRef = CE7FEB5510F09234004DE356 /* MyFile.m */; settings = {COMPILER_FLAGS = "-fasm-blocks -marm -mfpu=neon"; }; };

    Is there any way to do this without editing the project file by hand? I know that editing the project file can break it completely, so I'd rather not do that, as I obviously don't know as much as the previous developer. So to clarify, the question is: the build fails when compiling for the simulator unless I remove the -fasm-blocks flag, and it fails when compiling for the device unless I add the -fasm-blocks flag. Is there a way to set this flag per file without editing the project file by hand?
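    Aside from the per-file route in the Xcode UI (the Compiler Flags field for a file in the target's Compile Sources build phase, in newer Xcode versions), if the flags really only differ by platform rather than by file, one hedged alternative is a conditional build setting in an .xcconfig, assuming the usual [sdk=...] conditionals apply to OTHER_CFLAGS here:

        // sketch of an .xcconfig attached to the target
        OTHER_CFLAGS[sdk=iphoneos*]        = -fasm-blocks -marm -mfpu=neon
        OTHER_CFLAGS[sdk=iphonesimulator*] =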

    Read the article

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't know whether this is currently possible, or whether it's even a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try
        {
            instance = getValue();
        }
        catch (MyException ex)
        {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine - I assume because the compiler knows that the exception will prevent execution from reaching a point where instance would be used uninitialized. So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution to continue after it returns? Maybe that could be in the form of a method attribute. Would this be useful, or an unnecessary complexity for the compiler?
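    The usual workaround (a sketch; definite-assignment analysis only understands language constructs such as throw, not the runtime behaviour of a called method like Assert.Fail) is to give the variable a dummy initial value:

        MyClass instance = null;   // satisfies definite assignment; never observed if Assert.Fail throws
        try
        {
            instance = getValue();
        }
        catch (MyException)
        {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff();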

    Read the article

  • How do I determine what is *really* causing your compiler error

    - by ML
    Hi all, I am porting a very large code base, and I am having more difficulty with the older code. For example, this causes a compiler error:

        inline CP_M_ReferenceCounted *
        FrAssignRef(CP_M_ReferenceCounted * & to, CP_M_ReferenceCounted * from)
        {
            if (from) from->AddReference();
            if (to) to->RemoveReference();
            to = from;
            return to;
        }

    The error is: error: expected initializer before '*' token. How do I work out what this means? I looked up inline member functions to be sure I understood, and I don't think the inlining is the cause, but I am not sure what is. Another example:

        template <class eachClass>
        eachClass FrReferenceIfClass(FxRC * ptr)
        {
            eachClass getObject = dynamic_cast<eachClass>(ptr);
            if (getObject) getObject->AddReference();
            return getObject;
        }

    The error is: error: template declaration of 'eachClass FrReferenceIfClass'. That is all. How do I work out what this means? I am admittedly rusty with templates.
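    For what it's worth (a hedged guess from the error wording, not a certain diagnosis): both messages are typical of a type name that simply isn't declared at the point of use - the parser hits CP_M_ReferenceCounted or FxRC, doesn't recognise it as a type, and gives up with an unhelpful message. A first thing to check is that the relevant declarations precede the code:

        // hypothetical header names - whichever headers actually declare these classes
        #include "CP_M_ReferenceCounted.h"
        #include "FxRC.h"
        // (a bare forward declaration such as "class FxRC;" is enough for the parse,
        //  but calling members like AddReference() still needs the full definition)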

    Read the article

  • Should a new language compiler target the JVM?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. So that's the reason behind my doubt, and thus this question: Is targeting the JVM a good idea when creating a compiler for a new language? Or should I stick with x86? I have experience in generating JVM bytecode. Are there any workarounds for the JVM's GC? The language has deterministic implicit memory management. How do I produce JIT-friendly bytecode, such that it gets the highest speedup? Is it similar to compiling for IA-32, such as the 4-1-1 micro-op pattern on the Pentium? I can imagine some advantages (please correct me if I'm wrong): JVM bytecode is easier than x86. Just as x86 code communicates with Windows, JVM code communicates with the Java Foundation Classes, to provide I/O, threading, GUI, etc. Implementing "lightweight" threads - I've seen a very clever implementation of this at http://www.malhar.net/sriram/kilim/. Most advantages of the Java Runtime (portability, etc.). The disadvantages, as I imagine them, are: Less freedom? On x86 it'll be easier to create low-level constructs, while the JVM is a higher-level (more abstract) processor. Most disadvantages of the Java Runtime (no native dynamic typing, etc.).

    Read the article

  • io operations in compilers

    - by Aastha
    How are I/O operation constructs handled by a compiler? Like the RTL mapping for memory-related operations, which is done in a compiler at target code generation time - where and how exactly is the same done for I/O operations? How do the approaches differ for processors supporting memory-mapped I/O versus port-mapped I/O? Are there any optimizations done for I/O operations in compilers?
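    A rough illustration of the difference (a hedged sketch; the register address is made up): with memory-mapped I/O the compiler has nothing special to do, because a device register access is just a volatile load or store to a fixed address, whereas port-mapped I/O on x86 needs a dedicated instruction, usually reached through inline assembly or a compiler intrinsic:

        /* MMIO: an ordinary volatile store; the compiler emits a normal memory write. */
        #define UART_TX (*(volatile unsigned char *)0x4000C000u)   /* hypothetical address */
        void put_mmio(char c) { UART_TX = (unsigned char)c; }

        /* Port-mapped I/O (x86): needs the out instruction, here via GCC inline asm. */
        static inline void outb(unsigned short port, unsigned char value)
        {
            __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
        }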

    Read the article

  • Prioritize compiler functionality/tasks, when designing a new language

    - by Mahdi
    Well, this may be a hard question to ask, and I expect a couple of down votes; however, I'm really interested in your ideas and recommendations. :) I've already made a very simple compiler with limited functionality. Now I'm working on it again to make it more like a real-world compiler. I definitely need to start over, because I have much more experience and many more ideas in this area than a few years ago. So, I want to know, starting again from the very first step: which tasks/features of the new compiler should I implement first, and which have lower priority than others? For example, I'd say I would first decide on the object-oriented structure for the new language, but you might say: hey, just go for a compiler that can define a variable, and when you've finished that, then start thinking about OOP design... I'd also prefer to hear the pros and cons of your suggestions. Actually I'd like to work from the bottom up, adding the simplest tasks first and the more complex ones later, but I'm totally open to new ideas and would really appreciate them. Also, please consider that I'm thinking about the design concepts. I expect answers like: Priority from highest to lowest: variables, because ...; functions, because ...; loops, because ...; ... Not: define a syntax for your new language, and start parsing your source code ...

    Read the article

  • Reliance on the compiler

    - by koan
    I've been programming in C and C++ for some time, although I would say I'm far from being an expert. For some time I've been using various strategies to develop my code, such as unit tests, test-driven design, code reviews and so on. When I wrote my first programs in BASIC, I typed in long listings before finding they would not run, and they were a nightmare to debug. So I learnt to write a small bit and then test it. These days I often find myself repeatedly writing a small bit of code and then using the compiler to find all the mistakes. That's OK if it picks up a typo, but when you start adjusting parameter types etc. just to make it compile, you can screw up the design. It also seems that the compiler is creeping into the design process when it should only be used for checking syntax. There's a danger here of over-reliance on the compiler to make my programs better. Are there better strategies than this? I vaguely remember, some time ago, an article about a company developing a type of C compiler where an extra header file also specified the prototypes. The idea was that inconsistencies in the API definition would be easier to catch if you had to define it twice in different ways.

    Read the article

  • g++ C++0x enum class Compiler Warnings

    - by Travis G
    I've been refactoring my horrible mess of C++ type-safe pseudo-enums to the new C++0x type-safe enums because they're way more readable. Anyway, I use them in exported classes, so I explicitly mark them to be exported:

        enum class __attribute__((visibility("default"))) MyEnum : unsigned int
        {
            One = 1,
            Two = 2
        };

    Compiling this with g++ yields the following warning:

        type attributes ignored after type is already defined

    This seems very strange, since, as far as I know, that warning is meant to prevent actual mistakes like:

        class __attribute__((visibility("default"))) MyClass { };
        class __attribute__((visibility("hidden"))) MyClass;

    Of course, I'm clearly not doing that, since I have only marked the visibility attributes at the definition of the enum class and I'm not re-defining or declaring it anywhere else (I can duplicate this error with a single file). Ultimately, I can't make this bit of code actually cause a problem, save for the fact that, if I change a value and re-compile the consumer without re-compiling the shared library, the consumer passes the new values and the shared library has no idea what to do with them (although I wouldn't expect that to work in the first place). Am I being way too pedantic? Can this be safely ignored? I suspect so, but at the same time, having this warning prevents me from compiling with -Werror, which makes me uncomfortable. I would really like to see this problem go away.
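    If the attribute really is harmless here, one way to keep -Werror while this remains unresolved (a sketch; it assumes the message is governed by -Wattributes, as its wording suggests) is to silence just that diagnostic around the definition:

        #pragma GCC diagnostic push
        #pragma GCC diagnostic ignored "-Wattributes"
        enum class __attribute__((visibility("default"))) MyEnum : unsigned int
        {
            One = 1,
            Two = 2
        };
        #pragma GCC diagnostic pop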

    Read the article

  • MSBuild 4 fails to build VS2008 csproj due to 1 compiler warning

    - by David White
    We have a VS2008 CS DLL project targeting .NET 3.5. It builds successfully on our CI server when using MSBuild 3.5. When CI is upgraded to use MSBuild 4.0, the same project fails to build, due to 1 warning message: c:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(1418,9): warning MSB3283: Cannot find wrapper assembly for type library "ADODB". The warning does not occur with MSBuild 3.5, and I'm surprised that it results in Build FAILED. We do not have the project set to treat warnings as errors. All our other projects build successfully with either version of MSBuild.

    Read the article

  • CSharpCodeProvider doesn't return compiler warnings when there are no errors

    - by Sandor Drieënhuizen
    I'm using the CSharpCodeProvider class to compile a C# script which I use as a DSL in my application. When there are warnings but no errors, the Errors property of the resulting CompilerResults instance contains no items. But when I introduce an error, the warnings suddenly get listed in the Errors property as well.

        string script = @"
            using System;
            using System; // generate a warning

            namespace MyNamespace
            {
                public class MyClass
                {
                    public void MyMethod()
                    {
                        // uncomment the next statement to generate an error
                        //intx = 0;
                    }
                }
            }";

        CSharpCodeProvider provider = new CSharpCodeProvider(
            new Dictionary<string, string>() { { "CompilerVersion", "v4.0" } });

        CompilerParameters compilerParameters = new CompilerParameters();
        compilerParameters.GenerateExecutable = false;
        compilerParameters.GenerateInMemory = true;

        CompilerResults results = provider.CompileAssemblyFromSource(
            compilerParameters, script);

        foreach (CompilerError error in results.Errors)
        {
            Console.Write(error.IsWarning ? "Warning: " : "Error: ");
            Console.WriteLine(error.ErrorText);
        }

    So how do I get hold of the warnings when there are no errors? By the way, I don't want to set TreatWarningsAsErrors to true.

    Read the article
