Search Results

Search found 5086 results on 204 pages for 'compiler constants'.

  • Pro*C in Oracle XE

    - by srandpersonia
    I downloaded the free express edition of Oracle, Oracle XE. I couldn't find the Pro*C compiler in this edition. I read somewhere that the Oracle 9i client has Pro*C, so I presumed that the Oracle client for 10g XE would have it too and downloaded it, but to my disappointment I can't find it there either. :( Is there a way to download the older Oracle 9i and use it to connect to 10g XE without any compatibility problems? Or is it possible to download the Pro*C compiler alone? I don't want to download the standard editions as they are too large (2 GB). Thanks.

  • Compiler Errors...it ran yesterday!?

    - by howdytest
    This is a pre-existing Java project being run in Eclipse 3.5.2 32-bit. Day 1: Install the Java SE 6 Update 20 JDK. Experience a crash in Eclipse. Install Java 5. Same problem (uninstall Java 5). Re-install Java 6. Install Eclipse 3.3.1. Install Eclipse 3.5.2 32-bit. No problems. Run Eclipse 3.5.2 64-bit. No problems. Set up the project, configure, and run. No problems. Day 2: Load Eclipse to start a new project. The previous project now has 940 errors; the error type is "Java Problem". The project ran 100% without a problem on Day 1. The only thing that happened between Day 1 and Day 2 was restarting my computer. I just tried to recreate the project, step by step, and am still getting the same errors. I know it's not the code -- it was working. Not to mention that it's an open-source project; such a problem would be documented. I'm thinking something is wrong with my Java install, or perhaps it's a 32-bit/64-bit problem. I'm running Win7 64-bit. So before formatting my Windows partition, I thought I'd throw the problem your way to see if anyone knows what's going on. Thanks.

  • From interpreted to native code: compiler support for "dynamic" languages

    - by Daniel
    First, I am aware that "dynamic languages" is a term used mainly by a vendor; I am using it just as a container word for languages like Perl (a favorite of mine), Python, Tcl, Ruby, PHP and so on. They are interpreted, but what I am interested in here are languages with strong support for programmer efficiency and for the typical constructs of modern interpreted languages. My question is: are there dynamic languages that can be compiled efficiently to native executable code, typically for Windows platforms? Which ones? Maybe using some third-party ad-hoc tools? I am not talking about huge executables carrying a full interpreter with them, or similar tricks, nor some smart module able to include its own dependencies or required modules, but honest, straight, standard, solid executable code. If not, is there some technical reason inhibiting the availability of such a best-of-both-worlds feature? Thanks! Daniel

  • SOLVED: Lisp: macro calling a function works in interpreter, fails in compiler (SBCL + CMUCL)

    - by ttsiodras
    As suggested in a macro-related question I recently posted to SO, I coded a macro called "fast" via a call to a function (here is the standalone code in pastebin):

        (defun main ()
          (progn
            (format t "~A~%" (+ 1 2 (* 3 4) (+ 5 (- 8 6))))
            (format t "~A~%" (fast (+ 1 2 (* 3 4) (+ 5 (- 8 6)))))))

    This works in the REPL, under both SBCL and CMUCL:

        $ sbcl
        This is SBCL 1.0.52, an implementation of ANSI Common Lisp.
        ...
        * (load "bug.cl")
        22
        22
        $

    Unfortunately, however, the code no longer compiles:

        $ sbcl
        This is SBCL 1.0.52, an implementation of ANSI Common Lisp.
        ...
        * (compile-file "bug.cl")
        ...
        ; during macroexpansion of (FAST (+ 1 2 ...)). Use *BREAK-ON-SIGNALS* to
        ; intercept:
        ;
        ;   The function COMMON-LISP-USER::CLONE is undefined.

    So it seems that by having my macro "fast" call functions ("clone", "operation-p") at compile time, I trigger issues in Lisp compilers (verified in both CMUCL and SBCL). Any ideas on what I am doing wrong and/or how to fix this?

  • Correct Delphi compiler switches to stop in the user's code, not my component's

    - by Jeremy Mullin
    I'm modifying our VCL components so the end user's application links to our dcu files, instead of building our source code each time. We have everything working, but I want the debugger to stop on the user's code when an exception is raised. At first it would stop in our dcu and open the CPU window. I was able to prevent that by removing debug info from the dcu files, but it still doesn't stop in the user's code (like DevExpress libraries and others do). The following screencast is a short example: the first time I cause an exception in the DevExpress code and the debugger correctly stops in my button event; the second time I cause an exception in my components, but the debugger doesn't have my button event on the call stack and doesn't show me where the problem was. Any ideas why? http://screencast.com/t/NjhlOTRk I'm currently building the DCUs with these options: -$W+ -$D- -h -w -q Update: The TDataSet methods in between my component and the button event seem to cause this behavior. If I instead call a direct method of my table, I get the expected behavior. I'm guessing there isn't anything I can do about this, but I'm still curious why it happens.

  • Smooth Camera Zoom Factor Change

    - by Siddharth
    I have a gameplay scene in which the user can zoom in and out, for which I use a SmoothCamera set up as follows:

        public static final int CAMERA_WIDTH = 1024;
        public static final int CAMERA_HEIGHT = 600;
        public static final float MAXIMUM_VELOCITY_X = 400f;
        public static final float MAXIMUM_VELOCITY_Y = 400f;
        public static final float ZOOM_FACTOR_CHANGE = 1f;

        mSmoothCamera = new SmoothCamera(0, 0, Constants.CAMERA_WIDTH, Constants.CAMERA_HEIGHT,
                Constants.MAXIMUM_VELOCITY_X, Constants.MAXIMUM_VELOCITY_Y, Constants.ZOOM_FACTOR_CHANGE);
        mSmoothCamera.setBounds(0f, 0f, Constants.CAMERA_WIDTH, Constants.CAMERA_HEIGHT);

    This creates a problem for me. When the user zooms in and then leaves the gameplay scene, the other scenes do not look right. I already reset the zoom factor to 1 for this purpose, but now the camera visibly translates in the other scene: the scene switch is so quick that the player can easily see the camera translation, which I don't want to show. After the camera has repositioned itself everything works perfectly, but how can I snap the camera to its proper position? For example, my loading text moves from bottom to top or vice versa depending on the camera movement. I can provide more detail if needed.

  • Compiler error: Variable or field declared void [closed]

    - by ?? ?
    I get some errors when I try to compile this; could someone please tell me the mistakes? Thank you!

        C:\Users\Ethan\Desktop\Untitled1.cpp In function `int main()':
        25 C:\Users\Ethan\Desktop\Untitled1.cpp variable or field `findfactors' declared void
        25 C:\Users\Ethan\Desktop\Untitled1.cpp initializer expression list treated as compound expression

        #include<iostream>
        #include<cmath>
        using namespace std;

        void prompt(int&, int&, int&);
        int gcd(int , int , int );//3 input, 3 output
        void findfactors(int , int , int, int, int&, int&);//3 input, 2 output
        void display(int, int, int, int, int);//5 inputs

        int main()
        {
            int a, b, c;    //The coefficients of the quadratic polynomial
            int ag, bg, cg; //value of a, b, c after factor out gcd
            int f1, f2;     //The two factors of a*c which add to be b
            int g;          //The gcd of a, b, c
            prompt(a, b, c);//Call the prompt function
            g=gcd(a, b, c);//Calculation of g
            void findfactors(a, b, c, f1, f2);//Call findFactors on factored polynomial
            display(g, f1, f2, a, c);//Call display function to display the factored polynomial
            system("PAUSE");
            return 0;
        }

        void prompt(int& num1, int& num2, int& num3) //gets 3 ints from the user
        {
            cout << "This program factors polynomials of the form ax^2+bx+c."<<endl;
            while(num1==0)
            {
                cout << "Enter a value for a: ";
                cin >> num1;
                if(num1==0)
                {
                    cout<< "a must be non-zero."<<endl;
                }
            }
            while(num2==0 && num3==0)
            {
                cout << "Enter a value for b: ";
                cin >> num2;
                cout << "Enter a value for c: ";
                cin >> num3;
                if(num2==0 && num3==0)
                {
                    cout<< "b and c cannot both be 0."<<endl;
                }
            }
        }

        int gcd(int num1, int num2, int num3)
        {
            int k=2, gcd=1;
            while (k<=num1 && k<=num2 && k<=num3)
            {
                if (num1%k==0 && num2%k==0 && num3%k==0)
                    gcd=k;
                k++;
            }
            return gcd;
        }

        void findFactors(int Ag, int Bg, int Cg,int& F1, int& F2)
        {
            int y=Ag*Cg;
            int z=sqrt(abs(y));
            for(int i=-z; i<=z; i++) //from -sqrt(|y|) to sqrt(|y|)
            {
                if(i==0)i++; //skips 0
                if(y%i==0) //if i is a factor of y
                {
                    if(i+y/i==Bg) //if i and its partner add to be b
                        F1=i, F2=y/i;
                    else
                        F1=0, F2=0;
                }
            }
        }

        void display(int G, int factor1, int factor2, int A, int C)
        {
            int k=2, gcd1=1;
            while (k<=A && k<=factor1)
            {
                if (A%k==0 && factor1%k==0)
                    gcd1=k;
                k++;
            }
            int t=2, gcd2=1;
            while (t<=factor2 && t<=C)
            {
                if (C%t==0 && factor2%t==0)
                    gcd2=t;
                t++;
            }
            cout<<showpos<<G<<"*("<<gcd1<<"x"<<gcd2<<")("<<A/gcd1<<"x"<<C/gcd2<<")"<<endl;
        }
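    For what it's worth, a hedged reading of the two diagnostics (an illustration only, not a full review of the program): inside main, a statement that begins with a type name is parsed as a declaration rather than a call, so the flagged line tries to declare a variable named findfactors of type void and treats the parenthesised names as an initializer list. A call has no return type in front of it, and its arguments must also match a visible prototype (note that the posted prototype and the definition of findFactors do not agree on parameter count or capitalisation). A self-contained sketch with stand-in values:

        void findFactors(int a, int b, int c, int& f1, int& f2);  // prototype

        int main()
        {
            int a = 1, b = 5, c = 6, f1 = 0, f2 = 0;

            // Wrong: starting the statement with `void` makes it a declaration of a
            // "variable" named findfactors of type void, initialised with an
            // expression list -- which is exactly what the two error messages say.
            // void findfactors(a, b, c, f1, f2);

            // Right: a call has no return type in front of it.
            findFactors(a, b, c, f1, f2);
            return 0;
        }

        void findFactors(int a, int b, int c, int& f1, int& f2)
        {
            f1 = a; f2 = b * c;  // stand-in body for this sketch
        }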

  • When is Java faster than C++ (or when is JIT faster than precompiled)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, can be executed faster than the "same" code in C++ (or other precompiled code) due to JIT optimizations. This is due to the compiler being able to determine the scope of some variables, avoid some conditionals and pull similar tricks at runtime. Could you give an example (or better, several) where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++. It's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates; please point them out if you are.

  • Can you make an incrementing compiler constant?

    - by Keith Nicholas
    While it sounds nonsensical, I want a constant that increments by 1 every time you use it:

        int x;
        int y;
        x = INCREMENTING_CONSTANT;
        y = INCREMENTING_CONSTANT;

    where x == 1 and y == 2. Note that I don't want solutions of the form y = INCREMENTING_CONSTANT + 1. Basically I want to use it as a compile-time unique ID (generally it wouldn't be used in code like the example, but inside another macro).
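    For reference, a minimal sketch of the closest widely available mechanism I know of: the non-standard __COUNTER__ macro supported by GCC, Clang and MSVC. Each expansion anywhere in the translation unit yields the next integer starting from 0 (so the exact values depend on every other use in the same file), which makes it usable as a compile-time unique ID inside another macro:

        #include <iostream>

        // Each expansion of __COUNTER__ produces the next integer, starting at 0.
        #define UNIQUE_ID __COUNTER__

        int main()
        {
            int x = UNIQUE_ID;   // first expansion in this translation unit: 0
            int y = UNIQUE_ID;   // second expansion: 1
            std::cout << x << " " << y << std::endl;   // prints "0 1"
            return 0;
        }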

  • Why are forward declarations necessary?

    - by user199421
    In languages like C# and Java there is no need to declare (for example) a class before using it. If I understand it correctly, this is because the compiler does two passes over the code: in the first it just "collects the information available", and in the second it checks that the code is correct. In C and C++ the compiler does only one pass, so everything needs to be available at that time. So my question basically is: why isn't it done this way in C and C++? Wouldn't it eliminate the need for header files?
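    As an illustration of what the single-pass model requires, here is a small C++ sketch (hypothetical class names): by the time the compiler reaches a use of a name it must already have seen at least a declaration of it, which is exactly what a forward declaration provides.

        class B;            // forward declaration: B names a class

        class A
        {
            B* other;       // fine: a pointer member only needs B's name, not its layout
        };

        class B
        {
            A member;       // fine: A's full definition (and therefore its size) is known
        };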

  • Protobuf compiler for several languages

    - by Stipa
    Hi, in my project I have components written in Python, Objective-C and J2ME. I want to use protobuf as the data interchange format. However, there is one issue I need to resolve: Google's implementation doesn't support ObjC and J2ME. There are third-party implementations that support those languages, but I really don't want to depend on several protoc implementations. What is the best way for me to have protobufs for several languages? Do I need to use different compilers, or is there another option? Thanks, -Lev

  • Why does this optimization not happen?

    - by aaa
    Hi. I have C/C++ code that looks like this:

        static int function(double *I) {
            int n = 0;
            // more instructions, loops,
            for (int i; ...; ++i)
                n += fabs(I[i] > tolerance);
            return n;
        }

        function(I); // return value is not used

    The compiler inlines the function, but it does not optimize out the manipulations of n. I would expect the compiler to be able to recognize that the value is never used as an rhs anywhere. Is there some side effect which prevents the optimization? Thanks

  • C++ compiler unable to find function (namespace related)

    - by CS student
    I'm working in Visual Studio 2008 on a C++ programming assignment. We were supplied with files that define the following namespace hierarchy (the names are just for the sake of this post, I know "namespace XYZ-NAMESPACE" is redundant):

        (MAIN-NAMESPACE) {
            a bunch of functions/classes I need to implement...
            (EXCEPTIONS-NAMESPACE) {
                a bunch of exceptions
            }
            (POINTER-COLLECTIONS-NAMESPACE) {
                Set and LinkedList classes, plus iterators
            }
        }

    The MAIN-NAMESPACE contents are split between a bunch of files, and for some reason I don't understand, the operator<< for both Set and LinkedList is entirely outside of MAIN-NAMESPACE (but within Set and LinkedList's header file). Here's the Set version:

        template<typename T>
        std::ostream& operator<<(std::ostream& os, const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set<T>& set)

    Now here's the problem. I have the following data structure, defined in a class within MAIN-NAMESPACE: Set A, Set B, Set C, double num. When I create an instance of the class and try to print one of the sets, it tells me:

        error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set' (or there is no acceptable conversion)

    However, if I just write a main() function, create Set A, fill it up, and use the operator, it works. Any idea what the problem is? (Note: I tried every combination of using and #include I could think of.)
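    Without knowing the assignment's intended layout, here is only an illustrative sketch (with made-up names) of the usual convention that avoids this class of error: a free operator<< is declared in the same namespace as the type it prints, so argument-dependent lookup can find it from any calling context, including member functions of classes in other namespaces.

        #include <iostream>

        namespace collections
        {
            template <typename T>
            class Set { /* elements omitted in this sketch */ };

            // Declared next to Set: argument-dependent lookup finds this overload
            // wherever a collections::Set<T> is streamed, even from other namespaces.
            template <typename T>
            std::ostream& operator<<(std::ostream& os, const Set<T>&)
            {
                return os << "Set";
            }
        }

        int main()
        {
            collections::Set<int> a;
            std::cout << a << std::endl;   // resolved via ADL
            return 0;
        }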

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit-wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit-wide reads and writes in all cases? For example, if I do a read-modify-write operation like this:

        volatile uint32_t* p_reg = (volatile uint32_t*)0xCAFE0000;
        *p_reg |= 0x01;

    I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit-wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.
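    As one data point, the common idiom is to keep every access to the register an explicit full-width volatile load or store. Mainstream compilers perform volatile accesses with the declared width, although the language standards do not strictly promise this, so the generated code is still worth inspecting (or replacing with inline assembly) when the hardware is unforgiving. A sketch in C++-compatible form, reusing the hypothetical address from the question:

        #include <cstdint>

        // Hypothetical register address taken from the question.
        static volatile std::uint32_t* const p_reg =
            reinterpret_cast<volatile std::uint32_t*>(0xCAFE0000u);

        void set_low_bit()
        {
            std::uint32_t value = *p_reg;   // one full-width (32-bit) volatile read
            value |= 0x01u;                 // modify the copy in an ordinary variable
            *p_reg = value;                 // one full-width (32-bit) volatile write
        }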

  • How does compiling circular dependencies work?

    - by Fabio F.
    I've made the example in Java but I think (not tested) that it works in other (all?) languages. You have 2 files. First, M.java:

        public class MType {
            XType x;
            MType() { x = null; }
        }

    Second, another file (in the same directory), XType.java:

        public class XType {
            MType m;
            public XType(MType m) { this.m = m; }
        }

    OK, it's bad programming, but if you run javac XType it compiles: it compiles even MType, because XType needs it. But ... MType needs XType ... how does that work? How does the compiler know what is happening? Probably this is a stupid question, but I would like to know how the compiler (javac or any other compilers you know) manages that situation, not how to avoid it. I'm asking because I'm writing a precompiler and I would like to manage that situation.

  • How do I pull `static final` constants from a Java class into a Clojure namespace?

    - by Joe Holloway
    I am trying to wrap a Java library with a Clojure binding. One particular class in the Java library defines a bunch of static final constants, for example:

        class Foo {
            public static final int BAR = 0;
            public static final int SOME_CONSTANT = 1;
            ...
        }

    I had a thought that I might be able to inspect the class and pull these constants into my Clojure namespace without explicitly def-ing each one. For example, instead of explicitly wiring it up like this:

        (def *foo-bar* Foo/BAR)
        (def *foo-some-constant* Foo/SOME_CONSTANT)

    I'd be able to inspect the Foo class and dynamically wire up *foo-bar* and *foo-some-constant* in my Clojure namespace when the module is loaded. I see two reasons for doing this: A) Automatically pull in new constants as they are added to the Foo class. In other words, I wouldn't have to modify my Clojure wrapper in the case that the Java interface added a new constant. B) I can guarantee the constants follow a more Clojure-esque naming convention. I'm not really sold on doing this, but it seems like a good question to ask to expand my knowledge of Clojure/Java interop. Thanks

  • Trying to set up iPhone-gcc compiler

    - by Am4d
    So I've installed iPhone-gcc, make and ldid from Cydia, but I can't compile yet because I don't have the headers set up. I've looked around and there's not really much info on setting up the headers, libraries and frameworks. I just need to know where to extract them from the SDK and where to place them on my iPhone. I've been trying to get this to work for a while; any help would be great. Thanks, Adam M

