Search Results

Search found 4771 results on 191 pages for 'aspnet compiler'.

Page 27/191

  • Create my own language with a functional programming language

    - by esehara
    I prefer Haskell. I already know how to create my own language with a procedural
    language (for example C, Java, or Python), but I don't know how to do it with a
    functional language (for example Haskell, Clojure, or Scala). I've already read:

    Internet resources
    - Write Yourself a Scheme in 48 Hours
    - Real World Haskell, Chapter 16: Using Parsec
    - Writing A Lisp Interpreter In Haskell
    - Parsec, a fast combinator parser
    - Implementing functional languages: a tutorial

    Books
    - Introduction to Functional Programming Using Haskell, 2nd Edition

    StackOverflow (but with a procedural language)
    - Learning to write a compiler
    - create my own programming language

    Source
    - Libraries and tools/HJS -- Haskell

    Are there any other good sources? I want to get more links or sources.

  • Two '==' equality operators in the same 'if' condition are not working as intended

    - by Manav MN
    I am trying to establish the equality of three equal variables, but the following
    code does not print the obviously true answer that it should. Can someone explain
    how the compiler parses the given if condition internally?

        #include <stdio.h>
        int main()
        {
            int i = 123, j = 123, k = 123;
            if (i == j == k)
                printf("Equal\n");
            else
                printf("NOT Equal\n");
            return 0;
        }

    Output:

        manav@workstation:~$ gcc -Wall -pedantic calc.c
        calc.c: In function ‘main’:
        calc.c:5: warning: suggest parentheses around comparison in operand of ‘==’
        manav@workstation:~$ ./a.out
        NOT Equal
        manav@workstation:~$

    EDIT: Going by the answers given below, is the following statement okay to check
    the above equality?

        if ((i == j) == (j == k))
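
    For illustration, a minimal sketch of what happens with this condition (my example,
    not from the question): == associates left to right, so i == j == k is parsed as
    (i == j) == k; the first comparison yields 0 or 1, and that value is then compared
    with k. Note that the form in the EDIT, (i == j) == (j == k), compares two boolean
    results and is therefore also true when both comparisons are false; a chained && is
    the usual way to require that all three variables are equal.

        #include <stdio.h>

        int main(void)
        {
            int i = 123, j = 123, k = 123;

            /* i == j == k groups as (i == j) == k:
               (123 == 123) yields 1, and 1 == 123 is false. */
            if ((i == j) == k)
                printf("Equal\n");
            else
                printf("NOT Equal\n");   /* this branch is taken */

            /* the usual way to check that all three are equal */
            if (i == j && j == k)
                printf("All three are equal\n");

            return 0;
        }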

  • Is this call to a function object inlined?

    - by dehmann
    In the following code, Foo::add calls a function via a function object:

        struct Plus {
            inline int operator()(int x, int y) const {
                return x + y;
            }
        };

        template<class Fct>
        struct Foo {
            Fct fct;
            Foo(Fct f) : fct(f) {}
            inline int add(int x, int y) {
                return fct(x, y); // same efficiency adding directly?
            }
        };

    Is this the same efficiency as calling x+y directly in Foo::add? In other words,
    does the compiler typically directly replace fct(x,y) with the actual call,
    inlining the code, when compiling with optimizations enabled?
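
    A small usage sketch (the helper names plain_add and add_via_pointer are mine, not
    from the question): because Foo<Plus>::add knows the exact functor type at compile
    time, an optimizing compiler typically inlines the call and reduces it to the
    addition itself; a call through a plain function pointer is the contrasting case
    that is much harder to inline.

        #include <iostream>

        struct Plus {
            int operator()(int x, int y) const { return x + y; }
        };

        template <class Fct>
        struct Foo {
            Fct fct;
            explicit Foo(Fct f) : fct(f) {}
            int add(int x, int y) { return fct(x, y); }
        };

        // For contrast: the callee is only known through a pointer at run time here.
        int plain_add(int x, int y) { return x + y; }
        int add_via_pointer(int (*f)(int, int), int x, int y) { return f(x, y); }

        int main() {
            Plus p;
            Foo<Plus> foo(p);                    // functor type fixed at compile time
            std::cout << foo.add(1, 2) << "\n";  // usually optimized down to 1 + 2
            std::cout << add_via_pointer(plain_add, 1, 2) << "\n";
            return 0;
        }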

  • Help with writing PHP code that repeats itself per array value

    - by Mohammad
    Hi, I'm using Closure Compiler to compress and join a few JavaScript files. The
    syntax is something like this:

        $c = new PhpClosure();
        $c->add("JavaScriptA.js")
          ->add("JavaScriptB.js")
          ->write();

    How could I make it systematically add more files from an array? Let's say that
    for each array element in

        $file = array('JavaScriptA.js', 'JavaScriptB.js', 'JavaScriptC.js', ...)

    it would execute the following code:

        $c = new PhpClosure();
        $c->add("JavaScriptA.js")
          ->add("JavaScriptB.js")
          ->add("JavaScriptC.js")
          ->add ...
          ->write();

    Thank you so much in advance!

  • Solutions for redundant server and client code?

    - by Fragsworth
    In our system, the code which exists on the client side (in Flash and Javascript) mirrors the code that exists on the server side (e.g. in Python or PHP), normally with respect to the models, the methods available for those models, and the unit tests written for them. This becomes a problem in systems where you want to minimize data transfer (e.g. multiplayer games). I do not want to write the same code and unit tests redundantly for both the client and server, but I don't know of any standard solutions to deal with this. Basically, I want a language/compiler which can produce models and methods for three main languages: Actionscript, Javascript, and any server language. Does something like this exist?

  • Where to learn about VS debugger 'magic names'

    - by Gael Fraiteur
    If you've ever used Reflector, you probably noticed that the C# compiler generates
    types, methods, fields, and local variables that deserve 'special' display by the
    debugger. For instance, local variables beginning with 'CS$' are not displayed to
    the user. There are other special naming conventions for closure types of anonymous
    methods, backing fields of automatic properties, and so on. My question: where can
    I learn about these naming conventions? Does anyone know of some documentation? My
    objective is to make PostSharp 2.0 use the same conventions. Thank you!

  • Why is a variable seen inside the loop but not outside the loop?

    - by Roman
    I have the following code:

        String serviceType;
        ServiceBrowser tmpBrowser;
        for (String playerName : players) {
            serviceType = "_" + playerName + "._tcp";
            tmpBrowser = BrowsersGenerator.getBrowser(serviceType);
            tmpBrowser.browse();
            System.out.println(tmpBrowser.getStatus());
        }
        System.out.println(tmpBrowser.getStatus());

    The compiler complains about the last line. It says "variable tmpBrowser might not
    have been initialized". If I comment out the last line, the compiler does not
    complain.

  • C++ Class Static variable problem - C programmer new to C++

    - by Microkernel
    Hi guys, I am a C programmer, but I learnt C++ at school a long time back. Now I am
    trying to write code in C++ but I am getting a compiler error. Please check and
    tell me what's wrong with my code.

        typedef class _filter_session {
        private:
            static int session_count; /* Number of sessions -- static */
        public:
            _filter_session();        /* Constructor */
            ~_filter_session();       /* Destructor */
        } FILTER_SESSION;

        _filter_session::_filter_session(void)
        {
            (this->session_count)++;
            return;
        }

        _filter_session::~_filter_session(void)
        {
            (this->session_count)--;
            return;
        }

    The error that I am getting is:

        error LNK2001: unresolved external symbol "private: static int
        _filter_session::session_count" (?session_count@_filter_session@@0HA)

    I am using Visual Studio 2005, by the way. Please help. Regards, Microkernel
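
    A minimal sketch of the usual fix: declaring a static data member inside the class
    only declares it; the linker also needs exactly one definition at namespace scope,
    normally in one .cpp file, which is what the LNK2001 message is about. (The count()
    accessor below is my addition, for demonstration only.)

        #include <iostream>

        // Usually in the header: the class with the declaration of the static member.
        // (Names starting with an underscore at global scope are formally reserved;
        //  they are kept here only to match the question.)
        typedef class _filter_session {
        private:
            static int session_count;            // declaration only, no storage yet
        public:
            _filter_session()  { ++session_count; }
            ~_filter_session() { --session_count; }
            static int count() { return session_count; }
        } FILTER_SESSION;

        // Usually in exactly one .cpp file: the definition the linker was missing.
        int _filter_session::session_count = 0;

        int main() {
            FILTER_SESSION a, b;
            std::cout << _filter_session::count() << "\n";   // prints 2
            return 0;
        }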

  • When compiling programs to run inside a VM, what should march and mtune be set to?

    - by Russ
    With VMs being slave to whatever the host machine provides, what compiler flags
    should be given to gcc? I would normally think that -march=native would be what you
    would use when compiling for a dedicated box, but the fine detail that
    -march=native goes into, as indicated in this article, makes me extremely wary of
    using it. So... what should -march and -mtune be set to inside a VM? For a specific
    example: my specific case right now is compiling Python (and more) in a Linux guest
    inside a KVM-based "cloud" host where I have no real control over the host hardware
    (aside from 'simple' stuff like CPU GHz, CPU count, and available RAM). Currently,
    cpuinfo tells me I've got an "AMD Opteron(tm) Processor 6176", but I honestly don't
    know (yet) whether that is reliable and whether the guest can get moved around to
    different architectures on me to meet the host's infrastructure shuffling needs
    (sounds hairy/unlikely). All I can really guarantee is my OS, which is a 64-bit
    Linux kernel where uname -m yields x86_64.

  • How is the ">" operator implemented (on 32 bit integers)?

    - by Ron Klein
    Let's say that the environment is x86. How do compilers compile the ">" operator
    on 32-bit integers? Logically, I mean, without any knowledge of assembly. Let's
    say that the high-level language code is:

        int32 x, y;
        x = 123;
        y = 456;
        bool z;
        z = x > y;

    What does the compiler do to evaluate the expression x > y? Does it perform
    something like this (assuming that x and y are positive integers)?

        w = sign_of(x - y);
        if (w == 0)      // expression is 'false'
        else if (w == 1) // expression is 'true'
        else             // expression is 'false'

    Is there any reference for such information?
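
    A hedged sketch of the usual answer (exact instructions vary with compiler and
    flags): no sign-of-difference helper is needed, because x86 has a cmp instruction
    that sets the processor flags, and the compiler then either branches on those flags
    or materializes the 0/1 result directly. The assembly in the comments below is a
    typical pattern, not the output of any particular compiler.

        #include <cstdio>

        int main() {
            int x = 123, y = 456;
            bool z = x > y;
            // A typical unoptimized x86 sequence for "z = x > y":
            //     mov  eax, dword ptr [x]
            //     cmp  eax, dword ptr [y]   ; compares by subtracting and setting
            //                               ; flags; the numeric result is discarded
            //     setg al                   ; al = 1 if (signed) greater, else 0
            //     mov  byte ptr [z], al
            // When the result is only used by an if/while, the setg is usually
            // replaced by a conditional jump (jg / jle) straight to the chosen branch.
            std::printf("%d\n", z ? 1 : 0);
            return 0;
        }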

  • Find the "name" of a library (-L -l switches)

    - by sebastiangeiger
    Being fairly new to C++, I have a question basically concerning the g++ compiler
    and especially the inclusion of libraries. Consider the following makefile:

        CPPFLAGS= -I libraries/boost_1_43_0-bin/include/ -I libraries/jpeg-8b-bin/include/
        LDLIBS= libraries/jpeg-8b-bin/lib/libjpeg.a
        # LDLIBS= -L libraries/jpeg-8b-bin/lib -llibjpeg

        all: main

        main: main.o
                c++ -o main main.o $(LDLIBS)

        main.o: main.cpp
                c++ $(CPPFLAGS) -c main.cpp

        clean:
                rm -rf *.o main

    As you can see, I declared the LDLIBS variable twice. My code compiles and works if
    I use the makefile above. But if I deactivate the first LDLIBS entry and activate
    the second one, I get ld: library not found for -llibjpeg. I assume my libjpeg.a is
    just not called libjpeg but bears some different name. Is there a way to find out
    the name of a given "library file" libsomething.a or libsomething.dyn?

  • strange results with /fp:fast

    - by martinus
    We have some code that looks like this:

        inline int calc_something(double x) {
            if (x > 0.0) {
                // do something
                return 1;
            } else {
                // do something else
                return 0;
            }
        }

    Unfortunately, when using the flag /fp:fast, we get calc_something(0) == 1, so we
    are clearly taking the wrong code path. This only happens when we use the method at
    multiple points in our code with different parameters, so I think there is some
    fishy optimization going on here from the compiler (Microsoft Visual Studio 2008,
    SP1). Also, the above problem goes away when we change the interface to

        inline int calc_something(const double& x) {

    but I have no idea why this fixes the strange behaviour. Can anyone explain this
    behaviour? If I cannot understand what's going on, we will have to remove the
    /fp:fast switch, but this would make our application quite a bit slower.
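
    One commonly suggested mitigation, sketched here as an assumption rather than a
    guaranteed fix: under /fp:fast the compiler may keep intermediates in wider
    registers and reorder operations, so a value that "should" be exactly 0.0 can land
    on either side of an exact comparison (passing by const double& forces the value
    through memory, which is a plausible reason the behaviour changed). Comparing
    against a small tolerance makes the branch insensitive to that; the eps value below
    is arbitrary and would need to be chosen for the real data.

        #include <cstdio>

        // Variation of the function from the question, with a tolerance instead of an
        // exact comparison against 0.0.
        inline int calc_something(double x) {
            const double eps = 1e-12;   // assumed tolerance, not from the question
            if (x > eps) {
                // do something
                return 1;
            } else {
                // do something else: values within eps of zero count as "not positive"
                return 0;
            }
        }

        int main() {
            std::printf("%d %d %d\n",
                        calc_something(0.0), calc_something(1e-15), calc_something(1.0));
            return 0;
        }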

  • A general question about compilation and interpretation.

    - by wucnuc
    Hi stackoverflow, I apologize in advance for the possible stupidity of this
    question. However, the following has been the source of some confusion for me, and
    I know the people here will be able to handily clear it up. Basically, I would like
    to finally understand the relationships between any and all of the following terms.
    Some of the terms I do actually understand pretty well, but some of them are
    similar in my mind, and I would like once and for all to see their
    relationships/distinctions laid out all at once. They are:

    - compiler
    - interpreter
    - bytecode
    - machine code
    - assembler
    - assembly language
    - binary
    - object code
    - executable

    Ideally, an answer would use examples from Java and C++ and other well-known
    programming languages that a young-ish student like me would be familiar with.
    Also, if you want to throw in any other useful terms, that would be fine too :)

  • scala jrebel superclass change

    - by coubeatczech
    Hi, I'm using JRebel with Scala and I quite frequently need to restart the server
    because JRebel is unable to reload a class if its superclass was changed. This
    happens mainly when I change anonymous functions, as I can deduce from the JRebel
    error description:

        Class 'mypackage.NewBook$$anonfun$2' superclass was changed from
        'scala.runtime.AbstractFunction1' to 'scala.runtime.AbstractFunction2'
        and could not be reloaded.

    Is there any way I can design my code to avoid this? Does the Scala compiler take
    the anonymous functions and number them from one as they appear in the source code?

  • Tips on using GCC as a new user

    - by ultrajohn
    I am really new to GCC and I don't know how to use it. I already have a copy of the
    pre-compiled GCC binaries I downloaded from one of the mirror sites on the GCC
    website. Now, I don't know where to go from here... Please give me some tips on the
    different paths to proceed. I am sorry for the rather vague question. What I want
    are tips on how to use GCC. I've programmed in C in the past using the TC compiler.
    Thanks! I really appreciate all of your suggestions. Thanks again. :)
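
    A minimal first step, assuming the downloaded binaries put gcc on your PATH (the
    file name hello.c is just an example): compile one file and run the result. -Wall
    enables the common warnings and -o names the output file.

        /* hello.c -- a tiny program to check that the toolchain works.
         *
         * Compile and run (assuming gcc is on the PATH):
         *     gcc -Wall -o hello hello.c
         *     ./hello
         */
        #include <stdio.h>

        int main(void)
        {
            printf("hello from gcc\n");
            return 0;
        }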

  • Error preverifying class java during build (BlackBerry)

    - by Davide Vosti
    I'm trying to build and debug a small project for BlackBerry. During the build I'm
    getting this error:

        Error preverifying class java ...

    I read on the net that this error could be caused by referencing multiple projects,
    so I tried to move every package into a single project, but the error is still
    there. I tried with multiple JDE versions (currently 4.7) and the Java compiler is
    set to 1.6. The Eclipse version is 3.4.1, as recommended by RIM's documentation.
    Does anyone have a clue?

  • Why can only constant expressions be used as case expressions in a switch statement?

    - by sinec
    Hi, what is bothering me is that I can't find any information regarding the
    question in the title. I found that the switch-case statement is compiled (MS VC
    2008 compiler) into a bunch of cmp and je instructions:

        0041250C mov  eax,dword ptr [i]
        0041250F mov  dword ptr [ebp-100h],eax
        00412515 cmp  dword ptr [ebp-100h],1
        0041251C je   wmain+52h (412532h)
        0041251E cmp  dword ptr [ebp-100h],2
        00412525 je   wmain+5Bh (41253Bh)
        00412527 cmp  dword ptr [ebp-100h],3
        0041252E je   wmain+64h (412544h)
        00412530 jmp  wmain+6Bh (41254Bh)

    For the above code, if the condition (i) is equal to 2, execution jumps to address
    41253Bh and continues from there (the code for the 'case 2:' block starts at
    41253Bh). What I don't understand is why, if a function were used as the case
    expression, the function couldn't first be evaluated and then its result compared
    with the condition. Am I missing something? Thank you in advance.
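
    A short sketch of the distinction (my illustration, not from the question): each
    case label must be an integer constant expression precisely so the compiler can lay
    the dispatch out at compile time, as a jump table or a fixed chain of compares
    against known immediates like the disassembly above. A value that is only known at
    run time, such as an ordinary function result, can still be tested, but it has to
    be written as an if/else chain, which is what the compiler would effectively have
    to generate anyway. (The example uses C++11 constexpr for the constant case.)

        #include <cstdio>

        int f() { return 2; }              // value known only at run time

        constexpr int two() { return 2; }  // constant expression, usable as a label

        int dispatch(int i) {
            switch (i) {
            case 1:      return 10;
            case two():  return 20;        // fine: evaluated at compile time
            // case f(): return 30;        // ill-formed: not a constant expression
            default:     return 0;
            }
        }

        int dispatch_runtime(int i) {
            // The run-time equivalent when the tested values are not constants.
            if (i == 1)        return 10;
            else if (i == f()) return 20;
            else               return 0;
        }

        int main() {
            std::printf("%d %d\n", dispatch(2), dispatch_runtime(2));
            return 0;
        }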

  • How do I indicate that a class doesn't support certain operators?

    - by romeovs
    I'm writing a class that represents an ordinal scale, but one that has no logical
    zero point (e.g. time). This scale should permit addition and subtraction
    (operator+, operator+=, ...) but not multiplication. Yet I have always felt it to
    be good practice that when one overloads one operator of a certain group (in this
    case the math operators), one should also overload all the others that belong to
    that group. In this case that would mean I should also overload the multiplication
    and division operators, because if a user can write A+B he would probably expect to
    be able to use the other operators too. Is there a method I can use to throw an
    error for this at compile time? The easiest approach would be simply not to
    overload the operators (operator*, ...), yet it would seem appropriate to give a
    bit more explanation than "operator* is not known for class time". Or is this
    something that I really should not care about (RTFM, user)?
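
    One common approach, sketched under the assumption that C++11 is available (the
    class name Time and the member names are mine): declare the unwanted operators and
    mark them = delete, so any use fails at compile time with a message that names the
    deleted operator. Before C++11, declaring them private and leaving them undefined
    gives a similar, though less readable, diagnostic.

        struct Time {
            double value;

            Time& operator+=(const Time& rhs) { value += rhs.value; return *this; }
            friend Time operator+(Time lhs, const Time& rhs) { lhs += rhs; return lhs; }

            // Explicitly reject operations that make no sense on this scale.
            // Any use is a compile-time error mentioning the deleted operator.
            Time operator*(const Time&) const = delete;
            Time operator/(const Time&) const = delete;
        };

        int main() {
            Time a{1.0}, b{2.0};
            Time c = a + b;        // fine
            // Time d = a * b;     // error: use of deleted function 'Time::operator*'
            (void)c;
            return 0;
        }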

  • LALR(1) or GLR on Windows - Alternatives to Bison++ / Flex++ that are current?

    - by mrjoltcola
    I have been using the same version of bison++ (1.21-8) and flex++ (2.3.8-7) since 2002. I'm not looking for an alternative to LALR(1) or GLR at this time, just looking for the most current options. Is anyone aware of any later ports of these than the original that aren't Cygwin dependent? What are other folks using in Windows environments for C++ compiler development (besides ANTLR or Boost.spirit)? Commercial options are ok, if you have firsthand experience. I do need to compile on Linux as well.

  • Warning vs. error

    - by Samuel
    I had an annoying issue: getting a "Possible loss of precision" error when
    compiling my Java program in BlueJ (but from what I read this isn't connected to a
    specific IDE). I was surprised by the fact that the compiler told me there is a
    possible loss of precision and wouldn't let me compile/run the program. Why is this
    an error and not a warning saying "you might lose precision here; if you don't want
    that, change your code"? The program runs just fine when I drop the float values;
    it wouldn't matter, since there is no point (e.g. [143.08, 475.015]) on my screen.
    On the other hand, when I loop through an ArrayList and inside this loop I have an
    if clause removing elements from the ArrayList, it runs "fine": it just throws an
    error and doesn't display the ArrayList [used for drawing circles] for a fraction
    of a second. This appears to me to be a severe error, yet it causes hardly any
    trouble, and I wouldn't want to have such a thing in my code at all. What's the
    boundary?

  • Eliminating false dependencies

    - by Klaus
    Hi all, I have a quite general question regarding false dependencies. As the name
    implies, these are not real dependencies and can be eliminated. I am aware of
    techniques such as register renaming that eliminate such dependencies at the
    hardware level. Of course, I could already eliminate them at a "higher" level by
    writing assembler code that avoids false dependencies. But now I am wondering
    whether the compiler also provides support to keep the number of false dependencies
    low, or does it rely more on the hardware to eliminate them? Thanks
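
    A source-level sketch of the idea (my example, not from the question): reusing one
    temporary for unrelated results creates write-after-read and write-after-write
    dependencies that are not real data dependencies. In practice an optimizing
    compiler rewrites code into a form where every such reuse gets a fresh name (SSA)
    and allocates registers accordingly, and the CPU's register renamer does the same
    for architectural registers, so the work is shared between compiler and hardware
    rather than left to either one alone.

        #include <cstdio>

        // One temporary reused for two independent products: the second write to t
        // must logically wait for the first use, a false (WAW/WAR) dependency.
        int reused_temp(int a, int b, int c, int d) {
            int t;
            t = a * b;
            int r1 = t + 1;
            t = c * d;            // false dependency on the earlier use of t
            int r2 = t + 1;
            return r1 + r2;
        }

        // Separate names: the two products are visibly independent.
        int separate_temps(int a, int b, int c, int d) {
            int t1 = a * b;
            int t2 = c * d;
            return (t1 + 1) + (t2 + 1);
        }

        int main() {
            std::printf("%d %d\n", reused_temp(1, 2, 3, 4), separate_temps(1, 2, 3, 4));
            return 0;
        }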

  • C++ iterators & loop optimization

    - by Quantum7
    I see a lot of C++ code that looks like this:

        for (const_iterator it = list.begin(), ite = list.end(); it != ite; ++it)

    as opposed to the more concise version:

        for (const_iterator it = list.begin(); it != list.end(); ++it)

    Will there be any difference in speed between these two conventions? Naively, the
    first will be slightly faster since list.end() is only called once. But since the
    iterator is const, it seems like the compiler will pull this test out of the loop,
    generating equivalent assembly for both.
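
    A compilable version of the two idioms (the container type std::list<int> is an
    assumption; the question doesn't name one): caching end() is always safe to write,
    whereas relying on the compiler to hoist list.end() is only straightforward when it
    can prove the loop body does not modify the container, so for loops that call into
    unknown code the explicit cache is the conservative choice.

        #include <cstdio>
        #include <list>

        int sum_twice(const std::list<int>& values) {
            int total = 0;

            // Idiom 1: cache end() once; it is certainly evaluated a single time.
            for (std::list<int>::const_iterator it = values.begin(), ite = values.end();
                 it != ite; ++it)
                total += *it;

            // Idiom 2: call end() every iteration; with an unmodified container the
            // compiler can often hoist it and generate much the same code as above.
            for (std::list<int>::const_iterator it = values.begin();
                 it != values.end(); ++it)
                total += *it;

            return total;
        }

        int main() {
            std::list<int> v;
            v.push_back(1); v.push_back(2); v.push_back(3);
            std::printf("%d\n", sum_twice(v));   // 12: each element summed twice
            return 0;
        }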

  • What are the disadvantages of targeting the JVM instead of x86?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for
    the Windows platform, but now I am in doubt. I've seen some new languages target
    the JVM (most notably Scala and Clojure). Of course, it's not possible to port
    every language easily to the JVM; doing so may lead to small changes to the
    language and its design. After posing this question, I doubted this decision even
    more. I now know some "pro JVM" arguments. The original question was: is targeting
    the JVM a good idea when creating a compiler for a new language? Updated question:
    what are the disadvantages of targeting the JVM instead of x86 on Windows?

  • What Should be the Structure of a C++ Project?

    - by Ell
    I have recently started learning C++ and, coming from a Ruby environment, I have
    found it very hard to structure a project in a way that it still compiles
    correctly. I have been using Code::Blocks, which is brilliant, but a downside is
    that when I add a new header file or C++ source file, it will generate some code,
    and even though it is only a mere 3 or 4 lines, I do not know what these lines do.
    First of all, I would like to ask this question: what do these lines do?

        #ifndef TEXTGAME_H_INCLUDED
        #define TEXTGAME_H_INCLUDED

        #endif // TEXTGAME_H_INCLUDED

    My second question is: do I need to #include both the .h file and the .cpp file,
    and in which order? My third question is: where can I find the GNU GCC Compiler
    that, I believe, was packaged with Code::Blocks, and how do I use it without
    Code::Blocks? I would rather develop in a Notepad++ sort of way because that is
    what I'm used to in Ruby, but since C++ is compiled, you may think differently
    (please give advice and views on that as well). Thanks in advance, ell.
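
    A minimal layout sketch, with file and function names assumed from the guard in the
    question (textgame.h, textgame.cpp, main.cpp; roll_dice is my placeholder): the
    three generated lines are an include guard, which prevents the header's contents
    from being compiled twice when it is #included from more than one file. Source
    files include the header, never each other, and only the .cpp files are handed to
    the compiler. Code::Blocks on Windows typically ships the MinGW port of GNU GCC in
    its own MinGW\bin folder, and the same sources can be built from a plain editor
    with a command like the one in the last comment.

        // textgame.h
        #ifndef TEXTGAME_H_INCLUDED
        #define TEXTGAME_H_INCLUDED

        int roll_dice();              // declarations only in the header

        #endif // TEXTGAME_H_INCLUDED

        // textgame.cpp -- includes its own header and defines what it declares
        #include "textgame.h"

        int roll_dice() { return 4; }

        // main.cpp -- includes the header, never the .cpp
        #include <iostream>
        #include "textgame.h"

        int main() {
            std::cout << roll_dice() << "\n";
            return 0;
        }

        // Built outside the IDE with something like:
        //     g++ -Wall -o textgame main.cpp textgame.cpp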
