Search Results

Search found 5015 results on 201 pages for 'compiler construction'.

  • How do I create my own programming language and a compiler for it

    - by Dave
    I am well versed in programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. I can't understand how people create programming languages and devise compilers for them. I also can't understand how people create operating systems like Windows, Mac, UNIX, DOS and so on. Another thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this, and I am 15 years old. I have always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's Compiler Design and Tanenbaum's OS concepts book, and they both only discuss concepts and code at a high level. They don't go into the details and nuances of how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself, not just an understanding of what a thread, semaphore, process, or parsing is. I asked my brother about all this. He is an SB student in EECS at MIT and hasn't got a clue how to actually create any of this in the real world. All he knows is an understanding of compiler design and OS concepts like the ones that you guys have mentioned (i.e. threads, synchronization, concurrency, memory management, lexical analysis, intermediate code generation and so on)
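
    For a concrete taste of what the compiler books mean by "lexical analysis" and "parsing", here is a minimal sketch (not from this question; names invented, C++ assumed) of a toy front end that lexes and evaluates arithmetic expressions. A real compiler would build a syntax tree and emit code instead of computing a value on the spot:

        // Toy front end: a lexer that turns characters into tokens and a
        // recursive-descent parser that evaluates "1 + 2 * (3 + 4)".
        #include <cctype>
        #include <cstddef>
        #include <iostream>
        #include <string>

        struct Lexer {
            std::string src;
            std::size_t pos;
            // Lexical analysis: skip spaces and look at the next significant character.
            char peek() {
                while (pos < src.size() && std::isspace(static_cast<unsigned char>(src[pos]))) ++pos;
                return pos < src.size() ? src[pos] : '\0';
            }
            long number() {                       // consume an integer literal token
                long v = 0;
                while (pos < src.size() && std::isdigit(static_cast<unsigned char>(src[pos])))
                    v = v * 10 + (src[pos++] - '0');
                return v;
            }
        };

        // Recursive-descent parsing: one function per grammar rule.
        //   expr   := term   (('+'|'-') term)*
        //   term   := factor (('*'|'/') factor)*
        //   factor := NUMBER | '(' expr ')'
        long expr(Lexer& lx);

        long factor(Lexer& lx) {
            if (lx.peek() == '(') {
                ++lx.pos;                         // consume '('
                long v = expr(lx);
                if (lx.peek() == ')') ++lx.pos;   // consume ')'
                return v;
            }
            lx.peek();                            // skip whitespace before the number
            return lx.number();
        }

        long term(Lexer& lx) {
            long v = factor(lx);
            while (lx.peek() == '*' || lx.peek() == '/') {
                char op = lx.src[lx.pos++];
                long rhs = factor(lx);
                v = (op == '*') ? v * rhs : v / rhs;
            }
            return v;
        }

        long expr(Lexer& lx) {
            long v = term(lx);
            while (lx.peek() == '+' || lx.peek() == '-') {
                char op = lx.src[lx.pos++];
                long rhs = term(lx);
                v = (op == '+') ? v + rhs : v - rhs;
            }
            return v;
        }

        int main() {
            Lexer lx{"1 + 2 * (3 + 4)", 0};
            std::cout << expr(lx) << "\n";        // prints 15
        }

    Books like the Dragon Book cover how to grow exactly this into a full compiler: replace the on-the-spot evaluation with tree building, add a symbol table, and emit assembly or bytecode from the tree.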

    Read the article

  • In ASP.NET, writing code in the control tag generates a compile error

    - by Nour Sabouny
    Hi, this is really strange! Look at the following ASP code: <div runat="server" id="MainDiv"> <%foreach (string str in new string[]{"First#", "Second#"}) { %> <div id="<%=str.Replace("#","div") %>"> </div> <%} %> </div> Now if you put this code inside any web page (and don't worry about the point of this code, I made it just to show the idea) you'll get this error: Compiler Error Message: CS1518: Expected class, delegate, enum, interface, or struct Of course the error has nothing to do with the real problem. I searched for the code that was generated by ASP.NET and found the following: private void @__RenderMainDiv(System.Web.UI.HtmlTextWriter @__w, System.Web.UI.Control parameterContainer) { @__w.Write("\r\n "); #line 20 "blabla\blabla\Default.aspx" foreach (string str in new string[] { "First#", "Second#" }) { #line default #line hidden @__w.Write("\r\n <div id=\""); #line 22 "blabla\blabla\Default.aspx" @__w.Write(str.Replace("#", "div")); #line default #line hidden @__w.Write("\">\r\n "); } This is the code that was generated from the ASP page, and this is the method that is meant to render our div (MainDiv). I found out that there is a missing bracket "}" that should close the method or the (for loop). Now the problem has three parts: 1 - you need a server control (in our case the MainDiv), and I'm not sure if it happens only with the div tag. 2 - an HTML control inside the server control with code inside it using double quotation marks (for example <div id="<%=str instead of <div id='<%=str). 3 - any keyword which has block brackets, e.g. for{}, while{}, using{}, etc. Removing any one part will solve the problem! How is this happening? Any ideas? BTW: please help me make the question clearer, because I couldn't find the best words to describe the problem.

    Read the article

  • Compiler Dependencies [closed]

    - by asghar ashgari
    I'm a newbie researcher whose passion is programming languages (Web era). I'm wondering why all the Web frameworks and Web-based general-purpose languages have a huge number of dependencies when you want to install and then use (e.g., extend, alter, etc.) their compilers. For example, Ruby on Rails or Scala. If I want to download their source code and try to build it again, to me at least, it feels like a can of worms. I have a Mac, so I need to install MacPorts, then update my Xcode, then get the compiler source code, which has a bunch of other dependencies, and then it's hard to set things up; just to see that the installed open-source compiler works fine.

    Read the article

  • Google Closure Compiler - what does the name mean?

    - by mikez302
    I am curious about the Google Closure Compiler. Why did they name it that? Does it have anything to do with lexical closures? EDIT: I tried researching it in the FAQ and documentation, as well as doing Google searches such as "closure compiler name". I couldn't find anything definite, hence the reason I am asking. I don't think I will get a profoundly helpful answer but I was hoping that I could at least satisfy my curiosity. I am not trying to solve a specific problem. I am just curious.

    Read the article

  • Switching from Debug to Release Mode with VS2010 as IDE and Intel C++ Compiler 13

    - by Drazick
    I have the code of a plug-in from an SDK. The code is in Debug mode. I use the Intel compiler, which only applies optimizations in Release mode. Under the project's Configuration Manager only a "Debug" configuration is defined. How can I switch to "Release" mode and enable all of the Intel compiler's optimizations? If I enable them in Debug mode nothing is applied (Empty Report). I couldn't find the trick to do so. Thank you.

    Read the article

  • Theory: Can a JIT compiler be used to parse the whole program first, then execute it later?

    - by unknownthreat
    Normally, a JIT compiler works by reading the byte code, translating it into machine code, and executing it. This is what I understand, but in theory, is it possible to make the JIT compiler parse the whole program first and then execute the program later as machine code? I do not know exactly how a JIT compiler works technically, so I don't know whether this is feasible. But theoretically, is it possible? Or am I getting it wrong?

    Read the article

  • Parentheses around method invocation: why is the compiler complaining about assignment?

    - by polygenelubricants
    I know why the following code doesn't compile: public class Main { public static void main(String args[]) { main((null)); // this is fine! (main(null)); // this is NOT! } } What I'm wondering is why my compiler (javac 1.6.0_17, Windows version) is complaining "The left hand side of an assignment must be a variable". I'd expect something like "Don't put parentheses around a method invocation, dummy!" instead. So why is the compiler making a totally unhelpful complaint about something that is blatantly irrelevant? Is this the result of an ambiguity in the grammar? A bug in the compiler? If it's the former, could you design a language such that a compiler would never be so off-base about a syntax error like this?

    Read the article

  • Is a C++ compiler allowed to emit different machine code compiling the same program?

    - by sharptooth
    Consider a situation. We have some specific C++ compiler, a specific set of compiler settings and a specific C++ program. We compile that specific program with that compiler and those settings two times, doing a "clean compile" each time. Should the machine code emitted be the same (I don't mean timestamps and other bells and whistles, I mean only the real code that will be executed) or is it allowed to vary from one compilation to another?
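
    One concrete illustration (a sketch, not from the question): nothing stops a program's own source from baking build-time data into the output, so byte-for-byte identical output is not something the C++ standard promises. Using the standard __DATE__ and __TIME__ macros:

        // Two "clean compiles" of this file generally differ, because __DATE__ and
        // __TIME__ expand to string literals fixed at the moment of compilation,
        // and those literals are baked into the emitted object code/data.
        #include <cstdio>

        int main() {
            std::printf("built on %s at %s\n", __DATE__, __TIME__);
            return 0;
        }

    Leaving such macros aside, many toolchains do aim to produce reproducible output for identical input and settings, but that is a quality-of-implementation matter rather than a guarantee of the language standard.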

    Read the article

  • why it is up to the compiler to decide what value to assign when assigning an out-of-range value to

    - by Allopen
    In C++ Primer, 4th edition, section 2.1.1, it says "when assigning an out-of-range value to a signed type, it is up to the compiler to decide what value to assign". I can't understand it. I mean, if you have code like "char sc = 299", certainly the compiler will generate asm code like "mov BYTE PTR _sc$[ebp], 43" (VC) or "movb $43, -2(%ebp)" (gcc+mingw), so it IS decided by the compiler. But what if we assign a value that is given by user input, say, via the command line? Then the asm code generated will be "movb %al, -1(%ebp)" (gcc+mingw) or "mov cl, BYTE PTR _i$[ebp] mov BYTE PTR _sc$[ebp], cl" (VC), so now how can the compiler decide what will happen? I think now it is decided by the CPU. Can you give me a clear explanation?
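
    A small sketch of the rule being quoted (not from the question; pre-C++20 assumed, where the converted value is implementation-defined): the compiler picks the conversion rule once, at compile time, by choosing which truncation/move instructions to emit, and the CPU then merely carries that choice out, even when the value only arrives at run time.

        // 299 does not fit in a signed 8-bit char. Pre-C++20 the resulting value is
        // implementation-defined; common implementations keep the low byte, and
        // 299 % 256 == 43, matching the "mov ..., 43" in the question's asm.
        #include <iostream>

        int main() {
            int from_input = 299;              // imagine this came from the command line
            signed char sc = from_input;       // out-of-range conversion to a signed type
            std::cout << static_cast<int>(sc) << "\n";   // typically prints 43
            return 0;
        }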

    Read the article

  • Two-Phase Construction in C++

    - by tommieb75
    As part of an assignment I have to look into a development kit that uses "two-phase" construction for its C++ classes: // Include Header class someFubar { public: someFubar(); bool Construction(void); ~someFubar(); private: fooObject* _fooObj; }; In the source // someFubar.cpp someFubar::someFubar() : _fooObj(NULL) { } bool someFubar::Construction(void) { bool rv = false; this->_fooObj = new fooObject(); if (this->_fooObj != NULL) rv = true; return rv; } someFubar::~someFubar() { if (this->_fooObj != NULL) delete this->_fooObj; } Why would this "two-phase" approach be used and what benefits are there? Why not just perform the object initialization within the actual constructor?
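
    For contrast, a sketch of the single-phase alternative the question alludes to (invented names, C++11 assumed): acquire the member in the constructor and let RAII clean it up, reporting failure with an exception rather than a bool.

        #include <memory>

        class fooObject { /* ... */ };

        class someFubarRaii {
        public:
            // One phase: allocation happens here; a failure throws std::bad_alloc,
            // so there is never a half-constructed object to check for afterwards.
            someFubarRaii() : _fooObj(new fooObject()) {}
            // No Construction() step and no manual delete: unique_ptr releases it.
        private:
            std::unique_ptr<fooObject> _fooObj;
        };

    Two-phase construction tends to show up in SDKs that avoid exceptions (a constructor has no return value to signal failure) or that need virtual calls during setup, which are only safe once the object is fully constructed.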

    Read the article

  • Need help regarding an LALR(1) parsing issue.

    - by AppleGrew
    I am trying to parse a context-free language called Context Free Art. I have created its parser in JavaScript using JSCC, a YACC-like LALR(1) parser generator for JS. Take the example of the following CFA (Context Free Art) code. This code is valid CFA. startshape A rule A { CIRCLE { s 1} } Notice the A and s above. s is a command to scale the CIRCLE, but A is just the name of this rule. In the language's grammar I have set s as the token SCALE, and A falls under the token STRING (I have a regular expression to match strings and it is at the bottom of all the tokens). This works fine, but in the case below it breaks. startshape s rule s { CIRCLE { s 1} } This too is perfectly valid code, but since my parser marks the s after rule as a SCALE token, it errors out saying that it was expecting STRING. Now my question is: is there any way to rewrite the production rules of the parser to account for this? The related production rule is: rule: RULE STRING '{' buncha_replacements '}' [* rule(%2, 1) *] | RULE STRING RATIONAL '{' buncha_replacements '}' [* rule(%2, 1*%3) *] ; One simple solution I can think of is to create a copy of the above rule with STRING replaced by SCALE, but this is just one of many similar rules which would need such a fix. Furthermore, there are many other terminals which can get matched as STRING. So that means way too many rules!

    Read the article

  • Benefits of 'Optimize code' option in Visual Studio build

    - by gt
    Much of our C# release code is built with the 'Optimize code' option turned off. I believe this is to allow code built in Release mode to be debugged more easily. Given that we are creating fairly simple desktop software which connects to backend Web Services (i.e. not a particularly processor-intensive application), what sort of performance hit, if any, might be expected? And is any particular platform likely to be worse affected, e.g. multi-processor / 64-bit?

    Read the article

  • When to use certain optimizations such as -fwhole-program and -fprofile-generate with several shared libraries

    - by James
    Probably a simple answer; I get quite confused with the language used in the GCC documentation for some of these flags! Anyway, I have three libraries and a programme which uses all three. I compile each of my libraries separately with individual (potentially different) sets of warning flags. However, I compile all three libraries with the same set of optimisation flags. I then compile my main programme, linking in these three libraries, with its own set of warning flags and the same optimisation flags used during the libraries' compilation. 1) Do I have to compile the libraries with optimisation flags present, or can I just use these flags when compiling the final programme and linking to the libraries? If the latter, will it then optimise all or just some (presumably that which is called) of the code in these libraries? 2) I would like to use -fwhole-program -flto -fuse-linker-plugin and the gold linker plugin. At which stage do I compile with these on... just the final compilation, or do these flags need to be present during the compilation of the libraries? 3) Pretty much the same as 2), however with -fprofile-generate, -fprofile-arcs and -fprofile-use. I understand one first runs a programme with generate, and then with use. However, do I have to compile each of the libraries with generate/use etc. or just the final programme? And if it is just the last programme, when I then compile with -fprofile-use will it also optimise the libraries' functionality? Many thanks, James

    Read the article

  • Delphi: All constants are constant, but some are more constant than others?

    - by Ian Boyd
    Consider: clHotlight: TColor = $00FF9933; clLink = clHotLight; //alias of clHotlight [Error] file.pas: Constant expression expected and the alternate wording that works: clHotlight = TColor($00FF9933); clLink = clHotLight; //alias of clHotlight Explain. Then consider: AdministratorGUID: TGUID = '{DE44EEA0-6712-11D4-ADD4-0006295717DA}'; SuperuserGUID = AdministratorGUID; //alias of AdministratorGUID [Error] file.pas: Constant expression expected And fix.

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features, and my theories (some of my theories are untested and are based on thought experiments): 1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). It is hard to optimize when combined with features that allow runtime recompilation or method addition. Or is it just hard, anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods. 2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully, if at all, and probably only for a given threshold of size of the callee, i.e. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. 3) First-class continuations. There are multiple ways to implement them, and I have done so with both of the most common approaches, one being stack copying and the other implementing the runtime to use continuation-passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource-management issues, i.e. we must save everything in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also affect cache efficiency (the locality of the stack changes more with the use of continuations and co-routines). The other issue is that they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct. 4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily. My feelings are that many of the high-level features, particularly in dynamic languages, just don't map to hardware.
    Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing the top of stack to a register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up/cache at runtime; nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code) and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
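
    As an illustration of point 2 above (a sketch, invented names, C++ assumed), this is roughly what "carrying the type info around" means at the machine level: every value is a tag plus a payload, so a plain add turns into tag tests and branches instead of a single instruction.

        #include <cstdint>

        enum class Tag : std::uint8_t { Int, Real };

        // A dynamically typed value: the tag is the extra register/word the post
        // mentions; the payload is one of the union members.
        struct Value {
            Tag tag;
            union { std::int64_t i; double d; };
        };

        Value make_int(std::int64_t v)  { Value r; r.tag = Tag::Int;  r.i = v; return r; }
        Value make_real(double v)       { Value r; r.tag = Tag::Real; r.d = v; return r; }

        // "a + b" once types are only known at run time: check tags, branch, maybe
        // convert, then do the arithmetic the hardware could otherwise do directly.
        Value add(Value a, Value b) {
            if (a.tag == Tag::Int && b.tag == Tag::Int)
                return make_int(a.i + b.i);
            double x = (a.tag == Tag::Int) ? static_cast<double>(a.i) : a.d;
            double y = (b.tag == Tag::Int) ? static_cast<double>(b.i) : b.d;
            return make_real(x + y);
        }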

    Read the article

  • 'Lexical' scoping of type parameters in C#

    - by leppie
    I have 2 scenarios. This fails: class F<X> { public X X { get; set; } } error CS0102: The type 'F' already contains a definition for 'X' This works: class F<X> { class G { public X X { get; set; } } } The only logical explanation is that in the second snippet the type parameter X is out of scope, which is not true... Why should a type parameter affect my definitions in a type? IMO, for consistency, either both should work or neither should work. Any other ideas? PS: I call it 'lexical', but it is probably not the correct term.

    Read the article

  • Printing the address of a struct object

    - by bdhar
    I have a struct like this: typedef struct _somestruct { int a; int b; } SOMESTRUCT, *LPSOMESTRUCT; I am creating an object of the struct and trying to print its address like this: int main() { LPSOMESTRUCT val = (LPSOMESTRUCT)malloc(sizeof(SOMESTRUCT)); printf("0%x\n", val); return 0; } ...and I get this warning: warning C4313: 'printf' : '%x' in format string conflicts with argument 1 of type 'LPSOMESTRUCT' So, I tried to cast the address to int like this: printf("0%x\n", static_cast<int>(val)); But I get this error: error C2440: 'static_cast' : cannot convert from 'LPSOMESTRUCT' to 'int' What am I missing here? How do I avoid this warning? Thanks.
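
    For reference, a sketch (not from the question) of one conventional way to print a pointer: printf's %p conversion expects a void*, so casting the pointer explicitly prints the address without the format/argument mismatch.

        #include <cstdio>
        #include <cstdlib>

        typedef struct _somestruct { int a; int b; } SOMESTRUCT, *LPSOMESTRUCT;

        int main() {
            LPSOMESTRUCT val = (LPSOMESTRUCT)std::malloc(sizeof(SOMESTRUCT));
            std::printf("%p\n", (void*)val);   // %p takes a void*, hence the cast
            std::free(val);
            return 0;
        }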

    Read the article

  • The problem only happens when I try to create a release...

    - by ace
    I'm sorry if I'm not presenting this right, but I truly cannot understand what the problem is. I have a project to hand in: 600 lines of code defined within a main, a .cpp, and a header file. If I compile the project with just a debug build and no release, it's fine. When I create the release build, the following error occurs for every function!!! 1st error: |36|multiple definition of `countLines(int&, std::vector const&)'| 2nd error: |36|first defined here| If someone will allow me, I can send them the entire code; that would be awesome - I have to have this done within 3 hours.

    Read the article

  • Break in Class Module vs. Break on Unhandled Errors (VB6 Error Trapping, Options Setting in IDE)

    - by Erx_VB.NExT.Coder
    Basically, I'm trying to understand the difference between the "Break in Class Module" and "Break on Unhandled Errors" that appear in the Visual Basic 6.0 IDE under the following path: Tools --> Options --> General --> Error Trapping The three options appear to be: Break on All Errors Break in Class Module Break on Unhandled Errors Now, apparently, according to MSDN, the second option (Break in Class Module) really just means "Break on Unhandled Errors in Class Modules". Also, this option appears to be set by default (i.e. I think it's set to this out of the box). What I am trying to figure out is, if I have the second option selected, do I get the third option (Break on Unhandled Errors) for free? That is, does it come included by default for all scenarios outside of the Class Module spectrum? To advise, I don't have any Class Modules in my currently active project. I have .bas modules though. Also, is it possible that by Class Modules they may be referring to normal .bas Modules as well? (this is my second sub-question). Basically, I just want the setting to ensure there won't be any surprises once the exe is released. I want as many errors to display as possible while I am developing, and none to be displayed when in release mode. Normally, I have two types of On Error Resume Next on my forms where there isn't explicit error handling, they are as follows: On Error Resume Next ' REQUIRED On Error Resume Next ' NOT REQUIRED The required ones are things like checking to see if an array has any length: if a call to its UBound errors out, that means it has no length; if it returns a value 0 or more, then it does have length (and therefore, exists). These types of Error statements need to remain active even while I am developing. However, the NOT REQUIRED ones shouldn't remain active while I am developing, so I have them all commented out to ensure that I catch all the errors that exist. Once I am ready to release the exe, I do a CTRL+H to find all occurrences of: 'On Error Resume Next ' NOT REQUIRED (You may have noticed they are commented out)... And replace them with: On Error Resume Next ' NOT REQUIRED ... The uncommented version, so that in release mode, if there are any leftover errors, they do not show to users. For more on the description by MSDN on the three options (which I've read twice and still don't find adequate) you can visit the following link: http://webcache.googleusercontent.com/search?q=cache:yUQZZK2n2IYJ:support.microsoft.com/kb/129876&hl=en&lr=lang_en%7Clang_tr&gl=au&tbs=lr:lang_1en%7Clang_1tr&prmd=imvns&strip=1 I'm also interested in hearing your thoughts if you feel like volunteering them (and this would be my tentative/totally optional third sub-question, that being, your thoughts on fall-back error handling techniques). Just to summarize, the first two questions were: do we get option 3 included in all non-class scenarios if we choose option 2? And, is it possible that when they use the term "Class Module" they may be referring to .bas Modules as well? (since a .bas Module is really just a class module that is pre-instantiated in the background during start-up). Thank you.

    Read the article

  • Webcast Series Part I: The Shifting of Healthcare’s Infrastructure Strategy – A lesson in how we got here

    - by Melissa Centurio Lopes
    Register today for the first part of a three-part webcast series and discover the changing strategy of healthcare capital planning and construction. Learn how Project Portfolio Management solutions are the key to financial discipline, increased operation efficiency and risk mitigation in this changing environment. Register here for the first webcast on Thursday, November 1, 2012 10:00am PT/ 1:00 p.m ET In this engaging and informative Webcast, Garrett Harley, Sr. Industry Strategist, Oracle Primavera and Thomas Koulouris, Director, PricewaterhouseCoopers will explore: Evolution of the healthcare delivery system Drivers & challenges facing the current healthcare infrastructure Importance of communication and integration between Providers and Contractors to their bottom lines View the evite for more details.

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture: class Widget extends Model { public function __construct( $data = null ) { $this->name = new FormField('length=20&label=Name:'); $this->manufactured = new FormDate; parent::__construct( $data ); // set above fields using incoming array } } Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory() which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?

    Read the article
