Search Results

Search found 5086 results on 204 pages for 'compiler constants'.

Page 32/204

  • Partial specialization with reference template parameter fails to compile in VS2005

    - by Blair Holloway
    I have code that boils down to the following: template struct Foo {}; template & I struct FooBar {}; //////// template struct Baz {}; template & I struct Baz< FooBar { static void func(FooBar& value); }; //////// struct MyStruct { static const Foo s_floatFoo; }; // Elsewhere: const Foo MyStruct::s_floatFoo; void callBaz() { typedef FooBar FloatFooBar; FloatFooBar myFloatFooBar; Baz::func(myFloatFooBar); } This compiles successfully under GCC, however, under VS2005, I get: error C2039: 'func' : is not a member of 'Baz' with [ T=FloatFooBar ] error C3861: 'func': identifier not found However, if I change const Foo<T>& I to const Foo<T>* I (passing I by pointer rather than by reference), and defining FloatFooBar as: typedef FooBar FloatFooBar; Both GCC and VS2005 are happy. What's going on? Is this some kind of subtle template substitution failure that VS2005 is handling differently to GCC, or a compiler bug? (The strangest thing: I thought I had the above code working in VS2005 earlier this morning. But that was before my morning coffee. I'm now not entirely certain I wasn't under some sort of caffeine-craving-induced delirium...)
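
    Since the template parameter lists were stripped from the excerpt above, here is a hedged, self-contained reconstruction of the pattern being described (my own guess at the shape of the code, with the parameter lists filled in by assumption, not the asker's exact source): a partial specialization keyed on a reference non-type template parameter.

        // Hypothetical reconstruction -- parameter lists are assumed, not taken from the question.
        template <typename T> struct Foo {};

        template <typename T, const Foo<T>& I> struct FooBar {};

        // Primary template.
        template <typename T> struct Baz {};

        // Partial specialization that matches FooBar instantiated with a reference parameter.
        template <typename T, const Foo<T>& I>
        struct Baz< FooBar<T, I> >
        {
            static void func(FooBar<T, I>& value) { (void)value; }
        };

        struct MyStruct
        {
            // Must have external linkage to be usable as a reference template argument in C++03.
            static const Foo<float> s_floatFoo;
        };
        const Foo<float> MyStruct::s_floatFoo = {};

        void callBaz()
        {
            typedef FooBar<float, MyStruct::s_floatFoo> FloatFooBar;
            FloatFooBar myFloatFooBar;
            // GCC selects the partial specialization; VS2005 reportedly falls back to the
            // primary template, which has no func(), hence error C2039.
            Baz<FloatFooBar>::func(myFloatFooBar);
        }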

    Read the article

  • GCC emits extra code for boost::shared_ptr dereference

    - by Checkers
    I have the following code: #include <boost/shared_ptr.hpp> struct Foo { int a; }; static int A; void func_shared(const boost::shared_ptr<Foo> &foo) { A = foo->a; } void func_raw(Foo * const foo) { A = foo->a; } I thought the compiler would create identical code, but for shared_ptr version an extra seemingly redundant instruction is emitted. Disassembly of section .text: 00000000 <func_raw(Foo*)>: 0: 55 push ebp 1: 89 e5 mov ebp,esp 3: 8b 45 08 mov eax,DWORD PTR [ebp+8] 6: 5d pop ebp 7: 8b 00 mov eax,DWORD PTR [eax] 9: a3 00 00 00 00 mov ds:0x0,eax e: c3 ret f: 90 nop 00000010 <func_shared(boost::shared_ptr<Foo> const&)>: 10: 55 push ebp 11: 89 e5 mov ebp,esp 13: 8b 45 08 mov eax,DWORD PTR [ebp+8] 16: 5d pop ebp 17: 8b 00 mov eax,DWORD PTR [eax] 19: 8b 00 mov eax,DWORD PTR [eax] 1b: a3 00 00 00 00 mov ds:0x0,eax 20: c3 ret I'm just curious, is this necessary, or it is just an optimizer's shortcoming? Compiling with g++ 4.1.2, -O3 -NDEBUG.
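
    The second mov eax,DWORD PTR [eax] is what the signature implies rather than an obvious optimizer miss: func_shared receives the address of the shared_ptr object, so it must first load the stored pointer out of it and only then load the member. A simplified sketch of what the compiler sees (the two-field layout is an assumption about boost's internals, reduced for illustration):

        struct Foo { int a; };

        // Stand-in for boost::shared_ptr<Foo>: a stored pointer plus a pointer
        // to the shared reference-count block (irrelevant for this access).
        struct shared_ptr_layout {
            Foo*  px;
            void* pn;
        };

        static int A;

        void func_shared_equivalent(const shared_ptr_layout& sp) {
            A = sp.px->a;   // load sp.px, then load px->a: two dependent loads
        }

        void func_raw_equivalent(Foo* const foo) {
            A = foo->a;     // the caller already performed the first load
        }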

    Read the article

  • Is there a standard lexer/parser tool for Python?

    - by Salim Fadhley
    A volunteer job requires us to convert a large number of LaTeX documents into ePub format. It's a series of open-source fiction books which have so far been produced only on paper via a print-on-demand service. We'd like to be able to offer the books to users of book-reader devices (such as the Kindle) which require the ePub format for best results. Fortunately, ePub is a very simple format, however there's no trivial way for LaTeX to produce the XHTML output required. We experimented with alternative LaTeX compilers (e.g. plastex) but in the end we figured that it would probably be a lot easier to simply write our own compiler which understands a tiny subset of the LaTeX language and compiles directly to XHTML / ePub. Previously I used a tool on Windows called GOLD. This allowed me to go directly from BNF grammars to a stub parser. It also allowed me to implement the parser in any language I liked (I'd choose Python). This product has to work on Linux, so I'm wondering if there's an equivalent toolchain that works as well under Ubuntu / Eclipse / Python. The idea is that we will take the grammar of TeX and just implement a teeny subset of that, but we do not want to spend a huge amount of time worrying about grammar and parsing. A parser generator would obviously save us a great deal of time. Sal UPDATE 1: Bonus marks for a solution with excellent documentation or tutorials.

    Read the article

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR-specific header file for an LPC2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (not at a memory address). The "IAR-safe" macro is: #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \ volatile __no_init ATTRIBUTE union \ { \ unsigned long NAME; \ BIT_STRUCT NAME ## _bit; \ } @ ADDRESS //declaration //(where __gpio0_bits is a structure that names //each of the 32 bits as P0_0, P0_1, etc) __IO_REG32_BIT(IO0PIN,0xE0028000,__READ_WRITE,__gpio0_bits); //usage IO0PIN = 0xAA55AA55; IO0PIN_bit.P0_5 = 0; This is my comparable "hardware independent" code: #define __IO_REG32_BIT(NAME, BIT_STRUCT)\ volatile union \ { \ unsigned long NAME; \ BIT_STRUCT NAME##_bit; \ } NAME; //declaration __IO_REG32_BIT(IO0PIN,__gpio0_bits); //usage IO0PIN.IO0PIN = 0xAA55AA55; IO0PIN.IO0PIN_bit.P0_5 = 1; This compiles and works, but quite obviously my "hardware independent" usage does not match the "IAR-safe" usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous union matter, but multiple attempts and variants have proven unsuccessful. Maybe the IAR GNU compiler supports anonymous unions and VS2008 does not. Thank you.
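
    One hedged way to get the IAR-style spelling back (a sketch only, not verified against VS2008): make the macro declare a file-scope anonymous union, so the member names themselves become the visible identifiers. Standard C++ requires a namespace-scope anonymous union to be static, which also means each translation unit gets its own copy - usually acceptable for a unit-test stub. The bit structure below is a hypothetical stand-in for __gpio0_bits.

        #define __IO_REG32_BIT(NAME, BIT_STRUCT)   \
            static union                           \
            {                                      \
                volatile unsigned long NAME;       \
                volatile BIT_STRUCT    NAME##_bit; \
            }

        // Hypothetical stand-in; the real __gpio0_bits names all 32 bits.
        struct __gpio0_bits
        {
            unsigned long P0_0 : 1, P0_1 : 1, P0_2 : 1,
                          P0_3 : 1, P0_4 : 1, P0_5 : 1;
        };

        __IO_REG32_BIT(IO0PIN, __gpio0_bits);   // declaration

        void example()
        {
            IO0PIN = 0xAA55AA55;     // same spelling as with the IAR macro
            IO0PIN_bit.P0_5 = 0;
        }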

    Read the article

  • Recognizing terminals in CFG productions that were not previously defined as tokens

    - by kmels
    I'm making a generator of LL(1) parsers; my input is a Coco/R language specification. I've already got a Scanner generator for that input. Suppose I've got the following specification: COMPILER 1. CHARACTERS digit="0123456789". TOKENS number = digit{digit}. decnumber = digit{digit}"."digit{digit}. PRODUCTIONS Expression = Term{"+"Term|"-"Term}. Term = Factor{"*"Factor|"/"Factor}. Factor = ["-"](Number|"("Expression")"). Number = (number|decnumber). END 1. So, if the parser generated from this grammar receives the word "1+1", it'd be accepted, i.e. a parse tree would be found. My question is, the character "+" was never defined as a token, but it appears in the production for the non-terminal "Expression". How should my generated Scanner recognize it? It would not recognize it as a token. Is this a valid input then? Should I add this terminal to TOKENS and then consider an error routine in the Scanner for it to skip it? How do usual language specifications handle this?
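
    One workable convention (a sketch of the idea, not Coco/R's actual generated code) is to treat every quoted literal that appears in PRODUCTIONS as an implicit token definition: the generator harvests them into a literal table, and the scanner falls back to that table after the explicitly declared TOKENS fail to match, so "+" gets its own token kind without ever appearing under TOKENS.

        #include <cstddef>

        enum TokenKind {
            TOK_number, TOK_decnumber,
            TOK_plus, TOK_minus, TOK_star, TOK_slash, TOK_lparen, TOK_rparen,
            TOK_error
        };

        // Literals harvested from the productions: "+" "-" "*" "/" "(" ")".
        struct LiteralToken { char text; TokenKind kind; };
        static const LiteralToken kLiterals[] = {
            { '+', TOK_plus }, { '-', TOK_minus }, { '*', TOK_star },
            { '/', TOK_slash }, { '(', TOK_lparen }, { ')', TOK_rparen }
        };

        // Called by the scanner when no TOKENS rule matched at the current character.
        TokenKind literalFallback(char c)
        {
            for (std::size_t i = 0; i < sizeof kLiterals / sizeof kLiterals[0]; ++i)
                if (kLiterals[i].text == c)
                    return kLiterals[i].kind;
            return TOK_error;   // genuinely unknown characters are lexical errors
        }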

    Read the article

  • Raw types and subtyping

    - by Dmitrii
    We have a generic class SomeClass<T>{ } We can write the line: SomeClass s= new SomeClass<String>(); It's OK, because the raw type is a supertype of the generic type. But SomeClass<String> s= new SomeClass(); is correct too. Why is it correct? I thought that type erasure happened before type checking, but that's wrong. From the Hacker's Guide to Javac: When the Java compiler is invoked with the default compile policy it performs the following passes: parse: Reads a set of *.java source files and maps the resulting token sequence into AST-Nodes. enter: Enters symbols for the definitions into the symbol table. process annotations: If requested, processes annotations found in the specified compilation units. attribute: Attributes the syntax trees. This step includes name resolution, type checking and constant folding. flow: Performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability. desugar: Rewrites the AST and translates away some syntactic sugar. generate: Generates source files or class files. Generics are syntactic sugar, hence type erasure happens in pass 6 (desugar), after type checking, which happens in pass 4 (attribute). I'm confused.

    Read the article

  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled on this quote, which refers to the C/C++ language in general, not just the Mac platform. With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert back the result to char. So my question: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant) so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit / 64-bit systems using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we might expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?
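
    To make the promotion point concrete, here is a small sketch (illustrative only; whether the promotion costs anything extra in practice depends on the target, and actual numbers would have to be measured): element storage differs by a factor of four, while the arithmetic in both loops is performed at int width.

        #include <cstdio>

        static char smallBuf[100000];   // 100,000 bytes
        static int  wideBuf[100000];    // 400,000 bytes with a 4-byte int

        int sumChars()
        {
            int total = 0;
            for (int i = 0; i < 100000; ++i)
                total += smallBuf[i];   // each char element is promoted to int for the add
            return total;
        }

        int sumInts()
        {
            int total = 0;
            for (int i = 0; i < 100000; ++i)
                total += wideBuf[i];    // already int-width; no promotion step
            return total;
        }

        int main()
        {
            std::printf("%lu vs %lu bytes\n",
                        (unsigned long)sizeof smallBuf, (unsigned long)sizeof wideBuf);
            std::printf("%d %d\n", sumChars(), sumInts());
            return 0;
        }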

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why when I watch the build output from a VC++ project in VS do I see: 1Compiling... 1a.cpp 1b.cpp 1c.cpp 1d.cpp 1e.cpp [etc...] 1Generating code... 1x.cpp 1y.cpp [etc...] The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers, I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" nor "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above). EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps. EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • How would I code a complex formula parser manually?

    - by StormianRootSolver
    Hm, this is language-agnostic. I would prefer doing it in C# or F#, but I'm more interested this time in the question "how would that work anyway". What I want to accomplish is: a) I want to LEARN it - it's about my ego this time, it's for a fun project where I want to show myself that I'm really good at this stuff b) I know a tiny little bit about EBNF (although I don't know yet how operator precedence works in EBNF - Irony.NET does it right, I checked the examples, but this is a bit ominous to me) c) My parser should be able to take this: 5 * (3 + (2 - 9 * (5 / 7)) + 9) for example and give me the right results d) To be quite frank, this seems to be the biggest problem in writing a compiler or even an interpreter for me. I would have no problem generating even 64-bit assembler code (I CAN write assembler manually), but the formula parser... e) Another thought: even simple computers (like my old Sharp 1246S with only about 2kB of RAM) can do that... it can't be THAT hard, right? And even very, very old programming languages have formula evaluation... BASIC is from 1964 and it could already calculate the kind of formula I presented as an example f) A few ideas, a few inspirations would be really enough - I just have no clue how to do operator precedence and the parentheses - I DO, however, know that it involves an AST and that many people use a stack So, what do you think?
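
    For exactly this class of input, the classic answer is a recursive descent parser with one function per precedence level; precedence falls out of the call structure and parentheses are handled at the lowest level. Here is a minimal sketch (names are illustrative; it evaluates directly instead of building an AST, and it does no error handling):

        #include <cctype>
        #include <cstdio>

        static const char* p;        // cursor into the input

        static double parseExpr();   // expr := term (('+'|'-') term)*

        static void skipSpaces() { while (*p == ' ') ++p; }

        static double parseNumber()
        {
            double v = 0;
            while (std::isdigit(static_cast<unsigned char>(*p)))
                v = v * 10 + (*p++ - '0');
            return v;
        }

        static double parseFactor()  // factor := number | '(' expr ')' | '-' factor
        {
            skipSpaces();
            if (*p == '(') { ++p; double v = parseExpr(); skipSpaces(); ++p; return v; }
            if (*p == '-') { ++p; return -parseFactor(); }
            return parseNumber();
        }

        static double parseTerm()    // term := factor (('*'|'/') factor)*
        {
            double v = parseFactor();
            for (;;) {
                skipSpaces();
                if      (*p == '*') { ++p; v *= parseFactor(); }
                else if (*p == '/') { ++p; v /= parseFactor(); }
                else return v;
            }
        }

        static double parseExpr()
        {
            double v = parseTerm();
            for (;;) {
                skipSpaces();
                if      (*p == '+') { ++p; v += parseTerm(); }
                else if (*p == '-') { ++p; v -= parseTerm(); }
                else return v;
            }
        }

        int main()
        {
            p = "5 * (3 + (2 - 9 * (5 / 7)) + 9)";
            std::printf("%f\n", parseExpr());
            return 0;
        }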

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about ObjectiveC's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference and thus ObjectiveC is a somewhat unique situation with having to deal with copying what's on the stack.
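
    The usual lowering, for a lexically scoped language, is a plain record: the "referencing environment" becomes fields of a compiler-generated object, copied for by-value captures and referenced (or heap-promoted) for by-reference captures. A rough sketch of the general technique (not any particular runtime's actual layout):

        #include <cstdio>

        // Source-level idea:
        //   int base = 10;   // captured by value: snapshotted at closure creation
        //   int hits = 0;    // captured by reference: shared and mutable
        //   f = [base, &hits](int x) { ++hits; return base + x; };

        // What the compiler conceptually generates:
        struct Closure
        {
            int  base;   // copy made when the closure is created
            int* hits;   // refers back to the original variable (it must outlive the closure)

            int operator()(int x) const
            {
                ++*hits;             // mutates the original
                return base + x;     // uses the frozen copy
            }
        };

        int main()
        {
            int base = 10;
            int hits = 0;
            Closure f = { base, &hits };

            base = 999;              // does not affect f: base was copied at creation
            int r = f(5);
            std::printf("%d %d\n", r, hits);   // prints 15 1
            return 0;
        }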

    Read the article

  • Enum exceeding the 65535 byte limit of the static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large Enum of so-called Descriptors that I want to use as a reference list in my model. But now I've come across a compiler/VM limit for the first time, and so I'm looking for the best solution to handle this. Here is my error: The code for the static initializer is exceeding the 65535 bytes limit. It is clear where this comes from - my Enum simply has far too many elements. But I need those elements - there is no way to reduce that set. Initially I planned to use a single Enum because I want to make sure that all elements within the Enum are unique. It is used in a Hibernate persistence context where the reference to the Enum is stored as a String value in the database. So this must be unique! The content of my Enum can be divided into several groups of elements that belong together. But splitting the Enum would remove the uniqueness safety I get at compile time. Or can this be achieved with multiple Enums in some way? My only current idea is to define some Interface called Descriptor and code several Enums implementing it. This way I hope to be able to use the Hibernate Enum mapping as if it were a single Enum. But I'm not even sure if this will work. And I lose the uniqueness safety. Any ideas how to handle this case?

    Read the article

  • Help with Java Generics: Cannot use "Object" as argument for "? extends Object"

    - by AniDev
    Hello, I have the following code: import java.util.*; public class SellTransaction extends Transaction { private Map<String,? extends Object> origValueMap; public SellTransaction(Map<String,? extends Object> valueMap) { super(Transaction.Type.Sell); assignValues(valueMap); this.origValueMap=valueMap; } public SellTransaction[] splitTransaction(double splitAtQuantity) { Map<String,? extends Object> valueMapPart1=origValueMap; valueMapPart1.put(nameMappings[3],(Object)new Double(splitAtQuantity)); Map<String,? extends Object> valueMapPart2=origValueMap; valueMapPart2.put(nameMappings[3],((Double)origValueMap.get(nameMappings[3]))-splitAtQuantity); return new SellTransaction[] {new SellTransaction(valueMapPart1),new SellTransaction(valueMapPart2)}; } } The code fails to compile when I call valueMapPart1.put and valueMapPart2.put, with the error: The method put(String, capture#5-of ? extends Object) in the type Map is not applicable for the arguments (String, Object) I have read on the Internet about generics and wildcards and captures, but I still don't understand what is going wrong. My understanding is that the value of the Map's can be any class that extends Object, which I think might be redundant, because all classes extend Object. And I cannot change the generics to something like ? super Object, because the Map is supplied by some library. So why is this not compiling? Also, if I try to cast valueMap to Map<String,Object>, the compiler gives me that 'Unchecked conversion' warning. Thanks!

    Read the article

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a Via C3 processor. The Java application has several compiled shared libs that are accessed via JNI. The Via C3 processor is supposed to be i686 compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the lack of some specific and optional instructions of the i686 set in the C3 processor. These instructions, missing from the C3's implementation of the i686 set, are used by default by the GCC compiler when i686 optimizations are enabled. The solution in this case was to go with an i386-compiled version of the Ubuntu distribution. The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, in this case an Intel P4. Afterwards the distribution needed some hacking to get it running, such as replacing some packages (such as the kernel) with the i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that). My question is: Is there any tool or way to find out which specific architecture a binary file (executable or library) is aimed at, given that "file" does not give that much information?

    Read the article

  • What C++ template issue is going on with this error?

    - by WilliamKF
    Running gcc v3.4.6 on the Botan v1.8.8 I get the following compile time error building my application after successfully building Botan and running its self test: ../../src/Botan-1.8.8/build/include/botan/secmem.h: In member function `Botan::MemoryVector<T>& Botan::MemoryVector<T>::operator=(const Botan::MemoryRegion<T>&)': ../../src/Botan-1.8.8/build/include/botan/secmem.h:310: error: missing template arguments before '(' token What is this compiler error telling me? Here is a snippet of secmem.h that includes line 130: [...] /** * This class represents variable length buffers that do not * make use of memory locking. */ template<typename T> class MemoryVector : public MemoryRegion<T> { public: /** * Copy the contents of another buffer into this buffer. * @param in the buffer to copy the contents from * @return a reference to *this */ MemoryVector<T>& operator=(const MemoryRegion<T>& in) { if(this != &in) set(in); return (*this); } // This is line 130! [...]
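
    A plausible reading based only on the snippet (not verified against the full Botan source): set() is a member inherited from the dependent base class MemoryRegion<T>, and since gcc 3.4, unqualified names are no longer looked up in dependent bases during two-phase lookup, so plain set(in) does not find it (it may instead bind to an unrelated class template in scope, which would explain the "missing template arguments" wording). Writing this->set(in) or MemoryRegion<T>::set(in) defers the lookup to instantiation time. A minimal sketch of the pattern:

        template <typename T>
        struct MemoryRegion
        {
            void set(const MemoryRegion<T>& other) { (void)other; /* copy contents */ }
        };

        template <typename T>
        struct MemoryVector : public MemoryRegion<T>
        {
            MemoryVector<T>& operator=(const MemoryRegion<T>& in)
            {
                if (this != &in)
                    this->set(in);   // a plain set(in) is rejected by gcc 3.4 and later
                return *this;
            }
        };

        int main()
        {
            MemoryVector<int> a, b;
            a = b;
            return 0;
        }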

    Read the article

  • How would I instruct extconf.rb to use additional g++ optimization flags, and which are advisable?

    - by mohawkjohn
    I'm using Rice to write a C++ extension for a Ruby gem. The extension is in the form of a shared object (.so) file. This requires 'mkmf-rice' instead of 'mkmf', but the two (AFAIK) are pretty similar. By default, the compiler uses the flags -g -O2. Personally, I find this kind of silly, since it's hard to debug with any optimization enabled. I've resorted to editing the Makefile to take out the flags I don't like (e.g., removing -fPIC -shared when I need to debug using main() instead of Ruby's hooks). But I figure there's got to be a better way. I know I can just do $CPPFLAGS += " -DRICE" to add additional flags. But how do I remove things without editing the Makefile directly? A secondary question: what optimizations are safe for shared objects loaded by Ruby? Can I do things like -funroll-loops? What do you all recommend? It's a scientific computing project, so the faster the better. Memory is not much of an issue. Many thanks!

    Read the article

  • Java Newbie can't do simple Math, operator error

    - by elguapo-85
    Trying to do some really basic math here, but my lack of understanding of Java is causing some problems for me. double[][] handProbability = new double[][] {{0,0,0},{0,0,0},{0,0,0}}; double[] handProbabilityTotal = new double[] {0,0,0}; double positivePot = 0; double negativePot = 0; int localAhead = 0; int localTied = 1; int localBehind = 2; //do some stuff that adds values to handProbability and handProbabilityTotal positivePot = (handProbability[localBehind][localAhead] + (handProbability[localBehind][localTied] / 2.0) + (handProbability[localTied][localAhead] / 2.0) ) / (handProbabilityTotal[localBehind] + (handProbability[localTied] / 2.0)); negativePot = (handProbability[localAhead][localBehind] + (handProbability[localAhead][localTied] / 2.0) + (handProbability[localTied][localBehind] / 2.0) ) / (handProbabilityTotal[localAhead] + (handProbabilityTotal[localTied] / 2.0)); The last two lines are giving me problems (sorry for their lengthiness). Compiler Errors: src/MyPokerClient/MyPokerClient.java:180: operator / cannot be applied to double[],double positivePot = ( handProbability[localBehind][localAhead] + (handProbability[localBehind][localTied] / 2.0) + (handProbability[localTied][localAhead] / 2.0) ) / (handProbabilityTotal[localBehind] + (handProbability[localTied] / 2.0) ); ^ src/MyPokerClient/MyPokerClient.java:180: operator + cannot be applied to double, positivePot = ( handProbability[localBehind][localAhead] + (handProbability[localBehind][localTied] / 2.0) + (handProbability[localTied][localAhead] / 2.0) ) / (handProbabilityTotal[localBehind] + (handProbability[localTied] / 2.0) ); ^ src/MyPokerClient/MyPokerClient.java:180: operator / cannot be applied to double, positivePot = ( handProbability[localBehind][localAhead] + (handProbability[localBehind][localTied] / 2.0) + (handProbability[localTied][localAhead] / 2.0) ) / (handProbabilityTotal[localBehind] + (handProbability[localTied] / 2.0) ); Not really sure what the problem is. You shouldn't need anything special for basic math, right?

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches: The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++ which doesn't have it. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial? Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
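
    For what it's worth, approach 3 (the parser pulls tokens on demand) does not actually need yield in C++: the lexer just keeps its position as ordinary state and exposes a next() call, and the parser holds a lexer and asks for one token at a time, so no token vector is ever materialized. A minimal sketch with illustrative names:

        #include <cctype>
        #include <string>

        struct Token
        {
            enum Kind { Number, Plus, Minus, End } kind;
            std::string text;
        };

        class Lexer
        {
        public:
            explicit Lexer(const std::string& src) : src_(src), pos_(0) {}

            Token next()   // produces exactly one token per call
            {
                while (pos_ < src_.size() && src_[pos_] == ' ') ++pos_;
                if (pos_ >= src_.size()) return make(Token::End, "");
                char c = src_[pos_];
                if (std::isdigit(static_cast<unsigned char>(c))) {
                    std::size_t start = pos_;
                    while (pos_ < src_.size() &&
                           std::isdigit(static_cast<unsigned char>(src_[pos_]))) ++pos_;
                    return make(Token::Number, src_.substr(start, pos_ - start));
                }
                ++pos_;   // single-character operator (only + and - in this sketch)
                return make(c == '+' ? Token::Plus : Token::Minus, std::string(1, c));
            }

        private:
            Token make(Token::Kind k, const std::string& t) { Token tok = { k, t }; return tok; }
            std::string src_;
            std::size_t pos_;
        };

        // A recursive descent parser then simply owns a Lexer and calls next()
        // each time it consumes a token.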

    Read the article

  • Package name conflicting with getters and setters?

    - by MrKishi
    Hello, folks. So, I came across this compilation error a while ago. As there's an easy fix and I didn't find anything relevant at the time, I eventually let it go. I just remembered it and I'm now wondering if this is really part of the language grammar (which I highly doubt) or if it's a compiler bug. I'm purely curious about this -- it doesn't really affect development, but it would be nice to see if any of you have seen this already. package view { import flash.display.Sprite; public class Main extends Sprite { private var _view:Sprite = new Sprite(); public function Main() { this.test(); } private function test():void { trace(this.view.x, this.view.y); //1178: Attempted access of inaccessible property x through a reference with static type view:Main. //1178: Attempted access of inaccessible property y through a reference with static type view:Main. //Note that I got this due to the package name. //It runs just fine if I rename the package or getter. } public function get view():Sprite { return this._view; } } }

    Read the article

  • How do I stop MSYS from transforming my compiler options?

    - by Carl Norum
    Is there a way to stop MSYS/MinGW from transforming what it thinks are paths on my command lines? I have a project that's using nmake & Microsoft Visual Studio 2003 (yeecccch). I have the build system all ported and ready to go for GNU make (and tested with Cygwin). Something weird is happening to my compiler flags when I try to compile in an MSYS environment, though. Here's a simplified example: $ cl /nologo Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.6030 for 80x86 Copyright (C) Microsoft Corporation. All rights reserved. /out:nologo.exe C:/msys/1.0/nologo LINK : fatal error LNK1181: cannot open input file 'C:/msys/1.0/nologo.obj' As you can see, MSYS is transforming the /nologo compiler switch into a windows path, and then sending that to the compiler. I really don't want this to happen - in fact I'd be happy if MSYS never transformed any paths - my build system had to take care of all that when I first ported to Cygwin. Is there a way to make that happen? It does work to change the command to $ cl -nologo Which produces the expected results, but this build system is very large and very painful to update. I really don't want to have to go in and change every use of a / for a flag to a -. In particular, there may be tools that don't support the use of the - at all, and then I'll really be stuck. Thanks for any suggestions!

    Read the article

  • hand coding a parser

    - by John Leidegren
    For all you compiler gurus, I wanna write a recursive descent parser and I wanna do it with just code. No generating lexers and parsers from some other grammar, and don't tell me to read the dragon book, I'll come around to that eventually. I wanna get into the gritty details about implementing a lexer and parser for a reasonably simple language, say CSS. And I wanna do this right. This will probably end up being a series of questions, but right now I'm starting with a lexer. Tokenization rules for CSS can be found here. I find myself writing code like this (hopefully you can infer the rest from this snippet): public CssToken ReadNext() { int val; while ((val = _reader.Read()) != -1) { var c = (char)val; switch (_stack.Top) { case ParserState.Init: if (c == ' ') { continue; // ignore } else if (c == '.') { _stack.Transition(ParserState.SubIdent, ParserState.Init); } break; case ParserState.SubIdent: if (c == '-') { _token.Append(c); } _stack.Transition(ParserState.SubNMBegin); break; What is this called? And how far off am I from something reasonably well understood? I'm trying to balance something which is fair in terms of efficiency and easy to work with; using a stack to implement some kind of state machine is working quite well, but I'm unsure how to continue like this. What I have is an input stream, from which I can read 1 character at a time. I don't do any lookahead right now, I just read the character and then, depending on the current state, try to do something with it. I'd really like to get into the mindset of writing reusable snippets of code. This Transition method is currently meant to do that: it will pop the current state off the stack and then push the arguments in reverse order. That way, when I write Transition(ParserState.SubIdent, ParserState.Init) it will "call" a subroutine SubIdent which will, when complete, return to the Init state. The parser will be implemented in much the same way. Currently, having everything in a single big method like this allows me to easily return a token when I find one, but it also forces me to keep everything in one single big method. Is there a nice way to split these tokenization rules into separate methods? Any input/advice on the matter would be greatly appreciated!
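
    One common way to split the rules up (a language-neutral sketch, written here in C++ rather than C#; names and the token set are illustrative, not the full CSS tokenization rules) is to give the lexer peek()/advance() primitives and one private method per token class, so each rule becomes an ordinary function instead of a case in a state-stack switch:

        #include <cctype>
        #include <string>

        class CssLexer
        {
        public:
            explicit CssLexer(const std::string& src) : src_(src), pos_(0) {}

            std::string readNext()   // returns one token's text, or "" at end of input
            {
                skipWhitespace();
                if (pos_ >= src_.size()) return "";
                char c = peek();
                if (c == '.')
                    return readClassSelector();
                if (std::isalpha(static_cast<unsigned char>(c)) || c == '-')
                    return readIdent();
                advance();           // fall back: single-character token
                return std::string(1, c);
            }

        private:
            char peek() const     { return src_[pos_]; }
            void advance()        { ++pos_; }
            void skipWhitespace() { while (pos_ < src_.size() && peek() == ' ') advance(); }

            std::string readIdent()   // ident := [-a-zA-Z0-9_]+ (simplified)
            {
                std::size_t start = pos_;
                while (pos_ < src_.size() &&
                       (std::isalnum(static_cast<unsigned char>(peek())) ||
                        peek() == '-' || peek() == '_'))
                    advance();
                return src_.substr(start, pos_ - start);
            }

            std::string readClassSelector()   // ".foo" -> consume '.' then an ident
            {
                advance();                    // eat '.'
                return "." + readIdent();
            }

            std::string src_;
            std::size_t pos_;
        };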

    Read the article

  • Using a function with a reference parameter as a function with a pointer parameter?

    - by epatel
    Today I stumbled over a piece of code that looked horrifying to me. The pieces was chattered in different files, I have tried write the gist of it in a simple test case below. The code base is routinely scanned with FlexeLint on a daily basis, but this construct has been laying in the code since 2004. The thing is that a function implemented with a parameter passing using references is called as a function with a parameter passing using pointers...due to a function cast. The construct has worked since 2004 on Irix and now when porting it actually do work on Linux/gcc too. My question now. Is this a construct one can trust? I can understand if compiler constructors implement the reference passing as it was a pointer, but is it reliable? Are there hidden risks? Should I change the fref(..) to use pointers and risk braking anything in the process? What to you think? #include <iostream> using namespace std; // ---------------------------------------- // This will be passed as a reference in fref(..) struct string_struct { char str[256]; }; // ---------------------------------------- // Using pointer here! void fptr(const char *str) { cout << "fptr: " << str << endl; } // ---------------------------------------- // Using reference here! void fref(string_struct &str) { cout << "fref: " << str.str << endl; } // ---------------------------------------- // Cast to f(const char*) and call with pointer void ftest(void (*fin)()) { void (*fcall)(const char*) = (void(*)(const char*))fin; fcall("Hello!"); } // ---------------------------------------- // Let's go for a test int main() { ftest((void (*)())fptr); // test with fptr that's using pointer ftest((void (*)())fref); // test with fref that's using reference return 0; }

    Read the article

  • How to Work Around Limitations in Generic Type Constraints in C#?

    - by Jose
    Okay I'm looking for some input, I'm pretty sure this is not currently supported in .NET 3.5 but here goes. I want to require a generic type passed into my class to have a constructor like this: new(IDictionary<string,object>) so the class would look like this public MyClass<T> where T : new(IDictionary<string,object>) { T CreateObject(IDictionary<string,object> values) { return new T(values); } } But the compiler doesn't support this, it doesn't really know what I'm asking. Some of you might ask, why do you want to do this? Well I'm working on a pet project of an ORM so I get values from the DB and then create the object and load the values. I thought it would be cleaner to allow the object just create itself with the values I give it. As far as I can tell I have two options: 1) Use reflection(which I'm trying to avoid) to grab the PropertyInfo[] array and then use that to load the values. 2) require T to support an interface like so: public interface ILoadValues { void LoadValues(IDictionary values); } and then do this public MyClass<T> where T:new(),ILoadValues { T CreateObject(IDictionary<string,object> values) { T obj = new T(); obj.LoadValues(values); return obj; } } The problem I have with the interface I guess is philosophical, I don't really want to expose a public method for people to load the values. Using the constructor the idea was that if I had an object like this namespace DataSource.Data { public class User { protected internal User(IDictionary<string,object> values) { //Initialize } } } As long as the MyClass<T> was in the same assembly the constructor would be available. I personally think that the Type constraint in my opinion should ask (Do I have access to this constructor? I do, great!) Anyways any input is welcome.

    Read the article

  • Is there a programming language with semantics close to English?

    - by ivo s
    Most languages allow you to tweak, to a certain extent, parts of the syntax (C++, C#) and/or the semantics that you will be using in your code (Katahdin, Lua). But I have not heard of a language that lets you completely define how your code will look. So is there some existing language that has the capability to override all syntax and define semantics? An example of what I want to do, starting from the C# code below: foreach(Fruit fruit in Fruits) { if(fruit is Apple) { fruit.Price = fruit.Price/2; } } I want to be able to write the above code in my perfect language like this: Check if any fruits are Macintosh apples and discount the price by 50%. The advantages that come to my mind, looking from a coder's perspective at this "imaginary" language, are: It's very clear what is going on (self-descriptive) - it's plain English; after all, even a kid would understand my program. It hides all the complexities which I have to write in C#. And why should I care to learn if statements, arithmetic operators, etc., since they are already implemented? The disadvantages that I see for a coder who will maintain this program are: Maybe you would express this program differently from me, so you may not get all the information that I've expressed in my sentence. Programs can be quite verbose and hard to debug. But if it were possible to even approximate this type of syntax, maybe more people would start programming, right? That would be amazing, I think. I could go to work and just write an essay to draw a square on a WinForm like this: Create a form called MyGreetingForm. Draw a square in the middle of MyGreetingForm with a side of 100 points. In the middle of the square write "Hello! Click here to continue" in Arial font. In the above code the parser must basically guess that I want to use the unnamed square from the previous sentence; it'd be hard to write such a smart parser, I guess, yet what I want to do is so simple. If the user clicks on the square in the middle of MyGreetingForm, show MyMainForm. In the above code the compiler must basically: 1) generate an event handler, 2) check if there is any square in the middle of the form, and if there is, 3) hide the form and show another form. It looks very hard to do, but it doesn't look impossible to me to at least approximate this (I can personally write a parser to perform the 3 steps above, no problem, and it's basically the same thing the compiler has to do anyway when you write a.MyEvent += handler; in C#, so I don't see a problem here). So I'm thinking maybe somebody has already done something like this? Or is there some practical burden of complexity in creating such an 'essay style' programming language which I can't see? I mean, what's the worst that can happen if the parser is not that good? Your program will crash, so you have to re-word it :)

    Read the article

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only: Addition (c = a + b) Subtraction (c = a - b) Division (c = a / b) Multiplication (c = a * b) Modulus (c = a % b) Minimum (c = min(a, b)) Maximum (c = max(a, b)) Comparisons (c = (a < b), c = (a == b), c = (a <= b), et.c.) Jumps (goto, for, et.c.) The bitwise operations I want to be able to support are: Or (c = a | b) And (c = a & b) Xor (c = a ^ b) Left Shift (c = a << b) Right Shift (c = a >> b) (All integers are signed so this is a problem) Signed Shift (c = a >>> b) One's Complement (a = ~b) (Already found a solution, see below) Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data & instruction memory is abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half as many cycles as arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code: // Bitwise one's complement b = ~a; // Arithmetic one's complement b = -1 - a; I also remember the old shift hack when dividing by a power of two, so the bitwise shift can be expressed as: // Bitwise left shift b = a << 4; // Arithmetic left shift b = a * 16; // 2^4 = 16 // Signed right shift b = a >>> 4; // Arithmetic right shift b = a / 16; For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture had supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications: b = 1; switch (a) { case 15: b = b * 2; case 14: b = b * 2; // ... exploiting fallthrough (instruction memory is magnitudes larger) case 2: b = b * 2; case 1: b = b * 2; } Or a Set & Jump approach: switch (a) { case 15: b = 32768; break; case 14: b = 16384; break; // ... exploiting the fact that a jump is faster than one additional mul // at the cost of doubling the instruction memory footprint. case 2: b = 4; break; case 1: b = 2; break; }
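
    For the core three operations, a bit-serial approach needs nothing beyond division, modulus, multiplication and addition: peel off one bit of each operand per iteration, combine the two bits arithmetically, and add the weighted result back in. A sketch (assuming non-negative values; negative operands would first need to be mapped to their two's-complement bit patterns, which is left out here, and the loop is written to stay within 16-bit range):

        int bitwiseAnd(int a, int b)
        {
            int result = 0;
            int bit = 16384;                   // highest weight needed for non-negative 16-bit values
            while (bit >= 1) {
                int abit = (a / bit) % 2;      // extract one bit of a
                int bbit = (b / bit) % 2;      // extract one bit of b
                result = result + bit * (abit * bbit);   // 1 only if both bits are set
                bit = bit / 2;
            }
            return result;
        }

        int bitwiseOr(int a, int b)
        {
            int result = 0;
            int bit = 16384;
            while (bit >= 1) {
                int abit = (a / bit) % 2;
                int bbit = (b / bit) % 2;
                result = result + bit * (abit + bbit - abit * bbit);   // 1 if at least one bit is set
                bit = bit / 2;
            }
            return result;
        }

        int bitwiseXor(int a, int b)
        {
            int result = 0;
            int bit = 16384;
            while (bit >= 1) {
                int abit = (a / bit) % 2;
                int bbit = (b / bit) % 2;
                result = result + bit * ((abit + bbit) % 2);   // 1 if exactly one bit is set
                bit = bit / 2;
            }
            return result;
        }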

    Read the article
