Search Results

Search found 5086 results on 204 pages for 'compiler constants'.

Page 85/204 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • C++ Operator Ambiguity

    - by Scott
    Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, because the code compiled on my desktop; however, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do. I have made my own vector class called Vector4 which looks something like this:

        class Vector4 {
        private:
            GLfloat vector[4];
            ...
        };

    Then I have these operators, which are causing the problem:

        operator GLfloat* () { return vector; }
        operator const GLfloat* () const { return vector; }
        GLfloat& operator [] (const size_t i) { return vector[i]; }
        const GLfloat& operator [] (const size_t i) const { return vector[i]; }

    I have the conversion operators so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler:

        enum {x, y, z, w};
        Vector4 v(1.0, 2.0, 3.0, 4.0);
        glTranslatef(v[x], v[y], v[z]);

    Here are the candidates:

        candidate 1: const GLfloat& Vector4::operator[](size_t) const
        candidate 2: operator[](const GLfloat*, int) <built-in>

    Why would it try to convert my Vector4 to a GLfloat* when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
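
    A minimal sketch of one common way out, for illustration only: drop the implicit conversion operators and expose the raw array through a named accessor, so the member operator[] is the only viable candidate. The data() name is illustrative, not part of the asker's class, and GLfloat is stubbed with a local typedef (it normally comes from the OpenGL headers).

        #include <cstddef>

        typedef float GLfloat;   // stand-in for the definition in <GL/gl.h>

        class Vector4 {
        public:
            GLfloat&       operator[](std::size_t i)       { return v[i]; }
            const GLfloat& operator[](std::size_t i) const { return v[i]; }

            // named access replaces the implicit conversions,
            // e.g. glVertex3fv(vec.data());
            GLfloat*       data()       { return v; }
            const GLfloat* data() const { return v; }

        private:
            GLfloat v[4];
        };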

    Read the article

  • Iterating over member typed collection fails when using untyped reference to generic object

    - by Alexander Pavlov
    Could someone clarify why iterate1() is not accepted by the compiler (Java 1.6)? I do not see why iterate2() and iterate3() are much better. This paragraph is added to avoid silly "Your post does not have much context to explain the code sections; please explain your scenario more clearly." protection.

        import java.util.Collection;
        import java.util.HashSet;

        public class Test<T> {
            public Collection<String> getCollection() {
                return new HashSet<String>();
            }

            public void iterate1(Test test) {
                for (String s : test.getCollection()) {
                    // ...
                }
            }

            public void iterate2(Test test) {
                Collection<String> c = test.getCollection();
                for (String s : c) {
                    // ...
                }
            }

            public void iterate3(Test<?> test) {
                for (String s : test.getCollection()) {
                    // ...
                }
            }
        }

    Compiler output:

        $ javac Test.java
        Test.java:11: incompatible types
        found   : java.lang.Object
        required: java.lang.String
                for (String s : test.getCollection()) {
                                                   ^
        Note: Test.java uses unchecked or unsafe operations.
        Note: Recompile with -Xlint:unchecked for details.
        1 error

    Read the article

  • Boost Include Files in VC++

    - by Dr. K
    For the last few years, I have been exclusively a C# developer. Previously, I developed in C++ and have a C++ application that I built about 3 years ago using VS2005. It made extensive use of the Boost libraries. I recently decided to brush off the old app and rebuild it in VS2008 with the latest version of Boost (the latest version with the "easy" installation program from BoostPro Computing), 1.39. Previously when I had the program running I was at 1.33. Also, the last time the program was running was at least 2 OS installations ago. The Boost installation is located on my machine at: "C:\Program Files\boost\boost_1_39". Anyway, I have done the following: Set the project's "Additional Include Directories" directory to "C:\Program Files\boost\boost_1_39" Added "C:\Program Files\boost\boost_1_39" to VS2008's Tools - Options - Projects and Solutions - VC++ Directories - Include Files I have a number of Boost includes in my stdafx.h file. The compiler fails upon attempting to open the first one - #include <boost/algorithm/string/string.hpp> I have confirmed that the above file is indeed located at "C:\Program Files\boost\boost_1_39\boost\algorithm\string\string.hpp" I continue to get: fatal error C1083: Cannot open include file: 'boost/algorithm/string/string.hpp': No such file or directory Any tips on what else to check would be greatly appreciated. Again, this is an application that compiled fine a few years ago, but the source has now been moved to a new machine/compiler.

    Read the article

  • Isn't an Iterator in c++ a kind of a pointer?

    - by Bilthon
    OK, this time I decided to make a list using the STL. I need to create a dedicated TCP socket for each client, so every time I get a connection I instantiate a socket and add a pointer to it to a list:

        list<MyTcp*> SocketList;      // the list of pointers to sockets
        list<MyTcp*>::iterator it;    // an iterator over the list of pointers to TCP sockets

    Putting a new pointer to a socket in the list was easy, but now every time the connection ends I should disconnect the socket and delete the pointer so I don't get a huge memory leak, right? Well, I thought I was doing OK with this:

        it = SocketList.begin();
        while (it != SocketList.end()) {
            if ((*it)->getClientId() == id) {
                pSocket = it;            // <-------------- compiler complains at this line
                SocketList.remove(pSocket);
                pSocket->Disconnect();
                delete pSocket;
                break;
            }
        }

    But the compiler is saying this:

        error: invalid cast from type ‘std::_List_iterator<MyTcp*>’ to type ‘MyTcp*’

    Can someone help me here? I thought I was doing things right; isn't an iterator, at any given time, just pointing to one of the elements of the list? How can I fix it?
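
    A minimal sketch of one way to write the loop, assuming a MyTcp class with getClientId() and Disconnect() as in the question (the class body below is only a stub for illustration): the iterator has to be dereferenced to get the MyTcp*, and the element is erased via the iterator rather than with remove().

        #include <list>

        // stub standing in for the asker's socket class
        class MyTcp {
        public:
            explicit MyTcp(int id) : id_(id) {}
            int  getClientId() const { return id_; }
            void Disconnect() { /* close the connection */ }
        private:
            int id_;
        };

        void removeClient(std::list<MyTcp*>& sockets, int id)
        {
            for (std::list<MyTcp*>::iterator it = sockets.begin(); it != sockets.end(); ++it) {
                if ((*it)->getClientId() == id) {
                    MyTcp* pSocket = *it;   // dereference the iterator to get the pointer
                    sockets.erase(it);      // erase by iterator (remove() takes a value)
                    pSocket->Disconnect();
                    delete pSocket;
                    break;                  // the iterator is invalid now, so stop
                }
            }
        }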

    Read the article

  • Where to add an overloaded operator for the tr1::array?

    - by phlipsy
    Since I need to add an operator& for std::tr1::array<bool, N>, I wrote the following lines:

        template<std::size_t N>
        std::tr1::array<bool, N> operator& (const std::tr1::array<bool, N>& a,
                                            const std::tr1::array<bool, N>& b)
        {
            std::tr1::array<bool, N> result;
            std::transform(a.begin(), a.end(), b.begin(), result.begin(),
                           std::logical_and<bool>());
            return result;
        }

    Now I don't know in which namespace I have to put this function. I consider the std namespace a restricted area: only total specializations and overloaded function templates are allowed to be added by the user. Putting it into the global namespace isn't "allowed" either, in order to prevent pollution of the global namespace and clashes with other declarations. And finally, putting this function into the namespace of the project doesn't work, since the compiler won't find it there. What should I do? I don't want to write a new array class put into the project namespace, even though in that case the compiler would find the right namespace via argument-dependent name lookup. Or is this the only possible way, because writing a new operator for existing classes means extending their interfaces, and that isn't allowed either for standard classes?
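
    A minimal sketch of the wrapper idea mentioned at the end, under the assumption that a thin project-local type is acceptable: because the wrapper lives in the project's namespace, argument-dependent lookup finds the operator there. The names project and bool_array are illustrative only, and the TR1 header path is a GCC assumption (other toolchains spell it differently).

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <tr1/array>   // assumption: GCC's TR1 header

        namespace project {

        template<std::size_t N>
        struct bool_array : std::tr1::array<bool, N> {};

        template<std::size_t N>
        bool_array<N> operator& (const bool_array<N>& a, const bool_array<N>& b)
        {
            bool_array<N> result;
            std::transform(a.begin(), a.end(), b.begin(), result.begin(),
                           std::logical_and<bool>());
            return result;   // found by ADL because bool_array lives in project
        }

        } // namespace project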

    Read the article

  • C++: IDE Code assistance (Linux)

    - by Martijn Courteaux
    Hi, my IDE (NetBeans) thinks this is wrong code, but it compiles correctly:

        std::cout << "i = " << i << std::endl;
        std::cout << add(5, 7) << std::endl;
        std::string test = "Boe";
        std::cout << test << std::endl;

    It always says: unable to resolve identifier .... (.... = cout, endl, string). So I think it has something to do with the code assistance, and that I have to change/add/remove some folders. Currently, I have these include folders:

    C compiler:
        /usr/local/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include-fixed
        /usr/include

    C++ compiler:
        /usr/include/c++/4.4.3
        /usr/include/c++/4.4.3/i486-linux-gnu
        /usr/include/c++/4.4.3/backward
        /usr/local/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include
        /usr/include

    Thanks

    Read the article

  • How to Implement an Interface that Requires Duplicate Member Names in C#?

    - by Will Marcouiller
    I often have to implement interfaces such as IEnumerable<T> in my code. Each time, when implementing automatically, I encounter the following:

        public IEnumerator<T> GetEnumerator() {
            // Code here...
        }

        public IEnumerator GetEnumerator1() {
            // Code here...
        }

    Though I have to implement both GetEnumerator() methods, they cannot both have the same name, even if we understand that they do the same thing, somehow. The compiler can't treat one as an overload of the other, because only the return type differs. When doing so, I manage to set the GetEnumerator1() accessor to private. This way, the compiler doesn't complain about not implementing the interface member, and I simply throw a NotImplementedException within the method's body. However, I wonder whether this is good practice, or if I should proceed differently, perhaps with a method alias or something like that. What is the best approach when implementing an interface such as IEnumerable<T> that requires the implementation of two different methods with the same name?

    Read the article

  • C++ creating generic template function specialisations

    - by Fire Lancer
    I know how to specialise a template function; however, what I want to do here is specialise a function for all types which have a given method, e.g.:

        template<typename T> void foo() {...}
        // always use this one if the method T::bar exists
        template<typename T, if_exists(T::bar)> void foo() {...}

    T::bar in my classes is static and has different return types. I tried doing this by having an empty base class ("class HasBar {};") for my classes to derive from and using boost::enable_if with boost::is_base_of on my "specialised" version. However, the problem then is that for classes that do have bar, the compiler can't resolve which one to use :(

        template<typename T>
        typename boost::enable_if<boost::is_base_of<HasBar, T>, void>::type foo() {...}

    I know that I could use boost::disable_if on the "normal" version, however I do not control the normal version (it's provided by a third-party library and it's expected that specialisations will be made; I just don't really want to make explicit specialisations for my 20 or so classes), nor do I have that much control over the code using these functions, just the classes implementing T::bar and the function that uses it. Is there some way to tell the compiler to "always use this version if possible, no matter what" without altering the other versions?
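
    Not an answer to the overload-ordering problem itself, but a minimal sketch of the member-detection part, assuming a C++11 compiler is available (newer than the Boost-era code in the question) and that T::bar is callable with no arguments. The names has_bar and foo_impl are illustrative, not part of the third-party library.

        #include <type_traits>

        template<typename T, typename = void>
        struct has_bar : std::false_type {};

        template<typename T>
        struct has_bar<T, decltype(void(T::bar()))> : std::true_type {};

        template<typename T> void foo_impl(std::true_type)  { T::bar(); }   // T has a static bar()
        template<typename T> void foo_impl(std::false_type) { /* generic fallback */ }

        template<typename T>
        void foo() { foo_impl<T>(has_bar<T>()); }

        // usage sketch:
        // struct WithBar { static int bar() { return 1; } };
        // struct Plain {};
        // foo<WithBar>();  // dispatches to the true_type overload
        // foo<Plain>();    // dispatches to the fallback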

    Read the article

  • calling constructor of the class in the destructor of the same class

    - by dicaprio
    Experts! I know this question is one of the lousy ones, but I still dared to open my mind, hoping I would learn from all of you. I was trying some examples as part of my routine and did this horrible thing: I called the constructor of the class from the destructor of the same class. I don't really know if this is ever required in real programming; I can't think of any real-world scenarios where we really need to call functions or the constructor in our destructor. Usually, a destructor is meant for cleaning up. If my understanding is correct, why doesn't the compiler complain? Is this because it is valid for some good reasons? If so, what are they? I tried it on the Sun Forte, g++ and VC++ compilers and none of them complain about it.

        using namespace std;
        class test {
        public:
            test() { cout << "CTOR" << endl; }
            ~test() { cout << "DTOR" << endl; test(); }
        };
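
    A minimal sketch of what that statement actually does, for illustration only: test() inside the destructor creates an unnamed temporary, and destroying that temporary runs ~test() again, so the program recurses until the stack is exhausted.

        #include <iostream>
        using namespace std;

        class test {
        public:
            test()  { cout << "CTOR" << endl; }
            ~test() {
                cout << "DTOR" << endl;
                test();   // constructs a temporary test...
            }             // ...whose destruction calls ~test() again: unbounded recursion
        };

        // int main() { test t; }   // deliberately left commented out: running it would
        //                          // print CTOR/DTOR forever and overflow the stack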

    Read the article

  • Interpretation of int (*a)[3]

    - by kapuzineralex
    When working with arrays and pointers in C, one quickly discovers that they are by no means equivalent, although it might seem so at first glance. I know about the differences between L-values and R-values. Still, recently I tried to find out the type of a pointer that I could use in conjunction with a two-dimensional array, i.e.

        int foo[2][3];
        int (*a)[3] = foo;

    However, I just can't figure out how the compiler "understands" the type definition of a, in spite of the regular operator precedence rules for * and []. If instead I were to use a typedef, the problem becomes significantly simpler:

        int foo[2][3];
        typedef int my_t[3];
        my_t *a = foo;

    Bottom line: can someone explain how the term int (*a)[3] is read by the compiler? int a[3] is simple, and int *a[3] is simple as well. But then, why is it not int *(a[3])?

    EDIT: Of course, instead of "typecast" I meant "typedef" (it was just a typo).
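
    A minimal sketch of the usual inside-out reading (not taken from the question itself): start at the identifier, honour the parentheses, and work outwards. The parentheses matter because [] binds more tightly than *.

        #include <cstdio>

        int main()
        {
            int foo[2][3] = { {1, 2, 3}, {4, 5, 6} };

            int (*a)[3] = foo;   // "a is a pointer to an array of 3 int";
                                 // foo decays to a pointer to its first row

            int *b[3];           // by contrast: "b is an array of 3 pointers to int",
            (void)b;             // i.e. the same as int *(b[3])

            std::printf("%d\n", a[1][2]);   // prints 6
            return 0;
        }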

    Read the article

  • Can g++ fill uninitialized POD variables with known values?

    - by Bob Lied
    I know that Visual Studio under debugging options will fill memory with a known value. Does g++ (any version, but gcc 4.1.2 is most interesting) have any options that would fill an uninitialized local POD structure with recognizable values? struct something{ int a; int b; }; void foo() { something uninitialized; bar(uninitialized.b); } I expect uninitialized.b to be unpredictable randomness; clearly a bug and easily found if optimization and warnings are turned on. But compiled with -g only, no warning. A colleague had a case where code similar to this worked because it coincidentally had a valid value; when the compiler upgraded, it started failing. He thought it was because the new compiler was inserting known values into the structure (much the way that VS fills 0xCC). In my own experience, it was just different random values that didn't happen to be valid. But now I'm curious -- is there any setting of g++ that would make it fill memory that the standard would otherwise say should be uninitialized?
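
    Not a g++ switch, but a minimal sketch of attacking it in the source instead: value-initialising the local is required by the standard to zero the members of a plain struct like this, so a stray read is at least deterministic (whether that is preferable to letting the bug surface is another matter). The empty bar() body and main() are only there to make the sketch self-contained.

        struct something {
            int a;
            int b;
        };

        void bar(int) {}

        void foo()
        {
            something s = something();   // value-initialised: s.a == 0, s.b == 0
            bar(s.b);                    // no longer reads an indeterminate value
        }

        int main() { foo(); }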

    Read the article

  • Type-safe generic data structures in plain-old C?

    - by Bradford Larsen
    I have done far more C++ programming than "plain old C" programming. One thing I sorely miss when programming in plain C is type-safe generic data structures, which are provided in C++ via templates. For sake of concreteness, consider a generic singly linked list. In C++, it is a simple matter to define your own template class, and then instantiate it for the types you need. In C, I can think of a few ways of implementing a generic singly linked list: Write the linked list type(s) and supporting procedures once, using void pointers to go around the type system. Write preprocessor macros taking the necessary type names, etc, to generate a type-specific version of the data structure and supporting procedures. Use a more sophisticated, stand-alone tool to generate the code for the types you need. I don't like option 1, as it is subverts the type system, and would likely have worse performance than a specialized type-specific implementation. Using a uniform representation of the data structure for all types, and casting to/from void pointers, so far as I can see, necessitates an indirection that would be avoided by an implementation specialized for the element type. Option 2 doesn't require any extra tools, but it feels somewhat clunky, and could give bad compiler errors when used improperly. Option 3 could give better compiler error messages than option 2, as the specialized data structure code would reside in expanded form that could be opened in an editor and inspected by the programmer (as opposed to code generated by preprocessor macros). However, this option is the most heavyweight, a sort of "poor-man's templates". I have used this approach before, using a simple sed script to specialize a "templated" version of some C code. I would like to program my future "low-level" projects in C rather than C++, but have been frightened by the thought of rewriting common data structures for each specific type. What experience do people have with this issue? Are there good libraries of generic data structures and algorithms in C that do not go with Option 1 (i.e. casting to and from void pointers, which sacrifices type safety and adds a level of indirection)?

    Read the article

  • Can Delphi 5 generate a .PDB file that VS can use?

    - by Vilx-
    We've got this large application written in Delphi 5, and development is ongoing to this day. There is research going on into migrating to newer versions, but so far there has been no success, as some 3rd-party components have not been updated in ages and do not work on later versions. In the meantime, however, people need to continue working on it. Now, the Delphi 5 IDE is no real treat. It's pretty bug-ridden and lacks a lot of the features of contemporary IDEs, which makes it difficult to use, especially when it comes to debugging. So I was wondering: would it be possible to use Visual Studio in the process? As far as I know the .PDB file format is pretty old and is well documented. Would it be possible to make the Delphi compiler somehow generate .PDB files for its compiled results? Then the program could be debugged with Visual Studio, possibly to a much greater extent than in the original IDE. Well, the absolute Holy Grail would be to move all development to VS, just keeping the compiler from Delphi, but I imagine that would be pretty impossible.

    Read the article

  • Should '#include' and 'using' statements be repeated in both header and implementation files (C++)?

    - by Dr. Monkey
    I'm fairly new to C++, but my understanding is that a #include statement will essentially just dump the contents of the #included file into the location of that statement. This means that if I have a number of '#include' and 'using' statements in my header file, my implementation file can just #include the header file, and the compiler won't mind if I don't repeat the other statements. What about people, though? My main concern is that if I don't repeat the '#include', 'using', and also 'typedef' (now that I think of it) statements, it takes that information away from the file in which it's used, which could lead to confusion. I am just working on small projects at the moment where it won't really cause any issues, but I can imagine that in larger projects with more people working on them it could become a significant issue. An example follows:

        //Unit.h
        #include <string>
        #include <ostream>
        #include "StringSet.h"

        using std::string;
        using std::ostream;

        class Unit {
        public:
            //public members
        private:
            //private members
            //unrelated side-question: should private members
            //even be included in the header file?
        };

        //Unit.cpp
        #include "Unit.h"

        //The following are all redundant from a compiler perspective:
        #include <string>
        #include <ostream>
        #include "StringSet.h"

        using std::string;
        using std::ostream;

        //implementation goes here
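
    A minimal sketch of the usual convention, offered as a common practice rather than something stated in the question: headers include exactly what they need but avoid using-declarations at namespace scope, so including the header never changes name lookup in the files that pull it in; the .cpp repeats nothing beyond its own header. The print member and the two fields are illustrative, and StringSet.h is assumed to be the asker's own header defining a StringSet type.

        // Unit.h (sketch)
        #ifndef UNIT_H
        #define UNIT_H

        #include <string>
        #include <ostream>
        #include "StringSet.h"      // the asker's own header

        class Unit {
        public:
            void print(std::ostream& out) const;   // spell out std:: in headers
        private:
            // private members do belong here: the compiler needs them
            // to know the size and layout of Unit
            std::string name;
            StringSet tags;
        };

        #endif

        // Unit.cpp (sketch)
        // #include "Unit.h"
        // using std::string;   // using-declarations are fine here, local to this .cpp
        // ... implementation goes here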

    Read the article

  • Operator+ for a subtype of a template class.

    - by baol
    I have a template class that defines a subtype. I'm trying to define the binary operator+ as a template function, but the compiler cannot resolve the template version of operator+.

        #include <iostream>

        template<typename other_type>
        struct c
        {
            c(other_type v) : cs(v) {}

            struct subtype
            {
                subtype(other_type v) : val(v) {}
                other_type val;
            } cs;
        };

        template<typename other_type>
        typename c<other_type>::subtype
        operator+(const typename c<other_type>::subtype& left,
                  const typename c<other_type>::subtype& right)
        {
            return typename c<other_type>::subtype(left.val + right.val);
        }

        // This one works
        // c<int>::subtype operator+(const c<int>::subtype& left,
        //                           const c<int>::subtype& right)
        // { return c<int>::subtype(left.val + right.val); }

        int main()
        {
            c<int> c1 = 1;
            c<int> c2 = 2;
            c<int>::subtype cs3 = c1.cs + c2.cs;
            std::cerr << cs3.val << std::endl;
        }

    I think the reason is that the compiler (g++ 4.3) cannot deduce the template type, so it's searching for operator+<int> instead of operator+. What's the reason for that? What elegant solution can you suggest?
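
    A minimal sketch of one common workaround, not necessarily the only one: define operator+ as a friend inside the nested class, so it is found by argument-dependent lookup and no template argument deduction on c<other_type>::subtype (a non-deduced context) is needed.

        #include <iostream>

        template<typename other_type>
        struct c
        {
            c(other_type v) : cs(v) {}

            struct subtype
            {
                subtype(other_type v) : val(v) {}
                other_type val;

                // hidden friend: found via ADL on subtype arguments,
                // no deduction through c<other_type>::subtype required
                friend subtype operator+(const subtype& left, const subtype& right)
                { return subtype(left.val + right.val); }
            } cs;
        };

        int main()
        {
            c<int> c1 = 1;
            c<int> c2 = 2;
            c<int>::subtype cs3 = c1.cs + c2.cs;
            std::cerr << cs3.val << std::endl;   // prints 3
        }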

    Read the article

  • How do I set up the python/c library correctly?

    - by Bartvbl
    I have been trying to get the Python/C library to like my MinGW compiler. The Python online documentation, http://docs.python.org/c-api/intro.html#include-files, only mentions that I need to include the Python.h file. I grabbed it from the installation directory (as is required on the Windows platform), and tested it by compiling a script containing just #include "Python.h". This compiled fine. Next, I tried out the snippet of code shown a bit lower on the Python/C API page:

        PyObject *t;
        t = PyTuple_New(3);
        PyTuple_SetItem(t, 0, PyInt_FromLong(1L));
        PyTuple_SetItem(t, 1, PyInt_FromLong(2L));
        PyTuple_SetItem(t, 2, PyString_FromString("three"));

    For some reason, the compiler would compile the code if I removed the last four lines (so that only the PyObject variable definition was left), yet calling the actual constructor of the tuple returned errors. I am probably missing something completely obvious here, given that I am very new to C, but does anyone know what it is?

    Read the article

  • Can't access media assets in an SWC

    - by Sold Out Activist
    I'm having trouble accessing content in an SWC. The main project compiles without error, but the assets (sound, music) aren't displayed or played. My workflow: i. Using Flash CS5 1. Create MAsset.fla 2. Import sounds, art 3. Assign class names, check export in frame 1 4a. Write out the classes in the actions panel in frame 1 4b. OR. Add a document class and write out the classes there 5. Export SWC. Filesize is similar to what it is when I directly import assets in the main project library. 6. Create Project.fla and document class Project.as 7. Import SWC into main project through the Actionscript panel. 8. Add code, which calls the class names from the SWC (e.g. DCL_UI_MOUSE) 9. Compile. No compiler errors, but nothing doing. And the resulting SWF filesize doesn't reflect anything more than the compiled code from the main project. Regarding step 4, if I just write the class name in the root timeline or document class, the compiler error will go away and the asset appear to be compiled in the SWC. But I have also tried: var asset0000:DCL_UI_MOUSE; And: var asset0000:DCL_UI_MOUSE = new DCL_UI_MOUSE(); Regardless the assets don't make it into the final SWF. What am I doing wrong?

    Read the article

  • Make conversion to a native type explicit in C++

    - by Tal Pressman
    I'm trying to write a class that implements 64-bit ints for a compiler that doesn't support long long, to be used in existing code. Basically, I should be able to have a typedef somewhere that selects whether I want to use long long or my class, and everything else should compile and work. So, I obviously need conversion constructors from int, long, etc., and the respective conversion operators (casts) to those types. This seems to cause errors with arithmetic operators. With native types, the compiler "knows" that when operator*(int, char) is called, it should promote the char to int and call operator*(int, int) (rather than casting the int to char, for example). In my case it gets confused between the various built-in operators and the ones I created. It seems to me like if I could flag the conversion operators as explicit somehow, that it would solve the issue, but as far as I can tell the explicit keyword is only for constructors (and I can't make constructors for built-in types). So is there any way of marking the casts as explicit? Or am I barking up the wrong tree here and there's another way of solving this? Or maybe I'm just doing something else wrong...
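
    A minimal illustrative sketch, not the asker's actual class: C++11 added explicit conversion operators, which is exactly the missing piece described here; on a pre-C++11 compiler (likely, given that long long is unavailable) the usual substitute is a named function such as to_long() instead of an implicit cast. The name int64_emul is made up.

        #include <iostream>

        class int64_emul {
        public:
            int64_emul(long v) : lo_(v) {}

            explicit operator long() const { return lo_; }   // C++11: never used implicitly
            long to_long() const { return lo_; }             // pre-C++11 substitute

        private:
            long lo_;   // the real class would hold two 32-bit halves
        };

        int main()
        {
            int64_emul x(42);
            // long y = x;                  // no longer compiles: the conversion is explicit
            long y = static_cast<long>(x);  // conversions now have to be spelled out
            std::cout << y << std::endl;
            return 0;
        }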

    Read the article

  • Are programming languages and methods ineffective? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I don't see for all these principles; I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the brilliant people who put PCs together had a reason to do so. What exactly do I mean? I'll tell you right away, using C as an example:

    1. Stack-based local memory allocation. Typical local memory allocation uses the stack: just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM at default stack addresses, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b, and so on, and access them simply with mov 0x00000000, #10? Because you won't actually access memory blocks 0x00000000 and 0x00000004, but the ones your OS's paging tables map them to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2. Duplicate variable allocation. When you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes these values onto the top of the stack when they are created in main. So there is an actual instruction to do so, and the actual number in memory: two entries of the same value in RAM, one in the form of an instruction and the second in the form of the actual bytes in RAM. But why? Why not, when declaring a variable, work out which memory block it will occupy, and then, when it is used, just insert that memory location?

    Read the article

  • IS operator behaving a bit strangely

    - by flockofcode
    1) According to my book, the is operator can check whether an expression E can be converted to the target type only if the conversion is a reference conversion, boxing, or unboxing. Since the following example involves none of those three kinds of conversion, the code shouldn't work, but it does:

        int i = 100;
        if (i is long) // returns true, indicating that conversion is possible
            l = i;

    2) a)

        B b;
        A a = new A();
        if (a is B)
            b = (B)a;
        int i = b.l;

        class A { public int l = 100; }
        class B : A { }

    The above code always causes the compile-time error "Use of unassigned variable". If the condition a is B evaluates to false, then b won't be assigned a value, but if the condition is true, it will. Thus, by allowing such code, the compiler would have no way of knowing whether the use of b in the code following the if statement is valid or not (since it doesn't know whether a is B evaluates to true or false); but why should it know that? Instead, why couldn't the runtime handle this?

    b) But if instead we're dealing with non-reference types, the compiler doesn't complain, even though the code is identical. Why?

        int i = 100;
        long l;
        if (i is long)
            l = i;

    Thank you

    Read the article

  • optimization math computation (multiplication and summing)

    - by wiso
    Suppose you want to compute the sum of the squares of the differences of consecutive items, $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2$. The simplest code (the input is std::vector<double> xs, the output sum2) is:

        double sum2 = 0.;
        double prev = xs[0];
        for (std::vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (prev - (*i)) * (prev - (*i));  // only 1 subtraction with compiler optimization
            prev = (*i);
        }

    I hope that the compiler does the optimization noted in the comment above. If N is the length of xs, you have N-1 multiplications and 2N-3 sums (a "sum" meaning a + or -). Now suppose you already know this quantity: $\mathrm{sum} = x_1^2 + x_N^2 + 2\sum_{i=2}^{N-1} x_i^2$. Expanding the binomial square gives $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2 = \mathrm{sum} - 2\sum_{i=1}^{N-1} x_i x_{i+1}$, so the code becomes:

        double sum2 = 0.;
        double prev = xs[0];
        for (std::vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (*i) * prev;
            prev = (*i);
        }
        sum2 = -sum2 * 2. + sum;

    Here I have N multiplications and N-1 additions. In my case N is about 100. Well, compiling with g++ -O2 I got no speed-up (I tried calling the inlined function 2M times). Why?

    Read the article

  • C# Function Inheritance--Use Child Class Vars with Base Class Function

    - by Sean O'Connor
    Good day, I have a fairly simple question for experienced C# programmers. Basically, I would like to have an abstract base class that contains a function relying on values supplied by child classes. I have tried code similar to the following, but the compiler complains that SomeVariable is null when SomeFunction() attempts to use it. Base class:

        public abstract class BaseClass
        {
            protected virtual SomeType SomeVariable;

            public BaseClass()
            {
                this.SomeFunction();
            }

            protected void SomeFunction()
            {
                // DO SOMETHING WITH SomeVariable
            }
        }

    A child class:

        public class ChildClass : BaseClass
        {
            protected override SomeType SomeVariable = SomeValue;
        }

    Now I would expect that when I do:

        ChildClass CC = new ChildClass();

    a new instance of ChildClass is made and CC runs its inherited SomeFunction using SomeValue. However, this is not what happens: the compiler complains that SomeVariable is null in BaseClass. Is what I want to do even possible in C#? I have used other managed languages that allow me to do such things, so I am certain I am just making a simple mistake here. Any help is greatly appreciated, thank you.

    Read the article

  • Make All Types Constant by Default in C++

    - by Jon Purdy
    What is the simplest and least obtrusive way to indicate to the compiler, whether by means of compiler options, #defines, typedefs, or templates, that every time I say T, I really mean T const? I would prefer not to make use of an external preprocessor. Since I don't use the mutable keyword, it would be acceptable to repurpose it to indicate mutable state. Potential (suboptimal) solutions so far:

        // I presume redefinition of keywords is implementation-defined or illegal.
        #define int int const
        #define ptr * const
        int i(0);
        int ptr j(&i);

        typedef int const Int;
        typedef int const* const Intp;
        Int i(0);
        Intp j(&i);

        template<class T> struct C {
            typedef T const type;
            typedef T const* const ptr;
        };
        C<int>::type i(0);
        C<int>::ptr j(&i);
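
    A minimal sketch, assuming a C++11 compiler is acceptable (the question does not say): an alias template is another spelling of the third option above, slightly less noisy than the nested typedefs, though it still has to be written at every use.

        template<class T> using Const    = T const;
        template<class T> using ConstPtr = T const* const;

        int main()
        {
            Const<int>    i(0);
            ConstPtr<int> j(&i);
            return *j;   // 0
        }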

    Read the article

  • What does the Asterisk * mean in Objective-C?

    - by Thanks
    Is it true that the asterisk always means "Hey, that is a pointer!", and that a pointer always holds a memory address? (Yes, I know about the exception that * is also used as the multiplication operator.) For example: NSString* myString; or SomeClass* thatClass; or (*somePointerToAStruct).myStructComponent = 5; I feel that there is more I need to know about the asterisk (*) than that I use it when defining a variable that is a pointer to a class. Because sometimes I already say in the declaration of a parameter that the parameter variable is a pointer, and still I have to use the asterisk in front of the variable in order to access the value. That recently happened when I wanted to pass a pointer to a struct to a method, like [myObj myMethod:&myStruct]: I could not access a component value of that structure even though my method declaration already said that there is a parameter (DemoStruct*)myVar, which should already be known as a pointer to that DemoStruct. Still, I always had to say "Man, compiler. Listen! It IIISSS a pointer:" and write (*myVar).myStructComponentX = 5; I really, really, really do not understand why I have to say that twice, and only in this case. When I use the asterisk in the context of NSString* myString, I can just access myString however I like, without telling the compiler each time that it's a pointer, i.e. like using *myString = @"yep". It just makes no sense to me.

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >