Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.


  • Performance issue between builds

    - by DeadMG
I've been developing a small indie game in my spare time and have run across an inexplicable issue. Some builds of the game will randomly run several hundred frames per second slower than other builds. For example, when rendering some text and no 3D scene, I can achieve 1800 FPS on my own hardware. Add one 3D sphere (10k verts, pixel shaded), achieve 1700 FPS. Add two more spheres, achieve 800 FPS. Remove all spheres, achieve 1100 FPS, even though the code now renders the same scene as I previously achieved at 1800 FPS, which is just the FPS counter being rendered. I've tried rebuilding and cleaning the project and rebooting the compiler. This is in Release mode and I turned on all the optimizations I could find. Any suggestions as to the cause? I ran a quick profile, and Visual Studio seems to think that over 90% of my time was spent in D3D9_43.dll, suggesting that it's not a bug in my app, which doesn't explain why it manifests in only some builds. I rebooted my machine and it's back up to 1800 FPS. I think it's a bug in the DirectX SDK tools (amongst many others). Going to delete this question.

    Read the article

  • C++ const qualifier

    - by avd
    I have a Point2D class as follows: class Point2D{ int x; int y; public: Point2D(int inX, int inY){ x = inX; y = inY; }; int getX(){return x;}; int getY(){return y;}; }; Now I have defined a class Line as: class Line { Point2D p1,p2; public: LineVector(const Point2D &p1,const Point2D &p2):p1(p1),p2(p2) { int x1,y1,x2,y2; x1=p1.getX();y1=p1.getY();x2=p2.getX();y2=p2.getY(); } }; Now the compiler gives the error in the last line (where getX() etc. are called): error: passing `const Point2D' as `this' argument of `int Point2D::getX()' discards qualifiers. If I remove the const keyword at both places, then it compiles successfully. What is the error? Is it because getX() etc. are defined inline? Is there any way to rectify this while retaining them inline?
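
    The usual fix is to make the getters const member functions: a const member function promises not to modify the object, so it may be called through a const Point2D&. A minimal sketch of the change (reconstructing the class from the question):

        class Point2D {
            int x;
            int y;
        public:
            Point2D(int inX, int inY) : x(inX), y(inY) {}
            int getX() const { return x; }  // const: callable on const objects
            int getY() const { return y; }
        };

    Being defined inline is unrelated to the error; the getters stay inline, and only the missing const qualifier matters.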

    Read the article

  • Obj-c method override/polymorphism problem

    - by Rod
    Ok, so I'm using Objective-C. Now, say I have: TopClass : NSObject - (int) getVal {return 1;} MidClass : TopClass - (int) getVal {return 2;} BotClass : MidClass - (int) getVal {return 3;} I then put objects of each type into an NSMutableArray and take one out. What I want to do is run the getVal func on the appropriate object type, but when I put id a = [allObjects objectAtIndex:0]; if ([a isKindOfClass:[TopClass class]]) { int i; i = [a getVal]; } Firstly, I get a warning about multiple methods called getVal (presumably because the compiler can't determine the actual object type until runtime). But more seriously, I also get an error "void value not ignored as it should be" and it won't compile. If I don't try to use the return from [a getVal] then it compiles fine e.g. [a getval]; //obviously no good if I want to use the return value It will also work if I use isMemberOfClass statements to cast the object to a class before running the function e.g. if ([a isMemberOfClass:[BotClass]) i = [(BotClass*) a getVal]; But surely I shouldn't have to do this to get the functionality I require? Otherwise I'll have to put in a statement for every single subclass, and worse, have to add a new line if I add a new subclass, which rather defeats the point of method overriding, doesn't it? Surely there is a better way?

    Read the article

  • How can I put back a character that I've read when I detect it's the start of a new row?

    - by gcc
    char nm; int i=0; double thelow, theupp; double numbers[200]; for(i=0;i<4;++i) { { char nm; double thelow,theupp; /*after erased ,created again*/ scanf("%c %lf %lf", &nm, &thelow, &theupp); for (k = 0; ; ++k) ; { scanf("%lf",numbers[k]); if(numbers[k]=='\n') break; } /*calling function and sending data(nm,..) to it*/ } /*after } is seen (nm ..) is erased*/ ; } I want to tell my program: read only the i-th row, and don't touch the characters placed on the next line, because those characters are consumed after i is increased by 1, while nm, thelow, and theupp are zeroed or erased and then created again. How can I do that? Input: D -1.5 0.5 .012 .025 .05 .1 .1 .1 .025 .012 0 0 0 .012 .025 .1 .2 .1 .05 .039 .025 .025 B 1 3 .117 .058 .029 .015 .007 .007 .007 .015 .022 .029 .036 .044 .051 .058 .066 .073 .080 .088 .095 .103
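
    The standard tool for "putting back" one character of lookahead is ungetc(). A minimal sketch under the stated input format (a row is a letter label with two bounds, then numbers until the next label; read_row is a hypothetical helper name):

        #include <ctype.h>
        #include <stdio.h>

        /* Reads numbers until the next row label (a letter) or EOF.
           The label is pushed back so the caller's scanf("%c ...") will see it. */
        int read_row(double *numbers, int max)
        {
            int n = 0, c;
            while (n < max) {
                while ((c = getchar()) != EOF && isspace(c))
                    ;                      /* skip blanks and newlines */
                if (c == EOF)
                    break;
                ungetc(c, stdin);          /* put the character back */
                if (isalpha(c))
                    break;                 /* start of the next row: leave it */
                if (scanf("%lf", &numbers[n]) != 1)
                    break;
                n++;
            }
            return n;
        }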

    Read the article

  • C++: What is the size of an object of an empty class?

    - by Ashwin
    I was wondering what the size of an object of an empty class could be. It surely could not be 0 bytes, since it should be possible to reference and point to it like any other object. But how big is such an object? I used this small program: #include <iostream> using namespace std; class Empty {}; int main() { Empty e; cerr << sizeof(e) << endl; return 0; } The output I got on both the Visual C++ and Cygwin-g++ compilers was 1 byte! This was a little surprising to me, since I was expecting it to be the size of the machine word (32 bits or 4 bytes). Can anyone explain why the size is 1 byte? Why not 4 bytes? Is this dependent on the compiler or the machine too? Also, can someone give a more cogent reason why an empty class object will not be of size 0 bytes?
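
    The short answer: the standard requires two distinct objects of the same type to have distinct addresses, so a complete object must occupy at least one byte, and nothing requires padding it up to a machine word. A small illustration (the exact sizes are implementation-defined, so treat the comments as typical rather than guaranteed):

        #include <iostream>

        class Empty {};
        class Derived : Empty { int x; };  // the empty-base optimization applies

        int main() {
            Empty a, b;
            std::cout << (&a != &b) << '\n';      // 1: distinct addresses
            std::cout << sizeof(Derived) << '\n'; // typically 4: the base adds nothing
            return 0;
        }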

    Read the article

  • const object in c++

    - by Codenotguru
    I have a question on constant objects. In the following program: class const_check{ int a; public: const_check(int i); void print() const; void print2(); }; const_check::const_check(int i):a(i) {} void const_check::print() const { int a=19; cout<<"The value in a is:"<<a; } void const_check::print2() { int a=10; cout<<"The value in a is:"<<a; } int main(){ const_check b(5); const const_check c(6); b.print2(); c.print(); } void print() is a constant member function of the class const_check, so according to the definition of const, any attempt to change int a should result in an error, but the program works fine for me. I think I am having some confusion here; can anybody tell me why the compiler is not flagging it as an error?
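
    Nothing is being violated here: int a=19; inside print() declares a new local variable that shadows the member a, and a const member function only forbids modifying members (and calling non-const member functions). A minimal sketch of what does and does not compile:

        class const_check {
            int a;
        public:
            const_check(int i) : a(i) {}
            void print() const {
                int local = 19;   // fine: a brand-new local, not the member
                (void)local;
                // a = 19;        // error: assignment of member 'const_check::a'
                                  // in read-only object
            }
        };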

    Read the article

  • Operator Overloading << in c++

    - by thlgood
    I'm a freshman in C++. I wrote this simple program to practice overloading. This is my code: #include <iostream> #include <string> using namespace std; class sex_t { private: char __sex__; public: sex_t(char sex_v = 'M'):__sex__(sex_v) { if (sex_v != 'M' && sex_v != 'F') { cerr << "Sex type error!" << sex_v << endl; __sex__ = 'M'; } } const ostream& operator << (const ostream& stream) { if (__sex__ == 'M') cout << "Male"; else cout << "Female"; return stream; } }; int main(int argc, char *argv[]) { sex_t me('M'); cout << me << endl; return 0; } When I compile it, it prints a lot of error messages. The output was a mess (my locale garbled part of it), and it was hard for me to find the useful message; the relevant lines are: sex.cpp: In function 'int main(int, char**)': sex.cpp:32:10: error: no match for 'operator<<' in 'std::cout << me' sex.cpp:32:10: note: candidates are: /usr/include/c++/4.6/ostream:110:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ostre
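
    The conventional fix is to make operator<< a non-member that takes the stream by non-const reference (writing to a stream modifies it) and returns std::ostream&; as a member of sex_t, the operator would have to be called as me << cout, which is backwards. A minimal sketch (identifiers renamed, since names like __sex__ containing double underscores are reserved for the implementation):

        #include <iostream>

        class sex_t {
            char value;
        public:
            explicit sex_t(char v = 'M') : value(v) {
                if (v != 'M' && v != 'F') {
                    std::cerr << "Sex type error! " << v << std::endl;
                    value = 'M';
                }
            }
            friend std::ostream& operator<<(std::ostream& os, const sex_t& s) {
                return os << (s.value == 'M' ? "Male" : "Female");
            }
        };

        int main() {
            sex_t me('M');
            std::cout << me << std::endl;   // now finds the friend operator<<
            return 0;
        }

    Declaring it as a friend keeps access to the private member while leaving the left operand free to be the stream.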

    Read the article

  • Is there any alternative way of writing this switch statement(C#3.0)

    - by Newbie
    Can it be done in a better way? public static EnumFactorType GetFactorEnum(string str) { Standardization e = new Standardization(); switch (str.ToLower()) { case "beta": e.FactorType = EnumFactorType.BETA; break; case "bkp": e.FactorType = EnumFactorType.BOOK_TO_PRICE; break; case "yld": e.FactorType = EnumFactorType.DIVIDEND_YIELD; break; case "growth": e.FactorType = EnumFactorType.GROWTH; break; case "mean": e.FactorType = EnumFactorType.MARKET_CAP; break; case "momentum": e.FactorType = EnumFactorType.MOMENTUM; break; case "size": e.FactorType = EnumFactorType.SIZE; break; case "stat_fact1": e.FactorType = EnumFactorType.STAT_FACT_1; break; case "stat_fact2": e.FactorType = EnumFactorType.STAT_FACT_2; break; case "value": e.FactorType = EnumFactorType.VALUE; break; } return e.FactorType; } If I create a static class (say Constants) and declare a variable like public static string BETA = "beta"; and then if I try to put that in the case expression like case Constants.BETA : e.FactorType = EnumFactorType.BETA; break; then the compiler will report an error (quite expected). So is there any other way? (I cannot change the switch statement.) Using C#3.0. Thanks

    Read the article

  • Misaligned Pointer Performance

    - by Elite Mx
    Aren't misaligned pointers (in the BEST possible case) supposed to slow down performance and in the worst case crash your program (assuming the compiler was nice enough to compile your invalid C program)? Well, the following code doesn't seem to have any performance differences between the aligned and misaligned versions. Why is that? /* brutality.c */ #ifdef BRUTALITY xs = (unsigned long *) ((unsigned char *) xs + 1); #endif ... /* main.c */ #include <stdio.h> #include <stdlib.h> #define size_t_max ((size_t)-1) #define max_count(var) (size_t_max / (sizeof var)) int main(int argc, char *argv[]) { unsigned long sum, *xs, *itr, *xs_end; size_t element_count = max_count(*xs) >> 4; xs = malloc(element_count * (sizeof *xs)); if(!xs) exit(1); xs_end = xs + element_count - 1; sum = 0; for(itr = xs; itr < xs_end; itr++) *itr = 0; #include "brutality.c" itr = xs; while(itr < xs_end) sum += *itr++; printf("%lu\n", sum); /* we could free the malloc-ed memory here */ /* but we are almost done */ exit(0); } Compiled and tested on two separate machines using gcc -pedantic -Wall -O0 -std=c99 main.c for i in {0..9}; do time ./a.out; done
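
    Two effects likely conspire here: common x86 hardware handles misaligned loads with little or no penalty unless they split cache lines, and at -O0 the loop overhead dominates the memory access anyway (strictly speaking the misaligned dereference is undefined behavior in C, so any observation is permitted). A sketch of a harness that times both cases in one binary, where an -O2 build gives the access itself more weight (assumptions: x86-64, clock() resolution is adequate):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void)
        {
            size_t n = (size_t)1 << 24;
            unsigned char *buf = (unsigned char *)calloc(n + 1, sizeof(unsigned long));
            if (!buf) return 1;
            for (int off = 0; off <= 1; off++) {  /* 0 = aligned, 1 = misaligned */
                unsigned long *xs = (unsigned long *)(buf + off);  /* UB if off=1 */
                unsigned long sum = 0;
                clock_t t0 = clock();
                for (size_t i = 0; i < n; i++)
                    sum += xs[i];
                printf("offset %d: %.3fs (sum=%lu)\n", off,
                       (double)(clock() - t0) / CLOCKS_PER_SEC, sum);
            }
            free(buf);
            return 0;
        }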

    Read the article

  • for loop vs std::for_each with lambda

    - by Andrey
    Let's consider a template function written in C++11 which iterates over a container. Please exclude from consideration the range loop syntax, because it is not yet supported by the compiler I'm working with. template <typename Container> void DoSomething(const Container& i_container) { // Option #1 for (auto it = std::begin(i_container); it != std::end(i_container); ++it) { // do something with *it } // Option #2 std::for_each(std::begin(i_container), std::end(i_container), [] (typename Container::const_reference element) { // do something with element }); } What are the pros/cons of the for loop vs std::for_each in terms of: a) performance? (I don't expect any difference) b) readability and maintainability? Here I see many disadvantages of for_each. It wouldn't accept a C-style array while the loop would. The declaration of the lambda's formal parameter is so verbose, and it is not possible to use auto there. It is not possible to break out of for_each. In pre-C++11 days the arguments against for were the need to specify the type of the iterator (doesn't hold any more) and the easy possibility of mistyping the loop condition (I've never made such a mistake in 10 years). In conclusion, my thoughts about for_each contradict the common opinion. What am I missing here?
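
    On (a), compilers typically inline the lambda, so both options tend to compile to equivalent code. On (b), one C++11 way to tame the verbose parameter declaration is a local typedef for the element type; a sketch:

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <vector>

        template <typename Container>
        void DoSomething(const Container& i_container) {
            typedef typename Container::value_type T;
            std::for_each(std::begin(i_container), std::end(i_container),
                          [](const T& element) {
                              std::cout << element << ' ';  // do something with element
                          });
        }

        int main() {
            std::vector<int> v = {1, 2, 3};
            DoSomething(v);
            return 0;
        }

    The C-array objection still stands, though: Container::value_type does not exist for int[10], while the iterator-based loop with std::begin/std::end handles arrays fine.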

    Read the article

  • C++ compile-time polymorphism doubt?

    - by user313921
    The program below contains two show() functions, in the parent and child classes; the first show() takes a FLOAT argument and the second show() takes an INT argument. If I call show(10.1234), passing a float argument, it should call class A's show(float a) function, but it calls class B's show(int b). #include<iostream> using namespace std; class A{ float a; public: void show(float a) { this->a = a; cout<<"\n A's show() function called : "<<this->a<<endl; } }; class B : public A{ int b; public: void show(int b) { this->b = b; cout<<"\n B's show() function called : "<<this->b<<endl; } }; int main() { float i=10.1234; B Bobject; Bobject.show((float) i); return 0; } Output: B's show() function called : 10 Expected output: A's show() function called : 10.1234 Why has the g++ compiler chosen the wrong show() function, i.e. class B's show(int b)?
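
    This is name hiding rather than a wrong overload choice: declaring show(int) in B hides every show inherited from A, so overload resolution inside B's scope only ever sees show(int) and converts the float. A using-declaration un-hides the base overloads; a minimal sketch of the fix:

        #include <iostream>
        using namespace std;

        class A {
            float a;
        public:
            void show(float a) {
                this->a = a;
                cout << "\n A's show() function called : " << this->a << endl;
            }
        };

        class B : public A {
            int b;
        public:
            using A::show;   // bring A::show(float) into B's scope
            void show(int b) {
                this->b = b;
                cout << "\n B's show() function called : " << this->b << endl;
            }
        };

        int main() {
            B Bobject;
            Bobject.show(10.1234f);  // now selects A::show(float)
            return 0;
        }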

    Read the article

  • Why does defined(X) not work in a preprocessor definition without a space?

    - by Devin
    A preprocessor definition that includes defined(X) will never evaluate to true, but (defined X) will. This occurs in MSVC9; I have not tested other preprocessors. A simple example: #define FEATURE0 1 #define FEATURE1 0 #define FEATURE2 1 #define FEATURE3 (FEATURE0 && !FEATURE1 && (defined(FEATURE2))) #define FEATURE4 (FEATURE0 && !FEATURE1 && (defined FEATURE2)) #define FEATURE5 (FEATURE0 && !FEATURE1 && (defined (FEATURE2))) #if FEATURE3 #pragma message("FEATURE3 Enabled") #elif (FEATURE0 && !FEATURE1 && (defined(FEATURE2))) #pragma message("FEATURE3 Enabled (Fallback)") #endif #if FEATURE4 #pragma message("FEATURE4 Enabled") #elif (FEATURE0 && !FEATURE1 && (defined FEATURE2)) #pragma message("FEATURE4 Enabled (Fallback)") #endif #if FEATURE5 #pragma message("FEATURE5 Enabled") #elif (FEATURE0 && !FEATURE1 && (defined (FEATURE2))) #pragma message("FEATURE5 Enabled (Fallback)") #endif The output from the compiler is: 1>FEATURE3 Enabled (Fallback) 1>FEATURE4 Enabled 1>FEATURE5 Enabled Working cases: defined (X), defined( X ), and defined X. Broken case: defined(X) Why is defined evaluated differently when part of a definition, as in the #if cases in the example, compared to direct evaluation, as in the #elif cases in the example?
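
    The underlying rule: both the C and C++ standards say that if the token defined is generated by macro expansion, the behavior is undefined, so MSVC may legitimately treat defined(X) and defined X differently inside a macro body, and other preprocessors may differ again. The portable pattern is to evaluate defined() in a real conditional and capture the result; a sketch:

        #define FEATURE0 1
        #define FEATURE1 0
        #define FEATURE2 1

        /* capture defined() where the standard specifies its meaning */
        #if defined(FEATURE2)
        #  define HAVE_FEATURE2 1
        #else
        #  define HAVE_FEATURE2 0
        #endif

        #define FEATURE3 (FEATURE0 && !FEATURE1 && HAVE_FEATURE2)

        #if FEATURE3
        #pragma message("FEATURE3 Enabled")
        #endif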

    Read the article

  • Looking for something to add some standard rules for my c++ project.

    - by rkb
    Hello all, My team is developing a C++ project on Linux. We use vim as our editor. I want to enforce some coding-standard rules on our team in such a way that if the code is not in accordance with them, some sort of warning or error is thrown when it builds or compiles. It doesn't necessarily have to happen at build time, but at least I should be able to run some plugin or tool on the code to make sure it meets the standard, so that before committing to svn everyone has to run the code through some sort of plugin or script, make sure it meets the requirements, and only then commit. Not sure if we can add some rules to vim; if there are any, let me know about them. For example, in our code standards all the member variables and private functions should start with _ class A{ private: int _count; float _amount; void _increment_count(){ ++_count; } } So I want some warning or error or some sort of message thrown for this class if the variables are declared as follows. class A{ private: int count; float amount; void increment_count(){ ++_count; } } Please note that the warning and error are not from the compiler, because the program is still valid. It's from the tool I want to use, so that the code goes back for refactoring but still works fine on the executable side. I am looking for some sort of plugin, pre-parser, or script which will help me achieve all this. Currently we use svn; just to answer the comment.

    Read the article

  • Template function overloading with identical signatures, why does this work?

    - by user1843978
    Minimal program: #include <stdio.h> #include <type_traits> template<typename S, typename T> int foo(typename T::type s) { return 1; } template<typename S, typename T> int foo(S s) { return 2; } int main(int argc, char* argv[]) { int x = 3; printf("%d\n", foo<int, std::enable_if<true, int>>(x)); return 0; } output: 1 Why doesn't this give a compile error? When the template code is generated, wouldn't the functions int foo(typename T::type search) and int foo(S& search) have the same signature? If you change the template function signatures a little bit, it still works (as I would expect given the example above): template<typename S, typename T> void foo(typename T::type s) { printf("a\n"); } template<typename S, typename T> void foo(S s) { printf("b\n"); } Yet this one doesn't, and the only difference is that one has an int signature and the other is defined by the first template parameter. template<typename T> void foo(typename T::type s) { printf("a\n"); } template<typename T> void foo(int s) { printf("b\n"); } I'm using code similar to this for a project I'm working on and I'm afraid that there's a subtlety to the language that I'm not understanding that will cause some undefined behavior in certain cases. I should also mention that it does compile on both Clang and VS11, so I don't think it's just a compiler bug.
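
    Two ingredients explain it. First, function templates with different declarations are distinct overloads even when a particular instantiation would give them identical parameter lists, so this is not a redefinition error. Second, substitution failure is not an error (SFINAE): when T::type does not exist, the first overload silently drops out of the candidate set instead of breaking the build. A stripped-down sketch of the SFINAE half (HasType/NoType are hypothetical stand-ins):

        #include <cstdio>

        struct HasType { typedef int type; };
        struct NoType  {};

        template<typename T> int foo(typename T::type) { return 1; }
        template<typename T> int foo(long)             { return 2; }

        int main() {
            std::printf("%d\n", foo<HasType>(3)); // 1: int parameter, exact match
            std::printf("%d\n", foo<NoType>(3));  // 2: overload #1 removed by SFINAE
            return 0;
        }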

    Read the article

  • Java getMethod with subclass parameter

    - by SelectricSimian
    I'm writing a library that uses reflection to find and call methods dynamically. Given just an object, a method name, and a parameter list, I need to call the given method as though the method call were explicitly written in the code. I've been using the following approach, which works in most cases: static void callMethod(Object receiver, String methodName, Object[] params) { Class<?>[] paramTypes = new Class<?>[params.length]; for (int i = 0; i < params.length; i++) { paramTypes[i] = params[i].getClass(); } receiver.getClass().getMethod(methodName, paramTypes).invoke(receiver, params); } However, when one of the parameters is a subclass of one of the supported types for the method, the reflection API throws a NoSuchMethodException. For example, if the receiver's class has testMethod(Foo) defined, the following fails: receiver.getClass().getMethod("testMethod", FooSubclass.class).invoke(receiver, new FooSubclass()); even though this works: receiver.testMethod(new FooSubclass()); How do I resolve this? If the method call is hard-coded there's no issue - the compiler just uses the overloading algorithm to pick the best applicable method to use. It doesn't work with reflection, though, which is what I need. Thanks in advance!

    Read the article

  • Cannot determine why pointer variable will not address elements in a string in this program?

    - by Smith Will Suffice
    I am attempting to utilize a pointer variable to access elements of a string, and there are issues with my code generating a compilation warning: #include <stdio.h> #define MAX 29 char arrayI[250]; char *ptr; int main(void) { ptr = arrayI; puts("Enter string to arrayI: up to 29 chars:\n"); fgets(arrayI, MAX, stdin); printf("\n Now printing array by pointer:\n"); printf("%s", *ptr); ptr = arrayI[1]; //(I set the pointer to the second array char element) printf("%c", *ptr); //Here is where I was wanting to use my pointer to //point to individual array elements. return 0; } My compiler crieth: [Warning] assignment makes pointer from integer without a cast [enabled by default] I do not see where my pointer was ever assigned to the integer data type. Could someone please explain why my attempt to implement a pointer variable is failing? Thanks all!
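
    The warning points at ptr = arrayI[1];: arrayI[1] is a char (an integer type), not an address, so it needs the address-of operator, and printf("%s", *ptr) has the mirror-image problem, since %s wants a char* while *ptr is a single char. A corrected sketch with the same names:

        #include <stdio.h>
        #define MAX 29

        char arrayI[250];
        char *ptr;

        int main(void)
        {
            ptr = arrayI;                 /* points at the first element */
            puts("Enter string to arrayI: up to 29 chars:\n");
            if (fgets(arrayI, MAX, stdin) == NULL)
                return 1;
            printf("\n Now printing array by pointer:\n");
            printf("%s", ptr);            /* %s takes the pointer itself  */
            ptr = &arrayI[1];             /* address of the second element */
            printf("%c\n", *ptr);         /* dereference yields one char   */
            return 0;
        }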

    Read the article

  • Best Methods for Dynamically Creating New Objects

    - by frankV
    I'm looking for a method to dynamically create new class objects during runtime of a program. So far what I've read leads me to believe it's not easy and normally reserved for more advanced program requirements. What I've tried so far is this: // create a vector of type class vector<class_name> vect; // and use push_back (method 1) vect.push_back(*new Object); //or use for loop and [] operator (method 2) vect[i] = *new Object; neither of these throw errors from the compiler, but I'm using ifstream to read data from a file and dynamically create the objects... the file read is taking in some weird data and occasionally reading a memory address, and it's obvious to me it's due to my use/misuse of the code snippet above. The file read code is as follows: // in main ifstream fileIn fileIn.open( fileName.c_str() ); // passes to a separate function along w/ vector loadObjects (fileIn, vect); void loadObjects (ifstream& is, vector<class_name>& Object) { int data1, data2, data3; int count = 0; string line; if( is.good() ){ for (int i = 0; i < 4; i++) { is >> data1 >> data2 >> data3; if (data1 == 0) { vect.push_back(*new Object(data2, data3) ) } } } }
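
    The likely culprit is the *new Object idiom: it heap-allocates an Object, copies that object into the vector, and leaks the original, and vect[i] = ... on a vector that was never sized is undefined behavior. Plain value semantics are enough here; a sketch (Object is a hypothetical stand-in for class_name, with the field names from the question assumed):

        #include <fstream>
        #include <vector>

        struct Object {
            int data2, data3;
            Object(int a, int b) : data2(a), data3(b) {}
        };

        void loadObjects(std::ifstream& is, std::vector<Object>& vect)
        {
            int data1, data2, data3;
            // test the stream in the loop condition so a short or malformed
            // file can never leave data1..data3 holding garbage
            while (is >> data1 >> data2 >> data3) {
                if (data1 == 0)
                    vect.push_back(Object(data2, data3));  // by value: no new, no leak
            }
        }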

    Read the article

  • warning: format ‘%s’ expects type ‘char *’, but argument 2 has type ‘int’

    - by pyz
    code: #include <stdio.h> #include <stdlib.h> #include <netdb.h> #include <sys/socket.h> int main(int argc, char **argv) { char *ptr, **pptr; struct hostent *hptr; char str[32]; //ptr = argv[1]; ptr = "www.google.com"; if ((hptr = gethostbyname(ptr)) == NULL) { printf("gethostbyname error for host:%s\n", ptr); } printf("official hostname:%s\n", hptr->h_name); for (pptr = hptr->h_aliases; *pptr != NULL; pptr++) printf(" alias:%s\n", *pptr); switch (hptr->h_addrtype) { case AF_INET: case AF_INET6: pptr = hptr->h_addr_list; for (; *pptr != NULL; pptr++) printf(" address:%s\n", inet_ntop(hptr->h_addrtype, *pptr, str, sizeof(str))); break; default: printf("unknown address type\n"); break; } return 0; } compiler output below: zhumatoMacBook:CProjects zhu$ gcc gethostbynamedemo.c gethostbynamedemo.c: In function ‘main’: gethostbynamedemo.c:31: warning: format ‘%s’ expects type ‘char *’, but argument 2 has type ‘int’
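
    The warning is the classic symptom of a missing prototype: <arpa/inet.h> is never included, so the compiler implicitly declares inet_ntop() as returning int, and printf's %s then receives that int. A trimmed sketch with the header in place:

        #include <stdio.h>
        #include <arpa/inet.h>  /* declares inet_ntop(), returning const char * */

        int main(void)
        {
            struct in_addr addr;
            char str[INET_ADDRSTRLEN];
            addr.s_addr = htonl(0x7f000001);  /* 127.0.0.1 */
            printf("address:%s\n",
                   inet_ntop(AF_INET, &addr, str, sizeof str));
            return 0;
        }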

    Read the article

  • Selected node changed

    - by user175084
    I have a tree view like this and I want to navigate to 3 different pages using Response.Redirect --machine groups (main) ----dept (Parent) ------xyz (child) protected void TreeView2_SelectedNodeChanged(object sender, EventArgs e) { if (TreeView2.SelectedValue == "Machine Groups") { Response.Redirect("~/Gridviewpage.aspx"); } else switch (e.Node.Depth) { case 0: Response.Redirect("~/Machineupdate.aspx?node=" + TreeView2.SelectedNode.Value); break; case 1: Response.Redirect("~/MachineUpdatechild.aspx?node=" + TreeView3.SelectedNode.Value); break; } } } Now if I put EventArgs, it points to an error on e.Node: System.EventArgs does not contain a definition for Node. If I replace EventArgs with TreeNodeEventArgs then that error goes away, but I get a compilation error: Compiler Error Message: CS0123: No overload for 'TreeView2_SelectedNodeChanged' matches delegate 'System.EventHandler' <asp:TreeView ID="TreeView2" runat="server" OnUnload="TreeViewMain_Unload" ontreenodepopulate="TreeView2_TreeNodePopulate" onselectednodechanged="TreeView2_SelectedNodeChanged"> <Nodes> <asp:TreeNode PopulateOnDemand="True" Text="Machine Groups" Value="Machine Groups"></asp:TreeNode> </Nodes> </asp:TreeView> Please help me out. I would also like to know what the difference is between EventArgs and TreeNodeEventArgs. Thanks

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2. It seems to only want to work with XSLT. When I type: java -jar saxon9he.jar I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q), and gives XSLT usage information. Here are some command line interactions: >java -jar saxon9he.jar No source file name Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument -c:filename Use compiled stylesheet from file -config:filename Use configuration file -cr:classname Use collection URI resolver class -dtd:on|off Validate using DTD -expand:on|off Expand defaults defined in schema/DTD -explain[:filename] Display compiled expression tree -ext:on|off Allow|Disallow external Java functions -im:modename Initial mode -ief:class;class;... List of integrated extension functions -it:template Initial template -l:on|off Line numbering for source document -m:classname Use message receiver class -now:dateTime Set currentDateTime -o:filename Output file or directory -opt:0..10 Set optimization level (0=none, 10=max) -or:classname Use OutputURIResolver class -outval:recover|fatal Handling of validation errors on result document -p:on|off Recognize URI query parameters -r:classname Use URIResolver class -repeat:N Repeat N times for performance measurement -s:filename Initial source document -sa Use schema-aware processing -strip:all|none|ignorable Strip whitespace text nodes -t Display version and timing information -T[:classname] Use TraceListener class -TJ Trace calls to external Java functions -tree:tiny|linked Select tree model -traceout:file|#null Destination for fn:trace() output -u Names are URLs not filenames -val:strict|lax Validate using schema -versionmsg:on|off Warn when using XSLT 1.0 stylesheet -warnings:silent|recover|fatal Handling of recoverable errors -x:classname Use specified SAX parser for source file -xi:on|off Expand XInclude on all documents -xmlversion:1.0|1.1 Version of XML to be handled -xsd:file;file.. Additional schema documents to be loaded -xsdversion:1.0|1.1 Version of XML Schema to be used -xsiloc:on|off Take note of xsi:schemaLocation -xsl:filename Stylesheet file -y:classname Use specified SAX parser for stylesheet --feature:value Set configuration feature (see FeatureKeys) -? Display this message param=value Set stylesheet string parameter +param=filename Set stylesheet document parameter ?param=expression Set stylesheet parameter using XPath !option=value Set serialization option >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq" Unknown option -q:..\w3xQueryTut.xq Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument // etc... >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq" Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query // etc... Could not find the main class: net.sf.saxon.Query. Program will exit. I'm probably making some stupid mistake. Do you know what it could be?
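
    The usage dump is itself the clue: java -jar ignores the classpath and always runs the jar's manifest main class, which for Saxon is the XSLT entry point (net.sf.saxon.Transform), so every option is parsed as an XSLT option. The last attempt was closest; it failed only because the jar was not on the classpath. A sketch of the likely working invocation (same query file as above):

        >java -cp saxon9he.jar net.sf.saxon.Query -q:..\w3xQueryTut.xq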

    Read the article

  • int, short, byte performance in back-to-back for-loops

    - by runrunraygun
    (background: http://stackoverflow.com/questions/1097467/why-should-i-use-int-instead-of-a-byte-or-short-in-c) To satisfy my own curiosity about the pros and cons of using the "appropriate size" integer vs the "optimized" integer, I wrote the following code, which reinforced what I previously held true about int performance in .Net (and which is explained in the link above), which is that it is optimized for int performance rather than short or byte. DateTime t; long a, b, c; t = DateTime.Now; for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } a = DateTime.Now.Ticks - t.Ticks; t = DateTime.Now; for (short index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } b=DateTime.Now.Ticks - t.Ticks; t = DateTime.Now; for (byte index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } c=DateTime.Now.Ticks - t.Ticks; Console.WriteLine(a.ToString()); Console.WriteLine(b.ToString()); Console.WriteLine(c.ToString()); This gives roughly consistent results in the area of... ~950000 ~2000000 ~1700000 which is in line with what I would expect to see. However when I try repeating the loops for each data type like this... t = DateTime.Now; for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } a = DateTime.Now.Ticks - t.Ticks; the numbers are more like... ~4500000 ~3100000 ~300000 Which I find puzzling. Can anyone offer an explanation? NOTE: In the interest of comparing like for like I've limited the loops to 127 because of the range of the byte value type. Also, this is an act of curiosity, not production-code micro-optimization.

    Read the article

  • Loading .png image from array of uint8_t into OpenGL ES texture

    - by unknownthreat
    Normally, when we want to load a texture for OpenGL ES from a .png, we simply add the .png images into XCode. The .png files will be altered for optimization by XCode, and these altered png files can be loaded into an OpenGL ES texture at runtime. However, what I am trying to do is quite different. I am trying to load a .png file that does not come from the prebuilt/compile step. The png file will be transmitted externally over UDP, and it will be in the form of an array of bytes. I am very sure that the png is transferred correctly, but when it comes to displaying the png image in the form of the OpenGL ES texture, the image somehow shows incorrectly. The colors that are being sent are presented, but the positions are somehow very incorrect. Here: The left image shows the original .png, while the right shows the png being displayed on the iPhone using an OpenGL ES texture. It looks more like the png data is not being decoded or is incorrectly processed. Below is the OpenGL ES code for turning the image into a texture: - (void) setTextureFromImageByte: (uint8_t*)imageByte{ if (self = [super init]){ NSData* imageData = [[NSData alloc] initWithBytes: imageByte length: imageLength]; UIImage* img = [[UIImage alloc] initWithData: imageData]; CGImageRef image = img.CGImage; int width = 512; int height = 512; if (image){ int tempWidth = (int)width, tempHeight = (int)height; if ((tempWidth & (tempWidth - 1)) != 0 ){ NSLog(@"CAUTION! width is not power of 2. width == %d", tempWidth); }else if ((tempHeight & (tempHeight - 1)) != 0 ){ NSLog(@"CAUTION! height is not power of 2. height == %d", tempHeight); }else{ void *spriteData = calloc(width * 4, height * 4); CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(image), kCGImageAlphaPremultipliedLast); CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, width, height), image); CGContextRelease(spriteContext); glBindTexture(GL_TEXTURE_2D, 1); glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 435, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); free(spriteData); } }else NSLog(@"ERROR: Image not loaded..."); [img release]; [imageData release]; } } So does anyone know how to deal with this? Is it because the iPhone only accepts png files altered by XCode? What can we do in this case to make the png image display correctly?

    Read the article

  • GCC, -O2, and bitfields - is this a bug or a feature?

    - by Rooke
    Today I discovered alarming behavior when experimenting with bit fields. For the sake of discussion and simplicity, here's an example program: #include <stdio.h> struct Node { int a:16 __attribute__ ((packed)); int b:16 __attribute__ ((packed)); unsigned int c:27 __attribute__ ((packed)); unsigned int d:3 __attribute__ ((packed)); unsigned int e:2 __attribute__ ((packed)); }; int main (int argc, char *argv[]) { Node n; n.a = 12345; n.b = -23456; n.c = 0x7ffffff; n.d = 0x7; n.e = 0x3; printf("3-bit field cast to int: %d\n",(int)n.d); n.d++; printf("3-bit field cast to int: %d\n",(int)n.d); } The program is purposely causing the 3-bit bit-field to overflow. Here's the (correct) output when compiled using "g++ -O0": 3-bit field cast to int: 7 3-bit field cast to int: 0 Here's the output when compiled using "g++ -O2" (and -O3): 3-bit field cast to int: 7 3-bit field cast to int: 8 Checking the assembly of the latter example, I found this: movl $7, %esi movl $.LC1, %edi xorl %eax, %eax call printf movl $8, %esi movl $.LC1, %edi xorl %eax, %eax call printf xorl %eax, %eax addq $8, %rsp The optimizations have just inserted "8", assuming 7+1=8 when in fact the number overflows and is zero. Fortunately the code I care about doesn't overflow as far as I know, but this situation scares me - is this a known bug, a feature, or is this expected behavior? When can I expect gcc to be right about this? Edit (re: signed/unsigned) : It's being treated as unsigned because it's declared as unsigned. Declaring it as int you get the output (with O0): 3-bit field cast to int: -1 3-bit field cast to int: 0 An even funnier thing happens with -O2 in this case: 3-bit field cast to int: 7 3-bit field cast to int: 8 I admit that attribute is a fishy thing to use; in this case it's a difference in optimization settings I'm concerned about.
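
    For an unsigned 3-bit field, the language requires stored values to wrap modulo 8, so printing 8 means the optimizer constant-folded n.d++ without reducing the result into the field's width; that looks like a bug worth reporting against this gcc version. A defensive sketch that stays correct even under such folding, because the wrap is made explicit before the store (Node reduced to the one field that matters):

        #include <stdio.h>

        struct Node {
            unsigned int d:3 __attribute__ ((packed));
        };

        int main(void)
        {
            struct Node n;
            n.d = 0x7;
            n.d = (n.d + 1u) & 0x7u;  /* explicit mask: 7 + 1 wraps to 0 */
            printf("3-bit field cast to int: %d\n", (int)n.d);
            return 0;
        }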

    Read the article

  • Understanding CLR 2.0 Memory Model

    - by Eloff
    Joe Duffy, gives 6 rules that describe the CLR 2.0+ memory model (it's actual implementation, not any ECMA standard) I'm writing down my attempt at figuring this out, mostly as a way of rubber ducking, but if I make a mistake in my logic, at least someone here will be able to catch it before it causes me grief. Rule 1: Data dependence among loads and stores is never violated. Rule 2: All stores have release semantics, i.e. no load or store may move after one. Rule 3: All volatile loads are acquire, i.e. no load or store may move before one. Rule 4: No loads and stores may ever cross a full-barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.). Rule 5: Loads and stores to the heap may never be introduced. Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location. I'm attempting to understand these rules. x = y y = 0 // Cannot move before the previous line according to Rule 1. x = y z = 0 // equates to this sequence of loads and stores before possible re-ordering load y store x load 0 store z Looking at this, it appears that the load 0 can be moved up to before load y, but the stores may not be re-ordered at all. Therefore, if a thread sees z == 0, then it also will see x == y. If y was volatile, then load 0 could not move before load y, otherwise it may. Volatile stores don't seem to have any special properties, no stores can be re-ordered with respect to each other (which is a very strong guarantee!) Full barriers are like a line in the sand which loads and stores can not be moved over. No idea what rule 5 means. I guess rule 6 means if you do: x = y x = z Then it is possible for the CLR to delete both the load to y and the first store to x. x = y z = y // equates to this sequence of loads and stores before possible re-ordering load y store x load y store z // could be re-ordered like this load y load y store x store z // rule 6 applied means this is possible? load y store x // but don't pop y from stack (or first duplicate item on top of stack) store z What if y was volatile? I don't see anything in the rules that prohibits the above optimization from being carried out. This does not violate double-checked locking, because the lock() between the two identical conditions prevents the loads from being moved into adjacent positions, and according to rule 6, that's the only time they can be eliminated. So I think I understand all but rule 5, here. Anyone want to enlighten me (or correct me or add something to any of the above?)

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options activated) way to do this? Best wishes, Alexander Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does the GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
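
    To make the update concrete, here is a sketch of the consuming side matching the findings above (assumptions: GNU toolchain, the object produced by something like objcopy -I binary -O pe-i386 inbuilt_training_data.bin data.o; whether the symbols carry a leading underscore depends on the output format, as noted):

        #include <cstddef>

        // symbols synthesized by objcopy from the input file name;
        // on this pe-i386 target they have no leading underscore
        extern char binary_inbuilt_training_data_bin_start[];
        extern char binary_inbuilt_training_data_bin_end[];

        std::size_t training_data_size()
        {
            return static_cast<std::size_t>(
                binary_inbuilt_training_data_bin_end -
                binary_inbuilt_training_data_bin_start);
        }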

    Read the article
