Search Results

Search found 1273 results on 51 pages for 'vertex shader'.

  • How to create a Binary Tree from a General Tree?

    - by mno4k
    I have to solve the following constructor for a BinaryTree class in Java: BinaryTree(GeneralTree<T> aTree) This method should create a BinaryTree (bt) from a General Tree (gt) as follows: Every vertex from gt will be represented as a leaf in bt. If gt is a leaf, then bt will be a leaf with the same value as gt. If gt is not a leaf, then bt will be constructed as an empty root, a left subtree (lt) and a right subtree (lr). Lt is a strict binary tree created from the oldest subtree of gt (the left-most subtree) and lr is a strict binary tree created from gt without its left-most subtree. The first part is trivial enough, but the second one is giving me some trouble. I've gotten this far: public BinaryTree(GeneralTree<T> aTree){ if (aTree.isLeaf()){ root= new BinaryNode<T>(aTree.getRootData()); }else{ root= new BinaryNode<T>(null); // empty root LinkedList<GeneralTree<T>> childs = aTree.getChilds(); // Childs of the GT are implemented as a LinkedList of SubTrees childs.begin(); // start iterating through the list BinaryTree<T> lt = new BinaryTree<T>(childs.element(0)); // first element = left-most child this.addLeftChild(lt); aTree.DeleteChild(childs.element(0)); BinaryTree<T> lr = new BinaryTree<T>(aTree); this.addRightChild(lr); } } Is this the right way? If not, can you think of a better way to solve this? Thank you!

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are quite a few existing answers to this question. However, I found none of them really getting to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back edges (see the Boost Graph documentation on file dependencies). I would now like some opinions on whether all cycles in a graph can be detected via DFS and back-edge checking. My view is that it should indeed work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex is a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading back to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participating nodes exist (e.g. triangles, rectangles, etc.), additional work has to be done to discriminate the actual "shape" of each cycle.
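
    For illustration, a minimal C++ sketch of the DFS/back-edge idea discussed above (the adjacency-list layout and all names are assumptions of mine, not from the post): vertices are marked unvisited, on-the-current-path, or finished, and an edge that reaches an on-the-path vertex is a back edge, i.e. evidence of a cycle.

      #include <cstdio>
      #include <vector>

      // States: 0 = unvisited, 1 = on the current DFS path, 2 = finished.
      void dfs(int u, const std::vector<std::vector<int>>& adj, std::vector<int>& state)
      {
          state[u] = 1;
          for (int v : adj[u]) {
              if (state[v] == 0) {
                  dfs(v, adj, state);          // tree edge: keep descending
              } else if (state[v] == 1) {
                  // Back edge u -> v: v is an ancestor still on the path, so the
                  // tree path v .. u plus this edge forms a cycle.
                  std::printf("back edge %d -> %d closes a cycle\n", u, v);
              }
          }
          state[u] = 2;
      }

    Note that this detects cycles and yields one cycle per back edge, which answers the "can DFS find them" part; actually enumerating every elementary cycle (the point raised at the end of the post) is a harder problem, usually handled with something like Johnson's algorithm run per strongly connected component.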

    Read the article

  • When to use "property" builtin: auxiliary functions and generators

    - by Seth Johnson
    I recently discovered Python's property built-in, which disguises class method getters and setters as a class's property. I'm now being tempted to use it in ways that I'm pretty sure are inappropriate. Using the property keyword is clearly the right thing to do if class A has a property _x whose allowable values you want to restrict; i.e., it would replace the getX() and setX() construction one might write in C++. But where else is it appropriate to make a function a property? For example, if you have class Vertex(object): def __init__(self): self.x = 0.0 self.y = 1.0 class Polygon(object): def __init__(self, list_of_vertices): self.vertices = list_of_vertices def get_vertex_positions(self): return zip( *( (v.x,v.y) for v in self.vertices ) ) is it appropriate to add vertex_positions = property( get_vertex_positions ) ? Is it ever ok to make a generator look like a property? Imagine if a change in our code meant that we no longer stored Polygon.vertices the same way. Would it then be ok to add this to Polygon? @property def vertices(self): for v in self._new_v_thing: yield v.calculate_equivalent_vertex()

    Read the article

  • Simulating 3D 'cards' with just orthographic rendering

    - by meds
    I am rendering textured quads from an orthographic perspective and would like to simulate 'depth' by modifying the UVs and the vertex positions of the quad's four points (top left, top right, bottom left, bottom right). I've found that if I make the top-left and bottom-right corners' y positions the same, I don't get a linear 'skew' but rather a warped one, where the texture covering the top triangle (of the two that make up the quad) seems to get squashed while the bottom triangle's texture looks normal. I can change the UVs and any of the four points of the quad (but only in 2D space; it's an orthographic projection anyway, so 3D space won't matter much). So basically I'm trying to simulate perspective on a two-dimensional quad in an orthographic projection. Any ideas? Is it even mathematically possible/feasible? Ideally what I'd like is to set an x/y rotation as well as a virtual z 'position' (which simulates depth) through a function and have it internally calculate the positions/UVs to create the 3D effect. It seems like this should all be mathematical, where a set of 2D transforms can be applied to each corner of the quad to simulate depth; I just don't know how to make it happen. I'd guess it requires trigonometry or something; I'm trying to crunch the math but not making much progress. Here's what I mean: top left is just the card, center is the card with a y rotation of X degrees, and rightmost is a card with x and y rotations of different degrees.
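
    As a rough sanity check that the math works out, here is a small C++ sketch (all names and the focal-length parameter are my own assumptions): rotate each corner of a unit card about X and Y, push it to a virtual depth, then do a manual perspective divide so the result can be fed straight into the orthographic quad as a 2D position.

      #include <cmath>

      struct Vec2 { float x, y; };

      // Project one corner of a unit card (cx, cy in [-0.5, 0.5]) after rotating it
      // by rotX/rotY radians and pushing it to virtual depth z. 'focal' stands in
      // for the eye-to-projection-plane distance; larger values weaken the effect.
      Vec2 projectCorner(float cx, float cy, float rotX, float rotY, float z, float focal)
      {
          // rotate about the X axis (the corner starts in the z = 0 plane)
          float y  = cy * std::cos(rotX);
          float zr = cy * std::sin(rotX);
          // rotate about the Y axis
          float x  =  cx * std::cos(rotY) + zr * std::sin(rotY);
          zr       = -cx * std::sin(rotY) + zr * std::cos(rotY);
          // manual perspective divide: corners that end up farther away shrink
          // toward the centre, giving the trapezoid shape of a tilted card
          float s = focal / (focal + z + zr);
          return { x * s, y * s };
      }

    The squashed-top-triangle artifact described above is the classic affine texture interpolation problem on a non-rectangular quad: the two triangles interpolate UVs linearly in screen space. Subdividing the quad into more triangles, or supplying perspective-correct (s, t, r, q) texture coordinates, is the usual way around it.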

    Read the article

  • Bind a generic list to a listbox and also use a datatemplate

    - by muku
    Hello, I'm trying to implement something quite simple, but I'm taking my first steps in WPF and I'm having some problems. I have a class called Component which has a property called Vertices. Vertices is a generic List of type Point. What I want is to bind the Vertices property to a ListBox. This is easy using this code in my XAML in the ListBox declaration: ItemsSource="{Binding Path=Component.Vertices, Mode=OneWay, Converter={StaticResource verticesconverter},UpdateSourceTrigger=PropertyChanged}" The tricky part is when I try to create a DataTemplate for the ListBox. I want each row of the ListBox to display a TextBox with the values of the vertex (Point.X, Point.Y) and a button to allow me to delete the item. Could you help me with the DataTemplate definition? The code below fails to bind the X and Y values into two separate TextBoxes. Could you point out the mistake, and why nothing is displayed in the TextBoxes? <ListBox ItemsSource="{Binding Path=Component.Vertices, Mode=OneWay,UpdateSourceTrigger=PropertyChanged}"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal" Margin="0,10,0,0"> <TextBox Text="{Binding X}" MinWidth="35" MaxWidth="35"/> <TextBox Text="{Binding Y}" MinWidth="35" MaxWidth="35"/> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox>

    Read the article

  • C# class design - expose variables for reading but not setting

    - by James Brauman
    I have a polygon class which stores a list of Microsoft.Xna.Framework.Vector2 as the vertices of the polygon. Once the polygon is created, I'd like other classes to be able to read the positions of the vertices, but not change them. I am currently exposing the vertices through this property: /// <summary> /// Gets the vertices stored for this polygon. /// </summary> public List<Vector2> Vertices { get { return _vertices; } } List<Vector2> _vertices; However, you can change any vertex using code like: Polygon1.Vertices[0] = new Vector2(0, 0); or Polygon1.Vertices[0].X = 0; How can I limit other classes to only reading the properties of these vertices, without being able to set new ones into my List? The only thing I can think of is to pass a copy to classes that request it. Note that Vector2 is a struct that is part of the XNA framework and I cannot change it. Thanks.

    Read the article

  • Enumerate all paths in a weighted graph from A to B where path length is between C1 and C2

    - by awmross
    Given two points A and B in a weighted graph, find all paths from A to B where the length of the path is between C1 and C2. Ideally, each vertex should only be visited once, although this is not a hard requirement. I suppose I could use a heuristic to sort the results of the algorithm to weed out "silly" paths (e.g. a path that just visits the same two nodes over and over again). I can think of simple brute-force algorithms, but are there any more sophisticated algorithms that would make this more efficient? I can imagine that as the graph grows this could become expensive. In the application I am developing, A and B are actually the same point (i.e. the path must return to the start), if that makes any difference. Note that this is an engineering problem, not a computer science problem, so I can use an algorithm that is fast but not necessarily 100% accurate, i.e. it is OK if it returns most of the possible paths, or if most of the paths returned are within the given length range.
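
    A brute-force-with-pruning sketch in C++, written for the A = B (closed loop) case mentioned above and assuming non-negative edge weights so the upper bound can prune (the graph representation and all names are mine, purely illustrative): a depth-first search that abandons a branch as soon as the accumulated length exceeds C2, and records a path whenever it gets back to the start with total length of at least C1.

      #include <utility>
      #include <vector>

      using Graph = std::vector<std::vector<std::pair<int, double>>>;  // adj[u] = {(v, weight), ...}

      void enumerate(const Graph& g, int start, int u, double len, double c1, double c2,
                     std::vector<int>& path, std::vector<bool>& used,
                     std::vector<std::vector<int>>& results)
      {
          for (const auto& [v, w] : g[u]) {
              double nl = len + w;
              if (nl > c2) continue;                       // prune: already too long
              if (v == start) {                            // closed a loop back to the start
                  if (nl >= c1) results.push_back(path);
                  continue;
              }
              if (used[v]) continue;                       // visit each vertex at most once
              used[v] = true; path.push_back(v);
              enumerate(g, start, v, nl, c1, c2, path, used, results);
              path.pop_back(); used[v] = false;
          }
      }

      // Usage sketch: path = {start}; used[start] = true;
      // enumerate(g, start, start, 0.0, c1, c2, path, used, results);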

    Read the article

  • ArrayList<String> NullPointerException

    - by Carlucho
    I am trying to solve a labyrinth by DFS, using an adjacency list to represent the vertices and edges of the graph. In total there are 12 nodes (3 rows [A,B,C] * 4 cols [0,..,3]). My program starts by saving all the vertex labels (A0,..,C3), so far so good, then checks the adjacent nodes, also no problem; if movement is possible, it proceeds to create the edge, and here is where it all goes wrong. adjList[i].add(vList[j].label); I used the debugger and found that vList[j].label is not null; it contains a correct string (i.e. "B1"). The only variables which show null are in adjList[i], which leads me to believe I have implemented it wrongly. This is how I did it: public class GraphList { private ArrayList<String>[] adjList; ... public GraphList(int vertexCount) { adjList = (ArrayList<String>[]) new ArrayList[vertexCount]; ... } ... public void addEdge(int i, int j) { adjList[i].add(vList[j].label); // NULLPOINTEREXCEPTION HERE } ... } I would really appreciate it if anyone could point me in the right direction regarding what is going wrong... Thanks!

    Read the article

  • How to define trees with more than one type in ML programing language

    - by user550413
    Well, I am asked to do the following: define a binary tree which can contain 2 different types, ('a,'b) abtree, and these are the requirements: Any inner vertex (not a leaf) must be of type 'a or 'b, and the leaves have no value. For every path in the tree all 'a values must appear before the 'b values. Examples of paths: 'a->'a->'a->'b (legal) 'a->'b->'b (legal) 'a->'a->'a (legal) 'b->'b->'b (legal) 'a->'b->'a (ILLEGAL) I also need to define another tree which is like the one described above, but which also has 'c, and the second requirement becomes: for every path all 'a values appear before the 'b values and all the 'b values appear before the 'c values. First, I am not sure how to define binary trees to have more than one type in them. I mean, the simplest binary tree is: datatype 'a tree = leaf | br of 'a * 'a tree * 'a tree; And also, how can I define a tree that enforces these requirements? Any help will be appreciated. Thanks.

    Read the article

  • Textures in Opengl ES 2 not working properly

    - by Adl
    Hi! I'm working with OpenGL ES 2 on the iPhone and right now I am trying to get my textures working on my objects. I'm using .obj files and all the data in them is correct. I have written a parser myself to retrieve all the data, and I convert it to static arrays in C. I discard the material properties for now, only getting the image path from the .mtl files manually. I have an object with 336 triangles, making this non-trivial to observe, with the corresponding vertices, vertex faces and texture coordinates (u,v). Passing all the data into the shaders, the resulting image is this: http://img530.imageshack.us/img530/9637/pic1io.png http://img404.imageshack.us/img404/7358/pic2pg.png But it should look like this (displayed in an object viewer; please ignore the material properties): http://img16.imageshack.us/img16/1401/pic3cq.png Using this image as a texture: http://img217.imageshack.us/img217/1300/shirtdiffuse.png I'm thinking it might have to do with the texture coordinate faces? They are defined in my .obj file, and I'm not using them at all. In books and tutorials I have not found anything concerning this. Regards, Niclas

    Read the article

  • Enumerating pixel formats for adaptors and modes with OpenGL

    - by Robinson
    I'm trying to code an OpenGL path for my 3D engine. The D3D path enumerates all device adaptors, all modes (by mode I mean bit depth, dimensions, available windowed, and refresh rate) for each adaptor and then all pixel formats available for the given mode and adaptor, along side certain useful caps (shader version, filter types, etc.). So, I have broadly got the following protected functions in the class: // Enumerate all back/front buffer combinations. virtual void EnumerateBackFrontBufferCombinations(CComPtr<IDirect3D9>& d3d9); // Enumerate all depth/stencil formats. virtual void EnumerateDepthStencilFormats(CComPtr<IDirect3D9>& d3d9); // Enumerate all multi-sample formats. virtual void EnumerateMultiSampleTypes(CComPtr<IDirect3D9>& d3d9); // Enumerate all device formats, i.e. dynamic, static, render target, etc. virtual void EnumerateMapFormats(CComPtr<IDirect3D9>& d3d9); // Enumerate all capabilities. virtual void EnumerateCapabilities(CComPtr<IDirect3D9>& d3d9); The adaptors are enumerated with EnumDisplayDevices, the modes (resolutions and refresh rates) are enumerated with EnumDisplaySettings, so this can be done for either GL or D3D. The other functions I'm not so sure about with OpenGL. What are the equivalents to the IDirect3D9's CheckDeviceType, CheckDeviceFormat, CheckDeviceMultiSampleType, CheckDepthStencilMatch? I know I can use DescribePixelFormat, given a DC, but you kind-of need to have created the window before you can use a DC with it, but you can't create the window correctly until you know what formats you're going to use. Any tips you can give me? Thanks.
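
    On the WGL side, the usual answer to the chicken-and-egg problem described above is a throwaway window: create a dummy window and context once, fetch the WGL_ARB_pixel_format entry points, enumerate with them, destroy the dummy, and only then create the real window with the chosen format. A sketch of just the enumeration step, assuming a dummy context is already current on dc and with all error handling omitted:

      #include <windows.h>
      #include <GL/gl.h>
      #include <GL/wglext.h>   // WGL_ARB_pixel_format tokens and function typedefs

      // Assumes a (dummy) GL context is current so wglGetProcAddress returns real pointers.
      void EnumeratePixelFormats(HDC dc)
      {
          auto getAttribs = reinterpret_cast<PFNWGLGETPIXELFORMATATTRIBIVARBPROC>(
              wglGetProcAddress("wglGetPixelFormatAttribivARB"));
          if (!getAttribs) return;                         // extension not available

          int countAttrib = WGL_NUMBER_PIXEL_FORMATS_ARB;
          int formatCount = 0;
          getAttribs(dc, 0, 0, 1, &countAttrib, &formatCount);

          const int attribs[] = { WGL_DRAW_TO_WINDOW_ARB, WGL_ACCELERATION_ARB,
                                  WGL_PIXEL_TYPE_ARB,     WGL_COLOR_BITS_ARB,
                                  WGL_DEPTH_BITS_ARB,     WGL_STENCIL_BITS_ARB,
                                  WGL_SAMPLES_ARB };
          for (int i = 1; i <= formatCount; ++i) {         // pixel format indices are 1-based
              int values[7] = {};
              getAttribs(dc, i, 0, 7, attribs, values);
              // values[] now holds the caps of format i; filter and record as needed.
          }
      }

    Multisample counts and similar capabilities are queried the same way with additional WGL_* attributes, which is roughly the counterpart of the CheckDeviceFormat/CheckDeviceMultiSampleType calls listed above.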

    Read the article

  • How to setup/calculate texturebuffer in glTexCoordPointer when importing from OBJ-file

    - by JohnMurdoch
    Hi all, I'm parsing an OBJ file on Android and my goal is to render & display the object. Everything works fine except for correct texture mapping (importing the resource/image into OpenGL etc. works fine). I don't know how to populate the texture-related data from the OBJ file into a texture coordinate buffer. In the OBJ file I have vt lines: vt 0.495011 0.389417 vt 0.500686 0.561346 and face lines: f 127/73/62 98/72/62 125/75/62 My draw routine looks like this (only the relevant parts): gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_NORMAL_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer); gl.glTexCoordPointer(2, GL10.GL_SHORT, 0, t.getvtBuffer()); gl.glDrawElements(GL10.GL_TRIANGLES, t.getFacesCount(), GL10.GL_UNSIGNED_SHORT, t.getFaceBuffer()); Output of the counts from the OBJ file: Vertex-count: 1023 Vns-count: 1752 Vts-count: 524 ///////////////////////// Part 0 Material name:default Number of faces:2037 Number of vnPointers:2037 Number of vtPointers:2037 Any advice is welcome.
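
    The usual stumbling block here is that an OBJ face entry like f 127/73/62 indexes positions, texture coordinates and normals independently, while glDrawElements uses one index for all attributes, so each distinct v/vt/vn triplet has to be expanded into its own vertex before filling the buffers. A rough C++ sketch of that expansion (types and names are mine; duplicate merging and error handling are omitted):

      #include <vector>

      struct Vec3 { float x, y, z; };
      struct Vec2 { float u, v; };
      struct Corner { int v, vt, vn; };               // one face corner, 1-based OBJ indices

      // Expand indexed OBJ data into parallel arrays usable with glVertexPointer,
      // glTexCoordPointer and glNormalPointer plus a single index buffer.
      void expand(const std::vector<Vec3>& objPos, const std::vector<Vec2>& objUV,
                  const std::vector<Vec3>& objNrm, const std::vector<Corner>& corners,
                  std::vector<float>& pos, std::vector<float>& uv,
                  std::vector<float>& nrm, std::vector<unsigned short>& indices)
      {
          for (const Corner& c : corners) {           // every corner becomes a fresh vertex
              const Vec3& p = objPos[c.v - 1];
              const Vec2& t = objUV[c.vt - 1];
              const Vec3& n = objNrm[c.vn - 1];
              pos.insert(pos.end(), { p.x, p.y, p.z });
              uv.insert(uv.end(), { t.u, t.v });
              nrm.insert(nrm.end(), { n.x, n.y, n.z });
              indices.push_back(static_cast<unsigned short>(indices.size()));
          }
      }

    Two smaller things also stand out in the draw routine quoted above: the (u, v) values parsed from the file are floats, so glTexCoordPointer should be given GL10.GL_FLOAT rather than GL10.GL_SHORT, and many OBJ exporters use a bottom-left UV origin, so flipping v to 1 - v is sometimes needed as well.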

    Read the article

  • Determining polygon intersection and containment

    - by Victor Liu
    I have a set of simple (no holes, no self-intersections) polygons, and I need to check that they don't intersect each other (one can be entirely contained in another; that is okay). I can check this by simply checking the per-vertex inside-ness of one polygon versus other polygons. I also need to determine the containment tree, which is the set of relationships that say which polygon contains any given polygon. Since no polygon can intersect any other, then any contained polygon has a unique container; the "next-bigger" one. In other words, if A contains B contains C, then A is the parent of B, and B is the parent of C, and we don't consider A the parent of C. The question: How do I efficiently determine the containment relationships and check the non-intersection criterion? I ask this as one question because maybe a combined algorithm is more efficient than solving each problem separately. The algorithm should take as input a list of polygons, given by a list of their vertices. It should produce a boolean B indicating if none of the polygons intersect any other polygon, and also if B = true, a list of pairs (P, C) where polygon P is the parent of child C. This is not homework. This is for a hobby project I am working on.
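
    For reference, a compact C++ sketch of the two primitive tests such an algorithm is usually assembled from (everything here, names included, is illustrative; a production version would plug these into a sweep-line rather than test all edge pairs): a segment crossing test for the non-intersection check, and an even-odd point-in-polygon test for deciding containment.

      #include <vector>

      struct Pt { double x, y; };

      // z-component of (a - o) x (b - o); its sign gives the turn direction.
      double cross(const Pt& o, const Pt& a, const Pt& b)
      {
          return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
      }

      // True if segments ab and cd properly cross (shared endpoints ignored for brevity).
      bool segmentsCross(const Pt& a, const Pt& b, const Pt& c, const Pt& d)
      {
          double d1 = cross(a, b, c), d2 = cross(a, b, d);
          double d3 = cross(c, d, a), d4 = cross(c, d, b);
          return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
      }

      // Even-odd rule: count how many polygon edges a horizontal ray from p crosses.
      bool pointInPolygon(const Pt& p, const std::vector<Pt>& poly)
      {
          bool inside = false;
          for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
              if (((poly[i].y > p.y) != (poly[j].y > p.y)) &&
                  (p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                             (poly[j].y - poly[i].y) + poly[i].x))
                  inside = !inside;
          }
          return inside;
      }

    Once the crossing test has established that no two boundaries intersect, testing a single vertex of each polygon against the others is enough for containment, and the parent in the containment tree is simply the smallest containing polygon.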

    Read the article

  • Instanced drawing with OpenGL ES 2.0

    - by Mårten Wikström
    In short: Is it possible to use the gl_InstanceID built-in variable in OpenGL ES 2.0? And, if so, how? Some more info: I want to draw multiple instances of an object using glDrawArraysInstanced and gl_InstanceID, and I want my application to run on multiple platforms, including iOS. The specification clearly says that these features require ES 3.0. According to the iOS Device Compatibility Reference ES 3.0 is only available on a few devices (those based on the A7 GPU; so iPhone 5s, but not on iPhone 5 or earlier). So my first assumption was that I needed to avoid using instanced drawing on older iOS devices. However, further down in the compatibility reference document it says that the EXT_draw_instanced extension is supported for all SGX Series 5 processors (that includes iPhone 5 and 4s). This makes me think that I could indeed use instanced drawing on older iOS devices too, by looking up and using the appropriate extension function (EXT or ARB) for glDrawArraysInstanced. I'm currently just running some test code using SDL and GLEW on Windows so I haven't tested anything on iOS yet. However, in my current setup I'm having trouble using the gl_InstanceID built-in variable in a vertex shader. I'm getting the following error message: 'gl_InstanceID' : variable is not available in current GLSL version Enabling the "draw_instanced" extension in GLSL has no effect: #extension GL_ARB_draw_instanced : enable #extension GL_EXT_draw_instanced : enable The error goes away when I specifically declare that I need ES 3.0 (GLSL 300 ES): #version 300 es Although that seem to work fine on my Windows desktop machine in an ES 2.0 context I doubt that this would work on an iPhone 5. So, shall I abandon the idea of being able to use instanced drawing on older iOS devices?
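
    Before giving up on the older devices, it may be worth a runtime check along these lines (only a sketch: the extension and entry-point names come from the EXT_draw_instanced spec, and whether the GLSL ES 1.00 compiler on a given SGX driver then accepts a suffixed gl_InstanceIDEXT built-in behind an #extension directive is exactly what would need verifying on real hardware):

      #include <cstring>
      #include <GLES2/gl2.h>
      #include <GLES2/gl2ext.h>   // on iOS: <OpenGLES/ES2/gl.h> and <OpenGLES/ES2/glext.h>

      bool hasExtension(const char* name)
      {
          const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
          return ext != nullptr && std::strstr(ext, name) != nullptr;
      }

      void drawInstanced(GLint first, GLsizei count, GLsizei instances, GLint instanceUniform)
      {
          if (hasExtension("GL_EXT_draw_instanced")) {
              // ES 2.0 + extension: one call, the instance count comes from the extension.
              glDrawArraysInstancedEXT(GL_TRIANGLES, first, count, instances);
          } else {
              // Plain ES 2.0 fallback: emulate the instance id with a uniform per draw.
              for (GLsizei i = 0; i < instances; ++i) {
                  glUniform1f(instanceUniform, static_cast<GLfloat>(i));  // hypothetical uniform
                  glDrawArrays(GL_TRIANGLES, first, count);
              }
          }
      }

    On EGL platforms the EXT entry point may need to be fetched with eglGetProcAddress; on iOS it should be declared in the ES2 glext.h header if the SDK knows about the extension.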

    Read the article

  • Glitch when moving camera in OpenGL

    - by CG
    I am writing a tile-based game engine for the iPhone and it works in general apart from the following glitch. Basically, the camera will always keep the player in the centre of the screen, and it moves to follow the player correctly and draws everything correctly when stationary. However whilst the player is moving, the tiles of the surface the player is walking on glitch as shown: Compared to the stationary (correct): Does anyone have any idea why this could be? Thanks for the responses so far. Floating point error was my first thought also and I tried slightly increasing the size of the tiles but this did not help. Changing glClearColor to red still leaves black gaps so maybe it isn't floating point error. Since the tiles in general will use different textures, I don't know if vertex arrays can be used (I always thought that the same texture had to be applied to everything in the array, correct me if I'm wrong), and I don't think VBO is available in OpenGL ES. Setting the filtering to nearest neighbour improved things but the glitch still happens every ten frames or so, and the pixelly result means that this solution is not viable anyway. The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles, the player is still being moved). The code I'm using to move the camera is: void Camera::CentreAtPoint( GLfloat x, GLfloat y ) { glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(x - size.x / 2.0f, x + size.x / 2.0f, y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f); glMatrixMode(GL_MODELVIEW); } Is there a problem with doing things this way and if so is there a solution?
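
    For what it's worth, one mitigation often suggested for tile seams like this is to snap the camera centre to whole screen pixels before building the projection, so tile edges always rasterise at the same sample positions while scrolling; a sketch against the method above (pixelsPerUnit is a hypothetical member giving how many screen pixels one world unit covers):

      #include <cmath>

      // Round a world-space coordinate to the nearest whole screen pixel.
      static GLfloat snapToPixel(GLfloat v, GLfloat pixelsPerUnit)
      {
          return std::floor(v * pixelsPerUnit + 0.5f) / pixelsPerUnit;
      }

      void Camera::CentreAtPoint( GLfloat x, GLfloat y )
      {
          x = snapToPixel(x, pixelsPerUnit);   // assumed member, see note above
          y = snapToPixel(y, pixelsPerUnit);
          glMatrixMode(GL_PROJECTION);
          glLoadIdentity();
          glOrthof(x - size.x / 2.0f, x + size.x / 2.0f,
                   y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
          glMatrixMode(GL_MODELVIEW);
      }

    Adding a small border of duplicated edge pixels around each tile in the atlas (or insetting the UVs by half a texel) is the other usual guard against the sampler bleeding in the clear colour at tile boundaries.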

    Read the article

  • gl_FragColor and glReadPixels

    - by chun0216
    I am still trying to read pixels from a fragment shader and I have some questions. I know that gl_FragColor is a vec4, meaning RGBA, 4 channels. After that, I am using glReadPixels to read the FBO and write it into data: GLubyte *pixels = new GLubyte[640*480*4]; glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, pixels); This works fine, but it really has a speed issue. Instead of this, I want to read just RGB and ignore the alpha channel. I tried: GLubyte *pixels = new GLubyte[640*480*3]; glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels); instead, and this didn't work. I guess it's because gl_FragColor returns 4 channels and maybe I should do something before this? Actually, since my returned image (gl_FragColor) is grayscale, I did something like: float gray = 0.5; // or some other value gl_FragColor = vec4(gray, gray, gray, 1.0); So is there any more efficient way to use glReadPixels than the first, 4-channel method? Any suggestions? By the way, this is OpenGL ES 2.0 code.
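
    Two ES 2.0 details are worth ruling out here, sketched below (the wrapper function is mine): glReadPixels in ES 2.0 is only guaranteed to accept GL_RGBA with GL_UNSIGNED_BYTE, and any second supported combination is implementation-defined and has to be queried; also, the default pack alignment is 4 bytes, so tightly packed RGB rows only work when the row size happens to be a multiple of 4 (640*3 is, but it is safer to set the alignment explicitly).

      #include <cstddef>
      #include <vector>
      #include <GLES2/gl2.h>

      std::vector<GLubyte> readBack(int width, int height)
      {
          GLint fmt = 0, type = 0;
          glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &fmt);
          glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);

          std::vector<GLubyte> pixels;
          if (fmt == GL_RGB && type == GL_UNSIGNED_BYTE) {
              glPixelStorei(GL_PACK_ALIGNMENT, 1);        // rows tightly packed
              pixels.resize(static_cast<std::size_t>(width) * height * 3);
              glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
          } else {
              pixels.resize(static_cast<std::size_t>(width) * height * 4);  // fall back to RGBA
              glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
          }
          return pixels;
      }

    Dropping the fourth channel is unlikely to help the speed much either way: the cost of glReadPixels is mostly the CPU stalling until the GPU has finished rendering, not the extra byte per pixel.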

    Read the article

  • Can C++ do something like an ML case expression?

    - by Nathan Andrew Mullenax
    So, I've run into this sort of thing a few times in C++ where I'd really like to write something like case (a,b,c,d) of (true, true, _, _ ) => expr | (false, true, _, false) => expr | ... But in C++, I invariably end up with something like this: bool c11 = color1.count(e.first)>0; bool c21 = color2.count(e.first)>0; bool c12 = color1.count(e.second)>0; bool c22 = color2.count(e.second)>0; // no vertex in this edge is colored // requeue if( !(c11||c21||c12||c22) ) { edges.push(e); } // endpoints already same color // failure condition else if( (c11&&c12)||(c21&&c22) ) { results.push_back("NOT BICOLORABLE."); return true; } // nothing to do: nodes are already // colored and different from one another else if( (c11&&c22)||(c21&&c12) ) { } // first is c1, second is not set else if( c11 && !(c12||c22) ) { color2.insert( e.second ); } // first is c2, second is not set else if( c21 && !(c12||c22) ) { color1.insert( e.second ); } // first is not set, second is c1 else if( !(c11||c21) && c12 ) { color2.insert( e.first ); } // first is not set, second is c2 else if( !(c11||c21) && c22 ) { color1.insert( e.first ); } else { std::cout << "Something went wrong.\n"; } I'm wondering if there's any way to clean all of those if's and else's up, as it seems especially error prone. It would be even better if it were possible to get the compiler complain like SML does when a case expression (or statement in C++) isn't exhaustive. I realize this question is a bit vague. Maybe, in sum, how would one represent an exhaustive truth table with an arbitrary number of variables in C++ succinctly? Thanks in advance.
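
    One C++ idiom that gets reasonably close to the ML feel, sketched here with made-up action names: pack the four booleans into a 4-bit value and switch on it, so every row of the truth table becomes a literal case label and the default branch plays the role of the non-exhaustive-match warning (the compiler will not enforce exhaustiveness, but at least the failure is loud and the rows read like the tuple patterns above).

      #include <cassert>
      #include <cstdint>

      enum class Action { Requeue, Fail, AlreadyDone, ColorSecondC2, ColorSecondC1 /* ... */ };

      // Pack four flags into one value so each truth-table row is a constant expression.
      constexpr std::uint8_t row(bool a, bool b, bool c, bool d)
      {
          return static_cast<std::uint8_t>((a << 3) | (b << 2) | (c << 1) | d);
      }

      Action classify(bool c11, bool c21, bool c12, bool c22)
      {
          switch (row(c11, c21, c12, c22)) {
              case row(false, false, false, false): return Action::Requeue;       // nothing colored yet
              case row(true,  false, true,  false):                               // both endpoints c1...
              case row(false, true,  false, true ): return Action::Fail;          // ...or both c2
              case row(true,  false, false, true ):
              case row(false, true,  true,  false): return Action::AlreadyDone;   // colored, different
              case row(true,  false, false, false): return Action::ColorSecondC2; // first c1, second unset
              case row(false, true,  false, false): return Action::ColorSecondC1; // first c2, second unset
              // ... remaining rows ...
              default: assert(!"unhandled combination"); return Action::Requeue;
          }
      }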

    Read the article

  • additive texture combiner

    - by ivicaa
    I have a problem which is driving me crazy. Environment: iPhone, OpenGL ES 1.1. Basically I have a simple GL_COMBINE for vertex color and texture color: glColor4f(0.1f, 0.1f, 0.1f, 0); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE); glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD); glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PRIMARY_COLOR); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR); glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR); glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_ADD); glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA); glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA); It should simply do VertexColorRGBA + TextureRGBA. With alpha everything works fine, but as soon as I change R, G, B in the glColor4f call, the final alpha is also modified. Does anyone have a hint about this unexpected behavior? Thanks in advance! Ivica

    Read the article

  • OpenGL circle rotation

    - by user350632
    I'm using the following code to draw my circles: double theta = 2 * 3.1415926 / num_segments; double c = Math.Cos(theta); // precalculate the sine and cosine double s = Math.Sin(theta); double t; double x = r; // we start at angle = 0 double y = 0; GL.glBegin(GL.GL_LINE_LOOP); for(int ii = 0; ii < num_segments; ii++) { float first = (float)(x * scaleX + cx) / xyFactor; float second = (float)(y * scaleY + cy) / xyFactor; GL.glVertex2f(first, second); // output vertex // apply the rotation matrix t = x; x = c * x - s * y; y = s * t + c * y; } GL.glEnd(); The problem is that when scaleX is different from scaleY, the circles are transformed correctly except for the rotation. In my code the sequence looks like this: circle.Scale(tmp_p.scaleX, tmp_p.scaleY); circle.Rotate(tmp_p.rotateAngle); My question is: what other calculations should I perform for the circle to rotate properly when scaleX and scaleY are not equal?
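
    If the intent is for the whole ellipse (the circle after non-uniform scaling) to turn by rotateAngle, the rotation has to be applied to the already-scaled point rather than folded into the angle stepping, otherwise the ellipse's axes stay locked to the screen axes. A per-vertex C++ sketch of that order of operations (names are mine):

      #include <cmath>

      struct P { float x, y; };

      // Point on the scaled-then-rotated circle for parameter t (radians).
      P ellipsePoint(float cx, float cy, float r, float t,
                     float scaleX, float scaleY, float rotateAngle)
      {
          float sx = r * std::cos(t) * scaleX;        // scale the unit-circle point first
          float sy = r * std::sin(t) * scaleY;
          float c = std::cos(rotateAngle), s = std::sin(rotateAngle);
          return { cx + c * sx - s * sy,              // then rotate the result about the centre
                   cy + s * sx + c * sy };
      }

    In matrix terms each vertex should see Translate * Rotate * Scale, so in the Scale/Rotate call sequence above the rotation needs to end up applied after (outside) the scale.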

    Read the article

  • Why do GLSL's arithmetic functions yield such different results on the iPad than on the simulator?

    - by cheeesus
    I'm currently chasing some bugs in my OpenGL ES 2.0 fragment shader code, which is running on iOS devices. The code runs fine in the simulator, but on the iPad it has huge problems and some of the calculations yield vastly different results; I had, for example, 0.0 on the iPad and 4013.17 on the simulator, so I'm not talking about small differences which could be the result of rounding errors. One of the things I noticed is that, on the iPad, float1 = pow(float2, 2.0); can yield results which are very different from the results of float1 = float2 * float2; Specifically, when using pow(x, 2.0) on a variable containing a larger negative number like -8, it seemed to return a value which satisfied the condition if (powResult <= 0.0). Also, both operations (pow(x, 2.0) as well as x*x) yield different results in the simulator than on the iPad. The floats used are mediump, but I get the same behaviour with highp. Is there a simple explanation for these differences? I'm narrowing the problem down, but it takes so much time, so maybe someone can help me here with a simple explanation.

    Read the article

  • How much is too much memory allocation in NDK?

    - by Maximus
    The NDK download page notes that "Typical good candidates for the NDK are self-contained, CPU-intensive operations that don't allocate much memory, such as signal processing, physics simulation, and so on." I came from a C background and was excited to try to use the NDK to handle most of my OpenGL ES functions and any native functions related to physics, animation of vertices, etc. I'm finding that I'm relying quite a bit on native code and wondering if I may be making some mistakes. I've had no trouble with testing so far, but I'm curious whether I may run into problems in the future. For example, I have a game struct defined (somewhat like what is seen in the San-Angeles example). I'm loading vertex information for objects dynamically (just what is needed for the active game area), so there's quite a bit of memory allocation happening for vertices, normals, texture coordinates, indices and texture graphic data... just to name the essentials. I'm quite careful about freeing what is allocated between game areas. Would I be safer setting some caps on array sizes, or should I charge bravely forward as I'm going now?

    Read the article

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki): glTranslate, glRotate, glScale Are these hardware accelerated? No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well : glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader. For a very, very long time I thought most of the OpenGL functions use the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: Which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.
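
    As a concrete illustration of the wiki advice quoted above, here is a sketch of the "build your own matrix and upload it" route (column-major layout as OpenGL expects; the uniform name u_model is made up): the translation is computed on the CPU either way, the difference is that you hand it to your shader explicitly instead of going through the deprecated matrix stack.

      #include <GL/gl.h>   // or whichever loader declares glUniformMatrix4fv for you

      // Fill m with a column-major 4x4 translation matrix.
      void makeTranslation(float m[16], float x, float y, float z)
      {
          for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f;  // identity
          m[12] = x; m[13] = y; m[14] = z;                                 // translation column
      }

      void uploadModelMatrix(GLuint program, float x, float y, float z)
      {
          float m[16];
          makeTranslation(m, x, y, z);
          GLint loc = glGetUniformLocation(program, "u_model");  // assumed uniform name
          glUniformMatrix4fv(loc, 1, GL_FALSE, m);               // built on the CPU, consumed by the GPU
      }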

    Read the article

  • Is glDisableClientState required?

    - by Shawn
    Every example I've come across for rendering array data is similar to the following code, in which in your drawing loop you first call glEnableClientState for what you will be using and when you are done you call glDisableClientState: void drawScene(void) { glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glBindTexture(GL_TEXTURE_2D, texturePointerA); glTexCoordPointer(2, GL_FLOAT, 0,textureCoordA); glVertexPointer(3, GL_FLOAT, 0, verticesA); glDrawElements(GL_QUADS, numPointsDrawnA, GL_UNSIGNED_BYTE, drawIndicesA); glBindTexture(GL_TEXTURE_2D, texturePointerB); glTexCoordPointer(2, GL_FLOAT, 0,textureCoordB); glVertexPointer(3, GL_FLOAT, 0, verticesB); glDrawElements(GL_QUADS, numPointsDrawnB, GL_UNSIGNED_BYTE, drawIndicesB); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glDisableClientState(GL_VERTEX_ARRAY); } In my program I am always using texture coordinates and vertex arrays, so I thought it was pointless to keep enabling and disabling them every frame. I moved the glEnableClientState outside of the loop like so: bool initGL(void) { //... glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); } void drawScene(void) { glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); glBindTexture(GL_TEXTURE_2D, texturePointerA); glTexCoordPointer(2, GL_FLOAT, 0,textureCoordA); glVertexPointer(3, GL_FLOAT, 0, verticesA); glDrawElements(GL_QUADS, numPointsDrawnA, GL_UNSIGNED_BYTE, drawIndicesA); glBindTexture(GL_TEXTURE_2D, texturePointerB); glTexCoordPointer(2, GL_FLOAT, 0,textureCoordB); glVertexPointer(3, GL_FLOAT, 0, verticesB); glDrawElements(GL_QUADS, numPointsDrawnB, GL_UNSIGNED_BYTE, drawIndicesB); } It seems to work fine. My question is: Do I need to call glDisableClientState somewhere; perhaps when the program is closed?. Also, is it ok to do it like this? Is there something I'm missing since everyone else enables and disables each frame?

    Read the article

  • Generate texture from polygon (openGL)

    - by user146780
    I have a quad and I would like to use the gradient it produces as a texture for another polygon. glPushMatrix(); glTranslatef(250,250,0); glBegin(GL_POLYGON); glColor3f(255,0,0); glVertex2f(10,0); glVertex2f(100,0); glVertex2f(100,100); glVertex2f(50,50); glVertex2f(0,100); glEnd(); //End quadrilateral coordinates glPopMatrix(); glBegin(GL_QUADS); //Begin quadrilateral coordinates glVertex2f(0,0); glColor3f(0,255,0); glVertex2f(150,0); glVertex2f(150,150); glColor3f(255,0,0); glVertex2f(0,150); glEnd(); //End quadrilateral coordinates My goal is to make the 5 vertex polygon have the gradient of the quad (maybe a texture is not the best bet) Thanks
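
    Two directions that might be worth considering, sketched with fixed-function calls to match the code above (the 256x256 capture size is an arbitrary assumption): since the quad's gradient is nothing more than interpolated per-vertex colours, the cheapest option is to give the five-vertex polygon per-vertex colours sampled from the same gradient; if an actual texture is required, draw the quad first and capture the framebuffer region it covers with glCopyTexImage2D, then bind that texture when drawing the polygon.

      #include <GL/gl.h>

      // Capture the lower-left 256x256 of the framebuffer (draw the gradient quad
      // there first) into a texture that can then be applied to any other polygon.
      GLuint captureGradient()
      {
          GLuint tex = 0;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_2D, tex);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 256, 256, 0);  // copies current framebuffer pixels
          return tex;
      }

      // Afterwards: glEnable(GL_TEXTURE_2D), glBindTexture(GL_TEXTURE_2D, tex),
      // and supply glTexCoord2f for each of the five vertices.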

    Read the article
