Search Results

Search found 13494 results on 540 pages for 'board game'.

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning. I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or white texture. Here are my questions:

    1. I assume the position that's automatically sent to the vertex shader from Ogre is in object space?
    2. The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader?
    3. Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly.

    Here's my shader code:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in camera space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }

    fNear is the near clip plane for the scene; fFar is the far clip plane for the scene.
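
    On the debugging question, one low-tech sanity check is to evaluate the same depth-to-gray mapping on the CPU for a few known camera and vertex positions, then compare against values read back from the render target. A minimal sketch in Java (smoothstep reimplemented by hand since it is GPU-side math; fNear, fFar and the sample distance are whatever your scene uses):

        // Reference implementation of smoothstep(edge0, edge1, x).
        static double smoothstep(double edge0, double edge1, double x) {
            double t = Math.max(0.0, Math.min(1.0, (x - edge0) / (edge1 - edge0)));
            return t * t * (3.0 - 2.0 * t);
        }

        // Expected gray value for a vertex at distance d from the eye.
        static double expectedGray(double fNear, double fFar, double d) {
            return 1.0 - smoothstep(fNear, fFar, d);
        }

    If every CPU-side value looks sane but the texture is still uniform, the shader parameters are the likelier suspects than the math.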

  • How should I choose quadtree depth?

    - by Evpok
    I'm using a quadtree to prune collision detection pairs in a 2D world. How should I choose the depth to which said quadtree is calculated? The world is made mostly of moving objects (see the note below), so the cost of dispatching the objects between the quadtree cells matters. What is the relationship between the gain from less collision checking and the loss from more dispatching? How can I strike a balance that performs optimally?

    Note: to be completely explicit, the objects are autonomous self-replicating cells competing for food sources. This is an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
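
    In practice the depth is usually not fixed globally but adapted to density: split a cell only when it holds more than a handful of objects, and cap the depth. A sketch of the heuristic, where both thresholds are assumptions to tune rather than recommended values:

        // Split policy for a quadtree node; both constants are tuning knobs.
        static final int MAX_OBJECTS_PER_NODE = 8; // split above this count
        static final int MAX_DEPTH = 6;            // hard cap on subdivision

        static boolean shouldSplit(int objectCount, int depth) {
            return objectCount > MAX_OBJECTS_PER_NODE && depth < MAX_DEPTH;
        }

    Since the objects move every frame, measuring total frame time for a few (MAX_OBJECTS_PER_NODE, MAX_DEPTH) pairs on a representative scene answers the balance question more reliably than any formula.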

  • FPS camera specification

    - by user1095108
    I remember I once composed an FPS viewing transformation as a composition of 3 rotations, each with an angle as a parameter. The first angle specified the left/right rotation around the y-axis, the second angle the up/down rotation around the x-axis, and the third the rotation around the z-axis. The viewing transformation was therefore specified by 3 angles. Naturally, this transformation had a gimbal lock, depending on the order in which the rotations were performed. What should I look at to derive my viewing transformation without the gimbal lock? I know the "lookAt" method already, but I consider that cumbersome. EDIT: My first guess is to do the first 2 transformations to get a viewing direction and then the axis-angle rotation on this axis.
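
    For a typical FPS camera, two angles are enough: yaw around the world Y axis, then pitch around the camera's local X axis, with pitch clamped to just under ±90 degrees. With no third rotation, the degenerate alignment never occurs. A sketch of deriving the view direction, under an assumed Y-up, +Z-forward convention:

        // Forward vector from yaw (around Y) and pitch (around X), in radians.
        static double[] forwardFromAngles(double yaw, double pitch) {
            double cp = Math.cos(pitch), sp = Math.sin(pitch);
            double cy = Math.cos(yaw),   sy = Math.sin(yaw);
            return new double[] { cp * sy, sp, cp * cy };
        }

    Quaternions are the other common route, but for mouse-look the clamped yaw/pitch pair is the simpler fix.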

  • Problem with sprite direction and rotation

    - by user2236165
    I have a sprite called Tool that moves with a speed represented as a float and in a direction represented as a Vector2. When I click the mouse on the screen, the sprite changes its direction and starts to move towards the mouse click. In addition to that, I rotate the sprite so that it is facing in the direction it is heading. However, when I add a camera that is supposed to follow the sprite so that the sprite is always centered on the screen, the sprite won't move in the given direction and the rotation isn't accurate anymore. This only happens when I add the Camera.View in the spriteBatch.Begin(). I was hoping someone could shed a light on what I am missing in my code; that would be highly appreciated. Here is the camera class I use:

        public class Camera
        {
            private const float zoomUpperLimit = 1.5f;
            private const float zoomLowerLimit = 0.1f;
            private float _zoom;
            private Vector2 _pos;
            private int ViewportWidth, ViewportHeight;

            #region Properties

            public float Zoom
            {
                get { return _zoom; }
                set
                {
                    _zoom = value;
                    if (_zoom < zoomLowerLimit) _zoom = zoomLowerLimit;
                    if (_zoom > zoomUpperLimit) _zoom = zoomUpperLimit;
                }
            }

            public Rectangle Viewport
            {
                get
                {
                    int width = (int)(ViewportWidth / _zoom);
                    int height = (int)(ViewportHeight / _zoom);
                    return new Rectangle((int)(_pos.X - width / 2),
                                         (int)(_pos.Y - height / 2), width, height);
                }
            }

            public void Move(Vector2 amount)
            {
                _pos += amount;
            }

            public Vector2 Position
            {
                get { return _pos; }
                set { _pos = value; }
            }

            public Matrix View
            {
                get
                {
                    return Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
                           Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
                           Matrix.CreateTranslation(new Vector3(ViewportWidth * 0.5f,
                                                                ViewportHeight * 0.5f, 0));
                }
            }

            #endregion

            public Camera(Viewport viewport, float initialZoom)
            {
                _zoom = initialZoom;
                _pos = Vector2.Zero;
                ViewportWidth = viewport.Width;
                ViewportHeight = viewport.Height;
            }
        }

    And here are my Update and Draw methods:

        protected override void Update(GameTime gameTime)
        {
            float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
            TouchCollection touchCollection = TouchPanel.GetState();

            foreach (TouchLocation tl in touchCollection)
            {
                if (tl.State == TouchLocationState.Pressed ||
                    tl.State == TouchLocationState.Moved)
                {
                    // direction the tool shall move towards
                    direction = touchCollection[0].Position - toolPos;
                    if (direction != Vector2.Zero)
                    {
                        direction.Normalize();
                    }

                    // change the direction the tool is moving and find the rotation
                    // angle the texture must rotate by to point in the given direction
                    toolPos += direction * speed * elapsed;
                    RotationAngle = (float)Math.Atan2(direction.Y, direction.X);
                }
            }

            if (direction != Vector2.Zero)
            {
                direction.Normalize();
            }

            // move tool in given direction
            toolPos += direction * speed * elapsed;

            // change camera centre to the tool's position
            Camera.Position = toolPos;
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Blue);
            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend,
                              null, null, null, null, Camera.View);
            spriteBatch.Draw(tool, new Vector2(toolPos.X, toolPos.Y), null, Color.White,
                             RotationAngle, originOfToolTexture, 1, SpriteEffects.None, 1);
            spriteBatch.End();
            base.Draw(gameTime);
        }
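
    One thing to check, as a hedged guess from the symptoms: once Camera.View is passed to spriteBatch.Begin, everything drawn is in world space, but TouchPanel still reports screen-space points, so the direction computed from the touch position minus toolPos mixes two coordinate systems. Converting the touch position to world space first usually restores both the movement and the rotation. For the specific View matrix above (translate by -pos, scale by zoom, translate by half the viewport), the inverse collapses to simple arithmetic:

        // Screen-space touch point -> world-space point, for the View matrix
        // translate(-cam) * scale(zoom) * translate(viewport / 2).
        float worldX = (screenX - viewportWidth  * 0.5f) / zoom + camX;
        float worldY = (screenY - viewportHeight * 0.5f) / zoom + camY;

    In XNA the same conversion can be written by transforming the touch point with Matrix.Invert(Camera.View).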

  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine; I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent position, and I am also using Euler angles. I can translate one of my entities with a matrix along whatever axis, and I can alter their heading. For example, if I have an entity at (order is XYZ) 1, 1, 1 with a heading of 0, I can apply a translation matrix to get it to walk to 1, 1, 2 fine. However, if I change its heading to 270, it still walks to 1, 1, 3 instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this?

    Addition: I am using 3D vectors to represent the current position and the heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:

        10, 10, 10 -> 10, 10, 15
        10, 10, 15 ->  5, 10, 15
         5, 10, 15 ->  5, 10, 10
         5, 10, 10 -> 10, 10, 10

    My 1-Z-unit translation matrix is as follows:

        [1 0 0 0]
        [0 1 0 0]
        [0 0 1 1]
        [0 0 0 1]

    My rotation matrix is as follows:

        [ 0 0 1 0]
        [ 0 1 0 0]
        [-1 0 0 0]
        [ 0 0 0 1]
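
    The usual fix is to rotate the step vector by the heading before adding it, rather than translating along a fixed world axis. A sketch under an assumed convention (heading is a yaw angle in degrees around Y, with 0 pointing along +Z; the sign of the sine term flips under the opposite handedness):

        // Advance (x, z) by 'step' units along 'headingDegrees'.
        double rad = Math.toRadians(headingDegrees);
        x += Math.sin(rad) * step;
        z += Math.cos(rad) * step;

    In matrix terms this is composing the rotation with the translation so the step happens in the rotated frame; which side of the product the rotation sits on depends on your row- versus column-vector convention.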

  • What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    - by user12365
    I have server objects that have corresponding client objects. The data to be kept in sync is inside the server object's key/value dictionary. To keep the client objects in sync with the server objects, I want the server to send the key/value dictionary every frame for each object. What data structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    Bonus constraint 1: For each type of object, the values of some keys change more often than others.
    Bonus constraint 2: Memory usage on the server side is relatively expensive.
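
    One common answer is delta encoding with a per-key dirty bitmask: give each key of an object type a fixed index, send a bitmask of the keys that changed since the last acknowledged send, and then only those values. Keys that change every frame cost one bit plus their value; stable keys cost almost nothing. A sketch, where the change test and encode are placeholders for your own serialization:

        // One object's update: bitmask of changed keys + changed values only.
        long dirtyMask = 0L;
        List<byte[]> payload = new ArrayList<>();
        for (int i = 0; i < keys.length; i++) {
            if (changedSinceLastAck(i)) {        // placeholder
                dirtyMask |= 1L << i;
                payload.add(encode(values[i]));  // placeholder
            }
        }
        // write dirtyMask (varint-encoded), then payload in key-index order

    The memory cost is one last-acknowledged snapshot per client, which trades against bonus constraint 2; if that is too expensive, a per-key "last changed" frame number shared by all clients is smaller.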

  • Prediction happening on (sending) client side

    - by Daniel
    This seems like a simple enough concept, but I haven't seen it implemented anywhere yet. Assume that the server just forwards and verifies data. I'm using mouse-based movement, so it's not too difficult to predict the location of the player 150 ms from when the event is sent. I'm thinking this is more accurate than using old data and older data on the receiving clients' side. The question I have is: why can I not find any examples of this? Is there something fundamentally wrong with the idea, given that I cannot find anyone implementing it or even talking about implementing it?

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, and so I need to know how to store bitmaps in memory (24 bpp/32 bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats.

    My questions:

    1. What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    2. How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest? (Fixed array? With/without padding? 24 bpp or 32 bpp?)
    3. How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? Or a faster [de]compression algorithm?)

    Some possible methods:

    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (which could be 1-10 MB).
    - Use a form of "sparse" data storage, so each line of the bitmap is allocated separately, using more memory but requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
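
    For the high-performance case, the packed contiguous array at 32 bpp is the usual default: one allocation, constant-time pixel addressing, and 4-byte-aligned accesses (the unused byte in 32 bpp buys alignment that 24 bpp loses). A sketch of the addressing, shown in Java for brevity since the layout is language-independent:

        // Packed 32-bpp ARGB bitmap in one contiguous array.
        int[] pixels = new int[width * height];

        int argb = pixels[y * width + x];          // read one pixel
        int a = (argb >>> 24) & 0xFF;
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        pixels[y * width + x] = (a << 24) | (r << 16) | (g << 8) | b;  // write

    With row padding, replace y * width with y * stride, where stride is the padded row length in pixels.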

  • Rotate a vector relative to itself

    - by Paul Manta
    I have a plane defined by transform.forward and transform.right, with 0 degrees corresponding to the forward vector and positive 90 degrees to the right vector. How can I create a third vector rotated in this plane? A rotation of 0 degrees would mean the vector is identical to transform.forward; a rotation of 30 degrees would mean it forms a 30-degree angle with the forward vector. In other words, I want to rotate the forward vector relative to itself, in the plane it defines with the right vector.
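
    Assuming forward and right are unit-length and perpendicular (which transform.forward and transform.right normally are), the rotated vector is just a weighted sum of the two: v(theta) = cos(theta) * forward + sin(theta) * right, which gives forward at 0 degrees and right at 90. A sketch:

        // Rotate 'forward' toward 'right' by thetaDegrees, in their plane.
        double t = Math.toRadians(thetaDegrees);
        double c = Math.cos(t), s = Math.sin(t);
        double[] v = {
            c * forward[0] + s * right[0],
            c * forward[1] + s * right[1],
            c * forward[2] + s * right[2],
        };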

  • Information about rendering, batches, the graphical card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if theory could be related to that. What I actually want to know is: what happens when and where (CPU/GPU) when you do specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over every step? When there's talk about bandwidth, is that a graphics card's internal bandwidth, or the pipeline from CPU to GPU?

    Note: I'm not actually looking for information on how the drawing process happens; that's the GPU's business. I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, so I can adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

  • 3D open source physics engine suitable for mobile platforms (Android and iOS)

    - by lukeluke
    I have done some research and found that Bullet, ODE, Newton and some others are open-source physics engines that should be portable enough (but I have never tried to compile or use any of them on phones). I am writing my games for mobile platforms in C++, so the engine should be C or C++. I need a fast engine, since mobile platforms have limited resources, and I need a free engine. A good design would be nice to have too. Which engine is best suited for my task? What I would really like to hear from you is your direct experience. Documentation and support (for example, a forum or an IRC channel) is a really important aspect to take into consideration.

  • How do I multiply pixels on an SDL Surface?

    - by NoobScratcher
    Okay, so I'm able to put blank pixels into a surface and also draw gradient pixels, rectangles, etc. But I don't know how to multiply the pixels on a surface, so I was hoping someone could provide me information on this topic. I was thinking you could get the pixel member and then multiply it by 2, but that didn't give the results I wanted, so I'm now thinking that you have to actually get at the individual byte positions, one location to the left and one to the right, store them in memory, and then multiply those by 2. Am I correct, or what? If so, what is it that allows me to do that, and how do I do it?
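
    Multiplying the whole packed pixel as one integer lets a channel's overflow spill into its neighbor, which is why the results look wrong; the usual approach is to unpack each channel, scale it, clamp to 255, and repack. A sketch of the per-channel math (ARGB8888 layout assumed; in SDL, SDL_GetRGB and SDL_MapRGB do the unpacking and repacking against the surface's actual pixel format):

        // Brighten one packed 32-bit pixel by 'factor', per channel, clamped.
        static int mulPixel(int argb, double factor) {
            int a = (argb >>> 24) & 0xFF;
            int r = Math.min(255, (int) (((argb >> 16) & 0xFF) * factor));
            int g = Math.min(255, (int) (((argb >> 8) & 0xFF) * factor));
            int b = Math.min(255, (int) ((argb & 0xFF) * factor));
            return (a << 24) | (r << 16) | (g << 8) | b;
        }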

  • how to calculate intersection time and place of multiple moving arcs

    - by user20733
    I have rocks orbiting moons, moons orbiting planets, planets orbiting suns, and suns orbiting black holes, and the current system can have many, many layers of orbits. The position of any object is a function of time and is relative to the object it orbits (so far so good). Now, for two given objects (A, B), a start time and a speed, I want to work out when and where to go. I can work out where A and B are at a given time, so I just need:

    1. The direction to travel in from A to B (remember B is moving, and not in a straight line).
    2. The time needed to get to B in a straight line.

    Travel must be in a straight line with the shortest possible distance. As an extension to this question, how will I know whether it is better to wait? E.g., is it faster to stay on object A and wait for an hour, when the objects may be closer, than to set off from A to B at the start? Cheers; it hurt my brain.
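
    Since both positions are computable for any time, the intercept reduces to a root find: the earliest t with |B(t0 + t) - A(t0)| = speed * t. Scanning for a sign change and then bisecting is robust even though the distance wobbles with the nested orbits. A sketch, where positionOf stands in for the orbit-hierarchy evaluation described above and dist is the Euclidean distance:

        // Earliest intercept time in (0, tMax], or -1 if none found.
        static double interceptTime(double t0, double speed, double tMax, double dt) {
            double[] start = positionOf(A, t0);  // placeholder: departure point
            for (double t = dt; t <= tMax; t += dt) {
                double f = dist(positionOf(B, t0 + t), start) - speed * t;
                if (f <= 0) return t;  // refine with bisection in [t - dt, t]
            }
            return -1; // no intercept within tMax
        }

    The departure direction is then toward B(t0 + t*). For the "is it better to wait" extension, evaluate t* for a range of departure times and minimize departure delay plus travel time.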

  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2. The latter direction is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one. This is what I want to avoid. Here are the two very simple methods I have written so far:

        // this method initiates the direction change and sets the parameters
        public void LookAt(Vector3 target)
        {
            _desiredDirection = target - _cameraPosition;
            _desiredDirection.Normalize();
            _rotation = new Matrix();
            _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
            _isLooking = true;
        }

        // this method gets executed by the Update() method if the _isLooking flag is up
        private void _lookingAt()
        {
            dist = Vector3.Distance(Direction, _desiredDirection);

            // check whether the current direction has reached the desired one
            if (dist >= 0.00001f)
            {
                _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
                _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
                Direction = Vector3.TransformNormal(Direction, _rotation);
            }
            else
            {
                _onDirectionReached();
                _isLooking = false;
            }
        }

    Again, the rotation works fine; the camera reaches its desired direction. But the speed is not equal over the course of the movement; it slows down. How do I achieve a rotation with constant speed?
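
    A plausible cause of the slowdown: Matrix.CreateFromAxisAngle expects a unit-length axis, and the un-normalized cross product shrinks in magnitude as the two directions align, scaling the effective rotation down with it. Normalizing the axis each step, and clamping the final step to the remaining angle, gives constant angular speed. A language-agnostic sketch of the per-update step:

        // Rotate 'dir' toward 'desired' at a fixed angular speed.
        double cosAngle = Math.max(-1.0, Math.min(1.0, dot(dir, desired)));
        double remaining = Math.acos(cosAngle);        // radians left to turn
        if (remaining < 1e-5) {
            // done: snap to 'desired' and stop turning
        } else {
            double step = Math.min(maxStepPerUpdate, remaining);
            double[] axis = normalize(cross(dir, desired));  // unit axis!
            // rotate 'dir' by 'step' radians around 'axis'
        }

    dot, cross and normalize are the usual 3-vector helpers; the rotation itself stays whatever your framework provides (e.g. an axis-angle matrix).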

  • Android Loading Screen: How do I go about using a stack to load elements, and the option of incrementing the size counter?

    - by tom_mai78101
    I have some problems with figuring out what value I should put in the function:

        int value_needed_to_figure_out = X;
        ProgressBar.incrementProgressBy(value_needed_to_figure_out);

    I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() in a Handler.post(new Runnable()) call. To me, that got across most of the concept of using the Handler to update the ProgressBar while pretending to do some heavy crunching work. So I kept looking, and I read this thread: How do I load chunks of data from an asset manager during a loading screen? It said that I can try using a stack of the things it needs to load, adding a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I'd gladly appreciate it. Thanks in advance.
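
    One way to read that advice: queue one job per asset, remember how many jobs you queued (the "size counter"), and derive the increment from the ProgressBar's maximum divided by that count. A sketch, where the three loaders are hypothetical placeholders and handler is a Handler created on the UI thread:

        final Deque<Runnable> jobs = new ArrayDeque<>();
        jobs.push(() -> loadTextures());   // placeholders for real work
        jobs.push(() -> loadSounds());
        jobs.push(() -> loadLevel());

        final int increment = progressBar.getMax() / jobs.size();
        new Thread(() -> {
            while (!jobs.isEmpty()) {
                jobs.pop().run();  // one chunk of loading, off the UI thread
                handler.post(() -> progressBar.incrementProgressBy(increment));
            }
        }).start();

    If getMax() doesn't divide evenly by the job count, set the bar to getMax() explicitly after the loop so it visibly finishes.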

  • Collision: Vector class (java)

    - by user8363
    When handling collision detection/response and you need a vector class, do you need to create that class yourself, or is there a Java class you can use? A vector class should have methods like subtract(Vector v), normalize(), dotProduct(Vector v), and so on. At the moment it seems logical to use classes like java.awt.Rectangle and java.awt.Polygon to calculate collisions. Would I be right to use these classes for this purpose? My question is not about how to implement collision detection; I know how that works. However, I'm wondering what would be a correct and clean way to implement it in Java, since I'm fairly new to the language and to application development in general.
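
    The standard library has no vector-algebra class (java.awt.geom.Point2D holds coordinates but offers no dot product or normalization), so most games roll a small immutable one; javax.vecmath from the old Java 3D project is a common external option. A minimal sketch:

        // Minimal immutable 2D vector for collision math.
        final class Vec2 {
            final double x, y;
            Vec2(double x, double y) { this.x = x; this.y = y; }
            Vec2 subtract(Vec2 o)    { return new Vec2(x - o.x, y - o.y); }
            double dot(Vec2 o)       { return x * o.x + y * o.y; }
            double length()          { return Math.sqrt(dot(this)); }
            Vec2 normalize() {
                double len = length();
                return len == 0 ? this : new Vec2(x / len, y / len);
            }
        }

    java.awt.Rectangle and java.awt.Polygon can answer overlap queries (intersects, contains), but they won't give you contact normals or penetration depth, which is where the vector class earns its keep.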

  • Why is my specular light not working?

    - by nkint
    Hi, I'm on JOGL. This is my method for lighting:

        private void lights(GL gl) {
            float[] LightPos = {0.0f, 0.0f, -1.0f, 1.0f};
            float[] LightAmb = {0.2f, 0.2f, 0.2f, 1.0f};
            float[] LightDif = {0.6f, 0.6f, 0.6f, 1.0f};
            float[] LightSpc = {0.9f, 0.9f, 0.9f, 1.0f};

            gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, LightPos, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_AMBIENT, LightAmb, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, LightDif, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_SPECULAR, LightSpc, 0);
            gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, LightSpc, 0);
            gl.glEnable(GL.GL_LIGHT0);
            gl.glEnable(GL.GL_LIGHT1);
            gl.glShadeModel(GL.GL_SMOOTH);
            gl.glEnable(GL.GL_LIGHTING);
        }

    I see my objects flat, with no specular light. Any ideas?

    P.S. To render my objects:

        gl.glColor3f(1f, 0f, 0f);
        gl.glBegin(GL.GL_TRIANGLES);
        for (Triangle t : tubeModel.getTriangles()) {
            gl.glVertex3f(t.v1.x, t.v1.y, t.v1.z);
            gl.glVertex3f(t.v2.x, t.v2.y, t.v2.z);
            gl.glVertex3f(t.v3.x, t.v3.y, t.v3.z);
        }
        gl.glEnd();
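
    Assuming the rest of the setup is standard, two things the fixed pipeline needs for a highlight are missing here: the triangles are submitted without normals (lighting cannot be evaluated without them), and the object has no specular material or shininess exponent (glColor3f feeds ambient/diffuse only when GL_COLOR_MATERIAL is enabled, and never specular). A sketch of the additions:

        // Material: give the surface a specular color and an exponent.
        float[] matSpec = {0.9f, 0.9f, 0.9f, 1.0f};
        gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_SPECULAR, matSpec, 0);
        gl.glMaterialf(GL.GL_FRONT_AND_BACK, GL.GL_SHININESS, 32.0f);

        // Geometry: supply a normal before each vertex (or one per face).
        gl.glNormal3f(t.n1.x, t.n1.y, t.n1.z);  // assumes per-vertex normals exist
        gl.glVertex3f(t.v1.x, t.v1.y, t.v1.z);

    If the model stores no normals, the face normal from the cross product of two triangle edges is the usual fallback.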

  • rotating an object on an arc

    - by gardian06
    I am trying to get a turret to rotate on an arc, and have hit a wall. I have 8 possible starting orientations for the turrets, and want them to rotate on a 90-degree arc. I currently take the starting rotation of the turret and then derive the positive and negative boundary of the arc from it. Because of engine restrictions (Unity) I have to do all of my tests against a value which is between [0, 360], and due to numerical precision issues I cannot test against specific values. I would like to write a general test without having to go in and jury-rig cases.

        // my current test is:

        // member variables
        public float negBound;
        public float posBound;

        // found in Start() (called immediately after construction);
        // eulerAngles.y is the degree measure of the starting y rotation
        negBound = transform.eulerAngles.y - 45;
        posBound = transform.eulerAngles.y + 45;

        // ensure that values are within bounds
        if (negBound < 0) {
            negBound += 360;
        } else if (posBound > 360) {
            posBound -= 360;
        }

        // called from Update() when target not in firing line
        void Rotate() {
            // controls what direction
            if (transform.eulerAngles.y > posBound) {
                dir = -1;
            } else if (transform.eulerAngles.y < negBound) {
                dir = 1;
            }
            // rotate object
        }

    Here is a table of values for my different cases: base is the starting rotation of the turret, neg is the negative boundary, pos is the positive boundary, range is the acceptable range of values, and an x in works marks whether it performs as expected with the current code.

        base | neg | pos | range            | works
        ---- | --- | --- | ---------------- | -----
           0 | 315 |  45 | 315-0, 0-45      |
          45 |   0 |  90 | 0-45, 45-90      |   x
         135 |  90 | 180 | 90-135, 135-180  |   x
         180 | 135 | 225 | 135-180, 180-225 |   x
         225 | 180 | 270 | 180-225, 225-270 |   x
         270 | 225 | 315 | 225-270, 270-315 |
         315 | 270 |   0 | 270-315, 315-0   |

    I will need to do all tests from derived or stored values, but cannot figure out how to get all of my cases to work simultaneously.

        // I attempted to concatenate the 2 tests:
        if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < negBound)) {
            dir *= -1;
        }

    This caused only the first case to be successful.

        // I attempted to store an opposite value and test against it:
        void Rotate() {
            // controls what direction
            if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < oposite)) {
                dir = -1;
            } else if ((transform.eulerAngles.y < negBound) && (transform.eulerAngles.y > oposite)) {
                dir = 1;
            }
            // rotate object
        }

    This causes the opposite situation to the one indicated in the table. What am I missing here?
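
    A wrap-around-free way to write this test is to compare a signed angular difference folded into [-180, 180) instead of comparing raw [0, 360] values; the 0/360 seam then disappears for every starting orientation. A sketch:

        // Signed shortest difference from 'from' to 'to', in [-180, 180).
        static double signedDelta(double from, double to) {
            return ((to - from + 540.0) % 360.0) - 180.0;
        }

        // usage with the turret's base rotation and current y rotation:
        // double d = signedDelta(base, currentY);
        // if (d >  45) dir = -1;   // past the positive bound
        // if (d < -45) dir =  1;   // past the negative bound

    The +540 before the modulo keeps the intermediate value positive, so the operation behaves for any pair of angles in [0, 360].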

  • How can I create a VBO of a type determined at runtime?

    - by lapin
    I've written a Vbo template class to work with OpenGL. I'd like to set the type from a config file at runtime, for example:

        <vbo type="bump_vt" ... />

        Vbo* pVbo = new Vbo(bump_vt, ...);

    Is there some way I can do this without a large if-else block such as:

        if (sType.compareTo("bump_vt") == 0)
            Vbo* pVbo = new Vbo(bump_vt, ...);
        else if ...

    I'm writing for multiple platforms in C++.

  • How can I determine the first visible tile in an isometric perspective?

    - by alekop
    I am trying to render the visible portion of a diamond-shaped isometric map. The "world" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The "view" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinate obviously become invalid. I guess this boils down to asking "how can I find the intersection between the world X axis and the view X axis?"
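
    A simple guard that keeps the starting tile valid near the edges is to convert the view origin to map coordinates as you already do, then clamp into the map before scanning; the first spans then begin partly off-world, but every drawn tile index is legal. A sketch (mapWidth/mapHeight in tiles; col/row from your view-to-map conversion):

        // Clamp the converted corner tile into the map bounds.
        int startCol = Math.max(0, Math.min(mapWidth - 1, col));
        int startRow = Math.max(0, Math.min(mapHeight - 1, row));

    The per-span loop still needs a bounds test per tile, since a clamped start only fixes where the scan begins, not where spans leave the diamond.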

  • Algorithm for procedural city generation?

    - by Zove Games
    I am planning on making a (simple) procedural city generator in Java. I need ideas on what algorithm to use for the layout and for the actual buildings. The city will mostly have skyscrapers, nothing really complex. For the layout I already have a simple algorithm implemented:

    1. Create a Map with java.awt.Point keys and Integer values. Fill it with all the points in the city's bounds, with the value -1 (unassigned).
    2. Shuffle the map and assign the first 10 of the keys IDs (from 1-10).
    3. Loop until all points have IDs: loop through all points, assigning unassigned points the ID of a neighboring assigned point; if 2 or more assigned points border a point, randomly choose which ID the point gets.
    4. You end up with 10 random regions. Make roads bordering these regions.
    5. Fill the inside of each region with a randomly spaced and randomly rotated grid.

    Problem: this is not the fastest way to do it. What algorithm should I use for the layout? And what should I use to make each building's design? I don't even know how I'm going to do that yet (fractals, maybe). I just need some ideas, not actual code.
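
    On the speed point: the repeated full scans make the region growth roughly quadratic, while the same assignment falls out of a multi-source BFS that visits each cell once, using the 10 seeds as the initial frontier. A sketch with the same Map<Point, Integer> structure (neighbors is an assumed 4-neighborhood helper):

        // Grow the 10 seeded regions in one pass of multi-source BFS.
        Deque<Point> frontier = new ArrayDeque<>(seedPoints); // the 10 seeds
        while (!frontier.isEmpty()) {
            Point p = frontier.poll();
            for (Point n : neighbors(p)) {          // 4-neighborhood helper
                if (id.containsKey(n) && id.get(n) == -1) {
                    id.put(n, id.get(p));           // inherit region ID
                    frontier.add(n);
                }
            }
        }

    Randomizing the frontier order occasionally keeps the region borders ragged rather than diamond-shaped; for buildings, a weighted random pick of footprint, height and setback per lot goes a long way before fractals are needed.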

  • Calculating the force of an impact?

    - by meds
    I'm trying to figure out a way to determine the force with which two objects collide. I have two vectors defining their linear velocity at the time of impact, their mass, and their angular velocity. Keep in mind this is all for a 2D physics engine. I don't think it's as simple as adding up these values and deciding whether it's large enough to count as a large impact, since that doesn't take into account whether the two objects are travelling in the same direction (as an example). Any ideas?
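
    The quantity engines usually use for this is the collision impulse along the contact normal, which is built from the relative velocity, so two objects moving in the same direction produce a small value. A sketch of the standard formula with the angular terms omitted for brevity (e is the restitution, n the unit contact normal):

        // Impulse magnitude j = -(1 + e) * dot(vA - vB, n) / (1/mA + 1/mB)
        static double impactImpulse(double[] vA, double[] vB, double[] n,
                                    double mA, double mB, double e) {
            double relVel = (vA[0] - vB[0]) * n[0] + (vA[1] - vB[1]) * n[1];
            return -(1 + e) * relVel / (1 / mA + 1 / mB);
        }

    The full 2D version adds the angular velocities to the relative velocity at the contact point and the inverse moments of inertia to the denominator; j (or j divided by the timestep, if you want a force) is a good "how hard was the hit" number.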

  • Drawing flaming letters in 3D with OpenGL ES 2.0

    - by Chiquis
    I am a bit confused about how to achieve this. What I want is to "draw with flames". I have achieved this with textures successfully, but now my concern is about doing this with particles to achieve the flaming effect. Am I supposed to create a path along which I should add many particle emitters that will be emitting flame particles? I understand the concept for 2D, but for 3D, are the particles always supposed to be facing the user? Something else I'm worried about is the performance hit from having that many particle emitters, because there can be many letters and drawings at the same time, and each of these elements will have many particle emitters.

    More detailed explanation: I have a path of points, which is my model. Imagine a dotted letter "S", for example. I want to make the "S" be on fire. The "S" is just an example; it can be a circle, a triangle, a line, pretty much any path described by my set of points. To achieve this fire effect I thought about using particles, so I am using a program called Particle Designer to create a fire-style particle emitter. This emitter looks perfect in 2D at the iPhone's screen dimensions. So then I thought that I could probably draw an S, or any other figure, by placing many particle emitters next to each other following the described path. To move from the 2D version to the 3D version I thought about scaling the emitter (with a scale matrix multiplication in its model matrix) and then moving it to a point in my 3D world. I did this and it works. So now I have one particle emitter in the 3D world. My questions are: is this how you would achieve a flaming letter? Is this too inefficient if I expect to have many flaming paths in my world? Am I supposed to rotate each particle's quad so that it is always facing the user? (The last one is because I noticed that if you look at it from the side, the particles start to flatten out.)

  • Java 2D World question

    - by Munkybunky
    I have a 2D world background made up of a grid of graphics, which I display on screen with a viewport (800x600), and it all works. My question: I have the following code to convert the mouse coordinates to world coordinates, then world coordinates to grid coordinates, then grid coordinates to screen coordinates.

        // add camerax to the mouse screen co-ords to convert to world co-ords
        int cursorx_world = (int)camerax + (int)GameInput.mousex;

        // world co-ords / blocksize gives grid co-ords
        int cursorx_grid = cursorx_world / blocksize;

        int cursorx_screen = -(int)camerax + (cursorx_grid * blocksize);

    So is there any way I can convert straight from mouse screen co-ords to screen co-ordinates?
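
    Composing the three steps algebraically collapses them into one expression: the grid-snapped screen coordinate is the mouse position snapped to the block grid in world space, shifted back by the camera. A sketch with the same variable names:

        // Straight from mouse screen co-ords to grid-snapped screen co-ords.
        int cursorx_screen =
            (((int)camerax + (int)GameInput.mousex) / blocksize) * blocksize
            - (int)camerax;

    This is identical to the three-step version, because the integer division is the only lossy step and it happens at the same point in both.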

  • Writing a basic shader for large input files

    - by Zoltan Varadi
    I started writing a shader for my iOS app, and instead of starting from scratch I used the tutorial here: http://www.raywenderlich.com/3664/opengl-es-2-0-for-iphone-tutorial. I wrote an import function, first to import Wavefront .obj models. My problem is that I can't handle larger inputs (with a simple cube it was working). I realized that the indices array is an array of GLubyte values, which is an unsigned char, so as a result I can't have more than 256 indices. I modified it to GLuint, but then I only get a blank screen. What else needs to be modified?

    P.S. The source can be downloaded from here: http://d1xzuxjlafny7l.cloudfront.net/downloads/HelloOpenGL.zip
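
    The index type and the draw call have to agree: if the array becomes GLuint but glDrawElements still says GL_UNSIGNED_BYTE, the GPU reads garbage and a blank screen is the usual result. Also, unextended OpenGL ES 2.0 only guarantees byte and short indices (GL_UNSIGNED_INT needs the OES_element_index_uint extension), so GLushort with GL_UNSIGNED_SHORT, good for 65535 indices, is the portable choice. A sketch using Android's GLES20 bindings for brevity, since the constants match the iOS ones:

        // Indices as shorts, and a draw call whose type matches them.
        short[] indices = buildIndices();  // placeholder for the .obj import
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length,
                              GLES20.GL_UNSIGNED_SHORT, 0);

    The final 0 assumes the indices live in a bound GL_ELEMENT_ARRAY_BUFFER; the buffer upload must also use the short array's byte size (2 bytes per index), which is another spot where the byte-to-int change often goes wrong.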
