Search Results

Search found 33194 results on 1328 pages for 'development approach'.


  • Why is my arrow texture being drawn in odd places?

    - by tyjkenn
    This is a script I wrote that places an arrow on the screen pointing to an enemy off-screen or, if the enemy is on-screen, hovering above the enemy. Everything seems to work, except that for some odd reason I see random arrows floating around, often skewed and resized (which I really don't understand, because I only rotate and place textures in this script). Even when I only have one enemy in the scene, I still see these random arrows; it should only be drawing one per enemy. Note: when all enemies are removed, no arrows appear.

        var arrow : Texture;
        var cam : Camera;
        var dim : int = 30;

        function OnGUI() {
            var objects = GameObject.FindGameObjectsWithTag("Enemy");
            for(var ob : GameObject in objects) {
                var pos = cam.WorldToViewportPoint(ob.transform.position);
                if(gameObject.GetComponent(FollowCamera).target != null){
                    var tar = gameObject.GetComponent(FollowCamera).target.parent;
                }
                if(pos.z > 1 && ob.transform != tar){
                    var xDiff = (pos.x*cam.pixelWidth)-(cam.pixelWidth/2);
                    var yDiff = (pos.y*cam.pixelHeight)-(cam.pixelHeight/2);
                    var angle = Mathf.Rad2Deg*Mathf.Atan(yDiff/xDiff)+180;
                    if(xDiff>0) angle += 180;
                    var dist = Mathf.Sqrt(xDiff*xDiff + yDiff*yDiff);
                    var slope = yDiff/xDiff;
                    var camSlope = cam.pixelHeight/cam.pixelWidth;
                    var theX = -1000.0;
                    var theY = -1000.0;
                    var mult = 0;
                    var temp;
                    if(Mathf.Abs(xDiff)>(cam.pixelWidth/2) || Mathf.Abs(yDiff)>(cam.pixelHeight/2)){
                        //touching right
                        if(slope<camSlope && slope>-camSlope) {
                            if(xDiff>(cam.pixelWidth/2)) {
                                theX = cam.pixelWidth - (dim/2);
                                mult = -1;
                            } else if(xDiff<-(cam.pixelWidth/2)) {
                                theX = (dim/2);
                                mult = 1;
                            }
                            temp = ((cam.pixelWidth/2)*yDiff)/xDiff;
                            theY = (cam.pixelHeight/2)+(mult*temp);
                        } else {
                            if(yDiff>(cam.pixelHeight/2)) {
                                theY = (dim/2);
                                mult = 1;
                            } else if(yDiff<-(cam.pixelHeight/2)) {
                                theY = cam.pixelHeight - (dim/2);
                                mult = -1;
                            }
                            temp = ((cam.pixelHeight/2)*xDiff)/yDiff;
                            theX = (cam.pixelWidth/2)+(mult*temp);
                        }
                    } else {
                        angle = -90;
                        theX = (cam.pixelWidth/2)+xDiff;
                        theY = (cam.pixelHeight/2)-yDiff-dim;
                    }
                    GUIUtility.RotateAroundPivot(-angle, Vector2(theX, theY));
                    Graphics.DrawTexture(Rect(theX-(dim/2), theY-(dim/2), dim, dim), arrow, null);
                    GUIUtility.RotateAroundPivot(angle, Vector2(theX, theY));
                }
            }
        }
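    One frequent source of stray immediate draws like this is that OnGUI runs several times per frame for different GUI events (layout, repaint, input), while Graphics.DrawTexture should only be issued during the repaint event. A hedged C# sketch of that guard (the original script is UnityScript; the placement math below is deliberately simplified and only illustrative, FollowCamera and the full arrow math are as in the question):

        using UnityEngine;

        public class EnemyArrowOverlay : MonoBehaviour
        {
            public Texture arrow;
            public Camera cam;
            public int dim = 30;

            void OnGUI()
            {
                // OnGUI fires once per GUI event; only issue immediate draws on Repaint.
                if (Event.current.type != EventType.Repaint)
                    return;

                foreach (GameObject ob in GameObject.FindGameObjectsWithTag("Enemy"))
                {
                    Vector3 pos = cam.WorldToViewportPoint(ob.transform.position);
                    if (pos.z <= 1f)
                        continue; // behind the camera or too close

                    // ... compute theX, theY and angle exactly as in the original script ...
                    float theX = pos.x * cam.pixelWidth;
                    float theY = (1f - pos.y) * cam.pixelHeight - dim; // GUI space is top-down
                    float angle = -90f;

                    Vector2 pivot = new Vector2(theX, theY);
                    GUIUtility.RotateAroundPivot(-angle, pivot);
                    Graphics.DrawTexture(new Rect(theX - dim / 2f, theY - dim / 2f, dim, dim), arrow);
                    GUIUtility.RotateAroundPivot(angle, pivot);
                }
            }
        }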

    Read the article

  • Creating an update method in a different class

    - by Sweta Dwivedi
    I have created a class called 3D model which animates my 3D model by changing the model position according to point values read from a .txt file into a list. Since I'm using a loop to read the point values, when it reaches the end of the file XNA throws an out-of-bounds exception (which is obvious), but if I add the same code to my Game.cs Update(gameTime) method I don't have this problem. Any idea how to make my 3D model update work the same as the Update in Game.cs? Here is the code for some idea:

        public void patterns(GameTime gameTime)
        {
            motion_z = new List<Point3D>();
            if (pattern == 1)
            {
                f = "E:/Motion_Track-output/Output1.txt";
            }
            if (pattern == 2)
            {
                f = "E:/Motion_Track-output/cruse.txt";
            }
            // TODO: Add your update logic here
            using (StreamReader r = new StreamReader(f))
            {
                string line;
                //Viewport view = graphics.GraphicsDevice.Viewport;
                int maxWidth = view.Width;
                int maxHeight = view.Height;
                while ((line = r.ReadLine()) != null)
                {
                    string[] temp = line.Split(',');
                    int x = (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * maxWidth);
                    int y = (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * maxHeight);
                    int z = (int)Math.Floor(((float.Parse(temp[2]) / 4 * 20000)));
                    motion_z.Add(new Point3D(x, y, z));
                }
                modelPosition.X = (float)(motion_z[i].X);
                modelPosition.Y = (float)(motion_z[i].Y);
                modelPosition.Z = (float)(motion_z[i].Z);
                i++;
            }
            //Console.WriteLine("modelposX:" + modelPosition.X + "," + "motionzX:" + motion_z[i].X);
        }
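    The out-of-bounds exception comes from re-reading the file while the index i keeps growing past the number of points. A hedged sketch of one way to restructure it so the method can be called every frame like Game.Update: load the file once, then advance and wrap the index per call (motion_z, modelPosition, Point3D, i, maxWidth and maxHeight are assumed to be the class members from the question; everything else is illustrative):

        // Call once (e.g. from LoadContent) to fill the list.
        private void LoadPattern(string path)
        {
            motion_z.Clear();
            foreach (string line in System.IO.File.ReadAllLines(path))
            {
                string[] temp = line.Split(',');
                motion_z.Add(new Point3D(
                    (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * maxWidth),
                    (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * maxHeight),
                    (int)Math.Floor(float.Parse(temp[2]) / 4 * 20000)));
            }
            i = 0;
        }

        // Call every frame, exactly like Game.Update.
        public void patterns(GameTime gameTime)
        {
            if (motion_z.Count == 0)
                return;

            modelPosition = new Vector3(
                (float)motion_z[i].X,
                (float)motion_z[i].Y,
                (float)motion_z[i].Z);

            i = (i + 1) % motion_z.Count;   // wrap instead of running off the end
        }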

    Read the article

  • Confusion about Rotation matrices from Euler Angles

    - by xEnOn
    I am trying to learn more about Euler angles so as to help myself understand how I can control my camera better in the game. I came across the following formula that converts Euler angles to rotation matrices: In the equation, I can see that the first matrix from the left is the rotation matrix about the x-axis, the second is about the y-axis and the third is about the z-axis. From my understanding of ordinary matrix transformations, the later transformation is always applied on the right-hand side. If I'm right about this, then the above equation should have a rotation order starting with the z-axis, then the y-axis, then finally the x-axis. But from the symbols it seems that the rotation order starts with the x-axis, then the y-axis, then finally the z-axis. What should the actual order of the rotation be? Also, I am confused about whether the input vector would, in this case, be a row vector on the left or a column vector on the right.
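    A small runnable check of the ordering question (a hedged sketch, not tied to any particular math library): with column vectors, applying the product Rx·Ry·Rz as M·v rotates about z first, then y, then x, because the matrix written nearest the vector acts first; with row vectors (v·M) the reading order reverses.

        using System;

        static class RotationOrderDemo
        {
            // Multiply two 3x3 matrices stored as [row, col].
            static double[,] Mul(double[,] a, double[,] b)
            {
                var r = new double[3, 3];
                for (int i = 0; i < 3; i++)
                    for (int j = 0; j < 3; j++)
                        for (int k = 0; k < 3; k++)
                            r[i, j] += a[i, k] * b[k, j];
                return r;
            }

            static double[] Mul(double[,] m, double[] v) =>
                new[]
                {
                    m[0, 0] * v[0] + m[0, 1] * v[1] + m[0, 2] * v[2],
                    m[1, 0] * v[0] + m[1, 1] * v[1] + m[1, 2] * v[2],
                    m[2, 0] * v[0] + m[2, 1] * v[1] + m[2, 2] * v[2],
                };

            static void Main()
            {
                double c = 0, s = 1;                       // cos/sin of 90 degrees
                var rx = new double[,] { { 1, 0, 0 }, { 0, c, -s }, { 0, s, c } };
                var rz = new double[,] { { c, -s, 0 }, { s, c, 0 }, { 0, 0, 1 } };
                var v  = new double[] { 1, 0, 0 };         // unit x axis, as a column vector

                // (Rx * Rz) * v  ==  Rx * (Rz * v): the z rotation nearest the vector acts first.
                var combined = Mul(Mul(rx, rz), v);
                var stepwise = Mul(rx, Mul(rz, v));
                Console.WriteLine(string.Join(", ", combined)); // 0, 0, 1
                Console.WriteLine(string.Join(", ", stepwise)); // 0, 0, 1
            }
        }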

    Read the article

  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark and light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me. I wish I could enter wireframe mode or something like that, but as far as I know that is not possible.

    Read the article

  • Can't use SFML sprite drawing and OpenGL rendering at the same time

    - by Ken
    I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; the sprite or text drawing causes the OpenGL geometry to disappear. The following code shows what I'm trying to do:

        sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

        while (window.pollEvent(Event))
        {
            //event handling...

            //begin drawing
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glBegin(GL_TRIANGLES);
            glColor3f(col.x, col.y, col.z);
            for (int i = 0; i < 3; i++)
                glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
            glEnd();

            // adding this line causes all the previous opengl triangles not to appear
            window.draw("Sometext");

            window.display();
        }

    Read the article

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap generation techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated.
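    One widely used way to avoid the tiling/seam problem is to skip 2D tileable maps entirely and evaluate a 3D noise field at each cube vertex after projecting it onto the sphere; since every face samples the same 3D field, the faces match automatically at the seams. A hedged C# sketch of the idea (the noise3 delegate stands in for whatever 3D noise you use, e.g. a Perlin or simplex module; the placeholder field below exists only so the snippet runs):

        using System;

        static class PlanetHeight
        {
            // Map a point on the unit cube onto the unit sphere, then sample a 3D field there.
            public static double HeightAt(double cx, double cy, double cz,
                                          Func<double, double, double, double> noise3)
            {
                double len = Math.Sqrt(cx * cx + cy * cy + cz * cz);
                double sx = cx / len, sy = cy / len, sz = cz / len;   // point on the sphere
                return noise3(sx * 4.0, sy * 4.0, sz * 4.0);          // frequency 4 is arbitrary
            }

            static void Main()
            {
                // Placeholder "noise" so the sketch runs; swap in real 3D Perlin/simplex noise.
                Func<double, double, double, double> fakeNoise =
                    (x, y, z) => Math.Sin(x * 1.7 + y * 2.3 + z * 3.1);

                // Sample one vertex of the +X cube face.
                Console.WriteLine(HeightAt(1.0, 0.25, -0.5, fakeNoise));
            }
        }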

    Read the article

  • time issue in render libgdx [duplicate]

    - by jaysingh
    This question is an exact duplicate of: time issue in render libgdx [duplicate]. Please help: how do I implement this loop in the render method? next_game_tick and GetTickCount() always contain the same time value, so the position is never updated.

        @Override
        public void render() {
            float deltaTime = Gdx.graphics.getDeltaTime();
            Update(deltaTime);
            Render(deltaTime);
        }

        const int TICKS_PER_SECOND = 50;
        const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
        const int MAX_FRAMESKIP = 10;

        DWORD next_game_tick = GetTickCount();
        int loops;
        bool game_is_running = true;

        while( game_is_running ) {
            loops = 0;
            while( GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP) {
                update_game();
                next_game_tick += SKIP_TICKS;
                loops++;
            }
            display_game();
        }

    Read the article

  • time issue in render libgdx [duplicate]

    - by jaysingh
    This question is an exact duplicate of: deWitters Game loop in libgdx(Android). Please help: how do I implement this loop in the render method? next_game_tick and GetTickCount() always contain the same time value, so the player position is not updated.

        @Override
        public void render() {
            float deltaTime = Gdx.graphics.getDeltaTime();
            Update(deltaTime);
            Render(deltaTime);
        }

        const int TICKS_PER_SECOND = 50;
        const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
        const int MAX_FRAMESKIP = 10;

        DWORD next_game_tick = GetTickCount();
        int loops;
        bool game_is_running = true;

        while( game_is_running ) {
            loops = 0;
            while( GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP) {
                update_game();
                next_game_tick += SKIP_TICKS;
                loops++;
            }
            display_game();
        }
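    The usual way to fit a deWitter-style fixed-step loop into a render() callback is an accumulator fed by the frame delta time, rather than polling a tick counter inside render(), since render() already runs once per frame. A hedged sketch of that pattern, written in C# for consistency with the other sketches in this listing (in libGDX the same structure goes inside render(), with Gdx.graphics.getDeltaTime() feeding the accumulator):

        class FixedStepLoop
        {
            const float StepSeconds = 1f / 50f;   // 50 logic ticks per second
            const int MaxUpdatesPerFrame = 10;    // frame-skip guard
            float accumulator;

            // Call once per rendered frame with the frame's delta time.
            public void Frame(float deltaTime)
            {
                accumulator += deltaTime;
                int updates = 0;
                while (accumulator >= StepSeconds && updates < MaxUpdatesPerFrame)
                {
                    UpdateGame(StepSeconds);      // fixed-step logic (player position etc.)
                    accumulator -= StepSeconds;
                    updates++;
                }
                RenderGame(accumulator / StepSeconds);  // interpolation factor, optional
            }

            void UpdateGame(float step) { /* move player by speed * step */ }
            void RenderGame(float alpha) { /* draw, optionally interpolating by alpha */ }
        }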

    Read the article

  • Rotate camera around player and set new forward directions

    - by Samurai Fox
    I have a 3rd-person camera which can rotate around the player. When I look at the back of the player and press forward, the player goes forward. Then I rotate 360 degrees around the player and the "forward direction" is tilted by 90 degrees; for every full 360-degree turn there is a 90-degree change of direction. For example, when the camera is facing the right side of the player and I press the button to move forward, I want the player to turn to the left and make that the "new forward". I have a Player object with the Camera as a child object. The Camera object has the camera script. Inside the camera script there are Player and Camera classes. The Player object itself has the input controller. Also, I'm making this script primarily for a joystick/controller. My camera script so far:

        using UnityEngine;
        using System.Collections;

        public class CameraScript : MonoBehaviour
        {
            public GameObject Target;
            public float RotateSpeed = 10, FollowDistance = 20, FollowHeight = 10;
            float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight, CurrentRotationAngle, CurrentHeight, Yaw, Pitch;
            Quaternion CurrentRotation;

            void LateUpdate()
            {
                RotateSpeedPerTime = RotateSpeed * Time.deltaTime;
                DesiredRotationAngle = Target.transform.eulerAngles.y;
                DesiredHeight = Target.transform.position.y + FollowHeight;
                CurrentRotationAngle = transform.eulerAngles.y;
                CurrentHeight = transform.position.y;
                CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0);
                CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0);
                CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0);
                transform.position = Target.transform.position;
                transform.position -= CurrentRotation * Vector3.forward * FollowDistance;
                transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z);
                Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime;
                Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime;
                transform.Translate(new Vector3(Yaw, -Pitch, 0));
                transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
                transform.LookAt(Target.transform);
            }
        }
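    A common way to get the behaviour described ("forward is always away from the camera") is to build the movement direction in the player's movement code from the camera's forward and right vectors flattened onto the ground plane, and then face the player along that direction. A hedged C# sketch of that idea as a separate player-side script (axis names and speeds are illustrative):

        using UnityEngine;

        public class CameraRelativeMovement : MonoBehaviour
        {
            public Transform cameraTransform;   // assign the orbiting camera here
            public float moveSpeed = 5f;

            void Update()
            {
                // Camera basis projected onto the ground plane.
                Vector3 forward = cameraTransform.forward;
                forward.y = 0f;
                forward.Normalize();
                Vector3 right = cameraTransform.right;
                right.y = 0f;
                right.Normalize();

                Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
                Vector3 moveDir = forward * input.z + right * input.x;

                if (moveDir.sqrMagnitude > 0.01f)
                {
                    // Face the movement direction and move along it: this becomes the "new forward".
                    transform.rotation = Quaternion.LookRotation(moveDir);
                    transform.position += moveDir.normalized * moveSpeed * Time.deltaTime;
                }
            }
        }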

    Read the article

  • Periodic updates of an object in Unity

    - by Blue
    I'm trying to make a collider appear every 1 second, but I can't get the code right. I tried enabling the collider in the Update function and putting a yield in to make it update every second or so, but it's not working (it gives me an error: "Update() cannot be a coroutine."). How would I fix this? Would I need a timer system to toggle the collider?

        var waitTime : float = 1;
        var trigger : boolean = false;

        function Update () {
            if(!trigger){
                collider.enabled = false;
                yield WaitForSeconds(waitTime);
            }
            if(trigger){
                collider.enabled = true;
                yield WaitForSeconds(waitTime);
            }
        }
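    Update() cannot yield, but a coroutine started from Start() can (InvokeRepeating also works). A hedged C# sketch of the coroutine version, which flips the collider once per waitTime:

        using UnityEngine;
        using System.Collections;

        public class ColliderToggler : MonoBehaviour
        {
            public float waitTime = 1f;

            void Start()
            {
                StartCoroutine(ToggleCollider());
            }

            IEnumerator ToggleCollider()
            {
                Collider col = GetComponent<Collider>();
                while (true)
                {
                    col.enabled = !col.enabled;                // flip on/off
                    yield return new WaitForSeconds(waitTime); // wait, then flip again
                }
            }
        }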

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking...

    The framebuffer objects:
    - FBO 1 has a color attachment and a depth attachment.
    - FBO 2 has a color attachment.

    The render passes:
    1. Render g-buffer: normals and depth (used by outline & DoF blur shaders); output to FBO no. 1.
    2. Render solid geometry, bold outlines (as in toon shader), and fog; output to FBO no. 2. (Can all render via a single fragment shader -- I think.)
    3. (optional) DoF blur the scene; output to the default frame buffer OR ELSE render FBO 2 directly to the default frame buffer.
    4. (optional) Mesh wireframes; composite over what's already in the default framebuffer.

    Does this order seem viable? Any obvious mistakes?

    Read the article

  • Why is my shadowmap all white?

    - by Berend
    I was trying out a shadow map, but my shadow map is all white. I think there is some problem with my homogeneous component. Can anybody help me? The rest of my code is written in XNA. Here is the HLSL code I used:

        float4x4 xWorld;
        float4x4 xView;
        float4x4 xProjection;

        struct VertexToPixel
        {
            float4 Position  : POSITION;
            float4 ScreenPos : TEXCOORD1;
            float  Depth     : TEXCOORD2;
        };

        struct PixelToFrame
        {
            float4 Color : COLOR0;
        };

        //------- Technique: ShadowMap --------
        VertexToPixel MyVertexShader(float4 inPos : POSITION0, float3 inNormal : NORMAL0)
        {
            VertexToPixel Output = (VertexToPixel)0;
            float4x4 preViewProjection = mul(xView, xProjection);
            float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
            Output.Position = mul(inPos, mul(xWorld, preViewProjection));
            Output.Depth = Output.Position.z / Output.Position.w;
            Output.ScreenPos = Output.Position;
            return Output;
        }

        float4 MyPixelShader(VertexToPixel PSIn) : COLOR0
        {
            PixelToFrame Output = (PixelToFrame)0;
            Output.Color = PSIn.ScreenPos.z / PSIn.ScreenPos.w;
            return Output.Color;
        }

        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 MyVertexShader();
                PixelShader = compile ps_2_0 MyPixelShader();
            }
        }

    Read the article

  • Is there a library that handles hexagon tiled 2D maps?

    - by Pete Mancini
    It would represent a map that is semi-square and of arbitrary size. It would have a simple system for representing map coordinates, such as 0101 (first column, first hex). I'd want the map to be able to tell me the distance between two points, and which other hexes lie between those two points, as a list or array. I don't care as much about the language, but C# or Python would be ideal. Does one exist?
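    Even without a dedicated library, the distance part is short once coordinates are converted to axial (q, r) form: the hex distance is (|dq| + |dr| + |dq + dr|) / 2, and the hexes in between can be found by interpolating between the endpoints and rounding. A hedged C# sketch of the distance part (the offset-to-axial conversion below assumes an "odd-q" vertical layout, which may not match your 0101 scheme):

        using System;

        static class HexGrid
        {
            // Convert "odd-q" offset coordinates (col, row) to axial (q, r).
            public static (int q, int r) OffsetToAxial(int col, int row)
            {
                int q = col;
                int r = row - (col - (col & 1)) / 2;
                return (q, r);
            }

            // Hex distance in axial coordinates.
            public static int Distance((int q, int r) a, (int q, int r) b)
            {
                int dq = a.q - b.q;
                int dr = a.r - b.r;
                return (Math.Abs(dq) + Math.Abs(dr) + Math.Abs(dq + dr)) / 2;
            }

            static void Main()
            {
                var a = OffsetToAxial(1, 1);   // e.g. the "0101"-style first column, first hex (0-based here)
                var b = OffsetToAxial(4, 3);
                Console.WriteLine(Distance(a, b));
            }
        }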

    Read the article

  • How can I generate signed distance fields in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and then used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there for different methods which are supposed to be viable in real-time environments, such as methods for Chamfer distance transforms and Voronoi-diagram-approximation based transforms (as suggested in this presentation by the Pixeljunk Shooter dev guy), but I (and, it can be assumed, a lot of other people) have a very hard time actually putting them to use, since they're usually long, largely bloated with math and not very algorithmic in their explanation. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time:
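    For reference on the Chamfer route mentioned above: it is only two sweeps over the image, which is why it shows up in real-time discussions, while the common GPU-friendly alternative is jump flooding. A hedged CPU sketch of a 3-4 Chamfer distance transform (unsigned; running it once on the shape and once on its complement and subtracting the two gives a signed field):

        using System;

        static class ChamferDistance
        {
            // Distance to the nearest "inside" pixel, using 3 (edge) / 4 (diagonal) weights.
            public static int[,] Transform(bool[,] inside)
            {
                int h = inside.GetLength(0), w = inside.GetLength(1);
                const int INF = 1 << 20;
                var d = new int[h, w];
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        d[y, x] = inside[y, x] ? 0 : INF;

                // Forward pass: top-left to bottom-right, relax from already-visited neighbours.
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                    {
                        if (x > 0)              d[y, x] = Math.Min(d[y, x], d[y, x - 1] + 3);
                        if (y > 0)              d[y, x] = Math.Min(d[y, x], d[y - 1, x] + 3);
                        if (x > 0 && y > 0)     d[y, x] = Math.Min(d[y, x], d[y - 1, x - 1] + 4);
                        if (x < w - 1 && y > 0) d[y, x] = Math.Min(d[y, x], d[y - 1, x + 1] + 4);
                    }

                // Backward pass: bottom-right to top-left.
                for (int y = h - 1; y >= 0; y--)
                    for (int x = w - 1; x >= 0; x--)
                    {
                        if (x < w - 1)              d[y, x] = Math.Min(d[y, x], d[y, x + 1] + 3);
                        if (y < h - 1)              d[y, x] = Math.Min(d[y, x], d[y + 1, x] + 3);
                        if (x < w - 1 && y < h - 1) d[y, x] = Math.Min(d[y, x], d[y + 1, x + 1] + 4);
                        if (x > 0 && y < h - 1)     d[y, x] = Math.Min(d[y, x], d[y + 1, x - 1] + 4);
                    }

                return d;   // divide by 3 for an approximate pixel distance
            }
        }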

    Read the article

  • Can these games be fully coded in HTML5/JavaScript?

    - by RufioLJ
    I mean the mechanics of the game. Would it be possible? - Pokemon GBA series: rendering the world would be easy, but what about the battle mechanics? - MapleStory: after seeing dragonbound.net, which is an identical copy of Gunbound, I would think it's rather possible, but I'm still not sure whether JavaScript can handle all the mechanics of the world. It would be heavy on resources, I guess? I'm asking this because I'm really interested in HTML5 game development (I really think it will displace Flash on game dev ground in the future). I want to get an idea of how far games developed with HTML5/JavaScript can go. I was especially inspired by DragonBound; I really think it pushes HTML5/JavaScript to the limits (game dev).

    Read the article

  • How to keep a data structure synchronized over a network?

    - by David Gouveia
    Context: In the game I'm working on (a sort of point-and-click graphic adventure), pretty much everything that happens in the game world is controlled by an action manager that is structured a bit like: So for instance, if the result of examining an object should make the character say hello, walk a bit and then sit down, I simply spawn the following code:

        var actionGroup = actionManager.CreateGroup();
        actionGroup.Add(new TalkAction("Guybrush", "Hello there!"));
        actionGroup.Add(new WalkAction("Guybrush", new Vector2(300, 300)));
        actionGroup.Add(new SetAnimationAction("Guybrush", "Sit"));

    This creates a new action group (an entire line in the image above) and adds it to the manager. All of the groups are executed in parallel, but actions within each group are chained together so that the second one only starts after the first one finishes. When the last action in a group finishes, the group is destroyed.

    Problem: Now I need to replicate this information across a network, so that in a multiplayer session all players see the same thing. Serializing the individual actions is not the problem, but I'm an absolute beginner when it comes to networking and I have a few questions. I think for the sake of simplicity in this discussion we can abstract the action manager component to being simply:

        var actionManager = new List<List<string>>();

    How should I proceed to keep the contents of the above data structure synchronized between all players? Besides the core question, I also have a few other concerns related to it (i.e. all possible implications of the same problem above):

    - If I use a server/client architecture (with one of the players acting as both a server and a client) and one of the clients has spawned a group of actions, should it add them directly to the manager, or only send a request to the server, which in turn will order every client to add that group?
    - What about packet losses and the like? The game is deterministic, but I'm thinking that any discrepancy in the sequence of actions executed on a client could lead to inconsistent states of the world. How do I safeguard against that sort of problem?
    - What if I add too many actions at once; won't that cause problems for the connection? Is there any way to alleviate that?
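    A common starting point is a server-authoritative command stream: clients only send requests, the server assigns each accepted action group a sequence number and broadcasts it over a reliable channel, and every client (including the sender) applies groups strictly in sequence order, which addresses the ordering and packet-loss worries together. A much-reduced hedged sketch of the bookkeeping (transport, serialization and the real action types are left out; all names are illustrative):

        using System;
        using System.Collections.Generic;

        // One line of the action manager, reduced to strings as in the question.
        class ActionGroupMessage
        {
            public int Sequence;                 // assigned by the server only
            public List<string> Actions = new List<string>();
        }

        class AuthoritativeServer
        {
            int nextSequence;
            public event Action<ActionGroupMessage> Broadcast = delegate { };

            // Clients call this (via the network); the server decides the order.
            public void SubmitRequest(List<string> actions)
            {
                var msg = new ActionGroupMessage { Sequence = nextSequence++, Actions = actions };
                Broadcast(msg);                  // send to every client over a reliable channel
            }
        }

        class ClientActionManager
        {
            int expectedSequence;
            readonly SortedDictionary<int, ActionGroupMessage> pending =
                new SortedDictionary<int, ActionGroupMessage>();

            // Apply groups strictly in order, buffering anything that arrives early.
            public void OnReceive(ActionGroupMessage msg)
            {
                pending[msg.Sequence] = msg;
                while (pending.TryGetValue(expectedSequence, out var next))
                {
                    // actionManager.Add(next.Actions) would go here.
                    pending.Remove(expectedSequence);
                    expectedSequence++;
                }
            }
        }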

    Read the article

  • Dynamic navigation mesh changes

    - by Nairou
    I'm currently trying to convert from grids to navigation meshes for pathfinding, since grids are either too coarse for accurate navigation, or too fine to be useful for object tracking. While my map is fairly static, and the navigation mesh could be created in advance, this is somewhat of a tower defense game, where objects can be placed to block paths, so I need a way to recalculate portions of the navigation mesh to allow pathing around them. Is there any existing documentation on good ways to do this? I'm still very new to navigation meshes, so the prospect of modifying them to cut or fill holes sounds daunting.

    Read the article

  • About floating-point precision and why we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains what goes on behind the scenes and offers the obvious alternative: fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are they so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
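    For concreteness, a hedged sketch of what a fixed-point type looks like: a 16.16 value in a 32-bit integer, where addition is plain integer addition (so precision is uniform across the whole world) and multiplication needs a wider intermediate and a shift. This is purely illustrative and says nothing about what engines such as Outerra actually do:

        using System;

        // 16.16 fixed point stored in a 32-bit integer (illustrative only).
        readonly struct Fix16
        {
            public const int FractionBits = 16;
            public readonly int Raw;

            Fix16(int raw) { Raw = raw; }

            public static Fix16 FromDouble(double d) => new Fix16((int)Math.Round(d * (1 << FractionBits)));
            public double ToDouble() => (double)Raw / (1 << FractionBits);

            // Addition is plain integer addition: precision does not depend on magnitude.
            public static Fix16 operator +(Fix16 a, Fix16 b) => new Fix16(a.Raw + b.Raw);

            // Multiplication needs a wider intermediate, then a shift back down.
            public static Fix16 operator *(Fix16 a, Fix16 b) =>
                new Fix16((int)(((long)a.Raw * b.Raw) >> FractionBits));

            public override string ToString() => ToDouble().ToString();
        }

        static class Fix16Demo
        {
            static void Main()
            {
                Fix16 metres = Fix16.FromDouble(1234.5) + Fix16.FromDouble(0.25);
                Console.WriteLine(metres);                                        // 1234.75
                Console.WriteLine(Fix16.FromDouble(3.0) * Fix16.FromDouble(0.5)); // 1.5
            }
        }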

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (it may be .png, .jpg, ...) and I want it decoded in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan 'E' by now; this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.

    Read the article

  • How to implement the light trails for a tron game?

    - by Link
    Well, I was creating a TRON-style game, but had an issue with creating the actual light trails for the game. What I'm doing currently is keeping an array the same size as my window in pixels, implemented like this:

        int* collision[800][600];

    Then when the bike goes over a certain pixel, it is marked with a 1 for "travelled on". However, what is the most efficient way to create a working light-trail display? I tried to do something like this:

        int i, j;
        for(i = 0; i < 800; i++)
            for(j = 0; j < 600; j++)
                if(*collision[i][j] == 1)
                    Image::applySurface(i, j, trailSurface, gameScreen);

    But it isn't working properly; it just fills the whole screen with a sprite instead. What's a better/faster/working way to do this?

    Read the article

  • How to perform simple collision detection?

    - by Rob
    Imagine two squares sitting side by side, both level with the ground, like so: A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if all of the following are false:

    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. But consider a case like this, where one square is at a 45 degree angle: Is there an equally simple way to determine if those squares are touching?
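    For convex shapes such as rotated squares, the standard generalisation of that side-comparison test is the separating axis theorem: project both squares onto the normal of every edge of both squares, and if the projected intervals fail to overlap on any one axis, the shapes are not touching. A hedged C# sketch for two convex polygons given as corner lists:

        using System;
        using System.Numerics;

        static class Sat
        {
            // True if the convex polygons (corners given in order) overlap.
            public static bool Overlap(Vector2[] a, Vector2[] b) =>
                NoSeparatingAxis(a, b) && NoSeparatingAxis(b, a);

            static bool NoSeparatingAxis(Vector2[] edgesFrom, Vector2[] other)
            {
                for (int i = 0; i < edgesFrom.Length; i++)
                {
                    Vector2 edge = edgesFrom[(i + 1) % edgesFrom.Length] - edgesFrom[i];
                    Vector2 axis = new Vector2(-edge.Y, edge.X);   // edge normal

                    Project(edgesFrom, axis, out float minA, out float maxA);
                    Project(other, axis, out float minB, out float maxB);

                    if (maxA < minB || maxB < minA)
                        return false;                              // gap found: separated
                }
                return true;
            }

            static void Project(Vector2[] poly, Vector2 axis, out float min, out float max)
            {
                min = max = Vector2.Dot(poly[0], axis);
                for (int i = 1; i < poly.Length; i++)
                {
                    float p = Vector2.Dot(poly[i], axis);
                    if (p < min) min = p;
                    if (p > max) max = p;
                }
            }
        }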

    Read the article

  • How to balance this Pokémon simulator metagame by feedback?

    - by Dokkat
    This is a Pokémon simulator where you build a team of 6 Pokémon and battle with someone. Unfortunately, some Pokémon are stronger than others and only a few of the hundreds of species are practical. I'm trying to create a metagame where all of them are competitive. For this, I am tagging a Pokémon with a parameter (level) that changes its strength and scales up/down depending on its performance. That is, if the system detects Mewtwo is overperforming, it should decrease its level tag until Mewtwo is balanced. The question is: how can I identify if a Pokémon is causing an unbalance? The data I have is the history of the battles (player 1, player 2, Pokémon list, winner). The most basic solution I can think of is victory/loss counting.
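    Win/loss counting is a workable starting signal if it is turned into a per-species win rate and fed back into the level tag in small steps. A hedged sketch of that feedback step (thresholds, step size and the minimum sample size are made-up values; attributing a win to each species on the winning team is left to the reader):

        using System;
        using System.Collections.Generic;

        class SpeciesStats
        {
            public int Games;
            public int Wins;
            public double LevelTag = 50;        // the strength knob being tuned

            public double WinRate => Games == 0 ? 0.5 : (double)Wins / Games;
        }

        static class Balancer
        {
            // Run after each batch of recorded battles.
            public static void Adjust(Dictionary<string, SpeciesStats> stats,
                                      int minGames = 200, double margin = 0.05, double step = 1.0)
            {
                foreach (var s in stats.Values)
                {
                    if (s.Games < minGames) continue;                       // not enough data yet
                    if (s.WinRate > 0.5 + margin) s.LevelTag -= step;       // overperforming: weaken
                    else if (s.WinRate < 0.5 - margin) s.LevelTag += step;  // underperforming: buff
                }
            }
        }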

    Read the article

  • Tic-Tac-Toe game AI

    - by David Jones
    I'm looking into creating a simple tic-tac-toe/noughts-and-crosses game in ActionScript 3 and am trying to understand the ideas behind the AI used in a game like this. I've seen some simplistic examples online, but from what I've read a game tree or something like minimax is the best way to go about this. Can anyone help explain or reference any good examples of this? I've seen that there is a library called as3ds (data structures for game developers) which has a number of classes that might help tie this together. Any info, examples or help is much appreciated.
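    Minimax for tic-tac-toe is small enough to write out in full: search to the end of the game, score a win for the side to move as +1, a loss as -1 and a draw as 0, and pick the move with the best guaranteed score. A hedged C# sketch (board as a 9-element char array of 'X', 'O' and ' '); the same structure ports directly to ActionScript 3:

        using System;
        using System.Linq;

        static class TicTacToeAi
        {
            static readonly int[][] Lines =
            {
                new[]{0,1,2}, new[]{3,4,5}, new[]{6,7,8},   // rows
                new[]{0,3,6}, new[]{1,4,7}, new[]{2,5,8},   // columns
                new[]{0,4,8}, new[]{2,4,6}                  // diagonals
            };

            public static int BestMove(char[] board, char player)
            {
                int bestScore = int.MinValue, bestMove = -1;
                for (int i = 0; i < 9; i++)
                {
                    if (board[i] != ' ') continue;
                    board[i] = player;
                    int score = -Minimax(board, Opponent(player));
                    board[i] = ' ';
                    if (score > bestScore) { bestScore = score; bestMove = i; }
                }
                return bestMove;
            }

            // Score of the position for the side to move.
            static int Minimax(char[] board, char player)
            {
                char winner = Winner(board);
                if (winner != ' ') return winner == player ? 1 : -1;
                if (board.All(c => c != ' ')) return 0;    // draw

                int best = int.MinValue;
                for (int i = 0; i < 9; i++)
                {
                    if (board[i] != ' ') continue;
                    board[i] = player;
                    best = Math.Max(best, -Minimax(board, Opponent(player)));
                    board[i] = ' ';
                }
                return best;
            }

            static char Opponent(char p) => p == 'X' ? 'O' : 'X';

            static char Winner(char[] b)
            {
                foreach (var l in Lines)
                    if (b[l[0]] != ' ' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                        return b[l[0]];
                return ' ';
            }
        }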

    Read the article

  • Dynamic model interactions

    - by Richard
    I am just curious how many games (namely games like Arkham Asylum/City, Manhunt, Hitman) make it so that your character can "grab" a character in front of you and do stuff to them. I know this may sound very confusing, but as an example go to YouTube and search "hitman executions"; the first video is an example of what I'm asking. Basically I'm wondering how they make your model dynamically interact with whatever other model you come across. So in Hitman, when you come up behind someone with the fibre wire you strangle the other character, or if you have the anaesthetic you come up behind a person and put your hand over their mouth while they struggle and slowly go to the floor, where you lay them down. I am confused as to whether it is animated using two models with specific bone/skeletal identifiers, whether it is just two completely separate animations played at the correct time to make it look like they are actually interacting, or something else altogether. I am not an animator, so I assume most of what I just said is not right, but I hope someone can understand what I mean and provide an answer. PS: I am a programmer and I am in the process of building a Hitman-esque game, just because I love that style of game and I want to improve my skills on something fun. So if you do know what I'm talking about, examples involving both models and programming (I use C++ and mainly Ogre3D at the moment, but I am getting into Unity and XNA) would be greatly appreciated. Thanks.

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render two types of primitives: points (a point cloud) and triangles (a mesh). I am rendering the points without any index arrays and they render fine. To render the meshes I am using indexed drawing, with the face list array holding the indices of the vertices to be rendered as triangles. Vertices and their corresponding vertex colors are stored in their corresponding buffers, but the indexed drawing command does not draw anything. The code is as follows.

    Main display function:

        void display()
        {
            simple->enable();
            simple->bindUniform("MV", modelview);
            simple->bindUniform("P", projection);

            // rendering Point Cloud
            glBindVertexArray(vao);

            // Vertex buffer Point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Color Buffer point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Render Colored Point Cloud
            //glDrawArrays(GL_POINTS, 0, model->vertexCount);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // ---------------- END---------------------//

            //// Floor Rendering
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *)48);
            glDrawArrays(GL_QUADS, 0, 4);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // -----------------END---------------------//

            // Rendering the Meshes
            //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING ////////////////////
            glBindVertexArray(vid);
            for(int i = 0; i < NUM_MESHES; i++)
            {
                glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
                glEnableVertexAttribArray(0);
                glEnableVertexAttribArray(1);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
                glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void *)(meshes[i]->vertexCount * sizeof(glm::vec3)));
                //glDrawArrays(GL_TRIANGLES, 0, meshes[i]->vertexCount);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
                //cout << gluErrorString(glGetError());
                glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_FLOAT, (void *)0);
                glDisableVertexAttribArray(0);
                glDisableVertexAttribArray(1);
            }
            glUseProgram(0);
            glutSwapBuffers();
            glutPostRedisplay();
        }

    Point cloud buffer allocation and initialization:

        void initGLPointCloud()
        {
            glGenBuffers(1, &vertexbuffer);
            glGenBuffers(1, &colorbuffer);
            glGenBuffers(1, &fl);

            // Populates the position buffer
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->positions[0], GL_STATIC_DRAW);

            // Populates the color buffer
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->colors[0], GL_STATIC_DRAW);

            model->FreeMemory(); // To free the not needed memory, as the data has already been
                                 // copied to the graphics card and won't be used again.
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    Mesh buffer initialization:

        void initGLMeshes(int i)
        {
            glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
            glBufferData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3) * 2, NULL, GL_STATIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->positions[0]);
            glBufferSubData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3), meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->colors[0]);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::vec3), &meshes[i]->faces[0], GL_STATIC_DRAW);
            meshes[i]->FreeMemory();
            //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        }

    Initialize the rendering, load and create the shader, and call the mesh and point cloud initializers:

        void initRender()
        {
            simple = new GLSLShader("shaders/simple.vert", "shaders/simple.frag");

            // Point Cloud
            // Sets up VAO
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);
            initGLPointCloud();

            // floorData
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);

            // Meshes
            for(int i = 0; i < NUM_MESHES; i++)
            {
                if(i == 0) // SET up the new vertex array state for indexed Drawing
                {
                    glGenVertexArrays(1, &vid);
                    glBindVertexArray(vid);
                    glGenBuffers(NUM_MESHES, mVertex);
                    glGenBuffers(NUM_MESHES, mColor);
                    glGenBuffers(NUM_MESHES, mFace);
                }
                initGLMeshes(i);
            }
            glEnable(GL_DEPTH_TEST);
        }

    Any help would be much appreciated; I have been breaking my head on this problem for three days, and it is still unsolved.
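    One detail worth checking (an observation, not a guaranteed fix): the type parameter of glDrawElements must be an integer index type (GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT or GL_UNSIGNED_INT); GL_FLOAT is not valid there and raises GL_INVALID_ENUM, so the face buffer should hold unsigned integer indices and the draw call should name that type. A hedged illustration written against OpenTK's C# bindings, for consistency with the other sketches in this listing (the original code is C++):

        using System;
        using OpenTK.Graphics.OpenGL4;

        static class MeshDraw
        {
            // Assumes a GL context is current, the VAO and attribute state are already bound,
            // and the element buffer was filled with uint indices (three per triangle).
            public static void DrawMesh(int faceCount, int elementBuffer)
            {
                GL.BindBuffer(BufferTarget.ElementArrayBuffer, elementBuffer);
                // The 'type' argument names the index type stored in the element buffer;
                // it must be an unsigned integer type, never a float type.
                GL.DrawElements(PrimitiveType.Triangles, faceCount * 3, DrawElementsType.UnsignedInt, IntPtr.Zero);
            }
        }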

    Read the article
