Search Results

Search found 19338 results on 774 pages for 'game loop'.

  • Offset Forward vector of object based on Rotation

    - by Taylor
    I'm using the Bullet 3D physics engine in an iOS application running OpenGL ES 1.1. Currently I'm accepting info from the gyroscope to allow the user to "look around" a 3D world that follows a bouncing ball (note: it only takes in the yaw to look around 360 degrees). I'm also accepting information from the accelerometer, based on the tilt, to push the ball. As of right now, to move forward, the user tilts the device forward (using the accelerometer); to move to the right, the user tilts the device to the right, and so on. The forward vector is currently along its local Z-axis. The problem is that I want to change the ball's movement based on where the user has changed the view. If I change the view, the ball still bounces in the fixed direction. I want to change the forward-facing direction so that when a user changes the view (say, rotating the device to look at the right of the world), tilting the device forward will result in a forward force in that direction. Basically, I want the forward vector to take the rotation into consideration. Sorry if I didn't explain the issue well enough; it's kind of confusing to write down.
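
    One possible approach, sketched with Bullet's own math types: rotate the device-space tilt force by the current view yaw before applying it, so "forward" always means the direction the user is looking. Here yaw is assumed to be the radian value read from the gyroscope and Y the world up axis.

        #include <LinearMath/btQuaternion.h>
        #include <LinearMath/btVector3.h>

        // Sketch (untested): rotate the device-space tilt force into world space
        // using the current view yaw, then apply the result to the ball.
        btVector3 worldForce(const btVector3 &localTiltForce, btScalar yaw)
        {
            btQuaternion viewYaw(btVector3(0, 1, 0), yaw); // rotation about the world up axis
            return quatRotate(viewYaw, localTiltForce);    // pass this to applyCentralForce()
        }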

    Read the article

  • What platform is best for Android and iPhone development?

    - by Toy Yoda
    I've been developing non-mobile apps for Linux; mainly stuff like interpreters, compilers, database engines and business apps. I've been told that if I wanted to learn how to develop iPhone/iPad applications, I should buy a Mac, since Apple has all its development tools for iPhone/iPad on Mac. Now, what about Android phones / tablets? Are the development tools better on Mac or PC? I need to buy a new laptop, and I would like to factor mobile development into my choice of PC or Mac.

    Read the article

  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop, not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project? Currently, I created a new MonoGame project, added a folder called "Content," and added images under there; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy options. It seems strange, because my "raw" XNA project just last week had a Content project in it (XNA Framework Content Pipeline, according to VS2010), which compiled my images to XNB (I think). It seems like MonoGame doesn't use the same content pipeline, but I'm not sure. Edit: My question is not about "how do I get the XNA content pipeline to work with MonoGame." My question is "why would I want to use the XNA content pipeline in MonoGame?" Because there are (at least) two solutions (that I see today): Add the images to the MonoGame project and set the Copy to Output Directory options to copy. Add an XNA content pipeline project, add my images to that instead, and reference it from my MonoGame project. Which solution should I use, and why? I currently have a working version with the first option.
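
    For context, a minimal sketch (assuming the MonoGame/XNA 4.0-era API) of the two loading paths those options imply; the asset names are placeholders:

        // Option 2: a compiled .xnb produced by an XNA content pipeline project
        Texture2D fromPipeline = Content.Load<Texture2D>("player");

        // Option 1: a raw .png marked Copy to Output Directory
        Texture2D fromRawFile;
        using (var stream = System.IO.File.OpenRead("Content/player.png"))
        {
            fromRawFile = Texture2D.FromStream(GraphicsDevice, stream);
        }

    One practical difference worth noting, if memory serves: the pipeline premultiplies alpha at build time, while Texture2D.FromStream does not, so transparent sprites loaded from raw files may need their alpha handled manually.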

    Read the article

  • Rotating a child shape relative to its parent's orientation

    - by user1423893
    When rotating a shape using a quaternion value I also wish to rotate its child shape. The parent and child shapes both start with different orientations, but their relative orientations should always be the same. How can I use the difference between the previous and current quaternions of the parent shape in order to transform the child segment and rotate it relative to its parent shape?

        public Quaternion Orientation
        {
            get { return entity.Orientation; }
            set
            {
                Quaternion previousValue = entity.Orientation;
                entity.Orientation = value;
                // Use the difference between the quaternion values to update child orientation
            }
        }
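
    A minimal sketch of one way to fill in that setter, using XNA-style quaternion helpers; childEntity is a placeholder for however the child shape is referenced:

        set
        {
            Quaternion previousValue = entity.Orientation;
            entity.Orientation = value;

            // Delta that takes the old parent orientation to the new one:
            // undo the old rotation, then apply the new one.
            Quaternion delta = Quaternion.Concatenate(Quaternion.Inverse(previousValue), value);

            // Apply the same world-space delta to the child so the relative
            // orientation between parent and child stays fixed.
            childEntity.Orientation = Quaternion.Concatenate(childEntity.Orientation, delta);
        }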

    Read the article

  • Proper way to create multiple similar buttons/panels

    - by JayAvon
    I have the code below, but it only shows the minus/plus buttons on the last GridLayout (the Intelligence stat):

        JButton plusButton = new JButton("+");
        JButton minusButton = new JButton("-");

        statStrengthGridPanel = new JPanel(new GridLayout(1,3));
        statStrengthGridPanel.add(minusButton);
        statStrengthGridPanel.add(new JLabel("10"));
        statStrengthGridPanel.add(plusButton);

        statConstitutionGridPanel = new JPanel(new GridLayout(1,3));
        statConstitutionGridPanel.add(minusButton);
        statConstitutionGridPanel.add(new JLabel("10"));
        statConstitutionGridPanel.add(plusButton);

        statDexterityGridPanel = new JPanel(new GridLayout(1,3));
        statDexterityGridPanel.add(minusButton);
        statDexterityGridPanel.add(new JLabel("10"));
        statDexterityGridPanel.add(plusButton);

        statIntelligenceGridPanel = new JPanel(new GridLayout(1,3));
        statIntelligenceGridPanel.add(minusButton);
        statIntelligenceGridPanel.add(new JLabel("10"));
        statIntelligenceGridPanel.add(plusButton);

    I know I could create multiple buttons the way I did for the panel names, but I did not want to do that for the panels in the first place. I am trying to use best practice and not have my code be repetitive. Any suggestions? The goal is to have 4 stats to assign points to, with decrement and increment buttons (I decided against sliders). Eventually they will have upper and lower limits, decrement the "unused" label, and all of that good stuff, but for now I just want to avoid repetition. Thanks for any help.
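
    A possible explanation and sketch of a fix: a Swing component can only belong to one container, so a shared plusButton/minusButton ends up in whichever panel it was added to last. A small factory method that builds a fresh pair of buttons per stat row avoids both the symptom and the repetition (the method name here is illustrative):

        private JPanel createStatRow() {
            JPanel row = new JPanel(new GridLayout(1, 3));
            JButton minus = new JButton("-");   // each row gets its own buttons
            JButton plus = new JButton("+");
            row.add(minus);
            row.add(new JLabel("10"));
            row.add(plus);
            return row;
        }

        // usage:
        statStrengthGridPanel = createStatRow();
        statConstitutionGridPanel = createStatRow();
        statDexterityGridPanel = createStatRow();
        statIntelligenceGridPanel = createStatRow();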

    Read the article

  • How do I get the point coords of a rotated SFML shaperect?

    - by user15498
    I am trying to get bullet collisions working, and am using SFML. Right now I compute the positions of the rectangle's corner points myself for collisions, but I think there should be a way to get those points directly from SFML, since the shape is a rectangle and already stores them. Is there a way to do that, through a combination of getPoint() and getGlobalBounds() maybe? While on this topic, is it better to use rectangle shapes or sprites? I used to only use sprites, but with the addition of textures and more low-level stuff I think it would be best to switch to using rectangles and setting their size.
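
    A minimal sketch of the getPoint() route, assuming SFML 2.x and an sf::RectangleShape named rect: the four corners come back in local coordinates, and the shape's own transform maps them to world space with the rotation applied.

        sf::Vector2f corners[4];
        for (std::size_t i = 0; i < 4; ++i)
            corners[i] = rect.getTransform().transformPoint(rect.getPoint(i));
        // getGlobalBounds() instead returns the axis-aligned box around these corners,
        // which discards the rotation, so it only helps as a cheap first-pass test.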

    Read the article

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    From the sample image below, I have a border in yellow just for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking to try a 2x2, but that would not help when trying to interpret a low/high vs high/low drawing stream. At least this way, I would have two black, one white from the top, or one white, two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode that (zlib) and come up with 12 bytes, as follows: 00 20 00 40 00 80. So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format, and I properly recognize the bit depth of 1 and a color palette of 2 entries: palette[0] is RGBA all zeros, and palette[1] has RGBA of 255, 255, 255, 0. I'll eventually get into the multiple other depth formats later; I just wanted to start with what I would expect to be the easiest. Part II: Any guidance on handling the other depth formats would help, especially anything special to be considered regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
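
    A worked sketch of how those bytes map to pixels, assuming the six bytes shown are the complete decompressed stream: at bit depth 1 each scanline is one filter-type byte followed by ceil(width/8) data bytes, and pixels are packed most-significant bit first.

        # Hedged sketch: unpack a 3x3, 1-bit-per-pixel, palettized PNG scanline stream.
        raw = bytes([0x00, 0x20, 0x00, 0x40, 0x00, 0x80])
        width = 3
        rows = []
        for r in range(3):
            filt, data = raw[2 * r], raw[2 * r + 1]
            assert filt == 0  # filter type 0 = None, nothing to undo
            rows.append([(data >> (7 - x)) & 1 for x in range(width)])
        print(rows)  # [[0, 0, 1], [0, 1, 0], [1, 0, 0]] -> palette indices per pixel

    With palette[0] = black and palette[1] = white, that gives rows of "black, black, white", then "black, white, black", then "white, black, black", which matches the "two black, one white from the top" description.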

    Read the article

  • Difference between Sound and Music

    - by Southpaw Hare
    What are the key differences between the Sound and Music classes in Pygame? What are the limitations of each? In what situation would one use one or the other? Is there a benefit to using them in an unintuitive way, such as using Sound objects to play music files or vice versa? Are there specific issues with channel limitations, and do one or both have the potential to be dropped from their channel unreliably? What are the risks of playing music as a Sound?
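
    For reference, a minimal sketch of the two APIs being compared (file names are placeholders): pygame.mixer.Sound decodes the whole file into memory and plays on a mixer channel, while pygame.mixer.music streams a single file from disk.

        import pygame

        pygame.mixer.init()

        hit = pygame.mixer.Sound("hit.wav")   # fully loaded into memory, mixed on a free channel
        hit.play()

        pygame.mixer.music.load("theme.ogg")  # streamed; only one music track at a time
        pygame.mixer.music.play(-1)           # -1 = loop indefinitely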

    Read the article

  • How to balance this Pokémon simulator metagame by feedback?

    - by Dokkat
    This is a Pokémon simulator where you build a team of 6 Pokémon and battle with someone. Unfortunately, some Pokémon are stronger than others, and only a few of the hundreds of species are practical. I'm trying to create a metagame where all of them are competitive. For this, I am tagging each Pokémon with a parameter (level) that changes its strength and scales up/down depending on its performance. That is, if the system detects Mewtwo is overperforming, it should decrease its level tag until Mewtwo is balanced. The question is: how can I identify whether a Pokémon is causing an imbalance? The data I have is the history of battles (player 1, player 2, Pokémon list, winner). The most basic solution I can think of is win/loss counting.
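
    A minimal sketch of that basic idea, assuming the battle history is a list of (team1, team2, winner) tuples with winner being 1 or 2; every Pokémon whose win rate drifts away from 50% gets its level tag nudged the other way. The thresholds and step size are illustrative.

        from collections import defaultdict

        def win_rates(battles):
            games = defaultdict(int)
            wins = defaultdict(int)
            for team1, team2, winner in battles:
                for p in team1:
                    games[p] += 1
                    wins[p] += (winner == 1)
                for p in team2:
                    games[p] += 1
                    wins[p] += (winner == 2)
            return {p: wins[p] / games[p] for p in games}

        def adjust_levels(levels, rates, target=0.5, margin=0.05, step=1):
            # nudge each Pokemon's level tag toward the target win rate
            for p, rate in rates.items():
                if rate > target + margin:
                    levels[p] -= step   # overperforming -> weaken
                elif rate < target - margin:
                    levels[p] += step   # underperforming -> strengthen
            return levels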

    Read the article

  • Periodic updates of an object in Unity

    - by Blue
    I'm trying to make a collider appear every second, but I can't get the code right. I tried enabling the collider in the Update function and putting a yield in to make it update every second or so, but it's not working (it gives me an error: "Update() cannot be a coroutine"). How would I fix this? Would I need a timer system to toggle the collider?

        var waitTime : float = 1;
        var trigger : boolean = false;

        function Update () {
            if(!trigger){
                collider.enabled = false;
                yield WaitForSeconds(waitTime);
            }
            if(trigger){
                collider.enabled = true;
                yield WaitForSeconds(waitTime);
            }
        }
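
    One possible fix, sketched in the same legacy UnityScript style: keep Update() out of it and let Start() act as the coroutine (Unity allows Start to yield), toggling the collider once per period. InvokeRepeating would be another option.

        var waitTime : float = 1;

        function Start () {
            while (true) {
                collider.enabled = !collider.enabled;  // toggle on/off
                yield WaitForSeconds(waitTime);        // wait one period between toggles
            }
        }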

    Read the article

  • Need to make animation whereby the character shatters into a bunch of pieces

    - by theprojectabot
    I would like to take a 3d character model, cut out a bunch of shapes (or a bunch of triangles in the shape of the pieces I want) and then have the pieces separate from each other at the beginning of the animation and fall apart with gravity so it looks like the model is falling apart in shattered pieces. Is there a way to run a script on a mesh, cut out these pieces, instantiate all of them as separate models and then run gravity on them during the simulation?

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (may be .png, .jpg, ...) and I want it converted in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan 'E' by now, this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.
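
    One thing that may be worth trying before reaching plan E (a sketch, not verified on every Android version): BitmapFactory accepts a preferred config, and ALPHA_8 is an 8-bit single-channel format, which is what a GL_ALPHA texture wants. The hint is not guaranteed to be honored, so the resulting config still needs checking.

        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.ALPHA_8;   // ask for 8 bpp, single channel
        Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length, opts);
        // bmp.getConfig() tells you whether the hint was respected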

    Read the article

  • XNA - Am I screwing up the LoadContent for Texture2D?

    - by Bombcode
    I've read forum threads and questions and I've done just about everything. I need to know what I am screwing up here. Here's the code in the constructor. Content.RootDirectory = "GameStateContent"; //Content.RootDirectory = "Content"; And this is in the LoadContent method: menu = this.Content.Load<Texture2D>("mainmenu"); And here's a screenshot of the folder structure: http://i.imgur.com/HnndE.png Any help on this? Thanks.

    Read the article

  • Per fragment lighting with OpenGL 4.x tessellated model

    - by Finlaybob
    I'm experienced with OpenGL 3+. I'm dabbling with tessellation shaders and have now got to a point where I have a nicely tessellated teapot/plane demo (quick look here). As can be seen from the screenshots, the lighting is broken (though admittedly it doesn't look too bad in the image). I've tried to add a normal map to the equation but it still doesn't come out right; I can calculate the normals, tangents and binormals per triangle in the geometry shader but it still looks wrong. I think the question would be: how do I add per-fragment lighting to a tessellated model? The teapot is 32 16-point patches, the plane is one single 16-point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who can't make sense of them. But peruse at your leisure if you like. Also, if this question is more suited to somewhere else, i.e. Stack Overflow or the Programmers stack, please let me know.

    Read the article

  • Unable to find good parameters for behavior of a puck in Farseer

    - by Krumelur
    EDIT: I have tried all kinds of variations now. The last one was to adjust the linear velocity in each step: newVel = oldVel * 0.9f. All of this, including your proposals, kind of works; however, in the end, if the velocity is low enough, the puck sticks to the wall and slides either vertically or horizontally. I can't get rid of this behavior. I want it to bounce, no matter how slow it is. Restitution is 1 for all objects, friction is 0 for all. There is no gravity. What am I possibly doing wrong?

    In my battle to learn and understand Farseer I'm trying to set up a simple air-hockey-like table with a puck on it. The table's border body is made up from:

        Vertices aBorders = new Vertices( 4 );
        aBorders.Add( new Vector2( -fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, -fHalfHeight ) );
        aBorders.Add( new Vector2( -fHalfWidth, -fHalfHeight ) );
        FixtureFactory.AttachLoopShape( aBorders, this );
        this.CollisionCategories = Category.All;
        this.CollidesWith = Category.All;
        this.Position = Vector2.Zero;
        this.Restitution = 1f;
        this.Friction = 0f;

    The puck body is defined with:

        FixtureFactory.AttachCircle( DIAMETER_PHYSIC_UNITS / 2f, 0.5f, this );
        this.Restitution = 0.1f;
        this.Friction = 0.5f;
        this.BodyType = FarseerPhysics.Dynamics.BodyType.Dynamic;
        this.LinearDamping = 0.5f;
        this.Mass = 0.2f;

    I'm applying a linear impulse to the puck:

        this.oPuck.ApplyLinearImpulse( new Vector2( 1f, 1f ) );

    The problem is that the puck and the walls appear to be sticky. This means that the puck's velocity drops to zero too quickly below a certain speed. The puck gets reflected by the walls a couple of times and then just sticks to the left wall and continues sliding down it. This looks very unrealistic. What I'm really looking for is a puck-wall collision that slows the puck down only a tiny little bit. After tweaking all values left and right, I'm wondering if I'm doing something wrong. Maybe some expert can comment on the parameters?
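
    A hedged guess at knobs worth checking, sketched against the Farseer 3.x API with puck standing in for the puck body: Farseer inherits Box2D-style contact mixing, where a contact's restitution is the larger of the two fixtures' values, so the wall's 1f should already dominate; what bleeds speed off between bounces is more likely the puck's LinearDamping and friction.

        // Sketch, not a verified fix: keep the puck lively between and during collisions.
        puck.Restitution = 1f;     // elastic against the walls
        puck.Friction = 0f;        // no tangential loss when sliding along a wall
        puck.LinearDamping = 0f;   // damping > 0 slows the puck every step, even in open ice
        puck.IsBullet = true;      // continuous collision so a fast puck can't get caught in the loop shape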

    Read the article

  • Unity Problem with colliding instances of same object

    - by Kuba Sienkiewicz
    I want to check whether an object's instance is overlapping with another instance (any spawned object with any other spawned object, not necessarily the same object). I'm doing this by detecting collisions between bodies, but I have a problem: spawned objects (instances) detect collisions with everything except other spawned objects. I've checked collision layers etc. All of the spawned objects have rigidbodies and mesh colliders. Also, when I attach my script to another body and touch that body with an instanced object, the collision is detected. So the problem only shows up in collisions between spawned objects. One more piece of information: the script, rigidbody and collider are attached to a child of the main object.

        using UnityEngine;
        using System.Collections;

        public class CantPlace : MonoBehaviour {
            public bool collided = false;

            // Use this for initialization
            void Start () {
            }

            // Update is called once per frame
            void Update () {
                //Debug.Log (collided);
            }

            void OnTriggerEnter(Collider collider) {
                //if (true) {
                //foreach (Transform child in this.transform) {
                //    if (child.name == "Cylinder") {
                //collided = true;
                Color c;
                c = this.renderer.material.color;
                c.g = 0f;
                c.b = 1f;
                c.r = 0f;
                this.renderer.material.color = c;
                Debug.Log (collider.name);
                //}
                //    }
                //}
                //foreach (ContactPoint contact in collision.contacts) {
                //    Debug.DrawRay(contact.point, contact.normal, Color.red,15f);
                //}
            }
        }
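
    A hedged guess at the usual culprit in this situation: non-convex MeshColliders do not generate collision or trigger events against other non-convex MeshColliders, so spawned prefabs ignore each other while still registering against everything else. Marking the prefab's collider convex (and keeping a Rigidbody on at least one side of each pair) normally restores the events; the snippet below only illustrates that setting.

        void Awake () {
            MeshCollider mesh = GetComponent<MeshCollider>();
            mesh.convex = true;      // mesh-vs-mesh events require at least one convex collider
            // mesh.isTrigger = true; // only if OnTriggerEnter rather than OnCollisionEnter is wanted
        }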

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves draw call counts, on the basis of the logic being something like this:

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()
            which then does:
                foreach voxel in myvoxels
                    DrawIfVisible()

    So surely you saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, effectively saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me ... the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
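
    For what it's worth, the saving usually comes from batching rather than visibility testing: the visible faces of all voxels in a chunk are baked into one vertex buffer whenever the chunk changes, so at render time each chunk is a single draw call instead of one per voxel. A rough XNA-flavoured sketch (the names are illustrative):

        foreach (Chunk chunk in visibleChunks)
        {
            // chunk.VertexBuffer was built once, when the chunk was created or modified
            device.SetVertexBuffer(chunk.VertexBuffer);
            device.DrawPrimitives(PrimitiveType.TriangleList, 0, chunk.TriangleCount);
        }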

    Read the article

  • Arranging Gizmos in Unity 3D [on hold]

    - by Simran kaur
    I have this arrangement of Gizmos, which was handed over to me. How do I get it? I have read the documentation, but I could not get it as shown. I basically have a track or lane that comes towards the camera by moving towards negative z. I am moving the lanes so that it appears as if the cars are moving. The roads need to be rotated by 90 degrees, otherwise they appear to move towards the upper end of the screen, and parallel to each other at that. Why exactly is that?

    Read the article

  • Creating a 2D Line Branch (Part 2)

    - by Danran
    Yesterday I asked this question on how to create a 2D line branch: Creating a 2D Line Branch. Thanks to the answer provided, I now have a nice-looking main branch (coloured to show the different segments in the final item). Now it's time to branch things off, as discussed in the article http://drilian.com/2009/02/25/lightning-bolts/. Again, however, I am confused as to the meaning of the following pseudocode:

        splitEnd = Rotate(direction, randomSmallAngle) * lengthScale + midPoint;

    I'm unsure how to actually do the rotation correctly. In all honesty I'm a bit unsure what to do at this part. "splitEnd" will be a Vector3, so whatever happens in the Rotate function must return some form of rotated direction, which is then multiplied by a scale to create length and then added to the midPoint. I'm not sure. If someone could explain what I'm meant to be doing in this part, I would be really grateful.
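
    A possible reading of that line, sketched with XNA math types; the angle range and scale values are illustrative, and start, end, midPoint and random are assumed to come from the segment being split. Take the segment's direction, rotate it by a small random angle about the axis the bolt lives in (Z for a 2D bolt), shrink it, and offset it from the midpoint to get the tip of the new branch.

        Vector3 direction = end - start;                       // the segment being split
        float angle = MathHelper.ToRadians((float)(random.NextDouble() * 30.0 - 15.0)); // small random angle
        Vector3 rotated = Vector3.Transform(direction, Matrix.CreateRotationZ(angle));
        Vector3 splitEnd = rotated * lengthScale + midPoint;   // lengthScale < 1 keeps branches shorter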

    Read the article

  • node.js callback getting unexpected value for variable

    - by defrex
    I have a for loop, and inside it a variable is assigned with var. Also inside the loop a method is called which requires a callback. Inside the callback function I'm using the variable from the loop. I would expect that its value, inside the callback function, would be the same as it was outside the callback during that iteration of the loop. However, it always seems to be the value from the last iteration of the loop. Am I misunderstanding scope in JavaScript, or is there something else wrong? The program in question here is a node.js app that will monitor a working directory for changes and restart the server when it finds one. I'll include all of the code for the curious, but the important bit is the parse_file_list function.

        var posix = require('posix');
        var sys = require('sys');
        var server;
        var child_js_file = process.ARGV[2];
        var current_dir = __filename.split('/');
        current_dir = current_dir.slice(0, current_dir.length-1).join('/');

        var start_server = function(){
            server = process.createChildProcess('node', [child_js_file]);
            server.addListener("output", function(data){sys.puts(data);});
        };

        var restart_server = function(){
            sys.puts('change discovered, restarting server');
            server.close();
            start_server();
        };

        var parse_file_list = function(dir, files){
            for (var i=0;i<files.length;i++){
                var file = dir+'/'+files[i];
                sys.puts('file assigned: '+file);
                posix.stat(file).addCallback(function(stats){
                    sys.puts('stats returned: '+file);
                    if (stats.isDirectory())
                        posix.readdir(file).addCallback(function(files){
                            parse_file_list(file, files);
                        });
                    else if (stats.isFile())
                        process.watchFile(file, restart_server);
                });
            }
        };

        posix.readdir(current_dir).addCallback(function(files){
            parse_file_list(current_dir, files);
        });
        start_server();

    The output from this is:

        file assigned: /home/defrex/code/node/ejs.js
        file assigned: /home/defrex/code/node/templates
        file assigned: /home/defrex/code/node/web
        file assigned: /home/defrex/code/node/server.js
        file assigned: /home/defrex/code/node/settings.js
        file assigned: /home/defrex/code/node/apps
        file assigned: /home/defrex/code/node/dev_server.js
        file assigned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js

    For those from the future: node.devserver.js
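
    For reference, a sketch of what is going on and one classic fix for this era of JavaScript (before let existed): var is function-scoped, so every callback created in the loop closes over the same file binding, and by the time the asynchronous stat callbacks run, that binding holds the last path assigned. Wrapping the loop body in an immediately-invoked function gives each callback its own copy.

        for (var i = 0; i < files.length; i++){
            (function(file){                             // new scope per iteration
                sys.puts('file assigned: ' + file);
                posix.stat(file).addCallback(function(stats){
                    sys.puts('stats returned: ' + file); // now the value captured for this iteration
                    // ... same directory/file handling as above ...
                });
            })(dir + '/' + files[i]);
        }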

    Read the article

  • Exporting UV coords from Blender

    - by Soapy
    So I have searched on Google and various other websites but have not found an answer; the ones I did find did not work. My question is: how do I get UV coords from Blender (2.63)? Currently I'm writing my own custom file exporter, and so far I have managed to export vertices and their normals. Is there a way to export the UV coords? N.B. I'm currently trying to figure it out using a simple cube that is unwrapped and has a texture applied to it.

    Read the article

  • Can't export Blender model for use in jMonkeyEngine SDK

    - by Nathan Sabruka
    I have a scene rendered in Blender called "civ1.blend" which contains multiple materials (for example, I have one called "white"). I want to use this model in jMonkeyEngine, so I used the OGRE exporter to create .scene and .material files. This gives me, for example, a civ1.scene file and a white.material file. However, when I then try to import civ1.scene into the jMonkeyEngine SDK, I get an error along the lines of "Cannot find material file 'civ1.material'". Like I said, I have a white.material file, but I do not have a civ1.material file. Has anyone encountered this problem? How do I fix this?

    Read the article

  • What forms of non-interactive RPG battle systems exist?

    - by Landstander
    I am interested in systems that allow players to develop a battle plan or setup strategy for the party or characters prior to entering battle. During the battle the player either cannot input commands or can choose not to. Rule based: in this kind of system the player can set up a list of rules in the form of [Condition - Action] that are then ordered by priority. Examples: Gambits in Final Fantasy XII; Tactics in Dragon Age: Origins & II.

    Read the article

  • Basic 3D Collision detection in XNA 4.0

    - by NDraskovic
    I have a problem with detecting collisions between 2 models using BoundingSpheres in XNA 4.0. The code I'm using is very simple:

        private bool IsCollision(Model model1, Matrix world1, Model model2, Matrix world2)
        {
            for (int meshIndex1 = 0; meshIndex1 < model1.Meshes.Count; meshIndex1++)
            {
                BoundingSphere sphere1 = model1.Meshes[meshIndex1].BoundingSphere;
                sphere1 = sphere1.Transform(world1);

                for (int meshIndex2 = 0; meshIndex2 < model2.Meshes.Count; meshIndex2++)
                {
                    BoundingSphere sphere2 = model2.Meshes[meshIndex2].BoundingSphere;
                    sphere2 = sphere2.Transform(world2);

                    if (sphere1.Intersects(sphere2))
                        return true;
                }
            }
            return false;
        }

    The problem is that when I call this method from the Update method, the program behaves as if it always returns true (which of course is not correct). The calling code is very simple (although this is only test code):

        if (IsCollision(model1, worldModel1, model2, worldModel2))
        {
            Window.Title = "Intersects";
        }

    What is causing this?
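
    A hedged guess at the usual cause, with an illustrative tweak: a mesh's BoundingSphere is expressed in that mesh's parent-bone space, so it needs to go through the model's absolute bone transform before the world matrix; skipping that step can leave spheres misplaced or oversized, which makes every test report an intersection. Something along these lines may be worth trying:

        Matrix[] bones1 = new Matrix[model1.Bones.Count];
        model1.CopyAbsoluteBoneTransformsTo(bones1);

        ModelMesh mesh1 = model1.Meshes[meshIndex1];
        BoundingSphere sphere1 = mesh1.BoundingSphere
            .Transform(bones1[mesh1.ParentBone.Index] * world1);  // bone space -> model space -> world space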

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render 2 types of primitives: points (a point cloud) and triangles (a mesh). I am rendering the points without any index arrays and they render fine. To render the meshes I am using indexed drawing, with the face list array holding the indices of the vertices to be rendered as triangles. Vertices and their corresponding vertex colors are stored in their corresponding buffers, but the indexed drawing command does not draw anything. The code is as follows.

    Main display function:

        void display()
        {
            simple->enable();
            simple->bindUniform("MV", modelview);
            simple->bindUniform("P", projection);

            // rendering Point Cloud
            glBindVertexArray(vao);

            // Vertex buffer Point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Color Buffer point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Render Colored Point Cloud
            //glDrawArrays(GL_POINTS, 0, model->vertexCount);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // ---------------- END ----------------//

            //// Floor Rendering
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *)48);
            glDrawArrays(GL_QUADS, 0, 4);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // ---------------- END ----------------//

            // Rendering the Meshes
            //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING ////////////////////
            glBindVertexArray(vid);
            for (int i = 0; i < NUM_MESHES; i++)
            {
                glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
                glEnableVertexAttribArray(0);
                glEnableVertexAttribArray(1);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
                glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void *)(meshes[i]->vertexCount * sizeof(glm::vec3)));
                //glDrawArrays(GL_TRIANGLES, 0, meshes[i]->vertexCount);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
                //cout << gluErrorString(glGetError());
                glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_FLOAT, (void *)0);
                glDisableVertexAttribArray(0);
                glDisableVertexAttribArray(1);
            }

            glUseProgram(0);
            glutSwapBuffers();
            glutPostRedisplay();
        }

    Point cloud buffer allocation and initialization:

        void initGLPointCloud()
        {
            glGenBuffers(1, &vertexbuffer);
            glGenBuffers(1, &colorbuffer);
            glGenBuffers(1, &fl);

            // Populates the position buffer
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->positions[0], GL_STATIC_DRAW);

            // Populates the color buffer
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->colors[0], GL_STATIC_DRAW);

            model->FreeMemory(); // Free memory that is no longer needed, as the data has already been
                                 // copied to the graphics card and won't be used again.
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    Mesh buffer initialization:

        void initGLMeshes(int i)
        {
            glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
            glBufferData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3) * 2, NULL, GL_STATIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->positions[0]);
            glBufferSubData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3), meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->colors[0]);

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::vec3), &meshes[i]->faces[0], GL_STATIC_DRAW);

            meshes[i]->FreeMemory();
            //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        }

    This initializes the rendering, loads and creates the shader, and calls the mesh and point cloud initializers:

        void initRender()
        {
            simple = new GLSLShader("shaders/simple.vert", "shaders/simple.frag");

            // Point Cloud
            // Sets up VAO
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);
            initGLPointCloud();

            // floorData
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);

            // Meshes
            for (int i = 0; i < NUM_MESHES; i++)
            {
                if (i == 0) // Set up the new vertex array state for indexed drawing
                {
                    glGenVertexArrays(1, &vid);
                    glBindVertexArray(vid);
                    glGenBuffers(NUM_MESHES, mVertex);
                    glGenBuffers(NUM_MESHES, mColor);
                    glGenBuffers(NUM_MESHES, mFace);
                }
                initGLMeshes(i);
            }
            glEnable(GL_DEPTH_TEST);
        }

    Any help would be much appreciated; I have been breaking my head on this problem for 3 days and it is still unsolved.
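
    A hedged pointer at the most likely culprit: the type argument of glDrawElements describes the index data, and only GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT and GL_UNSIGNED_INT are legal there, so passing GL_FLOAT makes the call fail with GL_INVALID_ENUM and nothing is drawn. That also means the face list should be uploaded as unsigned integers rather than as a float glm::vec3 per face. A sketch of the adjusted calls, with indices standing in for the converted index array:

        // upload indices as unsigned ints (e.g. a glm::uvec3 per triangle, or a flat unsigned int array)
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * 3 * sizeof(unsigned int), indices, GL_STATIC_DRAW);

        // draw with an index type that matches the data in the element buffer
        glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_UNSIGNED_INT, (void *)0);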

    Read the article
