Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Best way to mask 2D sprites in XNA?

    - by electroflame
    I am currently trying to mask some sprites. Rather than explaining it in words, I've made up some example pictures: the area to mask (in white); the red sprite that needs to be cropped; and the final result. Now, I'm aware that in XNA you can do two things to accomplish this: use the stencil buffer, or use a pixel shader. I have tried a pixel shader, which essentially did this:

      float4 main(float2 texCoord : TEXCOORD0) : COLOR0
      {
          float4 tex = tex2D(BaseTexture, texCoord);
          float4 bitMask = tex2D(MaskTexture, texCoord);
          if (bitMask.a > 0)
          {
              return float4(tex.r, tex.g, tex.b, tex.a);
          }
          else
          {
              return float4(0, 0, 0, 0);
          }
      }

    This seems to crop the images (albeit not correctly once the image starts to move), but my problem is that the images are constantly moving (they aren't static), so this cropping needs to be dynamic. Is there a way I could alter the shader code to take its position into account? Alternatively, I've read about using the stencil buffer, but most of the samples seem to hinge on using a render target, which I really don't want to do (I'm already using 3 or 4 for the rest of the game, and adding another one on top of that seems overkill). The only tutorial I've found that doesn't use render targets is one from Shawn Hargreaves' blog over here. The issue with that one, though, is that it's for XNA 3.1 and doesn't seem to translate well to XNA 4.0. It seems to me that the pixel shader is the way to go, but I'm unsure of how to get the positioning correct. I believe I would have to change my onscreen coordinates (something like 500, 500) to be between 0 and 1 for the shader coordinates. My only problem is trying to work out how to correctly use the transformed coordinates. Thanks in advance for any help!
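    A minimal sketch (my own illustration, not from the post) of the coordinate mapping being asked about: turning a screen-space pixel into a [0,1] UV coordinate of the mask, given where the mask sits on screen. The names screenToMaskUV, maskPosition and maskSize are hypothetical; the same arithmetic would be done per pixel inside the shader, with the mask position and size supplied as shader parameters from the game code.

      // Map a screen pixel into the mask texture's [0,1] UV space.
      // maskPosition is the mask's top-left corner on screen, maskSize its size in pixels.
      struct Vec2 { float x, y; };

      Vec2 screenToMaskUV(Vec2 screenPixel, Vec2 maskPosition, Vec2 maskSize)
      {
          Vec2 uv;
          uv.x = (screenPixel.x - maskPosition.x) / maskSize.x;   // 0 at the mask's left edge, 1 at its right
          uv.y = (screenPixel.y - maskPosition.y) / maskSize.y;   // 0 at the top, 1 at the bottom
          return uv;                                              // values outside [0,1] mean the pixel misses the mask
      }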

    Read the article

  • What are the common character animation techniques used in tile based hack&slash games?

    - by Gorky
    I wonder what kind of animation techniques are used for creature and character animation in modern hack&slash-type, tile-based games? Keyframing different actions may be one option; skeletal animation may be another. But what about the physics? Or do they use a fully hybrid system of inverse kinematics supported by a skeleton and physics, mixed with interpolated keyframing for more realistic animations? If so, how, and for what reasons? I can think of many different solutions for the issues below, but I wonder what is actually used and best suited for issues like:
      - Walking or moving on uneven terrain
      - Combat interaction, combat physics and collisions
      - Attaching rigid items to a character and their interactions in the physics world
      - Soft-body dynamics like hair, vegetation, clothes and fabric, in line with animations and interactions

    Read the article

  • Can these games be fully coded in HTML5/JavaScript?

    - by RufioLJ
    I mean the mechanics of the game; would it be possible?
      - Pokemon GBA series: rendering the world would be easy, but what about the battle mechanics?
      - MapleStory: after seeing dragonbound.net, which is an identical copy of Gunbound, I would think it's rather possible, but I'm still not sure whether JavaScript can handle all the mechanics of the world. It would be heavy on resources, I guess?
    I'm asking this because I'm really interested in HTML5 game development (I really think that in the future it will displace Flash for game development). I want to get an idea of how far games developed with HTML5/JavaScript can go. I was especially inspired by DragonBound; I really think it pushes HTML5/JavaScript to its limits for game development.

    Read the article

  • How do I use D3DXVec3Unproject with D3D11?

    - by Miguel P
    I'm having a small issue with D3DXVec3Unproject. I'm currently using Direct3D 11, not 10, and the signature for this function is:

      D3DXVECTOR3 *pOut,
      CONST D3DXVECTOR3 *pV,
      CONST D3D10_VIEWPORT *pViewport,
      CONST D3DXMATRIX *pProjection,
      CONST D3DXMATRIX *pView,
      CONST D3DXMATRIX *pWorld

    As you may have noticed, it requires a D3D10_VIEWPORT, and I'm using a Direct3D 11 viewport, D3D11_VIEWPORT. Do you have any ideas on how I can use D3DXVec3Unproject with Direct3D 11?
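    A minimal sketch (my own suggestion, not from the post) of one way to bridge the two viewport types: the D3D10 and D3D11 viewport structs carry the same six fields and differ only in the integer vs. float types of the first four, so a field-by-field copy lets the values from a D3D11 viewport be handed to the D3DX10 helper.

      #include <d3d10.h>
      #include <d3d11.h>

      // Copy a D3D11 viewport into the D3D10 struct that D3DXVec3Unproject expects.
      D3D10_VIEWPORT ToD3D10Viewport(const D3D11_VIEWPORT& vp11)
      {
          D3D10_VIEWPORT vp10;
          vp10.TopLeftX = static_cast<INT>(vp11.TopLeftX);   // float -> INT
          vp10.TopLeftY = static_cast<INT>(vp11.TopLeftY);
          vp10.Width    = static_cast<UINT>(vp11.Width);     // float -> UINT
          vp10.Height   = static_cast<UINT>(vp11.Height);
          vp10.MinDepth = vp11.MinDepth;
          vp10.MaxDepth = vp11.MaxDepth;
          return vp10;
      }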

    Read the article

  • Can't use SFML sprite drawing and OpenGL rendering at the same time

    - by Ken
    I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; the sprite or text drawing causes the OpenGL output to disappear. The following code shows what I'm trying to do:

      sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

      while (window.pollEvent(Event))
      {
          // event handling...

          // begin drawing
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
          glBegin(GL_TRIANGLES);
          glColor3f(col.x, col.y, col.z);
          for (int i = 0; i < 3; i++)
              glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
          glEnd();

          // adding this line causes all the previous OpenGL triangles not to appear
          window.draw("Sometext");

          window.display();
      }
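    One commonly used pattern (a sketch assuming SFML 2.x; the window, font and text setup below are my own, not taken from the question) is to wrap the SFML 2D drawing in pushGLStates()/popGLStates(), so SFML's internal render-state changes don't clobber the raw OpenGL state set up for the triangles:

      #include <SFML/Graphics.hpp>
      #include <SFML/OpenGL.hpp>

      int main()
      {
          sf::RenderWindow window(sf::VideoMode(800, 600), "Mixed GL + SFML");
          sf::Font font;
          font.loadFromFile("arial.ttf");          // hypothetical font file
          sf::Text label("Some text", font, 20);

          while (window.isOpen())
          {
              sf::Event event;
              while (window.pollEvent(event))
                  if (event.type == sf::Event::Closed) window.close();

              glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
              // ... raw OpenGL drawing (glBegin/glEnd etc.) goes here ...

              window.pushGLStates();               // preserve the GL state set above
              window.draw(label);                  // SFML 2D overlay drawing
              window.popGLStates();                // hand the saved state back to the GL code

              window.display();
          }
          return 0;
      }

    The alternative is to re-apply the needed GL state (matrices, enabled arrays) by hand every frame after the SFML calls; push/popGLStates is simply the convenience SFML provides for that.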

    Read the article

  • How can I design good continuous (seamless) tiles?

    - by Mikalichov
    I have trouble designing tiles so that, when assembled, they don't look like tiles but like one homogeneous thing. For example, see the image below: even though the main part of the grass is only one tile, you don't "see" the grid; you know where it is if you look a bit carefully, but it is not obvious. Whereas when I design tiles, you can only see "oh, jeez, 64 times the same tile", like in this image. (I took this from another GDSE question, sorry; I'm not being critical of the game, but it proves my point. It actually has better tile design than what I manage, anyway.) I think the main problem is that I design them so they are independent: there is no junction between two tiles when they are placed next to each other. I think having the tiles more "continuous" would give a smoother effect, but I can't manage to do it; it seems overly complex to me. It is probably simpler than I think once you know how to do it, but I couldn't find a tutorial on that specific point. Is there a known method for designing continuous / homogeneous tiles? (My terminology might be totally wrong; don't hesitate to correct me.)

    Read the article

  • XNA 4.0 Point Vertex Rendering

    - by luis
    I have a buffer of about 134 million particles and a very powerful computer to render them smoothly, but I am getting an error when trying to render them as line primitives: it says I cannot render more than around 1 million. I wonder how I can do this, and also whether there is a better way to render them than with lines; I'm comfortable with 1-pixel points or anything else, as long as the vertices are shown all the time. I'm basically just plotting the points. Thanks.
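    A minimal sketch (my own illustration, not XNA API) of the usual workaround when a renderer caps the number of primitives per draw call: split the data into batches that stay under the cap and issue one draw per batch. The cap value and the drawPoints call are placeholders.

      #include <algorithm>
      #include <cstdio>

      int main()
      {
          const long long totalVertices = 134000000LL;  // size of the point buffer
          const long long maxPerCall    = 1000000LL;    // assumed per-call primitive cap

          for (long long start = 0; start < totalVertices; start += maxPerCall)
          {
              long long count = std::min(maxPerCall, totalVertices - start);
              // drawPoints(start, count);              // hypothetical renderer call
              std::printf("batch at offset %lld, %lld vertices\n", start, count);
          }
          return 0;
      }

    In XNA terms the same split would mean several DrawPrimitives calls per frame, each starting at a different offset into the vertex buffer; whether 134 million points can actually be pushed at interactive rates is a separate bandwidth question.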

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x, y, z coordinates of the hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the x, y, z values only fluctuate between 0 and 1 and never go beyond that, so I assume I will need to multiply them by the screen width and height. Even then, it still doesn't seem to animate according to the original x, y, z points. Are there any transformations I might be missing?

      while ((line = r.ReadLine()) != null)
      {
          string[] temp = line.Split(',');
          int x = (int)(float.Parse(temp[0]) * maxWidth);
          int y = (int)(float.Parse(temp[1]) * maxHeight);
      }

    Read the article

  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from NeHe, where an object can be rotated in a more realistic way while floating in the air (in my game the object is attached to the player at a distance, like the Physics Gun, for example), but I'm having trouble getting this to work with UDK. I have created an LGArcBall which follows the NeHe example, and I've compared its output with the example code. I think my problem lies in what I do with the quaternion returned from the LGArcBall. Currently I take the returned quaternion and convert it to a rotation matrix, take the product of that and the last rotation (set when the object is first clicked), convert the result into a Rotator, and set that as the object's rotation. If you could point me in the right direction that would be great; my code can be found below.

      class LGArcBall extends Object;

      var Quat StartRotation;
      var Vector StartVector;
      var float AdjustWidth, AdjustHeight, Epsilon;

      function SetBounds(float NewWidth, float NewHeight)
      {
          AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f);
          AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f);
      }

      function StartDrag(Vector2D startPoint, Quat rotation)
      {
          StartVector = MapToSphere(startPoint);
      }

      function Quat Update(Vector2D currentPoint)
      {
          local Vector currentVector, perp;
          local Quat newRot;

          //Map the new point to the sphere
          currentVector = MapToSphere(currentPoint);

          //Compute the vector perpendicular to the start and current
          perp = startVector cross currentVector;

          //Make sure our length is larger than Epsilon
          if (VSize(perp) > Epsilon)
          {
              //Return the perpendicular vector as the transform
              newRot.X = perp.X;
              newRot.Y = perp.Y;
              newRot.Z = perp.Z;
              //In the quaternion values, w is cosine (theta / 2), where
              //theta is the rotation angle
              newRot.W = startVector dot currentVector;
          }
          else
          {
              //The two vectors coincide, so return an identity transform
              newRot.X = 0.0f;
              newRot.Y = 0.0f;
              newRot.Z = 0.0f;
              newRot.W = 0.0f;
          }
          return newRot;
      }

      function Vector MapToSphere(Vector2D point)
      {
          local float x, y, length, norm;
          local Vector result;

          //Transform the mouse coords to [-1..1]
          //and inverse the Y coord
          x = (point.X * AdjustWidth) - 1.0f;
          y = 1.0f - (point.Y * AdjustHeight);

          length = (x * x) + (y * y);

          //If the point is mapped outside of the sphere
          //( length > radius squared)
          if (length > 1.0f)
          {
              norm = 1.0f / Sqrt(length);

              //Return the "normalized" vector, a point on the sphere
              result.X = x * norm;
              result.Y = y * norm;
              result.Z = 0.0f;
          }
          else //It's inside of the sphere
          {
              //Return a vector to the point mapped inside the sphere
              //sqrt(radius squared - length)
              result.X = x;
              result.Y = y;
              result.Z = Sqrt(1.0f - length);
          }
          return result;
      }

      DefaultProperties
      {
          Epsilon = 0.000001f
      }

    I'm then attempting to rotate the object when the mouse is dragged, with the following update code in my PlayerController:

      //Get Mouse Position
      MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X;
      MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y;

      newQuat = ArcBall.Update(MousePosition);
      rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat));
      rotMatrix = rotMatrix * LastRot;

      LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating);
      LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));

    Read the article

  • How do I plot individual pixels using the XNA APIs?

    - by izb
    If I wanted to fill my game screen with individually coloured pixels, how would I do this? For example, if I wanted to write a 'game of life'-type game where each pixel was a cell, how would I achieve this using XNA? I've tried just calling SetData() on a Texture2D object using a screen-sized array of Color values, but it complains with:

      You may not call SetData on a resource while it is actively set on the GraphicsDevice. Unset it from the device before calling SetData.

    How do I do as it asks? Or better still... is there an alternative, better, efficient way to fill a screen with arbitrary pixels?

    Read the article

  • Initial direction of interception between two vehicles?

    - by Larolaro
    I'm working on a bit of projectile prediction for my AI and I'm looking for some ideas, any input? If a blue vehicle is moving in a direction at a constant speed of X m/s, and a stationary orange vehicle has a rocket that travels at Y m/s, in which initial direction would the orange vehicle have to fire the rocket for it to hit the blue vehicle at the earliest time in the future? Thanks for reading!
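    A sketch of the standard closed-form approach (my own illustration, not from the post): work in the shooter's frame, where the target starts at relative position P and moves with velocity V, and the rocket travels at speed s. Requiring |P + V t| = s t gives the quadratic (V·V - s^2) t^2 + 2 (P·V) t + P·P = 0; the earliest positive root t yields the aim direction (P + V t), normalised. The names and example values below are mine.

      #include <cmath>
      #include <cstdio>

      struct Vec2 { double x, y; };

      // Returns true and writes the unit firing direction if an intercept exists.
      bool InterceptDirection(Vec2 targetPos, Vec2 targetVel, Vec2 shooterPos,
                              double projectileSpeed, Vec2& outDir)
      {
          Vec2 P = { targetPos.x - shooterPos.x, targetPos.y - shooterPos.y }; // relative position
          double a = targetVel.x * targetVel.x + targetVel.y * targetVel.y
                   - projectileSpeed * projectileSpeed;
          double b = 2.0 * (P.x * targetVel.x + P.y * targetVel.y);
          double c = P.x * P.x + P.y * P.y;

          double t;
          if (std::fabs(a) < 1e-9) {                     // speeds match: equation becomes linear
              if (std::fabs(b) < 1e-9) return false;
              t = -c / b;
          } else {
              double disc = b * b - 4.0 * a * c;
              if (disc < 0.0) return false;              // projectile too slow to ever catch up
              double t1 = (-b - std::sqrt(disc)) / (2.0 * a);
              double t2 = (-b + std::sqrt(disc)) / (2.0 * a);
              t = (t1 > 0.0 && (t1 < t2 || t2 <= 0.0)) ? t1 : t2;  // earliest positive root
          }
          if (t <= 0.0) return false;

          Vec2 aim = { P.x + targetVel.x * t, P.y + targetVel.y * t }; // intercept point, shooter frame
          double len = std::sqrt(aim.x * aim.x + aim.y * aim.y);
          outDir.x = aim.x / len;
          outDir.y = aim.y / len;
          return true;
      }

      int main()
      {
          Vec2 dir;
          // Example: target 100 m east moving north at 10 m/s, rocket speed 30 m/s.
          if (InterceptDirection({100, 0}, {0, 10}, {0, 0}, 30.0, dir))
              std::printf("fire along (%.3f, %.3f)\n", dir.x, dir.y);
          return 0;
      }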

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define the path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get by this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

      public class SplineDetails
      {
          public SplineDetails()
          {
              Forward = Vector3.forward;
              Position = Vector3.one * float.MaxValue;
              Alpha = -1;
          }

          public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point of the spline
          public Vector3 Position; // the point of the spline at this alpha
          public Vector3 Forward;  // the forward tangent of the spline at this alpha
      }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a coordinate and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, but I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that each point can be placed at different intervals along the spline, two points in front of or behind the object can be closer to it. The other problem is figuring out the proportion between the two points it sits between, i.e. if point a is at coordinate (0,0,0) and point b is at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be 0.5 (as it is equidistant from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
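    A sketch (my own, independent of the class above) of the usual answer to both questions: treat consecutive cached points as line segments, project the query position onto each segment, keep the segment with the smallest distance, and the clamped projection parameter t is exactly the blend factor for lerping position, forward and alpha between the two cached entries. This also covers the (0.5, 3, 0) case: the off-axis component only affects the distance, not the projected t.

      #include <cmath>
      #include <cstddef>
      #include <vector>

      struct Vec3 { float x, y, z; };

      static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
      static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

      // Find the cached segment closest to 'pos' and the blend factor t in [0,1] along it.
      void ClosestSegment(const std::vector<Vec3>& cached, const Vec3& pos,
                          std::size_t& outSegment, float& outT)
      {
          float best = 1e30f;
          for (std::size_t i = 0; i + 1 < cached.size(); ++i)
          {
              Vec3 ab = Sub(cached[i + 1], cached[i]);
              Vec3 ap = Sub(pos, cached[i]);
              float t = Dot(ap, ab) / Dot(ab, ab);        // projection parameter along the segment
              t = t < 0.f ? 0.f : (t > 1.f ? 1.f : t);    // clamp to the segment ends
              Vec3 closest = { cached[i].x + ab.x * t, cached[i].y + ab.y * t, cached[i].z + ab.z * t };
              Vec3 d = Sub(pos, closest);
              float distSq = Dot(d, d);
              if (distSq < best) { best = distSq; outSegment = i; outT = t; }
          }
      }

    Position and forward for the character would then be lerped between cached[outSegment] and cached[outSegment + 1] with outT, re-normalising the lerped forward vector.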

    Read the article

  • How to procedurally grow (create) an artistic 2D tree in real time (L-system?)

    - by lalan
    Recently I programmed an L-system module, and it got me interested further. I am a Plants vs. Zombies junkie as well, and really liked the concept of the Tree of Wisdom. I would love to create similar procedural art just for fun and to learn more. Question: how should I approach the process of creating an artistic tree (2D, perhaps with a fixed camera/perspective) dynamically? Ideally I would like to start with a plant (only a stem with a leaf) and grow it dynamically, using some influence (input/user action) over its structure. These influences may result in different types of branching, curves in branches, its spread, the location of fruits, the color of flowers, etc. I want it to be really full of life/spirit. :) Plants vs. Zombies: Tree of Wisdom; it would be great to dynamically grow a similar tree, but with a lot more variation and animation happening. My background: student / programmer; I have used a few game engines (Ogre3D, cocos2d, Unity) but haven't really programmed directly with OpenGL, and I'm trying to fix that :). I am ready to spend considerable time on this. Please let me know about the APIs, and how would an expert like you take on this problem? Why 2D? I think it's easier to solve the problem considering only two dimensions. Artistic inspirations:
      - Only the tree, with fruits and leaves, without the shrubs at the bottom
      - The large tree (visible branches, green leaves, flowers, fruits, etc.) on the left, behind the monkey
      - PixelJunk's Eden (art style inspiration)
      - A procedurally generated apple tree using fractals
    Please let me know if it was easy for you to understand the question; I may elaborate further. I hope a discussion of various approaches will be helpful for everyone. You guys are awesome.
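    Since the post mentions an L-system module, here is a minimal sketch (my own, with an arbitrary example rule set) of the core of such a system: a string-rewriting step applied repeatedly, where each generation can then be interpreted as turtle-graphics drawing commands (F = draw forward, + and - = turn, [ and ] = push/pop branch state). The growth influences the post describes could be modelled by choosing rules or angles per generation.

      #include <iostream>
      #include <map>
      #include <string>

      // Apply one rewriting generation of an L-system to 'axiom'.
      std::string Rewrite(const std::string& axiom, const std::map<char, std::string>& rules)
      {
          std::string next;
          for (char c : axiom)
          {
              auto it = rules.find(c);
              next += (it != rules.end()) ? it->second : std::string(1, c); // constants pass through unchanged
          }
          return next;
      }

      int main()
      {
          // A classic branching rule (illustrative only): F -> F[+F]F[-F]F
          std::map<char, std::string> rules = { { 'F', "F[+F]F[-F]F" } };
          std::string state = "F";                      // the initial stem
          for (int generation = 0; generation < 3; ++generation)
              state = Rewrite(state, rules);
          std::cout << state << "\n";                   // feed this string to a turtle renderer
          return 0;
      }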

    Read the article

  • Tips for building an AI for a 2D racing game

    - by declique
    I have a school project to build an AI for a 2D racing game, in which it will compete with several other AIs (no collision). We are given a black-and-white bitmap image of the racing track, and we are allowed to choose basic stats for our car (handling, acceleration, max speed and brakes) after we receive the map. The AI connects to the game's server and sends it numbers for the current acceleration and steering several times a second. The language I chose is C++, by the way. The questions are: What is the best strategy or algorithm (since I want to try to win)? I currently have in mind some ideas found on the net and one or two of my own, but before I start to code I would like to know that my approach is one of the best. What good books are there on the matter? What sites should I refer to?

    Read the article

  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so this means they are not just a billboard. Also, they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?
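    One common way to get smooth, curving tails (a guess at the technique, not confirmed by the game's developer): keep a short history of each bullet's recent positions in a ring buffer and render a ribbon or line strip through those points each frame, fading and narrowing toward the oldest sample. A minimal sketch of the bookkeeping:

      #include <array>
      #include <cstddef>

      struct Vec2 { float x, y; };

      // Fixed-size history of a bullet's recent positions, newest overwriting oldest.
      class TrailHistory
      {
      public:
          void Push(Vec2 p)
          {
              points[head] = p;
              head = (head + 1) % points.size();
              if (count < points.size()) ++count;
          }

          // i = 0 is the newest sample, i = Count()-1 the oldest; render a strip through these.
          Vec2 Sample(std::size_t i) const
          {
              return points[(head + points.size() - 1 - i) % points.size()];
          }

          std::size_t Count() const { return count; }

      private:
          std::array<Vec2, 16> points{};
          std::size_t head = 0, count = 0;
      };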

    Read the article

  • Are there any open source projects for car engine sound simulation?

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine, then all kinds of wind, road and suspension sounds. Are there any open source projects for engine sound simulation? Simply pitching up a sample does not sound too great. The ideal would be something that allows me to pick the type of engine (i.e. inline-4 vs V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: Something like http://www.sonory.org/examples.html

    Read the article

  • Mobile Multiplayer games and coping with high latency

    - by spaceOwl
    I'm currently researching a design for an online (realtime) mobile multiplayer game. As such, I'm taking into consideration that latency (lag) is going to be high, perhaps higher than on PC/consoles. I'd like to know if there are ways to overcome this or minimize the issues of high latency. The model I'll be using is peer-to-peer (using Photon Cloud to broadcast messages to all other players). How do I deal with a scenario where a message about a local object's state at time t will only get to other players at t + HUGE_LAG?
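    A sketch of one standard way to cope (dead reckoning, shown here as my own assumption about what could be layered on top of the broadcast messages, not as anything Photon itself provides): each state message carries the sender's position, velocity and timestamp, and the receiver extrapolates the object forward by the measured delay, then blends toward corrections instead of snapping.

      struct Vec2 { float x, y; };

      struct RemoteState
      {
          Vec2  position;   // position reported by the remote peer
          Vec2  velocity;   // velocity reported alongside it
          float sentTime;   // peer's send timestamp (assumes reasonably synced clocks)
      };

      // Estimate where the remote object should be drawn 'now', hiding the transit lag.
      Vec2 Extrapolate(const RemoteState& s, float now)
      {
          float lag = now - s.sentTime;            // how stale the message is
          Vec2 predicted;
          predicted.x = s.position.x + s.velocity.x * lag;
          predicted.y = s.position.y + s.velocity.y * lag;
          return predicted;
      }

      // Blend the currently displayed position toward the prediction to avoid visible snapping.
      Vec2 Smooth(const Vec2& shown, const Vec2& predicted, float blend /* 0..1 per frame */)
      {
          Vec2 out;
          out.x = shown.x + (predicted.x - shown.x) * blend;
          out.y = shown.y + (predicted.y - shown.y) * blend;
          return out;
      }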

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in said world. To increase performance I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine. However, if I throw in another draw call to DrawUserIndexedPrimitives (to draw my one single moving object) after the call to DrawIndexedPrimitives, it errors out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. In order to get around this I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as that kind of defeats the point of a buffer? To shed some light: the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I manually create to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid and the connecting faces are cut out), and also the number of matrix translations (as it would suck to do one per cube). The moving objects would be other items in the world which are dynamic and not fixed to the block grid, such as the NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?

    Read the article

  • Tips on how to notify a user of new features in your game

    - by brent777
    I have noticed a problem when releasing new features for a game that I wrote for Android and published on the Google Play Store. Because my game is "stage-based" - and not a game like Hay Day, for example, where users will just go into the game every day since it can't really be finished - my users are not aware of new features that I release for the game. For example, if I publish a new version of my game and it contains a couple of new stages, most of their devices will just auto-update the game, and they don't even notice this or think to check out what's new. This is why an approach like popping open a dialog that showcases the new feature(s) when they open the game for the first time after the update is not really sufficient. I am looking for some tips on an approach that will draw my users back into the game, where they could then read more detail about the new features in such a dialog. I was thinking of something like a notification that tells them to check out the new features after an update is done, but I am not sure if this is a good idea. Any suggestions to help me solve this problem would be awesome.

    Read the article

  • Dynamic navigation mesh changes

    - by Nairou
    I'm currently trying to convert from grids to navigation meshes for pathfinding, since grids are either too coarse for accurate navigation, or too fine to be useful for object tracking. While my map is fairly static, and the navigation mesh could be created in advance, this is somewhat of a tower defense game, where objects can be placed to block paths, so I need a way to recalculate portions of the navigation mesh to allow pathing around them. Is there any existing documentation on good ways to do this? I'm still very new to navigation meshes, so the prospect of modifying them to cut or fill holes sounds daunting.

    Read the article

  • Cocos2d update leaking memory

    - by Andrey Chernukha
    I have a weird issue: my app is leaking memory on the device only, not in the simulator. It leaks if I schedule the update method anywhere, on any scene. It leaks even though the update method is empty; there's nothing inside it except an NSLog. How can that be? I have even scheduled update on the very first scene, where it seems there's nothing to leak, and scheduled another empty one, and it's still leaking, or at least allocating something; the result is the same, the amount of memory consumed keeps increasing and my app soon crashes. I can detect the leak by using Instruments (Memory / Activity Monitor) or with the help of the following function:

      void report_memory(void)
      {
          struct task_basic_info info;
          mach_msg_type_number_t size = sizeof(info);
          kern_return_t kerr = task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t)&info, &size);
          if (kerr == KERN_SUCCESS)
          {
              NSLog(@"Memory in use (in bytes): %u", info.resident_size);
          }
          else
          {
              NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
          }
      }

    Can anyone explain to me what's going on?

    Read the article

  • Bridge made out of blocks at an angle

    - by Pozzuh
    I'm having a bit of trouble with the math behind my project. I want the player to be able to select two points (vectors), and with these two points a floor should be created. When the points are parallel to the x-axis it's easy: just calculate the number of blocks needed with a simple division, loop through that amount (in x and y), and keep increasing the coordinate by the size of a block. The trouble starts when the two vectors aren't parallel to an axis, for example at an angle of 45 degrees. How do I handle the math behind this? If I wasn't completely clear, I made this awesome drawing in Paint to demonstrate what I want to achieve. The two red dots are the player-selected locations. (The blocks indeed aren't square.) http://i.imgur.com/pzhFMEs.png
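    A sketch (my own, not from the post) of the usual generalisation: take the vector between the two selected points, divide its length by the block length to get the block count, and step along the normalised direction, placing each block at the interpolated position with its rotation taken from atan2 of the direction. The example values below are arbitrary.

      #include <cmath>
      #include <cstdio>

      struct Vec2 { float x, y; };

      int main()
      {
          Vec2 a = { 2.0f, 1.0f };              // first selected point (example values)
          Vec2 b = { 10.0f, 7.0f };             // second selected point
          const float blockLength = 1.5f;       // size of one block along the bridge

          Vec2 d = { b.x - a.x, b.y - a.y };
          float length = std::sqrt(d.x * d.x + d.y * d.y);
          int blocks = (int)std::ceil(length / blockLength);   // how many blocks span the gap
          float angle = std::atan2(d.y, d.x);                  // rotation applied to every block
          Vec2 dir = { d.x / length, d.y / length };

          for (int i = 0; i < blocks; ++i)
          {
              float s = (i + 0.5f) * blockLength;              // centre of block i along the line
              Vec2 pos = { a.x + dir.x * s, a.y + dir.y * s };
              std::printf("block %d at (%.2f, %.2f), angle %.2f rad\n", i, pos.x, pos.y, angle);
          }
          return 0;
      }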

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render two types of primitives: points (a point cloud) and triangles (a mesh). I am rendering the points without any index arrays and they render fine. To render the meshes I am using indexed drawing, with the face list array holding the indices of the vertices to be rendered as triangles. Vertices and their corresponding vertex colors are stored in their own buffers. But the indexed drawing command does not draw anything. The code is as follows.

    Main display function:

      void display()
      {
          simple->enable();
          simple->bindUniform("MV", modelview);
          simple->bindUniform("P", projection);

          // rendering Point Cloud
          glBindVertexArray(vao);

          // Vertex buffer Point Cloud
          glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
          glEnableVertexAttribArray(0);
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

          // Color Buffer point Cloud
          glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
          glEnableVertexAttribArray(1);
          glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);

          // Render Colored Point Cloud
          //glDrawArrays(GL_POINTS, 0, model->vertexCount);
          glDisableVertexAttribArray(0);
          glDisableVertexAttribArray(1);
          // ---------------- END ----------------//

          //// Floor Rendering
          glBindBuffer(GL_ARRAY_BUFFER, fl);
          glEnableVertexAttribArray(0);
          glEnableVertexAttribArray(1);
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
          glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *)48);
          glDrawArrays(GL_QUADS, 0, 4);
          glDisableVertexAttribArray(0);
          glDisableVertexAttribArray(1);
          // ---------------- END ----------------//

          // Rendering the Meshes
          //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING ////////////////////
          glBindVertexArray(vid);
          for (int i = 0; i < NUM_MESHES; i++)
          {
              glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
              glEnableVertexAttribArray(0);
              glEnableVertexAttribArray(1);
              glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
              glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void *)(meshes[i]->vertexCount * sizeof(glm::vec3)));
              //glDrawArrays(GL_TRIANGLES, 0, meshes[i]->vertexCount);
              glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
              //cout << gluErrorString(glGetError());
              glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_FLOAT, (void *)0);
              glDisableVertexAttribArray(0);
              glDisableVertexAttribArray(1);
          }

          glUseProgram(0);
          glutSwapBuffers();
          glutPostRedisplay();
      }

    Point cloud buffer allocation and initialization:

      void initGLPointCloud()
      {
          glGenBuffers(1, &vertexbuffer);
          glGenBuffers(1, &colorbuffer);
          glGenBuffers(1, &fl);

          //Populates the position buffer
          glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
          glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->positions[0], GL_STATIC_DRAW);

          //Populates the color buffer
          glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
          glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->colors[0], GL_STATIC_DRAW);

          model->FreeMemory(); // To free the memory that is no longer needed, as the data has already been
                               // copied to the graphics card and won't be used again.
          glBindBuffer(GL_ARRAY_BUFFER, 0);
      }

    Meshes buffer initialization:

      void initGLMeshes(int i)
      {
          glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
          glBufferData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3) * 2, NULL, GL_STATIC_DRAW);
          glBufferSubData(GL_ARRAY_BUFFER, 0, meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->positions[0]);
          glBufferSubData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3), meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->colors[0]);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::vec3), &meshes[i]->faces[0], GL_STATIC_DRAW);
          meshes[i]->FreeMemory();
          //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
      }

    Initializes the rendering, loads and creates the shader, and calls the mesh and point cloud initializers:

      void initRender()
      {
          simple = new GLSLShader("shaders/simple.vert", "shaders/simple.frag");

          //Point Cloud
          //Sets up VAO
          glGenVertexArrays(1, &vao);
          glBindVertexArray(vao);
          initGLPointCloud();

          //floorData
          glBindBuffer(GL_ARRAY_BUFFER, fl);
          glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW);
          glBindBuffer(GL_ARRAY_BUFFER, 0);
          glBindVertexArray(0);

          //Meshes
          for (int i = 0; i < NUM_MESHES; i++)
          {
              if (i == 0) // Set up the new vertex array state for indexed drawing
              {
                  glGenVertexArrays(1, &vid);
                  glBindVertexArray(vid);
                  glGenBuffers(NUM_MESHES, mVertex);
                  glGenBuffers(NUM_MESHES, mColor);
                  glGenBuffers(NUM_MESHES, mFace);
              }
              initGLMeshes(i);
          }
          glEnable(GL_DEPTH_TEST);
      }

    Any help would be much appreciated; I have been breaking my head on this problem for 3 days and it is still unsolved.
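    For reference, a minimal sketch (my own, not a confirmed fix for the code above) of how an element array buffer is usually filled and drawn: the indices are stored as unsigned integers, and the type argument of glDrawElements must name an unsigned integer index type (GL_UNSIGNED_INT, GL_UNSIGNED_SHORT or GL_UNSIGNED_BYTE), never a float type.

      // Assumes an OpenGL 3.x context and an extension loader (e.g. GLEW) initialised elsewhere.
      #include <GL/glew.h>
      #include <vector>

      GLuint CreateIndexBuffer(const std::vector<unsigned int>& indices)
      {
          GLuint ebo = 0;
          glGenBuffers(1, &ebo);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                       indices.size() * sizeof(unsigned int),   // byte size of the integer indices
                       indices.data(), GL_STATIC_DRAW);
          return ebo;
      }

      void DrawIndexed(GLuint ebo, GLsizei indexCount)
      {
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
          // The third argument is the index *type*; it must be an unsigned integer type.
          glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void *)0);
      }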

    Read the article

  • Limiting the speed of the mouse cursor

    - by idlewire
    I am working on a simple game where you can drag objects around with the mouse cursor. As I drag an object around quickly, I notice there is some juddering, which seems to be due to the fact that I can move the mouse cursor faster than the game's update/draw. So, although I maintain the offset from where the player initially clicked on the object, the mouse's relative position to the object shifts around slightly before settling when I move the object very quickly. The only way I have found to get smooth, exact 1:1 movement is to set both IsFixedTimeStep and SynchronizeWithVerticalRetrace to false. However, I'd rather not have to do that. I have also tried making a custom mouse cursor, hiding the real mouse, taking the real mouse delta and clamping it to a maximum speed. Here is the problem: in windowed mode, the "real" mouse cursor moves off the window while the custom mouse cursor (since its movement is being scaled) is still somewhere inside the game window. This becomes bizarre and is obviously not desired, as clicking at this point means clicking on things outside the game window. Is there any way to accomplish this in windowed mode? In fullscreen mode, the "real" mouse cursor is bounded by the edges of the screen, so I get to a point where there is no more mouse delta, yet my custom cursor is still somewhere in the middle of the screen and hence can't move further in that direction. If I wanted to clamp it to the edge of the screen when the real cursor is at the edge, I would get an abrupt jump to the edge of the screen, which isn't desired either. Any help would be appreciated. I'd like to be able to limit the speed of the mouse, but I would also appreciate help with the first issue (the non-smooth relative offset between mouse cursor movement and object movement).
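    A sketch of one common workaround (my own illustration; the cursor functions are hypothetical placeholders for whatever the platform or framework provides): instead of letting the OS cursor roam, warp it back to the window centre every frame, treat the distance it moved from the centre as the raw delta, clamp that delta's length to the allowed speed, and accumulate it into a purely virtual cursor that is clamped to the window. This sidesteps both the windowed-mode escape and the fullscreen edge saturation, at the cost of hiding the real cursor.

      #include <cmath>

      struct Vec2 { float x, y; };

      // Clamp a per-frame mouse delta to a maximum length (pixels per frame).
      Vec2 ClampDelta(Vec2 delta, float maxLen)
      {
          float len = std::sqrt(delta.x * delta.x + delta.y * delta.y);
          if (len <= maxLen || len == 0.0f) return delta;
          float scale = maxLen / len;
          return { delta.x * scale, delta.y * scale };
      }

      // Per-frame update of a virtual cursor. getCursorPos/setCursorPos are hypothetical
      // wrappers around the platform's cursor query/warp calls.
      void UpdateVirtualCursor(Vec2& virtualCursor, Vec2 windowCentre, Vec2 windowSize,
                               float maxSpeed, Vec2 (*getCursorPos)(), void (*setCursorPos)(Vec2))
      {
          Vec2 real = getCursorPos();
          Vec2 delta = { real.x - windowCentre.x, real.y - windowCentre.y };
          setCursorPos(windowCentre);                       // re-centre so the delta never saturates

          delta = ClampDelta(delta, maxSpeed);
          virtualCursor.x = std::fmin(std::fmax(virtualCursor.x + delta.x, 0.0f), windowSize.x);
          virtualCursor.y = std::fmin(std::fmax(virtualCursor.y + delta.y, 0.0f), windowSize.y);
      }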

    Read the article

  • Browser Game Database structure

    - by John Svensson
    users
      id
      username
      password
      email
      userlevel

    characters
      id
      userid
      level
      strength
      exp
      max_exp

    map
      id
      x
      y

    This is what I have so far. I want to be able to place different NPCs at locations on my map. I am thinking of an npc_entities table; would that be a good approach? And then I would have an npc_list table with details such as how much damage the NPC does, its level, etc. Can you give me some ideas on how I can structure the map, map entities and NPCs?

    Read the article
