Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

Page 520/1295

  • Directx and Open Libraries list? [closed]

    - by OVERTONE
    I've been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than of how they compare. For example, we have DirectX (graphics) and its open counterpart OpenGL, and DirectX (sound) and OpenAL. But there are other DirectX libraries that I can't find open alternatives to, such as DirectInput, DXGI, Direct2D and DirectWrite. Does anyone have any lists or comparisons of DirectX components and their open counterparts?

    Read the article

  • How to implement a game launch counter in LibGDX

    - by Vishal Kumar
    I'm writing a game using LibGDX in which I want to save the number of launches of the game in a text file. So, in the create() of my starter class I have the following code, but it's not working: public class MainStarter extends Game { private int count; @Override public void create() { // Set up the application AppSettings.setUp(); if(SettingsManager.isFirstLaunch()){ SettingsManager.createTextFileInLocalStorage("gamedata"); SettingsManager.writeLine("gamedata", "Launched:"+count ,FileType.LOCAL_FILE ); } else{ SettingsManager.writeLine("gamedata", "Not First launch :"+count++ ,FileType.LOCAL_FILE ); } // // Load assets before setting the screen // ##################################### Assets.loadAll(); // Set the tests screen setScreen(new MainMenuScreen(this, "Main Menu")); } } What is the proper way to do this?
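    One common way to persist a launch count in libGDX is the Preferences API rather than a hand-rolled text file. The sketch below shows only that idea; the preference file name "gamedata" and the key "launchCount" are placeholders, and this is not a fix for the SettingsManager code above.

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.Preferences;

        public class LaunchCounter {
            // Reads the stored launch count, increments it, writes it back,
            // and returns the new value. Call this once from create().
            public static int registerLaunch() {
                Preferences prefs = Gdx.app.getPreferences("gamedata"); // placeholder file name
                int count = prefs.getInteger("launchCount", 0);         // 0 means first launch
                count++;
                prefs.putInteger("launchCount", count);
                prefs.flush();                                          // persist immediately
                return count;
            }
        }

    The same Preferences object can also replace the isFirstLaunch() check, since a missing key simply falls back to the default value.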

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should be able to perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size with the formula gl_PointSize = in_point_size * some_factor / distance_to_camera, so that particle size scales inversely with distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference since I have to update all the particles' data each frame anyway? About particle simulation: where should it be done, on the CPU or in the vertex processor? Something tells me a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for mobile devices, would be appreciated. (The animation of the camera passing through the particles should look realistic.)
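    For the billboard alternative, the per-particle quad is usually built on the CPU from the camera's right and up axes. The sketch below is just that vector math, with no engine assumed; the corner ordering suits GL_TRIANGLE_STRIP, and all names are illustrative.

        // Fills out[] (12 floats: 4 corners * xyz) with a camera-facing quad.
        // center: particle position; right, up: the camera's normalized world-space
        // right/up axes; halfSize: half the particle's edge length.
        // Corner order: (-,-), (+,-), (-,+), (+,+) -> usable as a triangle strip.
        static void buildBillboard(float[] center, float[] right, float[] up,
                                   float halfSize, float[] out) {
            for (int corner = 0; corner < 4; corner++) {
                float sx = (corner == 1 || corner == 3) ? 1f : -1f; // +/- along right
                float sy = (corner >= 2) ? 1f : -1f;                // +/- along up
                for (int i = 0; i < 3; i++) {
                    out[corner * 3 + i] = center[i]
                            + right[i] * sx * halfSize
                            + up[i] * sy * halfSize;
                }
            }
        }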

    Read the article

  • Cocos2d Tiled Dynamic Object Layer

    - by Rodrigo Camargo
    I'm trying to develop a cocos2d tile-based game using a sort of 'dynamic' object layer. What I want is that, after the tiled map is loaded, the user can drag something onto the map and it becomes an event when the 'hero' passes over it. I know how to build an object layer in Tiled, but that seems to be for fixed positions, and what I want is a dynamic action position based on what the user selects. For instance, the user can drag a rock onto a tile, and when the character hits that rock he may die, or something. I'm a little lost about how to make this work. Do you have any idea what I should use or what I should look for? Thanks in advance!

    Read the article

  • SDL2 with OpenGL -- weird results, what's wrong?

    - by ber4444
    I'm porting an app to iOS, and therefore need to upgrade it to SDL2 from SDL 1.2 (so far I'm testing it as an OS X desktop app only). However, when running the code with SDL2, I'm getting weird results as shown on the second image below (the first image is how it looks with SDL, correctly). The single changeset that causes this is this one; do you see something obviously wrong there, or does SDL2 have some OpenGL nuances I'm unaware of? My SDL is based on changeset dd7e57847ea9 from HG (since then there is one "Allow specifying of OpenGL 3.2 Core Profile on Mac OS X" commit, not sure if that would help).

    Read the article

  • Would someone please explain Octree Collisions to me?

    - by A-Type
    I've been reading everything I can find on the subject and I feel like the pieces are just about to fall into place, but I just can't quite get it. I'm making a space game, where collisions will occur between planets, ships, asteroids, and the sun. Each of these objects can be subdivided into 'chunks', which I have implemented to speed up rendering (the vertices can and will change often at runtime, so I've separated the buffers). These subdivisions also have bounding primitives to test for collision. All of these objects are made of blocks (yeah, it's that kind of game). Blocks can also be tested for rough collisions, though they do not have individual bounding primitives for memory reasons. I think the rough testing seems to be sufficient, though. So, collision needs to be fairly precise; at block resolution. Some functions rely on two blocks colliding. And, of course, attacking specific blocks is important. Now what I am struggling with is filtering my collision pairs. As I said, I've read a lot about Octrees, but I'm having trouble applying it to my situation as many tutorials are vague with very little code. My main issues are: Are Octrees recalculated each frame, or are they stored in memory and objects are shuffled into different divisions as they move? Despite all my reading I still am not clear on this... the vagueness of it all has been frustrating. How far do Octrees subdivide? Planets in my game are quite large, while asteroids are smaller. Do I subdivide to the size of the planet, or asteroid (where planet is in multiple divisions)? Or is the limit something else entirely, like number of elements in the division? Should I load objects into the octrees as 'chunks' or in the whole, then break into chunks later? This could be specific to my implementation, I suppose. I was going to ask about how big my root needed to be, but I did manage to find this question, and the second answer seems sufficient for me. I'm afraid I don't really get what he means by adding new nodes and doing subdivisions upon adding new objects, probably because I'm confused about whether the tree is maintained in memory or recalculated per-frame.
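    On the maintenance question, one common approach is to keep the octree in memory and update it incrementally: a moving object is removed and re-inserted (or pushed up and down the tree) rather than rebuilding the whole tree each frame, and a node splits when it holds more than some maximum number of objects, stopping at a minimum node size. The sketch below illustrates only that mechanism, using points instead of bounding volumes and made-up limits; a real implementation would insert chunk bounds and keep straddling objects in the parent node.

        import java.util.ArrayList;
        import java.util.List;

        // Minimal point-based octree: the tree persists between frames, and when an
        // object moves you remove its old entry and insert the new position.
        class Octree {
            static final int MAX_POINTS = 8;   // split threshold (made-up value)
            static final float MIN_HALF = 1f;  // stop subdividing below this node size

            final float cx, cy, cz, half;      // cube centre and half extent
            final List<float[]> points = new ArrayList<>();
            Octree[] children;                 // null until this node splits

            Octree(float cx, float cy, float cz, float half) {
                this.cx = cx; this.cy = cy; this.cz = cz; this.half = half;
            }

            void insert(float[] p) {
                if (children != null) { child(p).insert(p); return; }
                points.add(p);
                if (points.size() > MAX_POINTS && half > MIN_HALF) {
                    children = new Octree[8];
                    float h = half / 2;
                    for (int i = 0; i < 8; i++) {
                        children[i] = new Octree(
                                cx + ((i & 1) == 0 ? -h : h),
                                cy + ((i & 2) == 0 ? -h : h),
                                cz + ((i & 4) == 0 ? -h : h), h);
                    }
                    for (float[] q : points) child(q).insert(q); // push contents down
                    points.clear();
                }
            }

            // Called when an object moves: remove the old entry, then insert(newPos).
            boolean remove(float[] p) {
                if (children != null) return child(p).remove(p);
                return points.remove(p);
            }

            Octree child(float[] p) {
                int i = (p[0] >= cx ? 1 : 0) | (p[1] >= cy ? 2 : 0) | (p[2] >= cz ? 4 : 0);
                return children[i];
            }
        }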

    Read the article

  • Issue with a point's coordinates, which creates an unwanted triangle

    - by Paul
    I would like to connect the points from the red path, to the y-axis in blue. I figured out that the problem with my triangles came from the first point (V0) : it is not located where it should be. In the console, it says its location is at 0,0, but in the emulator, it is not. The code : for(int i = 1; i < 2; i++) { CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y); CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y); ccDrawLine(_polyVertices[i-1], _polyVertices[i]); } The output : _polyVertices[i-1].x : 0.000000, _polyVertices[i-1].y : 0.000000 _polyVertices[i].x : 50.000000, _polyVertices[i].y : 0.000000 And the result : (the layer goes up, i could not take the screenshot before the layer started to go up, but the first red point starts at y=0) : Then it creates an unwanted triangle when the code continues : Would you have any idea about this? (So to force the first blue point to start at 0,0, and not at 50,0 as it seems to be now) Here is the code : - (void)generatePath{ float x = 50; //first red point float y = 0; for(int i = 0; i < kMaxKeyPoints+1; i++) { if (i<3){ _hillKeyPoints[i] = CGPointMake(x, y); x = 150 + (random() % (int) 30); y += -40; } else if(i<20){ //going right _hillKeyPoints[i] = CGPointMake(x, y); x += (random() % (int) 30); y += -40; } else if(i<25){ //stabilize _hillKeyPoints[i] = CGPointMake(x, y); x = 150 + (random() % (int) 30); y += -40; } else if(i<30){ //going left _hillKeyPoints[i] = CGPointMake(x, y); //x -= (random() % (int) 10); x = 150 + (random() % (int) 30); y += -40; } else { //back to normal _hillKeyPoints[i] = CGPointMake(x, y); x = 150 + (random() % (int) 30); y += -40; } } } -(void)generatePolygons{ static int prevFromKeyPointI = -1; static int prevToKeyPointI = -1; // key points interval for drawing while (_hillKeyPoints[_fromKeyPointI].y > -_offsetY+winSizeTop) { _fromKeyPointI++; } while (_hillKeyPoints[_toKeyPointI].y > -_offsetY-winSizeBottom) { _toKeyPointI++; } if (prevFromKeyPointI != _fromKeyPointI || prevToKeyPointI != _toKeyPointI) { _nPolyVertices = 0; float x1 = 0; int keyPoints = _fromKeyPointI; for (int i=_fromKeyPointI; i<_toKeyPointI; i++){ //V0: at (0,0) _polyVertices[_nPolyVertices] = CGPointMake(x1, y1); //first blue point _polyTexCoords[_nPolyVertices++] = CGPointMake(x1, y1); //V1: to the first "point" _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); _polyTexCoords[_nPolyVertices++] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); keyPoints++; //from point at index 0 to 1 //V2, same y as point n°2: _polyVertices[_nPolyVertices] = CGPointMake(0, _hillKeyPoints[keyPoints].y); _polyTexCoords[_nPolyVertices++] = CGPointMake(0, _hillKeyPoints[keyPoints].y); //V1 again _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2]; _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2]; //V2 again _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2]; _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2]; //CCLOG(@"_nPolyVertices V2 again : %i", _nPolyVertices); //V3 = same x,y as point at index 1 _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); _polyTexCoords[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); y1 = _polyVertices[_nPolyVertices].y; _nPolyVertices++; } prevFromKeyPointI = _fromKeyPointI; prevToKeyPointI = 
_toKeyPointI; } } - (void) draw { //RED glColor4f(1, 1, 1, 1); for(int i = MAX(_fromKeyPointI, 1); i <= _toKeyPointI; ++i) { glColor4f(1.0, 0, 0, 1.0); ccDrawLine(_hillKeyPoints[i-1], _hillKeyPoints[i]); } //BLUE glColor4f(0, 0, 1, 1); for(int i = 1; i < 2; i++) { CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y); CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y); ccDrawLine(_polyVertices[i-1], _polyVertices[i]); } } Thanks

    Read the article

  • Rotating wheel with touch adding velocity

    - by Lewis
    I have a wheel control in a game which is setup like so: - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; if (CGRectContainsPoint(wheel.boundingBox, location)) { CGPoint firstLocation = [touch previousLocationInView:[touch view]]; CGPoint location = [touch locationInView:[touch view]]; CGPoint touchingPoint = [[CCDirector sharedDirector] convertToGL:location]; CGPoint firstTouchingPoint = [[CCDirector sharedDirector] convertToGL:firstLocation]; CGPoint firstVector = ccpSub(firstTouchingPoint, wheel.position); CGFloat firstRotateAngle = -ccpToAngle(firstVector); CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle); CGPoint vector = ccpSub(touchingPoint, wheel.position); CGFloat rotateAngle = -ccpToAngle(vector); CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle); wheelRotation += (currentTouch - previousTouch) * 0.6; //limit speed 0.6 } } I update the rotation of a the wheel in the update method by doing: wheel.rotation = wheelRotation; Now once the user lets go of the wheel I want it to rotate back to where it was before but not without taking into account the velocity of the swipe the user has done. This is the bit I really can't get my head around. So if the swipe generates a lot of velocity then the wheel will carry on moving slightly in that direction until the overall force which pulls the wheel back to the starting position kicks in. Any ideas/code snippets?
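    A common way to get the "fling, then settle back" feel is to track the angular velocity yourself: while a touch is active, record the per-frame change in angle; after release, keep integrating that velocity with damping while a spring force pulls the wheel back toward its rest angle. The sketch below shows only that idea in plain code, independent of cocos2d; the field names and tuning constants are made up.

        class WheelSpring {
            float rotation;          // current wheel angle, degrees
            float restRotation;      // angle to settle back to after release
            float angularVelocity;   // degrees per second
            boolean touching;

            static final float DAMPING = 3.0f;    // made-up tuning values
            static final float STIFFNESS = 25.0f;

            // Call while the finger is down; deltaAngle is (currentTouch - previousTouch),
            // dt is the frame time (assumed > 0).
            void onTouchMoved(float deltaAngle, float dt) {
                rotation += deltaAngle;
                angularVelocity = deltaAngle / dt; // remember how fast the user was turning
                touching = true;
            }

            void onTouchEnded() { touching = false; }

            // Call every frame from the update loop.
            void update(float dt) {
                if (touching) return;
                // Damped spring toward the rest angle, starting from the swipe's velocity.
                float displacement = rotation - restRotation;
                float accel = -STIFFNESS * displacement - DAMPING * angularVelocity;
                angularVelocity += accel * dt;
                rotation += angularVelocity * dt;
            }
        }

    The existing wheelRotation variable would then be driven by update() once the touch ends, so the fling carries on briefly before the spring wins.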

    Read the article

  • How to resolve concurrent ramp collisions in 2d platformer?

    - by Shaun Inman
    A bit about the physics engine: Bodies are all rectangles. Bodies are sorted at the beginning of every update loop based on the body-in-motion's horizontal and vertical velocity (to avoid sticky walls/floors). Solid bodies are resolved by testing the body-in-motion's new X with the old Y and adjusting if necessary before testing the new X with the new Y, again adjusting if necessary. Works great. Ramps (rectangles with a flag set indicating bottom-left, bottom-right, etc) are resolved by calculating the ratio of penetration along the x-axis and setting a new Y accordingly (with some checks to make sure the body-in-motion isn't attacking from the tall or flat side, in which case the ramp is treated as a normal rectangle). This also works great. Side-by-side ramps, eg. \/ and /\, work fine but things get jittery and unpredictable when a top-down ramp is directly above a bottom-up ramp, eg. < or > or when a bottom-up ramp runs right up to the ceiling/top-down ramp runs right down to the floor. I've been able to lock it down somewhat by detecting whether the body-in-motion hadFloor when also colliding with a top-down ramp or hadCeiling when also colliding with a bottom-up ramp then resolving by calculating the ratio of penetration along the y-axis and setting the new X accordingly (the opposite of the normal behavior). But as soon as the body-in-motion jumps the hasFloor flag becomes false, the first ramp resolution pushes the body into collision with the second ramp and collision resolution becomes jittery again for a few frames. I'm sure I'm making this more complicated than it needs to be. Can anyone recommend a good resource that outlines the best way to address this problem? (Please don't recommend I use something like Box2d or Chipmunk. Also, "redesign your levels" isn't an answer; the body-in-motion may at times be riding another body-in-motion, eg. a platform, that pushes it into a ramp so I'd like to be able to resolve this properly.) Thanks!
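    For reference, the basic bottom-up ramp resolution the question describes (a surface height derived from the horizontal penetration ratio) looks roughly like the sketch below; the names are illustrative, and the rectangles are plain axis-aligned boxes measured from their bottom-left corners.

        // Resolve a body against a ramp that rises from the ramp's left edge to its
        // right edge. Returns the corrected bottom Y for the body (its current bottom
        // if no correction is needed).
        static float resolveBottomUpRamp(float bodyX, float bodyBottom, float bodyW,
                                         float rampX, float rampBottom,
                                         float rampW, float rampH) {
            // Horizontal penetration ratio, measured at the body's leading (right) edge.
            float t = (bodyX + bodyW - rampX) / rampW;
            t = Math.max(0f, Math.min(1f, t));        // clamp to the ramp's span
            float surfaceY = rampBottom + t * rampH;  // slope height at that point
            return Math.max(bodyBottom, surfaceY);    // push the body up onto the slope
        }

    In the < and > cases described above, the top-down mirror of this function pushes the body the opposite way, which is why resolving both in the same frame tends to fight; ordering the two resolutions by whether the body had a floor or ceiling contact, as the question already does, is one way to break the tie.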

    Read the article

  • How to get the location of a sprite placed on a rotating circle in cocos2d-android?

    - by Real_steel4819
    I am developing a game using cocos2d and I'm stuck finding the location of a sprite placed on a circle that rotates on the background. When I hit a certain position on the circle, the hit doesn't register at the wanted position; it lands away from it and places the target there. I tried printing the position of the hit in spriteMoveFinished() and ccTouchesEnded(), but it gives the initial position, not the rotated position. CGPoint location = CCDirector.sharedDirector().convertToGL(CGPoint.ccp(event.getX(), event.getY())); This is what I am using to get the location.
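    The usual fix is to recompute the sprite's world position from the circle's current rotation instead of reading the position it was given before the rotation started, then compare that against the touch point. A rough sketch of the math only (not tied to a particular cocos2d API; cocos2d rotations are in degrees and increase clockwise, so the sign below is an assumption to verify):

        // World position of a sprite sitting on a circle that rotates about its centre.
        // startAngleDeg: angle the sprite was originally placed at;
        // circleRotationDeg: the circle's current rotation property.
        static float[] pointOnRotatingCircle(float centerX, float centerY, float radius,
                                             float startAngleDeg, float circleRotationDeg) {
            double angle = Math.toRadians(startAngleDeg - circleRotationDeg);
            return new float[] {
                centerX + radius * (float) Math.cos(angle),
                centerY + radius * (float) Math.sin(angle)
            };
        }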

    Read the article

  • Low complexity shader to indicate the sides of a polyline

    - by Pris
    I have a bunch of polylines that I draw using GL_LINES. They can have thousands of points. They actually represent the separation of land and water on a map. I don't have complete polygons, just the ordered set of points. I'm looking for a neat but efficient way to visually convey Side A and Side B as being different. For example, I could offset the polyline in one direction a few times and fade it out (but every offset doubles the number of points), or offset it once to make a "ribbon" and give one side a glow-like effect to mimic the outer glow or shadow of a polygon. This is for a mobile application and I'm using OpenGL ES 2. I'd like to keep the effect as simple as possible from a complexity standpoint. I'm looking for some additional ideas; maybe there's a clever shader technique out there or a visual effect I haven't considered.
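    The single-offset "ribbon" mentioned above is cheap to build on the CPU: for each point, emit the point itself plus a copy pushed along the local normal, render as a triangle strip, and let the offset edge fade to transparent (or carry the side colour). A minimal sketch of that geometry generation, with 2D points as flat x/y pairs and no particular engine assumed:

        // Builds a one-sided ribbon along a polyline. Input pts = x0,y0,x1,y1,...
        // Output: two vertices per input point (on-line vertex, offset vertex),
        // suitable for GL_TRIANGLE_STRIP; give alpha 1 to the on-line edge and 0 to
        // the offset edge for a soft glow on one side.
        static float[] buildRibbon(float[] pts, float width) {
            int n = pts.length / 2;
            float[] out = new float[n * 4];
            for (int i = 0; i < n; i++) {
                int a = Math.max(i - 1, 0), b = Math.min(i + 1, n - 1);
                float dx = pts[2 * b] - pts[2 * a], dy = pts[2 * b + 1] - pts[2 * a + 1];
                float len = (float) Math.sqrt(dx * dx + dy * dy);
                if (len == 0) len = 1;                    // guard against duplicate points
                float nx = -dy / len, ny = dx / len;      // left-hand normal of the segment
                out[4 * i]     = pts[2 * i];              // vertex on the line
                out[4 * i + 1] = pts[2 * i + 1];
                out[4 * i + 2] = pts[2 * i] + nx * width; // vertex pushed to one side
                out[4 * i + 3] = pts[2 * i + 1] + ny * width;
            }
            return out;
        }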

    Read the article

  • List of Open Source Java Games for Android

    - by BluFire
    I'm wondering if there are any more open source games than the ones you can plainly see when you search for a list of open source games for Android on Google. For example, is there a good website that has compiled open source games? I don't want an answer of "go google it" or "en.wikipedia.org/wiki/List_of_open_source_Android_applications"; it gets really annoying on posts when people just give lazy answers.

    Read the article

  • Licensing Theme Music from other games

    - by HS01
    As part of my game, I thought it would be fun to make a hidden level that pays tribute to Mario Bros (one of the earliest games I ever played). It would be themed that way, with 8-bit graphics and question-mark blocks, and completing the level would say "Thank you, but the princess is in another castle" or some such. For the soundtrack, I'm thinking of just overlaying the standard Mario theme music by playing it on a virtual keyboard with a different instrument/timing or something. My question is: am I legally safe? I'm not using anyone else's actual music; I'm just playing the same tune in a different way myself. Do I have to get licensing for this?

    Read the article

  • How does gluLookAt work?

    - by Chan
    From my understanding, gluLookAt( eye_x, eye_y, eye_z, center_x, center_y, center_z, up_x, up_y, up_z ); is equivalent to: glRotatef(B, 0.0, 0.0, 1.0); glRotatef(A, wx, wy, wz); glTranslatef(-eye_x, -eye_y, -eye_z); But when I print out the ModelView matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet: #include <stdlib.h> #include <stdio.h> #include <GL/glut.h> #include <iomanip> #include <iostream> #include <string> using namespace std; static const int Rx = 0; static const int Ry = 1; static const int Rz = 2; static const int Ux = 4; static const int Uy = 5; static const int Uz = 6; static const int Ax = 8; static const int Ay = 9; static const int Az = 10; static const int Tx = 12; static const int Ty = 13; static const int Tz = 14; void init() { glClearColor(0.0, 0.0, 0.0, 0.0); glEnable(GL_DEPTH_TEST); glShadeModel(GL_SMOOTH); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 }; glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient); } void displayModelviewMatrix(float MV[16]) { int SPACING = 12; cout << left; cout << "\tMODELVIEW MATRIX\n"; cout << "--------------------------------------------------" << endl; cout << setw(SPACING) << "R" << setw(SPACING) << "U" << setw(SPACING) << "A" << setw(SPACING) << "T" << endl; cout << "--------------------------------------------------" << endl; cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux] << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl; cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy] << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl; cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz] << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl; cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7] << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl; cout << "--------------------------------------------------" << endl; cout << endl; } void reshape(int w, int h) { float ratio = static_cast<float>(w)/h; glViewport(0, 0, w, h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0, ratio, 1.0, 425.0); } void draw() { float m[16]; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glGetFloatv(GL_MODELVIEW_MATRIX, m); gluLookAt( 300.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f ); glColor3f(1.0, 0.0, 0.0); glutSolidCube(100.0); glGetFloatv(GL_MODELVIEW_MATRIX, m); displayModelviewMatrix(m); glutSwapBuffers(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); glutInitWindowSize(400, 400); glutInitWindowPosition(100, 100); glutCreateWindow("Demo"); glutReshapeFunc(reshape); glutDisplayFunc(draw); init(); glutMainLoop(); return 0; } No matter what value I use for the eye vector: 300, 0, 0 or 0, 300, 0 or 0, 0, 300 the translation vector is the same, which doesn't make any sense because the order of code is in backward order so glTranslatef should run first, then the 2 rotations. Plus, the rotation matrix, is completely independent of the translation column (in the ModelView matrix), then what would cause this weird behavior? Here is the output with the eye vector is (0.0f, 300.0f, 0.0f) MODELVIEW MATRIX -------------------------------------------------- R U A T -------------------------------------------------- 0 0 0 0 0 0 0 0 0 1 0 -300 0 0 0 1 -------------------------------------------------- I would expect the T column to be (0, -300, 0)! 
So could anyone help me explain this? The implementation of gluLookAt from http://www.mesa3d.org void GLAPIENTRY gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez, GLdouble centerx, GLdouble centery, GLdouble centerz, GLdouble upx, GLdouble upy, GLdouble upz) { float forward[3], side[3], up[3]; GLfloat m[4][4]; forward[0] = centerx - eyex; forward[1] = centery - eyey; forward[2] = centerz - eyez; up[0] = upx; up[1] = upy; up[2] = upz; normalize(forward); /* Side = forward x up */ cross(forward, up, side); normalize(side); /* Recompute up as: up = side x forward */ cross(side, forward, up); __gluMakeIdentityf(&m[0][0]); m[0][0] = side[0]; m[1][0] = side[1]; m[2][0] = side[2]; m[0][1] = up[0]; m[1][1] = up[1]; m[2][1] = up[2]; m[0][2] = -forward[0]; m[1][2] = -forward[1]; m[2][2] = -forward[2]; glMultMatrixf(&m[0][0]); glTranslated(-eyex, -eyey, -eyez); }
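    The printed matrix is consistent with that Mesa implementation once the multiplication order is taken into account: glTranslated is multiplied onto the matrix after glMultMatrixf, so the translation column of the modelview is the rotated eye, not the raw eye. Spelled out for the eye = (0, 300, 0) case:

        M_{\text{modelview}} \;=\; R \cdot T(-\mathbf{e})
        \qquad\Longrightarrow\qquad
        \text{T column} \;=\; R\,(-\mathbf{e}) \;=\; -R\,\mathbf{e} \;\neq\; -\mathbf{e}

        \mathbf{f} = \operatorname{normalize}(\mathbf{c}-\mathbf{e}) = (0,-1,0), \qquad
        \mathbf{s} = \mathbf{f}\times\mathbf{up} = \mathbf{0}\ \ (\mathbf{up}\parallel\mathbf{f}), \qquad
        T_z = (-\mathbf{f})\cdot(-\mathbf{e}) = (0,1,0)\cdot(0,-300,0) = -300

    The zero side vector is also why the first two rows of the printed matrix are all zeros: with the eye on the y-axis and up = (0, 1, 0), the view direction is parallel to up, so the basis degenerates. Choosing an up vector that is not parallel to the viewing direction (for example (0, 0, 1) when looking down the y-axis) gives a proper rotation, and the T column will still be -R·e rather than -e.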

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structure in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner the information seems to be able to pass to the shader: [StructLayout(LayoutKind.Explicit, Size = 16)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; [FieldOffset(12)] public int type; } This works perfectly when used against this HLSL fragment: cbuffer PerFrame : register(b0) { Vector3 eyePos; int type; } float3 GetColour() { float3 returnColour = float(0.0f, 0.0f, 0.0f); switch(type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } However, when I use the following structure definitions... // Note this is 16 because HLSL packs in 4 float 'chunks'. // It is also simplified, but still demonstrates the problem. [StructLayout(Layout.Explicit, Size = 16)] internal struct InternalTestStruct { [FieldOffset(0)] public int type; } [StructLayout(LayoutKind.Explicit, Size = 32)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; //Missing 4 bytes here for correct packing. [FieldOffset(16)] public InternalTestStruct mInternal; } ... the following HLSL fragment no longer works. struct InternalType { int type; } cbuffer PerFrame : register(b0) { Vector3 eyePos; InternalType internalStruct; } float3 GetColour() { float3 returnColour = float(0.0f, 0.0f, 0.0f); switch(internaltype.type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } Is there a problem with the way I am packing the struct, or is it another issue? To re-iterate: I can pass a struct in a cbuffer so long as it does not contain a nested struct.

    Read the article

  • Make an object slide around an obstacle

    - by Isaiah
    I have path areas set up in a game I'm making for canvas/HTML5, and I've got it working to keep the player within these areas. I have a function isOut(boundary, x, y) that returns true if the point is outside the boundary. What I do is check the new x and y separately, each paired with the corresponding old coordinate, and if either check fails I assign it the value from the previous frame. The old positions are kept in a variable from a closure I made, like this: opos = [x,y];//old position npos = [x,y];//new position if(isOut(bound, npos[0], opos[1])){ npos[0] = opos[0]; //assign it the old x position } if(isOut(bound, opos[0], npos[1])){ npos[1] = opos[1]; //assign it the old y position } It looks nice and works well at certain angles, but if the boundary has diagonal regions it results in jittery motion. What's happening is that the y position exits the area while x doesn't, so the player keeps getting pushed to the side; once the player has moved aside a bit they can move forward, then y exits again and the whole process repeats. Does anyone know how I might achieve a smoother slide? I have access to the player's velocity vector, the angle, and the speed (used together with the angle). I can move the player with either angle/speed or x/y velocities, as I've built in conversions to translate one to the other if either has been altered manually.
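    The usual cure for jitter on diagonal edges is to slide rather than cancel: remove only the component of the movement that points into the boundary and keep the component along it. That needs an (at least approximate) boundary normal, which a pure inside/outside point test doesn't give directly but which can be estimated by sampling isOut() at nearby offsets. The projection itself is a one-liner; here is a hedged sketch of just that step:

        // Removes the part of the movement (vx, vy) that pushes into a wall whose
        // outward unit normal is (nx, ny); what remains is the slide along the wall.
        static float[] slideAlongWall(float vx, float vy, float nx, float ny) {
            float into = vx * nx + vy * ny;            // how much of v points into the wall
            if (into >= 0) return new float[] { vx, vy }; // moving away: leave it alone
            return new float[] { vx - into * nx, vy - into * ny };
        }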

    Read the article

  • Designing Videogame Character Parodies [duplicate]

    - by David Dimalanta
    This question already has an answer here: Is it legal to add a cameo appearance of a known video game character in my game? (2 answers) Is it okay to make a playable character for a video game despite its resemblance to an existing one? For example, I'm making a 3rd-person action-platformer and I want a character design that resembles Mega Man but is not exactly the same as him, since there are slight differences in color, details, and facial features.

    Read the article

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Does the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If it's the latter, what happens if I have 5 shaders all using the same texture: does that mean I send 5 instances of that texture data to the GPU, or am I simply telling the GPU which preloaded texture to use? Or does XNA do the heavy lifting in the background?

    Read the article

  • Detecting walls or floors in pygame

    - by Serial
    I am trying to make bullets bounce of walls, but I can't figure out how to correctly do the collision detection. What I am currently doing is iterating through all the solid blocks and if the bullet hits the bottom, top or sides, its vector is adjusted accordingly. However, sometimes when I shoot, the bullet doesn't bounce, I think it's when I shoot at a border between two blocks. Here is the update method for my Bullet class: def update(self, dt): if self.can_bounce: #if the bullet hasnt bounced find its vector using the mousclick pos and player pos speed = -10. range = 200 distance = [self.mouse_x - self.player[0], self.mouse_y - self.player[1]] norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2) direction = [distance[0] / norm, distance[1 ] / norm] bullet_vector = [direction[0] * speed, direction[1] * speed] self.dx = bullet_vector[0] self.dy = bullet_vector[1] #check each block for collision for block in self.game.solid_blocks: last = self.rect.copy() if self.rect.colliderect(block): topcheck = self.rect.top < block.rect.bottom and self.rect.top > block.rect.top bottomcheck = self.rect.bottom > block.rect.top and self.rect.bottom < block.rect.bottom rightcheck = self.rect.right > block.rect.left and self.rect.right < block.rect.right leftcheck = self.rect.left < block.rect.right and self.rect.left > block.rect.left each test tests if it hit the top bottom left or right side of the block its colliding with if self.can_bounce: if topcheck: self.rect = last self.dy *= -1 self.can_bounce = False print "top" if bottomcheck: self.rect = last self.dy *= -1 #Bottom check self.can_bounce = False print "bottom" if rightcheck: self.rect = last self.dx *= -1 #right check self.can_bounce = False print "right" if leftcheck: self.rect = last self.dx *= -1 #left check self.can_bounce = False print "left" else: # if it has already bounced and colliding again kill it self.kill() for enemy in self.game.enemies_list: if self.rect.colliderect(enemy): self.kill() #update position self.rect.x -= self.dx self.rect.y -= self.dy This definitely isn't the best way to do it but I can't think of another way. If anyone has done this or can help that would be awesome!

    Read the article

  • Design patterns for effects between actors and technology

    - by changelog
    I'm working on my first game, and taking the opportunity to brush up my C++ (I want to make as much of it portable as I can). Whilst working on the technology tree and how it affects actors (spaceships, planets, crew, buildings, etc.), I can't find a pattern that decouples these entities enough to feel like a clean approach. Just as an idea, here are the kinds of effects these actors (and techs) can have on one another: an engineer inside a spaceship boosts its shield; a hero in a spaceship in a fleet increases morale; a technology improves spaceships' travel distance; a building on a planet improves its production. The best I can come up with is the Observer pattern, managed more or less manually (when a crew member enters a spaceship, fire the event; when a new building is built on a planet, fire the event; and so on), but that seems too tightly coupled to me. I would love to get some ideas about how to approach this better.
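    One way to loosen the coupling is to stop pushing values around on events and instead let each actor pull its stats through a stack of modifiers contributed by whatever is currently attached to it (crew, buildings, researched techs). The sketch below is only an illustration of that idea, written in Java for brevity rather than the project's C++; every name in it is invented.

        import java.util.ArrayList;
        import java.util.List;

        // A modifier adjusts one named stat; crew members, buildings and technologies
        // each provide their own modifiers without knowing who reads them.
        interface StatModifier {
            double apply(String stat, double value);
        }

        class Actor {
            private final List<StatModifier> modifiers = new ArrayList<>();

            void attach(StatModifier m) { modifiers.add(m); }   // e.g. engineer boards the ship
            void detach(StatModifier m) { modifiers.remove(m); }

            // Pull-based: the base value is folded through every attached modifier on
            // demand, so nothing needs to be notified when a modifier comes or goes.
            double stat(String name, double baseValue) {
                double v = baseValue;
                for (StatModifier m : modifiers) v = m.apply(name, v);
                return v;
            }
        }

        // Example: an engineer boosting the shield stat by 10%.
        class EngineerBoost implements StatModifier {
            public double apply(String stat, double value) {
                return "shield".equals(stat) ? value * 1.10 : value;
            }
        }

    Global effects such as researched technologies can live in one shared list that every spaceship consults in addition to its own, which keeps techs from having to know about individual ships at all.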

    Read the article

  • How do I set quad buffering with JOGL 2.0?

    - by tony danza
    I'm trying to create a 3d renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this so that's not the problem. I had a stereo.jar library in jogl 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and jogl 2.0 therefore I have to adapt the library. Some things are changed in the source code of Jogl and Processing and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code: public class Theatre extends PGraphicsOpenGL{ protected void allocate() { if (context == null) { // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them GLCapabilities capabilities = new GLCapabilities(); // Starting in release 0158, OpenGL smoothing is always enabled if (!hints[DISABLE_OPENGL_2X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(2); } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); } capabilities.setStereo(true); // get a rendering surface and a context for this canvas GLDrawableFactory factory = GLDrawableFactory.getFactory(); drawable = factory.getGLDrawable(parent, capabilities, null); context = drawable.createContext(null); // need to get proper opengl context since will be needed below gl = context.getGL(); // Flag defaults to be reset on the next trip into beginDraw(). settingsInited = false; } else { // The following three lines are a fix for Bug #1176 // http://dev.processing.org/bugs/show_bug.cgi?id=1176 context.destroy(); context = drawable.createContext(null); gl = context.getGL(); reapplySettings(); } } } This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying: PGraphicsOpenGL pg = ((PGraphicsOpenGL)g); pgl = pg.beginPGL(); gl = pgl.gl; glu = pg.pgl.glu; gl2 = pgl.gl.getGL2(); GLProfile profile = GLProfile.get(GLProfile.GL2); GLCapabilities capabilities = new GLCapabilities(profile); capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); capabilities.setStereo(true); GLDrawableFactory factory = GLDrawableFactory.getFactory(profile); If I go on, I should do something like this: drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this: gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work :/ Thanks.

    Read the article

  • Bullet Physics - Casting a ray straight down from a rigid body (first person camera)

    - by Hydrocity
    I've implemented a first person camera using Bullet--it's a rigid body with a capsule shape. I've only been using Bullet for a few days and physics engines are new to me. I use btRigidBody::setLinearVelocity() to move it and it collides perfectly with the world. The only problem is the Y-value moves freely, which I temporarily solved by setting the Y-value of the translation vector to zero before the body is moved. This works for all cases except when falling from a height. When the body drops off a tall object, you can still glide around since the translate vector's Y-value is being set to zero, until you stop moving and fall to the ground (the velocity is only set when moving). So to solve this I would like to try casting a ray down from the body to determine the Y-value of the world, and checking the difference between that value and the Y-value of the camera body, and disable or slow down movement if the difference is large enough. I'm a bit stuck on simply casting a ray and determining the Y-value of the world where it struck. I've implemented this callback: struct AllRayResultCallback : public btCollisionWorld::RayResultCallback{ AllRayResultCallback(const btVector3& rayFromWorld, const btVector3& rayToWorld) : m_rayFromWorld(rayFromWorld), m_rayToWorld(rayToWorld), m_closestHitFraction(1.0){} btVector3 m_rayFromWorld; btVector3 m_rayToWorld; btVector3 m_hitNormalWorld; btVector3 m_hitPointWorld; float m_closestHitFraction; virtual btScalar addSingleResult(btCollisionWorld::LocalRayResult& rayResult, bool normalInWorldSpace) { if(rayResult.m_hitFraction < m_closestHitFraction) m_closestHitFraction = rayResult.m_hitFraction; m_collisionObject = rayResult.m_collisionObject; if(normalInWorldSpace){ m_hitNormalWorld = rayResult.m_hitNormalLocal; } else{ m_hitNormalWorld = m_collisionObject->getWorldTransform().getBasis() * rayResult.m_hitNormalLocal; } m_hitPointWorld.setInterpolate3(m_rayFromWorld, m_rayToWorld, m_closestHitFraction); return 1.0f; } }; And in the movement function, I have this code: btVector3 from(pos.x, pos.y + 1000, pos.z); // pos is the camera's rigid body position btVector3 to(pos.x, 0, pos.z); // not sure if 0 is correct for Y AllRayResultCallback callback(from, to); Base::getSingletonPtr()->m_btWorld->rayTest(from, to, callback); So I have the callback.m_hitPointWorld vector, which seems to just show the position of the camera each frame. I've searched Google for examples of casting rays, as well as the Bullet documentation, and it's been hard to just find an example. An example is really all I need. Or perhaps there is some method in Bullet to keep the rigid body on the ground? I'm using Ogre3D as a rendering engine, and casting a ray down is quite straightforward with that, however I want to keep all the ray casting within Bullet for simplicity. Could anyone point me in the right direction? Thanks.

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body) the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here's some snippets of what I think is the most relevant code: private RevoluteJoint joinBodyParts( Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) { RevoluteJointDef jointDef = new RevoluteJointDef(); jointDef.initialize(a, b, a.getWorldPoint(anchor)); jointDef.enableLimit = true; jointDef.lowerAngle = lowerAngle; jointDef.upperAngle = upperAngle; return (RevoluteJoint)world.createJoint(jointDef); } private Body createRectangleBodyPart( float x, float y, float width, float height) { PolygonShape shape = new PolygonShape(); shape.setAsBox(width, height); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.DynamicBody; bodyDef.position.y = y; bodyDef.position.x = x; Body body = world.createBody(bodyDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = shape; fixtureDef.density = 10; fixtureDef.filter.groupIndex = -1; fixtureDef.filter.categoryBits = FILTER_BOY; fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL; body.createFixture(fixtureDef); shape.dispose(); return body; } I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a cricle shape). Those methods are used like so: torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f); Body head = createRoundBodyPart(x, y + 7.4f, 1); Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle); leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle); rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);
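    Joint separation in Box2D is usually a solver-accuracy symptom rather than a bug in the joint setup: large density or mass ratios between linked bodies, fast impacts against static geometry, a variable or large time step, and too few solver iterations all let the constraint drift apart for a few frames. As general, hedged Box2D advice rather than a guaranteed fix: lower the body density (10 is fairly heavy for small limbs), step the world with a fixed time step, and raise the iteration counts. A minimal sketch of the stepping side, assuming the existing com.badlogic.gdx.physics.box2d.World field named world:

        // Fixed time step with more solver iterations; with the libGDX wrapper the
        // call is world.step(timeStep, velocityIterations, positionIterations).
        private static final float TIME_STEP = 1f / 60f;
        private static final int VELOCITY_ITERATIONS = 8; // Box2D's suggested default
        private static final int POSITION_ITERATIONS = 3; // raising these stiffens joints at some CPU cost

        private float accumulator;

        public void updatePhysics(float deltaTime) {
            accumulator += deltaTime;
            while (accumulator >= TIME_STEP) {
                world.step(TIME_STEP, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
                accumulator -= TIME_STEP;
            }
        }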

    Read the article

  • HLSL: Pack 4 values into a 32-bit float

    - by TheBigO
    I can't find any useful information on packing 4 values into a 32 bit float in HLSL. Ideally, what I want to be able to do in HLSL is: float4 values = ... // Some values where each component is between 0 and 1. float packedValues = pack32R(values); float4 values2 = unpack32R(packedValues); I realize that there will be precision limitations, and performance tradeoffs between different precisions in different methods. I'm just wondering what ideas are out there.
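    One constraint worth writing down before picking a method: an IEEE 754 32-bit float has a 24-bit significand, so only integers up to 2^24 are represented exactly. Four full 8-bit channels therefore cannot be packed losslessly into one float, but four 6-bit values (or three 8-bit values) can, for example:

        p \;=\; \lfloor 63a \rfloor \;+\; \lfloor 63b \rfloor \cdot 2^{6}
              \;+\; \lfloor 63c \rfloor \cdot 2^{12} \;+\; \lfloor 63d \rfloor \cdot 2^{18}
              \;<\; 2^{24}

    Unpacking reverses the process with successive division and modulo by 64, then dividing each value by 63 to return to the 0..1 range. Any scheme that tries to keep 8 bits per channel in a single float will lose bits somewhere, which is the precision trade-off the question anticipates.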

    Read the article

  • I'm looking to learn how to apply traditional animation techniques to my graphics engine - are there any tutorials or online resources that can help?

    - by blueberryfields
    There are many traditional animation techniques - such as motion blur, motion along an elliptical curve rather than a straight line, or counter-motion before the start of a movement - which help create the appearance of a realistic 3D animated character. I'm looking to incorporate tools and shortcuts for some of these into my graphics engine, to make it easier for my end users to apply these techniques in their animations. Is there a good resource listing the techniques and the principles behind them, especially how they might apply to a graphics engine or 3D animation?

    Read the article
