Search Results

Search found 16410 results on 657 pages for 'game component'.


  • Implement Fast Inverse Square Root in JavaScript?

    - by BBz
    The Fast Inverse Square Root from Quake III seems to use a floating-point bit trick. As I understand it, floating-point representation can differ between implementations. So is it possible to implement the Fast Inverse Square Root in JavaScript, and would it return the same result?

      float Q_rsqrt(float number)
      {
          long i;
          float x2, y;
          const float threehalfs = 1.5F;

          x2 = number * 0.5F;
          y  = number;
          i  = * ( long * ) &y;
          i  = 0x5f3759df - ( i >> 1 );
          y  = * ( float * ) &i;
          y  = y * ( threehalfs - ( x2 * y * y ) );
          return y;
      }
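
    A minimal sketch of the same bit trick in JavaScript/TypeScript, assuming a browser or Node environment: typed-array views over one ArrayBuffer stand in for the pointer cast. Because the Newton-Raphson step runs in double precision (Math.fround could be used to force 32-bit rounding), the result is very close to, but not always bit-identical with, the C float version.

      // Reinterpret float bits <-> int bits through two views of one buffer.
      const buf = new ArrayBuffer(4);
      const f32 = new Float32Array(buf);
      const i32 = new Int32Array(buf);

      function qRsqrt(x: number): number {
        const x2 = x * 0.5;
        f32[0] = x;                            // write the value as a 32-bit float
        i32[0] = 0x5f3759df - (i32[0] >> 1);   // the magic constant, applied to the raw bits
        let y = f32[0];                        // read the mangled bits back as a float
        y = y * (1.5 - x2 * y * y);            // one Newton-Raphson iteration
        return y;
      }

      console.log(qRsqrt(4), 1 / Math.sqrt(4));  // roughly 0.499 vs 0.5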

    Read the article

  • Finding diagonal objects of an object in 3d space

    - by samfisher
    Using Unity3D, I have an array of 8 GameObjects arranged in a grid around one object that is already known, K, which sits in the center:

      1 2 3
      4 K 5
      6 7 8

    All objects are equidistant from their adjacent objects (even the diagonal ones), which means (distance between 4 & K) == (distance between K & 3) == (distance between 2 & K). I want to remove 1, 3, 6, 8 (the diagonal objects) from the array. How can I check for that at runtime? My problem is that the order of the objects 1-8 is not known, so I need to compare each object's position with K to see whether it is a diagonal neighbour. What check should I put on the GameObjects (K and the others) to verify that an object is in a diagonal position? Regards, Sam
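
    A hedged sketch of one way to do the runtime check (plain TypeScript rather than Unity C#, and assuming the grid lies in the XZ plane): a diagonal neighbour differs from K on both grid axes, while the four edge neighbours differ on only one. In Unity the same comparison works on transform.position with Mathf.Abs.

      type Vec3 = { x: number; y: number; z: number };

      function isDiagonalTo(k: Vec3, other: Vec3, epsilon = 0.001): boolean {
        const dx = Math.abs(other.x - k.x);
        const dz = Math.abs(other.z - k.z);
        return dx > epsilon && dz > epsilon;   // offset on both axes -> diagonal (1, 3, 6, 8)
      }

      // Keeping only the non-diagonal neighbours:
      // const edgeNeighbours = neighbours.filter(p => !isDiagonalTo(kPos, p));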

    Read the article

  • How do I do random isometric paths?

    - by user406470
    I'm working on an isometric city generator and I am looking for a little push in the right direction. I'm looking to randomly generate roads on an isometric plane. I have never done pathfinding before, and googling it didn't turn up any articles related to what I am trying to do. Basically, my program generates a random isometric city, and I am hoping to add roads to it. Any help is appreciated!
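
    A hedged sketch of one simple starting point (names and tuning values are made up): carve roads into a plain tile grid with a biased random walk, and only afterwards draw that grid with the isometric projection; the road layout itself does not need to know anything about isometry.

      function carveRoads(width: number, height: number, steps: number): boolean[][] {
        const road = Array.from({ length: height }, () => new Array<boolean>(width).fill(false));
        let x = Math.floor(width / 2);
        let y = Math.floor(height / 2);
        let dir = { dx: 1, dy: 0 };                       // start heading "east" in grid space
        for (let i = 0; i < steps; i++) {
          road[y][x] = true;
          if (Math.random() < 0.2) {                      // occasionally turn 90 degrees
            dir = Math.random() < 0.5
              ? { dx: dir.dy, dy: -dir.dx }
              : { dx: -dir.dy, dy: dir.dx };
          }
          x = Math.min(width - 1, Math.max(0, x + dir.dx));
          y = Math.min(height - 1, Math.max(0, y + dir.dy));
        }
        return road;                                      // true = road tile
      }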

    Read the article

  • Best practices in managing character states

    - by TheBroodian
    While developing a character, I feel like I dig myself deeper into a hole every time I add more functionality to him: each addition creates more bugs, and my code seems to trip over itself all over the place. What are the best practices for managing character states for a character with a large selection of abilities and actions, so that those abilities don't interrupt each other and create a mess overall?
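
    One widely used pattern, sketched here with hypothetical names: keep exactly one active state object per character and let states themselves accept or refuse transitions, instead of spreading booleans like isAttacking/isDashing across the character class.

      interface CharacterState {
        name: string;
        canEnter(from: CharacterState): boolean;   // e.g. "attack" refuses entry while "stunned"
        enter(): void;
        update(dt: number): void;
        exit(): void;
      }

      class StateMachine {
        private current: CharacterState;

        constructor(initial: CharacterState) {
          this.current = initial;
          this.current.enter();
        }

        request(next: CharacterState): boolean {
          if (!next.canEnter(this.current)) return false;  // rejected: no half-applied ability
          this.current.exit();
          this.current = next;
          this.current.enter();
          return true;
        }

        update(dt: number): void {
          this.current.update(dt);
        }
      }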

    Read the article

  • How are trajectories calculated and transmitted to other players in multiplayer?

    - by giulio
    I play a lot of COD4 and can see tracers for gunfire, missiles, care packages falling from helicopters, etc. There is a lot of activity. I am curious about the algorithm (at a high level) that manages all this action when you have 20 people on a map shooting each other to death. This question touches on the subject but doesn't ask for a more in-depth answer as to how developers go about calculating and transmitting movement and collision detection for projectiles, be it missiles, bullets, or any other object flying through the air in real time.
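
    This is not COD4's actual netcode, just a hedged sketch of the common pattern: rather than streaming every projectile position, the server relays one small "fire" event and every client re-simulates the same deterministic trajectory locally, so 20 players' worth of bullets costs only a handful of bytes each.

      interface FireEvent {
        shooterId: number;
        weaponId: number;
        origin: [number, number, number];
        direction: [number, number, number];   // normalised
        fireTimeMs: number;                    // server timestamp
      }

      // Any client can reconstruct where the projectile is "now":
      function projectilePosition(e: FireEvent, nowMs: number, speed: number): [number, number, number] {
        const t = (nowMs - e.fireTimeMs) / 1000;
        return [
          e.origin[0] + e.direction[0] * speed * t,
          e.origin[1] + e.direction[1] * speed * t,
          e.origin[2] + e.direction[2] * speed * t,
        ];
      }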

    Read the article

  • HLSL: What do you get when you subtract world position from InvertViewProjection.Transform?

    - by cubrman
    In one of NVIDIA's vertex shaders (the metal one) I found the following code:

      // transform object normals, tangents, & binormals to world-space:
      float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >;
      // provide tranform from "view" or "eye" coords back to world-space:
      float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >;
      ...
      float4 Po = float4(IN.Position.xyz,1);   // homogeneous location coordinates
      float4 Pw = mul(Po,WorldXf);             // convert to "world" space
      OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz);

    The term OUT.WorldView is subsequently used in a pixel shader to compute lighting:

      float3 Ln = normalize(IN.LightVec.xyz);
      float3 Nn = normalize(IN.WorldNormal);
      float3 Vn = normalize(IN.WorldView);
      float3 Hn = normalize(Vn + Ln);
      float4 litV = lit(dot(Ln,Nn),dot(Hn,Nn),SpecExpon);
      DiffuseContrib = litV.y * Kd * LightColor + AmbiColor;
      SpecularContrib = litV.z * LightColor;

    Can anyone tell me what exactly WorldView is here? And why do they add it to the normal?
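
    A hedged reading of the code, sketched in plain TypeScript rather than HLSL: with the row-vector convention these samples use, ViewIXf[3].xyz is the camera's world-space position, so WorldView is the world-space vector from the surface point toward the eye. In the pixel shader it is added to the light vector (not the normal) to build the Blinn-Phong half vector.

      type Vec3 = [number, number, number];
      const normalize = (v: Vec3): Vec3 => {
        const len = Math.hypot(v[0], v[1], v[2]);
        return [v[0] / len, v[1] / len, v[2] / len];
      };

      // OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz) in the vertex shader:
      const worldView = (eyeWorldPos: Vec3, surfaceWorldPos: Vec3): Vec3 =>
        normalize([
          eyeWorldPos[0] - surfaceWorldPos[0],
          eyeWorldPos[1] - surfaceWorldPos[1],
          eyeWorldPos[2] - surfaceWorldPos[2],
        ]);

      // Hn = normalize(Vn + Ln) in the pixel shader (Blinn-Phong half vector):
      const halfVector = (Vn: Vec3, Ln: Vec3): Vec3 =>
        normalize([Vn[0] + Ln[0], Vn[1] + Ln[1], Vn[2] + Ln[2]]);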

    Read the article

  • most efficient AABB vs Ray collision algorithms

    - by Asher Einhorn
    Is there a known 'most efficient' algorithm for AABB vs. ray collision detection? I recently stumbled across Arvo's AABB vs. sphere collision algorithm, and I am wondering if there is a similarly noteworthy algorithm for this. One must-have condition for this algorithm is that I need the option of querying the result for the distance from the ray's origin to the point of collision. Having said that, if there is another, faster algorithm which does not return distance, then posting that algorithm in addition to one that does would be very helpful indeed. Please also state what the function's return argument is, and how you use it to return the distance or a 'no collision' case. For example, does it have an out parameter for the distance as well as a bool return value? Or does it simply return a float with the distance, with a value of -1 for no collision? (For those that don't know: AABB = axis-aligned bounding box.)
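
    The usual answer is the "slab" test. A hedged sketch, assuming a normalised ray direction so the returned value is a real distance, and using the float-or-minus-one convention the question mentions:

      interface AABB { min: [number, number, number]; max: [number, number, number]; }

      // Returns the distance from the ray origin to the entry point, 0 if the
      // origin is inside the box, or -1 for no collision.
      function rayVsAabb(origin: number[], dir: number[], box: AABB): number {
        let tMin = -Infinity;
        let tMax = Infinity;
        for (let axis = 0; axis < 3; axis++) {
          if (dir[axis] === 0) {
            // Parallel to this slab: miss unless the origin lies between the planes.
            if (origin[axis] < box.min[axis] || origin[axis] > box.max[axis]) return -1;
            continue;
          }
          const invD = 1 / dir[axis];
          let t0 = (box.min[axis] - origin[axis]) * invD;
          let t1 = (box.max[axis] - origin[axis]) * invD;
          if (invD < 0) [t0, t1] = [t1, t0];
          tMin = Math.max(tMin, t0);
          tMax = Math.min(tMax, t1);
          if (tMax < tMin) return -1;           // the slab intervals do not overlap
        }
        if (tMax < 0) return -1;                // box is entirely behind the ray
        return tMin >= 0 ? tMin : 0;
      }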

    Read the article

  • How can I make Maya export a mesh as double-sided?

    - by bobobobo
    I'm exporting from Maya 2009 to OBJ. The mesh I'm exporting has "Double Sided" checked in its Render Stats, but when the polygon is exported, only a single side is actually exported. What really needs to happen is that for each polygon that is double sided, two polygons are exported, facing in opposite directions. I can do this manually, but is there a way to make the OBJ exporter do it for me?
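
    I am not aware of an exporter switch for this, so here is a hedged workaround sketch (file names are hypothetical): post-process the exported .obj and append a reversed copy of every face line, which gives each polygon an opposite-facing twin. Vertex normals referenced by the face are reused as-is, which is usually fine for a quick double-sided preview.

      import * as fs from "fs";

      function makeDoubleSided(inPath: string, outPath: string): void {
        const lines = fs.readFileSync(inPath, "utf8").split(/\r?\n/);
        const out: string[] = [];
        for (const line of lines) {
          out.push(line);
          if (line.startsWith("f ")) {
            const verts = line.trim().split(/\s+/).slice(1);  // "v", "v/vt" or "v/vt/vn" tokens
            out.push("f " + verts.reverse().join(" "));       // reversed winding = opposite facing
          }
        }
        fs.writeFileSync(outPath, out.join("\n"));
      }

      // makeDoubleSided("mesh.obj", "mesh_doublesided.obj");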

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, it defines the "frustum" far too broadly: when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely at the edge. Here's my code (where size is the size in screen space, i.e. the resolution of the client area of the window I'm rendering into). Any suggestions?

      auto&& size = GetDimensions();
      D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 };
      D3DCALL(device->SetViewport(&vp));
      static const int slices = 10;
      std::vector<Object*> result;
      for(int i = 0; i < slices; i++) {
          D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = {
              D3DXVECTOR3(0, size.y, static_cast<float>(i) / slices),
              D3DXVECTOR3(size.x, 0, static_cast<float>(i) / slices),
              D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices),
              D3DXVECTOR3(0, 0, static_cast<float>(i) / slices),
              D3DXVECTOR3(0, 0, static_cast<float>(i + 1) / slices),
              D3DXVECTOR3(size.x, 0, static_cast<float>(i + 1) / slices),
              D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices),
              D3DXVECTOR3(0, size.y, static_cast<float>(i + 1) / slices)
          };
          D3DXMATRIXA16 Identity;
          D3DXMatrixIdentity(&Identity);
          D3DXVec3UnprojectArray(
              WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
              WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
              &vp, &Projection, &View, &Identity, 8);
          Math::AABB Frustrum;
          auto world_begin = std::begin(WorldSpaceFrustrumPoints);
          auto world_end = std::end(WorldSpaceFrustrumPoints);
          auto world_initial = WorldSpaceFrustrumPoints[0];
          Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x;
          Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y;
          Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z;
          Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x;
          Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y;
          Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z;
          auto slices_result = ObjectTree.collision(Frustrum);
          result.insert(result.end(), slices_result.begin(), slices_result.end());
      }
      return result;
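
    For comparison, a hedged sketch of the more conventional approach (TypeScript used for illustration): instead of wrapping the unprojected corners in one world-axis-aligned box, which covers far more space than the frustum itself, extract the six frustum planes from the combined view-projection matrix (Gribb/Hartmann style, row-vector D3D conventions assumed) and reject an object only when its AABB lies completely behind one of the planes.

      type Plane = { a: number; b: number; c: number; d: number };  // ax + by + cz + d >= 0 is "inside"

      // m is the 4x4 view*projection matrix, indexed m[row][col], row-vector style (v * M).
      function extractFrustumPlanes(m: number[][]): Plane[] {
        const col = (i: number) => [m[0][i], m[1][i], m[2][i], m[3][i]];
        const x = col(0), y = col(1), z = col(2), w = col(3);
        const plane = (v: number[]): Plane => ({ a: v[0], b: v[1], c: v[2], d: v[3] });
        const add = (u: number[], v: number[], s: number) => u.map((ui, i) => ui + s * v[i]);
        return [
          plane(add(w, x, +1)),   // left
          plane(add(w, x, -1)),   // right
          plane(add(w, y, +1)),   // bottom
          plane(add(w, y, -1)),   // top
          plane(z),               // near (D3D clip z in [0, 1])
          plane(add(w, z, -1)),   // far
        ];
      }

      interface WorldAABB { min: number[]; max: number[]; }

      // Outside if even the corner that is most "in front of" a plane is behind it.
      function aabbOutsideFrustum(box: WorldAABB, planes: Plane[]): boolean {
        return planes.some(p => {
          const px = p.a >= 0 ? box.max[0] : box.min[0];
          const py = p.b >= 0 ? box.max[1] : box.min[1];
          const pz = p.c >= 0 ? box.max[2] : box.min[2];
          return p.a * px + p.b * py + p.c * pz + p.d < 0;
        });
      }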

    Read the article

  • How can I replicate the look and limitations of the Super NES?

    - by Mikalichov
    I am looking to produce graphics with the same limitations / look as in the Super NES era; I am specifically looking for graphics similar to Chrono Trigger / FF6. It would be a lot easier if I had an idea of the resolution / dpi I am supposed to use. I found that the technical specs for the SNES are:

      Progressive: 256 × 224, 512 × 224, 256 × 239, 512 × 239
      Interlaced: 512 × 448, 512 × 478

    But even using these resolutions, it is pointless if I set it at 72 dpi, as I will still have possibly very detailed graphics (that is the main thing: I don't want detailed graphics, I want to go pixelated). I figured it might be related to the sprite size limit, i.e.:

      Sprites can be 8 × 8, 16 × 16, 32 × 32, or 64 × 64 pixels, each using one of eight 16-color palettes and tiles from one of two blocks of 256 in VRAM. Up to 32 sprites and 34 8 × 8 sprite tiles may appear on any one line.

    This would work for sprites (characters, objects), but what about maps? Are they built entirely from 8 × 8 tiles? And then, at what resolution is the end result displayed? It might seem like I am giving the question and the answers at the same time, but all of these are suppositions I am making, so could someone confirm or correct them?
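
    On the resolution question, a hedged sketch of the usual trick: DPI is irrelevant for on-screen pixels, so author and render everything at the native 256 × 224, then upscale by an integer factor with nearest-neighbour filtering so the pixels stay chunky. (And as far as I know, SNES backgrounds are indeed built from 8 × 8 tiles arranged into tilemaps.) Canvas-flavoured TypeScript for illustration:

      const NATIVE_W = 256;
      const NATIVE_H = 224;

      const low = document.createElement("canvas");   // offscreen, native-resolution render target
      low.width = NATIVE_W;
      low.height = NATIVE_H;

      function present(screen: HTMLCanvasElement, scale: number): void {
        const ctx = screen.getContext("2d");
        if (!ctx) return;
        ctx.imageSmoothingEnabled = false;            // nearest-neighbour: no blurry upscaling
        ctx.drawImage(low, 0, 0, NATIVE_W * scale, NATIVE_H * scale);
      }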

    Read the article

  • Switching between Discrete and Integrated GPUs

    - by void-pointer
    Hello everyone, I develop CUDA applications on my Alienware M17x portable back-breaker, which has two discrete GTX 285M GPUs and one integrated GeForce 9400M GPU. I can currently switch between them using NVIDIA's software, but I would like the ability to do so within my applications for purposes of benchmarking and general convenience. Apparently this requires the "NDA version" of NVIDIA's Driver API, which I know not how to obtain. Would using this API be the only way to accomplish what I seek, and if so, how would I obtain it? A solution using Windows APIs would also be acceptable, though less preferable to one which would leverage a cross-platform API. I have created a similar thread concerning the matter on NVIDIA's forum, which is down at the time of this writing. Thanks for reading my question; it is much appreciated!

    Read the article

  • CCSpriteHole in cocos2d 2.0?

    - by rakkarage
    i was using this cocos2d class CCSpriteHole in cocos2d 1.0 fine... http://jpsarda.tumblr.com/post/15779708304/new-cocos2d-iphone-extensions-a-progress-bar-and-a i am trying to convert it to cocos2d 2.0... i got it to compile by changing glVertexPointer to glVertexAttribPointer like in the 2.0 version of CCSpriteScale9 here http://jpsarda.tumblr.com/post/9162433577/scale9grid-for-cocos2d and changing contentSizeInPixels_ to contentSize_... -(id) init { if( (self=[super init]) ) { opacityModifyRGB_ = YES; opacity_ = 255; color_ = colorUnmodified_ = ccWHITE; capSize=capSizeInPixels=CGSizeZero; //Not used blendFunc_.src = CC_BLEND_SRC; blendFunc_.dst = CC_BLEND_DST; // update texture (calls updateBlendFunc) [self setTexture:nil]; // default transform anchor anchorPoint_ = ccp(0.5f, 0.5f); vertexDataCount=24; vertexData = (ccV2F_C4F_T2F*) malloc(vertexDataCount * sizeof(ccV2F_C4F_T2F)); [self setTextureRectInPixels:CGRectZero untrimmedSize:CGSizeZero]; } return self; } -(id) initWithTexture:(CCTexture2D*)texture rect:(CGRect)rect { NSAssert(texture!=nil, @"Invalid texture for sprite"); // IMPORTANT: [self init] and not [super init]; if( (self = [self init]) ) { [self setTexture:texture]; [self setTextureRect:rect]; } return self; } -(id) initWithTexture:(CCTexture2D*)texture { NSAssert(texture!=nil, @"Invalid texture for sprite"); CGRect rect = CGRectZero; rect.size = texture.contentSize; return [self initWithTexture:texture rect:rect]; } -(id) initWithFile:(NSString*)filename { NSAssert(filename!=nil, @"Invalid filename for sprite"); CCTexture2D *texture = [[CCTextureCache sharedTextureCache] addImage: filename]; if( texture ) return [self initWithTexture:texture]; return nil; } +(id)spriteWithFile:(NSString*)f { return [[self alloc] initWithFile:f]; } - (void) dealloc { if (vertexData) free(vertexData); } -(void) updateColor { ccColor4F color4; color4.r=(float)color_.r/255.0f; color4.g=(float)color_.g/255.0f; color4.b=(float)color_.b/255.0f; color4.a=(float)opacity_/255.0f; for (int i=0; i<vertexDataCount; i++) { vertexData[i].colors=color4; } } -(void)updateTextureCoords:(CGRect)rect { CCTexture2D *tex = texture_; if(!tex) return; float atlasWidth = (float)tex.pixelsWide; float atlasHeight = (float)tex.pixelsHigh; float left,right,top,bottom; left = rect.origin.x/atlasWidth; right = left + rect.size.width/atlasWidth; top = rect.origin.y/atlasHeight; bottom = top + rect.size.height/atlasHeight; // // |/|/|/| // CGSize capTexCoordsSize=CGSizeMake(capSizeInPixels.width/atlasWidth, capSizeInPixels.height/atlasHeight); // From left to right //Top band // Left vertexData[0].texCoords=(ccTex2F){left,top}; vertexData[1].texCoords=(ccTex2F){left,top+capTexCoordsSize.height}; vertexData[2].texCoords=(ccTex2F){left+capTexCoordsSize.width,top}; vertexData[3].texCoords=(ccTex2F){left+capTexCoordsSize.width,top+capTexCoordsSize.height}; // Center vertexData[4].texCoords=(ccTex2F){right-capTexCoordsSize.width,top}; vertexData[5].texCoords=(ccTex2F){right-capTexCoordsSize.width,top+capTexCoordsSize.height}; // Right vertexData[6].texCoords=(ccTex2F){right,top}; vertexData[7].texCoords=(ccTex2F){right,top+capTexCoordsSize.height}; //Center band // Left vertexData[8].texCoords=(ccTex2F){left,bottom-capTexCoordsSize.height}; vertexData[9].texCoords=(ccTex2F){left,top+capTexCoordsSize.height}; vertexData[10].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom-capTexCoordsSize.height}; vertexData[11].texCoords=(ccTex2F){left+capTexCoordsSize.width,top+capTexCoordsSize.height}; // Center 
vertexData[12].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom-capTexCoordsSize.height}; vertexData[13].texCoords=(ccTex2F){right-capTexCoordsSize.width,top+capTexCoordsSize.height}; // Right vertexData[14].texCoords=(ccTex2F){right,bottom-capTexCoordsSize.height}; vertexData[15].texCoords=(ccTex2F){right,top+capTexCoordsSize.height}; //Bottom band //Left vertexData[16].texCoords=(ccTex2F){left,bottom}; vertexData[17].texCoords=(ccTex2F){left,bottom-capTexCoordsSize.height}; vertexData[18].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom}; vertexData[19].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom-capTexCoordsSize.height}; // Center vertexData[20].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom}; vertexData[21].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom-capTexCoordsSize.height}; // Right vertexData[22].texCoords=(ccTex2F){right,bottom}; vertexData[23].texCoords=(ccTex2F){right,bottom-capTexCoordsSize.height}; } -(void) updateVertices { float left=0; //-spriteSizeInPixels.width*0.5f; float right=left+contentSize_.width; float bottom=0; //-spriteSizeInPixels.height*0.5f; float top=bottom+contentSize_.height; float holeLeft=holeRect.origin.x*CC_CONTENT_SCALE_FACTOR(); float holeRight=holeLeft+holeRect.size.width*CC_CONTENT_SCALE_FACTOR(); float holeBottom=holeRect.origin.y*CC_CONTENT_SCALE_FACTOR(); float holeTop=holeBottom+holeRect.size.height*CC_CONTENT_SCALE_FACTOR(); // // |/|/|/| // // From left to right //Top band // Left vertexData[0].vertices=(ccVertex2F){left,top}; vertexData[1].vertices=(ccVertex2F){left,holeTop}; vertexData[2].vertices=(ccVertex2F){holeLeft,top}; vertexData[3].vertices=(ccVertex2F){holeLeft,holeTop}; // Center vertexData[4].vertices=(ccVertex2F){holeRight,top}; vertexData[5].vertices=(ccVertex2F){holeRight,holeTop}; // Right vertexData[6].vertices=(ccVertex2F){right,top}; vertexData[7].vertices=(ccVertex2F){right,holeTop}; //Center band // Left vertexData[8].vertices=(ccVertex2F){left,holeBottom}; vertexData[9].vertices=(ccVertex2F){left,holeTop}; vertexData[10].vertices=(ccVertex2F){holeLeft,holeBottom}; vertexData[11].vertices=(ccVertex2F){holeLeft,holeTop}; // Center vertexData[12].vertices=(ccVertex2F){holeRight,holeBottom}; vertexData[13].vertices=(ccVertex2F){holeRight,holeTop}; // Right vertexData[14].vertices=(ccVertex2F){right,holeBottom}; vertexData[15].vertices=(ccVertex2F){right,holeTop}; //Bottom band //Left vertexData[16].vertices=(ccVertex2F){left,bottom}; vertexData[17].vertices=(ccVertex2F){left,holeBottom}; vertexData[18].vertices=(ccVertex2F){holeLeft,bottom}; vertexData[19].vertices=(ccVertex2F){holeLeft,holeBottom}; // Center vertexData[20].vertices=(ccVertex2F){holeRight,bottom}; vertexData[21].vertices=(ccVertex2F){holeRight,holeBottom}; // Right vertexData[22].vertices=(ccVertex2F){right,bottom}; vertexData[23].vertices=(ccVertex2F){right,holeBottom}; } -(void) setHole:(CGRect)r inRect:(CGRect)totalSurface { holeRect=r; self.contentSize=totalSurface.size; holeRect.origin=ccpSub(holeRect.origin,totalSurface.origin); CGPoint holeCenter=ccp(holeRect.origin.x+holeRect.size.width*0.5f,holeRect.origin.y+holeRect.size.height*0.5f); self.anchorPoint=ccp(holeCenter.x/contentSize_.width,holeCenter.y/contentSize_.height); //[self updateTextureCoords:rectInPixels_]; [self updateVertices]; [self updateColor]; } -(void) draw { BOOL newBlend = NO; if( blendFunc_.src != CC_BLEND_SRC || blendFunc_.dst != CC_BLEND_DST ) { newBlend = YES; glBlendFunc( blendFunc_.src, blendFunc_.dst ); } glBindTexture(GL_TEXTURE_2D, 
[texture_ name]); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); if( newBlend ) glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST); } -(void)setTextureRectInPixels:(CGRect)rect untrimmedSize:(CGSize)untrimmedSize { rectInPixels_ = rect; rect_ = CC_RECT_PIXELS_TO_POINTS( rect ); //[self setContentSizeInPixels:untrimmedSize]; [self updateTextureCoords:rectInPixels_]; } -(void)setTextureRect:(CGRect)rect { CGRect rectInPixels = CC_RECT_POINTS_TO_PIXELS( rect ); [self setTextureRectInPixels:rectInPixels untrimmedSize:rectInPixels.size]; } // // RGBA protocol // #pragma mark CCSpriteHole - RGBA protocol -(GLubyte) opacity { return opacity_; } -(void) setOpacity:(GLubyte) anOpacity { opacity_ = anOpacity; // special opacity for premultiplied textures if( opacityModifyRGB_ ) [self setColor: (opacityModifyRGB_ ? colorUnmodified_ : color_ )]; [self updateColor]; } - (ccColor3B) color { if(opacityModifyRGB_){ return colorUnmodified_; } return color_; } -(void) setColor:(ccColor3B)color3 { color_ = colorUnmodified_ = color3; if( opacityModifyRGB_ ){ color_.r = color3.r * opacity_/255; color_.g = color3.g * opacity_/255; color_.b = color3.b * opacity_/255; } [self updateColor]; } -(void) setOpacityModifyRGB:(BOOL)modify { ccColor3B oldColor = self.color; opacityModifyRGB_ = modify; self.color = oldColor; } -(BOOL) doesOpacityModifyRGB { return opacityModifyRGB_; } #pragma mark CCSpriteHole - CocosNodeTexture protocol -(void) updateBlendFunc { if( !texture_ || ! [texture_ hasPremultipliedAlpha] ) { blendFunc_.src = GL_SRC_ALPHA; blendFunc_.dst = GL_ONE_MINUS_SRC_ALPHA; [self setOpacityModifyRGB:NO]; } else { blendFunc_.src = CC_BLEND_SRC; blendFunc_.dst = CC_BLEND_DST; [self setOpacityModifyRGB:YES]; } } -(void) setTexture:(CCTexture2D*)texture { // accept texture==nil as argument NSAssert( !texture || [texture isKindOfClass:[CCTexture2D class]], @"setTexture expects a CCTexture2D. Invalid argument"); texture_ = texture; [self updateBlendFunc]; } -(CCTexture2D*) texture { return texture_; } @end but now positioning and scaling seem to not work? and it starts in the wrong position... but changing the opacity still works. so i was wondering if anyone can see why my 2.0 version is not working? or if maybe there is a better way to do a sprite hole with cocos2d/opengl 2.0? shaders? thanks

    Read the article

  • Circle vs Edge collision detection / resolution

    - by topheman
    I made a JavaScript class, Ball.js, that handles physics interactions between balls, as well as painting. In v1.0, ball vs. ball collision detection and resolution is handled well. In the next version (v2), I'm trying to add edge-collision handling, and I'm having some problems; maybe you will be able to help me. All the v2 branch source code is in the GitHub repository: https://github.com/topheman/Ball.js/tree/v2 and the v2 demos (where you can see the bug I will be talking about) are at: http://labs.topheman.com/Ball-v2/#help As you will see in the demo, I have two major problems that I'm having a really hard time solving in Ball.js:

      method resolveEdgeCollision: the bounce angle is inconsistent
      method checkEdgeCollision: if the ball's velocity (the length that it runs each frame) is higher than its diameter, it will eventually pass through an edge without triggering any collision

    Any ideas?
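
    Two hedged sketches of the standard fixes, using made-up field names rather than Ball.js's real API. For the tunnelling problem, sub-step the movement so the ball never advances more than a fraction of its radius between edge tests; for the inconsistent bounce, reflect the velocity about the edge's unit normal.

      interface Ball {
        x: number; y: number;
        vx: number; vy: number;   // velocity in pixels per frame
        radius: number;
      }

      function moveWithSubsteps(ball: Ball, checkEdgeCollision: (b: Ball) => void): void {
        const dist = Math.hypot(ball.vx, ball.vy);
        const steps = Math.max(1, Math.ceil(dist / (ball.radius * 0.5)));  // at most half a radius per step
        for (let i = 0; i < steps; i++) {
          ball.x += ball.vx / steps;
          ball.y += ball.vy / steps;
          checkEdgeCollision(ball);          // the existing per-frame test, run per sub-step
        }
      }

      // Consistent bounce: v' = v - 2 (v . n) n, with (nx, ny) the edge's unit normal.
      function reflect(ball: Ball, nx: number, ny: number): void {
        const dot = ball.vx * nx + ball.vy * ny;
        ball.vx -= 2 * dot * nx;
        ball.vy -= 2 * dot * ny;
      }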

    Read the article

  • Blender 2.64, what are the actual hot-keys for certain actions

    - by Shivan Dragon
    I know this sounds mega lame, but I've looked for hotkeys for certain actions, first in the application's User Settings (where I didn't find them), then in the official documentation (where I did find some of them, but they're not the right ones): http://wiki.blender.org/index.php/Doc:2.4/Manual/3D_interaction/Transform_Control/Manipulators (Ctrl-Alt-S is recommended for Scale, but instead it opens the Save As... window; I think these changed in the latest versions, but they forgot to update the docs). So then, what are the hotkeys for:

      selecting the translate manipulator
      selecting the rotate manipulator
      selecting the scale manipulator

    And in Edit mode:

      selecting vertices (editing)
      selecting edges (editing)
      selecting faces (editing)

    Thanks.

    Read the article

  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification. But, I am not sure about it, so am deleting the comment. Buildings in contrast should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D in version 9.0 (d3dx9.lib) used to have classes to do progressive mesh simplification. See: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx

    Read the article

  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using SimpleGestureDetector from the libgdx-users Wiki as my InputProcessor. I set it in the created() method: Gdx.input.setInputProcess(new SimpleDirectionGestureDetector(charController)); charController is my class which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class and it is responsible for moving the player character. However the character doesn't change direction when I'm performing a fling action in any direction. I've checked and the fling() method in the SimpleDirectionGesture class doesn't get called and I have no idea why, since everything seems good. What am I doing wrong?

    Read the article

  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places on setting the OutputMerger's BlendState to enable alpha/transparent textures on a mesh. The setup is as follows:

      var transParentOp = new BlendStateDescription {
          SourceBlend = BlendOption.SourceAlpha,
          DestinationBlend = BlendOption.InverseDestinationAlpha,
          BlendOperation = BlendOperation.Add,
          SourceAlphaBlend = BlendOption.Zero,
          DestinationAlphaBlend = BlendOption.Zero,
          AlphaBlendOperation = BlendOperation.Add,
      };

    I've made up a sample that displays 3 meshes A, B and C, where each overlaps another. They are drawn sequentially, A to C, with A nearest the camera and C furthest. So the expected output is that A is see-through and shows part of B and C, and B is see-through and shows part of C. But what I got was that none of them are see-through in that order; if I move C closer to the camera, it becomes semi-transparent and shows A and B, and B, if moved closer to the camera, shows A but not C. Sort of the reverse. So it seems that I need to draw them in reverse order, where the mesh furthest from the camera is drawn first and the nearest is drawn last. Is it supposed to be done this way, or can I actually configure the blend state so it works no matter in which order I draw them? Thanks
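
    For what it's worth, with conventional alpha blending (SourceAlpha / InverseSourceAlpha is the more common destination factor than InverseDestinationAlpha), transparent geometry does normally have to be drawn after opaque geometry and sorted back to front; the blend state alone cannot make the order irrelevant. A hedged sketch of the sort, in TypeScript for brevity:

      interface DrawItem { mesh: unknown; worldPos: [number, number, number]; }

      function sortBackToFront(items: DrawItem[], cameraPos: [number, number, number]): DrawItem[] {
        const distSq = (p: [number, number, number]) =>
          (p[0] - cameraPos[0]) ** 2 + (p[1] - cameraPos[1]) ** 2 + (p[2] - cameraPos[2]) ** 2;
        return [...items].sort((a, b) => distSq(b.worldPos) - distSq(a.worldPos));  // farthest first
      }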

    Read the article

  • Terrain square loading

    - by AndroidXTr3meN
    Games like Skyrim, Morrowind, and others divide the terrain into quads or squares, if I'm correct. The player is always in square 5:

      1 | 2 | 3
      4 | 5 | 6
      7 | 8 | 9

    Whenever you cross a border, you unload and load the new "areas". But if the player steps just over the edge and then a second later steps back into the previous area, a lot of unnecessary loading and unloading is done. Is there a general approach to this? I don't think games like Skyrim have this issue. Cheers!
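
    A hedged sketch of the usual fix, with made-up sizes: add hysteresis, i.e. only recentre the 3x3 block once the player is some margin inside the neighbouring cell, so dithering back and forth on the border cannot trigger repeated loads. (Keeping recently unloaded cells in a small cache helps too.)

      const CELL_SIZE = 512;    // world units per terrain square
      const MARGIN = 64;        // hysteresis band

      // center holds the grid coordinates of square 5; the player's world position is (px, pz).
      function maybeRecentre(center: { cx: number; cz: number }, px: number, pz: number): boolean {
        const localX = px - center.cx * CELL_SIZE;
        const localZ = pz - center.cz * CELL_SIZE;
        let moved = false;
        if (localX < -MARGIN) { center.cx -= 1; moved = true; }
        else if (localX > CELL_SIZE + MARGIN) { center.cx += 1; moved = true; }
        if (localZ < -MARGIN) { center.cz -= 1; moved = true; }
        else if (localZ > CELL_SIZE + MARGIN) { center.cz += 1; moved = true; }
        return moved;   // when true, load the new 3x3 neighbourhood and unload the rest
      }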

    Read the article

  • Deferred contexts and inheriting state from the immediate context

    - by dreijer
    I took my first stab at using deferred contexts in DirectX 11 today. Basically, I created my deferred context using CreateDeferredContext() and then drew a simple triangle strip with it. Early on in my test application, I call OMSetRenderTargets() on the immediate context in order to render to the swap chain's back buffer. Now, after having read the documentation on MSDN about deferred contexts, I assumed that calling ExecuteCommandList() on the immediate context would execute all of the deferred commands as "an extension" to the commands that had already been executed on the immediate context, i.e. the triangle strip I rendered in the deferred context would be rendered to the swap chain's back buffer. That didn't seem to be the case, however. Instead, I had to manually pull out the immediate context's render target (using OMGetRenderTargets()) and then set it on the deferred context with OMSetRenderTargets(). Am I doing something wrong or is that the way deferred contexts work?

    Read the article

  • Java enum pairs / "subenum" or what exactly?

    - by vemalsar
    I have an RPG-style Item class, and I store the type of the item in an enum (itemType.sword). I want to store a subtype too (itemSubtype.long), but I want to express the relation between the two data types (a sword can be long, short, etc., but a shield can't be long or short, only round, tower, etc.). I know this is not valid source code, but it is similar to what I want:

      enum type { sword; } // not valid code!
      enum swordSubtype extends type.sword { short, long }

    Question: how can I define this connection between the two data types (or more exactly, between two values of the data types)? What is the most simple and standard way?

      Array-like data with all valid (itemType, itemSubtype) enum pairs, or (itemType, itemSubtype[]) so there can be more than one subtype per type; this would be the best, but how can I construct it in the simplest way?
      A special enum with a "subenum" set, or a second-level enum, or anything else if such a thing exists
      A 2-dimensional "canBePairs" array with itemType and itemSubtype dimensions and boolean elements, where "true" means the itemType (first dimension) and itemSubtype (second dimension) are okay together and "false" means they are not
      Any other, better idea

    Thank you very much!
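
    The question is about Java, where this maps naturally onto an EnumMap<ItemType, EnumSet<ItemSubtype>> (or onto giving each ItemType constant its own allowed-subtype set). The same idea sketched in TypeScript, purely for illustration: one table of valid pairs, checked wherever an item is constructed.

      enum ItemType { Sword = "sword", Shield = "shield" }
      enum ItemSubtype { Short = "short", Long = "long", Round = "round", Tower = "tower" }

      const allowedSubtypes: Record<ItemType, ItemSubtype[]> = {
        [ItemType.Sword]:  [ItemSubtype.Short, ItemSubtype.Long],
        [ItemType.Shield]: [ItemSubtype.Round, ItemSubtype.Tower],
      };

      function isValidPair(type: ItemType, sub: ItemSubtype): boolean {
        return allowedSubtypes[type].includes(sub);
      }

      console.log(isValidPair(ItemType.Sword, ItemSubtype.Long));   // true
      console.log(isValidPair(ItemType.Shield, ItemSubtype.Long));  // false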

    Read the article

  • Keypress Left is called twice in Update when key is pressed only once

    - by Simran kaur
    I have a piece of code that changes the position of the player when the left key is pressed. It is inside the Update() function. I know Update is called multiple times, but since I have an if statement to check whether the left arrow is pressed, it should update only once. I have tested with a print statement that once pressed, it gets called twice. Problem: the position is updated twice when the key is pressed only once. Below is the structure of my code:

      void Update()
      {
          if (Input.GetKeyDown (KeyCode.LeftArrow))
          {
              print ("PRESSEEEEEEEEEEEEEEEEEEDDDDDDDDDDDDDD");
          }
      }

    I looked it up on the web and what was suggested is this:

      if (Event.current.type == EventType.KeyDown && Event.current.keyCode == KeyCode.LeftArrow)
      {
          print("pressed");
      }

    But it gives me an error that says: Object reference not set to an instance of an object. How can I fix this?

    Read the article

  • A*: how do I make a natural-looking path?

    - by user11177
    I've been reading this: http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html But there are some things I don't understand. For example, the article says to use something like this for pathfinding with diagonal movement:

      function heuristic(node) =
          dx = abs(node.x - goal.x)
          dy = abs(node.y - goal.y)
          return D * max(dx, dy)

    I don't know how to set D to get a natural-looking path like in the article. I set D to the lowest cost between adjacent squares, as it said, and I don't understand what was meant by the remark that the heuristic should be 4*D; that does not seem to change anything. This is my heuristic function and move function:

      def heuristic(self, node, goal):
          D = 10
          dx = abs(node.x - goal.x)
          dy = abs(node.y - goal.y)
          return D * max(dx, dy)

      def move_cost(self, current, node):
          cross = abs(current.x - node.x) == 1 and abs(current.y - node.y) == 1
          return 19 if cross else 10

    Result:
    The smooth sailing path we want to happen:
    The rest of my code: http://pastebin.com/TL2cEkeX
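
    A hedged sketch of the usual cure for the zig-zag look (shown in TypeScript; the same change drops straight into the Python heuristic above): make the heuristic use the same costs as move_cost, i.e. the octile distance with 10 per straight step and 19 per diagonal, and add a tiny tie-breaker so equally cheap paths stop alternating.

      function heuristic(nodeX: number, nodeY: number, goalX: number, goalY: number): number {
        const D = 10;    // straight step cost, matching move_cost
        const D2 = 19;   // diagonal step cost, matching move_cost
        const dx = Math.abs(nodeX - goalX);
        const dy = Math.abs(nodeY - goalY);
        const octile = D * (dx + dy) + (D2 - 2 * D) * Math.min(dx, dy);
        return octile * 1.001;   // slight scale-up as a tie-breaker
      }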

    Read the article

  • (SOLVED) Problems Rendering Text in OpenGL Using FreeType

    - by Sean M.
    I've been following both the FreeType2 tutorial and the WikiBooks tutorial, trying to combine things from them both in order to load and render fonts using the FreeType library. I used the font-loading code from the FreeType2 tutorial and tried to implement the rendering code from the WikiBooks tutorial (tried being the keyword, as I'm still trying to learn modern OpenGL; I'm using 3.2). Everything loads correctly and the shader program for rendering the text works, but I can't get the text to render. I'm 99% sure that it has something to do with how I am passing data to the shader, or how I set up the screen. These are the code segments that handle OpenGL initialization, as well as font initialization and rendering:

      //Init glfw
      if (!glfwInit())
      {
          fprintf(stderr, "GLFW Initialization has failed!\n");
          exit(EXIT_FAILURE);
      }
      printf("GLFW Initialized.\n");

      //Process the command line arguments
      processCmdArgs(argc, argv);

      //Create the window
      glfwWindowHint(GLFW_SAMPLES, g_aaSamples);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
      g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard",
          g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr);
      if (!g_mainWindow)
      {
          fprintf(stderr, "Could not create GLFW window!\n");
          closeOGL();
          exit(EXIT_FAILURE);
      }
      glfwMakeContextCurrent(g_mainWindow);
      printf("Window and OpenGL rendering context created.\n");

      glClearColor(0.2f, 0.2f, 0.2f, 1.0f);

      //Are these necessary for Modern OpenGL (3.0+)?
      glViewport(0, 0, g_screenWidth, g_screenHeight);
      glOrtho(0, g_screenWidth, g_screenHeight, 0, -1, 1);

      //Init glew
      int err = glewInit();
      if (err != GLEW_OK)
      {
          fprintf(stderr, "GLEW initialization failed!\n");
          fprintf(stderr, "%s\n", glewGetErrorString(err));
          closeOGL();
          exit(EXIT_FAILURE);
      }
      printf("GLEW initialized.\n");

    Here is the font file (it's slightly too big to post): CFont.h/CFont.cpp. Here is the solution zipped up, if anyone feels they need the entire thing: https://dl.dropboxusercontent.com/u/36062916/VoxelShipyard.zip If anyone could take a look at the code, it would be greatly appreciated. Also, if someone has a tutorial that is a little more user friendly, that would also be appreciated. Thanks.
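
    A hedged note on the "Are these necessary?" comment: glViewport is still needed, but glOrtho belongs to the removed fixed-function matrix stack and is not available in a 3.2 core profile; the equivalent matrix has to be built on the CPU and uploaded as a shader uniform. A sketch of that matrix (TypeScript used for illustration; the math is identical in C++):

      function orthoMatrix(l: number, r: number, b: number, t: number, n: number, f: number): Float32Array {
        // Column-major, matching what glUniformMatrix4fv expects with transpose = false.
        return new Float32Array([
          2 / (r - l), 0, 0, 0,
          0, 2 / (t - b), 0, 0,
          0, 0, -2 / (f - n), 0,
          -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1,
        ]);
      }

      // For pixel-space text like the question's glOrtho(0, w, h, 0, -1, 1):
      // const proj = orthoMatrix(0, screenWidth, screenHeight, 0, -1, 1);
      // then gl_Position = proj * vec4(pos, 0.0, 1.0) in the vertex shader.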

    Read the article

  • (Where) Can I learn to create art for my 2D games?

    - by Poorly paid coder
    I'm currently bad at drawing. If I want to create something that looks acceptable, it usually takes me hours and hours of fiddling around just to get the basic looks right. I think I'm not completely skill-less; I just lack simple drawing techniques. Am I a hopeless case? Where is a good place to start learning to draw for 2D games? I'd like to be able to create acceptably good backgrounds, terrains / tilemaps, characters and weapons.

    Read the article

  • Calculating the rotational force of a 2D sprite

    - by Jon
    I am wondering if someone has an elegant way of calculating the following scenario. I have an object made up of (n) squares of random shapes, but we will pretend they are all rectangles. We are dealing with no gravity, so consider the object in space, from a top-down perspective. I am applying a force to the object at a specific square (as illustrated below). How do I calculate the rotational angle, based on the force being applied and the location at which it is applied? If applied at the center square, the object would go straight. How should it behave the further I move from the center? How do I calculate the rotational velocity?
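
    A hedged sketch of the standard 2D rigid-body relations this scenario reduces to: the force divided by mass gives linear acceleration regardless of where it is applied, and the offset of the application point from the centre of mass decides how much torque, and therefore spin, it also produces.

      interface Body2D {
        mass: number;
        inertia: number;          // moment of inertia about the centre of mass
        vx: number; vy: number;   // linear velocity
        omega: number;            // angular velocity, radians per second
      }

      // r = application point minus centre of mass, f = force vector
      function applyForce(body: Body2D, rx: number, ry: number, fx: number, fy: number, dt: number): void {
        const torque = rx * fy - ry * fx;            // 2D cross product r x F
        body.vx += (fx / body.mass) * dt;            // a = F / m
        body.vy += (fy / body.mass) * dt;
        body.omega += (torque / body.inertia) * dt;  // alpha = torque / I
      }

      // For one solid w x h rectangle of mass m, I = m * (w*w + h*h) / 12; for a shape made of
      // several squares, sum each square's I about the shared centre (parallel-axis theorem:
      // I_square + m_square * d * d).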

    Read the article
