Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • How do I get a collision event from a KActor subclass?

    - by Almo
    I have a subclass of KActor, and I want an event when it collides with things. The RigidBodyCollision event seems to be what I want, according to http://wiki.beyondunreal.com/UE3:Actor_events_%28UDK%29#RigidBodyCollision, which says it is called when a PrimitiveComponent this Actor owns has:

        - bNotifyRigidBodyCollision set to true
        - ScriptRigidBodyCollisionThreshold greater than 0
        - involvement in a physics collision where the relative velocity exceeds ScriptRigidBodyCollisionThreshold

    As far as I can tell, I have these set up, but the event is not called for any collisions (KActor-KActor, KActor-world geometry, etc.). Is there something else I need to do?

    Read the article

  • Where in a typical rendering pipeline does visibility and shading occur?

    - by user29163
    I am taking a computer graphics course. The book and the lecture notes are vague on the order of flow between the different steps in the rendering process. For example, if we have specified a view in a scene and then want to perform a projection transformation for that view, we have to go through a sequence of transformations. In the end we end up with a normalized "viewcube", ready to be mapped to 2D after clipping. But why do we end up with a cube (i.e. a 3D thing) when a projection projects 3D objects to 2D? (Is the depth information lost?) The other line of reasoning is that all information needed further on is stored within the "cube", and that visibility detection and shading are performed with respect to this cube, after which we perform rasterization.
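
    The second line of reasoning in the question is the right one: the perspective divide keeps z as a normalized depth value precisely so that visibility (depth testing) and shading can happen per fragment after rasterization. A minimal sketch of the divide in Java (a hypothetical point representation, not from any particular library):

        // Clip-space point (x, y, z, w) after the projection matrix.
        // The perspective divide maps it to normalized device coordinates;
        // note that z is kept, not discarded - it becomes the depth value
        // the rasterizer later uses for visibility tests.
        static double[] toNdc(double x, double y, double z, double w) {
            return new double[] { x / w, y / w, z / w };
        }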

    Read the article

  • Good resources for learning modern OpenGL (3.0 or later)?

    - by MatterGoal
    I am searching for a good resource to start with OpenGL (3.0 or later). I found a lot of books, but none of them can be considered a good resource! Two examples:

        - OpenGL Programming Guide (7th edition), http://www.amazon.com/exec/obidos/ASIN/0321552628/khongrou-20 - this is FULL of deprecated material! Almost every chapter begins with a note about that.
        - OpenGL Superbible (5th Edition), http://www.amazon.com/exec/obidos/ASIN/0321712617/khongrou-20 - this book uses a library created by the author to explain the main topics, hiding exactly what you want to learn! I don't want to learn how to use the author's library; I want to learn OpenGL!

    I hope you understand this is not the same question as "hey, I'm not able to use Google... tell me how to learn OpenGL". I've just finished a long and deep search, but I can't find a good and complete resource for learning the "new" OpenGL that avoids deprecated topics. Can someone point me in the right direction? I know C++ and I have 10 years of experience in development. I want to spend time on this and learn it deeply, so where can I find a good resource? (Please feel free to edit my question; my English is terrible!)

    Read the article

  • What is the primary use of Vertex Buffer Objects?

    - by sensae
    From what I've read, it seems VBOs are purely for performance. I'm working on a very rudimentary learning project in LWJGL and I'm just trying to figure out which more advanced features of the library I should be delving into, and what their use is. My understanding is that VBOs allow a person to keep vertices in VRAM while they aren't currently being drawn in a scene. In my case, I'm just drawing quads and performance probably isn't a concern at all, but I'm trying to piece together what's happening under the hood. If I'm drawing quads directly, I'm drawing from CPU memory, correct? Also, if I'm not doing any checks for visibility, does that mean I'm rendering absolutely everything in the "scene", regardless of whether it's in view? Are VBOs a way to store objects and only render what's needed?
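
    On the last question: no - a VBO is only a chunk of GPU-side memory for vertex data; visibility culling is a separate concern you implement yourself (e.g. frustum culling). A minimal LWJGL 2 sketch of creating one (buffer contents and names are illustrative):

        import java.nio.FloatBuffer;
        import org.lwjgl.BufferUtils;
        import org.lwjgl.opengl.GL15;

        public class QuadVbo {
            // Upload a quad's vertices to GPU memory once; afterwards the
            // driver can draw from VRAM instead of re-reading CPU memory
            // every frame (which is what immediate-mode glVertex calls do).
            public static int createQuadVbo() {
                float[] quad = { -1, -1,   1, -1,   1, 1,   -1, 1 };
                FloatBuffer data = BufferUtils.createFloatBuffer(quad.length);
                data.put(quad).flip();

                int vbo = GL15.glGenBuffers();
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_STATIC_DRAW);
                return vbo;
            }
        }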

    Read the article

  • Limiting the speed of a dragged sprite in Cocos2dx

    - by Frozsht
    I am trying to drag a row of sprites using ccTouchesMoved. By that I mean there is a row of sprites (colored squares) lined up next to each other, and if I grab one with a touch, the rest follow it. However, if the sprite selected by the touch moves too fast, it creates a slight gap between it and the following sprites. How do I go about limiting the speed at which I can drag the sprite with ccTouchesMoved? This is the only solution I could think of for my problem. If anyone has another suggestion to prevent this gap from appearing, I would appreciate it.
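
    One way to implement the speed limit, shown here as language-agnostic math in Java (cocos2d-x itself is C++, and the names are illustrative): clamp the touch delta applied per frame, so the followers never fall more than a fixed step behind.

        // Clamp a per-frame drag displacement (dx, dy) to at most maxStep
        // pixels; returns the (possibly shortened) step to apply.
        static double[] clampStep(double dx, double dy, double maxStep) {
            double len = Math.sqrt(dx * dx + dy * dy);
            if (len <= maxStep || len == 0) {
                return new double[] { dx, dy };
            }
            double s = maxStep / len;
            return new double[] { dx * s, dy * s };
        }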

    Read the article

  • Using per-pixel collision for an elastic response

    - by Codejoy
    I realize this might be open-ended, but I'm curious whether I've over-engineered something. I took http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and reworked it to work with my animation code in XNA. It works well, but now I want to use it to decide whether there was a collision and to have the items (characters) bounce off each other elastically. Was per-pixel collision too much? Could I have just used a bounding box? (In fact, would that have been preferable, given what needs to be calculated in the response to an elastic collision?) I'm really looking for guidance here.
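
    A point in favor of simple shapes: an elastic response needs a collision normal, and bounding circles or boxes give you one almost for free, while a per-pixel overlap does not define a normal by itself. A sketch of the equal-mass elastic exchange along a known normal (plain Java with illustrative names; XNA itself is C#):

        // Equal-mass elastic response: swap the velocity components of the
        // two bodies along the unit collision normal (nx, ny); tangential
        // components are untouched. vA and vB are {vx, vy} and are mutated.
        static void elasticBounce(double[] vA, double[] vB, double nx, double ny) {
            double aN = vA[0] * nx + vA[1] * ny;  // A's speed along the normal
            double bN = vB[0] * nx + vB[1] * ny;  // B's speed along the normal
            double d = aN - bN;
            vA[0] -= d * nx;  vA[1] -= d * ny;    // A now carries bN
            vB[0] += d * nx;  vB[1] += d * ny;    // B now carries aN
        }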

    Read the article

  • Assigning an item to an existing array in a list within a dictionary [on hold]

    - by Rouke
    I have a Dictionary declared like:

        public var PoolDict : Dictionary.<String, List.<GameObject[]> >;

    I made a function to add items to the list and array:

        function Add(key : String, obj : GameObject) {
            if (!PoolDict.ContainsKey(key)) {
                PoolDict[key] = new List.<GameObject[]>();
            }

            //PlaceHolder - Not what will be in final version
            PoolDict[key].Add(null);

            //Attempts - Errors - How to add to existing array?
            PoolDict[key].Add(obj);
            PoolDict[key][0].Add(obj);
        }

    I'd like to replace the line after //PlaceHolder with code that will assign a GameObject to an existing array in a list that's associated with a key. How could this be done?

    Read the article

  • How to make an object fly out of a slingshot?

    - by Deza
    At the moment I'm improvising a slingshot: the user can click and drag the projectile, then let go. The force on the object is currently calculated from the distance between the point midway between the slingshot's two forks and the point to which the user pulled the projectile. However, this will always result in a positive number and will not take into account the angle of the object relative to the slingshot. How can I make the projectile fly out of the slingshot correctly?
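
    The usual fix is to keep the stretch as a vector instead of collapsing it to a scalar distance: the difference between the rest point and the pull point already encodes both the direction and, via its length, the strength of the launch. A sketch in Java (all names are illustrative):

        // Launch velocity for a slingshot: points from the pulled-back
        // projectile position toward the rest point between the forks;
        // a longer pull gives a proportionally faster launch. k converts
        // stretch distance into speed.
        static double[] launchVelocity(double restX, double restY,
                                       double pullX, double pullY, double k) {
            double dx = restX - pullX;
            double dy = restY - pullY;
            return new double[] { dx * k, dy * k };
        }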

    Read the article

  • Inversion of control in Unity?

    - by user3206275
    I am a semi-experienced .NET developer who has just begun working with Unity. I am trying to decide how to make IoC work in Unity 4.x (I have not yet tested anything), and I wonder what the good ways of achieving it are. This post and its answers state that Ninject won't work with Unity, but it is old. Is it still true? If so, what are the other means of achieving IoC in Unity? Edit 1: I am targeting mainly the Windows platform, so I don't need platform interoperability; I just need it to work.

    Read the article

  • Android - Unity3D: setVisibility(View.VISIBLE) crashes

    - by Kazoeja
    I have a Unity project and I use an Android (Java) plugin to get camera data, which I draw on a TextureView. I want to hide/show this view when I press a button in Unity, but my app crashes when I call setVisibility. The view is added in onCreate:

        UnityPlayer.currentActivity.addContentView(texView, new FrameLayout.LayoutParams(400, 400));

    The Java method:

        public void HideVideo() {
            // Hide view
            _TextureView.setVisibility(View.INVISIBLE);
        }

    Is there an extra function I need to call, or may I only call it at certain times? None of these things work; they all make my app crash:

        _TextureView.setVisibility(View.INVISIBLE);
        _TextureView.setActivated(false);
        _TextureView.setAlpha(0);
        _TextureView.setTranslationY(-1000);
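
    A common cause of exactly this crash - stated here as an assumption, since no stack trace is given - is touching a View from a non-UI thread: calls coming from Unity scripts run on Unity's own thread, and Android only allows View changes on the main thread (the failure is typically a CalledFromWrongThreadException in logcat). Posting the change to the UI thread is the usual fix; a sketch using the field from the plugin code above:

        // Run the visibility change on the Android UI thread instead of
        // the thread the Unity call arrives on.
        public void HideVideo() {
            UnityPlayer.currentActivity.runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    _TextureView.setVisibility(View.INVISIBLE);
                }
            });
        }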

    Read the article

  • How do you blend multiple colors in HSV (polar) color-space?

    - by Toxikman
    In RGB color space, you can do a weighted multiple-color blend by starting with R = G = B = 0 and then, for each index i over a set of colors C and a set of normalized weights w, accumulating:

        R += w[i] * C[i].r
        G += w[i] * C[i].g
        B += w[i] * C[i].b

    But I'd like to interpolate the colors in the HSV color space instead, so that saturation and brightness are uniform across the interpolation. I know I can blend saturation and brightness in the same way as above, but the hue component is an angle around a continuous circle, since HSV is essentially a polar coordinate system. Blending only two HSV colors makes sense to me: you just find the shortest arc around the circle and interpolate between the two hues. But when you attempt to blend more than two colors, it becomes a bit of a puzzle. You have to handle anomalous cases, like four equally-weighted colors with hues at 0, 90, 180, and 270 degrees. They basically cancel each other out, so any hue will do. Any ideas would be greatly appreciated.
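
    The standard approach for the hue is a weighted circular mean: map each hue to a point on the unit circle, take the weighted average of the points, and read the resulting angle back with atan2. The cancellation case noted above falls out naturally - the averaged vector shrinks toward zero length, signaling that the hue is ill-defined (and that the blended color is essentially gray, so its saturation should shrink too). A sketch in Java:

        // Weighted circular mean of hues (in degrees); w is normalized.
        // When the summed vector is near zero length the hue is ambiguous,
        // as in the 0/90/180/270 example; callers may want to check for that.
        static double blendHues(double[] hueDeg, double[] w) {
            double x = 0, y = 0;
            for (int i = 0; i < hueDeg.length; i++) {
                double r = Math.toRadians(hueDeg[i]);
                x += w[i] * Math.cos(r);
                y += w[i] * Math.sin(r);
            }
            double h = Math.toDegrees(Math.atan2(y, x));
            return h < 0 ? h + 360 : h;   // normalize to [0, 360)
        }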

    Read the article

  • Is there a prohibition against scaling collision shapes at runtime?

    - by Almo
    So, I have a StaticMeshComponent attached to an Actor:

        Begin Object Class=StaticMeshComponent Name=StaticMeshComponentObject
            StaticMesh=StaticMesh'QF_Art_Powers.Mesh.GP_ForcePush'
            CollideActors=true
            BlockActors=false
            //Scale3D=(X=5, Y=1.5, Z=3) // ALMODEBUG
        End Object
        CollisionComponent=StaticMeshComponentObject
        Components.Add(StaticMeshComponentObject)

    Ordinarily, the actor gets spawned, anything touching it gets bumped, and the actor despawns itself. If I set the Scale3D as a default property, everything works as I expect. But I want to scale it at runtime, like this:

        function SetImpulseComponentTemplate(QuadForceBoxImpulseComponent Value)
        {
            local Vector ScaleVec;
            ScaleVec.X = Value.Length;
            ScaleVec.Y = Value.Width;
            ScaleVec.Z = Value.Height;
            CollisionComponent.SetScale3D(ScaleVec);
        }

    When I do this, the thing only collides as if it were not scaled. If I leave the actor spawned so I can see it, it is scaled; if I also "show collision", the collision displays correctly as well. Is there a prohibition against scaling collision shapes at runtime?

    Read the article

  • Why doesn't MoveBy work in this example?

    - by ufo
    I'd like to run an action on a sprite using the MoveBy action. After lots of attempts, I can't achieve the goal. I have issues with MoveBy in two different projects, so maybe I'm missing something in the setup, but I can't figure out what. The instruction is like this:

        this.platform1Sprite.runAction(cc.MoveBy.create(1, cc.p(200, 0)));

    I don't get any error; it simply doesn't work. platform1Sprite is a Sprite. But even with a LabelTTF it doesn't work:

        var MoveToAction = cc.MoveTo.create(2.5, cc.p(size.width / 2, size.height / 2));
        this.creditLabel.runAction(MoveToAction);

    For this last snippet, you can view my complete code here: http://pastebin.com/fGbW4LLH

    Read the article

  • Z axis trouble with glTranslatef(...) - LWJGL

    - by Zarkopafilis
    Here is the code:

        private static boolean up = true, down = false, left = false, right = false,
                               reset = false, in = false, out = false;

        public void start() {
            try {
                Display.setDisplayMode(new DisplayMode(800, 600));
                Display.create();
            } catch (LWJGLException e) {
                e.printStackTrace();
                System.exit(0);
            }

            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity();
            GL11.glOrtho(0, 800, 0, 600, 0.00001f, 1000);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            Keyboard.enableRepeatEvents(true);

            while (!Display.isCloseRequested()) {
                GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
                input();

                if (up)    { GL11.glTranslatef(0, 0.1f, 0); }
                if (down)  { GL11.glTranslatef(0, -0.1f, 0); }
                if (left)  { GL11.glTranslatef(-0.1f, 0, 0); }
                if (right) { GL11.glTranslatef(0.1f, 0, 0); }
                if (in)    { GL11.glTranslatef(0, 0, 1f); }
                if (out)   { GL11.glTranslatef(0, 0, -1f); }
                if (reset) { GL11.glLoadIdentity(); }

                GL11.glBegin(GL11.GL_QUADS);
                GL11.glColor3f(255, 255, 255);
                GL11.glVertex3f(800/2, 600/2, 0);
                GL11.glVertex3f(800/2 + 200, 600/2, 0);
                GL11.glVertex3f(800/2 + 200, 600/2 + 200, 0);
                GL11.glVertex3f(800/2, 600/2 + 200, 0);
                GL11.glColor3f(0, 255, 0);
                GL11.glVertex3f(800/2, 600/2, 1);
                GL11.glVertex3f(800/2 + 200, 600/2, 1);
                GL11.glVertex3f(800/2 + 200, 600/2 + 200, 1);
                GL11.glVertex3f(800/2, 600/2 + 200, 1);
                GL11.glEnd();

                Display.update();
            }
            Display.destroy();
        }

        public static void main(String[] argv) {
            new main().start();
        }

        public void input() {
            up = false; down = false; left = false; right = false;
            reset = false; in = false; out = false;

            if (Keyboard.isKeyDown(Keyboard.KEY_SPACE)) { reset = true; }
            if (Keyboard.isKeyDown(Keyboard.KEY_W)) { up = true; }
            if (Keyboard.isKeyDown(Keyboard.KEY_S)) { down = true; }
            if (Keyboard.isKeyDown(Keyboard.KEY_A)) { left = true; }
            if (Keyboard.isKeyDown(Keyboard.KEY_D)) { right = true; }
            if (Keyboard.isKeyDown(Keyboard.KEY_Q)) { in = true; }
            if (Keyboard.isKeyDown(Keyboard.KEY_E)) { out = true; }
        }

    As you can see, I am creating two quads: a white one at z = 0 and a green one at z = 1. The WASD keys function correctly. Also, when I hit SPACEBAR the white quad is shown. If I then press E, I can see the green quad. But if I press Q afterwards, I don't see the white one again (Space works every time). Also, if I render the green one at z = -1, everything works perfectly, BUT you may need up to 3 presses of Q/E to see the other quad. Why is that happening?
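
    Two things in this code are worth checking (offered as likely causes, not a confirmed diagnosis). First, glOrtho(0, 800, 0, 600, 0.00001f, 1000) makes the visible depth range run from z = -0.00001 to z = -1000 in eye space, which contains neither z = 0 nor z = 1, so the z translations pop geometry across the near/far clip planes. Second, only the color buffer is cleared and depth testing is never enabled, so draw order rather than depth decides which quad appears in front. A sketch of both fixes:

        // During setup: pick an ortho depth range that contains the quads.
        GL11.glOrtho(0, 800, 0, 600, -10, 10);
        GL11.glEnable(GL11.GL_DEPTH_TEST);

        // In the render loop: clear depth as well as color each frame.
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);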

    Read the article

  • Box2D constant speed before and after collision

    - by bobenko
    I want my body to fly at a constant speed. How do I make it fly at a constant speed both before and after a collision? I set the restitution of my body to 1.0, but after some direct and powerful collisions my object begins to slow down, and I want it to fly at the same speed as before. I've heard this can be done by setting the linear damping of the object, but I think damping can only slow fast objects down, not speed slow ones up. Thanks in advance.
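
    A common workaround, sketched here with JBox2D (the Java port of Box2D; method names may differ slightly in other bindings): after each physics step, rescale the body's velocity back to the target speed, so collisions change only the direction of travel.

        import org.jbox2d.common.Vec2;
        import org.jbox2d.dynamics.Body;

        // Call once per frame after world.step(): keep the speed constant
        // while preserving the direction the collision response produced.
        static void enforceConstantSpeed(Body body, float speed) {
            Vec2 v = body.getLinearVelocity();
            float len = v.length();
            if (len > 1e-6f) {                  // avoid dividing by zero
                body.setLinearVelocity(v.mul(speed / len));
            }
        }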

    Read the article

  • Fourth texture = segmentation fault

    - by Robin92
    I keep getting a segmentation fault each time I load a fourth texture - which texture, I mean which filename, does not matter. I checked the value of GL_TEXTURES_STACK_SIZE, which turned out to be 10, so quite a bit more than 4, isn't it? Here are the code fragments. The function to load a texture from a PNG:

        static GLuint gl_loadTexture(const char filename[])
        {
            static int iTexNum = 1;
            GLuint texture = 0;
            img_s *img = NULL;

            img = img_loadPNG(filename);
            if (img) {
                glGenTextures(iTexNum++, &texture);
                glBindTexture(GL_TEXTURE_2D, texture);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, img->iGlFormat, img->uiWidth, img->uiHeight,
                             0, img->iGlFormat, GL_UNSIGNED_BYTE, img->p_ubaData);
                img_free(img); // it may cause errors on windows
            }
            else
                printf("Error: loading texture '%s' failed!\n", filename);

            return texture;
        }

    The actual loading:

        static GLuint textures[4];

        static void gl_init()
        {
            (...) // setting up OpenGL

            /* loading textures */
            textures[0] = gl_loadTexture("images/background.png");
            textures[1] = gl_loadTexture("images/spaceship.png");
            textures[2] = gl_loadTexture("images/asteroid.png");
            textures[3] = gl_loadTexture("images/asteroid2.png"); // this causes the SegFault no matter which file I load!
        }

    Any ideas? The problem is present on both Linux and Windows.

    Read the article

  • How do I convert a partially transparent image into polygons?

    - by user82779
    I'm using GLEE2D, a level editor that lets me import images, scale them, rotate them, position them onto layers, and export the data in XML format. However, it does not tell me the objects' boundaries. I can calculate them, but only given the original image's polygons. How do I get the polygons of objects in a partially transparent image? (The question originally included an example image with the object outlined.) How would I turn the object, knowing the scaled size of the image, into polygons? Is there an algorithm for this? I'll use OpenGL to draw them.
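
    One common pipeline: threshold the alpha channel into a solidity grid, trace the boundary with marching squares, then simplify the contour (e.g. with the Douglas-Peucker algorithm) down to a polygon with few enough vertices for collision use. A sketch of the first step in Java (the later steps are standard published algorithms):

        import java.awt.image.BufferedImage;

        // Build a boolean solidity mask from the image's alpha channel;
        // pixels more opaque than the threshold count as "inside the object".
        static boolean[][] alphaMask(BufferedImage img, int threshold) {
            boolean[][] solid = new boolean[img.getHeight()][img.getWidth()];
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int alpha = img.getRGB(x, y) >>> 24;   // top 8 bits of ARGB
                    solid[y][x] = alpha > threshold;
                }
            }
            return solid;
        }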

    Read the article

  • How do I render from one render target to another?

    - by Chaotikmind
    I have two render targets: a fake backbuffer, a special render target where I do all my rendering; and a light render target, where I render my light FX. I'm sure I'm rendering correctly into both. The problem arises when I overlay the light render target onto the fake backbuffer by drawing a quad covering it:

        DxEngine.DrawSprite(0.0f, 0.0f, 0.0f,
                            (float)DxEngine.GetWidth(), (float)DxEngine.GetHeight(),
                            0xFFFFFFFF, LightSurface->GetTexture());

    Regardless of what's in the light target, nothing is rendered onto the other target. I tried clearing the light target with full white or full black, but still get nothing. The fake backbuffer is created with:

        Direct3dDev->CreateTexture(Width, Height, 1, D3DUSAGE_RENDERTARGET,
                                   D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &Texture, nullptr);

    The light render target is created with:

        Direct3dDev->CreateTexture(Width, Height, 1, D3DUSAGE_RENDERTARGET,
                                   D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &Texture, nullptr);

    I also tried creating both with D3DFMT_A8R8G8B8, again without any difference. Both targets have the same width and height. Only the fixed pipeline is used. The DirectX setup for rendering:

        Direct3dDev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        Direct3dDev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        Direct3dDev->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);
        Direct3dDev->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
        Direct3dDev->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);

        Direct3dDev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
        Direct3dDev->SetRenderState(D3DRS_LIGHTING, false);
        Direct3dDev->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
        Direct3dDev->SetRenderState(D3DRS_ZWRITEENABLE, D3DZB_TRUE);
        Direct3dDev->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
        Direct3dDev->SetRenderState(D3DRS_ALPHABLENDENABLE, true);
        Direct3dDev->SetRenderState(D3DRS_ALPHAREF, 0x00000000ul);
        Direct3dDev->SetRenderState(D3DRS_ALPHATESTENABLE, true);
        Direct3dDev->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATER);
        Direct3dDev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
        Direct3dDev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

        Direct3dDev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_RESULTARG, D3DTA_CURRENT);
        Direct3dDev->SetTextureStageState(0, D3DTSS_TEXCOORDINDEX, D3DTSS_TCI_PASSTHRU);

        // ensure the next stage is not used for now
        Direct3dDev->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_DISABLE);

    How can I do this right?

    Read the article

  • Is a single Java array the best choice for accessing pixels for manipulation?

    - by Petrol
    I am just watching this tutorial, https://www.youtube.com/watch?v=HwUnMy_pR6A, and the guy (who seems to be pretty competent) is using a single array to store and access the pixels of his to-be-rendered image. I was wondering if this really is the best way to do it. A multi-dimensional array does cost one pointer more, but arrays have O(1) access for each index, and calculating the index in a single array seems to take one addition and one multiplication operation per pixel. And if multi-dimensional arrays really are bad, couldn't you use something with hashing to avoid those addition and multiplication operations? EDIT: here is his code...

        public class Screen {
            private int width, height;
            public int[] pixels;

            public Screen(int width, int height) {
                this.width = width;
                this.height = height;
                // creating array the size of one index/int for every pixel
                // single array has better performance than multi-array
                pixels = new int[width * height];
            }

            public void render() {
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        pixels[x + y * width] = 0xff00ff;
                    }
                }
            }
        }
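
    For scale: the index math is one multiply and one add, while any hash-based lookup pays for a hashCode call, bucket probing, and (with the standard collections) integer boxing - it cannot beat plain array indexing. A single array is also laid out contiguously in memory, which row-major loops like render() exploit for cache locality. If the arithmetic bothers you, hide it behind accessors; a sketch assuming the same Screen class:

        // Accessors keep the index arithmetic in one place; the JIT
        // typically inlines these, so they cost the same as writing
        // pixels[x + y * width] by hand.
        int getPixel(int x, int y) { return pixels[x + y * width]; }
        void setPixel(int x, int y, int argb) { pixels[x + y * width] = argb; }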

    Read the article

  • Why is my second quad's bottom-left vertex in the wrong position?

    - by RubyKing
    Hey all, I'm just trying to add another quad to my frustum, and when doing so I get a weird little error: the bottom-left side of my quad seems to stick to the center point, for no reason that I can think of or figure out. Has anyone else experienced this issue and knows a solution? If you would like more information, please do ask. Here is my main.cpp file: http://pastebin.com/g9q8uAsd - I think it's because the vertex data for the two different quads is in the same array.

    Read the article

  • Vector normalization gives very imprecise results

    - by Kipras
    When I normalize vectors I receive very strange results: the lengths of the normalized vectors range from 1.0 to almost 1.5. The functions are all written by me, but I just can't find a mistake in my algorithm. When I normalize, I just divide all components of the vector by the vector's length:

        public double length() {
            return Math.sqrt(x*x + y*y);
        }

        public void normalize() {
            if (length() > 0) {
                x /= length();
                y /= length();
            }
        }

    Is this supposed to happen? I mean, I can see the length ranging from 0.9 to 1.1 at worst, but this is just overwhelming. Cheers
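
    A likely cause is visible in the snippet itself (worth ruling out before suspecting floating-point precision): x /= length() changes the vector, so by the time y /= length() runs, length() is recomputed from the already partly normalized vector, and y gets divided by the wrong value. Capturing the length once fixes it:

        // Compute the length a single time; dividing x first must not
        // change the divisor used for y.
        public void normalize() {
            double len = length();
            if (len > 0) {
                x /= len;
                y /= len;
            }
        }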

    Read the article

  • Simulating water droplets on a window

    - by skyuzo
    How do I simulate water droplets realistically falling, gathering, and flowing down a window? For example, see http://www.youtube.com/watch?v=4jaGyv0KRPw. In particular, I want to simulate how smaller droplets merge together to form larger droplets that have enough weight to oppose the surface tension and flow downward, leaving a trail of water. I'm aware of fluid simulation, but how would it be applied in this situation?

    Read the article

  • Are there any resources for motion-planning puzzle design?

    - by Salano Software
    Some background: I'm poking at a set of puzzles along the lines of Rush Hour/Sokoban/etc; for want of a better description, call them 'motion planning' puzzles - the player has to figure out the correct sequence of moves to achieve a particular configuration. (It's the sort of puzzle that's generically PSPACE-complete if that actually helps anyone's mental image). While I have a few straightforward 'building blocks' that I can use for puzzle crafting and I have a few basic examples put together, I'm trying to figure out how to avoid too much sameness over a large swath of these kinds of puzzles, and I'm also trying to figure out how to make puzzles that have more of a feel of logical solution than trial-and-error. Does anyone know of good resources out there for designing instances of this sort of puzzle once the core puzzle rules are in place? Most of what I've found on puzzle design only covers creating the puzzle rules, not building interesting puzzles out of a set of rules.

    Read the article

  • Toggle Fullscreen at Runtime

    - by sharethis
    Using the library GLFW, I can create a fullscreen window with this line of code:

        glfwOpenWindow(Width, Height, 8, 8, 8, 8, 24, 0, GLFW_FULLSCREEN);

    The line for creating a standard window looks like this:

        glfwOpenWindow(Width, Height, 8, 8, 8, 8, 24, 0, GLFW_WINDOW);

    What I want to do is let the user switch between a standard window and fullscreen with a keypress, say F11. Is there a common practice for toggling fullscreen mode? What do I have to consider?
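
    With the GLFW 2.x API shown above there is no in-place switch: the common pattern is to flip a flag on the keypress, call glfwCloseWindow, reopen with the other mode, and then restore your OpenGL state, since closing the window destroys the GL context (textures and buffers must be re-uploaded). As a swapped-in illustration of the same toggle pattern in Java, LWJGL 2's Display class performs the equivalent switch in one call (a sketch, assuming an 800x600 windowed mode):

        import org.lwjgl.LWJGLException;
        import org.lwjgl.opengl.Display;
        import org.lwjgl.opengl.DisplayMode;

        // Toggle between windowed and fullscreen. LWJGL recreates the
        // underlying window as needed; re-uploading GL resources afterwards
        // is the safe assumption on all platforms.
        static void setFullscreen(boolean fullscreen) throws LWJGLException {
            if (fullscreen) {
                // use a fullscreen-capable mode reported by the driver
                Display.setDisplayModeAndFullscreen(Display.getDesktopDisplayMode());
            } else {
                Display.setDisplayMode(new DisplayMode(800, 600));
                Display.setFullscreen(false);
            }
        }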

    Read the article

  • If I project a sphere in 3D will it be a circle?

    - by yuumei
    Assuming I have infinite vertices to represent the sphere: if I project the sphere from any position/scale in 3D to 2D, will it be a circle? I know it will not be a circle on the screen, because of scaling and different resolutions. But do field of view and aspect ratio affect the results? Edit: Sorry, yes, I am talking about perspective projection. It seems the answer is no, then; perspective will distort the sphere. Thanks!

    Read the article
