Search Results

Search found 26124 results on 1045 pages for 'unreal development kit'.

  • Enemy Spawning method in a Top-Down Shooter

    - by Chris Waters
    I'm working on a top-down shooter akin to DoDonPachi, Ikaruga, etc. The camera movement through the world is handled automatically, with the player able to move inside the camera's visible region. Along the way, enemies are scripted to spawn at particular points along the path. While this sounds straightforward, I can see two ways to define these points:

    Camera's position: "trigger" spawning as the camera passes the points.
    Time along path: "30 seconds in, spawn 2 enemies."

    In both cases, the camera-relative positions would be defined, as well as the behavior of the enemy. The way I see it, how you define these points will directly affect how the "level editor", or what have you, will work. Would there be any benefits of one approach over the other?
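    A minimal sketch of the camera-distance option (SpawnEvent, SpawnSchedule and the field names are illustrative, not from the post); the time-based variant is the same structure with elapsed seconds as the key, which is why the two editors end up looking alike:

        #include <deque>
        #include <vector>

        struct SpawnEvent {
            float triggerDistance;   // camera path distance at which to spawn
            int   enemyType;         // index into enemy archetypes
            float offsetX, offsetY;  // spawn position relative to the camera view
        };

        class SpawnSchedule {
        public:
            // Events must arrive pre-sorted by triggerDistance (a level-editor job).
            explicit SpawnSchedule(const std::vector<SpawnEvent>& sortedEvents)
                : m_events(sortedEvents.begin(), sortedEvents.end()) {}

            // Called once per frame; emits every event the camera has now passed.
            void update(float cameraDistance, std::vector<SpawnEvent>& out) {
                while (!m_events.empty() && m_events.front().triggerDistance <= cameraDistance) {
                    out.push_back(m_events.front());
                    m_events.pop_front();
                }
            }

        private:
            std::deque<SpawnEvent> m_events;
        };

    With this shape, a distance-keyed editor and a time-keyed editor differ only in what the scrub bar means; the distance key has the advantage that slowing the camera for a boss never desynchronizes the spawns.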

    Read the article

  • ssao implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When saving the gbuffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.) SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);
            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);

            float occlusion = 0.0;
            float radius = 4.0;

            for (int i = 0; i < kernelSize; ++i)
            {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;

                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }

            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results - camera looking towards -z in world space: link; camera looking towards +z in world space: link. I wonder whether it is possible to use world coordinates in the above code at all. When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas where there is occlusion, so the ground should have white areas only near the object.

    Read the article

  • OpenGL Drawing textured model (OBJ) black texture

    - by andrepcg
    I'm using OpenGL, GLEW, GLFW and GLUT to create a simple game. I've been following some tutorials, and I now have a good model importer with textures (from ogldev.atspace.co.uk), but I'm having an issue with the model textures. I have a skybox with a beautiful texture, as you can see in the picture. That weird texture behind the helicopter (model) is the heli model's texture, which I've applied on purpose to that wall to demonstrate that this specific texture is working - just not on the helicopter. I'll include the files I'm working on so you can check them out. Mesh.cpp - http://pastebin.com/pxDuKyQa Texture.cpp - http://pastebin.com/AByWjwL6 Render function + skybox - http://pastebin.com/Vivc9qnT I'm just calling mesh->Render(); before the drawSkyBox function, in the render loop. Why is the heli black when I can perfectly apply its texture to another quad? I've debugged the code, and the mesh->Render() call is correctly fetching the texture number and passing it to the texture->Bind() function.
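    One frequent cause of an all-black model, offered as a guess since only the pastebins know for sure: the shader's diffuse sampler is never pointed at the texture unit the mesh texture is bound to, while the wall quad happens to use unit 0 and works. A sketch of the binding that has to line up (the sampler name "gSampler" is an assumption based on the tutorial series):

        #include <GL/glew.h>

        // Bind the mesh texture to unit 0 and point the shader's sampler at unit 0.
        void bindDiffuse(GLuint shaderProgram, GLuint textureId) {
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, textureId);
            glUniform1i(glGetUniformLocation(shaderProgram, "gSampler"), 0);
        }

    Missing UV coordinates on the imported mesh produce the same symptom, so that is worth checking too.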

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library used (OpenGL/DirectX). First question: the model should know nothing about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object? Second question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    and then checking each flag in the Renderer::DrawModel method. But it looks like this will become unwieldy as the number of options grows...
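    One common shape for the cache in the first question, as a sketch rather than the only answer: the renderer owns the API-specific buffers, keyed by the model, so Model itself stays API-agnostic (all names here are illustrative):

        #include <memory>
        #include <unordered_map>
        #include <vector>

        struct Vertex { float x, y, z; };

        class Model {                          // API-agnostic, as in the post
        public:
            const std::vector<Vertex>& GetVertices() const { return m_vertices; }
        private:
            std::vector<Vertex> m_vertices;
        };

        struct VertexBuffer {                  // would wrap an ID3D11Buffer or GL VBO
            unsigned apiHandle = 0;
        };

        class Dx11Renderer {
        public:
            void DrawObject(const Model* model) {
                auto it = m_cache.find(model);
                if (it == m_cache.end())           // first sight of this model: upload once
                    it = m_cache.emplace(model, Upload(model)).first;
                Draw(it->second);                  // reuse the cached buffer afterwards
            }
        private:
            VertexBuffer Upload(const Model*) { return VertexBuffer{}; /* create + fill */ }
            void Draw(const VertexBuffer&) { /* IASetVertexBuffers + Draw, etc. */ }
            std::unordered_map<const Model*, VertexBuffer> m_cache;
        };

    For the second question, a small material description object (texture references, shader choice, lighting switches bundled together) tends to age better than a growing flag bitmask, since each renderer can translate the material into its own state once and cache that too.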

    Read the article

  • Open source management game in java

    - by jcw
    I am trying to find an open source sports management game, much like the link below, but am failing to do so. There are two links provided in the question below that are both fine, except for one minor problem - I only know Java! Is there an open source sports manager project? After some googling, I have been unsuccessful in finding a sports management game that is written in Java. I do not particularly care about the type of sport, because I am mostly interested in the mechanics. Does anyone know of any such projects, or am I out of luck with Java?

    Read the article

  • Extremely Hybrid Game requirements

    - by tugrul büyükisik
    What system specifications would a game need if it had:

    Total players per planet: ~20,000
    Total players per team: ~1M
    Total players per map (a small volume of space or a small surface on a planet): ~2,000
    Total players: ~10M (the world has more players than this, I think)
    Two of the players are commanders of opposite quadrants (with the HUD of a strategy game).
    Lots of players pilot spacecraft as captains (like a 3D FPS plus RTS).
    Many, many players operate consoles in those spacecraft under the command of the captains (FPS).
    Some players are still in the stone age, trying to reinvent the wheel on some planet.
    Players design and construct any vehicles they have, with a good physics engine.
    Has puzzles inside.
    Everyone gains experience by doing things (RPG).
    Commerce, income, or a totally different resource-based group (like StarCraft).
    Player classes (primitive: cunning and strong; wrapped: healthy, wealthy).
    Arcade top-down style firing with ships when people get bored.
    A very low chance of miraculous things (midichlorians, wormholes, bugs).
    Different game modes: persistent (living world), reset periodically (a new chance for noobs), instant (pre-built space + hack & slash).

    I suspect this would need 128 GB of RAM and 2048 cores.

    Read the article

  • Strategy to prevent players from seeing through walls in an online FPS?

    - by geneotech
    Why do we still moan about wallhackers in multiplayer first-person shooters? Isn't it possible to perform occlusion culling for all players server-side? For example, send player XYZ's information to a client only when that player is visible in the client's frustum and not occluded by any object. Even if the collision geometry is very simplified, the cheater won't receive the tactical information most of the time. Why not do this?
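    A sketch of the check the question proposes, run server-side per viewer (every name here is illustrative, and shipping engines tend to use precomputed visibility sets or physics raycasts rather than this brute-force sampling):

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Box  { Vec3 min, max; };   // the simplified occlusion geometry

        static bool inside(const Vec3& p, const Box& b) {
            return p.x >= b.min.x && p.x <= b.max.x &&
                   p.y >= b.min.y && p.y <= b.max.y &&
                   p.z >= b.min.z && p.z <= b.max.z;
        }

        // March along the sight line; any wall sample blocks replication.
        bool lineOfSight(const Vec3& a, const Vec3& b,
                         const std::vector<Box>& walls, float step = 0.5f) {
            float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            float len = std::sqrt(dx * dx + dy * dy + dz * dz);
            int n = static_cast<int>(len / step);
            for (int i = 1; i < n; ++i) {
                float t = (i * step) / len;
                Vec3 p{a.x + dx * t, a.y + dy * t, a.z + dz * t};
                for (const Box& w : walls)
                    if (inside(p, w)) return false;   // occluded: don't send the update
            }
            return true;
        }

    The usual objection is latency rather than cost: the server has to reveal an enemy slightly before the corner is actually cleared, or players appear to pop around corners late, which is why many engines cull only coarsely (by room or potentially-visible set) and accept that wallhacks see a little.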

    Read the article

  • Initial direction of intersection between two moving vehicles?

    - by Larolaro
    I'm working on a bit of projectile prediction for my AI and I'm looking for some ideas - any input? If a blue vehicle is moving in a direction at a constant speed of X m/s, and a stationary orange vehicle has a rocket that travels Y m/s, in which initial direction would the orange vehicle have to fire the rocket for it to hit the blue vehicle at the earliest time in the future? Thanks for reading!
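    For reference, a sketch of the standard closed-form solve (names illustrative): find the earliest t > 0 with |targetPos + targetVel * t - shooterPos| = rocketSpeed * t, which is a quadratic in t, then aim at the target's predicted position at that time.

        #include <cmath>
        #include <optional>

        struct Vec2 { float x, y; };
        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Returns the (un-normalized) firing direction, or nullopt if no intercept exists.
        std::optional<Vec2> interceptDirection(Vec2 shooter, Vec2 target,
                                               Vec2 targetVel, float rocketSpeed) {
            Vec2 d{target.x - shooter.x, target.y - shooter.y};
            float a = dot(targetVel, targetVel) - rocketSpeed * rocketSpeed;
            float b = 2.0f * dot(d, targetVel);
            float c = dot(d, d);
            float t;
            if (std::fabs(a) < 1e-6f) {               // equal speeds: equation is linear
                if (std::fabs(b) < 1e-6f) return std::nullopt;
                t = -c / b;
            } else {
                float disc = b * b - 4.0f * a * c;
                if (disc < 0.0f) return std::nullopt; // rocket can never catch the target
                float r = std::sqrt(disc);
                float t1 = (-b - r) / (2.0f * a), t2 = (-b + r) / (2.0f * a);
                t = (t1 > 0.0f && (t1 < t2 || t2 <= 0.0f)) ? t1 : t2; // earliest positive
            }
            if (t <= 0.0f) return std::nullopt;
            return Vec2{d.x + targetVel.x * t, d.y + targetVel.y * t};
        }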

    Read the article

  • Deferred contexts and inheriting state from the immediate context

    - by dreijer
    I took my first stab at using deferred contexts in DirectX 11 today. Basically, I created my deferred context using CreateDeferredContext() and then drew a simple triangle strip with it. Early on in my test application, I call OMSetRenderTargets() on the immediate context in order to render to the swap chain's back buffer. Now, after having read the documentation on MSDN about deferred contexts, I assumed that calling ExecuteCommandList() on the immediate context would execute all of the deferred commands as "an extension" to the commands that had already been executed on the immediate context, i.e. the triangle strip I rendered in the deferred context would be rendered to the swap chain's back buffer. That didn't seem to be the case, however. Instead, I had to manually pull out the immediate context's render target (using OMGetRenderTargets()) and then set it on the deferred context with OMSetRenderTargets(). Am I doing something wrong or is that the way deferred contexts work?
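    That matches the documented design rather than a mistake: a deferred context records against default pipeline state and does not inherit the immediate context's bindings, so the render target has to be set on the deferred context itself before recording draws. A minimal sketch of the flow, with error handling omitted:

        #include <d3d11.h>

        // Recording: bind targets on the deferred context, not the immediate one.
        void RecordScene(ID3D11DeviceContext* deferred,
                         ID3D11RenderTargetView* backBufferRTV,
                         ID3D11CommandList** outCommands) {
            deferred->OMSetRenderTargets(1, &backBufferRTV, nullptr); // must happen here
            // ... IASetVertexBuffers / Draw calls for the triangle strip ...
            deferred->FinishCommandList(FALSE, outCommands); // FALSE: deferred state resets
        }

        // Playback on the immediate context.
        void Submit(ID3D11DeviceContext* immediate, ID3D11CommandList* commands) {
            immediate->ExecuteCommandList(commands, TRUE); // TRUE: immediate state survives
        }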

    Read the article

  • Passing an object's rotation down through its children

    - by MintyAnt
    In my top-down 2D game you have a player with a sword, like an old Zelda game. The sword is a separate entity, and its collision box "rotates" around the player like an orbit, but always follows the player wherever he goes. The player and the sword both have a Vector2 heading. The sword is a weapon object that is attached to the character. In order to allow swinging in a direction, I have the following property inside the sword (RotateCopy returns a copy of mHeading after rotation):

        public Vector2 Heading
        {
            get { return mHeading.RotateCopy(mOwner.Rotation); }
        }

    This seems a bit messy to me, and slower than it could be. Is there a better way to "translate" the base/owner component rotations through to whatever component I am using, like this sword? Would using a rotation matrix be better? (It currently rotates via sin/cos.) If so, how can I "add up" the matrices? Thank you.
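    On the matrix question, a small note with an illustrative sketch: in 2D, composing the owner's rotation with the sword's is just adding the two angles (the matrix equivalent is a product, not a sum), so a matrix buys little here unless rotations are batched:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Rotate a vector by an angle in radians (this is what a 2x2 rotation matrix does).
        Vec2 rotate(Vec2 v, float radians) {
            float c = std::cos(radians), s = std::sin(radians);
            return {v.x * c - v.y * s, v.x * s + v.y * c};
        }

        // The world heading of a child is its local heading rotated by the summed
        // rotations of every ancestor: one sin/cos pair total, however deep the chain.
        Vec2 worldHeading(Vec2 localHeading, float ownerRotation, float childRotation) {
            return rotate(localHeading, ownerRotation + childRotation);
        }

    Caching the world heading and recomputing it only when the owner's rotation actually changes (a dirty flag) is usually a bigger win than switching the math.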

    Read the article

  • Well-tested libraries for player ratings?

    - by Lucky
    It's common in games to implement some sort of numerical ranking system - the Elo system is usually used in chess. I could implement this system naively using Wikipedia's description, but I suspect that this would open up a whole box of problems that have already been solved: rating inflation, etc. - for instance, the Elo system has a K constant that's "fudged" according to rating, duration, pairings, statistics, ... What are some libraries (I'm looking at Python, but anything is okay) that implement rating systems? It also doesn't have to be Elo.
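    For scale, the core Elo update itself is tiny; what a library adds is the K-factor policy, provisional-rating handling, and inflation control around it. The bare formula as a sketch:

        #include <cmath>

        // Expected score of A against B under Elo.
        double eloExpected(double ratingA, double ratingB) {
            return 1.0 / (1.0 + std::pow(10.0, (ratingB - ratingA) / 400.0));
        }

        // score: 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
        // K is the "fudged" constant the question mentions, commonly 16-32.
        double eloUpdate(double rating, double expected, double score, double k = 24.0) {
            return rating + k * (score - expected);
        }

    Glicko and TrueSkill are the usual library-backed alternatives, since both model rating uncertainty explicitly instead of fudging K.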

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self-education, I'm writing a 2D platformer engine in C++ using SDL/OpenGL. I initially began with pure SDL, using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode, though I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-handed system with values from 0 to 1 - flipping the texture vertically before rendering (well, flipping the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice, or has done this themselves and can point out future pitfalls, that would be great - but really any thoughts would be appreciated.
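    A minimal sketch of the flipped setup for the fixed-function path described (the programmable-pipeline equivalent bakes the same flip into the projection matrix): with this, geometry uses SDL's top-left coordinates directly, and the texture question reduces to which V convention the image loader produces.

        #include <GL/gl.h>

        // Top-left origin, y growing downward, matching SDL's screen coordinates.
        void setTopLeftOrtho(int screenWidth, int screenHeight) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0.0, screenWidth, screenHeight, 0.0, -1.0, 1.0); // bottom/top swapped
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

    The main follow-on concerns are winding order (a flipped axis mirrors triangles, so face culling may need GL_CW) and remembering that glViewport/glScissor still use bottom-left window coordinates.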

    Read the article

  • Manually updating HTML5 local storage?

    - by hustlerinc
    I'm just starting out in HTML5 game development (and game dev in general), and watching all the videos and tutorials available, something has crossed my mind. Everyone keeps saying I should set the cookies (or cached files) to expire after a certain amount of time, so that when that time is reached the browser automatically downloads all assets again, even if they are the same assets. Wouldn't it be possible to manually define the version of the game? For example, the user has downloaded all the files for version 1.01 of the game; when updating, I change a simple variable to 1.02. When the user logs in, it would compare his version to the current one, and only if they are not equal would it download the files. This could even be improved to download only specific files, depending on what needs to be updated. Would this be possible, or am I just dreaming? What are the possible downsides of this approach?

    Read the article

  • Is it possible to construct a cube with less than 24 vertices

    - by Telanor
    I have a cube-based world like Minecraft, and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me, for two reasons: the normals wouldn't come out right, and per-face textures wouldn't work. Is this the case, or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: Just to clarify, I have two requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address different indexes in a texture array for each cube face.
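    For context on where 24 comes from: a GPU vertex is the whole (position, normal, uv) tuple, and each cube corner sits on three faces with three different normals, so 8 corners times 3 gives 24. One hedged direction for going lower, assuming indexed drawing with a fixed face order: store only 8 packed corner positions and derive the face id in the shader (for a cube drawn as 12 triangles in face order, face = primitiveID / 2), then look normals and texture-array layers up in constant tables:

        // 8 of these plus a 36-entry index buffer replace the usual 24 fat vertices.
        struct PackedCubeVertex {
            unsigned char x, y, z, pad;  // corner position, each component 0 or 1
        };

        // Per-face data moves out of vertex memory into constants the shader indexes:
        static const float kFaceNormals[6][3] = {
            { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 },
            { 0, -1, 0 }, { 0, 0, 1 }, { 0, 0, -1 }
        };
        static const int kFaceTextureLayer[6] = { 0, 1, 2, 3, 4, 5 };

    Whether the saved memory beats the added shader work is hardware-dependent, so this is a trade to measure rather than a free win; many voxel engines instead skip per-cube geometry entirely and mesh whole chunks.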

    Read the article

  • How to balance this Pokémon simulator metagame by feedback?

    - by Dokkat
    This is a Pokémon simulator where you build a team of 6 Pokémon and battle someone. Unfortunately, some Pokémon are stronger than others, and only a few of the hundreds of species are practical. I'm trying to create a metagame where all of them are competitive. For this, I am tagging each Pokémon with a parameter (level) that changes its strength and scales up/down depending on its performance. That is, if the system detects Mewtwo is overperforming, it should decrease its level tag until Mewtwo is balanced. The question is: how can I identify whether a Pokémon is causing an imbalance? The data I have is the history of battles (player 1, player 2, Pokémon list, winner). The most basic solution I can think of is victory/loss counting.
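    A minimal sketch of the feedback loop described, using the victory/loss counting idea (every constant is an illustrative tuning knob, and "level" is the post's balancing tag):

        #include <algorithm>

        struct SpeciesStats {
            int wins = 0;
            int battles = 0;     // appearances on either side
            double level = 50.0; // the balancing tag from the post
        };

        void rebalance(SpeciesStats& s, double gain = 20.0, int minSample = 200) {
            if (s.battles < minSample) return;        // don't react to noise
            double winRate = static_cast<double>(s.wins) / s.battles;
            s.level -= gain * (winRate - 0.5);        // above 50%: tone the species down
            s.level = std::clamp(s.level, 1.0, 100.0);
            s.wins = 0;
            s.battles = 0;                            // start a fresh observation window
        }

    One caveat the battle history forces: a win credits all six team members equally, so a weak Pokémon carried by strong teammates looks better than it is; larger sample windows, or pairwise win rates between species, soften that confounding.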

    Read the article

  • What does a Game Designer do? What skills do they need?

    - by xenoterracide
    I know someone who is thinking about getting into game design, and I wondered: what does the job of game designer entail? What tools do you have to learn to use? What unique skills do you need? What exactly would you do from day to day? I may be wording this a bit wrong, because I'm not sure if the college program is "become a game designer" or "learn game design", but I think the same questions apply either way.

    Read the article

  • Zooming in isometric engine using XNA

    - by Yheeky
    I'm currently working on an isometric game engine, and right now I'm looking for help with my zoom function. On my tile map there are several objects, some of them selectable. When a house (texture size 128 x 256) is placed on the map, I create an array containing all its pixels (= 32768 pixels). Each pixel has an alpha value, and I check whether that value is bigger than 200; if it is, the pixel seems to belong to the building. So if the mouse cursor is on such a pixel, the building is selected - pixel collision. Now, I've already implemented my zooming function, which works quite well. I use a scale variable that changes my calculations when drawing all the map items. What I'm looking for right now is a precise way to find out whether a zoomed out/in house is selected. My formula works for values like 0.5 (zoomed out) or 2 (zoomed in), but not for anything in between. Here is the code I use for the pixel index:

        var pixelIndex = (int)(((yPos / (Scale * Scale)) * width) + (xPos / Scale) + 1);

    Example: let's assume my mouse is over pixel coordinate 38/222 on the original house texture. Using the code above, we get the following pixel index:

        var pixelIndex = ((222 / (1 * 1)) * 128) + (38 / 1) + 1;
                       = (222 * 128) + 39
                       = 28416 + 39
                       = 28455

    If we now zoom out to scale 0.5, the texture size changes to 64 x 128 and the pixel count decreases from 32768 to 8192. Of course, the mouse point is also transformed by the scale, to 19/111. The formula makes it easy to calculate the original pixelIndex from the new coordinates:

        var pixelIndex = ((111 / (0.5 * 0.5)) * 64) + (19 / 0.5) + 1;
                       = (444 * 64) + 39
                       = 28416 + 39
                       = 28455

    But now comes the problem. If I zoom out to scale 0.75, it no longer works. The pixel count changes from 32768 to 18432, since the texture size is 96 x 192. The mouse point is transformed to 28/166, and the formula gives me a wrong pixelIndex:

        var pixelIndex = ((166 / (0.75 * 0.75)) * 96) + (28 / 0.75) + 1;
                       = (295.11 * 96) + 38.33
                       = 28330.66 + 38.33
                       = 28369

    Does anyone have a clue what's wrong in my code? It must be the first part (28330.66) that causes the calculation problem. Thanks! Yheeky

    Read the article

  • What is a good practice for 2D scene graph partitioning for culling?

    - by DevilWithin
    I need an efficient way to cull scene graph objects, so that only the ones in view are rendered, as fast as possible. I am thinking of doing it the following way: each object has a local bounding box, which holds the object's own bounds, and a global bounding box, which holds the bounds of the object and all its children. When the camera is moved, the render list is updated by traversing the global bounding boxes. When only an object is moved, it tries to enlarge or shrink its ancestors' global bounding boxes, and at the end the render list is updated or not. What do you think of this approach? Do you think it will provide fast and efficient culling? Also, because the render list is a contiguous list, it should accelerate the rendering, right? Any further tips for 2D scene graphs are highly appreciated!
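    A sketch of the traversal being described, with illustrative names: the global box prunes whole subtrees, and the local box decides whether the node itself joins the render list.

        #include <vector>

        struct AABB {
            float minX, minY, maxX, maxY;
            bool intersects(const AABB& o) const {
                return minX <= o.maxX && maxX >= o.minX &&
                       minY <= o.maxY && maxY >= o.minY;
            }
        };

        struct Node {
            AABB local;                 // this object's own bounds
            AABB global;                // bounds of the object plus all children
            std::vector<Node*> children;
        };

        void collectVisible(Node* n, const AABB& camera, std::vector<Node*>& renderList) {
            if (!n->global.intersects(camera)) return;        // prunes the whole subtree
            if (n->local.intersects(camera)) renderList.push_back(n);
            for (Node* c : n->children) collectVisible(c, camera, renderList);
        }

    The scheme is sound as long as moving a node really does re-fit every ancestor's global box; the contiguous render list helps mostly by keeping the draw loop cache-friendly and sortable by material.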

    Read the article

  • This array collision function doesn't work with anything but the first object in the array

    - by Zee Bashew
    For some reason, this simple loop is totally broken. (characterSheet is my character class - just a MovieClip with some extra functionality; hitBox is basically a square MovieClip.) Every time a hitBox makes contact with a characterSheet in a different order than they were created: nothing happens. The program only seems to be listening for collisions made with o2[0]. As soon as another hitBox is created, it pushes the last one out of o2[0] and the last one becomes totally useless. What's super weird is that I can hit characterSheets in any order I like....

        public function collisions(o1:Array, o2:Array) {
            if ((o1.lenght >= 0) && (o2.length >= 0)) {
                for (var i = 0; i < o1.length; i++) {
                    var ob1 = o1[i];
                    for (var f = 0; f < o1.length; f++) {
                        var ob2 = o2[f];
                        if (ob1 is characterSheet) {
                            if (ob2.hitTestObject(ob1)) {
                                var right:Boolean = true;
                                if (ob1.x < hitBox(ob2).origin.x) right = false;
                                characterSheet(ob1).specialDamage(hitBox(ob2).damageType, hitBox(ob2).damage, right);
                            }
                        }
                    }
                }
            }
        }

    Also, it might be somewhat helpful to see the function for creating a new hitBox:

        public function SpawnHitBox(targeted, following, atype, xoff, yoff, ... args) {
            var newHitBox = new hitBox(targeted, following, atype, xoff, yoff, args);
            badCollisionObjects.push(newHitBox);
            arraydictionary[newHitBox] = badCollisionObjects;
            addChild(newHitBox);
        }

    Read the article

  • Import 3ds into JMonkeyEngine 3

    - by Yanick Rochon
    I have asked this question on SO, but I think it will be more suitable here. Basically, we are trying to import an animated character body (with skeleton) from 3D Studio Max to JMonkeyEngine 3, but while we succeeded at importing some animations, we cannot seem to export the skeleton to .skeleton.xml using OgreXML format. Since OgreXML seems to be the favored way to import models into JME, we dropped .obj files and such. Any help appreciated.

    Read the article

  • Unity falling body pendulum behaviour

    - by user3447980
    I wonder if someone could provide some guidance. I'm attempting to create pendulum-like behaviour in 2D space in Unity, without using a hinge joint. Essentially I want a falling body to act as though it were restrained at the radius of a point, and to be subject to gravity and friction etc. I've tried many modifications of this code, and have come up with some cool 'strange-attractor'-like behaviour, but I cannot for the life of me create a realistic pendulum-like action. This is what I have so far:

        startingposition = transform.position;             // get start position
        newposition = startingposition + velocity;         // add old velocity
        newposition.y -= gravity * Time.deltaTime;         // add gravity
        newposition = pivot + Vector2.ClampMagnitude(newposition - pivot, radius); // clamp body at radius???
        velocity = newposition - startingposition;         // get new velocity
        transform.Translate(velocity * Time.deltaTime, Space.World); // apply to transform

    So I'm working out the new position based on the old velocity plus gravity, then constraining it to a distance from a point, which is the part of the code I cannot get right. Is this a logical way to go about it?

    Read the article

  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a Pac-Man styled dungeon crawler, using the free Oryx sprites. I've created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, the player cannot control it yet. I wanted to add collision, and I was planning to do this by checking whether the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the tile type of the appropriate tile, and if it is not zero (since on that layer there is nothing except the wall tiles) it is a collision and the player cannot move further:

        final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
        if (!collided(newPos)) {
            warrior.setPosition(newPos.x, newPos.y);
            warrior.flip(charController.flipX(), charController.flipY());
        }
        [..]
        private boolean collided(Vector2 newPos) {
            int row = (int) Math.floor((newPos.x / 32));
            int col = (int) Math.floor((newPos.y / 32));
            int tileType = tiledMap.layers.get(1).tiles[row][col];
            if (tileType == 0) {
                return false;
            }
            return true;
        }

    The character only moves one tile with this code. If I reduce the col value by two, it moves two more tiles. I think the problem is around the indexing, but I'm totally confused, because the zero of libGDX's coordinate system is in the bottom left corner of the screen, and I don't know whether the tiles array's indexing is similar or not. The size of the map is 19x21 tiles; the original post shows the layout in an image, with the player's starting position marked in blue.

    Read the article

  • cocos2d event handler not fired when reentering scene

    - by Adam Freund
    I am encountering a very strange problem with my cocos2d app. I add a sprite to the page with an event handler linked to it, which replaces the scene with another scene. On that page I have another button to take me back to the original scene. When I am back on the original scene, the event handler doesn't get fired when I click on the sprite. Below is the relevant code. Thanks for any help!

        CCMenuItemImage *backBtnImg = [CCMenuItemImage itemWithNormalImage:@"btn_back.png"
                                                             selectedImage:@"btn_back_pressed.png"
                                                                    target:self
                                                                  selector:@selector(backButtonTapped:)];
        backBtnImg.position = ccp(45, 286);
        CCMenu *backBtn = [CCMenu menuWithItems:backBtnImg, nil];
        backBtn.position = CGPointZero;
        [self addChild:backBtn];

    Event handler method (doesn't get called when the scene is re-entered):

        - (void)backButtonTapped:(id)sender {
            NSLog(@"backButtonTapped\n");
            CCMenuItemImage *backButton = (CCMenuItemImage *)sender;
            [backButton setNormalImage:[CCSprite spriteWithFile:@"btn_back_pressed.png"]];
            [[CCDirector sharedDirector] replaceScene:[CCTransitionFade transitionWithDuration:.25
                                                                                         scene:[MenuView scene]
                                                                                     withColor:ccBLACK]];
        }

    Read the article

  • Arranging Gizmos in Unity 3D [on hold]

    - by Simran kaur
    I have this arrangement of gizmos, which was handed over to me (the screenshot is missing here). How do I get it? I have read the documentation, but I could not get it as shown. I basically have a track, or lane, that comes towards the camera by moving towards negative z. I am moving the lanes so that it appears as if the cars are moving. The roads need to be rotated by 90 degrees, otherwise they appear to move towards the upper end of the screen, and in parallel at that. Why exactly is that?

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x, y) in the depth texture will be rendered to (x, y, texture(depth, uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least until the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article
