Search Results

Search found 33194 results on 1328 pages for 'development approach'.


  • Alpha blending without depth writing

    - by teodron
    A recurring problem I run into is this: given two different billboard sets with alpha textures intended for particle effects (such as point lights and smoke puffs), rendering them correctly is tedious. The issue in this scenario is that with depth writing enabled there is no way to make certain billboards obey the depth information correctly: some appear in front of others that are clearly closer to the camera. I've described the problem on the Ogre forums several times without getting any suggestions (the application I'm writing uses their engine). What could be done, then? Should I sort all the individual billboards from the different billboard sets, avoid writing depth, and still get nice alpha-blended results? If so, please point out some resources to start with in the context of the aforementioned Ogre engine. Any other suggestions are welcome!
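
    For illustration, the usual workaround is exactly that: keep depth testing on, turn depth writes off for the transparent pass, and draw all transparent billboards back to front in one merged list. A minimal sketch of the sorting step, using an illustrative Billboard struct rather than Ogre's own types:

        #include <algorithm>
        #include <vector>

        struct Billboard {       // illustrative stand-in, not an Ogre type
            float x, y, z;       // world-space centre
        };

        // Squared distance to the camera; avoids a sqrt per comparison.
        static float distSq(const Billboard& b, float cx, float cy, float cz) {
            float dx = b.x - cx, dy = b.y - cy, dz = b.z - cz;
            return dx * dx + dy * dy + dz * dz;
        }

        // Sort billboards gathered from *all* sets back to front, then draw
        // them with depth writes disabled and depth testing enabled.
        void sortBackToFront(std::vector<Billboard>& all,
                             float cx, float cy, float cz) {
            std::sort(all.begin(), all.end(),
                      [&](const Billboard& a, const Billboard& b) {
                          return distSq(a, cx, cy, cz) > distSq(b, cx, cy, cz);
                      });
        }

    In Ogre material scripts the transparent pass would use depth_write off; the engine can already depth-sort billboards within a single BillboardSet, so the hard part this sketch addresses is interleaving billboards across different sets.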

    Read the article

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders, and I'm facing some problems. Below you can see an example of my rendered scene.

    The shadow-mapping process I follow is this: I render the scene to a framebuffer using a view matrix from the light's point of view together with the projection and model matrices used for normal rendering. In the second pass, I send that MVP matrix from the light's point of view to the vertex shader, which transforms the position into light space. The fragment shader does the perspective divide and remaps the position to texture coordinates.

    Here is my vertex shader:

        #version 150 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform mat4 lightMVP;
        uniform float scale;

        in vec3 in_Position;
        in vec3 in_Normal;
        in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;
        smooth out vec4 lightspace_Position;

        void main(void){
            pass_Normal = NormalMatrix * in_Normal;
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    And my fragment shader:

        #version 150 core

        struct Light{
            vec3 direction;
        };

        uniform Light light;
        uniform sampler2D inSampler;
        uniform sampler2D inShadowMap;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        smooth in vec4 lightspace_Position;

        out vec4 out_Color;

        float CalcShadowFactor(vec4 lightspace_Position){
            vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
            vec2 UVCoords;
            UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
            UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
            float Depth = texture(inShadowMap, UVCoords).x;
            if(Depth < (ProjectionCoords.z + 0.001))
                return 0.5;
            else
                return 1.0;
        }

        void main(void){
            vec3 Normal = normalize(pass_Normal);
            vec3 light_Direction = -normalize(light.direction);
            vec3 camera_Direction = normalize(-pass_Position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float diffuse = max(0.2, dot(Normal, light_Direction));
            vec3 temp_Color = diffuse * vec3(1.0);
            float specular = max(0.0, dot(Normal, half_vector));
            float shadowFactor = CalcShadowFactor(lightspace_Position);
            if(diffuse != 0 && shadowFactor > 0.5){
                float fspecular = pow(specular, 128.0);
                temp_Color += fspecular;
            }
            out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
        }

    One of the problems is self-shadowing: as you can see in the picture, the crate casts a shadow on itself. What I have tried is enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(factor, units)), but it didn't change much. As you can see in the fragment shader, I have put in a static offset value of 0.001, but I have to change that value depending on the distance of the light to get more desirable results, which is not very handy. I also tried front-face culling when rendering to the framebuffer; that didn't change much either.

    The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are the common practices; should I pick an orthographic projection? From googling around a bit, I understand that these issues are not trivial. Does anyone have easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask if you need more information on my code.

    Here is a comparison of a close-up of the crate with and without shadow mapping; the self-shadowing is clearly visible.
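
    On the second problem: for a directional light the common practice is indeed an orthographic projection, sized just large enough to enclose the shadow casters, with the light's view matrix looking along the light direction. A minimal sketch with GLM; the bounds are made-up numbers for a scene that fits in a 20-unit box, so substitute your own:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Build a light-space MVP for a directional light.
        glm::mat4 buildLightMVP(const glm::vec3& lightDir, const glm::mat4& model)
        {
            glm::vec3 center(0.0f);   // point the light frustum is centred on
            glm::vec3 eye = center - glm::normalize(lightDir) * 20.0f;
            glm::mat4 lightView = glm::lookAt(eye, center, glm::vec3(0, 1, 0));

            // A tight orthographic box around the casters: the tighter it is,
            // the more shadow-map resolution you get and the less bias you need.
            glm::mat4 lightProj = glm::ortho(-10.0f, 10.0f,   // left, right
                                             -10.0f, 10.0f,   // bottom, top
                                               1.0f, 40.0f);  // near, far
            return lightProj * lightView * model;
        }

    For the pixels outside the light frustum, you can also bail out in CalcShadowFactor: if the projected UV coordinates fall outside [0, 1] (or the projected depth is beyond 1), return the unshadowed factor instead of sampling the map. And for the acne, the usual replacement for a single constant like 0.001 is a slope-scaled bias, i.e. a larger offset where the surface is at a grazing angle to the light.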

    Read the article

  • How do I break an image into 6 or 8 pieces of different shapes?

    - by Anil gupta
    I am working on a puzzle game where the player can select an image from the iPhone photo gallery. The selected image is placed on the puzzle page and, after a three-second wait, is broken into 6 or 8 parts of different shapes. The player then rearranges the broken pieces to restore the original image. I have no idea how to break the image apart and merge it back so that the player can rearrange the parts. I want to break the image up like the frame shown below. I am developing this game in cocos2d.
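
    For the rectangular-piece case, the common approach is to cut the source texture into sub-rectangles and create one sprite per rectangle (in cocos2d that would be a sprite built from the texture plus a rect); irregular jigsaw shapes additionally need a per-piece mask or stencil. The rectangle math itself is engine-neutral; here is a sketch in C++ with illustrative names:

        #include <vector>

        struct PieceRect { float x, y, w, h; };   // illustrative piece rectangle

        // Split an image of size imgW x imgH into a cols x rows grid,
        // e.g. 3 x 2 for 6 pieces or 4 x 2 for 8 pieces.
        std::vector<PieceRect> splitIntoGrid(float imgW, float imgH,
                                             int cols, int rows)
        {
            std::vector<PieceRect> pieces;
            float pw = imgW / cols, ph = imgH / rows;
            for (int r = 0; r < rows; ++r)
                for (int c = 0; c < cols; ++c)
                    pieces.push_back({ c * pw, r * ph, pw, ph });
            return pieces;
        }

    Each rect becomes one draggable sprite; remembering each piece's home cell index makes it easy to test for the solved state after the player rearranges them.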

    Read the article

  • std::map for storing static const Objects

    - by Sean M.
    I am making a game similar to Minecraft, and I am trying to find a way to keep a map of Block objects sorted by their id. This is almost identical to the way Minecraft does it: they declare a bunch of static final Block objects and initialize them, and the constructor of each block puts a reference to that block into whatever the Java equivalent of a std::map is, so there is a central place to look up ids and the Blocks with those ids. The problem is that I am making my game in C++ and trying to do the exact same thing.

    In Block.h, I declare the Blocks like so:

        // Block.h
        public:
            static const Block Vacuum;
            static const Block Test;

    And in Block.cpp I initialize them like so:

        // Block.cpp
        const Block Block::Vacuum = Block("Vacuum", 0, 0);
        const Block Block::Test = Block("Test", 1, 0);

    The Block constructor looks like this:

        Block::Block(std::string name, uint16 id, uint8 tex)
        {
            // Check for repeat ids
            if (IdInUse(id)) {
                fprintf(stderr, "Block id %u is already in use!", (uint32)id);
                throw std::runtime_error("You cannot reuse block ids!");
            }
            _id = id;

            // Check for repeat names
            if (NameInUse(name)) {
                fprintf(stderr, "Block name %s is already in use!", name);
                throw std::runtime_error("You cannot reuse block names!");
            }
            _name = name;
            _tex = tex;
            //fprintf(stdout, "Using texture %u\n", _tex);
            _transparent = false;
            _solidity = 1.0f;

            idMap[id] = this;
            nameMap[name] = this;
        }

    And finally, the maps I use to store references to Blocks by name and id are declared like this:

        std::map<uint16, Block*> Block::idMap = std::map<uint16, Block*>();             // The map of block ids
        std::map<std::string, Block*> Block::nameMap = std::map<std::string, Block*>(); // The map of block names

    The problem comes when I try to get a Block out of the maps using a method const Block* GetBlock(uint16 id), whose last line is return idMap.at(id);. This line returns a Block with completely random values, like _visibility = 0xcccc, as I found out through debugging. So my question is: is there something wrong with the blocks being declared as const objects, stored as pointers, and accessed later on? The reason I can't store them as Block& is that this makes a copy of the Block when it is entered, so the block wouldn't have any of the attributes that could be set afterwards in the constructor of a child class; that's why I think I need to store them as pointers. Any help is greatly appreciated, as I don't fully understand pointers yet. Just ask if you need to see any other parts of the code.
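
    One thing worth checking, hedged because it depends on how the statics are ordered across translation units: C++ gives no guarantee about the construction order of namespace-scope statics defined in different .cpp files (and within one file they run strictly top to bottom), so if the const Blocks are constructed before idMap and nameMap, the constructor inserts into not-yet-constructed maps, which would match the garbage values described here. The construct-on-first-use idiom sidesteps the ordering problem entirely:

        #include <cstdint>
        #include <map>

        class Block;   // as in the question

        // A function-local static is constructed on first use, so the map is
        // guaranteed to exist before any Block constructor inserts into it.
        static std::map<std::uint16_t, Block*>& idMap()
        {
            static std::map<std::uint16_t, Block*> m;
            return m;
        }

        // In the Block constructor, replace  idMap[id] = this;  with:
        //     idMap()[id] = this;
        // and in GetBlock:
        //     return idMap().at(id);

    Storing pointers to const statics is legal in itself; const-ness alone would not produce garbage values like these.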

    Read the article

  • Best pathfinding for a 2D world made with CPU Perlin noise, with random start and destination points?

    - by Mathias Lykkegaard Lorenzen
    I have a world made from Perlin noise. It's generated on the CPU for consistency across several devices (yes, I know it takes time; I have techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" you play as is placed in the middle of the screen and moves along with the camera. The white stuff in my world is walls; the black stuff is freely walkable. As the player moves around, he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen, though). These monsters move inwards and try to collide with the player. This is the tricky part: I want these monsters to spawn constantly and move towards the player while avoiding walls entirely. I've added a screenshot below that makes it easier to understand (excuse my bad drawing; I was using Paint). In the image, the following rules apply:

    - The red dot in the middle is the player itself.
    - The light-green rectangle is the boundary of the screen (in other words, what the player sees). It moves with the player.
    - The blue circle is the spawning circle. Monsters constantly spawn on its circumference. It moves with the player and the screen boundary.
    - Each spawned monster (shown as a yellow triangle) wants to collide with the player.
    - The pink lines show the paths I want the monsters to move along (or something similar). What matters is that they reach the player without colliding with the walls.

    The map itself (the one generated from Perlin noise on the CPU) is kept in memory as a two-dimensional bit array: a 1 means a wall and a 0 means open, walkable space. The current tile size is pretty small; I could easily make it a lot larger for better performance. I've implemented path algorithms such as A* before, but I don't think per-monster A* is entirely optimal here.
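
    Since every monster shares the same destination (the player), one option that scales better than per-monster A* is a flow field: run one breadth-first search outward from the player's tile whenever the player changes tiles, then have every monster simply step toward the neighbouring cell with the smallest stored distance. A compact sketch over the 0/1 wall grid described above (names are illustrative):

        #include <cstdint>
        #include <queue>
        #include <vector>

        // dist[y][x] = steps from the player's tile; walls stay at UNREACHED.
        constexpr int UNREACHED = -1;

        std::vector<std::vector<int>> buildFlowField(
            const std::vector<std::vector<std::uint8_t>>& walls, // 1 = wall, 0 = open
            int px, int py)                                      // player tile
        {
            int h = (int)walls.size(), w = (int)walls[0].size();
            std::vector<std::vector<int>> dist(h, std::vector<int>(w, UNREACHED));
            std::queue<std::pair<int, int>> q;
            dist[py][px] = 0;
            q.push({px, py});
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            while (!q.empty()) {
                auto [x, y] = q.front(); q.pop();
                for (int i = 0; i < 4; ++i) {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (walls[ny][nx] || dist[ny][nx] != UNREACHED) continue;
                    dist[ny][nx] = dist[y][x] + 1;
                    q.push({nx, ny});
                }
            }
            return dist;
        }

    Monsters spawned on the circle then need no individual searches; each one moves to whichever neighbouring cell has the lowest distance value, which automatically routes them around walls.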

    Read the article

  • Will I have an easier time learning OpenGL in Pygame or Pyglet? (NeHe tutorials downloaded)

    - by shadowprotocol
    I'm deciding between PyGame and Pyglet. Pyglet seems to be somewhat newer and more Pythonic, but according to Wikipedia its last release was January '10. PyGame seems to have more documentation, more recent updates, and more published books and tutorials on the web for learning. I downloaded both the Pyglet and PyGame versions of the NeHe OpenGL tutorials (Lessons 1-10), which cover this material:

        lesson01 - Setting up the window
        lesson02 - Polygons
        lesson03 - Adding color
        lesson04 - Rotation
        lesson05 - 3D
        lesson06 - Textures
        lesson07 - Filters, lighting, input
        lesson08 - Blending (transparency)
        lesson09 - 2D sprites in 3D
        lesson10 - Moving in a 3D world

    What do you guys think? Is my hunch that I'll be better off working with PyGame warranted?

    Read the article

  • How do you stop OgreBullet Capsule from falling over?

    - by Nathan Baggs
    I've just started integrating Bullet into my Ogre project. I followed the install instructions here: http://www.ogre3d.org/tikiwiki/OgreBullet+Tutorial+1 and the rest of the tutorial here: http://www.ogre3d.org/tikiwiki/OgreBullet+Tutorial+2. I got that to work fine; however, I now want to extend it to handle a first-person camera. I created a CapsuleShape and a rigid body (like the tutorial did for the boxes), but when I run the game the capsule falls over and rolls around on the floor, causing the camera to swing wildly around. I need a way to fix the capsule so it always stays upright, but I have no idea how. Below is the code I'm using.

    Part of the header file:

        OgreBulletDynamics::DynamicsWorld *mWorld;   // OgreBullet world
        OgreBulletCollisions::DebugDrawer *debugDrawer;
        std::deque<OgreBulletDynamics::RigidBody *> mBodies;
        std::deque<OgreBulletCollisions::CollisionShape *> mShapes;
        OgreBulletCollisions::CollisionShape *character;
        OgreBulletDynamics::RigidBody *characterBody;
        Ogre::SceneNode *charNode;
        Ogre::Camera* mCamera;
        Ogre::SceneManager* mSceneMgr;
        Ogre::RenderWindow* mWindow;

    Main file:

        bool MinimalOgre::go(void)
        {
            ...
            mCamera = mSceneMgr->createCamera("PlayerCam");
            mCamera->setPosition(Vector3(0,0,0));
            mCamera->lookAt(Vector3(0,0,300));
            mCamera->setNearClipDistance(5);
            mCameraMan = new OgreBites::SdkCameraMan(mCamera);

            OgreBulletCollisions::CollisionShape *Shape;
            Shape = new OgreBulletCollisions::StaticPlaneCollisionShape(Vector3(0,1,0), 0); // (normal vector, distance)
            OgreBulletDynamics::RigidBody *defaultPlaneBody =
                new OgreBulletDynamics::RigidBody("BasePlane", mWorld);
            defaultPlaneBody->setStaticShape(Shape, 0.1, 0.8); // (shape, restitution, friction)

            // push the created objects to the deques
            mShapes.push_back(Shape);
            mBodies.push_back(defaultPlaneBody);

            character = new OgreBulletCollisions::CapsuleCollisionShape(1.0f, 1.0f, Vector3(0, 1, 0));
            charNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
            charNode->attachObject(mCamera);
            charNode->setPosition(mCamera->getPosition());

            characterBody = new OgreBulletDynamics::RigidBody("character", mWorld);
            characterBody->setShape(charNode,
                                    character,
                                    0.0f,            // dynamic body restitution
                                    10.0f,           // dynamic body friction
                                    10.0f,           // dynamic body mass
                                    Vector3(0,0,0),
                                    Quaternion(0, 0, 1, 0));
            mShapes.push_back(character);
            mBodies.push_back(characterBody);
            ...
        }
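
    The standard Bullet-level fix for a capsule character is to zero the body's angular factor so the simulation can never rotate it; it can then translate and collide but not tip over. Reaching the underlying btRigidBody through OgreBullet is the uncertain part; getBulletRigidBody() is assumed here as the wrapper accessor:

        // After characterBody->setShape(...); the accessor name is an assumption.
        btRigidBody *body = characterBody->getBulletRigidBody();

        // Zero angular factor: collisions apply no torque, so the capsule
        // stays upright no matter what it hits.
        body->setAngularFactor(btVector3(0, 0, 0));

        // Optional: keep the player body from being put to sleep.
        body->setActivationState(DISABLE_DEACTIVATION);

    If the wrapper exposes no such accessor, the same two calls can be made wherever the raw btRigidBody is created.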

    Read the article

  • Scaling and new coordinates based off screen resolution

    - by Atticus
    I'm trying to deal with different resolutions on Android devices. Say I have a sprite at X,Y with dimensions W,H. I understand how to properly scale the width and height depending on the screen size, but I am not sure how to reposition the sprite so that it stays in the same relative area of the screen. If I simply resize the width and height and keep X,Y at the same coordinates, things don't scale well. How do I properly reposition? Do I multiply the coordinates by the scale as well?
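
    Yes: the usual scheme is to author everything against a fixed virtual resolution and scale positions by the same factors as sizes. A tiny sketch; the 800x480 design resolution is an assumption, substitute your own:

        struct Placement { float x, y, w, h; };

        // Map a sprite authored for a virtual 800x480 screen onto the real one.
        Placement toScreen(const Placement& design, float screenW, float screenH)
        {
            float sx = screenW / 800.0f;   // assumed design width
            float sy = screenH / 480.0f;   // assumed design height
            return { design.x * sx, design.y * sy,    // position scales too
                     design.w * sx, design.h * sy };
        }

    If the sprite's aspect ratio must be preserved, use one uniform factor (for example the smaller of sx and sy) for W and H, and keep the per-axis factors for X and Y only.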

    Read the article

  • Maya 6 vs Maya 2013

    - by DiscreteGenius
    I have the entire "Learning Maya 6" book series, which I purchased back when Maya 6/6.5 was the hottest thing. I read some of the books but never finished the series, and I don't know much about Maya or the field. I want to get back into it, but I have a concern. My question: would I be setting myself up to fail if I decided to use my old Maya 6 books and the Maya 6.5 software, as opposed to ditching the old books and starting fresh with Maya 2013 and online tutorials, videos, etc.?

    Read the article

  • How to manage a multiplayer asynchronous environment in a game

    - by Phil
    I'm working on a game where players can set up villages, which can contain defending units. Any of these units (each on its own tile) can be set to "campaign", which means it is no longer defending but can now be used to attack other villages. Each unit on a tile can have up to 100 health. So far so good. Oh, and it's all asynchronous, so even though the server will be aware that your village is being attacked, you won't be until the attack is over.

    The issue I'm struggling with is the following situation. Say a unit on a tile is being attacked by a player from another village. That player sees your village and is attacking your units. You don't know this is happening, so you set your unit to campaign and off you go to attack another village, with the very unit that is itself being attacked. The other player stops attacking your village and leaves your unit with, say, a health of 1, which is then saved to the server. You, however, are attacking another village with that same unit, and now discover that even though it started off with 100 health, it mysteriously only has 1.

    Solutions? Ideas?

    Edit: The simplest solutions are often the best. I referred to Clash of Clans below; well, after a bit more digging it seems that in CoC you can only attack players who are offline! Ha, that almost solves the problem. I say almost because there's still the situation where a player's village is in the process of being attacked when they come back online; I still need to address that.

    Edit 2: A solution to the "what happens when a player is attacking your village and you come online" issue could be that the attacking player just gets kicked out of the village at that point and keeps whatever they had won so far. It's a bit of a fudge, but it might work.
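
    One server-side way to keep the same unit from being in two fights at once, sketched under the assumption of a single authoritative server and with illustrative names: mark a unit as locked while any battle involving it is in progress, and reject (or queue) the campaign order while the lock is held.

        #include <cstdint>

        using Timestamp = std::int64_t;      // epoch seconds from the server clock

        struct Unit {
            int       health = 100;
            Timestamp battleLockUntil = 0;   // 0 = not currently fought over
        };

        // When an attacker engages the village, lock every defender for the
        // maximum possible battle duration so stale orders cannot use them.
        void lockDefender(Unit& u, Timestamp now, Timestamp maxBattleSeconds) {
            u.battleLockUntil = now + maxBattleSeconds;
        }

        // The campaign order is validated on the server, never on the client.
        bool tryStartCampaign(const Unit& u, Timestamp now) {
            return now >= u.battleLockUntil;   // false: still being fought over
        }

    The client whose order is rejected just sees "this unit is defending right now", which also composes nicely with the offline-only attack rule mentioned in the edit.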

    Read the article

  • Computing pixel's screen position in a vertex shader: right or wrong?

    - by cubrman
    I am building a deferred rendering engine and I have a question. The article I took the sample code from suggested computing the screen position of the pixel as follows:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition = output.Position;
        }

        PixelShaderFunction()
        {
            input.ScreenPosition.xy /= input.ScreenPosition.w;
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    The question is: if I compute the position in the vertex shader instead (which should help performance, since the vertex shader runs far fewer times than the pixel shader), would I get per-vertex lighting instead? Here is how I want to do it:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition.xy = output.Position / output.Position.w;
        }

        PixelShaderFunction()
        {
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    What exactly happens to the data I pass from VS to PS? How exactly is it interpolated? Will it give me the right per-pixel result in this case? I tried launching the game both ways and saw no visual difference. Is my assumption right? Thanks.

    P.S. I am optimizing the point-light shader, so I actually pass a sphere geometry into the VS.

    Read the article

  • What to think about when designing a simple GUI for a quiz game

    - by PeterK
    I am getting close to finishing my first iPhone game ever (in fact, my first programming experience ever), which is a quiz game. I have all the functionality I want and am currently polishing it, both from a code point of view and by looking at the GUI. My initial idea was not to use any specific graphics, but rather to focus on the game experience and simplicity: just a background color (orange), white text, and plain buttons. The design is based on the idea that all ages, from children just learning to read and up, should be able to host and play this game. However, as I get close to the finish line, I am starting to think about what is actually needed from a GUI point of view. I would like to ask for some advice on what to think about when designing a GUI. Is it considered OK to ship without any "fancy" graphics, and what is the risk of doing so? Also, which colors go well together if I choose to keep the GUI simple? I am thinking about color blindness, etc. In other words, how do I design a good and effective GUI for a simple game like mine? Thanks

    Read the article

  • Parse/Write JSON with Unity iOS

    - by DannoEterno
    Does anybody know of a tutorial, or can anyone help me develop, a JSON parser/reader compatible with Unity iOS Pro? I've already tried different third-party libraries without luck (json.net, jsonfx, litjson). I'm in a hurry to get a simple parser/writer working that I can also use under iOS, not only on desktop. P.S. I'm happy to use a third-party library, but please, before suggesting one, be sure that it works under iOS! Thank you all.

    Read the article

  • Thread-safety in Cocos2d-iPhone?

    - by Malax
    After tinkering a bit with cocos2d, I discovered that there is no classic game loop; everything is more or less event-driven. I guess I can wrap my head around that, no problem. But I cannot find anything about thread safety. Say I schedule something to occur every two seconds: which thread will run that code? Given that I cannot find anything about it, I guess there is just one cocos2d thread and everything will be fine. Nevertheless, this implicit assumption does not give me a good feeling. Knowing is better than guessing. ;-) Can anyone shed some light on this topic?

    Read the article

  • Converting to and from local and world 3D coordinate spaces?

    - by James Bedford
    Hey guys, I've been following a guide I found here (http://knol.google.com/k/matrices-for-3d-applications-view-transformation) on constructing a matrix that will let me convert 3D coordinates to an object's local coordinate space, and back again. I've implemented these two matrices using my object's look, side, up and location vectors, and it seems to be working for the first three coordinates. I'm a little confused as to what I should expect for the w coordinate. Here are a couple of examples from the printouts I've made of the constructed matrices. I'm passing a test vector of [9, 8, 14, 1] each time to see if I can convert both ways.

    Basic example:

        localize matrix:
             0.000000   -0.000000    1.000000    0.000000
             0.000000    1.000000    0.000000    0.000000
             1.000000    0.000000    0.000000    0.000000
             5.237297  -45.530716   11.021271    1.000000

        globalize matrix:
             0.000000    0.000000    1.000000    0.000000
            -0.000000    1.000000    0.000000    0.000000
             1.000000    0.000000    0.000000    0.000000
           -11.021271  -45.530716   -5.237297    1.000000

        test:      Vector4f(9.000000, 8.000000, 14.000000, 1.000000)
        localTest: Vector4f(14.000000, 8.000000, 9.000000, -161.812256)
        worldTest: Vector4f(9.000000, 8.000000, 14.000000, -727.491455)

    More complicated example:

        localize matrix:
             0.052504   -0.000689   -0.998258    0.000000
             0.052431    0.998260    0.002068    0.000000
             0.997241   -0.052486    0.052486    0.000000
            58.806095    2.979346  -39.396252    1.000000

        globalize matrix:
             0.052504    0.052431    0.997241    0.000000
            -0.000689    0.998260   -0.052486    0.000000
            -0.998258    0.002068    0.052486    0.000000
           -42.413120    5.975957  -56.419727    1.000000

        test:      Vector4f(9.000000, 8.000000, 14.000000, 1.000000)
        localTest: Vector4f(-13.508600, 8.486917, 9.290090, 2.542114)
        worldTest: Vector4f(9.000190, 7.993863, 13.990230, 102.057129)

    As you can see in the more complicated example, the coordinates lose some precision after converting both ways, but this isn't a problem. I'm just wondering how I should deal with the last (w) coordinate. Should I just set it to 1 after performing the matrix multiplication, or does it look like I've done something wrong? Thanks in advance for your help!
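
    A note that may help: with an affine matrix (last column 0, 0, 0, 1 in the layout printed above) a point with w = 1 must come out with w = 1; w only changes under projective matrices. The -161.812256 seen here is exactly the bottom row dotted with the test vector (9·5.237297 + 8·(-45.530716) + 14·11.021271 + 1 ≈ -161.8123), which suggests the multiplication is treating the matrix as column-vector style while it was built for row vectors. A minimal GLM check showing w staying 1 under one consistent convention, as a sketch:

        #include <cstdio>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        int main()
        {
            // An affine "globalize" transform: rotate, then translate.
            glm::mat4 globalize = glm::translate(glm::mat4(1.0f),
                                                 glm::vec3(5.2f, -45.5f, 11.0f));
            globalize = glm::rotate(globalize, glm::radians(30.0f),
                                    glm::vec3(0.0f, 1.0f, 0.0f));
            glm::mat4 localize = glm::inverse(globalize);

            glm::vec4 test(9.0f, 8.0f, 14.0f, 1.0f);
            glm::vec4 local = localize * test;     // column-vector convention
            glm::vec4 world = globalize * local;

            // Both w components print as 1.0: affine matrices leave w alone.
            std::printf("local w = %f, world w = %f\n", local.w, world.w);
            return 0;
        }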

    Read the article

  • Ledge grab and climb in Unity3D

    - by BallzOfSteel
    I just started on a new project. One of the main gameplay mechanics is that you can grab a ledge at certain points in a level and hang on to it. Now my question, since I've been wrestling with this for quite a while: how would I actually implement this? I have tried it with animations, but it's really ugly, since the player snaps to a certain point where the animation starts.
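
    For illustration, engine-agnostic and with made-up types rather than Unity's API: a common pattern is to detect the ledge (say, a trigger volume or a raycast from the chest), then blend the character to the hang anchor over a short duration instead of snapping, and only start the hang animation once the blend finishes:

        struct Vec3 { float x, y, z; };

        Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
            return { a.x + (b.x - a.x) * t,
                     a.y + (b.y - a.y) * t,
                     a.z + (b.z - a.z) * t };
        }

        // Call once per frame while grabbing. startPos is the player position
        // at the moment the grab began; t accumulates from 0 to 1.
        Vec3 blendToLedge(const Vec3& startPos, const Vec3& anchor,
                          float& t, float dt, float blendSeconds = 0.15f)
        {
            t += dt / blendSeconds;
            if (t > 1.0f) t = 1.0f;   // arrived: safe to start the hang animation
            return lerp(startPos, anchor, t);
        }

    The 0.15-second blend time is a guess to tune; the point is that the snap becomes a short, controlled interpolation that the animation can hide.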

    Read the article

  • Help animating a player in Corona SDK

    - by andrew McCutchan
    I'm working on a game in the Corona SDK with Lua, and I need help making the player animate along the drawn line. Right now he animates at the beginning and the end, but not along the line. Here is the player code:

        -- spawns Rich where we need him
        function spawnPlayerObject(xPlayerSpawn, yPlayerSpawn, richTurn)
            -- Rich's sprite sheet and all his initial sprites. We need to adjust
            -- this so that animations 3-6 are the only ones that play when
            -- moving horizontally.
            local richPlayer = sprite.newSpriteSet(richSheet1, 1, 6)
            sprite.add(richPlayer, "rich", 1, 6, 500, 1)
            rich = sprite.newSprite(richPlayer)
            rich.x = xPlayerSpawn
            rich.y = yPlayerSpawn
            rich:scale(_W*0.0009, _W*0.0009) -- scale adjusts the size of the object
            richDir = richTurn
            rich.rotation = richDir
            rich:prepare("rich")
            rich:play()
            -- needs a better physics body for collision detection
            physics.addBody(rich, "static", { friction=1, radius=15 })
        end

    And here is the code for the line:

        -- for the original image, replace all "rich" with "player"
        function movePlayerOnLine(event)
            if posCount ~= linePart then
                richDir = math.atan2(ractgangle_hit[posCount].y - rich.y,
                                     ractgangle_hit[posCount].x - rich.x) * 180 / math.pi
                playerSpeed = 5
                rich.x = rich.x + math.cos(richDir * math.pi / 180) * playerSpeed
                rich.y = rich.y + math.sin(richDir * math.pi / 180) * playerSpeed
                if rich.x < ractgangle_hit[posCount].x + playerSpeed * 10 and
                   rich.x > ractgangle_hit[posCount].x - playerSpeed * 10 then
                    if rich.y < ractgangle_hit[posCount].y + playerSpeed * 10 and
                       rich.y > ractgangle_hit[posCount].y - playerSpeed * 10 then
                        posCount = posCount + 1
                    end
                end

    I don't think anything has changed recently, but when testing I have been getting an error, "attempt to index upvalue 'rich' (a nil value)", on the second line of movePlayerOnLine (the richDir = ... line).

    Read the article

  • Flash game size and distribution between asset types

    - by EyeSeeEm
    I am currently developing a Flash game and am planning a large amount of graphics and audio assets, which has led to some questions about Flash game size. Looking at some of the popular games on Newgrounds, many seem to fall in the 5-10 MB range and a few in the 10-25 MB range. I was wondering if anyone knows of other notable large-scale games and what their sizes are, and whether there have been any cases of games being disadvantaged because of their size. Also, what is a common distribution of game size between code, graphics, and audio? I know this can vary strongly, but I would like to hear your thoughts on an average case for a high-quality game.

    Read the article

  • Using textureGrad for anisotropic integration approximation

    - by Amxx
    I'm trying to develop a real-time rendering method that uses a real-time acquired environment map (cubemap) for lighting. This implies that my envmap can change as often as every frame, so I cannot use any method based on precomputation over the envmap (such as convolving it with the BRDF).

    So far my method has worked well with a Phong BRDF. For the specular contribution I directly read the value from my samplerCube, and I use mipmap levels with linear filtering to simulate the roughness of the material considered:

        int size = textureSize(envmap, 0).x;
        float specular_level = log2(size * sqrt(3.0)) - 0.5 * log2(ns + 1);
        vec3 env_specular = ks * specular_color * textureLod(envmap, l_g, specular_level);

    From this I would like to upgrade to a microfacet-based BRDF. I already have an algorithm for evaluating the shape (including the anisotropic direction) of the reflection, but I cannot manage to read the values I want from my samplerCube. I believe I have to use

        textureGrad(envmap, l_g, X, Y);

    with l_g being the reflection direction in global space, but I cannot work out which values to give X and Y in order to specify the area I want to consider. What values should I give X and Y so that textureGrad(envmap, l_g, X, Y) gives the same result as textureLod(envmap, l_g, specular_level)?

    Read the article

  • Drawing beam effect in UDK?

    - by sgrif
    I'm having trouble drawing a particle effect between two actors in UDK. Neither the source nor the target is a static object, so as far as I can tell I need to do this in code, not in Kismet. Here's what I've got at the moment, and it seems to do nothing at all. Ideas?

        BeamEmitter[0] = new(self) class'UTParticleSystemComponent';
        BeamEmitter[0].SetAbsolute(false, false, false);
        BeamEmitter[0].SetTemplate(BeamTemplate[0]);
        BeamEmitter[0].SetTickGroup(TG_PostUpdateWork);
        BeamEmitter[0].bUpdateComponentInTick = true;
        self.AttachComponent(BeamEmitter[0]);
        BeamEmitter[0].SetBeamEndPoint(2, tarPos);
        BeamEmitter[0].ActivateSystem();

    Read the article

  • MeshBuilder, assembly missing

    - by BlackBear
    I'm trying to build a terrain starting from a heightmap. I already have some ideas about the procedure, but I can't even get started. I feel like I have to use a MeshBuilder. The problem is that Visual Studio (I'm using the 2008 version) complains about a missing assembly. The MSDN page for MeshBuilder does name the assembly it requires, but I don't know how to import/load it. Any suggestions? Thanks in advance :)

    Read the article

  • A Quick HLSL Question (How to modify some HLSL code)

    - by electroflame
    Thanks for wanting to help! I'm trying to create a circular, repeating ring (that moves outward) on a texture. I've achieved this, to a degree, with the following code:

        float distance = length(inTex - in_ShipCenter);
        float time = in_Time;

        /* Simple distance/time combination */
        float2 colorIndex = float2(distance - time, .3);

        float4 shipColor = tex2D(BaseTexture, inTex);
        float4 ringColor = tex2D(ringTexture, colorIndex);

        float4 finalColor;
        finalColor.rgb = (shipColor.rgb) + (ringColor.rgb);
        finalColor.a = shipColor.a;   // Use the base texture's alpha (transparency).

        return finalColor;

    This works, and works how I want it to: the ring moves outward from the center of the texture at a steady rate and is constrained to the edges of the base texture (i.e. it won't continue past an edge). However, there are a few issues I would like help with:

    - By combining the colors additively (when I set finalColor.rgb), the resulting ring color comes out much lighter than I want (which is pretty much the definition of additive blending, but I don't really want additive blending in this case).
    - I would really like to be able to pass in the color I want the ring to be. Currently I have to pass in a texture that contains the ring color, which seems wasteful and overly cumbersome.

    I know I'm probably being an idiot over this, so I greatly appreciate the help. Some other (possibly relevant) information: I'm using XNA, I'm applying this via a SpriteBatch (as an Effect), and the SpriteBatch is using BlendState.NonPremultiplied.

    EDIT: Thanks for the answers thus far; they've helped me get a better grasp of the color issue. However, I'm still unsure how to pass a color in and not use a texture. i.e. Can I create a tex2D using a float4 instead of a texture? Or can I make a texture from a float4 and pass that texture to tex2D?

    DOUBLE EDIT: Here are some example pictures: with the effect off, with the effect on, and with the effect on but the color weighting set to full. As you can see, the color weighting makes the base texture completely black (the background is black, so it looks transparent). You can also see the red the ring is supposed to be, and the white-ish it really is when blended additively.

    Read the article

  • Improving SpriteBatch performance for tiles

    - by Richard Rast
    I realize this is a variation on what has got to be a common question, but after reading several (good) answers I'm no closer to a solution, so here's my situation. I'm making a 2D game which has (among other things) a tiled world, and drawing this world implies drawing a jillion tiles each frame (depending on resolution; each tile is roughly 64x32 with some transparency). Now I want the user to be able to maximize the game (or use fullscreen mode, actually, as it's a bit more efficient), and instead of scaling textures (bleagh) this will just allow lots and lots of tiles to be shown at once. Which is great! But it turns out this puts upward of 2000 tiles on the screen each frame, and that is framerate-limiting (I've commented out enough other parts of the game to make sure this is the bottleneck). It gets worse if I use multiple source rectangles on the same texture (I use a tilesheet; I believe changing textures entirely makes things worse still), or if I tint the tiles, and so on. So, the general question is this: what are some general methods for improving the drawing of thousands of repetitive sprites? Answers pertaining to XNA's SpriteBatch would be helpful, but I'm equally happy with general theory. Also, any tricks pertaining to this particular situation (drawing a tiled world efficiently) are welcome. I really do want to draw all of them, though, and I need SpriteSortMode.BackToFront to be active, because

    Read the article

  • Design pattern for animation sequence in LibGDX

    - by kevinyu
    What design pattern should I use for a sequence of animations involving different actors in libGDX? For example, I am making a game where you have to pick out the wolf from a group of sheep. The first animation, played when the game begins, is the wolf entering a field that already holds two sheep. Then the wolf disguises itself as a sheep and walks to the center of the screen. Then the game shuffles the sheep. When that finishes, it asks the player where the wolf is and waits for input. After that, the game plays an animation showing the player whether their answer was right or wrong. I am currently using the State design pattern, with five states: wolfEnterState, DisguiseState, ShuffleState, UserInputState, and answerAnimationState. I feel that my code is messy: I use addAction with action sequences and action completion (new Runnable()) a lot, and the action sequences are getting long. Is there a better solution for this kind of problem?

    Read the article

  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow my mouse around the screen in OpenGL (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is the following. First, get the cursor coordinates within the window with glfwGetCursorPos. The window was created with

        window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);

    and the code to get the coordinates is

        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);

    Next, I use GLM's unProject to get the coordinates in "object space":

        glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
        glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
        glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);

    There are two potential problems I can already see. The viewport is fine, as the x,y coordinates of the lower left are indeed 0,0 and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right: the z coordinate should probably not be zero. glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to 3D window coordinates, especially since I am not sure what the third dimension of window coordinates even means (computer screens being 2D).

    Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object space? I think it does, but the documentation is not clear.

    Finally, for each vertex of the object that should follow the mouse, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since the position vector being unprojected is likely wrong, this is not giving good results: the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object much, and the z coordinate is very large). I also found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has z = 0.0.

    Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.
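
    Two details worth flagging, plus a sketch. glfwGetCursorPos reports y measured from the top-left, while the window coordinates glm::unProject expects have their origin at the bottom-left, so y needs flipping; and window-space z is the depth value in [0, 1], not a world-space depth. A common recipe for "object follows mouse" is to unproject the cursor at the near plane (z = 0) and far plane (z = 1) and intersect the resulting ray with the plane the object lives on; the z = 0 world plane below is an assumption:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // World-space point under the cursor on the world plane z = 0.
        // Passing only the view matrix (not View*Model) yields world space.
        glm::vec3 cursorOnPlane(double xpos, double ypos,
                                const glm::mat4& view, const glm::mat4& proj)
        {
            glm::vec4 viewport(0.0f, 0.0f, 1024.0f, 768.0f);
            float wx = (float)xpos;
            float wy = 768.0f - (float)ypos;   // flip: GLFW's y runs top-down

            glm::vec3 nearPt = glm::unProject(glm::vec3(wx, wy, 0.0f), view, proj, viewport);
            glm::vec3 farPt  = glm::unProject(glm::vec3(wx, wy, 1.0f), view, proj, viewport);

            glm::vec3 dir = farPt - nearPt;
            float t = -nearPt.z / dir.z;       // ray/plane intersection at z = 0
            return nearPt + t * dir;
        }

    This also explains the constant un[2]: with z fixed at 0.0, every cursor position unprojects onto the near plane, which sits at a constant depth.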

    Read the article
