Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 482/1274 | < Previous Page | 478 479 480 481 482 483 484 485 486 487 488 489  | Next Page >

  • Box2D: How to get the Exact Collision Point and ignore the collision (from 2 "ghost bodies")

    - by Moritz
    I have a very basic problem with Box2D. For an arena-type game where you can throw scriptable "missiles" at other players, I decided to use Box2D for the collision detection between the players and the missiles. Players and missiles each have their own circular shape with a specific (varying) size. But I don't want to use dynamic bodies, because the missiles need to move themselves in any way they want (defined in the script) and shouldn't be resolved unless the script wants it. The behavior I'm looking for is the following (for each time step): the velocity of each missile is set by its specific missile script; each missile is moved according to that velocity; if a collision occurs, I want to get the exact position of impact, and I then need a mechanism to decide whether the missile should just ignore the collision (for example, a collision between two fireballs which shouldn't interact) or take it (so the bodies are resolved and don't overlap anymore). So is there a way in Box2D to create ghost bodies and listen to collisions from them, then decide whether they should ignore the collision or take it and have their positions resolved? I hope I was clear enough and would be happy about any help!
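
    One way to get both the impact point and per-pair filtering is to keep the bodies in the world but veto resolution from a contact listener. Below is a minimal C++ sketch against the classic Box2D 2.x API; ShouldResolve and OnImpact are hypothetical game-side hooks, not Box2D functions.

        #include <Box2D/Box2D.h>

        // Hypothetical game-side hooks (not part of Box2D):
        bool ShouldResolve(void* userDataA, void* userDataB);          // e.g. fireball vs fireball -> false
        void OnImpact(b2Fixture* a, b2Fixture* b, const b2Vec2& point);

        class MissileContactListener : public b2ContactListener
        {
        public:
            // PreSolve runs after the contact is detected but before it is resolved,
            // so the exact impact point is available and resolution can still be skipped.
            void PreSolve(b2Contact* contact, const b2Manifold* /*oldManifold*/) override
            {
                b2WorldManifold worldManifold;
                contact->GetWorldManifold(&worldManifold);              // impact point(s) in world space
                const b2Vec2 impactPoint = worldManifold.points[0];

                b2Fixture* a = contact->GetFixtureA();
                b2Fixture* b = contact->GetFixtureB();

                if (!ShouldResolve(a->GetUserData(), b->GetUserData()))
                    contact->SetEnabled(false);                         // "ghost" behaviour: no resolution this step

                OnImpact(a, b, impactPoint);
            }
        };

    Registering it once with world.SetContactListener(&listener) is enough. One caveat worth knowing: Box2D only generates contacts for pairs that include at least one dynamic body, so a common setup is to keep the missiles dynamic (with gravity disabled or a gravity scale of 0) and overwrite their velocity from the script every step.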

    Read the article

  • FreeType2 Crash on FT_Init_FreeType

    - by JoeyDewd
    I'm currently trying to learn how to use the FreeType2 library for drawing fonts with OpenGL. However, when I start the program it immediately crashes with the following error: "(Can't correctly start the application (0xc000007b))" Commenting the FT_Init_FreeType removes the error and my game starts just fine. I'm wondering if it's my code or has something to do with loading the dll file. My code: #include "SpaceGame.h" #include <ft2build.h> #include FT_FREETYPE_H //Freetype test FT_Library library; Game::Game(int Width, int Height) { //Freetype FT_Error error = FT_Init_FreeType(&library); if(error) { cout << "Error occured during FT initialisation" << endl; } And my current use of the FreeType2 files. Inside my bin folder (where debug .exe is located) is: freetype6.dll, libfreetype.dll.a, libfreetype-6.dll. In Code::Blocks, I've linked to the lib and include folder of the FreeType 2.3.5.1 version. And included a compiler flag: -lfreetype My program starts perfectly fine if I comment out the FT_Init function which means the includes, and library files should be fine. I can't find a solution to my problem and google isn't helping me so any help would be greatly appreciated.

    Read the article

  • Slick2D: Animation not being parsed from spritesheet correctly

    - by user2066880
    I have a 960x960 spritesheet with each tile being 192x192. I initialized my spritesheet and animation like so: spritesheet = new SpriteSheet("resources/spritesheets/player.png", 192, 192); walkingLeft = new Animation(spritesheet, 3, 0, 0, 1, true, 20, true); When I attempt to render the animation, I get a java.lang.ArrayIndexOutOfBoundsException: -1 error. This error doesn't occur when I'm creating an animation from images in the same row. Therefore, I'm assuming that the error is being caused because of the way Slick is handling horizontal scanning (going to the next row after reaching the end).

    Read the article

  • Why does the order of uniforms get changed by the compiler?

    - by Aybe
    I have the following shader. Everything works fine when setting the value of one of the matrices, but I've discovered that getting a value back is incorrect for View and Projection: they are in reverse order. #version 430 precision highp float; layout (location = 0) uniform mat4 Model; layout (location = 1) uniform mat4 View; layout (location = 2) uniform mat4 Projection; layout (location = 0) in vec3 in_position; layout (location = 1) in vec4 in_color; out vec4 out_color; void main(void) { gl_Position = Projection * View * Model * vec4(in_position, 1.0); out_color = in_color; } When querying their locations they are effectively reversed. I did a small test by renaming View to Piew, which puts it before Projection when sorted alphabetically, and then the order is correct. Now if I remove layout (location = ...) from the uniforms, the problem disappears!? I am starting to think that this is a driver bug, as explained in the wiki. Do you know why the order of the uniforms is changed whenever the shader is compiled? (Using an AMD HD7850.)
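
    For what it's worth, uniform introspection order (glGetActiveUniform and friends) is not guaranteed to match declaration order, so the safest pattern is to address uniforms either by name or by their explicit layout locations and never by enumeration index. A minimal C++ sketch, assuming GLEW, GLM and an already-linked program handle:

        #include <GL/glew.h>
        #include <glm/glm.hpp>
        #include <glm/gtc/type_ptr.hpp>

        // "program" is assumed to be a successfully linked GLSL program.
        void UploadMatrices(GLuint program, const glm::mat4& model,
                            const glm::mat4& view, const glm::mat4& projection)
        {
            glUseProgram(program);

            // Option A: look each uniform up by name -- introspection order is irrelevant.
            glUniformMatrix4fv(glGetUniformLocation(program, "Model"),      1, GL_FALSE, glm::value_ptr(model));
            glUniformMatrix4fv(glGetUniformLocation(program, "View"),       1, GL_FALSE, glm::value_ptr(view));
            glUniformMatrix4fv(glGetUniformLocation(program, "Projection"), 1, GL_FALSE, glm::value_ptr(projection));

            // Option B: trust the explicit layout(location = N) qualifiers from the shader source.
            // glUniformMatrix4fv(0, 1, GL_FALSE, glm::value_ptr(model));       // Model
            // glUniformMatrix4fv(1, 1, GL_FALSE, glm::value_ptr(view));        // View
            // glUniformMatrix4fv(2, 1, GL_FALSE, glm::value_ptr(projection));  // Projection
        }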

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask; there are a lot of cases where a wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm: create a grid around the player that contains a lot of "visibility points" (my game is semi-tile-based, so I create one point for every tile on the grid); the side of the square is close to the radius at which I make objects transparent, and I found 6x6 to be a good value, so that's 36 visibility points total. For every visibility point on the grid, check if that point is in the player's line of sight. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent. This algorithm works (not perfectly, but it only requires some tuning); however, it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D, but I'm not looking for an engine-specific solution.

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering (on a more small-scale, abstracted level) how one creates a dynamic environment a la Minecraft. Specifically, I'm thinking of the world as a 3-dimensional array of block objects; how is it arranged so that large features such as oceans are created? The language isn't important since I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!
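
    A common conceptual answer is a heightmap driven by smooth noise plus a global sea level: oceans are simply the places where low-frequency noise pushes the terrain below that level. The C++ sketch below uses a throwaway sine-based stand-in for real Perlin/simplex noise and fills a flat block array; the layout and constants are illustrative only.

        #include <cmath>
        #include <cstdint>
        #include <vector>

        enum class Block : uint8_t { Air, Water, Stone };

        // Placeholder "smooth noise" in roughly [-1, 1]; swap in Perlin/simplex for real terrain.
        float Noise2D(float x, float z)
        {
            return std::sin(x * 1.7f + std::sin(z * 2.3f)) * std::cos(z * 1.3f + std::sin(x * 0.9f));
        }

        std::vector<Block> GenerateChunk(int sizeX, int sizeY, int sizeZ, int seaLevel)
        {
            std::vector<Block> blocks(static_cast<size_t>(sizeX) * sizeY * sizeZ, Block::Air);
            for (int x = 0; x < sizeX; ++x)
                for (int z = 0; z < sizeZ; ++z)
                {
                    // Low frequency -> continents and oceans; a higher octave adds hills.
                    float h = Noise2D(x * 0.01f, z * 0.01f) * 0.75f
                            + Noise2D(x * 0.05f, z * 0.05f) * 0.25f;
                    int height = seaLevel + static_cast<int>(h * 20.0f);
                    for (int y = 0; y < sizeY; ++y)
                    {
                        Block b = Block::Air;
                        if (y <= height)        b = Block::Stone;   // solid ground up to the heightmap
                        else if (y <= seaLevel) b = Block::Water;   // ocean wherever the ground dips low
                        blocks[(static_cast<size_t>(y) * sizeZ + z) * sizeX + x] = b;
                    }
                }
            return blocks;
        }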

    Read the article

  • Texturing a quad with different parts of a texture

    - by PolGraphic
    I have a 2D quad. Let's say its position is (5,10) and its size is (7,11). I want to texture it with one texture, but using three different parts of it. I want to texture the part of the quad from x = 5 to x = 7 with the part of the texture from U = 0 to U = 0.5 (repeating it after reaching 0.5, so I will have 4 identical 0.5-length fragments). The second part gets some other part of the texture (also repeated), and the third works in the same style. But how do I achieve this? I know that: float2 tc = fmod(input.TexCoord, textureCoordinates.zw - textureCoordinates.xy) + textureCoordinates.xy; //textureCoordinates.xy = fragments' offset will give me the repeating texture part.
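
    The same mapping can be reasoned about outside the shader. A small C++ sketch of the wrap-into-a-sub-rectangle step (the names Vec2, Vec4 and RepeatInRegion are illustrative, and the input UV is assumed to be pre-scaled so that one repeat spans one region width):

        #include <cmath>

        struct Vec2 { float x, y; };
        struct Vec4 { float x, y, z, w; };   // xy = region min UV, zw = region max UV

        Vec2 RepeatInRegion(Vec2 uv, Vec4 region)
        {
            float w = region.z - region.x;   // region width in U
            float h = region.w - region.y;   // region height in V
            // fmod wraps the coordinate back into [0, w) / [0, h), then offsets it into the region,
            // which is what the HLSL line above does per pixel.
            return { region.x + std::fmod(uv.x, w),
                     region.y + std::fmod(uv.y, h) };
        }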

    Read the article

  • Cocos2d sprite's parent not reflecting true scale value

    - by Paul Renton
    I am encountering issues with determining a CCSprite's parent node's scale value. In my game I have a class that extends CCLayer and scales itself based on game triggers. Certain child sprites of this CCLayer have mathematical calculations that become inaccurate once I scale the parent CCLayer. For instance, I have a tank sprite that needs to determine its firing point within the parent node. Whenever I scale the layer and ask the layer for its scale values, they are accurate. However, when I poll the sprites contained within the layer for their parent's scale values, they always appear as one. // From within the sprite CCLOG(@"ChildSprite-> Parent's scale values are scaleX: %f, scaleY: %f", self.parent.scaleX, self.parent.scaleY); // Outputs 1.0,1.0 // From within the layer CCLOG(@"Layer-> ScaleX : %f, ScaleY: %f , SCALE: %f", self.scaleX, self.scaleY, self.scale); // Output is 0.80,0.80 Could anyone explain to me why this is the case? I don't understand why these values are different. Maybe I don't understand the inner design of Cocos2d fully. Any help is appreciated.

    Read the article

  • Randomly placing items script not working - sometimes items spawn in walls, sometimes items spawn in weird locations

    - by Timothy Williams
    I'm trying to figure out a way to randomly spawn items throughout my level, however I need to make sure they won't spawn inside another object (walls, etc.) Here's the code I'm currently using, it's based on the Physics.CheckSphere(); function. This runs OnLevelWasLoaded(); It spawns the items perfectly fine, but sometimes items spawn partway in walls. And sometimes items will spawn outside of the SpawnBox range (no clue why it does that.) //This is what randomly generates all the items. void SpawnItems () { if (Application.loadedLevelName == "Menu" || Application.loadedLevelName == "End Demo") return; //The bottom corner of the box we want to spawn items in. Vector3 spawnBoxBot = Vector3.zero; //Top corner. Vector3 spawnBoxTop = Vector3.zero; //If we're in the dungeon, set the box to the dungeon box and tell the items we want to spawn. if (Application.loadedLevelName == "dungeonScene") { spawnBoxBot = new Vector3 (8.857f, 0, 9.06f); spawnBoxTop = new Vector3 (-27.98f, 2.4f, -15); itemSpawn = dungeonSpawn; } //Spawn all the items. for (i = 0; i != itemSpawn.Length; i ++) { spawnedItem = null; //Zeroes out our random location Vector3 randomLocation = Vector3.zero; //Gets the meshfilter of the item we'll be spawning MeshFilter mf = itemSpawn[i].GetComponent<MeshFilter>(); //Gets it's bounds (see how big it is) Bounds bounds = mf.sharedMesh.bounds; //Get it's radius float maxRadius = new Vector3 (bounds.extents.x + 10f, bounds.extents.y + 10f, bounds.extents.z + 10f).magnitude * 5f; //Set which layer is the no walls layer var NoWallsLayer = 1 << LayerMask.NameToLayer("NoWallsLayer"); //Use that layer as your layermask. LayerMask layerMask = ~(1 << NoWallsLayer); //If we're in the dungeon, certain items need to spawn on certain halves. if (Application.loadedLevelName == "dungeonScene") { if (itemSpawn[i].name == "key2" || itemSpawn[i].name == "teddyBearLW" || itemSpawn[i].name == "teddyBearLW_Admiration" || itemSpawn[i].name == "radio") randomLocation = new Vector3(Random.Range(spawnBoxBot.x, -26.96f), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, -2.141f)); else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(-2.374f, spawnBoxTop.z)); } //Otherwise just spawn them in the box. else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, spawnBoxTop.z)); //This is what actually spawns the item. It checks to see if the spot where we want to instantiate it is clear, and if so it instatiates it. Otherwise we have to repeat the whole process again. if (Physics.CheckSphere(randomLocation, maxRadius, layerMask)) spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation); else i --; //If we spawned something, set it's name to what it's supposed to be. Removes the (clone) addon. if (spawnedItem != null) spawnedItem.name = itemSpawn[i].name; } } What I'm asking for is if you know what's going wrong with this code that it would spawn stuff in walls. Or, if you could provide me with links/code/ideas of a better way to check if an item will spawn in a wall (some other function than Physics.CheckSphere). I've been working on this for a long time, and nothing I try seems to work. Any help is appreciated.
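
    Two things that often help with this kind of placement bug, shown as a generic C++ sketch below: cap the number of retries instead of decrementing the loop counter (which can loop forever if the box is crowded), and double-check the sense of the overlap test, since Physics.CheckSphere returns true when something is inside the sphere, so spawning on a true result places items inside geometry. IsAreaClear and the box parameters here are illustrative stand-ins for the engine's own overlap query and spawn volume.

        #include <cstdlib>
        #include <optional>

        struct Vec3 { float x, y, z; };

        bool IsAreaClear(const Vec3& position, float radius);   // assumed: true only if nothing blocking overlaps

        float RandomRange(float lo, float hi)
        {
            return lo + (hi - lo) * (std::rand() / static_cast<float>(RAND_MAX));
        }

        // Bounded rejection sampling: try random points until one is clear or we give up.
        std::optional<Vec3> FindSpawnPoint(const Vec3& boxMin, const Vec3& boxMax,
                                           float itemRadius, int maxAttempts = 64)
        {
            for (int attempt = 0; attempt < maxAttempts; ++attempt)
            {
                Vec3 candidate { RandomRange(boxMin.x, boxMax.x),
                                 RandomRange(boxMin.y, boxMax.y),
                                 RandomRange(boxMin.z, boxMax.z) };
                if (IsAreaClear(candidate, itemRadius))
                    return candidate;        // clear spot found
            }
            return std::nullopt;             // no clear spot: skip this item instead of looping forever
        }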

    Read the article

  • Mobile 3D engine renders alpha as full-object transparency

    - by Nils Munch
    I am running an iOS project using the isgl3d framework for showing POD files. I have a stylish car with 0.5-alpha windows that I wish to render on a camera background, seeking some augmented reality goodness. The alpha on the windows looks okay, but when I add the object, I notice that it renders the entire object transparently where the windows are, including the interior of the car (in the example, the keyboard can be seen through the dashboard, seats and so on; it should be solid). The car interior is a separate object with alpha 1.0. I would rather not show a "ghost car" in my project, but I haven't found a way around this. Has anyone encountered the same issue and eventually reached a solution?

    Read the article

  • Best algorithm for recursive adjacent tiles?

    - by OhMrBigshot
    In my game I have a set of tiles placed in a 2D array marked by their Xs and Zs ([1,1],[1,2], etc). Now, I want a sort of "Paint Bucket" mechanism: Selecting a tile will destroy all adjacent tiles until a condition stops it, let's say, if it hits an object with hasFlag. Here's what I have so far, I'm sure it's pretty bad, it also freezes everything sometimes: void destroyAdjacentTiles(int x, int z) { int GridSize = Cubes.GetLength(0); int minX = x == 0 ? x : x-1; int maxX = x == GridSize - 1 ? x : x+1; int minZ = z == 0 ? z : z-1; int maxZ = z == GridSize - 1 ? z : z+1; Debug.Log(string.Format("Cube: {0}, {1}; X {2}-{3}; Z {4}-{5}", x, z, minX, maxX, minZ, maxZ)); for (int curX = minX; curX <= maxX; curX++) { for (int curZ = minZ; curZ <= maxZ; curZ++) { if (Cubes[curX, curZ] != Cubes[x, z]) { Debug.Log(string.Format(" Checking: {0}, {1}", curX, curZ)); if (Cubes[curX,curZ] && Cubes[curX,curZ].GetComponent<CubeBehavior>().hasFlag) { Destroy(Cubes[curX,curZ]); destroyAdjacentTiles(curX, curZ); } } } } }
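
    A minimal sketch of the same "paint bucket" spread done iteratively instead of recursively (shown in C++; hasFlag and destroyTile are stand-ins for the game-side check and effect). A visited grid guarantees each tile is looked at once, which is usually what stops the runaway recursion and the freezes:

        #include <queue>
        #include <utility>
        #include <vector>

        void DestroyAdjacentTiles(int startX, int startZ, int gridSize,
                                  bool (*hasFlag)(int, int), void (*destroyTile)(int, int))
        {
            std::vector<std::vector<bool>> visited(gridSize, std::vector<bool>(gridSize, false));
            std::queue<std::pair<int, int>> frontier;
            frontier.push({startX, startZ});
            visited[startX][startZ] = true;

            const int dx[] = { 1, -1, 0, 0 };
            const int dz[] = { 0, 0, 1, -1 };   // 4-connected; extend for diagonals if wanted

            while (!frontier.empty())
            {
                auto [x, z] = frontier.front();
                frontier.pop();
                for (int i = 0; i < 4; ++i)
                {
                    int nx = x + dx[i], nz = z + dz[i];
                    if (nx < 0 || nz < 0 || nx >= gridSize || nz >= gridSize) continue;
                    if (visited[nx][nz]) continue;
                    visited[nx][nz] = true;
                    if (hasFlag(nx, nz))        // the condition that lets the fill keep spreading
                    {
                        destroyTile(nx, nz);
                        frontier.push({nx, nz});
                    }
                }
            }
        }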

    Read the article

  • Strange mesh import problem with Assimp and OpenGL

    - by Morgan
    Using the assimp library for importing 3D data into an OpenGL application. I get some strange problems regarding indexing of the vertices: If I use the following code for importing vertex indices: for (unsigned int t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace * face = &mesh->mFaces[t]; if (face->mNumIndices == 3) { indices->push_back(face->mIndices[0]); indices->push_back(face->mIndices[1]); indices->push_back(face->mIndices[2]); } } I get the following result: Instead, if I use the following code: for(int k = 0; k < 2 ; k++) { for (unsigned int t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace * face = &mesh->mFaces[t]; if (face->mNumIndices == 3) { indices->push_back(face->mIndices[0]); indices->push_back(face->mIndices[1]); indices->push_back(face->mIndices[2]); } } } I get the correct result: Hence adding the indices twice, renders the correct result? The OpenGL buffer is populated, like so: glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices->size() * sizeof(unsigned int), indices->data(), GL_STATIC_DRAW); And rendered as follows: glDrawElements(GL_TRIANGLES, vertexCount*3, GL_UNSIGNED_INT, indices->data());

    Read the article

  • Normal map applied as a diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures works fine, but I am having problem with normal maps, so I thought I'd tried to apply the normal maps as the diffuse map in my fragment shader so I could see everything is OK. I comment-out my normal map code and just set the diffuse map to the normal map and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader, notice I commented out normal map code so I could debug the normal map as a diffuse texture "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if 
(light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Here is my wrapper around a texture OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureFormat textureFormat, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB : textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED); glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } OpenGLTexture::~OpenGLTexture() { glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger); } And here is the sampler I create which is shared between Diffuse and normal textures // texture sampler setup glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger); SetAnisotropicFiltering(mCurrentAnisotropy); The diffuse textures looks like they should, but the normal looks so wierd. Why is this?

    Read the article

  • General visual effects to meshes/entities

    - by Pacha
    I am trying to add some visual effects to some entities, meshes, or whatever you want to call them, as they are looking pretty dull in my game right now. What I want to achieve is this: http://youtu.be/zox8935PLw0?t=36s (the "texture" gets disintegrated and then goes back to normal, covering the whole mesh). I would also like to know the best way to add effects like the one in the video to my game (for example, thunder effects, shattering, etc.). I know that I can do some things with shaders, but I haven't learned them very well and I am still at a beginner level. I am using Ogre3D, and GLSL for shaders. Thanks! Note: I want to apply the effect in the video to my main character (shown in a screenshot of my game).

    Read the article

  • Algorithm to find average position

    - by Simran kaur
    In the given diagram, I have the extreme left and right points, that is, -2 and 4 in this case. So, obviously, I can calculate the width, which is 6 in this case. What we know: the number of partitions (3 in this case) and the partition number at any point, i.e. whether it is the 1st, 2nd or 3rd partition (numbered starting from the left). What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So, basically, I just want a generalized formula to calculate the position of the average at any point.
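
    If "average" here means the centre of a partition, a formula that fits the example is centre(k) = left + (k - 0.5) * (right - left) / n for the k-th of n partitions (1-based). A tiny C++ sketch with the question's numbers:

        // For left = -2, right = 4 and 3 partitions, this gives -1, 1 and 3.
        float PartitionCenter(float left, float right, int numPartitions, int partitionIndex /* 1-based */)
        {
            float partWidth = (right - left) / numPartitions;
            return left + (partitionIndex - 0.5f) * partWidth;
        }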

    Read the article

  • Algorithmically generating neon layers on pixel grid

    - by user190929
    For a screensaver I am making, I am a fan of neon-like graphics, which, of course, look great against a black background. As I understand it, neon, graphically speaking, is essentially a gradient of a color, brightest in the center and getting darker proceeding outward. More accurately, it is similar, but separated into tubes and glow: the tubes are mostly white (or a light variant of the color, you could say), while the glow is where most of the color is seen, and the glow is darker. Anyhow, my question is, how could you generate such things given an initial pattern of pixels that would be the tubes? For example, let's say I want to make a neon 'H'. Via the libraries, I can obtain the rectangles of pixels which represent it, but I want to make it look neonized. How could I algorithmically achieve such an effect given a base tube shape and base color? EDIT: OK, I misstated that; got a bit distracted. My purpose for this was similar to a neon effect, but not quite. Sorry about that. What I am looking for is something like this. Start with a pattern of pixels:
    [!][!][!][!][!][!][!][!]
    [!][!][O][!][!][!][!][!]
    [!][!][O][O][!][!][!][!]
    [!][!][!][!][O][!][!][!]
    [!][!][!][!][!][!][!][!]
    How do I find the E pixels?
    [!][E][E][E][!][!][!][!]
    [!][E][O][E][E][!][!][!]
    [!][E][O][O][E][E][!][!]
    [!][E][E][E][O][E][!][!]
    [!][!][!][E][E][E][!][!]
    Sorry if that looks bad.
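
    The E cells are a one-pixel dilation of the filled cells: every empty cell with at least one filled 8-neighbour (which reproduces the second grid above). A small C++ sketch; running it again on the union of tube plus first ring yields a second, dimmer ring, and so on for the glow falloff.

        #include <vector>

        std::vector<std::vector<bool>> DilateRing(const std::vector<std::vector<bool>>& filled)
        {
            int h = static_cast<int>(filled.size());
            int w = static_cast<int>(filled[0].size());
            std::vector<std::vector<bool>> ring(h, std::vector<bool>(w, false));
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                {
                    if (filled[y][x]) continue;                 // only empty cells can become ring cells
                    for (int dy = -1; dy <= 1 && !ring[y][x]; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                        {
                            int ny = y + dy, nx = x + dx;
                            if (ny < 0 || nx < 0 || ny >= h || nx >= w) continue;
                            if (filled[ny][nx]) { ring[y][x] = true; break; }
                        }
                }
            return ring;
        }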

    Read the article

  • How should I invoke a physics engine?

    - by ymfoi
    I'm new to writing games. I'm planning to write a 2D battle game which may require a physics engine. Suppose I've written one; how can I combine it with the main routine of my game? Should I attach it directly to the graphics render routine or put it in its own thread? I've spent much time looking for a common approach but found nothing, so can you explain the basic ideas to me, a newbie? Thanks! P.S. There are many other problems I would have to deal with if I chose to start a separate thread for the physics engine (the locking problem, for example), but my intuition says I'd be better off separating rendering and the physics engine.
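
    The common single-threaded arrangement is to step the physics with a fixed timestep from the main loop and decouple it from rendering with an accumulator ("fix your timestep"). A minimal C++ sketch; ProcessInput, StepPhysics and Render stand in for the game's own routines:

        #include <chrono>

        void ProcessInput();            // assumed game-side routines
        void StepPhysics(double dt);
        void Render();

        void RunGameLoop(bool& running)
        {
            using clock = std::chrono::steady_clock;
            const double fixedDt = 1.0 / 60.0;     // physics always advances in 1/60 s steps
            double accumulator = 0.0;
            auto previous = clock::now();

            while (running)
            {
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                ProcessInput();
                while (accumulator >= fixedDt)     // catch up in fixed-size steps
                {
                    StepPhysics(fixedDt);
                    accumulator -= fixedDt;
                }
                Render();                          // render once per frame, independent of step count
            }
        }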

    Read the article

  • DirectX10 How to use Constant Buffers

    - by schnozzinkobenstein
    I'm trying to access some variables in my shader, but I think I'm doing this wrong. Say I have a constant buffer that looks like this: cbuffer perFrame { float foo; float bar; }; I got an ID3D10EffectConstantBuffer reference to it, and I can get a specific index by calling GetMemberByIndex, but how can I figure out how many members perFrame has so that I can get each member without going out of bounds?

    Read the article

  • How to move the object around the screen

    - by Abhishek
    I am trying to move the object around the screen. I tried this code: -(void) move { CGFloat upperLimit = mWinSize.height - (mGunda.contentSize.height / 2.0); CGFloat upperLimit1 = mWinSize.height; CGFloat lowerLimit = (mGunda.contentSize.height / 2.0); CGFloat RightLimit = mWinSize.width - (mGunda.contentSize.width/2.0); CGFloat Right = (mGunda.contentSize.width/2.0); if ( mImageGoingUpward ) { mGunda.position = ccp( mGunda.position.x, mGunda.position.y + 5); if ( mGunda.position.y >= upperLimit ) { mImageGoingUpward = NO; mHori = NO; } } else { mGunda.position = ccp( mGunda.position.x, mGunda.position.y - 5); if ( mGunda.position.y <= lowerLimit ) { mGunda.position = ccp(mGunda.position.x +5, lowerLimit); } if(mGunda.position.x >= RightLimit) { mGunda.position = ccp(mGunda.position.x, mGunda.position.y+10); mHori = YES; } if(mHori) { if(mGunda.position.y >= upperLimit) { mGunda.position = ccp(mGunda.position.x - 5,mGunda.position.y); } } } } } It moves the object from bottom to top, top to bottom, bottom to right, and from the right up to the top right of the screen. Here is the problem I have: it does not move from the top right towards the left side of the screen, so the full loop around the screen does not happen. How can I do this?

    Read the article

  • How to make unit selection circles merge?

    - by MaT
    I would like to know how to make this effect of merged circle selection. Here are images to illustrate; basically I'm looking for this effect. How can the merge effect of the circles be achieved? I didn't find any explanation concerning this effect. I know that to project those textures I can develop a decal system, but I don't know how to create the merging effect. If possible, I'm looking for a purely shader-based solution.
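
    A common trick behind this look is to treat every circle as a signed distance field, combine the fields with a minimum (the union), and draw only a thin band around the zero level set; the band automatically hugs the merged silhouette. In practice that comparison lives in the decal's fragment shader, but the same math is shown below as a CPU-side C++ sketch (names are illustrative):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Circle { float cx, cz, radius; };

        // Returns 1 if the point (px, pz) lies on the merged outline, 0 otherwise.
        float MergedOutline(const std::vector<Circle>& circles, float px, float pz, float thickness)
        {
            float d = 1e9f;   // signed distance to the union of all circles
            for (const Circle& c : circles)
            {
                float dist = std::sqrt((px - c.cx) * (px - c.cx) + (pz - c.cz) * (pz - c.cz)) - c.radius;
                d = std::min(d, dist);                             // min of signed distances = union
            }
            return (std::fabs(d) < thickness) ? 1.0f : 0.0f;       // thin band around the zero contour
        }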

    Read the article

  • CSM DX11 issues

    - by KaiserJohaan
    I got CSM to work in OpenGL, and now Im trying to do the same in directx. I'm using the same math library and all and I'm pretty much using the alghorithm straight off. I am using right-handed, column major matrices from GLM. The light is looking (-1, -1, -1). The problem I have is twofolds; For some reason, the ground floor is causing alot of (false) shadow artifacts, like the vast shadowed area you see. I confirmed this when I disabled the ground for the depth pass, but thats a hack more than anything else The shadows are inverted compared to the shadowmap. If you squint you can see the chairs shadows should be mirrored instead. This is the first cascade shadow map, in range of the alien and the chair: I can't figure out why this is. This is the depth pass: for (uint32_t cascadeIndex = 0; cascadeIndex < NUM_SHADOWMAP_CASCADES; cascadeIndex++) { mShadowmap.BindDepthView(context, cascadeIndex); CameraFrustrum cameraFrustrum = CalculateCameraFrustrum(degreesFOV, aspectRatio, nearDistArr[cascadeIndex], farDistArr[cascadeIndex], cameraViewMatrix); lightVPMatrices[cascadeIndex] = CreateDirLightVPMatrix(cameraFrustrum, lightDir); mVertexTransformPass.RenderMeshes(context, renderQueue, meshes, lightVPMatrices[cascadeIndex]); lightVPMatrices[cascadeIndex] = gBiasMatrix * lightVPMatrices[cascadeIndex]; farDistArr[cascadeIndex] = -farDistArr[cascadeIndex]; } CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix) { CameraFrustrum ret = { Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; } And this is the pixel shader used for the lighting stage: #define DEPTH_BIAS 0.0005 #define NUM_CASCADES 4 cbuffer DirectionalLightConstants : register(CBUFFER_REGISTER_PIXEL) { float4x4 gSplitVPMatrices[NUM_CASCADES]; float4x4 gCameraViewMatrix; float4 gSplitDistances; float4 gLightColor; float4 gLightDirection; }; Texture2D 
gPositionTexture : register(TEXTURE_REGISTER_POSITION); Texture2D gDiffuseTexture : register(TEXTURE_REGISTER_DIFFUSE); Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL); Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH); SamplerComparisonState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH); float4 ps_main(float4 position : SV_Position) : SV_Target0 { float4 worldPos = gPositionTexture[uint2(position.xy)]; float4 diffuse = gDiffuseTexture[uint2(position.xy)]; float4 normal = gNormalTexture[uint2(position.xy)]; float4 camPos = mul(gCameraViewMatrix, worldPos); uint index = 3; if (camPos.z > gSplitDistances.x) index = 0; else if (camPos.z > gSplitDistances.y) index = 1; else if (camPos.z > gSplitDistances.z) index = 2; float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos); float viewDepth = projCoords.z - DEPTH_BIAS; projCoords.z = float(index); float visibilty = gShadowmap.SampleCmpLevelZero(gShadowmapSampler, projCoords, viewDepth); float angleNormal = clamp(dot(normal, gLightDirection), 0, 1); return visibilty * diffuse * angleNormal * gLightColor; } As you can see I am using depth bias and a bias matrix. Any hints on why this behaves so wierdly?

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below. first depth split second depth split - here you can see the model is caught between the splits How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustrum erronous? CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat) { CameraFrustrum ret = { Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); outFrustrumMat = invMVP; for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; }

    Read the article

  • Is it important for reflection-based serialization to maintain consistent field ordering?

    - by Matchlighter
    I just finished writing a packet builder that dynamically loads data into a data stream for eventual network transmission. Each builder operates by finding fields in a given class (and its superclasses) that are marked with a @data annotation. When finishing my implementation, I remembered that getFields() does not return results in any specific order. Should reflection-based methods for serializing arbitrary data (like my packets) attempt to preserve a specific field ordering (such as alphabetical), and if so, how?

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I have already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst-case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
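
    For scale, re-encoding one 128-tile row on every edit is very cheap, so keeping rows compressed and rebuilding only the touched row is usually fine. A minimal sketch of per-row RLE (shown in C++ here; the Java version is the same idea with a list of id/count pairs):

        #include <cstdint>
        #include <utility>
        #include <vector>

        using Row    = std::vector<uint16_t>;                       // raw tile ids for one row
        using RleRow = std::vector<std::pair<uint16_t, uint16_t>>;  // (tile id, run length)

        RleRow Encode(const Row& row)
        {
            RleRow out;
            for (uint16_t id : row)
            {
                if (!out.empty() && out.back().first == id)
                    ++out.back().second;          // extend the current run
                else
                    out.emplace_back(id, 1);      // start a new run
            }
            return out;
        }

        Row Decode(const RleRow& rle)
        {
            Row out;
            for (auto [id, count] : rle)
                out.insert(out.end(), count, id); // expand each run back to raw tiles
            return out;
        }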

    Read the article

  • Grid-Based 2D Lighting Problems

    - by Lemoncreme
    I am aware this question has been asked before, but unfortunately I am new to the language, so the complicated explanations I've found do not help me in the least. I need a lighting engine for my game, and I've tried some procedural lighting systems. This method works the best: if (light[xx - 1, yy] > light[xx, yy]) light[xx, yy] = light[xx - 1, yy] - lightPass; if (light[xx, yy - 1] > light[xx, yy]) light[xx, yy] = light[xx, yy - 1] - lightPass; if (light[xx + 1, yy] > light[xx, yy]) light[xx, yy] = light[xx + 1, yy] - lightPass; if (light[xx, yy + 1] > light[xx, yy]) light[xx, yy] = light[xx, yy + 1] - lightPass; (If an adjacent cell is brighter, the cell takes that value minus the 'lightPass' variable; it's inside a for() loop.) This is all fine and dandy except for an obvious reason: the system favors whatever comes first in the for() loop. This is what the above code looks like applied to my game. If I could get some help on creating a new procedural (or other) lighting system, I would really appreciate it!
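
    An order-independent alternative is to propagate light outwards from the already-lit cells with a queue (breadth-first), so every cell ends up with the brightest value any neighbour can give it no matter which direction the scan started from. A minimal C++ sketch of the idea:

        #include <queue>
        #include <utility>
        #include <vector>

        void PropagateLight(std::vector<std::vector<float>>& light, float lightPass)
        {
            int h = static_cast<int>(light.size());
            int w = static_cast<int>(light[0].size());
            std::queue<std::pair<int, int>> frontier;
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    if (light[y][x] > 0.0f)
                        frontier.push({x, y});      // seed with every already-lit cell (light sources)

            const int dx[] = { 1, -1, 0, 0 };
            const int dy[] = { 0, 0, 1, -1 };
            while (!frontier.empty())
            {
                auto [x, y] = frontier.front();
                frontier.pop();
                float spread = light[y][x] - lightPass;   // brightness handed to neighbours
                if (spread <= 0.0f) continue;
                for (int i = 0; i < 4; ++i)
                {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (light[ny][nx] < spread)           // only overwrite with a brighter value
                    {
                        light[ny][nx] = spread;
                        frontier.push({nx, ny});
                    }
                }
            }
        }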

    Read the article
