Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • Normal map applied as a diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I thought I'd try applying the normal map as the diffuse map in my fragment shader to check that everything is OK. I commented out my normal map code and just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader; notice I commented out the normal map code so I could debug the normal map as a diffuse texture:

        #version 330

        layout(std140) uniform;

        const int MAX_LIGHTS = 8;

        struct Light
        {
            vec4 mLightColor;
            vec4 mLightPosition;
            vec4 mLightDirection;

            int mLightType;
            float mLightIntensity;
            float mLightRadius;
            float mMaxDistance;
        };

        uniform UnifLighting
        {
            vec4 mGamma;
            vec3 mViewDirection;
            int mNumLights;

            Light mLights[MAX_LIGHTS];
        } Lighting;

        uniform UnifMaterial
        {
            vec4 mDiffuseColor;
            vec4 mAmbientColor;
            vec4 mSpecularColor;
            vec4 mEmissiveColor;

            bool mHasDiffuseTexture;
            bool mHasNormalTexture;
            bool mLightingEnabled;
            float mSpecularShininess;
        } Material;

        uniform sampler2D unifDiffuseTexture;
        uniform sampler2D unifNormalTexture;

        in vec3 frag_position;
        in vec3 frag_normal;
        in vec2 frag_texcoord;
        in vec3 frag_tangent;
        in vec3 frag_bitangent;

        out vec4 finalColor;

        void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm)
        {
            vec3 viewDirection = normalize(Lighting.mViewDirection);
            vec3 halfAngle = normalize(dirToLight + viewDirection);

            float angleNormalHalf = acos(dot(halfAngle, normalize(normal)));
            float exponent = angleNormalHalf / Material.mSpecularShininess;
            exponent = -(exponent * exponent);

            gaussianTerm = exp(exponent);
        }

        vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal)
        {
            if (light.mLightType == 1)          // point light
            {
                vec3 positionDiff = light.mLightPosition.xyz - frag_position;
                float dist = max(length(positionDiff) - light.mLightRadius, 0);

                float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1));
                attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0);

                vec3 dirToLight = normalize(positionDiff);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 2)     // directional light
            {
                vec3 dirToLight = normalize(light.mLightDirection.xyz);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 4)     // ambient light
                return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor;
            else
                return vec4(0.0);
        }

        void main()
        {
            vec4 diffuseTexture = vec4(1.0);
            if (Material.mHasDiffuseTexture)
                diffuseTexture = texture(unifDiffuseTexture, frag_texcoord);

            vec3 normal = frag_normal;
            if (Material.mHasNormalTexture)
            {
                diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0);
                // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0);
                // mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal));
                // normal = tangentToWorldSpace * normalTangentSpace;
            }

            if (Material.mLightingEnabled)
            {
                vec4 accumLighting = vec4(0.0);

                for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++)
                    accumLighting += Material.mEmissiveColor * diffuseTexture +
                                     CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal);

                finalColor = pow(accumLighting, Lighting.mGamma);
            }
            else
            {
                finalColor = pow(diffuseTexture, Lighting.mGamma);
            }
        }

    Here is my wrapper around a texture:

        OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight,
                                     TextureFormat textureFormat, TextureType textureType, Logger& logger)
            : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
        {
            glGenTextures(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, mTexture);
            CHECK_GL_ERROR(mLogger);

            GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB :
                                     textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED);

            glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]);
            CHECK_GL_ERROR(mLogger);
            glGenerateMipmap(GL_TEXTURE_2D);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, 0);
            CHECK_GL_ERROR(mLogger);
        }

        OpenGLTexture::~OpenGLTexture()
        {
            glDeleteBuffers(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
        }

    And here is the sampler I create, which is shared between the diffuse and normal textures:

        // texture sampler setup
        glGenSamplers(1, &mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy);
        CHECK_GL_ERROR(mLogger);

        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE);
        CHECK_GL_ERROR(mLogger);
        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL);
        CHECK_GL_ERROR(mLogger);
        glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        SetAnisotropicFiltering(mCurrentAnisotropy);

    The diffuse textures look like they should, but the normal map looks weird. Why is this?
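
    A note on the symptom: the smurf look is what a correct tangent-space normal map should look like when displayed as colour. Most texels encode a normal pointing roughly along +Z, and since normals are stored remapped from [-1,1] to [0,1], (0,0,1) becomes RGB (0.5, 0.5, 1.0), a pale blue. A minimal, self-contained C++ sketch of that encode/decode round trip (illustrative only, not the poster's code):

        #include <cstdio>

        int main() {
            // Typical normal-map texel: a normal pointing straight out of the surface.
            float n[3] = {0.0f, 0.0f, 1.0f};

            // Encode: rgb = n * 0.5 + 0.5 -> (0.5, 0.5, 1.0), the familiar blue tint.
            float rgb[3];
            for (int i = 0; i < 3; ++i) rgb[i] = n[i] * 0.5f + 0.5f;
            std::printf("stored colour:  %.2f %.2f %.2f\n", rgb[0], rgb[1], rgb[2]);

            // Decode, as the commented-out shader code does: n = rgb * 2 - 1.
            float back[3];
            for (int i = 0; i < 3; ++i) back[i] = rgb[i] * 2.0f - 1.0f;
            std::printf("decoded normal: %.2f %.2f %.2f\n", back[0], back[1], back[2]);
            return 0;
        }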


  • Best practices for implementing collectible virtual item "packs"?

    - by Glenn Barnett
    I'm in the process of building a game in which virtual items can be obtained either by in-game play (defeating enemies, gaining levels), or by purchasing "packs" via microtransactions. Looking at an existing example like Duels.com's item packs, it looks like a lot of thought went into their implementation, including:

    - Setting clear player expectations as to what can be obtained in the pack
    - Limiting pack supply to increase demand and control inflation

    Are there other considerations that should be taken into account? For example, should the contents of the packs be pre-generated to guarantee the advertised drop rates, or is each drop just a random chance, where you could end up with higher or lower supply?
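
    On the pre-generation question, the two models guarantee different things: pre-generating a finite pool makes the advertised rates exact over the whole "print run" (as with physical trading-card packs), while independent random rolls only match the rates in expectation. A hedged C++ sketch of the pooled approach; the rarity names and percentages are invented for illustration:

        #include <algorithm>
        #include <random>
        #include <string>
        #include <vector>

        // Build a finite print run in which each rarity appears exactly as often
        // as advertised, then deal packs off the shuffled pool. Rates are exact
        // by construction rather than merely expected.
        std::vector<std::string> BuildPrintRun(int totalItems) {
            std::vector<std::string> pool;
            pool.insert(pool.end(), totalItems * 70 / 100, "common");    // advertised 70%
            pool.insert(pool.end(), totalItems * 25 / 100, "rare");      // advertised 25%
            pool.insert(pool.end(), totalItems * 5 / 100,  "legendary"); // advertised 5%
            std::mt19937 rng{std::random_device{}()};
            std::shuffle(pool.begin(), pool.end(), rng);
            return pool;  // deal N items per pack off the back of this vector
        }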


  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this:

        sampler input : register(s0);

        float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
        {
            float4 colour = tex2D(input, coords);

            if (colour.r == sourceColours[0].r && colour.g == sourceColours[0].g && colour.b == sourceColours[0].b)
                return targetColours[0];

            return colour;
        }

    What I would like to do is have the function take in 2 textures: a default table and a lookup table (both the same dimensions). Grab the current pixel, find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.
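
    Searching the default table texel-by-texel from inside the pixel shader is impractical; the usual trick is to make the lookup coordinates explicit instead of searching for them at runtime. One hedged approach, assuming RGBA8 byte buffers and illustrative names (this is not XNA API): preprocess the sprite once on the CPU so each pixel stores the (x, y) of its colour in the default table, after which the shader is a single tex2D into whichever table is bound:

        #include <algorithm>
        #include <cstdint>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        // One-time bake: replace each sprite pixel's colour with the coordinates of
        // the matching entry in the default table, packed into the R and G channels.
        // At runtime the shader just samples: colour = tex2D(lookupTable, pixel.rg).
        void BakePaletteIndices(std::vector<uint32_t>& spritePixels,
                                const std::vector<uint32_t>& defaultTable,
                                int tableWidth, int tableHeight) {
            std::unordered_map<uint32_t, std::pair<int, int>> where;
            for (int y = 0; y < tableHeight; ++y)
                for (int x = 0; x < tableWidth; ++x)
                    where[defaultTable[y * tableWidth + x]] = {x, y};

            for (uint32_t& p : spritePixels) {
                auto it = where.find(p);
                if (it == where.end()) continue;  // colour not present in the table
                uint8_t u = uint8_t(it->second.first  * 255 / std::max(tableWidth  - 1, 1));
                uint8_t v = uint8_t(it->second.second * 255 / std::max(tableHeight - 1, 1));
                p = uint32_t(u) | (uint32_t(v) << 8);  // coords into R and G
            }
        }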


  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, a model will be caught/split between two shadow map cascades, like below. First depth split; second depth split - here you can see the model is caught between the splits. How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustum erroneous?

        CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist,
                                               const Mat4& cameraViewMatrix, Mat4& outFrustrumMat)
        {
            CameraFrustrum ret = {
                Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f),
                Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f),
            };

            const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist);
            const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix);
            outFrustrumMat = invMVP;

            for (Vec4& corner : ret)
            {
                corner = invMVP * corner;
                corner /= corner.w;
            }

            return ret;
        }

        Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir)
        {
            Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f));

            Vec4 transf = lightViewMatrix * cameraFrustrum[0];
            float maxZ = transf.z, minZ = transf.z;
            float maxX = transf.x, minX = transf.x;
            float maxY = transf.y, minY = transf.y;
            for (uint32_t i = 1; i < 8; i++)
            {
                transf = lightViewMatrix * cameraFrustrum[i];
                if (transf.z > maxZ) maxZ = transf.z;
                if (transf.z < minZ) minZ = transf.z;
                if (transf.x > maxX) maxX = transf.x;
                if (transf.x < minX) minX = transf.x;
                if (transf.y > maxY) maxY = transf.y;
                if (transf.y < minY) minY = transf.y;
            }

            Mat4 viewMatrix(lightViewMatrix);
            viewMatrix[3][0] = -(minX + maxX) * 0.5f;
            viewMatrix[3][1] = -(minY + maxY) * 0.5f;
            viewMatrix[3][2] = -(minZ + maxZ) * 0.5f;
            viewMatrix[0][3] = 0.0f;
            viewMatrix[1][3] = 0.0f;
            viewMatrix[2][3] = 0.0f;
            viewMatrix[3][3] = 1.0f;

            Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5);

            return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix;
        }
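
    On the "increase the overlapping boundaries" idea: that is indeed one of the two standard fixes - overlap the cascade distances so geometry near a boundary is fully rendered into both neighbouring shadow maps, and (separately) snap the light-space bounds to shadow-map texel increments to stop shimmering. A hedged sketch of the overlap part; it uses plain linear splits for brevity, where practical schemes usually blend linear and logarithmic spacing:

        #include <algorithm>
        #include <utility>
        #include <vector>

        // Per-cascade near/far distances with a fractional overlap, so a mesh caught
        // on a boundary is covered by both cascades instead of being cut in half.
        std::vector<std::pair<float, float>> SplitDistances(float nearZ, float farZ,
                                                            int numCascades,
                                                            float overlap = 0.1f) {
            std::vector<std::pair<float, float>> splits;
            for (int i = 0; i < numCascades; ++i) {
                float begin = nearZ + (farZ - nearZ) * float(i) / numCascades;
                float end   = nearZ + (farZ - nearZ) * float(i + 1) / numCascades;
                end += (end - begin) * overlap;  // extend into the next cascade
                splits.emplace_back(begin, std::min(end, farZ));
            }
            return splits;
        }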


  • How to make unit selection circles merge?

    - by MaT
    I would like to know how to make this effect of merged circle selection. Here are images to illustrate. Basically I'm looking for this effect: how can the merge effect of the circles be achieved? I haven't found any explanation of this effect. I know that to project those textures I can build a decal system, but I don't know how to create the merging effect. If possible, I'm looking for a purely shader-based solution.
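
    For reference, this effect is usually done as a distance-field union, evaluated per pixel in a fragment shader: take the minimum distance to all selection circles and draw only where that value sits in a thin band around zero. Interior boundaries vanish automatically, which is exactly the merging in the screenshots. A CPU-side C++ sketch of the field function, with illustrative types (a shader would evaluate the same thing per fragment):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Circle { float x, y, r; };

        // Signed distance to the union of all circles: negative inside, positive
        // outside. The union of signed distance fields is their pointwise minimum,
        // so rings drawn where |distance| < thickness merge where circles touch.
        float UnionDistance(float px, float py, const std::vector<Circle>& circles) {
            float d = 1e30f;
            for (const Circle& c : circles) {
                float dist = std::hypot(px - c.x, py - c.y) - c.r;
                d = std::min(d, dist);
            }
            return d;
        }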


  • Restoring projection matrix

    - by brainydexter
    I am learning to use FBOs, and one of the things I need to do when rendering onto a user-defined FBO is to set up the projection, modelview and viewport for it. Once I am done rendering to the FBO, I need to restore these matrices. I found glPushAttrib(GL_VIEWPORT_BIT); glPopAttrib(); to restore the viewport to its old state. Is there a way to restore the projection and modelview matrices to whatever they were earlier? Tech: C++/OpenGL. Thanks!
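
    For the fixed-function pipeline this question is using, there is a direct analogue of glPushAttrib: each matrix mode has its own stack. A short sketch, assuming the compatibility profile:

        // Save both matrices before reconfiguring them for the FBO pass.
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();

        // ... set up the FBO's projection/modelview and render to it ...

        // Restore them afterwards, popping in the matching modes.
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPopMatrix();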


  • How can I import models from Blender into jMonkeyEngine?

    - by Nathan Sabruka
    I have some Blender model files (Blender version 2.6) which I would like to use with the jMonkeyEngine SDK. However, when I use Blender's native .obj exporter, I can't import the result in jMonkeyEngine (the model simply fails to import or looks messed up). I've tried importing .obj files or .blend files directly into the jMonkeyEngine SDK to no avail. I've also tried to use various OGRE exporters to export .scene and .material files, but only the .scene file is created. Is there a straightforward way to export files from Blender into the jMonkeyEngine SDK? EDIT: I seem to have found something in Blender. When I go under addons, there's a warning in the OGRE exporter: "'.mesh' output requires OgreCommandLineTools". However, I have already installed those tools under the C drive. Has anyone else encountered this issue?


  • Running GL ES 2.0 code under Linux (no Android, no iOS)

    - by user827992
    I need to write OpenGL ES 2.0 code, and I would like to run the programs on my desktop for practical reasons. I have already tried the official GLES SDK from ATI for my video card, but it doesn't even run the examples that come with the SDK itself. I'm not looking for performance here; even a software-based rendering pipeline would be enough. I just need full support for GLES 2.0 and GLSL to write and run GL code. Is there a reliable solution for this under Ubuntu Linux?


  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store the information needed to correctly load things such as a list of specified characters, scenes and music. On top of that, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to a character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first, using XML was working fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL might be the better move. Can anyone inform me of the key differences, and the pros and cons of each? I guess I'm looking for the best way to store the data more conveniently so it is easier to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concat into a single field and parse later. Edit: if I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?


  • FreeType2 Crash on FT_Init_FreeType

    - by JoeyDewd
    I'm currently trying to learn how to use the FreeType2 library for drawing fonts with OpenGL. However, when I start the program it immediately crashes with the following error: "(Can't correctly start the application (0xc000007b))". Commenting out the FT_Init_FreeType call removes the error and my game starts just fine. I'm wondering if it's my code, or something to do with loading the DLL file. My code:

        #include "SpaceGame.h"
        #include <ft2build.h>
        #include FT_FREETYPE_H

        // FreeType test
        FT_Library library;

        Game::Game(int Width, int Height)
        {
            // FreeType
            FT_Error error = FT_Init_FreeType(&library);
            if (error)
            {
                cout << "Error occured during FT initialisation" << endl;
            }

    And my current use of the FreeType2 files: inside my bin folder (where the debug .exe is located) are freetype6.dll, libfreetype.dll.a and libfreetype-6.dll. In Code::Blocks, I've linked to the lib and include folders of the FreeType 2.3.5.1 version, and included a compiler flag: -lfreetype. My program starts perfectly fine if I comment out the FT_Init function, which means the includes and library files should be fine. I can't find a solution to my problem and Google isn't helping me, so any help would be greatly appreciated.


  • Image with FadeIn effect blinks when added to scene

    - by Ef Es
    I am trying to add an image to the scene; it should be added to the scene invisible, fade in, and then be deleted when the effect finishes. My problem is that the images blink once when they are added to the scene, then do the intended effect. My best guess is that when they are added they show on the scene for a split second before starting the animation. I thought of making them invisible for a split second before activating them, but I am not sure how to code it.

        const bool Sunbeams::add()
        {
            const CCSize kSceenSize = CCDirector::sharedDirector()->getWinSize();
            const int nRayType = random( m_kRays.size());
            const CCPoint kPosition( random( static_cast < int >( kSceenSize.width)), 0.0f);
            const float fDuration = random( m_fDurationVariance) + m_fDurationMin;

            CCSprite* pkLightBeam = CCSprite::spriteWithTexture( m_kRays[nRayType]);
            if ( !pkLightBeam)
            {
                msg::debug( "Sunbeams::add", "Failed to create sprite from ray '%d'!\n", m_kRays[nRayType]);
                return false;
            }

            pkLightBeam->setAnchorPoint( CCPointZero);
            pkLightBeam->setPosition( kPosition);
            m_kActiveBeams.push_back( pkLightBeam);
            CCDirector::sharedDirector()->getRunningScene()->addChild( pkLightBeam);

            CCActionInterval* pkAction = CCFadeIn::actionWithDuration( fDuration);
            CCActionInterval* pkActionBack = pkAction->reverse();
            pkLightBeam->runAction( CCSequence::actions( pkAction, pkActionBack, 0));

            return true;
        }
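
    The blink is consistent with the sprite being drawn at full opacity for the frame(s) between addChild and the FadeIn action's first update. One hedged fix is to add the sprite fully transparent, so the fade ramps the opacity up from zero; sketched below in cocos2d-x-style C++ (the question's code is the same API in C++ form):

        // Add the beam invisible; CCFadeIn then ramps the opacity up over
        // fDuration. Without this, the sprite renders fully opaque for at
        // least one frame before the action starts, which reads as a blink.
        CCSprite* pkLightBeam = CCSprite::spriteWithTexture(m_kRays[nRayType]);
        pkLightBeam->setOpacity(0);  // invisible on the frame it is added
        CCDirector::sharedDirector()->getRunningScene()->addChild(pkLightBeam);

        CCActionInterval* pkFadeIn  = CCFadeIn::actionWithDuration(fDuration);
        CCActionInterval* pkFadeOut = pkFadeIn->reverse();
        pkLightBeam->runAction(CCSequence::actions(pkFadeIn, pkFadeOut, NULL));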


  • Raycasting mouse coordinates to rotated object?

    - by SPL
    I am trying to cast a ray from my mouse to a plane at a specified position with a known width, length and height. I know that you can use NDC (Normalized Device Coordinates) to cast a ray, but I don't know how to detect whether the ray actually hit the plane, and where it hit. The plane is translated -100 on the Y, rotated 60 on the X, then translated again -100. Can anyone please give me a good tutorial on this? For a complete noob! I am almost new to matrix and vector transformations.
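
    Once the NDC ray has been unprojected into a world-space origin and direction, the plane test itself is only a few lines: describe the plane by a point on it and its normal (both known here from the plane's translation and rotation), solve for t, and reject hits behind the ray. A self-contained sketch with minimal stand-in types:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // True if the ray origin + t*dir (t >= 0) crosses the infinite plane through
        // planePoint with unit normal planeNormal; 'hit' receives the intersection.
        // To confine it to the finite plane, transform 'hit' into the plane's local
        // space and compare against the half-width and half-length.
        bool RayPlane(const Vec3& origin, const Vec3& dir,
                      const Vec3& planePoint, const Vec3& planeNormal, Vec3& hit) {
            float denom = Dot(planeNormal, dir);
            if (std::fabs(denom) < 1e-6f) return false;   // ray parallel to the plane
            Vec3 toPlane = { planePoint.x - origin.x,
                             planePoint.y - origin.y,
                             planePoint.z - origin.z };
            float t = Dot(planeNormal, toPlane) / denom;
            if (t < 0.0f) return false;                   // plane is behind the ray
            hit = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
            return true;
        }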


  • "Accumulate" buffer results in XNA4?

    - by Utkarsh Sinha
    I'm trying to simulate a "heightmap" buffer in XNA 4.0, but the results don't look correct. Here's what I'm hoping to achieve: http://www.youtube.com/watch?feature=player_detailpage&v=-Q6ISVaM5Ww#t=517s (8:38). From what I understand, these are the steps to get there: pass the height buffer plus the current entity's heightmap, generate a stencil and update the height buffer, then render sprite + stencil. For now, I'm just trying to get the height buffer to work. So here's the problem. Inside the draw loop, I do the following: create a new render target and set it, draw the heightmap with a SpriteBatch (no shaders), call graphicsDevice.SetRenderTarget(null), then draw the render target with a SpriteBatch. I expected to see all entities' heightmaps, but only the last entity's heightmap is visible. Any hints on what I'm doing wrong? Here's the code inside the draw loop:

        RenderTarget2D tempDepthStencil = new RenderTarget2D(graphicsDevice, graphicsDevice.Viewport.Width,
            graphicsDevice.Viewport.Height, false, graphicsDevice.DisplayMode.Format, DepthFormat.None);
        graphicsDevice.SetRenderTarget(tempDepthStencil);

        // Gather depth information
        SpriteBatch depthStencilSpriteBatch = new SpriteBatch(graphicsDevice);
        depthStencilSpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp,
            DepthStencilState.None, RasterizerState.CullCounterClockwise);
        depthStencilSpriteBatch.Draw(texHeightmap, pos, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1);
        depthStencilSpriteBatch.End();

        graphicsDevice.SetRenderTarget(null);

        SpriteBatch b1 = new SpriteBatch(graphicsDevice);
        b1.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null);
        b1.Draw((Texture2D)tempDepthStencil, Vector2.Zero, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1);
        b1.End();


  • Sprite not rotating around its centre after scaling at its centre

    - by asma.farhat
    If I scale a sprite at its centre and then try to rotate it around its centre as well, the rotation does not occur around its centre. The workaround that does work is: if you need to rotate, for example, a scaled ball, set the scale centre to the top left (0,0), set the scale that you want, then set the rotation centre to the middle of the scaled sprite and apply the rotation modifier.

        blaBloBliSprite.setScaleCenter(0, 0);
        blaBloBliSprite.setScale(0.667f);
        blaBloBliSprite.setPosition(557, CAMERA_HEIGHT / 2 - blaBloBliSprite.getHeightScaled() / 2);
        blaBloBliSprite.setRotationCenter(blaBloBliSprite.getWidthScaled() / 2, blaBloBliSprite.getHeightScaled() / 2);

    But I want to scale the sprite at its centre as well. Is there any way of doing that?


  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect whether you enter a special region. Let's assume there is a map so big that simply checking all of it would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through the trigger zones to see if the player is inside a region - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I can imagine is to split the world up into multiple parts, register the event not on the movement itself but on all the parts covered by your area, and only check the areas registered in the current part. And another thing I would like to know: how could you detect that someone must have entered a trigger even though you never saw him directly in it, because his client only sent you a move packet shortly before entering and another after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to do it on every move.
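
    The splitting idea at the end is essentially the standard answer: a uniform spatial grid, the broad phase of most physics engines. Triggers register in every cell their bounds overlap, and a move only tests the triggers in the cells along its path; the missed-trigger case is handled by testing the segment between the last two received positions instead of a single point. A hedged sketch of the grid part, with invented types:

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Trigger { float minX, minY, maxX, maxY; int id; };

        // Uniform-grid broad phase: each trigger is registered in every cell its
        // AABB touches, so a player position only has to be tested against the
        // triggers sharing its cell, not against every trigger in the world.
        class TriggerGrid {
        public:
            explicit TriggerGrid(float cellSize) : mCellSize(cellSize) {}

            void Add(const Trigger& t) {
                for (int cy = Cell(t.minY); cy <= Cell(t.maxY); ++cy)
                    for (int cx = Cell(t.minX); cx <= Cell(t.maxX); ++cx)
                        mCells[Key(cx, cy)].push_back(t);
            }

            const std::vector<Trigger>* Candidates(float x, float y) const {
                auto it = mCells.find(Key(Cell(x), Cell(y)));
                return it == mCells.end() ? nullptr : &it->second;
            }

        private:
            int Cell(float v) const { return int(std::floor(v / mCellSize)); }
            static int64_t Key(int cx, int cy) {
                return (int64_t(cx) << 32) ^ uint32_t(cy);
            }

            float mCellSize;
            std::unordered_map<int64_t, std::vector<Trigger>> mCells;
        };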


  • Best practices for periodically saving game state to disk

    - by Ben Morris
    I'm working on an MMO. All of the player and environment data lives on a server and is kept in memory. There's a "world" object which keeps track of all of the maps, characters, etc. and their relations to each other. To avoid data loss in case of a crash, I've been periodically serializing the world to disk. The trouble is, this object can be quite large, so when the server starts writing, there's noticeable in-game slowdown for a few seconds, which I'd like to avoid. Any pointers on how to go about this in a more efficient way?
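
    A common pattern for this, sketched below with illustrative names: hold the simulation lock only long enough to take an in-memory copy of the world, then serialize and write that copy on a background thread, so the game loop pays for a memory-to-memory copy rather than the disk write. Assumes the world object is copyable:

        #include <fstream>
        #include <mutex>
        #include <string>
        #include <thread>

        struct World { /* maps, characters, ... assumed cheaply copyable */ };

        std::mutex gWorldMutex;
        World gWorld;

        // Called from a save timer. Only the copy happens under the lock; the
        // expensive serialization and I/O run off-thread and no longer stall
        // the simulation.
        void SaveWorldAsync(const std::string& path) {
            World snapshot;
            {
                std::lock_guard<std::mutex> lock(gWorldMutex);
                snapshot = gWorld;  // fast memory-to-memory copy
            }
            std::thread([snapshot = std::move(snapshot), path] {
                std::ofstream out(path, std::ios::binary);
                // Serialize(snapshot, out);  // placeholder for the existing serializer
            }).detach();
        }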


  • libgdx ActorGestureListener.pan() parameters not moving actor in smooth line

    - by Roar Skullestad
    I override the pan method in ActorGestureListener to implement dragging actors in libgdx (scene2d). When I move individual pieces on a board they move smoothly, but when moving the whole board, the x and y coordinates sent to pan are "jumping", by an increasingly large amount the longer the drag goes on. Here is an example of the deltaY values sent to pan when dragging smoothly downwards: 1.1156368 -0.13125038 -1.0500145 0.98439217 -1.0500202 0.91877174 -0.984396 0.9187679 -0.98439026 0.9187641 -0.13125038. This is how I move the camera:

        public void pan (InputEvent event, float x, float y, float deltaX, float deltaY) {
            cam.translate(-deltaX, -deltaY);

    I have tried using both the delta values sent to pan and the real position values, with similar results. And since it is the coordinates that are wrong, it doesn't matter whether I move the board itself or the camera. What could the cause be, and what is the solution? When I move the camera by only half the delta values, it moves smoothly, but only at half the speed of the mouse pointer:

        cam.translate(-deltaX / 2, -deltaY / 2);

    It seems like moving the camera or board affects the mouse input coordinates. How can I drag at "mouse speed" and still get smooth movements? (This question was also posted on Stack Overflow: http://stackoverflow.com/questions/20693020/libgdx-actorgesturelistener-pan-parameters-not-moving-actor-in-smooth-line)


  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using the render-to-texture concept with glFramebufferTexture1D. I am drawing a cube on a non-default FBO with all the vertices at -1,1 (the maximum) in the X, Y and Z directions. Now I am setting the viewport to X while rendering on the non-default FBO. My background is blue and the cube is white. For the default FBO, I have created a 1D texture and attached this texture to the above FBO as the color attachment. I am setting the width of the texture equal to the width*height of the FBO viewport. Now, when I render this texture onto another cube, I can see continuous white color at the start or end of each face of the cube. That means part of the face is white and the rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels to be white, as I am using -1 and 1 coordinates for the cube rendered on the non-default FBO. Code:

        #define WIDTH 3
        #define HEIGHT 3

        GLfloat vertices8[] = {
            1.0f,1.0f,1.0f,    -1.0f,1.0f,1.0f,   -1.0f,-1.0f,1.0f,  1.0f,-1.0f,1.0f,   // face 1
            1.0f,-1.0f,-1.0f,  -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f,  1.0f,1.0f,-1.0f,   // face 2
            1.0f,1.0f,1.0f,    1.0f,-1.0f,1.0f,   1.0f,-1.0f,-1.0f,  1.0f,1.0f,-1.0f,   // face 3
            -1.0f,1.0f,1.0f,   -1.0f,1.0f,-1.0f,  -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,  // face 4
            1.0f,1.0f,1.0f,    1.0f,1.0f,-1.0f,   -1.0f,1.0f,-1.0f,  -1.0f,1.0f,1.0f,   // face 5
            -1.0f,-1.0f,1.0f,  -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f,  1.0f,-1.0f,1.0f    // face 6
        };

        GLfloat vertices[] = {
            0.5f,0.5f,0.5f,    -0.5f,0.5f,0.5f,   -0.5f,-0.5f,0.5f,  0.5f,-0.5f,0.5f,   // face 1
            0.5f,-0.5f,-0.5f,  -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f,  0.5f,0.5f,-0.5f,   // face 2
            0.5f,0.5f,0.5f,    0.5f,-0.5f,0.5f,   0.5f,-0.5f,-0.5f,  0.5f,0.5f,-0.5f,   // face 3
            -0.5f,0.5f,0.5f,   -0.5f,0.5f,-0.5f,  -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,  // face 4
            0.5f,0.5f,0.5f,    0.5f,0.5f,-0.5f,   -0.5f,0.5f,-0.5f,  -0.5f,0.5f,0.5f,   // face 5
            -0.5f,-0.5f,0.5f,  -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f,  0.5f,-0.5f,0.5f    // face 6
        };

        GLuint indices[] = {
            0, 2, 1,    0, 3, 2,
            4, 5, 6,    4, 6, 7,
            8, 9, 10,   8, 10, 11,
            12, 15, 14, 12, 14, 13,
            16, 17, 18, 16, 18, 19,
            20, 23, 22, 20, 22, 21
        };

        GLfloat texcoord[] = {
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0
        };

        glGenTextures(1, &id1);
        glBindTexture(GL_TEXTURE_1D, id1);
        glGenFramebuffers(1, &Fboid);

        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        glBindFramebuffer(GL_FRAMEBUFFER, Fboid);
        glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_1D, id1, 0);
        draw_cube();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        draw();

        void draw_cube()
        {
            glViewport(0, 0, WIDTH, HEIGHT);
            glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glEnableVertexAttribArray(glGetAttribLocation(temp.psId, "position"));
            glVertexAttribPointer(glGetAttribLocation(temp.psId, "position"), 3, GL_FLOAT, GL_FALSE, 0, vertices8);
            glDrawArrays(GL_TRIANGLE_FAN, 0, 24);
        }

        void draw()
        {
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "tk_position"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "tk_position"), 3, GL_FLOAT, GL_FALSE, 0, vertices);
            nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, vertices);"));
            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "inputtexcoord"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0, texcoord);
            glBindTexture(*target11, id1);
            glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, indices);
        }

    When I change WIDTH and HEIGHT to 2 and call glReadPixels with height and width equal to 4 in draw_cube(), I can see the first 2 pixels in white, the next two in blue (the glClearColor), the next two in white, then blue, and so on. Now, when I change the width parameter in glTexImage1D to 16, ideally I should see alternate patches of white and blue, right? But that is not the case here. Why?


  • MiniMax function throws null pointer exception

    - by Sven
    I'm working on a school project: I have to build a tic-tac-toe game with an AI based on the MiniMax algorithm. The two-player mode works like it should. I followed the code example on http://ethangunderson.com/blog/minimax-algorithm-in-c/. The only thing is that I get a NullPointerException when I run the code, and I can't put my finger on it. I placed a comment in the code where the exception is thrown. The recursive call is returning a null pointer, which is very strange, because it shouldn't be able to. When I place a breakpoint on the null return with the help of an if statement, I see that there ARE still 2 to 3 empty places, so I'm probably overlooking something. Hope someone can tell me what I'm doing wrong. Here is the MiniMax code (the tic-tac-toe code is not important):

        package MiniMax;

        import Game.Block;
        import Game.Board;
        import java.util.ArrayList;

        public class MiniMax {

            public static Place getBestMove(Board gameBoard, Block.TYPE player) {
                Place bestPlace = null;
                ArrayList<Place> emptyPlaces = gameBoard.getEmptyPlaces();
                Board newBoard;

                // loop through all the empty places
                for (Place emptyPlace : emptyPlaces) {
                    newBoard = gameBoard.clone();
                    newBoard.setBlock(emptyPlace.getRow(), emptyPlace.getCell(), player);

                    // no game won and still room to move
                    if (newBoard.getWinner() == Block.TYPE.NONE && newBoard.getEmptyPlaces().size() > 0) {
                        // is a node (has children)
                        Place tempPlace = getBestMove(newBoard, invertPlayer(player)); // ERROR is thrown here! tempPlace is null.
                        emptyPlace.setScore(tempPlace.getScore());
                    } else {
                        // is a leaf
                        if (newBoard.getWinner() == Block.TYPE.NONE) {
                            emptyPlace.setScore(0);
                        } else if (newBoard.getWinner() == Block.TYPE.X) {
                            emptyPlace.setScore(-1);
                        } else if (newBoard.getWinner() == Block.TYPE.O) {
                            emptyPlace.setScore(1);
                        }

                        // if this move is better than our previous move, take it!
                        if ((bestPlace == null)
                                || (player == Block.TYPE.X && emptyPlace.getScore() < bestPlace.getScore())
                                || (player == Block.TYPE.O && emptyPlace.getScore() > bestPlace.getScore())) {
                            bestPlace = emptyPlace;
                        }
                    }
                }

                // This should never be null, but it is..
                return bestPlace;
            }

            private static Block.TYPE invertPlayer(Block.TYPE player) {
                if (player == Block.TYPE.X) {
                    return Block.TYPE.O;
                }
                return Block.TYPE.X;
            }
        }
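
    A hedged reading of the bug: the "is this move better" comparison sits inside the leaf else-branch, so on iterations that recurse (the node case) bestPlace is never updated. For a position where every empty place leads to another node, the loop finishes with bestPlace still null - which is exactly the null the recursive call then returns. The comparison needs to run once per empty place, after the score has been set either way. A compact C++ rendering of the corrected control flow (Place and the scoring calls are stand-ins for the question's Java classes):

        #include <vector>

        struct Place { int row, cell; int score; };

        // Skeleton of the corrected loop: score the move first (recursing for
        // nodes, evaluating the terminal for leaves), THEN compare against the
        // best so far. In the posted Java, the comparison lives inside the leaf
        // branch, so node-only positions fall through and return null.
        Place GetBestMove(std::vector<Place> emptyPlaces /*, Board, player */) {
            const Place* best = nullptr;
            for (Place& p : emptyPlaces) {
                bool isLeaf = true;  // stand-in: game over or board full after playing p
                if (!isLeaf)
                    p.score = 0;     // stand-in: GetBestMove(childBoard, opponent).score
                else
                    p.score = 0;     // stand-in: terminal evaluation (-1, 0 or +1)

                if (best == nullptr || p.score > best->score)  // outside the else!
                    best = &p;
            }
            return best ? *best : Place{};  // non-null whenever a move exists
        }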


  • Problem with an array of shooter sprites containing different colour bubbles

    - by prakash s
    Hi everyone, I am developing a bubble shooter game in cocos2d. I have placed a shooter array which contains different colour bubbles, like this: 00000000 - it is an 8-bubble array. If I tap the screen, the first bubble should move to shoot the target .png, and if I tap the screen again, the bubble in the second position should move to shoot the target .png bubbles. How can I do this, given that I have already created the target array containing different colour bubbles? Here is the code I have written:

        - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            // Choose one of the touches to work with
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:[touch view]];
            location = [[CCDirector sharedDirector] convertToGL:location];

            // Set up initial location of projectile
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            NSMutableArray * movableSprites = [[NSMutableArray alloc] init];
            NSArray *images = [NSArray arrayWithObjects:@"1.png", @"2.png", @"3.png", @"4.png", @"5.png", @"6.png", @"7.png", @"8.png", nil];

            for (int i = 0; i < images.count; ++i)
            {
                int index = (arc4random() % 8) + 1;
                NSString *image = [NSString stringWithFormat:@"%d.png", index];
                CCSprite *projectile = [CCSprite spriteWithFile:image];
                // CCSprite *projectile = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0, 256, 256)];
                [self addChild:projectile];
                [movableSprites addObject:projectile];

                float offsetFraction = ((float)(i + 1)) / (images.count + 1);
                // projectile.position = ccp(20, winSize.height/2);
                // projectile.position = ccp(18, 0);
                // projectile.position = ccp(350 * offsetFraction, 20.0f);
                projectile.position = ccp(10 / offsetFraction, 20.0f);
                // projectile.position = ccp(projectile.position.x, projectile.position.y);

                // Determine offset of location to projectile
                int offX = location.x - projectile.position.x;
                int offY = location.y - projectile.position.y;

                // Bail out if we are shooting down or backwards
                if (offX <= 0) return;

                // Ok to add now - we've double checked position
                // [self addChild:projectile];

                // Determine where we wish to shoot the projectile to
                int realX = winSize.width + (projectile.contentSize.width / 2);
                float ratio = (float)offY / (float)offX;
                int realY = (realX * ratio) + projectile.position.y;
                CGPoint realDest = ccp(realX, realY);

                // Determine the length of how far we're shooting
                int offRealX = realX - projectile.position.x;
                int offRealY = realY - projectile.position.y;
                float length = sqrtf((offRealX * offRealX) + (offRealY * offRealY));
                float velocity = 480 / 1; // 480 pixels / 1 sec
                float realMoveDuration = length / velocity;

                // Move projectile to actual endpoint
                [projectile runAction:[CCSequence actions:
                    [CCMoveTo actionWithDuration:realMoveDuration position:realDest],
                    [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)],
                    nil]];

                // Add to projectiles array
                projectile.tag = 1;
                [_projectiles addObject:projectile];
            }
        }


  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES-based game for iOS, written in Xamarin.iOS with OpenTK. I am detecting the rotation by overriding WillRotate in my UIViewController, and I correctly re-setup all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier on the landscape version compared to the portrait version, as you can see in the following closeups magnified 10x: Portrait (before rotating), Landscape (after rotating). In both cases, I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixelwise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated to the new size. When working on desktop apps on Direct3D11 (SharpDX), I would have to call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and then everything disappears when rotating the interface again. I'm not doing anything strange; my framebuffer initialization is pretty vanilla:

        int scaling = (int)UIScreen.MainScreen.Scale;
        DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling;
        DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling;
        Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight));
        Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        ContextRenderingApi = EAGLRenderingAPI.OpenGLES2;
        AutoResize = true;
        LayerRetainsBacking = true;
        LayerColorFormat = EAGLColorFormat.RGBA8;

    I get inconsistent results when changing Size, Bounds and Frame in my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing stuff here and there without really knowing what is going on. There is a similar question which has no answers; however, I don't know if they're experiencing the same problem as I am. Is my supposition that recreating the framebuffer is necessary correct? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?


  • Jerky wall jump in Unity Rigidbody2D

    - by Lilz Votca Love
    Hey there. I know people have encountered issues with this before, and I looked at the different solutions provided, but I couldn't get a fix at all. I'm making a 2D game and am stuck with the wall jump. I detect the wall wonderfully, and I also detect the jump and it works: the player jumps off the wall when facing right, with the use of rigidbody.AddForce(new Vector2(-1, 2) * jumpforce). Now, when jumping on the opposite wall using the same vector with the sign on the x axis changed to 1, the player jumps too, but it goes more along the y axis than it should. Here is an image to show you the curves the player follows; check the following URL to see the behaviour: https://scontent-b-mad.xx.fbcdn.net/hphotos-xpf1/t1.0-9/1470386_10152415957141154_156025539179003805_n.jpg - the green curve happens when the player faces right, and the other one happens when he is not. Here is the section of code:

        if (wall) {
            if (wallJump) {
                if (facingRight) {
                    rigidbody2D.velocity = Vector2.zero;
                    flip ();
                    rigidbody2D.AddForce (new Vector2 (-1f, 2f) * Jumpforce / 1.5f);
                    Debug.Log ("saut mural gauche" + new Vector2 (-1f, 2f) * Jumpforce / 1.5f);
                } else {
                    rigidbody2D.velocity = Vector2.zero;
                    flip ();
                    rigidbody2D.AddForce (new Vector2 (1f, 2f) * Jumpforce / 1.5f);
                    Debug.Log ("saut mural droit--" + new Vector2 (Mathf.Sign (1f), 2f) * Jumpforce / 1.5f + "jump" + jump);
                }
            }
        } else {
            wallJump = false;
        }

    The code is not optimized yet, but I assure you it works, so any help would be awesome! Thanks.


  • Advantages and disadvantages of libgdx [on hold]

    - by Paul
    I've been an Android developer for a while and am thinking about getting into gaming. While looking for a game dev framework, I found that libgdx provides very friendly documentation and functionality, so I would like to use it if there is no big obstacle. But when I tried to see how many developers employ this library, I couldn't find that many. Is there anything wrong with this library? In other words, I would like to know its advantages and disadvantages from any experienced developer. UPDATE: After reviewing its documentation and trying to build simple games with libgdx, I decided to go with it, as its documentation is good enough and its community is very active. What I liked most is that it provides a bunch of demo games that I can learn a lot from.


  • How do I choose the scaling factor of a 3D game world?

    - by concept3d
    I am making a 3D tank game prototype with some physics simulation, using C++. One of the decisions I need to make is the scale of the game world in relation to reality. For example, I could consider 1 in-game unit of measurement to correspond to 1 meter in reality. This feels intuitive, but I feel like I might be missing something. I can think of the following as potential problems:

    - 3D modelling program compatibility. (?)
    - Numerical accuracy. (Does this matter?) Especially at large scales: games like Battlefield have huge maps - how do they avoid losing numerical accuracy if they use a 1:1 mapping to real-world scale, since floating-point representations lose more precision with larger numbers (e.g. with ray casting, physics simulation)?
    - Gameplay. I don't want the movement of units to feel slow or fast while using almost real-world values like -9.8 m/s^2 for gravity. (This might be subjective.)
    - Is it OK to scale imported assets up/down, or is it best to fit them into a world at their original scale?
    - Rendering performance. Are large meshes with the same vertex count slower to render?

    I'm wondering if I should split this into multiple questions...
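
    On the numerical-accuracy bullet: a 32-bit float carries about 7 significant decimal digits, so at 1 unit = 1 m the representable step near the origin is tiny, but around 100,000 units (100 km) it is roughly 8 mm - which is why very large worlds re-centre the origin around the camera or use per-region local coordinates. A two-line demonstration:

        #include <cstdio>

        int main() {
            float nearOrigin = 1.0f + 0.001f;       // 1 mm offset near the origin
            float farAway    = 100000.0f + 0.001f;  // same offset 100 km out
            std::printf("%.6f\n", nearOrigin - 1.0f);      // ~0.001000: preserved
            std::printf("%.6f\n", farAway - 100000.0f);    // 0.000000: rounded away
            return 0;
        }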


  • How to make an object stay relative to another object

    - by Nick
    In the following example there are a guy and a boat. They both have a position, orientation and velocity. The guy is standing on the shore and would like to board. He changes his position so he is now standing on the boat. The boat changes velocity and orientation and heads off. My character, however, has a velocity of 0,0,0, but I would like him to stay onboard. When I move my character around, I would like him to move as if the boat was the ground he was standing on. How do I keep my character aligned properly with the boat? It is exactly like in World of Warcraft, when you board a boat or zeppelin. This is my physics code for the guy and boat:

        this.velocity.addSelf(acceleration.multiplyScalar(dTime));
        this.position.addSelf(this.velocity.clone().multiplyScalar(dTime));

    The guy already has a reference to the boat he's standing on, and thus knows the boat's position, velocity and orientation (even matrices or quaternions can be used).
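
    The standard solution is to stop integrating the character in world space while he is aboard: store his position in the boat's local frame, and each frame compute his world transform as the boat's transform applied to that local offset, so he inherits the boat's velocity and rotation for free. A hedged glm-style C++ sketch (the same math applies to the question's own vector/matrix classes):

        #include <glm/glm.hpp>

        struct Boat { glm::mat4 worldTransform; };

        struct Guy {
            const Boat* boat = nullptr;
            glm::vec3 localPos;  // position in the boat's frame while aboard

            void Board(const Boat& b, const glm::vec3& worldPos) {
                boat = &b;
                // Convert the current world position into the boat's frame once.
                localPos = glm::vec3(glm::inverse(b.worldTransform) * glm::vec4(worldPos, 1.0f));
            }

            // Walking on deck only changes the local offset; the boat's own
            // motion and turning come along automatically.
            void Move(const glm::vec3& localDelta) { localPos += localDelta; }

            glm::vec3 WorldPosition() const {
                return glm::vec3(boat->worldTransform * glm::vec4(localPos, 1.0f));
            }
        };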

