Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • adapting a Unity gravitational script to allow moons

    - by PartyMix
    I'm using this script: http://wiki.unity3d.com/index.php/Simple_planetary_orbits to get a solar system going in Unity, but it doesn't seem to support creating bodies that orbit other moving bodies (or I am using it incorrectly). Any idea how to modify it so that it does (or how to use it correctly)? I've been beating my head against this problem for a couple of hours, and I really don't feel like I have any idea what I'm doing. Thanks in advance.
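
    A minimal sketch of one way to support moons, assuming Unity's standard scripting API (the component name and fields below are illustrative, not part of the linked script): compute the orbit relative to the parent body's current position each frame, so a moon can circle a planet that is itself moving around the sun.

        using UnityEngine;

        // Hypothetical component: circular orbit around a *moving* parent body.
        public class OrbitAround : MonoBehaviour
        {
            public Transform center;          // body to orbit (sun for a planet, planet for a moon)
            public float radius = 5f;         // orbital radius
            public float degreesPerSecond = 30f;

            private float angle;

            void Update()
            {
                angle += degreesPerSecond * Time.deltaTime;
                float rad = angle * Mathf.Deg2Rad;
                // Offset from the parent's *current* position, so the orbit follows it.
                transform.position = center.position +
                    new Vector3(Mathf.Cos(rad), 0f, Mathf.Sin(rad)) * radius;
            }
        }

    Chaining these (sun <- planet <- moon) gives hierarchical orbits without modifying the physics of the linked script.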


  • Importing Models from Maya to OpenGL

    - by Mert Toka
    I am looking for ways to import models into my game project. I am using Maya as my modelling software and GLUT for my game's windowing. I found a great parser that imports all the textures and normal vectors, but it is written for .obj files exported from 3ds Max. I tried to use it with Maya's .obj files, and it turns out that Maya's .obj files differ slightly from 3ds Max's, so it cannot parse them. If you know any way to convert Maya .obj files to 3ds Max .obj files, that would be acceptable, as would a new parser for Maya .obj files.
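
    In case it helps, the usual sticking point is the face records: Maya may write "f v", "f v/vt", "f v//vn" or "f v/vt/vn", and some parsers only accept one of these forms. A tolerant tokenizer is a small fix; a hedged sketch in C# (a hypothetical helper, not part of the parser mentioned above):

        // Accepts "v", "v/vt", "v//vn" and "v/vt/vn"; missing indices come back as 0.
        static void ParseFaceVertex(string token, out int v, out int vt, out int vn)
        {
            string[] parts = token.Split('/');
            v  = int.Parse(parts[0]);
            vt = parts.Length > 1 && parts[1].Length > 0 ? int.Parse(parts[1]) : 0;
            vn = parts.Length > 2 && parts[2].Length > 0 ? int.Parse(parts[2]) : 0;
        }

    Note that OBJ indices are 1-based and may be negative (relative to the end of the list so far), so convert them before indexing your arrays.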


  • How do multi-platform games usually store save data?

    - by PixelPerfect3
    I realize this is a bit of a broad question, but I was wondering if there is a "standard" in the industry when it comes to storing save data for games, and whether it differs across platforms (Xbox/PS/PC/Mac/Android/iOS). For example, games like Assassin's Creed or The Walking Dead ship on multiple platforms and usually have to save a fair amount of information about the player and their actions. Do they use something like XML files, databases, or just straight binary dumps? How much does it differ from platform to platform? I would appreciate an answer from someone with experience in the game industry.
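
    Not an industry answer, but as a sketch of the common "versioned binary blob" idea the question hints at (the format and fields below are purely illustrative):

        using System.IO;

        struct SaveData
        {
            public int Version;       // lets newer builds migrate older saves
            public string PlayerName;
            public int Checkpoint;
        }

        static class SaveIO
        {
            public static void Write(string path, SaveData s)
            {
                using (var w = new BinaryWriter(File.Create(path)))
                {
                    w.Write(s.Version);
                    w.Write(s.PlayerName);
                    w.Write(s.Checkpoint);
                }
            }

            public static SaveData Read(string path)
            {
                using (var r = new BinaryReader(File.OpenRead(path)))
                {
                    return new SaveData
                    {
                        Version = r.ReadInt32(),
                        PlayerName = r.ReadString(),
                        Checkpoint = r.ReadInt32()
                    };
                }
            }
        }

    The per-platform part is then usually only *where* the blob lives (a file on PC, a platform save API on consoles), not its layout.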


  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this:

        sampler input : register(s0);

        float4 PixelShaderFunction(float2 coords : TEXCOORD0) : COLOR0
        {
            float4 colour = tex2D(input, coords);
            if (colour.r == sourceColours[0].r &&
                colour.g == sourceColours[0].g &&
                colour.b == sourceColours[0].b)
                return targetColours[0];
            return colour;
        }

    What I would like to do is have the function take in two textures: a default table and a lookup table (both the same dimensions). Grab the current pixel, find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.
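
    One hedged way around the search: don't look the colour up in the shader at all. Precompute, on the CPU, an index map that stores each sprite pixel's XY in the default table (encoded in two channels); the shader then samples the index map and uses the result directly as coords into the lookup table. A sketch assuming XNA-style texture access (the helper name is illustrative):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Builds a texture whose R/G channels hold each sprite pixel's X/Y
        // position in the default palette table (assumes the table is at most
        // 256 texels per side, so an index fits in one byte channel).
        static Texture2D BuildIndexMap(GraphicsDevice device, Texture2D sprite, Texture2D defaultTable)
        {
            var spritePixels = new Color[sprite.Width * sprite.Height];
            var tablePixels = new Color[defaultTable.Width * defaultTable.Height];
            sprite.GetData(spritePixels);
            defaultTable.GetData(tablePixels);

            var indexPixels = new Color[spritePixels.Length];
            for (int i = 0; i < spritePixels.Length; i++)
            {
                int j = System.Array.IndexOf(tablePixels, spritePixels[i]);
                indexPixels[i] = j < 0
                    ? Color.Black // colour not present in the palette
                    : new Color(j % defaultTable.Width, j / defaultTable.Width, 0);
            }

            var indexMap = new Texture2D(device, sprite.Width, sprite.Height);
            indexMap.SetData(indexPixels);
            return indexMap;
        }

    This also sidesteps exact float comparisons in the shader, which are fragile once filtering or compression touches the texture.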


  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES-based game for iOS, written in Xamarin.iOS with OpenTK. I detect the rotation by overriding WillRotate in my UIViewController, and I correctly re-set up all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier in the landscape version than in the portrait version, as you can see in the closeups magnified 10x (portrait, before rotating, and landscape, after rotating; images not reproduced here). In both cases I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixel-wise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated at the new size. When working on desktop apps on Direct3D 11 (SharpDX), I would call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and everything disappears when rotating the interface again. I'm not doing anything strange; my framebuffer initialization is pretty vanilla:

        int scaling = (int)UIScreen.MainScreen.Scale;
        DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling;
        DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling;
        Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight));
        Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        ContextRenderingApi = EAGLRenderingAPI.OpenGLES2;
        AutoResize = true;
        LayerRetainsBacking = true;
        LayerColorFormat = EAGLColorFormat.RGBA8;

    I get inconsistent results when changing Size, Bounds and Frame in my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing things here and there without really knowing what is going on. There is a similar question which has no answers, but I don't know whether they're experiencing the same problem I am. Is my supposition that the framebuffer needs to be recreated correct? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?


  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size. Some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far:

        private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
        {
            var currentScale = objectToScale.transform.localScale;
            var currentSize = objectToScale.GetMeshHierarchyBounds().size;
            var targetSize = (sphere.radius * 2);
            var newScale = new Vector3
            {
                x = targetSize * currentScale.x / currentSize.x,
                y = targetSize * currentScale.y / currentSize.y,
                z = targetSize * currentScale.z / currentSize.z
            };

            Debug.Log("{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}",
                objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x);
            //DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1), newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114
            //RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2), newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017

            objectToScale.transform.localScale = newScale;
        }

    And the supporting extension method:

        public static Bounds GetMeshHierarchyBounds(this GameObject go)
        {
            var bounds = new Bounds(); // Not used, but a struct needs to be instantiated
            if (go.renderer != null)
            {
                bounds = go.renderer.bounds; // Make sure the parent is included
                Debug.Log("Found parent bounds: " + bounds);
                //bounds.Encapsulate(go.renderer.bounds);
            }
            foreach (var c in go.GetComponentsInChildren<MeshRenderer>())
            {
                Debug.Log("Found {0} bounds are {1}", c.name, c.bounds);
                if (bounds.size == Vector3.zero)
                    bounds = c.bounds;
                else
                    bounds.Encapsulate(c.bounds);
            }
            return bounds;
        }

    After the re-scale there doesn't seem to be any consistency to the results: some objects with completely uniform scales (x, y, z) seem to resize correctly, but others don't. It's one of those things I've been trying to fix for so long that I've lost all grasp on the logic. Any help would be appreciated!
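
    One hedged observation: GetMeshHierarchyBounds returns world-space, axis-aligned bounds, which already bake in the object's rotation and current scale, so per-axis factors computed from them can feed back on themselves for rotated meshes. A uniform-factor variant (a sketch, reusing the helpers above) avoids that by only changing the object's overall size:

        private void ScaleToColliderUniform(GameObject objectToScale, SphereCollider sphere)
        {
            var currentScale = objectToScale.transform.localScale;
            var currentSize = objectToScale.GetMeshHierarchyBounds().size;
            float targetSize = sphere.radius * 2f;
            // Fit the largest world-space extent to the sphere's diameter.
            float largest = Mathf.Max(currentSize.x, Mathf.Max(currentSize.y, currentSize.z));
            objectToScale.transform.localScale = currentScale * (targetSize / largest);
        }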


  • Rendering performance in FlasCC + UDK when compared to Stage3d and UDK on Windows?

    - by Arthur Wulf White
    Adobe recently released the Flash C++ Compiler (FlasCC), which UDK uses to target Flash Player, so developers can now use UDK for browser applications. Does this mean greater performance than using a Stage3D engine (Away3D 4), and how noticeable a difference in rendering speed would it make? Is there any benchmark you could propose that would allow comparing them fairly? I am asking this to help myself understand the performance consequences of deciding to use UDK in a browser-based game. I would also like to know how it compares with UDK running natively on Windows. I am not asking which technology to use or which is better - I am only interested in optimizing rendering speed in a 3D browser game with Flash.


  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but am getting hung up on the final implementation details. It now reliably rotates the x and y axes to the correct side, but the z-axis never rotates (see the before and after photos of the rotation, not reproduced here). When I'm using the code below, I always get '0' for my rotationVector.z. What am I missing here?

        // Define lookAt vector
        lookAtVector = GLKVector3Make(0, 0, 1);

        // Define axes vectors
        axes[0] = GLKVector3Make(0, 0, 1);
        axes[1] = GLKVector3Make(-1, 0, 0);
        axes[2] = GLKVector3Make(0, 1, 0);
        axes[3] = GLKVector3Make(1, 0, 0);
        axes[4] = GLKVector3Make(0, -1, 0);
        axes[5] = GLKVector3Make(0, 0, -1);

        CGFloat highest_dot = -1.0;
        GLKVector3 closest_axis;
        for (int i = 0; i < 6; i++)
        {
            // multiply cube's axes by existing matrix
            GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]);
            CGFloat dot = GLKVector3DotProduct(axis, lookAtVector);
            if (dot > highest_dot)
            {
                closest_axis = axis;
                highest_dot = dot;
            }
        }

        GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector);

        // Get angle between vectors
        CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector));

        // normalize the rotation vector
        rotationVector = GLKVector3Normalize(rotationVector);

        // Create transform
        CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z);

        // add rotation transform to existing transformation
        baseTransform = CATransform3DConcat(baseTransform, rotationTransform);
        return baseTransform;

    Implementation based on this post.
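
    For reference, the axis-angle construction the code above implements is, with $\mathbf{a}$ the closest axis and $\mathbf{l}$ the look-at vector,

        \[
        \mathbf{r} = \mathbf{a}\times\mathbf{l}, \qquad
        \theta = \operatorname{atan2}\bigl(\lVert\mathbf{a}\times\mathbf{l}\rVert,\ \mathbf{a}\cdot\mathbf{l}\bigr),
        \]

    and with $\mathbf{l} = (0,0,1)$ the z component of the cross product, $r_z = a_x l_y - a_y l_x$, is identically zero. Aligning an axis to the look-at direction never produces a twist about that direction, so any roll about z has to be applied as a separate, second rotation about $\mathbf{l}$.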


  • Retrieving model position after applying model transforms in XNA

    - by Glen Dekker
    For this method, which the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this?

        public void DrawModel(Camera camera)
        {
            Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1));
            Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix * translationMatrix *
                Matrix.CreateRotationY(MathHelper.Pi / 6) * translationMatrix2;

            Matrix[] modelTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(modelTransforms);

            if (camera.getDistanceFromPlayer(position + position1) > 3000)
                return;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix;
                    effect.View = camera.viewMatrix;
                    effect.Projection = camera.projectionMatrix;
                }
                mesh.Draw();
            }
        }
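
    A hedged sketch of one way to get it (standard XNA matrix math, using the same temp and worldMatrix as in the method above): push the model-space origin through the draw transform, or equivalently read the matrix's translation component.

        // Inside DrawModel, after computing 'temp':
        Matrix world = modelTransforms[model.Meshes[0].ParentBone.Index] * temp * worldMatrix;
        Vector3 worldPosition = Vector3.Transform(Vector3.Zero, world);
        // Equivalent shortcut - the translation component of the combined matrix:
        Vector3 samePosition = world.Translation;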


  • What is involved for a simple UDP game?

    - by acidzombie24
    I once tried to write a simple game with UDP in a week as a throwaway test. It went horribly and I threw it away early. The main problem I had was restoring the game state of all players/enemies/objects to an old state and fast-forwarding the game to the point in time the player is playing (i.e., half a second before a jump - a little early or late can make the player miss the jump). Maybe this method is not the easiest way? I suspect it is, but I designed it wrong from the beginning and only realized it at the end of the second day (so I didn't lose too much time). For myself and others: what is involved in a simple UDP game, and how do I write one? Or how do I solve the prediction problem of restoring to a state properly? I'll mark this as CW because I know there will be lots of helpful answers.
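
    For the rewind part specifically, the usual shape is a snapshot ring buffer: record the world state every tick, and when an input stamped at time T arrives late, restore the snapshot at T and re-simulate forward with the input applied. A minimal sketch (generic C#; names are illustrative):

        using System.Collections.Generic;

        class SnapshotHistory<TState>
        {
            private readonly LinkedList<KeyValuePair<double, TState>> history =
                new LinkedList<KeyValuePair<double, TState>>();
            private const double MaxAge = 1.0; // seconds of history to keep

            public void Record(double time, TState state)
            {
                history.AddLast(new KeyValuePair<double, TState>(time, state));
                while (history.First.Value.Key < time - MaxAge)
                    history.RemoveFirst();
            }

            // Latest snapshot at or before 'time'; false if it's older than the buffer.
            public bool TryGetAt(double time, out TState state)
            {
                for (var n = history.Last; n != null; n = n.Previous)
                {
                    if (n.Value.Key <= time)
                    {
                        state = n.Value.Value;
                        return true;
                    }
                }
                state = default(TState);
                return false;
            }
        }

    The jump problem in the question then becomes "restore at T, apply the jump input, replay the stored inputs up to now" rather than trying to patch the current state in place.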


  • How to run the pixel shader effect?

    - by Yashwinder
    Below is the code for my pixel shader, which I render after the vertex shader. I have set the worldViewProjection matrix in my program, but I don't know how to set the progress variable in my pixel shader file, which will make the image (displayed with the help of a quad) give a transition effect. As my pixel shader currently gives a static effect, I now want to use it to animate a transition. For this I have to add a progress variable in my pixel shader and initialize it through the constant table, i.e. constantTable.SetValue(D3DDevice, "progress", progress); I am having a problem using this function for progress in my program. Does anybody know how to set this variable? My new pixel shader code is:

        float progress : register(C0);
        sampler2D implicitInput : register(s0);
        sampler2D oldInput : register(s1);

        struct VS_OUTPUT
        {
            float4 Position : POSITION;
            float4 Color : COLOR0;
            float2 UV : TEXCOORD0;
        };

        float4 Blinds(float2 uv)
        {
            if (frac(uv.y * 5) < progress)
            {
                return tex2D(implicitInput, uv);
            }
            else
            {
                return tex2D(oldInput, uv);
            }
        }

        // Pixel Shader
        float4 PixelShaderFunction(VS_OUTPUT input) : COLOR0
        {
            return Blinds(input.UV);
        }
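
    On the application side, a hedged sketch of driving that register (mirroring only the constant-table call quoted above; the device and timing variables are illustrative): update the value every frame before drawing the quad.

        // Advance the transition over one second and push it to register C0.
        float progress = 0f;

        void UpdateTransition(float elapsedSeconds)
        {
            progress += elapsedSeconds;        // one-second transition
            if (progress > 1f) progress = 1f;  // clamp to [0, 1]
            constantTable.SetValue(D3DDevice, "progress", progress);
        }

    With the Blinds function above, progress = 0 shows only oldInput and progress = 1 only implicitInput, sweeping the blinds open in between.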


  • Game Center: Leaderboard score inconsistencies

    - by Hasyimi Bahrudin
    Background

    I'm currently developing a simple library that mirrors Game Center's functionality locally. Basically, this library is a system that manages achievements and leaderboards and optionally syncs them with Game Center. So if the game is not GC enabled, it will still have achievements and leaderboards (stored inside a plist). But of course, the leaderboards will then only contain the local player's scores (which is kind of useless, I know :P).

    Problem

    Currently I have coded both the achievements and leaderboards subsystems. The achievements subsystem has already been tested and it works. I'm currently testing the leaderboards subsystem using multiple test user accounts. I loaded the test app on a device and on the simulator, logged in with 2 different user accounts, and performed these steps:

    1. I first used the device to upload a score.
    2. Then I ran the simulator, and the score submitted by the user on the device was shown. Which is cool.
    3. Then I used the simulator to upload a score. But on the device, still only one score was listed.

    I checked in the Game Center app (to see if the bug lay within my code) and got the same thing: under "All players" there is only one score on the device, but there are 2 scores on the simulator. To make sure the simulator was not causing this, I swapped the users on the device and the simulator, and the result was still the same. In other words, the first user is oblivious of the second user's score, but the second user can see the first user's score. Then I tried with a third user. The result: the third user can only see the scores of the first user and himself. The second user still sees the scores of the first user and himself. The first user only sees his own score. Now here comes the weird part. I then made the first user and the second user befriend each other. The result: under "Friends" the first user can see the second user's score, but under "All Players" the first user's score is still the only one listed. (Screenshots of what the first and second users see were attached here.) So, is this normal when using sandboxed GC accounts? Is this behavior documented somewhere by Apple?


  • SpriteBatch.end() generating null pointer exception

    - by odaymichael
    I am getting a null pointer exception using libGDX that the debugger points at the SpriteBatch.end() line. I was wondering what could cause this. Here is the offending code block, specifically the batch.end() line:

        batch.begin();
        for (int j = 0; j < 3; j++)
            for (int i = 0; i < 3; i++)
                if (zoomgrid[i][j].getPiece().getImage() != null)
                    zoomgrid[i][j].getPiece().getImage().draw(batch);
        batch.end();

    The top of the stack is actually a line that calls lastTexture.bind(); in the flush() method of com.badlogic.gdx.graphics.g2d.SpriteBatch. I appreciate any input; let me know if I haven't included enough information.


  • Early Z culling - Ogre

    - by teodron
    This question is concerned with how one can enable this "pixel filter" to work within an Ogre-based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer:

        lighting off
        colour_write off
        shading flat

    The second pass is the one that employs heavy pixel shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees, suffer in a peculiar way - from one side they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side they exhibit the desired behavior. To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass the pixels of the objects closest to the camera would be recomputed correctly. This doesn't work at all: all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.


  • Detecting long held keys on keyboard

    - by Robinson Joaquin
    I just want to ask whether I can check for a keyboard key that is held/pressed for a long time, because I am creating a clone of Breakout crossed with air hockey for 2 human players. Here's my list of concerns:

    - Do I need a third-party library for key holds?
    - Is multi-threading needed? I don't know anything about multi-threading and I don't plan on using it (I'm just a newbie).
    - One more thing: what if the two players press their respective keys at the same time? How can I avoid errors - or worse, one player's key being prioritized over the other's?

    Example: Player 1 = W for UP & S for DOWN; Player 2 = O for UP & L for DOWN (e.g., W & L are pressed at the same time). PS: I use GLUT for the visuals of the game.
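
    A hedged sketch of the standard approach, independent of GLUT (GLUT does provide key-up callbacks via glutKeyboardUpFunc, so no third-party library or multi-threading is needed): record key-down/key-up events in a set, then poll the set once per frame. Keys held for a long time simply stay in the set, and simultaneous presses from both players are both present, so neither is prioritized.

        using System.Collections.Generic;

        class KeyState
        {
            private readonly HashSet<char> down = new HashSet<char>();

            public void OnKeyDown(char key) { down.Add(key); }    // wire to the key-down callback
            public void OnKeyUp(char key)   { down.Remove(key); } // wire to the key-up callback
            public bool IsDown(char key)    { return down.Contains(key); }
        }

        // Once per frame, for example:
        //   if (keys.IsDown('w')) paddle1.MoveUp(dt);
        //   if (keys.IsDown('s')) paddle1.MoveDown(dt);
        //   if (keys.IsDown('o')) paddle2.MoveUp(dt);
        //   if (keys.IsDown('l')) paddle2.MoveDown(dt);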


  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures works fine, but I am having problem with normal maps, so I thought I'd tried to apply the normal maps as the diffuse map in my fragment shader so I could see everything is OK. I comment-out my normal map code and just set the diffuse map to the normal map and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader, notice I commented out normal map code so I could debug the normal map as a diffuse texture "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if 
(light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Here is my wrapper around a texture OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureFormat textureFormat, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB : textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED); glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } OpenGLTexture::~OpenGLTexture() { glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger); } And here is the sampler I create which is shared between Diffuse and normal textures // texture sampler setup glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger); SetAnisotropicFiltering(mCurrentAnisotropy); The diffuse textures looks like they should, but the normal looks so wierd. Why is this?


  • Algorithm to find average position

    - by Simran kaur
    In the given diagram (not reproduced here), I have the extreme left and right points, that is, -2 and 4 in this case. So, obviously, I can calculate the width, which is 6. What we know:

    - The number of partitions: 3 in this case
    - The partition number at any point, i.e. which one is the 1st, second or third partition (numbered starting from the left)

    What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So basically, I just want a generalized formula to calculate the position of the average at any point.
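
    A hedged formalization of what seems to be asked: with left edge $x_{\min}$, right edge $x_{\max}$ and $n$ equal partitions, the centre (average position) of partition $k$, numbered from 1 on the left, is

        \[
        x_k = x_{\min} + \Bigl(k - \tfrac{1}{2}\Bigr)\,\frac{x_{\max} - x_{\min}}{n}.
        \]

    For the example above ($x_{\min} = -2$, $x_{\max} = 4$, $n = 3$) this gives $-1$, $1$ and $3$.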


  • Adobe Air turn-based multiplayer game: sockets vs HTTP bandwidth

    - by Arin Aivazian
    I am developing an Adobe AIR multiplayer game for iPad. It is turn-based, not realtime - it is like a checkers game. I want to use a client-server model. I have found 2 options for connecting to the server so far: a socket connection and HTTP requests. My question is: is the bandwidth requirement for a socket connection vs HTTP requests different? I need the game to work over very low-speed internet connections.


  • Game window systems and internal frames

    - by 2080
    I don't know if this is a valid question, but: what kind of window manager do games use that have internal frames (frames inside frames)? Does this differ between programming languages (e.g., in Java, are the AWT/Swing libraries used to manage these and other graphical elements, such as buttons, or are they too restrictive in speed and graphical possibilities)? A specific example would be EVE Online, where the client can use the in-game windows like on a normal desktop.


  • 2D Planet Gravity

    - by baked
    I'm trying to make a simple game where a spaceship is launched and its path is then affected by the gravity of planets, similar to this game: http://sciencenetlinks.com/interactives/gravity.html I wish to know how to replicate the effect the planets have on the spaceship in that game, so a spaceship can 'loop' around a planet to change direction. Using vectors, I have only managed to achieve bogus results, where the spaceship loops in a huge ellipse around the planet or is only slightly affected by a planet's gravity. Thanks in advance. P.S. I have plenty of coding experience, just none to do with game dev.
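
    A minimal sketch of the usual technique (constants illustrative, not tuned): every step, accelerate the ship towards each planet with an inverse-square force, then integrate. With enough tangential starting velocity this produces the slingshot loops rather than a plain ellipse.

        using System.Numerics;

        struct Planet { public Vector2 Position; public float Mass; }

        class Ship
        {
            public Vector2 Position, Velocity;
            const float G = 100f; // illustrative gravitational constant; tune to taste

            // Semi-implicit Euler: update velocity first, then position.
            public void Step(Planet[] planets, float dt)
            {
                foreach (var p in planets)
                {
                    Vector2 toPlanet = p.Position - Position;
                    float r2 = toPlanet.LengthSquared();
                    if (r2 < 1f) r2 = 1f; // avoid the singularity near r = 0
                    Velocity += Vector2.Normalize(toPlanet) * (G * p.Mass / r2) * dt; // a = G*m / r^2
                }
                Position += Velocity * dt;
            }
        }

    Huge ellipses usually mean G * Mass is too weak (or the timestep too coarse) relative to the ship's speed, so tuning those two is often all that's needed.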


  • FreeType2 Crash on FT_Init_FreeType

    - by JoeyDewd
    I'm currently trying to learn how to use the FreeType2 library for drawing fonts with OpenGL. However, when I start the program it immediately crashes with the following error: "(Can't correctly start the application (0xc000007b))". Commenting out the FT_Init_FreeType call removes the error and my game starts just fine. I'm wondering if it's my code or something to do with loading the DLL file. My code:

        #include "SpaceGame.h"
        #include <ft2build.h>
        #include FT_FREETYPE_H

        // Freetype test
        FT_Library library;

        Game::Game(int Width, int Height)
        {
            // Freetype
            FT_Error error = FT_Init_FreeType(&library);
            if (error)
            {
                cout << "Error occured during FT initialisation" << endl;
            }

    And my current use of the FreeType2 files: inside my bin folder (where the debug .exe is located) are freetype6.dll, libfreetype.dll.a and libfreetype-6.dll. In Code::Blocks, I've linked to the lib and include folders of the FreeType 2.3.5.1 version and added the linker flag -lfreetype. My program starts perfectly fine if I comment out the FT_Init call, which means the includes and library files should be fine. I can't find a solution to my problem and Google isn't helping me, so any help would be greatly appreciated.


  • Quaternion difference + time --> angular velocity (gyroscope in physics library)

    - by AndrewK
    I am using the Bullet Physics library to program some functionality where I have the difference between the orientation from a gyroscope (given as a quaternion) and the orientation of my object, plus the time between each frame in milliseconds. All I want is to set the orientation from my gyroscope as the orientation of my object in 3D space, but all I can do is set an angular velocity on my object. I have the orientation difference and the time, and from those I calculate the angular velocity vector [Wx, Wy, Wz] with this formula:

        W(t) = 2 * dq(t)/dt * conj(q(t))

    My code is:

        btQuaternion diffQuater = gyroQuater - boxQuater;
        btQuaternion conjBoxQuater = gyroQuater.inverse();
        btQuaternion velQuater = ((diffQuater * 2.0f) / d_time) * conjBoxQuater;

    And everything works well, until I get:

    1. Rotating around the Y axis, angle about 60 degrees; these are the values in 2 critical frames:

        x: -0.013220 y: -0.038050 z: -0.021979 w: -0.074250 - diffQuater
        x: 0.120094 y: 0.818967 z: 0.156797 w: -0.538782 - gyroQuater
        x: 0.133313 y: 0.857016 z: 0.178776 w: -0.464531 - boxQuater
        x: 0.207781 y: 0.290452 z: 0.245594 - diffQuater -> euler angles
        x: 3.153619 y: -66.947929 z: 175.936615 - gyroQuater -> euler angles
        x: 4.290697 y: -57.553043 z: 173.320053 - boxQuater -> euler angles
        x: 0.138128 y: 2.823307 z: 1.025552 w: 0.131360 - velQuater
        d_time: 0.058000

        x: 0.211020 y: 1.595124 z: 0.303650 w: -1.143846 - diffQuater
        x: 0.089518 y: 0.771939 z: 0.144527 w: -0.612543 - gyroQuater
        x: -0.121502 y: -0.823185 z: -0.159123 w: 0.531303 - boxQuater
        x: nan y: nan z: nan - diffQuater -> euler angles
        x: 2.985240 y: -76.304405 z: -170.555054 - gyroQuater -> euler angles
        x: 3.269681 y: -65.977966 z: 175.639420 - boxQuater -> euler angles
        x: -0.730262 y: -2.882153 z: -1.294721 w: 63.325996 - velQuater
        d_time: 0.063000

    2. Rotating around the X axis, angle about 120 degrees; these are the values in 2 critical frames:

        x: -0.013045 y: -0.004186 z: -0.005667 w: -0.022482 - diffQuater
        x: -0.848030 y: -0.187985 z: 0.114400 w: 0.482099 - gyroQuater
        x: -0.834985 y: -0.183799 z: 0.120067 w: 0.504580 - boxQuater
        x: 0.036336 y: 0.002312 z: 0.020859 - diffQuater -> euler angles
        x: -113.129463 y: 0.731925 z: 25.415056 - gyroQuater -> euler angles
        x: -110.232368 y: 0.860897 z: 25.350458 - boxQuater -> euler angles
        x: -0.865820 y: -0.456086 z: 0.034084 w: 0.013184 - velQuater
        d_time: 0.055000

        x: -1.721662 y: -0.387898 z: 0.229844 w: 0.910235 - diffQuater
        x: -0.874310 y: -0.200132 z: 0.115142 w: 0.426933 - gyroQuater
        x: 0.847352 y: 0.187766 z: -0.114703 w: -0.483302 - boxQuater
        x: -144.402298 y: 4.891629 z: 71.309158 - diffQuater -> euler angles
        x: -119.515343 y: 1.745076 z: 26.646086 - gyroQuater -> euler angles
        x: -112.974533 y: 0.738675 z: 25.411509 - boxQuater -> euler angles
        x: 2.086195 y: 0.676526 z: -0.424351 w: 70.104248 - velQuater
        d_time: 0.057000

    3. Rotating around the Z axis, angle about 120 degrees; these are the values in 2 critical frames:

        x: -0.000736 y: 0.002812 z: -0.004692 w: -0.008181 - diffQuater
        x: -0.003829 y: 0.012045 z: -0.868035 w: 0.496343 - gyroQuater
        x: -0.003093 y: 0.009232 z: -0.863343 w: 0.504524 - boxQuater
        x: -0.000822 y: -0.003032 z: 0.004162 - diffQuater -> euler angles
        x: -1.415189 y: 0.304210 z: -120.481873 - gyroQuater -> euler angles
        x: -1.091881 y: 0.227784 z: -119.399445 - boxQuater -> euler angles
        x: 0.159042 y: 0.169228 z: -0.754599 w: 0.003900 - velQuater
        d_time: 0.025000

        x: -0.007598 y: 0.024074 z: -1.749412 w: 0.968588 - diffQuater
        x: -0.003769 y: 0.012030 z: -0.881377 w: 0.472245 - gyroQuater
        x: 0.003829 y: -0.012045 z: 0.868035 w: -0.496343 - boxQuater
        x: -5.645197 y: 1.148993 z: -146.507187 - diffQuater -> euler angles
        x: -1.418294 y: 0.270319 z: -123.638245 - gyroQuater -> euler angles
        x: -1.415183 y: 0.304208 z: -120.481873 - boxQuater -> euler angles
        x: 0.017498 y: -0.013332 z: 2.040073 w: 148.120056 - velQuater
        d_time: 0.027000

    The problem is most visible in the diffQuater -> euler angles vector. Can someone tell me why it is like that, and how to solve the problem? All suggestions are welcome.
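
    For reference, the continuous-time identity being approximated is, for a unit quaternion $q(t)$,

        \[
        \omega(t) = 2\,\dot q(t)\,q^{-1}(t), \qquad
        \dot q(t) \approx \frac{q_{\mathrm{gyro}} - q_{\mathrm{box}}}{\Delta t}.
        \]

    Two caveats that fit the logs above: the identity conjugates the *current* orientation $q$ (boxQuater), whereas the code inverts gyroQuater; and $q$ and $-q$ represent the same orientation, so when the incoming quaternion flips sign between frames (exactly what boxQuater does in the critical frames), the raw difference $q_1 - q_0$ explodes. Negating one operand whenever the dot product of the two quaternions is negative keeps the finite difference small.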


  • libgdx game not disposing

    - by Yesh
    My game does not exit entirely, even after calling the dispose() method. It loads a black screen when I launch it for the second time, but works well if I kill the game manually and restart it. I get an error that says "buffer not allocated with newUnsafeByteBuffer or already disposed" when I try to dispose of the SpriteBatch object; this is where I suspect the problem to be, but I have not been able to fix it. Here is how I have built it (I have put the sample code here just to show that there are no visible loop-backs in the dispose functions; please correct me if I'm wrong). In the game screen:

        public void dispose() {
            AssetLoader.dispose();
            render.dispose();
            Gdx.app.exit();
        }

    Under the AssetLoader class:

        public void dispose() {
            Texture.dispose();
            sound.dispose();
        }

    Under the game render class:

        public void dispose() {
            spritebatch.dispose(); // throws an error when GameScreen.dispose is called
            font.dispose();
            shaperender.dispose();
        }

    I believe that my SpriteBatch isn't disposing, which is causing the black screen, but I cannot find a way to dispose of it successfully. Any help would be greatly appreciated.


  • Performance due to entity update

    - by Rizzo
    I always think about 2 ways to code the global Step() function, both with pros and cons. Please note that AIStep is just there to provide one more kind of step for whoever wants it.

        // Approach 1
        foreach( entity in entities )
        {
            entity.DeltaStep( delta_time );
            if( time_for_fixed_step )
                entity.FixedStep();
            if( time_for_AI_step )
                entity.AIStep();
            ... // all kinds of updates you want
        }

    PRO: you only have to iterate once over all entities. CON: fidelity could be lower in some scenarios, since entity.FixedStep() isn't run for all entities at once.

        // Approach 2
        foreach( entity in entities )
            entity.DeltaStep( delta_time );

        if( time_for_fixed_step )
            foreach( entity in entities )
                entity.FixedStep();

        if( time_for_AI_step )
            foreach( entity in entities )
                entity.AIStep();

        // all kinds of updates you want, SEPARATED

    PRO: fidelity of FixedStep is higher; there shouldn't be much time between all the entity updates, unlike Approach 1, where other updates may run before an entity's FixedStep() comes around. CON: you iterate once for each kind of update. A third approach could be a hybrid of the two, something along the lines of:

        foreach( entity in entities )
        {
            entity.DeltaStep( delta_time );
            if( time_for_AI_step )
                entity.AIStep();
            // all kinds of updates you want EXCEPT FixedStep()
        }

        if( time_for_fixed_step )
        {
            foreach( entity in entities )
            {
                entity.FixedStep();
            }
        }

    Just two loops, not caring about time fidelity for anything other than FixedStep(). Any thoughts on this matter? Does it really matter to run all steps at once, or am I thinking about problems that don't exist?


  • Determine percentage of screen covered by an object without using frustum culling

    - by Meltac
    On the CPU side of a 3D first-person/ego-perspective game, I need to check whether what the player currently sees on screen is the inside of a box object defined by world-space coordinates (the player might be outside the box but on screen see only/mostly its inside, or vice versa, look from within the box to the outside). The "casual" way of performing such a check would involve frustum culling, but that would be hard to achieve with my given set of engine parameters, so I'd like to avoid it if there is a simpler way. What I actually have at the point where I would like to do the check (in a high-level script on the CPU side, not the GPU side):

    - Camera world position
    - Camera direction
    - Camera FOV
    - Two box corner world coordinates (left-bottom-front, right-top-back)

    What I do not have right away:

    - View frustum definition (near/far plane, or the 6 planes defining the frustum)
    - Any per-pixel information (UV, view-space position, depth or the like)

    What I would like to calculate: the percentage of the screen "covered" by the box. Any hints on how to perform such a calculation?
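
    A hedged sketch of one simple approximation (not exact coverage): project the box's eight corners with a view-projection matrix built from the camera parameters listed above, take the 2D bounding rectangle in normalized device coordinates, clamp it to the screen, and compare areas. It overestimates coverage (the screen-space AABB of the corners is larger than the box's silhouette), but needs nothing beyond what is listed; all names are illustrative.

        using System;
        using System.Numerics;

        static class BoxCoverage
        {
            // Approximate fraction of the screen covered by a world-space box.
            public static float ScreenCoverage(Vector3 boxMin, Vector3 boxMax, Matrix4x4 viewProj)
            {
                float minX = 1f, minY = 1f, maxX = -1f, maxY = -1f;
                for (int i = 0; i < 8; i++)
                {
                    var corner = new Vector3(
                        (i & 1) == 0 ? boxMin.X : boxMax.X,
                        (i & 2) == 0 ? boxMin.Y : boxMax.Y,
                        (i & 4) == 0 ? boxMin.Z : boxMax.Z);

                    Vector4 clip = Vector4.Transform(new Vector4(corner, 1f), viewProj);
                    if (clip.W <= 0f) continue; // behind the camera; a full solution clips edges

                    float x = clip.X / clip.W, y = clip.Y / clip.W; // NDC; screen spans [-1, 1]
                    minX = Math.Min(minX, x); maxX = Math.Max(maxX, x);
                    minY = Math.Min(minY, y); maxY = Math.Max(maxY, y);
                }
                if (maxX < minX || maxY < minY) return 0f; // no corner in front of the camera

                // Clamp the rectangle to the screen; the full screen is 2 x 2 in NDC.
                float w = Math.Max(0f, Math.Min(maxX, 1f) - Math.Max(minX, -1f));
                float h = Math.Max(0f, Math.Min(maxY, 1f) - Math.Max(minY, -1f));
                return (w * h) / 4f;
            }
        }

    For the inside-the-box case (camera between boxMin and boxMax), a separate containment test on the camera position is simpler and can short-circuit to 100%.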

