Search Results

Search found 29201 results on 1169 pages for 'game development'.

Page 526/1169 | < Previous Page | 522 523 524 525 526 527 528 529 530 531 532 533  | Next Page >

  • How can I implement 2D cel shading in XNA?

    - by Artii
    So I was just wondering how to give a scene I am rendering a hand-drawn look (like, say, Crayon Physics). I don't really want to preprocess the sprites and was thinking of using a shader. Cel shading supplies the effect I want to achieve, but I am only aware of 3D implementations of it. So I wanted to ask if anyone knows a way to get this effect in 2D, or whether cel shading would work just as well on 2D scenes.
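
    For what it's worth, a minimal host-side sketch (my own, not from the question) of how a 2D effect can be hooked up, assuming XNA 4.0 and a Game subclass; the CelShade.fx asset name and its contents are hypothetical, and the actual banding/outline logic would live in that pixel shader:

      SpriteBatch spriteBatch;
      Effect celEffect;        // hypothetical cel-shading pixel shader (CelShade.fx)
      Texture2D sceneTexture;  // the sprite or pre-rendered scene to shade

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          celEffect = Content.Load<Effect>("CelShade");    // assumed asset name
          sceneTexture = Content.Load<Texture2D>("scene"); // assumed asset name
      }

      protected override void Draw(GameTime gameTime)
      {
          GraphicsDevice.Clear(Color.CornflowerBlue);

          // Passing an Effect to SpriteBatch.Begin runs every drawn sprite through its
          // pixel shader, so colour banding and outlines can be applied per pixel
          // without preprocessing the sprite art.
          spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                            SamplerState.PointClamp, DepthStencilState.None,
                            RasterizerState.CullNone, celEffect);
          spriteBatch.Draw(sceneTexture, Vector2.Zero, Color.White);
          spriteBatch.End();

          base.Draw(gameTime);
      }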

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. I also use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list that contains the names of each file (extensions omitted), zipped into a folder that gets encrypted. At first using XML was working fine, but as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL instead. Originally I planned to simply convert everything from XML to JSON, but possibly MySQL would be a better move. Can anyone inform me of the key differences and the pros and cons of each? I guess I'm looking for the way to store the data that is most convenient and easiest to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concatenate into a single field and parse later. Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?

    Read the article

  • Team matchups for Dota Bot

    - by Dan
    I have a Ghost++ bot that hosts games of DotA (a Warcraft 3 map played 5 players versus 5 players) and I'm trying to come up with good formulas to balance the players going into a match based on their records (I have game history for several thousand games). I'm familiar with some of the concepts required to match up players, like confidence based on the sample size of the number of games they've played, and also parameter approximation and degrees of freedom, and thus throwing out any variables that don't contribute enough to the r^2. My bot collects quite a few variables for each player from each game. The important ones: win/lose/game did not finish, # of player kills, # of player deaths, # of kills the player assisted. The not-so-important ones: # of enemy creep kills, # of creep sneak attacks, # of neutral creep kills, # of tower kills, # of rax kills, # of courier kills. Quick explanation: the kills/deaths don't determine who wins, but the gold gained and lost from them is usually enough to tilt the game. Tower/rax kills are the goal of the game (once a team loses all their towers/rax their throne can be attacked, and if that is destroyed they lose), but I don't really count these as important because it is pretty random who gets the credit for a tower kill, and chances are that if you destroy a tower it is only because some other player is doing well and distracting the other team elsewhere on the map. I'm getting a bit confused when trying to deal with the fact that 5 players are on a team, so ultimately each individual isn't that responsible for the team winning or losing. Take a player who is really good at killing and has 40 kills and only 10 deaths, but in their 5 games they've only won 1. Should I give him extra credit for such a high kill score despite losing? (When losing it is hard to keep a positive kill/death ratio.) Or should I dock him for losing, assuming that despite the nice kill/death ratio he probably plays in a really greedy way, only looking out for himself and not helping the team? Ultimately I don't think I have to guess at questions like this because I have so much data... but I don't really know how to look at the data to answer them. Can anyone help me come up with formulas to help balance teams and predict the outcome? Thanks, Dan
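
    For reference, a minimal sketch of one way to fold win rate and kills/deaths/assists into a single per-player score with a shrinkage prior for small sample sizes. The weights, the 10-game prior and the 0.5 baseline are arbitrary assumptions, not a fitted model; a real system would regress them against actual game outcomes.

      using System;

      class PlayerStats
      {
          public int Games, Wins, Kills, Deaths, Assists;
      }

      static class Rating
      {
          public static double Score(PlayerStats p)
          {
              const int priorGames = 10;        // pseudo-games pulling small samples toward the mean
              const double priorWinRate = 0.5;

              // Shrunk win rate: with only a handful of games the estimate stays near 50%.
              double winRate = (p.Wins + priorWinRate * priorGames) / (p.Games + priorGames);

              // KDA, capped so one stomp cannot dominate the score.
              double kda = (p.Kills + 0.5 * p.Assists) / Math.Max(1, p.Deaths);

              // Arbitrary blend of the two signals.
              return 0.7 * winRate + 0.3 * Math.Min(1.0, kda / 5.0);
          }
      }

    Balancing would then be a matter of searching for the 10-player split whose two score sums are closest, and the same score can be checked against actual outcomes to see whether it has any predictive value.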

    Read the article

  • Django and Google App Engine Helper not finding the ipaddr module.

    - by Phil
    I'm trying to get Django running on GAE using this tutorial. When I run python manage.py runserver I get the stacktrace below. I'm new to both django and python so I don't know what my next steps are (This is Ubuntu Jaunty btw). It seems django isn't finding the GAE module ipaddr which comes with SDK 1.3.1. How do I get django to find this module? /home/username/bin/google_appengine/google/appengine/api/datastore_file_stub.py:40: DeprecationWarning: the md5 module is deprecated; use hashlib instead import md5 /home/username/bin/google_appengine/google/appengine/api/memcache/__init__.py:31: DeprecationWarning: the sha module is deprecated; use the hashlib module instead import sha Traceback (most recent call last): File "manage.py", line 18, in <module> InstallAppengineHelperForDjango() File "/home/username/Development/GAE/myapp/appengine_django/__init__.py", line 543, in InstallAppengineHelperForDjango InstallDjangoModuleReplacements() File "/home/username/Development/GAE/myapp/appengine_django/__init__.py", line 260, in InstallDjangoModuleReplacements import django.db File "/home/username/Development/GAE/myapp/django/db/__init__.py", line 57, in <module> 'TIME_ZONE': settings.TIME_ZONE, File "/home/username/Development/GAE/myapp/appengine_django/db/base.py", line 117, in __init__ self._setup_stubs() File "/home/username/Development/GAE/myapp/appengine_django/db/base.py", line 128, in _setup_stubs from google.appengine.tools import dev_appserver_main File "/home/username/bin/google_appengine/google/appengine/tools/dev_appserver_main.py", line 82, in <module> from google.appengine.tools import appcfg File "/home/username/bin/google_appengine/google/appengine/tools/appcfg.py", line 53, in <module> from google.appengine.api import dosinfo File "/home/username/bin/google_appengine/google/appengine/api/dosinfo.py", line 25, in <module> import ipaddr ImportError: No module named ipaddr

    Read the article

  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this: sampler input : register(s0); float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0 { float4 colour = tex2D(input, coords); if(colour.r == sourceColours[0].r && colour.g == sourceColours[0].g && colour.b == sourceColours[0].b) return targetColours[0]; return colour; } What I would like to do is have the function take in 2 textures, a default table, and a lookup table (both same dimensions). Grab the current pixel, and find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured how to pass the Textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.

    Read the article

  • Adapting a Unity gravitational script to allow moons

    - by PartyMix
    I'm using this script: http://wiki.unity3d.com/index.php/Simple_planetary_orbits to get a solar system going in Unity, but it doesn't seem to support creating bodies that orbit other moving bodies (or I am using it incorrectly). Any idea how to modify it so that it does (or how to use it correctly)? I've been beating my head against this problem for a couple of hours, and I really don't feel like I have any idea what I'm doing. Thanks in advance.
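
    A minimal sketch of one way to get moons, independent of the linked wiki script (the class and field names here are my own assumptions): orbit around a Transform rather than a fixed point, so the centre is re-read every frame and a moon can follow a planet that is itself orbiting the sun.

      using UnityEngine;

      // Attach to a moon and assign the planet's Transform as `center`;
      // attach to a planet and assign the sun, and the nesting falls out naturally.
      public class OrbitAround : MonoBehaviour
      {
          public Transform center;            // the (possibly moving) body to orbit
          public float radius = 5f;           // orbital radius
          public float degreesPerSecond = 30f;
          private float angle;

          void Update()
          {
              angle += degreesPerSecond * Time.deltaTime;
              float rad = angle * Mathf.Deg2Rad;

              // The offset is recomputed from the centre's *current* position each frame.
              Vector3 offset = new Vector3(Mathf.Cos(rad), 0f, Mathf.Sin(rad)) * radius;
              transform.position = center.position + offset;
          }
      }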

    Read the article

  • What's the future of online gamedev: Flash or Unity?

    - by Cpucpu
    Currently I develop for Flash; not long ago I discovered Unity. I haven't played with it yet, but what I have seen so far was cool. Here are my thoughts: Flash is more casual, and getting started costs less in time and money. With Unity you'd likely have to go more business-serious (real money). There are proven business models in Flash, like advergaming, ads and micro-transactions; I have not seen much movement on this in Unity, though maybe it's too soon. Flash is heavy, while Unity, by its nature (built for making games), is way faster. Flash is 2D, and doing something 3D with it turns out weird and slow; Unity is natively 3D, and although it is not optimized for 2D, 2D is likely feasible as well. I am setting aside the gap in plug-in penetration, since that gap will close over time.

    Read the article

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties: No stretching (all textures tile appropriately, so there is no stretching or shearing of the texture). The textures display on the correct axis relative to the model they map to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object, and some other faces have upside-down letters too). The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off). The model is closed (all faces are completely enclosed by other faces). A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand, and it's probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage it seems like an easier problem. I'm looking for an algorithmic approach to this that I can apply in general, not a tool that does it. What approach would work for this? Is there an existing one? (I assume so.)

    Read the article

  • Retrieving model position after applying model transforms in XNA

    - by Glen Dekker
    For this method that the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this? public void DrawModel( Camera camera ) { Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1)); Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix * translationMatrix * Matrix.CreateRotationY(MathHelper.Pi / 6) * translationMatrix2; Matrix[] modelTransforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(modelTransforms); if (camera.getDistanceFromPlayer(position+position1) > 3000) return; foreach (ModelMesh mesh in model.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix; effect.View = camera.viewMatrix; effect.Projection = camera.projectionMatrix; } mesh.Draw(); } }
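
    If the goal is just the final world-space position, one option (a sketch that assumes the mesh's local origin is the point of interest) is to push the origin through the same matrix chain the effect receives, inside the method above once modelTransforms has been filled:

      Matrix meshWorld = modelTransforms[model.Meshes[0].ParentBone.Index] * temp * worldMatrix;

      // Transforming the local origin gives the mesh's position in world space...
      Vector3 worldPosition = Vector3.Transform(Vector3.Zero, meshWorld);

      // ...which is the same as reading the translation component of the composed matrix.
      Vector3 worldPositionAlt = meshWorld.Translation;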

    Read the article

  • What algorithm to use to fill a KenKen square board with cages?

    - by JimmyBoh
    I am working on recreating KenKen, a popular math puzzle involving a blank grid that is divided into "cages". Each cage is just a collection of adjacent squares and has a clue which is generally a number and an operand, shown below: What type of algorithm would be best to fill the square with cages? Assume the maximum number of cells per cage would be 3 and the board is 4x4 in size, like in the example above.
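
    One simple option, sketched below under my own assumptions (a greedy random flood fill; a real generator still has to assign operators and clue values afterwards and verify the puzzle is solvable): repeatedly pick an un-caged cell, grow a cage of random size 1-3 by absorbing free orthogonal neighbours, and stop early if the cage gets boxed in. On a 4x4 board this always terminates because the outer scan visits every cell.

      using System;
      using System.Collections.Generic;

      static class CageFiller
      {
          // Returns a size x size grid in which each cell holds its cage id.
          public static int[,] FillCages(int size = 4, int maxCageSize = 3)
          {
              var rng = new Random();
              var cage = new int[size, size];
              for (int r = 0; r < size; r++)
                  for (int c = 0; c < size; c++)
                      cage[r, c] = -1;                          // -1 = not yet caged

              int nextId = 0;
              var dirs = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) };

              for (int r = 0; r < size; r++)
              for (int c = 0; c < size; c++)
              {
                  if (cage[r, c] != -1) continue;

                  int id = nextId++;
                  int target = rng.Next(1, maxCageSize + 1);    // desired cage size, 1..3
                  var members = new List<(int r, int c)> { (r, c) };
                  cage[r, c] = id;

                  // Grow the cage by absorbing random free orthogonal neighbours.
                  while (members.Count < target)
                  {
                      var cell = members[rng.Next(members.Count)];
                      var free = new List<(int r, int c)>();
                      foreach (var (dr, dc) in dirs)
                      {
                          int nr = cell.r + dr, nc = cell.c + dc;
                          if (nr >= 0 && nr < size && nc >= 0 && nc < size && cage[nr, nc] == -1)
                              free.Add((nr, nc));
                      }
                      if (free.Count == 0) break;               // boxed in: accept a smaller cage
                      var pick = free[rng.Next(free.Count)];
                      cage[pick.r, pick.c] = id;
                      members.Add(pick);
                  }
              }
              return cage;
          }
      }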

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to make a 2D array of depth values available to the fragment shader, to check depth against when creating shadows. I want to be able to copy a float array to the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I managed was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?

    Read the article

  • How to make unit selection circles merge?

    - by MaT
    I would like to know how to achieve this effect of merged circle selections. Here are images to illustrate. Basically I'm looking for this effect: how can the merge effect of the circles be achieved? I haven't found any explanation of this effect. I know that to project those textures I can develop a decal system, but I don't know how to create the merging effect. If possible, I'm looking for a purely shader-based solution.

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below. first depth split second depth split - here you can see the model is caught between the splits How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustrum erronous? CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat) { CameraFrustrum ret = { Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); outFrustrumMat = invMVP; for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; }

    Read the article

  • Early Z culling - Ogre

    - by teodron
    This question is concerned with how one can enable this "pixel filter" to work within an Ogre based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer lighting off colour_write off shading flat The second pass is the one that employs heavy pixel shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees suffer in a peculiar way - from one side, they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side, they exhibit the desired behavior. To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass there is a need to recompute correctly the pixels of the objects closest to the camera. This doesn't work at all, all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.

    Read the article

  • My first animation - Using SDL.NET C#

    - by Mark
    Hi all! I'm trying to animate a player object in my 2D grid when the user clicks somewhere on the screen. I have the following 4 variables: oX (current player position X), oY (current player position Y), dX (destination X), dY (destination Y). How can I make sure the player moves in a straight line to the new XY coordinates? The way I'm doing it now is really awful and causes the player to first move along the x axis and only then along the y axis. Can someone give me some guidance on the math involved, because I'm really not sure how to accomplish this. Thank you for your time. Kind regards, Mark Update: It's working now, but what's the right way to check whether the current position is equal to the target position? private static void MovePlayer(double x2, double y2, int duration) { double hX = x2 - m_PlayerPosition.X; double hY = y2 - m_PlayerPosition.Y; double Length = Math.Sqrt(Math.Pow(hX, 2) + Math.Pow(hY, 2)); hX = hX / Length; hY = hY / Length; while (m_PlayerPosition.X != Convert.ToInt32(x2) || m_PlayerPosition.Y != Convert.ToInt32(y2)) { m_PlayerPosition.X += Convert.ToInt32(hX * 1); m_PlayerPosition.Y += Convert.ToInt32(hY * 1); UpdatePlayerLocation(); } }
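
    In case it helps, a minimal sketch of the usual approach (the names are mine, not the question's code): keep the position in doubles, step along the normalised direction by speed * elapsed time, and snap to the target once the remaining distance is smaller than one step, so there is never an exact float/int equality test. Rounding to int would only happen when handing the position to SDL.NET for drawing.

      using System;

      class Mover
      {
          public double X, Y;   // current position, kept as doubles between frames

          // Advances toward (targetX, targetY); returns true once the target is reached.
          public bool StepToward(double targetX, double targetY, double speed, double dtSeconds)
          {
              double dx = targetX - X;
              double dy = targetY - Y;
              double distance = Math.Sqrt(dx * dx + dy * dy);
              double step = speed * dtSeconds;

              if (distance <= step)       // close enough: snap instead of testing equality
              {
                  X = targetX;
                  Y = targetY;
                  return true;
              }

              X += dx / distance * step;  // move along the straight line toward the target
              Y += dy / distance * step;
              return false;
          }
      }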

    Read the article

  • Restoring projection matrix

    - by brainydexter
    I am learning to use FBOs, and one of the things I need to do when rendering onto a user-defined FBO is to set up the projection, modelview and viewport for it. Once I am done rendering to the FBO, I need to restore these matrices. I found: glPushAttrib(GL_VIEWPORT_BIT); glPopAttrib(); to restore the viewport to its old state. Is there a way to restore the projection and modelview matrices to whatever they were earlier? Tech: C++/OpenGL Thanks!

    Read the article

  • Quaternion difference + time --> angular velocity (gyroscope in physics library)

    - by AndrewK
    I am using Bullet Physic library to program some function, where I have difference between orientation from gyroscope given in quaternion and orientation of my object, and time between each frame in milisecond. All I want is set the orientation from my gyroscope to orientation of my object in 3D space. But all I can do is set angular velocity to my object. I have orientation difference and time, and from that I calculate vector of angular velocity [Wx,Wy,Wz] from that formula: W(t) = 2 * dq(t)/dt * conj(q(t)) My code is: btQuaternion diffQuater = gyroQuater - boxQuater; btQuaternion conjBoxQuater = gyroQuater.inverse(); btQuaternion velQuater = ((diffQuater * 2.0f) / d_time) * conjBoxQuater; And everything works well, till I get: 1 rotating around Y axis, angle about 60 degrees, then I have these values in 2 critical frames: x: -0.013220 y: -0.038050 z: -0.021979 w: -0.074250 - diffQuater x: 0.120094 y: 0.818967 z: 0.156797 w: -0.538782 - gyroQuater x: 0.133313 y: 0.857016 z: 0.178776 w: -0.464531 - boxQuater x: 0.207781 y: 0.290452 z: 0.245594 - diffQuater -> euler angles x: 3.153619 y: -66.947929 z: 175.936615 - gyroQuater -> euler angles x: 4.290697 y: -57.553043 z: 173.320053 - boxQuater -> euler angles x: 0.138128 y: 2.823307 z: 1.025552 w: 0.131360 - velQuater d_time: 0.058000 x: 0.211020 y: 1.595124 z: 0.303650 w: -1.143846 - diffQuater x: 0.089518 y: 0.771939 z: 0.144527 w: -0.612543 - gyroQuater x: -0.121502 y: -0.823185 z: -0.159123 w: 0.531303 - boxQuater x: nan y: nan z: nan - diffQuater -> euler angles x: 2.985240 y: -76.304405 z: -170.555054 - gyroQuater -> euler angles x: 3.269681 y: -65.977966 z: 175.639420 - boxQuater -> euler angles x: -0.730262 y: -2.882153 z: -1.294721 w: 63.325996 - velQuater d_time: 0.063000 2 rotating around X axis, angle about 120 degrees, then I have these values in 2 critical frames: x: -0.013045 y: -0.004186 z: -0.005667 w: -0.022482 - diffQuater x: -0.848030 y: -0.187985 z: 0.114400 w: 0.482099 - gyroQuater x: -0.834985 y: -0.183799 z: 0.120067 w: 0.504580 - boxQuater x: 0.036336 y: 0.002312 z: 0.020859 - diffQuater -> euler angles x: -113.129463 y: 0.731925 z: 25.415056 - gyroQuater -> euler angles x: -110.232368 y: 0.860897 z: 25.350458 - boxQuater -> euler angles x: -0.865820 y: -0.456086 z: 0.034084 w: 0.013184 - velQuater d_time: 0.055000 x: -1.721662 y: -0.387898 z: 0.229844 w: 0.910235 - diffQuater x: -0.874310 y: -0.200132 z: 0.115142 w: 0.426933 - gyroQuater x: 0.847352 y: 0.187766 z: -0.114703 w: -0.483302 - boxQuater x: -144.402298 y: 4.891629 z: 71.309158 - diffQuater -> euler angles x: -119.515343 y: 1.745076 z: 26.646086 - gyroQuater -> euler angles x: -112.974533 y: 0.738675 z: 25.411509 - boxQuater -> euler angles x: 2.086195 y: 0.676526 z: -0.424351 w: 70.104248 - velQuater d_time: 0.057000 2 rotating around Z axis, angle about 120 degrees, then I have these values in 2 critical frames: x: -0.000736 y: 0.002812 z: -0.004692 w: -0.008181 - diffQuater x: -0.003829 y: 0.012045 z: -0.868035 w: 0.496343 - gyroQuater x: -0.003093 y: 0.009232 z: -0.863343 w: 0.504524 - boxQuater x: -0.000822 y: -0.003032 z: 0.004162 - diffQuater -> euler angles x: -1.415189 y: 0.304210 z: -120.481873 - gyroQuater -> euler angles x: -1.091881 y: 0.227784 z: -119.399445 - boxQuater -> euler angles x: 0.159042 y: 0.169228 z: -0.754599 w: 0.003900 - velQuater d_time: 0.025000 x: -0.007598 y: 0.024074 z: -1.749412 w: 0.968588 - diffQuater x: -0.003769 y: 0.012030 z: -0.881377 w: 0.472245 - gyroQuater x: 0.003829 y: -0.012045 z: 0.868035 w: -0.496343 - 
boxQuater x: -5.645197 y: 1.148993 z: -146.507187 - diffQuater -> euler angles x: -1.418294 y: 0.270319 z: -123.638245 - gyroQuater -> euler angles x: -1.415183 y: 0.304208 z: -120.481873 - boxQuater -> euler angles x: 0.017498 y: -0.013332 z: 2.040073 w: 148.120056 - velQuater d_time: 0.027000 The problem is the most visible in diffQuater - euler angles vector. Can someone tell me why it is like that? and how to solve that problem? All suggestions are welcome.
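
    For reference, a hedged restatement of the maths in discrete form (my own sketch, not the asker's code). Because q and -q encode the same rotation, the two quaternions should first be forced into the same hemisphere; the relative rotation is then a product with an inverse, and its axis-angle form divided by the frame time gives the angular velocity:

      q_{\mathrm{box}} \leftarrow -q_{\mathrm{box}} \quad \text{if } q_{\mathrm{gyro}} \cdot q_{\mathrm{box}} < 0

      \Delta q = q_{\mathrm{gyro}} \otimes q_{\mathrm{box}}^{-1}

      \theta = 2\,\operatorname{atan2}\!\left(\lVert \Delta q_{xyz} \rVert,\ \Delta q_w\right),
      \qquad \boldsymbol{\omega} \approx \frac{\theta}{\Delta t}\,\frac{\Delta q_{xyz}}{\lVert \Delta q_{xyz} \rVert}

    Note that in the logged frames above, boxQuater changes sign between two consecutive frames, which is exactly the q/-q ambiguity the hemisphere check guards against; without it, a component-wise difference (and its Euler conversion) blows up, as in the NaN frame.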

    Read the article

  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures works fine, but I am having problem with normal maps, so I thought I'd tried to apply the normal maps as the diffuse map in my fragment shader so I could see everything is OK. I comment-out my normal map code and just set the diffuse map to the normal map and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader, notice I commented out normal map code so I could debug the normal map as a diffuse texture "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if 
(light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Here is my wrapper around a texture OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureFormat textureFormat, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB : textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED); glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } OpenGLTexture::~OpenGLTexture() { glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger); } And here is the sampler I create which is shared between Diffuse and normal textures // texture sampler setup glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger); SetAnisotropicFiltering(mCurrentAnisotropy); The diffuse textures looks like they should, but the normal looks so wierd. Why is this?

    Read the article

  • How do you turn a cube into a sphere?

    - by Tom Dalling
    I'm trying to make a quad sphere based on an article, which shows results like this: I can generate a cube correctly: But when I convert all the points according to this formula (from the page linked above): x = x * sqrtf(1.0 - (y*y/2.0) - (z*z/2.0) + (y*y*z*z/3.0)); y = y * sqrtf(1.0 - (z*z/2.0) - (x*x/2.0) + (z*z*x*x/3.0)); z = z * sqrtf(1.0 - (x*x/2.0) - (y*y/2.0) + (x*x*y*y/3.0)); My sphere looks like this: As you can see, the edges of the cube still poke out too far. The cube ranges from -1 to +1 on all axes, like the article says. Any ideas what is wrong?
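
    One thing worth double-checking (a sketch under that assumption, not a confirmed diagnosis): all three outputs have to be computed from the original cube coordinates; if x is overwritten before y and z are evaluated, the later components are skewed and the corners keep poking out. The formula below is the one quoted in the question, applied to untouched inputs.

      using System;

      struct Vec3
      {
          public double X, Y, Z;
          public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
      }

      static class CubeToSphere
      {
          // Maps a point on the surface of the [-1, 1] cube onto the unit sphere.
          // All three results are computed from the untouched inputs x, y and z.
          public static Vec3 Map(double x, double y, double z)
          {
              double sx = x * Math.Sqrt(1.0 - y * y / 2.0 - z * z / 2.0 + y * y * z * z / 3.0);
              double sy = y * Math.Sqrt(1.0 - z * z / 2.0 - x * x / 2.0 + z * z * x * x / 3.0);
              double sz = z * Math.Sqrt(1.0 - x * x / 2.0 - y * y / 2.0 + x * x * y * y / 3.0);
              return new Vec3(sx, sy, sz);
          }
      }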

    Read the article

  • SpriteBatch.end() generating null pointer exception

    - by odaymichael
    I am getting a null pointer exception using libGDX, and the debugger points to the SpriteBatch.end() line. I was wondering what could cause this. Here is the offending code block, specifically the batch.end() line: batch.begin(); for (int j = 0; j < 3; j++) for (int i = 0; i < 3; i++) if (zoomgrid[i][j].getPiece().getImage() != null) zoomgrid[i][j].getPiece().getImage().draw(batch); batch.end(); The top of the stack is actually a line that calls lastTexture.bind(); in the flush() method of com.badlogic.gdx.graphics.g2d.SpriteBatch. I appreciate any input; let me know if I haven't included enough information.

    Read the article

  • Randomly placing items script not working - sometimes items spawn in walls, sometimes items spawn in weird locations

    - by Timothy Williams
    I'm trying to figure out a way to randomly spawn items throughout my level, however I need to make sure they won't spawn inside another object (walls, etc.) Here's the code I'm currently using, it's based on the Physics.CheckSphere(); function. This runs OnLevelWasLoaded(); It spawns the items perfectly fine, but sometimes items spawn partway in walls. And sometimes items will spawn outside of the SpawnBox range (no clue why it does that.) //This is what randomly generates all the items. void SpawnItems () { if (Application.loadedLevelName == "Menu" || Application.loadedLevelName == "End Demo") return; //The bottom corner of the box we want to spawn items in. Vector3 spawnBoxBot = Vector3.zero; //Top corner. Vector3 spawnBoxTop = Vector3.zero; //If we're in the dungeon, set the box to the dungeon box and tell the items we want to spawn. if (Application.loadedLevelName == "dungeonScene") { spawnBoxBot = new Vector3 (8.857f, 0, 9.06f); spawnBoxTop = new Vector3 (-27.98f, 2.4f, -15); itemSpawn = dungeonSpawn; } //Spawn all the items. for (i = 0; i != itemSpawn.Length; i ++) { spawnedItem = null; //Zeroes out our random location Vector3 randomLocation = Vector3.zero; //Gets the meshfilter of the item we'll be spawning MeshFilter mf = itemSpawn[i].GetComponent<MeshFilter>(); //Gets it's bounds (see how big it is) Bounds bounds = mf.sharedMesh.bounds; //Get it's radius float maxRadius = new Vector3 (bounds.extents.x + 10f, bounds.extents.y + 10f, bounds.extents.z + 10f).magnitude * 5f; //Set which layer is the no walls layer var NoWallsLayer = 1 << LayerMask.NameToLayer("NoWallsLayer"); //Use that layer as your layermask. LayerMask layerMask = ~(1 << NoWallsLayer); //If we're in the dungeon, certain items need to spawn on certain halves. if (Application.loadedLevelName == "dungeonScene") { if (itemSpawn[i].name == "key2" || itemSpawn[i].name == "teddyBearLW" || itemSpawn[i].name == "teddyBearLW_Admiration" || itemSpawn[i].name == "radio") randomLocation = new Vector3(Random.Range(spawnBoxBot.x, -26.96f), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, -2.141f)); else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(-2.374f, spawnBoxTop.z)); } //Otherwise just spawn them in the box. else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, spawnBoxTop.z)); //This is what actually spawns the item. It checks to see if the spot where we want to instantiate it is clear, and if so it instatiates it. Otherwise we have to repeat the whole process again. if (Physics.CheckSphere(randomLocation, maxRadius, layerMask)) spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation); else i --; //If we spawned something, set it's name to what it's supposed to be. Removes the (clone) addon. if (spawnedItem != null) spawnedItem.name = itemSpawn[i].name; } } What I'm asking for is if you know what's going wrong with this code that it would spawn stuff in walls. Or, if you could provide me with links/code/ideas of a better way to check if an item will spawn in a wall (some other function than Physics.CheckSphere). I've been working on this for a long time, and nothing I try seems to work. Any help is appreciated.
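
    One detail worth noting, with a minimal sketch of my own rather than a fix to the full script: Physics.CheckSphere returns true when one or more colliders overlap the sphere, so a clear spawn spot is one where it returns false, and a bounded retry loop avoids decrementing the for-loop counter indefinitely when no clear spot exists.

      using UnityEngine;

      public static class SpawnUtil
      {
          // Tries up to maxTries random points inside the box; returns null if none are clear.
          public static Vector3? FindClearSpot(Vector3 boxMin, Vector3 boxMax,
                                               float radius, LayerMask mask, int maxTries = 50)
          {
              for (int attempt = 0; attempt < maxTries; attempt++)
              {
                  Vector3 candidate = new Vector3(
                      Random.Range(boxMin.x, boxMax.x),
                      Random.Range(boxMin.y, boxMax.y),
                      Random.Range(boxMin.z, boxMax.z));

                  // CheckSphere == true means something overlaps, so a clear spot is the opposite.
                  if (!Physics.CheckSphere(candidate, radius, mask))
                      return candidate;
              }
              return null;   // give up instead of looping forever
          }
      }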

    Read the article

  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but am getting hung up on the final implementation details. It now reliably will rotate the x,y axis to the correct side, but the z-axis is never rotating (See photos of before and after rotation). When I'm using the code below I always get '0' for my rotationVector.z. What am I missing here? // Define lookAt vector lookAtVector = GLKVector3Make(0,0,1); // Define axes vectors axes[0] = GLKVector3Make(0,0,1); axes[1] = GLKVector3Make(-1,0,0); axes[2] = GLKVector3Make(0,1,0); axes[3] = GLKVector3Make(1,0,0); axes[4] = GLKVector3Make(0,-1,0); axes[5] = GLKVector3Make(0,0,-1); CGFloat highest_dot = -1.0; GLKVector3 closest_axis; for(int i = 0; i < 6; i++) { // multiply cube's axes by existing matrix GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]); CGFloat dot = GLKVector3DotProduct(axis, lookAtVector); if(dot > highest_dot) { closest_axis = axis; highest_dot = dot; } } GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector); // Get angle between vectors CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector)); // normalize the rotation vector rotationVector = GLKVector3Normalize(rotationVector); // Create transform CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z); // add rotation transform to existing transformation baseTransform = CATransform3DConcat(baseTransform, rotationTransform); return baseTransform; Before 3d Rotation After 3d Rotation Implementation based on this post

    Read the article

  • How can I generate a view or projection matrix for OpenGL 3.+

    - by Ken
    I'm transitioning from OpenGL 2 to OpenGL 3.+ and to GLSL 1.5, and I'm trying to avoid using the deprecated features. My question is: how do we now generate the view or projection matrix? I was using the matrix stack to calculate the projection matrix for me: GLfloat ptr[16]; gluPerspective(...); glGetFloatv(GL_MODELVIEW_MATRIX, ptr); //then pass ptr via a uniform to the shader But the matrix stack is obviously deprecated, so this approach is not an option going forward. I have the 'Red Book', 7th ed, which covers 3.0 & 3.1, and it still uses the deprecated matrix functions in its examples. I could write some utility code myself to generate the matrices, but I don't want to re-invent this particular wheel, especially when this functionality is required for every 3D graphics program. What is the accepted way to generate world, view & projection matrices for OpenGL? Is there an emerging 'standard' library for this? Or is there some other hidden (to me) functionality in OpenGL/GLSL which I have overlooked?

    Read the article

  • Restrict Tile Map to its boundaries

    - by Farooq Arshed
    I have loaded a tmx file in cocos2dx and now I am trying to implement panning. I have successfully implemented the panning first part where the map moves. Now I want to restrict the map so it does not display the map beyond its boundary where it shows black screen. I am confused as to how to implement it. Below is my code any help would be appreciated. bool HelloWorld::init() { if ( !CCLayer::init() ) { return false; } const char* tmx= "isometric_grass_and_water.tmx"; _tileMap = new CCTMXTiledMap(); _tileMap->initWithTMXFile(tmx); this->addChild(_tileMap); this->setTouchEnabled(true); return true; } void HelloWorld::ccTouchesBegan(CCSet *touches, CCEvent *event){ CCSetIterator it; for (it=touches->begin(); it!=touches->end(); ++it){ CCTouch* touch = (CCTouch*)it.operator*(); CCLog("touches id: %d", touch->getID()); oldLoc = touch->getLocationInView(); oldLoc = CCDirector::sharedDirector()->convertToGL(oldLoc); } } void HelloWorld::ccTouchesMoved(CCSet *touches, CCEvent *event) { if (touches->count() == 1) { CCTouch* touch = (CCTouch*)( touches->anyObject() ); this->moveScreen(touch); } else if (touches->count() == 2) { this->scaleScreen(touches); } } void HelloWorld::moveScreen(CCTouch* touch) { CCPoint currentLoc = touch->getLocationInView(); currentLoc = CCDirector::sharedDirector()->convertToGL(currentLoc); CCPoint moveTo = ccpSub(oldLoc, currentLoc); moveTo = ccpMult(moveTo, -1); oldLoc = currentLoc; this->setPosition(ccpAdd(this->getPosition(), ccp(moveTo.x, moveTo.y))); }

    Read the article

  • Normals vs Normal maps

    - by KaiserJohaan
    I am using the Assimp asset importer (http://assimp.sourceforge.net/lib_html/index.html) to parse 3D models. So far, I've simply pulled out the normal vectors that are defined for each vertex in my meshes. Yet I have also found various tutorials on normal maps... As I understand it, for normal maps the normal vectors are stored in each texel of the normal map, and you pull these out of the normal texture in the shader. Why are there two ways to get the normals, which one is considered best practice, and why?

    Read the article
