Search Results

Search found 11995 results on 480 pages for 'clement game'.

  • Away3D & Directional Light w/ Rotating Meshes

    - by seethru
    This is likely a stupid error, but I can't seem to find what I've done wrong. I've got a simple scene with 10 cylinders rotating at a default speed. If I grab one of these cylinders I can rotate it in the opposite direction or at a greater speed. I have a single directional light in the scene. It would appear that the directional light is only calculated at initialization and not on further frames: the shadow created by the light rotates with the cylinder, giving the impression that the light is rotating when it isn't.

    Camera and light initialization:

        _view = new View3D();
        addChild(_view);
        _view.antiAlias = 4;
        _view.backgroundColor = 0xFFFFFF;
        _view.camera.z = -850;
        _view.camera.y = 0;
        _view.camera.x = 0;
        _view.camera.lookAt(new Vector3D());
        _view.camera.lens = new PerspectiveLens(15);
        _view.mousePicker = PickingType.RAYCAST_BEST_HIT;

        _light = new DirectionalLight();
        _light.z = -850;
        _light.direction = new Vector3D(1, 1, 1);
        _light.color = 0xFFFFFF;
        _light.ambient = 0.1;
        _light.diffuse = 0.7;
        _view.scene.addChild(_light);

    Mesh and material creation:

        var material:TextureMaterial = new TextureMaterial(createPow2Texture(sprite, _colors[i]), true, false, true);
        material.animateUVs = true;
        material.lightPicker = _lightPicker;
        cylinder = new Mesh(new CylinderGeometry(radius, radius, 13, 70, 1, true, true), material);
        cylinder.subMeshes[0].scaleU = spriteWidth / sprite.width;
        cylinder.y = y;
        cylinder.mouseEnabled = true;
        cylinder.pickingCollider = PickingColliderType.AS3_BEST_HIT;
        cylinder.addEventListener(MouseEvent3D.MOUSE_OVER, onMouseOverMesh);
        cylinder.addEventListener(MouseEvent3D.MOUSE_MOVE, onMouseOverMesh);
        cylinder.addEventListener(MouseEvent3D.MOUSE_OUT, onMouseOutMesh);
        _cylinders.push(cylinder);

    Per-frame update:

        private function onEnterFrame(event:Event):void
        {
            for each (var mesh:Mesh in _cylinders)
            {
                if (mesh == _mouseOverMesh)
                    continue;
                mesh.rotationY += 0.25;
            }
            _view.render();
        }

  • MD5 vertex skinning problem extending to multi-jointed skeleton (GPU Skinning)

    - by Soapy
    Currently I'm trying to implement GPU skinning in my project. So far I have achieved single-joint translation and rotation, and multi-joint translation. The problem arises when I try to rotate a multi-jointed skeleton. The image above shows the current progress: the left image shows how the model should deform, the middle image shows how it deforms in my project, and the right shows a better (though still not correct) deform produced by inverting a certain value, which I explain below.

    The way I get my animation data is by exporting it to the MD5 format (MD5mesh for mesh data and MD5anim for animation data). When I parse the animation data, for each frame I check whether the bone has a parent. If not, the data is passed in as-is from the MD5anim file. If it does have a parent, I transform the bone's position by the parent's orientation, then add this to the parent's translation. Then the parent and child orientations get concatenated. This is covered at this website.

        if (Parent < 0)
        {
            ... // Save this data without editing it
        }
        else
        {
            Math3::vec3 rpos;
            Math3::quat pq = Parent.Quaternion;
            Math3::quat pqi(pq);
            pqi.InvertUnitQuat();
            pqi.Normalise();
            Math3::quat::RotateVector3(rpos, pq, jv);
            Math3::vec3 npos(rpos + Parent.Pos);
            this->Translation = npos;
            Math3::quat nq = pq * jq;
            nq.Normalise();
            this->Quaternion = nq;
        }

    To achieve the image on the right, all I need to do is change Math3::quat::RotateVector3(rpos, pq, jv); to Math3::quat::RotateVector3(rpos, pqi, jv);. Why is that?

    And this is my skinning shader, SkinningShader.vert:

        #version 330 core

        smooth out vec2 vVaryingTexCoords;
        smooth out vec3 vVaryingNormals;
        smooth out vec4 vWeightColor;

        uniform mat4 MV;
        uniform mat4 MVP;
        uniform mat4 Pallete[55];
        uniform mat4 invBindPose[55];

        layout(location = 0) in vec3 vPos;
        layout(location = 1) in vec2 vTexCoords;
        layout(location = 2) in vec3 vNormals;
        layout(location = 3) in int vSkeleton[4];
        layout(location = 4) in vec3 vWeight;

        void main()
        {
            vec4 wpos = vec4(vPos, 1.0);
            vec4 norm = vec4(vNormals, 0.0);
            vec4 weight = vec4(vWeight, (1.0f - (vWeight[0] + vWeight[1] + vWeight[2])));
            normalize(weight);
            mat4 BoneTransform;
            for(int i = 0; i < 4; i++)
            {
                if(vSkeleton[i] != -1)
                {
                    if(i == 0)
                    {
                        // These are interchangeable for some reason
                        // BoneTransform = ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                        BoneTransform = ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                    }
                    else
                    {
                        // These are interchangeable for some reason
                        // BoneTransform += ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                        BoneTransform += ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                    }
                }
            }
            wpos = BoneTransform * wpos;
            vWeightColor = weight;
            vVaryingTexCoords = vTexCoords;
            vVaryingNormals = normalize(vec3(vec4(vNormals, 0.0) * MV));
            gl_Position = wpos * MVP;
        }

    The Pallete matrices are the matrices calculated using the code above (a rotation and a translation matrix get created from the translation and quaternion). The invBindPose matrices are simply the inverted matrices created from the joints in the MD5mesh file.

    Update 1: I looked at GLM to compare the values I get with my own implementation. They turn out to be exactly the same. So now I'm checking whether there's a problem with matrix creation.

    Update 2: Looked at GLM again to compare matrix creation using quaternions. Turns out that's not the problem either.

  • How to create projection/view matrix for hole in the monitor effect

    - by Mr Bell
    Let's say I have my XNA app window sized at 640 x 480 pixels. Now let's say I have a cube model with its polys facing inward to make a room. This cube is 640 units wide by 480 units high by 480 units deep. The camera is somewhere in front of the box, looking at it. How can I set up the view and projection matrices such that the front edge of the box lines up exactly with the edges of the application window? It seems like this should involve the Matrix.CreatePerspectiveOffCenter method, but I don't fully understand how the parameters translate onto the screen. For reference, the end result will be something like Johnny Lee's Wii head-tracking demo: http://www.youtube.com/watch?v=Jd3-eiid-Uw&feature=player_embedded

    P.S. I realize that his source code is available, but I'm afraid I haven't been able to make heads or tails of it.
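
    For what it's worth, a minimal untested sketch of the off-center setup, assuming the box's front face is centered on the origin in the XY plane and the viewer's eye position is known relative to that face (eye, the near distance, and farPlane below are illustrative values, not from the question):

        // Eye position relative to the center of the box's front face,
        // with +Z pointing from that face toward the viewer.
        Vector3 eye = new Vector3(0, 0, 850);
        float w = 640f, h = 480f;   // front face size in world units
        float n = 1f;               // near plane distance
        float farPlane = 5000f;

        // Scale the front-face edges down onto the near plane, offset by
        // the eye position, so the face always fills the window exactly.
        float left   = (-w / 2f - eye.X) * n / eye.Z;
        float right  = ( w / 2f - eye.X) * n / eye.Z;
        float bottom = (-h / 2f - eye.Y) * n / eye.Z;
        float top    = ( h / 2f - eye.Y) * n / eye.Z;

        Matrix projection = Matrix.CreatePerspectiveOffCenter(left, right, bottom, top, n, farPlane);

        // The view matrix only translates (no rotation), so the near plane
        // stays parallel to the screen plane.
        Matrix view = Matrix.CreateTranslation(-eye.X, -eye.Y, -eye.Z);

    Moving eye around then skews the frustum the same way the head-tracking demo does.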

  • Direct3D9 application won't write to depth buffer

    - by DeadMG
    I've got an application written in D3D9 which will not write any values to the depth buffer, resulting in incorrect values for the depth test. Things I've checked so far:

    - D3DRS_ZENABLE, set to TRUE
    - D3DRS_ZWRITEENABLE, set to TRUE
    - D3DRS_ZFUNC, set to D3DCMP_LESSEQUAL
    - The depth buffer is definitely bound to the pipeline at the relevant time
    - The depth buffer was correctly cleared before use

    I've used PIX to confirm that all of these things occurred as expected. For example, if I clear the depth buffer to 0 instead of 1, then correctly nothing is drawn, and PIX confirms that all the pixels failed the depth test. But I've also used PIX to confirm that my submitted geometry does not write to the depth buffer and so is not correctly rendered. Any other suggestions?

  • A few questions about integrating AudioKinetic Wwise and Unity

    - by SaldaVonSchwartz
    I'm new to Wwise and to using it with Unity, and though I have gotten the integration to work, I'm still dealing with some loose ends and have a few questions. (I'm on Unity 4.3 as of now, but I think that shouldn't make any difference.)

    The base path: The Wwise documentation implies you set this in the AkGlobalSoundEngineInitializer basePath public ivar, which is exposed to the editor. However, I found that this variable is not really used. Instead, the path is hardcoded to /Audio/GeneratedSoundBanks in AkBankPath. I had to modify both scripts to actually look in the path that I set in the editor property. What's the deal with this? Just sloppiness, or am I missing something?

    Also about paths: since I'm on Mac, I'm using Unity natively under OS X and, in tandem, the Wwise authoring tool via VMware, and I share the OS X Unity project folder so I can generate the soundbanks into the assets folder. However, the authoring tool (I downloaded the latest one for Windows) doesn't automatically generate any "platform-specific" subfolders for my Wwise files. That is, again, the Unity integration scripts assume the path to be /Audio/GeneratedSoundBanks/<my-platform>/, which in my case would be Mac (I set the authoring tool to generate for Mac). The documentation says Wwise will automatically generate the platform-specific folders, but it just dumps all the stuff in GeneratedSoundBanks. Am I missing some setting? Because right now I just manually create the /Mac folder.

    IDs: The C# methods AkSoundEngine.PostEvent and AkSoundEngine.LoadBank, for instance, have a few overloads, including ones where I can refer to the soundbanks or events by their ID. However, if I try to use these, for instance AkSoundEngine.LoadBank(, AkSoundEngine.AK_DEFAULT_POOL_ID) where the int comes from the .h header, I get Ak_Fail. If I use the overloads that reference the objects by string name, then it works. What gives?

    Converting the ID header to C#: The integration comes with a C# script that seems to fork a process to call Python, in turn, to convert the C++ header into a C# script. This always fails unless I manually execute the Python script myself from outside Unity. Might be a permissions thing, but has anyone experienced this?

    The Profiler: I set up the Unity player to run in the background and am using the "Profile" version of the plugin. However, when I start the Unity OS X standalone app, the profiler in VMware does not see it. This might just be that I'm trying to see a running instance of the sound engine inside an OS X binary from a Windows virtual machine. But I'm wondering if anyone has gotten the Windows profiler to see an OS X Unity binary.

    Different versions of the integration plugin: It's not clear to me from the documentation whether I have to manually (or with a script) remove the "Profile" version and install the "Release" version when I'm going to do a Release build, or if I should install both versions in Unity and it'll select the right one.

    Thanks!

  • Textures do not render on ATI graphics cards?

    - by Mathias Lykkegaard Lorenzen
    I'm rendering textured quads to an orthographic view in XNA through hardware instancing. On Nvidia graphics cards this all works, tested on 3 machines. On ATI cards it doesn't work at all, tested on 2 machines. How come? Culling, perhaps? My orthographic view is set up like this:

        Matrix projection = Matrix.CreateOrthographicOffCenter(0, graphicsDevice.Viewport.Width,
            -graphicsDevice.Viewport.Height, 0, 0, 1);

    And my elements are rendered with the Z-coordinate 0.

    Edit: I just figured out something weird. If I do not run the SpriteBatch code below before doing my textured quad rendering, then it won't work on Nvidia cards either. Could that be due to culling information or something like that?

        Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone);
        ...
        spriteBatch.End();

    Edit 2: Here's the full code for my instancing call.

        public void DrawTextures()
        {
            Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
                SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone, textureEffect);
            while (texturesToDraw.Count > 0)
            {
                TextureJob texture = texturesToDraw.Dequeue();
                spriteBatch.Draw(texture.Texture, texture.DestinationRectangle, texture.TintingColor);
            }
            spriteBatch.End();

        #if !NOTEXTUREINSTANCING
            // no work to do
            if (positionInBufferTextured > 0)
            {
                device.BlendState = BlendState.Opaque;
                textureEffect.CurrentTechnique = textureEffect.Techniques["Technique1"];
                textureEffect.Parameters["Texture"].SetValue(darkTexture);
                textureEffect.CurrentTechnique.Passes[0].Apply();

                if ((textureInstanceBuffer == null) || (positionInBufferTextured > textureInstanceBuffer.VertexCount))
                {
                    if (textureInstanceBuffer != null)
                        textureInstanceBuffer.Dispose();
                    textureInstanceBuffer = new DynamicVertexBuffer(device, texturedInstanceVertexDeclaration,
                        positionInBufferTextured, BufferUsage.WriteOnly);
                }

                if (positionInBufferTextured > 0)
                {
                    textureInstanceBuffer.SetData(texturedInstances, 0, positionInBufferTextured, SetDataOptions.Discard);
                }

                device.Indices = textureIndexBuffer;
                device.SetVertexBuffers(textureGeometryBuffer, new VertexBufferBinding(textureInstanceBuffer, 0, 1));
                device.DrawInstancedPrimitives(PrimitiveType.TriangleStrip, 0, 0,
                    textureGeometryBuffer.VertexCount, 0, 2, positionInBufferTextured);

                // now that we've drawn, it's ok to reset positionInBuffer back to zero,
                // and write over any vertices that may have been set previously.
                positionInBufferTextured = 0;
            }
        #endif
        }

  • Sprite sheets, Clamp or Wrap?

    - by David
    I'm using a combination of sprite sheets for, well, sprites, and individual textures for infinite tiling. For the tiling textures I'm obviously using Wrap to draw the entire surface in one call, but up until now I've been making a separate batch using Clamp for drawing sprites from the sprite sheets. The sprite sheets include a border (repeating the edge pixels of each sprite), and my code uses the correct source coordinates for sprites. But since I'm never giving coordinates outside of the texture when drawing sprites (and indeed the border exists to prevent bleed-over when filtering), it's struck me that I'd be better off just using Wrap so that I can combine everything into one batch. I just want to be sure that I haven't overlooked something obvious. Is there any reason that Wrap would be harmful when used with a sprite sheet?

  • Multiple Key Presses in XNA?

    - by Bryan Harrington
    I'm actually trying to do something fairly simple. I cannot get multiple key presses to work in XNA. I've tried the following pieces of code:

        else if (keyboardState.IsKeyDown(Keys.Down) && keyboardState.IsKeyDown(Keys.Left))
        {
            // Move Character South-West
        }

    and I tried:

        else if (keyboardState.IsKeyDown(Keys.Down))
        {
            if (keyboardState.IsKeyDown(Keys.Left))
            {
                // Move Character South-West
            }
        }

    Neither worked for me. Single presses work just fine. Any thoughts?
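
    As an aside, IsKeyDown checks do combine, so the usual suspects here are a KeyboardState captured once and reused instead of refreshed each Update, an earlier else-if branch swallowing the single-key case before the combined check runs, or plain keyboard ghosting on some hardware. A minimal sketch that sidesteps branch ordering by accumulating each axis independently (position, speed, and gameTime are assumed to exist in the surrounding class):

        KeyboardState keyboardState = Keyboard.GetState(); // take a fresh snapshot every Update

        Vector2 move = Vector2.Zero;
        if (keyboardState.IsKeyDown(Keys.Up))    move.Y -= 1;
        if (keyboardState.IsKeyDown(Keys.Down))  move.Y += 1;
        if (keyboardState.IsKeyDown(Keys.Left))  move.X -= 1;
        if (keyboardState.IsKeyDown(Keys.Right)) move.X += 1;

        if (move != Vector2.Zero)
        {
            move.Normalize(); // diagonals move at the same speed as straights
            position += move * speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
        }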

  • Spritesheet per pixel collision XNA

    - by Jixi
    So basically I'm using this in order to detect collision:

        public bool IntersectPixels(Rectangle rectangleA, Color[] dataA, Rectangle rectangleB, Color[] dataB)
        {
            int top = Math.Max(rectangleA.Top, rectangleB.Top);
            int bottom = Math.Min(rectangleA.Bottom, rectangleB.Bottom);
            int left = Math.Max(rectangleA.Left, rectangleB.Left);
            int right = Math.Min(rectangleA.Right, rectangleB.Right);

            for (int y = top; y < bottom; y++)
            {
                for (int x = left; x < right; x++)
                {
                    Color colorA = dataA[(x - rectangleA.Left) + (y - rectangleA.Top) * rectangleA.Width];
                    Color colorB = dataB[(x - rectangleB.Left) + (y - rectangleB.Top) * rectangleB.Width];
                    if (colorA.A != 0 && colorB.A != 0)
                    {
                        return true;
                    }
                }
            }
            return false;
        }

    but I'm unable to figure out how to use it with animated sprites. This is my animation update method, which works with multiple rows and columns:

        public void AnimUpdate(GameTime gameTime)
        {
            if (!animPaused)
            {
                animTimer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
                if (animTimer > animInterval)
                {
                    currentFrame++;
                    animTimer = 0f;
                }
                if (currentFrame > endFrame || endFrame <= currentFrame || currentFrame < startFrame)
                {
                    currentFrame = startFrame;
                }
                objRect = new Rectangle(currentFrame * TextureWidth, frameRow * TextureHeight,
                    TextureWidth, TextureHeight);
                origin = new Vector2(objRect.Width / 2, objRect.Height / 2);
            }
        }

    And this is how I call the intersect:

        public bool IntersectPixels(Obj me, Vector2 pos, Obj o)
        {
            Rectangle collisionRect = new Rectangle(me.objRect.X, me.objRect.Y,
                me.objRect.Width, me.objRect.Height);
            collisionRect.X += (int)pos.X;
            collisionRect.Y += (int)pos.Y;
            if (IntersectPixels(collisionRect, me.TextureData, o.objRect, o.TextureData))
            {
                return true;
            }
            return false;
        }

    Now my guess is that I have to update the texture data every time the frame changes, no? If so, then I already tried it and miserably failed doing so :P Any hints or advice? If you need to see any more of my code, just let me know and I'll update the question.

    Update: an almost-functional collision rectangle. What it does now is "move" the block up 50%; shouldn't be too hard to figure out:

        collisionRect = new Rectangle((int)me.Position.X, (int)me.Position.Y,
            me.Texture.Width / (int)((me.frameCount - 1) * me.TextureWidth), me.Texture.Height);

    Update: Alright, so here's a functional collision rectangle (besides the height issue):

        collisionRect = new Rectangle((int)me.Position.X, (int)me.Position.Y,
            me.TextureWidth / (int)me.frameCount - 1, me.TextureHeight);

    Now the problem is that, using breakpoints, I found out that it's still not getting the correct color values of the animated sprite. So it detects properly, but the color values are always R:0 G:0 B:0 A:0... disregard that, it's not true after all =P For some reason, now the collision area height is only 1 pixel.
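
    The guess in the question is probably right: the Color[] passed to IntersectPixels has to cover exactly the rectangle being tested, so the whole sheet's data can't be reused as-is for one frame. A minimal, untested sketch of pulling just the current frame's pixels, using XNA's Texture2D.GetData overload that takes a source rectangle (GetFrameData is a hypothetical helper, not from the question's code):

        // Extract only the current frame's pixels from the sprite sheet so
        // the array indexing in IntersectPixels matches a frame-sized rectangle.
        Color[] GetFrameData(Texture2D sheet, Rectangle frameRect)
        {
            Color[] data = new Color[frameRect.Width * frameRect.Height];
            sheet.GetData(0, frameRect, data, 0, data.Length);
            return data;
        }

    Calling this whenever currentFrame changes (or caching one array per frame up front) and testing against a collision rectangle of TextureWidth by TextureHeight keeps the rectangle and the pixel data in sync.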

  • Unity custom shaders and z-fighting

    - by Heisenbug
    I've just read a chapter of Unity iOS Essentials by Robert Wiebe. It shows a solution for handling the z-fighting problem that occurs while rendering a street on a plane with the same y offset. Basically, it modifies the Normal-Diffuse shader provided by Unity, specifying the (texture?) offset as -1, -1. Here's basically what the shader looks like:

        Shader "Custom/ModifiedNormalDiffuse" {
            Properties {
                _Color ("Main Color", Color) = (1,1,1,1)
                _MainTex ("Base (RGB)", 2D) = "white" {}
            }
            SubShader {
                Offset -1,-1 // THIS IS THE ADDED LINE
                Tags { "RenderType"="Opaque" }
                LOD 200

                CGPROGRAM
                #pragma surface surf Lambert

                sampler2D _MainTex;
                fixed4 _Color;

                struct Input {
                    float2 uv_MainTex;
                };

                void surf (Input IN, inout SurfaceOutput o) {
                    half4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
                    o.Albedo = c.rgb;
                    o.Alpha = c.a;
                }
                ENDCG
            }
            FallBack "Diffuse"
        }

    OK. That's simple, and it works. The author says about it:

        ...we could use a copy of the shader that draws the road at an Offset of -1, -1 so that whenever the two textures are drawn, the road is always drawn last.

    I don't know Cg or GLSL, but I have a little bit of experience with HLSL. Anyway, I can't figure out what exactly is going on. Could anyone explain what exactly the Offset directive does, and how it solves z-fighting problems?

  • How much will it cost to create a tile set similar to HoM&M 2?

    - by Alexey Petrushin
    How much will it cost to create a tile set similar to HoM&M 2? I'm mostly interested in the tile-set graphics only; no animation is needed, and the big images of towns and creatures can be done as quick-and-dirty pencil sketches. The quality and number of the tiles should be roughly the same as in HoM&M 2. Can you please give a rough estimate of how many man-hours it will take and how much it will cost?

  • Vehicle: Boat accelerating and turning in Unity

    - by Emilios S.
    I'm trying to make a player-controllable boat in Unity and I'm running into problems with my code.

    1) I want to make the boat accelerate and decelerate steadily instead of simply moving at the speed I'm telling it to right away.
    2) I want to make the player unable to steer the boat unless it is moving.
    3) If possible, I want to simulate the vertical floating of a boat during its movement (it going up and down).

    My current code (C#) is this:

        using UnityEngine;
        using System.Collections;

        public class VehicleScript : MonoBehaviour
        {
            public float speed = 10;
            public float rotationspeed = 50;

            // Update is called once per frame
            void Update ()
            {
                // Forward movement
                if (Input.GetKey(KeyCode.I))
                    transform.Translate (Vector3.left * speed * Time.deltaTime);
                // Backward movement
                if (Input.GetKey(KeyCode.K))
                    transform.Translate (Vector3.right * speed * Time.deltaTime);
                // Left movement
                if (Input.GetKey(KeyCode.J))
                    transform.Rotate (Vector3.down * rotationspeed * Time.deltaTime);
                // Right movement
                if (Input.GetKey(KeyCode.L))
                    transform.Rotate (Vector3.up * rotationspeed * Time.deltaTime);
            }
        }

    In the current state of my code, when I press the specified keys, the boat simply moves 10 units/sec instantly, and also stops instantly. I'm not really sure how to do the things stated above, so any help would be appreciated. Just to clarify, I don't necessarily need the full code to implement those features; I just want to know what functions to use in order to achieve the desired effects. Thank you very much.
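
    For what it's worth, a minimal untested sketch of all three requests, using Mathf.MoveTowards to ease the speed, a speed threshold to gate steering, and a sine wave for bobbing (maxSpeed, acceleration, and currentSpeed are illustrative additions to the script above, and the bobbing assumes the water surface sits near y = 0):

        public float maxSpeed = 10f;
        public float rotationspeed = 50f;
        public float acceleration = 4f;   // units per second, per second
        private float currentSpeed = 0f;

        void Update ()
        {
            // 1) Ease toward a target speed instead of jumping to it.
            float target = 0f;
            if (Input.GetKey(KeyCode.I)) target = maxSpeed;   // throttle forward
            if (Input.GetKey(KeyCode.K)) target = -maxSpeed;  // reverse
            currentSpeed = Mathf.MoveTowards(currentSpeed, target, acceleration * Time.deltaTime);
            transform.Translate(Vector3.left * currentSpeed * Time.deltaTime);

            // 2) Only allow steering while the boat is actually moving.
            if (Mathf.Abs(currentSpeed) > 0.1f)
            {
                if (Input.GetKey(KeyCode.J)) transform.Rotate(Vector3.down * rotationspeed * Time.deltaTime);
                if (Input.GetKey(KeyCode.L)) transform.Rotate(Vector3.up * rotationspeed * Time.deltaTime);
            }

            // 3) Cheap bobbing: oscillate the height a little over time.
            Vector3 p = transform.position;
            p.y = Mathf.Sin(Time.time * 2f) * 0.1f;
            transform.position = p;
        }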

  • Using textureGrad for anisotropic integration approximation

    - by Amxx
    I'm trying to develop a real-time rendering method using a real-time acquired envmap (cubemap) for lighting. This implies that my envmap can change as often as every frame, and I therefore cannot use any method based on precomputation over the envmap (such as convolution with the BRDF).

    So far my method has worked well with a Phong BRDF. For the specular contribution I directly read the value from my samplerCube, and I use mipmap levels plus a linear filter to simulate the roughness of the material considered:

        int size = textureSize(envmap, 0).x;
        float specular_level = log2(size * sqrt(3.0)) - 0.5 * log2(ns + 1);
        vec3 env_specular = ks * specular_color * textureLod(envmap, l_g, specular_level);

    From this method I would like to upgrade to a microfacet-based BRDF. I already have an algorithm for evaluating the shape (including the anisotropic direction) of the reflection, but I cannot manage to read the values I want from my samplerCube. I believe I have to use

        textureGrad(envmap, l_g, X, Y);

    with l_g being the reflection direction in global space, but I cannot manage to find which values to give to X and Y in order to correctly specify the area I want to consider. What values should I give to X and Y in order for textureGrad(envmap, l_g, X, Y); to give the same result as textureLod(envmap, l_g, specular_level);?

  • How to store a shmup level?

    - by pek
    I am developing a 2D shmup (i.e. like Aero Fighters) and I was wondering what the various ways are to store a level. Assuming that enemies are defined in their own XML file, how would you define when an enemy spawns in the level? Would it be based on time? Updates? Distance?

    Currently I do this based on "level time" (the amount of time the level has been running; pausing doesn't advance the time). Here is an example (the serialization was done by XNA):

        <?xml version="1.0" encoding="utf-8"?>
        <XnaContent xmlns:level="pekalicious.xanor.XanorContentShared.content.level">
          <Asset Type="level:Level">
            <Enemies>
              <Enemy>
                <EnemyType>data/enemies/smallenemy</EnemyType>
                <SpawnTime>PT0S</SpawnTime>
                <NumberOfSpawns>60</NumberOfSpawns>
                <SpawnOffset>PT0.2S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/secondenemy</EnemyType>
                <SpawnTime>PT0S</SpawnTime>
                <NumberOfSpawns>10</NumberOfSpawns>
                <SpawnOffset>PT0.5S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/secondenemy</EnemyType>
                <SpawnTime>PT20S</SpawnTime>
                <NumberOfSpawns>10</NumberOfSpawns>
                <SpawnOffset>PT0.5S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/boss1</EnemyType>
                <SpawnTime>PT30S</SpawnTime>
                <NumberOfSpawns>1</NumberOfSpawns>
                <SpawnOffset>PT0S</SpawnOffset>
              </Enemy>
            </Enemies>
          </Asset>
        </XnaContent>

    Each Enemy element is basically a wave of a specific enemy type. The type is defined in EnemyType, while SpawnTime is the "level time" at which this wave should appear. NumberOfSpawns and SpawnOffset are the number of enemies that will show up and the time between each spawn, respectively.

    This could be a good idea or there could be better ones out there; I'm not sure. I would like to see some opinions and ideas. I have two problems with this: spawning an enemy correctly and creating a level editor. The level editor thing is an entirely different problem (which I will probably post about in the future :P). As for spawning correctly, the problem lies in the fact that I have a variable update time, so I need to make sure I don't miss an enemy spawn because the spawn offset is too small or because the update took a little more time. I kinda fixed it for the most part, but it seems to me that the problem is with how I store the level. So, any ideas? Comments? Thank you in advance.
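
    One way to make the variable update time a non-issue is catch-up spawning: derive how many enemies of a wave are owed from the accumulated level time, rather than ticking a per-spawn countdown that a long frame can overshoot. A minimal sketch under that approach (EnemyWave, Spawned, and SpawnEnemy are illustrative names, not from the project above):

        // Spawn everything each wave "owes" by now, even if a long frame
        // means several spawns fall inside a single update.
        levelTime += elapsedSeconds; // only advanced while unpaused

        foreach (EnemyWave wave in waves)
        {
            double sinceStart = levelTime - wave.SpawnTime.TotalSeconds;
            if (sinceStart < 0)
                continue;

            // Enemies due so far: one at SpawnTime, then one per SpawnOffset.
            double offset = wave.SpawnOffset.TotalSeconds;
            int due = offset > 0 ? 1 + (int)(sinceStart / offset) : wave.NumberOfSpawns;
            due = Math.Min(due, wave.NumberOfSpawns);

            while (wave.Spawned < due)
            {
                SpawnEnemy(wave.EnemyType);
                wave.Spawned++;
            }
        }

    Since the count due is recomputed from absolute level time, no spawn can be skipped no matter how uneven the updates are, and the XML format above needs no changes.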

  • WebGL CORS error loading simple texture in Chrome

    - by mathacka
    Here's my code:

        function loadTexture() {
            textureImage = new Image();
            textureImage.onload = function() {
                setupTexture();
            }
            textureImage.src = "jumper2.png";
        }

        function setupTexture() {
            texture = gl.createTexture();
            gl.bindTexture(gl.TEXTURE_2D, texture);
            gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
            // this next line has the error: Uncaught SecurityError: An attempt
            // was made to break through the security policy of the user agent.
            gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);
            gl.texParameteri(gl.TEXTURE_2D, gl.OES_TEXTURE_FLOAT_LINEAR, gl.NEAREST);
            if (!gl.isTexture(texture)) {
                alert("Error: Texture is invalid");
            }
            glProgram.samplerUniform = gl.getUniformLocation(glProgram, "uSampler");
            gl.uniform1i(glProgram.samplerUniform, 0);
        }

    I've researched it, and it is a CORS error, a "cross-origin resource sharing" error, but it's a local file! I can't figure out what's wrong. I did make the picture using GIMP, and I'm not sure the coding was right on the export, but I eliminated a previous error using "gl.OES_TEXTURE_FLOAT_LINEAR".

  • Converting from different handedness coordinate systems

    - by SirYakalot
    I am currently porting a demo from XNA to DirectX which, as I understand it, have coordinate systems of different handedness. What are the things I need to bear in mind when converting between the two? I understand not everything needs to be changed. Also, I notice that many of the 3D maths functions in some of the Direct3D libraries have right-handed and left-handed alternatives. Would it be better to just use those?

  • How exactly are textures drawn on faces of cubes?

    - by Christian Frantz
    Are they drawn from the lower-left corner clockwise? I know how triangles are created; I'm just not sure if textures are the same way. The texture on my cube is skewed way off, and after playing around with the U,V coordinates, I still can't get it right.

        // front left bottom corner ok
        vertices[0] = (new VertexPositionTexture(new Vector3(0, 0, 0), new Vector2(1, 0)));
        // front left upper corner
        vertices[1] = (new VertexPositionTexture(new Vector3(0, 1, 0), new Vector2(1, 1)));
        // front right upper corner ok
        vertices[2] = (new VertexPositionTexture(new Vector3(1, 1, 0), new Vector2(0, 1)));
        // front lower right corner
        vertices[3] = (new VertexPositionTexture(new Vector3(1, 0, 0), new Vector2(0, 0)));
        // back left lower corner ok
        vertices[4] = (new VertexPositionTexture(new Vector3(0, 0, -1), new Vector2(0, 1)));
        // back left upper corner
        vertices[5] = (new VertexPositionTexture(new Vector3(0, 1, -1), new Vector2(1, 1)));
        // back right upper corner ok
        vertices[6] = (new VertexPositionTexture(new Vector3(1, 1, -1), new Vector2(1, 0)));
        // back right lower corner
        vertices[7] = (new VertexPositionTexture(new Vector3(1, 0, -1), new Vector2(0, 0)));
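
    A note that may help: texture coordinates aren't tied to any winding order; each vertex simply pins a (U,V) point of the image to that corner, and the rasterizer interpolates between them, so skew usually means the UVs don't match the corners the index buffer actually connects. In XNA, (0,0) is the texture's top-left and (1,1) its bottom-right. A sketch of a conventional mapping for the front face, assuming the same vertex layout as above:

        // UV origin (0,0) = top-left of the texture; V grows downward.
        vertices[0] = new VertexPositionTexture(new Vector3(0, 0, 0), new Vector2(0, 1)); // front lower-left
        vertices[1] = new VertexPositionTexture(new Vector3(0, 1, 0), new Vector2(0, 0)); // front upper-left
        vertices[2] = new VertexPositionTexture(new Vector3(1, 1, 0), new Vector2(1, 0)); // front upper-right
        vertices[3] = new VertexPositionTexture(new Vector3(1, 0, 0), new Vector2(1, 1)); // front lower-right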

  • Where can I get the OpenAL SDK for C++?

    - by Peter Short
    The OpenAL site I'm looking at is a crappy, outdated, and broken SharePoint portal, and the SDK in the downloads section gives me an HTTP 500 error when I request it: http://connect.creativelabs.com/openal/Downloads/OpenAL11CoreSDK.zip

    I found an OpenAL SDK on Softpedia, and it has headers but not alu.h or alut.h, which the tutorials I'm looking at apparently require for loading WAVs etc. What am I missing? Is OpenAL dead or something?

  • How do I detect and handle collisions using a tile property with Slick2D?

    - by oracleCreeper
    I am trying to set up collision detection in Slick2D based on a tile map. I currently have two layers on the maps I'm using: a background layer and a collision layer. The collision layer has a tile with a 'blocked' property painted over the areas the player can't walk on. I have looked through the Slick documentation but do not understand how to read a tile property and use it as a flag for collision detection.

    My method of 'moving' the player is somewhat different and might affect how collisions are handled. Instead of updating the player's location in the window, the player always stays in the same spot, and I update the x and y at which the map is rendered. I am working on collisions with objects by restricting the player's movement when its hitbox intersects an object's hitbox. The code for the player hitting the right side of an object, for example, looks like this:

        if (Player.bounds.intersects(object.bounds) && (Player.x <= (object.x + object.width + 0.5)) && Player.isMovingLeft) {
            isInCollision = true;
            level.moveMapRight();
        } else if (Player.bounds.intersects(object.bounds) && (Player.x <= (object.x + object.width + 0.5)) && Player.isMovingRight) {
            isInCollision = true;
            level.moveMapRight();
        } else if (Player.bounds.intersects(object.bounds) && (Player.x <= (object.x + object.width + 0.5)) && Player.isMovingUp) {
            isInCollision = true;
            level.moveMapRight();
        } else if (Player.bounds.intersects(object.bounds) && (Player.x <= (object.x + object.width + 0.5)) && Player.isMovingDown) {
            isInCollision = true;
            level.moveMapRight();
        }

    and in the level's update code:

        if (!Player.isInCollision)
            Player.manageMovementInput(map, i);

    However, this method still has some errors. For example, when hitting the object from the right, the player will move up and to the left, clipping through the object and becoming stuck inside its hitbox. If there is a more effective way of handling this, any advice would be greatly appreciated.

  • Software rendering 3d triangles in the proper order

    - by at.
    I'm implementing a basic 3D rendering engine in software (for education purposes; please don't suggest using an API). When I project a triangle from 3D to 2D coordinates, I draw it. However, the triangles are drawn in a random order, so whatever gets drawn last paints over all the other triangles (and so may appear in front of triangles it shouldn't be in front of). Intuitively, it seems I need to draw the triangles in the correct order. So I can calculate all their distances to the camera and sort by that: the triangles furthest away get drawn first. Is this the proper way to render triangles? If I'm sorting all the objects, this is n*log(n) now. Is this the most efficient way to do this?
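
    That is the classic painter's algorithm, and the O(n log n) back-to-front sort is the standard way to do it (its known failure cases are intersecting or cyclically overlapping triangles, which no ordering can resolve; a depth buffer is the usual alternative). A minimal sketch of the sort, with Triangle, Centroid, DrawTriangle, and cameraPosition as illustrative stand-ins for the engine's own types:

        // Painter's algorithm: sort back-to-front so near triangles are
        // drawn last and end up on top.
        triangles.Sort((a, b) =>
        {
            float da = Vector3.Distance(cameraPosition, a.Centroid());
            float db = Vector3.Distance(cameraPosition, b.Centroid());
            return db.CompareTo(da); // farthest first
        });

        foreach (Triangle t in triangles)
            DrawTriangle(t);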

  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program. I have the parameters set as follows:

        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);

    I'm using this image: its properties say it's 32-bit depth, so that should take care of gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32 pixel image, so it's a power of 2. I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the Chrome console:

        INVALID_VALUE: texImage2D: invalid image (index):101
        WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe
        non-power-of-2 and have incompatible texture filtering or is not 'texture complete'.
        Or the texture is Float or Half Float type with linear filtering while
        OES_float_linear or OES_half_float_linear extension is not enabled.

    The drawArrays call is simply gl.drawArrays(gl.TRIANGLES, 0, 6); using 6 vertices to make a rectangle.

  • Decal implementation

    - by dreta
    I had issues finding information about decals, so maybe this question will help others. The implementation is for a forward renderer. Could somebody confirm that I've got the decal implementation right?

    You define a cube of any dimensions that defines the projection volume in common space. You check for triangle intersection with the defined cube to receive the triangles that the projection will affect. You clip these triangles and save them. You then use matrix tricks to calculate UV coordinates for the saved triangles that reference the texture you're projecting. To do this, you take the vectors representing the height, width, and depth of the cube in common space, so that e.g. the bottom-left corner is the origin. You put those in a matrix as the i, j, k unit vectors, set the translation for the cube, then invert this matrix. You multiply the vertices of the saved triangles by this matrix; that way you get their coordinates inside a 0-to-1-sized cube that you use as the UV coordinates. This way you have the original triangles you're projecting onto, and you have UV coordinates for them (the UV coordinates reference the texture you're projecting). Then you re-render the saved triangles onto the scene, and they overwrite the area of projection with the projected image.

    Now the questions that I couldn't find answers for. Is the last point right? I've never done software clipping, but it seems error-prone enough, due to limited precision, that there'll be some z-fighting occurring for the projected texture. Also, is this way of getting the UV coordinates correct?
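
    For concreteness, a minimal sketch of the matrix trick described above in XNA-style C# (width, height, depth, and origin are the cube's basis vectors and corner; all names are illustrative, not from the question):

        // Build the projector cube's basis: rows are the width, height, and
        // depth vectors, plus the cube's corner as the translation row.
        Matrix cubeBasis = new Matrix(
            width.X,  width.Y,  width.Z,  0f,
            height.X, height.Y, height.Z, 0f,
            depth.X,  depth.Y,  depth.Z,  0f,
            origin.X, origin.Y, origin.Z, 1f);

        // Inverting it maps world-space points into the cube's 0..1 space.
        Matrix toCubeSpace = Matrix.Invert(cubeBasis);

        Vector3 local = Vector3.Transform(worldVertex, toCubeSpace);
        Vector2 uv = new Vector2(local.X, local.Y); // 0..1 across the projected face

    As for the z-fighting worry, re-rendered decal triangles are usually drawn with a small depth bias (or an equal-depth test) rather than relying on clipping precision alone.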

  • 2D Tile-Based Concept Art App

    - by ashes999
    I'm making a bunch of 2D games (now and in the near future) that use a 2D, RPG-like interface. I would like to be able to quickly paint tiles down and drop character sprites in to create concept art. Sure, I could do it in GIMP or Photoshop, but that would require manually adding each tile, layering on more tiles, cutting and pasting particular character sprites, etc., and I really don't need that level of granularity; I need a quick and fast way to churn out concept art. Is there a tool I can use for this? Perhaps some sort of 2D tile editor which lets me draw sprites and tiles, given that I can provide the graphics files.

  • Does SWF provide a better compression rate than zlib for PNG images?

    - by Huang F. Lei
    Somebody told me that when a PNG image is stored in a SWF, it's separated into several layers, hence the alpha channel can be compressed better. Is that true? Or, once a PNG image is imported into a SWF, is its format changed, e.g. converted into bitmap data and then compressed by the SWF's compression algorithm? That is, it would not be in PNG format anymore. I don't know how SWF packs its resources; please tell me if you know.

  • Having a hard time getting consecutive animations for an attack

    - by Kelby Styler
    So I've been trying to figure this out for about 8 hours now... It's driving me nuts, because I am pretty sure it is something dead simple that I am just not understanding. I had everything working fine when I was just cycling through the animation Idle - Attack - Attack 1 - Attack 2 in an infinite loop. The problem now is that I want it to go: Attack - check if x time passes - if Ctrl is pressed before x passes, move to Attack 1, otherwise move back to Idle - then either Attack 1 or Idle, depending on how long has passed. I've almost gotten it a few times, but something always happens where it falls apart if I press Ctrl too fast or after multiple cycles of the animation. Any help would be appreciated; I'm just at my wits' end on this one. I've been looking at this so long that I just don't know where to go anymore. The code is below; here is the controller:

        using UnityEngine;
        using System.Collections;

        public class MeleeAttack : MonoBehaviour
        {
            public int damage;
            public bool Attack;
            public bool Attack1;
            public bool Attack2;
            public bool Idle;
            private Animator animator;
            private int attnum = 0;
            private float count = 2f;
            private float timeLeft;

            // Gives value to damage output
            void MAttackDmg ()
            {
                if (Input.GetKeyDown (KeyCode.RightControl) || Input.GetKeyDown (KeyCode.LeftControl)) {
                    switch (attnum) {
                    case (0):
                        Attack = true;
                        damage = 2;
                        animator.SetBool ("Attack", Attack);
                        attnum++;
                        Idle = false;
                        animator.SetBool ("Idle", Idle);
                        timeLeft = count;
                        break;
                    case (1):
                        Attack1 = true;
                        damage = 2;
                        animator.SetBool ("Attack1", Attack1);
                        attnum++;
                        Idle = false;
                        animator.SetBool ("Idle", Idle);
                        timeLeft = count;
                        break;
                    case (2):
                        Attack2 = true;
                        damage = 2;
                        animator.SetBool ("Attack2", Attack2);
                        attnum = 0;
                        Idle = false;
                        animator.SetBool ("Idle", Idle);
                        timeLeft = count;
                        break;
                    }
                }
                if (Input.GetKeyUp (KeyCode.RightControl) || Input.GetKeyUp (KeyCode.LeftControl)) {
                    switch (attnum) {
                    case (0):
                        Debug.Log ("false");
                        damage = 0;
                        if (timeLeft <= 0f) {
                            Attack2 = false;
                            animator.SetBool ("Attack2", Attack2);
                            Debug.Log ("t1");
                            Idle = true;
                            animator.SetBool ("Idle", Idle);
                            attnum = 0;
                            timeLeft = count;
                        }
                        break;
                    case (1):
                        Debug.Log ("false1");
                        damage = 0;
                        if (timeLeft <= 0f) {
                            Debug.Log ("t2");
                            Attack = false;
                            animator.SetBool ("Attack", Attack);
                            Idle = true;
                            animator.SetBool ("Idle", Idle);
                            attnum = 0;
                            timeLeft = count;
                        }
                        break;
                    case (2):
                        Debug.Log ("false2");
                        damage = 0;
                        if (timeLeft <= 0f) {
                            Attack1 = false;
                            animator.SetBool ("Attack1", Attack1);
                            Debug.Log ("t3");
                            Idle = true;
                            animator.SetBool ("Idle", Idle);
                            attnum = 0;
                            timeLeft = count;
                        }
                        break;
                    }
                }
            }

            // Use this for initialization
            void Awake ()
            {
                animator = GetComponent<Animator> ();
            }

            void Start ()
            {
                timeLeft = count;
            }

            // Update is called once per frame
            void Update ()
            {
                timeLeft -= Time.deltaTime;
                MAttackDmg ();
            }
        }
