Search Results

Search found 44141 results on 1766 pages for 'unix development support'.

  • General approach to isometrics

    - by MrThys
    I am currently discovering the world of isometrics, and I have found there are two approaches to creating the tilemap: draw pre-made 2:1 ratio tile images directly, or create square tiles and transform them to the 2:1 ratio at render time. What is the general approach to developing an isometric game? I was wondering a few things: How do better-known games like AoE 1/2 do this? What are the pros and cons of both methods? Which method is preferred these days? Edit: added a more general question.
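
    For context, the first approach boils down to a simple grid-to-screen mapping. A minimal C++ sketch of that mapping (the 64x32 tile size and the function name are illustrative, not from any particular engine):

        // Map tile grid coordinates to screen pixels for a 2:1 isometric tilemap.
        // Assumes tileWidth:tileHeight is 2:1, e.g. 64x32.
        struct ScreenPos { int x, y; };

        ScreenPos tileToScreen(int tileX, int tileY,
                               int tileWidth = 64, int tileHeight = 32) {
            ScreenPos p;
            p.x = (tileX - tileY) * (tileWidth / 2);   // diagonal axis
            p.y = (tileX + tileY) * (tileHeight / 2);  // depth axis
            return p;
        }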

    Read the article

  • Problems with Snow Leopard, Django & PIL

    - by Cato Johnston
    Hi, I am having some trouble getting Django & PIL to work properly since upgrading to Snow Leopard. I have installed freetype, libjpeg and then PIL, whose setup reports: --- TKINTER support ok --- JPEG support ok --- ZLIB (PNG/ZIP) support ok --- FREETYPE2 support ok. But when I try to upload a JPEG through the Django admin interface I get: "Upload a valid image. The file you uploaded was either not an image or a corrupted image." It works fine with PNG files. Any ideas?

    Read the article

  • openGL migration from SFML to glut, vertices arrays or display lists are not displayed

    - by user3714670
    Due to using quad-buffered stereo 3D (which I have not included yet), I need to migrate my OpenGL program from an SFML window to a GLUT window. With SFML my vertices and display lists were properly displayed; now with GLUT my window is blank white (or another color, depending on the way I clear it). Here is the code to initialise the window: int type; int stereoMode = 0; if ( stereoMode == 0 ) type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH; else type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO; glutInitDisplayMode(type); int argc = 0; char *argv = ""; glewExperimental = GL_TRUE; glutInit(&argc, &argv); bool fullscreen = false; glutInitWindowSize(width,height); int win = glutCreateWindow(title.c_str()); glutSetWindow(win); assert(win != 0); if ( fullscreen ) { glutFullScreen(); width = glutGet(GLUT_SCREEN_WIDTH); height = glutGet(GLUT_SCREEN_HEIGHT); } GLenum err = glewInit(); if (GLEW_OK != err) { fprintf(stderr, "Error: %s\n", glewGetErrorString(err)); } glutDisplayFunc(loop_function); This is the only code I had to change so far. Here is the code I used with SFML to display my objects in the loop; if I change the value of glClearColor, the window's background does change color: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glClearColor(255.0f, 255.0f, 255.0f, 0.0f); glLoadIdentity(); sf::Time elapsed_time = clock.getElapsedTime(); clock.restart(); camera->animate(elapsed_time.asMilliseconds()); camera->look(); for (auto i = objects->cbegin(); i != objects->cend(); ++i) (*i)->draw(camera); glutSwapBuffers(); Are there any other changes I should have made when switching to GLUT? It would be great if someone could enlighten me on the subject. In addition, I found that when adding too many objects (which were handled fine before with SFML), OpenGL gives error 1285: out of memory. Maybe this is related. EDIT: Here is the code I use to draw each object; maybe it is the problem: GLuint LightID = glGetUniformLocation(this->shaderProgram, "LightPosition_worldspace"); if(LightID ==-1) cout << "LightID not found ..." << endl; GLuint MaterialAmbientID = glGetUniformLocation(this->shaderProgram, "MaterialAmbient"); if(LightID ==-1) cout << "LightID not found ..." << endl; GLuint MaterialSpecularID = glGetUniformLocation(this->shaderProgram, "MaterialSpecular"); if(LightID ==-1) cout << "LightID not found ..." << endl; glm::vec3 lightPos = glm::vec3(0,150,150); glUniform3f(LightID, lightPos.x, lightPos.y, lightPos.z); glUniform3f(MaterialAmbientID, MaterialAmbient.x, MaterialAmbient.y, MaterialAmbient.z); glUniform3f(MaterialSpecularID, MaterialSpecular.x, MaterialSpecular.y, MaterialSpecular.z); // Get a handle for our "myTextureSampler" uniform GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler"); if(!TextureID) cout << "TextureID not found ..." << endl; glActiveTexture(GL_TEXTURE0); sf::Texture::bind(texture); glUniform1i(TextureID, 0); // 2nd attribute buffer : UVs GLuint vertexUVID = glGetAttribLocation(shaderProgram, "color"); if(vertexUVID==-1) cout << "vertexUVID not found ..." << endl; glEnableVertexAttribArray(vertexUVID); glBindBuffer(GL_ARRAY_BUFFER, color_array_buffer); glVertexAttribPointer(vertexUVID, 2, GL_FLOAT, GL_FALSE, 0, 0); GLuint vertexNormal_modelspaceID = glGetAttribLocation(shaderProgram, "normal"); if(!vertexNormal_modelspaceID) cout << "vertexNormal_modelspaceID not found ..."
<< endl; glEnableVertexAttribArray(vertexNormal_modelspaceID); glBindBuffer(GL_ARRAY_BUFFER, normal_array_buffer); glVertexAttribPointer(vertexNormal_modelspaceID, 3, GL_FLOAT, GL_FALSE, 0, 0 ); GLint posAttrib; posAttrib = glGetAttribLocation(shaderProgram, "position"); if(!posAttrib) cout << "posAttrib not found ..." << endl; glEnableVertexAttribArray(posAttrib); glBindBuffer(GL_ARRAY_BUFFER, position_array_buffer); glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elements_array_buffer); glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0); GLuint error; while ((error = glGetError()) != GL_NO_ERROR) { cerr << "OpenGL error: " << error << endl; } disableShaders();
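
    As a point of comparison, a GLUT program only draws once glutMainLoop is running, and continuous redraws have to be requested explicitly. A minimal sketch of that driving loop (loop_function stands in for the display callback above; using glutIdleFunc to request frames is one common convention, not the only one):

        #include <GL/glut.h>

        // Stand-in for the asker's display callback.
        void loop_function() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw scene ...
            glutSwapBuffers();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);                // pass the real argc/argv through
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
            glutInitWindowSize(800, 600);
            glutCreateWindow("stereo test");
            glutDisplayFunc(loop_function);
            glutIdleFunc(glutPostRedisplay);      // keep requesting new frames
            glutMainLoop();                       // drives all callbacks; never returns
            return 0;
        }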

    Read the article

  • Own a KINECT for MS-XBOX before anyone does

    Following is the announcement by Richik Nandi from the Microsoft team. Dear Customer, We believe that our privileged customers shouldn't have to wait for good things. So, here's a special offer exclusively for you. Be one of the first in India to own and experience Kinect for XBOX 360, a few days before it is even launched in stores. Introducing the new Kinect for XBOX 360®. Kinect needs no controllers. You are the controller. Kinect brings games and entertainment to life in extraordinary new ways without using a controller. The sensor recognizes your face, eyes and body movements to deliver a superb gaming experience. Easy to use and great fun, Kinect gets everyone off the couch. See a ball? Kick it. Want to join a friend in the fun? Simply jump in. Imagine controlling movies and music with the wave of a hand or the sound of your voice! Kinect is all about fun for you and your family. And the best part is Kinect works with every Xbox 360®. There are two options you can choose from: •  Kinect sensor + 4GB Xbox 360 bundle + Kinect Adventures game at Rs 22,990/- and get the Dance Central game worth Rs 1,999 from Redington, a 20% discount voucher from Starwood on food and beverages, a T-shirt from PUMA and a Kinect Adventures live card absolutely free using your unique promo code: XbTXXZl2Sb •  Kinect Sensor at Rs 9,500/- and get a 20% discount voucher from Starwood on food and beverages, a T-shirt from PUMA and a Kinect Adventures live card absolutely free using your unique promo code: lDg6o8SuYh We want you to own your Kinect before the official launch. The promotion closes on 10th November. To know more about Kinect, click here. To book your Kinect, PRE-ORDER now! Enter your details along with the above mentioned promo code to avail of the free gifts offer. We will have your Kinect delivered by 19th November 2010. Enjoy being the controller. Enjoy the Kinect.

    Read the article

  • An issue with tessellation a model with DirectX11

    - by Paul Ske
    I took the hardware tessellation tutorial from Rastertek and implemented texturing instead of color. That worked great, so I wanted to apply the same technique to a model inside my game editor, and I noticed it doesn't draw anything. I compared it against the detailed tessellation sample from the DirectX SDK. Inside the shader file, if I replace the HullInputType with PixelInputType, it draws. So I suspect the compile order: when I compile the shaders inside the program, it compiles VertexShader, PixelShader, HullShader, then DomainShader. Isn't it supposed to be VertexShader, HullShader, DomainShader, then PixelShader, or does it really not matter? I am just curious why the model isn't drawn at all with HullInputType but renders fine with PixelInputType. Shader code: cbuffer ConstantBuffer { float4x4 WVP; float4x4 World; // the rotation matrix float3 lightvec; // the light's vector float4 lightcol; // the light's color float4 ambientcol; // the ambient light's color bool isSelected; } cbuffer cameraBuffer { float3 cameraDirection; float padding; } cbuffer TessellationBuffer { float tessellationAmount; float3 padding2; } struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; Texture2D Texture; Texture2D NormalTexture; SamplerState ss { MinLOD = 5.0f; MipLODBias = 0.0f; }; struct HullOutputType { float3 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct HullInputType { float4 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct VertexInputType { float4 position : POSITION; float2 texcoord : TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; uint uVertexID : SV_VERTEXID; }; struct PixelInputType { float4 position : SV_POSITION; float2 texcoord : TEXCOORD0; // texture coordinates float3 normal : NORMAL; float3 tangent : TANGENT; float4 color : COLOR; float3 viewDirection : TEXCOORD1; float4 depthBuffer : TEXTURE0; }; HullInputType VShader(VertexInputType input) { HullInputType output; output.position.w = 1.0f; output.position = mul(input.position,WVP); output.texcoord = input.texcoord; output.normal = input.normal; output.tangent = input.tangent; //output.normal = mul(normal,World); //output.tangent = mul(tangent,World); //output.color = output.color; //output.texcoord = texcoord; // set the texture coordinates, unmodified return output; } ConstantOutputType TexturePatchConstantFunction(InputPatch inputPatch,uint patchID : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("TexturePatchConstantFunction")] HullOutputType HShader(InputPatch patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; // Set the position for this control point as the output position. output.position = patch[pointId].position; // Set the input color as the output color.
    output.texcoord = patch[pointId].texcoord; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; return output; } [domain("tri")] PixelInputType DShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch patch) { float3 vertexPosition; float2 uvPosition; float4 worldposition; PixelInputType output; // Interpolate world space position with barycentric coordinates float3 vWorldPos = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; // Determine the position of the new vertex. vertexPosition = vWorldPos; // Calculate the position of the new vertex against the world, view, and projection matrices. output.position = mul(float4(vertexPosition, 1.0f),WVP); // Send the input color into the pixel shader. output.texcoord = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.normal = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.tangent = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; //output.depthBuffer = output.position; //output.depthBuffer.w = 1.0f; //worldposition = mul(output.position,WVP); //output.viewDirection = cameraDirection.xyz - worldposition.xyz; //output.viewDirection = normalize(output.viewDirection); return output; } Some things are commented out but will be put back once this is fixed. I'm probably not connecting something correctly.

    Read the article

  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. Now my goal is to transform the original rectangle such that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points, and apply the inverse to the rectangle. This seemed to work but was too slow, as it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with Kinect, and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with a view and projection matrix, I can't just call a function such as warpPerspective() and get the desired result. I would need to compute the new parameters for the camera's view and projection matrix. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?
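
    For reference, the 2D step described above (computing the corner-to-corner perspective transform whose inverse pre-warps the rectangle) looks roughly like this in C++ with OpenCV; the corner coordinates here are placeholders, not measured values:

        #include <opencv2/imgproc.hpp>
        #include <vector>

        // Homography mapping the Kinect-scanned floor corners onto the corners the
        // undistorted rectangle should have. Warping the source rectangle with the
        // result pre-distorts it so the projection appears straight.
        cv::Mat computePrewarp() {
            std::vector<cv::Point2f> scanned  = { {102, 80}, {520, 95},
                                                  {540, 410}, {90, 395} };  // placeholder
            std::vector<cv::Point2f> original = { {0, 0}, {640, 0},
                                                  {640, 480}, {0, 480} };   // placeholder
            return cv::getPerspectiveTransform(scanned, original);
        }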

    Read the article

  • How can I store spell & items using a std::vector implementation?

    - by Vladimir Marenus
    I'm following along with a book from GameInstitute right now, and it's asking me to: Allow the player to buy and carry healing potions and potions of fireball. You can add an Item array (after you define the item class) to the Player class for storing them, or use a std::vector to store them. I think I would like to use the std::vector implementation, because that seems to confuse me less than making an item class, but I am unsure how to do so. I've heard from many people that vectors are a great way to store dynamic values (such as items, weapons, etc.), but I've never seen one used.
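
    A minimal sketch of what that could look like; the Item fields and enum values are invented for illustration, not taken from the GameInstitute book:

        #include <cstddef>
        #include <string>
        #include <vector>

        // A deliberately small item type; real games usually grow this into a
        // class hierarchy or a component-based design.
        enum class ItemKind { HealingPotion, FireballPotion };

        struct Item {
            ItemKind kind;
            std::string name;
            int power;  // healing amount or fireball damage
        };

        struct Player {
            std::vector<Item> inventory;  // grows and shrinks as items are bought/used

            void buy(const Item& item) { inventory.push_back(item); }

            void useFirst(ItemKind kind) {
                for (std::size_t i = 0; i < inventory.size(); ++i) {
                    if (inventory[i].kind == kind) {
                        // apply inventory[i].power to the player here...
                        inventory.erase(inventory.begin() + i);  // consume the item
                        return;
                    }
                }
            }
        };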

    Read the article

  • A list of game mechanics

    - by Iain
    I'm trying to compile a list of game mechanics, by which I mean high-level/meta game mechanics like Cooperation, Resource Management, Chance and Time Manipulation, rather than low-level mechanics like running, jumping, climbing ladders, etc. Does anyone have any suggestions, or can anyone point me to good existing lists? My WIP list is already proving quite useful to me in the way I think about games.

    Read the article

  • Softbody with complex geometry

    - by philipp
    I have modeled a handball based on the tutorial here, with a custom texture. Now I am trying to animate this model with the reactor module as a soft body. To that end I have watched and tried a lot of tutorials, and for animating a simple sphere everything works fine. But if I try to use the model I have created, it results in a crash of Max, or in an animation that shows a crystal-like structure transforming itself into another crystal. Is it possible to animate this kind of complex geometry as a soft body, and am I just setting the values wrong? If so, which are the important ones I should check? Thanks in advance! Greetings, philipp

    Read the article

  • How can I get the palette of an 8-bit surface in SDL.NET/Tao.SDL?

    - by lolmaster
    I'm looking to get the palette of an 8-bit surface in SDL.NET if possible, or (more than likely) using Tao.SDL. This is because I want to do palette swapping with the palette directly, instead of blitting surfaces together to replace colours like you would do with a 32-bit surface. I've gotten the SDL_Surface and the SDL_PixelFormat; however, when I go to get the palette in the same way, I get a System.ExecutionEngineException: private Tao.Sdl.Sdl.SDL_Palette GetPalette(Surface surf) { // Get surface. Tao.Sdl.Sdl.SDL_Surface sdlSurface = (Tao.Sdl.Sdl.SDL_Surface)System.Runtime.InteropServices.Marshal.PtrToStructure(surf.Handle, typeof(Tao.Sdl.Sdl.SDL_Surface)); // Get pixel format. Tao.Sdl.Sdl.SDL_PixelFormat pixelFormat = (Tao.Sdl.Sdl.SDL_PixelFormat)System.Runtime.InteropServices.Marshal.PtrToStructure(sdlSurface.format, typeof(Tao.Sdl.Sdl.SDL_PixelFormat)); // Execution exception here. Tao.Sdl.Sdl.SDL_Palette palette = (Tao.Sdl.Sdl.SDL_Palette)System.Runtime.InteropServices.Marshal.PtrToStructure(pixelFormat.palette, typeof(Tao.Sdl.Sdl.SDL_Palette)); return palette; } When I used unsafe code to get the palette, I got a compile-time error: "Cannot take the address of, get the size of, or declare a pointer to a managed type ('Tao.Sdl.Sdl.SDL_Palette')". My unsafe code to get the palette was this: unsafe { Tao.Sdl.Sdl.SDL_Palette* pal = (Tao.Sdl.Sdl.SDL_Palette*)pixelFormat.palette; } From what I've read, a managed type in this case is a structure that has some sort of reference inside it as a field. The SDL_Palette structure happens to have an array of SDL_Color's, so I'm assuming that's the reference type that is causing issues. However, I'm still not sure how to work around that to get the underlying palette. So if anyone knows how to get the palette from an 8-bit surface, whether through safe or unsafe code, the help would be greatly appreciated.

    Read the article

  • XNA 4 Deferred Rendering deforms the model

    - by Tomáš Bezouška
    I have a problem when rendering a model of my world: when rendered using BasicEffect, it looks just peachy. The problem is when I render it using deferred rendering. See for yourselves: what it looks like: http://imageshack.us/photo/my-images/690/survival.png/ what it should look like: http://imageshack.us/photo/my-images/521/survival2.png/ (Please ignore the cars; they shouldn't be there. Nothing changes when they are removed.) I'm using the deferred renderer from www.catalinzima.com/tutorials/deferred-rendering-in-xna/introduction-2/, only very simplified, without the custom content processor. Here's the code for the GBuffer shader: float4x4 World; float4x4 View; float4x4 Projection; float specularIntensity = 0.001f; float specularPower = 3; texture Texture; sampler diffuseSampler = sampler_state { Texture = (Texture); MAGFILTER = LINEAR; MINFILTER = LINEAR; MIPFILTER = LINEAR; AddressU = Wrap; AddressV = Wrap; }; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; float2 TexCoord : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 TexCoord : TEXCOORD0; float3 Normal : TEXCOORD1; float2 Depth : TEXCOORD2; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); output.TexCoord = input.TexCoord; //pass the texture coordinates further output.Normal = mul(input.Normal,World); //get normal into world space output.Depth.x = output.Position.z; output.Depth.y = output.Position.w; return output; } struct PixelShaderOutput { half4 Color : COLOR0; half4 Normal : COLOR1; half4 Depth : COLOR2; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; output.Color = tex2D(diffuseSampler, input.TexCoord); //output Color output.Color.a = specularIntensity; //output SpecularIntensity output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f); //transform normal domain output.Normal.a = specularPower; //output SpecularPower output.Depth = input.Depth.x / input.Depth.y; //output Depth return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_2_0 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } And here are the rendering parts in XNA: public void RednerModel(Model model, Matrix world) { Matrix[] boneTransforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(boneTransforms); Game.GraphicsDevice.DepthStencilState = DepthStencilState.Default; Game.GraphicsDevice.BlendState = BlendState.Opaque; Game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; foreach (ModelMesh mesh in model.Meshes) { foreach (ModelMeshPart meshPart in mesh.MeshParts) { GBufferEffect.Parameters["View"].SetValue(Camera.Instance.ViewMatrix); GBufferEffect.Parameters["Projection"].SetValue(Camera.Instance.ProjectionMatrix); GBufferEffect.Parameters["World"].SetValue(boneTransforms[mesh.ParentBone.Index] * world); GBufferEffect.Parameters["Texture"].SetValue(meshPart.Effect.Parameters["Texture"].GetValueTexture2D()); GBufferEffect.Techniques[0].Passes[0].Apply(); RenderMeshpart(mesh, meshPart); } } } private void RenderMeshpart(ModelMesh mesh, ModelMeshPart part) { Game.GraphicsDevice.SetVertexBuffer(part.VertexBuffer); Game.GraphicsDevice.Indices = part.IndexBuffer; Game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount); } I import the model using the built-in content processor for FBX. The FBX is created in 3DS Max. I don't know the exact details of that export, but if you think it might be relevant, I will get them from my colleague who does them. What confuses me, though, is why the BasicEffect approach works... it seems the FBX shouldn't be a problem. Any thoughts? They will be greatly appreciated :)

    Read the article

  • Game Engine which can provide 360 degree projection for PC

    - by Never Quit
    I'm searching for a game engine that can provide real-time 360 degree projection. I've already achieved this using the VBS2 game engine (Ref.: http://products.bisimulations.com/products/vbs2/vbs2-multi-channel), but I'm not satisfied with its graphics. So I'm looking for another game engine that can do the same while providing better graphics and user experience, like Frostbite 2 or Unreal Engine 3. Like the image shown, I want a full 360 degree view. Is there any game engine that can provide 360 degree projection for PC? Thanks in advance...

    Read the article

  • OpenGL and atlas

    - by user30088
    I'm trying to draw elements from a texture atlas with OpenGL ES 2. Currently, I'm drawing my elements using something like this in the shader: uniform mat4 uCamera; uniform mat4 uModel; attribute vec4 aPosition; attribute vec4 aColor; attribute vec2 aTextCoord; uniform vec2 offset; uniform vec2 scale; varying lowp vec4 vColor; varying lowp vec2 vUV; void main() { vUV = offset + aTextCoord * scale; gl_Position = (uCamera * uModel) * aPosition; vColor = aColor; } For each element to draw, I send its offset and scale to the shader. The problem with this method: I can't rotate the element, but that's not an issue for now. I would like to know which is better for performance: sending uniforms like that for each element on every frame, or updating the quad geometry (UVs) for each element. Thanks!
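
    For what it's worth, the offset/scale pair in this scheme is just the atlas region converted to normalized UV space. A small sketch of the CPU-side computation (the names are illustrative):

        // Normalized UV offset and scale for a rectangular region of an atlas.
        // region{X,Y,W,H} are in pixels; atlasW/atlasH are the full texture size.
        struct UvTransform { float offsetU, offsetV, scaleU, scaleV; };

        UvTransform regionToUv(float regionX, float regionY,
                               float regionW, float regionH,
                               float atlasW, float atlasH) {
            UvTransform t;
            t.offsetU = regionX / atlasW;
            t.offsetV = regionY / atlasH;
            t.scaleU  = regionW / atlasW;  // fed to the 'scale' uniform
            t.scaleV  = regionH / atlasH;  // (vUV = offset + aTextCoord * scale)
            return t;
        }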

    Read the article

  • Separating entities from their actions or behaviours

    - by Jamie Dixon
    Hi everyone, I'm having a go at creating a very simple text-based game and am wondering what the standard design patterns are when it comes to entities (characters, sentient scenery) and the actions those entities can perform. As an example, I have an entity that is a 'person' with various properties such as age, gender, height, etc. This 'person' can also perform some actions, such as speaking, walking, jumping, flying, etc. How would you separate the entity from the actions it can perform, and what are some common design patterns that solve this kind of problem?
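
    One common answer is a command/strategy style: actions become objects that operate on an entity rather than methods baked into it. A rough C++ sketch of that idea (all names invented for illustration; this is one pattern among several, not the canonical answer):

        #include <iostream>
        #include <memory>
        #include <string>
        #include <vector>

        struct Entity {
            std::string name;
            int age = 0;
        };

        // Each action is its own type; entities just hold a list of them.
        struct Action {
            virtual ~Action() = default;
            virtual void perform(const Entity& who) const = 0;
        };

        struct Speak : Action {
            void perform(const Entity& who) const override {
                std::cout << who.name << " says hello.\n";
            }
        };

        struct Jump : Action {
            void perform(const Entity& who) const override {
                std::cout << who.name << " jumps.\n";
            }
        };

        int main() {
            Entity person{"Alice", 30};
            std::vector<std::unique_ptr<Action>> abilities;
            abilities.push_back(std::make_unique<Speak>());
            abilities.push_back(std::make_unique<Jump>());
            for (const auto& a : abilities) a->perform(person);  // data-driven dispatch
        }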

    Read the article

  • Flex 4 + Apache Ant, Cannot Load FlashPunk Libraries

    - by SquareCrow
    I have been searching Google, the Apache docs*, and the FlashPunk forums looking for an answer to this: I cannot get Ant/Flex to find and compile the FlashPunk libraries. Here is my build.xml: <!-- Fetch the JAR full of Flex tasks if it is not already in the source directory --> <copy file="${FLEX_HOME}/ant/lib/flexTasks.jar" todir="${SOURCE_PATH}"/> <!-- Add flextasks to the project --> <taskdef resource="flexTasks.tasks" classpath="${SOURCE_PATH}/flexTasks.jar"></taskdef> <!-- Release build Flash Player 10.1 --> <target name="build"> <!-- Build the FlashPunk library --> <echo message="building swc..." /> <compc output="FlashPunk.swc" keep-generated-actionscript="false" incremental="false" optimize="false" debug="true" use-network="false"> <include-sources dir="${FLASHPUNK_PATH}/net" includes="**/* flashpunk/utils/* flashpunk/masks/*" excludes="**/*.TTF **/*.png"/> <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/> </compc> <echo message="building swf..." /> <mxmlc file="${SOURCE_PATH}/epOne.as" output="${OUTPUT_PATH}/epOne.swf" debug="false" incremental="false" strict="true" accessible="false" link-report="link_report.xml" static-link-runtime-shared-libraries="true"> <optimize>true</optimize> </mxmlc> </target> This results in many errors of the type "Definition net.flashpunk.masks:Grid could not be found", even though when I open the directories I can see the *.AS files right there. Sorry if this is very basic; I am piecing together knowledge of Ant from docs and tutorials. *I decided to use Ant because neither FlashDevelop for Windows nor Eclipse for Linux seem to work for me.

    Read the article

  • Direct3D11 and SharpDX - How to pass a model instance's world matrix as an input to a vertex shader

    - by Nathan Ridley
    Using Direct3D11, I'm trying to pass a matrix into my vertex shader from the instance buffer that is associated with a given model's vertices and I can't seem to construct my InputLayout without throwing an exception. The shader looks like this: cbuffer ConstantBuffer : register(b0) { matrix World; matrix View; matrix Projection; } struct VIn { float4 position: POSITION; matrix instance: INSTANCE; float4 color: COLOR; }; struct VOut { float4 position : SV_POSITION; float4 color : COLOR; }; VOut VShader(VIn input) { VOut output; output.position = mul(input.position, input.instance); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } The input layout looks like this: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; InputLayout = new InputLayout(device, signature, elements); The buffer initialization looks like this: public ModelDeviceData(Model model, Device device) { Model = model; var vertices = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, model.Vertices); var instances = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, Model.Instances.Select(m => m.WorldMatrix).ToArray()); VerticesBufferBinding = new VertexBufferBinding(vertices, Utilities.SizeOf<ColoredVertex>(), 0); InstancesBufferBinding = new VertexBufferBinding(instances, Utilities.SizeOf<Matrix>(), 0); IndicesBuffer = Helpers.CreateBuffer(device, BindFlags.IndexBuffer, model.Triangles); } The buffer creation helper method looks like this: public static Buffer CreateBuffer<T>(Device device, BindFlags bindFlags, params T[] items) where T : struct { var len = Utilities.SizeOf(items); var stream = new DataStream(len, true, true); foreach (var item in items) stream.Write(item); stream.Position = 0; var buffer = new Buffer(device, stream, len, ResourceUsage.Default, bindFlags, CpuAccessFlags.None, ResourceOptionFlags.None, 0); return buffer; } The line that instantiates the InputLayout object throws this exception: *HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.* Note that the data for each model instance is simply an instance of SharpDX.Matrix. 
    EDIT: Based on Tordin's answer, it seems like I have to modify my code like so: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE0", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE1", 1, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE2", 2, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE3", 3, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; and in the shader: struct VIn { float4 position: POSITION; float4 instance0: INSTANCE0; float4 instance1: INSTANCE1; float4 instance2: INSTANCE2; float4 instance3: INSTANCE3; float4 color: COLOR; }; VOut VShader(VIn input) { VOut output; matrix world = { input.instance0, input.instance1, input.instance2, input.instance3 }; output.position = mul(input.position, world); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } However, I still get an exception.

    Read the article

  • Real-Time Strategy Gameplay

    - by Ahmad Alkhawaja
    I am working on building an HTML5 RTS game. My current state is that I am building the campaign mode, and I want to define the gameplay (the scoring, unit behaviors/attributes). I am searching for links/articles/books about how to define gameplay, meaning: the scoring; figuring out levels of control (in any RTS game there are units, individuals and squads); unit actions/attributes/properties; timing (how long will it take to play?); achievements; etc. I want to see how these areas are usually defined in RTS games; I expect to find a general document discussing this concept that I can use to build the gameplay. Any ideas? Is my question clear, or do I need to provide more details?

    Read the article

  • Geometry instancing in OpenGL ES 2.0

    - by seahorse
    I am planning to do geometry instancing in OpenGL ES 2.0. Basically I plan to render the same geometry (a chair) maybe 1000 times in my scene. What is the best way to do this in OpenGL ES 2.0? I am considering passing the model-view mat4 as an attribute. Since attributes are per-vertex data, do I need to pass this same mat4 three times, once for each vertex of the same triangle (since the model-view remains constant across the vertices of a triangle)? That would amount to a lot of extra data sent to the GPU (2 extra vertices * 16 floats * number of triangles). Or should I be sending the mat4 only once per triangle? But how is that possible using attributes, since attributes are defined as "per vertex" data? What is the best and most efficient way to do instancing in OpenGL ES 2.0?
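
    Core OpenGL ES 2.0 has no instanced draw call, so one widely used workaround is batching: pack a block of per-instance matrices into a uniform array, give each vertex a float attribute holding its instance index, and draw many copies per call. A hedged sketch of that idea (the batch size of 16 and all names are arbitrary choices, not an API requirement):

        #include <GLES2/gl2.h>

        // GLSL vertex shader fragment (as a C++ string): the per-vertex
        // a_instanceIndex attribute selects one matrix from the uniform array.
        const char* kVertexShader =
            "uniform mat4 u_modelView[16];                                   \n"
            "attribute vec4 a_position;                                      \n"
            "attribute float a_instanceIndex;                                \n"
            "void main() {                                                   \n"
            "  gl_Position = u_modelView[int(a_instanceIndex)] * a_position; \n"
            "}                                                               \n";

        // CPU side: upload 16 matrices, then draw geometry replicated 16 times,
        // with a_instanceIndex running 0..15 across the copies.
        void drawBatch(GLuint program, const GLfloat* matrices /* 16 * 16 floats */,
                       GLsizei indexCount) {
            GLint loc = glGetUniformLocation(program, "u_modelView");
            glUniformMatrix4fv(loc, 16, GL_FALSE, matrices);
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
        }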

    Read the article

  • How can I load .FBX files?

    - by gardian06
    I am looking into options for the model assets for my game. I have gotten pretty good with Blender, and want to use C++/DirectX9 (I don't need all the excess from 10+), but Blender 2.6 natively exports .fbx, not .x, and supposedly what is exported from Blender to .x is not entirely stable. In short, how do I import .fbx models (I can work around not having animations if I must) into DirectX9? Is there middleware, or a conversion tool, that will maintain stability?
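
    One route worth noting: recent versions of the Open Asset Import Library (Assimp) can read FBX and hand back plain vertex data you can feed into DirectX 9 buffers yourself. A minimal sketch using its C API (error handling trimmed; the file path is a placeholder):

        #include <assimp/cimport.h>
        #include <assimp/postprocess.h>
        #include <assimp/scene.h>

        // Load an FBX and walk its first mesh; positions can then be copied
        // into an IDirect3DVertexBuffer9 as usual.
        bool loadFbx(const char* path) {
            const aiScene* scene = aiImportFile(
                path, aiProcess_Triangulate | aiProcess_ConvertToLeftHanded);
            if (!scene || scene->mNumMeshes == 0) return false;

            const aiMesh* mesh = scene->mMeshes[0];
            for (unsigned i = 0; i < mesh->mNumVertices; ++i) {
                aiVector3D v = mesh->mVertices[i];
                // ... append v.x, v.y, v.z to your vertex buffer ...
            }
            aiReleaseImport(scene);
            return true;
        }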

    Read the article

  • Comparing a saved movement with other movement with Kinect

    - by Ewerton
    I need to develop an application where a user (a physiotherapist) will perform a movement in front of the Kinect; I'll store the movement data in a database, and then a patient will try to imitate this motion. The system will calculate the similarity between the recorded movement and the executed one. My first idea is, during recording (every 5 seconds, for example), to store the positions (x, y, z) of the points and then compare them at execution time (by the patient). I know that this approach is too simple, because I imagine that in people of different sizes the skeleton is recognized differently, so the comparison is not reliable. My question is about the best way to compare a saved motion with a movement executed on the fly. PS: Sorry for my English.
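
    To make the body-size concern concrete: a common first step is to normalize each captured skeleton before comparing, e.g. translate it so a reference joint is the origin and scale by some body measurement, then score frames by average joint distance. A sketch of that idea (the joint layout and the torso-length normalization are illustrative assumptions, not Kinect SDK API):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Joint { float x, y, z; };
        using Skeleton = std::vector<Joint>;  // same joint order in both skeletons

        // Translate so joint 0 (say, the hip center) is the origin and scale so
        // the distance between joints 0 and 1 (say, hip to shoulder) is 1.
        Skeleton normalize(Skeleton s) {
            Joint o = s[0];
            for (Joint& j : s) { j.x -= o.x; j.y -= o.y; j.z -= o.z; }
            float torso = std::sqrt(s[1].x * s[1].x + s[1].y * s[1].y + s[1].z * s[1].z);
            if (torso > 0)
                for (Joint& j : s) { j.x /= torso; j.y /= torso; j.z /= torso; }
            return s;
        }

        // Lower score = more similar pose for one frame.
        float frameDistance(const Skeleton& a, const Skeleton& b) {
            float sum = 0;
            for (std::size_t i = 0; i < a.size(); ++i) {
                float dx = a[i].x - b[i].x, dy = a[i].y - b[i].y, dz = a[i].z - b[i].z;
                sum += std::sqrt(dx * dx + dy * dy + dz * dz);
            }
            return sum / a.size();
        }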

    Read the article

  • How to code UI / HUD in Entity System?

    - by Sylpheed
    I think I already understand the idea of the Entity System inspired by Adam Martin (t-machine), and I want to start using it for my next project. I already know the basics of entities, components, and systems. My problem is how to handle the UI / HUD, for example a quest window, skill window, character info window, etc. How do you handle UI events (e.g. pressing a button)? This is stuff that doesn't need to be processed every frame. Currently I'm using MVC to code the UI, but I don't think that will be compatible with an Entity System. I've read that an Entity System is usually embedded in a larger OOP codebase, but I don't know whether the UI sits outside of the ES or not. How do I approach this one?
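
    One pragmatic layering is to keep the HUD outside the ES entirely and let widgets raise events that systems consume on their next tick. A tiny observer-style sketch (all names invented; this is one possible arrangement, not ES canon):

        #include <functional>
        #include <string>
        #include <utility>
        #include <vector>

        // UI layer: ordinary OOP widgets, touched only when input arrives.
        struct Button {
            std::string label;
            std::function<void()> onClick;  // raised on press, not polled per frame
        };

        // Bridge: UI handlers translate clicks into commands/messages that the
        // entity systems drain during their normal update.
        struct GameCommands {
            std::vector<std::string> pending;
            void push(std::string cmd) { pending.push_back(std::move(cmd)); }
        };

        int main() {
            GameCommands commands;
            Button openQuests{"Quests", [&] { commands.push("open_quest_window"); }};
            openQuests.onClick();  // simulate a click arriving from the input layer
        }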

    Read the article

  • Is it possible to billboard a sprite using a transformation matrix?

    - by Ross
    None of the current topics on billboarding seem to answer this question. Given a sprite (or quad) that has its own 4x4 transformation matrix, and a camera with a view matrix (a standard 4x4 transformation matrix), is it possible to compute a 4x4 transformation matrix such that when the quad's matrix is multiplied with this matrix, the quad is oriented to look at the camera? Thank you in advance.
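
    Yes in principle: the rotation part of the view matrix is orthogonal, so its inverse is its transpose, and multiplying the quad by that inverse rotation cancels the camera's rotation, leaving the quad facing the viewer. A small sketch using GLM (assuming GLM's column-major conventions; translation and scale are applied separately):

        #include <glm/glm.hpp>

        // Build a matrix that cancels the camera's rotation. Combining this into
        // the quad's world matrix keeps its position but makes its orientation
        // face the camera (spherical billboarding).
        glm::mat4 billboardFrom(const glm::mat4& view) {
            glm::mat3 rot(view);                        // upper-left 3x3 = camera rotation
            return glm::mat4(glm::transpose(rot));      // orthogonal => transpose = inverse
        }

        // Usage sketch: world = translation * billboardFrom(view) * scale;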

    Read the article

  • Collision library for bullet hell in Python

    - by darkfeline
    I am making a bullet hell game in Python and am looking for a suitable collision library, taking the following into consideration: The library should do 2D polygon collision. It should be very fast; as a bullet hell game, I expect to do collision checks between hundreds, likely thousands, of objects every frame at a consistent 60fps. It should have good documentation and a permissive license (like MIT, not GPL). I am also considering writing my own library in C/C++ and wrapping it with Python ctypes in the event that no such library exists, though I do not have experience with collision detection algorithms, so I am not sure if this would be more trouble than it's worth. Could someone provide some guidance on this matter?
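
    If the roll-your-own route wins, the usual way to keep thousands of checks inside a 60fps budget is a broad phase that prunes pairs before any polygon test, e.g. a uniform grid. A rough C++ sketch of just the pruning step (the cell size, key packing, and structures are arbitrary choices for illustration):

        #include <unordered_map>
        #include <vector>

        struct Body { float x, y, radius; };

        // Hash each body into a coarse grid cell; only bodies sharing a cell
        // (or neighboring cells, omitted here) need a narrow-phase polygon test.
        std::unordered_map<long long, std::vector<int>>
        buildGrid(const std::vector<Body>& bodies, float cellSize) {
            std::unordered_map<long long, std::vector<int>> grid;
            for (int i = 0; i < (int)bodies.size(); ++i) {
                long long cx = (long long)(bodies[i].x / cellSize);
                long long cy = (long long)(bodies[i].y / cellSize);
                grid[(cx << 32) ^ (cy & 0xffffffffLL)].push_back(i);  // pack cell coords
            }
            return grid;
        }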

    Read the article

  • In GLSL is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding heightmap. In my vertex shader I am trying to offset the position on the Y axis based upon the colour of the heightmap, white vertices being higher than black ones. //Vertex Shader Code #version 330 uniform mat4 modelMatrix; uniform mat4 viewMatrix; uniform mat4 projectionMatrix; uniform sampler2D heightmap; layout (location=0) in vec4 vertexPos; layout (location=1) in vec4 vertexColour; layout (location=3) in vec2 vertexTextureCoord; layout (location=4) in float offset; out vec4 fragCol; out vec4 fragPos; out vec2 fragTex; void main() { // Retrieve the current pixel's colour vec4 hmColour = texture(heightmap,vertexTextureCoord); // Offset the y position by the value of current texel's colour value ? vec4 offset = vec4(vertexPos.x , vertexPos.y + hmColour.r, vertexPos.z , 1.0); // Final Position gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset; // Data sent to Fragment Shader. fragCol = vertexColour; fragPos = vertexPos; fragTex = vertexTextureCoord; } However, the code I have produced only creates a grid with none of the y vertices higher than any others. This is the C++ code that generates the grid and texture coordinates, which I believe to be correct, as the texture is mapped to the grid, hence the white blob in the middle. (The grid lines are generated in the fragment shader; sorry for any confusion.) I have tried multiplying the r value of hmColour by 1000, but unfortunately that had no effect. The only other problem I can think of is that the texture coordinate data is incorrect? for (int z = 0; z < MAP_Z ; z++) { for(int x = 0; x < MAP_X ; x++) { //Generate Vertex Buffer vertexData[iVertex++] = float (x) * MAP_X; vertexData[iVertex++] = 0; vertexData[iVertex++] = -(float) (z) * MAP_Z; //Colour Buffer NOT NEEDED colourData[iColour++] = 255.0f; // R colourData[iColour++] = 1.0f; // G colourData[iColour++] = 0.0f; // B //Texture Buffer textureData[iTexture++] = (float ) x * (1.0f / MAP_X); textureData[iTexture++] = (float ) z * (1.0f / MAP_Z); } } The heightmap texture I am trying to use appears like so (without grid lines). This is the corresponding fragment shader: // Fragment Shader Code #version 330 uniform sampler2D hmTexture; layout (location=0) out vec4 fragColour; in vec2 fragTex; in vec4 pos; void main(void) { vec2 line = fragTex * 32; // Without Gridlines fragColour = texture(hmTexture,fragTex); // With grid lines // + mix(vec4(0.0, 0.0, 1.0, 0.0), vec4(1.0, 1.0, 1.0, 1.0), // smoothstep(0.05,fract(line.y), 0.99) * smoothstep(0.05,fract(line.x),0.99)); }

    Read the article

  • Libgdx - 2D Mesh rendering overlap glitch

    - by user46858
    I am trying to render a 2D circle segment mesh (quarter circle) using Libgdx/OpenGL ES 2.0, but I seem to be getting an overlapping issue, as seen in the picture attached. I can't seem to find the cause of the problem, but the overlapping disappears/reappears if I drag and resize the window to random sizes. The problem occurs on both PC and Android. The strange thing is that at least the first two segments don't seem to cause any overlapping, only the third and/or fourth segment... even though they are all rendered using the same mesh object. I have spent ages trying to find the cause of the problem before posting here for help, so ANY help/advice in finding the cause of this problem would be really appreciated. public class MyGdxGame extends Game { private SpriteBatch batch; private Texture texture; private OrthographicCamera myCamera; private float w; private float h; private ShaderProgram circleSegShader; private Mesh circleScaleSegMesh; private Stage stage; private float TotalSegments; Vector3 virtualres; @Override public void create() { w = Gdx.graphics.getWidth(); h = Gdx.graphics.getHeight(); batch = new SpriteBatch(); ViewPortsize = new Vector2(); TotalSegments = 4.0f; virtualres = new Vector3(1280.0f, 720.0f, 0.0f); myCamera = new OrthographicCamera(); myCamera.setToOrtho(false, w, h); texture = new Texture(Gdx.files.internal("data/libgdx.png")); texture.setFilter(TextureFilter.Linear, TextureFilter.Linear); circleScaleSegMesh = createCircleMesh_V3(0.0f,0.0f,200.0f, 30.0f,3, (360.0f /TotalSegments) ); circleSegShader = loadShaderFromFile(new String("circleseg.vert"), new String("circleseg.frag")); shaderProgram.pedantic = false; stage = new Stage(); stage.setViewport(new ExtendViewport(w, h)); Gdx.input.setInputProcessor(stage); } @Override public void render() { ....
//render renderInit(); renderCircleScaledSegment(); } @Override public void resize(int width, int height) { stage.getViewport().update(width, height, true); myCamera.position.set( virtualres.x/2.0f, virtualres.y/2.0f, 0.0f); myCamera.update(); } public void renderInit(){ Gdx.gl20.glClearColor(1.0f, 1.0f, 1.0f, 0.0f); Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT); batch.setShader(null); batch.setProjectionMatrix(myCamera.combined); } public void renderCircleScaledSegment(){ Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST); Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA); Gdx.gl20.glEnable(GL20.GL_BLEND); batch.begin(); circleSegShader.begin(); Matrix4 modelMatrix = new Matrix4(); Matrix4 cameraMatrix = new Matrix4(); Matrix4 cameraMatrix2 = new Matrix4(); Matrix4 cameraMatrix3 = new Matrix4(); Matrix4 cameraMatrix4 = new Matrix4(); cameraMatrix = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f), 0.0f - ((360.0f /TotalSegments)/ 2.0f)).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix.mul(modelMatrix); cameraMatrix2 = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f), 0.0f - ((360.0f /TotalSegments)/ 2.0f) +(360.0f /TotalSegments) ).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix2.mul(modelMatrix); cameraMatrix3 = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f), 0.0f - ((360.0f /TotalSegments)/ 2.0f) +(2*(360.0f /TotalSegments))).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix3.mul(modelMatrix); cameraMatrix4 = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f),0.0f - ((360.0f /TotalSegments)/ 2.0f) +(3*(360.0f /TotalSegments)) ).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix4.mul(modelMatrix); Vector3 box2dpos = new Vector3(0.0f, 0.0f, 0.0f); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix2); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix3); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix4); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.end(); batch.flush(); batch.end(); Gdx.gl20.glDisable(GL20.GL_DEPTH_TEST); Gdx.gl20.glDisable(GL20.GL_BLEND); } public Mesh createCircleMesh_V3(float cx, float cy, float r_out, float r_in, int num_segments, float segmentSizeDegrees){ float theta = (float) (2.0f * MathUtils.PI / (num_segments * (360.0f / segmentSizeDegrees))); float c = MathUtils.cos(theta);//precalculate the sine and cosine float s = MathUtils.sin(theta); float t,t2; float x = r_out;//we start at angle = 0 float y = 0; float x2 = r_in;//we start at angle = 0 float y2 = 0; float[] meshCoords = new float[num_segments *2 *3 *7]; int arrayIndex = 0; //array for triangles without indices for(int ii = 0; ii < num_segments; ii++) { 
meshCoords[arrayIndex] = x2+cx; meshCoords[arrayIndex +1] = y2+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; meshCoords[arrayIndex] = x+cx; meshCoords[arrayIndex +1] = y+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; t = x; x = c * x - s * y; y = s * t + c * y; meshCoords[arrayIndex] = x+cx; meshCoords[arrayIndex +1] = y+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; meshCoords[arrayIndex] = x2+cx; meshCoords[arrayIndex +1] = y2+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; meshCoords[arrayIndex] = x+cx; meshCoords[arrayIndex +1] = y+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; t2 = x2; x2 = c * x2 - s * y2; y2 = s * t2 + c * y2; meshCoords[arrayIndex] = x2+cx; meshCoords[arrayIndex +1] = y2+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; } Mesh myMesh = new Mesh(VertexDataType.VertexArray, false, meshCoords.length, 0, new VertexAttribute(VertexAttributes.Usage.Position, 3, "a_position"), new VertexAttribute(VertexAttributes.Usage.Color, 4, "a_color")); myMesh.setVertices(meshCoords); return myMesh; } }

    Read the article
