Search Results

Search found 21563 results on 863 pages for 'game testing'.

Page 444/863 | < Previous Page | 440 441 442 443 444 445 446 447 448 449 450 451  | Next Page >

  • Setting uniform value of a vertex shader for different sprites in a SpriteBatch

    - by midasmax
    I'm using libGDX and currently have a simple shader that does a passthrough, except for randomly shifting the vertex positions. This shift is a vec2 uniform that I set within my code's render() loop. It's declared in my vertex shader as uniform vec2 u_random. I have two different kinds of sprites -- call them SpriteA and SpriteB. Both are drawn within the same SpriteBatch's begin()/end() calls. Prior to drawing each sprite in my scene, I check the type of the sprite. If the sprite is an instance of SpriteA, I set the uniform u_random to Vector2.Zero, meaning that I don't want any vertex changes for it. If the sprite is an instance of SpriteB, I set the uniform u_random to Vector2(MathUtils.random(), MathUtils.random()). The expected behavior was that the SpriteA objects in my scene wouldn't experience any jittering, while all SpriteB objects would jitter about their positions. However, what I'm experiencing is that both SpriteA and SpriteB jitter, leading me to believe that the u_random uniform is not actually being set per sprite, but applied to all sprites. What is the reason for this? And how can I fix it so that the vertex shader receives the uniform value set for each sprite individually?

    passthrough.vsh:

        attribute vec4 a_color;
        attribute vec3 a_position;
        attribute vec2 a_texCoord0;

        uniform mat4 u_projTrans;
        uniform vec2 u_random;

        varying vec4 v_color;
        varying vec2 v_texCoord;

        void main() {
            v_color = a_color;
            v_texCoord = a_texCoord0;
            vec3 temp_position = vec3(a_position.x + u_random.x,
                                      a_position.y + u_random.y,
                                      a_position.z);
            gl_Position = u_projTrans * vec4(temp_position, 1.0);
        }

    Java code:

        this.batch.begin();
        this.batch.setShader(shader);
        for (Sprite sprite : sprites) {
            Vector2 v = Vector2.Zero;
            if (sprite instanceof SpriteB) {
                v.x = MathUtils.random(-1, 1);
                v.y = MathUtils.random(-1, 1);
            }
            shader.setUniformf("u_random", v);
            sprite.draw(this.batch);
        }
        this.batch.end();
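
    A likely culprit, offered as a guess: SpriteBatch queues sprites and only submits them when it flushes, so every queued sprite is rendered with whatever uniform values happen to be current at flush time. Flushing before each uniform change makes the value stick per sprite, at the cost of smaller batches. Note also that v = Vector2.Zero aliases the shared Zero constant, so mutating v.x corrupts it for every later user; that alone can make SpriteA jitter. A sketch, assuming libGDX's SpriteBatch.flush():

        this.batch.begin();
        this.batch.setShader(shader);
        for (Sprite sprite : sprites) {
            this.batch.flush(); // submit everything queued so far with the previous uniform value
            Vector2 v = new Vector2(); // fresh vector; never mutate Vector2.Zero
            if (sprite instanceof SpriteB) {
                v.set(MathUtils.random(-1f, 1f), MathUtils.random(-1f, 1f));
            }
            shader.setUniformf("u_random", v);
            sprite.draw(this.batch);
        }
        this.batch.end();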

  • How to make an iOS plugin for Unity3d

    - by DannoEterno
    I've spent the last two days reading articles and books to understand how to make an iOS plugin for Unity. Basically I just need a demo to understand how it works. So far I've tried the following (with really poor luck). I started a new project in Unity and wrote a simple script:

        using UnityEngine;
        using System.Collections;
        using System;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour {
            [DllImport ("__Internal")]
            private static extern int test();

            void Start () {
                Debug.Log(test());
            }
        }

    Then I created a project in Xcode with this simple function:

        extern "C" {
            int test() {
                int che = 5;
                return che;
            }
        }

    Then I tried: putting the .mm and .h in Assets/Plugins/iOS = nothing; building the Unity project and then adding the .h and .mm to the Xcode project = nothing. In Unity I always get an EntryPointNotFoundException, so Unity sees the file but is unable to reach the method. The problem is... how?! :) Maybe I missed something or did something wrong? Thanks a lot for every help that you can give me :)
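
    One thing to check, offered as a guess: [DllImport("__Internal")] can only resolve on an actual iOS device build, where the .mm is compiled and statically linked in; pressing Play inside the Unity editor throws exactly this EntryPointNotFoundException because there is nothing to link against. The .mm also has to sit in Assets/Plugins/iOS before you build, so Unity copies it into the generated Xcode project. A sketch that stubs the call out everywhere except the device (UNITY_IPHONE is the era-appropriate define; newer Unity versions use UNITY_IOS):

        using UnityEngine;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour {
        #if UNITY_IPHONE && !UNITY_EDITOR
            [DllImport("__Internal")]
            private static extern int test();
        #else
            private static int test() { return -1; } // editor stub so play mode doesn't throw
        #endif

            void Start() {
                Debug.Log(test());
            }
        }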

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put float3 instead of float4 or something like that, but I've checked over and over and as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error:
        Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0)
        as input, but it is not provided by the output stage.
        [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask Register SysValue Format  Used
        // -------------------- ----- ------ -------- -------- ------ ------
        // POSITION                 0   xyz         0     NONE  float  xyz
        // NORMAL                   0   xyz         1     NONE  float
        // COLOR                    0   xyzw        2     NONE  float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal   : NORMAL0;
            float4 Color    : COLOR0;
        };

    The input layout, from PIX, is shown in a screenshot (not reproduced here). The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?
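
    Two guesses worth ruling out, offered without certainty: first, the input layout bound at DrawIndexed time must have been created against this same vertex shader's signature bytecode; a layout built from a different shader (or a different technique pass) produces exactly this linkage error even when the structs look right. Second, depending on which InputElement constructor overload that is (SlimDX and SharpDX differ), the final int may not be an append-aligned byte offset, leaving every element at offset 0. Spelling the offsets out removes the ambiguity; a sketch, assuming a (name, semantic index, format, byte offset, slot) overload:

        new InputElement("POSITION", 0, Format.R32G32B32_Float,     0, 0),
        new InputElement("NORMAL",   0, Format.R32G32B32_Float,    12, 0),
        new InputElement("COLOR",    0, Format.R32G32B32A32_Float, 24, 0)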

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factor out frequently-used groups of nodes by creating a Material Function (Content Browser -> right-click -> New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, then I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification in the behaviour of one actor isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally useless effort. My question is: can I create a "Kismet function", just like a material function? Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScripts, don't know where the documentation is, and more generally don't have enough time to invest in learning UnrealScript. This "Kismet function" feature must be usable by artists (with little programming knowledge). If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factor out a few nodes. Thanks for any information!

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this:

        ---------. P2
                 |
                 |
             P1 .---------

    My agents have a velocity, a position, etc. Could you tell me how to add wall avoidance to my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++) {
                //TODO
            }
            return force;
        }

    Then I take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do the same with my walls. Thanks for your help.
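
    A common approach is to push away from the closest point on each wall segment, scaling the force up as the agent gets closer. A sketch of the TODO body, under these assumptions: Wall exposes its endpoints via p1()/p2() (hypothetical accessor names), Point2D and Vector2D interoperate arithmetically, and Vector2D offers dot() and length():

        for (unsigned i = 0; i < wallsList.size(); i++) {
            Vector2D a = wallsList[i]->p1();
            Vector2D b = wallsList[i]->p2();

            // Closest point on segment [a, b] to the agent's position.
            Vector2D ab = b - a;
            float t = (pos - a).dot(ab) / ab.dot(ab);
            if (t < 0.0f) t = 0.0f;
            if (t > 1.0f) t = 1.0f;
            Vector2D closest = a + ab * t;

            Vector2D away = pos - closest;
            float dist = away.length();
            const float safeDistance = 50.0f; // hypothetical tuning constant, world units
            if (dist > 0.0f && dist < safeDistance) {
                // Repulsion grows linearly as the agent approaches the wall.
                force = force + away * ((safeDistance - dist) / dist);
            }
        }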

  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials, and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position at which the shapes are drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1, 0, 0, 1) it makes my quad spin around. If I do glRotatef(1, 0, 0, 0) it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
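
    For what it's worth: glTranslatef doesn't move a camera; it multiplies the current matrix so that everything drawn afterwards is offset by (x, y, z). glRotatef(angle, x, y, z) rotates subsequent drawing by angle degrees around the axis (x, y, z), so the axis must be a non-zero vector; glRotatef(1, 0, 0, 0) is degenerate, and the black screen on the X/Y axes is most likely the quad accumulating one degree of rotation per frame until it sits edge-on and invisible. A minimal usage sketch:

        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);    // push subsequent drawing 5 units into the screen
        glRotatef(45.0f, 0.0f, 0.0f, 1.0f); // rotate subsequent drawing 45 degrees around the Z axis
        drawQuad();                          // hypothetical helper that emits the quad's vertices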

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and apply the effect to my models like this:

        foreach (ModelMesh mesh in Model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = effect;
            }
            mesh.Draw();
        }

    My only issue is that every model now has its own light source in it. Why is this happening, and is this a problem with my shader?

    Edit: These are the parameters passed to the shader:

        private void Get_lambertEffect()
        {
            if (_lambertEffect == null)
                _lambertEffect = Engine.LambertEffect;

            // Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF)
            _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"];
            _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize);

            // ShadowMap parameters
            _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix);
            _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix);
            _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias);
            _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture);

            // Camera view and projection parameters
            _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix);
            _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix);
            _lambertEffect.Parameters["world"].SetValue(Matrix.CreateScale(Size) * world);

            // Light and color
            _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction);
            _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color);
            _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient);
            _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse);
            _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red));
        }

  • jump pads problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that sits above him. Here is the formula I've used (everything is pretty much self-explanatory, except perhaps character_MaxForce, which is the total force the character can jump with):

        deltaPosition = target - character_position;
        sqrtTerm = Sqrt(2 * -gravity.y * deltaPosition.y + MaxYVelocity * character_MaxForce);
        time = (MaxYVelocity - sqrtTerm) / gravity.y;
        speedSq = jumpVelocity.x * jumpVelocity.x + jumpVelocity.z * jumpVelocity.z;

    If speedSq < (character_MaxForce * character_MaxForce), we have the right time, so we can store the value:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;

    Otherwise we try the other solution:

        time = (MaxYVelocity + sqrtTerm) / gravity.y;

    and then store it:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;
        jumpVelocity.y = MaxYVelocity;
        rigidbody_velocity = jumpVelocity;

    The problem is that the character jumps away from the landing pad, or sometimes he jumps too far and never hits it.
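
    For comparison, here is a sketch of the standard ballistic solve, starting from deltaY = vy*t + 0.5*g*t^2 with vy = MaxYVelocity and g = gravity.y (negative), whose roots are t = (-vy +/- sqrt(vy^2 + 2*g*deltaY)) / g. Two spots in the formula above look suspect against this: the MaxYVelocity * character_MaxForce term under the square root (the textbook term is MaxYVelocity squared) and the missing negation of MaxYVelocity in the numerator. Unity-style C# assumed:

        float g = gravity.y;                 // negative
        float vy = MaxYVelocity;
        float disc = vy * vy + 2f * g * deltaPosition.y;
        if (disc < 0f) {
            return;                          // pad is higher than this jump can ever reach
        }
        float sqrtTerm = Mathf.Sqrt(disc);
        // Two arrival times; the later one lands while falling down onto the pad.
        float t1 = (-vy + sqrtTerm) / g;
        float t2 = (-vy - sqrtTerm) / g;
        float time = Mathf.Max(t1, t2);
        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;
        jumpVelocity.y = vy;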

  • Is this an effective, simple way to display a moving image? SDL2

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I was messing around out of curiosity: is this an effective way to move an image? One problem is that it drags a trail along behind the image as it moves.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[])
        {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
                SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

            SDL_Surface *png = IMG_Load("character.png");
            SDL_Rect src;
            src.x = 0;   src.y = 0;
            src.w = 161; src.h = 159;
            SDL_Rect dest;
            dest.x = 50;  dest.y = 50;
            dest.w = 161; dest.h = 159;

            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);

            while (exit == false) {
                dest.x++;
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }

            SDL_Delay(5000);
            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
        }
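
    Two observations, offered as guesses: the loop never pumps the event queue or sets exit to true, so the window can't close cleanly; and SDL_RenderClear fills the frame with the renderer's current draw color, which is worth setting explicitly before clearing. A sketch of the usual main-loop shape:

        SDL_Event e;
        while (exit == false) {
            while (SDL_PollEvent(&e)) {
                if (e.type == SDL_QUIT) {
                    exit = true; // window close button pressed
                }
            }
            dest.x++;
            SDL_SetRenderDrawColor(ren, 0, 0, 0, 255); // color SDL_RenderClear fills with
            SDL_RenderClear(ren);
            SDL_RenderCopy(ren, tex, &src, &dest);
            SDL_RenderPresent(ren);
        }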

  • Figuring out what object is closer to a certain point?

    - by user1157885
    I'm trying to create fog of war. I have the visual effect working, but I'm not sure how to handle hiding other players who are within the fog. Right now the specific case is: if another player is hiding behind a wall, don't render that player. I was thinking of casting a ray toward each player, building a list of all the obstacles that ray collides with, and then figuring out whether any obstacle is closer than the player, to decide visibility. But then I realized I'm not really sure how to determine whether the obstacle is in fact closer, because I have to account for all the dimensions, so I'm kind of stuck. First of all, is this approach the correct way to go about it, and secondly, how would I calculate whether the obstacle is in fact closer, taking into account X, Y and Z? Thanks
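
    The raycast approach is the standard one, and the third dimension doesn't complicate it: distance along a ray is a single scalar, whatever the dimension of the space. Comparing squared lengths also avoids the square root. A sketch in XNA-style C# (the position names are assumptions):

        // The player is hidden if the nearest obstacle hit lies closer to the eye
        // than the player does, along the same ray.
        Vector3 toPlayer = playerPosition - eyePosition;
        Vector3 toHit = obstacleHitPosition - eyePosition;
        bool occluded = toHit.LengthSquared() < toPlayer.LengthSquared();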

  • Climbing boxes in box2D

    - by Rothens
    I've just stepped into the world of Box2D with libGDX. I've already made a stack of boxes: they are dropped randomly on top of each other. What I'd like to achieve is a character that can freely climb on the boxes (he can grip the boxes anywhere, not just on the side or top of a box), but whose weight affects the stack as well, so the boxes can fall down. My google-fu failed me... Is there any way to make this possible?
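
    One approach that keeps the physics honest is to pin the climber to a box with a joint at the grip point: the joint transfers the character's weight into the box, so an unbalanced stack can still topple while he hangs on. A sketch using the libGDX Box2D bindings, where characterBody, boxBody and gripPoint are assumed to come from your own contact handling:

        // Pin the climber's hand to the box at the world-space grip point.
        RevoluteJointDef grip = new RevoluteJointDef();
        grip.initialize(characterBody, boxBody, gripPoint);
        Joint handhold = world.createJoint(grip);

        // ... later, to let go:
        world.destroyJoint(handhold);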

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap-generating techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate an unfolded cube net of tiles, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.
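
    One route that sidesteps the unfolded-net problem entirely: libnoise modules are functions over 3D space, so rather than baking a 2D tileable map and wrapping it, sample the noise directly at each cube vertex's position projected onto the unit sphere. Seams cannot appear, because the two sides of a face edge sample identical 3D coordinates. A sketch:

        #include <noise/noise.h>

        noise::module::Perlin elevation;

        // (x, y, z) is the cube-face vertex normalized onto the unit sphere.
        double sampleHeight(double x, double y, double z) {
            return elevation.GetValue(x, y, z); // identical on both sides of a face edge
        }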

  • Do I need the 'w' component in my Vector class?

    - by bobobobo
    Assume you're writing matrix code that handles rotation, translation, etc. for 3D space. The transformation matrices have to be 4x4 to fit the translation component in. However, you don't actually need to store a w component in the vector, do you? Even in perspective division, you can simply compute and store w outside of the vector, and perspective-divide before returning from the method. For example:

        // post-multiply: vec2 = matrix * vector
        Vector operator*( const Matrix & a, const Vector& v )
        {
            Vector r ;

            // do matrix mult
            r.x = a._11*v.x + a._12*v.y ...
            real w = a._41*v.x + a._42*v.y ...

            // perspective divide
            r /= w ;
            return r ;
        }

    Is there a point in storing w in the Vector class?

  • infer half vector length in BRDF

    - by cician
    It's my first question on Stack. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the whole view and light vectors? I may be completely off-track, but I have this gut feeling it's possible... Why do I ask? I'm working on a skin shader, and I'm already doing one texture lookup with N·L + N·E and one texture lookup for specular with N·H + N·V. The latter could be transformed into an N·L + N·E lookup if only I had the half vector's length. Doing so could simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
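
    A quick derivation suggests N·L and N·V alone are not enough. With unit L and V, the unnormalized half vector's length is

        |L + V| = sqrt((L + V)·(L + V)) = sqrt(L·L + 2 L·V + V·V) = sqrt(2 + 2 (L·V))

    so the length depends only on L·V, the angle between the light and view directions, and that dot product is not determined by N·L and N·V: L and V can each swing around the normal's axis, changing L·V while both projections onto N stay fixed. Passing L·V (or the length itself) down from the vertex stage as one extra varying may be the cheapest way out.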

  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to a pixel. Obviously that's not what I want. Now, I don't want to have to apply a transformation to each and every object's position when I draw them. As I happen to be using XNA, and SpriteBatch allows a Matrix to be passed as an argument to its Begin method, I was wondering if there is a way to pass the world-to-pixel transformation in there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spots, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks
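
    If the Begin matrix ends up scaling the sprites too, one simple alternative is to skip the matrix and convert only positions at draw time, so coordinates stay in meters while sprite dimensions stay in pixels. A sketch, with PixelsPerMeter as an assumed conversion constant:

        const float PixelsPerMeter = 64f; // 1 meter in the world = 64 pixels on screen
        spriteBatch.Begin();
        spriteBatch.Draw(texture, worldPosition * PixelsPerMeter, Color.White);
        spriteBatch.End();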

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out after a certain radius from the point of origin. Right now, all particles are emitted exactly at the point of origin; what I want is for them to be offset from the origin by some amount, i.e. to start outside a circle around it. What is the best way to achieve this? At the moment I have the point of origin, the position of each particle, and its rotation angle. Sorry for the poor illustrations.

    Edit: I was mistaken: when a particle is created, I have only the point of origin. Once the particle has moved to a new location, I can calculate its rotation in the update method using atan2(). This is how I create/manage particles: a new particle is created at the enemy ship's death location; for every new particle added to the list, Update and Draw are called to update its position, calculate the new angle, and draw it.
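
    Since a particle only knows the point of origin when it is created, one option is to pick its outward direction immediately and spawn it already displaced by the ring radius along that direction; the same direction can seed its velocity so it keeps radiating outward. An XNA-style sketch, where ringRadius and speed are assumed tuning values:

        // Random outward direction on the unit circle.
        float angle = (float)(random.NextDouble() * MathHelper.TwoPi);
        Vector2 dir = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));

        particle.Position = origin + dir * ringRadius; // start on the circle, not at the center
        particle.Velocity = dir * speed;               // keep moving outward along that direction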

  • How do I create a big multiplayer world in UDK?

    - by Dorpe
    I want to create a big multiplayer world in UDK and I'm having a few difficulties. I created the biggest terrain possible, but now any terrain-related action I take takes forever. However, I've seen videos of people working with same-size terrains without a problem. My PC is strong enough, so maybe someone can tell me what I'm doing wrong. I also want to make the world even bigger than the biggest terrain size, so I was thinking of using level streaming, but then I read that streaming works server-side, which means that if I have a player on every terrain, all terrains will still be loaded; I want to save as much memory as possible so it will work well online. Thanks for any help you can give.

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's quite great; I'm actually understanding more about shaders and the non-fixed pipeline in one week than in the two years I spent trying to learn the fixed-function pipeline. But here's my question. From what I think I've understood, the vertex shader is run for each vertex in the VBO, but the fragment shader is run per pixel (is that right?), which is a huge number compared to, say, the 3 vertices of a triangle. Now, it seems that the out variables of the vertex shader (like colors and such) are passed 1-to-1 to the fragment shader. But say I pass the position of the vertex from the vertex shader to the fragment shader. How is that executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
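
    Short version of what happens between the stages: each out written by the vertex shader is attached to its vertex, and the rasterizer then blends the three vertices' values for every fragment using that fragment's barycentric coordinates inside the triangle (perspective-corrected), so no single vertex "wins" unless you ask for that explicitly. GLSL 1.50 exposes the choice through interpolation qualifiers:

        // Vertex shader outs (the matching fragment shader 'in's use the same qualifiers):
        smooth out vec4 v_color;    // default: perspective-correct barycentric interpolation
        flat   out vec4 v_flatTint; // no interpolation: every fragment in the triangle receives
                                    // the provoking (by default, the last) vertex's value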

  • What is involved with writing a lobby server?

    - by Kira
    So I'm writing a chess matchmaking system based on a lobby view with gaming rooms, general chat, etc. So far I have a working prototype, but I have big doubts regarding some things I did with the server. Writing a gaming lobby server is a new programming experience for me, so I don't have a clear or precise programming model for it. I also couldn't find a paper that describes how it should work. I ordered "Java Network Programming, 3rd edition" from Amazon and am still waiting for shipment; hopefully I'll find some useful examples/information in this book. Meanwhile, I'd like to gather your opinions and see how you would handle some things, so I can learn how to write a server correctly. Here are a few questions off the top of my head (more may come):

    First, let's define what a server does. Its primary functionality is to hold TCP connections with clients, listen to the events they generate, and dispatch them to the other players. But is there more to it than that?

    Should I use one thread per client? If so, 300 clients = 300 threads. Isn't that too much? What hardware is needed to support that? And roughly how much bandwidth does a lobby consume?

    What kind of data structure should be used to hold the clients' sockets? How do you protect it from concurrent modification (e.g. a player enters or exits the lobby) while iterating through it to dispatch an event, without hurting throughput? Is ConcurrentHashMap the correct answer here, or are there techniques I should know about?

    When a user enters the lobby, what mechanism would you use to transfer the state of the lobby to him? And while this is happening, where do the other events queue up?

    Screenshot: http://imageshack.us/photo/my-images/695/sansrewyh.png/
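
    On the concurrent-modification question specifically, ConcurrentHashMap is a reasonable fit: its iterators are weakly consistent, so a player joining or leaving mid-broadcast never throws ConcurrentModificationException and never blocks the writers. A sketch, where ClientHandler is a hypothetical per-connection wrapper that owns the socket and an outgoing message queue:

        import java.util.concurrent.ConcurrentHashMap;

        public class Lobby {
            private final ConcurrentHashMap<String, ClientHandler> clients =
                    new ConcurrentHashMap<String, ClientHandler>();

            public void join(String playerId, ClientHandler handler) {
                clients.put(playerId, handler);
            }

            public void leave(String playerId) {
                clients.remove(playerId);
            }

            public void broadcast(String event) {
                // Weakly consistent iteration: safe against concurrent join/leave.
                for (ClientHandler c : clients.values()) {
                    c.send(event); // hypothetical non-blocking enqueue per client
                }
            }
        }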

  • Material tiling and offset in unity

    - by Simran kaur
    Ambiguity: what exactly is the difference between tiling a material and offsetting a material?

    Need to do: I need the material to be repeated n times on the object, where I set the value of n via script. How do I do it? It seems to happen through tiling (tried via the inspector), but then what is the difference between mainTextureOffset and SetTextureOffset?

    Tried: a line of code that was supposed to repeat the texture n times across the width of the object, but it does nothing significant that I can see.
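
    On the ambiguity: tiling (the texture scale) controls how many times the texture repeats across the UV range, while the offset slides where the texture starts; repeating a texture n times is therefore a tiling job. As for the two APIs, mainTextureScale/mainTextureOffset are shorthands that act on the material's main texture ("_MainTex"), while SetTextureScale/SetTextureOffset take the texture property name explicitly. A sketch; note that nothing will visibly repeat unless the texture's wrap mode is Repeat rather than Clamp:

        // Repeat the main texture n times across the object's width.
        Renderer rend = GetComponent<Renderer>();
        rend.material.mainTextureScale = new Vector2(n, 1f);

        // Equivalent explicit form, usable for any texture property:
        rend.material.SetTextureScale("_MainTex", new Vector2(n, 1f));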

  • UDK : UTWeap_RocketLauncher gift CreateInventory: Any idea why this does not work properly?

    - by John Sloan
    I am giving the player an instanced class of UTWeap_RocketLauncher in an instance of UTGame:

        PlayerPawn.CreateInventory(class'FobikRocketLauncher', false); // Does not work
        PlayerPawn.CreateInventory(class'FobikLinkGun', false);        // Works

    Even if I give the original class (e.g. UTWeap_RocketLauncher), it does not actually show up. However, if I do a "GiveWeapons" cheat, I get it just fine. It also works if I hard-code it into the map. But UTWeap_LinkGun works fine either way. Any ideas? It shows the default ammo amount and the icon on the HUD.

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be run by a Pi (over the ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display functions. I have access to pass pixel information using a memory-mapped block, so is there any way to do that with pygame instead of passing it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial
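
    pygame can render without a display: drawing onto an off-screen pygame.Surface never touches the video port, and pygame.image.tostring() returns the raw pixel bytes, which can be copied into a memory-mapped block. A sketch, assuming the mapped block takes 75x20 RGB bytes row by row (the file path is hypothetical):

        import mmap
        import pygame

        pygame.init()
        frame = pygame.Surface((75, 20))           # off-screen frame buffer, no window needed

        # ... draw the Pong paddles/ball onto 'frame' with pygame.draw ...

        raw = pygame.image.tostring(frame, "RGB")  # 75 * 20 * 3 bytes, top row first

        with open("/dev/shm/led_matrix", "r+b") as f:  # hypothetical mapped block
            mm = mmap.mmap(f.fileno(), len(raw))
            mm[:len(raw)] = raw                    # push the frame to the matrix
            mm.close()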

  • Port OpenGL 2.x code to OpenGL 3.x

    - by user46759
    I'm trying to port the OpenCloth example to OpenGL 3.x. I've mostly done it for the shaders, but I'm not sure about this part:

        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glVertexPointer(4, GL_FLOAT, 0, 0);

        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glTexCoordPointer(2, GL_FLOAT, 0, 0);

        glEnableClientState(GL_NORMAL_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glNormalPointer(GL_FLOAT, sizeof(float)*4, 0);

    Maybe glEnableVertexAttribArray somewhere? Any clue? Thanks.

    Edit: maybe something like this?

        glEnableVertexAttribArray(2); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glEnableVertexAttribArray(3); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, 0);
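
    The second guess is close. In a core 3.x profile the glEnableClientState/gl*Pointer calls are gone entirely; every buffer becomes a generic attribute fed through glVertexAttribPointer and recorded in a vertex array object, with each index matched to the shader via layout(location = n) or glBindAttribLocation. A sketch keeping the original buffers and strides, with attribute locations 0/1/2 assumed:

        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        glBindBuffer(GL_ARRAY_BUFFER, vboID);         // positions: 4 floats, tightly packed
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);

        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);      // texcoords: 2 floats, tightly packed
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);     // normals: 3 floats, stride of 4 floats
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 4, 0);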

  • Are there any reasons to use Legacy (2.X) OpenGL?

    - by user27886
    The benefits of the modern OpenGL 3.x and 4.x APIs are well documented, but I'm wondering if there are ANY benefits to keeping with the old OpenGL, or if learning OpenGL 2.x is a complete waste of time now no matter what. In particular, I've wondered whether using the OpenGL 2.x API is appropriate if the target platform has graphics hardware capable of only up to OpenGL 2.x. Would a driver update on said target platform allow programs compiled using the modern OpenGL APIs to be released on this old platform? If both work, which would be faster? Thanks

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what's the best way to make transitions between them, and how to affect them from other scenes.

    Let's say I wanted to "push" one scene to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other, the question stays the same: how would you do that without having to handle it in every single scene? While writing this, I'm realizing it will be the same for all kinds of transitions.

    Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work if objects that are drawn have their own Draw method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. That means maybe scenes have to be drawn to their own render target? That way, one call to the base class after the normal drawing could be enough to apply the effects while drawing it to the main render target. But I once heard there are problems when switching from target to target, back and forth. So is that even a viable option?

    As you can see, I have some basic ideas how it might work... but nothing specific. I'd like to learn what's the common way to achieve such things: a general way to apply all kinds of transitions.
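
    Rendering each scene into its own RenderTarget2D is a viable route in XNA 4.0 and removes the per-scene special cases: scenes keep drawing in their own coordinates, and the manager composites the targets with whatever offset, tint or shader the transition needs. Switching targets is fine as long as all render-target drawing happens before the back buffer is drawn each frame. A sketch of the push-right case, with t running from 0 to 1 (the scene and target names are assumptions):

        // Draw both scenes into their own targets first.
        graphicsDevice.SetRenderTarget(currentTarget);
        currentScene.Draw(gameTime);
        graphicsDevice.SetRenderTarget(incomingTarget);
        incomingScene.Draw(gameTime);
        graphicsDevice.SetRenderTarget(null); // back to the back buffer

        // Composite: the current scene slides out right, the incoming one follows it in.
        float offset = t * viewportWidth;
        spriteBatch.Begin();
        spriteBatch.Draw(currentTarget, new Vector2(offset, 0f), Color.White);
        spriteBatch.Draw(incomingTarget, new Vector2(offset - viewportWidth, 0f), Color.White);
        spriteBatch.End();

    A fade would be the same composite with no offset and Color.White * t as the incoming scene's tint.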
