Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • XNA C# How to draw fonts in different color

    - by XNA newbie
    I'm doing a simple chat system with XNA C#: a chatbox that holds the last 5 lines of chat typed by the users, something like an MMORPG chat system.

        [User1name] says: Hi
        [User2name] says: Hello
        [User1name] says: What are you doing?
        [User2name] says: I'm fine
        [System] The time is now 1:03AM.

    When the user presses ENTER, the text he entered is added to an ArrayList:

        chatList.Add(s);

    To display the text he entered, I use:

        for (int i = 0; i < chatList.Count(); i++)
        {
            spriteBatch.DrawString(font, chatList[i], new Vector2(40, arr1[i]), Color.Yellow);
        }

    (arr1[i] contains the 5 y-axis points at which to print my 5 lines of chat in the chatbox.)

    Question 1: What if another type of message (something like a system message) is added to chatList? I need system messages printed in red. And as the user keeps chatting, the chatbox updates accordingly (max 5 lines): the newest chat is shown at the bottom, and the oldest is deleted once the 5-line limit is reached.

        [User2name] says: Hello
        [User1name] says: What are you doing?
        [User2name] says: I'm fine
        [System] The time is now 1:03AM.
        [User1name] says: Ok, great to hear that!

    I'm having trouble printing each line in a color that matches its message type: yellow for normal messages, red for system messages.

    Question 2: Next, I need the chat text to be white while the names of the users are yellow (like the Warcraft III chat system). How do I do that? I'm having a hard time thinking of a solution for these. Advice needed.
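
    One way to sketch this (my illustration, not from the post): store each chat entry as a small struct carrying its own colors, and draw multi-colored lines in two segments, using SpriteFont.MeasureString to offset the message past the name. ChatLine and DrawChat are hypothetical names; DrawString and MeasureString are standard XNA 4.0 APIs.

        // Minimal sketch, assuming XNA 4.0.
        // using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
        // using System.Collections.Generic;
        struct ChatLine
        {
            public string Name;     // e.g. "[User1name] says:" (empty for system messages)
            public string Text;     // the message body
            public Color NameColor; // yellow for player names
            public Color TextColor; // white for chat, red for [System]
        }

        void DrawChat(SpriteBatch spriteBatch, SpriteFont font, List<ChatLine> chatList, int[] arr1)
        {
            for (int i = 0; i < chatList.Count; i++)
            {
                ChatLine line = chatList[i];
                Vector2 pos = new Vector2(40, arr1[i]);
                if (line.Name.Length > 0)
                {
                    spriteBatch.DrawString(font, line.Name, pos, line.NameColor);
                    pos.X += font.MeasureString(line.Name + " ").X; // offset past the name
                }
                spriteBatch.DrawString(font, line.Text, pos, line.TextColor);
            }
        }

    The 5-line cap then falls out naturally: after adding a line, call chatList.RemoveAt(0) while the count exceeds 5.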

    Read the article

  • XNA Skinned Animated Mesh Rendering Exported from Maya

    - by Devin Garner
    I am working on translating an old RTS game engine I wrote from DirectX 9 to XNA. My old models didn't have animation and are in an old format, so I'm trying an FBX file. I temporarily "borrowed" a model from League of Legends just to test whether my rendering works correctly; I imported the mesh/bones/skin/animation into Maya 2012 using an "unnamed" 3rd-party import tool. (Obviously I'll have to get legit models later, but I just want to test whether my programming is correct.) Everything looks correct in Maya, and it renders the animations flawlessly. I exported everything into a single FBX file (with only a single animation). I then tried to load this model using the example at the following site: http://create.msdn.com/en-US/education/catalog/sample/skinned_model

    With my exported FBX, the animation looks correct for most of the frames, but at random times it screws up for a split second: the body/arms/head look right, while a leg or foot shoots out to a random point in space for a moment and then returns to its normal position. The original FBX from the sample looks correct in my program. It seems unlikely that my model was imported into Maya wrong, since it displays fine in Maya. So I'm thinking either I'm exporting it wrong, or the sample code is bad and the sample's model caters to the sample's bad code. I'm new to 3D programming and Maya, so chances are I'm doing something wrong in the export. I'm using mostly the defaults, but I've tried all 3 interpolation modes (quaternion, euler, resample). Thanks

    Read the article

  • What do you use to bundle / encrypt data?

    - by David McGraw
    More and more games are going the data-driven route, which means there needs to be a layer of security against easy manipulation. I've seen games completely bundle up their assets (audio, art, data), and I'm wondering how they manage that. Are there applications or libraries that will bundle the assets and assist you with managing them? If not, are there any good resources you would point to for packing/unpacking/encryption? This specific question revolves around C++, but I would be open to hearing how this is managed in C#/XNA as well. Just to be clear: I'm not out to engineer a solution that prevents hacking. At the fundamental level we're all manipulating 0's and 1's. But we do want to keep the 99% of people who play the game from simply modifying the XML files that are used to build the game world. I've seen plenty of games bundle all of their resources together; I'm simply curious about the methods they're using.
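
    By way of illustration only (none of this is from the post): PhysicsFS and zip archives via zlib/minizip are the usual off-the-shelf options, and a hand-rolled pack file is also common. The format below is entirely hypothetical — a length-prefixed table of files with a fixed XOR key, which is obfuscation rather than real security, but enough to stop casual text-editor edits:

        // Minimal sketch of a hypothetical pack-file reader. Assumed format:
        // [uint32 fileCount], then per file:
        // [uint32 nameLen][name bytes][uint32 dataLen][data bytes],
        // with every data byte XOR'd against a fixed key.
        #include <cstdint>
        #include <fstream>
        #include <map>
        #include <string>
        #include <vector>

        std::map<std::string, std::vector<char>> LoadPack(const std::string& path)
        {
            std::map<std::string, std::vector<char>> assets;
            std::ifstream in(path, std::ios::binary);
            std::uint32_t count = 0;
            in.read(reinterpret_cast<char*>(&count), sizeof(count));
            for (std::uint32_t i = 0; i < count && in; ++i)
            {
                std::uint32_t nameLen = 0, dataLen = 0;
                in.read(reinterpret_cast<char*>(&nameLen), sizeof(nameLen));
                std::string name(nameLen, '\0');
                in.read(&name[0], nameLen);
                in.read(reinterpret_cast<char*>(&dataLen), sizeof(dataLen));
                std::vector<char> data(dataLen);
                in.read(data.data(), dataLen);
                for (char& c : data) c ^= 0x5A; // undo the XOR obfuscation
                assets[name] = std::move(data);
            }
            return assets;
        }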

    Read the article

  • how much time does grid.py take to run?

    - by trinity
    Hello all, I am using libsvm for binary classification and wanted to try grid.py, as it is said to improve results. I ran the script on five files in separate terminals, and it has been running for more than 12 hours. This is the state of my 5 terminals now:

        [root@localhost tools]# python grid.py sarts_nonarts_feat.txt > grid_arts.txt
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sgames_nongames_feat.txt > grid_games.txt
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sref_nonref_feat.txt > grid_ref.txt
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt > grid_biz.txt
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py snews_nonnews_feat.txt > grid_news.txt
        Wrong input format at line 494
        Traceback (most recent call last):
          File "grid.py", line 223, in run
            if rate is None: raise "get no rate"
        TypeError: exceptions must be classes or instances, not str

    I had redirected the outputs to files, but those files contain nothing so far. The following files were created:

        sbiz_nonbiz_feat.txt.out      sbiz_nonbiz_feat.txt.png
        sarts_nonarts_feat.txt.out    sarts_nonarts_feat.txt.png
        sgames_nongames_feat.txt.out  sgames_nongames_feat.txt.png
        sref_nonref_feat.txt.out      sref_nonref_feat.txt.png
        snews_nonnews_feat.txt.out    (empty)

    There's just one line of information in each .out file, and the .png files are gnuplot plots, but I don't understand what the plots and warnings above convey. Should I re-run them? Can anyone please tell me how much time this script might take if each input file contains about 144,000 lines? Thanks and regards

    Read the article

  • How can I resolve collisions at different speeds, depending on the direction?

    - by Raven Dreamer
    I have, for all intents and purposes, a Triangle class that objects in my scene can collide with (in actuality, the right side of a parallelogram). My collision detection and resolution code works fine for the purpose of preventing a game object from entering the space of the triangle, instead directing the movement along its edge.

    The trouble is that the maximum speeds along the x and y axes are not equivalent in my game: moving a given distance along the Y axis (up or down) should take twice as long as the same distance along the X axis (left or right). Unfortunately, these speeds apply to the collision resolution too, and movement along the blue path above progresses twice as fast. What can I do in my collision resolution to make sure the speed limit for Y-axis movement is obeyed in the latter case? The collision resolution for this case is below (vecInput and velocity are the position and velocity vectors of the game object):

        // y = mx + c; solve for y. m = 2, x = input's x coord, c = rightYIntercept
        lowY = 2 * vecInput.x + parag.rightYIntercept;
        ...
        else
        {
            // y = mx + c
            // vecInput.y = 2x + rightYIntercept
            // (vecInput.y - rightYIntercept) / 2 = x
            // If velocity.y (positive) is greater than velocity.x (negative),
            // we're pushing from the bottom, so push right.
            if (velocity.y > -1 * velocity.x)
            {
                // Change the input vector's x position to match the y position on the
                // shape's edge. Line formula: y = mx + c, where m is 2, c is
                // rightYIntercept, and y is the input y; solve for x.
                vecInput = new Vector2((vecInput.y - parag.rightYIntercept) / 2, vecInput.y);
                Debug.Log("adjusted rightwards");
            }
            else
            {
                vecInput = new Vector2(vecInput.x, lowY);
                Debug.Log("adjusted downwards");
            }
        }
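
    One way to keep the Y speed limit honest (a sketch of mine, not from the post; previousPosition, maxSpeedY, and deltaTime are assumed names) is to clamp the resolved displacement per axis after collision resolution, instead of letting the slide along the edge inherit the X speed:

        // Minimal sketch: shrink the whole post-resolution step so the vertical
        // component never exceeds the Y speed limit, regardless of which way the
        // collision response pushed the object.
        Vector2 resolved = vecInput - previousPosition;      // movement this tick after resolution
        float maxStepY = maxSpeedY * deltaTime;
        if (Mathf.Abs(resolved.y) > maxStepY)
        {
            float scale = maxStepY / Mathf.Abs(resolved.y);  // uniform shrink factor
            resolved *= scale;                               // keeps the direction along the edge
            vecInput = previousPosition + resolved;
        }

    Scaling the whole step (rather than only its y component) preserves the slide direction along the triangle's edge while respecting the slower axis.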

    Read the article

  • Trouble with touch events on iPhone

    - by MrDatabase
    I'm making a simple 2D game for iPhone. Think of the game as a ball on the screen that goes up while the user is touching the screen and falls down when the user stops touching. The ball starts moving up in touchesBegan:withEvent: and starts moving down in touchesEnded:withEvent:. This works fine almost all the time; however, on occasion the ball will keep moving up after the user stops touching, or keep moving down while the user is touching. Why is this happening? Just FYI: the ball is drawn on a UIWindow, and the taps are handled by a UIImageView subclass that is clearColor and takes up the entire screen. This "touchLayer" is also moved to the front of the window in the game loop. Any idea why this control scheme occasionally fails? Perhaps the touch events just aren't firing, or they're fired out of order? Cheers!
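
    One thing worth checking (my suggestion, not something from the post): iOS can cancel a touch — incoming call, system gesture, and so on — without ever sending touchesEnded:, which would leave the ball stuck in its rising state. A minimal sketch for the touch layer:

        // Minimal sketch: treat a cancelled touch exactly like a lifted finger,
        // so the "rising" state can never get stuck when iOS cancels touches.
        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [self touchesEnded:touches withEvent:event];
        }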

    Read the article

  • Unity 5.1 audio issues (no sound in back channels)

    - by N0xus
    I've been trying to bring surround sound audio into my project. I've set my computer up to run in 5.1, and when I play a 6-channel test file through Windows Media Player (it announces "left speaker", "right speaker", etc.), it works fine. However, when I run it through Unity, all I get is the front 3 channels. I've set it to 5.1 in Edit -> Project Settings -> Audio, and I even set it in code with the following:

        void Start()
        {
            AudioSettings.speakerMode = AudioSpeakerMode.Mode5point1;
        }

    However, when I run a debug line of

        print(AudioSettings.driverCaps);

    it tells me that Unity is only playing in stereo. Is there something I'm still not doing? I should also add that I've run 10 different tests using the 3D audio pan and spread options, setting both to fully off, halfway on, and full. Still the same results.
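
    In Unity 5.x the audio configuration can also be changed at runtime through AudioSettings.GetConfiguration and AudioSettings.Reset (a sketch below; whether this resolves the stereo fallback here is my assumption, not something from the post):

        // Minimal sketch, Unity 5.x: request 5.1 via the audio configuration API.
        // Note that Reset() stops all currently playing audio, so do this early.
        using UnityEngine;

        public class SurroundSetup : MonoBehaviour
        {
            void Awake()
            {
                AudioConfiguration config = AudioSettings.GetConfiguration();
                config.speakerMode = AudioSpeakerMode.Mode5point1;
                AudioSettings.Reset(config);
                Debug.Log("Speaker mode now: " + AudioSettings.GetConfiguration().speakerMode);
            }
        }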

    Read the article

  • Game engines and monetization of indie games

    - by Extrakun
    Does the game engine you use affect the monetization of indie games? Targeting difficult platforms is one obvious issue; besides that, how would the engine used impact monetization, assuming the developer is going through a portal or handling the online distribution themselves? As an example, if I make a game in DarkBASIC, will it be harder to sell than one made with the PopCap framework, ClanLib, etc.?

    Read the article

  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like this:

        GLfloat vertices[] = {
             0.5, 0.25, 1.0,
             0.5, 0.75, 1.0,
            -0.5, 0.75, 1.0,
            -0.5, 0.25, 1.0,

             0.6, 0.35, 1.0,
             0.6, 0.85, 1.0,
            -0.6, 0.85, 1.0,
            -0.6, 0.35, 1.0
        };

        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glLinkProgram(psId);

        glBindAttribLocation(psId, 0, "Position");
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    My vertex shader is:

        #version 150
        in vec3 Position;
        void main()
        {
            gl_Position = vec4(Position, 1.0);
        }

    My geometry shader is:

        #version 150
        #extension GL_EXT_geometry_shader4 : enable
        in vec4 pos[3];
        void main()
        {
            int i;
            vec4 vertex;
            gl_Position = pos[0];
            EmitVertex();
            gl_Position = pos[1];
            EmitVertex();
            gl_Position = pos[2];
            EmitVertex();
            gl_Position = pos[0] + vec4(0.3, 0.0, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }

    Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How does the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices are passed on to the geometry shader, and using those we construct a primitive of size 4 (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please throw some light on this?
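
    An observation of my own, not from the post: in the code above the geometry shader input pos[] is never written by the vertex shader, which only writes gl_Position. A minimal matched pair under GLSL 150 (where geometry shaders are core and layout qualifiers replace glProgramParameteriEXT) might look like this:

        // Vertex shader: forward the position under a user-defined name.
        #version 150
        in vec3 Position;
        out vec4 vPos;                  // must match the geometry shader's input name
        void main()
        {
            vPos = vec4(Position, 1.0);
            gl_Position = vPos;
        }

        // Geometry shader: input arrays take their size from the layout qualifier.
        #version 150
        layout(triangles) in;
        layout(triangle_strip, max_vertices = 4) out;
        in vec4 vPos[];                 // one entry per input vertex
        void main()
        {
            for (int i = 0; i < 3; ++i)
            {
                gl_Position = vPos[i];
                EmitVertex();
            }
            gl_Position = vPos[0] + vec4(0.3, 0.0, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }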

    Read the article

  • How to properly add texture to multi-fixture/shape b2Body

    - by Blazej Wdowikowski
    Hello everyone, this is my first post here; I hope it's not a false start. First I should say I worked through part 1 of Ray's tutorial "How To Make A Game Like Fruit Ninja With Box2D and Cocos2D". But what if I want to make a more complex body with a texture? Simple: just add n b2FixtureDefs to the same body. OK, but what about the texture? If I take the code from that tutorial, it only fills the last fixture; it probably doesn't take every b2Vec2 point. I was right, it did not. So, a quick refactor: from this

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original
        {
            // gather all the vertices from our Box2D shape
            b2Fixture *originalFixture = body->GetFixtureList();
            b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
            int vertexCount = shape->GetVertexCount();
            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];
            for (int i = 0; i < vertexCount; i++) {
                CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO,
                                shape->GetVertex(i).y * PTM_RATIO);
                [points addObject:[NSValue valueWithCGPoint:p]];
            }
            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    I came up with this:

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original
        {
            // gather the total number of b2Vec2 points across all fixtures
            int vertexCount = 0;
            b2Fixture *currentFixture = body->GetFixtureList();
            while (currentFixture) {
                b2PolygonShape *shape = (b2PolygonShape*)currentFixture->GetShape();
                vertexCount += shape->GetVertexCount();
                currentFixture = currentFixture->GetNext();
            }

            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];

            // gather all the vertices from every fixture
            b2Fixture *originalFixture = body->GetFixtureList();
            while (originalFixture) {
                NSLog((NSString*)@"-");
                b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
                int currentVertexCount = shape->GetVertexCount();
                for (int i = 0; i < currentVertexCount; i++) {
                    CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO,
                                    shape->GetVertex(i).y * PTM_RATIO);
                    [points addObject:[NSValue valueWithCGPoint:p]];
                }
                originalFixture = originalFixture->GetNext();
            }

            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    It was working for a simple two-fixture body like this:

        b2BodyDef bodyDef;
        bodyDef.type = b2_dynamicBody;
        bodyDef.position = position;
        bodyDef.angle = rotation;
        b2Body *body = world->CreateBody(&bodyDef);

        b2FixtureDef fixtureDef;
        fixtureDef.density = 1.0;
        fixtureDef.friction = 0.5;
        fixtureDef.restitution = 0.2;
        fixtureDef.filter.categoryBits = 0x0001;
        fixtureDef.filter.maskBits = 0x0001;

        b2Vec2 vertices[] = {
            b2Vec2( 0.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2( 0.0/PTM_RATIO,  0.0/PTM_RATIO),
            b2Vec2(50.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(60.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        b2PolygonShape shape;
        shape.Set(vertices, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

        b2Vec2 vertices2[] = {
            b2Vec2(20.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2(20.0/PTM_RATIO,  0.0/PTM_RATIO),
            b2Vec2(70.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(80.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        shape.Set(vertices2, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

    But if I put the second shape higher than the first, it starts looking weird: the texture goes crazy, and that's for simple cases, not to mention more complex shapes. What's more, if the shapes have one common point, the texture will not render for them at all. [For this I use PhysicsEditor, as in part 1 of the tutorial.] BTW, I use PolygonSprite, and in the createWithWorld... method, other shapes. Phew.

    Question: So my question is, why are the texture coordinates in such a mess? Is it my modified method, or just the wrong approach? Maybe I should remove duplicates from the points array?

    Read the article

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by evaluating the SH coefficients for all the bands you're using (for whatever accuracy you desire) in the direction of the surface normal and scaling the result by whatever you need (e.g. light color and intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way, as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?
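
    For reference (my addition: the standard SH shading background, not anything from the post), the idea is that both the incident radiance and the clamped-cosine lobe are projected onto the SH basis, so the diffuse irradiance integral collapses into a short weighted sum of coefficients:

        % Standard SH diffuse-irradiance identities (after Ramamoorthi & Hanrahan,
        % "An Efficient Representation for Irradiance Environment Maps").
        \begin{align}
          L(\omega) &\approx \sum_{l=0}^{n-1} \sum_{m=-l}^{l} c_l^m \, Y_l^m(\omega)
              && \text{radiance projected into SH} \\
          E(\mathbf{n}) &= \int_{\Omega(\mathbf{n})} L(\omega)\,
              \max(0,\, \mathbf{n} \cdot \omega)\, d\omega
              \;\approx\; \sum_{l,m} \hat{A}_l \, c_l^m \, Y_l^m(\mathbf{n})
              && \text{irradiance, with per-band scale } \hat{A}_l
        \end{align}

    The practical win is that once the environment is projected, evaluating E(n) costs a handful of multiply-adds per pixel or vertex, independent of how many lights or how complex an environment produced those coefficients.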

    Read the article

  • Where is a good spot to start when writing a LWJGL game engine?

    - by Alcionic
    I'm starting work on a huge game, and somewhere along my train of thought I decided it would be a good idea to write my own engine for it. I was originally going to use jMonkeyEngine, but there were some things about it that just didn't work well for me; I wanted full control over every aspect of the entire process. Where would be a good place to start when writing your own engine? I have no experience with LWJGL, but I learn quickly. Either advice, or a pointer to some place where there is good advice, would be nice. Thanks!

    Read the article

  • Smooth vector based jump

    - by Esa
    I started working through Wolfire's mathematics tutorials. I got the jumping working well using a step-by-step system, where you press a button and the cube moves to the next point on the jumping curve. Then I tried making the jump happen over a set time period, e.g. the jump starts and lands within 1.5 seconds. I tried the same system I used for the step-by-step implementation, but it happens instantly. After some googling I found that Time.deltaTime should be used, but I could not figure out how. Below is my current jumping code, which makes the jump happen instantly:

        while (transform.position.y > 0)
        {
            modifiedJumperVelocity -= jumperDrag;
            transform.position += new Vector3(modifiedJumperVelocity.x, modifiedJumperVelocity.y, 0);
        }

    Here modifiedJumperVelocity is the starting vector minus the jumper drag, and jumperDrag is the value subtracted from modifiedJumperVelocity during each step of the jump. Below is an image of the jumping curve:
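
    The usual fix (a sketch of mine, assuming Unity; the field names mirror the post but are otherwise hypothetical) is to let Update advance the jump a little every frame, scaling both the velocity change and the displacement by Time.deltaTime, instead of looping to completion inside a single frame:

        // Minimal sketch: frame-rate-independent jump in Unity.
        // Velocities are in units/second, drag in units/second^2.
        using UnityEngine;

        public class Jumper : MonoBehaviour
        {
            public Vector2 jumperVelocity = new Vector2(2f, 6f); // set at jump start
            public Vector2 jumperDrag = new Vector2(0f, 8f);     // gravity-like drag

            void Update()
            {
                if (transform.position.y > 0f || jumperVelocity.y > 0f)
                {
                    jumperVelocity -= jumperDrag * Time.deltaTime;                     // decelerate over time
                    transform.position += (Vector3)(jumperVelocity * Time.deltaTime);  // move only this frame's share
                }
            }
        }

    The while loop in the original runs the entire jump before the frame is ever presented, which is why it appears instantaneous; spreading it across Update calls is what makes the 1.5-second duration observable.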

    Read the article

  • Not getting desired results with SSAO implementation

    - by user1294203
    After implementing deferred rendering, I tried my luck with an SSAO implementation using this tutorial. Unfortunately, I'm not getting anything that looks like SSAO; you can see my result below: a weird pattern forms, and there is no occlusion shading where there needs to be (i.e. between the objects and on the ground). The shaders I implemented follow:

        // Vertex shader
        #version 330 core
        uniform mat4 invProjMatrix;

        layout(location = 0) in vec3 in_Position;
        layout(location = 2) in vec2 in_TexCoord;

        noperspective out vec2 pass_TexCoord;
        smooth out vec3 viewRay;

        void main(void)
        {
            pass_TexCoord = in_TexCoord;
            viewRay = (invProjMatrix * vec4(in_Position, 1.0)).xyz;
            gl_Position = vec4(in_Position, 1.0);
        }

        // Fragment shader
        #version 330 core
        uniform sampler2D DepthMap;
        uniform sampler2D NormalMap;
        uniform sampler2D noise;
        uniform vec2 projAB;
        uniform ivec3 noiseScale_kernelSize;
        uniform vec3 kernel[16];
        uniform float RADIUS;
        uniform mat4 projectionMatrix;

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void)
        {
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        mat3 CalcRMatrix(vec3 normal, vec2 texcoord)
        {
            ivec2 noiseScale = noiseScale_kernelSize.xy;
            vec3 rvec = texture(noise, texcoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            return mat3(tangent, bitangent, normal);
        }

        void main(void)
        {
            vec2 TexCoord = pass_TexCoord;
            vec3 Position = CalcPosition();
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

            mat3 RotationMatrix = CalcRMatrix(Normal, TexCoord);
            int kernelSize = noiseScale_kernelSize.z;

            float occlusion = 0.0;
            for (int i = 0; i < kernelSize; i++)
            {
                // Get sample position
                vec3 sample = RotationMatrix * kernel[i];
                sample = sample * RADIUS + Position;

                // Project and bias the sample position to get its texture coordinates
                vec4 offset = projectionMatrix * vec4(sample, 1.0);
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                // Get sample depth
                float sample_depth = texture(DepthMap, offset.xy).r;
                float linearDepth = projAB.y / (sample_depth - projAB.x);

                if (abs(Position.z - linearDepth) < RADIUS)
                {
                    occlusion += (linearDepth <= sample.z) ? 1.0 : 0.0;
                }
            }
            out_AO = 1.0 - (occlusion / kernelSize);
        }

    I draw a full-screen quad and pass depth and normal textures. Normals are in RGBA16F, with the alpha channel reserved for the AO factor in the blur pass. I store depth in a non-linear depth buffer (32F) and recover the linear depth using:

        float linearDepth = projAB.y / (depth - projAB.x);

    projAB.x and projAB.y are constants derived from the glm::perspective (gluPerspective) matrix in terms of the near and far clip distances z_n and z_f.

    As described in the link I posted at the top, the method creates samples in a hemisphere, distributed more densely close to the center. It then uses random vectors from a texture to rotate the hemisphere randomly around the Z direction, and finally orients it along the normal at the given pixel. Since the result is noisy, a blur pass follows the SSAO pass. Anyway, my position reconstruction doesn't seem to be wrong, since I also tried doing the same thing with the position passed from a texture instead of being reconstructed. I also tried playing with the radius, the noise texture size, and the number of samples, and with different texture formats, with no luck. For some reason, changing the radius changes nothing. Does anyone have any suggestions? What could be going wrong?
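
    The two projAB formulas were images that didn't survive the page; a plausible reconstruction (my derivation from the stated usage, assuming a [0,1] depth buffer written by a standard perspective projection — the original constants may use a different sign convention) is:

        % Constants so that z = projAB.y / (d - projAB.x) recovers linear
        % view-space depth from a nonlinear [0,1] depth value d:
        \begin{align}
          \mathrm{projAB}.x &= \frac{z_f}{z_f - z_n}, &
          \mathrm{projAB}.y &= \frac{z_f \, z_n}{z_n - z_f}
        \end{align}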

    Read the article

  • DrawIndexedPrimitives overdraws data in previous buffer if called in loop

    - by Daniel Excinsky
    I cross-posted this question from Stack Overflow, and will delete whichever copy doesn't get the answer. I have a Draw method in one of my renderers that loops through a dictionary and gets pre-collected, pre-initialized buffers. When the dictionary has only one element, everything is fine. But with more elements, what I get on the screen is only the data from the last buffer (I suppose; I'm not sure). My Draw method:

        public void Draw(GameTime gameTime)
        {
            if (!_areStaticEffectsSet)
            {
                // blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas);
                blockEffect.Parameters["HorizonColor"].SetValue(World.HORIZONCOLOR);
                blockEffect.Parameters["NightColor"].SetValue(World.NIGHTCOLOR);
                blockEffect.Parameters["MorningTint"].SetValue(World.MORNINGTINT);
                blockEffect.Parameters["EveningTint"].SetValue(World.EVENINGTINT);
                blockEffect.Parameters["SunColor"].SetValue(World.SUNCOLOR);
                _areStaticEffectsSet = true;
            }
            blockEffect.Parameters["World"].SetValue(Matrix.Identity);
            blockEffect.Parameters["View"].SetValue(_player.CameraView);
            blockEffect.Parameters["Projection"].SetValue(_player.CameraProjection);
            blockEffect.Parameters["CameraPosition"].SetValue(_player.CameraPosition);
            blockEffect.Parameters["timeOfDay"].SetValue(_world.TimeOfDay);

            var viewFrustum = new BoundingFrustum(_player.CameraView * _player.CameraProjection);
            _graphicsDevice.BlendState = BlendState.Opaque;
            _graphicsDevice.DepthStencilState = DepthStencilState.Default;

            foreach (KeyValuePair<int, Texture2D> textureAtlas in textureAtlases)
            {
                blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas.Value);
                foreach (EffectPass pass in blockEffect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    foreach (Chunk chunk in _world.Chunks.Values)
                    {
                        if (chunk == null || chunk.IsDisposed)
                        {
                            continue;
                        }
                        if (chunk.BoundingBox.Intersects(viewFrustum)
                            && chunk.GetBlockIndexBuffer(textureAtlas.Key) != null)
                        {
                            lock (chunk)
                            {
                                if (chunk.GetBlockIndexBuffer(textureAtlas.Key).IndexCount > 0)
                                {
                                    VertexBuffer vertexBuffer = chunk.GetBlockVertexBuffer(textureAtlas.Key);
                                    IndexBuffer indexBuffer = chunk.GetBlockIndexBuffer(textureAtlas.Key);
                                    //if (chunk.DrawIndex == new Vector3i(0, 0, 0))
                                    //{
                                    //    if (textureAtlas.Key == -1)
                                    //    {
                                    //        var varray = new []
                                    //        {
                                    //            new VertexPositionTextureLight(new Vector3(0,68,0), new Vector2(0,1), 1, new Vector3(0,0,0), new Vector3(1,1,1)),
                                    //            new VertexPositionTextureLight(new Vector3(0,68,1), new Vector2(0,1), 1, new Vector3(0,0,0), new Vector3(1,1,1)),
                                    //            new VertexPositionTextureLight(new Vector3(1,68,0), new Vector2(0,1), 1, new Vector3(0,0,0), new Vector3(1,1,1))
                                    //        };
                                    //        var iarray = new short[] { 0, 1, 2 };
                                    //        vertexBuffer = new VertexBuffer(_graphicsDevice, typeof(VertexPositionTextureLight), varray.Length, BufferUsage.WriteOnly);
                                    //        indexBuffer = new IndexBuffer(_graphicsDevice, IndexElementSize.SixteenBits, iarray.Length, BufferUsage.WriteOnly);
                                    //        vertexBuffer.SetData(varray);
                                    //        indexBuffer.SetData(iarray);
                                    //    }
                                    //}
                                    _graphicsDevice.SetVertexBuffer(vertexBuffer);
                                    _graphicsDevice.Indices = indexBuffer;
                                    _graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                        vertexBuffer.VertexCount, 0, indexBuffer.IndexCount / 3);
                                }
                            }
                        }
                    }
                }
            }
        }

    Noteworthy things about the code: the XNA version is 4.0; I've commented out the debugging code in the loop, but left it in since it may bring some insight; and in the loop I try to change not only the vertices/indices, but the textureAtlas as well.

    The shader code involving textureAtlas:

        Texture TextureAtlas;
        sampler TextureAtlasSampler = sampler_state
        {
            texture = <TextureAtlas>;
            magfilter = POINT;
            minfilter = POINT;
            mipfilter = POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        struct VSInput
        {
            float4 Position : POSITION0;
            float2 TexCoords1 : TEXCOORD0;
            float SunLight : COLOR0;
            float3 LocalLight : COLOR1;
            float3 Normal : NORMAL0;
        };

    VertexPositionTextureLight is my own implementation of IVertexType. So, does anybody know about this problem, or see what's wrong in my code (which is far more likely)?

    Read the article

  • Having to check collisions twice per game tick

    - by user22241
    I have vertically moving elevators (3 solid tiles wide) and static solid tiles. Each is a separate entity and therefore has its own collision routine (to check for, and resolve, collisions with the main character). I check my vertical collisions after the character's vertical movement, and then horizontal collisions after horizontal movement. The problem is that I want my platform to kill the player if it squashes him from the top, and also if he's on a moving platform (one moving up) that squashes him into a solid block.

        Correct behaviour: player on solid blocks being squashed from above by a descending elevator.

    Here is what happens. Gravity pushes the character into a solid block; the solid-block collision routine corrects the character's position and sits him on the solid block, which pushes him into the moving elevator; the elevator routine then checks for a collision and kills the player. This assumes I check solid blocks first, then elevator collisions. However, if it's the other way around, this happens:

        Incorrect behaviour: player on an ascending elevator gets pushed into the solid blocks above.

    The player is on an elevator moving up; gravity pushes him into the elevator; the solid-block CD routine detects no collision, so no action is taken. The elevator CD routine detects that the character has been pushed into the elevator by gravity, corrects this by moving the character up and sitting him on the elevator, and thereby pushes him into the solid blocks above. But the solid-block vertical routine has already run for this tick, so the game continues, and the next solid-block collision encountered is the horizontal routine. This detects a collision and moves the character out to the left or right of the block, which looks odd to say the least (the character should get killed here). The only way I've managed to get this working correctly is by running the solid-block CD, then the elevator CD, then the solid-block CD again straight after. This is clearly wasteful, but I can't figure out how else to do it. Any help would be appreciated.

    Read the article

  • Improving Click and Drag with C++

    - by Josh
    I'm currently using SFML 2.0 to develop a game in C++. I have a game sprite class with a click-and-drag method. The method works, but there is a slight problem: if the mouse moves too fast, the object the user selected can't keep up and is left behind at the spot where the mouse left its bounds. I will share the class definition and the relevant function implementation. Definition:

        class codePeg
        {
        protected:
            FloatRect bounds;
            CircleShape circle;
            int xPos, yPos, xDiff, yDiff, once;
            int xBase, yBase;
            Vector2i mousePos;
            Vector2f circlePos;
        public:
            void init(RenderWindow& Window);
            void draw(RenderWindow& Window);
            void drag(RenderWindow& Window);
            void setPegPosition(int x, int y);
            void setPegColor(Color pegColor);
            void mouseOver(RenderWindow& Window);
            friend int isPegSelected(void);
        };

    Implementation of the drag function:

        void codePeg::drag(RenderWindow& Window)
        {
            mousePos = Mouse::getPosition(Window);
            circlePos = circle.getPosition();
            if (Mouse::isButtonPressed(Mouse::Left))
            {
                if (mousePos.x > xPos && mousePos.y > yPos
                    && mousePos.x - bounds.width < xPos
                    && mousePos.y - bounds.height < yPos)
                {
                    if (once)
                    {
                        xDiff = mousePos.x - circlePos.x;
                        yDiff = mousePos.y - circlePos.y;
                        once = 0;
                    }
                    xPos = mousePos.x - xDiff;
                    yPos = mousePos.y - yDiff;
                    circle.setPosition(xPos, yPos);
                }
            }
            else
            {
                once = 1;
                xPos = xBase;
                yPos = yBase;
                xDiff = 0;
                yDiff = 0;
                circle.setPosition(xBase, yBase);
            }
            Window.draw(circle);
        }

    Like I said, the function works, but to me the code is very ugly, and I think it could be more efficient. The only explanation I can think of for the object not keeping up with the mouse is that there are too many function calls and/or checks; the user doesn't even have to move the mouse especially fast — at an average pace the object is still left behind. How can I improve the code so that the object stays with the mouse while it is selected? Any help improving this code or giving advice is greatly appreciated.
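
    A common cure (a sketch of mine, not from the post) is to latch a dragging flag when the press starts inside the bounds and then follow the mouse until release, so a fast-moving cursor can't escape the hit test mid-drag. isDragging is an assumed new bool member:

        // Minimal sketch, SFML 2.x: once a drag begins, track the mouse until
        // the button is released, instead of re-testing the bounds every frame.
        void codePeg::drag(sf::RenderWindow& window)
        {
            sf::Vector2i mouse = sf::Mouse::getPosition(window);
            if (sf::Mouse::isButtonPressed(sf::Mouse::Left))
            {
                if (!isDragging && bounds.contains(static_cast<float>(mouse.x),
                                                   static_cast<float>(mouse.y)))
                {
                    isDragging = true;   // latch: the drag has started
                    xDiff = mouse.x - static_cast<int>(circle.getPosition().x);
                    yDiff = mouse.y - static_cast<int>(circle.getPosition().y);
                }
                if (isDragging)
                    circle.setPosition(static_cast<float>(mouse.x - xDiff),
                                       static_cast<float>(mouse.y - yDiff));
            }
            else
            {
                isDragging = false;      // releasing the button ends the drag
            }
            window.draw(circle);
        }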

    Read the article

  • Deformation of Sphere using Transformations

    - by Mert Toka
    I have a graphics-related question. I need a transformation matrix, but I have no idea what it should be. The problem is to create the right image from the right sphere. I created those images in Maya, but I need the matrices for a graphics course. Here is the image:

    Our professor told us to use some sine and cosine in our transformations, but I have no idea what he meant. I thought of intersecting the sphere with a plane (the xz plane of the grid) and then scaling down the resulting circle. Would that work? I also checked this paper; however, it looks a bit advanced for me, and I guess it is not about the same type of information I was looking for anyway. It would be great if you could help me.

    Read the article

  • Boat passing under a bridge in a 2D tile based RTS

    - by aleguna
    I'm writing a 2D tile-based RTS, and I want to add a 'pseudo 3D' feature to it: bridges over the rivers. I haven't started any coding yet; I'm just trying to work out how it fits the collision detection model. A boat passing under the bridge and a unit moving over the bridge will eventually occupy the same cell on the map. How do I prevent them from colliding, as in the sketch below? Is there a common approach to solving such a problem, or do I need to implement a 3D world to do this?
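
    One common approach (my sketch, not from the post) is to give each unit a height layer encoded as a bit flag; two entities in the same cell only collide when their layer masks overlap, so a boat on the water layer passes freely beneath a soldier on the bridge deck:

        // Minimal sketch: bitmask "height layers" for a 2D tile map.
        #include <cstdint>

        enum Layer : std::uint8_t {
            Ground = 1 << 0,
            Water  = 1 << 1,
            Deck   = 1 << 2,   // walkable bridge surface above the water
        };

        struct Unit {
            std::uint8_t layer;  // the layer this unit currently occupies
        };

        // True only when both units occupy the same cell AND a shared layer.
        bool collides(const Unit& a, const Unit& b, bool sameCell)
        {
            return sameCell && (a.layer & b.layer) != 0;
        }

    A bridge tile then simply advertises both Water and Deck as passable layers, and a unit walking onto the bridge switches its layer from Ground to Deck.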

    Read the article

  • Dynamic Dijkstra

    - by Dani
    I need a dynamic Dijkstra algorithm that can update itself when an edge's cost changes, without a full recalculation. Full recalculation is not an option. I've tried to brew my own implementation with no success, and I've also tried to find one on the Internet, but found nothing. A link to an article explaining the algorithm, or even just its name, would be good. Edit: Thanks everyone for answering. I managed to make an algorithm of my own that runs in O(V+E) time; if anyone wishes to know the algorithm, just say so and I will post it.

    Read the article

  • Level and Player objects - which should contain which?

    - by Thane Brimhall
    I've been working on several simple games, and I keep reaching a decision point where I have to choose whether to make the Level object an attribute of the Player class, or the Player an attribute of the Level class. I can see arguments for both. The Level should contain the player because it also contains every other entity; in fact it just makes sense that way: "John is in the room." It makes it a bit more difficult to move the player to a new level, however, because each level then has to pass its player object along to the upcoming level. On the other hand, it makes programming sense to me to leave the player as the top-level object that persists between levels, with the environment changing because the player decides to change his level and location. It becomes very easy to change levels, because all I have to do is replace the level variable on the player. What's the most common practice here? Or better yet, is there a "right" way to architect this relationship?
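
    A third option worth sketching (my illustration, not from the post) is to let neither own the other: a top-level game object owns the persistent Player and the current Level, while the Level merely tracks which entities, player included, are currently inside it. All names below are hypothetical:

        // Minimal sketch: the player persists across levels; each level only
        // holds references to the entities currently inside it.
        using System.Collections.Generic;

        class Player { public string Name = "John"; }

        class Level
        {
            public List<object> Entities = new List<object>();  // "John is in the room"
        }

        class Game
        {
            public Player Player = new Player();   // persistent, owned here
            public Level CurrentLevel;             // swapped freely

            public void ChangeLevel(Level next)
            {
                if (CurrentLevel != null)
                    CurrentLevel.Entities.Remove(Player);
                next.Entities.Add(Player);
                CurrentLevel = next;               // no per-level hand-off needed
            }
        }

    This keeps the "John is in the room" containment intuition without forcing levels to pass the player object between themselves.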

    Read the article

  • Music Rhythm Game Difficulty Question

    - by David Dimalanta
    I have a curious question about the music rhythm genre that came up while writing code for my game: is it really better to generate a random note pattern for every piece of music played, or should there be a specific pattern depending on the music and the difficulty? I have observed that in the console game Guitar Hero 3, the difficulty is set by the number of strings used and the possible number of combos (e.g. two-string combos). By comparison, in Tap Tap Revenge for Android and iPhone, the difficulty is based on the BPM (beats per minute), i.e. the number of targets spawned that must be hit.

    Read the article

  • Dynamic audio score/music

    - by Joel Martinez
    I'm interested in developing a game whose background music changes with the mood and scenario of the game's action. Of course many existing games do this (Halo, for example), but I was interested in any resources/papers/articles discussing the techniques for developing a system like this. I have some ideas, and I understand that this will be as challenging to implement at the code level as it will be to come up with, or acquire, music that fits this model. Any links, or answers with ideas in them, would be appreciated. Edit: this is the kind of info I'm looking for :) http://halo.bungie.org/misc/gdc.2002.music/

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers: an object is selected, a menu choice is made, and the timer starts. Several events occur at different intervals during the timer's duration; these events are confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file, so that when the game is started again, each still-running timer can be checked by calculation to see whether it has since finished, in which case the game changes the materials appropriately. I am still trying to figure out what type of timer to use, and would also welcome suggestions for saving and calculating times spanning several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
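
    Given durations measured in days, one sketch (mine, not from the post) is to keep no running timers at all: store a UTC start time plus a duration, and derive the state on demand. This survives save/load trivially, since only two fields per timer need to go into the XML. All names are hypothetical:

        // Minimal sketch: wall-clock timers that need no Update loop
        // and serialize naturally.
        using System;

        class GameTimer
        {
            public DateTime StartUtc;   // written to XML on save
            public TimeSpan Duration;   // e.g. TimeSpan.FromDays(4)

            public bool IsDone
            {
                get { return DateTime.UtcNow - StartUtc >= Duration; }
            }

            // Fraction elapsed in [0,1]; useful for choosing which
            // material/chime stage currently applies, including after a load.
            public double Progress
            {
                get
                {
                    double t = (DateTime.UtcNow - StartUtc).TotalSeconds / Duration.TotalSeconds;
                    return Math.Max(0.0, Math.Min(1.0, t));
                }
            }
        }

    A once-per-second sweep over the active timers can then fire material changes and chimes when Progress crosses each event threshold, so even 200 timers cost almost nothing per frame.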

    Read the article

  • Play audio in javascript with a good performance

    - by João
    I'm developing a browser game where the player can shoot; every time he shoots, it plays a sound. Currently I'm using this code to play sounds in JavaScript:

        var audio = document.createElement("audio");
        audio.src = "my_sound.mp3";
        audio.play();

    I'm worried about performance here. Will 10 simultaneous sounds impact my game's performance too much? Will all the audio objects stay in memory even after they have played?
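
    One common mitigation (a sketch of mine, not from the post): pre-create a small pool of audio elements once and rotate through them, so firing a shot never allocates a new element and at most N sounds overlap:

        // Minimal sketch: fixed pool of audio elements reused round-robin.
        var POOL_SIZE = 10; // upper bound on simultaneous shot sounds
        var shotPool = [];
        var nextShot = 0;

        for (var i = 0; i < POOL_SIZE; i++) {
            shotPool.push(new Audio("my_sound.mp3")); // fetched once, then cached
        }

        function playShot() {
            var a = shotPool[nextShot];
            nextShot = (nextShot + 1) % POOL_SIZE;
            a.currentTime = 0; // rewind in case this element is still playing
            a.play();
        }

    The pool also bounds memory: instead of one garbage audio element per shot waiting to be collected, there are exactly POOL_SIZE elements for the lifetime of the game.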

    Read the article
