Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 529/1274

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and calling a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file, such that when the game is started again, a calculation is done for each still-running timer to see if it has finished, and the game changes the materials appropriately. I am still trying to figure out what type of timer to use, and I would also welcome any suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
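
    One way to handle timers that outlive a play session is to store each timer's start time and duration as absolute wall-clock values and recompute the elapsed time on load, rather than keeping a running countdown per object. Below is a minimal sketch of that idea; the TimerRecord type and its field names are illustrative assumptions, not a specific Unity API.

        using System;
        using System.Globalization;

        // Hypothetical record for one long-running timer (illustrative only).
        [Serializable]
        public class TimerRecord
        {
            public string ObjectId;        // which object the timer belongs to
            public string StartUtc;        // start time, stored as an ISO-8601 string for XML
            public double DurationSeconds; // total duration (5 minutes .. 4 days)

            public static TimerRecord Start(string objectId, double durationSeconds)
            {
                return new TimerRecord
                {
                    ObjectId = objectId,
                    StartUtc = DateTime.UtcNow.ToString("o"), // round-trip ("o") format
                    DurationSeconds = durationSeconds
                };
            }

            // Seconds elapsed since the timer started, based on wall-clock time.
            public double ElapsedSeconds()
            {
                DateTime start = DateTime.Parse(StartUtc, null, DateTimeStyles.RoundtripKind);
                return (DateTime.UtcNow - start).TotalSeconds;
            }

            // True once the full duration has passed, even across game restarts.
            public bool IsFinished()
            {
                return ElapsedSeconds() >= DurationSeconds;
            }
        }

    On load, any record whose IsFinished() is already true can trigger its material change immediately; the remaining records can be checked from a single Update loop (or a coarse repeating invoke) instead of keeping one timer object per item.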

    Read the article

  • How can I move a static Box2D object?

    - by user5198
    How can I move static Box2D sprites? I have tried this tutorial: http://www.raywenderlich.com/475/how-to-create-a-simple-breakout-game-with-box2d-and-cocos2d-tutorial-part-12. I managed to add another "paddle" object with a Box2D body, but I can't seem to be able to make the code move the second "paddle" body. Can anyone direct me how to do it? Is there a way to move a "b2_staticBody" Box2D object? I have tried, but I can only move it when I use "b2_dynamicBody"; if I use "b2_staticBody" I can't move it at all.

    Read the article

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure as to some of the implementation. I have done work doing file-IO maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but I don't know how to represent them effectively. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet? When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make the slope noticeable at about a 60-degree angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)? I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing to thin walls and open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevation will have effects on "combat" vision and movement.
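
    As a point of comparison for the CSV option, here is a minimal sketch of loading a grid where each cell holds a tile key and an elevation. The "key,elevation" cell format, the ';' cell separator, and the Cell type are assumptions made for illustration, not an established format.

        using System;
        using System.Globalization;
        using System.IO;

        // Hypothetical cell: which tile to spawn plus its elevation.
        public struct Cell
        {
            public string Key;    // e.g. "wall", "floor" (whatever the key table defines)
            public float Height;  // elevation in world units
        }

        public static class HeightMapLoader
        {
            // Each line is a row; each cell is "key,elevation"; cells are separated by ';'.
            public static Cell[,] Load(string path)
            {
                string[] rows = File.ReadAllLines(path);
                int columns = rows[0].Split(';').Length;
                var grid = new Cell[rows.Length, columns];

                for (int v = 0; v < rows.Length; v++)
                {
                    string[] cells = rows[v].Split(';');
                    for (int u = 0; u < cells.Length; u++)
                    {
                        string[] parts = cells[u].Split(',');
                        grid[v, u].Key = parts[0];
                        grid[v, u].Height = float.Parse(parts[1], CultureInfo.InvariantCulture);
                    }
                }
                return grid;
            }
        }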

    Read the article

  • How can I transform a Point2f with a matrix on Android?

    - by Vivendi
    I'm developing for Android and I'm using the android.renderscript.Matrix3f class to do some calculations. What I need to do now is something like mat.transform(pointIn, pointOut); so I need to use the matrix to transform a given point. In AWT I would simply do this: AffineTransform t = new AffineTransform(); Point2D.Float p = new Point2D.Float(); t.transform( p, p ); But in Android I now have this: Matrix3f t = new Matrix3f(); PointF p = new PointF(); // Now I need to transform it somehow.. But the Matrix3f class in Android doesn't have a Matrix.transform(Point2D ptSrc, Point2D ptDst) method. So I guess I have to do the transformation manually, but I'm not really sure how that works. From what I've seen it's something like a translate and then a rotate? Could anyone please tell me how to do this in code?
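
    The manual version is just the 3x3 matrix multiplied against the point in homogeneous coordinates (x, y, 1). Sketched below in C# purely to show the arithmetic; the column-major element layout is an assumption, so check how Matrix3f actually stores its elements before copying the indexing.

        // Transform (x, y) by a 3x3 matrix stored column-major as
        // [ m0 m3 m6 ]   [ x ]
        // [ m1 m4 m7 ] * [ y ]
        // [ m2 m5 m8 ]   [ 1 ]
        public static void Transform(float[] m, float x, float y, out float outX, out float outY)
        {
            outX = m[0] * x + m[3] * y + m[6];  // new x = row 0 dot (x, y, 1)
            outY = m[1] * x + m[4] * y + m[7];  // new y = row 1 dot (x, y, 1)
            // m[2], m[5], m[8] would only matter for a projective transform.
        }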

    Read the article

  • Implementing an FSM with ActionScript 2 without using classes?

    - by Up2u
    I have seen several references on A.I. and FSMs, but sadly I still can't understand the point of an FSM in AS2.0. Is it a must to create a class for each state? I have a game project which also has an A.I.; the A.I. has 3 states: distanceCheck, ChaseTarget, and Hit the target. It's an FPS game played via the mouse. I have created the A.I. successfully, but I want to convert it to the FSM method... My first state is CheckDistanceState(), and in that function I look for the nearest target and trigger the function ChaseState(); there I insert the Hit() function to destroy the enemy. The 3 functions that I created are being called in AI_cursor.onEnterFrame. Is there any chance to implement an FSM without the need to create a class? From what I've read before, you have to create a class. I prefer to write the code on frames in Flash and I still don't understand how to use external classes.
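
    The class-free pattern usually boils down to holding the current state as a function reference and calling it every frame; switching state is just reassigning the reference (in AS2 that means assigning a function to a variable and calling it from onEnterFrame). A rough illustration of the same idea, shown here in C# rather than AS2, with placeholder method names:

        using System;

        public class EnemyAI
        {
            // The current state is simply "the function to run this frame";
            // no class per state is required.
            private Action current;

            public EnemyAI() { current = CheckDistance; }

            // Call once per frame (the equivalent of onEnterFrame).
            public void UpdateAI() { current(); }

            private void CheckDistance()
            {
                bool targetInRange = false;          // placeholder condition
                if (targetInRange) current = Chase;  // state switch = reassignment
            }

            private void Chase()
            {
                bool closeEnough = false;            // placeholder condition
                if (closeEnough) current = Hit;
            }

            private void Hit()
            {
                // ... destroy enemy, then go back to scanning ...
                current = CheckDistance;
            }
        }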

    Read the article

  • How to spawn a character at a certain point and walk to a set point

    - by Robert H.
    I am making a game where I have a background image of a neighborhood. Each location has a different number of customers that are generated to walk on the sidewalks. They all walk to a specific location (like a stand or cart that sells stuff); after they get to the location I want them to interact with the cart. However, if another customer is already in a sale interaction, then the others get in line in order of arrival. After the transaction the customers walk off screen. Any information on how I can do this, and what game engine would be needed? Anyone have any idea where I should go for this? I already have my game done up through Eclipse/Java without any game engine.
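
    The line-forming part is typically just a first-in-first-out queue owned by the cart, with each customer running a tiny state machine (walk in, wait, transact, leave). A rough sketch of that structure, written in C# for illustration; the class and state names are made up for this example:

        using System.Collections.Generic;

        public enum CustomerState { WalkingToCart, WaitingInLine, InTransaction, Leaving }

        public class Customer
        {
            public CustomerState State = CustomerState.WalkingToCart;
        }

        public class Cart
        {
            private readonly Queue<Customer> line = new Queue<Customer>();
            private Customer current; // customer being served, if any

            // Called when a customer reaches the cart: join the line in arrival order.
            public void Arrive(Customer c)
            {
                c.State = CustomerState.WaitingInLine;
                line.Enqueue(c);
            }

            // Called every frame/tick: serve one customer at a time.
            public void Update()
            {
                if (current == null && line.Count > 0)
                {
                    current = line.Dequeue();
                    current.State = CustomerState.InTransaction;
                }
            }

            // Called when the sale finishes: free the cart and send the customer away.
            public void FinishTransaction()
            {
                if (current != null)
                {
                    current.State = CustomerState.Leaving;
                    current = null;
                }
            }
        }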

    Read the article

  • Direct2D Transform

    - by James
    I have a beginner question about Direct2D transforms. I have a 20 x 10 bitmap that I would like to draw in different orientations. To start, I would like to draw it vertically with a destination rectangle of, say: (left, top, right, bottom) (300, 300, 310, 320). The bitmap is wider than it is tall (20 x 10), but when I draw it vertically, it will appear taller than it is wide (10 x 20). I know that I can use a rotation matrix like so: m_pRenderTarget->SetTransform( D2D1::Matrix3x2F::Rotation( 90.0f, D2D1::Point2F(<center of shape>)) ); But when I use this method to rotate my shape, the destination rectangle is still wider than it is tall. Maybe it would look something like this: (left, top, right, bottom) (280, 290, 300, 300). The destination rectangle is 20 x 10 but the bitmap appears on the screen as 10 x 20, so I can't just look at the destination rectangle in the debugger and compare it to: (left, top, right, bottom) (300, 300, 310, 320). Is there any simple way to say "I want to rotate it so that the image is rendered to exactly this destination rectangle after the rotation?" In this case, I would like to say "Please rotate the bitmap so that it appears on the screen at this location:" (left, top, right, bottom) (300, 300, 310, 320). If I can't do that, is there any way to find out the 10 x 20 destination rectangle where the bitmap is actually being rendered to the screen?
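
    One workable approach is to rotate about the center of the desired on-screen rectangle and hand the draw call a rectangle with the width and height swapped around that same center; after the 90-degree rotation it lands exactly on the target. A small sketch of the rectangle math only (written in C# for illustration; this is not the Direct2D API):

        // Given the rectangle you want to see on screen AFTER a 90-degree rotation,
        // compute the rectangle to draw the bitmap into BEFORE rotating.
        // Both are (left, top, right, bottom); rotation is about the target's center.
        public static void PreRotationRect(
            float left, float top, float right, float bottom,
            out float dLeft, out float dTop, out float dRight, out float dBottom)
        {
            float cx = (left + right) * 0.5f;   // center stays fixed under the rotation
            float cy = (top + bottom) * 0.5f;
            float halfW = (right - left) * 0.5f;
            float halfH = (bottom - top) * 0.5f;

            // Swap width and height around the same center.
            dLeft = cx - halfH;
            dRight = cx + halfH;
            dTop = cy - halfW;
            dBottom = cy + halfW;
        }

    With the example above, the 10 x 20 target (300, 300, 310, 320) gives a 20 x 10 draw rectangle (295, 305, 315, 315); set the rotation about (305, 310) and draw the bitmap into that rectangle.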

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs one that contains diffuse, normals, instance ID and depth in that order and one that I use store the ssao result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render to everything I mentioned in the first FBO, then bind depth and normals textures for reading for the ssao pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred ssao value to the alpha component of the Normals texture. Here are where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage ouputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets resets after running the blur shader which is weird because if I use a float texture and write to it instanceID / nInstances, the texture doesn't get reset after the blur shader has ran. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << 
std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as, enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow gets reset i.e. I'm using a re-size function which re-sizes the textures of the FBO but should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }

    Read the article

  • Game Code Design for Rendering

    - by kuroutadori
    I first created a game on the iPhone and I'm now porting it to Android. I wrote most of the code in C++, but when it came to porting, it wasn't so easy. The Android way is to have two threads, one for rendering and one for updating. This is due to some devices blocking when updating the hardware. My problem is that I am coming from the iPhone: when I transition, say from the Menu to the Game, I would stop the Animation (Rendering) and load up the next Manager (the Menu has a Manager and so has the Game). I could implement the same thing on Android, but I have noticed that game ports like Quake don't do this - as far as I can tell. I have learnt that I cannot just dynamically add another Renderer class to the tree, because I will probably get a dequeuing buffer error - which I believe to be a problem on the OpenGL ES side. So how is it done?

    Read the article

  • How do I get the child of a unique parent in ActionScript?

    - by Koen
    My question is about targeting a child with a unique parent. For example, let's say I have a box people can move called box_mc and 3 platforms it can jump on, called Platform_1, Platform_2 and Platform_3. All of these platforms have a child element called hit (Platform_1 > hit, Platform_2 > hit, Platform_3 > hit). I use an array and a for each statement to detect if box_mc hits one of the platforms' children:

        var obj_arr:Array = [Platform_1, Platform_2, Platform_3];
        for each (obj in obj_arr) {
            if (box_mc.hitTestObject(obj.hit)) {
                trace(obj + " " + obj.hit);
                box_mc.y = obj.hit.y - box_mc.height;
            }
        }

    obj seems to output the unique parent it is hitting, but obj.hit outputs hit, so my theory is that it is applying the change of y to all the children called hit on the stage. Would it be possible to only detect the child of that specific parent?

    Read the article

  • Absorption 2D image effect

    - by Ed.
    I want to create a specific 2D image effect. It consists of modifying a sprite so it looks like it is being zoomed into a point or "absorbed" by that point. I'm not really sure what the technical name of this effect is, so I cannot explain it correctly. Here you can see a video of what I'm talking about; it is the effect when the character absorbs the three glyphs. http://www.youtube.com/watch?v=PIo-GddsMcU&t=4m45s What is the name of this effect? How can I implement it with XNA for 2D textures/sprites?
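
    One cheap way to fake this without shaders is to interpolate the sprite toward the absorb point each frame while shrinking (and optionally spinning) it, then remove it when the scale reaches zero. A rough XNA-flavoured sketch; the field names and easing values are assumptions, and a true "suck" warp of the pixels themselves would need a distortion shader instead.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class AbsorbedSprite
        {
            public Texture2D Texture;
            public Vector2 Position;   // current position
            public Vector2 Target;     // the absorbing point
            public float Scale = 1f;
            public float Rotation;
            public bool Done;

            public void Update(float dt)
            {
                float t = 4f * dt;                            // pull speed (tweak)
                Position = Vector2.Lerp(Position, Target, t); // drift toward the point
                Scale = MathHelper.Max(0f, Scale - dt);       // shrink over about 1 second
                Rotation += 6f * dt;                          // optional swirl
                Done = Scale <= 0f;
            }

            public void Draw(SpriteBatch batch)
            {
                Vector2 origin = new Vector2(Texture.Width / 2f, Texture.Height / 2f);
                batch.Draw(Texture, Position, null, Color.White,
                           Rotation, origin, Scale, SpriteEffects.None, 0f);
            }
        }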

    Read the article

  • In-Game Encyclopedias

    - by SHiNKiROU
    There are some games where there is an in-game encyclopedia in which you can learn many things about the characters and settings of the game, for example the Codex in Mass Effect. I want to know whether this is exclusive to BioWare, and to get inspired by other encyclopedia systems. What are some other examples of in-game encyclopedias? How effective are they? I would also like some examples where the in-game encyclopedia is not effective at all, or is an ignored feature.

    Read the article

  • How to play many sounds at once in OpenAL

    - by Krom
    Hello, I'm developing an RTS game and I would like to add sounds to it. My choice has landed on OpenAL. I have plenty of units which from time to time make sounds: fSound.Play(sfx_shoot, location). Sounds often repeat, e.g. when a squad of archers shoots arrows, but they are not synced with each other. My questions are: What is the common design pattern for playing multiple sounds in OpenAL when some of them are duplicates? What are the hardware limitations on sound counts, and what are the tricks to overcome them?
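
    A common pattern, regardless of the audio API, is a fixed pool of playback channels (OpenAL sources): each play request grabs a free channel, and when none are free the oldest, quietest or lowest-priority sound is stolen or the request is dropped. Sketched below in C# against a hypothetical IChannel interface rather than real OpenAL calls, since the binding-specific functions vary.

        using System.Collections.Generic;

        // Stand-in for one OpenAL source (hypothetical interface, not the OpenAL API).
        public interface IChannel
        {
            bool IsPlaying { get; }
            void Play(int soundId, float x, float y, float z);
            void Stop();
        }

        public class SoundPool
        {
            private readonly List<IChannel> channels;

            public SoundPool(List<IChannel> fixedChannels)
            {
                channels = fixedChannels; // e.g. 16-32 sources created once at startup
            }

            public void Play(int soundId, float x, float y, float z)
            {
                // Reuse the first idle channel.
                foreach (IChannel c in channels)
                {
                    if (!c.IsPlaying)
                    {
                        c.Play(soundId, x, y, z);
                        return;
                    }
                }
                // All busy: steal channel 0 (a real game would pick by priority/age/volume).
                channels[0].Stop();
                channels[0].Play(soundId, x, y, z);
            }
        }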

    Read the article

  • LWJGL or OpenGL double pixels

    - by Philippe Paré
    I'm working in Java with LWJGL and trying to double all my pixels. I'm trying to draw in an area of 800x450 and then stretch the whole frame image to the complete 1600x900 pixels without it getting blurred. I can't figure out how to do that in Java; everything I find is in C++... A hint would be great! Thanks a lot. EDIT: I've tried drawing to a texture created in OpenGL by setting it to the framebuffer, but I can't find a way to use glGenTextures() in Java... so this is not working... Also, I thought about using a shader, but I would not be able to draw only in the smaller region...

    Read the article

  • Rendering oily/polluted water?

    - by Fraser
    Any shader wizards out there have an idea of how to achieve an oily/polluted water effect, similar to this: Ideally, the water would not be uniformly oily; instead, the oil could be generated from some source (such as a polluting drain from a chemical plant) and then diffuse throughout the water body. My thought for this part would be to keep an "oil map" as a 2D texture that determines the density of oil at each point on the water surface. It would diffuse and move naturally with the water velocity at that point (I have a wave-particle simulation for dynamic waves, and am already doing something similar for foam on the water surface). However, I'm not sure how physically correct that would be, since oil might not move at the same velocity as the water. And I have no idea how to make all those trippy colors :-). Thoughts?

    Read the article

  • Mandelbrot set not displaying properly

    - by brainydexter
    I am trying to render the Mandelbrot set using GLSL. I'm not sure why it's not rendering the correct shape. Does the Mandelbrot calculation require values to be within a range for the (x,y) [or (real, imag)]? Here is a screenshot: I render a quad as follows:

        float w2 = 6;
        float h2 = 5;
        glBegin(GL_QUADS);
        glVertex3f(-w2, h2, 0.0);
        glVertex3f(-w2, -h2, 0.0);
        glVertex3f(w2, -h2, 0.0);
        glVertex3f(w2, h2, 0.0);
        glEnd();

    My vertex shader:

        varying vec3 Position;

        void main(void) {
            Position = gl_Vertex.xyz;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    My fragment shader (where all the meat is):

        uniform float MAXITERATIONS;
        varying vec3 Position;

        void main (void) {
            float zoom = 1.0;
            float centerX = 0.0;
            float centerY = 0.0;
            float real = Position.x * zoom + centerX;
            float imag = Position.y * zoom + centerY;
            float r2 = 0.0;
            float iter;
            for(iter = 0.0; iter < MAXITERATIONS && r2 < 4.0; ++iter) {
                float tempreal = real;
                real = (tempreal * tempreal) + (imag * imag);
                imag = 2.0 * real * imag;
                r2 = (real * real) + (imag * imag);
            }
            vec3 color;
            if(r2 < 4.0)
                color = vec3(1.0);
            else
                color = vec3( iter / MAXITERATIONS );
            gl_FragColor = vec4(color, 1.0);
        }

    Read the article

  • Not getting desired results with SSAO implementation

    - by user1294203
    After having implemented deferred rendering, I tried my luck with a SSAO implementation using this Tutorial. Unfortunately, I'm not getting anything that looks like SSAO, you can see my result below. You can see there is some weird pattern forming and there is no occlusion shading where there needs to be (i.e. in between the objects and on the ground). The shaders I implemented follow: #VS #version 330 core uniform mat4 invProjMatrix; layout(location = 0) in vec3 in_Position; layout(location = 2) in vec2 in_TexCoord; noperspective out vec2 pass_TexCoord; smooth out vec3 viewRay; void main(void){ pass_TexCoord = in_TexCoord; viewRay = (invProjMatrix * vec4(in_Position, 1.0)).xyz; gl_Position = vec4(in_Position, 1.0); } #FS #version 330 core uniform sampler2D DepthMap; uniform sampler2D NormalMap; uniform sampler2D noise; uniform vec2 projAB; uniform ivec3 noiseScale_kernelSize; uniform vec3 kernel[16]; uniform float RADIUS; uniform mat4 projectionMatrix; noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } mat3 CalcRMatrix(vec3 normal, vec2 texcoord){ ivec2 noiseScale = noiseScale_kernelSize.xy; vec3 rvec = texture(noise, texcoord * noiseScale).xyz; vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); vec3 bitangent = cross(normal, tangent); return mat3(tangent, bitangent, normal); } void main(void){ vec2 TexCoord = pass_TexCoord; vec3 Position = CalcPosition(); vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz); mat3 RotationMatrix = CalcRMatrix(Normal, TexCoord); int kernelSize = noiseScale_kernelSize.z; float occlusion = 0.0; for(int i = 0; i < kernelSize; i++){ // Get sample position vec3 sample = RotationMatrix * kernel[i]; sample = sample * RADIUS + Position; // Project and bias sample position to get its texture coordinates vec4 offset = projectionMatrix * vec4(sample, 1.0); offset.xy /= offset.w; offset.xy = offset.xy * 0.5 + 0.5; // Get sample depth float sample_depth = texture(DepthMap, offset.xy).r; float linearDepth = projAB.y / (sample_depth - projAB.x); if(abs(Position.z - linearDepth ) < RADIUS){ occlusion += (linearDepth <= sample.z) ? 1.0 : 0.0; } } out_AO = 1.0 - (occlusion / kernelSize); } I draw a full screen quad and pass Depth and Normal textures. Normals are in RGBA16F with the alpha channel reserved for the AO factor in the blur pass. I store depth in a non linear Depth buffer (32F) and recover the linear depth using: float linearDepth = projAB.y / (depth - projAB.x); where projAB.y is calculated as: and projAB.x as: These are derived from the glm::perspective(gluperspective) matrix. z_n and z_f are the near and far clip distance. As described in the link I posted on the top, the method creates samples in a hemisphere with higher distribution close to the center. It then uses random vectors from a texture to rotate the hemisphere randomly around the Z direction and finally orients it along the normal at the given pixel. Since the result is noisy, a blur pass follows the SSAO pass. Anyway, my position reconstruction doesn't seem to be wrong since I also tried doing the same but with the position passed from a texture instead of being reconstructed. I also tried playing with the Radius, noise texture size and number of samples and with different kinds of texture formats, with no luck. 
For some reason when changing the Radius, nothing changes. Does anyone have any suggestions? What could be going wrong?

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-Buffer render targets for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use a sphere-map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. First, I want to avoid the per-pixel matrix multiplication for calculating the position - is there another way to store and calculate position in the G-Buffer without the need for a matrix multiplication? Second, I want to store the normal in view space: every light in my engine is in world space, and I want to do the lighting in view space to speed up my lighting pass. I want an optimized lighting pass for my deferred engine.

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the coming boost::log library...). In games, especially remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need: performance; ease of use (allow streaming or formatting or something like that); reliability (doesn't leak or crash!); cross-platform support (at least Windows, MacOSX, Linux/Ubuntu). Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared those libs and more, your advice might be of good use. Games are often performance-demanding while complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.

    Read the article

  • How to monetize and/or protect framework rights?

    - by Arthur Wulf White
    I made a game engine/framework for ActionScript 3 that allows very efficient and flexible level design for platformers, tower defense games, RPGs, RTSs and racing games. The algorithms I used are new and are not available in any other level editor I've seen. What are the best ways to benefit myself and others with my new framework? It is written for ActionScript 3, so unless I translate it to C# I'm guessing it will be decompiled and used by others. I want to have some license allowing me to share the framework and still benefit from it. Any advice would be appreciated. This issue has been on my mind a lot this year. I am hoping to find a solution that will bring me some relief.

    Read the article

  • How do I draw a single Triangle with XNA and fill it with a Texture?

    - by Deukalion
    I'm trying to wrap my head around: http://msdn.microsoft.com/en-us/library/bb196409.aspx I'm trying to create a method in XNA that renders a single triangle, and then later make a method that takes a list of triangles and renders them as well. But it isn't working; I don't understand what all the pieces do, and there's not enough information. My methods:

        // Triangle is a struct with A, B, C (not included); A, B, C = Vector3
        public static void Render(GraphicsDevice device, List<Triangle> triangles, Texture2D texture)
        {
            foreach (Triangle triangle in triangles)
            {
                Render(device, triangle, texture);
            }
        }

        public static void Render(GraphicsDevice device, Triangle triangle, Texture2D texture)
        {
            BasicEffect _effect = new BasicEffect(device);
            _effect.Texture = texture;
            _effect.VertexColorEnabled = true;

            VertexPositionColor[] _vertices = new VertexPositionColor[3];
            _vertices[0].Position = triangle.A;
            _vertices[1].Position = triangle.B;
            _vertices[2].Position = triangle.B;

            foreach (var pass in _effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                device.DrawUserIndexedPrimitives<VertexPositionColor>
                (
                    PrimitiveType.TriangleList,
                    _vertices,
                    0,
                    _vertices.Length,
                    new int[] { 0, 1, 2 }, // example has something similar, no idea what this is
                    0, 3 // 3 = gives me an error, 1 = works but no results
                );
            }
        }
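
    For reference, here is a minimal sketch of one textured triangle that does draw (XNA 4.0 style): use VertexPositionTexture with texture coordinates, enable texturing on the BasicEffect, and pass a primitive count of 1. The World/View/Projection setup shown is an assumption - replace it with your own camera matrices.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public static void RenderTextured(GraphicsDevice device, Vector3 a, Vector3 b, Vector3 c, Texture2D texture)
        {
            var effect = new BasicEffect(device)
            {
                TextureEnabled = true,   // sample the texture instead of vertex colors
                Texture = texture,
                World = Matrix.Identity, // assumption: swap in your own camera setup
                View = Matrix.CreateLookAt(new Vector3(0, 0, 5), Vector3.Zero, Vector3.Up),
                Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.PiOver4, device.Viewport.AspectRatio, 0.1f, 100f)
            };

            // Each vertex needs a position and a texture coordinate in the 0..1 range.
            var vertices = new VertexPositionTexture[3];
            vertices[0] = new VertexPositionTexture(a, new Vector2(0f, 0f));
            vertices[1] = new VertexPositionTexture(b, new Vector2(1f, 0f));
            vertices[2] = new VertexPositionTexture(c, new Vector2(0f, 1f));

            foreach (var pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                // 3 vertices starting at offset 0, forming 1 triangle.
                device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1);
            }
        }

    In real code the BasicEffect should be created once and reused rather than allocated on every call.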

    Read the article

  • Textures of .x model deformed in XNA

    - by marc wellman
    I want a 3D model with textures, built in SketchUp 8, to be imported as a .x model in XNA. So far I have used several .x exporters: http://edecadoudal.googlepages.com/xExporter.rb, 3D RAD, zbylsxexporter. With all of them I have the same problem: the model gets built correctly but the textures are deformed. The sizes of my texture files are multiples of four, and inside SketchUp the model looks perfect. This is the texture file, which is 256x256: And this is how it looks in my XNA program: What can I do?

    Read the article

  • Moving a body in a specific direction using XNA with Farseer Physics

    - by Code Assasssin
    I have a custom polygon attached to a body, which looks like this: What I am trying to accomplish is getting the body to move in whatever direction its tip is pointing. So far this is what I've tried:

        if (ks.IsKeyDown(Keys.Up))
        {
            body.ApplyForce(new Vector2(0, -20), body.GetLocalPoint(new Vector2(0, 0)));
        }
        if (ks.IsKeyDown(Keys.Left))
        {
            body.ApplyTorque(-500);
        }
        if (ks.IsKeyDown(Keys.Right))
        {
            body.ApplyTorque(500);
        }

    The body rotates fine - but when I try making the body accelerate according to the tip of the body - assuming I have specified the tip correctly (I am pretty sure I haven't) - it just spins around, as if I had applied torque to it. Can anyone point me in the right direction on how to fix this problem?
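
    A sketch of the usual approach: derive the facing direction from body.Rotation and apply the force along it at the center of mass, so thrust and steering stay independent. This assumes the tip points along the body's local +X axis when Rotation is 0 - rotate the offset if your polygon's tip points elsewhere - and uses Farseer 3.x-style names, so adjust namespaces to your version.

        using System;
        using Microsoft.Xna.Framework;
        using FarseerPhysics.Dynamics;

        public static class ThrustHelper
        {
            // Push the body along the direction it is currently facing.
            // Assumes the tip points along local +X when body.Rotation == 0.
            public static void ThrustForward(Body body, float thrust)
            {
                Vector2 facing = new Vector2(
                    (float)Math.Cos(body.Rotation),  // local +X rotated into world space
                    (float)Math.Sin(body.Rotation));

                // Applying at the center of mass adds no torque, so steering
                // (ApplyTorque on Left/Right) stays independent of the thrust.
                body.ApplyForce(facing * thrust);
            }
        }

    It would be called from the input code in place of the ApplyForce line above, e.g. ThrustHelper.ThrustForward(body, 20f) while Keys.Up is down.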

    Read the article

  • Can I become a Game Designer? [on hold]

    - by user32721
    This is my first time posting something on a forum in 4 years. I am posting this because I want to adjust my expectations and goals regarding game design. I am in college in Morocco (Al Akhawayn University) and just started my junior year. I am a communications major (school of humanities) and a gender studies minor. I want to become a video game designer; it is the only career that I am interested in. I have been playing games ever since I was 5 and haven't stopped yet. Currently I don't have any noteworthy skills to become a designer: I don't know how to program (I don't really have the patience for it) and I can't draw to save my life. I haven't tried visual software like Maya or Max, so I can't comment on graphic design. So I basically want to know whether my current education is capable of helping me reach my goal. If not, should I take a master's in game design (in the U.S.?) or switch my minor to computer science? I am sorry that this post is long! I look forward to hearing your advice!

    Read the article

  • Win API Basic Paint program

    - by Tom Burman
    I'm just trying to learn a bit of the Win API. I'm trying to make a basic drawing app, a bit like MS Paint. For the time being I'm trying to get one function to work: when you left-click and drag the mouse around the screen, a line is drawn behind the mouse. Here's what I have so far, but for some reason: 1) the line starts drawing straight away rather than waiting for the left click, and 2) the line isn't solid, it's very dotty.

        case WM_MOUSEMOVE:
        {
            if(MK_LBUTTON){
                hdc = GetDC(hwnd);
                hPen = CreatePen(PS_SOLID, 5, RGB(0, 0, 255));
                SelectObject(hdc, hPen);
                int x = LOWORD(lParam);
                int y = HIWORD(lParam);
                MoveToEx(hdc, x, y, NULL);
                LineTo(hdc, LOWORD(lParam), HIWORD(lParam));
                ReleaseDC(hwnd, hdc);
            }
            else
                break;
        }
        }

    Thanks for any help!

    Read the article
