Search Results

Search found 28230 results on 1130 pages for 'embedded development'.

Page 488/1130 | < Previous Page | 484 485 486 487 488 489 490 491 492 493 494 495  | Next Page >

  • How to work with scenes in a 2D game

    - by Anearion
    I'm a Java/Android programmer, but I don't have any experience in game programming. I'm already reading proper books, like "Pro Android Games", but my concerns are more about the ideas behind game programming than the techniques themselves. I'm working on a 2D game, something like Cluedo, so you understand the genre. I would like to know how I should handle the "scenes", for example a room with a desk, TV, windows and a lamp. I need to make some items tappable and others not. Is it common to use one image (invisible to the user) with every different item in a different color, then call the getColor() method on the image? Or use one image as the background, and separate images for all the items? If the latter, how can I set the positioning? And should I use ImageView or ImageButton? I'm sorry if these are really low quality questions, but as an "outsider" (I'm 23 and still finishing university) it's pretty hard to learn alone.
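
    As an illustration of the hidden colour-map idea described above, here is a minimal sketch in Java/Android. The resource id, the scale handling and the class name are assumptions made for the example, not code from the question.

        import android.content.res.Resources;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // Maps a tap on the visible room image to a colour read from a hidden "hit map"
        // bitmap of the same size, where every tappable item is painted in a unique flat colour.
        public class SceneHitMap {
            private final Bitmap hitMap; // never drawn on screen

            public SceneHitMap(Resources res, int hitMapResId) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inScaled = false; // keep pixel coordinates aligned with the background art
                this.hitMap = BitmapFactory.decodeResource(res, hitMapResId, opts);
            }

            // touchX/touchY are screen coordinates; scaleX/scaleY map them back to bitmap pixels.
            public int itemColorAt(float touchX, float touchY, float scaleX, float scaleY) {
                int x = (int) (touchX / scaleX);
                int y = (int) (touchY / scaleY);
                if (x < 0 || y < 0 || x >= hitMap.getWidth() || y >= hitMap.getHeight()) {
                    return 0; // fully transparent: nothing tappable here
                }
                return hitMap.getPixel(x, y); // compare against the colours assigned to desk, TV, lamp...
            }
        }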

    Read the article

  • Splitting a texture atlas into separate images

    - by bigtunacan
    I'm doing a port of an existing game and the designer no longer has all of the original art; he only has the resulting texture atlases he used when developing for iPad. The tool I'm using won't support these files so I need to break them back out into separate PNG files. I'm hoping someone knows of a software tool that does this. PC software would be preferred in this case, but Mac would suffice.
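
    If no off-the-shelf tool turns up, cropping the regions back out is only a few lines once the frame rectangles are known (many atlas formats ship a .plist or .xml alongside the image with exactly that data). A hedged Java sketch; the file names and rectangle values are placeholders:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        // Crops one named region out of an atlas PNG and writes it as its own PNG file.
        public class AtlasSplitter {
            public static void extract(File atlasPng, String name,
                                       int x, int y, int w, int h, File outDir) throws Exception {
                BufferedImage atlas = ImageIO.read(atlasPng);
                BufferedImage frame = atlas.getSubimage(x, y, w, h); // shares pixels, no copy needed
                ImageIO.write(frame, "png", new File(outDir, name + ".png"));
            }

            public static void main(String[] args) throws Exception {
                // Placeholder rectangle; real values come from the atlas metadata file.
                extract(new File("characters.png"), "hero_idle_0", 0, 0, 64, 64, new File("out"));
            }
        }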

    Read the article

  • 2D vector graphics HTML5 framework

    - by Yury
    I'm trying to find an HTML5 game framework that meets the following criteria: 1) really good performance; 2) good support for vector graphics (objects built from canvas primitives - lines, rects, bezier curves, etc.); 3) easy porting to mobile. A physics engine is optional. What I found: 1) Pixi.js looks really good, but I didn't find any info about "vector object" support. 2) I found "vector object" support in Paper.js. I need something like these: http://paperjs.org/examples/chain/ and http://paperjs.org/examples/path-intersections/. But it looks like Paper.js doesn't perform as well as Pixi.js, and it isn't a game engine. Is there a good framework that meets these requirements? P.S. I found a similar question here: Which free HTML5-based game engine meets these requirements? But it was asked a long time ago, and a lot of new things have been created since 2011.

    Read the article

  • The practical cost of swapping effects

    - by sebf
    Hello, I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wonder if someone could explain exactly what is costly about this process, and put, if possible, 'relatively' into context. For example, say I wanted to use a short shader to help with picking. I would: 1) change the effect on every object, calculating a unique color to identify it and providing it to the shader; 2) draw all the objects to a render target in memory; 3) get the color from the target and use it to look up the selected object. What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
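
    For reference, the colour-id step in the picking pass described above is only a couple of bit operations in any language; a small Java sketch (the 24-bit packing is an assumption - any reversible mapping works):

        // Packs an object index into an opaque ARGB colour and back again,
        // so a pixel read from the picking render target identifies the object.
        public final class PickColor {
            private PickColor() {}

            public static int idToColor(int id) {
                return 0xFF000000 | (id & 0x00FFFFFF); // force alpha to 255, keep 24 bits of id
            }

            public static int colorToId(int argb) {
                return argb & 0x00FFFFFF;
            }
        }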

    Read the article

  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using the default new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually when arrays are resized), and creating lots of small objects which often contain a single pointer to some DirectX resource seems like an awful lot of waste. I'm thinking about: 1) creating a master look-up table to refer to objects by handles (for safety and ease of serialization), much like EntityList in the Source engine; 2) creating a templated object pool which stores items contiguously (more cache-friendly, fast iteration, etc.), with the stored elements accessed by external systems via the global lookup table. The object pool would use the swap-with-last trick for fast removal (invoking the object's destructor first) and would update the corresponding indices in the global table accordingly (when growing/shrinking/moving elements). The elements would be copied via plain memcpy(). Is this a good idea? Will it be safe to store objects of non-POD types (e.g. pointers, vtables) in such containers? Related post: Dynamic Memory Allocation and Memory Management
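
    The handle-table-plus-dense-array scheme itself is language-agnostic; below is a minimal, hedged Java sketch of the bookkeeping (the C++ questions about memcpy and non-POD types don't arise in Java, but the index patching on swap-with-last removal is the part worth illustrating):

        import java.util.ArrayList;
        import java.util.List;

        // Dense object pool addressed by stable integer handles.
        // Removal swaps the removed slot with the last element and patches the moved element's handle.
        public class HandlePool<T> {
            private final List<T> items = new ArrayList<>();               // contiguous storage, fast iteration
            private final List<Integer> denseToHandle = new ArrayList<>(); // parallel to items
            private final List<Integer> handleToDense = new ArrayList<>(); // -1 marks a retired handle

            public int add(T item) {
                int handle = handleToDense.size();   // sketch only: handles are never recycled here
                handleToDense.add(items.size());
                denseToHandle.add(handle);
                items.add(item);
                return handle;
            }

            public T get(int handle) {
                return items.get(handleToDense.get(handle));
            }

            public void remove(int handle) {
                int dense = handleToDense.get(handle);
                int last = items.size() - 1;
                // Move the last element into the freed slot and fix that element's handle mapping.
                items.set(dense, items.get(last));
                denseToHandle.set(dense, denseToHandle.get(last));
                handleToDense.set(denseToHandle.get(dense), dense);
                items.remove(last);
                denseToHandle.remove(last);
                handleToDense.set(handle, -1);
            }
        }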

    Read the article

  • Game programming basics under Windows

    - by dreta
    I've been trying to learn some Windows programming using the Win32 API. I'm used to working with the OS layer abstracted away, mostly thanks to libraries like SFML or Allegro. Could you help me out and tell me if I'm thinking right here? Is the place for my game loop where I'm reading the messages?

        while (TRUE)
        {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else
            {
                // my game loop goes here
            }
        }

    Now the slightly bigger issue, that is, drawing. Do I run my drawing where I normally do it, inside the game loop after the game logic? Or do I do it when WM_PAINT is being handled, and just call InvalidateRect(hwnd, NULL, TRUE); when I want to draw? This does feel weird; WM_PAINT is a queued message, so I don't know for sure when it'll be called. So if I wanted to avoid this, do I just get the device handle inside the game loop and only call ValidateRect(hwnd, NULL); in the WM_PAINT case (besides the ValidateRect(hwnd, NULL); called after drawing in the game loop)? Actually, now that I think about it, do I even need WM_PAINT in this situation, or can I skip it and let DefWindowProc handle it (does it validate the screen if WM_PAINT isn't processed)? If it matters, I'm setting up my code for OpenGL.

    Read the article

  • How do I dynamically reload content files?

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files, such as effect files? I know I can do the following: 1) detect a change to the file; 2) run the content pipeline to rebuild that specific file; 3) unload ALL content that was loaded; 4) load all content again - and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). I need to unload everything because if I have a model Hero.x which references the Model.fx effect, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. Has someone managed to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?
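
    The file-change-detection part (step 1) is the only piece that doesn't depend on the XNA content pipeline at all; for comparison, this is roughly what it looks like with a generic file watcher. A hedged Java sketch, since the question itself is about C#/XNA; the directory name is a placeholder:

        import java.nio.file.FileSystems;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardWatchEventKinds;
        import java.nio.file.WatchEvent;
        import java.nio.file.WatchKey;
        import java.nio.file.WatchService;

        // Blocks on a directory and prints every modified file; a game would instead
        // queue the path so the main thread can rebuild and reload just that asset.
        public class ContentWatcher {
            public static void main(String[] args) throws Exception {
                Path contentDir = Paths.get("Content");  // placeholder path
                WatchService watcher = FileSystems.getDefault().newWatchService();
                contentDir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

                while (true) {
                    WatchKey key = watcher.take();        // waits for the next batch of events
                    for (WatchEvent<?> event : key.pollEvents()) {
                        System.out.println("Changed: " + event.context());
                    }
                    key.reset();                          // re-arm the key or no further events arrive
                }
            }
        }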

    Read the article

  • Where can I find affordable legal advice for game software related inquiries?

    - by Steven Lu
    I am working on simulation middleware which is applicable to game engine implementations. What I would like to do is make it freely available for all non-commercial purposes, while at the same time imposing some percentage of royalty on revenue (above a certain threshold) derived from my work - something very similar to Epic's UDK licensing model. To facilitate the use of my software, I plan to offer binaries (static libs) for several platforms, as well as obfuscated source code which I will freely distribute, in addition to documentation of the API. I simply want to impose the restriction that if you try to make money from it, I eventually get a cut. I'm wondering if there are online forums and such where I am likely to find people willing to help me learn what I have to do to get this down on the right kinds of documents. So far a site like this seems to be the most promising.

    Read the article

  • Best practices of texture size

    - by psal
    I wanted to know how I should determine a good texture size. Currently I always create UV textures that are 1024x1024 px, but if I create, for example, a big house with a 1024 px texture, it will look pretty bad. So, should I create different texture sizes (512, 1024, ...) for different mesh sizes? Or is it better to always make a high-resolution texture and then reduce it in the software (i.e. increase the LODBias setting in UDK to reduce the size of the texture)? Thanks for your answer. PS: sorry for my English!

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads), and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs, since the vertices of my objects basically never change, so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs, but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes: I'm basing my work on this tutorial, and therefore am also using IBOs; I create my buffers before rendering begins; my buffers include vertex and texture data; I'm using OpenGL ES 1.1. Fearing some strange effect of the current matrix GL state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a pushMatrix-loadIdentity-popMatrix block, which (as expected) had no effect. I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to cause it? It's rather difficult for me to isolate the problem, since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • How do I create a camera?

    - by Morphex
    I am trying to create a generic camera class for a game engine which works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. In particular, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's Right axis; the 6DoF camera always rotates around its own axes; and the orbital camera just translates by a set distance and always looks at a fixed target point. The concepts are there; implementing this is not trivial for me. SharpDX seems to already have matrices and quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing or what I have got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts.
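
    To make the "Up/Forward/Right plus view matrix" relationship concrete, here is a hedged Java sketch of the standard look-at construction of a column-major view matrix from an eye position and a forward/up pair. It is an illustration only, not SharpDX code (SharpDX ships its own look-at helpers).

        // Builds a right-handed, column-major 4x4 view matrix from an eye position,
        // a forward direction and a world up vector (the classic look-at construction).
        public final class SimpleCamera {
            public static float[] viewMatrix(float[] eye, float[] forward, float[] up) {
                float[] f = normalize(forward);
                float[] r = normalize(cross(f, up)); // Right = Forward x Up
                float[] u = cross(r, f);             // re-orthogonalised Up
                return new float[] {
                    r[0], u[0], -f[0], 0,
                    r[1], u[1], -f[1], 0,
                    r[2], u[2], -f[2], 0,
                    -dot(r, eye), -dot(u, eye), dot(f, eye), 1
                };
            }

            static float[] cross(float[] a, float[] b) {
                return new float[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
            }

            static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

            static float[] normalize(float[] v) {
                float len = (float) Math.sqrt(dot(v, v));
                return new float[] { v[0]/len, v[1]/len, v[2]/len };
            }
        }

    An FPS camera would derive the forward vector from yaw/pitch angles, and an orbital camera from (target - eye); the matrix construction itself stays the same.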

    Read the article

  • Bouncing ball slowing down over time

    - by user46610
    I use Unreal Engine 4 to bounce a ball off walls in a 2D space, but over time the ball gets slower and slower. Movement happens in the tick function of the ball:

        FVector location = GetActorLocation();
        location.X += this->Velocity.X * DeltaSeconds;
        location.Y += this->Velocity.Y * DeltaSeconds;
        SetActorLocation(location, true);

    When a wall gets hit I get a Hit event with the normal of the collision. This is how I calculate the new velocity of the ball:

        FVector2D V = this->Velocity;
        FVector2D N = FVector2D(HitNormal.X, HitNormal.Y);
        FVector2D newVelocity = -2 * (V.X * N.X + V.Y * N.Y) * N + V;
        this->Velocity = newVelocity;

    Over time, the more the ball bounces around, the smaller the velocity gets. How do I prevent speed loss when bouncing off walls like that? It's supposed to be a perfect bounce, without friction or anything.

    Read the article

  • Issues with shooting in an HTML5 platformer game

    - by fnx
    I'm coding a 2D sidescroller using only JavaScript and the HTML5 canvas, and in my game I have two problems with shooting: 1) The player shoots a continuous stream of bullets. I want the player to shoot only a single bullet even though the shoot button is being held down. 2) I also get the error "Uncaught TypeError: Cannot call method 'draw' of undefined" when all the bullets are removed. My shooting code goes like this. When the player shoots, I do game.bullets.push(new Bullet(this, this.scale)); and after that:

        function Bullet(source, dir) {
            this.id = "bullet";
            this.width = 10;
            this.height = 3;
            this.dir = dir;
            if (this.dir == 1) {
                this.x = source.x + source.width - 5;
                this.y = source.y + 16;
            }
            if (this.dir == -1) {
                this.x = source.x;
                this.y = source.y + 16;
            }
        }

        Bullet.prototype.update = function() {
            if (this.dir == 1) this.x += 8;
            if (this.dir == -1) this.x -= 8;
            for (var i in game.enemies) {
                checkCollisions(this, game.enemies[i]);
            }
            // Check if bullet leaves the viewport
            if (this.x < game.viewX * 32 || this.x > (game.viewX + game.tilesX) * 32) {
                removeFromList(game.bullets, this);
            }
        }

        Bullet.prototype.draw = function() {
            // bullet flipping uses orientation of the player
            var posX = game.player.scale == 1 ? this.x : (this.x + this.width) * -1;
            game.ctx.scale(game.player.scale, 1);
            game.ctx.drawImage(gameData.getGfx("bullet"), posX, this.y);
        }

    I handle removing with this function:

        function removeFromList(list, object) {
            for (i in list) {
                if (object == list[i]) {
                    list.splice(i, 1);
                    break;
                }
            }
        }

    And finally, in the main game loop I have this:

        for (var i in game.bullets) {
            game.bullets[i].update();
            game.bullets[i].draw();
        }

    I have tried adding if (game.bullets.length > 0) to the main game loop before the above update and draw calls, but I still get the same error.

    Read the article

  • Fast software color interpolating triangle rasterization technique

    - by Belgin
    I'm implementing a software renderer with this rasterization method; however, I was wondering if there is a way to improve it, or if there exists an alternative technique that is much faster. I'm specifically interested in rendering small triangles, like the ones in a 100k-poly dragon model. The method I'm using is not perfect either, as it leaves small gaps from time to time (at least I think that's what's happening). I don't mind using assembly optimizations. Pseudocode or actual code (C/C++ or similar) is appreciated. Thanks in advance.
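
    For context, the usual starting point for this kind of renderer is a half-space (edge-function) rasterizer with barycentric colour interpolation; below is a hedged Java sketch of that baseline. It uses a simple inclusive >= 0 test, so shared edges may be filled twice - a consistent fill rule (e.g. top-left) is what removes both gaps and double hits between adjacent triangles.

        // Minimal half-space (edge-function) rasteriser with per-vertex colour interpolation.
        public class EdgeRasterizer {
            // framebuffer is width*height packed ARGB pixels.
            public static void fillTriangle(int[] framebuffer, int width, int height,
                                            float x0, float y0, int c0,
                                            float x1, float y1, int c1,
                                            float x2, float y2, int c2) {
                float area = edge(x0, y0, x1, y1, x2, y2);
                if (area == 0) return;                        // degenerate triangle
                if (area < 0) {                               // enforce consistent winding
                    float tx = x1, ty = y1; int tc = c1;
                    x1 = x2; y1 = y2; c1 = c2;
                    x2 = tx; y2 = ty; c2 = tc;
                    area = -area;
                }
                int minX = Math.max(0, (int) Math.floor(Math.min(x0, Math.min(x1, x2))));
                int maxX = Math.min(width - 1, (int) Math.ceil(Math.max(x0, Math.max(x1, x2))));
                int minY = Math.max(0, (int) Math.floor(Math.min(y0, Math.min(y1, y2))));
                int maxY = Math.min(height - 1, (int) Math.ceil(Math.max(y0, Math.max(y1, y2))));

                for (int y = minY; y <= maxY; y++) {
                    for (int x = minX; x <= maxX; x++) {
                        float px = x + 0.5f, py = y + 0.5f;   // sample at pixel centres
                        float w0 = edge(x1, y1, x2, y2, px, py);
                        float w1 = edge(x2, y2, x0, y0, px, py);
                        float w2 = edge(x0, y0, x1, y1, px, py);
                        if (w0 >= 0 && w1 >= 0 && w2 >= 0) {  // inside (or on an edge)
                            float b0 = w0 / area, b1 = w1 / area, b2 = w2 / area;
                            framebuffer[y * width + x] = lerpColor(c0, c1, c2, b0, b1, b2);
                        }
                    }
                }
            }

            // Signed area of the parallelogram spanned by (b-a) and (p-a).
            private static float edge(float ax, float ay, float bx, float by, float px, float py) {
                return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
            }

            private static int lerpColor(int c0, int c1, int c2, float b0, float b1, float b2) {
                int r = (int) (b0 * ((c0 >> 16) & 0xFF) + b1 * ((c1 >> 16) & 0xFF) + b2 * ((c2 >> 16) & 0xFF));
                int g = (int) (b0 * ((c0 >> 8) & 0xFF) + b1 * ((c1 >> 8) & 0xFF) + b2 * ((c2 >> 8) & 0xFF));
                int b = (int) (b0 * (c0 & 0xFF) + b1 * (c1 & 0xFF) + b2 * (c2 & 0xFF));
                return 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }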

    Read the article

  • Progress bar in Super Hexagon using OpenGL ES 2 (Android)

    - by user16547
    I'm wondering how the progress bar in Super Hexagon was made (see image, top left). Actually, I am not very sure how to implement a progress bar at all using OpenGL ES 2 on Android, but I am asking specifically about the one used in Super Hexagon because it seems less straightforward than others: the bar changes its colour during gameplay. One possibility is to use the built-in Android progress bar. I can see from some Stack Overflow questions that you can change the default blue colour to whatever you want, but I'm not sure whether you can update it during gameplay. The other possibility I can think of is to have a small texture that starts with a scale of 0 and keeps scaling until it reaches the maximum size, representing 100%. But this suffers from the same problem as before: you cannot update the colour of the texture at run time; it's fixed. So what's the best way to approach this problem? I'm assuming the developer didn't use a particular library, although if he did, it would be interesting to know. I'm interested in a pure OpenGL ES 2 + Android solution.
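
    One way to get both the growth and the colour change with plain OpenGL ES 2 is to draw an untextured unit quad, scale it by the current progress, and pass the colour as a uniform that is updated every frame. A hedged Java/Android sketch; the uniform names u_Model and u_Color and the drawUnitQuad callback are assumptions for the example, not part of any particular engine:

        import android.opengl.GLES20;
        import android.opengl.Matrix;

        // Draws a 1x1 unit quad stretched to `progress` of the bar width, tinted with an animated colour.
        public class ProgressBarRenderer {
            private final float[] model = new float[16];

            // program must expose u_Model (mat4) and u_Color (vec4); quad geometry setup is assumed elsewhere.
            public void draw(int program, float progress, float[] rgba, Runnable drawUnitQuad) {
                Matrix.setIdentityM(model, 0);
                Matrix.scaleM(model, 0, progress, 1f, 1f);   // grow along X from 0..1

                GLES20.glUseProgram(program);
                int modelLoc = GLES20.glGetUniformLocation(program, "u_Model");
                int colorLoc = GLES20.glGetUniformLocation(program, "u_Color");
                GLES20.glUniformMatrix4fv(modelLoc, 1, false, model, 0);
                GLES20.glUniform4f(colorLoc, rgba[0], rgba[1], rgba[2], rgba[3]); // colour can change every frame
                drawUnitQuad.run();                          // issues the actual glDrawArrays call
            }
        }

    Because the colour comes from a uniform rather than from texels, there is no texture whose colour would need to change at run time.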

    Read the article

  • Simple Math Multiplayer game - is Ajax sufficient?

    - by Christian Strang
    I'm planning to create a simple math multiplayer game, and I plan to just use Ajax for the server/client communication, but I'm not sure whether this is sufficient or whether I need a socket server. The game will look like this: 2-4 users all get a simple math task (like "37 + 14"); they have to solve it as fast as possible; the first user who solves it is the winner. I will track the time for each user since the game started on the client side, and every time a user gives an answer, the answer and the elapsed time will be sent to the server. Additionally, I'll add a function which checks every 3 seconds whether the other users have finished, how much time they needed, and who won. Do you think this is possible using just Ajax? What alternatives are there?

    Read the article

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible, fifth edition, and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple of shaders that implemented per-vertex coloring, lighting, and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop - coloring, lighting, texturing and all - but the desktop renders a small, distorted, non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader) the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea what I was trying to do. I'm a noob at GLSL, so the code could probably be a lot better. My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10. My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Sapphire, dual-booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?

        string vertexShaderSource = @"
        #version 330
        precision highp float;

        uniform mat4 projection_matrix;
        uniform mat4 modelview_matrix;
        //uniform mat4 normal_matrix;
        //uniform mat4 cmv_matrix;       //Camera modelview. Light sources are transformed by this matrix.
        //uniform vec3 ambient_color;
        //uniform vec3 diffuse_color;
        //uniform vec3 diffuse_direction;

        in vec4 in_position;
        in vec4 in_color;
        //in vec3 in_normal;
        //in vec3 in_tex_coords;

        out vec4 varyingColor;
        //out vec3 varyingTexCoords;

        void main(void)
        {
            //Get surface normal in eye coordinates
            //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);
            //Get vertex position in eye coordinates
            //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0);
            //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
            //Get vector to light source in eye coordinates
            //vec3 lightVecNormalized = normalize(diffuse_direction);
            //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);
            //Dot product gives us diffuse intensity
            //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz));
            //Multiply intensity by diffuse color, force alpha to 1.0
            //varyingColor.xyz = in_color * diff * diffuse_color.xyz;

            varyingColor = in_color;
            //varyingTexCoords = in_tex_coords;
            gl_Position = projection_matrix * modelview_matrix * in_position;
        }";

        string fragmentShaderSource = @"
        #version 330
        //#extension GL_EXT_gpu_shader4 : enable
        precision highp float;

        //uniform sampler2DArray colorMap;

        //in vec4 varyingColor;
        //in vec3 varyingTexCoords;

        out vec4 out_frag_color;

        void main(void)
        {
            out_frag_color = vec4(1,1,1,1);
            //out_frag_color = varyingColor;
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st);
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0));
            //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords);
        }";

    Note that in this code the color data is accepted but not actually used. The geometry comes out the same (wrong) way whether or not the fragment shader uses varyingColor. Only if I comment out the line varyingColor = in_color; does the geometry come out correctly. Originally the shaders took vec3 inputs; I only modified them to take vec4s while troubleshooting.

    Read the article

  • Are there existing FOSS component-based frameworks?

    - by Tesserex
    The component based game programming paradigm is becoming much more popular. I was wondering, are there any projects out there that offer a reusable component framework? In any language, I guess I don't care about that. It's not for my own project, I'm just curious. Specifically I mean are there projects that include a base Entity class, a base Component class, and maybe some standard components? It would then be much easier starting a game if you didn't want to reinvent the wheel, or maybe you want a GraphicsComponent that does sprites with Direct3D, but you figure it's already been done a dozen times. A quick Googling turns up Rusher. Has anyone heard of this / does anyone use it? If there are no popular ones, then why not? Is it too difficult to make something like this reusable, and they need heavy customization? In my own implementation I found a lot of boilerplate that could be shoved into a framework.
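
    For a sense of how small the reusable core actually is (and why many teams end up rewriting it), here is a hedged Java sketch of the base Entity/Component pair such a framework would ship. Everything here is illustrative; it is not code from Rusher or any other project.

        import java.util.HashMap;
        import java.util.Map;

        // Bare-bones entity/component skeleton: an entity is just a bag of components keyed by type.
        public class Entity {
            private final Map<Class<? extends Component>, Component> components = new HashMap<>();

            public <T extends Component> void add(T component) {
                components.put(component.getClass(), component);
                component.owner = this;
            }

            @SuppressWarnings("unchecked")
            public <T extends Component> T get(Class<T> type) {
                return (T) components.get(type);
            }

            public void update(float dt) {
                for (Component c : components.values()) {
                    c.update(dt);
                }
            }

            // Base component; concrete subclasses (graphics, physics, input...) override update().
            public abstract static class Component {
                protected Entity owner;
                public void update(float dt) { }
            }
        }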

    Read the article

  • Tiny Wings - Placing items

    - by Federico
    I'm currently developing a Flash game like Tiny Wings. I have a lot of the work done, but I'm currently working on placing the items (coins and obstacles) on the terrain. My player moves over an auto-generated terrain (based on Emanuele Feronato's tutorials), so every time the player's x position is greater than (screenWidth + x), another hill is generated, and so on. I'm currently having problems placing the items at the correct angle and putting 5 or more items together on a hill. Could you please help me with this? Thanks, regards. PS: This is the URL of the Emanuele Feronato post with the code to make the hills: http://www.emanueleferonato.com/2011/10/04/create-a-terrain-like-the-one-in-tiny-wings-with-flash-and-box2d-%E2%80%93-adding-more-bumps/
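
    One common way to do this is to sample the same height function the terrain is built from: place the items at a fixed horizontal spacing and rotate each one by the local slope angle (atan2 of rise over run). A hedged Java sketch of the idea; the Terrain interface and the hover offset are assumptions for the example, not part of the tutorial code:

        // Places `count` items evenly along a hill, each rotated to match the local slope.
        public class ItemPlacer {
            // Stand-in for whatever function produces the terrain surface height at x.
            public interface Terrain { float heightAt(float x); }

            // Returns one row per item: { x, y, angleInRadians }.
            public static float[][] place(Terrain terrain, float startX, float spacing, int count, float hover) {
                float[][] items = new float[count][3];
                for (int i = 0; i < count; i++) {
                    float x = startX + i * spacing;
                    float y = terrain.heightAt(x);
                    // Estimate the slope by sampling just behind and just ahead of the item.
                    float dy = terrain.heightAt(x + 1f) - terrain.heightAt(x - 1f);
                    float angle = (float) Math.atan2(dy, 2f); // rise over a run of 2 units
                    items[i][0] = x;
                    items[i][1] = y - hover;                  // lift the item slightly off the surface
                    items[i][2] = angle;
                }
                return items;
            }
        }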

    Read the article

  • Is it good practice to optimize FPS even when it's already above the lower limit that gives the illusion of movement?

    - by rraallvv
    I started out over 50 FPS on the iPhone, but now I'm below 30 FPS. I've seen most iPhone games clamped to either 60 or 30 FPS, even when 24 or less would give the illusion of movement. I've considered my limit to be a little over 15 FPS; in fact, my physics simulation is updated at that rate (15.84 steps/s), as that is the lowest rate that still gives fluid movement - a bit lower gives jerky motion. Is there a practical reason to clamp FPS well above the lower limit? Update: I can independently set the physics simulation step, the frame rate, and the simulation update interval. My concern is why I should clamp any of those to values greater than the minimum. For instance, to conserve battery life I could just choose the lower limits, but it seems that 60 and 30 FPS are the most used values.

    Read the article

  • Dynamic body implementation

    - by ArturoVM
    I am writing a 2D game where one of the characters has some very particular requirements. This character is a body with no particular shape (similar to a fluid, but not quite), it has to be able to grow and shrink (as in actually growing, not just scaling), and it has to have collision detection (even if it's basic). Because of these requirements it obviously can't be based on a sprite, so direct rendering of the shape seems the logical thing to do. I assume this is no easy task, but I just couldn't find a good physics engine that covers these requirements (or at least no tutorial on how to do it; I particularly searched for Box2D tutorials). Is there a way of doing this with Box2D, SDL, or any other physics or game engine out there? If not, what's a good place to start? I am really clueless as far as soft-body physics is concerned.

    Read the article

  • Sprite Animation in Android with OpenGL ES

    - by lijo john
    How do I do sprite animation in Android using OpenGL ES? What I have done: I am now able to draw a rectangle and apply my texture (the sprite sheet) to it. What I need to know: the rectangle currently shows the whole sprite sheet at once - how do I show a single frame of an action from the sprite sheet at a time and make the animation? It would be very helpful if anyone could share any ideas, links to tutorials, or suggestions. Thanks in advance to all.
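
    The usual approach is to keep the whole sheet bound as one texture and change only the texture coordinates of the quad each frame, so the quad samples a single cell of the grid. A hedged Java sketch of the UV math, assuming a simple regular grid layout:

        // Returns the texture coordinates (u0, v0, u1, v1) of one frame in a grid-layout sprite sheet.
        public final class SpriteSheetUV {
            public static float[] frameUV(int frameIndex, int columns, int rows) {
                int col = frameIndex % columns;
                int row = frameIndex / columns;
                float frameW = 1.0f / columns;
                float frameH = 1.0f / rows;
                float u0 = col * frameW;
                float v0 = row * frameH;
                return new float[] { u0, v0, u0 + frameW, v0 + frameH };
            }
        }

    Advancing the animation is then just frameIndex = (int) (elapsedSeconds * framesPerSecond) % frameCount, followed by re-uploading (or re-pointing) the quad's texture coordinate buffer.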

    Read the article

  • (int) Math.floor(x / TILESIZE) or just (int) (x / TILESIZE)

    - by Aidan Mueller
    I have an array that stores my map data, and my tiles are 64x64. Sometimes I need to convert from pixels to units of tiles, so I was doing:

        int x;
        int y;

        public void myFunction() {
            getTile((int) Math.floor(x / 64), (int) Math.floor(y / 64)).doOperation();
        }

    But I discovered, by using System.out.println((int) (1 / 1.5)); (I'm using Java, BTW), that converting to an int automatically rounds down. This means that I can replace the (int) Math.floor with just x / 64. But if I run this on a different OS, do you think it might give a different result? I'm just afraid there might be some case where this would round up and not down. Should I keep doing it the way I was, and maybe make a function like convert(int i) to make it easier? Or is it OK to just do x / 64?
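
    For what it's worth, the rounding behaviour here is defined by the Java language itself, not by the OS: integer division truncates toward zero, which matches Math.floor for non-negative values but not for negative ones. A small self-contained check:

        public class DivisionCheck {
            public static void main(String[] args) {
                // For non-negative coordinates the two forms agree...
                System.out.println(130 / 64);                      // 2
                System.out.println((int) Math.floor(130 / 64.0));  // 2

                // ...but int division truncates toward zero, so they differ for negatives.
                System.out.println(-130 / 64);                     // -2  (truncated toward zero)
                System.out.println((int) Math.floor(-130 / 64.0)); // -3  (true floor)
            }
        }

    So x / 64 is safe and portable as long as x can never be negative; if negative coordinates are possible, something like Math.floorDiv(x, 64) keeps the floor behaviour.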

    Read the article

  • Is using the student version of 3DS Max and Unity3d legal?

    - by SubZeron
    I am developing an indie game together with my friend using the Unity3D engine. I bought Silo 3D for modeling two months ago, and for texturing I use 3D-Coat. We plan to sell our game in the future. For the animations I work with 3DS Max (only the animation part). My question is: can I work with a student license? The license for the full version is too expensive for me; I am still at university and cannot buy the 3DS Max license, which costs 4000 €. As an alternative I have the choice between Blender (I can't work with this software and don't have time to invest in learning a new program) and trueSpace (it can't export FBX animation, especially with bones), so for me 3DS Max is the best choice to be effective and quick. Is it possible to prove it when I export my FBX characters from 3DS Max to Unity3D? I mean, can they find out after the release of the game that I used the student license of 3DS Max for the animations, maybe with the help of DRM? Can I solve the problem by exporting the FBX from 3DS Max to Blender and then exporting the same FBX to Unity3D?

    Read the article
