Search Results



  • How to handle circle penetration

    - by Kaertserif
    I've been working on circle-to-circle collision and have gotten the intersection method working correctly, but I'm having problems using the returned values to actually separate the circles from one another. This is the method which calculates the depth of the circle collision:

        public static Vector2 GetIntersectionDepth(Circle a, Circle b)
        {
            float xValue = a.Center.X - b.Center.X;
            float yValue = a.Center.Y - b.Center.Y;
            Vector2 depth = Vector2.Zero;
            float distance = Vector2.Distance(a.Center, b.Center);
            if (a.Radius + b.Radius > distance)
            {
                float result = (a.Radius + b.Radius) - distance;
                depth.X = (float)Math.Cos(result);
                depth.Y = (float)Math.Sin(result);
            }
            return depth;
        }

    This is where I'm trying to apply the values to actually separate the circles:

        Vector2 depth = Vector2.Zero;
        for (int i = 0; i < circlePositions.Count; i++)
        {
            for (int j = 0; j < circlePositions.Count; j++)
            {
                Circle bounds1 = new Circle(circlePositions[i], circle.Width / 2);
                Circle bounds2 = new Circle(circlePositions[j], circle.Width / 2);
                if (i != j)
                    depth = CircleToCircleIntersection.GetIntersectionDepth(bounds1, bounds2);
                if (depth != Vector2.Zero)
                {
                    circlePositions[i] = new Vector2(circlePositions[i].X + depth.X, circlePositions[i].Y + depth.Y);
                }
            }
        }

    If you can offer any help with this I would really appreciate it.
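
    A likely culprit: taking Math.Cos/Math.Sin of the penetration distance doesn't produce a direction. One common approach is instead to return the normalized center-to-center vector scaled by the penetration depth, then push the two circles half the depth apart each. A minimal sketch, assuming the same Circle/Vector2 types as above:

        public static Vector2 GetIntersectionDepth(Circle a, Circle b)
        {
            Vector2 offset = a.Center - b.Center;          // direction from b towards a
            float distance = offset.Length();
            float penetration = (a.Radius + b.Radius) - distance;
            if (penetration <= 0f || distance == 0f)
                return Vector2.Zero;                        // not overlapping (or concentric)
            return (offset / distance) * penetration;       // unit direction * overlap depth
        }

        // Applying it: move the circles half the depth each, in opposite directions.
        Vector2 depth = CircleToCircleIntersection.GetIntersectionDepth(bounds1, bounds2);
        if (depth != Vector2.Zero)
        {
            circlePositions[i] += depth * 0.5f;
            circlePositions[j] -= depth * 0.5f;
        }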

  • Why do I get a blinking screen when running lwjgl?

    - by SystemNetworks
    I didn't have any errors, but when I run my lwjgl game it gives me a blinking screen. Here is the code:

        package L1F3;

        import org.lwjgl.opengl.Display;
        import org.lwjgl.opengl.DisplayMode;
        import org.lwjgl.LWJGLException;
        import static org.lwjgl.opengl.GL11.*;

        public class Main {
            public static void main(String[] args) {
                try {
                    Display.setDisplayMode(new DisplayMode(640, 480));
                    Display.setTitle("A fresh display!");
                    Display.create();
                } catch (LWJGLException e) {
                    e.printStackTrace();
                    Display.destroy();
                    System.exit(1);
                }
                while (!Display.isCloseRequested()) {
                    Display.update();
                }
                Display.destroy();
                System.exit(0);
            }
        }

    How do I stop the blinking screen? I was thinking it's my frame rate. I deleted Display.sync but it still gives me all white and black. Last time it didn't give me a blinking screen. EDIT: When I remove Display.update(), it gives me a perfect screen with no blinking or white. Will my game work without it? I can also close it perfectly.
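
    One likely cause: Display.update() swaps the front and back buffers, and since nothing is ever drawn, the two uninitialized buffers alternate on screen. Clearing (and eventually drawing) every frame before the swap usually fixes it. A minimal sketch, using the GL11 static import already in the code above:

        glClearColor(0f, 0f, 0f, 1f);            // pick a background color once
        while (!Display.isCloseRequested()) {
            glClear(GL_COLOR_BUFFER_BIT);        // clear the back buffer every frame
            // ... render the scene here ...
            Display.update();                    // swap buffers
            Display.sync(60);                    // cap at ~60 FPS
        }

    Removing Display.update() only "works" because the window is never redrawn at all; a real game needs the clear-draw-swap loop.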

  • How do I find actors in an area on a poly-precise basis?

    - by Almo
    OK, I've been asking various questions and getting some good answers, but I think I need to rethink my method, so I'll describe the problem. I have a player who has a big blue box in front of him. This box shows which KActors will be pushed when he pulls the trigger. Currently, the blue box spawns a descendant of Actor which checks collision to see which KActors are touching it:

        foreach Owner.TouchingActors(class'DynamicSMActor', DynamicActorItt)
        {
            // do stuff
        }

    The problem is, if you check for touching between Actors and KActors, it looks like it does a plain axis-aligned bounding-box collision: the power will push a box on the lower right even when it's clearly not touching the blue box. How should I do this properly? I just need a way to find out which KActors are touching that area, on a poly-by-poly level. These collisions are only done with rectangular boxes and simple sphere collision; we are aware of the potential performance issues of poly-level collision with complex objects. I've tried making the collision checker a KActor, but it doesn't report any TouchingActors. This issue is causing us trouble in a lot of other places as well, so solving it is a core issue for our game.

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on a 2D map prototype and I've come to the rendering part of it. I have a tilesheet in which each tile is 30x30 pixels, with a 1px border delimiting them. In SFML the usual method of drawing part of a tilesheet is to declare an IntRect with the rectangle coordinates and then call the setTextureRect() method on a sprite. In a small game that would work, but I have well over 45 tiles and I'm adding more every day; I can't declare an IntRect for every material, and it would get even worse with a setTextureRect() call on top of every one of those declarations. How can I simplify this task? All I need is a very simple and flexible way of extracting a region of the tilesheet. Basically I have a Tile class; I create multiple instances of tiles (in vectors), and each tile has a position and a material. I parse a map file, set the materials of the map according to it as I parse, and then all I need to do is render. Right now I'd need to do something like this:

        switch (tile.getMaterial())
        {
        case GRASS:
            material_sprite.setTextureRect(something);
            window.draw(material_sprite);
            break;
        case WATER:
            material_sprite.setTextureRect(something);
            window.draw(material_sprite);
            break;
        // handle more cases
        }
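
    Since the tiles sit on a regular grid, the IntRect can be computed from the material index rather than declared per material. A minimal sketch, assuming the materials are numbered 0..N-1 left to right, top to bottom, and that the 1px border sits between tiles:

        #include <SFML/Graphics.hpp>

        // Returns the texture rectangle for the given tile index.
        sf::IntRect tileRect(int index, int tilesPerRow,
                             int tileSize = 30, int border = 1)
        {
            const int col = index % tilesPerRow;
            const int row = index / tilesPerRow;
            return sf::IntRect(col * (tileSize + border),
                               row * (tileSize + border),
                               tileSize, tileSize);
        }

        // Usage: no switch needed if the enum values match the sheet layout.
        material_sprite.setTextureRect(tileRect(tile.getMaterial(), tilesPerRow));
        window.draw(material_sprite);

    New tiles added to the sheet then need no code changes at all, as long as the enum order keeps matching the sheet layout.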

  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via CUDA. I've already passed the required data over to the GPU, and allocated and copied the data from the host. When I build the project it all compiles fine, but when I run it, the program says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions to the cudaMemcpy call; they are currently stored in a list, with a for loop iterating over each particle and calling the following line of code:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    My current host-side CUDA code looks like this:

        float* h_position; // Your host pointer. This holds the data (I assume it's already filled with the data.)
        float* d_position; // Your device pointer, we will allocate and fill this
        float* d_velocity;
        float* d_time;

        int threads_per_block = 128; // You should play with this value
        int blocks = m_maxParticles / threads_per_block + ((m_maxParticles % threads_per_block) ? 1 : 0);

        const int N = 10;
        size_t size = N * sizeof(float);

        cudaMalloc((void**)&d_position, m_maxParticles * sizeof(float));
        cudaMemcpy(d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

    Both of these can be found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:

        no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists

    I've probably messed up somewhere, but could someone please help me fix the issues I'm facing? Everything else seems to run fine; it's just when I try to run the program that certain things hit the fan.
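
    Both symptoms are consistent with h_position being declared but never allocated or filled: cudaMemcpy needs a contiguous array of floats as its source, not a container of structs. One common approach is to flatten the particle positions into a host staging array first. A minimal sketch, reusing the field names from the snippet above:

        // Allocate the host staging buffer and flatten the particle data into it.
        float* h_position = new float[m_maxParticles];
        for (int i = 0; i < m_maxParticles; ++i)
        {
            h_position[i] = m_particleList[i].positionY;
        }

        // Now the copy has a real, contiguous source to read from.
        cudaMalloc((void**)&d_position, m_maxParticles * sizeof(float));
        cudaMemcpy(d_position, h_position, m_maxParticles * sizeof(float),
                   cudaMemcpyHostToDevice);

        // ... launch the kernel, copy the results back, then clean up ...
        delete[] h_position;

    Passing m_particleList[i] directly fails for the same reason the compiler states: a ParticleType struct isn't convertible to the const void* buffer cudaMemcpy expects.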

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating-point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish, then pauses them before continuing with other computations. As this is the first time I'm making a threaded application, I'm not sure if the following method of thread synchronization is correct.

    First, I use two global variables to control the two threads: if a global variable is set to 1, that thread can start working; otherwise it must not work. Each thread runs an infinite loop checking this, and when it detects that the global variable has changed its value, it does its job and then sets the variable back to 0. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows down the program considerably, and it also fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of the fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it. My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiled for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
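
    A likely explanation for both symptoms: spinning on plain (non-atomic, non-volatile) globals is a data race, so the optimizer is free to hoist the load out of the loop (which is why Release/-O builds hang or misbehave), and busy-waiting burns whole cores (which is why it's slow even when it works). A minimal sketch of the same start/finish handshake using std::atomic; how the existing flags map onto these names is an assumption:

        #include <atomic>
        #include <thread>

        std::atomic<bool> run_even{false};   // main sets true; worker resets when done
        std::atomic<bool> run_odd{false};

        void worker(std::atomic<bool>& run /*, per-thread triangle data */)
        {
            for (;;)
            {
                while (!run.load(std::memory_order_acquire))
                    std::this_thread::yield();     // don't spin at full speed while idle
                // ... rasterize this thread's scanlines ...
                run.store(false, std::memory_order_release);
            }
        }

        // Main thread: fill the shared triangle structs FIRST, then release both workers.
        run_even.store(true, std::memory_order_release);
        run_odd.store(true, std::memory_order_release);
        while (run_even.load(std::memory_order_acquire) ||
               run_odd.load(std::memory_order_acquire))
            std::this_thread::yield();             // wait for both to finish

    A condition variable would avoid the spinning entirely, but even this change removes the undefined behavior that optimization exposes; the acquire/release pairs also guarantee the workers see the filled-in triangle structs.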

  • Write depth buffer to texture

    - by innochenti
    I need to read the depth buffer from the GPU and write it to a texture. How can this be done? Here is how the texture for the depth buffer is created:

        depthBufferDesc.Width = screenWidth;
        depthBufferDesc.Height = screenHeight;
        depthBufferDesc.MipLevels = 1;
        depthBufferDesc.ArraySize = 1;
        depthBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthBufferDesc.SampleDesc.Count = 1;
        depthBufferDesc.SampleDesc.Quality = 0;
        depthBufferDesc.Usage = D3D10_USAGE_DEFAULT;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL;
        depthBufferDesc.CPUAccessFlags = 0;
        depthBufferDesc.MiscFlags = 0;

        m_device->CreateTexture2D(&depthBufferDesc, NULL, m_depthStencilBuffer);

    Also, another question: is it possible to bind the depth buffer texture as a sampler to the pixel shader?
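
    Sampling the depth buffer in a pixel shader is possible in D3D10, but the resource has to be created with a typeless format and both bind flags, and then viewed through two differently typed views. A sketch of the usual pattern (error handling omitted; m_dsv and m_depthSRV are hypothetical member names):

        // Create the resource typeless so it can back both views.
        depthBufferDesc.Format    = DXGI_FORMAT_R24G8_TYPELESS;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;

        // Depth-stencil view: the typed depth format.
        D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
        dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
        dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
        m_device->CreateDepthStencilView(m_depthStencilBuffer, &dsvDesc, &m_dsv);

        // Shader resource view: depth readable in the red channel, stencil masked out.
        D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
        srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        m_device->CreateShaderResourceView(m_depthStencilBuffer, &srvDesc, &m_depthSRV);

    The one restriction is that the texture can't be bound for depth writes and for sampling at the same time; unbind it from one stage before using it in the other.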

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank.
    2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect this to produce 8-wide bars with 24-wide spaces between them. Instead, it produces a single white 1-px-wide bar.
    3) 0x55555555 = 01010101010101010101010101010101 in binary, so writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.
    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as one element, despite what the spec seems to suggest. The fact that I can generate a gray color (when I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion. Questions:

    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?
    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am, however, forcing a 3.0 context.)
    3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
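
    On route (3): if the bitmap is uploaded untouched as a GL_R8UI texture of width W/8, a fragment shader can do the bit extraction itself. A sketch of the idea in GLSL 1.30 (matching a 3.0 context), assuming MSB-first bit order as in GL_BITMAP:

        #version 130
        uniform usampler2D bitmapTex;   // W/8 x H, format GL_R8UI
        in vec2 texcoord;               // in pixels of the full W x H image
        out vec4 fragColor;

        void main()
        {
            ivec2 px   = ivec2(texcoord);
            // Fetch the byte containing this pixel, then extract its bit.
            uint  b    = texelFetch(bitmapTex, ivec2(px.x / 8, px.y), 0).r;
            uint  bit  = (b >> uint(7 - (px.x % 8))) & 1u;
            fragColor  = vec4(vec3(float(bit)), 1.0);
        }

    This sidesteps GL_BITMAP entirely, which seems prudent given that it was removed from the core profile.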

  • Is there a good reason I shouldn't use a java applet for a game?

    - by ryeguy
    I want to make a multiplayer browser-based game. The nice thing about using an applet is that I can make the client and the server in the same language (Java/Clojure/Scala/etc.). I know there's HTML5 and JavaScript, but server-side JavaScript isn't as mature as the JVM platform, and browser support is still kind of flaky. Applets don't seem to be widely used (except for RuneScape), but is there a reason they're unsuitable, or is it just because of the bad reputation they developed in their infancy?

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know what the performance difference is between using multiple sprites in one file (a sprite sheet) to draw a game character able to face in 4 directions, and using one sprite per file but rotating that character to my needs. I am aware that the sprite sheet method restricts the character to predefined directions, whereas the rotation method gives the character the freedom of "looking everywhere". Here's an example of what I am doing:

    Single sprite method. Assume I have a 64x64 texture that points north; to make it point east, I do the following:

        spriteBatch.Draw(
            _sampleTexture,
            new Rectangle(200, 100, 64, 64),
            null,
            Color.White,
            (float)(Math.PI / 2),
            Vector2.Zero,
            SpriteEffects.None,
            0);

    Multiple sprite method. Now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following:

        spriteBatch.Draw(
            _sampleSpritesheet,
            new Rectangle(400, 100, 64, 64),
            new Rectangle(64, 0, 64, 64),
            Color.White);

    So which of these methods uses less CPU time, and what are the pros and cons? Is .NET/XNA optimizing this in any way (e.g. noticing that the same call was made last frame and reusing an already rendered/rotated image that's still in memory)?

  • Getting collision detection in Pygame

    - by user36010
    I am writing a game in Pygame and I want to add collision detection. The aim is that when an object hits another, the target object disappears. For now I want to avoid classes and keep my code class-less, in one script. This makes collision detection harder, because the Rect functionality in Pygame is usually called on an object (a class instance). The logic I want to achieve is: an object hits a target object, and the target object disappears. Is there an easy way to achieve this, with minimal code?
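
    pygame.Rect works fine without any user-defined classes; plain rects kept in a list are enough. A minimal sketch of the hit-and-remove logic under that assumption:

        import pygame

        player = pygame.Rect(100, 100, 32, 32)        # the moving object
        targets = [pygame.Rect(200, 100, 32, 32),     # things that can be hit
                   pygame.Rect(300, 150, 32, 32)]

        # Each frame: keep only the targets the player is NOT touching.
        targets = [t for t in targets if not player.colliderect(t)]

    pygame.Rect.colliderect() returns True when two rectangles overlap; removing a rect from the list (and no longer blitting its image) makes that target "disappear".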

  • Sprite batching in OpenGL

    - by Roy T.
    I've got a Java-based game with an OpenGL rendering front-end that draws a large number of sprites every frame (during testing it peaked at 700). Right now the game is completely unoptimized: there is no spatial partitioning (so a sprite is drawn even if it isn't on screen), and every sprite is drawn separately like this:

        graphics.glPushMatrix();
        {
            graphics.glTranslated(x, y, 0.0);
            graphics.glRotated(degrees, 0, 0, 1);

            graphics.glBegin(GL2.GL_QUADS);
            graphics.glTexCoord2f(1.0f, 0.0f);
            graphics.glVertex2d(half_size, half_size); // upper right
            // same for upper left, lower left, lower right
            graphics.glEnd();
        }
        graphics.glPopMatrix();

    Currently the game runs at +-25 FPS and is CPU-bound. I would like to improve performance by adding spatial partitioning (which I know how to do) and sprite batching. Not drawing sprites that aren't on screen will help a lot; however, since players can zoom out, it won't help enough, hence the need for batching. But sprite batching in OpenGL is a bit of a mystery to me. I usually work with XNA, where a few classes that do this are built in, but in OpenGL I don't know what to do. As for further optimization, the game I'm working on has a few interesting characteristics: a lot of sprites share the same texture, and all the sprites are square. Maybe these characteristics will help determine an efficient batching technique?
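
    One common batching approach in this style of OpenGL is to transform the quads on the CPU into one big client-side vertex array, grouped by texture (a batch can only use one), and issue a single draw call per group. A sketch of the idea against the same JOGL-style GL2 object used above; Sprite, visibleSprites and appendTransformedQuad are hypothetical placeholders, and Buffers is JOGL's NIO helper:

        // Two floats per vertex, four vertices per sprite; positions are
        // pre-rotated/translated on the CPU instead of via the matrix stack.
        FloatBuffer verts = Buffers.newDirectFloatBuffer(spriteCount * 4 * 2);
        FloatBuffer texes = Buffers.newDirectFloatBuffer(spriteCount * 4 * 2);
        for (Sprite s : visibleSprites) {
            appendTransformedQuad(verts, texes, s);   // fill both buffers per sprite
        }
        verts.rewind();
        texes.rewind();

        graphics.glEnableClientState(GL2.GL_VERTEX_ARRAY);
        graphics.glEnableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
        graphics.glVertexPointer(2, GL2.GL_FLOAT, 0, verts);
        graphics.glTexCoordPointer(2, GL2.GL_FLOAT, 0, texes);
        graphics.glDrawArrays(GL2.GL_QUADS, 0, spriteCount * 4);
        graphics.glDisableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
        graphics.glDisableClientState(GL2.GL_VERTEX_ARRAY);

    Since many sprites share a texture, sorting by texture keeps the number of batches small; packing the shared textures into an atlas would reduce it further, often to a single draw call.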

  • Purchasing TV show adaptation rights: how does it work?

    - by Mikalichov
    Basically, I was thinking about a game based on a TV show, just for fun, and ended up thinking "well, it's not like it can be made anyway". Or can it? In the present situation, developing a game by myself/ourselves in my/our free time, and then using crowdfunding to purchase the rights, is not that crazy, if the show is really popular... and the rights not too expensive. Purchasing the rights to the whole show is obviously a sh!tload of money, but what about adaptation rights? What range of prices can they be in? Is it a percentage of the full rights? Does it depend on the kind of adaptation (novel vs. toy vs. game)? P.S. If it helps answer the question, I was thinking about an MLP:FiM retro RPG. Please don't laugh at me.

  • Do you need expensive servers and fancy hosting in order to make a multiplayer game?

    - by ThePlan
    I've finished working on an RPG, and it seems it would be so much more fun to make it multiplayer. SFML has a networking feature, so I figured it's possible; but then again, never in my life have I tried even basic networking, and in fact my knowledge of it is very limited. What would it take, resource-wise, to make a multiplayer game? I'm not talking about an MMO, more like a co-op type of game. Do I need mountains of cash to pay for hosting, servers, and many other things to make one?

  • How do I add a Rigid body and a box collider component to a Texture2D?

    - by gamenewdev
    I am making a snake game, basing it on a basic tutorial game which does no collision detection, wall checking, or different levels. The snake head, pieces, food, and even the background are all made of Texture2D. I want the head of the snake to detect 2D collisions with them, but Rect.Contains isn't working. I'd prefer to detect collisions via OnTriggerEnter(), for which I need to add a BoxCollider to my snake head.
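
    In Unity, colliders and rigidbodies attach to GameObjects, not to Texture2D assets, so each snake piece needs to live on a GameObject that displays its texture. A minimal sketch of one way to wire up the head (names are illustrative):

        using UnityEngine;

        public class SnakeHead : MonoBehaviour
        {
            void Start()
            {
                // Give the head physics components at runtime (or add them in the Inspector).
                Rigidbody body = gameObject.AddComponent<Rigidbody>();
                body.isKinematic = true;               // we move it ourselves, not the physics engine

                BoxCollider box = gameObject.AddComponent<BoxCollider>();
                box.isTrigger = true;                  // overlap events, no physical response
            }

            void OnTriggerEnter(Collider other)
            {
                if (other.CompareTag("Food"))
                    Destroy(other.gameObject);         // eat the food
            }
        }

    Food and wall objects need colliders of their own (non-trigger is fine), and at least one object in each colliding pair must carry a Rigidbody for OnTriggerEnter to fire.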

  • How can I stop pixel seams appearing in adjacent mesh boundaries due to floating point imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise, so when meshes are joined at their borders, the card sometimes decides that certain pixels at the seam belong to neither object, and unwanted pixels appear. This is natural behaviour on all graphics cards. How are such worries avoided in professional games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera-distance tweaks? (Screencap of a fix that ended up not working.)

  • Using PhysX, how can I predict where I will need to generate procedural terrain collision shapes?

    - by Sion Sheevok
    In this situation, I have terrain height values I generate procedurally. For rendering, I use the camera's position to generate an appropriately sized height map. For collision, however, I need height fields generated in areas where objects may intersect. My current potential solution, which may be naive, is to iterate over all "awake" physics actors, use their bounds/extents and velocities to generate spheres in which they may reside after a physics update, then generate height values for ranges encompassing clustered groups of actors. Much of that data is likely already calculated by PhysX, however. Is there some API, maybe a set of queries, or even callbacks from the spatial system, that I could use to predict where terrain height values will be needed?

  • Checking collisions between bullets and asteroids

    - by Moaz ELdeen
    I'm trying to detect collisions between two lists, of bullets and of asteroids. The code works fine, but when a bullet intersects an asteroid and then passes through another asteroid, the code gives an assertion saying it can't increment the iterator. I'm sure there is a small bug in this code, but I can't find it.

        for (list<Bullet>::iterator itr_bullet = ship.m_Bullets.begin();
             itr_bullet != ship.m_Bullets.end();)
        {
            for (list<Asteroid>::iterator itr_astroid = asteroids.begin();
                 itr_astroid != asteroids.end(); itr_astroid++)
            {
                if (checkCollision(itr_bullet->getCenter(), itr_astroid->getCenter(),
                                   itr_bullet->getRadius(), itr_astroid->getRadius()))
                {
                    itr_astroid = asteroids.erase(itr_astroid);
                }
            }
            itr_bullet++;
        }
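
    The assertion is consistent with a classic erase-in-loop bug: asteroids.erase() already returns the iterator to the next element, and the for loop's itr_astroid++ then advances it a second time, which can step past end() and trigger the debug assertion. One fix is to advance the iterator in exactly one place, a sketch:

        for (list<Asteroid>::iterator itr_astroid = asteroids.begin();
             itr_astroid != asteroids.end(); /* no increment here */)
        {
            if (checkCollision(itr_bullet->getCenter(), itr_astroid->getCenter(),
                               itr_bullet->getRadius(), itr_astroid->getRadius()))
            {
                itr_astroid = asteroids.erase(itr_astroid); // erase returns the next element
            }
            else
            {
                ++itr_astroid;                              // advance only when not erasing
            }
        }

    The same pattern applies to the outer loop if the bullet should also be removed on a hit.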

  • Rotate 2d sprite towards pointer

    - by Phil
    I'm using Crafty.js and am trying to point a sprite towards the mouse pointer. I have a function for getting the angle in degrees between two points, and I'm pretty sure it works mathematically (I have it under unit tests); at least it reports that the angle between {0,0} and {1,1} is 45, which seems right. However, when I set the sprite's rotation to the returned degrees, it's wrong all the time, whereas if I manually set it to 90, 45, etc., it points in the right direction. I have a jsfiddle that shows what I've done so far. Any ideas?
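
    Two things commonly bite here: screen y grows downward (so the angle math must use the same convention as the test points), and rotation 0 means "facing the direction the art was drawn in", which often needs a constant offset. A sketch of the usual computation; the +90 offset is an assumption about art that faces up at rotation 0:

        // Angle from the sprite's position to the mouse, in degrees.
        var dx = mouseX - sprite.x;
        var dy = mouseY - sprite.y;   // screen y is inverted vs. textbook math
        var degrees = Math.atan2(dy, dx) * 180 / Math.PI;

        // Crafty's 2D rotation is clockwise around the origin set via .origin();
        // offset for art that points up at rotation 0.
        sprite.origin("center");
        sprite.rotation = degrees + 90;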

  • Data-driven animations

    - by saadtaame
    Say you are using C/SDL for a 2D game project. It's often the case that people use a structure to represent a frame in an animation; the struct consists of an image and how long the frame is supposed to be visible. Is this data sufficient to represent a somewhat complex animation? Is it a good idea to separate animation management code from animation data? Can somebody provide a link to animation tutorials that store animations in a file and retrieve them when needed? I read about this in a book (AI Game Programming Wisdom) but would like to see a real implementation.
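
    For reference, a minimal sketch of that separation in C/SDL: the playback code is generic, and everything it plays comes from data. The file format mentioned at the end is invented for illustration:

        #include <SDL.h>

        typedef struct {
            SDL_Texture *image;     /* frame bitmap */
            float        duration;  /* seconds this frame stays visible */
        } Frame;

        typedef struct {
            Frame *frames;
            int    count;
            int    current;
            float  elapsed;         /* time spent on the current frame */
        } Animation;

        /* Advance playback by dt seconds; wraps around at the end. */
        void animation_update(Animation *a, float dt)
        {
            a->elapsed += dt;
            while (a->elapsed >= a->frames[a->current].duration) {
                a->elapsed -= a->frames[a->current].duration;
                a->current = (a->current + 1) % a->count;
            }
        }

    The data file then only needs an image path and a duration per frame (e.g. one "walk0.png 0.1" line each); a loader fills the Frame array, and the same animation_update() drives every animation in the game. Extra per-frame fields (events, hitboxes, offsets) can be added to the data without touching the management code, which is the main argument for keeping the two separate.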

  • Adding gesture recognizer (or dragging) to CCSprite

    - by user339946
    I'm trying to allow a CCSprite to be dragged across the screen. I've succeeded so far by doing it at the layer level (following a tutorial), but this only allows one sprite to be dragged at a time, since the method implementation can only identify a single sprite to move. I'd like to be able to add a gesture recognizer, or somehow implement ccTouchesBegan/Moved, in my own little CCSprite subclass. However, from what I understand, you can't just add gesture recognizers to CCSprites, and ccTouchMoved is also not available on CCSprites. I'm really confused as to how to implement touches in Cocos2D. What is the easiest way to add some position-translation code to a CCSprite so it can be dragged around? Thanks!
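
    In cocos2d-iphone, the usual pattern is a CCSprite subclass that registers itself as a targeted touch delegate; each sprite then receives (and can claim) its own touch, so several can be dragged independently. A sketch in the cocos2d 1.x style (in 2.x the dispatcher hangs off [CCDirector sharedDirector] instead of a singleton):

        @interface DraggableSprite : CCSprite <CCTargetedTouchDelegate>
        @end

        @implementation DraggableSprite

        - (void)onEnter {
            [super onEnter];
            [[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self
                                                             priority:0
                                                      swallowsTouches:YES];
        }

        - (void)onExit {
            [[CCTouchDispatcher sharedDispatcher] removeDelegate:self];
            [super onExit];
        }

        - (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            // Claim the touch only if it landed on this sprite.
            CGPoint p = [self convertTouchToNodeSpace:touch];
            return CGRectContainsPoint((CGRect){CGPointZero, self.contentSize}, p);
        }

        - (void)ccTouchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
            CGPoint p = [[CCDirector sharedDirector] convertToGL:
                         [touch locationInView:touch.view]];
            self.position = p;   // follow the finger
        }

        @end

    Because the delegate is targeted and swallows its touch, two fingers on two sprites get two independent drags with no layer-level bookkeeping.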

  • Can anyone point me to some open source directX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and operating principles of game engines and rendering engines. That being said, I want to run some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles; what I need is enough control that I can implement my own scene-graph algorithms and steer the rendering pipeline very specifically. My ideal candidate would be a rendering engine or game engine with a modular design: ready to go out of the box, but simple enough that I can rip out some of the guts of the rendering management and implement my own. It's a tough call, because I'm right at the level where it's almost better to start from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms. I just want to work on rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported, making me less likely to run into some obscure, less-documented problem, but OpenGL would be acceptable if the recommended framework were clearly better than my other options. EDIT: I should add that I really want GPU tessellation support (to add to the density of geometry detail).

  • Keyboard input is not detected in AS3

    - by Kaoru
    I have this problem: when I press 1 on the keyboard, the function does not run, even though I already declared it. How do I solve this? Here is the code:

        public function Character()
        {
            Constant.loaderMario.load(new URLRequest("mario.swf"));
            addEventListener(KeyboardEvent.KEY_DOWN, selectingPlayer);
        }

        private function selectingPlayer(event:KeyboardEvent):void
        {
            isoTextMario = new Word();
            isoTextChocobo = new Word();

            if (event.keyCode == Keyboard.NUMBER_1 || event.keyCode == 49)
            {
                Constant.marioSelected = true;
                if (Constant.marioSelected == true)
                {
                    isoTextMario.marioCharacter();
                    addChild(isoTextMario);
                }
            }
            else if (event.keyCode == Keyboard.NUMBER_2 || event.keyCode == 50)
            {
                Constant.chocoboSelected = true;
                if (Constant.chocoboSelected == true)
                {
                    isoTextChocobo.chocoboCharacter();
                    addChild(isoTextChocobo);
                }
            }
        }
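
    A common cause: keyboard events in AS3 only reach a display object that is on the display list and has focus, so listening on the object itself often gets nothing. Listening on the stage instead receives all key events. A sketch of that change (stage is null until the object is added to the display list, hence ADDED_TO_STAGE):

        public function Character()
        {
            Constant.loaderMario.load(new URLRequest("mario.swf"));
            // Wait until this object is on the display list, so stage is non-null.
            addEventListener(Event.ADDED_TO_STAGE, onAddedToStage);
        }

        private function onAddedToStage(event:Event):void
        {
            removeEventListener(Event.ADDED_TO_STAGE, onAddedToStage);
            stage.addEventListener(KeyboardEvent.KEY_DOWN, selectingPlayer);
        }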

  • Game Design - When to separate out pieces into static libraries?

    - by Jason
    I am developing a game that has a lot of platform-generic pieces. I want to separate out various pieces into static libraries, and I would like to know what other devs do. I am considering targeting other platforms, and I want to maintain as much platform neutrality as I can. I have a lot of generic level data in C++ classes; I'm thinking all of the level data could go into a single static library. I also have a lot of generic OpenGL code that I think could go into a single static library. I am already using CMake for some of it, and Xcode 4.5 for the Apple-specific pieces. What do other devs do to stay platform-neutral? Does anyone use Eclipse instead of Xcode and Visual Studio on Windows?
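
    Since CMake is already in the mix, the split can live in the build system itself: one static library target per platform-neutral piece, with a per-platform front end linking against them. A minimal sketch (target and directory names are illustrative):

        # Platform-neutral pieces: built the same way everywhere.
        add_library(leveldata STATIC src/level/Level.cpp src/level/Tile.cpp)
        add_library(glrender  STATIC src/render/Renderer.cpp src/render/Mesh.cpp)
        target_include_directories(leveldata PUBLIC include)
        target_include_directories(glrender  PUBLIC include)

        # Per-platform executable: only this part changes across platforms.
        if(APPLE)
            add_executable(game MACOSX_BUNDLE src/platform/mac/main.mm)
        else()
            add_executable(game src/platform/win/main.cpp)
        endif()
        target_link_libraries(game leveldata glrender)

    CMake can then generate the Xcode project for the Apple build and Visual Studio (or Eclipse CDT) projects elsewhere, so the library boundaries are defined exactly once.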

  • Working Qt controls in a 3d environment

    - by Jay
    I need some advice from a Qt expert. The background: I have a 3D engine (Ogre3D) working in concert with Qt. The 3D content is displayed in a widget (using a custom OS window in the client area). I'm able to overlay arbitrary Qt widgets onto the 3D world using the widget render() method and a shared bitmap. This makes a great heads-up display, and I can use standard Qt style sheets and animation with this technique. My goal: I'd like to go a step further and allow the user to move these rendered widgets using the mouse, and I'd like some advice on the best way to implement that. Possible solutions: (1) The widgets in the HUD are not part of the parent/child chain; I render them manually, but they don't get events. (2) I could add them to the parent/child chain so they get events in the usual way; then I would need to change them to render to my shared bitmap instead of to the operating system. I looked at this once but couldn't find enough information to implement it. (3) Capture mouse events in the 3D display widget and emit them to the child controls, basically creating my own event-handling chain. Any suggestions on how to implement this? I'm also considering switching to Qt 5, and I'm not sure how that might affect this decision.
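
    Route (3) is workable in plain Qt: the 3D widget can synthesize QMouseEvents in the offscreen widget's coordinate space and deliver them with QApplication::sendEvent(). A sketch of the forwarding step; hud (the offscreen widget) and hudPos (its top-left corner in viewport space) are assumptions based on the setup above:

        void GLDisplayWidget::mousePressEvent(QMouseEvent *event)
        {
            // Map from the 3D viewport into the offscreen widget's local space.
            const QPoint local = event->pos() - hudPos;
            if (hud->rect().contains(local)) {
                QMouseEvent forwarded(QEvent::MouseButtonPress, local,
                                      event->button(), event->buttons(),
                                      event->modifiers());
                QApplication::sendEvent(hud, &forwarded);
                update();   // re-render the HUD bitmap so the press is visible
            }
        }

    Move and release events forward the same way, which is enough for dragging; the main design cost is that focus, hover, and cursor changes each need the same manual treatment, which is why route (2) is attractive if the render-to-bitmap side can be solved.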
