Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

Page 617/1295 | < Previous Page | 613 614 615 616 617 618 619 620 621 622 623 624  | Next Page >

  • Isometric Collision Detection

    - by Sleepy Rhino
    I am having some issues trying to detect the collision of two isometric tiles. I have tried plotting the lines between each point on the tile and then checking for line intercepts, but that didn't work (probably due to an incorrect formula). After looking into this for a while today, I believe I am overthinking it and there must be an easier way. I am not looking for code, just some advice on the best way to detect the overlap.
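
    One way to avoid the line-intersection math entirely: an isometric tile is a diamond, i.e. a square stretched into screen space, and two same-sized diamonds can be tested with an ordinary AABB check after mapping their centres into "diamond space". A minimal C++ sketch of the idea (the Tile struct and halfW/halfH, half the tile's screen width and height, are assumptions rather than code from the question):

        #include <cmath>

        struct Tile { float cx, cy; };   // tile centre in screen coordinates

        bool diamondsOverlap(const Tile& a, const Tile& b, float halfW, float halfH) {
            // In (u, v) space each diamond becomes an axis-aligned square of
            // half-extent 1, so the test reduces to a plain AABB overlap.
            float ua = a.cx / halfW + a.cy / halfH;
            float va = a.cx / halfW - a.cy / halfH;
            float ub = b.cx / halfW + b.cy / halfH;
            float vb = b.cx / halfW - b.cy / halfH;
            return std::fabs(ua - ub) < 2.0f && std::fabs(va - vb) < 2.0f;
        }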

    Read the article

  • How to create a simple side scroller game

    - by D34thSt4lker
    I'm still pretty new to game programming, and every tutorial I have worked through has stuck to games that fit on a single screen. I want to start creating my own games, but there are a few things I still need to learn. One of them is how to create a game that side-scrolls, for example Mario, or any game of that type. Can anyone give me a small example of how to create something like that? I'm not asking for any specific language; currently in school I am learning JavaScript, but I also know some C++/Java/Processing/Objective-C, so any of those languages would be fine and I could probably carry it over to the others. I have been searching for help with this for a while now but could never actually get any. Thanks in advance!
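
    The core of any side-scroller, independent of language, is a camera offset: objects live in world coordinates and everything is drawn at its world position minus the camera position. A small C++ sketch of that idea (names are illustrative only):

        struct Camera { float x = 0.0f; };

        void followPlayer(Camera& cam, float playerWorldX, float screenWidth) {
            // Keep the player roughly centred, clamped at the level's left edge.
            cam.x = playerWorldX - screenWidth * 0.5f;
            if (cam.x < 0.0f) {
                cam.x = 0.0f;
            }
        }

        // When rendering, each object is drawn at (obj.worldX - cam.x, obj.worldY),
        // so the world appears to scroll while the player stays near the centre.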

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the UDK documentation about physical materials and masks. I have my 1-bit BMP mask and the two PhysicalMaterial assets I want to fire off on the black and white channels. I have applied my material to both a rigid body and a skeletal mesh, and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask) it works fine, but this defeats the point because it gives only one hit reaction. The documentation states that it is possible to extend a class on which we want to use a physical material, based on the KActor class's usage. How do I do that? Here is the quote: "The following properties [i.e., ImpactEffect - Particle system to spawn at the point of impact + ImpactSound - Sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially, how do I make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class, perhaps, to get it going? Any help appreciated.

    Read the article

  • How to get local point inside a body where mouse click occurred in box2d?

    - by humbleBee
    I need to find out the point inside a body, let's say a rectangular object, where the mouse was clicked. I'm making a game where the force will be applied depending on where on the body the mouse was clicked. Any ideas? Will body.GetLocalPoint(b2Vec2) work? I tried passing in the mouse coordinates when the click occurred inside the body, but if the body's position is (400,300) in world coordinates, then for trace(body.GetLocalPoint(new b2Vec2(mouseX, mouseY)).x); I get some value between 380 and 406 or so (e.g. 401.6666666). I thought GetLocalPoint would give something like x = -10 when clicking to the left of the centre of the body, or x = 15 when clicking to the right. The language is AS3, by the way.
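
    GetLocalPoint() should do this, but it expects the point in world coordinates (metres), while mouse events give screen pixels, which would explain values sitting near the body's pixel position. A sketch using Box2D's native C++ API, which the AS3 port mirrors (PIXELS_PER_METER is an assumed conversion factor, often 30 in Flash ports):

        #include <box2d/box2d.h>

        const float PIXELS_PER_METER = 30.0f;   // assumption: your world scale

        b2Vec2 clickPointInBody(const b2Body* body, float mouseX, float mouseY) {
            // Convert the click from pixels to world metres first.
            b2Vec2 worldPoint(mouseX / PIXELS_PER_METER, mouseY / PIXELS_PER_METER);
            // The result is the offset from the body's origin, in metres; multiply
            // by PIXELS_PER_METER again if you want it back in pixels.
            return body->GetLocalPoint(worldPoint);
        }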

    Read the article

  • Fastest approach to 3D animation

    - by HappyFerret
    I'm currently tasked with designing a small HTML5 game. Having done everything by myself so far (3D models, codebase, game design, etc.), I'm now at a point where I'm running out of time: I have less than a day to animate and bind everything together. However, that's exactly my problem. I was under the naive impression that everything would be easier if I went with pre-rendered 3D models, but I didn't consider the most difficult part: animation. After having spent over an hour trying to figure out messiahStudio, I figured it's time to ask for outside help. Is there any easier approach to 3D animation than rigging? What I'm basically looking for is some sort of tool that lets me simply grab and move/deform selected polygons. It doesn't have to be as life-like and accurate as rigging, just efficient enough. Were the circumstances any different, I might just learn how to rig, but that's sorely out of scope right now. PS: The models were created in Sculptris but are fairly low-poly.

    Read the article

  • Data structures for a 3D array

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, where the third dimension comes in when moving down into caves and the like. This is not memory-efficient, though, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile cannot occupy the same (x, y) spot as an existing tile, even on a different z level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however, now it works like this: map.blockData[x][y] = new Block(z); I'm not sure why, but if I try to use the same spot on, say, the floor below, it won't let me. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across languages.
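
    One option that keeps the 2D array but still allows several blocks in the same column is to store a small map keyed by z per cell, so only occupied levels cost memory. A sketch in C++ (the same idea maps directly to a HashMap<Integer, Block> per cell in Java; names are illustrative):

        #include <map>
        #include <vector>

        struct Block { /* tile data */ };

        struct Column {
            std::map<int, Block> levels;             // z -> block, sparse
        };

        // Indexed as world[x][y]; each cell can hold any number of z levels.
        std::vector<std::vector<Column>> world;

        // Usage sketch:
        //   world[x][y].levels[z] = Block{};        // place a block at depth z
        //   auto it = world[x][y].levels.find(z);   // look one up, if present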

    Read the article

  • Collision rectangle response

    - by dotty
    I'm having difficulty getting a movable rectangle to collide with more than one rectangle. I'm using SFML, which has a handy intersects() function that takes two rectangles and returns the intersection. I have a vector of rectangles that I want my movable rectangle to collide with, and I'm looping through it with the following code (p is the movable rectangle; isCollidingWith() returns a bool and uses SFML's intersects() to work out the intersection):

        for (unsigned i = 0; i != testRects.size(); i++) {
            if (p.isCollidingWith(testRects[i])) {
                p.collide(testRects[i]);
            }
        }

    The actual collide() code:

        void gameObj::collide(gameObj collidingObject) {
            printf("%f %f\n", this->colliderResult.width, this->colliderResult.height);
            if (this->colliderResult.width < this->colliderResult.height) {
                // collided on X
                if (this->getCollider().left < collidingObject.getCollider().left) {
                    this->move(-this->colliderResult.width, 0);
                } else {
                    this->move(this->colliderResult.width, 0);
                }
            }
            if (this->colliderResult.width > this->colliderResult.height) {
                if (this->getCollider().top < collidingObject.getCollider().top) {
                    this->move(0, -this->colliderResult.height);
                } else {
                    this->move(0, this->colliderResult.height);
                }
            }
        }

    And the isCollidingWith() code:

        bool gameObj::isCollidingWith(gameObj testObject) {
            if (this->getCollider().intersects(testObject.getCollider(), this->colliderResult)) {
                return true;
            } else {
                return false;
            }
        }

    This works fine when there is only one rect in the scene. However, when there is more than one rect, things go wrong whenever two collisions have to be worked out at once. Any idea how to deal with this correctly? I have uploaded a video to YouTube to show my problem; the console on the far right shows the width and height of the intersections, and you can see it trying to calculate two collisions at once, which I think is where the problem is caused. The YouTube video is at http://www.youtube.com/watch?v=fA2gflOMcAk and this image also seems to illustrate the problem nicely. Can someone please help? I've been stuck on this all weekend!
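
    One approach worth trying (a sketch only, reusing the gameObj interface shown above): resolve a single collision, then re-run the whole test loop, because pushing the rectangle out of the first collider changes its overlap with the others. Repeating until nothing intersects, with a small iteration cap, avoids resolving two stale intersections in the same pass:

        #include <vector>

        void resolveAll(gameObj& p, std::vector<gameObj>& testRects) {
            const int maxPasses = 8;                         // safety cap (assumption)
            for (int pass = 0; pass < maxPasses; ++pass) {
                bool hitSomething = false;
                for (unsigned i = 0; i != testRects.size(); i++) {
                    if (p.isCollidingWith(testRects[i])) {   // refreshes colliderResult
                        p.collide(testRects[i]);             // push out along the smaller axis
                        hitSomething = true;
                        break;                               // overlaps changed; start over
                    }
                }
                if (!hitSomething) {
                    break;                                   // nothing intersects any more
                }
            }
        }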

    Read the article

  • Physics don't apply to a unique body in AndEngine

    - by Kanga
    I am developing a game in AndEngine. So far I managed to create everything I wanted for my sprite, which was connected to a box body: I was rotating it and moving it, and everything was great. I wanted my collision detection to be more precise, so I changed from the box body to a unique, irregularly shaped body. I found all the vertices and simply replaced the box body with the newly created irregularly shaped body everywhere in my code. Not only is the sprite's image no longer in place, but all the maths I was doing for movement and physics no longer works. Please help me :(. Thanks in advance.

    Read the article

  • Is there any guarantee about the graphical output of different GPUs in DirectX?

    - by cloudraven
    Let's say that I run the same game on two different computers with different GPUs, and that, for example, both are certified for DirectX 10. Is there a guarantee that the output of a given program (game) is going to be the same regardless of the manufacturer or model of the GPU? I am assuming the configurable settings are exactly the same in both cases. I heard that this is not the case for DirectX 9 and older, but that it is true for DirectX 10. If someone could provide a source confirming or denying it, that would be great. Also, what exactly is guaranteed? Will the output be exactly the same, or just perceptually the same to the human eye?

    Read the article

  • What techniques can I use to render very large numbers of objects more efficiently in OpenGL?

    - by Luke
    You can think of my application as drawing a very large ball-and-stick diagram (or graph). At times this graph can get very large, to the point where the number of elements even outnumbers the pixels on the screen. Currently I am simply passing all of my textures (as GL_POINTS) and lines to the graphics card using VBOs. When the number of elements outnumbers the number of pixels, is this the most efficient approach? Or should I do some calculations on the CPU side before handing everything over to the GPU? If it matters, I do use GL_DEPTH_TEST and GL_ALPHA_TEST, and I do some alpha blending, but probably not enough to make a huge performance difference. My scene can be static at times, but the user has control over a typical arc-ball camera and can pan, rotate, or zoom; it is during these operations that the performance degradation is noticeable.
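
    When the point count exceeds the pixel count, one CPU-side option is to cull or decimate before uploading: bucket points into a screen-resolution grid and keep at most one per cell, so the VBO never holds more points than can possibly be visible. A rough C++ sketch of that idea (the projection callback and Point type are assumptions, and the result would need rebuilding after large camera moves):

        #include <vector>

        struct Point { float x, y, z; };

        // project() should return false for off-screen points and otherwise fill
        // in integer pixel coordinates (assumed helper, not part of the question).
        std::vector<Point> decimate(const std::vector<Point>& points,
                                    int screenW, int screenH,
                                    bool (*project)(const Point&, int& px, int& py)) {
            std::vector<unsigned char> occupied(screenW * screenH, 0);
            std::vector<Point> kept;
            for (const Point& p : points) {
                int px, py;
                if (!project(p, px, py)) continue;              // off-screen: cull
                unsigned char& cell = occupied[py * screenW + px];
                if (!cell) { cell = 1; kept.push_back(p); }     // first point wins the pixel
            }
            return kept;                                        // upload only these
        }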

    Read the article

  • AndEngine player, background and camera

    - by valdemar593
    I'm developing a 2D shooter using AndEngine. At the moment I'm trying to make the camera follow the player. As I understand it, the common approach is to use the SmoothCamera, zooming it and setting the chased entity. The problem is that when the camera follows the player, the background (a RepeatingSpriteBackground) moves along with it, so it looks like the player doesn't move at all even though the actual position changes. So I don't really get how to make the camera follow the player while keeping the background still. Thanks in advance.

    Read the article

  • Game planning and software design? I feel that UML is not convenient

    - by user1542
    At my university they always emphasize and hype UML design, which I feel is not going to work well for designing a game's structure. Now I just want some professional advice on how I should begin designing my game. The background is that I have some programming skill and have done many minor games, such as getting a 2D platformer working to some extent. The problem I find with my programs is the poor quality of their design: after coding for a while, things start to break down due to poor planning (when I add a new feature, I tend to have to recode the whole program). However, planning everything out without a single design flaw is a bit too ideal. So, any advice on how I should plan my game? How should I put it into visible pictures, so that my friends and I can get an overview of the design? I plan to start coding a game with a friend; this will be my first team project, so any professional advice would be welcome. Are there any alternatives to UML? Another question: what does "prototyping" normally look like?

    Read the article

  • Cross-platform builds with OGRE3D via CMake. Any tips?

    - by frarees
    I've been trying to compile a simple project for both OSX and Windows using OGRE3D, but I've hit some problems along the way. I'm using CMake to create my platform-specific project files (a VS solution and an Xcode project). Some problems I found: OGRE3D source is distributed in two flavors, Windows sources and UNIX/OSX sources. On OSX, compiling the dependencies (freetype, FreeImage and especially OIS) is a real pain. I don't know how to handle precompiled dependencies (they exist for both Windows and Mac); it may sound like a noob question, but I would appreciate some tips on this. Resources, forum posts, anything. Does any "cross-platform base project for OGRE3D" exist on the net? It would be really helpful if someone who has already managed to do this could shed some light. By the way, I'm not basing the project on OGRE3D; it's just the biggest library I'm using, so I depend a lot on it. Thanks in advance!

    Read the article

  • 3D Collision Handling

    - by TobSpr
    I'm having trouble handling collisions in my 3D game. I have set up rays to detect collisions (see screenshot), and my main routine already analyzes them. But now the question is what to do with that information. One possibility would be to move the player back to the last position, but that's dirty and does not work if the player can move in multiple directions at once (e.g. if the player runs along a wall). My question is: what should I do with the collision data, or in which direction and by what amount should I move the player? I'm sure there is an algorithm for that (as there is for almost everything).
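
    The standard response when a ray reports a hit is to "slide": project the intended movement onto the surface and remove only the component that points into it, so running diagonally against a wall still moves the player along the wall. A small self-contained C++ sketch (it assumes the collision gives you a unit-length surface normal):

        struct Vec3 { float x, y, z; };

        Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        Vec3  mul(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
        float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // movement: desired displacement this frame; normal: unit wall normal.
        Vec3 slideAlongWall(Vec3 movement, Vec3 normal) {
            float into = dot(movement, normal);
            if (into >= 0.0f) {
                return movement;                      // already moving away from the wall
            }
            return sub(movement, mul(normal, into));  // cancel only the "into the wall" part
        }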

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating-point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations.

    As this is the first time I'm making a threaded application, I'm not sure if the following method of thread synchronization is correct. First, I use two global variables to control the two threads: if a global variable is set to 1, that thread can start working, otherwise it must not work. Each thread runs an infinite loop, and if it detects that its global variable has changed value, it does its job and then sets the variable back to 0. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows down the program considerably, and it also fails to run properly when compiled for Release in Visual Studio or with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of the fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it.

    My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
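
    For what it's worth, the symptoms described (slow, and broken only in Release or with -O) are what you would expect from busy-waiting on plain global ints: the spinning threads burn two cores doing nothing, and reading/writing non-atomic globals from several threads is a data race, so the optimizer is free to hoist the load out of the empty loop and never observe the change. A hedged sketch of the usual alternative, using a condition variable so the workers sleep until signalled (names are illustrative, not taken from the repository):

        #include <condition_variable>
        #include <mutex>

        void rasterizeScanlines(int parity);   // your existing even/odd rasterizer (assumed)

        std::mutex m;
        std::condition_variable cv;
        unsigned frameId = 0;                  // bumped once per frame, protected by m
        int pendingWorkers = 0;                // workers still busy, protected by m

        void worker(int parity) {
            unsigned lastFrame = 0;
            for (;;) {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return frameId != lastFrame; });  // sleep, don't spin
                lastFrame = frameId;
                lock.unlock();

                rasterizeScanlines(parity);    // draw the even or odd scanlines

                lock.lock();
                if (--pendingWorkers == 0) {
                    cv.notify_all();           // last worker out wakes the main thread
                }
            }
        }

        void rasterizeFrame() {
            {
                std::lock_guard<std::mutex> lock(m);
                pendingWorkers = 2;
                ++frameId;
            }
            cv.notify_all();                   // wake both workers
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return pendingWorkers == 0; });  // wait for the frame
        }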

    Read the article

  • OpenGL: sluggish performance in extracting texture from GPU

    - by Cyan
    I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it, which requires extracting it from GPU memory. For this operation I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that: an 8 MB texture (uncompressed), for example, requires 3 seconds (yes, seconds) to be extracted. That's mind-boggling. I'm almost wondering if my graphics card is connected by a serial link... Well, anyway, I've looked around and found some people complaining about the same thing, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but failed so far.

    Edit: by moving the call to glGetTexImage() to a different place, the speed has improved a bit for the most dramatic samples: looking again at the 8 MB texture, it now requires 500 ms instead of 3 s. That's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timings remained in the 60-80 ms range). Using glFinish() didn't help either. Note that if I call glFinish() (without glGetTexImage) I get a fixed 16 ms result, whatever the texture size or complexity; it really looks like the timing of one frame at 60 fps. The timing is measured for the full rendering + saving sequence; the call to glGetTexImage() alone does not really matter, and yet it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is created on the GPU, hence the need to extract it.
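
    One thing worth trying (a sketch, with sizes, variable names and formats as assumptions): read the texture back through a pixel pack buffer (PBO), so glGetTexImage() just queues a GPU-side copy and returns, and map the buffer later when the data is actually needed. Asking for GL_BGRA rather than GL_RGBA is also frequently the "native format" that the advice you found refers to:

        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);

        glBindTexture(GL_TEXTURE_2D, texture);
        // With a pack buffer bound, the last argument is an offset into the PBO,
        // not a client-memory pointer, so the call can return without blocking.
        glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)0);

        // ... do other work here while the transfer runs ...

        void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (pixels) {
            // save the image from 'pixels' here
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);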

    Read the article

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. To avoid drawing the background every frame, you could draw it once and then do nothing. However, if something is drawn on top of the surface and keeps moving, you need to redraw the background to "erase" the pixels left behind by the moving object; otherwise you get "trails" behind it. I have a moving object in my PyGame game, but I do not want to clear the whole colour buffer by redrawing the full background image, because redrawing it every frame is slow. My solution: I will clear only the required portions of the buffer (where the trails of the moving object are left) by redrawing just those portions of the background. Is there a better way to keep a consistent background?

    Read the article

  • Unit turning in navmesh-based pathfinding

    - by Haddayn
    I'm working on an RTS game and I'm using navmeshes for unit pathfinding. I know how to find a general path within a navmesh, but how do you determine whether a unit has enough space to turn? I have units of different shapes (mostly rectangles with different dimensions) and with different turning radii. Additionally, some units can turn in place and some can move in reverse. So, how do I find a path the unit can actually follow, considering that it cannot rotate easily?

    Read the article

  • Achieve anisotropic filtering

    - by fedab
    I want to apply anisotropic filtering to my scene. I use SharpDX (DirectX 11) and C#. How do I set up anisotropic filtering in my shader? Currently I try this in the shader:

        Texture2D tex;
        sampler textureSampler = sampler_state
        {
            Texture = (tex);
            MipFilter = Anisotropic;
            MagFilter = Anisotropic;
            MinFilter = Anisotropic;
            MaxAnisotropy = 16;
        };

        float4 PShader(float4 position : SV_POSITION, float4 color : COLOR, float2 tex0 : TEXCOORD0) : SV_TARGET
        {
            float4 textureColor;
            textureColor = tex.Sample(textureSampler, tex0) * color;
            return textureColor;
        }

    I get my object textured, but it is not filtered anisotropically. I can write anything in the parameters, even invalid things, and I don't get any errors; the result is the same, objects without anisotropic filtering applied. Do I have to set this in the shader? Can I also do it with a SamplerState? I tested that, but didn't get a result either. Some pointers on what I have to set would be helpful.

    Read the article

  • How can I keep track of a battle log on a web game?

    - by Jay W
    Recently I started working on a web-based, turn-based PvP RPG. Now I'm working on the battle system, but I've hit some issues: how can I keep track of everything that happens in the battle? It should keep track of the characters on the field, the inventory, the damage done, etc. I first thought I would simply put it in the (MySQL) database, but I think that will be too much, especially if several people are in a battle. I thought of putting this in sessions or cookies, but I don't think that's reliable. Does anyone have an idea how I can do this?

    Read the article

  • Creating movement path displays in a top-down 2d RTS

    - by nihohit
    My game is a top-down 2D RTS coded in C# using SFML's libraries. I want a selected unit to display its movement path on the map. Currently, after the path is computed as a list of directions ({left, up, down, down, down, left}, as an example), it's sent to the graphical component to create its UI equivalent, and this is where I'm having problems. So far I've considered three ways to do it:

    1. Compute the size of the image (in the example above it'll be a 3x2 rectangle), create an invisible rectangle, then go over the directions list and mark each spot with a visible point, so as to get a continuous line. This is slightly problematic because of the number of large images I need to keep, but mostly because I have a lot of fine detail on screen and a continuous line obstructs the view.

    2. Again compute the size of the image, but now create several (let's say 4) invisible images of that size; then, instead of a single continuous line, I switch between the four images, each showing only a fourth of the spots, which creates a path animation. This is nicer on the eye, but the memory demands and the time needed to compute each such image loop are significant.

    3. Just create a list of single markers, each on a different spot on the path. This is very quick and easy on memory, but too sparse.

    Is there a simple or resource-light way to create path animations?
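
    A fourth option that avoids pre-rendering any loop images: keep the path as a list of tile positions and, each frame, draw a marker only on every Nth tile, offset by an animation phase that advances over time. The memory cost is just the path itself, and the moving dashes read as a flow direction without forming a solid line. A sketch in C++ (the idea translates directly to C#; drawMarkerAt() and the Tile type are assumed):

        #include <vector>

        struct Tile { int x, y; };

        void drawMarkerAt(int x, int y);   // assumed: draws one marker sprite at a tile

        void drawPathMarkers(const std::vector<Tile>& path, int frame) {
            const std::size_t spacing = 4;                 // one marker per 4 tiles
            std::size_t phase = (frame / 8) % spacing;     // shift the pattern every 8 frames
            for (std::size_t i = 0; i < path.size(); ++i) {
                if (i % spacing == phase) {
                    drawMarkerAt(path[i].x, path[i].y);
                }
            }
        }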

    Read the article

  • Two GUITextures that do not work together

    - by London2423
    I have two GUITextures that move a cube left and right. It's pretty strange, but together they don't work; if I activate only one, it works. To be more specific: if I have the left GUITexture alone in the game, the cube moves left. If I have the right GUITexture activated alone, the cube moves right. That all seems fine, but if I have both of them, the cube only moves right and never left. Where is the mistake? Here is the code inside the cube GameObject for the right move:

        void OnMousedown () {
            transform.position += Vector3.right * Time.deltaTime;
        }

    For the left move:

        void OnMousedown () {
            transform.position += Vector3.left * Time.deltaTime;
        }

    And this is the left GUITexture code:

        //move the cube left
        Cube.GetComponent<Left> ().enabled = true;
        left.transform.position += Vector3.left * Time.deltaTime;

    This is the right GUITexture code:

        //move the cube right
        Cube.GetComponent<Left> ().enabled = true;
        right.transform.position += Vector3.right * Time.deltaTime;

    What is the reason for this? I hope someone can help me.

    Read the article

  • Entity communication: Message queue vs Publish/Subscribe vs Signal/Slots

    - by deft_code
    How do game engine entities communicate? Two use cases: how would entity_A send a take-damage message to entity_B, and how would entity_A query entity_B's HP? Here's what I've encountered so far:

    Message queue: entity_A creates a take-damage message and posts it to entity_B's message queue. entity_A creates a query-hp message and posts it to entity_B; entity_B in return creates a response-hp message and posts it to entity_A.

    Publish/subscribe: entity_B subscribes to take-damage messages (possibly with some preemptive filtering so only relevant messages are delivered). entity_A produces a take-damage message that references entity_B. entity_A subscribes to update-hp messages (possibly filtered), and every frame entity_B broadcasts update-hp messages.

    Signal/slots: ??? entity_A connects an update-hp slot to entity_B's update-hp signal.

    Something better? Do I have a correct understanding of how these communication schemes would tie into a game engine's entity system? How do entities in commercial game engines communicate?
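
    For a concrete picture of the publish/subscribe variant, here is a deliberately minimal C++ sketch of a message bus built on std::function callbacks keyed by message type; it is an illustration of the mechanics, not how any particular commercial engine does it, and filtering/unsubscribing are left out:

        #include <functional>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct Message { std::string type; int senderId; int targetId; float value; };

        class MessageBus {
        public:
            using Handler = std::function<void(const Message&)>;

            void subscribe(const std::string& type, Handler handler) {
                handlers_[type].push_back(std::move(handler));
            }
            void publish(const Message& msg) {
                for (auto& h : handlers_[msg.type]) {
                    h(msg);
                }
            }
        private:
            std::unordered_map<std::string, std::vector<Handler>> handlers_;
        };

        // Usage sketch: entity_B subscribes to "take-damage" and ignores messages whose
        // targetId is not its own id; entity_A publishes {"take-damage", aId, bId, 10.0f}.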

    Read the article

  • Keeping the meshes "thickness" the same when scaling an object

    - by user1806687
    I've been bashing my head for the past couple of weeks trying to find a way to accomplish what at first looks like a very easy task. I have an object currently made out of 5 cuboids (2 sides, 1 top, 1 bottom, 1 back); this is just an example, later on there will be a whole range of different set-ups. Now, when the user chooses to scale the whole object, this is what should happen:

    X scale: the top and bottom cuboids should get scaled by the scale factor, the sides should get moved so they are positioned just as they were before (in this case at both ends of the top and bottom cuboids), and the back should get scaled so it fits as before (if I simply scale it by the scale factor, it leaves gaps on each side).

    Y scale: the sides should get scaled by the scale factor, the top and bottom cuboids should get moved, and the back should also get scaled.

    Z scale: the sides, top and bottom cuboids should get scaled, and the back should get moved.

    Hope you can help.

    EDIT: I've decided to explain the situation once more, this time in more detail (hopefully). I've also made some pictures of how the scaling should look, where the problem is, and the wrong way of scaling. In this example I'm using a thick-walled box with one face missing, where each wall is made from a cuboid (later on there will be different shapes of objects, where one of the faces might be rounded, a triangle, or at an angle), and the scaling is 2x on the X axis.

    1. This is how the default object, without any scaling applied, looks: http://img856.imageshack.us/img856/4293/defaulttz.png

    2. If I scale the whole object (all of the meshes) by some scale factor, the problem is that the "thickness" of the object's walls also changes (which I do not want): http://img822.imageshack.us/img822/9073/wrongwaytoscale.png

    3. This is how the correct scaling should look. The appropriate faces get scaled; in this case, with the scale on the X axis, that is the top, bottom and back: http://imageshack.us/photo/my-images/163/rightwayxscale1.png/

    4. But the scale factor might not be the same for all objects all of the time. In this case the back has to get scaled a bit more, or it leaves gaps: http://imageshack.us/photo/my-images/9/problemwhenscaling.png/

    5. If everything goes well, this is how the final object should look: http://imageshack.us/photo/my-images/856/rightwayxscale2.png/

    So, as you might have noticed, there are quite a few things to look out for when scaling. I'm asking whether any of you have an idea how to accomplish this scaling. I have tried a whole bunch of things, from scaling all of the meshes by the same scale factor to subtracting and adding sizes to get the right dimensions, but nothing I tried worked; if one mesh got scaled correctly, the others didn't. Download the example object. English is not my first language, so I'm really sorry if it's hard to understand what I'm saying.
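
    One way to sidestep the gap problem entirely (a sketch under the assumptions that the example is an open-faced box and that the open face points along +Z): stop scaling the five cuboids individually and instead store only the outer dimensions plus a constant wall thickness, then rebuild every wall from those numbers whenever the object is scaled. The top/bottom, sides and back then always butt up against each other by construction:

        struct Cuboid { float cx, cy, cz; float sx, sy, sz; };       // centre + full sizes

        struct WalledBox { float outerX, outerY, outerZ; float t; }; // t = wall thickness

        // Rebuild the five walls for the current outer size; call after any scaling.
        void buildWalls(const WalledBox& b, Cuboid walls[5]) {
            float hx = b.outerX * 0.5f, hy = b.outerY * 0.5f, hz = b.outerZ * 0.5f;
            walls[0] = { 0.0f,  hy - b.t * 0.5f, 0.0f, b.outerX, b.t, b.outerZ };              // top
            walls[1] = { 0.0f, -hy + b.t * 0.5f, 0.0f, b.outerX, b.t, b.outerZ };              // bottom
            walls[2] = { -hx + b.t * 0.5f, 0.0f, 0.0f, b.t, b.outerY - 2.0f * b.t, b.outerZ }; // left side
            walls[3] = {  hx - b.t * 0.5f, 0.0f, 0.0f, b.t, b.outerY - 2.0f * b.t, b.outerZ }; // right side
            walls[4] = { 0.0f, 0.0f, -hz + b.t * 0.5f,
                         b.outerX - 2.0f * b.t, b.outerY - 2.0f * b.t, b.t };                  // back
        }

        // Scaling the whole object by 2 on X is then just:
        //   box.outerX *= 2.0f;  buildWalls(box, walls);
        // The walls keep their thickness, and the back widens to fill the gap automatically.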

    Read the article

  • FBO rendering different result between Galaxy S2 and S3

    - by BruceJones
    I'm working on a Pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. It proceeds like so:

    1. Bind texture A to the framebuffer
    2. Draw the balls
    3. Bind texture B to the framebuffer
    4. Draw texture A using the fade shader on a fullscreen quad
    5. Bind the screen to the framebuffer
    6. Draw texture B using a normal textured-quad shader

    Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. The fade shader is below:

        private final String fragmentShaderCode =
            "precision highp float;" +
            "uniform sampler2D u_Texture;" +
            "varying vec2 v_TexCoordinate;" +
            "vec4 color;" +
            "void main(void)" +
            "{" +
            "   color = texture2D(u_Texture, v_TexCoordinate);" +
            "   color.a *= 0.8;" +
            "   gl_FragColor = color;" +
            "}";

    This works fine on the Samsung Galaxy S3/Note 2, but produces a strange effect and doesn't work on the Galaxy S2 or Note 1 (see the pictures: Galaxy S3/Note 2 versus Galaxy S2/Note). Can anyone explain the difference?

    Read the article
