Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

Page 620/1295 | < Previous Page | 616 617 618 619 620 621 622 623 624 625 626 627  | Next Page >

  • What is the correct way to use glTexCoordPointer?

    - by RubyKing
    I'm trying to work out how to use this function glTexCoordPointer. The man page states that I must set a pointer to the first element of the array that uses the texture coordinate. Here is my array:

        static const GLfloat GUIVertices[] = {
            //FIRST QUAD
             1.0f,  1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f,  1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f,  0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f,  0.94f, 0.0f, 1.0f, 1.0f, 1.0f,

            //2ND QUAD
            // x     y      z     w     X     Y
             1.0f, -1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f, -1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f, -0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f, -0.94f, 0.0f, 1.0f, 1.0f, 1.0,
        };

    But how do I set the pointer correctly for the fifth element on the 2nd quad's first row? I was thinking something like this:

        glTexCoordPointer(1, GL_FLOAT, 6, reinterpret_cast<const GLvoid *>(29 * sizeof(float)));
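
    For reference, a hedged sketch of how these pointers are typically set up for an interleaved x, y, z, w, s, t layout like the one above, assuming client-side arrays (no VBO bound): the stride is given in bytes, the first argument of glTexCoordPointer is the number of texture-coordinate components (2 here), and, if I'm counting the layout right, the second quad's first texture coordinate starts at float index 28 (4 vertices x 6 floats + 4), not 29.

        // Interleaved layout: x, y, z, w, s, t  -> 6 floats per vertex.
        const GLsizei stride = 6 * sizeof(GLfloat);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        // Positions start at float 0, texture coordinates at float 4 of each vertex.
        glVertexPointer(4, GL_FLOAT, stride, GUIVertices);
        glTexCoordPointer(2, GL_FLOAT, stride, GUIVertices + 4);

        // To start at the second quad's first texture coordinate instead:
        // 4 vertices * 6 floats + 4 = float index 28 (not 29).
        glTexCoordPointer(2, GL_FLOAT, stride, GUIVertices + 28);

    With a VBO bound to GL_ARRAY_BUFFER, the last argument would instead be a byte offset, e.g. reinterpret_cast<const GLvoid *>(28 * sizeof(GLfloat)).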

    Read the article

  • Relative Positions Of Player And Enemy Are Different In XNA 3D Game

    - by CoOlDud3
    I am having a problem in my 3D jet fighter game using XNA. I have a player jet and a few enemy drones built from a separate class. The problem is that when I set the player's position and a drone's position to the same height of 10f in the y direction, they aren't at the same height. But if I move the drone's position up 500f in the y direction, then it is pretty much level with the player. They are supposedly at the same height, yet with very different position values. Can anyone help, please?

    Read the article

  • Keeping the meshes' "thickness" the same when scaling an object

    - by user1806687
    I've been bashing my head for the past couple of weeks trying to find a way to accomplish what at first glance looks like a very easy task. I have one object currently made out of 5 cuboids (2 sides, 1 top, 1 bottom, 1 back); this is just an example, later on there will be a whole range of different set-ups. Now, when the user chooses to scale the whole object, this is what should happen:

    X scale: the top and bottom cuboids should get scaled by a scale factor, the sides should get moved so they are positioned just like they were before (in this case at both ends of the top and bottom cuboids), and the back should get scaled so it fits like before (if I simply scale it by a scale factor it will leave gaps on each side).

    Y scale: the sides should get scaled by a scale factor, the top and bottom cuboids should get moved, and the back should also get scaled.

    Z scale: the sides, top and bottom cuboids should get scaled, and the back should get moved.

    Hope you can help.

    EDIT: So, I've decided to explain the situation once more, this time in more detail (hopefully). I've also made some pictures showing how the scaling should look, where the problem is, and the wrong way of scaling. In this example I will be using a thick-walled box with one face missing, where each wall is made from a cuboid (but later on there will be different shapes of objects, where one of the faces might be roundish, or a triangle, or even under some angle). The scaling is 2x on the X axis.

    1. This is how the default object without any scaling applied looks: http://img856.imageshack.us/img856/4293/defaulttz.png
    2. If I scale the whole object (all of the meshes) by some scale factor, the problem becomes that the "thickness" of the object's walls also changes (which I do not want): http://img822.imageshack.us/img822/9073/wrongwaytoscale.png
    3. This is how the correct scaling should look. The appropriate faces get scaled, in this case where the scale is on the X axis (top, bottom, back): http://imageshack.us/photo/my-images/163/rightwayxscale1.png/
    4. But the scale factor might not be the same for all objects all of the time. In this case the back has to get scaled a bit more or it leaves gaps: http://imageshack.us/photo/my-images/9/problemwhenscaling.png/
    5. If everything goes well, this is how the final object should look: http://imageshack.us/photo/my-images/856/rightwayxscale2.png/

    So, as you might have noticed, there are quite a few things to look out for when scaling. I am asking whether any of you have an idea how to accomplish this scaling. I have tried a whole bunch of things, from scaling all of the object by the same scale factor, to subtracting and adding sizes to get the right size. But nothing I tried worked; if one mesh got scaled correctly then the others didn't. Download the example object. English is not my first language, so I am really sorry if it's hard to understand what I am saying.
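
    For reference, a minimal sketch of one common approach, assuming each wall piece stores its own centre and half-size: pieces that run along the scaled axis get longer, pieces that form the thickness keep their size and are only moved back out to the new outer edge, and the back is sized from the new outer width rather than multiplied by the factor, so it never leaves gaps. The Cuboid struct and the single-axis X case are illustrative assumptions, not the asker's actual data layout.

        struct Vec3 { float x, y, z; };
        struct Cuboid { Vec3 centre; Vec3 halfSize; };   // axis-aligned wall piece

        // Scale the whole box by `factor` along X while keeping wall thickness.
        // `top`, `bottom`, `back` run along X; `left`, `right` are the thin side walls.
        void scaleX(Cuboid& top, Cuboid& bottom, Cuboid& back,
                    Cuboid& left, Cuboid& right, float factor)
        {
            // Pieces that span the X direction get longer...
            top.halfSize.x    *= factor;
            bottom.halfSize.x *= factor;

            // ...the back is re-derived from the new outer width instead of
            // being multiplied by `factor` (subtract the side thickness here
            // if the back sits between the side walls rather than behind them)...
            back.halfSize.x = top.halfSize.x;

            // ...while the side walls keep their size and are only moved
            // outwards so they stay flush with the ends of the top/bottom pieces.
            float outerHalfWidth = top.halfSize.x;
            left.centre.x  = -outerHalfWidth + left.halfSize.x;
            right.centre.x =  outerHalfWidth - right.halfSize.x;
        }

    The Y and Z cases follow the same pattern with the roles of the pieces swapped, as described in the list above.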

    Read the article

  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a barebones pair of shaders which should illustrate the problem. Vertex shader:

        struct Vertex
        {
            float3 position : POSITION;
        };

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        cbuffer Matrices
        {
            matrix projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), projection);
            output.light_position = output.position; // We simply pass the same vector in screenspace through different semantics.
            return output;
        }

    And a simple pixel shader to go along with it:

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        float4 RenderPixelShader(Pixel input) : SV_Target
        {
            // At this point, (input.position.z / input.position.w) is a normal depth value.
            // However, (input.light_position.z / input.light_position.w) is 0.999f or similar.
            // If the primitive is touching the near plane, it very quickly goes to 0.
            return (0.0f).rrrr;
        }

    How can I make the hardware treat light_position the same way position is treated between the vertex and pixel shaders? EDIT: Aha! (input.position.z) without dividing by W is the same as (input.light_position.z / input.light_position.w). Not sure why this is.

    Read the article

  • Would it be more efficient to handle 2D collision detection with polygons, rather than both squares/polygons?

    - by KleptoKat
    I'm working on a 2D game engine and I'm trying to get collision detection as efficient as possible. One thing I've noticed is that I have a rectangle collider, a shape (polygon) collider and a circle collider. Would it be more efficient (either dev-time-wise or runtime-wise) to have just one shape collider, rather than that and everything else? I feel it would optimize my code on the back end, but how much would it affect my game at runtime? Should I be concerned with this at all, given that 3D games routinely handle tens of thousands of polygons?

    Read the article

  • How to detect and collide two elastic line segments?

    - by Tautrimas
    There are 4 moving physical nodes in 3D space. They are paired by two elastic line segments / strings (1 <- 2; 3 <- 4). Part I: How do I detect the collision of the two segments? Part II: At the moment of collision, a fifth node is created at the intersection point, and from there you have a force-based graph. The 5th node (the bend point) can slide along the strings as it would in the real world. Given the new coordinates of the 4 nodes, how do I calculate the position of the 5th node on the next frame? I assume the string force on the nodes to be F = -k * x, where x is the string length. All I have come up with so far is that the force between 5 and 1 equals the force between 5 and 2 (and the same with 3 and 4). What are the other properties?
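
    For Part I, one common approach is to treat the two strings as segments with a small radius and test the closest distance between them each frame; below is a hedged C++ sketch of the standard clamped closest-point computation (the textbook segment-segment distance test), assuming neither segment degenerates to a single point. The Vec3 type and the collision threshold are illustrative, not tied to the asker's setup.

        #include <algorithm>

        struct Vec3 {
            float x, y, z;
            Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
            Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
            Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
        };
        float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Squared distance between segments p1-q1 and p2-q2.
        // Assumes both segments have non-zero length (a > 0, e > 0).
        // The strings can be considered colliding when this drops below the
        // square of their combined radius.
        float segmentSegmentDistSq(Vec3 p1, Vec3 q1, Vec3 p2, Vec3 q2)
        {
            Vec3 d1 = q1 - p1, d2 = q2 - p2, r = p1 - p2;
            float a = dot(d1, d1), e = dot(d2, d2), f = dot(d2, r);
            float c = dot(d1, r), b = dot(d1, d2);
            float denom = a * e - b * b;                 // always >= 0

            // s, t are the parameters of the closest points on each segment.
            float s = denom > 1e-6f ? std::clamp((b * f - c * e) / denom, 0.0f, 1.0f) : 0.0f;
            float t = (b * s + f) / e;
            if (t < 0.0f)      { t = 0.0f; s = std::clamp(-c / a, 0.0f, 1.0f); }
            else if (t > 1.0f) { t = 1.0f; s = std::clamp((b - c) / a, 0.0f, 1.0f); }

            Vec3 c1 = p1 + d1 * s, c2 = p2 + d2 * t;
            Vec3 diff = c1 - c2;
            return dot(diff, diff);
        }

    The parameters s and t at the minimum also give the intersection point where the 5th node would be created.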

    Read the article

  • How do you ensure consistent experience across multiple graphics cards (or even driver versions)?

    - by Grigory Javadyan
    So I was writing a simple 2D game with OpenGL and SDL and had a problem with awful tearing when running in windowed mode (even though I explicitly asked SDL_SetVideoMode to use double buffering). I didn't worry about it too much because most of the time the game grabs the entire screen; windowed mode is just for debugging. Anyway, yesterday I updated my nVidia drivers, the tearing disappeared, and the game now runs smoothly and looks nice in windowed mode too. I can see how the problem may lie in the graphics driver, but this leads to a question. Obviously, professional game developers have to deal with a lot of different hardware/software configurations. What techniques do they use to make sure the game looks roughly the same on different graphics cards, or even on the same model of graphics card but with different driver versions?

    Read the article

  • Options for 2D Web/iPhone/Android, with Test Framework

    - by ashes999
    I would like to make a 2D game with one codebase that runs on iPhone, Android, and the web (any flavour of web -- HTML5, Flash, Silverlight, etc.). What are my options? I should be able to write my code once and run it anywhere. I also absolutely need the ability to write unit tests (or write a unit-testing framework) -- I cannot make sizable games without testing. Unity is good, but Unity is 3D; even with hacks, the graphical assets will still be 3D. I'm after 2D, not 3D. (If you need a Mac, or separate licensing for the Mac part, that's okay.)

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!).

    All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew.

    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper around an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as:

        template <typename ComponentType>
        void addProperty (ComponentType& component, int componentTypeId, int entityId)

    and:

        template <typename ComponentType>
        TypedComponentList<ComponentType>* getComponentList (int componentTypeId)

    which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call:

        TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent"));

    Which, I'll admit, looks pretty ugly.

    Bad points of the design: If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail. Each time a new Component is instantiated, we must check a hashed string to see if that component type has been instantiated before. It will probably generate a lot of assembly because of the extensive use of templates, and I don't know how well the compiler will be able to minimise this. You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.

    Good points of the design: Components are stored in typed vectors, but they can also be found by using their entity owner id as a hash. This means we can iterate them fast and minimise cache misses, but also skip straight to the component we need if necessary. We can freely add Components of different types to the system without having to add and manage new Component vectors by hand.

    What do you think?
Do the good points outweigh the bad?
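
    For reference, one way around the first bad point (the fragile string passed to the base constructor) is to derive the per-type id from a function template with a static counter, so a typo becomes a compile error instead of a silent mismatch. A minimal sketch, assuming C++11 and that ids only need to be unique within one run of the program; the helper names are illustrative, not the asker's API:

        #include <cstddef>

        // Hands out a new id each time it is called.
        inline std::size_t nextComponentTypeId() {
            static std::size_t counter = 0;
            return counter++;
        }

        // One static per instantiated ComponentType, initialised exactly once,
        // so every distinct component class gets its own stable id lazily.
        template <typename ComponentType>
        std::size_t componentTypeId() {
            static const std::size_t id = nextComponentTypeId();
            return id;
        }

        // Usage with the manager described above (names taken from the question):
        // TypedComponentList<MyComponent>* list =
        //     componentManager.getComponentList<MyComponent>(componentTypeId<MyComponent>());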

    Read the article

  • Certain grid lines not rendering as expected

    - by row1
    I am drawing a simple quad (a triangle strip with 4 vertices) as the floor and then drawing an 8x8 grid over the top (a collection of vertex pairs for a line list). The vertical grid lines work fine (apart from being very aliased), but some of the horizontal lines do not get rendered. The grid renders fine if I do not draw the quad.

        foreach (EffectPass pass in _Effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            CurrentGraphicsDevice.SetVertexBuffer(_VertexFloorBuffer);
            _Engine.CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

            // Some of the horizontal lines seem to disappear if we draw the above quad.
            CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
            CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, _VertexGridBuffer.VertexCount / 2);
        }

    What could be causing these lines not to be rendered?

    Update: I added the code below after I draw my quad and grid, and it started working. But I am not sure why that works, as I thought this code was only there to draw the WPF controls:

        elementRenderer.Render();
        spriteBatch.Begin();
        spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
        spriteBatch.End();

    Read the article

  • Draw "vision cone" / targetting element onto game world

    - by gkimsey
    I want to indicate various things using a "pie slice" sort of shape as below - similar to vision cones in stealth-game minimaps, or targeting indicators in RTS-type games for frontal area attacks. Something generic enough to be used for both would be ideal. I need to be able to procedurally (and efficiently) change things like the slice width and length, color, transparency, position in the world, etc. For my particular situation, there's no concern with elevation, funky terrain, or really any third axis at all as far as this element is concerned. I have two initial inclinations on how to accomplish this: 1) manually generate the vertices for a main triangle (possibly two, superimposed to get the border effect), plus a handful more to approximate the arc at the end, and roll it into a mesh; 2) use some sort of 2D drawing library to create a circle, mask it off at the right angles, render to texture, and use that. For reference, I have some experience with Ogre3D, but I'm not attached to it, as this is a mostly academic pursuit at the moment. Other technologies that might be better at accomplishing this are more than welcome. Finally, I'm kind of curious how to do a "flashlight" or similar 3D effect that could produce the same result, but on all surfaces in the lit area.
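
    For reference, a hedged sketch of option 1: build the slice as a triangle fan on the ground plane, with the arc approximated by a handful of segments. Width, length and segment count are parameters, so they can be changed procedurally each frame. The Vec3 type and the flat-on-the-ground-plane assumption are illustrative and not tied to any particular engine.

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Builds a "pie slice" fan centred on `origin`, facing `facingAngle` (radians),
        // spanning `coneAngle` radians and reaching `length` units. The result is laid
        // out as a triangle fan: vertex 0 is the apex, the rest trace the arc.
        std::vector<Vec3> buildVisionCone(Vec3 origin, float facingAngle,
                                          float coneAngle, float length,
                                          int arcSegments = 16)
        {
            std::vector<Vec3> verts;
            verts.push_back(origin);                       // apex of the fan
            float start = facingAngle - coneAngle * 0.5f;
            for (int i = 0; i <= arcSegments; ++i) {
                float a = start + coneAngle * (float(i) / arcSegments);
                verts.push_back({ origin.x + std::cos(a) * length,
                                  origin.y,                // flat on the ground plane
                                  origin.z + std::sin(a) * length });
            }
            return verts;   // draw with a TRIANGLE_FAN primitive (or index it yourself)
        }

    Rebuilding (or re-uploading) this small vertex list whenever the width or length changes is cheap, and colour and transparency can live in the material rather than the mesh.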

    Read the article

  • How does having the Debugger change the game execution on an XBOX 360?

    - by Sebastian Gray
    So I thought my issue was relating to the difference between a Debug and a Release build as per this question: What's the difference between a "Release" Xbox 360 build and a "Debug" one? but I've since found that if I go ahead and build a Creators Club version of the game using a Debug build and deploy to the XBOX, I get the same experience I had with the Release version of my game. However if I run the game from Visual Studio using F5 and having set the XBOX as the default platform, then the game runs as expected. If I change from Debug to Release and run with CTRL+F5 then the game also works as expected. How would running the game with the debugger attached change the results I am getting in game? Is there any way that I can use the same approach or change the default compilation of the game so that I can use this approach to release my game?

    Read the article

  • Efficient finding of long-range spotting targets

    - by nihohit
    I'm creating a top-down 2d strategy game with a square grid map. So far, I've used Bresenham's line-drawing algorithm in a circle to determine what's in LOS of each unit, and then target one of the targets in the circle. Now I find that this limits my units to shoot only at targets that they can see. I want to extend my targeting algorithm to target any other unit in range of my weapon, even if it is out of sight range of this given unit, as long as it is "spotted" by another friendly unit. In other words, I want to enable the use of weapons with ranges longer than sight range. Is there a better way than iterating over all sighted units and computing range and LOS to each of them?
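
    One common way to avoid the per-shooter iteration is to split sight and targeting into separate passes: each frame every friendly unit contributes what it can see to one shared "spotted" set, and a shooter then only needs a weapon-range check against entries of that set. A minimal sketch, with the Unit type and the distance check left as assumptions rather than the asker's actual data:

        #include <cmath>
        #include <unordered_set>
        #include <vector>

        struct Unit { int id; float x, y; float sightRange; float weaponRange; };

        // Pass 1: every friendly unit adds the enemy ids it can see to one shared set.
        std::unordered_set<int> buildSpottedSet(const std::vector<Unit>& friendlies,
                                                const std::vector<Unit>& enemies,
                                                bool (*hasLineOfSight)(const Unit&, const Unit&))
        {
            std::unordered_set<int> spotted;
            for (const Unit& f : friendlies)
                for (const Unit& e : enemies)
                    if (std::hypot(e.x - f.x, e.y - f.y) <= f.sightRange &&
                        hasLineOfSight(f, e))      // e.g. the existing Bresenham test
                        spotted.insert(e.id);
            return spotted;
        }

        // Pass 2: a shooter only needs a range check against the spotted set,
        // no LOS work of its own.
        bool canEngage(const Unit& shooter, const Unit& enemy,
                       const std::unordered_set<int>& spotted)
        {
            return spotted.count(enemy.id) &&
                   std::hypot(enemy.x - shooter.x, enemy.y - shooter.y) <= shooter.weaponRange;
        }

    The expensive LOS test then runs once per friendly unit per frame instead of once per shooter-target pair at firing time.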

    Read the article

  • Do you need expensive servers and fancy hosting in order to make a multiplayer game?

    - by ThePlan
    I've finished working on an RPG and it seems like it would be so much more fun to make it multiplayer. SFML has a networking feature, so I figured it's possible, but then again, never in my life have I tried even basic networking; in fact my knowledge of it is very limited. What would it take to make a multiplayer game, resource-wise? I'm not talking about an MMO, more a co-op type of game. Do I need mountains of cash to pay for hosting, servers and many other things to make one?

    Read the article

  • Modular building technique with angles? (A roof)

    - by Mungoid
    I've been spending a bit of time lately studying the modular buildings of many games and reading/viewing several tutorials about it as well, but almost every example I see uses a plain square building that does not have any angled roof or similar. In all my applications (CS6, Blender/Max, UDK) I adhere to the same grid spacing and I get pretty good results, but trying to make modular angled pieces is confusing me, as I'm not sure of the best way to approach it. Below are some shots of my template sheet and the workflow I have been using. Should I do the roof separately, or is it possible for me to keep it in the same texture sheet?

    The main issue is below. I have made a couple of modular roof pieces, but when I try to use them, I end up needing to model multiple other parts to fill gaps based on what roof shape I want. I then model those 'filler' pieces, and now I have that much less space left in my texture sheet, and those pieces are usually not that reusable for anything else. This is where I'm not sure how to proceed. If anyone has any links to documents or papers talking about this, or advice, I would greatly appreciate it! =-)

    My main roof pieces with the gaps.
    My power-of-2 texture sheet, with 16x16 grid squares.
    The texture sheet loaded into Blender on a 16x16 plane, starting to separate and extrude.

    Read the article

  • Server costs and back loading for mobile devices

    - by user23844
    A company approached me to design an MMO for the mobile platform and I have the perfect idea for them. My question is: how much would a server cost for a FTP game that has both a PVE element and PVP? Also, do you think it would be better, or is it even possible, to back-load the data onto the phones (I'm trying to come up with some interesting way to back up the data in case of emergency)? I don't want the game to be totally reliant on being online (I want to appeal not only to phone users but also to iPod touch users) and I want there to be an offline mode. If you can't tell, this is my first game besides simple projects I've done on the side. Any help would be greatly appreciated.

    Read the article

  • Fast, accurate 2d collision

    - by Neophyte
    I'm working on a 2D top-down shooter, and now need to go beyond my basic rectangle bounding-box collision system. I have large levels with many different sprites, all of which are different shapes and sizes. The textures for the sprites are all square PNG files with transparent backgrounds, so I also need a way to register a collision only when the player walks into the coloured part of the texture, and not the transparent background. I plan to handle collision as follows:

    1. Check if any sprites are in range of the player.
    2. Do a rect bounding-box collision test.
    3. Do an accurate collision test (where I need help).

    I don't mind advanced techniques, as I want to get this right with all my requirements in mind, but I'm not sure how to approach this, or what techniques or even libraries to try. I know that I will probably need to create and store some kind of shape that accurately represents each sprite minus the transparent background. I've read that per-pixel is slow, so given my large levels and number of objects I don't think that would be suitable. I've also looked at Box2D, but haven't been able to find much documentation, or any examples of how to get it up and running with SFML.
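
    One middle ground between plain rectangles and a general physics library is to precompute, once at load time, a boolean alpha mask per texture and only consult it after the bounding-box test in step 2 has already passed, so the per-pixel work only runs over the small overlap rectangle. A hedged SFML 2.x-style sketch; the file name and alpha threshold are illustrative:

        #include <SFML/Graphics.hpp>
        #include <vector>

        // Precomputed once per texture: true where the pixel is opaque "enough".
        struct CollisionMask {
            unsigned width = 0, height = 0;
            std::vector<bool> solid;

            void build(const sf::Image& img, sf::Uint8 alphaThreshold = 10) {
                width  = img.getSize().x;
                height = img.getSize().y;
                solid.assign(width * height, false);
                for (unsigned y = 0; y < height; ++y)
                    for (unsigned x = 0; x < width; ++x)
                        solid[y * width + x] = img.getPixel(x, y).a > alphaThreshold;
            }

            bool isSolid(unsigned x, unsigned y) const {
                return x < width && y < height && solid[y * width + x];
            }
        };

        // Usage sketch (illustrative file name), done once at load time:
        // sf::Image img;
        // img.loadFromFile("sprite.png");
        // CollisionMask mask;
        // mask.build(img);

    A further refinement, if even the overlap-rectangle loop proves too slow, is to trace the mask once into a convex hull or a small polygon and test that instead, but the mask alone is often enough for a top-down shooter.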

    Read the article

  • Exporting spritesheet for Cocos2d

    - by Terko
    I would like to know how people usually save animations in order to load them easily in Cocos2d with as little hard-coding as possible. E.g. the solution I thought of is to have one plist file containing information about each frame, and a second plist containing information about each animation (the name of the animation, which frames to play, and probably the delay). If this is the correct solution, how can I generate such plist files for a spritesheet automatically?

    Read the article

  • Using Bullet physics engine to find the moment of object contact before penetration

    - by MooMoo
    I would like to use the Bullet physics engine to simulate objects in a 3D world. One of the objects in the world will move using the position from a 3D mouse control. I will call it the "Mouse Object", and any other object in the world "Object A". I define the time just before the "Mouse Object" and "Object A" collide as t-1, and the time at which the "Mouse Object" penetrates "Object A" as t. Now there is a problem with rendering the scene, because when I move the mouse very fast, the "Mouse Object" will already reside inside "Object A" before "Object A" starts to move. I would like the "Mouse Object" to stop right away and attach to "Object A". Also, if "Object A" moves, the "Mouse Object" should follow (stay attached to) "Object A" rather than stop at the first collision.

    This is what I did: I find the position of the "Mouse Object" at time t-1 and time t, and name them pos(t-1) and pos(t). The contact time will be somewhere between t-1 and t; I name the time of contact t_contact, so the contact position (without penetration) between the "Mouse Object" and "Object A" will be pos(t_contact). I then create multiple "Mouse Objects" using the equation pos[n] = pos(t-1) + C * (pos(t) - pos(t-1)), where 0 <= C <= 1. If I choose C = 0.1, 0.2, 0.3, ..., 1.0, I get pos[n] for 10 values. Then I test collision for all of these 10 "Mouse Objects" and choose the one that separates "no collision" from "collision". I feel this method is very inefficient. I am not sure how other people find the time of contact or the position of contact when "Object A" can move.
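
    Bullet can do this search directly: a convex sweep test casts the mouse object's shape from its pose at t-1 to the pose at t and reports the first hit fraction, which is exactly the C of the contact point, without sampling intermediate positions by hand. A hedged sketch against the Bullet 2.x API (variable names are illustrative):

        #include <btBulletDynamicsCommon.h>

        // Sweep the mouse object's convex shape from its pose at t-1 to the pose the
        // 3D mouse requests at t. If something is hit, stop at the contact point.
        btTransform sweepMouseObject(btDiscreteDynamicsWorld* world,
                                     const btConvexShape* mouseShape,
                                     const btTransform& fromPose,   // pos(t-1)
                                     const btTransform& toPose)     // pos(t)
        {
            btCollisionWorld::ClosestConvexResultCallback callback(
                fromPose.getOrigin(), toPose.getOrigin());

            world->convexSweepTest(mouseShape, fromPose, toPose, callback);

            if (!callback.hasHit())
                return toPose;                   // free path: take the full step

            // m_closestHitFraction is the C in pos(t-1) + C * (pos(t) - pos(t-1))
            // at the first moment of contact, so interpolate to just that point.
            btTransform contactPose;
            contactPose.setRotation(fromPose.getRotation());
            contactPose.setOrigin(fromPose.getOrigin().lerp(
                toPose.getOrigin(), callback.m_closestHitFraction));
            return contactPose;
        }

    Running this sweep each frame from the current attached position towards the new mouse position should also keep the mouse object pressed against "Object A" while it moves, since the sweep always stops at the current surface.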

    Read the article

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this.

    First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects - nothing big, really. A cute, little manager.

    In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architect the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite).

    The reason for that is simple. After all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix in the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class, or to split this giant manager into sub-managers.

    Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?

    Read the article

  • Common light map practices

    - by M. Utku ALTINKAYA
    My scene consists of individual meshes. At the moment each mesh has its own associated light map texture, and I was able to implement the light mapping using these many small textures.

    1) Of course, I want to create an atlas, but how do you split atlases into pages? I mean, do you group the light maps of objects that are close to each other, and load light maps on the fly if the scene is expected to be big?

    2) The 3D authoring software provides automatic UV coordinates for each mesh in the scene, but there are empty areas in the texel space, so if I scale the texture polygons, the texel density of each face will not match the other meshes. If I create an atlas like that, there will be varying light map resolution. How do you solve this - just leave it as it is, or ignore the resolution?

    Actually, these questions also apply to other non-tiled maps.

    Read the article

  • How to create a Raining Effect (Particles) on Android?

    - by user19495
    I am developing a 2D Android strategy game that runs on a SurfaceView, so I can't (or can I?) use LibGDX's particle system. I would like to make a raining effect; I am aiming for something like this ( http://ridingwiththeriver.files.wordpress.com/2010/09/rain-fall-animation.gif ). I don't need the splash effect at the end (although that would be superb, it would probably take up a lot of system resources). How could I achieve that raining effect? Any ideas? Thank you a lot in advance!

    Read the article

  • Hit Detection When rotating the camera

    - by SD1990
    This bug/feature has been plaguing me for a while and I want to know the best way to fix it. I'm testing simple hit detection with a wall, like:

        if (Forward button)
            if (Inv.w.z < -49 || Inv.w.z > 49)
                pos.z = 0.0f;
            else if (Inv.w.x < -49 || Inv.w.x > 49)
                pos.z = 0.0f;
            else
                pos.z = +1.0f;

    where Inv.w is the camera position. Now, obviously, when I hit that certain point I can no longer move away from the wall - or anywhere, in fact. How can I change this code so that when the camera is turned away from the wall I am allowed to move again? For example, the player hits the wall and can't move until they turn around or to the side. I know it's something to do with velocity, but I'm pretty new to this, so please bear with me if this is easy.
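
    One common fix is to stop zeroing the speed based on where the camera already is, and instead compute the position the move would produce, then clamp (or reject) only that move if it would leave the allowed area; moving away from the wall then works no matter which way the camera faces. A minimal sketch, with the yaw and speed names assumed rather than taken from the asker's code, and the -49..49 bounds reused from the snippet above:

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Advance the camera by `speed` along its current yaw, but clamp the result
        // to the walkable box instead of killing the velocity when already at a wall.
        Vec3 moveForward(const Vec3& position, float yawRadians, float speed)
        {
            Vec3 next = position;
            next.x += std::sin(yawRadians) * speed;
            next.z += std::cos(yawRadians) * speed;

            // Clamp per axis: a move into the wall is absorbed, a move along or
            // away from the wall still goes through.
            next.x = std::clamp(next.x, -49.0f, 49.0f);
            next.z = std::clamp(next.z, -49.0f, 49.0f);
            return next;
        }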

    Read the article

  • ArrayList of Entities Random Movement

    - by opiop65
    I have an ArrayList of entities that I want to move randomly. No matter what I do, they won't move. Here is my Female class:

        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.Rectangle;
        import java.util.Random;
        import javax.swing.ImageIcon;

        public class Female extends Entity {
            static int femaleX = 0;
            static int femaleY = 0;
            double walkSpeed = .1f;
            Random rand = new Random();
            int random;
            int dir;
            Player player;

            public Female(int posX, int posY) {
                super(posX, posY);
            }

            public void update() {
                posX += femaleX;
                posY += femaleY;
            }

            public void draw(Graphics2D g2d) {
                g2d.drawImage(getFemaleImg(), posX, posY, null);
                if (Player.showBounds == true) {
                    g2d.draw(getBounds());
                }
            }

            public Image getFemaleImg() {
                ImageIcon ic = new ImageIcon("res/female.png");
                return ic.getImage();
            }

            public Rectangle getBounds() {
                return new Rectangle(posX, posY, getFemaleImg().getHeight(null), getFemaleImg().getWidth(null));
            }

            public void moveFemale() {
                random = rand.nextInt(3);
                System.out.println(random);
                if (random == 0) {
                    dir = 0;
                    posX -= (int) walkSpeed;
                }
            }
        }

    And here is how I update the Female class in the main class:

        public void actionPerformed(ActionEvent ae) {
            player.update();
            for (int i = 0; i < females.size(); i++) {
                Female tempFemale = females.get(i);
                tempFemale.update();
            }
            repaint();
        }

    If I do something like this (in the Female update method):

        public void update() {
            posX += femaleX;
            posY += femaleY;
            posX -= walkSpeed;
        }

    The characters move with no problem. Why is this?

    Read the article

  • Forward rendering and multiple shadow maps

    - by Irbis
    I have two light sources in my scene. I created two FBOs which store depth textures for these lights. The render loop looks like this:

        bind fbo1
        save depth values for first light
        unbind fbo1
        bind fbo2
        save depth values for second light
        unbind fbo2
        enable additive blending
        bind first depth texture
        render scene
        bind second depth texture
        render scene
        disable additive blending

    For one light source the program works fine. For many light sources I use additive blending to accumulate the lighting results, but then some objects become transparent (for example, when an object which is further away from the camera is drawn before an object which is closer to the camera). How can I resolve that problem? How should I accumulate lighting effects for many light sources (many shadow maps)? P.S. I use OpenGL/GLSL 3.3+.
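
    The see-through look usually comes from the additive passes blending regardless of depth order. A common forward-rendering fix is to lay down depth in the first lighting pass and then render the extra light passes with depth writes off and the depth test set to equal-or-less, so only the front-most surface accumulates light. A hedged OpenGL sketch of the per-frame state changes, not tied to the asker's exact FBO setup:

        // Pass 0: first light, normal depth writes fill the depth buffer.
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        // ... bind first shadow map, render scene ...

        // Passes 1..N: additional lights only add where the same surface is visible.
        glDepthFunc(GL_LEQUAL);        // or GL_EQUAL: pass only on the already-written depth
        glDepthMask(GL_FALSE);         // don't disturb the depth laid down by pass 0
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);   // additive accumulation
        // ... bind second shadow map, render scene again ...

        // Restore state afterwards.
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);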

    Read the article
