Search Results

Search found 32114 results on 1285 pages for 'general development'.


  • Best way to implement a simple bullet trajectory

    - by AirieFenix
    I searched and searched, and although it's a fairly simple question I can't find a proper answer, only general ideas (which I already have). I have a top-down game and I want to implement a gun which shoots bullets that follow a simple path (no physics, no change of trajectory, just a go-from-A-to-B thing).

    a: vector of the position of the gun/player. b: vector of the mouse position (cross-hair). w: the vector of the bullet's trajectory. So w = b - a, and the position of the bullet = [x = x0 + speed*time*normalized w.x, y = y0 + speed*time*normalized w.y].

    I have the constructor:

        public Shot(int shipX, int shipY, int mouseX, int mouseY) { // I get the mouse with Gdx.input.getX()/getY()
            ...
            this.shotTime = TimeUtils.millis();
            this.posX = shipX;
            this.posY = shipY;
            // I used aVector = aVector.nor() here before but for some reason it didn't work
            float tmp = (float) (Math.pow(mouseX - shipX, 2) + Math.pow(mouseY - shipY, 2));
            tmp = (float) Math.sqrt(Math.abs(tmp));
            this.vecX = (mouseX - shipX) / tmp;
            this.vecY = (mouseY - shipY) / tmp;
        }

    And here I update the position and draw the shot:

        public void drawShot(SpriteBatch batch) {
            this.lifeTime = TimeUtils.millis() - this.shotTime;
            // position = positionBefore + v*t
            this.posX = this.posX + this.vecX * this.lifeTime * speed * Gdx.graphics.getDeltaTime();
            this.posY = this.posY + this.vecY * this.lifeTime * speed * Gdx.graphics.getDeltaTime();
            ...
        }

    Now the behavior of the bullet seems very awkward: it doesn't go exactly where my mouse is (it's like the mouse is 30px off) and it moves at a seemingly random speed. I know I probably need to open the old algebra book from college, but I'd like somebody to tell me whether I'm on the right track (or point me to it), and whether this is a calculation problem, a code problem or both. Also, is it possible that Gdx.input.getX() gives me an imprecise position? When I draw the cross-hair it also draws off the cursor position. Sorry for the long post and sorry if it's a very basic question. Thanks!
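
    A likely culprit is the position update itself: multiplying by both lifeTime and getDeltaTime() makes the per-frame displacement grow over time instead of staying proportional to the frame time, which would explain the seemingly random speed. The offset is probably a coordinate-space issue, since Gdx.input.getX()/getY() return screen coordinates (y-down, unaffected by the camera). A minimal sketch of one common fix, assuming a libGDX OrthographicCamera called camera; the names here are illustrative, not taken from the original class:

        // com.badlogic.gdx.math.Vector2 / Vector3 assumed imported
        Vector3 target = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
        camera.unproject(target);                      // screen coords -> world coords (flips the y axis)

        Vector2 dir = new Vector2(target.x - shipX, target.y - shipY).nor();  // normalized w = b - a, computed once

        // each frame:
        float delta = Gdx.graphics.getDeltaTime();
        posX += dir.x * speed * delta;                 // position += v * dt, no lifeTime factor
        posY += dir.y * speed * delta;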

    Read the article

  • Using Ogre with android [closed]

    - by Rich
    I am trying to get Ogre 3D to work on Android. I have managed to download and run the Ogre sample browser, but I am really struggling to get a basic application working; I have been trying for days now to no avail. Does anyone have any pointers on where to start with this? Thanks if anyone can help.

    EDIT: Very sorry for my rubbish question! I am a bit new to this and I am just trying to seek some guidance. OK, so I followed the instructions on the Ogre wiki to build Ogre for Android and the sample browser here: http://www.ogre3d.org/tikiwiki/tiki-index.php?page=CMake+Quick+Start+Guide&tikiversion=Android so it is definitely possible. The issue I am having is knowing what I need to do to get started with Ogre, e.g. just a simple hello-world-style app that might do nothing more than show the Ogre head; tutorials would actually be good, because I could not really find any simple ones and I am very new to 3D development. I just found the sample browser massive: yes, it has everything in it, but it's very difficult to understand how it all works. What I am asking for is basically some help, as I have been trying to pull out parts of the sample browser to just create a view with a 3D model. Hope this is better?

    Read the article

  • Many sources of movement in an entity system

    - by Sticky
    I'm fairly new to the idea of entity systems, having read a bunch of stuff (most usefully, this great blog and this answer). I'm having a little trouble understanding how something as simple as manipulating the position of an object from an undefined number of sources should work. That is, I have my entity, which has a position component. I then have some event in the game which tells this entity to move a given distance, in a given time. These events can happen at any time, and will have different values for position and time. The result is that they'd be compounded together.

    In a traditional OO solution, I'd have some sort of MoveBy class that contains the distance/time, and an array of those inside my game object class. Each frame, I'd iterate through all the MoveBys and apply them to the position. If a MoveBy has reached its finish time, remove it from the array. With the entity system, I'm a little confused as to how I should replicate this sort of behavior.

    If there were just one of these at a time, instead of being able to compound them together, it'd be fairly straightforward (I believe) and look something like this:

    - PositionComponent containing x, y
    - MoveByComponent containing x, y, time
    - Entity which has both a PositionComponent and a MoveByComponent
    - MoveBySystem that looks for an entity with both these components, and adds the value of the MoveByComponent to the PositionComponent. When the time is reached, it removes the component from that entity.

    I'm a bit confused as to how I'd do the same thing with many MoveBys. My initial thoughts are that I would have:

    - PositionComponent, MoveByComponent the same as above
    - MoveByCollectionComponent which contains an array of MoveByComponents
    - MoveByCollectionSystem that looks for an entity with a PositionComponent and a MoveByCollectionComponent, iterating through the MoveByComponents inside it, applying/removing as necessary.

    I guess this is a more general problem of having many of the same component, and wanting a corresponding system to act on each one. My entities contain their components inside a hash of component type to component, so strictly they have only one component of a particular type per entity. Is this the right way to be looking at this? Should an entity only ever have one component of a given type at all times?
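
    For what it's worth, the collection-component idea maps fairly directly onto code. A minimal sketch, assuming a simple hand-rolled entity system; all class and field names here are illustrative:

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        class PositionComponent { float x, y; }

        class MoveBy { float dx, dy, duration, elapsed; }

        class MoveByCollectionComponent { final List<MoveBy> moves = new ArrayList<MoveBy>(); }

        class MoveByCollectionSystem {
            // Called once per frame for every entity that has both components.
            void update(PositionComponent pos, MoveByCollectionComponent coll, float dt) {
                for (Iterator<MoveBy> it = coll.moves.iterator(); it.hasNext(); ) {
                    MoveBy m = it.next();
                    float step = Math.min(dt, m.duration - m.elapsed); // clamp so the last frame doesn't overshoot
                    pos.x += (m.dx / m.duration) * step;               // constant velocity over the move's duration
                    pos.y += (m.dy / m.duration) * step;
                    m.elapsed += dt;
                    if (m.elapsed >= m.duration) it.remove();          // finished moves drop out of the collection
                }
            }
        }

    Each concurrent movement stays independent, they compound naturally on the shared position, and the entity still holds exactly one component of each type.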

    Read the article

  • How to deal with OpenGL and Fullscreen on OS X

    - by Armin Ronacher
    I do most of my development on OS X, and for my current game project this is my target environment. However, when I play games I play on Windows. As a Windows gamer I am used to Alt+Tab switching from within the game to the last application that was open. On OS X I currently can't find either a game that supports that or a way to make it possible. My current project is based on SDL 1.3, and I can see that Cmd+Tab is a sequence that is sent directly to my application and not intercepted by the operating system. My first attempt was to hide the rendering window on Cmd+Tab, which certainly works, but has the disadvantage that a hidden OpenGL window in SDL cannot be restored when the user tabs back to the application. First of all, there is no event fired for that (or I can't find it); secondly, the core problem is that when that application window is hidden, my game is still the active application, just with its window gone. That is incredibly annoying. Any ideas how to approximate the Windows/Linux behavior for Alt+Tab?

    Read the article

  • canvas tile grid, hover effects, single tilesheet, etc

    - by user121730
    I'm currently in the process of building both the client and server side of an HTML5, canvas, and WebSocket game. This is what I have thus far for the client: http://jsfiddle.net/dDmTf/7/

    Current obstacles:

    - The hover effect has no idea what to put back after the mouse leaves. Currently it's just drawing a "void" tile, but I can't figure out how to redraw a single tile without redrawing the whole map.
    - How would I go about storing multiple layers within the map variable? I was considering just using a multi-dimensional array for each layer (similar to what you see as the current array) and just iterating through it, but is that really an efficient way of doing it?

    Side note: the tile sheet being used for the jsfiddle display is only for development. I'll be replacing it as things progress in the engine.

    Hopefully you guys can help me; I've been struggling to get through things, since I'm learning how this kind of stuff works as I go. If you have any pointers for my JavaScript, feel free to share them. As I'm more or less learning advanced usage as I go, I'm sure I'm doing plenty of things wrong. Note: I will continue to update this post as the engine improves, updating the jsfiddle link and the obstacles list by striking things that have been solved or adding new items. Thanks!

    Read the article

  • Moving characters on a grid [on hold]

    - by madmax1
    I am developing my first game with C++. My game uses a grid of rectangles. I have a class Board which manages the grid as a whole, initializes the terrain, places/removes characters, etc. It has a 2D vector of a class Field, which handles the structure of the field, contained objects, characters, etc. Field in turn contains a vector of class Character, the characters positioned on that field. Now I want to implement the functionality to move a character on the board, but I don't know the best practice for doing so. Should I implement a moveCharacter(character, offset) function in Board, and make it search for the character and move it? Or should I implement a function move(offset) in Character? This would surely be nicest, however it requires characters to know about the board they are on, or the field, which in turn knows the board. On the one hand I feel like I should avoid dependencies between classes as much as possible, e.g. to increase portability of classes to different projects; on the other hand I think the character.move() functionality is the most comfortable for further development. I'm pretty new to "bigger" C++ projects, and these kinds of design questions pop up more and more often lately and I have trouble deciding. Thanks a lot for any advice!
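
    One common compromise is to keep the movement rules on the Board but give Character a thin convenience method that delegates to it, so the character only holds a reference to its board. A rough sketch of that shape, written in Java here purely for brevity (the original project is C++, and every name below is illustrative):

        import java.util.ArrayList;
        import java.util.List;

        class Field {
            final List<GameCharacter> occupants = new ArrayList<GameCharacter>();
        }

        class Board {
            private final Field[][] fields;

            Board(int w, int h) {
                fields = new Field[w][h];
                for (int x = 0; x < w; x++)
                    for (int y = 0; y < h; y++)
                        fields[x][y] = new Field();
            }

            // Board owns the grid and the move rules; it is the only class that touches Fields.
            boolean moveCharacter(GameCharacter c, int dx, int dy) {
                int nx = c.x + dx, ny = c.y + dy;
                if (nx < 0 || ny < 0 || nx >= fields.length || ny >= fields[0].length) return false;
                fields[c.x][c.y].occupants.remove(c);
                fields[nx][ny].occupants.add(c);
                c.x = nx;
                c.y = ny;
                return true;
            }
        }

        class GameCharacter {
            int x, y;
            private final Board board;                 // the one dependency the character keeps
            GameCharacter(Board board) { this.board = board; }
            boolean move(int dx, int dy) { return board.moveCharacter(this, dx, dy); }  // thin delegate
        }

    The point is that only one class actually implements the rule; character.move() stays convenient without duplicating any logic or letting Character touch Fields directly.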

    Read the article

  • Physics-perfect (or somewhere near) 3d sound engine

    - by passcod
    I'm new to game programming, although I have some years of experience in console/web development. My problem is not so much that I can't find what I'm looking for; it's that I don't have the terminology to actually perform a successful search. I am looking for a physics engine with a strong focus on sound. In fact, I do not care at all about anything else. What I mean is better explained by an example: suppose a first-person-type game. You are facing north, and someone somewhere around you throws a flute at you (never mind the absurdity of the situation). The flute spins while it is on its way, making sounds through its holes. There is a wind of, say, 5 knots, blowing south. I imagine a physics engine would be capable of calculating the trajectory of the flute, as well as the direction it takes after it hits. What I want is for the physics engine to calculate the precise sounds it will make, from any listener's perspective. Does any such engine exist? If there are several, which one would be best for the example above?

    Read the article

  • World Location issues with camera and particle

    - by Joe Weeks
    I have a bit of a strange question. I am adapting the existing code base, including the tile engine, from the book XNA 4.0 Game Development by Example by Kurt Jaegers; the part I am working on is the 2D platformer in the last couple of chapters. I am creating a platformer with a scrolling screen (similar to an old-school screen chase). I originally did not have any problems with this aspect, as it is simply a case of updating the camera position on the X axis with game time, but I have since added a particle system to allow the players to fire weapons. This particle shot is updated via the world position, and I have translated everything correctly in terms of world position when the collisions are checked. The crux of the problem is that the collisions only work once the screen is static; whilst the camera is moving to follow the player, the collisions are offset and hit blocks that are no longer there. My collision for particles is as follows (there are two of these, vertical and horizontal):

        protected override Vector2 horizontalCollisionTest(Vector2 moveAmount)
        {
            if (moveAmount.X == 0)
                return moveAmount;

            Rectangle afterMoveRect = CollisionRectangle;
            afterMoveRect.Offset((int)moveAmount.X, 0);
            Vector2 corner1, corner2;

            // new particle world alignment code.
            afterMoveRect = Camera.ScreenToWorld(afterMoveRect);
            // end.

            if (moveAmount.X < 0)
            {
                corner1 = new Vector2(afterMoveRect.Left, afterMoveRect.Top + 1);
                corner2 = new Vector2(afterMoveRect.Left, afterMoveRect.Bottom - 1);
            }
            else
            {
                corner1 = new Vector2(afterMoveRect.Right, afterMoveRect.Top + 1);
                corner2 = new Vector2(afterMoveRect.Right, afterMoveRect.Bottom - 1);
            }

            Vector2 mapCell1 = TileMap.GetCellByPixel(corner1);
            Vector2 mapCell2 = TileMap.GetCellByPixel(corner2);

            if (!TileMap.CellIsPassable(mapCell1) || !TileMap.CellIsPassable(mapCell2))
            {
                moveAmount.X = 0;
                velocity.X = 0;
            }
            return moveAmount;
        }

    And the camera is pretty much the same as the one in the book... with this added (as an early test):

        public static void Update(GameTime gameTime)
        {
            position.X += 1;
        }

    Read the article

  • Sprite sorting issues

    - by TheBroodian
    I apologize for the vague title; I'm not sure how else to phrase this problem. I am using tIDE to assist me in my game's world development. To give a dynamic effect to map layers within tIDE, it has events that can be wired up to draw sprites before or after the draw of layers, to create foreground or background effects during runtime. This all works fine and well; however, the only way I understand this works is by calling tIDE's internal SpriteBatch to create the effect. This creates a problem for me, because within tIDE's source code its SpriteBatch's Begin call is set to SpriteSortMode.Deferred, and my characters have particle elements that I would like to draw in front of and behind themselves, via a draw-depth value. I can use a separate instance of SpriteBatch, call my character's draw method, and set sprite sorting there, but then my character is drawn on top of all layers in my tIDE map, which is even worse to me than my particles not being drawn as I want them to be. So, in summary, I want all of my crap to work, but at the moment the only way I can figure to do that is to ghetto-rig the SpriteBatch within my characters' draw methods by calling spriteBatch.End(), then starting a new call to Begin() with SpriteSortMode.BackToFront, doing all of my characters' draw logic, and then calling another spriteBatch.End() followed once again by a new spriteBatch.Begin(). Obviously that is pretty undesirable, but I don't know any other feasible alternatives. Anybody got any wisdom they could impart unto me as to how I could handle this?

    Read the article

  • Games without a(n explicit) game loop

    - by Davy8
    Most game development happens with a main game loop. Are there any good articles/blog posts/discussions about games without a game loop? I imagine they'd mostly be web games, but I'd be interested in hearing otherwise. (As a side note, I think it's really interesting that the concept is almost exclusively used in gaming as far as I'm aware; perhaps that may be another question.) Edit: I realize there's probably a redraw loop somewhere. I guess what I really mean is a loop that is hidden from you. Frames are something you as the developer are not concerned with, as you're working at a higher level of abstraction. E.g. someLootItem.moveTo(inventory, someAnimationType) would move the item from the loot box to your inventory using the specified animation type, without the game developer having to worry about the implementation details of that animation. Maybe that's how "real" games end up working, but from reading most tutorials they seem to imply that a much more granular level of control is used; then again, that might just be an artifact of being a tutorial.

    Read the article

  • Keep cube spinning after fling

    - by Zero
    So I've been trying to get started with game development for Android using Unity3D. For my first project I've made a simple cube that you can spin using touch. For that I have the following code:

        using UnityEngine;
        using System.Collections;

        public class TouchScript : MonoBehaviour
        {
            float speed = 0.4f;
            bool canRotate = false;
            Transform cachedTransform;

            public bool CanRotate {
                get { return canRotate; }
                private set { canRotate = value; }
            }

            void Start () {
                // Make reference to transform
                cachedTransform = transform;
            }

            // Update is called once per frame
            void Update () {
                if (Input.touchCount > 0) {
                    Touch touch = Input.GetTouch (0);

                    // Switch through touch events
                    switch (Input.GetTouch (0).phase) {
                    case TouchPhase.Began:
                        if (VerifyTouch (touch))
                            CanRotate = true;
                        break;
                    case TouchPhase.Moved:
                        if (CanRotate)
                            RotateObject (touch);
                        break;
                    case TouchPhase.Ended:
                        CanRotate = false;
                        break;
                    }
                }
            }

            bool VerifyTouch (Touch touch) {
                Ray ray = Camera.main.ScreenPointToRay (touch.position);
                RaycastHit hit;

                // Check if there is a collider attached already, otherwise add one on the fly
                if (collider == null)
                    gameObject.AddComponent (typeof(BoxCollider));

                if (Physics.Raycast (ray, out hit)) {
                    if (hit.collider.gameObject == this.gameObject)
                        return true;
                }
                return false;
            }

            void RotateObject (Touch touch) {
                cachedTransform.Rotate (new Vector3 (touch.deltaPosition.y, -touch.deltaPosition.x, 0) * speed, Space.World);
            }
        }

    The above code works fine. However, I'm wondering how I can keep the cube spinning after the user lifts their finger. The user should be able to "fling" the cube, which would keep spinning and, after a while, slowly come to a stop due to drag. Should I do this using AddForce or something? I'm really new to this stuff, so I'd like it if you guys could point me in the right direction here :)
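
    Two common routes here: with physics, give the cube a Rigidbody, drive the spin from the touch delta with AddTorque (or by setting angularVelocity), and let angularDrag bring it to rest; without physics, remember the last touch delta as an angular velocity when the finger lifts, keep applying it every frame, and decay it manually. A language-agnostic sketch of the second, manual route (written in Java here because this listing mixes languages; none of these names come from the Unity API):

        // Illustrative 'fling' state: keeps spinning after release, decays with drag.
        interface Spinnable { void rotate(float degreesX, float degreesY); }

        class FlingSpin {
            private float angVelX, angVelY;            // degrees per second around two axes
            private static final float DRAG = 2.0f;    // higher = stops sooner

            // While a finger is down, the angular velocity simply tracks the finger.
            void onTouchMoved(float deltaX, float deltaY, float dt) {
                angVelX = deltaY / dt;
                angVelY = -deltaX / dt;
            }

            // Called every frame once the finger has lifted.
            void update(float dt, Spinnable cube) {
                cube.rotate(angVelX * dt, angVelY * dt);
                float damping = (float) Math.exp(-DRAG * dt);  // frame-rate-independent exponential decay
                angVelX *= damping;
                angVelY *= damping;
            }
        }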

    Read the article

  • Kickstarter and 2D smartphone games

    - by mm24
    I am about to launch a Kickstarter project because, after 14 months of full-time development on my first iOS game, I ran out of money. I developed an iOS game that needs a few more months to be ready (the game structure is there, but I haven't yet worked on balancing the difficulty of the various levels). I have a feeling that most of the computer games funded on Kickstarter are for console, PC or Mac and not for smartphones. The category that many people seem to like is RPG-style games. I have done tons of work over a year and collaborated with musicians and illustrators to get top-quality graphics and music. The game looks cool for a 2D iOS game but, compared to what I've seen on Kickstarter, I feel very small and humbled. I have searched for smartphone game projects on Kickstarter but haven't found many. I believe the reason is that people are not keen on backing an app that is normally sold for $0.99, as they perceive it is not something big. Am I the only one having this feeling? Could anyone please share a list of references to some successfully backed Kickstarter smartphone game projects? (In this way the question will not become a "chat" and will fulfill the requirements to be a gamedev question.) Any other article or authoritative answer will be welcome.

    Read the article

  • Matrices: Arrays or separate member variables?

    - by bjz
    I'm teaching myself 3D maths and in the process building my own rudimentary engine (of sorts). I was wondering what would be the best way to structure my matrix class. There are a few options:

    Separate member variables:

        struct Mat4 {
            float m11, m12, m13, m14,
                  m21, m22, m23, m24,
                  m31, m32, m33, m34,
                  m41, m42, m43, m44;
            // methods
        }

    A multi-dimensional array:

        struct Mat4 {
            float[4][4] m;
            // methods
        }

    An array of vectors:

        struct Mat4 {
            Vec4[4] m;
            // methods
        }

    I'm guessing there would be positives and negatives to each. From 3D Math Primer for Graphics and Game Development, 2nd Edition, p. 155:

        Matrices use 1-based indices, so the first row and column are numbered 1. For example, a12 (read "a one two," not "a twelve") is the element in the first row, second column. Notice that this is different from programming languages such as C++ and Java, which use 0-based array indices. A matrix does not have a column 0 or row 0. This difference in indexing can cause some confusion if matrices are stored using an actual array data type. For this reason, it's common for classes that store small, fixed size matrices of the type used for geometric purposes to give each element its own named member variable, such as float a11, instead of using the language's native array support with something like float elem[3][3].

    So that's one vote for method one. Is this really the accepted way to do things? It seems rather unwieldy if the only benefit would be sticking with the conventional math notation.
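
    A fourth option worth mentioning is a flat array with named accessors on top, which keeps the math-style notation without giving up contiguous storage that can be handed straight to a graphics API. A quick sketch, written in Java here since this listing mixes languages; the column-major layout is just one common convention:

        // Illustrative: flat 16-float storage, column-major, with a11-style accessors on top.
        class Mat4 {
            final float[] m = new float[16];         // contiguous, easy to pass to a graphics API

            // row r, column c, both 1-based to match the textbook notation
            float get(int r, int c)          { return m[(c - 1) * 4 + (r - 1)]; }
            void  set(int r, int c, float v) { m[(c - 1) * 4 + (r - 1)] = v; }

            float a11() { return get(1, 1); }        // named accessors where readability matters
            float a12() { return get(1, 2); }
        }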

    Read the article

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factorize frequently-used groups of nodes by creating a Material Function (Content Browser, right-click, New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, then I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification in the behaviour of an actor isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally useless effort. My question is: can I create a "Kismet function", just like a Material Function? Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScripts, don't know where the documentation is, and more generally don't have enough time to invest in learning UnrealScript. This "Kismet function" feature must be usable by artists (with little programming knowledge). If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factorize a few nodes. Thanks for any information!

    Read the article

  • touch detection of non-rectangular sprites (cocos2d)

    - by hogni89
    What is the correct way to implement a non-rectangular sprite in cocos2d? I am working on a jigsaw puzzle, and therefore our sprites have some strange shapes (jigsaw puzzle pieces). As of now, we have implemented the detection this way:

        - (void)selectSpriteForTouch:(CGPoint)touchLocation {
            CCSprite *newSprite = nil;
            // Loop array of sprites
            for (CCSprite *sprite in movableSprites) {
                // Check if sprite is hit.
                // TODO: Swap if with something better.
                if (CGRectContainsPoint(sprite.boundingBox, touchLocation)) {
                    newSprite = sprite;
                    break;
                }
            }
            if (newSprite != selSprite) {
                // Move along, nothing to see here
                // Not the problem
            }
        }

        - (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
            [self selectSpriteForTouch:touchLocation];
            return TRUE;
        }

    I know that the problem is in the keyword "sprite.boundingBox". Is there a better way of implementing this, or is it a limitation of sprites based on PNGs? If so, how should I proceed? I'm new to iPhone and game development :D
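
    The usual refinement is to keep the cheap boundingBox check as a first pass and then, for sprites that pass it, test whether the touched pixel is actually opaque in the piece's image, with the touch first converted into that sprite's own local space. A rough sketch of that second pass, written in Java here rather than Objective-C; the names are illustrative:

        import java.awt.image.BufferedImage;

        class PieceHitTester {
            // First pass elsewhere: reject touches outside the sprite's bounding box.
            // Second pass: only count the touch if the pixel under it is (mostly) opaque.
            static boolean hitsOpaquePixel(BufferedImage pieceImage, int localX, int localY) {
                if (localX < 0 || localY < 0
                        || localX >= pieceImage.getWidth() || localY >= pieceImage.getHeight()) {
                    return false;
                }
                int alpha = pieceImage.getRGB(localX, localY) >>> 24;  // top 8 bits of ARGB
                return alpha > 32;                                     // small threshold ignores soft edges
            }
        }

    The same idea works with a point-in-polygon test against a rough outline of each piece, which avoids reading pixel data but needs the outlines authored or generated up front.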

    Read the article

  • Cry Engine 3 vs UDK

    - by Daniele Riccardelli
    I'm new here and I hope that other people may find this question interesting. A bunch of guys from my university and I are thinking about getting started in game development, but even though the game design is more or less ready, we are stuck on the question "which engine should we choose?". At first we were thinking about Unity3D Free, in particular because we are pretty familiar with C# and, even better, it's completely free, but on the other hand there are some cons, like the lack of dynamic shadowing, that make the realization of our game kind of hard. So we are thinking about moving to either UDK or CryEngine, since they are not as expensive as Unity3D Pro (at least before the game is deployed). The thing we are most worried about, though, is teamwork support (I mean, we might find it hard to coordinate our efforts without software support), since in Unity3D the team features are part of the purchasable content, and we are not interested in paying that much for a project we are not even sure we can complete. So, finally, I hope one of you knows the mentioned engines well enough to give us a tip about which one offers better team features and is advisable for people who have barely worked on video games before but have good object-oriented programming skills. It might be helpful for your suggestion if I mention that our game is an exploration game. Thanks to anybody who's going to answer.

    Read the article

  • Android - big game universe

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3d Studio Max, Flash, etc.). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random amount of single asteroids/comets which the player can interact with, and a non-interactive background. I am looking for the least complicated approach to creating such a universe. My current ideas are:

    - Simply create bitmaps with space scenery for the background, so that they can be seamlessly tiled and repeated, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on it.
    - Use vector graphics. I would have a solid color background, some random background objects and gradients here and there. My problem here: lack of knowledge of how well vector graphics is integrated in Android.

    Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory for the whole game? I am interested in technical details regarding each of the ideas, and a suggestion as to which I should go with.
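
    On the "does Android manage big bitmaps well" point: the usual approach is to never hold the whole universe in memory at all, but only the handful of background tiles that overlap the camera, computed from the camera position each frame. A minimal sketch of that bookkeeping (illustrative names, no particular framework assumed):

        // Illustrative: derive the visible range of background tiles from the camera rectangle.
        class TiledBackground {
            static final int TILE_SIZE = 512;  // pixels per seamless background tile

            // Returns {firstCol, firstRow, lastCol, lastRow} of tiles to draw this frame.
            static int[] visibleTiles(float camX, float camY, int screenW, int screenH) {
                int firstCol = (int) Math.floor(camX / TILE_SIZE);
                int firstRow = (int) Math.floor(camY / TILE_SIZE);
                int lastCol  = (int) Math.floor((camX + screenW) / TILE_SIZE);
                int lastRow  = (int) Math.floor((camY + screenH) / TILE_SIZE);
                return new int[] { firstCol, firstRow, lastCol, lastRow };
            }

            // For a pseudo-infinite universe, wrap the tile coordinates back into a finite tile set.
            static int tileIndex(int col, int row, int setCols, int setRows) {
                return Math.floorMod(row, setRows) * setCols + Math.floorMod(col, setCols);
            }
        }

    With something like this, only the few tiles covering the screen (plus a border) ever need to be decoded and kept around, regardless of how large the universe pretends to be.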

    Read the article

  • Java game design question (graphical objects)

    - by vemalsar
    Hello guys, I'm a beginner in game development, in Java, and here on this site too, and I have a game design question. Please comment on my idea. I have a main loop which calls update and draw methods. I want to use an ArrayList which stores graphical objects; they have coordinates and an image or text to draw, and my game objects extend this class. In update, I can choose which objects should be put in the array, and in draw I display the elements of the array on the screen. I'm using a buffer and draw there first, but that is not important now, I guess... Here is simple (not full) code, only the logic:

        public class GamePanel extends JPanel implements KeyListener {

            ArrayList<graphicalObjects> graphArray = new ArrayList<graphicalObjects>();

            public void update() {
                // change the game scene, update the graphArray, process input etc.
            }

            public void draw() {
                // draws every element of graphArray to a JPanel
            }

            public static void main(String[] args) {
                while (true) {
                    update();
                    draw();
                }
            }
        }

    My questions:

    1. Should I use an interface or an abstract class for graphicalObjects?
    2. Are the graphicalObjects class and the ArrayList really needed, or is there a better solution?
    3. How should the objects be drawn? Do they draw themselves with their own method, or does the draw method draw them manually based on the graphicalObjects variables (x, y coordinates, image, etc.)?

    If this design is wrong, please suggest another one! All comments are welcome, and sorry if this is a dumb question. Thanks!
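
    On question 3, the common pattern is to let each object draw itself: a small abstract base class holds the shared state, subclasses override draw, and the panel just iterates over the list. A minimal sketch of that shape (illustrative names; Swing/Java2D assumed since the panel extends JPanel):

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        abstract class GraphicalObject {
            protected float x, y;                    // shared state lives in the base class
            abstract void draw(Graphics2D g);        // each object knows how to render itself
        }

        class SpriteObject extends GraphicalObject {
            private final BufferedImage image;
            SpriteObject(BufferedImage image, float x, float y) {
                this.image = image;
                this.x = x;
                this.y = y;
            }
            @Override
            void draw(Graphics2D g) {
                g.drawImage(image, (int) x, (int) y, null);
            }
        }

        // In the panel's draw method:
        //     for (GraphicalObject obj : graphArray) obj.draw(g2d);

    An abstract class fits here because the objects share state (coordinates), not just a contract; an interface would do if they shared nothing but the draw method.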

    Read the article

  • rotating an object on an arc

    - by gardian06
    I am trying to get a turret to rotate on an arc, and have hit a wall. I have 8 possible starting orientations for the turrets, and want them to rotate on a 90-degree arc. I currently take the starting rotation of the turret, and from that derive the positive and negative boundary of the arc. Because of engine restrictions (Unity) I have to do all of my tests against a value which is between [0, 360], and due to numerical precision issues I cannot test against specific values. I would like to write a general test without having to go in and jury-rig special cases.

    My current test is:

        // member variables
        public float negBound;
        public float posBound;

        // found in Start() function (called immediately after construction)
        // eulerAngles.y is the degree measure of the starting y rotation
        negBound = transform.eulerAngles.y - 45;
        posBound = transform.eulerAngles.y + 45;

        // insure that values are within bounds
        if (negBound < 0) {
            negBound += 360;
        } else if (posBound > 360) {
            posBound -= 360;
        }

        // called from Update() when target not in firing line
        void Rotate() {
            // controls what direction
            if (transform.eulerAngles.y > posBound) {
                dir = -1;
            } else if (transform.eulerAngles.y < negBound) {
                dir = 1;
            }
            // rotate object
        }

    Below is a table of values for my different cases. Read it as: base is the starting rotation of the turret, neg is the negative boundary, pos is the positive boundary, range is the acceptable range of values, and works is whether it performs as expected with the current code.

        base | neg | pos | range            | works
        -----|-----|-----|------------------|------
           0 | 315 |  45 | 315-0, 0-45      |
          45 |   0 |  90 | 0-45, 45-90      |  x
         135 |  90 | 180 | 90-135, 135-180  |  x
         180 | 135 | 225 | 135-180, 180-225 |  x
         225 | 180 | 270 | 180-225, 225-270 |  x
         270 | 225 | 315 | 225-270, 270-315 |
         315 | 270 |   0 | 270-315, 315-0   |

    I will need to do all tests from derived or stored values, but cannot figure out how to get all of my cases to work simultaneously.

    I attempted to concatenate the two tests:

        if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < negBound)) {
            dir *= -1;
        }

    This caused only the first case to be successful.

    I attempted to store an opposite value, and do:

        void Rotate() {
            // controls what direction
            if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < oposite)) {
                dir = -1;
            } else if ((transform.eulerAngles.y < negBound) && (transform.eulerAngles.y > oposite)) {
                dir = 1;
            }
            // rotate object
        }

    This causes the opposite situation to the one indicated in the table. What am I missing here?
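
    One way to avoid per-case boundary juggling entirely is to never store bounds at all: compute the signed difference between the current heading and the starting heading, wrapped into [-180, 180), and steer back whenever it leaves the [-45, 45] window. A sketch of that test in plain Java; in Unity the same arithmetic works on transform.eulerAngles.y, and Mathf.DeltaAngle will do the wrapping for you:

        // Illustrative: wrap-around-safe test for "am I outside my 90-degree arc?"
        class TurretArc {
            // Signed smallest difference between two headings, result in [-180, 180).
            static float deltaAngle(float from, float to) {
                return ((to - from + 540f) % 360f) - 180f;
            }

            // Returns -1 to steer back one way, +1 the other way, 0 if inside the arc.
            static int steerDirection(float baseHeading, float currentHeading, float halfArc) {
                float d = deltaAngle(baseHeading, currentHeading);
                if (d > halfArc) return -1;
                if (d < -halfArc) return 1;
                return 0;
            }
        }

    Because the comparison happens on the wrapped difference rather than on raw angles, the 0 and 315 starting orientations stop being special cases.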

    Read the article

  • Getting into the details of game engine programming

    - by Darkslash
    I am interested in learning game programming, but I really have an interest in the lower-level engineering in games. I have OpenGL experience, and I am really interested in learning more about implementing AI, physics, etc. I have a computer science degree, so I really like getting into technical stuff. Many times when I ask about this sort of thing, I get a lot of "Use an engine", "Use Unity3D", "Why waste your time writing code that already exists", etc., etc. My idea was to use simpler libraries such as SFML or XNA so that I could learn how to implement the more complex systems. The thing is, although I do want to write games, I want to learn things that using something like Unity simply doesn't teach you. My goal is not to make a current-generation-quality 3D game to sell; I just want to make some cool smaller games and learn all I can about the programming side of game development. Is this something that people just do not do anymore? It seems like everywhere I turn people are using Unity or UDK or GameMaker. I fully understand why you would use tools like these, but I can't see how they would suit my purposes. So where does someone like myself turn? Am I trying to learn something that people just do not bother doing anymore? Is the innovation in this area gone, and is it all about gameplay now? I'm sorry if this question seems silly, but I am genuinely interested in knowing more about this and meeting more people who are interested in this sort of thing.

    Read the article

  • how to rotate a sprite using multi touch (andengine) in android?

    - by 786
    I am new to Android game development. I am using AndEngine GLES2. I have created a sprite as a box. This box is now draggable using the code below, and it works fine, but I want multitouch on it: I want to rotate the sprite with two fingers inside that box, while it stays draggable. Please help, either by reworking this code or by giving an exact example of what I'm after; I have been trying for many days but have no idea.

        final float centerX = (CAMERA_WIDTH - this.mBox.getWidth()) / 2;
        final float centerY = (CAMERA_HEIGHT - this.mBox.getHeight()) / 2;

        Box = new Sprite(centerX, centerY, this.mBox, this.getVertexBufferObjectManager()) {
            public boolean onAreaTouched(TouchEvent pSceneTouchEvent, float pTouchAreaLocalX, float pTouchAreaLocalY) {
                this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2,
                                 pSceneTouchEvent.getY() - this.getHeight() / 2);

                float pValueX = pSceneTouchEvent.getX();
                float pValueY = CAMERA_HEIGHT - pSceneTouchEvent.getY();

                float dx = pValueX - gun.getX();
                float dy = pValueY - gun.getY();

                double Radius = Math.atan2(dy, dx);
                double Angle = Radius * 360;

                Box.setRotation((float) Math.toDegrees(Angle));
                return true;
            }

    Thanks.
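
    For the two-finger part, the usual approach is independent of AndEngine: take the angle of the line between the two pointers and, on every move event, add the change in that angle to the sprite's rotation, while the first pointer keeps driving the drag. A sketch of just that bookkeeping (illustrative names; wiring it into onAreaTouched/onSceneTouchEvent and the multitouch pointer IDs is left out):

        // Illustrative two-finger rotation: rotate by the change in the angle between the two pointers.
        class TwoFingerRotate {
            private float lastAngleDeg;
            private boolean tracking;

            private static float angleBetween(float x1, float y1, float x2, float y2) {
                return (float) Math.toDegrees(Math.atan2(y2 - y1, x2 - x1));
            }

            void onSecondPointerDown(float x1, float y1, float x2, float y2) {
                lastAngleDeg = angleBetween(x1, y1, x2, y2);
                tracking = true;
            }

            // Call on every move while two pointers are down; returns degrees to add to the sprite.
            float onMove(float x1, float y1, float x2, float y2) {
                if (!tracking) return 0f;
                float angle = angleBetween(x1, y1, x2, y2);
                float delta = ((angle - lastAngleDeg + 540f) % 360f) - 180f;  // wrap-safe difference
                lastAngleDeg = angle;
                return delta;
            }

            void onPointerUp() { tracking = false; }
        }

    The returned delta would then be applied with something like Box.setRotation(Box.getRotation() + delta), instead of recomputing an absolute angle from a single pointer as the snippet above does.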

    Read the article

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new way of doing things confusing.

    I guess my first question is: does anyone know any figures about the percentage of gamers that can run each version of OpenGL? What's the market share like? 2.x, 3.x, 4.x... I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they list a minimum of a GeForce 8 series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supports up to a certain version, or OS X only supports 3.2, or all kinds of other things. Overall, I'm just confused as to how much support there is for the various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews that mention the 4th edition is better and that the 5th edition doesn't properly teach the new features, or something? Basically, even within learning material I'm seeing discrepancies and I just don't even know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read in various articles and reviews that you want to "stay away from deprecated features like glBegin(), glEnd()", yet a lot of books and tutorials I've seen use that method. I've seen people saying that, basically, the new way of doing things is more complicated yet the old way is bad. Just as a side note, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well, because as far as I understand that's only in 4.x? (Just btw, my desktop supports 4.2.)

    Read the article

  • When attaching AI to a vehicle should I define all steps or try Line of Sight?

    - by ThorDivDev
    This problem is related to an intersection simulation I am building for university. I will try to make it as general as possible. I am trying to assign AI to a vehicle using the jMonkeyEngine platform. AIGama_JmonkeyEngine explains that if you wish to create a car that follows a path, you can define the path in steps. If there were no physics attached whatsoever, then all you need to do is define the x, y, z values of where you want the object to appear in all subsequent steps. I am attaching the VehicleControl that implements jBullet. In this case the author mentions that I would need to define the steering and accelerating behaviors at each step. I was trying to use ghost controls that represented waypoints, and on colliding the car would decide what to do next, like stopping at a red light. This didn't work so well; the car doesn't face the right way.

        public void update(float tpf) {
            Vector3f currentPos = aiVehicle.getPhysicsLocation();
            Vector3f baseforwardVector = currentPos.clone();
            Vector3f forwardVector;
            Vector3f subsVector;

            if (currentState == ObjectState.Running) {
                aiVehicle.accelerate(-800);
            } else if (currentState == ObjectState.Seeking) {
                baseforwardVector = baseforwardVector.normalize();
                forwardVector = aiVehicle.getForwardVector(baseforwardVector);
                subsVector = pointToSeek.subtract(currentPos.clone());

                System.out.printf("baseforwardVector: %f, %f, %f\n", baseforwardVector.x, baseforwardVector.y, baseforwardVector.z);
                System.out.printf("subsVector: %f, %f, %f\n", subsVector.x, subsVector.y, subsVector.z);
                System.out.printf("ForwardVector: %f, %f, %f\n", forwardVector.x, forwardVector.y, forwardVector.z);

                if (pointToSeek != null && pointToSeek.x + 3 >= currentPos.x && pointToSeek.x - 3 <= currentPos.x) {
                    aiVehicle.steer(0.0f);
                    aiVehicle.accelerate(-40);
                } else if (pointToSeek != null && pointToSeek.x > currentPos.x) {
                    aiVehicle.steer(-0.5f);
                    aiVehicle.accelerate(-40);
                } else if (pointToSeek != null && pointToSeek.x < currentPos.x) {
                    aiVehicle.steer(0.5f);
                    aiVehicle.accelerate(-40);
                }
            } else if (currentState == ObjectState.Stopped) {
                aiVehicle.accelerate(0);
                aiVehicle.brake(40);
            }
        }
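
    Comparing only the x coordinates is what makes the car ignore which way it is actually facing; a more robust seek steers by the signed angle between the vehicle's forward vector and the direction to the waypoint. A sketch of such a helper using jMonkeyEngine's math types, assuming the vehicle drives on the x-z plane; the gain and clamp values are made-up tuning numbers:

        import com.jme3.math.FastMath;
        import com.jme3.math.Vector3f;

        // Drop-in helper for the control class above: how much to steer toward a waypoint.
        float steeringToward(Vector3f waypoint) {
            Vector3f pos = aiVehicle.getPhysicsLocation();

            Vector3f forward = aiVehicle.getForwardVector(new Vector3f());
            forward.y = 0;
            forward.normalizeLocal();

            Vector3f toTarget = waypoint.subtract(pos);
            toTarget.y = 0;
            toTarget.normalizeLocal();

            // Signed angle on the ground plane: atan2(cross, dot).
            float cross = forward.z * toTarget.x - forward.x * toTarget.z;
            float dot = forward.dot(toTarget);
            float angle = FastMath.atan2(cross, dot);          // radians; the sign says which side the target is on

            return FastMath.clamp(angle * 0.5f, -0.5f, 0.5f);  // feed into aiVehicle.steer(); flip the sign if it turns the wrong way
        }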

    Read the article

  • Rendering multiple squares fast?

    - by Sam
    So I'm taking my first steps with OpenGL development on Android, and I'm kind of stuck on some serious performance issues... What I'm trying to do is render a whole grid of single-colored squares onto the screen, and I'm getting framerates of ~7 FPS. The squares are 9px in size right now, with a one-pixel border in between, so I get a few thousand of them. I have a class Square, and the renderer iterates over all Squares every frame and calls the draw() method of each (just the iteration is fast enough; with no OpenGL code the whole thing runs smoothly at 60 FPS). Right now the draw() method looks like this:

        // Prepare the square coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                                     GLES20.GL_FLOAT, false,
                                     vertexStride, vertexBuffer);

        // Set color for drawing the square
        GLES20.glUniform4fv(mColorHandle, 1, color, 0);

        // Draw the square
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
                              GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

    So it's actually only three OpenGL calls. Everything else (loading shaders, filling buffers, getting the appropriate handles, etc.) is done in the constructor, and things like the program and the handles are also static attributes. What am I missing here? Why is it rendering so slowly? I've also tried loading the buffer data into VBOs, but that was actually slower... Maybe I did something wrong, though. Any help greatly appreciated! :)
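
    With a few thousand squares, the few-calls-per-square pattern (an attribute pointer, a uniform upload and a glDrawElements for each) is almost certainly the bottleneck rather than the geometry itself; the standard cure is to batch every square into one big buffer, move the color from a uniform into a per-vertex attribute, and issue a single draw call per frame. A sketch of that batching step (GLES20/Android as in the original; shader and handle setup are omitted, and the Square fields and the aColorHandle attribute are illustrative):

        // Needs java.nio.ByteBuffer / ByteOrder / FloatBuffer and android.opengl.GLES20.
        // Layout per vertex: x, y, r, g, b, a (6 floats); 6 vertices per square (two triangles).
        float[] batch = new float[squares.size() * 6 * 6];
        int o = 0;
        for (Square s : squares) {
            float[] quad = {
                s.x,          s.y,
                s.x + s.size, s.y,
                s.x + s.size, s.y + s.size,
                s.x,          s.y,
                s.x + s.size, s.y + s.size,
                s.x,          s.y + s.size,
            };
            for (int v = 0; v < 6; v++) {
                batch[o++] = quad[v * 2];
                batch[o++] = quad[v * 2 + 1];
                batch[o++] = s.r; batch[o++] = s.g; batch[o++] = s.b; batch[o++] = s.a;
            }
        }
        FloatBuffer vb = ByteBuffer.allocateDirect(batch.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer().put(batch);

        int stride = 6 * 4;                                      // bytes per vertex
        vb.position(0);                                          // x, y
        GLES20.glVertexAttribPointer(mPositionHandle, 2, GLES20.GL_FLOAT, false, stride, vb);
        vb.position(2);                                          // r, g, b, a start after x, y
        GLES20.glVertexAttribPointer(aColorHandle, 4, GLES20.GL_FLOAT, false, stride, vb);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, squares.size() * 6);

    One draw call per frame instead of one per square is usually the difference between single-digit and full frame rates here; a VBO only starts to help once the geometry is already batched and not re-uploaded every frame.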

    Read the article

  • Will Unity skills be interchangeable?

    - by Starkers
    I'm currently learning Unity and working my way through a video game maths primer textbook. My goal is to create a racing game for WebGL (using Three.js and maybe Physic.js). I'm well aware that the Unity program shields you from a lot of what's going on and a lot of the grunt work attached to developing even a simple video game, but if I power through a bunch of Unity tutorials, will a lot of the skills I learn translate over to other frameworks/engines? I'm pretty proficient at level design with WebGL, and I'm a good 3D modeller. My weaknesses are definitely AI and physics. While I am rapidly shoring up my math, and while physics is undeniably interesting, there are only so many hours in the day, and there's a wealth of engines out there to take care of this sort of thing. AI does appeal to me a lot more, and is a lot more necessary. AI changes drastically from game to game and is tweaked heavily during development, whereas the physics is a lot more constant. Will learning AI concepts in Unity allow me to transfer this knowledge pretty much anywhere? Or will I just be paddling up Unity creek with these skills?

    Read the article
