Search Results

Search found 32114 results on 1285 pages for 'general development'.


  • libgdx draw issue and animation

    - by johnny-b
    I cannot get the draw method to work: it seems as though bullet.draw(batcher) does nothing, and I cannot understand why, as the bullet is a Sprite. I have made a Sprite[] and added it as an animation. Could that be it? I tried batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY(), bullet.getOriginX() / 2, bullet.getOriginY() / 2, bullet.getWidth(), bullet.getHeight(), 1, 1, bullet.getRotation()); but that doesn't work; the only way it draws is this: batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY()); Below is the code. // this is in an Asset class texture = new Texture(Gdx.files.internal("SpriteN1.png")); texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest); bullet1 = new Sprite(texture, 380, 350, 45, 20); bullet1.flip(false, true); bullet2 = new Sprite(texture, 425, 350, 45, 20); bullet2.flip(false, true); Sprite[] bullets = { bullet1, bullet2 }; bulletAnimation = new Animation(0.06f, bullets); bulletAnimation.setPlayMode(Animation.PlayMode.LOOP); // this is the GameRenderer class public class GameRenderer { private Bullet bullet; private Ball ball; public GameRenderer(GameWorld world) { myWorld = world; cam = new OrthographicCamera(); cam.setToOrtho(true, 480, 320); batcher = new SpriteBatch(); // Attach batcher to camera batcher.setProjectionMatrix(cam.combined); shapeRenderer = new ShapeRenderer(); shapeRenderer.setProjectionMatrix(cam.combined); // Call helper methods to initialize instance variables initGameObjects(); initAssets(); } private void initGameObjects() { ball = GameWorld.getBall(); bullet = myWorld.getBullet(); scroller = myWorld.getScroller(); } private void initAssets() { ballAnimation = AssetLoader.ballAnimation; bulletAnimation = AssetLoader.bulletAnimation; } public void render(float runTime) { Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT); batcher.begin(); // Disable transparency // This is good for performance when drawing images that do not require // transparency. batcher.disableBlending(); // The ball needs transparency, so we enable that again. batcher.enableBlending(); batcher.draw(AssetLoader.ballAnimation.getKeyFrame(runTime), ball.getX(), ball.getY(), ball.getWidth(), ball.getHeight()); batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY()); // End SpriteBatch batcher.end(); } } // this is the GameWorld class public class GameWorld { public static Ball ball; private Bullet bullet; private ScrollHandler scroller; public GameWorld() { ball = new Ball(480, 273, 32, 32); bullet = new Bullet(10, 10); scroller = new ScrollHandler(0); } public void update(float delta) { ball.update(delta); bullet.update(delta); scroller.update(delta); } public static Ball getBall() { return ball; } public ScrollHandler getScroller() { return scroller; } public Bullet getBullet() { return bullet; } } Is there any way to make the sprite work?
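    A guess, not a confirmed diagnosis: the two-argument draw overload takes its size from the TextureRegion itself, while the ten-argument overload uses the width and height you pass in, so if bullet.getWidth()/getHeight() on the Bullet game object return 0 the quad collapses and nothing is visible even though the short call still draws. A minimal libGDX sketch with the frame size written out explicitly (the helper name and the hard-coded 45x20 size are illustrative, taken from the Sprite regions above):

      import com.badlogic.gdx.graphics.g2d.SpriteBatch;
      import com.badlogic.gdx.graphics.g2d.TextureRegion;

      // Hypothetical helper: draws one bullet frame with rotation, using an explicit
      // size so the quad cannot collapse if the game object reports a zero size.
      static void drawBullet(SpriteBatch batcher, TextureRegion frame,
                             float x, float y, float rotationDegrees) {
          float w = 45f, h = 20f;       // bullet frame size on the sprite sheet
          batcher.draw(frame, x, y,
                       w / 2f, h / 2f,  // origin, relative to x/y, used as the rotation pivot
                       w, h,            // width and height to draw
                       1f, 1f,          // scale
                       rotationDegrees);
      }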

    Read the article

  • Attaching two objects and changing their world matrices accordingly

    - by A-Type
    I'm having a hard time wrapping my head around the transformations required to bind two objects together in either a two-way or one-way relationship. I will need to implement both types. For the first case, I want to be able to 'couple' two ships together in space. The ships have different mass, of course. Forces applied to either ship will use the combined mass and moment of inertia to calculate and move both ships. The trick is being sure that the point at which they are coupled remains the same and that they don't move at all relative to each other. The second case is similar: I want a ship to be able to enter the atmosphere of a planet and move relative to the planet. The planet will be orbiting the sun, which is fixed at 0,0,0. Essentially, when the ship is sitting still outside of the atmosphere, the planet will move past it on its course, but when the ship is sitting still inside the atmosphere, it moves and rotates with the planet, so that it is always relative to the horizon. Essentially, the vertices which make up the ship are now transformed just like the ones that make up the planet, except that the ship can move itself around relative to the planet. I get the feeling I can implement both of these with the same code. Essentially, I am thinking of giving each object (which I call Fixtures) a list of "slave" Fixtures onto which that Fixture's world matrix is imposed. So, this would be the planet imposing its world on any contained ships. In the case of coupling, I would simply make each ship a slave of the other, somehow. Obviously I can't just multiply the ship's world matrix by the planet's, or each ship's by the other's. What I'd like some help with is what calculations to make in order to get a nice, seamless world relative to the other object. I was thinking maybe I could just multiply the world of the slave by the inverse of the master's, but then when you couple two ships you would lose all that world data. So, perhaps I need an intermediate "world" which is the absolute world, but use a secondary "final world" to actually transform the objects?
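    A sketch of the usual parent/child transform approach, using libGDX's Matrix4 purely for illustration (the asker's engine and math types may differ): each Fixture keeps a local matrix relative to its master, the world matrix is rebuilt as masterWorld * local so a ship inside the atmosphere automatically follows the planet, and attaching preserves the current absolute pose by folding the inverse of the master's world into the local matrix. All names below are hypothetical.

      import com.badlogic.gdx.math.Matrix4;

      class Fixture {
          Matrix4 local = new Matrix4();   // pose relative to the master (or to the world if none)
          Fixture master;                  // e.g. the planet, for a ship inside its atmosphere

          Matrix4 world() {
              if (master == null) return new Matrix4(local);
              // child world = master world * child local, so the child inherits
              // the master's translation and rotation automatically
              return master.world().mul(local);
          }

          void attachTo(Fixture newMaster) {
              // keep the current absolute pose: local = inverse(masterWorld) * currentWorld
              Matrix4 current = world();
              master = newMaster;
              local.set(newMaster.world().inv()).mul(current);
          }
      }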

    Read the article

  • Any interesting thesis topic?

    - by revers
    Hi, I study Computer Science at the Technical University of Lodz (in Poland), with a Computer Game and Simulation Technology specialization. I'm going to defend my BSc thesis next year and I was wondering what topic I could choose, but nothing really interesting comes to mind. Maybe you could help me and suggest some subjects related to programming graphics, games or simulations? (Or maybe something else that is interesting enough.) I would be very grateful for any suggestion!

    Read the article

  • XNA Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when dragging them. I am comparing my application to the standard TouchGestureSample from MSDN. For some reason, in my application the gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and both run in the standard Windows Phone emulator. I am setting my breakpoint immediately after this line of code in a simple Update method: GestureSample gesture = TouchPanel.ReadGesture(); Typical values in my app: Delta = {X:-13.56522 Y:4.166667} Position = {X:184.6956 Y:417.7083} Typical values in the sample app: Delta = {X:7 Y:16} Position = {X:497 Y:244} Has anyone seen this issue? Does anyone have any suggestions? Thank you.

    Read the article

  • Am I missing something in these considerations about a leaderboard's database schema?

    - by misiMe
    I just finished developing a mobile game, and now I want to implement an online leaderboard using MySQL. I'm wondering about the database schema, and I have thought about some possibilities (I haven't gone into syntax detail because my question is just about the logic of it): 1) Name: string; Score: integer. I thought to ask for the name just the first time; if you modify it in the future, it just updates the name associated with your id. 2) Leaderboard(ID, Name, Score), with ID: integer autoincrement as the PrimaryKey. With this kind of idea the db may grow fast, because if you choose a different name for every score, it will add a new entry. 3) Leaderboard(PhoneId, Name, Score), where PhoneId, the unique identifier of the phone, is the PrimaryKey. A con of this choice is that if you want to play on your friend's phone, you can't put a different name on the score. 4) Leaderboard(Name, Score), where Name is the PrimaryKey. With that, if you enter a name that already exists, you will be prompted to choose another one. Do you agree with these considerations? What would you do? Am I missing something?

    Read the article

  • UIView with IrrlichtScene - iOS

    - by user1459024
    I have a UIViewController in a Storyboard and want to draw an Irrlicht scene in this view controller. My code: WWSViewController.h #import <UIKit/UIKit.h> @interface WWSViewController : UIViewController { IBOutlet UILabel *errorLabel; } @end WWSViewController.mm #import "WWSViewController.h" #include "../../ressources/irrlicht/include/irrlicht.h" using namespace irr; using namespace core; using namespace scene; using namespace video; using namespace io; using namespace gui; @interface WWSViewController () @end @implementation WWSViewController -(void)awakeFromNib { errorLabel = [[UILabel alloc] init]; errorLabel.text = @""; IrrlichtDevice *device = createDevice( video::EDT_OGLES1, dimension2d<u32>(640, 480), 16, false, false, false, 0); /* Set the caption of the window to some nice text. Note that there is an 'L' in front of the string. The Irrlicht Engine uses wide character strings when displaying text. */ device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo"); /* Get a pointer to the VideoDriver, the SceneManager and the graphical user interface environment, so that we do not always have to write device->getVideoDriver(), device->getSceneManager(), or device->getGUIEnvironment(). */ IVideoDriver* driver = device->getVideoDriver(); ISceneManager* smgr = device->getSceneManager(); IGUIEnvironment* guienv = device->getGUIEnvironment(); /* We add a hello world label to the window, using the GUI environment. The text is placed at the position (10,10) as top left corner and (260,22) as lower right corner. */ guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!", rect<s32>(10,10,260,22), true); /* To show something interesting, we load a Quake 2 model and display it. We only have to get the Mesh from the Scene Manager with getMesh() and add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We check the return value of getMesh() to become aware of loading problems and other errors. Instead of writing the filename sydney.md2, it would also be possible to load a Maya object file (.obj), a complete Quake3 map (.bsp) or any other supported file format. By the way, that cool Quake 2 model called sydney was modelled by Brian Collins. */ IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2"); if (!mesh) { device->drop(); if (!errorLabel) { errorLabel = [[UILabel alloc] init]; } errorLabel.text = @"Could not load the mesh."; return; } IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh ); /* To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color. */ if (node) { node->setMaterialFlag(EMF_LIGHTING, false); node->setMD2Animation(scene::EMAT_STAND); node->setMaterialTexture( 0, driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp") ); } /* To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is. */ smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0)); /* Ok, now we have set up the scene, lets draw everything: We run the device in a while() loop, until the device does not want to run any more.
This would be when the user closes the window or presses ALT+F4 (or whatever keycode closes a window). */ while(device->run()) { /* Anything can be drawn between a beginScene() and an endScene() call. The beginScene() call clears the screen with a color and the depth buffer, if desired. Then we let the Scene Manager and the GUI Environment draw their content. With the endScene() call everything is presented on the screen. */ driver->beginScene(true, true, SColor(255,100,101,140)); smgr->drawAll(); guienv->drawAll(); driver->endScene(); } /* After we are done with the render loop, we have to delete the Irrlicht Device created before with createDevice(). In the Irrlicht Engine, you have to delete all objects you created with a method or function which starts with 'create'. The object is simply deleted by calling ->drop(). See the documentation at irr::IReferenceCounted::drop() for more information. */ device->drop(); } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); } @end Sadly, the result is just a black view in the simulator. :( I hope someone here can explain how to draw the scene in a UIView. Furthermore, I'm getting this error: Could not load sprite bank because the file does not exist: #DefaultFont How can I fix it?

    Read the article

  • How to achieve a smoother lighting effect

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this? Instead of each block having its own tint, it has a smooth overlay. I'm assuming I need some sort of shader, tell it the lighting values and have it blur them somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a SpriteBatch, which draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing and now pass parameters to set the light values. But I'm still looking for a way to smooth it.
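    One common approach, sketched here in Java rather than the asker's C#/XNA and only as an illustration: instead of one light value per tile, compute a value per tile corner by averaging the (up to) four tiles sharing that corner, hand those values to the four vertices of the tile quad, and let the GPU interpolate across it, which gives a smooth gradient without a separate blur pass. The helper name and array layout are assumptions.

      // light[x][y] is the per-tile light level in [0..1].
      // Returns the value for the corner at grid point (cx, cy); feeding the four
      // corner values to the tile's vertices lets hardware interpolation smooth it.
      static float cornerLight(float[][] light, int cx, int cy) {
          float sum = 0f;
          int count = 0;
          for (int x = cx - 1; x <= cx; x++) {
              for (int y = cy - 1; y <= cy; y++) {
                  if (x >= 0 && y >= 0 && x < light.length && y < light[0].length) {
                      sum += light[x][y];
                      count++;
                  }
              }
          }
          return count == 0 ? 0f : sum / count;
      }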

    Read the article

  • Circle vs Edge collision detection / resolution

    - by topheman
    I made a JavaScript class, Ball.js, that handles physics interactions between balls as well as painting. In v1.0, ball vs. ball collision detection and resolution is handled well. In the next version (v2) I'm trying to add edge-collision handling, and I'm having some problems; maybe you will be able to help me. All of the v2 branch source code is in the GitHub repository: https://github.com/topheman/Ball.js/tree/v2 The v2 demos (where you can see the bug I will be talking about): http://labs.topheman.com/Ball-v2/#help As you will see in the demo, I have two major problems that I'm having a really hard time solving in Ball.js: method resolveEdgeCollision: the bounce angle is inconsistent; method checkEdgeCollision: if the ball's velocity (the distance that it travels each frame) is larger than its diameter, it will eventually pass through an edge without triggering any collision. Any ideas?
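    For the second problem (tunnelling), one standard fix is to sub-step the movement so the ball never advances more than a fraction of its radius between collision checks. A minimal sketch in Java, with a stand-in Ball class and stub methods representing the asker's existing JavaScript ones:

      class Ball { double x, y, vx, vy, radius; }                 // minimal stand-in

      static boolean checkEdgeCollision(Ball b) { return false; } // asker's existing check
      static void resolveEdgeCollision(Ball b) { }                // asker's existing resolution

      static void updateWithSubsteps(Ball ball) {
          double dist = Math.hypot(ball.vx, ball.vy);
          // never move more than half a radius between collision checks
          int steps = Math.max(1, (int) Math.ceil(dist / (ball.radius * 0.5)));
          double sx = ball.vx / steps, sy = ball.vy / steps;
          for (int i = 0; i < steps; i++) {
              ball.x += sx;
              ball.y += sy;
              if (checkEdgeCollision(ball)) {
                  resolveEdgeCollision(ball);
              }
          }
      }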

    Read the article

  • Designing generic render/graphics component in C++?

    - by s73v3r
    I'm trying to learn more about component entity systems, so I decided to write a Tetris clone. I'm using the "style" of component-entity system where the Entity is just a bag of Components, the Components are just data, a Node is a set of Components needed to accomplish something, and a System is a set of methods that operates on a Node. All of my components inherit from a basic IComponent interface. I'm trying to figure out how to design the Render/Graphics/Drawable components. Originally, I was going to use SFML, and everything was going to be good. However, as this is an experimental system, I got the idea of being able to change out the render library at will. I thought that since the rendering would be fairly componentized, this should be doable. However, I'm having problems figuring out how I would design a common interface for the different types of render components. Should I be using C++ template types? It seems that having the RenderComponent somehow return its own mesh/sprite/whatever to the RenderSystem would be the simplest, but would be difficult to generalize. However, letting the RenderComponent just hold on to data about what it would render would make it hard to reuse this component for different renderable objects (background, falling piece, field of already fallen blocks, etc.). I realize this is fairly over-engineered for a regular Tetris clone, but I'm trying to learn about component entity systems and making interchangeable components. It's just that rendering seems to be the hardest to split out for me.
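    A sketch of one way to keep the component data-only while hiding the library behind an interface; it is written in Java for brevity (the same shape works in C++ with an abstract base class), and every name in it is hypothetical:

      // Data-only component, as in the asker's design: no library types leak in.
      class RenderComponent {
          String spriteId;   // which image/mesh to use, resolved by the backend
          float x, y;
      }

      // The only thing a rendering backend must provide.
      interface Renderer {
          void drawSprite(String spriteId, float x, float y);
      }

      // One system per backend; swapping SFML for something else means writing
      // another Renderer implementation, not touching the components.
      class RenderSystem {
          private final Renderer renderer;
          RenderSystem(Renderer renderer) { this.renderer = renderer; }
          void render(Iterable<RenderComponent> components) {
              for (RenderComponent c : components) {
                  renderer.drawSprite(c.spriteId, c.x, c.y);
              }
          }
      }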

    Read the article

  • Design leaderboard ratings for quiz games

    - by PeterK
    Back in March 2011 I started the following post: How to design a leaderboard? Now my quiz game has been out for approximately a year and has sold pretty decently. I am working on updating the game design and am again looking into the leaderboard design to make it better, as I am not happy with it. Currently I rate players on the number of correct answers, which is not good, as it does not consider things like the number of games, difficulty levels, etc. I also have "extended" stats behind the UITableView (leaderboard). A player can play based on three levels of difficulty: hard, medium or easy; difficulty levels can be mixed between players in a game; each game can have one to six players, so there can be single games or duels; and there are between 2 and 30 questions per game. As I am considering integrating a Game Center leaderboard, I need to design a better rating system, so I would like to ask for some ideas on how to do the rating based on the above. I am thinking about how much a point would be worth and what it includes.

    Read the article

  • Can't detect collision properly using Rectangle.Intersects()

    - by Daniel Ribeiro
    I'm using a single sprite sheet image as the main texture for my breakout game. The image is this: My code is a little confusing, since I'm creating two elements from the same Texture using Points to represent the element size and its position on the sheet, a Vector2 to represent its position in the viewport, and a Rectangle that represents the element itself. Texture2D sheet; Point paddleSize = new Point(112, 24); Point paddleSheetPosition = new Point(0, 240); Vector2 paddleViewportPosition; Rectangle paddleRectangle; Point ballSize = new Point(24, 24); Point ballSheetPosition = new Point(160, 240); Vector2 ballViewportPosition; Rectangle ballRectangle; Vector2 ballVelocity; My initialization is a little confusing as well, but it works as expected: paddleViewportPosition = new Vector2((GraphicsDevice.Viewport.Bounds.Width - paddleSize.X) / 2, GraphicsDevice.Viewport.Bounds.Height - (paddleSize.Y * 2)); paddleRectangle = new Rectangle(paddleSheetPosition.X, paddleSheetPosition.Y, paddleSize.X, paddleSize.Y); Random random = new Random(); ballViewportPosition = new Vector2(random.Next(GraphicsDevice.Viewport.Bounds.Width), random.Next(GraphicsDevice.Viewport.Bounds.Top, GraphicsDevice.Viewport.Bounds.Height / 2)); ballRectangle = new Rectangle(ballSheetPosition.X, ballSheetPosition.Y, ballSize.X, ballSize.Y); ballVelocity = new Vector2(3f, 3f); The problem is that I can't detect the collision properly using this code: if(ballRectangle.Intersects(paddleRectangle)) { ballVelocity.Y = -ballVelocity.Y; } What am I doing wrong?
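    A likely cause, offered as a guess: paddleRectangle and ballRectangle are built once from the sprite-sheet coordinates and never updated, so Intersects compares two fixed regions of the sheet rather than the moving objects. Rebuilding the collision rectangles from the viewport positions each frame is the usual fix; a small sketch in Java (the asker's code is XNA/C#, where the same idea works with Microsoft.Xna.Framework.Rectangle):

      import java.awt.Rectangle;

      // Collision rectangles built in viewport space from the current positions;
      // the sheet coordinates are only needed for drawing, not for collision.
      static boolean ballHitsPaddle(float ballX, float ballY, int ballW, int ballH,
                                    float paddleX, float paddleY, int paddleW, int paddleH) {
          Rectangle ball = new Rectangle(Math.round(ballX), Math.round(ballY), ballW, ballH);
          Rectangle paddle = new Rectangle(Math.round(paddleX), Math.round(paddleY), paddleW, paddleH);
          return ball.intersects(paddle);
      }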

    Read the article

  • Debugging Minimum Translation Vector

    - by SyntheCypher
    I implemented the minimum translation vector from codezealot's tutorial on SAT (Separating Axis Theorem), but I'm having an issue I can't quite figure out. Here's the example I have: As you can see in the top and bottom left images, regardless of which side of the green car the red car is penetrating, the MTV for the red car remains a negative number; in the same example with the front of the red car facing the opposite direction, the number is always positive. When the red car is more than halfway through the green car, it should switch polarity. I thought I'd compensated for this in my code, but apparently not; either that or it's a bug I can't find. Here is my function for finding and returning the MTV; any help would be much appreciated: Code
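    A common way to make the MTV's sign consistent, assuming the flip comes from the axis orientation rather than from the overlap math: after SAT finds the axis of smallest overlap, orient the MTV so it always points from the other shape toward the shape being pushed out, flipping it when the dot product with the center-to-center vector says otherwise. A small Java sketch with plain arrays standing in for vectors:

      // axis is the unit axis of smallest overlap found by SAT;
      // centerA/centerB are the centers of the shape being resolved and the other shape.
      static double[] orientMtv(double[] axis, double overlap,
                                double[] centerA, double[] centerB) {
          double dx = centerA[0] - centerB[0];
          double dy = centerA[1] - centerB[1];
          // if the axis points from A toward B, flip it so the MTV pushes A away from B
          double sign = (axis[0] * dx + axis[1] * dy) < 0 ? -1.0 : 1.0;
          return new double[] { axis[0] * overlap * sign, axis[1] * overlap * sign };
      }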

    Read the article

  • 2D Side scroller collision detection

    - by Shanon Simmonds
    I am trying to do collision detection between objects and tiles, but the tiles do not have their own x and y position; they are just rendered at the x and y position given. There is an array of integers holding the ids of the tiles to use (the ids come from an image, where the different colors are assigned different tiles). int x0 = camera.x / 16; int y0 = camera.y / 16; int x1 = (camera.x + screen.width) / 16; int y1 = (camera.y + screen.height) / 16; for(int y = y0; y < y1; y++) { if(y < 0 || y >= height) continue; // height is the height of the level for(int x = x0; x < x1; x++) { if(x < 0 || x >= width) continue; // width is the width of the level getTile(x, y).render(screen, x * 16, y * 16); } } I tried using the level's getTile method to check the tile the object was about to advance to, but it seems to only work in some directions. Any ideas on what I'm doing wrong, and possible fixes, would be greatly appreciated. What's wrong is that it doesn't collide properly in every direction. This is how I tested for a collision in the object's class: if(!level.getTile((x + xa) / 16, (y + ya) / 16).isSolid()) { x += xa; y += ya; } EDIT: xa and ya represent the direction as well as the movement; if xa is negative the object is moving left, if it's positive it is moving right, and the same with ya, except negative for up and positive for down.
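    A sketch of one likely fix, assuming 16x16 tiles and an axis-aligned bounding box for the object, and reusing the asker's level.getTile(...).isSolid() API: testing a single point only covers one corner, so checking every tile the destination box overlaps, and moving each axis separately, makes the test behave the same in every direction.

      // Returns true if the object's box of size (width, height) can occupy (newX, newY).
      static boolean canMoveTo(Level level, int newX, int newY, int width, int height) {
          int x0 = newX / 16, y0 = newY / 16;                              // top-left tile
          int x1 = (newX + width - 1) / 16, y1 = (newY + height - 1) / 16; // bottom-right tile
          for (int ty = y0; ty <= y1; ty++) {
              for (int tx = x0; tx <= x1; tx++) {
                  if (level.getTile(tx, ty).isSolid()) return false;
              }
          }
          return true;
      }

      // Move each axis separately so sliding along walls still works:
      // if (canMoveTo(level, x + xa, y, w, h)) x += xa;
      // if (canMoveTo(level, x, y + ya, w, h)) y += ya;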

    Read the article

  • Bad texture on model with different GPU

    - by Pacha
    I have some kind of distortion on the texture of my 3D model. It works perfectly well on an AMD GPU, but when testing on an integrated Intel HD graphics card it has a weird issue. I don't have a problem with the rest of my entities, as they are not scaled. The models with the problems are scaled, as my engine supports different sizes for the platforms. I am using Ogre3D as the rendering engine and GLSL as the shader language. Vertex shader: #version 120 varying vec2 UV; void main() { UV = gl_MultiTexCoord0; gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment shader: #version 120 varying vec2 UV; uniform sampler2D diffuseMap; void main(void) { gl_FragColor = texture(diffuseMap, UV); } Screenshot (the error is on the right and left sides; the top and bottom parts are rendered perfectly well):

    Read the article

  • Avoiding lag when rendering Texture2D for first time

    - by Emir Lima
    I have found a similar question here, but it is about playing sounds. I am using 2048 x 2048 textures for sprite sheets, and every time I call spriteBatch.Draw using a sheet for the first time in the game's execution it causes a considerable lag. The lag doesn't appear the next times. Has anyone faced this problem before? What can I do to overcome it? Update: I inserted code at the end of the content-load routine that draws EVERY Texture2D loaded into the ContentManager before moving on to the game screen. This works well: no lag occurs when different textures are rendered over time, EXCEPT if IsFullScreen is changed. Apparently, changing this property makes the textures loaded on the GPU go away. Is that correct?

    Read the article

  • How to implement Fog of War with a shader?

    - by Cambrano
    Okay, I'm creating an RTS game and want to implement an Age of Empires-like fog of war (FOW). That means a tile (or pixel) can be: 0% transparent (unexplored); 50% transparent black (explored but not in view range); 100% transparent (explored and in view range). RTS means I'll have many explorers (NPCs, buildings, ...). Okay, so I have a 2D array of bytes, byte[,] explored. The byte value corresponds to the transparency. The question is, how do I pass this array to my shader? I think it is not possible to pass an entire array. So: what technique should I use to let my shader know whether a pixel/tile is visible or not?
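    The usual trick is to upload the explored array as a small texture (one texel per tile) and sample it in the shader, optionally with linear filtering for soft edges; the asker's snippet looks like C#/XNA, where a Texture2D filled via SetData plays the same role. The sketch below shows the same idea in Java with libGDX, and the array layout is an assumption:

      import com.badlogic.gdx.graphics.Pixmap;
      import com.badlogic.gdx.graphics.Texture;

      // explored[x][y] holds the FOW byte (0 = unexplored .. 255 = fully visible).
      static Texture buildFowTexture(byte[][] explored) {
          int w = explored.length, h = explored[0].length;
          Pixmap pixmap = new Pixmap(w, h, Pixmap.Format.RGBA8888);
          for (int x = 0; x < w; x++) {
              for (int y = 0; y < h; y++) {
                  int v = explored[x][y] & 0xFF;
                  // black overlay whose alpha is the inverse of the visibility value
                  pixmap.drawPixel(x, y, 255 - v);   // RGBA8888 packed: r=0, g=0, b=0, a=255-v
              }
          }
          Texture fow = new Texture(pixmap);
          fow.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
          pixmap.dispose();
          return fow;
      }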

    Read the article

  • Geometry Shader: points + triangles

    - by CmasterG
    I have different shaders, and for each shader an instance of the ShaderClass class, which initializes the shaders, renders them, etc. I use most of the shader classes without a geometry shader, but in one shader class I also use a geometry shader. The problem is that when I render one object with the shader class that uses the geometry shader, all other objects are rendered with the same geometry that I create in the geometry shader. Can you help me? Is it possible that I have to use a geometry shader for each object when I use one for one object? I am using DirectX 11 with C++.

    Read the article

  • "Walking" along a rotating surface in LimeJS

    - by Dave Lancea
    I'm trying to have a character walk along a plank (a long, thin rectangle) that works like a seesaw, rotated around a central point by Box2D physics (falling objects). I want the left and right arrow keys to move the player up and down the plank, regardless of its slope, and I don't want to use real physics for the player movement. My idea for achieving this was to compute the coordinate based on the rotation of the plank and the current location "up" or "down" the board. My math is derived from here: http://math.stackexchange.com/questions/143932/calculate-point-given-x-y-angle-and-distance Here's the code I have so far: movement = 0; if(keys[37]){ // Left movement = -3; } if(keys[39]){ // Right movement = 3; } // this.plank is a LimeJS sprite. // getRotation() should return an angle in degrees var rotation = this.plank.getRotation(); // this.current_plank_location is initialized as 0 this.current_plank_location += movement; var x_difference = this.current_plank_location * Math.cos(rotation); var y_difference = this.current_plank_location * Math.sin(rotation); this.setPosition(seesaw.PLANK_CENTER_X + x_difference, seesaw.PLANK_CENTER_Y + y_difference); This code causes the player to swing around in a circle when they are out of the center of the plank, given a slight change in the rotation of the plank. Any ideas on how I can get the player position to follow the board position?
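    One thing to check, offered as an assumption rather than a confirmed fix, and sketched in Java instead of the asker's JavaScript: Math.cos and Math.sin expect radians, while the comment says getRotation() returns degrees; feeding degrees straight in will make small plank rotations swing the player in circles. Converting first gives the expected offsets:

      // Recomputes the player's position from the plank's center, its rotation in
      // degrees, and the signed distance the player has walked along the plank.
      static double[] positionOnPlank(double centerX, double centerY,
                                      double rotationDegrees, double distanceAlongPlank) {
          double rotation = Math.toRadians(rotationDegrees);  // convert before cos/sin
          double x = centerX + distanceAlongPlank * Math.cos(rotation);
          double y = centerY + distanceAlongPlank * Math.sin(rotation);
          return new double[] { x, y };
      }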

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques used to control the FPS that I have seen until now use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when the rendering started, but then I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.
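    For reference, the pattern the asker describes is the standard one: the shaders run once per vertex/fragment for each draw call, but how often a frame is rendered is paced by the application on the CPU. A minimal frame limiter in plain Java, independent of GLUT or any particular binding:

      // Caps the render loop at roughly targetFps by sleeping off the rest of each frame.
      static void runLoop(Runnable renderFrame, int targetFps) throws InterruptedException {
          long frameNanos = 1_000_000_000L / targetFps;
          while (true) {
              long start = System.nanoTime();
              renderFrame.run();                        // issue GL calls, swap buffers, etc.
              long elapsed = System.nanoTime() - start;
              long sleepMillis = (frameNanos - elapsed) / 1_000_000L;
              if (sleepMillis > 0) Thread.sleep(sleepMillis);
          }
      }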

    Read the article

  • How can I load and play an .x model using vertex animation in XNA?

    - by Christian
    From a game I developed years ago, I still have character models that my former 3D engine designer created and that I'd like to reuse in a Windows Phone project now. However, the files are in DirectX format (.x), containing keyframe animation only. No bones. No skeleton. There are a lot of animation keys defined on several frames to animate the characters. I don't quite understand how that works, to be frank. However, I did a lot of research on possible ways of getting the characters animated via XNA on Windows Phone, and all I found are hints that it is generally possible but not supported, possibly by implementing my own content importers and processors. I didn't find anyone who has successfully done something like that yet. How should I go about loading and displaying these models in XNA?

    Read the article

  • Confusion about Rotation matrices from Euler Angles

    - by xEnOn
    I am trying to learn more about Euler angles so as to help myself understand how I can control my camera better in the game. I came across the following formula that converts Euler angles to rotation matrices: In the equation, I can see that the first matrix from the left is the rotation matrix about the x-axis, the second is about the y-axis and the third is about the z-axis. From my understanding of ordinary matrix transformations, the later transformation is always applied on the right-hand side. If I'm right about this, then the above equation should have a rotation order starting with the rotation about the z-axis, then the y-axis, then finally the x-axis. But from the symbols it seems that the rotation order starts with the rotation about the x-axis, then the y-axis, then finally the z-axis. What should the actual order of the rotation be? Also, I am confused about whether the input vector would, in this case, be a row vector on the left or a column vector on the right.
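    A short worked note, assuming the formula shown is the common R = Rx(psi) Ry(theta) Rz(phi) product applied to column vectors: the matrix nearest the vector acts first, so

      v' = Rx(psi) Ry(theta) Rz(phi) v = Rx(psi) ( Ry(theta) ( Rz(phi) v ) )

    i.e. with a column vector on the right, the rotation about z is applied first and the rotation about x last; with a row vector on the left (v' = v M), the reading order reverses, so the rotation about x would act first.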

    Read the article

  • lwjgl custom icon

    - by melchor629
    I have a little problem with the icon in LWJGL: it doesn't work. I googled about it, but I haven't found anything that works for me yet. This is my code for now: PNGDecoder imageDecoder = new PNGDecoder(new FileInputStream("res/images/Icon.png")); ByteBuffer imageData = BufferUtils.createByteBuffer(4 * imageDecoder.getWidth() * imageDecoder.getHeight()); imageDecoder.decode(imageData, imageDecoder.getWidth() * 4, PNGDecoder.Format.RGBA); imageData.flip(); System.err.println(Display.setIcon(new ByteBuffer[]{imageData}) == 0 ? "The icon was not created" : "The icon was created"); The PNG file is 128x128 px with transparency. PNGDecoder is from the matthiasmann TWL utilities (de.matthiasmann.twl.utils). I'm using Mac OS X 10.8.4 with LWJGL 2.9.0. Thanks :)

    Read the article

  • Android: Layouts and views or a single full screen custom view?

    - by futlib
    I'm developing an Android game, and I'm making it so that it can run on low-end devices without a GPU, so I'm using the 2D API. I have so far tried to use Android's mechanisms such as layouts and activities where possible, but I'm beginning to wonder if it's not easier to just create a single full-screen custom view (or one per activity) and do all the work there. Here's an example of how I currently do things: I'm using a layout to display the game's background as an image view and the square game area, which is a custom view, centered in the middle. What would you say? Should I continue to use layouts where possible, or is it more common/reasonable to just use a large custom view? I'm thinking that this would probably also make it easier to port my code to other platforms.

    Read the article

  • Detect if square in grid is within a diamond shape

    - by myrkos
    So I have a game in which basically everything is a square inside a big grid. It's easy to check if a square is inside a box whose center is another square:

      ***  x
      *o*      --> x is not in o's square
      ***

      **x
      *o*      --> x IS in o's square
      ***

    This can be done by simply subtracting the coordinates of o and x, then taking the largest coordinate of that and comparing it with the half side length. Now I want to do the same thing but check if x is in o's diamond, like so:

        *
       **x
      **o**    --> x IS in o's diamond
       ***
        *

    What would be the best way to check if a square is in another square's surrounding diamond-shaped area, given the diamond's half width/height?
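    The square test described above is a Chebyshev-distance check; the diamond version is the same idea with the Manhattan distance. A small sketch, assuming the diamond's half width equals its half height as in the drawing:

      // Square: max(|dx|, |dy|) <= halfSide
      static boolean inSquare(int ox, int oy, int x, int y, int halfSide) {
          return Math.max(Math.abs(x - ox), Math.abs(y - oy)) <= halfSide;
      }

      // Diamond: |dx| + |dy| <= halfWidth (Manhattan distance)
      static boolean inDiamond(int ox, int oy, int x, int y, int halfWidth) {
          return Math.abs(x - ox) + Math.abs(y - oy) <= halfWidth;
      }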

    Read the article
