Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • OpenGL ES 2.0 gluUnProject

    - by secheung
    I've spent more time than I should trying to get my ray-picking program working. I'm fairly confident the line-plane intersection math is solid; I believe the problem lies in converting the mouse/touch position on screen into 3D world space. Here's my code:

        public void passTouchEvents(MotionEvent e) {
            int[] viewport = { 0, 0, viewportWidth, viewportHeight };
            float x = e.getX(), y = viewportHeight - e.getY();
            float[] pos1 = new float[4];
            float[] pos2 = new float[4];
            GLU.gluUnProject(x, y, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos1, 0);
            GLU.gluUnProject(x, y, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos2, 0);
        }

    As a reference point, I tried transforming the coordinates (0, 0, 0) and got an offset. Answers using OpenGL ES 2.0 code would be appreciated. Thanks!
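
    A detail worth checking here: Android's GLU.gluUnProject writes a homogeneous 4-component result and, unlike the classic C GLU, typically leaves the perspective divide to the caller, so pos1/pos2 need a divide by w before use; the pick ray is then built between the near and far points. A minimal C#-style sketch of that post-processing (the math is language-neutral; Vector3 and the Unprojected helper are illustrative, not Android API):

        // Sketch: perspective-divide the homogeneous results, then form the pick ray.
        Vector3 Unprojected(float[] p)
        {
            return new Vector3(p[0] / p[3], p[1] / p[3], p[2] / p[3]);
        }

        Vector3 near = Unprojected(pos1);              // touch point on the near plane
        Vector3 far  = Unprojected(pos2);              // touch point on the far plane
        Vector3 dir  = Vector3.Normalize(far - near);
        // ray(t) = near + t * dir; intersect this with the plane.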

    Read the article

  • Physics Engine [Collision Response, 2-dimensional] experts, help!! My stack is unstable!

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver, as described by Erin Catto (cited in the paper's references as [Catt05]).

    So here's how it currently stands: the simulation handles 2D rotating convex polygons. Detection uses the separating-axis test with a SKIN, meaning the closest points between two polygons are found and a contact is reported if their distance is less than SKIN. Collisions are resolved with the simultaneous impulse-based method, solved with the iterative PGS solver as in Erin Catto's paper. Error correction uses Baumgarte stabilization (you can refer to either paper for this): J V = (beta / dt) * overlap, where J is the Jacobian of the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that had better be < 1, dt the time step taken by the engine, and overlap the true overlap between the bodies (so SKIN is ignored).

    However, it is still less stable than I expected. I tried stacking hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them they hardly stand still! Note that I am not looking for a sleeping scheme, but I would settle for any explicit scheme to handle resting contacts. That said, I would be more than happy with a way of treating them generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have: adding a damping term (proportional to velocity) to the Baumgarte bias. Is this a good idea in general? If not, I don't want to waste time tuning the parameter hoping it magically works. Ideas I have tried: simultaneous position-based error correction as described in section 5.3.2 of the paper; it turned out to be worse than the current scheme.

    The parameters I used: hexagons of side 50 (pixels); gravity 2400 (pixels/s^2); time step 1/60 (s); beta 0.1; restitution 0 to 0.2; coefficient of friction 0.2; 10 PGS iterations; initial separation 10 (pixels); mass 1 (the unit is irrelevant for now, since I modify velocity directly via the impulse method); inertia 1/1000. Thanks in advance! I really appreciate any help from you guys!
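
    For what it's worth, one stabilization tweak that often calms small stacks is adding a penetration "slop" to the Baumgarte term, so the solver stops fighting sub-pixel overlaps on every frame. A minimal C# sketch (the slop value is illustrative, not tuned for the setup above):

        // Sketch: only correct overlap beyond an allowed slop, which removes
        // the constant jitter the raw bias feeds into a resting stack.
        float slop = 0.5f;                              // pixels of overlap to tolerate
        float beta = 0.1f;                              // same beta as above
        float bias = (beta / dt) * Math.Max(0f, overlap - slop);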

    Read the article

  • HTML5 Canvas Tileset Animation

    - by Veyha
    How do I animate an image on an HTML5 canvas? This is the code I have now: http://jsfiddle.net/WnjB6/1/ Here I can add an animation with something like Animation.add('stand', [0, 1, 2, 3, 4, 5]); but how do I play it? My image-drawing function is drawTile(canvasX, canvasY, tile, tileWidth, tileHeight), and Animation['stand'] returns [0, 1, 2, 3, 4, 5]. I need something like Animation.play('stand') that runs the animation from the 'stand' array. I've been trying to do this for about a day, but I'm out of ideas. :( Thanks, and sorry for my bad English.
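
    The usual pattern for a play() like this is to derive the frame index from elapsed time and wrap it with a modulo. A C#-style sketch of the idea, reusing the entry's own drawTile signature (frameDuration, elapsedSeconds and the animations map are assumed names):

        // Sketch: pick the current frame from elapsed time, wrapping with modulo.
        int[] frames = animations["stand"];             // e.g. [0, 1, 2, 3, 4, 5]
        float frameDuration = 0.1f;                     // seconds per frame (assumed)
        int index = (int)(elapsedSeconds / frameDuration) % frames.Length;
        drawTile(canvasX, canvasY, frames[index], tileWidth, tileHeight);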

    Read the article

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures; they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are getting bilinearly filtered, and therefore so is the input data. This results in many unwanted side effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task and only results in "average" rendering. So I was thinking: all my problems seem to come from the default bilinear filtering on this input data. I can't move to GL_NEAREST either, since that would create "blocky" rendering. So I guess the better way to proceed is to take full charge of the interpolation myself. For this to work, I would need the input data at its "natural" resolution (which means 4 samples) and the relative position between the sampled points. Is that possible, and if so, how? [EDIT] Since I started this question, I found this page, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me, though: the dimensions of the texture must be provided as an argument, and there seems to be no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome, since it's not under my responsibility and translates into added complexity for others.
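
    Two notes that may help. First, desktop GLSL 1.30 and later exposes textureSize() to query the dimensions from inside the shader, which avoids the extra uniform (GLSL ES 2.0 lacks it, so there a uniform is unavoidable). Second, the linked technique boils down to the weighting below, shown as a CPU-side C# sketch; Fetch stands in for an unfiltered, GL_NEAREST-style texel read and Lerp for linear interpolation, both assumed helpers:

        // Sketch of manual bilinear filtering: locate the four surrounding
        // texels, then blend with the fractional offsets as weights.
        float u = x * texWidth  - 0.5f;
        float v = y * texHeight - 0.5f;
        int x0 = (int)Math.Floor(u), y0 = (int)Math.Floor(v);
        float fx = u - x0, fy = v - y0;                 // relative position between samples
        var color = Lerp(Lerp(Fetch(x0, y0),     Fetch(x0 + 1, y0),     fx),
                         Lerp(Fetch(x0, y0 + 1), Fetch(x0 + 1, y0 + 1), fx), fy);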

    Read the article

  • How to store an iOS game save file across multiple devices? (Without remote servers)

    - by Omega
    I've been developing an iOS game for iPhone. My game saves its progress as a couple of .plist documents on the device. I have come to realize that when I install the game on my iPhone, the same game is installed on my iPad. And then it struck me: how would I manage save files? I mean, I'd like the player to be able to continue playing from where they left off no matter what device they are using... without using remote servers. What have you done to address this issue?

    Read the article

  • How can I keep the correct alpha when rendering particles?

    - by April
    Recently, I was trying to save the textures of 3D particles so that I can reuse them in 2D rendering, and I ran into a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel, but when I try to get the RGB values back, I get strange results: some areas turn lighter, some even totally white. I mainly focus on the additive and blend modes, that is: ADDITIVE: srcAlpha vs 1; BLEND: srcAlpha vs 1-srcAlpha. I tried a technique called premultiplied alpha. It gives you the right RGB values, which is all you need on screen. As for the alpha value, it works well with BLEND mode, but not with ADDITIVE: as you can see from the factors, BLEND always keeps the value within 1, while ADDITIVE cannot guarantee that. I want a proper alpha, but it comes out too big or too small relative to the RGB. What can I do? Any help would be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial package called "Particle Illusion" where you can create various particles and then save the scene to a texture, choosing to remove the particles' background.
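
    For reference, a minimal sketch of the straight-to-premultiplied conversion under discussion; with premultiplied input the blend factors become (1, 1-srcAlpha) for BLEND and (1, 1) for ADDITIVE, which is exactly why the additive case can accumulate alpha outside [0, 1]:

        // Sketch: multiply the color channels by alpha once, up front.
        // r, g, b, a are floats in [0, 1]; returned as { r, g, b, a }.
        float[] Premultiply(float r, float g, float b, float a)
        {
            return new float[] { r * a, g * a, b * a, a };
        }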

    Read the article

  • Retrieve the coordinates of the *occluding* (closest/drawn) pixels during 3D overlap, using OpenGL?

    - by Big Rich
    Hi, sorry if the question is not worded well; I'm new to both 3D and OpenGL. How could I go about obtaining the 3D coordinates of the occluding object at the point where the occlusion happens (i.e. on the "intersection" of the object in front, closest to the screen)? To offer a [very] rudimentary visual example: if you were to form an index-finger cross, with your right hand closest to your face, I'd like to know the coordinates of the part of your right finger which obscures the other finger (obviously back within the OpenGL context; no jokers ;-) ). If there is a way to find out about both the occluder (hider) and the occluded (hidden) objects in OpenGL, that would be of great use too. Cheers, Rich
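
    One common recipe for the occluder half of this: read back the depth under the pixel and unproject it, which by construction gives the front-most (occluding) surface point; finding the hidden object behind it then needs a second pass or a ray cast. A hedged C# sketch, where ReadDepthPixel and UnProject are assumed wrappers over glReadPixels with GL_DEPTH_COMPONENT and a gluUnProject equivalent:

        // Sketch: the depth buffer already answers "who won the depth test",
        // so unprojecting (x, y, depth) yields the occluder's world position.
        float depth = ReadDepthPixel(x, y);   // wraps glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, ...)
        Vector3 world = UnProject(x, y, depth, viewMatrix, projMatrix, viewport);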

    Read the article

  • Movement on the X and Z axes is combined?

    - by Magicaxis
    This is probably a stupid question, but I'm trying to simply move a 3D object up, down, left, and right (not forward or backward). The Y axis works fine, but when I increment the object's X position, the object moves BOTH right and backwards! When I decrement X, left and forwards!

        setPosition(getPosition().X + 2 /*times deltatime*/, getPosition().Y, getPosition().Z);

    I was astonished that XNA doesn't have its own setPosition function, so I made a parent class for all objects with setPosition and Draw functions. setPosition simply edits a variable mPosition and passes it to the common Draw function:

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[block.Bones.Count];
        block.CopyAbsoluteBoneTransformsTo(transforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in block.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(MathHelper.ToRadians(mOrientation.Y))
                    * Matrix.CreateTranslation(mPosition);
                effect.View = game1.getView();
                effect.Projection = game1.getProjection();
            }
            // Draw the mesh, using the effects set above.
            mesh.Draw();
        }

    I tried to work it out by incrementing and decrementing the Z axis instead, but nothing happens! So changing the X axis moves the object on both the X and Z axes, and changing Z does nothing. Great. How do I separate the X and Z axis movement?
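
    In case the cause is camera orientation (world X not lining up with the screen once the view is rotated), one hedged option is to move along the camera's own axes rather than world X/Z. In XNA's row-vector convention those axes can be read from the view matrix's columns; dx/dy are the per-frame inputs and the names are illustrative:

        // Sketch: derive screen-aligned right/up directions from the view matrix,
        // so "right" on screen is "right" in the world regardless of camera angle.
        Matrix view = game1.getView();
        Vector3 right = new Vector3(view.M11, view.M21, view.M31);
        Vector3 up    = new Vector3(view.M12, view.M22, view.M32);
        mPosition += right * dx + up * dy;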

    Read the article

  • Trouble with collision detection in XNA?

    - by Lewis Wilcock
    I'm trying to loop through a list of enemies (enemyList) and, for any that have intersected the rectangle belonging to the box object (which doesn't move), set their IsDead bool to true. Another part of the code then removes any enemies whose IsDead is true. The problem I'm having is getting access to the variable that holds each enemy's Rectangle (named boundingBox). In a foreach loop it works fine, as the enemy variable is declared by the foreach; however, using the foreach causes issues, as it removes more than one enemy at once (usually at positions 0 and 2, 1 and 3, etc.). I was wondering the best way to refer to the enemy class here, without actually creating new instances of it? Here's the code I currently have:

        if (keyboardState.IsKeyDown(Keys.Q) && oldKeyState.IsKeyUp(Keys.Q))
        {
            enemyList.Add(new enemy(textureList.ElementAt(randText), new Vector2(250, 250), graphics));
        }

        //foreach (enemy enemy in enemyList)
        //{
        for (int i = 0; i < enemyList.Count; i++)
        {
            if (enemy.boundingBox.Intersects(theDefence.boxRectangle))  // <-- the variable I can't access
            {
                enemyList[i].IsDead = true;
                i++;  // note: this extra increment also skips the next enemy
            }
        }
        //}

        for (int j = enemyList.Count - 1; j >= 0; j--)
        {
            if (enemyList[j].IsDead)
                enemyList.RemoveAt(j);
        }

    A complete copy of the code (zipped), if it helps: https://www.dropbox.com/s/ih52k4e21g98j3k/Collision%20tests.rar Update: I managed to find the issue. Changing enemy.boundingBox to enemyList[i].boundingBox fixed it; collision works now. Thanks for any help!
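
    As an aside, once IsDead is set, the whole reverse-loop removal pass can be collapsed into the built-in List<T>.RemoveAll, which avoids the index bookkeeping entirely:

        // Removes every dead enemy in one pass.
        enemyList.RemoveAll(e => e.IsDead);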

    Read the article

  • Touch Event Does Not Work With Pinch Zoom

    - by Siddharth
    I implemented pinch-zoom functionality for my tower defense game. I manage a separate entity to display all the towers; from that entity the game player selects a tower and drags it to the position where he wants to place it. I put the entity on the HUD as well, so that when the user scrolls and zooms the region the towers remain visible the whole time. Basically I created the separate entity only to show and hide towers, so I can manage it easily. My problem is that when I have not scrolled or zoomed, touching and dragging a tower works fine, but once I zoom and scroll the scene, the tower's touch event is no longer called, so the player cannot drag and drop it to its actual position. Could anybody please help me out of this?

    Read the article

  • Change the scale-policy of OpenGL ES in Android?

    - by wanting252
    I'm currently developing a game for Android in OpenGL ES 1.0, using the libgdx library. I target a 720x480 screen size; for example, I design only one art pack for 720x480. What will happen on Android phones with screens smaller or bigger than that, 480x320 for instance? Could you please tell me how to change the scaling policy of OpenGL ES on Android, or in libgdx specifically? Is there anything like Photoshop's "Resample Image" options (Nearest Neighbor, Bilinear, Bicubic, etc.) for libgdx? Edit: I found some tutorials about texture filters in OpenGL and tested Linear and Nearest. Linear is good for scaling but slows the game down, and Nearest is the opposite. What should I do to strike a balance between the two?

    Read the article

  • How to get a Read-Write Reference to Parent GameObject from a script component attached to it?

    - by onguarde
    I have a game object (object) with a script component (myscript) attached. I have a reference to the myscript component obtained through GetComponent, and I want to change the transform of the GameObject the script is attached to:

        myscript.gameObject.transform = (new value);

    The above code gives me the error: Property 'UnityEngine.GameObject.transform' is read only. Is there a way to get a read-write version?
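
    For context, Unity treats the transform property as a read-only reference to the Transform component; what it does allow is writing to that component's own fields. A minimal sketch (the assigned values are illustrative):

        // Sketch: mutate the Transform's properties instead of replacing it.
        myscript.gameObject.transform.position = new Vector3(1f, 0f, 2f);
        myscript.gameObject.transform.rotation = Quaternion.identity;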

    Read the article

  • Help with Meshes and Shading

    - by Brian Diehr
    In a game I'm making in libGDX, I wish to incorporate a ripple effect. I'm confused as to how I can get access to the individual pixels on the screen, or any way to influence them (apart from what I can do with SpriteBatch). I suspect I have to do it through OpenGL, and that it has something to do with applying a mesh? Which brings me to my question: what exactly is a mesh? I've been working on my game for about half a year and am doing great with its other aspects, but I find this more advanced stuff isn't as well documented. Thanks!

    Read the article

  • How does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL, but it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth and a small point list of vertices to act as stars: how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just using floating-point formats to render to a render target and then applying some bloom as a post-process? (Considering the output will be 8 bpp anyway.) Basically: what are the steps for HDR? How does it work? I can't seem to find any good papers or articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.
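
    To make the loop concrete, here is an XNA-flavored C# sketch (several entries on this page use XNA); XNA 4.0's SurfaceFormat.HdrBlendable is one suitable float format, DrawScene, DrawFullscreenQuad and toneMapEffect are assumed helpers, and the Reinhard curve c / (1 + c) stands in for any tone-mapping operator:

        // Sketch: 1) render into a floating-point target so values above 1.0
        // survive, 2) tone map down to the 8-bit backbuffer, 3) optional bloom.
        var hdrTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
                                           SurfaceFormat.HdrBlendable, DepthFormat.Depth24);

        GraphicsDevice.SetRenderTarget(hdrTarget);
        DrawScene();                                  // shaders write unclamped luminance
        GraphicsDevice.SetRenderTarget(null);
        DrawFullscreenQuad(hdrTarget, toneMapEffect); // e.g. mapped = c / (1 + c)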

    Read the article

  • Handling commands or events that wait for an action to be completed afterwards

    - by virulent
    Say you have two events: Action1 and Action2. When you receive Action1, you want to store some arbitrary data to be used the next time Action2 rolls around. Typically Action1 is a command, but it can also be another event; the idea is the same. The way I currently implement this is by storing state and then, when Action2 is called, simply checking whether that specific state is there. This is obviously a bit messy and leads to a lot of redundant code. Here is an example of how I am doing it, in (heavily condensed) pseudocode:

        void onAction1(event) {
            Player = event.getPlayer()
            Player.addState("my_action_to_do")
        }

        void onAction2(event) {
            Player = event.getPlayer()
            if not Player.hasState("my_action_to_do") {
                return
            }
            // Do something
        }

    Doing this for a lot of other actions gets somewhat ugly, and I wanted to know if there is something I can do to improve upon it. I was thinking of something like the following, which wouldn't require passing data around; is this also the wrong direction?

        void onAction1(event) {
            Player = event.getPlayer()
            Player.onAction2(new Runnable() {
                public void run() {
                    // Do something
                }
            })
        }

    Taking it even further, could you not simply do this?

        void onPlayerEnter(event) {  // When they join the server
            Player = event.getPlayer()
            Player.onAction1(new Runnable() {
                public void run() {
                    // Now wait for action 2
                    Player.onAction2(new Runnable() {
                        // Do something
                    })
                }
            }, true)  // TRUE would mean "repeat the event,
                      // don't remove it after it is called"
        }

    Any input would be wonderful.
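
    The runnable-continuation idea in the last snippets maps naturally onto C#'s Action delegate: store the continuation keyed by player when Action1 arrives, and consume it on Action2. A minimal sketch under the entry's assumed Player/Event types:

        // Sketch: one pending continuation per player, consumed exactly once.
        Dictionary<Player, Action> pending = new Dictionary<Player, Action>();

        void OnAction1(Event e)
        {
            pending[e.GetPlayer()] = () => { /* do something on the next Action2 */ };
        }

        void OnAction2(Event e)
        {
            if (pending.TryGetValue(e.GetPlayer(), out Action callback))
            {
                pending.Remove(e.GetPlayer());   // one-shot; keep it to repeat instead
                callback();
            }
        }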

    Read the article

  • Things to do to port game made for iOS in Unity to Android?

    - by 2600th
    I have just made my first game for iOS and submitted it to the App Store. I was thinking of porting my game to Android as well, and I would like to know the things one needs to do or remember when porting a game made for iOS in Unity to Android: how to handle different screen resolutions and pixel densities, what optimizations are required, etc. Any other suggestions and important things you think I should know? EDIT: Also, should I handle builds according to device resolution or by pixel density?

    Read the article

  • How to create a script for moving a 3rd-person controller on an iOS device using JavaScript in Unity3D?

    - by user36563
    I have this code, but I'm not sure about the remaining steps; what should I do after writing the script?

        #pragma strict

        public var horizontalSpeed : float = 1.0;
        public var verticalSpeed : float = 1.0;
        private var h : float = 0.0;
        private var v : float = 0.0;
        private var lastPos : Vector3 = Vector3.zero;

        function Update() {
        #if UNITY_EDITOR
            if ( Input.GetMouseButtonDown(0) ) {
                lastPos = Input.mousePosition;
            }
            else if ( Input.GetMouseButton(0) ) {
                var delta = Input.mousePosition - lastPos;
                h = horizontalSpeed * delta.x;
                transform.Rotate( 0, -h, 0, Space.World );
                v = verticalSpeed * delta.y;
                transform.position += transform.forward * v * Time.deltaTime;
                lastPos = Input.mousePosition;
            }
            else if (Input.touchCount == 1) {
                var touch : Touch = Input.GetTouch(0);
                if (touch.phase == TouchPhase.Moved) {
                    h = horizontalSpeed * touch.deltaPosition.x;
                    transform.Rotate( 0, -h, 0, Space.World );
                    v = verticalSpeed * touch.deltaPosition.y;
                    transform.position += transform.forward * v * Time.deltaTime;
                }
            }
        #endif
        }

    Read the article

  • Can someone explain the (reasons for the) implications of column- vs row-major in multiplication/concatenation?

    - by sebf
    I am trying to learn how to construct view and projection matrices, and I keep running into difficulties in my implementation owing to my confusion about the two conventions for matrices. I know how to multiply a matrix, and I can see that transposing before multiplication would completely change the result, hence the need to multiply in a different order. What I don't understand, though, is what's meant by it being only a "notational convention": from the articles here and here the authors appear to assert that it makes no difference to how the matrix is stored or transferred to the GPU, yet on the second page that matrix is clearly not equivalent to how it would be laid out in memory for row-major; and if I look at a populated matrix in my program I see the translation components occupying the 4th, 8th and 12th elements. Given that "post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices", why, in the following snippet of code:

        Matrix4 r = t3 * t2 * t1;
        Matrix4 r2 = t1.Transpose() * t2.Transpose() * t3.Transpose();

    does r != r2, and why does pos3 != pos for:

        Vector4 pos = wvpM * new Vector4(0f, 15f, 15f, 1);
        Vector4 pos3 = wvpM.Transpose() * new Vector4(0f, 15f, 15f, 1);

    Does the multiplication process change depending on whether the matrices are row- or column-major, or is it just the order (for an equivalent effect)? One thing that isn't helping this become any clearer: when provided to DirectX, my column-major WVP matrix successfully transforms vertices with the HLSL call mul(vector, matrix), which should treat the vector as row-major, so how can the column-major matrix provided by my math library work?
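
    For the record, the transpose identity resolves both observations: transposing reverses multiplication order, so the second product is the transpose of the first rather than being equal to it. In LaTeX:

        r_2 \;=\; t_1^{\top}\, t_2^{\top}\, t_3^{\top} \;=\; (t_3\, t_2\, t_1)^{\top} \;=\; r^{\top} \;\neq\; r \quad \text{(unless } r \text{ is symmetric)}

        M^{\top} v \;=\; (v^{\top} M)^{\top}

    That second line says multiplying a column vector by the transposed matrix equals post-multiplying a row vector by the original matrix, i.e. transposing silently switches pre-multiplication to post-multiplication, which is why pos and pos3 differ in the same way.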

    Read the article

  • Storing large array of tiles, but allowing easy access to data

    - by Cyral
    I've been thinking about this for a while. I have a 2D tile-based platformer in XNA with a large array of tile data, and I've been running into memory problems with large maps. (I will add chunks soon!) Currently, each tile contains an Item along with other properties like how it's rotated, whether it is foreground or background, etc. An Item is static and has properties like the name, tooltip, type of item, how much light it emits, the collision it applies to the player, and so on. Example:

        public class Item
        {
            public static List<Item> Items;
            public Collision blockCollisionType;
            public string nameOfItem;
            public bool someOtherVariable, etc, etc;

            public static Item Air;
            public static Item Stone;
            public static Item Dirt;

            static Item()
            {
                Items = new List<Item>()
                {
                    (Stone = new Item()
                    {
                        nameOfItem = "Stone",
                        blockCollisionType = Collision.Solid,
                    }),
                    (Air = new Item()
                    {
                        nameOfItem = "Air",
                        blockCollisionType = Collision.Passable,
                    }),
                };
            }
        }

    The array of tiles then contains a Tile for each point:

        public class Tile
        {
            public Item item;   // What type it is
            public bool onBackground;
            public int someOtherVariables, etc, etc;
        }

    Now, most people would probably use an enum, or some form of ID, to identify blocks. But my system is really convenient for finding out about an item: I can simply do tiles[x, y].item.nameOfItem to get the name, for example. Then I realized the Item field of each tile weighs over 1000 bytes! Wow! What I'm looking for is a way to use an ID (an int or byte, depending on how many items there are) instead of an Item reference, while still having a method for retrieving data about the type of item a tile contains.
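
    One common fix matching the setup above: keep the single shared Item table and store only a small index per tile, so the lookup stays almost as convenient as the direct reference. A sketch reusing the entry's own names (the ItemId field is an assumption):

        // Sketch: tiles carry a 1-byte index; Item.Items holds the shared data.
        public class Tile
        {
            public byte ItemId;        // index into Item.Items instead of an Item reference
            public bool onBackground;
        }

        // The lookup stays a one-liner:
        string name = Item.Items[tiles[x, y].ItemId].nameOfItem;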

    Read the article

  • How to calculate production when the player is offline

    - by Kaizer
    What is the best way to handle, for example, food growth based on how many food buildings you have? Let's say I have a web-based game where you can build a farm which generates 60 food units per hour, and a player has one farm in his possession. What is the best way to keep producing these units even when the player is offline? Should I do the math when the player gets back online again? If so, how can I do this without having to save his last-online time every 5 seconds just so I can do some math with it when he logs back in (DateTime.Now - lastOnlineTime)? Next: when the player is online, should I refresh his resource count every 5 seconds or so by going to the database and back? That seems wasteful to do for every logged-in player. I hope you understand my question. Kind regards
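
    One widely used approach is exactly the lazy settlement hinted at above: store a single timestamp per player and fold the elapsed production in whenever the resources are next touched (login, page view, or a spend), so nothing runs while the player is offline and nothing is saved every 5 seconds. A C# sketch with assumed Player fields:

        // Sketch: settle production lazily from one stored timestamp.
        void Settle(Player p)
        {
            TimeSpan elapsed = DateTime.UtcNow - p.LastUpdate;
            p.Food += p.FarmCount * 60.0 * elapsed.TotalHours;  // 60 food per farm per hour
            p.LastUpdate = DateTime.UtcNow;
        }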

    Read the article

  • What exactly does an installer do and why might I need one?

    - by Jan
    This is probably the noob question of the day: so I've written this game. Now there's the .exe file that does the work, a folder with my beautiful, beautiful assets, and a bunch of .dll files and other stuff that I probably shouldn't touch. To run the game, I copy the whole lot to the desired computer, double-click the .exe file, and start shooting some dudes. Yay! But what exactly is the difference between that and using an installer? What else does an installer do besides copying files and looking more professional than a .zip file? Is there generally a lot of patching/configuring involved in making a game run on a different computer? I tested my game on all the Windows computers I could get my greedy fingers on and it works great. Thanks for your time.

    Read the article

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on creating a 2D map prototype, and I've come to the rendering part. I have a tilesheet where each tile is 30x30 pixels, with a 1 px border to delimit them. In SFML the usual method of drawing part of a tilesheet is declaring an IntRect with the rectangle's coordinates and then calling setTextureRect() on a sprite. In a small game that would work, but I have well over 45 tiles and I'm adding more every day. I can't declare 45 IntRects, one for every material; the map is not optimized yet, and it would get even worse with all the setTextureRect() calls on top of declaring those 45 IntRects. How could I simplify this task? All I need is a very simple and flexible solution for extracting a region of the tilesheet. Basically I have a Tile class; I create multiple instances of tiles (vectors), and each tile has a position and a material. I parse a map file, setting the materials of the map as I go, and all I need to do then is render. Basically I need to do something like this:

        switch (tile.getMaterial())
        {
            case GRASS:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            case WATER:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            // handle more cases
        }
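
    A sketch of the arithmetic, assuming the 1 px border sits between tiles so the stride is 31 px (shift by one more pixel if the sheet also has a leading border); with this, one helper replaces the 45 hand-written IntRects. Shown in C# with an SFML.Net-style IntRect:

        // Sketch: compute the sub-rectangle for tile 'id' on a sheet laid out
        // with 'columns' tiles per row; 30x30 tiles plus a 1 px separating border.
        IntRect TileRect(int id, int columns)
        {
            int x = (id % columns) * 31;   // 30 px tile + 1 px border
            int y = (id / columns) * 31;
            return new IntRect(x, y, 30, 30);
        }

        // e.g. material_sprite.setTextureRect(TileRect((int)tile.getMaterial(), columns));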

    Read the article

  • How to design 2D collision callback methods?

    - by Ahmed Fakhry
    In a 2D game you have a lot of possible combinations of collisions between objects, such as:

        object A vs object B  =  object B vs object A
        object A vs object C  =  object C vs object A
        object A vs object D  =  object D vs object A

    and so on. Do we need to create callback methods for every single type of collision? And do we need to create the same method twice? Say a bullet hits a wall: now I need a method on the wall for being penetrated, and a method on the bullet for destroying itself. At the same time, a bullet can hit many other objects in the game, which means even more callback methods! Is there a design pattern for this?
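
    The symmetric-pair problem is what double dispatch (the visitor-style pattern often used for collision callbacks) addresses: each object forwards to a typed overload on the other, so A-vs-B and B-vs-A funnel into one handler, and each class only implements the reactions it cares about. A minimal C# sketch with hypothetical Bullet/Wall types:

        // Sketch: the first virtual call recovers the dynamic type of 'this';
        // overload resolution on 'other.CollideWith(this)' recovers the second type.
        abstract class GameObject
        {
            public abstract void Collide(GameObject other);
            public virtual void CollideWith(Bullet b) { }   // default: ignore
            public virtual void CollideWith(Wall w)   { }
        }

        class Bullet : GameObject
        {
            public override void Collide(GameObject other) { other.CollideWith(this); }
            public override void CollideWith(Wall w) { /* destroy this bullet */ }
        }

        class Wall : GameObject
        {
            public override void Collide(GameObject other) { other.CollideWith(this); }
            public override void CollideWith(Bullet b) { /* take penetration damage */ }
        }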

    Read the article

  • Developing Games for Samsung Smart TV

    - by Caner Öncü
    We are planning to develop a game for Samsung Smart TVs. Although those TVs support Flash and HTML5, the other specs fall short of supporting a game engine. For example, using an engine that needs the GPU is not possible with the default Samsung Smart TV set. Also, WebGL is supported with Samsung SDK 4.1, but we don't know whether SDK 4.1 is available for the Smart TV series between 7000 and 9000. We have tried to contact Samsung, but they don't really seem to respond. Is there anyone who has developed a game for Samsung Smart TVs? If so, can you name the game engines that work with those TVs?

    Read the article

  • Simple (and fast) dice physics

    - by Markus von Broady
    I'm programming a throw of 5 dice in ActionScript 3 with AwayPhysics (a Bullet physics port). I had a lot of fun tweaking frictions, masses, etc., and in the end I got the best results with more physics ticks per frame. Currently I use 10 ticks per frame (1/60 s) and it's OK, though I see an improvement with 20 ticks. Even though it's only 5 cubes (dice) in a box (really a floor with 3 walls), I can't simulate 20 ticks per frame and keep the FPS at 60 on a middle-aged PC. That's why I decided to precompute the frames of the animation, finishing in around 1700 ticks over 2 seconds. The Flash player freezes for those 2 seconds, and I'm afraid it will become more like 5 seconds or worse once I simulate multi-threading and compute the frames in the background of other heavy processing and CPU drawing (the dice are only one part of this game). Because I want both players to see the dice roll the same way, I can't compute the physics whenever resources are free and build up a buffer of at least one throw of each type (where the type is the number of dice thrown). I'm afraid players will see a "preparing dice..." message too often and for too long. I think the only solution to this problem is replacing the physics engine with something simpler, or writing my own. Do you have any formulas for cube-cube and cube-wall collision detection, and for calculating how the angular and linear velocities should change after a collision occurs?
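
    One middle ground between blocking for two seconds and simulating live: amortize the bake across frames, stepping the physics a bounded number of ticks per frame into a buffer and starting playback once the buffer is safely ahead. A C#-style sketch of the scheduling idea (the original is ActionScript, and physics, buffer, bakeDone and the helpers are illustrative names):

        // Sketch: bake a bounded amount of physics work each frame, so the
        // player keeps 60 fps while the roll is prepared in the background.
        const int SubStepsPerFrame = 10;       // matches the live simulation
        const int FramesBakedPerUpdate = 12;   // 12 * 10 = 120 ticks of work per real frame

        void OnEnterFrame()
        {
            for (int f = 0; f < FramesBakedPerUpdate && !bakeDone; f++)
            {
                for (int i = 0; i < SubStepsPerFrame; i++)
                    physics.Step(dt);                  // same sub-step as the live sim
                buffer.Add(CaptureDiceTransforms());   // one animation frame per 10 ticks
            }
            PlayNextBufferedFrame();                   // starts once the buffer is ahead
        }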

    Read the article
