Search Results

Search found 33194 results on 1328 pages for 'development approach'.

  • How can I generate a navigation mesh for a tile grid?

    - by Roflha
    I haven't actually started programming for this one yet, but I wanted to see how I would go about doing it anyway. Say I have a grid of tiles, all of the same size, some traversable and some not. How would I go about creating a navigation mesh of polygons from this grid? My idea was to take the non-traversable tiles out and extend lines from their edges to make polygons... that's all I have got so far. Any advice?
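
    One common starting point (a sketch only, with illustrative names): greedily merge runs of traversable tiles into axis-aligned rectangles; each rectangle is then a convex polygon of the mesh, and rectangles that share an edge become the walkable links between polygons.

        import java.util.ArrayList;
        import java.util.List;

        // Greedy rectangle decomposition of a walkable tile grid (illustrative only).
        class NavMeshBuilder {
            static List<int[]> buildRects(boolean[][] walkable) {
                int h = walkable.length, w = walkable[0].length;
                boolean[][] used = new boolean[h][w];
                List<int[]> rects = new ArrayList<>(); // each entry: {x, y, width, height}
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        if (!walkable[y][x] || used[y][x]) continue;
                        int rw = 1; // grow right while tiles stay walkable and unused
                        while (x + rw < w && walkable[y][x + rw] && !used[y][x + rw]) rw++;
                        int rh = 1; // grow down while the whole row below fits
                        growDown:
                        while (y + rh < h) {
                            for (int i = 0; i < rw; i++)
                                if (!walkable[y + rh][x + i] || used[y + rh][x + i]) break growDown;
                            rh++;
                        }
                        for (int dy = 0; dy < rh; dy++)
                            for (int dx = 0; dx < rw; dx++) used[y + dy][x + dx] = true;
                        rects.add(new int[]{x, y, rw, rh});
                    }
                }
                return rects; // rectangles sharing an edge become portals for pathfinding
            }
        }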

  • How do I scroll to follow my sprite in the physical world?

    - by Esteban Quintero
    I am using AndEngine to make a game where a sprite (the player) moves up across the stage, and I want the camera to stay centred on the sprite the entire time. This is my world so far:

        final Rectangle ground = new Rectangle(0, CAMERA_HEIGHT - 2, CAMERA_WIDTH, 2, vertexBufferObjectManager);
        final Rectangle roof = new Rectangle(0, 0, CAMERA_WIDTH, 2, vertexBufferObjectManager);
        final Rectangle left = new Rectangle(0, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager);
        final Rectangle right = new Rectangle(CAMERA_WIDTH - 2, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager);

        final FixtureDef wallFixtureDef = PhysicsFactory.createFixtureDef(0, 0.5f, 0.5f);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, ground, BodyType.StaticBody, wallFixtureDef);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, roof, BodyType.StaticBody, wallFixtureDef);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, left, BodyType.StaticBody, wallFixtureDef);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, right, BodyType.StaticBody, wallFixtureDef);

        /* Create two sprites and add them to the scene. */
        this.mScene.setBackground(autoParallaxBackground);
        this.mScene.attachChild(ground);
        this.mScene.attachChild(roof);
        this.mScene.attachChild(left);
        this.mScene.attachChild(right);
        this.mScene.registerUpdateHandler(this.mPhysicsWorld);

    The problem is that when the sprite reaches the top wall, it crashes. How can I fix this?
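
    If the intent is to follow the player rather than box it in, one approach (a sketch; it assumes AndEngine's chase-entity camera and a world taller than one screen, with WORLD_HEIGHT and playerSprite as illustrative names) is to drop the roof body and let the camera track the sprite:

        // Sketch with hypothetical sizes: the scene is several screens tall.
        final BoundCamera camera = new BoundCamera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
        camera.setBounds(0, 0, CAMERA_WIDTH, WORLD_HEIGHT); // check the argument order for your AndEngine branch
        camera.setBoundsEnabled(true);
        camera.setChaseEntity(playerSprite); // recentres on the sprite every update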

  • PyGame QIX clone, filling areas

    - by astropanic
    I'm playing around with PyGame. Now I'm trying to implement a QIX clone. I have my game loop, and I can move the player (cursor) on the screen. In QIX, the movement of the player leaves a trace (tail) on the screen, creating a polyline. If the polyline, together with the screen boundaries, creates a polygon, the area is filled. How can I accomplish this behaviour? How should I store the tail in memory? How do I detect when it builds a closed shape that should be filled? I don't need an exact working solution; some pointers or algorithm names would be cool.
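
    One classic scheme (a sketch only, grid-based rather than polygon-based): store the tail as the ordered list of points where the player changed direction; when the tail reconnects with the boundary, flood-fill from the QIX's position and claim every cell the fill could not reach.

        # Illustrative only: `blocked` holds boundary + tail cells as (x, y) tuples.
        from collections import deque

        def flood_fill(blocked, start, width, height):
            # Return all cells reachable from `start` without crossing `blocked`.
            seen, queue = {start}, deque([start])
            while queue:
                x, y = queue.popleft()
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                            and nxt not in blocked and nxt not in seen):
                        seen.add(nxt)
                        queue.append(nxt)
            return seen

        # After a tail closes: reached = flood_fill(blocked, qix_pos, w, h)
        # Every cell not in `reached` and not in `blocked` is newly claimed area.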

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture of what I have: As you can see above, the image has lighting, but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left-hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards, in the wrong direction, I think. But the problem is a little deeper: I'm just plotting the shadow onto the screen, which I know is wrong; I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct, so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    VERTEX:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = (LightCoordinate.xy * 0.5) + 0.5;

            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    FRAGMENT:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting

            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); // max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = gl_LightSource[0].diffuse * Lambert;

            vec4 LightingColor = DiffuseElement + AmbientElement;
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D(ShadowMap, ShadowCoordinate.st).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if (LightCoordinate.w > 0.0)
            {
                ShadowFactor = DistanceFromLight < (ShadowCoordinate.z + DepthBias) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor; // Yes, I know this is wrong, but that line produces the wrong effect
            gl_FragColor = LightingColor * texture2D(ShadowMap, ShadowCoordinate.st);
        }

    I wanted to make sure the coordinates were correct for the shadow map, which is why you see it applied to the image as it is below. But the depth for each point seems to be wrong; the shadows SHOULD be the opposite way around (look at how the image is: the shaded areas from normal lighting face the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct; nothing else going in is unusual. When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct command at the end of the GLSL: that's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?
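
    For reference, a conventional form of the depth comparison (a sketch, not a guaranteed fix: it assumes the shadow map stores depth in [0,1] and that ShadowCoordinate.z has been mapped into that range the same way .xy were) subtracts the bias from the receiver's depth instead of adding it:

        float StoredDepth = texture2D(ShadowMap, ShadowCoordinate.st).z;
        float ShadowFactor = (ShadowCoordinate.z - DepthBias) > StoredDepth ? 0.5 : 1.0;
        gl_FragColor = vec4(LightingColor.rgb * ShadowFactor, LightingColor.a);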

  • GLSL: How do I cast a float into an int?

    - by dugla
    In a GLSL fragment shader I am trying to cast a float into an int. The compiler has other ideas. It complains thusly:

        ERROR: 0:60: '=' : cannot convert from 'mediump float' to 'highp int'

    I am trying to do this:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = indexf;

    I (vainly) tried to raise the precision of the int above the float to appease the GL gods, but no joy. Could someone please school me here? Thanks, Doug
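
    GLSL has no implicit float-to-int conversion; conversions use constructor syntax. A minimal sketch of the same two lines:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = int(indexf); // explicit constructor-style conversion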

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script with two public fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to instantiate the Blender prefab, set the terrain as its parent, and then scale the prefab instance up. I first did it like this, in the Start() method:

        void Start () {
            cursorPositionOnTerrain = new RaycastHit();

            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300,300,300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100,100,100);
        }

    At first I thought it didn't work. Later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but immediately afterwards the prefab instance snapped back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that it now stays scaled the whole time:

        void Update () {
            aStarCellHighlight.transform.localScale = new Vector3(100,100,100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale applied, and it takes about 5-10 seconds for the scale to happen. During this time everything else works fine (input, logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (two-part) question is: why doesn't the scale stick when I apply it in the Start() method, so that I have to keep re-applying it in Update()? And why does it take so long for the scale to apply/show up?
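
    If something on the imported prefab rewrites the transform during its first frames (an animation baked into the .blend import is a common culprit, though that is only an assumption here), one workaround worth testing is deferring the scale by a frame instead of re-applying it forever:

        // Hypothetical sketch: apply the scale once, after the first frame.
        IEnumerator ApplyScaleNextFrame() {
            yield return null; // let the instance finish its first update
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }
        // in Start(), after Instantiate: StartCoroutine(ApplyScaleNextFrame());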

  • Why is concept art not signed by the author?

    - by Gerald
    I am a starting concept artist who would like to enter the gaming industry. I noticed that some AAA titles show their concept art with no artist's signature (only a reference to the game, such as for Star Wars: The Old Republic: "2013 ALL RIGHTS RESERVED BioWare, LucasArts"). I asked myself: what possible harm could my autograph cause on public concept art if I am not a well-known concept artist such as Adam Adamowicz (who did concepts for Skyrim)? Why would a prospective boss tell me not to leave my "fingerprint" on the picture, despite the fact that I am a very talented artist?

  • Creating natural environments that can run on lower end computers in Unity3D/C#

    - by Timothy Williams
    So, I'm starting work on a project soon that will require me to create realistic environments that can preferably run on PCs other than high-end ones. The goal is to get as real an environment as possible while still being easy(ish) to run. The only problem is that I've NEVER done anything with 3D environments: making trees sway, grass move, lighting, etc. Can anyone give me any help? Perhaps describe how it's done? Link me to articles? I'm just looking to be pointed in the right direction, not for you to write the code for me. Any help at all would be greatly appreciated. I'm using Unity3D and C# as my language. Thanks, Tim.

  • Circular Bullet Spread not Even

    - by SoulBeaver
    I'm creating a bullet shooter much in the style of Touhou. Right now I want a very simple circular shot fired from the enemy. See this picture: As you can see, the spacing is very uneven, which isn't very good if you want to survive. The code I'm using is this:

        private function shoot() : void
        {
            const BULLETS_PER_WAVE : int = 72;
            var interval : Number = BULLETS_PER_WAVE / 360;

            for (var i : int = 0; i < BULLETS_PER_WAVE; ++i)
            {
                var xSpeed : Number = GameConstants.BULLET_NORMAL_SPEED_X * Math.sin(i * interval);
                var ySpeed : Number = GameConstants.BULLET_NORMAL_SPEED_Y * Math.cos(i * interval);
                BulletFactory.createNormalBullet(bulletColor_, alice_.center, xSpeed, ySpeed);
            }
            canShoot_ = false;
            cooldownTimer_.start();
        }

    I imagine my mistake is in the sin/cos functions, but I'm not entirely sure what's wrong.
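
    For what it's worth, Math.sin and Math.cos take radians, while BULLETS_PER_WAVE / 360 produces a degree-like step (and inverts the intended 360 / BULLETS_PER_WAVE at that). A sketch of an even spread, reusing the names above; a single shared speed constant also keeps the ring circular rather than elliptical:

        const BULLETS_PER_WAVE : int = 72;
        var step : Number = (2 * Math.PI) / BULLETS_PER_WAVE; // radians between bullets

        for (var i : int = 0; i < BULLETS_PER_WAVE; ++i)
        {
            var angle : Number = i * step;
            var xSpeed : Number = GameConstants.BULLET_NORMAL_SPEED_X * Math.cos(angle);
            var ySpeed : Number = GameConstants.BULLET_NORMAL_SPEED_Y * Math.sin(angle);
            BulletFactory.createNormalBullet(bulletColor_, alice_.center, xSpeed, ySpeed);
        }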

  • Render on other render targets starting from one already rendered on

    - by JTulip
    I have to perform a two-pass convolution on a texture that is actually the color attachment of another render target, and store the result in the color attachment of ANOTHER render target. This must be done multiple times, always using the same texture as the starting point. What I do now is (a bit abstracted, but everything abstracted away is guaranteed to work on its own):

        renderOnRT(firstTarget); // This works.

        for each other RT currRT {
            glBindFramebuffer(GL_FRAMEBUFFER, currRT.frameBufferID);

            programX.use();
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, firstTarget.colorAttachmentID);
            programX.setUniform1i("colourTexture", 0);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, firstTarget.depthAttachmentID);
            programX.setUniform1i("depthTexture", 1);
            glBindBuffer(GL_ARRAY_BUFFER, quadBuffID); // quadBuffID is a VBO for a screen-aligned quad. It is fine.
            programX.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
            glDrawArrays(GL_QUADS, 0, 4);

            programY.use();
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, currRT.colorAttachmentID); // The second pass reads the first pass's result
            programY.setUniform1i("colourTexture", 0);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, currRT.depthAttachmentID);
            programY.setUniform1i("depthTexture", 1);
            glBindBuffer(GL_ARRAY_BUFFER, quadBuffID);
            programY.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    The problem is that I end up with black textures instead of the wanted result. The GLSL programs (programX, programY) work fine, already tested on single targets. Is there something stupid I am missing? Even a hint is much appreciated, thanks!
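
    One thing to watch: in the second pass the loop samples currRT's color attachment while currRT is still the bound draw framebuffer, which is a rendering feedback loop with undefined results in OpenGL. The usual workaround is to ping-pong through an intermediate target; a sketch with hypothetical helper names (scratch, drawFullscreenQuad):

        // Illustrative only: write pass 1 into a scratch FBO, then read the
        // scratch texture while writing pass 2 into the real destination.
        glBindFramebuffer(GL_FRAMEBUFFER, scratch.frameBufferID);
        drawFullscreenQuad(programX, firstTarget.colorAttachmentID, firstTarget.depthAttachmentID);

        glBindFramebuffer(GL_FRAMEBUFFER, currRT.frameBufferID);
        drawFullscreenQuad(programY, scratch.colorAttachmentID, scratch.depthAttachmentID);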

  • Tile-based maps in AS3

    - by Ashley
    I want to make a tile-based platformer in AS3. I want my game to read an external map file (in XML or JSON or something similar) to draw a tile-based map. I've seen loads of tutorials for this in AS2 and other languages, and the few I've found in AS3 are either incomplete or filled with extra unnecessary features. I just want to be able to draw a basic map from sprites in Flash. Any links or information to point me in the right direction would be appreciated.
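
    For the loading side, a minimal AS3 sketch (the map format, TileSprite class, and TILE_SIZE constant are made up for illustration) reads an XML map with E4X and positions one sprite per tile:

        var loader:URLLoader = new URLLoader();
        loader.addEventListener(Event.COMPLETE, onMapLoaded);
        loader.load(new URLRequest("map.xml"));

        function onMapLoaded(e:Event):void {
            var map:XML = new XML(loader.data);
            for each (var t:XML in map.tile) {                 // e.g. <tile id="3" col="4" row="7"/>
                var tile:TileSprite = new TileSprite(uint(t.@id)); // hypothetical tile sprite class
                tile.x = int(t.@col) * TILE_SIZE;
                tile.y = int(t.@row) * TILE_SIZE;
                addChild(tile);
            }
        }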

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is currently being developed for Android. I use OGG files: 96kbps VBR, 44.1kHz, 2 channels (that means stereo, right?). I read the other Stack Exchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even at 80kbps my effects sound OK. But I have tested on only a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions: For mobile phones and tablets generally, what parameters are recommended? Won't my 80kbps sounds be bad on a newer device (such as a modern tablet)? I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?); is there any noticeable difference at all for mobile phones / tablets, in terms of the player experience? Is it worth it at all? I assume that stereo sounds take up much more memory (once they're decoded to PCM), despite the fact that the compressed OGG size is practically the same. Reacting to Roy T.'s great comment: Actually, I couldn't measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting: the new WAV file size is half of what it was before, while the OGG file size is practically the same as before. The sound effects and game music were recorded by a friend who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated via his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them the same (= moving to mono), or might there be some differences unnoticeable to the human ear?
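
    For sizing intuition (simple arithmetic, not Android-specific), decoded PCM size is:

        bytes = sample_rate * (bit_depth / 8) * channels * seconds

    So a 3-second effect at 44.1kHz/16-bit is about 265kB decoded in mono and about 529kB in stereo; the stereo cost is paid in RAM even though the compressed OGG on disk barely changes.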

  • 2D non-tile based map editor

    - by user5468
    I am currently developing a relatively simple 2D, top-down adventure game for the iPhone and was wondering what would be the easiest way to create the maps for my game. I figured I would need some kind of visual editor that would give me immediate feedback and allow me to place all objects in the world exactly where I want them. I could then load the saved representation of the world I create in the editor into my game. So, I am looking for a simple map editor that allows me to do this. All the objects in my game are simply textured rectangles built up from two triangles. All I need to be able to do is position different rectangles/objects in the map and give them a texture. I am using texture atlases, so it would be useful to be able to assign portions of textures to the objects. I then need to be able to extract all the objects from the saved representation of my maps, together with the name/identifier of the texture atlas they use and the area of the texture atlas. I have looked at some tile-based map editors like Tiled and Ogmo, but they don't seem to be able to do what I want. Any suggestions? EDIT: a more concrete example: something like the GameMaker level editor, but with added export functionality in a handy format.

  • Playing a Song causing WP7 to crash on phone, but not on emulator

    - by Michael Zehnich
    Hi there, I am trying to add a song to a game on Windows Phone 7 via XNA 4.0; it should begin playing and continually loop. On the emulator this works fine, but when deployed to a phone it simply shows a black screen before going back to the home screen. Here is the rogue code in question; commenting it out makes the app run fine on the phone:

        // in the constructor fields
        private Song song;

        // in the LoadContent() method
        song = Content.Load<Song>("song");

        // in the Update() method
        if (MediaPlayer.GameHasControl && MediaPlayer.State != MediaState.Playing)
        {
            MediaPlayer.Play(song);
        }

    The song file itself is 2:53 long, a 2.28 MB .wma file at a 106 kbps bitrate. Again, this works perfectly on the emulator but does not run at all on the phone. Thanks for any help you can provide!

  • Entity Component System, weapon

    - by Heorhiy
    I'm new to game programming and currently trying to understand the Entity Component System design by implementing a simple 2D game. (By ECS I mean the design described here, for example.) In my game I have different kinds of weapons: automatic, gun, grenade, etc. Each type of weapon has its own area of effect (a gun shoots along a straight line; a grenade explodes and covers some spherical area), damage impact, visual effect, ammo count, and delay between shots. So I don't completely understand how to implement weapons. Should a weapon be an entity, or should it be a component? And how should the player pick up a weapon, switch between different types of weapons, and so on?
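
    One common answer in ECS designs (a sketch only; all names are illustrative): keep the weapon as pure data in a component and let systems interpret it. Switching weapons is then just changing which weapon data the player's entity currently points at.

        // Illustrative data-only weapon component for an ECS.
        class WeaponComponent {
            enum Area { RAY, SPHERE }  // gun fires along a ray, grenade covers a sphere

            String name;               // "gun", "grenade", ...
            Area   area;               // shape of the affected area
            float  radius;             // used when area == SPHERE
            float  damage;             // damage impact per hit
            int    ammo;               // shots remaining
            float  shotDelay;          // seconds between shots
            float  cooldown;           // time left until the next shot is allowed
            String effectId;           // visual effect to spawn on fire
        }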

  • How to make this game loop deterministic

    - by Lanaru
    I am using the following game loop for my Pac-Man clone:

        long prevTime = System.currentTimeMillis();
        while (running) {
            long curTime = System.currentTimeMillis();
            float frameTime = (curTime - prevTime) / 1000f;
            prevTime = curTime;

            while (frameTime > 0.0f) {
                final float deltaTime = Math.min(frameTime, TIME_STEP);
                update(deltaTime);
                frameTime -= deltaTime;
            }
            repaint();
        }

    The thing is, I don't always get the same ghost movement every time I run the game (their logic is deterministic), so it must be the game loop. I imagine it's due to the final float deltaTime = Math.min(frameTime, TIME_STEP); line. What's the best way of modifying this to perform the exact same way every time I run it? Also, are there any further improvements I can make?
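
    For reference, the usual fix is the classic fixed-timestep accumulator (see Glenn Fiedler's "Fix Your Timestep"): always feed update() the same constant delta and carry any remainder into the next frame. A sketch reusing the names above:

        double accumulator = 0.0;
        long prev = System.nanoTime();
        while (running) {
            long now = System.nanoTime();
            accumulator += (now - prev) / 1e9;
            prev = now;

            while (accumulator >= TIME_STEP) {
                update(TIME_STEP);      // identical delta every call -> deterministic
                accumulator -= TIME_STEP;
            }
            repaint();
        }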

  • UnrealScript error: Importing defaults for actor: Changing Role in defaultproperties illegal, - what is it importing?

    - by user3079666
    I added the line var float Mass; to Actor and commented it out of the classes that inherit from Actor and declare it, and fixed all the resulting issues, but I now get the error message: Error, Importing defaults for Actor: Changing Role in defaultproperties is illegal (was RemoteRole intended?) The thing is, I did not change anything related to Role or to defaultproperties. Also, since it says "Importing", I'm guessing it's some ini file... any clues?

  • Bloom shader makes it impossible to render black?

    - by Mathias Lykkegaard Lorenzen
    I am playing around with the bloom shader from the XNA sample page to do some glow shading. I am rendering primitive vector-ish squares of line lists/line strips on a background. However, I am facing a few problems. With a black background and white squares, I can actually see the squares; with a white background and black squares, I can't see them at all. Why is this happening, and is there any way for me to fix it? Can I modify my bloom shader to also "glow" dark elements, if that's what is causing it?

  • Advice for programming a lobby for a network multiplayer game?

    - by Milo
    I'm working on learning network programming. I'm working on a simple card game. The basic idea is: players enter the lobby, players see tables, players sit at an empty seat. Once they sit, they do not need any information from the lobby; they see the card table and the data about the other players and so forth. I've programmed the server portion for the game itself. The clients connect to my server object, and the server then receives and sends messages; quite simple. The tricky concepts for me are: What's a good way to run many tables at the same time? What's a good way to keep the lobby consistently updated for each person in the lobby (e.g. MSG_TABLE_FILLED, 22)? Ideally I'd like to have one server exe for all of this and to deal with multithreading as little as possible. I'm going to use the ENet library. I was thinking that each time a game session starts, I push a new Game and map the client IPs to that table, then just route messages from those clients to that Game. Since ENet supports channels, I was thinking of using two channels per table: one for the game messages and one for in-game chat. Would something like this work? Does anyone have any advice / design ideas for a game with a lobby and many tables? Is there a usual way this is done that I'm overlooking? Any conceptual ideas or even C/C++ code examples would be very helpful. Thanks
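
    On the routing question, a minimal single-threaded sketch in C++ (Game and Lobby stand in for the asker's classes; everything else is real ENet API): keep one Game per table plus a peer-to-table index, and dispatch each packet from the main service loop.

        #include <enet/enet.h>
        #include <map>

        std::map<int, Game*> tables;       // tableId -> running game
        std::map<ENetPeer*, int> seatOf;   // peer -> tableId; not present = still in lobby

        void pumpNetwork(ENetHost* server, Lobby& lobby) {
            ENetEvent event;
            while (enet_host_service(server, &event, 10) > 0) {
                if (event.type == ENET_EVENT_TYPE_RECEIVE) {
                    auto it = seatOf.find(event.peer);
                    if (it == seatOf.end())
                        lobby.handle(event.peer, event.packet);               // lobby traffic
                    else
                        tables[it->second]->handle(event.peer, event.packet); // table traffic
                    enet_packet_destroy(event.packet);
                }
            }
        }

    Since ENet's service loop is single-threaded by design, this avoids threads entirely as long as each Game's per-packet work stays small.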

  • ScreenManagement better practices?! TextBox not focusing

    - by xykudyax
    I saw a question here about using DataTemplates with WPF for screen management; I was curious and gave it a try. I think the idea is amazing and very clean, though I'm new to WPF and have read many times that almost everything should be done in XAML, with very little code-behind. My question revolves around the DataTemplate idea: WHERE should the code that triggers the transitions live? Where do I define which commands are available on which screens? For example:

        [ScreenA] Commands: Pressing B - goes to state B. Pressing ESC - exits.
        [ScreenB] Commands: Pressing A - goes to state A. Pressing SPACE - exits.

    Where do I define the key event handlers, and where do I call the next screen? I'm doing this as a hobby for learning, and "if you are learning, better learn it right". :) Thank you for your time. Yes, the Q/A I was talking about is: What's a good way to handle game screen management in WPF? What I've done so far was to create a Screen class (derived from UserControl) and create some virtual methods: one for initializing things (like focusing a given component by default); another for input handling (I handle it with a switch statement, listening to the PreviewKeyDown event of the parent container, MainWindow; I'm not able to do it another way. Help?!); and finally one that removes the key event handler when the screen is terminated (Parent.PreviewKeyDown -= OnKeyDown;). Am I doing okay? I also face a problem: when I add a new screen (UserControl) containing a TextBox, I'm not able to give it autofocus. The caret is there but it is not blinking, and I have to hit TAB before being able to input anything at all.
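
    On the autofocus problem, one approach that often works (a sketch; WPF distinguishes logical focus from keyboard focus, and the caret only blinks with the latter) is to request keyboard focus once the control has loaded:

        // Hypothetical: inside the screen's constructor or initialize override.
        // Requires using System.Windows.Input;
        myTextBox.Loaded += (s, e) =>
        {
            myTextBox.Focus();           // logical focus
            Keyboard.Focus(myTextBox);   // keyboard focus (blinking caret)
        };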

  • LWJGL Text Rendering

    - by Trixmix
    Currently in my project I am using LWJGL and the Slick2D library to render text onto the screen. Here is a more specific example:

        Font f = new Font("Times New Roman", Font.BOLD, 18);
        font = new UnicodeFont(f);
        font.getEffects().add(new ColorEffect(Color.white));
        font.addAsciiGlyphs();
        try {
            font.loadGlyphs();
        } catch (SlickException e) {
            e.printStackTrace();
        }

    Then I use font.drawString to write onto the screen. This is a quick and easy way, but it has a lot of disadvantages. For example, font.loadGlyphs takes a very long time, 1-3 seconds, so when I want to change a color or font type I have to wait 1-3 seconds, which means I cannot do it while rendering (i.e., I can't have different-colored text on the same screen). My question is: what is a better way of drawing multicolored text onto the screen? I use Slick2D only for the text rendering, so maybe I could get rid of the library entirely and draw text some other way... If you have an answer, please leave a quick, short example. Thanks!
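
    One thing that may remove the reload entirely (a sketch; Slick2D's Font interface has a drawString overload that takes a Color): load the glyphs once in white and tint per draw call, instead of rebuilding the font per color:

        // Glyphs loaded once in white; the color parameter tints at draw time.
        font.drawString(20, 20, "Player 1", Color.red);
        font.drawString(20, 40, "Player 2", Color.blue);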

  • Physics/Graphics Components

    - by Brett Powell
    I have spent the last 48 hours reading up on object component systems, and feel I am ready to start implementing one. I have the base Object and Component classes created, but now that I need to start creating the actual components I am a bit confused. When I think of them in terms of a HealthComponent, or something else that would basically just be a property, it makes perfect sense. When it is something more general, like a Physics or Graphics component, I get confused. My Object class looks like this so far (if you notice any changes I should make, please let me know; I'm still new to this):

        typedef unsigned int ID;

        class GameObject {
        public:
            GameObject(ID id, Ogre::String name = "");
            ~GameObject();

            ID &getID();
            Ogre::String &getName();
            virtual void update() = 0;

            // Component functions
            void addComponent(Component *component);
            void removeComponent(Ogre::String familyName);

            template<typename T>
            T* getComponent(Ogre::String familyName) {
                return dynamic_cast<T*>(m_components[familyName]);
            }

        protected:
            // Properties
            ID m_ID;
            Ogre::String m_Name;
            float m_flVelocity;
            Ogre::Vector3 m_vecPosition;

            // Components
            std::map<std::string, Component*> m_components;
            std::map<std::string, Component*>::iterator m_componentItr;
        };

    Now the problem I am running into is: what would the general population put into components such as Physics/Graphics? For Ogre (my rendering engine), the visible objects will consist of Ogre::SceneNodes (possibly multiple) to attach them to the scene, Ogre::Entities (possibly multiple) to show the visible meshes, and so on. Would it be best to just add multiple GraphicComponents to the object and let each GraphicComponent handle one SceneNode/Entity, or is the idea to have one of each component needed? For physics I am even more confused. I suppose creating a RigidBody and keeping track of mass/inertia/etc. would make sense. But I am having trouble thinking of how to actually put specifics into a component. Once I get a couple of these "required" components done, I think it will make a lot more sense. As of right now, though, I am still a bit stumped.
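
    For what it's worth, one conventional split (a sketch under the same naming as the code above; Component is the asker's base class, and the Ogre calls are the standard scene-graph API): a single GraphicsComponent owns however many scene nodes and entities the object needs, so the node count stays an internal detail.

        // Illustrative only: one component wraps all Ogre-facing state.
        class GraphicsComponent : public Component {
        public:
            explicit GraphicsComponent(Ogre::SceneManager *sceneMgr)
                : m_sceneMgr(sceneMgr) {}

            void addMesh(const Ogre::String &meshName) {
                Ogre::Entity *entity = m_sceneMgr->createEntity(meshName);
                Ogre::SceneNode *node =
                    m_sceneMgr->getRootSceneNode()->createChildSceneNode();
                node->attachObject(entity);
                m_nodes.push_back(node);
                m_entities.push_back(entity);
            }

            void setPosition(const Ogre::Vector3 &pos) {
                for (Ogre::SceneNode *node : m_nodes)
                    node->setPosition(pos); // keep every node in sync with the owner
            }

        private:
            Ogre::SceneManager *m_sceneMgr;
            std::vector<Ogre::SceneNode*> m_nodes;
            std::vector<Ogre::Entity*> m_entities;
        };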

  • Scripts won't affect clones - Unity3d

    - by user3666251
    I made a script which swaps two game objects on click. But the script won't work, because the objects are actually clones of the original prefab. This is the script (UnityScript):

        #pragma strict

        var object1 : GameObject;
        var object2 : GameObject;

        function OnMouseDown () {
            Instantiate(object2, object1.transform.position, object1.transform.rotation);
            Destroy(object1);
        }

    I use this script to create the other game objects (the clones) [C#]:

        using UnityEngine;
        using System.Collections;

        public class Spawner : MonoBehaviour {
            public GameObject[] obj;
            public float spawnMin = 1f;
            public float spawnMax = 2f;

            // Use this for initialization
            void Start () {
                Spawn ();
            }

            void Spawn() {
                Instantiate(obj[Random.Range(0, obj.GetLength(0))], transform.position, Quaternion.identity);
                Invoke ("Spawn", Random.Range (spawnMin, spawnMax));
            }
        }

    The objects get renamed to NAME (Clone). What I want to do is make the script affect the clones too, so they swap when I click on them.
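
    One likely culprit (a sketch, keeping the asker's UnityScript; swapPrefab is an illustrative name): object1 is a reference assigned to one object in the scene, which a freshly instantiated clone does not hold. Having the script act on its own gameObject sidesteps that, with the replacement assigned on the prefab asset itself so every clone inherits it:

        #pragma strict

        // Assign on the prefab: the object to swap in when this one is clicked.
        var swapPrefab : GameObject;

        function OnMouseDown () {
            Instantiate(swapPrefab, transform.position, transform.rotation);
            Destroy(gameObject); // destroy the clicked clone itself
        }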

  • Impulsed jumping

    - by Mutoh
    There's one thing that has been puzzling me: how to implement a "faux-impulsed" jump in a platformer. If you don't know what I'm talking about, think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? Well, the height of your jump is determined by how long you keep the jump button pressed. These characters' "impulses" are built not before their jump, as in actual physics, but while in mid-air: you can lift your finger midway to the max height and the rise will stop, even if with deceleration between it and the full stop, which is why you can simply tap for a hop and hold for a long jump. What mesmerizes me is how they keep their trajectories as arcs. My current implementation works as follows: While the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it rises at Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that makes it cover X units before its speed reaches 0; once it does, it accelerates until its speed matches the gravity again; sticking to the example, it accelerates from 0 to Z units/tick while covering X units. This implementation, however, makes jumps too diagonal, and unless the avatar's speed is faster than the gravity (which would make it way too fast in my current project: it moves at about 4 pixels per tick while gravity is 10 pixels per tick, at a framerate of 40 FPS), it also makes them more vertical than horizontal. Those familiar with platformers will have noticed that a character's arced jump almost always lets it jump further even when it isn't as fast as the game's gravity, and when that fails, if not tuned right, it proves very counter-intuitive. I know this because I can attest that my implementation is very annoying. Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't had any experience with this before and want to give it a go, then please don't try to correct or enhance my implementation unless I was on the right track: try to build your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job. Also, I'd like the presentation to avoid language-specific code. As much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers in other languages would be interested in what we achieve here, so feel free to stick to pseudo-code. Thank you beforehand.
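
    Here is one widely used scheme (a sketch in Java-flavoured pseudo-code, since the poster asked for nothing language-specific; all constants are illustrative): gravity stays on at all times, jumping applies a single upward impulse, and releasing the button early simply cuts the remaining upward velocity. Horizontal speed is untouched, so the trajectory stays a parabolic arc whether tapped or held.

        // vy < 0 means moving up in screen coordinates.
        float y, vy;
        boolean onGround;
        static final float JUMP_SPEED = 260f; // px/s, illustrative
        static final float GRAVITY    = 900f; // px/s^2, illustrative

        void tick(float dt, boolean jumpJustPressed, boolean jumpJustReleased) {
            if (jumpJustPressed && onGround) {
                vy = -JUMP_SPEED;   // single impulse at takeoff
                onGround = false;
            }
            if (jumpJustReleased && vy < 0) {
                vy *= 0.5f;         // cut the remaining rise: tap = hop, hold = full arc
            }
            vy += GRAVITY * dt;     // gravity is never switched off
            y  += vy * dt;
        }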

  • Can one draw a cube using a different method/drawing mode?

    - by den-javamaniac
    Hi. I've just started learning gamedev (in particular Android EGL-based) and have run across code from Pro Android Games 2 that looks as follows:

        /*
         * Copyright (C) 2007 Google Inc.
         *
         * Licensed under the Apache License, Version 2.0 (the "License");
         * you may not use this file except in compliance with the License.
         * You may obtain a copy of the License at
         *
         *      http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         */
        package opengl.scenes.cubes;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.IntBuffer;
        import javax.microedition.khronos.opengles.GL10;

        public class Cube {
            public Cube() {
                int one = 0x10000;
                int vertices[] = {
                    -one, -one, -one,
                     one, -one, -one,
                     one,  one, -one,
                    -one,  one, -one,
                    -one, -one,  one,
                     one, -one,  one,
                     one,  one,  one,
                    -one,  one,  one,
                };
                int colors[] = {
                      0,   0,   0, one,
                    one,   0,   0, one,
                    one, one,   0, one,
                      0, one,   0, one,
                      0,   0, one, one,
                    one,   0, one, one,
                    one, one, one, one,
                      0, one, one, one,
                };
                byte indices[] = {
                    0, 4, 5,   0, 5, 1,
                    1, 5, 6,   1, 6, 2,
                    2, 6, 7,   2, 7, 3,
                    3, 7, 4,   3, 4, 0,
                    4, 7, 6,   4, 6, 5,
                    3, 0, 1,   3, 1, 2
                };

                // Buffers to be passed to gl*Pointer() functions must be direct,
                // i.e., they must be placed on the native heap where the garbage
                // collector cannot move them.
                //
                // Buffers with multi-byte datatypes (e.g., short, int, float)
                // must have their byte order set to native order.
                ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
                vbb.order(ByteOrder.nativeOrder());
                mVertexBuffer = vbb.asIntBuffer();
                mVertexBuffer.put(vertices);
                mVertexBuffer.position(0);

                ByteBuffer cbb = ByteBuffer.allocateDirect(colors.length * 4);
                cbb.order(ByteOrder.nativeOrder());
                mColorBuffer = cbb.asIntBuffer();
                mColorBuffer.put(colors);
                mColorBuffer.position(0);

                mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
                mIndexBuffer.put(indices);
                mIndexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
                gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
                gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
            }

            private IntBuffer mVertexBuffer;
            private IntBuffer mColorBuffer;
            private ByteBuffer mIndexBuffer;
        }

    So it suggests drawing a cube using triangles. My question is: can I draw the same cube using GL_POLYGON? If so, isn't that an easier/more understandable way to do things?
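
    Worth noting when experimenting: OpenGL ES, which the GL10 interface wraps, defines no GL_POLYGON mode at all; for a convex face the nearest equivalent is GL_TRIANGLE_FAN, drawn one face at a time. A sketch (mFaceIndexBuffer, holding the four indices of one face, is hypothetical):

        // Illustrative: one cube face as a fan (4 vertices, 2 triangles internally).
        gl.glDrawElements(GL10.GL_TRIANGLE_FAN, 4, GL10.GL_UNSIGNED_BYTE, mFaceIndexBuffer);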
