Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Wall jumping collision detection anomaly

    - by Nanor
    I'm creating a game where the player ascends a tower by wall jumping his way to the top. When the player has collided with the right wall they can only jump left, and vice versa. Here is my current implementation:

        if (wallCollision() == "left") {
            player.setPosX(0);
            player.setVelX(0);
            ignoreCollisions = true;
            player.setCanJump(true);
            player.setFacingLeft(false);
        } else if (wallCollision() == "right") {
            player.setPosX(screenWidth - playerWidth * 2);
            player.setVelX(0);
            ignoreCollisions = true;
            player.setCanJump(true);
            player.setFacingLeft(true);
        } else {
            player.setVelY(player.getVelY() + gravity);
        }

    and

        private String wallCollision() {
            if (player.getPosX() < playerWidth && !ignoreCollisions)
                return "left";
            else if (player.getPosX() > screenWidth - playerWidth * 2 && !ignoreCollisions)
                return "right";
            else {
                timeToJump += Gdx.graphics.getDeltaTime();
                if (timeToJump > 0.50f) {
                    timeToJump = 0;
                    ignoreCollisions = false;
                }
                return "jumping";
            }
        }

    If the player is colliding with the left wall, the state will switch between "left" and "jumping" repeatedly, because the variable ignoreCollisions is toggled back and forth by the collision checks. Depending on the timing, the player either jumps as intended or simply ascends vertically instead of diagonally. I can't figure out an implementation that will reliably make the player jump as intended. Does anyone have any pointers?
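
    A hedged sketch of one way to break the flip-flop (C# below; the question's code is Java/libGDX, and every name here is mine): latch the wall side once on contact and clear it only when the jump itself fires, so no timer can hand control back to "jumping" while the player is still pressed against the wall.

        enum WallSide { None, Left, Right }

        sealed class WallJumpState
        {
            public WallSide StuckTo = WallSide.None;

            // Called from the collision check. Latching means repeated
            // contact tests cannot toggle the state back and forth.
            public void OnWallContact(WallSide side)
            {
                StuckTo = side;
            }

            // Called once when the jump input fires; returns the launch velocity.
            public (float velX, float velY) OnJumpPressed(float jumpSpeed)
            {
                float vx = 0f;
                if (StuckTo == WallSide.Left)  vx = +jumpSpeed;  // left wall: may only jump right
                if (StuckTo == WallSide.Right) vx = -jumpSpeed;  // right wall: may only jump left
                StuckTo = WallSide.None;  // cleared by the jump itself, not by a timer
                return (vx, jumpSpeed);   // assuming +y is up
            }
        }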

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to pass the effect to my Models like this:

        foreach (ModelMesh mesh in Model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = effect;
            }
            mesh.Draw();
        }

    My only issue is that every Model now appears to have its own light source in it. Why is this happening, and is this a problem with my shader? Edit: These are the parameters passed to the shader:

        private void Get_lambertEffect()
        {
            if (_lambertEffect == null)
                _lambertEffect = Engine.LambertEffect;

            // Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF)
            _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"];
            _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize);

            // ShadowMap parameters
            _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix);
            _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix);
            _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias);
            _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture);

            // Camera view and projection parameters
            _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix);
            _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix);
            _lambertEffect.Parameters["world"].SetValue(Matrix.CreateScale(Size) * world);

            // Light and color
            _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction);
            _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color);
            _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient);
            _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse);
            _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red));
        }
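
    One thing worth checking (a guess, not a confirmed diagnosis): if the "world" parameter is set once per model rather than per mesh, or omits each mesh's absolute bone transform, every mesh gets lit as though it sat at the origin, which can look like each model carrying its own light. A hedged C# sketch of per-mesh setup, with "transforms" assumed to be filled once per model:

        Matrix[] transforms = new Matrix[Model.Bones.Count];
        Model.CopyAbsoluteBoneTransformsTo(transforms);

        foreach (ModelMesh mesh in Model.Meshes)
        {
            // per-mesh world matrix: bone transform first, then the object's own scale/placement
            effect.Parameters["world"].SetValue(transforms[mesh.ParentBone.Index] * Matrix.CreateScale(Size) * world);
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = effect;
            }
            mesh.Draw();
        }

    If the shader lights in world space, lightDir should stay constant across the scene while only "world" varies per mesh.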

  • What's the best way to add some particle or laser effects to an already animated character?

    - by Scott
    I just purchased some rigged and animated robot characters from 3drt for a game I'm making in Unity. I would like to be able to add some weapon effects to the characters. For example, I would like the robots to be able to shoot lasers out of their hands at enemies. I have no idea where to even start with this task, as I'm more of a programmer than a graphics guy. Can some experienced developers/designers please point me in a good direction? Thanks. Note: As of right now I have Maya and Blender installed on my computer.

  • Panning a 3d viewport in 2d direction with rotated camera

    - by Noob Game Developer
    I am using the code below to pan the viewport (ActionScript 3 code using the Flare3D framework):

        _mainCamera.x -= Input3D.mouseXSpeed;
        _mainCamera.z += Input3D.mouseYSpeed;

    where Input3D.mouse[X|Y]Speed gives the displacement of the mouse on the X/Y axis since the position of the last frame. This works perfectly if my camera is not rotated. However, if I rotate the camera (x by 30, y by 60) and then pan, it goes wrong. That is actually correct panning according to the code, but it is not the desired behaviour, and I know I need to do some math to get the correct x/y, which I am not aware of. Can someone help me achieve it? Update: I am getting an idea, but I am not sure how to do it:

        Get the mouse X/Y deltas (xd, yd)
        Get the current camera coords (pos3d)
        Convert to screen coords (pos2d)
        Add the deltas to the screen coords (pos2d + (xd, yd))
        Convert the above coords back to 3d coords
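
    A hedged sketch of the usual remedy (C# with System.Numerics here, though the question is AS3/Flare3D; the math transfers): derive the camera's local right and up axes from its rotation and pan along those instead of along the world axes.

        using System.Numerics;

        static Vector3 Pan(Vector3 camPos, Quaternion camRot, float dx, float dy)
        {
            Vector3 right = Vector3.Transform(Vector3.UnitX, camRot); // camera-local +X in world space
            Vector3 up    = Vector3.Transform(Vector3.UnitY, camRot); // camera-local +Y in world space
            // Signs depend on your mouse conventions; for a ground-locked pan,
            // project right/up onto the XZ plane and renormalize them first.
            return camPos - right * dx + up * dy;
        }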

  • Are there any OpenGL ES 2.0 examples for JOGL?

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but "by Jupiter!" it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide (while also looking at the WebGL example, which worked fine), yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question, see this thread on the official JogAmp forum.

  • OpenGL render to texture causing edge artifacts

    - by mysticalOso
    This is my first post here, so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to the screen I get the following result, and when I render to texture this happens. Anti-aliasing is turned off for both. I'm guessing this has something to do with depth buffer accuracy, but I've tried a lot of different methods to improve the result with no success :( I'm currently using the following code to set up my FBO:

        GLuint frameBufferID;
        glGenFramebuffers(1, &frameBufferID);
        glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID);

        glGenTextures(1, &coloursTextureID);
        glBindTexture(GL_TEXTURE_2D, coloursTextureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Depth buffer setup
        GLuint depthrenderbuffer;
        glGenRenderbuffers(1, &depthrenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0);

        GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
        glDrawBuffers(1, DrawBuffers);

        // if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) return false;

    Thank you so much for any help :)

  • GameStateManagement and inputs not being recognized

    - by Dave Voyles
    EDIT: I've removed a bit of code from the input class to make this more readable, and updated my StartScreen class, which is now at the bottom. I have the same issues, though, and they are explained in my comments at the bottom of this page. It won't let me paste my additional code here (the formatting comes out crazy), so I've linked to it on pastebin. I've been trying to implement the Microsoft-provided GameStateManagement sample with my game, but it has proven a bit difficult. Really, I'm using Oneksoft's Starter Kit, which uses the MS-provided sample, so they are identical except for my splash screen. I'm able to get the splash screen to launch, where it informs the player to press A to advance the screen, but it doesn't seem to accept any of my inputs. I've also added Console.WriteLine("Pressing A") under the IsMenuSelect method in Input.cs to verify that it is getting called, but for some reason it is constantly spamming my log rather than just appearing each time I press A. Not sure why this is happening. I have a bit too much code to post it all here, so I've attached a link to my .rar with my classes, but I'll also leave a bit here which I think may be applicable. https://www.dropbox.com/sh/6ek4uru2jc2ch0k/JTeBWN_3PQ What do you guys think the issue is?

        namespace Pong
        {
            public class Input
            {
                public const int MaxInputs = 4;
                public readonly KeyboardState[] CurrentKeyboardState;
                public readonly GamePadState[] CurrentGamePadState;
                public KeyboardState[] LastKeyboardState;
                public GamePadState[] LastGamePadState;
                public readonly bool[] GamePadWasConnected;

                public Input()
                {
                    // Get input state
                    CurrentKeyboardState = new KeyboardState[MaxInputs];
                    CurrentGamePadState = new GamePadState[MaxInputs];

                    // Preserving last states to check for isKeyUp events
                    LastKeyboardState = CurrentKeyboardState;
                    LastGamePadState = CurrentGamePadState;
                }

                /// <summary>
                /// Checks for a "menu select" input action.
                /// The controllingPlayer parameter specifies which player to read input for.
                /// If this is null, it will accept input from any player. When the action
                /// is detected, the output playerIndex reports which player pressed it.
                /// </summary>
                public bool IsMenuSelect(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
                {
                    Console.WriteLine("Pressing A");
                    return IsNewKeyPress(Keys.Space, controllingPlayer, out playerIndex) ||
                           IsNewKeyPress(Keys.Enter, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.A, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.Start, controllingPlayer, out playerIndex);
                }

                /// <summary>
                /// Checks for a "menu cancel" input action.
                /// The controllingPlayer parameter specifies which player to read input for.
                /// If this is null, it will accept input from any player. When the action
                /// is detected, the output playerIndex reports which player pressed it.
                /// </summary>
                public bool IsMenuCancel(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
                {
                    return IsNewKeyPress(Keys.Escape, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.B, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.Back, controllingPlayer, out playerIndex);
                }
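
    Two things stand out (a hedged reading, sketched in C#). First, the log spam is expected: IsMenuSelect is called every frame by the screen's HandleInput, and the WriteLine sits before the key tests, so it prints whether or not A is down. Second, the constructor assigns LastKeyboardState = CurrentKeyboardState, which copies the array reference, so "last" and "current" are always the same array and IsNewKeyPress can never see a key change state. The original GameStateManagement sample allocates separate arrays (LastKeyboardState = new KeyboardState[MaxInputs], and likewise for gamepads) and snapshots them once per frame, roughly:

        public void Update()
        {
            for (int i = 0; i < MaxInputs; i++)
            {
                // KeyboardState/GamePadState are structs, so these are value copies
                LastKeyboardState[i] = CurrentKeyboardState[i];
                LastGamePadState[i] = CurrentGamePadState[i];

                CurrentKeyboardState[i] = Keyboard.GetState((PlayerIndex)i);
                CurrentGamePadState[i] = GamePad.GetState((PlayerIndex)i);
            }
        }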

  • Any advice for dynamic music control?

    - by Assembler
    I would like to be able to dynamically progress the score and affect the volume levels of separate channels within the music. How could I do this? From my experience with mod music (olden-days Amiga music: Mod Tracker, Scream Tracker, Fast Tracker II, Impulse Tracker, etc.), I believe this is the best way to tackle the problem, allowing the music to move from one loop to another without anything being mixed down. I want to do this in AS3, and am considering pulling apart Flod to make this happen.

  • Models from 3ds Max lose their transformations when imported into XNA

    - by jacobian
    I am making models in 3ds Max. However, when I export them to .fbx format and then load them into XNA, they lose their scaling. It is most likely something to do with not using the transforms from the model correctly. Is the following code correct (using XNA 3.0)?

        Matrix[] transforms = new Matrix[playerModel.Meshes.Count];
        playerModel.CopyAbsoluteBoneTransformsTo(transforms);

        // Draw the model.
        int count = 0;
        foreach (ModelMesh mesh in playerModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.World = transforms[count]
                             * Matrix.CreateScale(scale)
                             * Matrix.CreateRotationX((float)MathHelper.ToRadians(rx))
                             * Matrix.CreateRotationY((float)MathHelper.ToRadians(ry))
                             * Matrix.CreateRotationZ((float)MathHelper.ToRadians(rz))
                             * Matrix.CreateTranslation(position);
                effect.View = view;
                effect.Projection = projection;
                effect.EnableDefaultLighting();
            }
            count++;
            mesh.Draw();
        }
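
    For what it's worth (hedged, but this is the standard XNA pattern): CopyAbsoluteBoneTransformsTo expects an array sized by Bones.Count, not Meshes.Count, and each mesh should pick its transform by its parent bone index rather than by mesh order. The exporter's scaling usually lives in those bone transforms:

        Matrix[] transforms = new Matrix[playerModel.Bones.Count];
        playerModel.CopyAbsoluteBoneTransformsTo(transforms);

        foreach (ModelMesh mesh in playerModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                // the mesh's own bone carries the scale applied in 3ds Max
                effect.World = transforms[mesh.ParentBone.Index]
                             * Matrix.CreateScale(scale)
                             * Matrix.CreateRotationX(MathHelper.ToRadians(rx))
                             * Matrix.CreateRotationY(MathHelper.ToRadians(ry))
                             * Matrix.CreateRotationZ(MathHelper.ToRadians(rz))
                             * Matrix.CreateTranslation(position);
                effect.View = view;
                effect.Projection = projection;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }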

  • Particle Effect Completion

    - by Siddharth
    In my game I use particle effects for various purposes, and I need to detect the completion of a particle effect; basically, I want to do something after the effect completes. The problem is that I wasn't able to find out how to detect the particle effect's completion. Can any community member please help me? EDIT: I was creating the particle effect using the following code:

        pointParticleEmtitter = new PointParticleEmitter(pX, pY);
        particleSystem = new ParticleSystem(pointParticleEmtitter, maxRate, minRate, maxParticles,
                mParticleTextureRegion.deepCopy());
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new ColorInitializer(0f, 0f, 1f));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 0, 0.5f));
        particleSystem.addParticleModifier(new ExpireModifier(0.5f));
        gameObject.getScene().attachChild(particleSystem);

    Using the above code the particle effect is started, but I want to detect when it finishes. After the effect finishes I want to remove the object from the scene.

  • Question about C# event handlers

    - by aF
    I have this code in C#:

        public void startRecognition(string pName)
        {
            presentationName = pName;
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                string grammar = System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Presentations\\" + presentationName + "\\SpeechRecognition\\soundlog.cfg";
                /* if (File.Exists(grammar)) { File.Delete(grammar); } executeCommand(); */
                recContext = new SpSharedRecoContextClass();
                recContext.CreateGrammar(0, out recGrammar);
                if (File.Exists(grammar))
                {
                    recGrammar.LoadCmdFromFile(grammar, SPLOADOPTIONS.SPLO_STATIC);
                    recGrammar.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);
                    recGrammar.SetRuleIdState(0, SPRULESTATE.SPRS_ACTIVE);
                }
                recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);
                //recContext.RecognitionForOtherContext += new _ISpeechRecoContextEvents_RecognitionForOtherContextEventHandler(handleRecognition);
                //System.Windows.Forms.MessageBox.Show("olari");
            }
        }

        private void handleRecognition(int StreamNumber, object StreamPosition,
            SpeechLib.SpeechRecognitionType RecognitionType, SpeechLib.ISpeechRecoResult Result)
        {
            System.Windows.Forms.MessageBox.Show("entrei");
            string temp = Result.PhraseInfo.GetText(0, -1, true);
            _recognizedText = "";
            foreach (string word in recognizedWords)
            {
                if (temp.Contains(word))
                {
                    _recognizedText = word;
                }
            }
        }

        public void run()
        {
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_identifiedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _identifiedVoices = (double[])o;
            }
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_deletedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _deletedVoices = (ArrayList)o;
            }
            myTimer.Interval = 5000;
            myTimer.Tick += new EventHandler(clearData);
            myTimer.Start();
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                _waveFormat = new WaveFormat(_samples, 16, 2);
                _recorder = new WaveInRecorder(-1, _waveFormat, 8192 * 2, 3, new BufferDoneEventHandler(DataArrived));
                _scaleHz = (double)_samples / _fftLength;
                _limit = (int)((double)_limitVoice / _scaleHz);
                SoundLogDLL.MelFrequencyCepstrumCoefficients.calculateFrequencies(_samples, _fftLength);
            }
        }

    startRecognition is a speech recognition method that loads a grammar and hooks up the recognition handler here:

        recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);

    Now I have a problem. When I call startRecognition before run, both handlers (the recognition one and the Tick handler) work well: if a word is recognized, handleRecognition is called. But when I call run before startRecognition, both methods seem to run well, yet the recognition handler is never executed, even when I can see that words are being recognized (because they appear in the Windows Speech Recognition app). What can I do so that the recognition handler is always called?

  • Transparent JPanel, Canvas background in JFrame

    - by Andy Tyurin
    I want to make a canvas background and add some elements on top of it. For this goal I made a JPanel transparent with setOpaque(false) and added it as the first element of the JFrame container; then I added a canvas with a black background (in the future I want to set an animation there) to the JFrame as the second element. But I can't understand why I see a grey background, not a black one. Any suggestions?

        public class Game extends JFrame {
            public Container container;          // Game container with components
            public Canvas backgroundLayer;       // Background layer of the game
            public JPanel elementsLayer;         // elements panel (on top of backgroundLayer), holds different elements
            private Dimension startGameDimension = new Dimension(800, 600); // start game dimension

            public Game() {
                // init main window
                super("Astra LaserForces");
                setSize(startGameDimension);
                setBackground(Color.CYAN);
                container = getContentPane();
                container.setLayout(null);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

                // init jpanel elements layer
                elementsLayer = new JPanel();
                elementsLayer.setSize(startGameDimension);
                elementsLayer.setBackground(Color.BLUE);
                elementsLayer.setOpaque(false);
                container.add(elementsLayer);

                // init canvas background layer
                backgroundLayer = new Canvas();
                backgroundLayer.setSize(startGameDimension);
                backgroundLayer.setBackground(Color.BLACK); // set default black color
                container.add(backgroundLayer);
            }

            // start game
            public void start() {
                setVisible(true);
            }

            // create new instance of game and start it
            public static void main(String[] args) {
                new Game().start();
            }
        }

  • Dynamic quadtrees

    - by paul424
    Recently I set about writing a quadtree for creature culling in the OpenDungeons game. The thing is, those are moving points, and the bounding hierarchy quickly becomes invalid if the quadtree is not rebuilt very often. I have several variants. The first is to update the leaf position every time a creature move is requested (note that I need collision detection anyway, so this might be necessary regardless). The second would be making the leaves large enough that a creature is sure to stay inside its bounding box (due to its speed limit). But the partition of a plane in a quadtree is always fixed (modulo the hierarchical unions of some parts), so for creatures close to the centre of the plane there would be no way of keeping them anywhere but inside one big leaf; besides, this breaks the invariant that each point can be put into an arbitrarily small area. So, on second thought, could I use several quadtrees, each with its "coordinate axes XY" shifted somewhere else? Or, before I start playing with this, maybe some other space-dividing structure would suit me better; unfortunately the wiki does not compare their execution times: http://en.wikipedia.org/wiki/Grid_%28spatial_index%29#See_also
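
    One structure worth naming here is the loose quadtree. A hedged C# sketch (names are mine): each node owns a point by its strict cell, but queries test against a loose cell of twice the size, so a moving point can drift up to half a cell before it must be re-inserted.

        using System;

        struct Cell
        {
            public float X, Y, Size; // strict bounds of the node

            public bool OwnsPoint(float px, float py) =>
                px >= X && px < X + Size && py >= Y && py < Y + Size;

            // loose bounds: same centre, twice the size
            public bool LooselyContains(float px, float py)
            {
                float cx = X + Size / 2, cy = Y + Size / 2;
                return Math.Abs(px - cx) <= Size && Math.Abs(py - cy) <= Size;
            }
        }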

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions:

    What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)
    How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)

    Some possible methods:

    Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (could be 1-10 MB).
    Use a form of "sparse" data storage where each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments.
    Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs.
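
    For the first two questions, a small illustrative sketch (C#; the class is hypothetical) of the most common layout: one contiguous packed 32-bpp array indexed as y * width + x. The unused fourth byte buys aligned 32-bit accesses, which usually beats tightly packed 24-bpp even though it costs a third more memory.

        sealed class Bitmap32
        {
            public readonly int Width, Height;
            public readonly uint[] Pixels; // 0xAARRGGBB, row-major, no padding

            public Bitmap32(int width, int height)
            {
                Width = width;
                Height = height;
                Pixels = new uint[width * height]; // one contiguous chunk
            }

            public uint GetPixel(int x, int y) => Pixels[y * Width + x];
            public void SetPixel(int x, int y, uint argb) => Pixels[y * Width + x] = argb;
        }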

  • iOS + cocos2d: how to account for sprite positions across different device dimensions in a universal app?

    - by fuzzlog
    All the questions I've seen regarding iOS universal apps (with or without cocos2d) deal with how to add graphics to a universal app. My question is: how does the code need to be written to ensure that the sprites appear appropriately on the screen (given that an iPhone 5's resolution is not proportional to an iPad's resolution)? Is it just a bunch of "if" statements and duplicate code, or do iOS/cocos2d provide common function calls that will place the sprites at an appropriate position?

  • How to add reflection definition to read JSON files in web game

    - by user3728735
    I have a game which I deployed for desktop and Android. I can read JSON data and create my levels, but when it comes to reading JSON files from the web app, I get an error that logs, "cannot read the json file". I researched a lot and found out that I should add my JSON config class to the reflection configuration, so I added this line to gameName.gwt.xml, which is in the core folder:

        <extend-configuration-property name="gdx.reflect.include" value="com.las.get.level.LevelConfig"/>

    But it did not work. I have no idea where I should place this line, or what I should change to make my web app work so that I can read JSON files.

  • How do I create weapon attachments?

    - by Tron86
    My question is this: I am developing a game for XNA and I am trying to create a weapon attachment for my player model. My player model loads the .md3 format and reads tags for attachment points. I am able to get the tag of my model's hand, and I am also able to get the tag of my weapon's handle. For each tag I am able to get the rotation and position, and this is how I am calculating it:

        Model.worldMatrix = Matrix.CreateScale(Model.scale) *
                            Matrix.CreateRotationX(-MathHelper.PiOver2) *
                            Matrix.CreateRotationY(MathHelper.PiOver2);

    Pretty simple: the player model has a scale and its orientation (it loads on its side, so I just use a 90-degree X-axis rotation, and a Y-axis rotation to face away from the camera). I then calculate the torso tag on the lower body, which gives me a local coordinate at the waist. Then I take that matrix and calculate the tag_weapon in the upper body. This gives me the hand position in local space. I also get the rotation matrix from that tag, which I store for later use. All this seems to work fine. Now I move on to my weapon:

        Matrix weaponWorld = Matrix.CreateScale(CurrentWeapon.scale) *
                             Matrix.CreateRotationX(-MathHelper.PiOver2) *
                             TagRotationMatrix *
                             Matrix.CreateTranslation(HandTag.Position) *
                             Matrix.CreateRotationY(PlayerRotation) *
                             Matrix.CreateTranslation(CollisionBody.Position);

    You may notice the weapon matrix gets rotated by 90 degrees on the X axis as well; this is because weapons load in on their sides too. Once again this seems pretty simple and follows the SRT order I keep reading about. My TagRotationMatrix is the hand's rotation, HandTag.Position is its position in local space, CreateRotationY(PlayerRotation) is the player's rotation in world space, and CollisionBody.Position is the player's world location. Everything seems to be in order, and it almost works in game. However, when the gun spawns and follows the player's hand, it seems to be flipped on an axis every couple of frames, almost like the X or Y axis is being inverted and then put right back. It's hard to explain, and I am totally stumped. Even removing all my X-axis fixes does nothing to solve the problem. Hopefully I explained everything well enough, as I am a bit new to this. Thanks!

  • Does it make the game more fun when the user is forced to progress through the levels sequentially rather than letting them pick and play?

    - by BeachRunnerJoe
    Hello. For the first time in my game, I'm stuck with a real design dilemma. I guess that's a good thing ;) I'm building a word puzzle game that has five levels, each with 30 puzzles. Currently, the user has to solve one puzzle at a time before moving to the next. However, I'm finding the user occasionally gets stuck on a puzzle, at which point they can no longer play until they solve it. This is obviously bad, because many people will probably just quit playing the game and delete the app. The only elegant solution I can find for helping the player get unstuck is changing the design of the game to allow users to pick any puzzle to play at any time. This way, if they get stuck, they can come back to it later, and at least they have other puzzles to play in the meantime. It's my opinion, however, that this new flow design doesn't make the game as fun as the original flow design, where the player has to complete a puzzle before moving to the next. To me, it's like anything else: when you only have one of something, it's more enjoyable, but when you have 30 of something, it's far less enjoyable. In fact, when I present the user with 30 puzzles to choose from, I'm concerned I might be making them feel like it's a lot of work they have to do, and that's bad. I even had a tester voluntarily tell me that being forced to complete a puzzle before moving to the next is actually motivating. My questions are: Do you agree/disagree? Do you have any suggestions for how I can help the player get unstuck? Thanks so much in advance for your thoughts! EDIT: I should mention that I've already considered a few other solutions to helping the user get unstuck, but none of them seem like good ideas:

    Add more hints: Currently, the user gets two hints per puzzle. If I increase the hint count, it only makes the game easier and still leaves the possibility of the user getting stuck.
    Add a "Show Solution" button: This seems like a bad idea, because in my opinion it takes the fun out of the game for the many people who would probably otherwise solve the puzzle if they didn't have the quick option to see the solution.

  • Translate along local axis

    - by Aaron
    I have an object with a position matrix and a rotation matrix (derived from a quaternion, but I digress). I'm able to translate this object along world-relative vectors, but I'm trying to figure out how to translate it along local-relative vectors. So if the object is tilted 45 degrees around its Z-axis, the vector (1, 0, 0) should make it move to the upper right. For world-space translations I simply turn the movement vector into a matrix and multiply it by the position matrix: position_mat = translation_mat * position_mat. For local-space translations I'd think I'd have to work the rotation matrix into that formula, but no matter where I multiply in the rotation matrix, I see the object spin around instead when I apply a translation over time.
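
    A minimal sketch of the usual fix (C# with System.Numerics; the function name is mine): rotate the movement vector by the object's rotation first, then apply the result as an ordinary world-space translation, so the rotation matrix never multiplies the position matrix directly.

        using System.Numerics;

        static Matrix4x4 TranslateLocal(Matrix4x4 position, Matrix4x4 rotation, Vector3 localMove)
        {
            // direction only: TransformNormal ignores any translation in "rotation"
            Vector3 worldMove = Vector3.TransformNormal(localMove, rotation);
            return position * Matrix4x4.CreateTranslation(worldMove);
        }

    Called every frame as position = TranslateLocal(position, rotation, new Vector3(speed * dt, 0, 0)), the object moves along its own tilted X axis without orbiting.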

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! During this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option from the whole. This has been my decision:

    1) I don't have an entity class; it is just an id.
    2) I have systems that contain a list of components (the list is homogeneous; I mean, RenderSystem will just have RenderComponents).
    3) Components will be just data.
    4) There will be some kind of "entity prototypes" in a manager or something, from which we will create entity instances. Ideally they will define the type of components an entity has and its initialization data.
    5) Prototype code to create an entity (this is from the top of my head): int id = World::getInstance()->createEntity("entity template");
    6) This will notify all systems that a new entity has been created, and if the entity needs a component that the system handles, the system will add it to the entity.

    OK, those are the ideas. Let's see if someone can help with the problems:

    1) The main problem is the templates that are sent to the systems during the creation process to populate the entity with the needed components. What would you use: an OR(ed) int? A list of strings?
    2) How to do initialization for components when the entity has been created? How to store this in the template? I have thought about giving the template a virtual function that, after the entity is created and populated, gets the components and sets the initialization values.
    3) Don't you think this is a lot of work for just entity creation? Sorry for the long post; I have tried to lay out my ideas and findings so that others could have a starting point, besides exposing my problems. Thanks in advance, Notbad.
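
    For problem 1, a hedged sketch of the "OR(ed) int" option (written in C# although the question's snippet is C++; all names are hypothetical): one bit per component type, a template stores the OR of its bits, and each system keeps a mask, so "does this entity concern me?" becomes a single AND.

        using System;

        [Flags]
        enum ComponentMask : uint
        {
            None    = 0,
            Render  = 1 << 0,
            Physics = 1 << 1,
            Health  = 1 << 2,
        }

        sealed class EntityTemplate
        {
            public string Name;
            public ComponentMask Components; // which components instances receive
        }

        static class SystemFilter
        {
            // called when an entity is created: each system checks its own mask
            public static bool Wants(ComponentMask entity, ComponentMask system) =>
                (entity & system) != 0;
        }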

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete, grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1 m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:

    Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division. I'm not sure how performant this is in shaders, though; it's not as intuitive to read, but possibly quite a bit faster than dividing by a non-PoT value.
    Use a texture as a lookup table. I've heard that this is really fast.

    Or whatever solutions others can offer to achieve the same result: minimal vertex data with sensible scaling.

  • BoundingSpheres move when they should not

    - by NDraskovic
    I have an XNA 4.0 project in which I load a file that contains the type and coordinates of items I need to draw to the screen. I also need to check whether one particular type (the only movable one) is passing in front of or through other items. This is the code I use to load the configuration:

        if (ks.IsKeyDown(Microsoft.Xna.Framework.Input.Keys.L))
        {
            this.GraphicsDevice.Clear(Color.CornflowerBlue);
            Otvaranje.ShowDialog();
            try
            {
                using (StreamReader sr = new StreamReader(Otvaranje.FileName))
                {
                    String linija;
                    while ((linija = sr.ReadLine()) != null)
                    {
                        red = linija.Split(',');
                        model = red[0];
                        x = red[1];
                        y = red[2];
                        z = red[3];
                        elementi.Add(Convert.ToInt32(model));
                        podatci.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                        sfere.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
                    }
                }
            }
            catch (Exception ex)
            {
                Window.Title = ex.ToString();
            }
        }

    "Otvaranje" is an OpenFileDialog object, "elementi" is a List<int> (it determines the type of item that will be drawn), "podatci" is a List<Vector3> (it determines the location where the items will be drawn), and "sfere" is a List<BoundingSphere>. Now, I have solved the picking algorithm (checking for ray and bounding sphere intersection) and it works fine, but the collision detection does not. I noticed, while using picking, that the BoundingSpheres move even though the objects they correspond to do not. The movable object is drawn with the world1 matrix, and the static objects are drawn with the world2 matrix (world1 and world2 have the same values; I just separated them so that the static elements would not move when the movable one does). The problem is that when I move the item I want, all BoundingSpheres move accordingly. How can I move only the BoundingSphere that corresponds to that particular item, and leave the rest where they are?
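
    A hedged C#/XNA sketch of one way out (field names follow the question): keep the stored spheres in model space and, when testing, transform a copy of just the selected item's sphere by that item's own world matrix. BoundingSphere.Transform returns a new sphere, so the list itself never moves.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static BoundingSphere MovedSphere(List<BoundingSphere> sfere, int index, Matrix world1)
        {
            // only the movable item's sphere follows world1; the rest stay put
            return sfere[index].Transform(world1);
        }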

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice.

    Hybrid rendering. Now, I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known theory. However, I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth-buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need and nothing more. Could this be fast enough for real-time effects?

    Rendered physics. I'm specifically talking about bullet physics: intersection of a very small object (point/bullet) that travels along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet), with every object in the scene drawn in a different color. You only need to render a 1x1-pixel window, the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. Afaik traditional OpenGL "picking" is a similar method. This could be extended in a few ways:

    For larger (non-bullet) objects you render a larger portion of the screen.
    If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that also works like the traditional slow-moving iterative physics test.
    You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?
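
    The 1x1 readback in the second idea maps directly onto render targets. A hedged C#/XNA sketch (hypothetical names; XNA chosen since most snippets on this page use it): draw each object in a unique flat color into an ID pass, then read back the single centre pixel.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static int ReadHitId(RenderTarget2D idPass)
        {
            Color[] pixel = new Color[1];
            Rectangle centre = new Rectangle(idPass.Width / 2, idPass.Height / 2, 1, 1);
            idPass.GetData(0, centre, pixel, 0, 1); // read back exactly one pixel
            return pixel[0].R | (pixel[0].G << 8) | (pixel[0].B << 16); // unpack 24-bit object id
        }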

  • Need to produce an animated texture of Water where each image tiles in all directions

    - by ProfVersaggi
    I need to produce a 2D animated texture of water for a game, in which each image tiles in all directions, much like those produced by the Caustics Generator, but with the power and flexibility of something like Blender. The final result from the Caustics Generator is 32 images, animated such that when the full 32 images are played in a loop they seamlessly loop forever. They not only loop in time; each image also tiles in all directions. This is nice, but it comes in only one flavor, so to speak. I'd like to accomplish the same thing with a Blender-type tool, and I have actually gotten to the point where I generate the X number of images, but they do not tile in all directions, nor are they animated. I've tried Blender texture animations using offsets, but with only limited success. Does anyone know how to (or of a tool that will) animate textures such that they tile in all (4) directions? Many thanks in advance ....

  • Whole continent simulation [on hold]

    - by user2309021
    Let's suppose I am planning to create a simulation of an entire continent at some point in the past (say, around 0 A.D.). Is it feasible to spawn a hundred million actors that interact with each other and their environments, having them reproduce, extract resources, etc.? In fact, I want to create a simulation that allows me to zoom in from a view of the entire continent down to a single village, and interact with it. (Think as if you could keep zooming in on the campaign map of any Total War game and the transition to the battle map were seamless, not a change of "game mode".) By the way, I have never made a game in my entire life (I have programmed normal desktop applications, though), so I am really having trouble wrapping my head around how to implement such a thing. Even while thinking about how to implement a simple population simulator without a graphical interface, I think that the O(n) complexity of traversing an array and telling all people to get one year older on each tick of the program is kind of stupid. Any kind of help would be greatly appreciated :) EDIT: After being put on hold, I shall specify a question: how would you implement a simulation of all basic human dynamics (reproduction, resource consumption) for an entire continent (with millions of people)?
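
    On the O(n) aging worry specifically, a hedged C# sketch (my own design assumption, not from the question): store the population as counts per one-year cohort in a ring buffer. A year tick is then O(1), because the age origin moves instead of every person being touched.

        sealed class PopulationCohorts
        {
            readonly int[] counts = new int[120]; // counts[i] holds one one-year cohort
            int ageZero;                          // ring index of the newborn cohort

            public void AdvanceOneYear(int births)
            {
                // shifting the origin ages every cohort at once; the slot that
                // wraps around held the oldest cohort, which dies off here
                ageZero = (ageZero - 1 + counts.Length) % counts.Length;
                counts[ageZero] = births;
            }

            public int CountAtAge(int age) => counts[(ageZero + age) % counts.Length];
        }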
