Search Results

Search found 16410 results on 657 pages for 'game component'.


  • Changing Ogre3D terrain lighting in real time

    - by lezebulon
    I'm looking at the Ogre3D library and browsing through some of its examples and tutorials. My question is about terrain. There are a few examples showing how great the terrain system is, but it looks like the terrain's global illumination and shadows have to be pre-computed, which makes it practically impossible to integrate with a day/night cycle. Is there a way to change the terrain light sources in real time? If so, is it possible to do that and keep a decent FPS?

    Read the article

  • Algorithm for creating spheres?

    - by Dan the Man
    Does anyone have an algorithm for creating a sphere procedurally with la latitude lines, lo longitude lines, and a radius of r? I need it to work with Unity, so the vertex positions need to be defined and then the triangles defined via indices (more info). EDIT: I managed to get the code working in Unity, but I think I might have done something wrong. When I turn up the detailLevel, all it does is add more vertices and polygons without moving them around. Did I forget something?
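
    A hedged sketch of the standard UV-sphere construction in C# for Unity, following the question's la/lo/r naming. The winding order below is one common convention and may need flipping for Unity's clockwise front faces; note that every vertex position is scaled by the two loop angles, which is exactly the part that is missing when extra detail only adds vertices without moving them:

        Mesh CreateSphere(int la, int lo, float r)
        {
            // One vertex per latitude/longitude intersection, poles included.
            var vertices = new Vector3[(la + 1) * (lo + 1)];
            for (int i = 0; i <= la; i++)
            {
                float theta = Mathf.PI * i / la;          // 0 at the top pole, pi at the bottom
                for (int j = 0; j <= lo; j++)
                {
                    float phi = 2f * Mathf.PI * j / lo;   // once around the vertical axis
                    vertices[i * (lo + 1) + j] = new Vector3(
                        r * Mathf.Sin(theta) * Mathf.Cos(phi),
                        r * Mathf.Cos(theta),
                        r * Mathf.Sin(theta) * Mathf.Sin(phi));
                }
            }

            // Two triangles per quad between neighbouring latitude rows.
            var triangles = new int[la * lo * 6];
            int t = 0;
            for (int i = 0; i < la; i++)
            {
                for (int j = 0; j < lo; j++)
                {
                    int a = i * (lo + 1) + j;   // current row
                    int b = a + lo + 1;         // row below
                    triangles[t++] = a;     triangles[t++] = b; triangles[t++] = a + 1;
                    triangles[t++] = a + 1; triangles[t++] = b; triangles[t++] = b + 1;
                }
            }

            var mesh = new Mesh { vertices = vertices, triangles = triangles };
            mesh.RecalculateNormals();
            return mesh;
        }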

    Read the article

  • Trying to figure out SDL pixel manipulation?

    - by NoobScratcher
    Hello, so I've found code that plots a pixel in an SDL screen surface:

        void putpixels(int x, int y, int color)
        {
            unsigned int *ptr = (unsigned int*)Screen->pixels;
            int lineoffset = y * (Screen->pitch / 4);
            ptr[lineoffset + x] = color;
        }

    But I have no idea what it's actually doing. Here are my thoughts: you make an unsigned int pointer to the surface's pixels, then you make another integer to hold the line offset, which is y multiplied by the pitch divided by 4... Why am I dividing by 4? What is the pitch, and why do I multiply by it? And why must I add the line offset to x and assign the color to that element? I'm so confused. I found this function here - http://sol.gfxile.net/gp/ch02.html
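
    A hedged walkthrough of the same function as comments; the key assumption is that the surface really uses 4 bytes (32 bits) per pixel, which is what makes the division by 4 valid:

        void putpixels(int x, int y, int color)
        {
            // Treat the surface's raw pixel memory as an array of 32-bit values.
            unsigned int *ptr = (unsigned int*)Screen->pixels;

            // pitch is the length of one row in *bytes* (it may include padding),
            // so pitch / 4 is the length of one row in 32-bit *pixels*.
            int lineoffset = y * (Screen->pitch / 4);

            // Skip y whole rows, then x pixels into that row, and write the color.
            ptr[lineoffset + x] = color;
        }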

    Read the article

  • Comparing angles and working out the difference

    - by Thomas O
    I want to compare angles and get an idea of the distance between them. For this application, I'm working in degrees, but it would also work for radians and grads. The problem with angles is that they depend on modular arithmetic, i.e. 0-360 degrees. Say one angle is at 15 degrees and one is at 45. The difference is 30 degrees, and the 45 degree angle is greater than the 15 degree one. But this breaks down when you have, say, 345 degrees and 30 degrees. Although they compare properly, the difference between them is 315 degrees instead of the correct 45 degrees. How can I solve this? I could write algorithmic code:

        delta_theta = abs(angle1 - angle2);
        if (delta_theta > 180)
            delta_theta = 360 - delta_theta;

    But I'd prefer a solution that avoids compares/branches and relies entirely on arithmetic.
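
    A hedged arithmetic-only version in C#: wrap the absolute difference, then fold anything past 180 back down. Strictly speaking Math.Abs may still compile to a compare on some platforms, but there is no explicit branch:

        static float AngleDistance(float a, float b)
        {
            float d = Math.Abs(a - b) % 360f;   // wrapped difference, 0..360
            return 180f - Math.Abs(d - 180f);   // reflect 180..360 back onto 180..0
        }

    Checking against the question's examples: (15, 45) gives 30, and (345, 30) gives 45.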

    Read the article

  • Drawing an outline around an arbitrary group of hexagons

    - by Perky
    Is there an algorithm for drawing an outline around an arbitrary group of hexagons? The polygon outline drawn may be concave. See the images below; the green line is what I am trying to achieve (a sketch of one approach follows). The hexagons are stored as vertices and drawn as polygons. Edit: I've uploaded images that should explain more. I want to favour convex hulls because it conveys an area of control more quickly. Each hexagon is stored in a multidimensional array, so they all have x and y coordinates; I can easily find adjacent hexagons and the opposite vertex, i.e. adjacentHexagon = getAdjacentHexagon(someHexagon, NORTHWEST). If there isn't a hexagon immediately adjacent, it will continue to search in that direction until it finds one or hits the map edge.
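
    A hedged sketch of one common approach in C# (using System.Collections.Generic and System.Linq): any edge shared by two hexagons in the group is interior, so count every edge and keep the ones that occur exactly once; those form the outline, concave sections included. Edges() is a hypothetical helper returning the six vertex pairs, and vertices should be rounded or snapped so shared edges compare equal:

        var edgeCount = new Dictionary<(Vector2, Vector2), int>();
        foreach (var hex in selectedHexagons)
        {
            foreach (var (a, b) in hex.Edges())   // hypothetical: 6 (start, end) pairs
            {
                // Order the endpoints so the same edge hashes identically from both sides.
                var key = (a.X, a.Y).CompareTo((b.X, b.Y)) < 0 ? (a, b) : (b, a);
                edgeCount[key] = edgeCount.TryGetValue(key, out var n) ? n + 1 : 1;
            }
        }

        // Edges seen once lie on the boundary; chain them end-to-end to draw the outline.
        var boundary = edgeCount.Where(kv => kv.Value == 1).Select(kv => kv.Key).ToList();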

    Read the article

  • How to use the float value from a noise function in voxel terrain?

    - by therealjohn
    I'm using Unity, although this question is not really specific to that engine. I'm also using an asset from the store called Coherent Noise, which has some neat noise functionality built in. I am using those functions to produce noise values between 0 and 1 (floats). I have an array of blocks (for Minecraft-like voxel terrain), and I am confused about how to use this float value for terrain. Do I do something like value <= 0 == solid block, etc.? I am confused about how to use the floating-point values that the noise functions produce as height values for an array of, say, a height of 16. Thanks for any guidance.
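
    A hedged sketch in C#: one common mapping treats the 2D noise value as a column-height fraction rather than testing the raw float directly. GetNoise, blocks, width, depth, and BlockType are stand-ins for the question's setup, not Coherent Noise API:

        int maxHeight = 16;
        for (int x = 0; x < width; x++)
        {
            for (int z = 0; z < depth; z++)
            {
                float n = GetNoise(x, z);                 // assumed to return 0..1
                int columnHeight = (int)(n * maxHeight);  // scale into 0..16
                for (int y = 0; y < maxHeight; y++)
                    blocks[x, y, z] = y < columnHeight ? BlockType.Solid : BlockType.Air;
            }
        }

    A threshold test like value <= 0.5 is the 3D-noise variant (useful for caves and overhangs); for heightmap-style terrain, the scaling above is the usual starting point.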

    Read the article

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin-Android. The sample application produces a rotating colored cube, and I would simply like to edit it so that the rotating cube is also translated along the Z axis and disappears into the distance. I modified the code by:
    - adding a cumulative variable Depth to store my Z distance,
    - adding GL.Enable(All.DepthBufferBit) (unsure if I put it in the right place),
    - adding GL.Translate(0.0f, 0.0f, Depth) before the rotate calls.

    Result: the cube rotates a couple of times, then disappears; it seems to be getting clipped out of the frustum. So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff, but I am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but to also move along the Z axis. I have commented all my modifications in the code:

        // This gets called when the drawing surface is ready
        protected override void OnLoad (EventArgs e)
        {
            // this call is optional, and meant to raise delegates
            // in case any are registered
            base.OnLoad (e);

            // UpdateFrame and RenderFrame are called by the render loop.
            // This takes effect when we use 'Run ()', like below
            UpdateFrame += delegate (object sender, FrameEventArgs args)
            {
                // Rotate at a constant speed
                for (int i = 0; i < 3; i++)
                    rot[i] += (float)(rateOfRotationPS[i] * args.Time);
            };

            RenderFrame += delegate
            {
                RenderCube ();
            };

            GL.Enable(All.DepthBufferBit); // Added by Noob
            GL.Enable(All.CullFace);
            GL.ShadeModel(All.Smooth);
            GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

            // Run the render loop
            Run (30);
        }

        void RenderCube ()
        {
            GL.Viewport(0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            if (viewportWidth > viewportHeight)
                GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
            else
                GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();

            Depth -= 0.02f;                   // Added by Noob
            GL.Translate(0.0f, 0.0f, Depth);  // Added by Noob
            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
            GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            GL.Clear (ClearBufferMask.ColorBufferBit);

            GL.VertexPointer(3, All.Float, 0, cube);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.Float, 0, cubeColors);
            GL.EnableClientState (All.ColorArray);
            GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

            SwapBuffers ();
        }
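
    Two hedged observations on the modifications, sketched with the sample's own API names (worth verifying against the OpenTK ES 1.1 bindings):

        // 1. All.DepthBufferBit is a clear-mask value, not a capability; depth
        //    testing is enabled with DepthTest, and the depth buffer must then
        //    be cleared every frame alongside the color buffer:
        GL.Enable(All.DepthTest);
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

        // 2. GL.Ortho(..., -1.0f, 1.0f) keeps only z in [-1, 1] and, being a
        //    parallel projection, would not shrink the cube with distance anyway,
        //    so translating along -z clips it almost immediately. A perspective
        //    frustum with a deeper range lets it recede instead, e.g.:
        GL.Frustum(-1.5f, 1.5f, -1.0f, 1.0f, 1.0f, 100.0f);   // near = 1, far = 100
        // With near = 1, start Depth at around -2 so the cube begins inside
        // the view volume.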

    Read the article

  • Level and Player objects - which should contain which?

    - by Thane Brimhall
    I've been working on several simple games, and I've always come to a decision point where I have to choose whether to have the Level object as an attribute of the Player class or the Player as an attribute of the Level class. I can see arguments for both. The Level should contain the player because it also contains every other entity; in fact it just makes sense this way: "John is in the room." It makes it a bit more difficult to move the player to a new level, however, because then each level has to pass its player object to the upcoming level. On the other hand, it makes programming sense to me to leave the player as the top-level object that is persistent between levels, with the environment changing because the player decides to change his level and location. It becomes very easy to change levels, because all I have to do is replace the level variable on the player. What's the most common practice here? Or better yet, is there a "right" way to architect this relationship?

    Read the article

  • Correct order of tasks in each frame for a Physics simulation

    - by Johny
    I'm playing around a bit with 2D physics. I have created some physics blocks which should collide with each other. This works fine mostly, but sometimes one of the blocks does not react to a collision, and I think that's because of my order of tasks done in each frame. At the moment it looks something like this:

        function GameFrame() {
            foreach physicObject do
                AddVelocityToPosition();
                DoCollisionStuff();          // Only for this object, not to forget!
                AddGravitationToVelocity();
            end
            RedrawScene();
        }

    Is this the correct order of tasks in each frame?
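
    For comparison, a hedged sketch of the more conventional ordering in C# (semi-implicit Euler; gravity, dt, and ResolveCollisions are stand-ins): every object is integrated before any collision is resolved, so no block reacts against a neighbour that has not moved yet. Interleaving per-object collision checks with movement, as above, is one plausible source of the missed collisions:

        void GameFrame(float dt)
        {
            // 1. Forces first: gravity updates every velocity.
            foreach (var obj in physicObjects)
                obj.Velocity += gravity * dt;

            // 2. Then integration: velocity updates every position.
            foreach (var obj in physicObjects)
                obj.Position += obj.Velocity * dt;

            // 3. Only now resolve collisions, pairwise over all objects, so each
            //    test sees this frame's final candidate positions.
            ResolveCollisions(physicObjects);

            RedrawScene();
        }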

    Read the article

  • Drawing a random x,y grid of objects within a perspective

    - by T Reddy
    I'm wrapping my head around OpenGL ES 2.0, and I think I'm trying to do something very simple, but the math may be eluding me. I created a simple, flat-ish cylinder in Blender that is 2 units in diameter. I want to create an arbitrary grid of these, edge to edge (think of a checkerboard). I'm using a 3D perspective with GLKit:

        CGSize size = [[self view] bounds].size;
        _projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f), size.width/size.height, 0.1f, 100.0f);

    I managed to manually get all of these cylinders drawn on the screen just fine. However, I would like to understand how I can programmatically "fit" all of the cylinders on the screen at the same time, given the camera location, screen size, cylinder diameter, and the number of rows/columns. The net effect should be that for small grids (i.e., 5x5) the objects are closer to the camera, while for large grids (i.e., 30x30) the objects are farther away. In either case, all of the cylinders are visible.
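
    A hedged back-of-envelope for that camera distance, written as C#-style pseudocode for the math only (screenWidth, screenHeight, rows, and columns are assumptions): with a vertical field of view fovY, a span of h world units fits vertically once the camera is at least (h / 2) / tan(fovY / 2) away, and the horizontal FOV follows from the aspect ratio:

        float fovY   = MathF.PI / 4f;   // 45 degrees, matching the projection above
        float aspect = screenWidth / screenHeight;
        float fovX   = 2f * MathF.Atan(MathF.Tan(fovY / 2f) * aspect);

        float gridWidth  = columns * 2f;   // cylinders are 2 units in diameter
        float gridHeight = rows * 2f;

        // Distance needed to fit each axis; the camera sits at the larger of the
        // two, aimed at the grid's center.
        float distV = (gridHeight / 2f) / MathF.Tan(fovY / 2f);
        float distH = (gridWidth / 2f) / MathF.Tan(fovX / 2f);
        float cameraDistance = MathF.Max(distV, distH);

    A 5x5 grid then yields a small distance (camera close) and a 30x30 grid a large one, which is exactly the described effect; the far plane must of course exceed cameraDistance.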

    Read the article

  • How to get warnings when compiling fx files

    - by jdv-Jan de Vaan
    When I compile DirectX shaders (.fx files), I don't see any compiler warnings unless there is an error in the effect. This happens both when using the offline FXC compiler and when calling SlimDX's CompileEffect (which is what we normally do). I could force warnings to be treated as errors (/WX), but with that enabled you just get an error that compilation failed, without the warning that caused the problem. So how can I output warnings for shaders that compile properly?

    Read the article

  • How exactly does XNA's SpriteBatch work?

    - by David Gouveia
    To be more precise: if I needed to recreate this functionality from scratch in another API (e.g. in OpenGL), what would it need to be capable of doing? I do have a general idea of some of the steps, such as how it prepares an orthographic projection matrix and creates a quad for each draw call. I'm not too familiar, however, with the batching process itself. Are all quads stored in the same vertex buffer? Does it need an index buffer? How are different textures handled? If possible, I'd be grateful if you could guide me through the process from when SpriteBatch.Begin() is called until SpriteBatch.End(), at least when using the default Deferred mode.
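
    A hedged sketch of the commonly described deferred behaviour, not XNA's actual source (SpriteInfo, BuildQuad, and FlushRun are invented names): Begin() resets a CPU-side list; each Draw() only appends a quad's four vertices and remembers its texture; End() walks the list in call order and issues one draw call per consecutive run sharing a texture, copying that run into a dynamic vertex buffer paired with a pre-built index buffer of repeating quad patterns (0,1,2, 2,1,3, offset per quad):

        List<SpriteInfo> sprites = new List<SpriteInfo>();

        void Begin() { sprites.Clear(); }

        void Draw(Texture2D texture, Rectangle dest, Color color)
        {
            // CPU work only; nothing reaches the GPU yet.
            sprites.Add(BuildQuad(texture, dest, color));
        }

        void End()
        {
            int start = 0;
            for (int i = 1; i <= sprites.Count; i++)
            {
                // Flush a run whenever the texture changes or the list ends;
                // deferred mode preserves draw order, so only *consecutive*
                // same-texture sprites share a draw call.
                if (i == sprites.Count || sprites[i].Texture != sprites[start].Texture)
                {
                    FlushRun(start, i - start);
                    start = i;
                }
            }
        }

    This is also why grouping your own draw calls by texture matters in deferred mode: interleaving two textures forces a flush per sprite.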

    Read the article

  • Can I use PBOs for textures in iOS?

    - by Radu
    As far as I can see, there is no GL_PIXEL_UNPACK_BUFFER. Also, the OpenGL ES 2.0 specification (and as far as I know, no iOS device currently supports anything newer than OpenGL ES 2.0) states that glMapBufferOES() can only use GL_ARRAY_BUFFER as a target, yet glTexImage2D() and glTexSubImage2D() only seem to use PBOs if GL_PIXEL_UNPACK_BUFFER is bound. The OpenGL documentation for glBindBuffer() also states that: "GL_PIXEL_PACK_BUFFER and GL_PIXEL_UNPACK_BUFFER are available only if the GL version is 2.1 or greater." So, can I use PBOs for textures? Am I missing something obvious?

    Read the article

  • Matrix.CreateBillboard centre rotation problem

    - by Chris88
    I'm having an issue with Matrix.CreateBillboard and a textured quad: the center axis seems to be positioned incorrectly relative to the quad, which rotates around a center point. Using:

        BasicEffect quadEffect;

    Drawing the quad shape:

        Left = Vector3.Cross(Normal, Up);
        Vector3 uppercenter = (Up * height / 2) + origin;
        LowerLeft = uppercenter + (Left * width / 2);
        LowerRight = uppercenter - (Left * width / 2);
        UpperLeft = LowerLeft - (Up * height);
        UpperRight = LowerRight - (Up * height);

    where height and width are float values passed in (it draws a square). Draw method:

        quadEffect.View = camera.view;
        quadEffect.Projection = camera.projection;
        quadEffect.World = Matrix.CreateBillboard(Origin, camera.cameraPosition, Vector3.Up, camera.cameraDirection);

        GraphicsDevice.BlendState = BlendState.Additive;
        foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
                PrimitiveType.TriangleList, Vertices, 0, 4, Indexes, 0, 2);
        }
        GraphicsDevice.BlendState = BlendState.Opaque;

    In the screenshots below I draw the image at Vector3(32f, 0f, 32f). They show the position of the quad in relation to the red cross, which marks where it should be drawn: http://i.imgur.com/YwRYj.jpg http://i.imgur.com/ZtoHL.jpg It rotates around the red cross position.
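
    A hedged observation: the vertex code bakes origin into the corner positions, and Matrix.CreateBillboard(Origin, ...) translates to that point again, so the quad sits offset from its pivot by the full origin and appears to orbit the cross. One common fix is to build the quad around the local origin and let the world matrix do all the placing:

        // Quad centered on (0, 0, 0) in model space; no origin term here.
        Left = Vector3.Cross(Normal, Up);
        Vector3 uppercenter = Up * (height / 2);
        LowerLeft = uppercenter + (Left * width / 2);
        LowerRight = uppercenter - (Left * width / 2);
        UpperLeft = LowerLeft - (Up * height);
        UpperRight = LowerRight - (Up * height);
        // Matrix.CreateBillboard(Origin, ...) then applies the translation exactly once.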

    Read the article

  • Move projectile in the direction the gun is facing

    - by Manderin87
    I am attempting to have a projectile follow the direction a gun is facing. With the following code, I am unable to make the projectile go in the right direction:

        float speed = .5f;
        float dX = (float) -Math.cos(Math.toRadians(degree)) * speed;
        float dY = (float) Math.sin(Math.toRadians(degree)) * speed;

    Can anyone tell me what I am doing wrong? degree is the direction the gun is facing, in degrees.
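
    A hedged note on the conventions involved: with 0 degrees pointing along +X and angles growing counter-clockwise, the standard mapping is dX = cos(theta) * speed and dY = sin(theta) * speed, with no minus sign on the cosine; negating the cosine mirrors the direction horizontally. In screen coordinates, where y grows downward, it is dY that gets negated instead. And if the gun sprite's zero angle faces up rather than right, 90 degrees must be added before converting to radians.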

    Read the article

  • Resources on expected behaviour when manipulating 3D objects with the mouse

    - by sebf
    Hello. In my animation editor, I have a 3D gizmo that sits on the origin of a bone; the user drags the mesh around to rotate the bone. I've found that translating the 2D movements of the mouse into sensible 3D transforms is not nearly as simple as I'd hoped. For example, what is intuitively 'up' or 'down'? How should the magnitude of rotations change with respect to dX/dY? How do I implement this? What happens when the gizmo changes position or orientation with respect to the camera? Etc. So far, by trial and error, I've written something (very) simple that works 70% of the time. I could probably continue to hack at it until I made something that works 99% of the time, but there must be someone who has needed the same thing and spent the time coming up with a much more elegant solution. Does anyone know of one?

    Read the article

  • Blender to 3ds max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d, but they are old and don't apply anymore. In Blender (must be 2.49a for the Python script to work!): I have a scene with 7 meshes, 1 armature, and 10 bones. I tried going down to one mesh to simplify it, but that doesn't change anything. I found a small blend file that was used for cal3d, and it exported just fine, so I tried to copy its setup, with no success. EDIT (8/13/2012): Here is what I have found in the last week. I made the mesh in the newest Blender (2.62?) and exported it, to import it in the old one (2.49a). I did the animation in the old one, because when importing new blend files the old Blender just said it would lose keyframe data, and all was good. Then you hit the last problem: it doesn't export meshes. BUT I found that meshes made in the old one export regardless; I can't find any that won't export. So if I used the old Blender to remake my model, I could get it to export :) At this point I found a modified release of cal3d which fixes the morph objects and adds what cal3d left off (I found it because the core model variable would not initiate; I made a really small test subject in the old Blender instead of remaking my big one, which took 4 hours). Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own. It's mostly the same. This lib also came with a 3ds Max exporter. My question now is: how do I transfer armature and mesh information from Blender to 3ds Max in order to export into cal3d format? Every time I try, the models are see-through and small, and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only), and COLLADA. In all of them the mesh is invisible and there are no bones. It says the default texture is on, so I should be able to see it. All the vertices are present; I found a vertex highlighter, so I can see those. If any of this is confusing, let me know so I can clear it up. It's late. <= sleep

    Read the article

  • How can I generate a texture that looks like left-over tea leaves?

    - by Jedidja
    We are working on a project for iPhone and Windows Phone 7 where we'd like to be able to generate tea leaves at the bottom of a cup. It doesn't have to look photo-realistic; in fact, cartoon-y is OK. What sort of techniques should we research to accomplish this? Are there any libraries (preferably in C, but we can translate) that would be helpful? Here are some samples pulled from a Google Image search.

    Read the article

  • Circle to Circle collision, checking each circle against all others

    - by user14861
    I'm currently coding a little circle-to-circle collision demo, but I've got a little stuck. I think I currently have the code to detect a collision, but I'm not sure how to loop through my list of circles and check them off against one another. Collision check code:

        public static Vector2 GetIntersectionDepth(Circle a, Circle b)
        {
            float xValue = a.Center.X - b.Center.X;
            float yValue = a.Center.Y - b.Center.Y;
            Vector2 depth = Vector2.Zero;
            float distance = Vector2.Distance(a.Center, b.Center);
            if (a.Radius + b.Radius > distance)
            {
                float result = (a.Radius + b.Radius) - distance;
                depth.X = (float)Math.Cos(result);
                depth.Y = (float)Math.Sin(result);
            }
            return depth;
        }

    Loop-through code:

        Vector2 depth = Vector2.Zero;
        for (int i = 0; i < bounds.Count; i++)
            for (int j = i + 1; j < bounds.Count; j++)
            {
                depth = CircleToCircleIntersection.GetIntersectionDepth(bounds[i], bounds[j]);
            }

    Clearly I'm a complete newbie; wondering if anyone can give any suggestions or point out my errors. Thanks in advance. :)
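
    A hedged aside on the check itself: the i/j loop above already visits each unordered pair exactly once, which answers the looping half. In GetIntersectionDepth, though, taking Cos/Sin of the overlap treats a length as an angle; penetration depth is more conventionally the overlap scaled along the direction between the centers, something like:

        public static Vector2 GetIntersectionDepth(Circle a, Circle b)
        {
            Vector2 delta = b.Center - a.Center;
            float distance = delta.Length();
            float overlap = (a.Radius + b.Radius) - distance;
            if (overlap <= 0f || distance == 0f)
                return Vector2.Zero;              // not touching, or concentric
            return delta * (overlap / distance);  // unit direction a->b, scaled by overlap
        }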

    Read the article

  • Lock mouse in center of screen, and still use it to move the camera, in Unity

    - by Flotolk
    I am making a program from a first-person point of view. I would like the camera to be moved using the mouse, preferably using simple code, like this from XNA:

        var center = this.Window.ClientBounds;
        MouseState newState = Mouse.GetState();
        if (Keyboard.GetState().IsKeyUp(Keys.Escape))
        {
            Mouse.SetPosition((int)center.X, (int)center.Y);
            camera.Rotation -= (newState.X - center.X) * 0.005f;
            camera.UpDown += (newState.Y - center.Y) * 0.005f;
        }

    Is there any code that lets me do this in Unity? Since Unity does not support XNA, I need a new library to use and a new way to collect this input. This is also a little tougher, since I want one object to go up and down based on whether you move the mouse up and down, and another object to be the one turning left and right. I am also very concerned about clamping the mouse to the center of the screen, since you will be selecting items, and it is easiest to have simple cross-hairs in the center of the screen for this purpose. Here is the code I am using to move right now:

        using UnityEngine;
        using System.Collections;

        [AddComponentMenu("Camera-Control/Mouse Look")]
        public class MouseLook : MonoBehaviour
        {
            public enum RotationAxes { MouseXAndY = 0, MouseX = 1, MouseY = 2 }
            public RotationAxes axes = RotationAxes.MouseXAndY;
            public float sensitivityX = 15F;
            public float sensitivityY = 15F;

            public float minimumX = -360F;
            public float maximumX = 360F;

            public float minimumY = -60F;
            public float maximumY = 60F;

            float rotationY = 0F;

            void Update ()
            {
                if (axes == RotationAxes.MouseXAndY)
                {
                    float rotationX = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX;

                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp (rotationY, minimumY, maximumY);

                    transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
                }
                else if (axes == RotationAxes.MouseX)
                {
                    transform.Rotate(0, Input.GetAxis("Mouse X") * sensitivityX, 0);
                }
                else
                {
                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp (rotationY, minimumY, maximumY);

                    transform.localEulerAngles = new Vector3(-rotationY, transform.localEulerAngles.y, 0);
                }

                while (Input.GetKeyDown(KeyCode.Space) == true)
                {
                    Screen.lockCursor = true;
                }
            }

            void Start ()
            {
                // Make the rigid body not change rotation
                if (GetComponent<Rigidbody>())
                    GetComponent<Rigidbody>().freezeRotation = true;
            }
        }

    This code does everything except lock the mouse to the center of the screen. Screen.lockCursor = true; does not work, though, since then the camera no longer moves, and the cursor does not let you click anything else either.
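
    A hedged observation on the locking code: Input.GetKeyDown returns the same value for the whole frame, so the while loop above can never exit once Space is pressed, which freezes Update and looks exactly like "the camera no longer moves". An if is almost certainly what was meant; with the cursor locked, the "Mouse X"/"Mouse Y" axes still report movement deltas, so the look code keeps working while a cross-hair can be drawn at screen center:

        // Replace the while with an if; GetKeyDown cannot change mid-frame.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            Screen.lockCursor = true;   // hides and centers the cursor; mouse axes still update
        }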

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body) the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here's some snippets of what I think is the most relevant code: private RevoluteJoint joinBodyParts( Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) { RevoluteJointDef jointDef = new RevoluteJointDef(); jointDef.initialize(a, b, a.getWorldPoint(anchor)); jointDef.enableLimit = true; jointDef.lowerAngle = lowerAngle; jointDef.upperAngle = upperAngle; return (RevoluteJoint)world.createJoint(jointDef); } private Body createRectangleBodyPart( float x, float y, float width, float height) { PolygonShape shape = new PolygonShape(); shape.setAsBox(width, height); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.DynamicBody; bodyDef.position.y = y; bodyDef.position.x = x; Body body = world.createBody(bodyDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = shape; fixtureDef.density = 10; fixtureDef.filter.groupIndex = -1; fixtureDef.filter.categoryBits = FILTER_BOY; fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL; body.createFixture(fixtureDef); shape.dispose(); return body; } I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a cricle shape). Those methods are used like so: torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f); Body head = createRoundBodyPart(x, y + 7.4f, 1); Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle); leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle); rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);

    Read the article

  • SWF file not playing after being published

    - by rsquare
    I'm trying to run the "connector" example that comes bundled with the SmartFoxServer 2X downloads. There, it connects to the server and loads the correct configuration file. When I run it in Adobe Flash Professional 5, it runs correctly and connects to the server, but after being published as a SWF movie it doesn't work: it loads the configuration file but can't connect, and gives a connection failure, ERROR 2048. This is the example I'm talking about.

    Read the article

  • Playing part of a sfx audio file in HTML5 using WebAudio

    - by Matthew James Davis
    I have compiled all of my sound effects into one sequenced .ogg file, and I have the start and stop times for each sound effect. How do I play the individual effects? That is, how do I play part of an audio file? More specifically, I've created a dictionary:

        {
            'sword_hit': {
                src: 'sfx.ogg',
                start: 265,  // ms
                length: 212  // ms
            }
        }

    that my play_sound() function can use to look up 'sword_hit' and play the correct audio file at the correct start time for the correct duration. I simply need to know how to tell the Web Audio API to start playing at start ms and only play for length ms.
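
    A hedged pointer: in the Web Audio API, AudioBufferSourceNode.start() takes optional offset and duration arguments, both in seconds. So once sfx.ogg has been decoded into an AudioBuffer, the lookup above reduces to creating a fresh source node per playback and calling source.start(0, entry.start / 1000, entry.length / 1000); source nodes are one-shot, so a new one is needed for every play.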

    Read the article

  • Having trouble with LibNoise.XNA and generating tileable maps

    - by Jon
    Following up on my previous post, I found a wonderful port of LibNoise for XNA. I've been working with it for about 8 hours straight and I'm tearing my hair out: I just cannot get maps to tile, and I can't figure out how to do this. Here's my attempt:

        Perlin perlin = new Perlin(1.2, 1.95, 0.56, 12, 2353, QualityMode.Medium);
        RiggedMultifractal rigged = new RiggedMultifractal();
        Add add = new Add(perlin, rigged);

        // Initialize the noise map
        int mapSize = 64;
        this.m_noiseMap = new Noise2D(mapSize, perlin);
        //this.m_noiseMap.GeneratePlanar(0, 1, -1, 1);

        // Generate the textures
        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[0] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(mapSize, mapSize * 2, mapSize, mapSize * 2);
        this.m_textures[1] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[2] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

    The first and third generate fine; they create a perlin noise map. However the middle one, which I wanted to be a continuation of the first (as per my original post), is just a bunch of static. How exactly do I get this to generate maps that connect to each other when I pass in mapSize * tile, using the same seed, settings, etc.?
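
    A hedged guess at the culprit, reusing the snippet's own calls (the exact GeneratePlanar semantics are an assumption): the arguments look like a sampling window in noise-space coordinates, so the first call samples a window 2 units wide (-1..1) while the second jumps to one 64 units wide (64..128), squeezing dozens of noise periods into the same 64 pixels, which renders as static. Adjacent tiles want same-size windows offset by exactly one window width, with the same seed and settings:

        // Tile 0: sample x in [-1, 1]
        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[0] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        // Tile 1: same window size (width 2), shifted right by one full width, so
        // its left edge continues tile 0's right edge seamlessly.
        this.m_noiseMap.GeneratePlanar(1, 3, -1, 1);
        this.m_textures[1] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

    In general, tile (tx, ty) samples the window (-1 + 2 * tx, 1 + 2 * tx, -1 + 2 * ty, 1 + 2 * ty).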

    Read the article

  • Math > Logic for a Logarithmic Score Meter

    - by oodavid
    I'm trying to implement a score meter whereby I specify a maximum value (say 15,000) and I can render values on it in a logarithmic manner, i.e.:

        +------+---+--+-++    +------+---+--+-++
        |==              |    |======          |
        +------+---+--+-++    +------+---+--+-++
             200 pts               1,000 pts

        +------+---+--+-++    +------+---+--+-++
        |=============   |    |================|
        +------+---+--+-++    +------+---+--+-++
            5,000 pts              15,000 pts

    The upper bound needs to be variable, and I need to be able to convert a score to a percentage; using the above mockup as an example:

        score2pct(15000, 200)   = 0.2
        score2pct(15000, 1000)  = 0.4
        score2pct(15000, 5000)  = 0.8
        score2pct(15000, 15000) = 1

    Does anyone have any pointers for me?
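
    A hedged starting point in C#: normalize the logarithm of the score against the logarithm of the maximum, with a tunable exponent, since a plain log ratio will not hit the mockup numbers; k is an invented shaping knob, not part of the question:

        static double Score2Pct(double max, double score, double k = 1.0)
        {
            if (score <= 0) return 0.0;
            double pct = Math.Log(score + 1) / Math.Log(max + 1);  // 0..1, logarithmic
            return Math.Pow(pct, k);  // k > 1 bends the curve down, k < 1 up
        }

    For example, with k = 2.7 this gives roughly 0.20, 0.41, 0.72, and 1.0 for the four mockup scores, close to the desired 0.2 / 0.4 / 0.8 / 1.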

    Read the article
