Search Results

Search found 25550 results on 1022 pages for 'mere development'.


  • Better way to load level content in XNA?

    - by user2002495
    Currently I load all my assets in XNA in the main Game class. What I want to achieve later is to load only the specific assets each level needs (the game will consist of many levels). Here is how I load my main assets in the main class:

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            plane = new Player(Content.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
            plane.animation = "down";
            plane.pos = new Vector2(400, 500);
            plane.fps = 15;
            Global.currentPos = plane.pos;
            lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                new Vector2(0, 0), new Vector2(0, -600));
            CommonBullet.LoadContent(Content);
            CommonEnemyBullet.LoadContent(Content);
        }

        protected override void UnloadContent() { }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();
            plane.Update(gameTime);
            lvl1.Update(gameTime);
            foreach (CommonEnemy ce in cel)
            {
                if (ce.CollidesWith(plane))
                {
                    ce.hasSpawn = false;
                }
                foreach (CommonBullet b in plane.commonBulletList)
                {
                    if (b.CollidesWith(ce))
                    {
                        ce.hasSpawn = false;
                    }
                }
                ce.Update(gameTime);
            }
            LoadCommonEnemy();
            base.Update(gameTime);
        }

        private void LoadCommonEnemy()
        {
            int randY = rand.Next(-600, -10);
            int randX = rand.Next(0, 750);
            if (cel.Count < 3)
            {
                cel.Add(new CommonEnemy(Content.Load<Texture2D>(@"Enemy/Common/commonEnemySprite"), 7, 2, "left", randX, randY));
            }
            for (int i = 0; i < cel.Count; i++)
            {
                if (!cel[i].hasSpawn)
                {
                    cel.RemoveAt(i);
                    i--;
                }
            }
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            spriteBatch.Begin();
            lvl1.Draw(spriteBatch);
            plane.Draw(spriteBatch);
            foreach (CommonEnemy ce in cel)
            {
                ce.Draw(spriteBatch);
            }
            spriteBatch.End();
            base.Draw(gameTime);
        }

    I wish to load my player, enemies, and everything else in the Level1 class. However, when I move my player and enemy code into the Level1 class, the GameTime ends up null. Here is my Level1 class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Audio;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Media;
        using Microsoft.Xna.Framework.Input;
        using SpaceShooter_Beta.Animation.PlayerCollection;
        using SpaceShooter_Beta.Animation.EnemyCollection.Common;

        namespace SpaceShooter_Beta.Levels
        {
            public class Level1
            {
                public Texture2D bgTexture1, bgTexture2;
                public Vector2 bgPos1, bgPos2;
                public float speed = 5f;
                Player plane;

                public Level1(Texture2D texture1, Texture2D texture2, Vector2 pos1, Vector2 pos2)
                {
                    this.bgTexture1 = texture1;
                    this.bgTexture2 = texture2;
                    this.bgPos1 = pos1;
                    this.bgPos2 = pos2;
                }

                public void LoadContent(ContentManager cm)
                {
                    plane = new Player(cm.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
                    plane.animation = "down";
                    plane.pos = new Vector2(400, 500);
                    plane.fps = 15;
                    Global.currentPos = plane.pos;
                }

                public void Draw(SpriteBatch sb)
                {
                    sb.Draw(bgTexture1, bgPos1, Color.White);
                    sb.Draw(bgTexture2, bgPos2, Color.White);
                    plane.Draw(sb);
                }

                public void Update(GameTime gt)
                {
                    bgPos1.Y += speed;
                    bgPos2.Y += speed;
                    if (bgPos1.Y >= 600) { bgPos1.Y = -600; }
                    if (bgPos2.Y >= 600) { bgPos2.Y = -600; }
                    plane.Update(gt);
                }
            }
        }

    Of course, when I did this I deleted all the player code in the main Game class. All of that compiles fine (no errors), except that the game cannot start. The debugger says that plane.Update(gt); in the Level1 class has a null GameTime, and the same thing happens with the Draw method in the Level class. Please help; I appreciate your time. [EDIT] I know that using a switch in the main class could be a solution, but I would prefer something cleaner than that, since using a switch still means I need to load all the assets through the main class, and that code will grow A LOT later on with each level.
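    One way to restructure this, sketched below under assumptions (the LevelBase type and the call sites are mine, not the asker's actual code): give every level a common base class that owns its own LoadContent, let the Game hold only the current level, and make sure the level's LoadContent runs before its first Update or Draw. A null reference inside a level's Update is usually a sign its content was never loaded, since XNA only calls LoadContent on the Game itself, never on your own classes.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical base class; each level loads only the assets it needs.
        public abstract class LevelBase
        {
            public abstract void LoadContent(ContentManager content);
            public abstract void Update(GameTime gameTime);
            public abstract void Draw(SpriteBatch spriteBatch);
        }

        public class SampleLevel : LevelBase
        {
            private Texture2D bg;

            public override void LoadContent(ContentManager content)
            {
                // Level-specific assets live here, not in the main Game class.
                bg = content.Load<Texture2D>(@"Levels/bgLvl1");
            }

            public override void Update(GameTime gameTime) { /* player, enemies, ... */ }

            public override void Draw(SpriteBatch spriteBatch)
            {
                spriteBatch.Draw(bg, Vector2.Zero, Color.White);
            }
        }

        public class Game1 : Game
        {
            private GraphicsDeviceManager graphics;
            private SpriteBatch spriteBatch;
            private LevelBase currentLevel;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                currentLevel = new SampleLevel();
                currentLevel.LoadContent(Content);   // must happen before Update/Draw
            }

            protected override void Update(GameTime gameTime)
            {
                currentLevel.Update(gameTime);       // GameTime is passed straight through
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.Black);
                spriteBatch.Begin();
                currentLevel.Draw(spriteBatch);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    Switching levels then becomes assigning a new LevelBase and calling its LoadContent, with no per-level branching in the main class.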


  • What is the best way to implement collision detection using the Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. As said above, the track is generated, but not infinite. The track of one level fits in memory with no problem and contains a reasonably small number of triangles. For collisions, I would like to use the Bullet physics engine, and I want to know the most efficient way to handle collisions with the track. NOTE: the track will be stored as a static rigid body (mass = 0), and the player will be represented by a sphere shape for collisions. Here are some possibilities I have in mind (a sketch of the streaming idea follows the list):

    1. Create one rigid body, then put all triangles of the track (except non-collidable stuff) into it. Result: 1 body with many triangles (e.g. 30000 triangles).

    2. Split the track into several sections (e.g. 10 sections). Then, for each section, create a rigid body and put the corresponding triangles in it. Result: a small number of bodies with a relatively small number of triangles each (e.g. 1500 triangles per section).

    3. Split the track into many sub-sections (e.g. 1200), where one sub-section = one very small step of the curve generation. Again, create a body per sub-section and put its triangles in it. Result: many bodies with a very small number of triangles each (e.g. 20 triangles). Advantage: it would be possible to attach extra data to each sub-section that could be used when handling collisions.

    4. Same as 2, but only keep sections N and N+1 in the physics engine (where N = the section the player is currently in). When the player reaches section N+1, unload section N and load section N+2, and so on. Issues: harder to implement, and there are problems if the player suddenly "jumps" from one section to another (e.g. the player flies off section N and falls onto section N+4, which was underneath: no collision is handled and the player falls into the void).

    5. Same as 4, but with many sub-sections. Issues: since sub-sections are very small, bodies would constantly be added to and removed from the physics engine at runtime, and the chance of the player accidentally skipping sections and falling into the void is higher than in 4.
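    For reference, here is a minimal sketch of the streaming idea from options 4 and 5, written engine-agnostically in C# (the section indexing and the add/remove callbacks are assumptions; a real implementation would wrap Bullet's addRigidBody/removeRigidBody). Keeping a margin of sections on both sides of the player guards against the "jump past the loaded window" problem: make the margin wider than the longest possible jump.

        using System;
        using System.Collections.Generic;

        public class TrackStreamer
        {
            private readonly int sectionCount;
            private readonly int margin;                    // sections kept alive on each side
            private readonly HashSet<int> active = new HashSet<int>();
            private readonly Action<int> addToPhysics;      // e.g. wraps btDiscreteDynamicsWorld::addRigidBody
            private readonly Action<int> removeFromPhysics;

            public TrackStreamer(int sectionCount, int margin,
                                 Action<int> addToPhysics, Action<int> removeFromPhysics)
            {
                this.sectionCount = sectionCount;
                this.margin = margin;
                this.addToPhysics = addToPhysics;
                this.removeFromPhysics = removeFromPhysics;
            }

            // Call whenever the player's current section changes.
            public void Update(int currentSection)
            {
                int lo = Math.Max(0, currentSection - margin);
                int hi = Math.Min(sectionCount - 1, currentSection + margin);

                // Drop sections that fell out of the window.
                active.RemoveWhere(i =>
                {
                    if (i < lo || i > hi) { removeFromPhysics(i); return true; }
                    return false;
                });

                // Register sections that entered the window.
                for (int i = lo; i <= hi; i++)
                    if (active.Add(i)) addToPhysics(i);
            }
        }

    That said, for a track of around 30000 static triangles, option 1 with a single triangle-mesh shape (Bullet's btBvhTriangleMeshShape builds a bounding-volume hierarchy over it) is usually fast enough that streaming is an optimization to measure first, not a requirement.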


  • Making a Camera look at a target Vector

    - by Peteyslatts
    I have a camera that works as long as it's stationary. Now I'm trying to create a child class of that camera class that will look at its target. The new addition to the class is a method called SetTarget(), which takes in a Vector3 target. The camera won't move, but I need it to rotate to look at the target. If I just set the target and then call CreateLookAt() (which takes in position, target, and up), then when the object gets far enough away and underneath the camera, it suddenly flips right side up. So I need to transform the up vector, which currently always stays at Vector3.Up. I feel like this has something to do with taking the angle between the old direction vector and the new one (which I know can be expressed as target - position). I feel like this is all really vague, so here's the code for my base camera class:

        public class BasicCamera : Microsoft.Xna.Framework.GameComponent
        {
            public Matrix view { get; protected set; }
            public Matrix projection { get; protected set; }

            public Vector3 position { get; protected set; }
            public Vector3 direction { get; protected set; }
            public Vector3 up { get; protected set; }
            public Vector3 side
            {
                get { return Vector3.Cross(up, direction); }
                protected set { }
            }

            public BasicCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game)
            {
                this.position = position;
                this.direction = target - position;
                this.up = up;
                CreateLookAt();
                projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.PiOver4,
                    (float)Game.Window.ClientBounds.Width / (float)Game.Window.ClientBounds.Height,
                    1, 500);
            }

            public override void Update(GameTime gameTime)
            {
                // TODO: Add your update code here
                CreateLookAt();
                base.Update(gameTime);
            }
        }

    And this is the code for the class that extends the above class to look at its target:

        class TargetedCamera : BasicCamera
        {
            public Vector3 target { get; protected set; }

            public TargetedCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game, position, target, up)
            {
                this.target = target;
            }

            public void SetTarget(Vector3 target)
            {
                direction = target - position;
            }

            protected override void CreateLookAt()
            {
                view = Matrix.CreateLookAt(position, target, up);
            }
        }
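    One common fix, sketched below as a hedged suggestion for the body of SetTarget inside TargetedCamera rather than a confirmed drop-in patch: re-orthogonalize the camera basis whenever the target moves, deriving side from the current up and direction (the same convention as the side property above), then rebuilding up from direction and side. That way up tilts along with the camera instead of staying pinned to Vector3.Up, which is what causes the flip when the target passes underneath.

        public void SetTarget(Vector3 target)
        {
            this.target = target;   // keep the field CreateLookAt() reads in sync
            direction = Vector3.Normalize(target - position);

            // Rebuild an orthonormal basis: side from the *previous* up, then a
            // fresh up perpendicular to both. This only degenerates if direction
            // ever becomes exactly parallel to up, which the incremental update
            // avoids in practice.
            Vector3 newSide = Vector3.Normalize(Vector3.Cross(up, direction));
            up = Vector3.Normalize(Vector3.Cross(direction, newSide));
        }

    Note that the original SetTarget only updates direction, while CreateLookAt() reads the target property, so the view matrix never sees the new target; assigning this.target as above is worth checking regardless of the up-vector handling.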


  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs, and its extent. Simply transforming all occluded vertices to the depths defined by the 'hull' is fairly effective and has good performance, but it suffers from not preserving the features of the mesh and requires extensive culling to avoid false positives. I would like instead to generate, from the depth deviation map, a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need for complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set, however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but I do not know where to start. Can anyone suggest a suitable filter or algorithm for generating deformers? Or, to put it another way, for 'compressing' a depth map? (*Push, because it is fitting to a convex 'bulgy' humanoid, so transforms are likely to be 'spherical' from the POV of the surface.)
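    One possible starting point, offered as a hedged sketch under stated assumptions (the deviation map is a 2D float array, and a crude centroid-plus-extent fit stands in for a proper least-squares sphere fit): flood-fill contiguous texels whose deviation exceeds a threshold, then emit one sphere deformer per patch.

        using System;
        using System.Collections.Generic;

        public struct Deformer { public float X, Y, Radius, Strength; }

        public static class DeformerExtractor
        {
            // deviation[x, y] = depth difference between hull and mesh at that texel.
            public static List<Deformer> Extract(float[,] deviation, float threshold)
            {
                int w = deviation.GetLength(0), h = deviation.GetLength(1);
                var visited = new bool[w, h];
                var deformers = new List<Deformer>();

                for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    if (visited[x, y] || deviation[x, y] < threshold) continue;

                    // Flood-fill one contiguous patch of significant deviation.
                    var stack = new Stack<(int X, int Y)>();
                    stack.Push((x, y));
                    visited[x, y] = true;
                    float sumX = 0, sumY = 0, maxDev = 0;
                    int count = 0, minX = x, maxX = x, minY = y, maxY = y;

                    while (stack.Count > 0)
                    {
                        var (px, py) = stack.Pop();
                        sumX += px; sumY += py; count++;
                        maxDev = Math.Max(maxDev, deviation[px, py]);
                        minX = Math.Min(minX, px); maxX = Math.Max(maxX, px);
                        minY = Math.Min(minY, py); maxY = Math.Max(maxY, py);

                        foreach (var (nx, ny) in new[] { (px+1,py), (px-1,py), (px,py+1), (px,py-1) })
                        {
                            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                            if (visited[nx, ny] || deviation[nx, ny] < threshold) continue;
                            visited[nx, ny] = true;
                            stack.Push((nx, ny));
                        }
                    }

                    // Crude sphere fit: centroid plus half the patch extent.
                    deformers.Add(new Deformer
                    {
                        X = sumX / count,
                        Y = sumY / count,
                        Radius = Math.Max(maxX - minX, maxY - minY) * 0.5f,
                        Strength = maxDev,
                    });
                }
                return deformers;
            }
        }

    Running the extraction at several thresholds would give nested deformers for large patches, which is one cheap way to approximate the non-spherical ones.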


  • Simple 3D physics engine as part of a graduation project [on hold]

    - by Eugene Kolesnikov
    I am working on my graduation project, and one part of it is to simulate the motion of a rigid body in 3D space. I can either use an already written physics engine or write one myself, and since it's quite an interesting challenge for me, I would like to do it myself. I can use either C++ or Java for programming (I prefer C++), and I am using Mac OS X and Debian 7. Could you suggest any guides or tutorials on how to do this? I can't find any anywhere... More precisely, I need a very simple engine, without collision detection and the many other things I do not know about; I just need to calculate the forces and move my body according to the resultant force. If you think that this task is still very difficult, or that no such tutorial exists, please suggest a good and simple engine.
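    The core of what's described is small enough to sketch. The following is a hedged outline of semi-implicit Euler integration for a single unconstrained rigid body (written in C# with System.Numerics for brevity; it translates to C++ almost line for line, and the inertia handling is simplified by keeping a constant world-space inverse inertia tensor, which a full engine would recompute from the orientation):

        using System.Numerics;   // Vector3, Quaternion, Matrix4x4

        public class RigidBody
        {
            public float Mass = 1f;
            public Vector3 Position, Velocity;
            public Quaternion Orientation = Quaternion.Identity;
            public Vector3 AngularVelocity;                            // world space, rad/s
            public Matrix4x4 InverseInertiaTensor = Matrix4x4.Identity;

            private Vector3 forceAccum, torqueAccum;

            // Accumulate a force applied at a world-space point on the body.
            public void AddForceAtPoint(Vector3 force, Vector3 point)
            {
                forceAccum += force;
                torqueAccum += Vector3.Cross(point - Position, force);
            }

            public void Integrate(float dt)
            {
                // Semi-implicit Euler: update velocity first, then position.
                Velocity += (forceAccum / Mass) * dt;
                Position += Velocity * dt;

                AngularVelocity += Vector3.TransformNormal(torqueAccum, InverseInertiaTensor) * dt;

                // dq/dt = 0.5 * (w, 0) * q, integrated explicitly, then renormalized.
                var w = new Quaternion(AngularVelocity.X, AngularVelocity.Y, AngularVelocity.Z, 0f);
                Orientation += Quaternion.Multiply(w * Orientation, 0.5f * dt);
                Orientation = Quaternion.Normalize(Orientation);

                forceAccum = torqueAccum = Vector3.Zero;
            }
        }

    A typical frame is then: accumulate gravity and any applied forces, call Integrate with a fixed timestep (e.g. 1/60 s), and render the body at Position/Orientation.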


  • Change alpha to a Frame in libgdx

    - by Rudy_TM
    I have this:

        batch.draw(currentFrame, x, y, this.parent.originX, this.parent.originY,
                   this.parent.width, this.parent.height, this.scaleX, this.scaleY, this.rotation);

    I want to apply the alpha value that my method receives, but there is no overload of the SpriteBatch draw method that takes an alpha value. Is there some way to apply it? (I did it this way because these are animations and I wanted to control them; for my static ones I use sprite.draw(spriteBatch, alpha).) Thanks


  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector view and projection matrix:

        Matrix projection = Matrix.CreateOrthographicOffCenter(
            -halfWidth * Scale, halfWidth * Scale,
            -halfHeight * Scale, halfHeight * Scale,
            1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position, and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

        float4x4 InvViewProjection;
        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;
        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state
        {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r,0,0,1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Compute projection
            float3 projection = tex2D(projectorSampler,
                postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader, the position is recovered from the G-buffer (I use this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation (the green lines are the rendered projector frustum). Where is my mistake hidden? I am using XNA 4. Thanks for advice, and sorry for my English. EDIT: The shader above works, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appeared. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.


  • What is wrong with my game loop/mechanic?

    - by elias94xx
    I'm currently working on a 2D side-scrolling game prototype in HTML5 canvas. My implementation so far includes a sprite, vector, loop, and ticker class/object, which can be viewed here: http://elias-schuett.de/apps/_experiments/2d_ssg/js/ My game essentially works well on today's low-spec PCs and laptops, but it does not on an older Win XP machine I own or on my Android 2.3 device. I tend to get ~10 FPS on those devices, which results in a delta value that is too high; the delta then automatically gets clamped to 1.0, which results in a slow loop. Now I know for a fact that there is a way to implement a super smooth 60 or 30 FPS loop on both devices; the best example would be: http://playbiolab.com/ I don't need all the chunk and debugging technology impact.js offers. I could even write a super simple game where you just control a damn square, and it still wouldn't run at an equally smooth 30 or 60 FPS. Here is the Loop class/object I'm using. It requires a requestAnimationFrame unify function; both devices I've tested my game on support requestAnimationFrame, so there is no interval fallback.

        var Loop = function(callback) {
            this.fps = null;
            this.delta = 1;
            this.lastTime = +new Date;
            this.callback = callback;
            this.request = null;
        };

        Loop.prototype.start = function() {
            var _this = this;
            this.request = requestAnimationFrame(function(now) {
                _this.start();
                _this.delta = (now - _this.lastTime);
                _this.fps = 1000 / _this.delta;
                _this.delta = _this.delta / (1000 / 60) > 2 ? 1 : _this.delta / (1000 / 60);
                _this.lastTime = now;
                _this.callback();
            });
        };

        Loop.prototype.stop = function() {
            cancelAnimationFrame(this.request);
        };
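    One widely used structure for this problem is a fixed-timestep loop with an accumulator: updates always advance the simulation by a constant step, and a slow device simply runs more updates per rendered frame instead of slowing the game down. Below is a hedged sketch of the idea (shown in C# for consistency with the other examples in this digest; the class and callback names are assumptions, and the same structure drops straight into the Loop class above with requestAnimationFrame supplying now):

        using System;

        public class FixedStepLoop
        {
            private const double Step = 1000.0 / 60.0;   // fixed update step, ms
            private const double MaxFrame = 250.0;       // clamp huge hitches
            private double accumulator;
            private double lastTime;

            private readonly Action update;              // advance simulation by one Step
            private readonly Action render;              // draw the current state

            public FixedStepLoop(Action update, Action render, double startTime)
            {
                this.update = update;
                this.render = render;
                lastTime = startTime;
            }

            // Call once per animation frame with the current time in ms.
            public void Frame(double now)
            {
                accumulator += Math.Min(now - lastTime, MaxFrame);
                lastTime = now;

                // A slow device runs more updates per rendered frame, so game
                // speed stays constant even at ~10 rendered FPS.
                while (accumulator >= Step)
                {
                    update();
                    accumulator -= Step;
                }
                render();
            }
        }

    The key difference from the delta-scaling approach in the question is that update logic never sees a variable delta at all, so clamping artifacts disappear by construction.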


  • Incorrect results for frustum cull

    - by DeadMG
    Previously, I had a problem with my frustum culling producing too optimistic results; that is, it included many objects that were not in the view volume. Now I have refactored that code and produced a cull that should be accurate to the actual frustum, instead of an axis-aligned box approximation. The problem is that now it never reports anything as being in the view volume. As the mathematical support library I'm using does not provide plane support functions, I had to code much of this functionality myself, and I'm not really the mathematical type, so it's likely that I've made some silly error somewhere. The relevant code follows:

        class Plane
        {
        public:
            Plane()
            {
                r0 = Math::Vector(0,0,0);
                normal = Math::Vector(0,1,0);
            }
            Plane(Math::Vector p1, Math::Vector p2, Math::Vector p3)
            {
                r0 = p1;
                normal = Math::Cross((p2 - p1), (p3 - p1));
            }
            Math::Vector r0;
            Math::Vector normal;
        };

    This class represents one plane as a point and a normal vector.

        class Frustum
        {
        public:
            Frustum(const std::array<Math::Vector, 8>& points)
            {
                planes[0] = Plane(points[0], points[1], points[2]);
                planes[1] = Plane(points[4], points[5], points[6]);
                planes[2] = Plane(points[0], points[1], points[4]);
                planes[3] = Plane(points[2], points[3], points[6]);
                planes[4] = Plane(points[0], points[2], points[4]);
                planes[5] = Plane(points[1], points[3], points[5]);
            }
            Plane planes[6];
        };

    The points are passed in an order where (the inverse of) each bit of a point's index indicates whether it is at the left, top, and back of the frustum, respectively. As such, I just picked any three points that all share one bit in common to define each plane. My intersection test is as follows (based on this):

        bool Intersects(Math::AABB lhs, const Frustum& rhs) const
        {
            for(int i = 0; i < 6; i++)
            {
                Math::Vector pvertex = lhs.TopRightFurthest;
                Math::Vector nvertex = lhs.BottomLeftClosest;
                if (rhs.planes[i].normal.x <= -0.0f)
                {
                    std::swap(pvertex.x, nvertex.x);
                }
                if (rhs.planes[i].normal.y <= -0.0f)
                {
                    std::swap(pvertex.y, nvertex.y);
                }
                if (rhs.planes[i].normal.z <= -0.0f)
                {
                    std::swap(pvertex.z, nvertex.z);
                }
                if (Math::Dot(rhs.planes[i].r0, nvertex) < 0.0f)
                {
                    return false;
                }
            }
            return true;
        }

    Also of note is that because I'm using a left-handed co-ordinate system, I wrote my Cross function to return the negative of the formula given on Wikipedia. Any suggestions as to where I've made a mistake?
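    One thing that stands out, offered as a hedged observation based only on the code shown: the final test computes Dot(r0, nvertex), the plane's anchor point dotted with the vertex, rather than the signed distance Dot(normal, nvertex - r0); and the usual positive/negative-vertex cull rejects a box when its corner farthest along the normal (the p-vertex) is behind the plane. A corrected test might look like the following (C# syntax for consistency with the other sketches here; the types mirror the ones above and translate directly to the asker's Math:: library):

        using System.Numerics;

        struct CullPlane { public Vector3 R0, Normal; }

        static class FrustumCull
        {
            // Signed distance of a point from the plane; positive on the normal's side.
            static float SignedDistance(CullPlane p, Vector3 point)
                => Vector3.Dot(p.Normal, point - p.R0);

            // min/max are the AABB corners. Assumes all plane normals point
            // into the frustum.
            public static bool Intersects(Vector3 min, Vector3 max, CullPlane[] planes)
            {
                foreach (var plane in planes)
                {
                    // p-vertex: the corner farthest along the plane normal.
                    var pVertex = new Vector3(
                        plane.Normal.X >= 0 ? max.X : min.X,
                        plane.Normal.Y >= 0 ? max.Y : min.Y,
                        plane.Normal.Z >= 0 ? max.Z : min.Z);

                    // If even that corner is behind the plane, the box is fully outside.
                    if (SignedDistance(plane, pVertex) < 0f)
                        return false;
                }
                return true;
            }
        }

    This also assumes the six normals consistently face inward; with planes built from arbitrary corner triples as in the Frustum constructor above, each normal's winding is worth verifying against a point known to be inside the frustum, such as the centroid of the eight corners.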


  • Permanently Sync a wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes with my computer so that I can program games with them, but every time they only sync temporarily, or, if a tool says it can permanently sync them, it doesn't actually do it. It gets tiresome having to reconnect every time I want to save battery life. How can I pair my Wiimote with my computer so that if I turn the Wiimote off, I can just hit any button and it will automatically sync back up?


  • Logarithmic spacing of FFT subbands

    - by Mykel Stone
    I'm trying to do the examples in the GameDev.net beat detection article (http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html). I have no issue with performing an FFT and getting the frequency data, or with most of the article. I'm running into trouble, though, in section 2.B, "Enhancements and beat decision factors". In this section the author gives three equations, numbered R10-R12, to be used to determine how many bins go into each subband:

    R10 - Linear increase of the width of the subband with its index
    R11 - We can choose, for example, the width of the first subband
    R12 - The sum of all the widths must not exceed 1024

    He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated... I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me Decimated-in-Mind and I cannot seem to see it.
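    For what it's worth, here is the algebra the article leaves implicit, as a hedged sketch (assuming the linear law is w(i) = a*i + b for subbands i = 1..N, which is the usual reading of R10): R11 fixes the first width w(1) = a + b, and R12 requires the widths to sum to 1024, i.e. a*N*(N+1)/2 + b*N = 1024. Substituting b = w1 - a and solving gives a = 2*(1024 - N*w1) / (N*(N-1)).

        static class SubbandMath
        {
            // Derive the constants of the linear subband law w(i) = a*i + b
            // (i = 1..N) from the chosen first width w1 (R11) and the
            // requirement that all widths sum to 1024 FFT bins (R12).
            public static (double a, double b) SubbandLaw(int n, double w1)
            {
                double a = 2.0 * (1024.0 - n * w1) / (n * (double)(n - 1));
                double b = w1 - a;
                return (a, b);
            }
        }

    As a worked check: n = 32 subbands with the first one 2 bins wide gives a = 1.9355 and b = 0.0645 (to four places); the widths grow linearly and sum to exactly 1024. In practice each w(i) is rounded to a whole number of bins.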


  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible, fifth edition, and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape, using a couple of shaders that implemented per-vertex coloring and lighting and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader), the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea of what I was trying to do; I'm a noob at GLSL, so the code could probably be a lot better. My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10. My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Sapphire, dual booting into Ubuntu 10.10 and Windows XP; the problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?

        string vertexShaderSource = @"
        #version 330

        precision highp float;

        uniform mat4 projection_matrix;
        uniform mat4 modelview_matrix;
        //uniform mat4 normal_matrix;
        //uniform mat4 cmv_matrix;    //Camera modelview. Light sources are transformed by this matrix.
        //uniform vec3 ambient_color;
        //uniform vec3 diffuse_color;
        //uniform vec3 diffuse_direction;

        in vec4 in_position;
        in vec4 in_color;
        //in vec3 in_normal;
        //in vec3 in_tex_coords;

        out vec4 varyingColor;
        //out vec3 varyingTexCoords;

        void main(void)
        {
            //Get surface normal in eye coordinates
            //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);

            //Get vertex position in eye coordinates
            //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0);
            //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

            //Get vector to light source in eye coordinates
            //vec3 lightVecNormalized = normalize(diffuse_direction);
            //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);

            //Dot product gives us diffuse intensity
            //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz));

            //Multiply intensity by diffuse color, force alpha to 1.0
            //varyingColor.xyz = in_color * diff * diffuse_color.xyz;

            varyingColor = in_color;
            //varyingTexCoords = in_tex_coords;
            gl_Position = projection_matrix * modelview_matrix * in_position;
        }";

        string fragmentShaderSource = @"
        #version 330
        //#extension GL_EXT_gpu_shader4 : enable

        precision highp float;

        //uniform sampler2DArray colorMap;

        //in vec4 varyingColor;
        //in vec3 varyingTexCoords;

        out vec4 out_frag_color;

        void main(void)
        {
            out_frag_color = vec4(1,1,1,1);
            //out_frag_color = varyingColor;
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st);
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0));
            //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords);
        }";

    Note that in this code the color data is accepted but not actually used: the geometry comes out the same (wrong) whether the fragment shader uses varyingColor or not. Only if I comment out the line "varyingColor = in_color;" does the geometry output correctly. Originally the shaders took vec3 inputs; I only modified them to take vec4s while troubleshooting.
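    A hedged guess that fits these symptoms, offered as something to check rather than a confirmed diagnosis: if the code never binds or queries attribute locations, NVIDIA drivers tend to hand them out in declaration order, while AMD drivers may number them differently, so the position array can end up feeding in_color and vice versa; that would explain why the quad renders correctly only while in_color is unused. In OpenTK, pinning the locations explicitly looks roughly like this (the handle name and slot numbers are assumptions):

        using OpenTK.Graphics.OpenGL;

        static class ShaderSetup
        {
            // programHandle: a program with compiled shaders attached, not yet linked.
            public static void BindAttributes(int programHandle)
            {
                // Pin the attribute slots *before* linking, and use these same
                // indices in GL.EnableVertexAttribArray / GL.VertexAttribPointer.
                GL.BindAttribLocation(programHandle, 0, "in_position");
                GL.BindAttribLocation(programHandle, 1, "in_color");
                GL.LinkProgram(programHandle);

                // Or, after linking, ask which slots the driver actually assigned:
                int posLoc = GL.GetAttribLocation(programHandle, "in_position");
                int colorLoc = GL.GetAttribLocation(programHandle, "in_color");
            }
        }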


  • How can I use WebGL to create a tile-based multi-layer scrolling platform game?

    - by Nicholas Hill
    I've found WebGL (based on OpenGL ES) to be a fiendish and unforgiving framework for those learning to write HTML5-based games. Despite the many examples of how to get started, I'm really struggling to understand how I could simply load a bunch of images and render them to a canvas quickly using WebGL. My specific scenario involves trying to render a map using a bespoke but simple multi-layered tile engine, where each value in a three-dimensional array points to the image to use for that location in the rendered image. Think "Sonic the Hedgehog" via tilesets, tiles, maps, layers, sprites, etc. Can anyone enlighten me: 1) How can I load an image that I can use as a texture in WebGL? 2) How can I dynamically select an image at run time and draw it at any co-ordinate, also selected at run time?


  • Displaying and updating score in Android (OpenGL ES 2)

    - by user16547
    I'm using a FrameLayout where I have a GLSurfaceView at the bottom and a few Views on top of it. One of those Views is a TextView that displays the current score. Here's how I'm updating the score TextView based on what happens in my game loop: whenever an event happens that increases the score, I call activity.runOnUiThread(updater), where activity is the activity that has the above FrameLayout (my main game activity) and updater is just a Runnable that increments the score. From my understanding, using runOnUiThread() from the OpenGL thread is standard practice; otherwise you'll get an exception (I can't remember its name). The problem is that there's a very noticeable lag between the event and the score update. For example, the character gets a coin, but the coin count is not updated quickly. I tried summing all the score events from my game loop and calling runOnUiThread() only once per loop, but it doesn't seem to make much of a difference; the lag is still noticeable. How can I improve my design to avoid this lag?


  • How to move an object using X and Y coordinates in JavaScript

    - by Geroy290
    I am making a 2D game with JavaScript and HTML5, and am trying to move an image that I have drawn like so:

        // canvas
        var c = document.getElementById("gameCanvas");
        var ctx = c.getContext("2d");

        // baseball
        var baseball = new Image();
        baseball.onload = function() {
            ctx.drawImage(baseball, 400, 425);
        };
        baseball.src = "baseball2.png";

    I'm not sure how I would move it, though. I have seen many people seem to just type something like ballX and ballY, but I don't understand where the actual x and y definitions come from. Here is my code so far: http://jsfiddle.net/xRfua/ (I have a different image source, but it is a local file, so I couldn't include it). Thanks in advance for any help!


  • OpenGL setup on Windows

    - by kevin james
    I have been trying to use OpenGL for two days now: first on Mac, then on Windows. The problem with the Mac is that it doesn't support the newer versions of OpenGL. I ran a tutorial that actually did get some things working, but it only works in Xcode (i.e., I can't create a new file, paste in the same code, and get it to work). Because of these issues, I moved to Windows. My Windows 7 machine has OpenGL 4.3, which is the same version used in a lot of other tutorials. However, not one of these tutorials gives any instruction on how to set it up for the first time. I have come across some vague posts saying that some libraries need to be linked. But WHAT libraries, and HOW do I link them? Please help; I am pretty desperate to set this up, as this project is due for work soon. I have actually used OpenGL before at my university, but the computers there already had everything set up. The project itself is very easy, but setting up OpenGL is not something I know how to do.


  • Control convention for circular movement?

    - by Christian
    I'm currently doing a kind of training project in Unity (still a beginner). It's supposed to be somewhat like Breakout, but instead of just going left and right, I want the paddle to circle around the center point. This is all fine and dandy, but the problem I have is: how do you control this with a keyboard or gamepad? For touch and mouse control I could work around the problem by letting the paddle follow the cursor/finger, but with the other control methods I'm a bit stumped. With a keyboard, for example, I could either make it so that the Left arrow always moves the paddle clockwise (it starts at the bottom of the circle), or I could link it to the actual direction, meaning that if the paddle is at the bottom, it goes left and up along the circle, or, if it's in the upper hemisphere, it moves left and down, both times toward the leftmost point of the circle. Both feel kind of weird: with the first one, it can be counterintuitive to press Left to move the paddle right when it's in the upper area, while with the second method you'd need to constantly switch buttons to keep moving. So, long story short: is there any kind of existing standard, convention, or accepted example for this type of movement and the corresponding controls? I didn't really know what to google for ("control conventions for circular movement" was one of the searches I tried, but it didn't give me much), and I also didn't really find anything about this on here. If there is a question that I simply didn't see, please excuse the duplicate.
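    For reference, the first convention (a given key always moves the paddle clockwise or counter-clockwise) is the simpler one to implement, since the mapping never changes as the paddle moves. A hedged Unity-style sketch of it (component and field names are mine, not a known standard):

        using UnityEngine;

        public class OrbitPaddle : MonoBehaviour
        {
            public Vector3 center = Vector3.zero;
            public float radius = 4f;
            public float angularSpeed = 180f;   // degrees per second
            private float angle;                // current position on the circle

            void Update()
            {
                // Horizontal axis is -1..1 from keyboard or gamepad; positive
                // input always moves counter-clockwise, negative clockwise.
                angle += Input.GetAxis("Horizontal") * angularSpeed * Time.deltaTime;

                float rad = angle * Mathf.Deg2Rad;
                transform.position = center +
                    new Vector3(Mathf.Cos(rad), Mathf.Sin(rad), 0f) * radius;

                // Keep the paddle's face pointed at the center.
                transform.up = (center - transform.position).normalized;
            }
        }

    The direction-relative variant would instead flip the sign of the input whenever the paddle is in the upper half of the circle, which is exactly the button-switching awkwardness described above.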


  • What are my tool options to prototype a 2D online multiplayer game?

    - by Asher Einhorn
    I'm looking for the best tool to let me quickly put together a 2D game that relies largely on networking. It's extremely likely that this game will require a constantly running server-side program. I have little experience with these things, and since it's a prototype, I'd like the easiest options for achieving this. I am looking to make this game for the web and mobile devices, although at present I only have access to iOS hardware (no Android etc.). I just want to get the bare bones of this set up so I can test it at the earliest opportunity to see if it's fun. EDIT: doesn't Unity have some built-in networking stuff in it?


  • How to deal with Character body parts from Design to Cocos2d

    - by Edwin Soho
    I'm trying to figure out the pattern game developers use together with game designers. See the picture below with the independent parts. Questions: 1) Should I create separate images for the different body parts, or keep frame-by-frame animation? (I know both can be used, but I'm trying to figure out what is commonly used in the industry.) 2) If I'm going to generate separate images for the different body parts (which I think is more logical), how would I export that to Cocos2d (vector or bitmap)?


  • Use PathModifier or MoveModifier for Tower of Defense Game

    - by Siddharth
    In my game I want to move enemies along a fixed path, so I have established a manual grid structure for that purpose rather than using a tile map. The game contains multiple levels, the path will be different for each level, and multiple fixed paths exist for each level. So my question is: should I use MoveModifier or PathModifier for my game? Also, please mention whether I have to use WayPoints or not. I'm happy to provide further detail if needed; please help me decide what to do.


  • How do I produce a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such mucus spreading: the substance spreads around the source (in this example, the source would be the main alien building). The game is StarCraft; the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? By recalculating the substance's progression and regenerating the effect on the fly each frame, or rather by using a large collection of tiles, or something else?
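    A hedged sketch of the tile-based option (all type and method names here are assumptions, not an engine API): store creep as per-tile state and grow it outward from each source one ring per tick with a breadth-first frontier, so each update touches only the creep's edge rather than recalculating the whole map.

        using System.Collections.Generic;

        public class CreepGrid
        {
            private readonly bool[,] creep;
            private readonly int width, height;
            private Queue<(int X, int Y)> frontier = new Queue<(int X, int Y)>();

            public CreepGrid(int width, int height)
            {
                this.width = width;
                this.height = height;
                creep = new bool[width, height];
            }

            public void AddSource(int x, int y)
            {
                creep[x, y] = true;
                frontier.Enqueue((x, y));
            }

            // Call on a timer; each call advances the creep edge by one tile.
            public void SpreadTick()
            {
                var next = new Queue<(int X, int Y)>();
                while (frontier.Count > 0)
                {
                    var (x, y) = frontier.Dequeue();
                    foreach (var (nx, ny) in new[] { (x+1,y), (x-1,y), (x,y+1), (x,y-1) })
                    {
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        if (creep[nx, ny]) continue;
                        creep[nx, ny] = true;     // newly covered tile
                        next.Enqueue((nx, ny));   // becomes part of the next frontier
                    }
                }
                frontier = next;
            }

            public bool HasCreep(int x, int y) => creep[x, y];
        }

    Rendering is then a separate concern: pick each covered tile's sprite from its covered/uncovered neighbors (standard autotiling), which is one cheap way to get the organic-looking edge without per-frame recalculation.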


  • Pattern for performing game actions

    - by Arkiliknam
    Is there a generally accepted pattern for performing various actions within a game? That is, a way a player can perform actions that an AI might also perform, such as move, attack, self-destruct, etc. I currently have an abstract BaseAction which uses .NET generics to specify the different objects that get returned by the various actions. This is all implemented in a pattern similar to Command, where each action is responsible for itself and does all that it needs. My reasoning for being abstract is so that I may have a single ActionHandler, and the AI can just queue up different actions implementing the BaseAction. The reason it is generic is so that the different actions can return result information relevant to the action (as different actions can have totally different outcomes in the game), along with some common beforeAction and afterAction implementations. So... is there a more accepted way of doing this, or does this sound alright?
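    For concreteness, here is a hedged sketch of the arrangement described (the names and hooks are assumptions drawn from the description, not a known framework):

        public abstract class BaseAction<TResult>
        {
            public TResult Execute()
            {
                BeforeAction();
                TResult result = Perform();
                AfterAction();
                return result;
            }

            protected virtual void BeforeAction() { }   // shared validation/logging
            protected abstract TResult Perform();       // the action itself
            protected virtual void AfterAction() { }    // shared cleanup/notifications
        }

        public sealed class MoveResult { public bool Reached; public float DistanceMoved; }

        public sealed class MoveAction : BaseAction<MoveResult>
        {
            private readonly float distance;
            public MoveAction(float distance) { this.distance = distance; }

            protected override MoveResult Perform()
                => new MoveResult { Reached = true, DistanceMoved = distance };
        }

    One wrinkle worth noting with this design: actions with different TResult types cannot share a single strongly typed queue, so a non-generic IAction interface above BaseAction<TResult> (whose Execute the ActionHandler calls without seeing the result type) is a common compromise for the single-handler goal.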


  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup:

    Scene 1 - Example object: a dirt road (nothing else). Geometry: a detailed road, with all the bumps, cracks, and so forth done in the mesh.
    Scene 2 - Example object: a dirt road (nothing else). Geometry: a simple mesh in the form of a road, with maps and textures simulating the cracks, bumps, etc.

    So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this: go heavy on the textures, or use a blend of both?


  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and all the articles I've come across talk about object sorting, so that you'd sort your objects by material, for example. Until I sat down and started implementing it, I kind of took this for granted, because it made sense. But now I'm wondering: what does sorting actually change? In my engine, I have a manager for UBOs; I use those to store data that will be shared between programs, which at the moment only involves time, the camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects at the moment). Now, for each model I have to change the model-to-world matrix uniform; no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object that bad? I vaguely remember reading somewhere that each time you change something in the pipeline it has to get flushed, and that can cause performance issues. But since I'm setting up a model-to-world matrix for each draw call anyway, what sense does it make to ever be concerned about this? By the way, is there any information on whether changing a uniform or calling glBufferSubData is more (or less) expensive?
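    To make the usual argument concrete, here is a hedged sketch (the material id and the placeholder bind calls are assumptions standing in for real GL state changes): sorting means the expensive state changes (program and texture binds) happen once per material, while the cheap per-object uniform still changes on every draw, exactly as it must.

        using System.Collections.Generic;
        using System.Linq;
        using System.Numerics;

        public sealed class DrawCall
        {
            public int MaterialId;          // stands in for shader + textures + render state
            public Matrix4x4 ModelToWorld;
        }

        public static class RenderQueue
        {
            public static void Flush(List<DrawCall> calls)
            {
                int boundMaterial = -1;
                foreach (var call in calls.OrderBy(c => c.MaterialId))
                {
                    if (call.MaterialId != boundMaterial)
                    {
                        BindMaterial(call.MaterialId);        // costly: program/texture bind
                        boundMaterial = call.MaterialId;
                    }
                    SetModelMatrixUniform(call.ModelToWorld); // cheap, unavoidable per object
                    Draw(call);
                }
            }

            // Placeholders for the engine's actual GL calls.
            static void BindMaterial(int id) { }
            static void SetModelMatrixUniform(Matrix4x4 m) { }
            static void Draw(DrawCall c) { }
        }

    In other words, sorting doesn't remove the per-object matrix update; it amortizes everything else, and its benefit grows with the number of distinct materials per frame.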


  • Selling your iPhone games

    - by Artemix
    Hi. So, long story short: some days ago I published an iPhone game. I think the game wasn't that bad, tbh, and still I got only 10 sales at $0.99. Are there any publishers, sponsors, or distributors who can make your game "visible" on the App Store market, or is the only thing you need an amazing game, and that's all? Somehow I think that even if you have an awesome game, if you don't do the "marketing magic" correctly, you will not exist in the store. Now I'm making a second game, completely different, and I want to know how to do things right. If anyone knows something about this topic, let me know. Thanks in advance.

