Search Results

Search found 39156 results on 1567 pages for 'device driver development'.


  • XNA - Moving Background Calculations

    - by Jesse Emond
    Hi, my question is relatively hard to explain (for me, at least), so I'll go one step at a time; just tell me in the comments if anything isn't clear. I'm making a "Defend Your Castle" style 2D game, where two players each own a castle and create units that move horizontally to try to destroy the opponent's base. Here's a screenshot of the game: The distance between the two castles is much bigger in a real game, actually bigger than the screen's width.

    Because the distance is bigger than the screen's width, I had to implement a simple 2D camera: Camera2D, which only holds a Location Vector2 (and I always make sure the camera stays within the field area). I then offset all the game elements (castles, units, health bars) by that location, so that if a unit is at (5, 0) and the camera's location is (5, 0), the unit is drawn 5 units to the left, at (0, 0) on the screen.

    At first, I simply used a static background with mountains and clouds (yes, those are supposed to be mountains and clouds). Obviously this looked awful: when you moved the camera, the background stayed immobile. Instead, I'd like to make a moving, "scrolling" background. But rather than making the background as wide as the distance between the castles, I'd like to make it a little smaller (though still wider than the screen). I thought this would create an effect of "distance" with the background (but it might just look awful, too). Here's the background I'm testing with: I tried different approaches, but none of them seems to work. I tried this:

        float backgroundFieldRatio = BackgroundTexture.Width / fieldWidth; // find the ratio between the background and the field
        float backgroundPositionX = -cam.Location.X * backgroundFieldRatio; // move the background to the left

    When I run this with fieldWidth = 1600 and BackgroundTexture.Width = 1500 and look at the rightmost area, the background is offset to the left by too large an amount, and the black clear color shows through behind it, as you can see here: I hope I explained properly what I'm trying to achieve. Thank you for your time. Note: I didn't know what to search for on Google, so I thought I'd ask here.
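
    For reference, the parallax effect described above is usually obtained by mapping the camera's range of travel onto the background's range of travel, rather than by dividing the raw widths. Below is a minimal sketch of that mapping, not necessarily the fix the poster wants; it assumes a screenWidth value (for example from the viewport) in addition to the field and background widths mentioned in the question.

        // Sketch: map camera travel onto background travel (parallax).
        // Assumes backgroundWidth and fieldWidth are both wider than screenWidth.
        static float BackgroundOffsetX(float cameraX, float fieldWidth, float backgroundWidth, float screenWidth)
        {
            float cameraRange = fieldWidth - screenWidth;           // how far the camera can travel
            float backgroundRange = backgroundWidth - screenWidth;  // how far the background may slide
            float parallaxRatio = backgroundRange / cameraRange;    // < 1, so the background moves slower
            return -cameraX * parallaxRatio;                        // offset to apply when drawing the background
        }

    With this mapping, the background's right edge lines up with the screen exactly when the camera reaches the right end of the field, so the clear color never shows through.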

    Read the article

  • Collisions between moving ball and polygons

    - by miguelSantirso
    I know this is a very typical problem and that there are a lot of similar questions, but I have been looking for a while and I have not found anything that fits what I want. I am developing a 2D game in which I need to perform collisions between a ball and simple polygons. The polygons are defined as an array of vertices. I have implemented collisions with the bounding boxes of the polygons (that was easy), and I need to refine that collision in the cases where the ball collides with the bounding box. The ball can move quite fast and the polygons are not too big, so I need to perform continuous collision detection. I am looking for a method that allows me to detect whether the ball collides with a polygon and, at the same time, calculate the new direction of the ball after bouncing off the polygon. (I am using XNA, in case that helps.)
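
    For the narrow phase, a common approach is to test the ball against each polygon edge: find the closest point on the edge to the ball's center, and if it is closer than the radius, push the ball out and reflect its velocity about the contact normal. A rough sketch using XNA's Vector2 follows; the name BounceOffEdge and the parameters are mine, and this is a discrete per-frame test, not true continuous collision (a fast ball would normally be substepped or swept).

        using Microsoft.Xna.Framework;

        // Test the ball (center, radius, velocity) against one polygon edge from a to b.
        static bool BounceOffEdge(Vector2 a, Vector2 b, ref Vector2 center, ref Vector2 velocity, float radius)
        {
            // Closest point on segment AB to the ball's center.
            Vector2 ab = b - a;
            float t = MathHelper.Clamp(Vector2.Dot(center - a, ab) / ab.LengthSquared(), 0f, 1f);
            Vector2 closest = a + ab * t;

            Vector2 toCenter = center - closest;
            float dist = toCenter.Length();
            if (dist >= radius || dist == 0f)
                return false;                                  // no contact with this edge

            Vector2 normal = toCenter / dist;                  // contact normal, pointing at the ball
            center += normal * (radius - dist);                // push the ball out of the edge
            velocity = Vector2.Reflect(velocity, normal);      // mirror the velocity about the normal
            return true;
        }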

    Read the article

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping ponging": take the output of the first pass and then feed it into the second pass. In my example I want to draw to the R channel and then draw to the G channel to produce a simple Venn diagram in the shader, but I need to detect the overlap. I can currently detect one or the other, but not the overlap. There is a red circle and a green circle overlapping, and I want to put a dynamic texture map in the overlap region; right now I can only put it in one or the other. Below is how it looks in the shader.

        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0 : TEXCOORD0;
            float2 tex1 : TEXCOORD1;
            float4 color : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            float4 textureColor0;
            float4 textureColor1;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
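
    The "ping ponging" idea itself is independent of the API: render pass 1 into an off-screen target, then bind that target as an input texture for pass 2 while rendering into a second target (or the back buffer). Since the D3D11 resource-creation boilerplate is long, here is a sketch of the pattern in XNA terms only; rtA, rtB, passOneEffect, passTwoEffect, DrawScene, width and height are all placeholders, not code from the question.

        // Ping-pong sketch: pass 1 draws into rtA, pass 2 reads rtA and draws into rtB.
        RenderTarget2D rtA = new RenderTarget2D(GraphicsDevice, width, height);
        RenderTarget2D rtB = new RenderTarget2D(GraphicsDevice, width, height);

        // Pass 1: draw the scene into rtA.
        GraphicsDevice.SetRenderTarget(rtA);
        GraphicsDevice.Clear(Color.Transparent);
        DrawScene(passOneEffect);

        // Pass 2: rtA becomes an input texture; draw into rtB (or straight to the back buffer).
        GraphicsDevice.SetRenderTarget(rtB);
        GraphicsDevice.Clear(Color.Transparent);
        passTwoEffect.Parameters["previousPass"].SetValue(rtA);
        DrawScene(passTwoEffect);

        // Done: unbind render targets before presenting.
        GraphicsDevice.SetRenderTarget(null);

    In D3D11 the equivalent is two render-target views over two textures, swapping which one is bound with OMSetRenderTargets and which one is bound as a shader resource with PSSetShaderResources between the passes.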

    Read the article

  • Torque2D, Class vs Datablock

    - by Max Kielland
    I'm scripting my first game with Torque2D and have not fully understood the difference between "Class" and Datablock. To me it seems like Datablock is similar to a struct in C/C++ or a Record in Pascal. If I create Datablocks with new, are they instantiated in the same way as a "Class"? I have a large TileMap and need to attach some information to each Tile. I was thinking to use a Datablock, as a struct, to attach this information to the tile's CustomData property. The two questions are: What is a Datablock and should I use a Datablock or a "Class" for this tile information?

    Read the article

  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble comes from the matrix class? Also, the cube follows the mouse, but it is only vertically aligned with it, and it runs ahead of the mouse when going left or right from the center of the screen. The perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15

    Read the article

  • Resultant Vector Algorithm for 2D Collisions

    - by John
    I am making a Pong-based game where a puck hits a paddle and bounces off. Both the puck and the paddles are circles. I came up with an algorithm to calculate the resultant vector of the puck once it meets a paddle. The game seems to function correctly, but I'm not entirely sure my algorithm is correct.

    Here are my variables for the algorithm. Given:

        velocity = the magnitude of the initial velocity of the puck before the collision
        x = the x coordinate of the puck
        y = the y coordinate of the puck
        moveX = the horizontal speed of the puck
        moveY = the vertical speed of the puck
        otherX = the x coordinate of the paddle
        otherY = the y coordinate of the paddle
        piece.horizontalMomentum = the horizontal speed of the paddle before it hits the puck
        piece.verticalMomentum = the vertical speed of the paddle before it hits the puck
        slope = the direction, in radians, of the puck's velocity
        distX = the horizontal distance between the center of the puck and the center of the paddle
        distY = the vertical distance between the center of the puck and the center of the paddle

    The algorithm solves for:

        impactAngle = the angle of impact, in radians
        newSpeedX = the speed of the resultant vector in the X direction
        newSpeedY = the speed of the resultant vector in the Y direction

    Here is the code for my algorithm:

        int otherX = piece.x;
        int otherY = piece.y;
        double velocity = Math.sqrt((moveX * moveX) + (moveY * moveY));
        double slope = Math.atan(moveX / moveY);
        int distX = x - otherX;
        int distY = y - otherY;
        double impactAngle = Math.atan(distX / distY);
        double newAngle = impactAngle + slope;
        int newSpeedX = (int)(velocity * Math.sin(newAngle)) + piece.horizontalMomentum;
        int newSpeedY = (int)(velocity * Math.cos(newAngle)) + piece.verticalMomentum;

    For those who are not program savvy, here it is simplified:

        velocity = √(moveX² + moveY²)
        slope = arctan(moveX / moveY)
        distX = x - otherX
        distY = y - otherY
        impactAngle = arctan(distX / distY)
        newAngle = impactAngle + slope
        newSpeedX = velocity * sin(newAngle) + piece.horizontalMomentum
        newSpeedY = velocity * cos(newAngle) + piece.verticalMomentum

    My question: Is this algorithm correct? Is there an easier or simpler way to do what I'm trying to do? (A common alternative is sketched below.)
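
    One widely used alternative avoids the angle arithmetic entirely: for two circles, the collision normal is simply the direction from the paddle's center to the puck's center, and the bounce is the reflection v' = v - 2(v·n)n, with the paddle's momentum added afterwards. A sketch (the method name and parameters are mine, and it assumes the puck is moving toward the paddle when called):

        using System;

        // Reflect the puck's velocity about the circle-circle collision normal,
        // then add the paddle's momentum, mirroring the question's variables.
        static void Bounce(double puckX, double puckY, double paddleX, double paddleY,
                           ref double moveX, ref double moveY,
                           double paddleMomentumX, double paddleMomentumY)
        {
            // Collision normal: from the paddle's center toward the puck's center.
            double nx = puckX - paddleX, ny = puckY - paddleY;
            double len = Math.Sqrt(nx * nx + ny * ny);
            nx /= len; ny /= len;

            // v' = v - 2 (v · n) n
            double dot = moveX * nx + moveY * ny;
            moveX = moveX - 2 * dot * nx + paddleMomentumX;
            moveY = moveY - 2 * dot * ny + paddleMomentumY;
        }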

    Read the article

  • Joystick example problem for android 2D

    - by iQue
    I've searched all over the web for an answer to this, and there are similar topics, but nothing works for me and I have no idea why. I just want to move my sprite using a joystick. Since I'm useless at math when it comes to angles and such, I used an example; I'll post the code here:

        public float initx = 50;   // og 425
        public float inity = 300;  // og 267
        public Point _touchingPoint = new Point(50, 300);   // og (425, 267)
        public Point _pointerPosition = new Point(100, 170);
        private Boolean _dragging = false;
        private MotionEvent lastEvent;

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            if (event == null && lastEvent == null) {
                return _dragging;
            } else if (event == null && lastEvent != null) {
                event = lastEvent;
            } else {
                lastEvent = event;
            }

            // drag drop
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                _dragging = true;
            } else if (event.getAction() == MotionEvent.ACTION_UP) {
                _dragging = false;
            }

            if (_dragging) {
                // get the pos
                _touchingPoint.x = (int) event.getX();
                _touchingPoint.y = (int) event.getY();

                // bound to a box
                if (_touchingPoint.x < 25)  { _touchingPoint.x = 25; }  // og 400
                if (_touchingPoint.x > 75)  { _touchingPoint.x = 75; }  // og 450
                if (_touchingPoint.y < 275) { _touchingPoint.y = 275; } // og 240
                if (_touchingPoint.y > 325) { _touchingPoint.y = 325; } // og 290

                // get the angle
                double angle = Math.atan2(_touchingPoint.y - inity, _touchingPoint.x - initx) / (Math.PI / 180);

                // Move the beetle in proportion to how far
                // the joystick is dragged from its center
                _pointerPosition.y += Math.sin(angle * (Math.PI / 180)) * (_touchingPoint.x / 70);
                _pointerPosition.x += Math.cos(angle * (Math.PI / 180)) * (_touchingPoint.x / 70);

                // stop the sprite from going through the edges
                if (_pointerPosition.x + happy.getWidth() >= getWidth()) { _pointerPosition.x = getWidth() - happy.getWidth(); }
                if (_pointerPosition.x < 0) { _pointerPosition.x = 0; }
                if (_pointerPosition.y + happy.getHeight() >= getHeight()) { _pointerPosition.y = getHeight() - happy.getHeight(); }
                if (_pointerPosition.y < 0) { _pointerPosition.y = 0; }
            }
        }

        public void render(Canvas canvas) {
            canvas.drawColor(Color.BLUE);
            canvas.drawBitmap(joystick.get_joystickBg(), initx - 45, inity - 45, null);
            canvas.drawBitmap(happy, _pointerPosition.x, _pointerPosition.y, null);
            canvas.drawBitmap(joystick.get_joystick(), _touchingPoint.x - 26, _touchingPoint.y - 26, null);
        }

        public void update() {
            this.onTouchEvent(null);
        }

    ("og" marks the original position values from the example.) As you can see, I'm trying to move the joystick, but when I do, it stops working correctly. It still behaves like a joystick, but the sprite doesn't move accordingly: if I push the joystick down, for example, the sprite moves up, and if I push it up, it moves left. Can anyone please help me? I've been stuck here for so long and it's really frustrating.
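
    For reference, the usual way to turn a joystick position into movement is to work directly with the offset from the joystick's center, normalized and scaled, rather than converting to an angle and back (the degree/radian round trip in the code above is easy to get wrong). A small sketch of the idea, written in C# here for brevity; all names are placeholders and would need translating back into the Android code.

        using System;

        // Sketch: joystick offset -> movement vector, no angle conversion.
        static (float dx, float dy) JoystickToMovement(float touchX, float touchY,
                                                       float centerX, float centerY,
                                                       float joystickRadius, float maxSpeed)
        {
            float dx = touchX - centerX;
            float dy = touchY - centerY;
            float dist = (float)Math.Sqrt(dx * dx + dy * dy);
            if (dist < 1e-3f)
                return (0f, 0f);                               // stick is centered: no movement

            float strength = Math.Min(dist / joystickRadius, 1f);   // 0..1 deflection
            // Normalize the offset and scale by deflection and speed.
            return (dx / dist * strength * maxSpeed, dy / dist * strength * maxSpeed);
        }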

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    I'm trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up trying to implement the approach NVIDIA used in their Cascades demo (http://www.slideshare.net/icastano/cascades-demo-secrets), starting at slide 82. At the moment I am completely stuck on the following problem: when I am far away, the displacement seems to work, but the closer I move to the surface, the more the texture gets bent along the x-axis, and in general it looks slightly skewed in one direction. EDIT: I added a video: click. I added some screenshots to illustrate the problem. I have tried lots of things already and I am starting to get a bit frustrated as my ideas run out. Here is my full VS and FS code.

    VS:

        #version 400

        layout(location = 0) in vec3 IN_VS_Position;
        layout(location = 1) in vec3 IN_VS_Normal;
        layout(location = 2) in vec2 IN_VS_Texcoord;
        layout(location = 3) in vec3 IN_VS_Tangent;
        layout(location = 4) in vec3 IN_VS_BiTangent;

        uniform vec3 uLightPos;
        uniform vec3 uCameraDirection;
        uniform mat4 uViewProjection;
        uniform mat4 uModel;
        uniform mat4 uView;
        uniform mat3 uNormalMatrix;

        out vec2 IN_FS_Texcoord;
        out vec3 IN_FS_CameraDir_Tangent;
        out vec3 IN_FS_LightDir_Tangent;

        void main( void )
        {
            IN_FS_Texcoord = IN_VS_Texcoord;

            vec4 posObject = uModel * vec4(IN_VS_Position, 1.0);
            vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz;
            vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz;
            //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz;
            vec3 binormalObject = normalize(cross(tangentObject, normalObject));

            // uCameraDirection is the camera position, just badly named
            vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz);
            vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz );

            IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection );
            IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection );
            IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection );

            IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection );
            IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection );
            IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection );

            gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0);
        }

    The VS just builds the TBN matrix from the incoming normal, tangent and binormal in world space, calculates the light and eye directions in world space, and finally transforms the light and eye directions into tangent space.
    FS:

        #version 400

        // uniforms
        uniform Light {
            vec4 fvDiffuse;
            vec4 fvAmbient;
            vec4 fvSpecular;
        };

        uniform Material {
            vec4 diffuse;
            vec4 ambient;
            vec4 specular;
            vec4 emissive;
            float fSpecularPower;
            float shininessStrength;
        };

        uniform sampler2D colorSampler;
        uniform sampler2D normalMapSampler;
        uniform sampler2D heightMapSampler;

        in vec2 IN_FS_Texcoord;
        in vec3 IN_FS_CameraDir_Tangent;
        in vec3 IN_FS_LightDir_Tangent;

        out vec4 color;

        vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){
            vec2 NewCoords = coords;
            vec2 dUV = - dir.xy * height * 0.08;
            float SearchHeight = 1.0;
            float prev_hits = 0.0;
            float hit_h = 0.0;

            for(int i=0;i<10;i++){
                SearchHeight -= 0.1;
                NewCoords += dUV;
                float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r;
                float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0);
                hit_h += first_hit * SearchHeight;
                prev_hits += first_hit;
            }

            NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV;

            vec2 Temp = NewCoords;
            SearchHeight = hit_h+0.1;
            float Start = SearchHeight;
            dUV *= 0.2;
            prev_hits = 0.0;
            hit_h = 0.0;

            for(int i=0;i<5;i++){
                SearchHeight -= 0.02;
                NewCoords += dUV;
                float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r;
                float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0);
                hit_h += first_hit * SearchHeight;
                prev_hits += first_hit;
            }

            NewCoords = Temp + dUV * (Start - hit_h) * 50.0f;
            return NewCoords;
        }

        void main( void )
        {
            vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent );
            vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent );

            float mipmap = 0;

            vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap);

            //vec2 ddx = dFdx(NewCoord);
            //vec2 ddy = dFdy(NewCoord);

            vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz;
            BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0));

            vec3 fvNormal = BumpMapNormal;
            float fNDotL = dot( fvNormal, fvLightDirection );

            vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection );
            float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) );

            vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap);

            vec4 fvTotalAmbient = fvAmbient * fvBaseColor;
            vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor;
            vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) );

            color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) );
        }

    The FS implements the displacement technique in the TraceRay method, always using mipmap level 0. Most of the code is from the NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong here. At the end it uses the modified UV coordinates to fetch the displaced normal from the normal map and the color from the color map. I'm looking forward to some ideas. Thanks in advance!

    Edit: Here is the code loading the heightmap:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData);
        glGenerateMipmap(GL_TEXTURE_2D);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    Maybe something is wrong in here?

    Read the article

  • How do I make a more or less realistic water surface?

    - by Johnny
    I want to make a water surface similar to the one in this picture: http://www.publicdomainpictures.net/pictures/20000/velka/water-surface-detail-11291208064MpI.jpg I need the water surface seen from the same viewpoint as in the picture. Is it possible to do this without shaders? I want to develop a little game for the Xbox Live Indie Marketplace and Windows Phone, and maybe later for iPhone/iPad. How should I make the water surface so that it works on multiple platforms?

    Read the article

  • What is wrong with my specular phong shading

    - by Thijser
    I'm sorry if this should be placed on Stack Overflow instead; however, seeing as this is graphics related, I was hoping you guys could help me. I'm attempting to write a Phong shader and am currently working on the specular term. I came across the following formula: base * pow(dot(V, R), shininess), and attempted to implement it (V is the position of the viewer and R the reflection vector). This gave the following result and code:

        Vec3Df phongSpecular(const Vec3Df & vertexPos, Vec3Df & normal, const Vec3Df & lightPos, const Vec3Df & cameraPos, unsigned int index)
        {
            Vec3Df relativeLightPos = (lightPos - vertexPos);
            relativeLightPos.normalize();
            Vec3Df relativeCameraPos = (cameraPos - vertexPos);
            relativeCameraPos.normalize();

            int DotOfNormalAndLight = Vec3Df::dotProduct(normal, relativeLightPos);
            Vec3Df reflective = (relativeLightPos - (2 * DotOfNormalAndLight * normal)) * -1;
            reflective.normalize();

            float phongyness = Vec3Df::dotProduct(reflective, relativeCameraPos);
            if (phongyness < 0) {
                phongyness = 0;
            }

            float shininess = Shininess[index];
            float speculair = powf(phongyness, shininess);
            return Ks[index] * speculair;
        }

    I'm looking for something more like this:

    Read the article

  • Initializing OpenFeint for Android outside the main Application

    - by Ef Es
    I am trying to create a generic C++ bridge to use OpenFeint with Cocos2d-x, which is supposed to be just "add and run", but I am running into problems. OpenFeint is very particular about initialization: it requires a Context parameter that MUST be the main Application, in the onCreate method, never in the constructor. Also, the main Application's name must be edited into the manifest. I am trying to work around this. So far I have tried creating a new Application that calls my Application, to test whether just the type is needed, but you really do need the main Android Application. I also tried using a handler for a static initialization, but I ran into pretty much the same problem. Has anybody been able to do this? This is my working-but-not-as-intended code snippet:

        public class DerpHurr extends Application {

            @Override
            public void onCreate() {
                super.onCreate();
                initializeOpenFeint("TestApp", "edthedthedthedth", "aeyaetyet", "65462");
            }

            public void initializeOpenFeint(String appname, String key, String secret, String id) {
                Map<String, Object> options = new HashMap<String, Object>();
                options.put(OpenFeintSettings.SettingCloudStorageCompressionStrategy, OpenFeintSettings.CloudStorageCompressionStrategyDefault);
                OpenFeintSettings settings = new OpenFeintSettings(appname, key, secret, id, options);
                // RIGHT HERE
                OpenFeint.initialize(***this***, settings, new OpenFeintDelegate() { });
                System.out.println("OpenFeint Started");
            }
        }

    Manifest:

        <application android:debuggable="true" android:label="@string/app_name" android:name=".DerpHurr">

    Read the article

  • Box2D how to implement a camera?

    - by Romeo
    So far I have this Camera class:

        package GameObjects;

        import main.Main;
        import org.jbox2d.common.Vec2;

        public class Camera {
            public int x;
            public int y;
            public int sx;
            public int sy;

            public static final float PIXEL_TO_METER = 50f;

            private float yFlip = -1.0f;

            public Camera() {
                x = 0;
                y = 0;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public Camera(int x, int y) {
                this.x = x;
                this.y = y;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void update() {
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void moveCam(int mx, int my) {
                if (mx >= 0 && mx <= 80) {
                    this.x -= 2;
                } else if (mx <= Main.APPWIDTH && mx >= Main.APPWIDTH - 80) {
                    this.x += 2;
                }
                if (my >= 0 && my <= 80) {
                    this.y += 2;
                } else if (my <= Main.APPHEIGHT && my >= Main.APPHEIGHT - 80) {
                    this.y -= 2;
                }
                this.update();
            }

            public float meterToPixel(float meter) {
                return meter * PIXEL_TO_METER;
            }

            public float pixelToMeter(float pixel) {
                return pixel / PIXEL_TO_METER;
            }

            public Vec2 screenToWorld(Vec2 screenV) {
                return new Vec2(screenV.x + this.x, yFlip * screenV.y + this.y);
            }

            public Vec2 worldToScreen(Vec2 worldV) {
                return new Vec2(worldV.x - this.x, yFlip * worldV.y - this.y);
            }
        }

    I need to know how to modify the screenToWorld and worldToScreen functions to include the PIXEL_TO_METER scaling.
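
    For reference, the usual convention is that Box2D world coordinates are in meters and screen coordinates in pixels, so the conversion multiplies by PIXEL_TO_METER on the way to the screen and divides on the way back, with the camera offset applied in pixel space. A sketch of that math, written in C# here with the same names as above (adapt to the Java class as needed, and check the sign convention of yFlip against your own setup):

        // Sketch of pixel/meter-aware conversions; world values are meters, screen values pixels.
        // camX/camY and yFlip correspond to the fields of the Camera class above.
        const float PIXEL_TO_METER = 50f;

        static (float x, float y) WorldToScreen(float worldX, float worldY, float camX, float camY, float yFlip)
        {
            return (worldX * PIXEL_TO_METER - camX,
                    yFlip * worldY * PIXEL_TO_METER - camY);
        }

        static (float x, float y) ScreenToWorld(float screenX, float screenY, float camX, float camY, float yFlip)
        {
            return ((screenX + camX) / PIXEL_TO_METER,
                    yFlip * (screenY + camY) / PIXEL_TO_METER);
        }

    Written this way the two functions are exact inverses of each other, which is an easy property to unit-test.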

    Read the article

  • Will polishing my current project be a better learning experience than starting a new one?

    - by Alejandro Cámara
    I started programming many years ago. Now I'm trying to make games. I have read many recommendations to start cloning some well known games like galaga, tetris, arkanoid, etc. I have also read that I should go for the whole game (including menus, sound, score, etc.). Yesterday I finished the first complete version of my arkanoid clone. But it is far from over. I can still work on it for months (I program as a hobby in my free time) implementing a screen resolution switcher, remap of the control keys, power-ups falling from broken bricks, and a huge etc. But I do not want to be forever learning how to clone ONE game. I have the urge to get to the next clone in order to apply some design ideas I have come upon while developing this arkanoid clone (at the same time I am reading the GoF book and much source code from Ludum Dare 21 game contest). So the question is: Should I keep improving the arkanoid clone until it has all the features the original game had? or should I move to the next clone (there are almost infinite games to clone) and start mending the things I did wrong with the previous clone? This can be a very subjective question, so please restrain the answers to the most effective way to learn how to make my own games (not cloning someone ideas). Thank you! CLARIFICATION In order to clarify what I have implemented I make this list: Features implemented: Bouncing capabilities (the ball bounces on walls, on bricks, and on the bar). Sounds when bouncing on bricks and the bar, and when the player wins or loses. Basic title menu (new game and exit only). Also in-game menu and win/lose menus. Only three levels, but the map system is so easy I do not think it will teach me much (am I wrong?). Features not-implemented: Power-ups when breaking the bricks. Complex bricks (with more than one "hit point" and invincible). Better graphics (I am not really good at it). Programming polishing (use more intensively the design patterns). Here's a link to its (minimal) webpage: http://blog.acamara.es/piperine/ I kind of feel ashamed to show it, so please do not hit me too hard :-) My question was related to the not-implemented features. I wondered what was the fastest (optimal) path to learn. 1) implement the not-implemented features in this project which is getting big, or 2) make a new game which probably will teach me those lessons and new ones. ANSWER I choose @ashes999 answer because, in my case, I think I should polish more and try to "ship" the game. I think all the other answers are also important to bear in mind, so if you came here having the same question, before taking a rush decision read all the discussion. Thank you all!

    Read the article

  • map data structure in pacman

    - by Sam Fisher
    I am trying to make a Pac-Man game in C# using GDI+. I have done some basic work, and I have previously replicated games like Copter It and Minesweeper. But I am confused about how to implement the map in Pac-Man, meaning which data structure to use, so I can use it both for moving AI-controlled objects and for checking collisions with walls. I thought of a 2D array of ints, but that didn't quite make sense to me. Looking for some help, thanks.
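
    For what it's worth, a 2D grid of tile values is the standard representation for a Pac-Man style maze: each cell stores what occupies it (wall, dot, empty), grid indices come from dividing pixel positions by the tile size, and both AI movement and wall collisions reduce to array lookups. A minimal sketch; the enum, class and method names are mine.

        // Minimal tile-map sketch for a Pac-Man style maze.
        enum Tile { Empty, Wall, Dot }

        class Maze
        {
            const int TileSize = 16;            // pixels per tile
            Tile[,] tiles;                      // tiles[row, column]

            public Maze(int rows, int cols) { tiles = new Tile[rows, cols]; }

            // Convert a pixel position to grid indices.
            public (int row, int col) ToCell(float px, float py)
                => ((int)(py / TileSize), (int)(px / TileSize));

            // Wall check used by both the player and the AI before moving into a cell.
            public bool IsWalkable(int row, int col)
                => row >= 0 && col >= 0
                   && row < tiles.GetLength(0) && col < tiles.GetLength(1)
                   && tiles[row, col] != Tile.Wall;
        }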

    Read the article

  • Time calculation between openGL update calls.

    - by Vijayendra
    In XNA, the system calls the update and draw functions with timing information, such as how much time has passed since update was last called. This makes it easy to integrate over time and do animation calculations accordingly. But I don't see any such mechanism in OpenGL; it seems OpenGL requires programmers to roll their own implementation, which could be buggy or inefficient. Is there any standard (and efficient) code that demonstrates this practice with OpenGL?
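
    OpenGL itself has no game loop, so the elapsed time is normally measured by the application with whatever high-resolution clock the platform offers (glutGet(GLUT_ELAPSED_TIME), QueryPerformanceCounter, std::chrono, and so on). The pattern is the same everywhere; here is a sketch using a .NET Stopwatch, since the exact timer is interchangeable, and `running`, `Update` and `Draw` are placeholders for your own loop.

        using System.Diagnostics;

        // Sketch: measure the time between frames and pass it to Update, regardless of API.
        var clock = Stopwatch.StartNew();
        double previous = clock.Elapsed.TotalSeconds;

        while (running)
        {
            double now = clock.Elapsed.TotalSeconds;
            float deltaSeconds = (float)(now - previous);   // time since the last iteration
            previous = now;

            Update(deltaSeconds);                           // advance animations by deltaSeconds
            Draw();
        }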

    Read the article

  • Determinism in multiplayer simulation with Box2D, and single computer

    - by Jake
    I wrote a small test multiplayer car-driving game with Box2D, using TCP server-client communication. I ran 1 instance of server.exe and 2 instances of client.exe on the same machine where I code and compile the executables. I type inputs (WASD for simple car movement) into one of the 2 clients, and both clients update the simulation. There are 2 cars in the simulation. As long as the cars do not collide, I get identical output on both client.exe instances; I can drive the car(s) around for as long as I like and they still update the same. However, as soon as the cars start to collide, they very quickly go out of sync.

    My tools: Windows 7, C++, MSVS 2010, Box2D, freeGlut.

    My pseudocode:

        // client.exe
        void timer(int value)
        {
            tcpServer.send(my_inputs);

            foreach(i = player including myself)
                inputs[i] = tcpServer.receive();

            foreach(i = player including myself)
                players[i].process(inputs[i]);

            myb2World.step(33, 8, 6); // Box2D world step simulation

            foreach(i = player including myself)
                renderer.render(player[i]);

            glutTimerFunc(33, timer, 0);
        }

        // server.exe
        void serviceloop
        {
            while(all clients alive)
            {
                foreach(c = clients)
                    tcpClients[c].receive(&inputs[c]);

                // send input of each client to all clients
                foreach(source = clients)
                {
                    foreach(dest = clients)
                    {
                        tcpClients[dest].send(inputs[source]);
                    }
                }
            }
        }

    I have read all over the internet and on SE the following claims (paraphrased): Box2D is deterministic as long as the floating-point architecture/implementation is the same, and (for any deterministic engine) determinism is guaranteed if recorded inputs are played back on the same machine with an exe compiled by the same compiler on the same machine. Additionally, my server.exe and client.exe game loops are single-threaded, with blocking socket calls and a fixed time step. Question: Can anyone explain what I did wrong to get different Box2D output?

    Read the article

  • Examples of good Javascript/HTML5 based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.) are there any good examples of web-based games built on completely open standards (meaning Javascript, HTML and CSS)? I see a lot of examples of pure HTML5 implementations of what was once only in Flash (like stuff here: http://www.html5rocks.com/) but not many games, a domain which still seem dominated by Flash. I'm curious what's possible and what the limitations are.

    Read the article

  • How to correctly export UV coordinates from Blender

    - by KlashnikovKid
    Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now, and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the .obj file, because the texture is mapped differently in my game engine. In Blender I've played with the texture panel and its mapping options, and I've noticed they don't appear to affect the exported .obj file's UV coordinates. So I guess my question is: is there something I need to do prior to exporting in order to bake the correct UV coordinates into the .obj file? Or is there something else that needs to be done to massage the texture coordinates for sampling? Or any thoughts at all on what could be going wrong? (Also, here is a screenshot of my diffuse texture in Blender and in the game engine. As you can see in the image, I have the same problem with a simple test cube not getting correct UVs either.) http://www.digitalinception.net/blenderSS.png http://www.digitalinception.net/gameSS.png

    Read the article

  • Sound not playing on Windows XP - SoundEffect or Song: Monogame

    - by ashes999
    I'm trying to integrate sound into my Monogame game. I don't have the content pipeline hack -- just straight Monogame (Beta 3) at this point. (I tried adding the content pipeline, but ran into some issues.) I added a .wav file to my /Content directory, and I can create and instantiate both SoundEffect and Song classes. However, both show durations of 00:00:00 (on a ten-second long file), and neither plays. I can call LoadContent without any issue. But when I call Play, nothing plays. I've tried a couple of different sounds, and different formats (MP3 and WAV) to rule that out. Only WAV seems to even load without crashing out, but it doesn't play. There seems to be a GitHub issue that fixes this problem in 2.5.1. Downgrading to 2.5.1 doesn't fix this problem; it seems like it's fixed in 3.0 (_data is set in the SoundEffect instance). This issue only occurs on Windows XP. I tested it on a Windows 7 laptop, and the sound plays fine.

    Read the article

  • Unity3D: How to make the camera focus a moving game object with ITween?

    - by nathan
    I'm trying to write a solar system with Unity3D. Planets are sphere game objects rotating around another sphere game object representing the star. What I want to achieve is to let the user click on a planet, zoom the camera in on that planet, and then make the camera follow it and keep it centered on the screen while it keeps moving around the star. I decided to use the iTween library, and so far I was able to create the zoom effect using iTween.MoveUpdate. My problem is that the focused planet does not stay properly centered as it moves. Here is the relevant part of my script:

        void Update()
        {
            if (Input.GetButtonDown("Fire1"))
            {
                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit, Mathf.Infinity, concernedLayers))
                {
                    selectedPlanet = hit.collider.gameObject;
                }
            }
        }

        void LateUpdate()
        {
            if (selectedPlanet != null)
            {
                Vector3 pos = selectedPlanet.transform.position;
                pos.z = selectedPlanet.transform.position.z - selectedPlanet.transform.localScale.z;
                pos.y = selectedPlanet.transform.position.y;
                iTween.MoveUpdate(Camera.main.gameObject, pos, 2);
            }
        }

    What do I need to add to this script to make the selected planet stay centered on the screen? I hosted my current project as a web player application so you can see what's going wrong. You can access it here.
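
    One thing the script above never does is turn the camera toward the planet; MoveUpdate only translates it. A common follow-camera pattern is to move toward the target plus an offset and then aim at it every LateUpdate. A sketch without iTween; followOffset and followSpeed are my own names, not part of the question's script.

        // Follow-camera sketch: translate toward the planet plus an offset, then look at it.
        Vector3 followOffset = new Vector3(0f, 0f, -10f);   // how far behind the planet to sit
        float followSpeed = 2f;

        void LateUpdate()
        {
            if (selectedPlanet == null) return;

            Vector3 targetPos = selectedPlanet.transform.position + followOffset;
            Camera.main.transform.position =
                Vector3.Lerp(Camera.main.transform.position, targetPos, followSpeed * Time.deltaTime);

            // Keeping the planet centered is mostly a matter of orientation:
            Camera.main.transform.LookAt(selectedPlanet.transform);
        }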

    Read the article

  • The technology behind 22can's curiosity

    - by Cameron Scully
    I don't have a lot of experience with mobile apps, and I definitely don't know much about MMOs, but I was wondering what the basic architecture of a game like that would be (understandably, some don't consider it a game, but it must use some game theory and implementation). Mainly, how are they able to send/receive real-time feedback of the cube being chipped away by thousands of players on their mobile devices? How is the data for the cube's millions of pieces stored and accessed so quickly? Thanks

    Read the article

  • Draw Rectangle To All Dimensions of Image

    - by opiop65
    I have some rudimentary collision code:

        public class Collision {

            static boolean isColliding = false;
            static Rectangle player;
            static Rectangle female;

            public static void collision() {
                Rectangle player = Game.Playerbounds();
                Rectangle female = Game.Femalebounds();

                if (player.intersects(female)) {
                    isColliding = true;
                } else {
                    isColliding = false;
                }
            }
        }

    And this is the rectangle code:

        public static Rectangle Playerbounds() {
            return(new Rectangle(posX, posY, 25, 25));
        }

        public static Rectangle Femalebounds() {
            return(new Rectangle(femaleX, femaleY, 25, 25));
        }

    My InputHandling class:

        public static void movePlayer(GameContainer gc, int delta) {
            Input input = gc.getInput();

            if (input.isKeyDown(input.KEY_W)) {
                Game.posY -= walkSpeed * delta;
                walkUp = true;
                if (Collision.isColliding == true) {
                    Game.posY += walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_S)) {
                Game.posY += walkSpeed * delta;
                walkDown = true;
                if (Collision.isColliding == true) {
                    Game.posY -= walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_D)) {
                Game.posX += walkSpeed * delta;
                walkRight = true;
                if (Collision.isColliding == true) {
                    Game.posX -= walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_A)) {
                Game.posX -= walkSpeed * delta;
                walkLeft = true;
                if (Collision.isColliding == true) {
                    Game.posX += walkSpeed * delta;
                }
            }
        }

    The code works partially. Only the right and top sides of the images collide. How do I correct the rectangle so it will draw on all sides? Thanks for any suggestions!

    Read the article

  • How AlphaBlend BlendState works in XNA when accumulating light into a RenderTarget?

    - by cubrman
    I am using a deferred rendering engine from Catalin Zima's tutorial. His lighting shader returns the color of the light in the RGB channels and the specular component in the alpha channel. Here is how light gets accumulated:

        Game.GraphicsDevice.SetRenderTarget(LightRT);
        Game.GraphicsDevice.Clear(Color.Transparent);
        Game.GraphicsDevice.BlendState = BlendState.AlphaBlend;

        // Continuously draw 3d spheres with lighting pixel shader.
        ...

        Game.GraphicsDevice.BlendState = BlendState.Opaque;

    MSDN states that the AlphaBlend field of the BlendState class uses the following formula for alpha blending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the render target. My question is: why are my colors accumulated correctly in the light render target even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        if (light4.a == 0)
            light4 = 0;
        return light4;

    This prevents lighting from getting accumulated and, subsequently, drawn on the screen. But when I do the following:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        return light4;

    the light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above, (source × 0) + (destination × 1) should equal destination, so the LightRT render target must not change when I draw light spheres into it! It feels like the GPU is using additive blending instead: (source × Blend.One) + (destination × Blend.One).
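
    For comparison, the blend factors can be spelled out explicitly with a custom BlendState. The sketch below builds one that matches the non-premultiplied formula quoted from MSDN, (source × SourceAlpha) + (destination × InverseSourceAlpha); as far as I know, XNA 4.0's built-in BlendState.AlphaBlend is set up for premultiplied alpha (source factor Blend.One), which would account for the additive-looking behaviour when the shader writes colors with zero alpha. The variable name is mine.

        // Custom blend state with the classic non-premultiplied factors, for comparison
        // with the built-in BlendState.AlphaBlend (and BlendState.NonPremultiplied).
        BlendState nonPremultiplied = new BlendState
        {
            ColorSourceBlend = Blend.SourceAlpha,
            ColorDestinationBlend = Blend.InverseSourceAlpha,
            AlphaSourceBlend = Blend.SourceAlpha,
            AlphaDestinationBlend = Blend.InverseSourceAlpha
        };
        Game.GraphicsDevice.BlendState = nonPremultiplied;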

    Read the article

  • Unity - Invert Movement Direction

    - by m41n
    I am currently developing a 2.5D sidescroller in Unity (just starting to get to know it). I added a turn script to have my character face the appropriate direction of movement, but now something about the movement itself is behaving oddly. When I press the right arrow key, the character moves and faces towards the right. If I press the left arrow key, the character faces towards the left, but "moon-walks" to the right. I already had enough trouble getting the turning to work, so I am trying to find a simple solution, if possible without too much reworking of the rest of my project. I was thinking of just inverting the movement direction for a specific input-key/facing-direction combination. If anyone knows how to do something like that, I'd be thankful for the help. If it helps, the following is the current part of my "AnimationChooser" script that handles the turning:

        Quaternion targetf = Quaternion.Euler(0, 270, 0); // Vector3 direction when facing the front way
        Quaternion targetb = Quaternion.Euler(0, 90, 0);  // Vector3 direction when facing the opposite way

        if (Input.GetAxisRaw ("Vertical") < 0.0f) // if input is lower than 0, turn to targetf
        {
            transform.rotation = Quaternion.Lerp(transform.rotation, targetf, Time.deltaTime * smooth);
        }
        if (Input.GetAxisRaw ("Vertical") > 0.0f) // if input is higher than 0, turn to targetb
        {
            transform.rotation = Quaternion.Lerp(transform.rotation, targetb, Time.deltaTime * smooth);
        }

    The values (270 and 90) and the axis are like this because I had to turn the model itself in the first place to face towards either of the movement directions.

    Read the article

  • High level project workflow

    - by user775060
    We are a small software company trying our hand at our second game. Since our first game's process was a living nightmare (we used a web-development workflow), I have decided to educate myself on how to manage a game project at a high level. How does your process work, from idea to launch, preferably in situations where you have a team that needs to cooperate? I've seen these two links, which are useful in a way, but I was wondering if there are better or more comprehensive ways to do this: http://www.goodcontroller.com/blog/?p=136 http://gogogic.wordpress.com/2009/02/09/symbol6-how-we-created-an-iphone-game/ All input would be infinitely appreciated.

    Read the article
