Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me to either perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas.

    For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:

      - Reading 5 private entity variables (fetched from datastore)
      - Fetching as many as 20 static variables (from datastore or persisted in server memory)
      - Writing 5 entity variables

    Clients of the game would authenticate and set state directly against GAE as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute; this would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states and pushing the new state variables back to the datastore; this would be more bandwidth intensive for the datastore.
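
    For a sense of scale, here is a back-of-the-envelope quota calculation; this is a hypothetical Java snippet that just multiplies out the question's own figures, assuming the worst case where all 20 static variables come from the datastore:

        public class QuotaEstimate {
            public static void main(String[] args) {
                long entities = 10000;
                long readsPerEntity = 5 + 20;  // private vars + worst-case static vars
                long writesPerEntity = 5;
                long updatesPerDay = 24 * 60;  // one world update per minute

                long dailyReads = entities * readsPerEntity * updatesPerDay;   // 360,000,000
                long dailyWrites = entities * writesPerEntity * updatesPerDay; //  72,000,000

                System.out.println("datastore reads/day:  " + dailyReads);
                System.out.println("datastore writes/day: " + dailyWrites);
            }
        }

    Either architecture pays roughly this datastore bill; caching the 20 static variables in server memory is the single biggest quota lever, since it cuts reads by 80% regardless of where the update runs.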

    Read the article

  • How can I use the dualforward parameter in my unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of Unity and I would like to combine lightmaps with specularity and normal maps. After doing a -bunch- of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of Unity, which doesn't support deferred rendering/easy use of dual lightmaps. However, it looks like it's possible by writing a custom shader: using the "dualforward" parameter in a shader, switching the lightmapping mode to "dual lightmaps" and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow for a combination of lightmaps and normal maps). So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters:

        Shader "Bumped Specular Dual Lightmaps" {
            Properties {
                _Color ("Main Color", Color) = (1,1,1,1)
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
                _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
                _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
                _BumpMap ("Normalmap", 2D) = "bump" {}
            }
            SubShader {
                Tags { "RenderType"="Opaque" }
                LOD 400
                CGPROGRAM
                #pragma surface surf BlinnPhong dualforward
                sampler2D _MainTex;
                sampler2D _BumpMap;
                fixed4 _Color;
                half _Shininess;
                struct Input {
                    float2 uv_MainTex;
                    float2 uv_BumpMap;
                };
                void surf (Input IN, inout SurfaceOutput o) {
                    fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
                    o.Albedo = tex.rgb * _Color.rgb;
                    o.Gloss = tex.a;
                    o.Alpha = tex.a * _Color.a;
                    o.Specular = _Shininess;
                    o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
                }
                ENDCG
            }
            FallBack "Specular"
        }

    This, however, doesn't seem to work. When I keep the "dualforward" param, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" param, they look like normal lightmapped objects with no normal maps or specularity. I noticed that support for "dualforward" seems to be new in v3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Anybody have any advice for me?

    Read the article

  • How do I make this rendering thread run together with the main one?

    - by funk
    I'm developing an Android game and need to show an animation of an exploding bomb. It's a spritesheet with 1 row and 13 different images. Each image should be displayed in sequence, 200 ms apart. There is one Thread running for the entire game:

        package com.android.testgame;

        import android.graphics.Canvas;

        public class GameLoopThread extends Thread {
            static final long FPS = 10; // 10 frames per second
            private final GameView view;
            private boolean running = false;

            public GameLoopThread(GameView view) {
                this.view = view;
            }

            public void setRunning(boolean run) {
                running = run;
            }

            @Override
            public void run() {
                long ticksPS = 1000 / FPS;
                long startTime;
                long sleepTime;
                while (running) {
                    Canvas c = null;
                    startTime = System.currentTimeMillis();
                    try {
                        c = view.getHolder().lockCanvas();
                        synchronized (view.getHolder()) {
                            view.onDraw(c);
                        }
                    } finally {
                        if (c != null) {
                            view.getHolder().unlockCanvasAndPost(c);
                        }
                    }
                    sleepTime = ticksPS - (System.currentTimeMillis() - startTime);
                    try {
                        if (sleepTime > 0) {
                            sleep(sleepTime);
                        } else {
                            sleep(10);
                        }
                    } catch (Exception e) {}
                }
            }
        }

    As far as I know I would have to create a second Thread for the bomb:

        package com.android.testgame;

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Rect;

        public class Bomb {
            private final Bitmap bmp;
            private final int width;
            private final int height;
            private int currentFrame = 0;
            private static final int BMPROWS = 1;
            private static final int BMPCOLUMNS = 13;
            private int x = 0;
            private int y = 0;

            public Bomb(GameView gameView, Bitmap bmp) {
                this.width = bmp.getWidth() / BMPCOLUMNS;
                this.height = bmp.getHeight() / BMPROWS;
                this.bmp = bmp;
                x = 250;
                y = 250;
            }

            private void update() {
                currentFrame++;
                new BombThread().start();
            }

            public void onDraw(Canvas canvas) {
                update();
                int srcX = currentFrame * width;
                int srcY = height;
                Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
                Rect dst = new Rect(x, y, x + width, y + height);
                canvas.drawBitmap(bmp, src, dst, null);
            }

            class BombThread extends Thread {
                @Override
                public void run() {
                    try {
                        sleep(200);
                    } catch (InterruptedException e) {}
                }
            }
        }

    The Threads would then have to run simultaneously. How do I do this?
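
    For what it's worth, sprite animation usually doesn't need its own thread at all; the existing game loop can advance the frame based on elapsed time. A minimal hedged sketch (class and field names are invented, not from the code above):

        public class BombAnimation {
            private static final int FRAME_COUNT = 13;
            private static final long FRAME_DURATION_MS = 200;

            private int currentFrame = 0;
            private long lastFrameChange = System.currentTimeMillis();

            // Called once per game-loop tick, from the same thread that draws.
            public void update() {
                long now = System.currentTimeMillis();
                if (now - lastFrameChange >= FRAME_DURATION_MS) {
                    currentFrame = (currentFrame + 1) % FRAME_COUNT;
                    lastFrameChange = now;
                }
            }

            public int getCurrentFrame() {
                return currentFrame;
            }
        }

    The game loop above already ticks ten times a second, so calling update() from onDraw() advances the bomb every 200 ms with no synchronization between threads at all.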

    Read the article

  • Why are textures always square powers of two? What if they aren't?

    - by Keavon
    Why is the resolution of textures in games always a power of two (128x128, 256x256, 512x512, 1024x1024, etc.)? Wouldn't it be smart to save on the game's file size and make the texture exactly fit the UV-unwrapped model? What would happen if there was a texture that was not a power of two? Would it be incorrect to have a texture be something like 256x512 or 512x1024, or would rectangular sizes like these cause the same problems as other non-power-of-two textures?

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0 and I came up with a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg

    This effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a PrelightingRenderer class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Dhpoware;
        using Microsoft.Xna.Framework.Content;

        namespace XNAFirstPersonCamera
        {
            public class PrelightingRenderer
            {
                // Normal, depth, and light map render targets
                RenderTarget2D depthTarg;
                RenderTarget2D normalTarg;
                RenderTarget2D lightTarg;

                // Depth/normal effect and light mapping effect
                Effect depthNormalEffect;
                Effect lightingEffect;

                // Point light (sphere) mesh
                Model lightMesh;

                // List of models, lights, and the camera
                public List<CModel> Models { get; set; }
                public List<PPPointLight> Lights { get; set; }
                public FirstPersonCamera Camera { get; set; }

                GraphicsDevice graphicsDevice;
                int viewWidth = 0, viewHeight = 0;

                public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content)
                {
                    viewWidth = GraphicsDevice.Viewport.Width;
                    viewHeight = GraphicsDevice.Viewport.Height;

                    // Create the three render targets
                    depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false,
                        SurfaceFormat.Single, DepthFormat.Depth24);
                    normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false,
                        SurfaceFormat.Color, DepthFormat.Depth24);
                    lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false,
                        SurfaceFormat.Color, DepthFormat.Depth24);

                    // Load effects
                    depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal");
                    lightingEffect = Content.Load<Effect>(@"Effects\PPLight");

                    // Set effect parameters to light mapping effect
                    lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth);
                    lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight);

                    // Load point light mesh and set light mapping effect to it
                    lightMesh = Content.Load<Model>(@"Models\PPLightMesh");
                    lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect;

                    this.graphicsDevice = GraphicsDevice;
                }

                public void Draw()
                {
                    drawDepthNormalMap();
                    drawLightMap();
                    prepareMainPass();
                }

                void drawDepthNormalMap()
                {
                    // Set the render targets to 'slots' 1 and 2
                    graphicsDevice.SetRenderTargets(normalTarg, depthTarg);

                    // Clear the render target to 1 (infinite depth)
                    graphicsDevice.Clear(Color.White);

                    // Draw each model with the PPDepthNormal effect
                    foreach (CModel model in Models)
                    {
                        model.CacheEffects();
                        model.SetModelEffect(depthNormalEffect, false);
                        model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position);
                        model.RestoreEffects();
                    }

                    // Un-set the render targets
                    graphicsDevice.SetRenderTargets(null);
                }

                void drawLightMap()
                {
                    // Set the depth and normal map info to the effect
                    lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg);
                    lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg);

                    // Calculate the view * projection matrix
                    Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix;

                    // Set the inverse of the view * projection matrix to the effect
                    Matrix invViewProjection = Matrix.Invert(viewProjection);
                    lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection);

                    // Set the render target to the graphics device
                    graphicsDevice.SetRenderTarget(lightTarg);

                    // Clear the render target to black (no light)
                    graphicsDevice.Clear(Color.Black);

                    // Set render states to additive (lights will add their influences)
                    graphicsDevice.BlendState = BlendState.Additive;
                    graphicsDevice.DepthStencilState = DepthStencilState.None;

                    foreach (PPPointLight light in Lights)
                    {
                        // Set the light's parameters to the effect
                        light.SetEffectParameters(lightingEffect);

                        // Calculate the world * view * projection matrix and set it to the effect
                        Matrix wvp = (Matrix.CreateScale(light.Attenuation)
                            * Matrix.CreateTranslation(light.Position)) * viewProjection;
                        lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp);

                        // Determine the distance between the light and camera
                        float dist = Vector3.Distance(Camera.Position, light.Position);

                        // If the camera is inside the light-sphere, invert the cull mode
                        // to draw the inside of the sphere instead of the outside
                        if (dist < light.Attenuation)
                            graphicsDevice.RasterizerState = RasterizerState.CullClockwise;

                        // Draw the point-light-sphere
                        lightMesh.Meshes[0].Draw();

                        // Revert the cull mode
                        graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
                    }

                    // Revert the blending and depth render states
                    graphicsDevice.BlendState = BlendState.Opaque;
                    graphicsDevice.DepthStencilState = DepthStencilState.Default;

                    // Un-set the render target
                    graphicsDevice.SetRenderTarget(null);
                }

                void prepareMainPass()
                {
                    foreach (CModel model in Models)
                        foreach (ModelMesh mesh in model.Model.Meshes)
                            foreach (ModelMeshPart part in mesh.MeshParts)
                            {
                                // Set the light map and viewport parameters to each model's effect
                                if (part.Effect.Parameters["LightTexture"] != null)
                                    part.Effect.Parameters["LightTexture"].SetValue(lightTarg);
                                if (part.Effect.Parameters["viewportWidth"] != null)
                                    part.Effect.Parameters["viewportWidth"].SetValue(viewWidth);
                                if (part.Effect.Parameters["viewportHeight"] != null)
                                    part.Effect.Parameters["viewportHeight"].SetValue(viewHeight);
                            }
                }
            }
        }

    It uses three effects. PPDepthNormal.fx:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float3 Normal : NORMAL0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 Depth : TEXCOORD0;
            float3 Normal : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 viewProjection = mul(View, Projection);
            float4x4 worldViewProjection = mul(World, viewProjection);
            output.Position = mul(input.Position, worldViewProjection);
            output.Normal = mul(input.Normal, World);
            // Position's z and w components correspond to the distance
            // from camera and distance of the far plane respectively
            output.Depth.xy = output.Position.zw;
            return output;
        }

        // We render to two targets simultaneously, so we can't
        // simply return a float4 from the pixel shader
        struct PixelShaderOutput
        {
            float4 Normal : COLOR0;
            float4 Depth : COLOR1;
        };

        PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
        {
            PixelShaderOutput output;
            // Depth is stored as distance from camera / far plane distance
            // to get value between 0 and 1
            output.Depth = input.Depth.x / input.Depth.y;
            // Normal map simply stores X, Y and Z components of normal
            // shifted from (-1 to 1) range to (0 to 1) range
            output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;
            // Other components must be initialized to compile
            output.Depth.a = 1;
            output.Normal.a = 1;
            return output;
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPLight.fx:

        float4x4 WorldViewProjection;
        float4x4 InvViewProjection;

        texture2D DepthTexture;
        texture2D NormalTexture;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 LightColor;
        float3 LightPosition;
        float LightAttenuation;

        // Include shared functions
        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 LightPosition : TEXCOORD0;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = mul(input.Position, WorldViewProjection);
            output.LightPosition = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Find the pixel coordinates of the input position in the depth
            // and normal textures
            float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Extract the normal from the normal map and move from
            // 0 to 1 range to -1 to 1 range
            float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2;

            // Perform the lighting calculations for a point light
            float3 lightDirection = normalize(LightPosition - position);
            float lighting = clamp(dot(normal, lightDirection), 0, 1);

            // Attenuate the light to simulate a point light
            float d = distance(LightPosition, position);
            float att = 1 - pow(d / LightAttenuation, 6);

            return float4(LightColor * lighting * att, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPShared.vsi has some common functions:

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

    Finally, from the Game class I set things up in LoadContent with:

        effect = Content.Load<Effect>(@"Effects\PPModel");
        models[0] = new CModel(Content.Load<Model>(@"Models\teapot"),
            new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f,
            Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
        house = new CModel(Content.Load<Model>(@"Models\house"),
            new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f,
            Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
        models[0].SetModelEffect(effect, true);
        house.SetModelEffect(effect, true);
        renderer = new PrelightingRenderer(GraphicsDevice, Content);
        renderer.Models = new List<CModel>();
        renderer.Models.Add(house);
        renderer.Models.Add(models[0]);
        renderer.Lights = new List<PPPointLight>()
        {
            new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000)
        };

    where PPModel.fx is:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        texture2D BasicTexture;
        sampler2D basicTextureSampler = sampler_state
        {
            texture = <BasicTexture>;
            addressU = wrap;
            addressV = wrap;
            minfilter = anisotropic;
            magfilter = anisotropic;
            mipfilter = linear;
        };

        bool TextureEnabled = true;

        texture2D LightTexture;
        sampler2D lightSampler = sampler_state
        {
            texture = <LightTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 AmbientColor = float3(0.15, 0.15, 0.15);
        float3 DiffuseColor;

        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 worldViewProjection = mul(World, mul(View, Projection));
            output.Position = mul(input.Position, worldViewProjection);
            output.PositionCopy = output.Position;
            output.UV = input.UV;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Sample model's texture
            float3 basicTexture = tex2D(basicTextureSampler, input.UV);
            if (!TextureEnabled)
                basicTexture = float4(1, 1, 1, 1);

            // Extract lighting value from light map
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
            float3 light = tex2D(lightSampler, texCoord);
            light += AmbientColor;

            return float4(basicTexture * DiffuseColor * light, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    I don't have any idea what's wrong... Googling the web, I found that this tutorial may have some bugs, but I don't know if the fault is in the light model (the sphere), in a shader, or in the PrelightingRenderer class. Any help is very appreciated, thank you for reading!

    Read the article

  • Making more complicated systems(entity-component-system model question)

    - by winch
    I'm using a model where entities are collections of components and components are just data. All the logic goes into systems which operate on components. Making basic systems (for rendering and handling collision) was easy. But how do I do more complicated systems? For example, in a CollisionSystem I can check if entity A collides with entity B. I have this code in the CollisionSystem for checking if B damages A:

        if (collides(a, b)) {
            HealthComponent* hc = a->get<HealthComponent>();
            hc->reduceHealth(b->get<DamageComponent>()->getDamage());
        }

    But I feel that this code shouldn't belong to the collision system. Where should code like this live, and which additional systems should I create to make this code generic?
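
    One common answer is to have the CollisionSystem only emit collision events and let a separate DamageSystem consume them. A hedged sketch in Java (the snippet above is C++, and all names here are invented):

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Plain data, matching the components-are-data model described above.
        record CollisionEvent(int entityA, int entityB) {}

        class EventBus {
            private final Queue<CollisionEvent> events = new ArrayDeque<>();
            void publish(CollisionEvent e) { events.add(e); }
            CollisionEvent poll() { return events.poll(); }
        }

        // CollisionSystem only detects; it knows nothing about health or damage.
        class CollisionSystem {
            void update(EventBus bus /*, entities... */) {
                // if (collides(a, b)) bus.publish(new CollisionEvent(a, b));
            }
        }

        // DamageSystem reacts to collisions; it knows nothing about geometry.
        class DamageSystem {
            void update(EventBus bus /*, component stores... */) {
                CollisionEvent e;
                while ((e = bus.poll()) != null) {
                    // health[e.entityA()] -= damage[e.entityB()];
                }
            }
        }

    The event queue is what makes it generic: a SoundSystem or ParticleSystem can react to the same CollisionEvent without the CollisionSystem knowing any of them exist.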

    Read the article

  • How to import or "using" a custom class in Unity script?

    - by Bobbake4
    I have downloaded the JSONObject plugin for parsing JSON in Unity, but when I use it in a script I get an error indicating JSONObject cannot be found. My question is: how do I use a custom class defined inside another class? I know I need a using directive to solve this, but I am not sure of the path to these custom objects I have imported. They are in the root project folder, inside a JSONObject folder, and the class is called JSONObject. Thanks

    Read the article

  • User generated content: a basic yet simple to use OR a complex yet powerful solution?

    - by ne5tebiu
    As stated above, which solution is better for a game based on user-generated content? The simple solution (an in-game editor) is great for gamers without experience in coding and so on; this way every player could populate the game with content, but the content would be very limited. The complex solution would allow content with almost no limitations, but casual gamers could hardly make any content at all. If both solutions are used, the quality behind the second solution would be more valuable than the first solution's quantity. However, making a powerful in-game editor could take even more time and manpower than the actual game, and every gamer would have to learn the new complex tool, understand it, and master it if he or she wants to make quality content.

    Read the article

  • Creating a level editor event system

    - by Vaughan Hilts
    I'm designing a level editor for a game, and I'm trying to create sort of an 'event' system so I can chain things together. If anyone has used RPG Maker, I'm trying to do something similar to its system. Right now, I have an 'EventTemplate' class and a bunch of sub-classed 'EventNodes' which basically just contain properties of their data. Originally, the IAction and IExecute interfaces performed logic, but that was moved into a DLL to share between the two projects. Question: How can I abstract logic from data in this case? Is my model wrong? Isn't all the type casting expensive when parsing these actions all the time? Should I write a 'Processor' class to execute them all? But then these actions that can do all sorts of things need to interact with all sorts of sub-systems.
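
    One way to keep data and logic apart, sketched below as a hedged example in modern Java (21+, invented names; the project above is C#, but the shape is the same): nodes stay pure data, and a single processor dispatches on node type, so sub-systems are referenced in exactly one place:

        import java.util.List;

        // Pure data: what the editor saves and loads. No behavior here.
        sealed interface EventNode permits ShowTextNode, GiveItemNode {}
        record ShowTextNode(String text) implements EventNode {}
        record GiveItemNode(String itemId, int count) implements EventNode {}

        // All logic lives in the processor, which owns references to sub-systems.
        class EventProcessor {
            void run(List<EventNode> chain) {
                for (EventNode node : chain) {
                    // Pattern matching replaces scattered casts with one dispatch point;
                    // the sealed interface makes the switch exhaustive at compile time.
                    switch (node) {
                        case ShowTextNode n -> System.out.println(n.text());
                        case GiveItemNode n -> System.out.println("give " + n.count() + "x " + n.itemId());
                    }
                }
            }
        }

    The per-node cast cost is negligible next to everything else a frame does; the real win of one dispatch point is that adding a node type produces a compile error everywhere it isn't handled yet.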

    Read the article

  • Would someone please explain Octree Collisions to me?

    - by A-Type
    I've been reading everything I can find on the subject and I feel like the pieces are just about to fall into place, but I just can't quite get it. I'm making a space game, where collisions will occur between planets, ships, asteroids, and the sun. Each of these objects can be subdivided into 'chunks', which I have implemented to speed up rendering (the vertices can and will change often at runtime, so I've separated the buffers). These subdivisions also have bounding primitives to test for collision. All of these objects are made of blocks (yeah, it's that kind of game). Blocks can also be tested for rough collisions, though they do not have individual bounding primitives for memory reasons. I think the rough testing seems to be sufficient, though. So, collision needs to be fairly precise, at block resolution: some functions rely on two blocks colliding, and, of course, attacking specific blocks is important. Now what I am struggling with is filtering my collision pairs. As I said, I've read a lot about octrees, but I'm having trouble applying them to my situation, as many tutorials are vague with very little code. My main issues are:

      - Are octrees recalculated each frame, or are they stored in memory and objects are shuffled into different divisions as they move? Despite all my reading I still am not clear on this... the vagueness of it all has been frustrating.
      - How far do octrees subdivide? Planets in my game are quite large, while asteroids are smaller. Do I subdivide to the size of the planet, or the asteroid (where a planet spans multiple divisions)? Or is the limit something else entirely, like the number of elements in a division?
      - Should I load objects into the octrees as 'chunks' or whole, then break them into chunks later? This could be specific to my implementation, I suppose.

    I was going to ask about how big my root needed to be, but I did manage to find this question, and the second answer seems sufficient for me. I'm afraid I don't really get what he means by adding new nodes and doing subdivisions upon adding new objects, probably because I'm confused about whether the tree is maintained in memory or recalculated per-frame.
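
    On the first two points, a common pattern is to keep the tree in memory and subdivide a node when it holds more than a fixed number of objects, not down to any particular object size. A hedged Java sketch (invented names, objects reduced to center points for brevity):

        import java.util.ArrayList;
        import java.util.List;

        class AABB {
            final float x, y, z, half; // cube centered at (x, y, z) with half-extent 'half'
            AABB(float x, float y, float z, float half) { this.x = x; this.y = y; this.z = z; this.half = half; }
        }

        class OctreeNode {
            static final int MAX_OBJECTS = 8; // split threshold: element count, not object size
            static final int MAX_DEPTH = 8;   // hard cap so small objects can't recurse forever

            final AABB bounds;
            final int depth;
            final List<float[]> objects = new ArrayList<>();
            OctreeNode[] children; // null until this node splits

            OctreeNode(AABB bounds, int depth) { this.bounds = bounds; this.depth = depth; }

            void insert(float[] p) {
                if (children != null) { childFor(p).insert(p); return; }
                objects.add(p);
                if (objects.size() > MAX_OBJECTS && depth < MAX_DEPTH) split();
            }

            private void split() {
                children = new OctreeNode[8];
                float h = bounds.half / 2;
                for (int i = 0; i < 8; i++) {
                    float cx = bounds.x + ((i & 1) == 0 ? -h : h);
                    float cy = bounds.y + ((i & 2) == 0 ? -h : h);
                    float cz = bounds.z + ((i & 4) == 0 ? -h : h);
                    children[i] = new OctreeNode(new AABB(cx, cy, cz, h), depth + 1);
                }
                for (float[] p : objects) childFor(p).insert(p); // redistribute, then clear
                objects.clear();
            }

            private OctreeNode childFor(float[] p) {
                int i = (p[0] >= bounds.x ? 1 : 0) | (p[1] >= bounds.y ? 2 : 0) | (p[2] >= bounds.z ? 4 : 0);
                return children[i];
            }
        }

    The tree persists between frames; each frame, only objects that moved outside their node's bounds get removed and re-inserted. Volumes that straddle a child boundary are the wrinkle this sketch ignores; real implementations either keep them in the parent node or use a loose octree.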

    Read the article

  • How do I fix these compiler errors in Apple Crunch?

    - by BluFire
    I've been looking around and I finally got the full source code for a game called Apple Crunch from Google Code. But when I put it into my project, the source code produced many errors in the class files, such as:

      - cannot be resolved to a type
      - the constructor is undefined
      - the method method() is undefined for the type Sprite

    I downloaded the source directly from the command line and noticed errors popping up in my project. Since I couldn't figure out how to import the actual folder into my workspace (it wouldn't show up under existing projects), I decided to copy and overwrite the folders into the project. The errors were still there, so I looked at the class files and noticed that the classes with errors extended RokonActivity. I then proceeded to add the Rokon library to the libs folder in hopes of fixing the errors. Sadly it didn't work, and now I don't know what to do to fix the errors. How do I fix the errors without having to manually change the code? The source code should be fully functional, so why are there errors?

    Read the article

  • In a state machine, is it a good idea to separate states and transitions?

    - by codablank1
    I have implemented a small state machine in this way (in pseudo code):

        class Input {}

        class KeyInput inherits Input {
            public:
                enum { Key_A, Key_B, ..., }
        }

        class GUIInput inherits Input {
            public:
                enum { Button_A, Button_B, ..., }
        }

        enum Event { NewGame, Quit, OpenOptions, OpenMenu }

        class BaseState {
            String name;
            Event get_event (Input input);
            void handle (Event e); // event handling function
        }

        class Menu inherits BaseState {...}
        class InGame inherits BaseState {...}
        class Options inherits BaseState {...}

        class StateMachine {
            public:
                BaseState get_current_state () { return current_state; }

                void add_state (String name, BaseState state) { statesMap.insert(name, state); }

                // raise an exception if state not found
                BaseState get_state (String name) { return statesMap.find(name); }

                // raise an exception if state or next_state not found
                void add_transition (Event event, String state_name, String next_state_name) {
                    BaseState state = get_state(state_name);
                    BaseState next_state = get_state(next_state_name);
                    transitionsMap.insert(pair<event, state>, next_state);
                }

                // raise exception if couple not found
                BaseState get_next_state (Event event, BaseState state) {
                    return transitionsMap.find(pair<event, state>);
                }

                void handle (Input input) {
                    Event event = current_state.get_event(input);
                    current_state.handle(event);
                    current_state = get_next_state(event, current_state);
                }

            private:
                BaseState current_state;
                map<String, BaseState> statesMap; // map of all states in the machine
                // for each couple event/state, this map stores the next state
                map<pair<Event, BaseState>, BaseState> transitionsMap;
        }

    So, before getting the transition, I need to convert the key input or GUI input to the proper event, given the current state; thus the same key 'W' can launch a new game in the 'Menu' state or move a character forward in the 'InGame' state. Then I get the next state from the transitionsMap and I update the current state. Does this configuration seem valid to you? Is it a good idea to separate states and transitions? And I have some kind of trouble representing a 'null state' or a 'null event': what initial value can I give to the current state, and which one should be returned by get_state if it fails?
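
    On the 'null state'/'null event' question, one option is explicit sentinel values, so lookups never return null and a failed transition just keeps the current state. A hedged Java sketch (invented names, not a translation of the pseudo code above):

        import java.util.Map;
        import java.util.Optional;

        enum Event { NEW_GAME, QUIT, OPEN_OPTIONS, OPEN_MENU, NONE } // NONE = "no event"

        interface State {
            Event interpret(String input);     // translate raw input to an Event
            default void handle(Event e) {}
        }

        class StateMachine {
            static final State NULL_STATE = input -> Event.NONE; // inert sentinel state

            private State current = NULL_STATE;
            private final Map<Event, Map<State, State>> transitions;

            StateMachine(Map<Event, Map<State, State>> transitions) { this.transitions = transitions; }

            void handle(String input) {
                Event e = current.interpret(input);
                if (e == Event.NONE) return;          // nothing to do, stay put
                current.handle(e);
                // Optional makes "no transition defined" explicit instead of null.
                current = Optional.ofNullable(transitions.get(e))
                                  .map(m -> m.get(current))
                                  .orElse(current);   // unknown pair: keep current state
            }
        }

    A sentinel trades exceptions for silence: get_state raising on a bad name is usually right for programmer errors, while staying in the current state is usually right for a merely undefined transition.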

    Read the article

  • Assigning valid moves on board game

    - by Kunal4536
    I am making a 2D board game in Unity 4.3, similar to checkers. I have added an empty object at every point where a player can move and added a box collider to each empty object. I attached a click-to-move script to each player token. Now I want to assign valid moves, e.g. as shown in the picture... Players can only move on the vertices of each square, and a player can only move to an adjacent vertex. Thus it can only move from the red spot to the yellow ones and cannot move to the blue spots. There is another condition: if another player's token is on a yellow spot, then the player cannot move to that spot; instead it will have to go from red to a green spot. How can I find the valid moves of the player by scripting? I also have another problem with click-to-move: when I click, all the objects move to that position, but I only want to move a single token. What can I add to the script to select a specific object and then click to move only that object? Here is my script for click to move:

        var obj : Transform;
        private var hitPoint : Vector3;
        private var move : boolean = false;
        private var startTime : float;
        var speed = 1;

        function Update () {
            if (Input.GetKeyDown(KeyCode.Mouse0)) {
                var hit : RaycastHit; // no point storing this really
                var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                if (Physics.Raycast(ray, hit, 10000)) {
                    hitPoint = hit.point;
                    move = true;
                    startTime = Time.time;
                }
            }
            if (move) {
                obj.position = Vector3.Lerp(obj.position, hitPoint, Time.deltaTime * speed);
                if (obj.position == hitPoint) {
                    move = false;
                }
            }
        }
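
    For the valid-move rule itself, one way to think about it, independent of Unity, is to treat the board as a grid of vertices and compute moves as: adjacent vertex, minus anything occupied. A hedged sketch in plain Java (invented names):

        import java.util.ArrayList;
        import java.util.List;

        class Board {
            private final int size;         // vertices per side
            private final int[][] occupant; // 0 = empty, otherwise a player id

            Board(int size) { this.size = size; this.occupant = new int[size][size]; }

            void place(int player, int x, int y) { occupant[x][y] = player; }

            // Adjacent vertices (up/down/left/right) that are on the board and empty.
            List<int[]> validMoves(int x, int y) {
                int[][] steps = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
                List<int[]> moves = new ArrayList<>();
                for (int[] s : steps) {
                    int nx = x + s[0], ny = y + s[1];
                    if (nx >= 0 && nx < size && ny >= 0 && ny < size && occupant[nx][ny] == 0)
                        moves.add(new int[] { nx, ny });
                }
                return moves;
            }
        }

    As for moving only one token: the script above already raycasts, so instead of every instance moving its own fixed obj, move only the object whose collider the ray actually hit (hit.transform).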

    Read the article

  • How to export 3D models that consist of several parts (eg. turret on a tank)?

    - by Will
    What are the standard alternatives for the mechanics of attaching turrets and such to 3D models for use in-game? I don't mean the logic, but rather the graphics aspects. My naive approach is to extend the MD2-like format that I'm using (blender-exported using a script) to include a new set of properties for a mesh:

      - It is anchored in another 'parent' mesh. The anchor is a point and normal in the parent mesh and a point and normal in the child mesh; these will always be colinear, giving the child rotation but not translation relative to the parent point.
      - It has a normal that is aligned with a 'target'. Classically this target is the enemy that is being engaged, but it might be some other vector, e.g. 'the wind' (for sails and flags, and smoke, which is a particle system but the same principle applies) or 'upwards' (e.g. so bodies of riders bend properly when riding a horse up an incline, etc).
      - The anchor and target alignments have a maximum, a minimum, and a speed coefficient.
      - There is game logic for multiple turrets on a model and for deciding which engages which enemy; 'primary' and 'secondary' or 'target0' ... 'targetN' or some such annotation will be there.

    So to illustrate, a classic tank would be made from three meshes: a main body mesh, a turret mesh that is anchored to the top of the main body so it can spin only horizontally, and a barrel mesh that is anchored to the front of the turret and can only move vertically within some bounds. And there might be a fourth flag mesh on top of the turret that is aligned with 'wind', where wind is a function the engine solves that merges the environment's wind angle with the vehicle's travel angle and velocity, or something fancy. This gives each mesh one degree of freedom relative to its parent. Things with multiple degrees of freedom can be modelled by zero-vertex connecting meshes, perhaps? This is where I think the approach I outlined begins to feel inelegant, yet perhaps it's still a workable system? This is why I want to know how it is done in professional games ;) Are there better approaches? Are there formats that already include this information? Is this routine?
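
    The extended-format idea above boils down to a small data structure per sub-mesh; a hedged transcription into Java (all names invented, purely illustrative of the properties listed):

        class Vec3 { float x, y, z; }

        // One sub-mesh with the attachment properties described above.
        class MeshNode {
            String name;                    // e.g. "turret", "barrel", "flag"
            MeshNode parent;                // null for the hull/root mesh
            Vec3 anchorPoint, anchorNormal; // colinear with the parent's anchor: rotation only
            String target;                  // "enemy", "wind", "up", ... resolved by the engine
            float minAngle, maxAngle;       // alignment limits
            float turnSpeed;                // speed coefficient toward the target
        }

    Chaining nodes (body -> turret -> barrel) composes one degree of freedom per link, which is exactly the tank example; the zero-vertex connector trick is just inserting an invisible MeshNode to buy an extra axis.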

    Read the article

  • Facebook Game database design

    - by facebook-100000781341887
    Hi, I'm currently developing a Facebook mafia-like PHP game (of course, a lightweight version). Here is a simplified database (MySQL) of the game:

        id-a            <int3>   <for index>
        uid             <chr15>  <facebook uid>
        HP              <int3>   <health point>
        exp             <int3>   <experience>
        money           <int3>   <money>
        list_inventory  <chr5>   <the inventory the user holds... something special here, see below>
        ... and 20 other fields just like reputation, num of combat...

    (The number next to the type is the size in bytes of the type.) For list_inventory, there are 40 inventory items in my game (actually, I have 5 of these kinds of lists in my database), and each user can only hold 1 of each item. Therefore, I assign 5 chars to this field and treat each bit of each char as one item slot (5 chars * 8 bits = 40 slots), and I do some manipulation in PHP to extract the data from these 5 bytes.

    OK, I was thinking about this: if this game has 100,000 users, and only 10% are active, then with my method the space used is 5 bytes * 100,000 = 500 KB. With another method, I would create a table user_hold_inventory and insert a record whenever a user holds an item; for the 10,000 active users I assume they hold every item, and for the others I assume they hold none. Here are the fields of the new table:

        id-b    <int3>  <for index>
        id-a    <int3>  <id of the user table>
        inv_no  <int1>  <inventory that user holds>

    For the space used: ([id] (3+3) bytes + [inv_no] 1 byte) * [active users] 10,000 * [all inventory] 40 = 2.8 MB. It seems method 2 uses more space, but it consumes less CPU power. Please comment on these 2 methods, or correct me if there is a better method than what I have in mind. Another question: my database contains 26 fields, and I counted 5 of them that don't change frequently; should I separate those into another table or not? So many words, thanks for reading :)
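
    The 40-slot bit-packing scheme described above looks like this in code (a hedged Java sketch; the game itself is PHP, and the names are invented):

        class InventoryBits {
            // 40 inventory slots packed into 5 bytes: bit i == 1 means the user owns item i.
            static boolean hasItem(byte[] packed, int item) {
                return (packed[item / 8] & (1 << (item % 8))) != 0;
            }

            static void setItem(byte[] packed, int item, boolean owned) {
                if (owned) packed[item / 8] |=  (1 << (item % 8));
                else       packed[item / 8] &= ~(1 << (item % 8));
            }

            public static void main(String[] args) {
                byte[] inventory = new byte[5];             // the chr5 column
                setItem(inventory, 39, true);               // give the last item
                System.out.println(hasItem(inventory, 39)); // true
                System.out.println(hasItem(inventory, 3));  // false
            }
        }

    Whether the packed column beats a user_hold_inventory table is less about the few MB than about queries: the packed field can't be indexed or filtered per-item in SQL, while the relational version can.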

    Read the article

  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards, the structured buffer is copied to a CPU-accessible buffer to print its contents to the console. The output is as expected (the numbers 0 to 99 appear on the console). Then I use a compute shader that should change the contents of the structured buffer:

        RWStructuredBuffer<float> Result : register(u0);

        [numthreads(1, 1, 1)]
        void CS_main(uint3 GroupId : SV_GroupID)
        {
            Result[GroupId.x] = GroupId.x * 10;
        }

    But the compute shader does not change the contents of the structured buffer. The source code can be found here:

    main.cpp: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default
    FillCS.hlsl: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default

    Read the article

  • Is using the student version of 3DS Max and Unity3d legal?

    - by SubZeron
    I am developing an indie game together with my friend using the Unity3D engine. I bought Silo 3D for modeling two months ago, and for texturing I use 3D-Coat. We plan to sell our game in the future. For the animations I work with 3DS Max (only the animation part). My question is: can I work with a student license? The license for the full version is too expensive for me; I am still at university and I cannot buy the 3DS Max license, which costs 4000 €. As an alternative I have the choice between Blender (I can't work with this software and don't have time to invest in learning a new program) and trueSpace (it can't export FBX animation, especially with bones), so for me 3DS Max is the best choice to be effective and quick. Is it detectable when I export my FBX characters from 3DS Max to Unity3D? I mean, can they find out that I used the student license of 3DS Max for the animations after the release of the game, maybe with the help of DRM? Can I solve the problem by exporting the FBX from 3DS Max to Blender, and after that exporting the same FBX to Unity3D?

    Read the article

  • Child transforms problem when loading 3DS models using assimp

    - by MhdSyrwan
    I'm trying to load a textured 3D model into my scene using the Assimp model loader. The problem is that child meshes are not situated correctly (they don't have the correct transformations). In brief: all the mTransformation matrices are identity matrices; why would that be? I'm using this code to render the model:

        void recursive_render(const struct aiScene *sc, const struct aiNode *nd, float scale)
        {
            unsigned int i;
            unsigned int n = 0, t;
            aiMatrix4x4 m = nd->mTransformation;
            m.Scaling(aiVector3D(scale, scale, scale), m);

            // update transform
            m.Transpose();
            glPushMatrix();
            glMultMatrixf((float*)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n) {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if (mesh->HasBones()) {
                    printf("model has bones");
                    abort();
                }

                if (mesh->mNormals == NULL) {
                    glDisable(GL_LIGHTING);
                } else {
                    glEnable(GL_LIGHTING);
                }

                if (mesh->mColors[0] != NULL) {
                    glEnable(GL_COLOR_MATERIAL);
                } else {
                    glDisable(GL_COLOR_MATERIAL);
                }

                for (t = 0; t < mesh->mNumFaces; ++t) {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;
                    switch (face->mNumIndices) {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }

                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++) { // go through all vertices in face
                        int vertexIndex = face->mIndices[i];  // get group index for current index
                        if (mesh->mColors[0] != NULL)
                            Color4f(&mesh->mColors[0][vertexIndex]);
                        if (mesh->mNormals != NULL)
                            if (mesh->HasTextureCoords(0)) { // HasTextureCoords(texture_coordinates_set)
                                glTexCoord2f(mesh->mTextureCoords[0][vertexIndex].x,
                                             1 - mesh->mTextureCoords[0][vertexIndex].y); // mTextureCoords[channel][vertex]
                            }
                        glNormal3fv(&mesh->mNormals[vertexIndex].x);
                        glVertex3fv(&mesh->mVertices[vertexIndex].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n) {
                recursive_render(sc, nd->mChildren[n], scale);
            }

            glPopMatrix();
        }

    What's the problem in my code? I've added some code to abort the program if there's any bone in the meshes, but the program doesn't abort, which means there are no bones; is that normal?

        if (mesh->HasBones()) {
            printf("model has bones");
            abort();
        }

    Note: I am using OpenGL, SFML and Assimp.

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify to output this data anywhere else but to COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target will become zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point, and afterwards I need to write to only the specified ones. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • Level and Player objects - which should contain which?

    - by Thane Brimhall
    I've been working on several simple games, and I've always come to a decision point where I have to choose whether to have the Level object as an attribute of the Player class, or the Player as an attribute of the Level class. I can see arguments for both. The Level should contain the player because it also contains every other entity; in fact it just makes sense this way: "John is in the room." It makes it a bit more difficult to move the player to a new level, however, because then each level has to pass its player object to the upcoming level. On the other hand, it makes programming sense to me to leave the player as the top-level object that is persistent between levels, where the environment changes because the player decides to change his level and location. It becomes very easy to change levels, because all I have to do is replace the level variable on the player. What's the most common practice here? Or better yet, is there a "right" way to architect this relationship?
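
    A third option worth sketching: let neither own the other, and have a top-level game object hold both, so levels stay self-contained and the player persists across them. A hedged Java sketch (invented names):

        class Player {
            int health = 100; // persists across levels
        }

        class Level {
            final String name;
            Level(String name) { this.name = name; }
            // entities, tiles, etc. live here; no reference to Player needed
        }

        class Game {
            private final Player player = new Player(); // outlives any level
            private Level currentLevel;

            void changeLevel(Level next) {
                currentLevel = next; // player state carries over untouched
            }

            void update() {
                // systems receive both: update(currentLevel, player);
            }
        }

    This sidesteps the hand-off problem entirely: no level ever passes the player anywhere, and no player ever has to know what a level is.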

    Read the article

  • Surface normal to screen angle

    - by Tannz0rz
    I've been struggling to get this working. I simply wish to take a surface normal and convert it to a screen angle. As an example, assuming we're working with the highlighted surface on the sphere below, where the arrow is the normal, the 2D angle would obviously be PI/4 radians. Here's one of the many things I've tried to no avail: float4 A = v.vertex; float4 B = v.vertex + float4(v.normal, 0.0); A = mul(VP, A); B = mul(VP, B); A.xy = (0.5 * (A.xy / A.w)) + 0.5; B.xy = (0.5 * (B.xy / B.w)) + 0.5; o.theta = atan2(B.y - A.y, B.x - A.x); I'm finally at my wit's end. Thanks for any and all help.

    Read the article

  • Collision detection with entities/AI

    - by James Williams
    I'm making my first game in Java, a top down 2D RPG. I've handled basic collision detection, rendering and have added an NPC, but I'm stuck on how to handle interaction between the player and the NPC. Currently I'm drawing out my level and then drawing characters, NPCs and animated tiles on top of this. The problem is keeping track of the NPCs so that my Character class can interact with methods in the NPC classes on collision. I'm not sure my method of drawing the level and drawing everything else on top is a good one - can anyone shed any light on this topic?
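
    One common way to keep track of everything, sketched below as hedged Java (invented names, not from the project described above), is a single world-owned entity list that the update, collision, and draw passes all iterate over:

        import java.awt.Rectangle;
        import java.util.ArrayList;
        import java.util.List;

        interface Entity {
            void update();
            void draw(/* Graphics g */);
            Rectangle getBounds();                    // hitbox for collision tests
            default void onCollision(Entity other) {} // NPCs override this to react
        }

        class World {
            private final List<Entity> entities = new ArrayList<>();

            void add(Entity e) { entities.add(e); }

            void tick() {
                for (Entity e : entities) e.update();
                // every entity can reach every other entity through this one list
                for (int i = 0; i < entities.size(); i++)
                    for (int j = i + 1; j < entities.size(); j++)
                        if (entities.get(i).getBounds().intersects(entities.get(j).getBounds())) {
                            entities.get(i).onCollision(entities.get(j));
                            entities.get(j).onCollision(entities.get(i));
                        }
                for (Entity e : entities) e.draw();
            }
        }

    Drawing the level tiles first and the entity list on top stays exactly as described; the list just gives the Character a sanctioned way to reach NPC methods on collision instead of holding ad-hoc references.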

    Read the article

  • graphical interface when using assembly language

    - by Hellbent
    I'm looking to use assembly language to make a great game, not just an average game but a really great one. I want to learn a framework to use in assembly. I know that's not possible without learning the framework in C first, so I'm thinking of learning SDL in C and then learning (teaching myself) how to interpret the program and rewrite it as assembly language code, which shouldn't be that hard. Then I will have a window and some graphics routines to display the game while using assembly to code everything in. I need to spend some time learning SDL, and then some more time learning how to code all those statements in assembly while calling C functions and knowing which registers the calls return in and what they leave behind, etc. My question is: is this a good way to go, or is there something better to get a graphical window display using assembly language? Regards, HellBent

    Read the article

  • The how of a collision engine

    - by JXPheonix
    This is a very, very broad question: what is the general algorithm of how a collision engine works? No code in specific, but rather just a general idea of how a collision engine does what it does, constantly refreshing the points of an object and comparing them to other objects? (See, I have the general gist of it here.) A collision engine is basically an engine used in games (generally) so that when your player (call him Bob) moves into a wall, Bob stops; Bob does not walk through the wall. They also generally handle the gravity in a game and environmental things like that.
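
    At its core, the per-frame algorithm is usually: integrate movement, test the moved shape against nearby shapes, and resolve overlaps before drawing. A minimal hedged sketch in Java, with axis-aligned boxes only and invented names:

        class Box {
            float x, y, w, h, vx, vy;

            boolean overlaps(Box o) {
                return x < o.x + o.w && x + w > o.x && y < o.y + o.h && y + h > o.y;
            }
        }

        class CollisionEngine {
            // One frame: move, then undo movement against anything solid (Bob stops at walls).
            void step(Box player, Iterable<Box> walls, float dt, float gravity) {
                player.vy += gravity * dt;  // environmental force, as described above

                player.x += player.vx * dt; // move and resolve each axis separately
                for (Box w : walls) if (player.overlaps(w)) player.x -= player.vx * dt;

                player.y += player.vy * dt;
                for (Box w : walls) if (player.overlaps(w)) { player.y -= player.vy * dt; player.vy = 0; }
            }
        }

    Real engines split this into a broadphase (grids, trees) that picks candidate pairs and a narrowphase that tests exact shapes, so each object is only compared against its neighbors instead of everything.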

    Read the article

  • Linking one uniform variable to many shaders

    - by Winged
    Let's say that I have 3 programs, and in each of those programs there is a view matrix uniform which should be the same in all of them. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniform which is constant for all programs linked to it, so I could upload the matrix just once? I tried creating a globalUniforms object which looked kinda like this:

        var globalUniforms = {
            program: {},
            // (...)
            vMatrixUniform: null,
            // (...)
            initialize: function() {
                vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');
            }
        };

    So I could just link it to the proper programs like this: program.vMatrixUniform = globalUniforms.vMatrixUniform; and then pass the matrix like this:

        if (camera.isDirty.viewMatrix !== false) {
            camera.isDirty.viewMatrix = false;
            gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element);
        }

    but unfortunately it throws an error:

        Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation
        called from line 272, column 2 in () in mysite/js/mesh.js:
            vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');

    Summing up: is there a more efficient way of managing shaders which follows my logic?

    Read the article
