Search Results

Search found 30601 results on 1225 pages for 'team development'.


  • How are realistic 3D faces created and animated in video games?

    - by Anton
    I'm interested in being able to create realistic faces and facial expressions for the 3D characters of a game I'm working on. Think something similar to the dialog scenes in games like Mass Effect. Unfortunately I'm not sure where to begin. I'm sure the faces/animations are created through 3D Modeling software, but otherwise I am lost. Do facial animations use the same "bones" that normal body animation uses? Is there any preferred 3D software for realistic faces and animations? Is there a preferred format to export these faces and animations in?

    Read the article

  • Determine percentage of screen covered by an object without using frustum culling

    - by Meltac
    On the CPU side of a 3D first-person game I need to check whether what the player currently sees on screen is the inside of a box object defined by world-space coordinates (the player might be outside of that box but see only/mostly the inside of the box on screen, or vice versa, look from within the box to the outside). The usual way of performing such a check would involve frustum culling, but that approach would be hard to achieve with my given set of engine parameters, so I'd like to avoid it if there is a simpler way. What I actually have at the point where I would like to do the check (high-level script on the CPU, not the GPU side): camera world position, camera direction, camera FOV, and two box corner world coordinates (left-bottom-front, right-top-back). What I do not have right away: a view frustum definition (near/far planes, or say the 6 planes defining the frustum) or any per-pixel information (UV, view-space position, depth or the like). What I would like to calculate: the percentage of the screen "covered" by the box. Any hints on how to perform such a calculation?
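
    A rough estimate that fits the listed inputs (camera position, direction, FOV, two box corners) is to project the eight box corners into normalized device coordinates, take their screen-space bounding rectangle, clamp it to the viewport, and use its area as the coverage percentage. A minimal C# sketch of that idea using System.Numerics; the near/far values and the aspect ratio parameter are placeholder assumptions, and corners behind the camera are simply skipped, so this is only an approximation:

        using System;
        using System.Numerics;

        static class BoxCoverage
        {
            // Approximate fraction (0..1) of the screen covered by the axis-aligned
            // box [boxMin, boxMax], given only camera position, direction and FOV.
            public static float Estimate(Vector3 camPos, Vector3 camDir, float fovY,
                                         float aspect, Vector3 boxMin, Vector3 boxMax)
            {
                Matrix4x4 view = Matrix4x4.CreateLookAt(camPos, camPos + camDir, Vector3.UnitY);
                Matrix4x4 proj = Matrix4x4.CreatePerspectiveFieldOfView(fovY, aspect, 0.1f, 1000f);
                Matrix4x4 viewProj = view * proj;

                float minX = float.MaxValue, minY = float.MaxValue;
                float maxX = float.MinValue, maxY = float.MinValue;
                bool anyInFront = false;

                // Project all eight corners of the box.
                for (int i = 0; i < 8; i++)
                {
                    var corner = new Vector3(
                        (i & 1) == 0 ? boxMin.X : boxMax.X,
                        (i & 2) == 0 ? boxMin.Y : boxMax.Y,
                        (i & 4) == 0 ? boxMin.Z : boxMax.Z);

                    Vector4 clip = Vector4.Transform(corner, viewProj);
                    if (clip.W <= 0f) continue;      // corner behind the camera: skipped
                    anyInFront = true;

                    float ndcX = clip.X / clip.W;    // -1..1
                    float ndcY = clip.Y / clip.W;
                    minX = Math.Min(minX, ndcX); maxX = Math.Max(maxX, ndcX);
                    minY = Math.Min(minY, ndcY); maxY = Math.Max(maxY, ndcY);
                }
                if (!anyInFront) return 0f;

                // Clamp the screen-space rectangle to the viewport and compare areas.
                float w = Math.Clamp(maxX, -1f, 1f) - Math.Clamp(minX, -1f, 1f);
                float h = Math.Clamp(maxY, -1f, 1f) - Math.Clamp(minY, -1f, 1f);
                if (w <= 0f || h <= 0f) return 0f;
                return (w * h) / 4f;                 // full viewport is 2 x 2 in NDC
            }
        }

    When the camera is inside the box, some corners fall behind it and the rectangle underestimates the coverage, so treat this strictly as an approximation rather than an exact area integral.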

    Read the article

  • Scaling Down Pixel Art?

    - by Michael Stum
    There are plenty of algorithms to scale up pixel art (I prefer hqx personally), but are there any notable algorithms to scale it down? In my case, the game is designed to run at 1280x720, but if someone plays at a lower resolution I want it to still look good. Most pixel art discussions center around 320x200 or 640x480 and upscaling for use in console emulators, but I wonder how modern 2D games like the Monkey Island remake manage to look good at lower resolutions. (This ignores the option of having multiple versions of the assets, essentially mipmapping.)
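
    For what it's worth, a common baseline for downscaling is a plain box filter (area average): each destination pixel averages the block of source pixels it covers, which avoids the shimmering that nearest-neighbour sampling produces on pixel art. A small C# sketch over a raw RGBA byte buffer; the buffer layout and the integer scale factor are assumptions made for illustration:

        static class PixelArt
        {
            // Downscale an RGBA8 image by an integer factor using a box filter.
            public static byte[] DownscaleBox(byte[] src, int srcW, int srcH, int factor)
            {
                int dstW = srcW / factor, dstH = srcH / factor;
                var dst = new byte[dstW * dstH * 4];

                for (int y = 0; y < dstH; y++)
                for (int x = 0; x < dstW; x++)
                for (int c = 0; c < 4; c++)              // R, G, B, A
                {
                    int sum = 0;
                    for (int sy = 0; sy < factor; sy++)  // average the covered source block
                    for (int sx = 0; sx < factor; sx++)
                    {
                        int px = x * factor + sx, py = y * factor + sy;
                        sum += src[(py * srcW + px) * 4 + c];
                    }
                    dst[(y * dstW + x) * 4 + c] = (byte)(sum / (factor * factor));
                }
                return dst;
            }
        }

    Non-integer factors need fractional pixel coverage weights, and sharper results usually come from rendering to a 1280x720 target and letting the GPU downsample with linear filtering.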

    Read the article

  • Using XNA for a 2D isometric game, but want to move on

    - by Daniel Ribeiro
    I've been building a 2D isometric game (for learning purposes) in C# using XNA. I found it's really easy to manage sprite sheet loading, collision, basic physics and such with the XNA API. The thing is, I want to move on. My real goal is to learn C++ and develop a game using that language. What engine/library would you recommend so I can keep going in the same 2D isometric direction, using mostly sprite sheets for the graphical part of the game?

    Read the article

  • 2D Platformer Collision Handling

    - by defender-zone
    Hello, everyone! I am trying to create a 2D platformer (Mario-type) game and I am some having some issues with handling collisions properly. I am writing this game in C++, using SDL for input, image loading, font loading, etcetera. I am also using OpenGL via the FreeGLUT library in conjunction with SDL to display graphics. My method of collision detection is AABB (Axis-Aligned Bounding Box), which is really all I need to start with. What I need is an easy way to both detect which side the collision occurred on and handle the collisions properly. So, basically, if the player collides with the top of the platform, reposition him to the top; if there is a collision to the sides, reposition the player back to the side of the object; if there is a collision to the bottom, reposition the player under the platform. I have tried many different ways of doing this, such as trying to find the penetration depth and repositioning the player backwards by the penetration depth. Sadly, nothing I've tried seems to work correctly. Player movement ends up being very glitchy and repositions the player when I don't want it to. Part of the reason is probably because I feel like this is something so simple but I'm over-thinking it. If anyone thinks they can help, please take a look at the code below and help me try to improve on this if you can. I would like to refrain from using a library to handle this (as I want to learn on my own) or the something like the SAT (Separating Axis Theorem) if at all possible. Thank you in advance for your help! void world1Level1CollisionDetection() { for(int i; i < blocks; i++) { if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==true) { int up = 0; int left = 0; int right = 0; int down = 0; if(ball.coords[0] < block[i].coords[0] && block[i].coords[0] < ball.coords[2] && ball.coords[2] < block[i].coords[2]) { left = 1; } if(block[i].coords[0] < ball.coords[0] && ball.coords[0] < block[i].coords[2] && block[i].coords[2] < ball.coords[2]) { right = 1; } if(ball.coords[1] < block[i].coords[1] && block[i].coords[1] < ball.coords[3] && ball.coords[3] < block[i].coords[3]) { up = 1; } if(block[i].coords[1] < ball.coords[1] && ball.coords[1] < block[i].coords[3] && block[i].coords[3] < ball.coords[3]) { down = 1; } cout << left << ", " << right << ", " << up << ", " << down << ", " << endl; if (left == 1) { ball.coords[0] = block[i].coords[0] - 16.0f; ball.coords[2] = block[i].coords[0] - 0.0f; } if (right == 1) { ball.coords[0] = block[i].coords[2] + 0.0f; ball.coords[2] = block[i].coords[2] + 16.0f; } if (down == 1) { ball.coords[1] = block[i].coords[3] + 0.0f; ball.coords[3] = block[i].coords[3] + 16.0f; } if (up == 1) { ball.yspeed = 0.0f; ball.gravity = 0.0f; ball.coords[1] = block[i].coords[1] - 16.0f; ball.coords[3] = block[i].coords[1] - 0.0f; } } if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==false) { ball.gravity = -0.5f; } } } To explain what some of this code means: The blocks variable is basically an integer that is storing the amount of blocks, or platforms. I am checking all of the blocks using a for loop, and the number that the loop is currently on is represented by integer i. The coordinate system might seem a little weird, so that's worth explaining. coords[0] represents the x position (left) of the object (where it starts on the x axis). coords[1] represents the y position (top) of the object (where it starts on the y axis). coords[2] represents the width of the object plus coords[0] (right). coords[3] represents the height of the object plus coords[1] (bottom). 
    de2dCheckCollision performs AABB collision detection. Up is negative y and down is positive y, as it is in most games. Hopefully I have provided enough information for someone to help me successfully. If there is something I left out that might be crucial, let me know and I'll provide the necessary information. Finally, for anyone who can help, providing code would be very helpful and much appreciated. Thank you again for your help!
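
    One way to make this kind of repositioning less glitchy is to compute the overlap of the two AABBs on each axis and push the player out only along the axis with the smaller overlap (the minimal translation vector), using the sign of the centre difference to pick the side. A C# sketch of that idea; the Box struct with Left/Top/Right/Bottom fields is a hypothetical stand-in for the coords arrays in the question:

        using System;

        struct Box { public float Left, Top, Right, Bottom; }

        static class Aabb
        {
            // Pushes 'player' out of 'block' along the axis of least penetration
            // and reports which way it moved (a zero offset means no overlap).
            public static (float dx, float dy) Resolve(ref Box player, Box block)
            {
                float overlapX = Math.Min(player.Right, block.Right) - Math.Max(player.Left, block.Left);
                float overlapY = Math.Min(player.Bottom, block.Bottom) - Math.Max(player.Top, block.Top);
                if (overlapX <= 0f || overlapY <= 0f) return (0f, 0f);   // no collision

                float playerCx = (player.Left + player.Right) * 0.5f;
                float playerCy = (player.Top + player.Bottom) * 0.5f;
                float blockCx  = (block.Left + block.Right) * 0.5f;
                float blockCy  = (block.Top + block.Bottom) * 0.5f;

                float dx = 0f, dy = 0f;
                if (overlapX < overlapY)
                    dx = playerCx < blockCx ? -overlapX : overlapX;      // push out horizontally
                else
                    dy = playerCy < blockCy ? -overlapY : overlapY;      // push out vertically

                player.Left += dx; player.Right += dx;
                player.Top  += dy; player.Bottom += dy;
                return (dx, dy);
            }
        }

    The caller would typically zero the vertical velocity (and clear gravity) only when dy pushes the player upward, which replaces the separate left/right/up/down flags in the question with a single decision.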

    Read the article

  • AndEngine doesn't fill an image correctly on my device

    - by Guille
    I'm learning a little bit about AndEngine. I'm trying to follow a tutorial, but I can't get the background image to fill the screen correctly; it just appears on one side of my screen. My device is a Galaxy Nexus (1270x768, I think...). The image is 800x480. The code is: public EngineOptions onCreateEngineOptions() { camera = new Camera(0, 0, 800, 480); EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED, new FillResolutionPolicy(), this.camera); engineOptions.getAudioOptions().setNeedsMusic(true).setNeedsSound(true); engineOptions.getRenderOptions().setMultiSampling(true);//.getConfigChooserOptions().setRequestedMultiSampling(true); engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON); return engineOptions; } I have been trying several values for the camera, but it doesn't fill the whole screen. Why?

    Read the article

  • Cocos2d sprite's parent not reflecting true scale value

    - by Paul Renton
    I am encountering issues with determining a CCSprite's parent node's scale value. In my game I have a class that extends CCLayer and scales itself based on game triggers. Certain child sprites of this CCLayer have mathematical calculations that become inaccurate once I scale the parent CCLayer. For instance, I have a tank sprite that needs to determine its firing point within the parent node. Whenever I scale the layer and ask the layer for its scale values, they are accurate. However, when I poll the sprites contained within the layer for their parent's scale values, they always appear as one. // From within the sprite CCLOG(@"ChildSprite-> Parent's scale values are scaleX: %f, scaleY: %f", self.parent.scaleX, self.parent.scaleY); // Outputs 1.0,1.0 // From within the layer CCLOG(@"Layer-> ScaleX : %f, ScaleY: %f , SCALE: %f", self.scaleX, self.scaleY, self.scale); // Output is 0.80,0.80 Could anyone explain to me why this is the case? I don't understand why these values are different. Maybe I don't understand the inner design of Cocos2d fully. Any help is appreciated.

    Read the article

  • Shadow mapping: what is the light looking at?

    - by PgrAm
    I'm all set to set up shadow mapping in my 3D engine, but there is one thing I am struggling to understand. The scene needs to be rendered from the light's point of view, so I simply move my camera to the light's position first, but then I need to find out which direction the light is looking. Since it's a point light, it's not shining in any particular direction. How do I figure out what the orientation for the light's point of view is?
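
    A point light shines in every direction, so there is no single "light view": the usual options are to render a cube shadow map (six 90-degree views, one per axis) or, if only one region needs shadows, to aim the shadow camera from the light at that region's centre, effectively treating it as a spot light. A C# sketch of the six cube-face views using System.Numerics; the exact up-vector convention varies between APIs, so treat the values as illustrative:

        using System.Numerics;

        static class PointLightShadow
        {
            // Six view matrices, one per cube-map face, all rooted at the light position.
            public static Matrix4x4[] CubeFaceViews(Vector3 lightPos)
            {
                var dirs = new[] { Vector3.UnitX, -Vector3.UnitX, Vector3.UnitY, -Vector3.UnitY, Vector3.UnitZ, -Vector3.UnitZ };
                var ups  = new[] { Vector3.UnitY,  Vector3.UnitY, -Vector3.UnitZ, Vector3.UnitZ, Vector3.UnitY,  Vector3.UnitY };

                var views = new Matrix4x4[6];
                for (int i = 0; i < 6; i++)
                    views[i] = Matrix4x4.CreateLookAt(lightPos, lightPos + dirs[i], ups[i]);
                return views;
            }

            // Square 90-degree projection shared by all six faces.
            public static Matrix4x4 CubeFaceProjection(float near, float far) =>
                Matrix4x4.CreatePerspectiveFieldOfView((float)System.Math.PI / 2f, 1f, near, far);
        }

    If a full cube map is overkill, pointing the light's view at the scene's centre of interest and using a single perspective render is a common simplification, at the cost of shadows only existing in that cone.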

    Read the article

  • How to run the pixel shader effect?

    - by Yashwinder
    Below is the code for my pixel shader, which I am rendering after the vertex shader. I have set the worldViewProjection matrix in my program, but I don't know how to set the progress variable in my pixel shader file, which should make the image (displayed with the help of a quad) produce a transition effect. My pixel shader currently gives a static effect, and now I want it to animate. For this I have to add a progress variable to my pixel shader and initialize it through the constant table, i.e. constantTable.SetValue(D3DDevice,"progress",progress ); I am having trouble using this function for progress in my program. Does anybody know how to set this variable in my program? My new pixel shader code is float progress : register(C0); sampler2D implicitInput : register(s0); sampler2D oldInput : register(s1); struct VS_OUTPUT { float4 Position : POSITION; float4 Color : COLOR0; float2 UV : TEXCOORD 0; }; float4 Blinds(float2 uv) { if(frac(uv.y * 5) < progress) { return tex2D(implicitInput, uv); } else { return tex2D(oldInput, uv); } } // Pixel Shader { return Blinds(input.UV); }

    Read the article

  • Island Generation Library

    - by thatguy
    Can anyone recommend a tile map generator (written in Java is a plus) where one can control some land types? For example: islands, large continents, a single large continent, an archipelago, etc. I've been reading through many posts on the subject, and it seems like many people just roll their own. Before creating my own, I'm wondering if there's already an open source implementation that I might not be finding. If not, it seems like using Perlin noise is a popular choice. Some articles I've been reading: http://simblob.blogspot.com/2010/01/simple-map-generation.html Generate islands/continents with simplex noise https://sites.google.com/site/minecraftlandgenerator/
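
    If rolling your own does turn out to be necessary, the usual recipe is exactly the Perlin/simplex route from those articles: sample a few octaves of noise for elevation and multiply by a radial falloff so land fades into ocean at the map edges. A compact C# sketch using a simple hash-based value noise as a stand-in (a real Perlin/simplex implementation gives smoother coastlines; the thresholds, octave counts and constants are arbitrary tuning assumptions):

        using System;

        static class IslandMap
        {
            // Returns a w x h grid where true = land, false = water.
            public static bool[,] Generate(int w, int h, int seed, float seaLevel = 0.5f)
            {
                var land = new bool[w, h];
                for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    float nx = x / (float)w, ny = y / (float)h;

                    // A few octaves of noise for elevation, normalized back to roughly 0..1.
                    float e = 1.00f * Noise(nx * 4, ny * 4, seed)
                            + 0.50f * Noise(nx * 8, ny * 8, seed + 1)
                            + 0.25f * Noise(nx * 16, ny * 16, seed + 2);
                    e /= 1.75f;

                    // Radial falloff: 1 at the centre, 0 toward the edges -> islands, not a full land sheet.
                    float dx = nx - 0.5f, dy = ny - 0.5f;
                    float falloff = 1f - 2f * (float)Math.Sqrt(dx * dx + dy * dy);

                    land[x, y] = e * Math.Max(falloff, 0f) > seaLevel * 0.5f;
                }
                return land;
            }

            // Cheap value noise: hash the integer lattice and bilinearly interpolate.
            static float Noise(float x, float y, int seed)
            {
                int x0 = (int)Math.Floor(x), y0 = (int)Math.Floor(y);
                float tx = x - x0, ty = y - y0;
                float v00 = Hash(x0, y0, seed),     v10 = Hash(x0 + 1, y0, seed);
                float v01 = Hash(x0, y0 + 1, seed), v11 = Hash(x0 + 1, y0 + 1, seed);
                float a = v00 + (v10 - v00) * tx;
                float b = v01 + (v11 - v01) * tx;
                return a + (b - a) * ty;    // 0..1
            }

            static float Hash(int x, int y, int seed)
            {
                unchecked
                {
                    int n = x * 374761393 + y * 668265263 + seed * 974634611;
                    n = (n ^ (n >> 13)) * 1274126177;
                    return ((n ^ (n >> 16)) & 0x7fffffff) / (float)int.MaxValue;
                }
            }
        }

    Changing the falloff (several off-centre falloff blobs, or none at all) is what turns the same noise field into archipelagos, continents or a single landmass.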

    Read the article

  • Moving Character in C# XNA Not working

    - by Matthew Stenquist
    I'm having trouble trying to get my character to move for a game I'm making in my sparetime for the Xbox. However, I can't seem to figure out what I'm doing wrong , and I'm not even sure if I'm doing it right. I've tried googling tutorials on this but I haven't found any helpful ones. Mainly, ones on 3d rotation on the XNA creators club website. My question is : How can I get the character to walk towards the right in the MoveInput() function? What am I doing wrong? Did I code it wrong? The problem is : The player isn't moving. I think the MoveInput() class isn't working. Here's my code from my character class : using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; namespace Jumping { class Character { Texture2D texture; Vector2 position; Vector2 velocity; int velocityXspeed = 2; bool jumping; public Character(Texture2D newTexture, Vector2 newPosition) { texture = newTexture; position = newPosition; jumping = true; } public void Update(GameTime gameTime) { JumpInput(); MoveInput(); } private void MoveInput() { //Move Character right GamePadState gamePad1 = GamePad.GetState(PlayerIndex.One); velocity.X = velocity.X + (velocityXspeed * gamePad1.ThumbSticks.Right.X); } private void JumpInput() { position += velocity; if (GamePad.GetState(PlayerIndex.One).Buttons.A == ButtonState.Pressed && jumping == false) { position.Y -= 1f; velocity.Y = -5f; jumping = true; } if (jumping == true) { float i = 1.6f; velocity.Y += 0.15f * i; } if (position.Y + texture.Height >= 1000) jumping = false; if (jumping == false) velocity.Y = 0f; } public void Draw(SpriteBatch spriteBatch) { spriteBatch.Draw(texture, position, Color.White); } } }
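
    Two things commonly bite here: movement is normally read from the left thumbstick (the code above reads ThumbSticks.Right), and velocity.X is accumulated every frame but never damped, while position is only advanced inside JumpInput(). A hedged C# sketch of a MoveInput() that could drop into the Character class above (field names match the question; the behaviour is an assumption about the intended controls):

        private void MoveInput()
        {
            GamePadState gamePad1 = GamePad.GetState(PlayerIndex.One);

            // Set, don't accumulate: the stick value (-1..1) scaled by the walk speed.
            // ThumbSticks.Left is the stick players usually move with.
            velocity.X = gamePad1.ThumbSticks.Left.X * velocityXspeed;
        }

    Since JumpInput() already runs position += velocity at the top of every Update(), this is enough to make the character walk; moving that position update into Update() itself would make the flow easier to follow.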

    Read the article

  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size. Some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far: private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere) { var currentScale = objectToScale.transform.localScale; var currentSize = objectToScale.GetMeshHierarchyBounds().size; var targetSize = (sphere.radius * 2); var newScale = new Vector3 { x = targetSize * currentScale.x / currentSize.x, y = targetSize * currentScale.y / currentSize.y, z = targetSize * currentScale.z / currentSize.z }; Debug.Log("{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}", objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x); //DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1), newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114 //RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2), newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017 objectToScale.transform.localScale = newScale; } And the supporting extension method: public static Bounds GetMeshHierarchyBounds(this GameObject go) { var bounds = new Bounds(); // Not used, but a struct needs to be instantiated if (go.renderer != null) { bounds = go.renderer.bounds; // Make sure the parent is included Debug.Log("Found parent bounds: " + bounds); //bounds.Encapsulate(go.renderer.bounds); } foreach (var c in go.GetComponentsInChildren<MeshRenderer>()) { Debug.Log("Found {0} bounds are {1}", c.name, c.bounds); if (bounds.size == Vector3.zero) { bounds = c.bounds; } else { bounds.Encapsulate(c.bounds); } } return bounds; } After the re-scale, there doesn't seem to be any consistency to the results - some objects with completely uniform scales (x,y,z) seem to resize correctly, but others don't :| Its one of those things I've been trying to fix for so long I've lost all grasp on any of the logic :| Any help would be appreciated!
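
    One thing worth noting is that a per-axis scale like the one above distorts the mesh, and Renderer.bounds is a world-space AABB that already includes the current scale and rotation, which can make repeated rescaling inconsistent. A simpler, order-independent approach is to scale uniformly by the ratio of the sphere diameter to the largest bounds extent. A hedged Unity C# sketch (the bounds are gathered the same way as the question's extension method; the lossyScale.x factor assumes the collider's transform is uniformly scaled):

        using UnityEngine;

        public static class FitToSphere
        {
            // Uniformly scales 'objectToScale' so its combined render bounds fit
            // inside the sphere collider, preserving the mesh's proportions.
            public static void Fit(GameObject objectToScale, SphereCollider sphere)
            {
                Bounds bounds = GetHierarchyBounds(objectToScale);
                float largestExtent = Mathf.Max(bounds.size.x, Mathf.Max(bounds.size.y, bounds.size.z));
                if (largestExtent <= 0f) return;

                // World-space target diameter (radius is local to the collider's transform).
                float targetSize = sphere.radius * 2f * sphere.transform.lossyScale.x;

                // The current bounds already include the current scale, so scale relatively.
                objectToScale.transform.localScale *= targetSize / largestExtent;
            }

            static Bounds GetHierarchyBounds(GameObject go)
            {
                var renderers = go.GetComponentsInChildren<Renderer>();
                if (renderers.Length == 0) return new Bounds(go.transform.position, Vector3.zero);

                Bounds b = renderers[0].bounds;
                for (int i = 1; i < renderers.Length; i++) b.Encapsulate(renderers[i].bounds);
                return b;
            }
        }

    Because the multiplication is relative (current scale times the ratio of target to current size), calling it repeatedly converges to the same result instead of drifting.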

    Read the article

  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light indexed deferred rendering from here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there was one point that I failed to understand. It is in fact one of the most interesting ones, as it explains how to implement transparency with this approach: Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a “shadow volume like” technique of rendering back faces – incrementing stencil where depth is greater than – then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment. Can anyone explain how exactly you need to render only front faces? Another question is why you need the front faces at all. Why can't we simply render all the lights and store the ones that overlap at this pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?

    Read the article

  • OpenGL stars are not rendering

    - by Darestium
    I doing Nehe's Open GL Lesson 9. I'm using SFML for windowing, the strange thing is no stars are rendering. #include <SFML/System.hpp> #include <SFML/Window.hpp> #include <SFML/Graphics.hpp> #include <iostream> void processEvents(sf::Window *app); void processInput(sf::Window *app); void renderGlScene(sf::Window *app); void init(); int loadResources(); const int NUM_OF_STARS = 50; float triRot = 0.0f; float quadRot = 0.0f; bool twinkle = false; bool tKey = false; float zoom = 15.0f; float tilt = 90.0f; float spin = 0.0f; unsigned int loop; unsigned int texture_handle[1]; typedef struct { int r, g, b; float distance; float angle; } stars; stars star[NUM_OF_STARS]; int main() { sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 9"); app.UseVerticalSync(false); init(); if (loadResources() == -1) { return EXIT_FAILURE; } while (app.IsOpened()) { processEvents(&app); processInput(&app); renderGlScene(&app); app.Display(); } return EXIT_SUCCESS; } int loadResources() { sf::Image img_data; // Load Texture if (!img_data.LoadFromFile("data/images/star.bmp")) { std::cout << "Could not load data/images/star.bmp"; return -1; } // Generate 1 texture glGenTextures(1, &texture_handle[0]); // Linear filtering glBindTexture(GL_TEXTURE_2D, texture_handle[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img_data.GetWidth(), img_data.GetHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img_data.GetPixelsPtr()); return 0; } void processInput(sf::Window *app) { const sf::Input& input = app->GetInput(); if (input.IsKeyDown(sf::Key::T) && !tKey) { tKey = true; twinkle = !twinkle; } if (!input.IsKeyDown(sf::Key::T)) { tKey = false; } if (input.IsKeyDown(sf::Key::Up)) { tilt -= 0.05f; } if (input.IsKeyDown(sf::Key::Down)) { tilt += 0.05f; } if (input.IsKeyDown(sf::Key::PageUp)) { zoom -= 0.02f; } if (input.IsKeyDown(sf::Key::Up)) { zoom += 0.02f; } } void init() { glClearDepth(1.f); glClearColor(0.f, 0.f, 0.f, 0.f); // Enable texturing glEnable(GL_TEXTURE_2D); //glDepthMask(GL_TRUE); // Setup a perpective projection glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.f, 1.f, 1.f, 500.f); glShadeModel(GL_SMOOTH); glBlendFunc(GL_SRC_ALPHA, GL_ONE); glEnable(GL_BLEND); for (loop = 0; loop < NUM_OF_STARS; loop++) { star[loop].distance = (float)loop / NUM_OF_STARS * 5.0f; // Calculate distance from the centre // Give stars random rgb value star[loop].r = rand() % 256; star[loop].g = rand() % 256; star[loop].b = rand() % 256; } } void processEvents(sf::Window *app) { sf::Event event; while (app->GetEvent(event)) { if (event.Type == sf::Event::Closed) { app->Close(); } if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape) { app->Close(); } } } void renderGlScene(sf::Window *app) { app->SetActive(); // Clear color depth buffer glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Apply some transformations glMatrixMode(GL_MODELVIEW); glLoadIdentity(); // Select texture glBindTexture(GL_TEXTURE_2D, texture_handle[0]); for (loop = 0; loop < NUM_OF_STARS; loop++) { glLoadIdentity(); // Reset The View Before We Draw Each Star glTranslatef(0.0f, 0.0f, zoom); // Zoom Into The Screen (Using The Value In 'zoom') glRotatef(tilt, 1.0f, 0.0f, 0.0f); // Tilt The View (Using The Value In 'tilt') glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f); // Rotate To The Current Stars Angle glTranslatef(star[loop].distance, 0.0f, 0.0f); // Move Forward On The X Plane 
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // Cancel The Current Stars Angle glRotatef(-tilt,1.0f,0.0f,0.0f); // Cancel The Screen Tilt if (twinkle) { glColor4ub(star[(NUM_OF_STARS - loop) - 1].r, star[(NUM_OF_STARS - loop)-1].g, star[(NUM_OF_STARS - loop) - 1].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad } glRotatef(spin,0.0f,0.0f,1.0f); // Rotate The Star On The Z Axis // Assign A Color Using Bytes glColor4ub(star[loop].r, star[loop].g, star[loop].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad spin += 0.01f; // Used To Spin The Stars star[loop].angle += (float)loop / NUM_OF_STARS; // Changes The Angle Of A Star star[loop].distance -= 0.01f; // Changes The Distance Of A Star if (star[loop].distance < 0.0f) { star[loop].distance += 5.0f; // Move The Star 5 Units From The Center star[loop].r = rand() % 256; // Give It A New Red Value star[loop].g = rand() % 256; // Give It A New Green Value star[loop].b = rand() % 256; // Give It A New Blue Value } } } I've looked over the code atleast 10 times now and I can't figure out the problem. Any help would be much appreciated.

    Read the article

  • Making a surface transparent based on the blackness of a texture

    - by Dan the Man
    I am making a "halo" shader in Unity using GLSL, and I've hit a roadblock. What I need to do is take a texture, like the following, and make it transparent according to how dark it is. I don't want a cutout, because that cuts it off at a hard edge. This line of code doesn't seem to work: gl_FragColor = texture2D( vec4( _MainTex.r, _MainTex.g, _MainTex.b, _MainTex.a), vec2(textureCoordinates));

    Read the article

  • How can I import models from Blender into jMonkeyEngine?

    - by Nathan Sabruka
    I have some Blender model files (Blender version 2.6) which I would like to use with the jMonkeyEngine SDK. However, when I use Blender's native .obj exporter, I can't import the result in jMonkeyEngine (the model simply fails to import or looks messed up). I've tried importing .obj files or .blend files directly into the jMonkeyEngine SDK to no avail. I've also tried to use various OGRE exporters to export .scene and .material files, but only the .scene file is created. Is there a simple way to export files from Blender into the jMonkeyEngine SDK? EDIT: I seem to have found something in Blender. When I go under add-ons, there's a warning in the OGRE exporter: "'.mesh' output requires OgreCommandLineTools". However, I have already installed those tools under the C drive. Has anyone else encountered this issue?

    Read the article

  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3D engine and I'm calculating UVT coordinates, where U and V represent pixels on the texture measured in 0-1, and T is: T = perspective / Z But I'm trying to use this perspective-correct triangle rasteriser, which requires a W per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: the code comments refer to W as the "homogenous coordinate" ... does that mean anything?
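
    For a standard perspective projection, the homogeneous coordinate W of a transformed vertex is just its view-space depth, the same Z you are already dividing by in T = perspective / Z. A perspective-correct rasteriser linearly interpolates u/w, v/w and 1/w across the triangle in screen space and recovers u and v at each pixel by dividing by the interpolated 1/w. A small C# sketch of that bookkeeping; the vertex fields are hypothetical names chosen to mirror the description, not the rasteriser's actual API:

        struct Vertex
        {
            public float ScreenX, ScreenY;   // projected position
            public float ViewZ;              // depth in camera space
            public float U, V;               // texture coordinates (0..1)

            // Per-vertex values handed to the rasteriser.
            public float W        => ViewZ;          // the "homogeneous coordinate"
            public float UOverW   => U / ViewZ;
            public float VOverW   => V / ViewZ;
            public float OneOverW => 1f / ViewZ;
        }

        static class PerspectiveCorrect
        {
            // Given values already linearly interpolated in screen space at a pixel,
            // recover the true texture coordinates.
            public static (float u, float v) Recover(float uOverW, float vOverW, float oneOverW)
            {
                float w = 1f / oneOverW;
                return (uOverW * w, vOverW * w);
            }
        }

    In other words, your T is essentially 1/W scaled by the perspective constant, so the W the rasteriser wants is the view-space Z of the vertex before the perspective divide.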

    Read the article

  • Checkers AI in Visual Basic not working [on hold]

    - by Eugene Galkine
    I am trying to make checkers in Visual Basic with AI. I am using the minimax algorithm (or at least what I understand of it) and it works, except the AI plays terribly, as if it is trying to lose, and when I tried to switch around the min and the max the results were IDENTICAL. I have been trying to fix it for over a week now, and I would really appreciate it if someone could help me out. I have 3 years of programming experience (in Java; only about a month of VB experience) and I am usually able to solve my errors on my own, so I don't know why I can't get this to work. The program is not at all optimized at this point and is over 1.2K lines long, so here is the entire VB project instead: https://www.dropbox.com/sh/evii0jendn93ir2/9fntwH2dNW I would really appreciate any help I could get.
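
    A symptom like "switching min and max changes nothing" often means the evaluation function scores the board from one fixed side instead of from the point of view of the player whose turn it is, or the recursion never alternates sides. A minimal C# negamax sketch to compare your VB code against; the game-state interface and its members are hypothetical placeholders, not anything from the linked project:

        using System.Collections.Generic;

        interface IGameState
        {
            IEnumerable<IGameState> NextStates();   // all legal moves for the side to move
            bool IsTerminal { get; }
            int ScoreForSideToMove();               // positive = good for the player about to move
        }

        static class Negamax
        {
            // Value of 'state' from the perspective of the side to move.
            public static int Search(IGameState state, int depth)
            {
                if (depth == 0 || state.IsTerminal)
                    return state.ScoreForSideToMove();

                int best = int.MinValue;
                foreach (var child in state.NextStates())
                {
                    // The child is scored from the opponent's perspective, hence the negation.
                    int value = -Search(child, depth - 1);
                    if (value > best) best = value;
                }
                return best;
            }
        }

    The negation at the recursive call is the part that most often goes missing; without it (or without flipping the evaluation's sign per side) the search ends up maximising the opponent's score, which looks exactly like an AI trying to lose.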

    Read the article

  • How to get all keys and values from PlayerPrefs in Unity [JavaScript]

    - by Akari
    In the first test game I've developed, if the player passes all the levels and wins, he must enter his name, and his name and score are then stored in PlayerPrefs. There is another scene that displays the names and scores of all the users who passed the game. I've been searching all morning and tried every way I know, but I failed to get this working. Is it possible to display all the keys and values previously stored in PlayerPrefs? Or can someone provide a JavaScript example to do this? Thanks...
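
    PlayerPrefs has no built-in way to enumerate its keys, so the usual workaround is to maintain your own index, for example a stored count plus numbered keys. A C# sketch of that pattern (the key names are arbitrary; the same PlayerPrefs calls exist in UnityScript, so it translates directly):

        using UnityEngine;

        public static class HighScoreStore
        {
            // PlayerPrefs cannot list its keys, so keep a count and use numbered keys.
            public static void Add(string playerName, int score)
            {
                int count = PlayerPrefs.GetInt("scoreCount", 0);
                PlayerPrefs.SetString("name" + count, playerName);
                PlayerPrefs.SetInt("score" + count, score);
                PlayerPrefs.SetInt("scoreCount", count + 1);
                PlayerPrefs.Save();
            }

            public static (string name, int score)[] All()
            {
                int count = PlayerPrefs.GetInt("scoreCount", 0);
                var entries = new (string, int)[count];
                for (int i = 0; i < count; i++)
                    entries[i] = (PlayerPrefs.GetString("name" + i), PlayerPrefs.GetInt("score" + i));
                return entries;
            }
        }

    The high-score scene then just loops over All() and displays each entry, instead of trying to discover the keys after the fact.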

    Read the article

  • Center directional light shadow on the camera's eye

    - by Caesar
    I'm currently drawing my directional light shadow using this view and projection: XMFLOAT3 dir((float)pitch, (float)yaw, (float)roll); XMFLOAT3 center(0.0f, 0.0f, 0.0f); XMVECTOR lightDir = XMLoadFloat3(&dir); XMVECTOR lightPos = radius * lightDir; XMVECTOR targetPos = XMLoadFloat3(&center); XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f); XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up); // This is the view // Transform bounding sphere to light space. XMFLOAT3 sphereCenterLS; XMStoreFloat3(&sphereCenterLS, XMVector3TransformCoord(targetPos, V)); // Ortho frustum in light space encloses scene. float l = sphereCenterLS.x - radius; float b = sphereCenterLS.y - radius; float n = sphereCenterLS.z - radius; float r = sphereCenterLS.x + radius; float t = sphereCenterLS.y + radius; float f = sphereCenterLS.z + radius; XMMATRIX P = XMMatrixOrthographicOffCenterLH(l, r, b, t, n, f); // This is the projection Which works perfectly if the center of my scene is at 0.0, 0.0, 0.0. What I would like to do is move the center of the scene relative to the camera's position. How can I do that?
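
    Since the code already builds the light view and orthographic bounds around targetPos, the change is mostly about feeding it a different centre: take a point in front of the camera (for example the camera position pushed along its view direction by the shadow radius), use that instead of the origin, and keep lightPos = center + radius * lightDir. The question's code is C++/DirectXMath; a C# sketch of just the centre calculation with System.Numerics, with the texel snapping shown in world space for brevity (doing it in light space is more precise):

        using System;
        using System.Numerics;

        static class ShadowCenter
        {
            // Centre of the shadow volume: a point ahead of the camera, snapped to
            // shadow-map texel increments to reduce shimmering as the camera moves.
            public static Vector3 Compute(Vector3 cameraPos, Vector3 cameraForward,
                                          float radius, int shadowMapSize)
            {
                Vector3 center = cameraPos + Vector3.Normalize(cameraForward) * radius;

                float texelSize = (radius * 2f) / shadowMapSize;   // world units per shadow texel
                center.X = (float)Math.Floor(center.X / texelSize) * texelSize;
                center.Y = (float)Math.Floor(center.Y / texelSize) * texelSize;
                center.Z = (float)Math.Floor(center.Z / texelSize) * texelSize;
                return center;
            }
        }

    Without some form of snapping, a centre that follows the camera continuously makes the shadow edges crawl every frame, which is why the fixed origin version looks more stable.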

    Read the article

  • Typical Applications of Linear System Solvers in Game Development

    - by craftsman.don
    I am going to write a custom solver for linear systems. I would like to survey the typical problems in games that involve solving a linear system, so that I can tailor optimizations to those problems based on the shape of the matrix. Currently I am focusing on these problems: B-spline editing (I use a linear solve to resolve C0, C1, C2 continuity) and constraints in simulation (especially position constraints, cloth). Both of them produce banded matrices. I want to hear about some other applications of linear systems in games. Thank you.
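
    For the banded cases mentioned (spline continuity, chains of position constraints), the tridiagonal special case is worth having on hand: the Thomas algorithm solves it in O(n) instead of the O(n^3) of a general dense solve. A C# sketch without pivoting, so it assumes a well-conditioned (e.g. diagonally dominant) system, which these applications usually provide:

        static class Tridiagonal
        {
            // Solves a tridiagonal system: a = sub-diagonal (a[0] unused),
            // b = diagonal, c = super-diagonal (c[n-1] unused), d = right-hand side.
            // Runs in O(n); assumes no zero pivots (e.g. diagonally dominant matrices).
            public static double[] Solve(double[] a, double[] b, double[] c, double[] d)
            {
                int n = b.Length;
                var cp = new double[n];
                var dp = new double[n];
                cp[0] = c[0] / b[0];
                dp[0] = d[0] / b[0];

                // Forward elimination.
                for (int i = 1; i < n; i++)
                {
                    double m = b[i] - a[i] * cp[i - 1];
                    cp[i] = (i < n - 1) ? c[i] / m : 0.0;
                    dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
                }

                // Back substitution.
                var x = new double[n];
                x[n - 1] = dp[n - 1];
                for (int i = n - 2; i >= 0; i--)
                    x[i] = dp[i] - cp[i] * x[i + 1];
                return x;
            }
        }

    Wider bands (cloth grids, general sparse constraint systems) are usually handled with a banded Cholesky/LU or an iterative method such as conjugate gradient rather than a dense factorization.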

    Read the article

  • Combine 3D objects in XNA 4

    - by Christoph
    Currently I am writing on my thesis for university, the theme I am working on is 3D Visualization of hierarchical structures using cone trees. I want to do is to draw a cone and arrange a number of spheres at the bottom of the cone. The spheres should be arranged according to the radius and the number of spheres correctly. As you can imagine I need a lot of these cone/sphere combinations. First Attempt I was able to find some tutorials that helped with drawing cones and spheres. Cone public Cone(GraphicsDevice device, float height, int tessellation, string name, List<Sphere> children) { //prepare children and calculate the children spacing and radius of the cone if (children == null || children.Count == 0) { throw new ArgumentNullException("children"); } this.Height = height; this.Name = name; this.Children = children; //create the cone if (tessellation < 3) { throw new ArgumentOutOfRangeException("tessellation"); } //Create a ring of triangels around the outside of the cones bottom for (int i = 0; i < tessellation; i++) { Vector3 normal = this.GetCircleVector(i, tessellation); // add the vertices for the top of the cone base.AddVertex(Vector3.Up * height, normal); //add the bottom circle base.AddVertex(normal * this.radius + Vector3.Down * height, normal); //Add indices base.AddIndex(i * 2); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 2) % (tessellation * 2)); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 3) % (tessellation * 2)); base.AddIndex((i * 2 + 2) % (tessellation * 2)); } //create flate triangle to seal the bottom this.CreateCap(tessellation, height, this.Radius, Vector3.Down); base.InitializePrimitive(device); } Sphere public void Initialize(GraphicsDevice device, Vector3 qi) { int verticalSegments = this.Tesselation; int horizontalSegments = this.Tesselation * 2; //single vertex on the bottom base.AddVertex((qi * this.Radius) + this.lowering, Vector3.Down); for (int i = 0; i < verticalSegments; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = (float)Math.Cos(latitude); //Create a singe ring of latitudes for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex(normal * this.Radius, normal); } } // Finish with a single vertex at the top of the sphere. AddVertex((qi * this.Radius) + this.lowering, Vector3.Up); // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(CurrentVertex - 1); base.AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(CurrentVertex - 2 - i); } base.InitializePrimitive(device); } The tricky part now is to arrange the spheres at the bottom of the cone. I tried is to draw just the cone and then draw the spheres. I need a lot of these cones, so it would be pretty hard to calculate all the positions correctly. Second Attempt So the second try was to generate a object that builds all vertices of the cone and all of the spheres at once. So I was hoping to render a cone with all its spheres arranged correctly. After a short debug I found out that the cone is created and the first sphere, when it turn of the second sphere I am running into an OutOfBoundsException of ushort.MaxValue. Cone and Spheres public ConeWithSpheres(GraphicsDevice device, float height, float coneDiameter, float sphereDiameter, int coneTessellation, int sphereTessellation, int numberOfSpheres) { if (coneTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the cone. The number must be greater or equal to 3", coneTessellation)); } if (sphereTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the sphere. The number must be greater or equal to 3", sphereTessellation)); } //set properties this.Height = height; this.ConeDiameter = coneDiameter; this.SphereDiameter = sphereDiameter; this.NumberOfChildren = numberOfSpheres; //end set properties //generate the cone this.GenerateCone(device, coneTessellation); //generate the spheres //vector that defines the Y position of the sphere on the cones bottom Vector3 lowering = new Vector3(0, 0.888f, 0); this.GenerateSpheres(device, sphereTessellation, numberOfSpheres, lowering); } // ------ GENERATE CONE ------ private void GenerateCone(GraphicsDevice device, int coneTessellation) { int doubleTessellation = coneTessellation * 2; //Create a ring of triangels around the outside of the cones bottom for (int index = 0; index < coneTessellation; index++) { Vector3 normal = this.GetCircleVector(index, coneTessellation); //add the vertices for the top of the cone base.AddVertex(Vector3.Up * this.Height, normal); //add the bottom of the cone base.AddVertex(normal * this.ConeRadius + Vector3.Down * this.Height, normal); //add indices base.AddIndex(index * 2); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 2) % doubleTessellation); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 3) % doubleTessellation); base.AddIndex((index * 2 + 2) % doubleTessellation); } //create flate triangle to seal the bottom this.CreateCap(coneTessellation, this.Height, this.ConeRadius, Vector3.Down); base.InitializePrimitive(device); } // ------ GENERATE SPHERES ------ private void GenerateSpheres(GraphicsDevice device, int sphereTessellation, int numberOfSpheres, Vector3 lowering) { int verticalSegments = sphereTessellation; int horizontalSegments = sphereTessellation * 2; for (int childCount = 1; childCount < numberOfSpheres; childCount++) { //single vertex at the bottom of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Down); for (int verticalSegmentsCount = 0; verticalSegmentsCount < verticalSegments; verticalSegmentsCount++) { float latitude = ((verticalSegmentsCount + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = 
(float)Math.Cos(latitude); //create a single ring of latitudes for (int horizontalSegmentsCount = 0; horizontalSegmentsCount < horizontalSegments; horizontalSegmentsCount++) { float longitude = horizontalSegmentsCount * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex((normal * this.SphereRadius) + lowering, normal); } } //finish with a single vertex at the top of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Up); //create a fan connecting the bottom vertex to the bottom latitude ring for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(0); base.AddIndex(1 + (i + 1) % horizontalSegments); base.AddIndex(1 + i); } //Fill the sphere body with triangles joining each pair of latitude rings for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } //create a fan connecting the top vertiex to the top latitude for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(this.CurrentVertex - 1); base.AddIndex(this.CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(this.CurrentVertex - 2 - i); } base.InitializePrimitive(device); } } Any ideas how I could fix this?
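
    The ushort.MaxValue exception is the classic symptom of a 16-bit index buffer overflowing: once the combined cone-plus-spheres geometry has more than 65,535 vertices, ushort indices can no longer address them. The usual fixes are to keep each combined primitive under that limit (split the spheres across several primitives) or to build the index buffer with 32-bit indices (IndexElementSize.ThirtyTwoBits in XNA). A small C# guard along those lines; the vertex-count formula, CurrentVertex and FlushCurrentPrimitive() mirror or stand in for the question's base-class members, so treat them as assumptions:

        // Before appending another sphere's vertices to the combined primitive:
        int vertsPerSphere = verticalSegments * horizontalSegments + 2;   // rings plus the two pole vertices
        if (CurrentVertex + vertsPerSphere > ushort.MaxValue)
        {
            // Either flush what has been built so far into its own primitive / vertex buffer...
            FlushCurrentPrimitive();   // hypothetical helper: start a new primitive for the remaining spheres
            // ...or construct the IndexBuffer with IndexElementSize.ThirtyTwoBits instead of ushort indices.
        }

    Splitting also keeps the cap on InitializePrimitive being called once per complete primitive; the current code calls it once per sphere inside the loop, which is worth moving to after the loop regardless.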

    Read the article

  • What's the best game engine to use for my PC game project? [closed]

    - by user19860
    I'm in the planning phase of creating an action RPG for the PC, and I'd like to create a League of Legends style look for the game (animated/cartoony). Any idea which engine best replicates this look? I ask because when I look at a lot of the UDK/Unreal games, they've all got the more realistic 3D look that I'd like to avoid, so I was wondering if an alternate look is possible on that type of engine. Source SDK and Unity also look very interesting; I just don't know what kinds of visual capabilities these engines have. Thanks in advance.

    Read the article

  • Ideas for attack damage algorithm (language irrelevant)

    - by Dillon
    I am working on a game and I need ideas for the damage that will be done to the enemy when your player attacks. The total amount of health that the enemy has is called enemyHealth, and has a value of 1000. You start off with a weapon that does 40 points of damage (this may change). The player has an attack stat that you can increase, called playerAttack. This value starts off at 1, and has a possible max value of 100 after you level it up many times and make it further into the game. The amount of damage that the weapon does is cut and dried, and subtracts 40 points from the total 1000 points of health every time the enemy is hit. But what playerAttack does is add to that value by a percentage. Here is the algorithm I have now (I've taken out all of the GUI, classes, etc. and given the variables straightforward names): double totalDamage = weaponDamage + (weaponDamage*(playerAttack*.05)); enemyHealth -= (int)totalDamage; This seemed to work great for the most part. So I started testing some values... //enemyHealth ALWAYS starts at 1000 weaponDamage = 50; playerAttack = 30; If I set these values, the amount of damage done to the enemy is 125. That seemed like a good number, so I wanted to see what would happen if the player's attack was maxed out, but with the weakest starting weapon. weaponDamage = 50; playerAttack = 100; The totalDamage ends up being 300, which would kill an enemy in just a few hits. Even with your attack that high, I wouldn't want the weakest weapon to be able to kill the enemy that fast. I thought about adding defense, but I feel the game would lose consistency and become unbalanced in the long run. Possibly a well-designed algorithm for a weapon decrease modifier would work for lower-level weapons, or something like that. I just need a break from trying to figure out the best way to go about this, and maybe someone who has experience with games and keeping the leveling consistent could give me some ideas/pointers.
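
    One common way to keep a weak weapon from scaling out of control is to give the attack stat diminishing returns, for example bonus = attack / (attack + K), so the bonus approaches a cap instead of growing linearly and the weapon's base damage stays the dominant factor. A C# sketch of that shape; the constant K and the +100% cap are tuning assumptions, not values from the question:

        static class Damage
        {
            // The attack bonus approaches +100% as playerAttack grows, with K controlling
            // how quickly it gets there (K = 100 means a +50% bonus at 100 attack).
            const double K = 100.0;

            public static int Compute(double weaponDamage, double playerAttack)
            {
                double bonus = playerAttack / (playerAttack + K);   // 0.0 .. just under 1.0
                return (int)(weaponDamage * (1.0 + bonus));
            }
        }

    With these numbers the starting 50-damage weapon does about 61 damage at 30 attack and tops out at 75 at max attack, instead of 300, so against 1000 HP better weapons stay relevant all the way through the attack curve.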

    Read the article

  • Algorithm to find average position

    - by Simran kaur
    In the given diagram, I have the extreme left and right points, that is -2 and 4 in this case. So, obviously, I can calculate the width, which is 6 in this case. What we know: the number of partitions (3 in this case) and the partition number at any point, i.e. whether it is the 1st, 2nd or 3rd partition (numbered starting from the left). What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So, basically, I just want a generalized formula to calculate the position of the average for any partition.
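
    If the "average" of a partition is taken to be its midpoint, the general formula follows directly from the inputs listed: each partition has width (right - left) / n, and partition k (counted from 1 on the left) is centred at left + (k - 0.5) * (right - left) / n. A one-function C# sketch of that formula:

        static class Partitions
        {
            // Midpoint of partition k (1-based from the left) when [left, right]
            // is split into n equal partitions.
            public static double Midpoint(double left, double right, int n, int k)
            {
                double width = (right - left) / n;
                return left + (k - 0.5) * width;
            }
        }

    For the example values (-2 to 4 with three partitions) this gives -1, 1 and 3, which should match the purple lines in the diagram if they sit at the centre of each partition.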

    Read the article
