Search Results

Search found 30601 results on 1225 pages for 'team development'.


  • Is there an algorithm for a pool game?

    - by Dmitri
    Hello! I am looking for an algorithm to calculate the direction and speed of balls in a pool game. I am sure there must be some open source code for this, since pool games are among the oldest computer games I can remember. When one ball hits another, I need an algorithm to calculate the direction of both of them; it will depend on the exact angle at which they hit each other and on their speed. I want to practice Java coding, so I am looking for Java code or a package that has this type of code.
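
    For reference, the core of the physics is simpler than it sounds: for equal-mass, frictionless balls, the collision exchanges the velocity components along the line joining the two centers and leaves the tangential components untouched. Below is a minimal illustrative sketch of that rule in Python (not the requested Java package; all names are mine, not from any library), which translates to Java almost line for line:

        import math

        def collide(p1, v1, p2, v2):
            """Elastic collision of two equal-mass balls in 2D.
            p1, p2: (x, y) centers at contact; v1, v2: (vx, vy) velocities.
            Returns the post-collision velocities."""
            nx, ny = p2[0] - p1[0], p2[1] - p1[1]
            dist = math.hypot(nx, ny)
            nx, ny = nx / dist, ny / dist            # unit normal at the contact point
            # relative velocity projected onto the contact normal
            rel = (v1[0] - v2[0]) * nx + (v1[1] - v2[1]) * ny
            if rel <= 0:
                return v1, v2                        # balls separating: no impulse
            # equal masses: swap the normal components, keep the tangential ones
            return ((v1[0] - rel * nx, v1[1] - rel * ny),
                    (v2[0] + rel * nx, v2[1] + rel * ny))

        # head-on example: the moving ball stops, the struck ball takes its speed
        print(collide((0, 0), (1, 0), (1, 0), (0, 0)))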

    Read the article

  • How to link a C++ object to a local variable in Lua

    - by MahanGM
    I'm completing my scripting interface with Lua, but recently I've gotten stuck on one point. I have several functions for my entity events, like Update(). I have a function called create_entity() which instantiates a new entity from a given entity index:

        function Update()
            local bullet = create_entity(0, 0, "obj_bullet")
        end

    create_entity returns a table holding the properties of the created entity. Now how can I make a connection between the bullet variable and my newly created object? Right now, for objects previously added to the scene, I simply set a global table for each of them, and after every call to Update() I go through the registered names to find the object tables and perform new changes, like the one below:

        function Update()
            if keyboard_key_press(vk_right) then
                obj_player.x += 3
            end
        end

    I can get the obj_player table because I know its name from C++; I can also get it as a global table and simply reach for the first instance named obj_player. Is there any solution that makes the bullet variable act like this? I was thinking of getting all local variables in the Update() function and checking each one to see whether it is a table with a unique field attached to it, like an id; this way I could determine that it is an object table and do the rest of the processing. By the way, would this interface be easier to implement with luaBind? Bottom line: how can I make a local variable in Lua that receives a table from the create_entity function, and track that local variable so as to capture it from C++? e.g.

        function Update()
            local bullet = create_entity(0, 0, "obj_bullet")
            bullet.x = 10   -- commit a change in the table
        end

    Now I want to get the variable bullet from C++. And it's not just this one variable; there might be a ton of these locals with different names.
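
    One pattern worth mentioning (a sketch of a common approach, not from the question): instead of hunting for locals, have create_entity stamp every returned table with a unique id and register it in a table the host also holds, so C++ can reach every live entity no matter what local name the script used. Illustrated in Python for brevity, with hypothetical names; the same shape works as a Lua table plus a C++ std::map:

        import itertools

        _registry = {}                    # host-visible: id -> entity table
        _ids = itertools.count(1)

        def create_entity(x, y, kind):
            """Return the entity 'table' and record it in the registry."""
            ent = {"id": next(_ids), "x": x, "y": y, "kind": kind}
            _registry[ent["id"]] = ent
            return ent

        # script side: the local name never matters to the host
        bullet = create_entity(0, 0, "obj_bullet")
        bullet["x"] = 10                  # commit a change in the table

        # host side: iterate every live entity without knowing any local names
        for ent in _registry.values():
            print(ent["id"], ent["kind"], ent["x"])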

    Read the article

  • Trying to set up first DirectX project (don't understand the error) [on hold]

    - by user1157885
    I've just started learning DirectX with the book "3D Game Programming with DirectX". I just finished setting up all the paths and adding the code to the project, which I think I did correctly, but I get this massive error which I don't really understand and which is hard to google. Could someone tell me what it means and how to fix it?

        Error 1 error TRK0002: Failed to execute command: ""C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Utilities\bin\x64\fxc.exe" /nologo /Emain /Fo "C:\Desktop\DirectX 11 Projects\box\Win32Project2\Debug\color.cso" /Od /Zi "....\Book Files\3DGameProg\DVD\Code\Chapter 6 Drawing in Direct3D\Box\FX\color.fx"". The handle is invalid.

    Read the article

  • tunnel effect cocos2d

    - by samfisher
    I am looking to create a tunnel effect in cocos2d (iOS) similar to the ones in these references: ref Video 1, ref Video 2. Could anyone suggest any pointers? So far I have tried several ring-shaped sprites with decreasing scale, all positioned at the same center point, with Z decreasing as well for each smaller sprite. I then animate each with CCScaleTo, scaling up to 2.0 over the animation duration, but it doesn't come anywhere near the tunnel effect shown in the references. Thanks, sam

    Read the article

  • Corona SDK: Animation takes a long time to play after "prepare" step

    - by Michael Taufen
    First off, I'm using the current publicly available build, version 2011.704. I'm building a platformer and have a character that runs along and jumps when the screen is tapped. While jumping, the animation code has him assume a svelte jumping pose, and upon detecting a collision with the ground he returns to running. All of this happens. The problem is that there is a strange gap of time, about half a second by the feel of it, where my character sits on the first frame of the run animation after landing before it actually starts playing. This leads me to believe the problem is somewhere between the "prepare" step of loading up a sprite set's animation sequence and the "play" step. Thanks in advance for any help :). My code for when my character lands is as follows:

        local function collisionHandler( event )
            if (event.object1.myName == "character") and (event.object2.type == "terrain") then
                inAir = false
                characterInstance:prepare( "run" )   -- TODO: time between prepare and play is curiously long...
                characterInstance:play()
            end
        end

    Read the article

  • How can I solve for the game's world coordinates?

    - by HyperGroups
    I've used 3DReaperDX to get an obj file; the header information is shown below:

        #AR=2.00606, FOV=45.09583(height), Xscale:0.83290, Yscale:0.41519, Zscale:1.0
        #**************************************************
        #ALPHABLENDENABLE: No
        #ZENABLE: Yes
        #ZWRITEENABLE: Yes
        #TWOSIDED: No
        #INVALID: No
        #THIN: No
        #RENDERTARGET_IS_BACKBUFFER: Yes
        #WIDTH_DO_MATCH: Yes
        #RGBWRITEDISABLED: No
        # object DrawCall_0 to come ...
        g
        v 2143.35547 6654.99023 25835.37109
        v 2243.17773 6296.61523 25957.53906
        v 2343.00000 5856.84473 26093.97656

    How can I get the game's world coordinates from this? For example, I can map the scaleTransform onto the vertex data:

        scaleTransform = {X1scale, Y1scale, Z1scale} = {0.8329, 0.41519, 1.0}

    Is the obj file by itself enough to recover the game's world coordinates? I want to put an object on this ground with the same coordinates the game engine uses. If the file by itself is not enough, I can also place something with known fixed coordinates in the game and then use 3DReaper to capture the obj file.
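
    A small sketch of the arithmetic involved (an assumption on my part, not something the question confirms): if the capture tool produced its vertices by scaling the game's world-space coordinates per axis, then dividing each vertex by the header's scale factors would undo that step. Hypothetical Python:

        SCALE = (0.83290, 0.41519, 1.0)   # Xscale, Yscale, Zscale from the obj header

        def to_world(v, scale=SCALE):
            """Undo a per-axis scale (assuming the tool applied a plain scale)."""
            return tuple(c / s for c, s in zip(v, scale))

        print(to_world((2143.35547, 6654.99023, 25835.37109)))

    Whether a plain scale is the only transform involved is exactly what a marker with known in-game coordinates, as described above, would let you verify.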

    Read the article

  • Changing the material on an object on click in unity

    - by user1509674
    I am working in Unity 2D. I have six game objects: Object1, Object2, Object3 and ObjectImage1, ObjectImage2, ObjectImage3 (all of them images). I have arranged the objects in the scene as a list, one below another:

        Object1
        Object2
        Object3

    When I click Object1, it should change to ObjectImage1. When I click Object2, it should change to ObjectImage2, and the image above it (currently ObjectImage1) should change back to Object1. When I click Object3, it should change to ObjectImage3, and the image above it (currently ObjectImage2) should change back to Object2. This is similar to a selection. I have it coded so that when I click Object2 it changes to ObjectImage2, but the first object does not change back from ObjectImage1 to Object1. Can anybody help me code this?

    Edit:

        public GameObject newSprite;
        private Vector3 currentSpritePosition;

        void Start()
        {
            newSprite.renderer.enabled = false;
            currentSpritePosition = transform.position;
            // then make it invisible
            renderer.enabled = false;
            // give the new sprite the position of the latter
            newSprite.transform.position = currentSpritePosition;
            // then make it visible
            newSprite.renderer.enabled = true;
        }

        void OnMouseExit()
        {
            // just the reverse process
            renderer.enabled = true;
            newSprite.renderer.enabled = false;
        }

    This is the code used to change the material:

        public GameObject newSprite;
        private Vector3 currentSpritePosition;

        void Start()
        {
            newSprite.renderer.enabled = false;
        }

        void OnMouseEnter()
        {
            // getting the current position of the current sprite, if ever it can move
            currentSpritePosition = transform.position;
            // then make it invisible
            renderer.enabled = false;
            // give the new sprite the position of the latter
            newSprite.transform.position = currentSpritePosition;
            // then make it visible
            newSprite.renderer.enabled = true;
        }

        void OnMouseExit()
        {
            // just the reverse process
            renderer.enabled = true;
            newSprite.renderer.enabled = false;
        }
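
    The missing piece appears to be remembering which object is currently selected so that it can be reverted when the next one is clicked. Here is that bookkeeping alone, as an illustrative Python sketch (not Unity API; every name is hypothetical):

        class Selectable:
            def __init__(self, name):
                self.name = name
                self.selected = False      # False: show ObjectN; True: show ObjectImageN

        class SelectionList:
            def __init__(self, items):
                self.items = items
                self.current = None        # at most one item shows its "selected" image

            def click(self, item):
                if self.current is not None and self.current is not item:
                    self.current.selected = False    # revert the previous selection
                item.selected = True
                self.current = item

        objs = [Selectable("Object1"), Selectable("Object2"), Selectable("Object3")]
        ui = SelectionList(objs)
        ui.click(objs[0])
        ui.click(objs[1])                  # Object1 reverts automatically here
        print([(o.name, o.selected) for o in objs])

    In Unity terms, the equivalent is a single manager holding a reference to the currently selected object and reverting it before enabling the new object's image sprite.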

    Read the article

  • Custom Music in Skyrim's Creation Kit?

    - by CptSupermrkt
    Can you bring in external music, such as mp3s? If so, how? I didn't see anything about this in the wiki Bethesda released. Also, how does this work with regard to the Steam Workshop? I don't imagine they would appreciate uploading copyrighted content. I don't particularly care about making a public mod; I just want to screw around privately and create dungeons/towns using music from some of my favorite games. Thanks.

    Read the article

  • CreateRenderTarget returns 0x80070057 in big surface resolution

    - by senggen
    I have created an SLI merged desktop from three 1920x1080 monitors, so the desktop resolution is 5760x1080. CreateRenderTarget fails with error 0x80070057 while creating the render-target surface:

        IDirect3DSurface9* _render_surface;
        HRESULT hr = _device->CreateRenderTarget(
            _desktop_width * 2,
            _desktop_height + 1,
            D3DFMT_A8R8G8B8,
            D3DMULTISAMPLE_NONE,
            0,
            TRUE,
            &_render_surface,
            NULL);

    It works fine at a desktop resolution of 1024x768, where the total resolution is 3072x768. The documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/bb174361(v=vs.85).aspx says:

        If the method succeeds, the return value is D3D_OK. If the method fails, the return
        value can be one of the following: D3DERR_NOTAVAILABLE, D3DERR_INVALIDCALL,
        D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY.

    and gives no description of 0x80070057, which decodes as:

        HRESULT: 0x80070057 (2147942487)
        Name: E_INVALIDARG
        Description: An invalid parameter was passed to the returning function

    Could somebody please help me?

    Read the article

  • SlimDX Texture2D from DataRectangle array

    - by Rebekah Bryant
    I'm totally new to DirectX. I'm using SlimDX to create a texture consisting of 13046 DataRectangles. Here's my code; it breaks on the Texture2D constructor with "E_INVALIDARG: An invalid parameter was passed to the returning function (-2147024809)." inParms is just a struct containing a handle to a Panel.

        public Renderer(Parameters inParms, ref DataRectangle[] inShapes)
        {
            Texture2DDescription description = new Texture2DDescription()
            {
                Width = 500,
                Height = 500,
                MipLevels = 1,
                ArraySize = inShapes.Length,
                Format = Format.R32G32B32_Float,
                SampleDescription = new SampleDescription(1, 0),
                Usage = ResourceUsage.Default,
                BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
                CpuAccessFlags = CpuAccessFlags.None,
                OptionFlags = ResourceOptionFlags.None
            };

            SwapChainDescription chainDescription = new SwapChainDescription()
            {
                BufferCount = 1,
                IsWindowed = true,
                Usage = Usage.RenderTargetOutput,
                ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm),
                SampleDescription = new SampleDescription(1, 0),
                Flags = SwapChainFlags.None,
                OutputHandle = inParms.Handle,
                SwapEffect = SwapEffect.Discard
            };

            Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, chainDescription, out mDevice, out mSwapChain);
            Texture2D texture = new Texture2D(Device, description, inShapes);
        }

    EDIT: Running with the Debug flag set, I got:

        D3D11 ERROR: ID3D11Device::CreateTexture2D: The format (0x6, R32G32B32_FLOAT) cannot be bound as a RenderTarget, or cast to a format that could be bound as a RenderTarget. This is because the current graphics implementation does not even support this Format. Therefore this format does not support D3D11_BIND_RENDER_TARGET. Use CheckFormatSupport to check Format support. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT]
        D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]

    Read the article

  • Load order in XNA?

    - by marc wellman
    I am wondering whether there is a mechanism to manually control the call order of void Game.LoadContent(), as there is for void Game.Draw(GameTime gt) by setting int DrawableGameComponent.DrawOrder, other than the order that results from adding components to the Game.Components container. Does something similar perhaps exist for Game.Update(GameTime gt)?

    UPDATE: To exemplify my issue, suppose you have several game components that depend on each other with regard to their instantiation, all inherited from DrawableGameComponent. Now suppose that in one of these components you load a Model from the game's content pipeline and add it to a static container in order to provide access to it for other game components:

        public override void LoadContent()
        {
            // ...
            Model m = _contentManager.Load<Model>(@"content/myModel");

            // GameComponents is a static class with an accessible list where game components reside.
            GameComponents.AddCompnent(m);
            // ...
        }

    It's easy to imagine that this component's load method has to run before that of any other game component that wants to access the model m in its own load method.
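
    Since XNA loads in Game.Components order, one workaround (illustrative only, not a built-in XNA facility) is to give each component an explicit load-order value, mirroring what DrawOrder does for Draw, and drive the loading yourself. A minimal Python sketch of the idea, with hypothetical names:

        class Component:
            def __init__(self, name, load_order):
                self.name = name
                self.load_order = load_order   # lower loads first, like DrawOrder

            def load_content(self):
                print("loading", self.name)

        components = [
            Component("Hud", load_order=10),
            Component("ModelProvider", load_order=0),  # must load first: others need its model
            Component("Enemies", load_order=5),
        ]

        # load in explicit order instead of relying on insertion order
        for c in sorted(components, key=lambda c: c.load_order):
            c.load_content()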

    Read the article

  • Game Formula/Mechanic

    - by Georgiadis Abraam
    I am trying to design a game for a project I have. The main idea is: 3 types of heroes, 3 stats per hero. There are no levels involved, so the differences must come from the stats. The fight logic (fLogic) is that a type-1 hero has good chances of beating a type-2 hero, a type-2 hero has good chances against a type-3 hero, and a type-3 hero has good chances of beating a type-1 hero. For over a week I have been trying to find a stats-based formula that produces this, but I can't; I was meddling with numbers yesterday and got something decent, but I can't extract the formula out of it. Could you please guide me or give me hints on how I should start creating formulas for a level-less game that fulfills the fLogic?
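
    One standard way to get this rock-paper-scissors dynamic (a sketch of a common approach, not the asker's design): compute each hero's strength from its stats, then apply a type-advantage multiplier so the favored type usually, but not always, wins. Hypothetical Python with made-up weights:

        import random

        # 1 beats 2, 2 beats 3, 3 beats 1
        ADVANTAGE = {(1, 2): 1.3, (2, 3): 1.3, (3, 1): 1.3,
                     (2, 1): 0.7, (3, 2): 0.7, (1, 3): 0.7}

        def power(stats):
            attack, defense, speed = stats
            return attack * 1.0 + defense * 0.8 + speed * 0.6   # arbitrary stat weights

        def win_chance(type_a, stats_a, type_b, stats_b):
            a = power(stats_a) * ADVANTAGE.get((type_a, type_b), 1.0)
            b = power(stats_b)
            return a / (a + b)   # equal stats give the favored type roughly a 57% edge

        def fight(type_a, stats_a, type_b, stats_b):
            p = win_chance(type_a, stats_a, type_b, stats_b)
            return "A" if random.random() < p else "B"

        print(win_chance(1, (10, 10, 10), 2, (10, 10, 10)))   # type 1 favored over type 2

    Tuning the 1.3/0.7 pair directly controls how strong the "good chances" are, independently of levels.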

    Read the article

  • OpenGL stars are not rendering

    - by Darestium
    I'm doing NeHe's OpenGL lesson 9, using SFML for windowing. The strange thing is that no stars are rendering. I've looked over the code at least 10 times now and I can't figure out the problem. Any help would be much appreciated.

        #include <SFML/System.hpp>
        #include <SFML/Window.hpp>
        #include <SFML/Graphics.hpp>
        #include <iostream>

        void processEvents(sf::Window *app);
        void processInput(sf::Window *app);
        void renderGlScene(sf::Window *app);
        void init();
        int loadResources();

        const int NUM_OF_STARS = 50;
        float triRot = 0.0f;
        float quadRot = 0.0f;
        bool twinkle = false;
        bool tKey = false;
        float zoom = 15.0f;
        float tilt = 90.0f;
        float spin = 0.0f;
        unsigned int loop;
        unsigned int texture_handle[1];

        typedef struct {
            int r, g, b;
            float distance;
            float angle;
        } stars;

        stars star[NUM_OF_STARS];

        int main()
        {
            sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 9");
            app.UseVerticalSync(false);
            init();
            if (loadResources() == -1) {
                return EXIT_FAILURE;
            }
            while (app.IsOpened()) {
                processEvents(&app);
                processInput(&app);
                renderGlScene(&app);
                app.Display();
            }
            return EXIT_SUCCESS;
        }

        int loadResources()
        {
            sf::Image img_data;

            // Load Texture
            if (!img_data.LoadFromFile("data/images/star.bmp")) {
                std::cout << "Could not load data/images/star.bmp";
                return -1;
            }

            // Generate 1 texture
            glGenTextures(1, &texture_handle[0]);

            // Linear filtering
            glBindTexture(GL_TEXTURE_2D, texture_handle[0]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img_data.GetWidth(), img_data.GetHeight(),
                         0, GL_RGBA, GL_UNSIGNED_BYTE, img_data.GetPixelsPtr());
            return 0;
        }

        void processInput(sf::Window *app)
        {
            const sf::Input& input = app->GetInput();

            if (input.IsKeyDown(sf::Key::T) && !tKey) {
                tKey = true;
                twinkle = !twinkle;
            }
            if (!input.IsKeyDown(sf::Key::T)) {
                tKey = false;
            }
            if (input.IsKeyDown(sf::Key::Up)) {
                tilt -= 0.05f;
            }
            if (input.IsKeyDown(sf::Key::Down)) {
                tilt += 0.05f;
            }
            if (input.IsKeyDown(sf::Key::PageUp)) {
                zoom -= 0.02f;
            }
            if (input.IsKeyDown(sf::Key::Up)) {
                zoom += 0.02f;
            }
        }

        void init()
        {
            glClearDepth(1.f);
            glClearColor(0.f, 0.f, 0.f, 0.f);

            // Enable texturing
            glEnable(GL_TEXTURE_2D);
            //glDepthMask(GL_TRUE);

            // Setup a perspective projection
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.f, 1.f, 1.f, 500.f);
            glShadeModel(GL_SMOOTH);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE);
            glEnable(GL_BLEND);

            for (loop = 0; loop < NUM_OF_STARS; loop++) {
                star[loop].distance = (float)loop / NUM_OF_STARS * 5.0f; // Calculate distance from the centre
                // Give stars random rgb value
                star[loop].r = rand() % 256;
                star[loop].g = rand() % 256;
                star[loop].b = rand() % 256;
            }
        }

        void processEvents(sf::Window *app)
        {
            sf::Event event;
            while (app->GetEvent(event)) {
                if (event.Type == sf::Event::Closed) {
                    app->Close();
                }
                if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape) {
                    app->Close();
                }
            }
        }

        void renderGlScene(sf::Window *app)
        {
            app->SetActive();

            // Clear color depth buffer
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // Apply some transformations
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            // Select texture
            glBindTexture(GL_TEXTURE_2D, texture_handle[0]);

            for (loop = 0; loop < NUM_OF_STARS; loop++) {
                glLoadIdentity();                               // Reset The View Before We Draw Each Star
                glTranslatef(0.0f, 0.0f, zoom);                 // Zoom Into The Screen (Using The Value In 'zoom')
                glRotatef(tilt, 1.0f, 0.0f, 0.0f);              // Tilt The View (Using The Value In 'tilt')
                glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f);  // Rotate To The Current Stars Angle
                glTranslatef(star[loop].distance, 0.0f, 0.0f);  // Move Forward On The X Plane
                glRotatef(-star[loop].angle, 0.0f, 1.0f, 0.0f); // Cancel The Current Stars Angle
                glRotatef(-tilt, 1.0f, 0.0f, 0.0f);             // Cancel The Screen Tilt

                if (twinkle) {
                    glColor4ub(star[(NUM_OF_STARS - loop) - 1].r,
                               star[(NUM_OF_STARS - loop) - 1].g,
                               star[(NUM_OF_STARS - loop) - 1].b, 255);
                    glBegin(GL_QUADS);                          // Begin Drawing The Textured Quad
                    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
                    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
                    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
                    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
                    glEnd();                                    // Done Drawing The Textured Quad
                }

                glRotatef(spin, 0.0f, 0.0f, 1.0f);              // Rotate The Star On The Z Axis

                // Assign A Color Using Bytes
                glColor4ub(star[loop].r, star[loop].g, star[loop].b, 255);
                glBegin(GL_QUADS);                              // Begin Drawing The Textured Quad
                glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
                glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
                glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
                glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
                glEnd();                                        // Done Drawing The Textured Quad

                spin += 0.01f;                                  // Used To Spin The Stars
                star[loop].angle += (float)loop / NUM_OF_STARS; // Changes The Angle Of A Star
                star[loop].distance -= 0.01f;                   // Changes The Distance Of A Star
                if (star[loop].distance < 0.0f) {
                    star[loop].distance += 5.0f;                // Move The Star 5 Units From The Center
                    star[loop].r = rand() % 256;                // Give It A New Red Value
                    star[loop].g = rand() % 256;                // Give It A New Green Value
                    star[loop].b = rand() % 256;                // Give It A New Blue Value
                }
            }
        }

    Read the article

  • Best algorithm for recursive adjacent tiles?

    - by OhMrBigshot
    In my game I have a set of tiles placed in a 2D array, indexed by their X and Z coordinates ([1,1], [1,2], etc.). Now I want a sort of "paint bucket" mechanism: selecting a tile will destroy all adjacent tiles until a condition stops it, say, when it hits an object with hasFlag. Here's what I have so far. I'm sure it's pretty bad, and it also sometimes freezes everything:

        void destroyAdjacentTiles(int x, int z) {
            int GridSize = Cubes.GetLength(0);
            int minX = x == 0 ? x : x - 1;
            int maxX = x == GridSize - 1 ? x : x + 1;
            int minZ = z == 0 ? z : z - 1;
            int maxZ = z == GridSize - 1 ? z : z + 1;

            Debug.Log(string.Format("Cube: {0}, {1}; X {2}-{3}; Z {4}-{5}", x, z, minX, maxX, minZ, maxZ));

            for (int curX = minX; curX <= maxX; curX++) {
                for (int curZ = minZ; curZ <= maxZ; curZ++) {
                    if (Cubes[curX, curZ] != Cubes[x, z]) {
                        Debug.Log(string.Format("  Checking: {0}, {1}", curX, curZ));
                        if (Cubes[curX, curZ] && Cubes[curX, curZ].GetComponent<CubeBehavior>().hasFlag) {
                            Destroy(Cubes[curX, curZ]);
                            destroyAdjacentTiles(curX, curZ);
                        }
                    }
                }
            }
        }
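
    For comparison, the usual cure for both the freezes and the deep recursion (a sketch of textbook flood fill, not the asker's Unity code) is an explicit frontier plus a visited set, so no tile is ever processed twice. Illustrative Python:

        def flood_destroy(grid, start, can_destroy):
            """Destroy the start tile and every reachable 4-connected
            neighbor for which can_destroy(x, z) holds; no recursion."""
            width, depth = len(grid), len(grid[0])
            stack, visited = [start], {start}
            while stack:
                x, z = stack.pop()
                grid[x][z] = None                          # "destroy" the tile
                for nx, nz in ((x-1, z), (x+1, z), (x, z-1), (x, z+1)):
                    if (0 <= nx < width and 0 <= nz < depth
                            and (nx, nz) not in visited and can_destroy(nx, nz)):
                        visited.add((nx, nz))
                        stack.append((nx, nz))

        grid = [[object() for _ in range(8)] for _ in range(8)]   # 8x8 board of dummy tiles
        flood_destroy(grid, (2, 3), lambda x, z: grid[x][z] is not None)
        print(sum(tile is None for row in grid for tile in row))  # 64: everything connected went

    The visited set is what the recursive version above is missing: without it, neighboring tiles keep re-enqueuing each other, which would match the reported freezes (especially since Unity's Destroy is deferred, so the destroyed-tile check stays true within the frame).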

    Read the article

  • What makes game sound effects "good"?

    - by you786
    I'm making a small game, and I've found some free sound effects that I'd like to use. The issue is that I can't get the sound effects to sound like they "belong" in my game. I don't know what to look for to make sound effects match the rest of my game's style. I have some ideas about what affects how well audio meshes with graphics. For example, I have a feeling that my current SFX may be too "realistic" for my graphical style, which is pretty cartoon-like. Also, is there a golden standard for what volume various SFX should be at? (For example, I'm thinking that footsteps or other common sounds should be at barely audible volumes, while enemy deaths or anything that is a "big deal" should be louder.) I found a similar question about graphics; I'm looking for a similar response for sound effects.

    Read the article

  • Does DirectX implement Triple Buffering?

    - by Asik
    As AnandTech put it best in this 2009 article:

        In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default). The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).

    As I understand it, the DirectX "swap chain" is merely a render-ahead queue, i.e. buffers cannot be dropped; the longer the chain, the greater the input latency. At the same time, I find it hard to believe that the most widely used graphics API would not implement such fundamental functionality correctly. Is there a way to get proper triple-buffered vertical synchronisation in DirectX?
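
    To make the quoted difference concrete, here is a toy Python simulation (my own illustration, not actual DirectX behavior): both schemes buffer up to two pending frames, but the triple-buffer variant throws away stale frames at display time, so what reaches the screen is always the newest completed frame, at the cost of the discarded work.

        from collections import deque

        def simulate(frame_ready_times, drop_stale, queue_len=2):
            """Return the age (in vblanks) of each frame actually displayed."""
            queue, shown_ages = deque(), []
            frames = iter(frame_ready_times)
            pending = next(frames, None)
            for vblank in range(int(max(frame_ready_times)) + 2):
                # renderer finishes frames and queues them until the queue is full
                while pending is not None and pending <= vblank and len(queue) < queue_len:
                    queue.append(pending)
                    pending = next(frames, None)
                if queue:
                    if drop_stale:                 # triple buffering: discard outdated frames
                        while len(queue) > 1:
                            queue.popleft()
                    shown_ages.append(vblank - queue.popleft())   # render ahead: strict FIFO
            return shown_ages

        bursty = [0, 0.1, 0.2, 3, 3.1, 3.2, 6]     # quick bursts followed by slow frames
        print("render ahead :", simulate(bursty, drop_stale=False))
        print("triple buffer:", simulate(bursty, drop_stale=True))

    The render-ahead run displays every frame but with growing age; the drop-stale run displays fewer, fresher frames, which is exactly the trade-off the quote describes.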

    Read the article

  • How can I use iteration to lead targets?

    - by e100
    In my 2D game, I have stationary AI turrets firing constant-speed bullets at moving targets. So far I have used a quadratic-solver technique to calculate where the turret should aim in advance of the target, which works well (see Algorithm to shoot at a target in a 3d game, Predicting enemy position in order to have an object lead its target). But it occurs to me that an iterative technique might be more realistic (e.g. it should fire even when there is no exact solution), more efficient, and tunable; for example, one could change the number of iterations to improve accuracy. I thought I could calculate the current range and thus an initial (inaccurate) bullet flight time to target, then work out where the target would actually be by that time, then recalculate a more accurate range, then recalculate flight time, and so on. I think I am missing something obvious to do with the time term, because my aimpoint calculation does not currently converge after the significant initial correction in the first iteration:

        import math

        def aimpoint(iters, target_x, target_y, target_vel_x, target_vel_y, bullet_speed):
            aimpoint_x = target_x
            aimpoint_y = target_y
            range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
            time_to_target = range / bullet_speed
            time_delta = time_to_target
            n = 0
            while n <= iters:
                print "iteration:", n, "target:", "(", aimpoint_x, aimpoint_y, ")", "time_delta:", time_delta
                aimpoint_x += target_vel_x * time_delta
                aimpoint_y += target_vel_y * time_delta
                range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
                new_time_to_target = range / bullet_speed
                time_delta = new_time_to_target - time_to_target
                n += 1

        aimpoint(iters=5, target_x=0, target_y=100, target_vel_x=1, target_vel_y=0, bullet_speed=100)
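
    For what it's worth, the time term probably is the culprit (my reading of the code above, so treat it as a hypothesis): time_to_target is never advanced, so each time_delta is measured against the first estimate rather than the previous one. Recomputing the aimpoint from the target's current position with the latest total flight time avoids that bookkeeping entirely; this variant converges whenever the bullet is faster than the target:

        import math

        def aimpoint(iters, tx, ty, tvx, tvy, bullet_speed):
            t = math.hypot(tx, ty) / bullet_speed        # initial flight-time guess
            for _ in range(iters):
                ax, ay = tx + tvx * t, ty + tvy * t      # target position after t seconds
                t = math.hypot(ax, ay) / bullet_speed    # refine flight time to that point
            return tx + tvx * t, ty + tvy * t

        print(aimpoint(5, tx=0, ty=100, tvx=1, tvy=0, bullet_speed=100))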

    Read the article

  • Game Sound Effects Availability

    - by Ben
    Is there a need in the community for affordable game-focused sound effect packs? I am considering putting together some effects specifically geared toward games and indie developers who want to get a working prototype off the ground quickly. Is there a need for this, or is there another standard "go-to" spot for this kind of thing? I want to offer value to the community, but wanted to assess the need first. If anyone has thoughts, insights, or personal opinions on this, I would love to hear them!

    Read the article

  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; I'm just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (they would look okay rendered alone on a quad) and are not image processing (they don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for, all from the RenderMonkey samples. PS: I'm aware of this question; I'm not asking for a source of actual shader implementations, but for some inspirational ideas, and the ones at the NVIDIA Shader Library mostly require a scene or are image-processing effects. EDIT: this is an open-ended question and I wish there was a good way to split the bounty. I'll award the rep to the best answer on the last day.

    Read the article

  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi, I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers It supports UIGestureRecognizers within a CCLayer in a cocos2d scene, like so:

        @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate>
        {
        }

    Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d. The recognizer's header:

        #import <Foundation/Foundation.h>
        #import <UIKit/UIGestureRecognizerSubclass.h>

        @protocol OneFingerRotationGestureRecognizerDelegate <NSObject>
        @optional
        - (void) rotation: (CGFloat) angle;
        - (void) finalAngle: (CGFloat) angle;
        @end

        @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer
        {
            CGPoint midPoint;
            CGFloat innerRadius;
            CGFloat outerRadius;
            CGFloat cumulatedAngle;
            id <OneFingerRotationGestureRecognizerDelegate> target;
        }

        - (id) initWithMidPoint: (CGPoint) midPoint
                    innerRadius: (CGFloat) innerRadius
                    outerRadius: (CGFloat) outerRadius
                         target: (id) target;

        - (void)reset;
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
        @end

    The implementation:

        #include <math.h>
        #import "OneFingerRotationGestureRecognizer.h"

        @implementation OneFingerRotationGestureRecognizer

        // private helper functions
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2);
        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB);

        - (id) initWithMidPoint: (CGPoint) _midPoint
                    innerRadius: (CGFloat) _innerRadius
                    outerRadius: (CGFloat) _outerRadius
                         target: (id <OneFingerRotationGestureRecognizerDelegate>) _target
        {
            if ((self = [super initWithTarget: _target action: nil]))
            {
                midPoint = _midPoint;
                innerRadius = _innerRadius;
                outerRadius = _outerRadius;
                target = _target;
            }
            return self;
        }

        /** Calculates the distance between point1 and point2. */
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2)
        {
            CGFloat dx = point1.x - point2.x;
            CGFloat dy = point1.y - point2.y;
            return sqrt(dx*dx + dy*dy);
        }

        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB)
        {
            CGFloat a = endLineA.x - beginLineA.x;
            CGFloat b = endLineA.y - beginLineA.y;
            CGFloat c = endLineB.x - beginLineB.x;
            CGFloat d = endLineB.y - beginLineB.y;

            CGFloat atanA = atan2(a, b);
            CGFloat atanB = atan2(c, d);

            // convert radians to degrees
            return (atanA - atanB) * 180 / M_PI;
        }

        #pragma mark - UIGestureRecognizer implementation

        - (void)reset
        {
            [super reset];
            cumulatedAngle = 0;
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesBegan:touches withEvent:event];
            if ([touches count] != 1)
            {
                self.state = UIGestureRecognizerStateFailed;
                return;
            }
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesMoved:touches withEvent:event];

            if (self.state == UIGestureRecognizerStateFailed) return;

            CGPoint nowPoint = [[touches anyObject] locationInView: self.view];
            CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view];

            // make sure the new point is within the area
            CGFloat distance = distanceBetweenPoints(midPoint, nowPoint);
            if (innerRadius <= distance && distance <= outerRadius)
            {
                // calculate rotation angle between two points
                CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint);

                // fix value, if the 12 o'clock position is between prevPoint and nowPoint
                if (angle > 180)
                {
                    angle -= 360;
                }
                else if (angle < -180)
                {
                    angle += 360;
                }

                // sum up single steps
                cumulatedAngle += angle;

                // call delegate
                if ([target respondsToSelector: @selector(rotation:)])
                {
                    [target rotation:angle];
                }
            }
            else
            {
                // finger moved outside the area
                self.state = UIGestureRecognizerStateFailed;
            }
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesEnded:touches withEvent:event];
            if (self.state == UIGestureRecognizerStatePossible)
            {
                self.state = UIGestureRecognizerStateRecognized;
                if ([target respondsToSelector: @selector(finalAngle:)])
                {
                    [target finalAngle:cumulatedAngle];
                }
            }
            else
            {
                self.state = UIGestureRecognizerStateFailed;
            }
            cumulatedAngle = 0;
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesCancelled:touches withEvent:event];
            self.state = UIGestureRecognizerStateFailed;
            cumulatedAngle = 0;
        }

        @end

    Header file for the view controller:

        #import "OneFingerRotationGestureRecognizer.h"

        @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate>

        @property (nonatomic, strong) IBOutlet UIImageView *image;
        @property (nonatomic, strong) IBOutlet UITextField *textDisplay;

        @end

    Then this is in the .m file:

        gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint
                                                                             innerRadius: outRadius / 3
                                                                             outerRadius: outRadius
                                                                                  target: self];
        [self.view addGestureRecognizer: gestureRecognizer];

    Now my question: is it possible to add this custom gesture to the cocos2d project found on that GitHub page, and if so, what do I need to change in OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d? At the moment it is set up in a standard iOS project, not a cocos2d project, and I do not know enough about UIViews and classing/subclassing in Objective-C to get this to work. It also seems tied to a UIView where cocos2d uses a CCLayer. Kind regards, Lewis. I also realise I may not have included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo

    Read the article

  • Arcball 3D camera - how to convert from camera to object coordinates

    - by user38873
    I have checked multiple threads before posting, but I haven't been able to figure this one out. I have been following this tutorial, but I'm not using glm; I've been implementing everything up until now, like lookAt etc., myself: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Arcball

    I can rotate with the click and drag of the mouse, but when I rotate 90° around Y and then move the mouse up or down, it rotates on the wrong axis. This problem is covered in this part of the tutorial:

        An extra trick is converting the rotation axis from camera coordinates to object coordinates. It's useful when the camera and object are placed differently. For instance, if you rotate the object by 90° on the Y axis ("turn its head" to the right), then perform a vertical move with your mouse, you make a rotation on the camera X axis, but it should become a rotation on the Z axis (plane barrel roll) for the object. By converting the axis in object coordinates, the rotation will respect that the user works in camera coordinates (WYSIWYG). To transform from camera to object coordinates, we take the inverse of the MV matrix (from the MVP matrix triplet).

    What I have to do, according to the tutorial, is convert my axis_in_camera_coordinates to object coordinates so the rotation is done correctly, but I'm confused about which matrix to use for that. The tutorial talks about converting the axis from camera to object coordinates using the inverse of the MV matrix, and then shows these lines of code, which I haven't been able to understand:

        glm::mat3 camera2object = glm::inverse(glm::mat3(transforms[MODE_CAMERA]) * glm::mat3(mesh.object2world));
        glm::vec3 axis_in_object_coord = camera2object * axis_in_camera_coord;

    So what do I apply to my calculated axis? The inverse of what — I suppose the inverse of the model-view? My question is: how do you transform a camera axis to an object axis? Do I apply the inverse of the lookAt matrix? My code:

        if (cur_mx != last_mx || cur_my != last_my) {
            va = get_arcball_vector(last_mx, last_my);
            vb = get_arcball_vector(cur_mx, cur_my);
            angle = acos(min(1.0f, dotProduct(va, vb))) * 20;
            axis_in_camera_coord = crossProduct(va, vb);
            axis.x = axis_in_camera_coord[0];
            axis.y = axis_in_camera_coord[1];
            axis.z = axis_in_camera_coord[2];
            axis.w = 1.0f;
            last_mx = cur_mx;
            last_my = cur_my;
        }

        Quaternion q = qFromAngleAxis(angle, axis);
        Matrix m;
        qGLMatrix(q, m);
        vi = mMultiply(m, vi);
        up = mMultiply(m, up);
        ViewMatrix = ogLookAt(vi.x, vi.y, vi.z, 0, 0, 0, up.x, up.y, up.z);
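
    Here is the tutorial's conversion reduced to plain numbers (a sketch assuming column-vector math, with numpy standing in for the asker's own matrix code): take the 3x3 rotation part of view * model, invert it, and push the camera-space axis through it.

        import numpy as np

        def camera_to_object_axis(view, model, axis_cam):
            """Map a rotation axis from camera space into object space."""
            mv3 = (view @ model)[:3, :3]          # mat3 of the modelview
            camera2object = np.linalg.inv(mv3)    # the tutorial's inverse(mat3(MV))
            axis_obj = camera2object @ axis_cam
            return axis_obj / np.linalg.norm(axis_obj)

        # object turned 90 deg about Y, camera at identity: a vertical mouse drag
        # rotates about camera X, which must become an object-space Z (barrel roll)
        c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
        model = np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])
        print(camera_to_object_axis(np.eye(4), model, np.array([1.0, 0.0, 0.0])))  # ~[0, 0, 1]

    In the quoted glm code, the matrix being inverted is the product of the camera transform and the object2world matrix, i.e. the model-view rotation, not the lookAt matrix alone.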

    Read the article

  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light-indexed deferred rendering here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there was one point that I failed to understand. It is in fact one of the most interesting ones, as it explains how to implement transparency with this approach:

        Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a "shadow volume like" technique of rendering back faces – incrementing stencil where depth is greater than – then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment.

    Can anyone explain how exactly you need to render only front faces? Another question: why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap each pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?

    Read the article

  • What are the most common AI systems implemented in Tower Defense Games

    - by the_Dan
    I'm currently in the middle of researching the various types of AI techniques used in tower-defense-type games. Could someone help me understand the different types of techniques and their associated advantages?

    Using Google I already found several techniques:

        Random map traversal
        Path finding, e.g. cost-based traversal algorithms such as A*

    I have already found a great answer to this type of question at the link below, but I feel that answer is tailored to FPS games. If anyone could add to it and make it specific to tower defense games, I would be truly grateful: How is AI most commonly implemented in popular games?

    Examples of such games would be:

        Radiant Defense
        Plants Vs Zombies — not truly intelligent, but there must be an AI system used, right?
        Field Runners

    Edit: After further research I found an interesting book that may be useful: http://www.amazon.com/dp/0123747317/?tag=stackoverfl08-20
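
    Since A* gets named above without being shown, here is a compact, generic grid version (a textbook sketch, not taken from any of the games mentioned). In a tower defense it is typically re-run for each creep whenever a newly placed tower changes the grid:

        import heapq

        def astar(grid, start, goal):
            """grid[y][x] truthy = walkable; cells are (x, y). Returns a path or None."""
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
            open_heap = [(h(start), 0, start, None)]
            came_from, best_g = {}, {start: 0}
            while open_heap:
                _, g, cur, parent = heapq.heappop(open_heap)
                if cur in came_from:
                    continue                       # already expanded via a cheaper route
                came_from[cur] = parent
                if cur == goal:                    # walk the parent chain back to start
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = came_from[cur]
                    return path[::-1]
                x, y = cur
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    nx, ny = nxt
                    if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                        ng = g + 1
                        if ng < best_g.get(nxt, float("inf")):
                            best_g[nxt] = ng
                            heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, cur))
            return None

        grid = [[1, 1, 1],
                [0, 0, 1],
                [1, 1, 1]]
        print(astar(grid, (0, 0), (0, 2)))         # routes around the blocked middle row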

    Read the article

  • JPEG images not loading on PlayBook (Marmalade + iwgame)

    - by Vexille
    I'm using iwgame on a test project, and I was trying to render different resolutions of JPG and PNG images. Everything works fine in the Marmalade simulator; however, once I deploy the game to our PlayBook and run it, only the PNG images are shown. I have declared the images in the MKB file and in an XML file iwgame uses to load the images. I've checked the deployments folder and all the images are present in the intermediatefiles/native folder. We're currently using a BlackBerry-only license, so we can only test this on the PlayBook, but we do intend to get a Community license and deploy to iOS and Android devices eventually (I'm not sure if this is a problem exclusive to the PlayBook). I really don't know if this is a Marmalade or an iwgame issue. I have a different test project without iwgame, and it simply won't run with jpg images (I get the error: 'Could not find handler for extension "jpg"'). While searching for a solution, I've seen people talking about using libjpg, but I've also found that Marmalade supposedly has integrated native JPEG support (and because of that, iwgame abandoned its own JPEG loading support as of v0.340), so I don't know what to think. I'm currently using the most recent versions of both Marmalade and iwgame, I believe: Marmalade 6.1.2 and iwgame 0.400. Also, please let me know if there's an easier or better way to do this, such as linking libjpg or something (I'm not exactly sure how to do that). I would really appreciate some help with this; there's a huge difference in size between the images we're planning to use, from a ~500kb jpg file to a ~3.5mb png file. Thanks, guys.

    Read the article

  • Loading Wavefront Data into VAO and Render It

    - by Jordan LaPrise
    I have successfully loaded a triangulated Wavefront (.obj) file into six vectors: the first three contain the vertex positions, UV coords, and normals, and the last three hold the indices for each of the faces. I have been looking into using VAOs and VBOs to render, and I'm not quite sure how to load and render the data. One of my biggest concerns is the fact that indexed rendering only allows you to have one array of indices, meaning I somehow have to make the first three vectors the same size. The only way I've thought of doing this is to build three new vertex arrays of equal size and load in the data for each face, but that would completely defeat the purpose of indexing. Any help would be appreciated. Thanks in advance, Jordan
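
    The standard resolution (a generic sketch, not tied to any particular GL wrapper): build one combined vertex per unique (position, uv, normal) index triple, reusing it whenever the triple repeats, and emit a single index list that points at those combined vertices; indexing is preserved exactly where faces genuinely share data. Illustrative Python:

        def build_buffers(positions, uvs, normals, faces):
            """faces: one (pos_idx, uv_idx, norm_idx) triple per face corner.
            Returns interleaved vertices and one index list for glDrawElements."""
            cache, vertices, indices = {}, [], []
            for corner in faces:
                if corner not in cache:              # first occurrence: emit a vertex
                    cache[corner] = len(vertices)
                    pi, ti, ni = corner
                    vertices.append(positions[pi] + uvs[ti] + normals[ni])
                indices.append(cache[corner])        # repeats just reuse the index
            return vertices, indices

        # two triangles sharing an edge: shared corners are emitted only once
        positions = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
        uvs       = [(0, 0), (1, 0), (1, 1), (0, 1)]
        normals   = [(0, 0, 1)]
        faces = [(0, 0, 0), (1, 1, 0), (2, 2, 0),
                 (0, 0, 0), (2, 2, 0), (3, 3, 0)]
        verts, idx = build_buffers(positions, uvs, normals, faces)
        print(len(verts), idx)                       # 4 vertices, [0, 1, 2, 0, 2, 3]

    The interleaved vertices then go into one VBO and the index list into the element buffer, both referenced by the VAO.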

    Read the article
