Search Results

Search found 26093 results on 1044 pages for 'career development'.


  • What kind of steering behaviour or logic can I use to get mobiles to surround another?

    - by Vaughan Hilts
    I'm using pathfinding in my game to lead a mob to another player (to pursue them). This works for getting them on top of the player, but I want them to stop slightly before their destination (so picking the penultimate node works fine). However, when multiple mobs are pursuing the player they sometimes "stack on top of each other". What's the best way to avoid this? I don't want to treat the mobs as opaque and blocked (because they're not; you can walk through them), but I want the mobs to have some sense of structure. Example: imagine that each snake guided itself to me and should surround "Setsuna". Notice how both snakes have chosen to prong me? This is not a strict requirement; even being slightly offset is okay. But they should "surround" Setsuna.
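
    One common approach (a sketch, not part of the original question) is to give each pursuer its own slot on a ring around the target and path-find to that slot instead of to the target itself. System.Numerics.Vector2 is used here just as a concrete vector type; Mob and the pathfinder are assumed to be the game's own.

      using System;
      using System.Numerics;

      static class SurroundFormation
      {
          // Spread pursuers evenly on a ring around the target so they
          // path to distinct points instead of stacking on one tile.
          public static Vector2[] ComputeSlots(Vector2 target, int pursuerCount, float radius)
          {
              var slots = new Vector2[pursuerCount];
              for (int i = 0; i < pursuerCount; i++)
              {
                  double angle = 2.0 * Math.PI * i / pursuerCount; // evenly spaced angles
                  slots[i] = target + new Vector2(
                      (float)Math.Cos(angle) * radius,
                      (float)Math.Sin(angle) * radius);
              }
              return slots;
          }
      }

    Each mob then path-finds to the walkable node nearest its assigned slot and re-requests a slot when the target moves, so no two pursuers ever share a destination and they naturally end up surrounding the player.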

    Read the article

  • Alternative to Game State System?

    - by Ricket
    As far as I can tell, most games have some sort of "game state system" which switches between the different game states; these might be things like "Intro", "MainMenu", "CharacterSelect", "Loading", and "Game". On the one hand, it totally makes sense to separate these into a state system. After all, they are disparate and would otherwise need to be in a large switch statement, which is obviously messy; and they certainly are well represented by a state system. But at the same time, I look at the "Game" state and wonder if there's something wrong about this state system approach. Because it's like the elephant in the room; it's HUGE and obvious but nobody questions the game state system approach. It seems silly to me that "Game" is put on the same level as "Main Menu". Yet there isn't a way to break up the "Game" state. Is a game state system the best way to go? Is there some different, better technique to managing, well, the "game state"? Is it okay to have an intro state which draws a movie and listens for enter, and then a loading state which loops on the resource manager, and then the game state which does practically everything? Doesn't this seem sort of unbalanced to you, too? Am I missing something?
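
    One way people break the monolithic "Game" state apart (a sketch of a common pattern, with all names hypothetical, not a prescription) is a state stack or screen manager, where "Game" can push its own sub-states on top of itself instead of doing everything:

      using System.Collections.Generic;

      // Minimal state-stack sketch: instead of one giant "Game" state, Game can push
      // its own sub-states (Paused, Inventory, Cutscene) on top of itself.
      interface IGameState
      {
          void Update(float dt);
          void Draw();
      }

      class StateStack
      {
          private readonly Stack<IGameState> states = new Stack<IGameState>();

          public void Push(IGameState state) { states.Push(state); }
          public void Pop() { states.Pop(); }

          public void Update(float dt)
          {
              if (states.Count > 0)
                  states.Peek().Update(dt);   // only the top state receives input/updates
          }

          public void Draw()
          {
              if (states.Count > 0)
                  states.Peek().Draw();       // the top state decides whether to draw what's below it
          }
      }

    With this shape, "Intro" and "MainMenu" stay simple flat states, while "Game" becomes just another state that happens to manage its own stack of sub-states, which addresses the imbalance without abandoning the state idea.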

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do this using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process. The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to even copy code like the wrapper I wrote for JInput Controller, and I always have a definitive version inside a library jar. Basically what I wanna do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I want to export a jar of my game, I either export it automatically through Eclipse with the library jar, or, if that doesn't work, use JarSplice. I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's either duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors but the jar doesn't run. I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the Gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice 5 times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(

    Read the article

  • How to implement an experience system?

    - by Roflcoptr
    I'm currently writing a small game that is based on earning experience when killing enemies. As usual, each level requires more experience than the level before, and on higher levels killing enemies awards more experience. But I have a problem balancing this system. Are there any prebuilt algorithms or formulas that help calculate what the experience curve required for each level should look like? And how much experience an average enemy at a specific level should provide?
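
    There is no single standard algorithm, but a common starting point (an assumption for illustration, not from the question) is a polynomial curve for level requirements plus an enemy reward derived from it, both of which you then tune by playtesting:

      using System;

      static class Experience
      {
          // Total XP required to reach a given level; quadratic-ish growth is a
          // common starting point (tune baseXp and the exponent to taste).
          public static int XpForLevel(int level, int baseXp = 100, double exponent = 1.5)
              => (int)Math.Round(baseXp * Math.Pow(level, exponent));

          // XP awarded by an enemy of a given level, chosen so that killing
          // roughly killsPerLevel same-level enemies advances one level.
          public static int XpFromEnemy(int enemyLevel, int killsPerLevel = 20)
          {
              int toNext = XpForLevel(enemyLevel + 1) - XpForLevel(enemyLevel);
              return Math.Max(1, toNext / killsPerLevel);
          }
      }

    Deriving the enemy reward from the level curve keeps the "kills needed per level" roughly constant, which is usually the number you actually want to balance.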

    Read the article

  • How to make game sessions like "with friends" games?

    - by Miguel lugo
    I want to make a game like "words with friends" or "chess with friends" or "Draw Something" or any of the other online multiplayer games that are based around friends having game sessions with each other. I have made one app before that had no online features, so I know the basics of Objective-C and Xcode. I looked up Facebook Connect, so I know how to let friends find other friends to play with through Facebook, just not how to make the gaming session. I only need my game to send a small array (or XML, if that's better) of strings and integers from one iPhone to the other as each iPhone takes a turn. I'm NOT sending complex video or anything like in "Draw Something". I was hoping someone could point me in the right direction (whether a link to a website or a book, or just a general idea) for how to do gaming sessions between two iPhones. I read this tutorial http://www.raywenderlich.com/3932/how-to-create-a-socket-based-iphone-app-and-server but it seems to be more about having two iPhones communicate with a server on a laptop or over the same Wi-Fi, not about having iPhones game together over any Internet connection like in the "with friends" games. I've tried to research this in other places, but I'm never quite sure whether the articles I find are related to what I want or not. Someone please just point me in the right direction or give me a general outline of what to do. I will look up the specifics on my own once I know what to look for. Thank you.

    Read the article

  • Video Encoding library for C++ game

    - by Paulo Pinto
    I'm looking for a video encoding library in C++ that I can use to record game footage. It cannot be an external application like Fraps; it must be a library. Ideally the encoding can be done in real time without affecting game performance too much, although this is not a must-have requirement. Another preference is that the video file saved by the game is already compressed and ready to be used by most video players without any further processing. I realize that this might not be possible, especially for real-time encoding, so I would accept the trade-off of having to process the file later for better compression and/or a better file format. I'd like to hear about your experience integrating the library into a game if possible, and any interesting trade-offs you had to make. Some libraries support more than one file format or codec, so advice on the file format would also be appreciated.

    Read the article

  • Camera changes view when controller connected

    - by ChocoMan
    I have a weird situation. I have a model set to 0 for X, Y and Z. My camera's position is set to: X = 0 (but it updates as the model moves around), Y = the model's height + 20f (about the same level as the model's shoulders), Z = 25f (behind the model). Without the controller plugged in, everything looks the way I want it. But as soon as I plug the controller in, the camera aims at the sky! When I unplug the controller, the camera goes back to what it should be. Does anyone have any insight as to what may cause this when plugging a controller in?
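
    A frequent cause of this symptom (a guess, not confirmed by the question) is camera code that applies the pad's thumbstick or trigger values every frame once a pad is connected, even when the stick is "at rest". Guarding the gamepad path with IsConnected and dead-zone filtering is a cheap test; cameraYaw, cameraPitch, lookSpeed and dt stand in for your own fields.

      // XNA sketch, inside Update(): only let the pad steer the camera when it is
      // connected, and ask for circular dead-zone filtering so a resting stick reads ~0.
      GamePadState pad = GamePad.GetState(PlayerIndex.One, GamePadDeadZone.Circular);
      if (pad.IsConnected)
      {
          Vector2 look = pad.ThumbSticks.Right;        // each axis is in -1..1
          if (look.LengthSquared() > 0.0f)
          {
              cameraYaw   += look.X * lookSpeed * dt;  // your own camera fields
              cameraPitch += look.Y * lookSpeed * dt;
          }
      }

    If the camera still snaps skyward with this guard in place, the next thing to log is every value the camera update reads, comparing a frame with the pad unplugged against one with it plugged in.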

    Read the article

  • How can I generate a navigation mesh for a tile grid?

    - by Roflha
    I haven't actually started programming this one yet, but I wanted to see how I would go about doing it anyway. Say I have a grid of tiles, all of the same size, some traversable and some not. How would I go about creating a navigation mesh of polygons from this grid? My idea was to take the non-traversable tiles out and extend lines from their edges to make polygons... that's all I have so far. Any advice?
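
    One simple technique (a sketch of one common option, not the only one) is to greedily merge walkable tiles into axis-aligned rectangles and treat each rectangle as one convex nav-mesh polygon, then link rectangles that share an edge. Rectangle here is XNA's Microsoft.Xna.Framework.Rectangle, but any x/y/width/height struct works.

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;   // for Rectangle

      static class NavMeshBuilder
      {
          // Greedy tile-to-rectangle merge: each rectangle becomes one convex nav polygon.
          // walkable[x, y] marks traversable tiles; used[x, y] marks tiles already claimed.
          public static List<Rectangle> Build(bool[,] walkable)
          {
              int w = walkable.GetLength(0), h = walkable.GetLength(1);
              var used = new bool[w, h];
              var rects = new List<Rectangle>();

              for (int y = 0; y < h; y++)
              for (int x = 0; x < w; x++)
              {
                  if (!walkable[x, y] || used[x, y]) continue;

                  int rw = 1;   // grow right as far as possible
                  while (x + rw < w && walkable[x + rw, y] && !used[x + rw, y]) rw++;

                  int rh = 1;   // grow down while every tile in the next row is free
                  while (y + rh < h && RowFree(walkable, used, x, y + rh, rw)) rh++;

                  for (int dy = 0; dy < rh; dy++)
                      for (int dx = 0; dx < rw; dx++)
                          used[x + dx, y + dy] = true;

                  rects.Add(new Rectangle(x, y, rw, rh));
              }
              return rects;
          }

          private static bool RowFree(bool[,] walkable, bool[,] used, int x, int y, int width)
          {
              for (int dx = 0; dx < width; dx++)
                  if (!walkable[x + dx, y] || used[x + dx, y]) return false;
              return true;
          }
      }

    Two rectangles that share part of an edge get a portal between them, and pathfinding then runs over far fewer nodes than the raw tile grid.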

    Read the article

  • Running a single action on multiple sprites at the same time

    - by Stephen
    Ok so I have created a spiraling animation for a football and I want to be able to run it on 2 sprites at the same time. This is what I have done:

      CCAnimation* footballAnim = [CCAnimation animationWithFrame:@"Football" frameCount:60 delay:0.005f];
      spiral = [CCAnimate actionWithAnimation:footballAnim];
      CCRepeatForever* repeat = [CCRepeatForever actionWithAction:spiral];
      [Sprite1 runAction: repeat];
      [Sprite2 runAction: repeat];

    but it only runs the action on the first sprite. What am I doing wrong?

    Read the article

  • Parabolic throw with set Height and range (libgdx)

    - by Tauboga
    Currently I'm working on a minigame for Android where you have a rotating ball in the center of the display which jumps, when touched, in the direction of its current angle. I'm simply using a gravity vector and a velocity vector in this way:

      positionBall = positionBall.add(velocity);
      velocity = velocity.add(gravity);

    and

      velocity.x = (float) Math.cos(angle) * 12; /* 12 to amplify the velocity */
      velocity.y = (float) Math.sin(angle) * 15; /* 15 to amplify the velocity */

    That works fine. Here comes the problem: I want the jump to look the same on all possible resolutions. The velocity needs to be scaled so that when the ball is thrown straight upwards it just touches the upper display border, and when thrown directly left or right the range is exactly long enough to reach the left/right display border. Which formula(s) do I need, and how do I implement them correctly? Thanks in advance!
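
    The standard continuous-physics formulas give a resolution-independent starting point (this is an approximation: per-frame Euler integration like the code above lands slightly short, so expect to tune). The units just need to be consistent: if gravity is per frame squared, the speeds come out per frame.

      using System;

      static class JumpScaling
      {
          // Vertical speed needed for the peak of the jump to just reach riseHeight
          // (distance from the ball's start to the top border), with gravity g > 0.
          // Derived from peakHeight = v * v / (2 * g).
          public static float VerticalSpeed(float riseHeight, float g)
              => (float)Math.Sqrt(2.0 * g * riseHeight);

          // Horizontal speed needed to cover range (distance to the side border)
          // in the time it takes to fall dropHeight (distance to the bottom border),
          // using fallTime = sqrt(2 * dropHeight / g).
          public static float HorizontalSpeed(float range, float dropHeight, float g)
              => range / (float)Math.Sqrt(2.0 * dropHeight / g);
      }

    For the angled jump, velocity.x = HorizontalSpeed(...) * cos(angle) and velocity.y = VerticalSpeed(...) * sin(angle) keeps the same per-axis structure as the question's code, but with the constants 12 and 15 replaced by values derived from the screen size.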

    Read the article

  • Scripting for a C#, multiplayer game

    - by Vaughan Hilts
    I have a multiplayer game written in C#, and we've recently been creating a lot of content but have been looking for a way to give our entities customization logic that the designers can hook into. I took a look at this post. With something like this in mind (using C# as a scripting language), I have a few questions. 1) Would one embed the script itself in the entity object before persisting it to disk? Is this okay? 2) Would I compile once per script? It seems like a lot of overhead to store all these compiled assemblies just to execute them. Any general advice on how to do things is welcome, too. These entities are generated on the fly inside the editor and could be composed of a lot of different things.
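
    A sketch of one common arrangement (an assumption, not the only design): persist only the script text with the entity, then compile on demand and cache the compiled assembly keyed by the source, so a script shared by many entities is compiled once per run rather than once per entity.

      using System;
      using System.CodeDom.Compiler;
      using System.Collections.Generic;
      using System.Reflection;
      using Microsoft.CSharp;

      static class ScriptCompiler
      {
          private static readonly Dictionary<string, Assembly> cache = new Dictionary<string, Assembly>();

          public static Assembly Compile(string source)
          {
              Assembly asm;
              if (cache.TryGetValue(source, out asm))
                  return asm;                       // already compiled this exact script

              using (var provider = new CSharpCodeProvider())
              {
                  var options = new CompilerParameters { GenerateInMemory = true };
                  options.ReferencedAssemblies.Add("System.dll");

                  CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                  if (results.Errors.HasErrors)
                      throw new InvalidOperationException(results.Errors[0].ToString());

                  asm = results.CompiledAssembly;
                  cache[source] = asm;
                  return asm;
              }
          }
      }

    One caveat worth noting: assemblies loaded this way cannot be unloaded from the default AppDomain, so long-running editors that recompile constantly may want a separate AppDomain or a different scripting host.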

    Read the article

  • How to use a serial port in UDK using a Windows DLL and the DLLBind directive?

    - by Shayan Abbas
    I want to use serial port in UDK, For that purpose i use a windows DLL and DLLBind directive. I have a thread in windows DLL for serial port data recieve event. My problem is: this thread doesn't work properly. Please Help me. below is my code SerialPortDLL Code: // SerialPortDLL.cpp : Defines the exported functions for the DLL application. // #include "stdafx.h" #include "Cport.h" extern "C" { // This is an example of an exported variable //SERIALPORTDLL_API int nSerialPortDLL=0; // This is an example of an exported function. //SERIALPORTDLL_API int fnSerialPortDLL(void) //{ // return 42; //} CPort *sp; __declspec(dllexport) void Open(wchar_t* portName) { sp = new CPort(portName); //MessageBox(0,L"ha ha!!!",L"ha ha",0); //MessageBox(0,portName,L"ha ha",0); } __declspec(dllexport) void Close() { sp->Close(); MessageBox(0,L"ha ha!!!",L"ha ha",0); } __declspec(dllexport) wchar_t *GetData() { return sp->GetData(); } __declspec(dllexport) unsigned int GetDSR() { return sp->getDSR(); } __declspec(dllexport) unsigned int GetCTS() { return sp->getCTS(); } __declspec(dllexport) unsigned int GetRing() { return sp->getRing(); } } CPort class code: #include "stdafx.h" #include "CPort.h" #include "Serial.h" CSerial serial; HANDLE HandleOfThread; LONG lLastError = ERROR_SUCCESS; bool fContinue = true; HANDLE hevtOverlapped; HANDLE hevtStop; OVERLAPPED ov = {0}; //char szBuffer[101] = ""; wchar_t *szBuffer = L""; wchar_t *data = L""; DWORD WINAPI ThreadHandler( LPVOID lpParam ) { // Keep reading data, until an EOF (CTRL-Z) has been received do { MessageBox(0,L"ga ga!!!",L"ga ga",0); //Sleep(10); // Wait for an event lLastError = serial.WaitEvent(&ov); if (lLastError != ERROR_SUCCESS) { //LOG( " Unable to wait for a COM-port event" ); } // Setup array of handles in which we are interested HANDLE ahWait[2]; ahWait[0] = hevtOverlapped; ahWait[1] = hevtStop; // Wait until something happens switch (::WaitForMultipleObjects(sizeof(ahWait)/sizeof(*ahWait),ahWait,FALSE,INFINITE)) { case WAIT_OBJECT_0: { // Save event const CSerial::EEvent eEvent = serial.GetEventType(); // Handle break event if (eEvent & CSerial::EEventBreak) { //LOG( " ### BREAK received ###" ); } // Handle CTS event if (eEvent & CSerial::EEventCTS) { //LOG( " ### Clear to send %s ###", serial.GetCTS() ? "on":"off" ); } // Handle DSR event if (eEvent & CSerial::EEventDSR) { //LOG( " ### Data set ready %s ###", serial.GetDSR() ? "on":"off" ); } // Handle error event if (eEvent & CSerial::EEventError) { switch (serial.GetError()) { case CSerial::EErrorBreak: /*LOG( " Break condition" );*/ break; case CSerial::EErrorFrame: /*LOG( " Framing error" );*/ break; case CSerial::EErrorIOE: /*LOG( " IO device error" );*/ break; case CSerial::EErrorMode: /*LOG( " Unsupported mode" );*/ break; case CSerial::EErrorOverrun: /*LOG( " Buffer overrun" );*/ break; case CSerial::EErrorRxOver: /*LOG( " Input buffer overflow" );*/ break; case CSerial::EErrorParity: /*LOG( " Input parity error" );*/ break; case CSerial::EErrorTxFull: /*LOG( " Output buffer full" );*/ break; default: /*LOG( " Unknown" );*/ break; } } // Handle ring event if (eEvent & CSerial::EEventRing) { //LOG( " ### RING ###" ); } // Handle RLSD/CD event if (eEvent & CSerial::EEventRLSD) { //LOG( " ### RLSD/CD %s ###", serial.GetRLSD() ? 
"on" : "off" ); } // Handle data receive event if (eEvent & CSerial::EEventRecv) { // Read data, until there is nothing left DWORD dwBytesRead = 0; do { // Read data from the COM-port lLastError = serial.Read(szBuffer,33,&dwBytesRead); if (lLastError != ERROR_SUCCESS) { //LOG( "Unable to read from COM-port" ); } if( dwBytesRead == 33 && szBuffer[0]=='$' ) { // Finalize the data, so it is a valid string szBuffer[dwBytesRead] = '\0'; ////LOG( "\n%s\n", szBuffer ); data = szBuffer; } } while (dwBytesRead > 0); } } break; case WAIT_OBJECT_0+1: { // Set the continue bit to false, so we'll exit fContinue = false; } break; default: { // Something went wrong //LOG( "Error while calling WaitForMultipleObjects" ); } break; } } while (fContinue); MessageBox(0,L"kka kk!!!",L"kka ga",0); return 0; } CPort::CPort(wchar_t *portName) { // Attempt to open the serial port (COM2) //lLastError = serial.Open(_T(portName),0,0,true); lLastError = serial.Open(portName,0,0,true); if (lLastError != ERROR_SUCCESS) { //LOG( "Unable to open COM-port" ); } // Setup the serial port (115200,8N1, which is the default setting) lLastError = serial.Setup(CSerial::EBaud115200,CSerial::EData8,CSerial::EParNone,CSerial::EStop1); if (lLastError != ERROR_SUCCESS) { //LOG( "Unable to set COM-port setting" ); } // Register only for the receive event lLastError = serial.SetMask(CSerial::EEventBreak | CSerial::EEventCTS | CSerial::EEventDSR | CSerial::EEventError | CSerial::EEventRing | CSerial::EEventRLSD | CSerial::EEventRecv); if (lLastError != ERROR_SUCCESS) { //LOG( "Unable to set COM-port event mask" ); } // Use 'non-blocking' reads, because we don't know how many bytes // will be received. This is normally the most convenient mode // (and also the default mode for reading data). lLastError = serial.SetupReadTimeouts(CSerial::EReadTimeoutNonblocking); if (lLastError != ERROR_SUCCESS) { //LOG( "Unable to set COM-port read timeout" ); } // Create a handle for the overlapped operations hevtOverlapped = ::CreateEvent(0,TRUE,FALSE,0);; if (hevtOverlapped == 0) { //LOG( "Unable to create manual-reset event for overlapped I/O" ); } // Setup the overlapped structure ov.hEvent = hevtOverlapped; // Open the "STOP" handle hevtStop = ::CreateEvent(0,TRUE,FALSE,_T("Overlapped_Stop_Event")); if (hevtStop == 0) { //LOG( "Unable to create manual-reset event for stop event" ); } HandleOfThread = CreateThread( NULL, 0, ThreadHandler, 0, 0, NULL); } CPort::~CPort() { //fContinue = false; //CloseHandle( HandleOfThread ); //serial.Close(); } void CPort::Close() { fContinue = false; CloseHandle( HandleOfThread ); serial.Close(); } wchar_t *CPort::GetData() { return data; } bool CPort::getCTS() { return serial.GetCTS(); } bool CPort::getDSR() { return serial.GetDSR(); } bool CPort::getRing() { return serial.GetRing(); } Unreal Script Code: class MyPlayerController extends GamePlayerController DLLBind(SerialPortDLL); dllimport final function Open(string portName); dllimport final function Close(); dllimport final function string GetData();

    Read the article

  • Is there a grey area with copyright infringement?

    - by Z.O
    Currently a student, I'm trying to put together a game for iOS. From everywhere I've read, it seems any game's sound and art are a part of its IP and covered under its copyright. That being said, say I wanted to use the coin sound effect from the original Mario (less than 1s long and used sparingly)... would anyone really care? Having no experience with this, I'm just wondering if cases like this are treated like "Yeah, you're driving slightly over the speed limit, but nobody cares" or as "you stole that car". Thanks for any insight anyone may be able to provide.

    Read the article

  • Steady zoom on center in LWJGL (Modelview)

    - by l5p4ngl312
    I am having a problem in LWJGL with zooming in and out. I am using glScaled(zoom, zoom, 1) before glTranslated. There are two problems: 1. The rate of zoom speeds up a lot when zooming out (lower zoom value). 2. It zooms in on the bottom-left corner of the screen rather than the center. Eventually, I would like to have the zoom focused on the mouse position. I have tried to fix these problems by making it glScaled(zoom^12, zoom^12, 1) so that the greater the zoom value, the faster it zooms, in order to balance out the faster zoom at lower zoom values. To compensate for the zoom being focused on the bottom left, I have tried to subtract (zoom+1)^10 + 2^10 from the X and Y of each sprite. This results in a curved zoom path, first to the left and then to the right. It is a 2D game.
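
    Two generic observations that usually cover both symptoms (the question is LWJGL/Java, so the C# below is only notation for the math, which is API-agnostic). First, additive zoom steps feel faster when zoomed out; multiplicative steps (zoom *= 1.1f per tick) feel constant. Second, glScaled on its own scales about the origin, which is the bottom-left; either wrap it as translate(focus), scale, translate(-focus) in the modelview, or keep the scale as-is and move the camera so the focus point stays put:

      using System.Numerics;

      static class ZoomHelper
      {
          // With screenPos = (worldPos - camera) * zoom, this keeps 'focus'
          // (the screen centre, or the mouse position in world space) fixed
          // on screen while the zoom changes from oldZoom to newZoom.
          public static Vector2 RecenterCamera(Vector2 camera, Vector2 focus, float oldZoom, float newZoom)
          {
              // Solve (focus - newCamera) * newZoom == (focus - camera) * oldZoom for newCamera.
              return focus - (focus - camera) * (oldZoom / newZoom);
          }
      }

    Feeding the mouse's world position in as the focus gives the "zoom toward the cursor" behaviour directly, with no hand-tuned offset curves.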

    Read the article

  • Is it better to hard code data or find an algorithm?

    - by OghmaOsiris
    I've been working on a board game that has a hex grid as the board (the upper right grid in the image below). Since the board will never change and the spaces on the board will always be linked to the same other spaces around it, should I just hard-code every space with the values that I need? Or should I use various algorithms to calculate links and traversals? To be more specific, my board game is a 4-player game where each player has a 5x5x5x5x5x5 hex grid (again, the upper right grid in the image above). The object is to get from the bottom of the grid to the top, with various obstacles in the way, and each player is able to attack the others from the edge of their grid onto the other players' grids based on a range multiplier. Since a player's grid will never change and the distance of any arbitrary space from the edge of the grid will always be the same, should I just hard-code this number into each of the spaces, or should I still use a breadth-first search algorithm when players are attacking? The only con I can think of for hard coding everything is that I'm going to code 9 + 2(5+6+7+8) = 61 individual cells. Is there anything else I'm missing that should make me consider using more complex algorithms?
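
    A middle ground worth considering (a sketch, not from the question): keep the algorithm but run it once at load time, so nothing is typed by hand and nothing is searched during play. One breadth-first pass from the edge cells caches every distance; the cell type and neighbour function are stand-ins for your own board representation.

      using System;
      using System.Collections.Generic;

      static class HexDistances
      {
          // edgeCells are the cells at distance 0 from the edge; neighbours(cell)
          // yields the adjacent traversable cells. Both come from your own board.
          public static Dictionary<TCell, int> FromEdge<TCell>(
              IEnumerable<TCell> edgeCells,
              Func<TCell, IEnumerable<TCell>> neighbours)
          {
              var dist = new Dictionary<TCell, int>();
              var queue = new Queue<TCell>();
              foreach (var c in edgeCells) { dist[c] = 0; queue.Enqueue(c); }

              while (queue.Count > 0)
              {
                  var cell = queue.Dequeue();
                  foreach (var n in neighbours(cell))
                  {
                      if (dist.ContainsKey(n)) continue;   // already visited
                      dist[n] = dist[cell] + 1;
                      queue.Enqueue(n);
                  }
              }
              return dist;
          }
      }

    This gives hard-coded lookup speed at attack time while staying correct if the board layout or obstacle rules ever change.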

    Read the article

  • XNA Loading Screens

    - by Cyral
    I'm making a 2D XNA game. I'd like to implement loading screens when something has to load for a while, like when I log in to an account, connect to the server, and generate worlds. I'm pretty sure it needs to be multithreaded, because I want to be able to show something like "Generating World 10%...11%...".

      GenerateWorld()
      {
          // Call StartLoading("Generating World"); or something
          // Start generating, updating progress...
          // End loading screen and fade into world
      }

    Help appreciated, I'm new.
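
    A minimal sketch of the usual shape (names are hypothetical, and GenerateWorld with a progress callback is assumed rather than taken from the question): the slow work runs on a background thread while the normal Update/Draw loop keeps rendering a loading screen that reads the shared progress fields.

      using System;
      using System.Threading;

      class LoadingTask
      {
          public volatile float Progress;          // 0..1, written by the worker thread
          public volatile string StatusText = "";  // e.g. "Generating World"
          public volatile bool Done;

          public void Start(Action<LoadingTask> work)
          {
              var thread = new Thread(() => { work(this); Done = true; });
              thread.IsBackground = true;
              thread.Start();
          }
      }

      // Usage inside the game (assumed signatures):
      //   loading = new LoadingTask();
      //   loading.Start(t => { t.StatusText = "Generating World"; GenerateWorld(p => t.Progress = p); });
      // and in Draw(), while !loading.Done, render StatusText and Progress as a bar.

    Keeping GraphicsDevice and ContentManager calls on the main thread and doing only pure computation (world generation, network I/O) on the worker avoids most of the threading pitfalls here.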

    Read the article

  • Anti-cheat Javascript for browser/HTML5 game

    - by Billy Ninja
    I'm planning to make a single-player action RPG in JS/HTML5, and I'd like to prevent cheating. I don't need 100% protection, since it's not going to be a multiplayer game, but I want some level of protection. What strategies do you suggest beyond minification and obfuscation? I wouldn't mind some simple server-side checking, but I don't want to go the Diablo 3 path of keeping all my game-state changes on the server side. Since it's going to be an RPG of sorts, I came up with the idea of a stats inspector that checks for abrupt changes in their values, but I'm not sure how consistent and trustworthy it can be. What about variable and function scopes? Working in smaller scopes whenever possible is safer, but is it worth the effort? Is there any way for the JavaScript to inspect its own text, like a checksum? Are there browser-specific solutions? I wouldn't mind restricting it to Chrome only in the early builds.

    Read the article

  • Mixing XNA and Silverlight gives weird graphics

    - by Mech0z
    I'm making a small 3D game which is built as a Silverlight and XNA app, but when I draw the sprites the graphics become all weird. All my primitive types are rendered correctly, but my 3D models are just weird. My Draw looks like this when Silverlight is set to draw:

      private void OnDraw(object sender, GameTimerEventArgs e)
      {
          // Render the Silverlight controls using the UIElementRenderer
          elementRenderer.Render();

          // Clear the screen to a solid color
          SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

          switch (gameState)
          {
              case GameState.ChooseStarter:
                  TextBlockStatus.Text = "Find Starting Player";
                  break;
              case GameState.PlaceBrick:
                  TextBlockPlayer.Text = (playerTurn == PlayerTurn.PlayerOne) ? "Player One" : "Player Two";
                  TextBlockState.Text = "Place Brick";
                  foreach (IGraphicObject obj in _3dObjects)
                  {
                      obj.Draw(cameraPosition, e);
                  }
                  break;
              case GameState.GiveBrick:
                  TextBlockState.Text = "Give Brick";
                  break;
          }

          spriteBatch.Begin();
          // Using the texture from the UIElementRenderer,
          // draw the Silverlight controls to the screen
          spriteBatch.Draw(elementRenderer.Texture, cameraProjection, Color.White);
          spriteBatch.End();
      }

    This gives me the output shown. If I comment the SpriteBatch lines out I get the correct output, except the Silverlight text is of course not shown. I am not entirely sure what to look for except that zero vector I am giving to the SpriteBatch, but if that's the source I have no idea what I am supposed to set it to, especially since it's a 2D vector.
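
    In XNA 4.0, SpriteBatch.Begin replaces several device states (alpha blending, depth buffering off, clamp sampling), so any 3D drawn in the same frame inherits them on the next pass; resetting the states before the 3D loop is the usual fix, and the SpriteBatch position argument is normally just a screen position such as Vector2.Zero for a full-screen overlay. A sketch using the names from the question, with the exact fit to this project left as an assumption:

      // Inside OnDraw, before the 3D loop: restore sensible 3D defaults,
      // because SpriteBatch.Begin() from the previous frame changed them.
      var device = SharedGraphicsDeviceManager.Current.GraphicsDevice;
      device.BlendState        = BlendState.Opaque;
      device.DepthStencilState = DepthStencilState.Default;
      device.SamplerStates[0]  = SamplerState.LinearWrap;

      foreach (IGraphicObject obj in _3dObjects)
          obj.Draw(cameraPosition, e);

      // 2D UI last, drawn at a screen position rather than a projection matrix.
      spriteBatch.Begin();
      spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
      spriteBatch.End();

    If the models also use custom samplers or culling, those states may need restoring per frame as well.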

    Read the article

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is currently being developed for Android. I use OGG files: 96kbps VBR, 44.1kHz, 2 channels (that means stereo, right?). I read the other Stack Exchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even at 80kbps my effects sound OK, but I've only tested on a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions: For mobile phones and tablets in general, what parameters are recommended? Won't my 80kbps sounds be bad on a newer device (such as a modern tablet)? I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?); is there any noticeable difference at all for mobile phones / tablets in terms of the player experience? Is it worth it at all? I assume that stereo sounds take up much more memory (when they're decoded to PCM), despite the fact that the compressed OGG size is practically the same. Reacting to Roy T.'s great comment: Actually, I couldn't measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting it: the new WAV file size is half of what it was before, while the OGG file size is practically the same as before. The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated via his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them the same (= move to mono), or might there be some differences unnoticeable to the human ear?
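
    The memory intuition can be confirmed with simple arithmetic: decoded PCM size depends only on sample rate, bit depth, channel count and duration, not on the OGG bitrate, so stereo really is exactly twice mono once decoded. The numbers below are illustrative.

      static class AudioMath
      {
          // Decoded (in-memory) PCM size, independent of the compressed OGG bitrate:
          // bytes = sampleRate * (bitsPerSample / 8) * channels * seconds
          public static long PcmBytes(int sampleRate, int bitsPerSample, int channels, double seconds)
              => (long)(sampleRate * (bitsPerSample / 8) * channels * seconds);
      }

      // Example: a 2-second effect at 44.1 kHz / 16-bit:
      //   mono:   PcmBytes(44100, 16, 1, 2) = 176,400 bytes (~172 KB)
      //   stereo: PcmBytes(44100, 16, 2, 2) = 352,800 bytes (~345 KB), exactly double

    Since the recorded channels are reportedly identical, converting the effects to mono halves the decoded footprint with no audible loss for these kinds of short effects.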

    Read the article

  • Correct level of abstraction for a 3d rendering component?

    - by JohnB
    I've seen lots of questions around this area but not this exact question, so apologies if this is a duplicate. I'm making a small 3D game. Well, to be honest, it's just a little hobby project and likely won't turn out to be an actual game; I'll be happy to make a nice graphics demo and learn about 3D rendering and C++ design. My intent is to use Direct3D 9 for rendering, as I have a little experience with it and it seems to meet my requirements. However, if I've learned one thing as a programmer it's to ask "is there any conceivable reason that this component might be replaced by a different implementation?", and if the answer is yes then I need to design a proper abstraction and interface to that component. So even though I intend to implement D3D9, I need to design a 3D interface that could be implemented for D3D11, OpenGL... My question then is what level is it best to do this at? I'm thinking of an interface capable of creating and later drawing:
      - vertex buffers and index buffers
      - textures
      - vertex and pixel "shaders"
      - some representation of drawing state (blending modes etc.)
    In other words, a fairly low-level interface, where my code to draw, for example, an animated model would use the interface to obtain abstract vertex buffers and so on. I worry, though, that it's too low-level to abstract out all the functionality I need efficiently. The alternative is to do this at a higher level where the interface can draw objects, animations, landscapes etc., and implement those for each system. This seems like more work, but it's more flexible, I guess. So that's my question really: when abstracting out the drawing system, what level of interface works best?
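
    For reference, the low-level option usually ends up shaped roughly like the sketch below (C# is used here only for brevity; the same split maps directly onto C++ pure-virtual interfaces, and every name is hypothetical). Higher-level concepts such as models and landscapes are then written once against these handles and never need porting.

      interface IVertexBuffer { }
      interface IIndexBuffer { }
      interface ITexture { }
      interface IShaderProgram { }

      enum BlendMode { Opaque, AlphaBlend, Additive }

      interface IRenderDevice
      {
          IVertexBuffer  CreateVertexBuffer(byte[] data, int vertexStride);
          IIndexBuffer   CreateIndexBuffer(ushort[] indices);
          ITexture       CreateTexture(int width, int height, byte[] rgba);
          IShaderProgram CreateShader(string vertexSource, string pixelSource);

          void SetBlendState(BlendMode mode);
          void Draw(IVertexBuffer vb, IIndexBuffer ib, IShaderProgram shader, ITexture[] textures);
          void Present();
      }

    The D3D9 implementation wraps IDirect3DVertexBuffer9 and friends behind these handles; the usual efficiency worry is addressed by keeping the Draw call coarse (whole batches, pre-built state objects) rather than exposing per-state setters for everything.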

    Read the article

  • Blending animations for more character movements

    - by Noob Saibot
    I am making a hack-n-slash 3rd-person game, and I want the character movements to be more dynamic, not like fighting games where you have a moves list. I want to animate tons of different animations and have them "tween" between each other, because I want the controls to be all keyboard rather than keyboard and mouse. That way you have up to 10 inputs (all your fingers) to blend and morph animations to create more fluid movements. In the end this will almost be like characters typing a phrase or string of keys rather than "move forward, mouse look, click to melee". My question is: has anyone done this before, and how would someone go about trying to tween, let's say, one animation for each key on the keyboard, excluding Tab, Caps, R+Shift, L+Shift, Enter, R+Ctrl, L+Ctrl, L+Alt, R+Alt, Windows Key, and Menu? So that's all the numbers, letters and punctuation keys. That's 46 keys, which gives me 46! = 5502622159812088949850305428800254892961651752960000000000L possible orderings (computed in Python), and with a minimum entry value of 2 keypresses, shortening to half. It is not humanly possible to create so many unique animations in one lifetime, but I'm guessing there is a reason this hasn't been done already. Or if I just used 10 basic keys, maybe ASDF SPACE (RIGHT HAND) and 456+0 (LEFT HAND KEYPAD), it would give me 10! = 3,628,800 possible unique animations.

    Read the article

  • Alpha blend 3D png texture in XNA

    - by ProgrammerAtWork
    I'm trying to draw a partly transparent texture onto a plane, but the problem is that it incorrectly displays what is behind that texture. Pseudo code:

      vertices1 basiceffect1  // the vertices of vertices1 are located BEHIND vertices2
      vertices2 basiceffect2  // the vertices of vertices2 are located IN FRONT of vertices1

      GraphicsDevice.Clear(Blue);
      PrimitiveBatch.Begin();

      // if I draw like this:
      PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1)
      PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2)
      // everything gets drawn correctly; I can see the texture of vertices2 through
      // the transparent parts of vertices1

      // but if I draw like this:
      PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2)
      PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1)
      // I cannot see the texture of vertices1 behind the texture of vertices2.
      // Instead, the texture of vertices2 gets drawn, and the transparent parts are blue
      // (the clear color)

      PrimitiveBatch.End();

    My question is: why does the order in which I call Draw matter?
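
    Assuming depth buffering is enabled (which the symptom suggests), the order matters because the nearer quad writes its depth even for its transparent pixels, so the farther quad drawn afterwards fails the depth test there and only the clear colour shows through. The usual handling in XNA 4.0 is to draw opaque geometry first, then transparent geometry sorted far-to-near with depth writes disabled but the depth test still on; a hedged sketch of those state settings:

      // Transparent pass: keep the depth *test* so geometry still hides behind
      // opaque objects, but stop transparent pixels from *writing* depth.
      var depthReadOnly = new DepthStencilState
      {
          DepthBufferEnable = true,
          DepthBufferWriteEnable = false
      };

      GraphicsDevice.DepthStencilState = DepthStencilState.Default;
      // ... draw opaque geometry here ...

      GraphicsDevice.BlendState = BlendState.AlphaBlend;
      GraphicsDevice.DepthStencilState = depthReadOnly;
      // ... draw vertices1 and vertices2 sorted back-to-front from the camera ...

    With only two planes, simply sorting them back-to-front each frame is enough; the read-only depth state matters once more transparent objects overlap.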

    Read the article

  • Moving a sprite on collision with physics in emo-framework on Android

    - by KaHeL
    I'm developing my first ever game for Android where I'm still learning about using of framework. To begin I made two sprites of ball where one ball is movable by dragging and another one is just standing on it's place on load. Now I've already added the collision listener for both sprites and as tested it's working properly. Now what I need to learn is on how can I add physics on both sprites where when they collide the standing sprite will move based on the physics and bounce around the screen. It would be best if you teach it to me step by step since I'm a little slow on this. Here's my nut so far: local stage = emo.Stage(); class Okay_1 { sprite = null; spriteok = null; dragStart = false; angle = 0; // Called when the stage is loaded function onLoad() { print("Level_1 is loaded!"); // Create new sprite and load 'f1.png' sprite = emo.Sprite("f1.png"); sprite.moveCenter(stage.getWindowWidth() * 0.5, stage.getWindowHeight() * 0.5); sprite.load(); spriteok = emo.Sprite("okay.png") spriteok.setWidth(100); spriteok.setHeight(100); spriteok.load(); // Check if the coordinate (X=100, Y=100) is inside the sprite if (spriteok.contains(100, 100)) { print("contains!"); } // Does the sprite collides with the other sprite? if (spriteok.collidesWith(sprite)) { print("collides!"); } } function onMotionEvent(ev) { if (ev.getAction() == MOTION_EVENT_ACTION_DOWN) { // Moves the sprite at the position of motion event angle = sprite.getAngle(); sprite.remove(); sprite = emo.Sprite("f2.png"); sprite.load(); sprite.rotate(angle); sprite.moveCenter(ev.getX(), ev.getY()); sprite.rotate(sprite.getAngle()+10); // Check if the coordinate (X=100, Y=100) is inside the sprite if (sprite.contains(sprite.getWidth(), sprite.getHeight())) { print("contains!"); } // Does the sprite collides with the other sprite? if (sprite.collidesWith(spriteok)) { print("collides!"); } dragStart = true; }else if (ev.getAction() == MOTION_EVENT_ACTION_MOVE) { if (dragStart) { // Moves the sprite at the position of motion event sprite.moveCenter(ev.getX(), ev.getY()); sprite.rotate(sprite.getAngle()+10); // Check if the coordinate (X=100, Y=100) is inside the sprite if (sprite.contains(sprite.getWidth(), sprite.getHeight())) { print("contains!"); } // Does the sprite collides with the other sprite? if (sprite.collidesWith(spriteok)) { print("collides!"); } } }else if (ev.getAction() == MOTION_EVENT_ACTION_UP || ev.getAction() == MOTION_EVENT_ACTION_CANCEL) { if (dragStart) { // change block color to red dragStart = false; angle = sprite.getAngle(); sprite.remove(); sprite = emo.Sprite("f1.png"); sprite.load(); sprite.moveCenter(ev.getX(), ev.getY()); sprite.rotate(angle); // Check if the coordinate (X=100, Y=100) is inside the sprite if (sprite.contains(sprite.getWidth(), sprite.getHeight())) { print("contains!"); } // Does the sprite collides with the other sprite? if (sprite.collidesWith(spriteok)) { print("collides!"); } } } } // Called when the stage is disposed function onDispose() { sprite.remove(); // Remove the sprite print("Level_1 is disposed!"); } } function emo::onLoad() { emo.Stage().load(Okay_1()); }

    Read the article

  • Unity 4.3 Rigidbody2D unexpected force behaviour

    - by Lilz Votca Love
    So guys, I've edited the question and here is what my problem is. I have a player with a Rigidbody2D attached to it. My player can double jump in the air nicely and stick to walls when colliding with them, then slowly slide down to the ground. All movement is handled through physics, with no transform manipulation. I did something similar to this in the FixedUpdate of my player:

      void FixedUpdate()
      {
          if (wall && Input.GetButtonDown("Jump"))
          {
              if (facingright) // player is facing the left side of the wall
              {
                  rigidbody2D.AddForce(new Vector2(-1f, 2f) * jumpforce);
                  /* Now the player should jump backwards following this directional vector
                     and should follow a smooth curve, which in this part works well */
              }
              else
              {
                  rigidbody2D.AddForce(new Vector2(1f, 2f) * jumpforce);
                  /* Now this is where everything gets complicated: as you should have noticed,
                     this is the same directional vector with only the opposite x axis value,
                     and the same amount of force is used, but it behaves like the red curve
                     in the picture below */
              }
          }
      }

    (The bad behaviour is the curve and vector in red.) I tested the same thing (both AddForce calls) for a simple jump and they behave exactly as described above in the picture. So here is my problem: jumping diagonally forward with rigidbody2D.AddForce() does not have the same impact and does not follow the same curve as jumping in the opposite direction with the exact same amount of force. If I could fix this or get past this, I could implement a wall-jump system like a ninja jumping in zigzag between two opposite walls to climb them. Any ideas or alternatives?
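
    A likely explanation (hedged, since it depends on the rest of the controller): AddForce adds to whatever velocity the body already has, and while wall-sliding or approaching the wall that existing velocity differs between the two cases, so equal forces produce different curves. Clearing or overwriting the velocity just before the wall jump makes both sides start from the same state; a minimal Unity 4.3 sketch, with the direction values taken from the question and jumpforce illustrative:

      using UnityEngine;

      public class WallJumper : MonoBehaviour
      {
          public float jumpforce = 400f;   // illustrative value

          // Clearing the current velocity first makes the left and right wall jumps
          // symmetric: otherwise AddForce stacks on top of the slide/approach velocity.
          public void WallJump(bool facingRight)
          {
              rigidbody2D.velocity = Vector2.zero;
              Vector2 dir = facingRight ? new Vector2(-1f, 2f) : new Vector2(1f, 2f);
              rigidbody2D.AddForce(dir * jumpforce);
          }
      }

    As an aside, Input.GetButtonDown is frame-based and can be missed inside FixedUpdate; sampling it in Update, setting a flag, and consuming the flag in FixedUpdate is the usual workaround.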

    Read the article

  • Sphere Texture Mapping shows visible seams

    - by AvengerDr
    As you can see from the above picture there is a visible seam in the texture mapping. The underlying mesh is a geosphere based on octahedron subdivisions. On that particular latitude, vertices have been duplicated. However there still is a visible seam. Here is how I calculate the UV coordinates: float longitude = (float)Math.Atan2(normal.X, -normal.Z); float latitude = (float)Math.Acos(normal.Y); float u = (float)(longitude / (Math.PI * 2.0) + 0.5); float v = (float)(latitude / Math.PI); Is this a problem in the coordinates or a mipmapping issue?
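
    Coordinates computed with atan2 are the usual cause: triangles that straddle the longitude wrap get one corner with u near 1 and another near 0, so interpolation runs the long way around the texture and shows up as a seam (and the huge UV derivative across that edge can also force the smallest mip level there). A hedged per-triangle fix is to duplicate the wrapped corners and shift their u by +1, then rely on wrap addressing in the sampler; Vertex below stands in for your own vertex struct with a float U field.

      using System;
      using System.Collections.Generic;

      // For each triangle spanning the u = 1 -> 0 seam, duplicate the low-u corners
      // with u shifted by +1 so interpolation stays continuous across the wrap.
      static void FixSeam(List<Vertex> vertices, int[] indices)
      {
          for (int i = 0; i < indices.Length; i += 3)
          {
              float maxU = Math.Max(vertices[indices[i]].U,
                           Math.Max(vertices[indices[i + 1]].U, vertices[indices[i + 2]].U));

              for (int k = 0; k < 3; k++)
              {
                  Vertex v = vertices[indices[i + k]];      // struct copy, original untouched
                  if (maxU - v.U > 0.5f)                    // this corner wrapped back to ~0
                  {
                      v.U += 1f;
                      vertices.Add(v);                      // duplicated, shifted vertex
                      indices[i + k] = vertices.Count - 1;
                  }
              }
          }
      }

    The sampler's address mode has to be Wrap so u values above 1 sample correctly; if a faint line remains after that, it points to the mipmapping side of the problem rather than the coordinates.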

    Read the article
