Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

Page 478/1022 | < Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >

  • Initial direction of intersection between two moving vehicles?

    - by Larolaro
    I'm working on a bit of projectile prediction for my AI and I'm looking for some ideas; any input? If a blue vehicle is moving in a direction at a constant speed of X m/s and a stationary orange vehicle has a rocket that travels at Y m/s, in which initial direction would the orange vehicle have to fire the rocket for it to hit the blue vehicle at the earliest time in the future? Thanks for reading!
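    This is the classic interception problem: with the target's position relative to the shooter P, its velocity V, and rocket speed Y, solve |P + V*t| = Y*t for the earliest positive t, then aim at P + V*t. A minimal C++ sketch under those assumptions (the Vec2 type and function name are made up for illustration, not from the post):

        #include <cmath>
        #include <optional>

        struct Vec2 { float x, y; };

        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Returns a unit firing direction, or nothing if the rocket can never
        // catch the target. Positions are relative to the firing (orange) vehicle.
        std::optional<Vec2> interceptDirection(Vec2 targetPos, Vec2 targetVel, float rocketSpeed)
        {
            // Solve |targetPos + targetVel * t| = rocketSpeed * t for the earliest t > 0.
            float a = dot(targetVel, targetVel) - rocketSpeed * rocketSpeed;
            float b = 2.0f * dot(targetPos, targetVel);
            float c = dot(targetPos, targetPos);

            float t;
            if (std::fabs(a) < 1e-6f) {               // equal speeds: the equation is linear
                if (std::fabs(b) < 1e-6f) return std::nullopt;
                t = -c / b;
            } else {
                float disc = b * b - 4.0f * a * c;
                if (disc < 0.0f) return std::nullopt; // no real solution: cannot intercept
                float root = std::sqrt(disc);
                float t1 = (-b - root) / (2.0f * a);
                float t2 = (-b + root) / (2.0f * a);
                t = (t1 > 0.0f && (t1 < t2 || t2 <= 0.0f)) ? t1 : t2;
            }
            if (t <= 0.0f) return std::nullopt;

            // Aim at the point the target will occupy at time t.
            Vec2 impact = { targetPos.x + targetVel.x * t, targetPos.y + targetVel.y * t };
            float len = std::sqrt(dot(impact, impact));
            return Vec2{ impact.x / len, impact.y / len };
        }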

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in said world. To increase performance I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine; however, if I throw in another draw call to DrawUserIndexedPrimitives (to draw my one single moving object) after the call to DrawIndexedPrimitives, it will error out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. In order to get around this I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as that kind of defeats the point of a buffer. To shed some light, the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I manually create to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid with the connecting faces cut out) and also the number of matrix translations (as it would suck to do one per cube). The moving objects would be other items in the world which are dynamic and not fixed to the block grid, such as the NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from simple "Deal 25% extra damage" to more complicated things like "Deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant. I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of a main abstract Buff class, where only the relevant events are overridden. Then each character could have a vector of the buffs currently applied. Does this solution make sense? I can certainly see dozens of event types being necessary; it feels like making a new subclass for each buff is overkill, and it doesn't seem to allow for any buff "interactions". That is, say I wanted to implement a cap on damage boosts so that even if you had 10 different buffs which all give 25% extra damage, you would only do 100% extra instead of 250% extra. And there are more complicated situations that, ideally, I could control. I'm sure everyone can come up with examples of how more sophisticated buffs could potentially interact with each other in a way that, as a game developer, I may not want. As a relatively inexperienced C++ programmer (I have generally used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
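    A minimal C++ sketch of the event-hook idea described above, with the stacking cap handled where the bonuses are combined rather than inside any single buff (all names here are hypothetical, not an established design):

        #include <algorithm>
        #include <memory>
        #include <vector>

        class Buff {
        public:
            virtual ~Buff() = default;
            virtual void onTurnStart() {}                      // override only the hooks you need
            virtual float damageBonus() const { return 0.0f; } // e.g. 0.25f for +25%
            virtual int onReceiveDamage(int amount) { return amount; }
        };

        class DamageBoost : public Buff {
        public:
            explicit DamageBoost(float bonus) : bonus_(bonus) {}
            float damageBonus() const override { return bonus_; }
        private:
            float bonus_;
        };

        class Character {
        public:
            void addBuff(std::unique_ptr<Buff> b) { buffs_.push_back(std::move(b)); }

            int outgoingDamage(int base) const {
                float total = 0.0f;
                for (const auto& b : buffs_) total += b->damageBonus();
                total = std::min(total, 1.0f);   // interaction rule: cap the stack at +100%
                return static_cast<int>(base * (1.0f + total));
            }
        private:
            std::vector<std::unique_ptr<Buff>> buffs_;
        };

    Putting the cap in the aggregation step is one way to get buff "interactions" without each buff having to know about the others; a more general variant replaces the simple sum with a dedicated combiner per stat.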

    Read the article

  • Create a kind of Interface c++ [migrated]

    - by Liuka
    I'm writing a little 2d rendering framework with managers for input and resources like textures and meshes (for 2d geometry models, like quads) and they are all contained in a class "engine" that interacts with them and with a directX class. So each class have some public methods like init or update. They are called by the engine class to render the resources, create them, but a lot of them should not be called by the user: //in pseudo c++ //the textures manager class class TManager { private: vector textures; .... public: init(); update(); renderTexture(); //called by the "engine class" loadtexture(); gettexture(); //called by the user } class Engine { private: Tmanager texManager; public: Init() { //initialize all the managers } Render(){...} Update(){...} Tmanager* GetTManager(){return &texManager;} //to get a pointer to the manager //if i want to create or get textures } In this way the user, calling Engine::GetTmanager will have access to all the public methods of Tmanager, including init update and rendertexture, that must be called only by Engine inside its init, render and update functions. So, is it a good idea to implement a user interface in the following way? //in pseudo c++ //the textures manager class class TManager { private: vector textures; .... public: init(); update(); renderTexture(); //called by the "engine class" friend class Tmanager_UserInterface; operator Tmanager_UserInterface*(){return reinterpret_cast<Tmanager_UserInterface*>(this)} } class Tmanager_UserInterface : private Tmanager { //delete constructor //in this class there will be only methods like: loadtexture(); gettexture(); } class Engine { private: Tmanager texManager; public: Init() Render() Update() Tmanager_UserInterface* GetTManager(){return texManager;} } //in main function //i need to load a texture //i always have access to Engine class engine-GetTmanger()-LoadTexture(...) //i can just access load and get texture; In this way i can implement several interface for each object, keeping visible only the functions i (and the user) will need. There are better ways to do the same?? Or is it just useless(i dont hide the "framework private functions" and the user will learn to dont call them)? Before i have used this method: class manager { public: //engine functions userfunction(); } class engine { private: manager m; public: init(){//call manager init function} manageruserfunciton() { //call manager::userfunction() } } in this way i have no access to the manager class but it's a bad way because if i add a new feature to the manager i need to add a new method in the engine class and it takes a lot of time. sorry for the bad english.

    Read the article

  • Point Light Soft Shadows

    - by notabene
    How do I implement soft shadows for an omnidirectional (point) light? We use the typical shadow mapping technique: depth is rendered to a texture cube, and addressing is then pretty simple, just using the vector from the light to the fragment's world position. It works perfectly, until you want soft shadows. In our engine we use the PCSS technique for spot lights, but for point lights the trouble begins. How do you sample in 3D? I developed a technique where an orthonormal basis is created from the direction and an up vector (0,1,0), and the sampling vector (something like (1.0, i/depthMapSize, j/depthMapSize)) is then multiplied by this basis. But this (of course :)) looks pretty bad for vectors near (0,1,0) and (0,-1,0). I would appreciate any help on this.
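    One common workaround for the degenerate basis is simply to switch to a different reference axis when the direction gets close to the poles. A hedged C++ sketch of that idea (the same math ports to HLSL/GLSL; the types and names are placeholders):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 cross(Vec3 a, Vec3 b) {
            return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        }
        static Vec3 normalize(Vec3 v) {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // Build an orthonormal basis around the light-to-fragment direction.
        // The only change from the approach in the question is that the reference
        // "up" axis is swapped when the direction gets close to (0,1,0) or (0,-1,0).
        void buildBasis(Vec3 dir, Vec3& right, Vec3& up) {
            Vec3 ref = std::fabs(dir.y) > 0.99f ? Vec3{1.0f, 0.0f, 0.0f}
                                                : Vec3{0.0f, 1.0f, 0.0f};
            right = normalize(cross(ref, dir));
            up    = cross(dir, right);
        }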

    Read the article

  • How can I load and play an .x model using vertex animation in XNA?

    - by Christian
    From a game I developed years ago, I still have character models that my former 3D engine designer created and that I'd like to reuse in a Windows Phone project now. However, the files are in DirectX format (.x), containing keyframe animation only. No bones. No skeleton. There are a lot of animation keys defined across several frames to animate the characters. I don't quite understand how that works, to be frank. However, I did a lot of research on possible ways of getting the characters animated via XNA on Windows Phone, and all I found were hints that it is generally possible but not supported out of the box, possibly by implementing my own Content Importers and Processors. I haven't found anyone who has successfully done something like that yet. How should I go about loading and displaying these models in XNA?

    Read the article

  • Entity system in Lua, communication with C++ and level editor. Need advice.

    - by Notbad
    Hi! I know this is a really difficult subject. I have been reading a lot these days about entity systems, etc., and now I'm ready to ask some questions (if you don't mind answering them) because I'm really confused. First of all, I have a basic 2D editor written in Qt, and I'm in the process of adding entity editing. I want the editor to be able to receive RTTI information from entities to change properties, and to create some logic by linking published events to published actions (e.g. a level "activate" event triggers a door "open" action), etc. Because of all this, I guess my entity system should be written in a scripting language, in my case Lua. On the other hand, I want to use a component-based design for my entities, and here my questions start: 1) Should I define my components in C++? If I do, won't I lose all the RTTI information I want for my editor? On the other hand, I use Box2D for physics; if I define all my components in script, won't it be a lot of work to expose third-party libs to Lua? 2) Where should I place the message system for my game engine? Lua? C++? I'm tempted to just have C++ objects behave as servers, offering services to the Lua business logic: things like the physics system, rendering system, input system, World class, etc. And everything else in Lua: creation/composition of entities based on components, game logic, etc. Could anyone give any insight on how to accomplish this, and which approach is better? Thanks in advance, HexDump.
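    For the "C++ objects as servers, Lua as business logic" split, the plain Lua C API is enough to expose a service function. A minimal sketch, where the raycast service and its arguments are invented purely for illustration (a binding library or generator would normally replace this hand-written glue):

        #include <lua.hpp>   // Lua 5.x C API

        // A C++ "server" function exposed as a service to Lua game logic.
        static int l_raycast(lua_State* L) {
            double x = luaL_checknumber(L, 1);
            double y = luaL_checknumber(L, 2);
            bool hit = (x * x + y * y) < 100.0;   // stand-in for a real Box2D query
            lua_pushboolean(L, hit);
            return 1;                             // number of return values
        }

        int main() {
            lua_State* L = luaL_newstate();
            luaL_openlibs(L);
            lua_register(L, "raycast", l_raycast);    // expose the service to scripts
            luaL_dostring(L, "if raycast(3, 4) then print('hit') end");
            lua_close(L);
        }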

    Read the article

  • 2D Side scroller collision detection

    - by Shanon Simmonds
    I am trying to do some collision detection between objects and tiles, but the tiles do not have their own x and y position; they are just rendered at the x and y position given. There is an array of integers which holds the ids of the tiles to use (which are generated from an image, with each different color assigned a different tile): int x0 = camera.x / 16; int y0 = camera.y / 16; int x1 = (camera.x + screen.width) / 16; int y1 = (camera.y + screen.height) / 16; for(int y = y0; y < y1; y++) { if(y < 0 || y >= height) continue; // height is the height of the level for(int x = x0; x < x1; x++) { if(x < 0 || x >= width) continue; // width is the width of the level getTile(x, y).render(screen, x * 16, y * 16); } } I tried using the level's getTile method to check the tile the object was about to advance into, but it only seems to work in some directions. Any ideas about what I'm doing wrong, and possible fixes, would be greatly appreciated. What's wrong is that it doesn't collide properly in every direction, and this is how I tested for a collision in the object's class: if(!level.getTile((x + xa) / 16, (y + ya) / 16).isSolid()) { x += xa; y += ya; } EDIT: xa and ya represent the direction as well as the movement: if xa is negative the object is moving left, if it's positive it is moving right, and the same goes for ya, except negative is up and positive is down.
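    A common reason this only works in some directions is that a single point of the object is tested rather than its whole bounding box. A hedged C++ sketch of testing every corner of the destination box against the tile grid, one axis at a time (Level, isSolidAt and the 16x16 tile size are assumptions, not code from the post):

        struct Level {
            bool isSolidAt(int tx, int ty) const;   // assumed lookup into the tile-id array
        };

        // True if the whole w x h box placed at (x, y) overlaps no solid tiles.
        bool canMoveTo(const Level& level, float x, float y, float w, float h) {
            int x0 = static_cast<int>(x) / 16;
            int y0 = static_cast<int>(y) / 16;
            int x1 = static_cast<int>(x + w - 1) / 16;
            int y1 = static_cast<int>(y + h - 1) / 16;
            for (int ty = y0; ty <= y1; ++ty)
                for (int tx = x0; tx <= x1; ++tx)
                    if (level.isSolidAt(tx, ty)) return false;
            return true;
        }

        // Usage: resolve each axis separately so sliding along walls still works.
        // if (canMoveTo(level, x + xa, y, w, h)) x += xa;
        // if (canMoveTo(level, x, y + ya, w, h)) y += ya;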

    Read the article

  • UIView with IrrlichtScene - iOS

    - by user1459024
    i have a UIViewController in a Storyboard and want to draw a IrrlichtScene in this View Controller. My Code: WWSViewController.h #import <UIKit/UIKit.h> @interface WWSViewController : UIViewController { IBOutlet UILabel *errorLabel; } @end WWSViewController.mm #import "WWSViewController.h" #include "../../ressources/irrlicht/include/irrlicht.h" using namespace irr; using namespace core; using namespace scene; using namespace video; using namespace io; using namespace gui; @interface WWSViewController () @end @implementation WWSViewController -(void)awakeFromNib { errorLabel = [[UILabel alloc] init]; errorLabel.text = @""; IrrlichtDevice *device = createDevice( video::EDT_OGLES1, dimension2d<u32>(640, 480), 16, false, false, false, 0); /* Set the caption of the window to some nice text. Note that there is an 'L' in front of the string. The Irrlicht Engine uses wide character strings when displaying text. */ device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo"); /* Get a pointer to the VideoDriver, the SceneManager and the graphical user interface environment, so that we do not always have to write device->getVideoDriver(), device->getSceneManager(), or device->getGUIEnvironment(). */ IVideoDriver* driver = device->getVideoDriver(); ISceneManager* smgr = device->getSceneManager(); IGUIEnvironment* guienv = device->getGUIEnvironment(); /* We add a hello world label to the window, using the GUI environment. The text is placed at the position (10,10) as top left corner and (260,22) as lower right corner. */ guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!", rect<s32>(10,10,260,22), true); /* To show something interesting, we load a Quake 2 model and display it. We only have to get the Mesh from the Scene Manager with getMesh() and add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We check the return value of getMesh() to become aware of loading problems and other errors. Instead of writing the filename sydney.md2, it would also be possible to load a Maya object file (.obj), a complete Quake3 map (.bsp) or any other supported file format. By the way, that cool Quake 2 model called sydney was modelled by Brian Collins. */ IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2"); if (!mesh) { device->drop(); if (!errorLabel) { errorLabel = [[UILabel alloc] init]; } errorLabel.text = @"Konnte Mesh nicht laden."; return; } IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh ); /* To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color. */ if (node) { node->setMaterialFlag(EMF_LIGHTING, false); node->setMD2Animation(scene::EMAT_STAND); node->setMaterialTexture( 0, driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp") ); } /* To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is. */ smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0)); /* Ok, now we have set up the scene, lets draw everything: We run the device in a while() loop, until the device does not want to run any more. 
This would be when the user closes the window or presses ALT+F4 (or whatever keycode closes a window). */ while(device->run()) { /* Anything can be drawn between a beginScene() and an endScene() call. The beginScene() call clears the screen with a color and the depth buffer, if desired. Then we let the Scene Manager and the GUI Environment draw their content. With the endScene() call everything is presented on the screen. */ driver->beginScene(true, true, SColor(255,100,101,140)); smgr->drawAll(); guienv->drawAll(); driver->endScene(); } /* After we are done with the render loop, we have to delete the Irrlicht Device created before with createDevice(). In the Irrlicht Engine, you have to delete all objects you created with a method or function which starts with 'create'. The object is simply deleted by calling ->drop(). See the documentation at irr::IReferenceCounted::drop() for more information. */ device->drop(); } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); } @end Sadly, the result is just a black view in the simulator. :( I hope someone here can explain to me how to draw the scene in a UIView. Furthermore, I'm getting this error: "Could not load sprite bank because the file does not exist: #DefaultFont". How can I fix it?

    Read the article

  • Player sprite moving slower on iPhone 4

    - by nvillec
    I just finished getting movement/jump animation for a player sprite in Xcode using Cocos2D. The basic movement algorithm is a timer that updates every 0.01 sec, changing the sprite position to (sprite.position.x + xVel, sprite.position.y + yVel). Each time a movement button is tapped, the appropriate velocity (initialized to 0) is changed to whatever speed I choose, then a stop movement button returns the velocity to 0. It's not an ideal solution, but I'm very new at this and stoked to at least have that working with little help from the internet. So I may not have explained that perfectly, but it is in fact working to my satisfaction in Xcode's iPhone Simulator. However, when I build it for my device and run it on my phone, the sprite's movement speed is noticeably slower than in Xcode. At first I thought it must have to do with the resolution of the iPhone 4, making the sprite's movement path twice as long, but I found that if I pull up the multitasking bar and then return to the app, the speed will sometimes jump back to normal. My second theory was that the code is just inefficient and is bogging the processes down, but I would see this reflected in the frame rate, wouldn't I? It stays at 59-60 the whole time, and the spritesheet animation runs at the correct speed. Has anyone experienced this? Is this a really obvious issue that I'm completely missing? Any help (or tips for optimizing my approach to movement) would be much appreciated!
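    The usual way to keep on-screen speed consistent across devices is to express velocity in points per second and scale by the real elapsed time of each update, rather than assuming the timer always fires exactly every 0.01 s. A minimal sketch of the idea (generic C++-style names, not Cocos2D API):

        struct Sprite2D {               // stand-in for the engine's sprite type
            float x = 0.0f, y = 0.0f;   // position in points
            float vx = 0.0f, vy = 0.0f; // velocity in points per second

            // dtSeconds is the real time elapsed since the previous update, so the
            // distance moved per second stays the same whatever the timer or frame rate does.
            void update(float dtSeconds) {
                x += vx * dtSeconds;
                y += vy * dtSeconds;
            }
        };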

    Read the article

  • What are effective marketing strategies for iPhone games?

    - by Artemix
    So, long story short: a few days ago I published an iPhone game. I think the game wasn't that bad, to be honest, and still I got only 10 sales at $0.99. Are there any publishers, sponsors, or distributors who can make your game "visible" on the App Store, or is the only thing you need an amazing game? Somehow I think that even if you have an awesome game, if you don't do the "marketing magic" correctly you won't exist in the store. Now I'm making a second game, completely different, and I want to know how to do things right. If anyone knows something about this topic, let me know.

    Read the article

  • Object detection in a bitmap JavaScript canvas

    - by user1538127
    I want to detect clicks on canvas elements which are drawn using paths. So far I have thought of storing each element's path in a JavaScript data structure and then checking whether the coordinates of a hit match an element's coordinates. I believe there is already an algorithm for this kind of coordinate search; rendering each element's path and checking for hits would be inefficient when the number of elements is large. Can anyone point me to it?
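    For paths that are polygons, the standard coordinate test is the even-odd (ray casting) point-in-polygon check, usually combined with a coarse bounding-box or spatial-index pass so only nearby elements are tested; the canvas API's isPointInPath can also answer this per path. A hedged C++ sketch of the polygon test itself (curved paths would need flattening into points first):

        #include <cstddef>
        #include <vector>

        struct Point { double x, y; };

        // Cast a ray to the right from the click point and count how many polygon
        // edges it crosses. An odd count means the point is inside.
        bool pointInPolygon(const std::vector<Point>& poly, Point p) {
            bool inside = false;
            for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
                bool crosses = (poly[i].y > p.y) != (poly[j].y > p.y);
                if (crosses) {
                    double xAtY = poly[j].x + (p.y - poly[j].y) *
                                  (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
                    if (p.x < xAtY) inside = !inside;
                }
            }
            return inside;
        }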

    Read the article

  • I am going to start learning to develop games, and have a very important question

    - by levi s.
    So I am going to start learning to develop games soon, and I have already learned the basics of Java. Before I really go all out: am I making a bad choice of language? Should I stop now and move to C++ or C#? Will that hinder me? Is Java going to hinder me worse? I'm kind of having regrets about saying "oh hey, Minecraft was made in Java, it must be best!" I'm mainly asking: what should I do?

    Read the article

  • How can I support the creation and rendering of both interior and exterior environments?

    - by Nick
    Say I already have a renderer that can support outdoor terrain and environment rendering. How would I go about adding support for interior environments (for example, like World of Warcraft's dungeons) to my game? I'm interested both in how I should fit the interiors into my content creation process (for example, I thought about leaving holes in the terrain mesh into which I can "paste" the interior dungeon mesh at runtime) and how to render them (it seems like I'd want a different rendering flow other than a blended texture rendering phase that terrain uses).

    Read the article

  • How to perform simple collision detection?

    - by Rob
    Imagine two squares sitting side by side, both level with the ground. A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if all of the following are false: The right square's left side is to the right of the left square's right side. The right square's right side is to the left of the left square's left side. The right square's bottom side is above the left square's top side. The right square's top side is below the left square's bottom side. If any of those are true, the squares are not touching. But consider a case where one square is at a 45 degree angle. Is there an equally simple way to determine if those squares are touching?
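    For the axis-aligned case the four comparisons above collapse into a few lines; for the rotated case the usual generalisation is the separating axis theorem, which repeats the same "is there a gap along this axis?" question for every edge normal of both shapes. A minimal sketch of the axis-aligned test (hypothetical Rect type, y growing downwards):

        struct Rect { float left, top, right, bottom; };

        // Axis-aligned overlap test: the squares touch unless one is entirely to the
        // left/right of the other or entirely above/below it.
        bool overlaps(const Rect& a, const Rect& b) {
            if (a.left   > b.right)  return false;  // a entirely to the right of b
            if (a.right  < b.left)   return false;  // a entirely to the left of b
            if (a.top    > b.bottom) return false;  // a entirely below b (y grows down)
            if (a.bottom < b.top)    return false;  // a entirely above b
            return true;
        }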

    Read the article

  • DX10 sprite and pixel shader

    - by Alex Farber
    I am using ID3DX10Sprite to draw a 2D image on the screen. The 3D scene contains only one textured sprite placed over the whole window area. The render method looks like this: m_pDevice->ClearRenderTargetView(...); m_pSprite->Begin(D3DX10_SPRITE_SORT_TEXTURE); m_pSprite->DrawSpritesImmediate(&m_SpriteDefinition, 1, 0, 0); m_pSprite->End(); Now I want to make some transformations to the sprite texture in a shader. Currently the program doesn't use a shader. How is it possible to add a pixel shader to a program with this structure? Inside the shader, I need to set all color channels equal to the red channel and multiply the pixel values by some coefficient. Something like this: float4 TexturePixelShader(PixelInputType input) : SV_Target { float4 textureColor; textureColor = shaderTexture.Sample(SampleType, input.tex); textureColor.x = textureColor.x * coefficient; textureColor.y = textureColor.x; textureColor.z = textureColor.x; return textureColor; }

    Read the article

  • XNA 4.0 Refresh AudioEngine, WaveBank and Others Not Found

    - by Peteyslatts
    I'm going through the Learning XNA 4.0 book, and unfortunately I installed XNA 4.0 Refresh. All the code up until now has worked, with the exception of me needing to remove the Framework.Net and Framework.Storage references. (As a side question, will this be problematic later?) The problem I'm having now is that in my Game1.cs file I have imported all of the XNA.Framework libraries, and when I try to create instances of any of the following classes, an error pops up saying Visual Studio can't find them: AudioEngine, WaveBank, SoundBank, and Cue. I have googled around for a while, and the only solution I saw was to import Microsoft.Xna.Framework.Xact, but this doesn't seem to exist for me. Any help is much appreciated. Thanks, Peter.

    Read the article

  • SFML: Monster following player on a straight line

    - by user3504658
    I've searched for this and found a few topics , usually they used a function normalize and using simple vector subtracting which is ok , but how should I do it in sfml ? Instead of using: Movement = p.position() - m.position(); p is the player and m is the monster I used something like this to move on a straight line: sf::Vector2f Tail(0,0); if((mPlayer.getPosition().y - mMonster.GetInstance().getPosition().y) >= (mPlayer.getPosition().x - mMonster.GetInstance().getPosition().x)){ //sf::Vector2f Tail(0,0); Tail.x = mPlayer.getPosition().x - mMonster.GetInstance().getPosition().x; } else if((mPlayer.getPosition().y - mMonster.GetInstance().getPosition().y) <= (mPlayer.getPosition().x - mMonster.GetInstance().getPosition().x)){ //sf::Vector2f Tail(0,0); Tail.y = mPlayer.getPosition().y - mMonster.GetInstance().getPosition().y; } if(!MonsterCollosion()) mMonster.Move(Tail * (TimePerFrame.asSeconds() * 1/2 ) ); It works ok if the the height = the width for the game window, although I think it's not the best looking game when it comes to a moving monster, since it starts fast and then it gets slower so what do you guys advise me to do ?

    Read the article

  • How do I create a bounding frustum from a view & projection matrix?

    - by Narf the Mouse
    Given a left-handed projection matrix, a left-handed view matrix, and a ViewProj matrix of View * Projection, how do I create a bounding frustum comprised of near, far, left, right, top, and bottom planes? The only example I could find on Google (Tutorial 16: Frustum Culling) doesn't seem to work; for example, if the math is used as given, the near plane's distance comes out negative, which places the near plane behind the camera...
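    A hedged sketch of the usual plane-extraction approach (often attributed to Gribb and Hartmann), assuming row-vector, left-handed D3D/XNA-style matrices with clip-space z in [0, 1]; with other conventions the rows, columns and signs shuffle around, which is a common source of a negative near-plane distance:

        #include <cmath>

        struct Plane  { float a, b, c, d; };   // a*x + b*y + c*z + d = 0
        struct Matrix { float m[4][4]; };      // row-major, v' = v * M

        static Plane normalizePlane(Plane p) {
            float len = std::sqrt(p.a * p.a + p.b * p.b + p.c * p.c);
            return { p.a / len, p.b / len, p.c / len, p.d / len };
        }

        // Extract the six frustum planes from a combined View * Projection matrix.
        // With this formulation a point is inside when a*x + b*y + c*z + d >= 0 for all planes.
        void extractFrustumPlanes(const Matrix& vp, Plane out[6]) {
            const auto& m = vp.m;
            out[0] = normalizePlane({ m[0][3] + m[0][0], m[1][3] + m[1][0],
                                      m[2][3] + m[2][0], m[3][3] + m[3][0] });  // left
            out[1] = normalizePlane({ m[0][3] - m[0][0], m[1][3] - m[1][0],
                                      m[2][3] - m[2][0], m[3][3] - m[3][0] });  // right
            out[2] = normalizePlane({ m[0][3] + m[0][1], m[1][3] + m[1][1],
                                      m[2][3] + m[2][1], m[3][3] + m[3][1] });  // bottom
            out[3] = normalizePlane({ m[0][3] - m[0][1], m[1][3] - m[1][1],
                                      m[2][3] - m[2][1], m[3][3] - m[3][1] });  // top
            out[4] = normalizePlane({ m[0][2], m[1][2], m[2][2], m[3][2] });    // near
            out[5] = normalizePlane({ m[0][3] - m[0][2], m[1][3] - m[1][2],
                                      m[2][3] - m[2][2], m[3][3] - m[3][2] });  // far
        }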

    Read the article

  • Unity 3D coding language, C# or JavaScript [on hold]

    - by hemantchhabra
    Hello to the gaming community. I am a budding game designer, learning to code for the first time in my life. I did learn C++ in school, 8 years back, so I sort of understand the logic when people are coding and can suggest the right route to them, but beyond that I can't code. I am beginning to learn coding for Unity 3D. Which one do you suggest is more versatile and easier to work with for the future? Because I am a game designer, not a coder, I would only do the coding until I have someone else to code for me. It should be easy and fast to learn, functional and universal to apply, and innovative at the same time. C# or JavaScript? Thank you for your time. PS: if you could suggest steps to learn and tutorials to look for, that would be just awesome.

    Read the article

  • Rendering output to an arbitrary quadrilateral

    - by Trainee4Life
    I want to render the output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement it? So far I have investigated: Drawing a textured quadrilateral - quads look odd because they are composed of triangles, so the distortion looks wrong. The issue I'm facing has been discussed here and here as well. Setting a transformation on the device - I need help getting this implemented. Pixel shaders - I have not been able to implement the desired effect. Any help would be much appreciated.

    Read the article

  • How do I get the point coords of a rotated SFML shaperect?

    - by user15498
    I am trying to get collisions for bullets working, and I am using SFML. I am using my own code to compute the positions of the rectangle's points for collisions; however, I think there's a way to do this without computing the points myself, by simply getting them from SFML, since the shape is a rectangle and its points are already stored that way. Is there a way to do that? Through a combination of getPoint() and getGlobalBounds(), maybe? While on this topic, is it better to use rectangle shapes or sprites? I used to only use sprites; however, with the addition of textures and more low-level stuff, I think it would be best to switch to using rectangles and setting their size.
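    A hedged sketch of combining getPoint() with getTransform(): the shape stores its corners in local space, and the transform bakes in position, rotation, scale and origin, so applying it to each local point gives the rotated corners (getGlobalBounds() only returns the axis-aligned box around them):

        #include <SFML/Graphics/RectangleShape.hpp>
        #include <array>

        // Returns the four corners of a (possibly rotated) rectangle shape in world space.
        std::array<sf::Vector2f, 4> worldCorners(const sf::RectangleShape& shape) {
            std::array<sf::Vector2f, 4> corners;
            const sf::Transform& t = shape.getTransform();
            for (std::size_t i = 0; i < shape.getPointCount(); ++i)
                corners[i] = t.transformPoint(shape.getPoint(i));
            return corners;
        }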

    Read the article

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    In the sample image below, I have a yellow border for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking of trying a 2x2, but that would not help with interpreting a low/high versus high/low drawing stream; at least this way I would have two black, one white from the top, or one white, two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode it (zlib) and come up with 12 bytes as follows: 00 20 00 40 00 80. So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a color palette of 2 entries... palette[0] is RGBA all zeros, and palette[1] has RGBA of 255, 255, 255, 0. I'll eventually get into the other bit depth formats later; I just wanted to start with what I would expect to be the easiest. Part II: any guidance on handling the other depth formats would also help, if there is anything special to consider, especially regarding the alpha channel (which I am already looking for in the palette), that might trip me up.
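    Assuming each scanline starts with a filter-type byte of 0 (None) and the 1-bit palette indices are packed most-significant-bit first, a 3-pixel-wide row is one filter byte plus one data byte. A hedged C++ sketch of the unpacking:

        #include <cstdint>
        #include <vector>

        // Decode 1-bit palette scanlines, assuming every scanline uses filter type 0 (None).
        // Each scanline is one filter byte followed by ceil(width/8) data bytes, with the
        // leftmost pixel stored in the most significant bit.
        std::vector<uint8_t> unpack1Bit(const std::vector<uint8_t>& raw, int width, int height) {
            std::vector<uint8_t> indices;              // one palette index per pixel
            int bytesPerRow = 1 + (width + 7) / 8;     // filter byte + packed bits
            for (int y = 0; y < height; ++y) {
                const uint8_t* row = &raw[y * bytesPerRow];
                // row[0] is the filter type; 0 means the data bytes are used as-is.
                for (int x = 0; x < width; ++x) {
                    uint8_t byte = row[1 + x / 8];
                    uint8_t bit  = (byte >> (7 - (x % 8))) & 1;
                    indices.push_back(bit);            // 0 or 1 -> palette entry
                }
            }
            return indices;
        }

    Under those assumptions, the data bytes 20, 40 and 80 would unpack to the index rows 0 0 1, 0 1 0 and 1 0 0, which are then looked up in the palette.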

    Read the article

  • Sun & Moon Movement

    - by Thomas Mosey
    I'm creating a 2D HTML5 canvas game and am stuck on how to go about animating my sun and moon. The current setup basically sets the moon at -1024 on the X axis and the sun at 0, and animates them at 1 pixel a second. My canvas width is 1024 pixels, and whenever the sun's or moon's X position crosses the width of the canvas, its X position is set back to -1024 to repeat the animation. What I am trying to do is sync this up with my day/night cycle. Each day is 10000 ticks long (a tick being added every frame), with day and night being 50% each (5000 ticks each). What I am trying to calculate is what I'll need to add to the X position of each per frame to get the sun from an X of 0 to 1024 after 5000 ticks/frames. Any help is appreciated.
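    For reference, with the numbers given this works out directly: the sun has to cover the 1024-pixel canvas width in 5000 ticks, so the per-tick step is 1024 / 5000 = 0.2048 pixels. The moon uses the same step starting from its -1024 offset, so it crosses the visible canvas during the night half of the cycle.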

    Read the article
