Search Results

Search found 29201 results on 1169 pages for 'game development'.


  • Initial direction of intersection between two moving vehicles?

    - by Larolaro
    I'm working on a bit of projectile prediction for my AI and I'm looking for some ideas. Any input? If a blue vehicle is moving in a fixed direction at a constant speed of X m/s, and a stationary orange vehicle has a rocket that travels at Y m/s, in which initial direction would the orange vehicle have to fire the rocket for it to hit the blue vehicle at the earliest possible time? Thanks for reading!
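
    For what it's worth, this reduces to a quadratic in the intercept time t: with P the blue vehicle's position relative to the shooter and V its velocity, require |P + V*t| = Y*t and take the earliest positive root. A hedged C++ sketch (the vector type and names are assumptions, not anyone's actual API):

    #include <cmath>

    struct Vec2 { float x, y; };

    // P: target position relative to shooter, V: target velocity, s: rocket speed.
    // On success, dir is the unit direction to fire in.
    bool interceptDir(Vec2 P, Vec2 V, float s, Vec2 &dir)
    {
        float a = V.x * V.x + V.y * V.y - s * s;
        float b = 2.0f * (P.x * V.x + P.y * V.y);
        float c = P.x * P.x + P.y * P.y;
        float t = -1.0f;
        if (std::fabs(a) < 1e-6f) {               // rocket speed equals target speed:
            if (std::fabs(b) > 1e-6f) t = -c / b; // the quadratic degenerates to linear
        } else {
            float disc = b * b - 4.0f * a * c;
            if (disc < 0.0f) return false;        // the rocket can never catch up
            float r = std::sqrt(disc);
            float t1 = (-b - r) / (2.0f * a), t2 = (-b + r) / (2.0f * a);
            t = (t1 > 0.0f && (t1 < t2 || t2 <= 0.0f)) ? t1 : t2; // earliest positive root
        }
        if (t <= 0.0f) return false;
        dir.x = (P.x + V.x * t) / (s * t);        // normalise by the distance the
        dir.y = (P.y + V.y * t) / (s * t);        // rocket covers in time t
        return true;
    }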

    Read the article

  • How to perform simple collision detection?

    - by Rob
    Imagine two squares sitting side by side, both level with the ground like so: A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if all of the following are false:

    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. But consider a case like this, where one square is at a 45 degree angle: Is there an equally simple way to determine if those squares are touching?
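
    For convex shapes like these squares there is a technique in the same spirit, the Separating Axis Theorem: the shapes do not touch exactly when their projections fail to overlap on some edge normal. A hedged sketch, assuming each square is supplied as four corners in order (that convention is an assumption, not part of the question):

    struct V2 { float x, y; };

    // Project the four corners of q onto axis; return the interval [lo, hi].
    static void project(const V2 q[4], V2 axis, float &lo, float &hi)
    {
        lo = hi = q[0].x * axis.x + q[0].y * axis.y;
        for (int i = 1; i < 4; ++i) {
            float p = q[i].x * axis.x + q[i].y * axis.y;
            if (p < lo) lo = p;
            if (p > hi) hi = p;
        }
    }

    bool quadsTouch(const V2 a[4], const V2 b[4])
    {
        const V2 *quads[2] = { a, b };
        for (int q = 0; q < 2; ++q)
            for (int i = 0; i < 4; ++i) {
                const V2 *s = quads[q];
                V2 edge = { s[(i + 1) % 4].x - s[i].x, s[(i + 1) % 4].y - s[i].y };
                V2 axis = { -edge.y, edge.x };            // normal to this edge
                float alo, ahi, blo, bhi;
                project(a, axis, alo, ahi);
                project(b, axis, blo, bhi);
                if (ahi < blo || bhi < alo) return false; // separating axis found
            }
        return true; // no separating axis: the quads overlap or touch
    }

    For two axis-aligned squares this degenerates into exactly the four comparisons listed above.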

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, it defines the "frustum" far too broadly: when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely inside the edge. Here's my code (where size is the size in screen space, i.e. the resolution of the client area of the window I'm rendering into). Any suggestions?

    auto&& size = GetDimensions();
    D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 };
    D3DCALL(device->SetViewport(&vp));
    static const int slices = 10;
    std::vector<Object*> result;
    for(int i = 0; i < slices; i++) {
        D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = {
            D3DXVECTOR3(0, size.y, static_cast<float>(i) / slices),
            D3DXVECTOR3(size.x, 0, static_cast<float>(i) / slices),
            D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices),
            D3DXVECTOR3(0, 0, static_cast<float>(i) / slices),
            D3DXVECTOR3(0, 0, static_cast<float>(i + 1) / slices),
            D3DXVECTOR3(size.x, 0, static_cast<float>(i + 1) / slices),
            D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices),
            D3DXVECTOR3(0, size.y, static_cast<float>(i + 1) / slices)
        };
        D3DXMATRIXA16 Identity;
        D3DXMatrixIdentity(&Identity);
        D3DXVec3UnprojectArray(
            WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
            WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
            &vp, &Projection, &View, &Identity, 8
        );
        Math::AABB Frustrum;
        auto world_begin = std::begin(WorldSpaceFrustrumPoints);
        auto world_end = std::end(WorldSpaceFrustrumPoints);
        auto world_initial = WorldSpaceFrustrumPoints[0];
        Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial,
            [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x;
        Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial,
            [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y;
        Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial,
            [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z;
        Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial,
            [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x;
        Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial,
            [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y;
        Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial,
            [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z;
        auto slices_result = ObjectTree.collision(Frustrum);
        result.insert(result.end(), slices_result.begin(), slices_result.end());
    }
    return result;
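
    For reference, a hedged alternative that avoids slicing altogether: a single world-space AABB around the unprojected corners over-covers badly whenever the camera is rotated, so testing each object's AABB against the six frustum planes is usually much tighter. A sketch of the Gribb/Hartmann plane extraction for the D3D row-vector convention (the _11.._44 members are D3DXMATRIX's; the plane/AABB test at the end is described, not shown):

    D3DXMATRIX vp;
    D3DXMatrixMultiply(&vp, &View, &Projection); // world is identity here
    D3DXPLANE planes[6];
    // left, right, bottom, top: column 4 plus/minus columns 1 and 2
    planes[0] = D3DXPLANE(vp._14 + vp._11, vp._24 + vp._21, vp._34 + vp._31, vp._44 + vp._41);
    planes[1] = D3DXPLANE(vp._14 - vp._11, vp._24 - vp._21, vp._34 - vp._31, vp._44 - vp._41);
    planes[2] = D3DXPLANE(vp._14 + vp._12, vp._24 + vp._22, vp._34 + vp._32, vp._44 + vp._42);
    planes[3] = D3DXPLANE(vp._14 - vp._12, vp._24 - vp._22, vp._34 - vp._32, vp._44 - vp._42);
    // near (D3D clips z to [0,1]) and far
    planes[4] = D3DXPLANE(vp._13, vp._23, vp._33, vp._43);
    planes[5] = D3DXPLANE(vp._14 - vp._13, vp._24 - vp._23, vp._34 - vp._33, vp._44 - vp._43);
    for (int p = 0; p < 6; ++p)
        D3DXPlaneNormalize(&planes[p], &planes[p]);
    // An AABB is culled if all eight of its corners lie behind any single plane,
    // i.e. D3DXPlaneDotCoord(&planes[p], &corner) < 0 for every corner.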

    Read the article

  • XNA 4.0 Point Vertex Rendering

    - by luis
    I have a buffer of about 134 million particles and a very powerful computer to render them smoothly, but I am getting an error when trying to render them as line primitives: it says I cannot render more than around 1 million. I wonder how I can do this, and also whether there is a better way to render this than with lines. I'm comfortable with having 1-pixel points or anything else, as long as the vertices are shown all the time. I'm basically just plotting the points. Thanks.
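
    A hedged sketch of the usual workaround, in generic C++-style pseudocode rather than XNA: draw APIs cap the number of primitives per call, so a huge buffer has to be submitted in chunks that stay under the limit. Both drawRange and the exact cap are assumptions standing in for the real draw call and profile limit:

    #include <algorithm>
    #include <cstddef>

    void drawRange(std::size_t first, std::size_t count); // hypothetical wrapper
                                                          // around one API draw call

    void drawAllPoints(std::size_t totalPoints)
    {
        const std::size_t kMaxPerCall = 1000000; // assumed per-call primitive cap
        for (std::size_t start = 0; start < totalPoints; start += kMaxPerCall)
        {
            std::size_t count = std::min(kMaxPerCall, totalPoints - start);
            drawRange(start, count); // many small calls instead of one huge one
        }
    }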

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render two types of primitives:

    - points (a point cloud)
    - triangles (a mesh)

    I am rendering the points simply, without any index arrays, and they are getting rendered fine. To render the meshes I am using indexed drawing, with the face list array holding the indices of the vertices to be rendered as triangles. Vertices and their corresponding vertex colors are stored in their corresponding buffers. But the indexed drawing commands do not draw anything. The code is as follows.

    Main display function:

    void display()
    {
        simple->enable();
        simple->bindUniform("MV", modelview);
        simple->bindUniform("P", projection);

        // rendering Point Cloud
        glBindVertexArray(vao);
        // Vertex buffer Point Cloud
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        // Color Buffer point Cloud
        glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
        // Render Colored Point Cloud
        //glDrawArrays(GL_POINTS, 0, model->vertexCount);
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        // ---------------- END ---------------------//

        //// Floor Rendering
        glBindBuffer(GL_ARRAY_BUFFER, fl);
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *)48);
        glDrawArrays(GL_QUADS, 0, 4);
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        // ----------------- END ---------------------//

        // Rendering the Meshes
        //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING ////////////////////
        glBindVertexArray(vid);
        for(int i = 0; i < NUM_MESHES; i++)
        {
            glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0,
                (void *)(meshes[i]->vertexCount * sizeof(glm::vec3)));
            //glDrawArrays(GL_TRIANGLES, 0, meshes[i]->vertexCount);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
            //cout << gluErrorString(glGetError());
            glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_FLOAT, (void *)0);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
        }
        glUseProgram(0);
        glutSwapBuffers();
        glutPostRedisplay();
    }

    Point cloud buffer initialization:

    void initGLPointCloud()
    {
        glGenBuffers(1, &vertexbuffer);
        glGenBuffers(1, &colorbuffer);
        glGenBuffers(1, &fl);
        // Populates the position buffer
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3),
                     &model->positions[0], GL_STATIC_DRAW);
        // Populates the color buffer
        glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
        glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3),
                     &model->colors[0], GL_STATIC_DRAW);
        model->FreeMemory(); // Free the no-longer-needed memory; the data has already
                             // been copied to the graphics card and won't be used again.
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    Meshes buffer initialization:

    void initGLMeshes(int i)
    {
        glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
        glBufferData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3) * 2,
                     NULL, GL_STATIC_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->positions[0]);
        glBufferSubData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3),
                        meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->colors[0]);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::vec3),
                     &meshes[i]->faces[0], GL_STATIC_DRAW);
        meshes[i]->FreeMemory();
        //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }

    Initialize the rendering, load and create the shaders, and call the mesh and point cloud initializers:

    void initRender()
    {
        simple = new GLSLShader("shaders/simple.vert", "shaders/simple.frag");
        // Point Cloud
        // Sets up VAO
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        initGLPointCloud();
        // floorData
        glBindBuffer(GL_ARRAY_BUFFER, fl);
        glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);
        // Meshes
        for(int i = 0; i < NUM_MESHES; i++)
        {
            if(i == 0) // Set up the new vertex array state for indexed drawing
            {
                glGenVertexArrays(1, &vid);
                glBindVertexArray(vid);
                glGenBuffers(NUM_MESHES, mVertex);
                glGenBuffers(NUM_MESHES, mColor);
                glGenBuffers(NUM_MESHES, mFace);
            }
            initGLMeshes(i);
        }
        glEnable(GL_DEPTH_TEST);
    }

    Any help would be much appreciated; I have been breaking my head on this problem for three days, and it is still unsolved.
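
    One hedged observation about the non-drawing block: the third argument of glDrawElements names the integer type of the indices, and GL_FLOAT is not a legal value there; the call fails with GL_INVALID_ENUM (which the commented-out gluErrorString line should confirm). With the face list stored as unsigned ints rather than glm::vec3 floats, the call would look like:

    // indices stored as unsigned int (e.g. GLuint / glm::ivec3), not floats
    glDrawElements(GL_TRIANGLES,
                   meshes[i]->faceCount * 3, // number of indices to draw
                   GL_UNSIGNED_INT,          // only UNSIGNED_BYTE/SHORT/INT are valid
                   (void *)0);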

    Read the article

  • Can't use SFML sprite drawing and OpenGL rendering at the same time

    - by Ken
    I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; the sprite or text drawing causes the OpenGL output to disappear. The following code shows what I'm trying to do:

    sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, viewport.width, 0, viewport.height, 0, 1);
    while (window.pollEvent(Event))
    {
        //event handling...
        //begin drawing
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glColor3f(col.x, col.y, col.z);
        for(int i = 0; i < 3; i++)
            glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
        glEnd();
        // adding this line causes all the previous opengl triangles not to appear
        window.draw("Sometext");
        window.display();
    }
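
    SFML's own draw calls change the GL state (matrices, bound buffers, shaders) underneath raw OpenGL code, which is exactly the symptom described. A hedged sketch of the usual SFML 2.x fix, assuming text is an sf::Text (RenderWindow::draw takes an sf::Drawable, so a bare string literal would not compile in any case):

    window.pushGLStates();   // save the raw OpenGL state set up above
    window.draw(text);       // let SFML draw with its own state
    window.popGLStates();    // restore the saved state before the next GL frame

    When the saved state cannot be trusted, window.resetGLStates() is the heavier alternative that puts SFML back into a known state.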

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

    for each slice
        for each texel in depth texture
            process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least until the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

    for each texel in depth texture
        figure out what slice it should end up in
        process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.
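
    One hedged sketch of that scatter reordering, which is how LPV-style injection passes are commonly set up: attach the whole 3D texture as a layered framebuffer attachment, draw one point per depth-map texel, and let the shaders pick the slice by writing gl_Layer. This needs GL 3.2+ layered rendering; tex3d and the texel counts below are assumptions:

    // Attach the entire 3D texture (no layer index) -> layered rendering
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex3d, 0);
    // One point per depth texel: the vertex shader samples the depth texture to
    // compute the slice, and a pass-through geometry shader writes that slice to
    // gl_Layer, so each texel scatters itself into the right slice in one pass.
    glDrawArrays(GL_POINTS, 0, depthWidth * depthHeight);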

    Read the article

  • Cocos2d update leaking memory

    - by Andrey Chernukha
    I have a weird issue: my app is leaking memory on the device only, not on the simulator. It leaks if I schedule an update method anywhere, on any scene. It leaks even though the update method is empty; there's nothing inside it except an NSLog. How can that be? I have even scheduled an update on the very first scene, where it seems there's nothing to leak, and scheduled another empty one; it's still leaking, or not leaking but allocating something. Either way the result is the same: the volume of memory consumed keeps increasing and my app soon crashes. I can detect the leak by using Instruments (Memory - Activity Monitor) or with the help of the following function:

    void report_memory(void)
    {
        struct task_basic_info info;
        mach_msg_type_number_t size = sizeof(info);
        kern_return_t kerr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                       (task_info_t)&info, &size);
        if( kerr == KERN_SUCCESS ) {
            NSLog(@"Memory in use (in bytes): %u", info.resident_size);
        } else {
            NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
        }
    }

    Can anyone explain what's going on?

    Read the article

  • About floating point precision and why do we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains the behind-the-scenes and offers the obvious alternative: fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are fixed-point numbers so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?

    Read the article

  • XNA Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when dragging them. I am comparing my application to the standard TouchGestureSample sample from MSDN. For some reason, in my application gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and both run in the standard Windows Phone Emulator. I am setting my breakpoint immediately after this line of code in a simple Update method:

    GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app: Delta = {X:-13.56522 Y:4.166667}, Position = {X:184.6956 Y:417.7083}. Typical values in the sample app: Delta = {X:7 Y:16}, Position = {X:497 Y:244}. Has anyone seen this issue? Does anyone have any suggestions? Thank you.

    Read the article

  • How to alter image pixels of a wild life bird?

    - by NoobScratcher
    Hello! I was hoping someone could explain, with code, how to move or change the color and position of actual image pixels. I know how to write pixels on a surface or screen surface:

    unsigned int *ptr = static_cast<unsigned int *>(screen->pixels);
    int offset = y * (screen->pitch / sizeof(unsigned int));
    ptr[offset + x] = color;

    But I don't know how to alter or manipulate the pixels of a PNG image. My thoughts on this: how do I get the values and locations of pixels, and what do I have to write to make that happen? Then, how do I actually change the values or locations of those pixels? Any ideas, tips, and suggestions are welcome!

    int main(int argc, char *argv[])
    {
        SDL_Surface *Screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
        SDL_Surface *Image;
        Image = IMG_Load("image.png");
        bool done = false;
        SDL_Event event;
        while(!done)
        {
            SDL_FillRect(Screen, NULL, (0,0,0));
            SDL_BlitSurface(Image, NULL, Screen, NULL);
            while(SDL_PollEvent(&event))
            {
                switch(event.type)
                {
                    case SDL_QUIT:
                        return 0;
                        break;
                }
            }
            SDL_Flip(Screen);
        }
        return 0;
    }
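
    A hedged sketch of per-pixel access on the loaded image, using the same SDL 1.2 API as the question. The coordinates x, y, nx, ny are assumed inputs, and the surface is assumed to be 32 bits per pixel (IMG_Load does not guarantee that, so converting with SDL_DisplayFormatAlpha first may be needed):

    if (SDL_MUSTLOCK(Image)) SDL_LockSurface(Image); // required before touching ->pixels
    Uint32 *pixels = static_cast<Uint32 *>(Image->pixels);
    int pitch = Image->pitch / sizeof(Uint32);       // pixels per row
    Uint32 p = pixels[y * pitch + x];                // read one pixel
    Uint8 r, g, b, a;
    SDL_GetRGBA(p, Image->format, &r, &g, &b, &a);   // decode it using the surface's format
    // ...change r/g/b/a here, then write the pixel somewhere else...
    pixels[ny * pitch + nx] = SDL_MapRGBA(Image->format, r, g, b, a);
    if (SDL_MUSTLOCK(Image)) SDL_UnlockSurface(Image);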

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x, y, z coordinates of the hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the x, y, z coordinates only fluctuate between roughly -1 and 1, so I assume I will need to multiply them by the screen width and height. Even then, it still doesn't seem to animate according to the original x, y, z points. Are there any transformations I might be missing?

    while ((line = r.ReadLine()) != null)
    {
        string[] temp = line.Split(',');
        int x = (int)(float.Parse(temp[0]) * maxWidth);
        int y = (int)(float.Parse(temp[1]) * maxHeight);
    }
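
    A hedged note: Kinect skeleton-space coordinates are centred on the sensor, so a value of -1 should land on the left or bottom edge rather than off-screen, and screen y grows downward while sensor y grows upward. A sketch of the remap under those assumptions (C-style for illustration):

    // map sensor-space [-1, 1] to pixels; the y axis flips
    int screenX = (int)((x * 0.5f + 0.5f) * maxWidth);
    int screenY = (int)((1.0f - (y * 0.5f + 0.5f)) * maxHeight);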

    Read the article

  • UIView with IrrlichtScene - iOS

    - by user1459024
    I have a UIViewController in a Storyboard and want to draw an Irrlicht scene in this view controller. My code:

    WWSViewController.h

    #import <UIKit/UIKit.h>

    @interface WWSViewController : UIViewController
    {
        IBOutlet UILabel *errorLabel;
    }
    @end

    WWSViewController.mm

    #import "WWSViewController.h"
    #include "../../ressources/irrlicht/include/irrlicht.h"

    using namespace irr;
    using namespace core;
    using namespace scene;
    using namespace video;
    using namespace io;
    using namespace gui;

    @interface WWSViewController ()
    @end

    @implementation WWSViewController

    -(void)awakeFromNib
    {
        errorLabel = [[UILabel alloc] init];
        errorLabel.text = @"";

        IrrlichtDevice *device = createDevice(video::EDT_OGLES1,
            dimension2d<u32>(640, 480), 16, false, false, false, 0);

        /* Set the caption of the window to some nice text. Note that there is
           an 'L' in front of the string. The Irrlicht Engine uses wide character
           strings when displaying text. */
        device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo");

        /* Get a pointer to the VideoDriver, the SceneManager and the graphical
           user interface environment, so that we do not always have to write
           device->getVideoDriver(), device->getSceneManager(), or
           device->getGUIEnvironment(). */
        IVideoDriver* driver = device->getVideoDriver();
        ISceneManager* smgr = device->getSceneManager();
        IGUIEnvironment* guienv = device->getGUIEnvironment();

        /* We add a hello world label to the window, using the GUI environment.
           The text is placed at the position (10,10) as top left corner and
           (260,22) as lower right corner. */
        guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!",
            rect<s32>(10,10,260,22), true);

        /* To show something interesting, we load a Quake 2 model and display it.
           We only have to get the Mesh from the Scene Manager with getMesh() and
           add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We
           check the return value of getMesh() to become aware of loading problems
           and other errors. Instead of writing the filename sydney.md2, it would
           also be possible to load a Maya object file (.obj), a complete Quake3
           map (.bsp) or any other supported file format. By the way, that cool
           Quake 2 model called sydney was modelled by Brian Collins. */
        IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2");
        if (!mesh)
        {
            device->drop();
            if (!errorLabel)
            {
                errorLabel = [[UILabel alloc] init];
            }
            errorLabel.text = @"Konnte Mesh nicht laden."; // "Could not load mesh."
            return;
        }
        IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh );

        /* To let the mesh look a little bit nicer, we change its material. We
           disable lighting because we do not have a dynamic light in here, and
           the mesh would be totally black otherwise. Then we set the frame loop,
           such that the predefined STAND animation is used. And last, we apply a
           texture to the mesh. Without it the mesh would be drawn using only a
           color. */
        if (node)
        {
            node->setMaterialFlag(EMF_LIGHTING, false);
            node->setMD2Animation(scene::EMAT_STAND);
            node->setMaterialTexture( 0,
                driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp") );
        }

        /* To look at the mesh, we place a camera into 3d space at the position
           (0, 30, -40). The camera looks from there to (0,5,0), which is
           approximately the place where our md2 model is. */
        smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));

        /* Ok, now we have set up the scene, lets draw everything: We run the
           device in a while() loop, until the device does not want to run any
           more. This would be when the user closes the window or presses ALT+F4
           (or whatever keycode closes a window). */
        while(device->run())
        {
            /* Anything can be drawn between a beginScene() and an endScene()
               call. The beginScene() call clears the screen with a color and the
               depth buffer, if desired. Then we let the Scene Manager and the GUI
               Environment draw their content. With the endScene() call everything
               is presented on the screen. */
            driver->beginScene(true, true, SColor(255,100,101,140));
            smgr->drawAll();
            guienv->drawAll();
            driver->endScene();
        }

        /* After we are done with the render loop, we have to delete the Irrlicht
           Device created before with createDevice(). In the Irrlicht Engine, you
           have to delete all objects you created with a method or function which
           starts with 'create'. The object is simply deleted by calling ->drop().
           See the documentation at irr::IReferenceCounted::drop() for more
           information. */
        device->drop();
    }

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        // Do any additional setup after loading the view, typically from a nib.
    }

    - (void)viewDidUnload
    {
        [super viewDidUnload];
        // Release any retained subviews of the main view.
    }

    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
    {
        return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
    }

    @end

    Sadly, the result is just a black view in the simulator. :( I hope someone here can explain to me how to draw the scene in a UIView. Furthermore, I'm getting this error: "Could not load sprite bank because the file does not exist: #DefaultFont". How can I fix it?
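
    One hedged observation: the while(device->run()) loop in awakeFromNib never returns, so UIKit never gets a chance to lay out or present the view, which by itself can leave it black. A sketch of the per-frame alternative (the device would be kept as an ivar and this function called from a CADisplayLink or NSTimer callback; plain C++ here for brevity):

    void renderFrame(irr::IrrlichtDevice *device)
    {
        if (device->run()) // pump the device once per tick, no blocking loop
        {
            irr::video::IVideoDriver *driver = device->getVideoDriver();
            driver->beginScene(true, true, irr::video::SColor(255,100,101,140));
            device->getSceneManager()->drawAll();
            device->getGUIEnvironment()->drawAll();
            driver->endScene();
        }
    }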

    Read the article

  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful, and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark or light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me... I wish I could enter a wireframe mode or something like that, but as far as I know that is not possible.

    Read the article

  • Efficient way to calculate "vision cones" on 2D tile map?

    - by OverMachoGrande
    I'm trying to calculate which tiles a particular unit can "see" if facing a certain direction on a tile map (within a certain range and angle of facing). The easiest way would be to draw a certain number of tiles outward and raycast to each tile. However, I'm hoping for something slightly more efficient. A picture says a thousand words: The red dot is the unit (facing upwards). My goal is to calculate the yellow tiles. The green blocks are walls (walls are between tiles, and it's easy to check if you can pass between two tiles). The blue line represents something like the "raycasting" method I was talking about, but I'd rather not have to do this. EDIT: Units can only face north/south/east/west (0, 90, 180, or 270 degrees) and the FoV is always 90 degrees, which should simplify some calculations. I'm thinking there's some sort of recursive-ish/stack-based/queue-based algorithm, but I can't quite figure it out. Thanks!
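
    A hedged sketch of one row-by-row sweep in that spirit. It is an approximation, not exact shadowcasting: a tile counts as visible if it can be entered from an already-visible tile one row closer to the unit, or sideways within its own row. The open(x1,y1,x2,y2) callback is an assumption reporting whether the wall between two adjacent tiles is absent, and north is taken as decreasing y:

    #include <algorithm>
    #include <utility>
    #include <vector>

    std::vector<std::pair<int,int>> visionCone(int ux, int uy, int range,
                                               bool (*open)(int,int,int,int))
    {
        std::vector<std::pair<int,int>> seen;
        std::vector<bool> prev(2 * range + 1, false), cur(2 * range + 1, false);
        prev[range] = true; // the unit's own tile
        for (int dy = 1; dy <= range; ++dy)
        {
            std::fill(cur.begin(), cur.end(), false);
            // step straight ahead from visible tiles in the previous row
            for (int dx = -(dy - 1); dx <= dy - 1; ++dx)
            {
                int x = ux + dx, y = uy - dy;
                if (prev[dx + range] && open(x, y + 1, x, y))
                    cur[dx + range] = true;
            }
            // spread sideways within the row through open side walls
            for (int dx = -dy + 1; dx <= dy; ++dx)
                if (!cur[dx + range] && cur[dx - 1 + range] &&
                    open(ux + dx - 1, uy - dy, ux + dx, uy - dy))
                    cur[dx + range] = true;
            for (int dx = dy - 1; dx >= -dy; --dx)
                if (!cur[dx + range] && cur[dx + 1 + range] &&
                    open(ux + dx + 1, uy - dy, ux + dx, uy - dy))
                    cur[dx + range] = true;
            for (int dx = -dy; dx <= dy; ++dx)
                if (cur[dx + range])
                    seen.push_back(std::make_pair(ux + dx, uy - dy));
            prev = cur;
        }
        return seen;
    }

    The other three facings follow by swapping and negating the dx/dy offsets.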

    Read the article

  • What is a good practice for 2D scene graph partitioning for culling?

    - by DevilWithin
    I need to know an efficient way to cull the scene graph objects, so as to render exclusively the ones in the view, as fast as possible. I am thinking of doing it the following way: each object holds a local bounding box, which bounds the object alone, and a global bounding box, which bounds the object and all of its children. When a camera is moved, the render list is updated by traversing the global bounding boxes. When only one object is being moved, it tries to enlarge or shrink its ancestors' global bounding boxes, and in the end the render list is updated or not. What do you think of this approach? Do you think it will provide fast and efficient culling? Also, because the render list is a contiguous list, it could accelerate the rendering, right? Any further tips for 2D scene graphs are highly appreciated!

    Read the article

  • How do I use D3DXVec3Unproject with D3D11?

    - by Miguel P
    I'm having a small issue with D3DXVec3Unproject. I'm currently using Direct3D 11, not 10, and the signature for this function is:

    D3DXVECTOR3 *D3DXVec3Unproject(
        D3DXVECTOR3 *pOut,
        CONST D3DXVECTOR3 *pV,
        CONST D3D10_VIEWPORT *pViewport,
        CONST D3DXMATRIX *pProjection,
        CONST D3DXMATRIX *pView,
        CONST D3DXMATRIX *pWorld
    );

    As you may have noticed, it requires a D3D10_VIEWPORT, and I'm using a Direct3D 11 viewport, D3D11_VIEWPORT. Do you have any ideas how I can use D3DXVec3Unproject with Direct3D 11?
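
    A hedged sketch of one workaround: D3D10_VIEWPORT and D3D11_VIEWPORT describe the same six values (D3D10 stores the rectangle as integers, D3D11 as floats), so the live D3D11 viewport can be copied field by field just for this call. The context, screenPoint and matrix variables below are assumptions:

    D3D11_VIEWPORT vp11;
    UINT n = 1;
    context->RSGetViewports(&n, &vp11);   // fetch the live D3D11 viewport

    D3D10_VIEWPORT vp10;                  // same six fields, integer rectangle
    vp10.TopLeftX = (INT)vp11.TopLeftX;
    vp10.TopLeftY = (INT)vp11.TopLeftY;
    vp10.Width    = (UINT)vp11.Width;
    vp10.Height   = (UINT)vp11.Height;
    vp10.MinDepth = vp11.MinDepth;
    vp10.MaxDepth = vp11.MaxDepth;

    D3DXVECTOR3 out;
    D3DXVec3Unproject(&out, &screenPoint, &vp10, &Projection, &View, &World);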

    Read the article

  • Help with converting an XML into a 2D level (Actionscript 3.0)

    - by inzombiak
    I'm making a little platformer and wanted to use Ogmo to create my level. I've gotten everything to work except the level that my code generates, which is not the same as what I see in Ogmo. I've checked the array and it fits with the level in Ogmo, but when I loop through it with my code I get the wrong thing. I've included my code for creating the level as well as an image of what I get and what I'm supposed to get. EDIT: I tried to add it, but I couldn't get it to display properly. Also, if any of you know of better level editors, please let me know.

    xmlLoader.addEventListener(Event.COMPLETE, LoadXML);
    xmlLoader.load(new URLRequest("Level1.oel"));

    function LoadXML(e:Event):void
    {
        levelXML = new XML(e.target.data);
        xmlFilter = levelXML.*;
        for each (var levelTest:XML in levelXML.*)
        {
            crack = levelTest;
        }
        levelArray = crack.split('');
        trace(levelArray);
        count = 0;
        for(i = 0; i <= 23; i++)
        {
            for(j = 0; j <= 35; j++)
            {
                if(levelArray[i*36+j] == 1)
                {
                    block = new Platform;
                    s.addChild(block);
                    block.x = j*20;
                    block.y = i*20;
                    count++;
                    trace(i);
                    trace(block.x);
                    trace(j);
                    trace(block.y);
                }
            }
        }
        trace(count);
    }

    Read the article

  • Calculate vector direction

    - by Starkers
    Is the direction angle always measured from the positive x axis? Does a vector in the (+,+) quadrant always have a direction between 0 and 90 degrees, in (-,+) between 90 and 180, in (-,-) between 180 and 270, and in (+,-) between 270 and 360? Also, how should we calculate the direction using tan? Would that mean nested if statements to find out which quadrant we're in, and then applying the appropriate "workarounds"? E.g. if we were in the (-,+) quadrant (like in the diagram), would we find that the angle from the positive x axis is 90 + tan^-1(y/x), the 90 + only being used because we're in the (-,+) quadrant? That's just a quick solution and may be off; I just want to know if we use nested if statements to get the angle from the positive x axis. Finally, should we find the direction in degrees or radians?
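
    A hedged note: most standard libraries sidestep the nested-if bookkeeping entirely with atan2, which takes y and x separately and resolves the quadrant itself. They also work in radians, so it is usually simplest to keep radians internally and convert to degrees only for display. A minimal C++ sketch:

    #include <cmath>

    // Angle of (x, y) from the positive x axis, remapped to [0, 2*pi).
    float direction(float x, float y)
    {
        float a = std::atan2(y, x);        // returns (-pi, pi], all quadrants handled
        const float twoPi = 6.283185307f;
        if (a < 0.0f)
            a += twoPi;                    // optional remap to [0, 2*pi)
        return a;                          // multiply by 180/pi for degrees
    }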

    Read the article

  • Heightmap and Textures

    - by Robert
    I'm trying to find the "best way" to apply a texture to a heightmap with OpenGL 3.x. It's really hard to find something on Google because tutorials are old and they all use different methods; I'm really lost and I don't know what to use at all. Here is my code that generates the heightmap (it's basic):

    float[] vertexes = null;
    float[] textureCoords = null;
    for(int x = 0; x < this.m_size.width; x++)
    {
        for(int y = 0; y < this.m_size.height; y++)
        {
            vertexes ~= [x, 1.0f, y];
            textureCoords ~= [cast(float)x / 50, cast(float)y / 50];
        }
    }

    As you can see, I don't know how to apply the texture at all (I was using / 50 for my tests). Result of that code: I would like to have something very basic like: (you can find more pics in his blog) Edit: my texture size is 1024x1024.
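
    A hedged aside (the snippet above is D; this sketch is C-style, with width/height standing in for this.m_size): dividing by a fixed 50 tiles the texture every 50 cells. To stretch a single texture exactly once across the whole map, normalise the coordinates instead:

    float u = (float)x / (float)(width  - 1); // 0 at one edge, 1 at the other
    float v = (float)y / (float)(height - 1);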

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in said world. To increase performance, I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine. However, if I throw in another draw call to DrawUserIndexedPrimitives (to draw my one single moving object) after the call to DrawIndexedPrimitives, it errors out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. In order to get around this I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as that kind of defeats the point of a buffer? To shed some light: the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I manually create to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid, with the connecting faces cut out) and the number of matrix translations (as it would suck to do one per cube). The moving objects would be other items in the world which are dynamic and not fixed to the block grid, such as the NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?

    Read the article

  • Why are trees shining in background?

    - by Kinected
    Currently I am creating a forest scene in the dark, and the trees are shining when far away, but when I get close they are fine. I have the shaders set to "Nature/Tree Soft Occlusion [bark/leaves]", but the trees still render strangely at a distance, while up close they are fine. I tried placing the trees in a folder named "Ambient-Occlusion", as suggested here, but no luck. Fog is also turned off. Thanks in advance.

    Read the article

  • Working out of a vertex array for destructible objects

    - by bobobobo
    I have diamond-shaped polygonal bullets, and there are lots of them on the screen. I did not want to create a vertex array for each one, so I packed them into a single vertex array and they're all drawn at once:

    | bullet1.xyz | bullet1.rgb | bullet2.xyz | bullet2.rgb |

    This is great for performance. Each bullet keeps pointers into the shared buffer:

    struct Bullet
    {
        vector<Vector3f*> verts; // pointers into the vertex buffer
    };

    This works fine; the bullets can move and do collision detection, all while having their data in one place. Except when a bullet "dies": then you have to clear a slot and pack all the bullets towards the beginning of the array. Is this a good approach to handling lots of low-poly objects? How else would you do it?
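
    Since bullets presumably don't care about draw order, one hedged sketch avoids the full repacking with a swap-and-pop: copy the last live bullet's data into the dead slot and shrink the live count. Every name below is an assumption about the surrounding code, with each bullet occupying a fixed number of floats:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    void removeBullet(std::vector<float> &vb, std::size_t floatsPerBullet,
                      std::size_t dead, std::size_t &liveCount)
    {
        std::size_t last = liveCount - 1;
        if (dead != last)
            std::copy(vb.begin() + last * floatsPerBullet,
                      vb.begin() + (last + 1) * floatsPerBullet,
                      vb.begin() + dead * floatsPerBullet); // overwrite the dead slot
        --liveCount; // next frame, draw only liveCount bullets' worth of vertices
        // NB: the moved bullet's Bullet::verts pointers must be re-aimed at the
        // dead slot, since its data now lives there.
    }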

    Read the article

  • Circle physics and collision using vectors

    - by Joe Hearty
    This is a problem I've been having. When making a set number of filled circles at random locations on a JPanel and applying a gravity (a negative change in y), the circles collide. I want them to have collision detection and push in the opposite direction using vectors, but I don't know how to apply that to my scenario. Could someone help?

    public void drawballs(Graphics g)
    {
        g.setColor(Color.white);
        // displays circles
        for(int i = 0; i < xlocationofcircles.length-1; i++)
        {
            g.fillOval((int) xlocationofcircles[i], (int) (ylocationofcircles[i]), 16, 16);
            ylocationofcircles[i] += .2; // gravity
            if(ylocationofcircles[i] > 550) // stops gravity at bottom of screen
                ylocationofcircles[i] -= .2;
            // Check distance between circles (I think..)
            float distance = (xlocationofcircles[i+1]-xlocationofcircles[i]) +
                             (ylocationofcircles[i+1]-xlocationofcircles[i]);
            if( Math.sqrt(distance) < 16)
    ...
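
    A hedged sketch of the standard vector response for two equal-radius circles. Note that the snippet above adds the coordinate differences without squaring them, and mixes a y term with an x term; the distance needs both components squared. Plain C-style for illustration:

    #include <cmath>

    // Push two overlapping circles apart along the line joining their centres,
    // half the overlap each. Positions are centre coordinates.
    void separate(float &x1, float &y1, float &x2, float &y2, float radius)
    {
        float dx = x2 - x1, dy = y2 - y1;
        float dist = std::sqrt(dx * dx + dy * dy); // square BOTH terms
        float minDist = 2.0f * radius;             // touching distance
        if (dist < minDist && dist > 0.0f)
        {
            float nx = dx / dist, ny = dy / dist;  // unit vector between centres
            float push = 0.5f * (minDist - dist);  // half the overlap for each circle
            x1 -= nx * push;  y1 -= ny * push;
            x2 += nx * push;  y2 += ny * push;
        }
    }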

    Read the article
