Search Results

Search found 31839 results on 1274 pages for 'plugin development'.


  • Does SFML render graphics outside the window?

    - by ThePlan
    While working on a tile-based map, I figured it would be a good idea to render only what the player can see in the game window, but then it occurred to me that SFML might already be optimized enough to know when it doesn't have to render those things. Let's say I draw a 30x30 tile map (a medium one), but the player only sees a portion of it. Would SFML automatically hide what the player doesn't see, or should I hide it myself?
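
    A minimal sketch of culling tiles against the current view manually, assuming square tiles of a fixed size and an sf::View that follows the player (the drawTile callback and map dimensions are placeholders):

        #include <SFML/Graphics.hpp>
        #include <algorithm>
        #include <functional>

        // Draw only the tiles that overlap the view rectangle.
        void drawVisibleTiles(sf::RenderWindow& window, int mapWidth, int mapHeight,
                              float tileSize, const std::function<void(int, int)>& drawTile)
        {
            const sf::View& view = window.getView();
            sf::Vector2f center = view.getCenter();
            sf::Vector2f size = view.getSize();

            int firstX = std::max(0, static_cast<int>((center.x - size.x / 2.f) / tileSize));
            int firstY = std::max(0, static_cast<int>((center.y - size.y / 2.f) / tileSize));
            int lastX  = std::min(mapWidth  - 1, static_cast<int>((center.x + size.x / 2.f) / tileSize));
            int lastY  = std::min(mapHeight - 1, static_cast<int>((center.y + size.y / 2.f) / tileSize));

            for (int y = firstY; y <= lastY; ++y)
                for (int x = firstX; x <= lastX; ++x)
                    drawTile(x, y);   // draw (or mark visible) just this tile
        }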

  • How to deal with Character body parts from Design to Cocos2d

    - by Edwin Soho
    I'm trying to figure out the workflow game developers use together with game designers. See the picture below with the independent parts. Questions: 1) Should I create separate images for the different body parts, or keep frame-by-frame animation? (I know both can be used, but I'm trying to figure out what is commonly used in the industry.) 2) If I'm going to create separate images for the different body parts (which I think is more logical), how would I export them to Cocos2d (vector or bitmap)?

  • PHP city-sim castle layout

    - by Gert
    I am currently contemplating the layout system for my PHP-based game, but I've run into a couple of worries. My idea is a 9x9 grid where the center 3x3 tiles are the inner castle. The inner castle will be 6x6 if you enter it (click on it), with the option to expand the inner castle by converting one of the 9x9 tiles into a 4x4 inner-castle tile. So here is my question: what is the best way to tackle this type of layout? My original idea was an 18x18 grid, saved in the DB as (idCastle, Y, X) where X is a string 18 characters long telling me whether the tile is an inner/outer tile or an inner/outer building. But I am not really fond of this idea and would like to hear some other ideas on how to tackle this. Thanks in advance, Gert

  • What would be the best means for a GUI with a lot of FX in Unity?

    - by Lionel Barret
    The game I am working on (we are in R&D) is based almost exclusively on a windowed GUI with a lot of FX (fading, growing, etc.). We will also likely need custom widgets (like a sound-recording graph). The game will be made with Unity, and from what I've heard, the default GUI system has quite a bad reputation: it is too slow for many usages. So I am wondering what would be the best way to do what we need.

  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:
    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.
    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg Is there an equally simple way to determine if those squares are touching?
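
    For reference, the axis-aligned test described above, as a short sketch in C++ (the Box struct and the y-down convention are assumptions):

        struct Box { float left, right, top, bottom; };   // hypothetical axis-aligned box, y grows downward

        // True when the two boxes overlap; mirrors the four "NOT true" conditions above.
        bool overlaps(const Box& a, const Box& b)
        {
            if (a.left > b.right)  return false;   // a entirely to the right of b
            if (a.right < b.left)  return false;   // a entirely to the left of b
            if (a.bottom < b.top)  return false;   // a entirely above b
            if (a.top > b.bottom)  return false;   // a entirely below b
            return true;
        }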

  • scaling point sprites with distance

    - by Will
    How can you scale a point sprite by its distance from the camera? In the GLSL vertex shader,

        gl_PointSize = size / gl_Position.w;

    seems to be on the right track; for any given scene all sprites seem nicely scaled by distance. Is this correct? And how do you compute the proper scaling for my vertex attribute size? I want each sprite to be scaled by the modelview matrix. I have played with arbitrary values, and it seems that size is the radius in pixels at the camera and is not in modelview scale. I've also tried:

        gl_Position = pMatrix * mvMatrix * vec4(vertex, 1.0);
        vec4 v2 = pMatrix * mvMatrix * vec4(vertex.x, vertex.y + 0.5*size, vertex.z, 1.0);
        gl_PointSize = length(gl_Position.xyz - v2.xyz) * gl_Position.w;

    But this makes the sprites bigger in the distance, rather than smaller.
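
    For reference, one common way to turn a world-space radius into a pixel size for point sprites, sketched as host-side C++ (the fovY and viewport height are assumptions about the projection being used):

        #include <cmath>

        // Pixels covered by one world unit at a view-space distance of 1,
        // for a symmetric perspective projection with vertical field of view fovY.
        float pointScale(float viewportHeightPixels, float fovYRadians)
        {
            return viewportHeightPixels / (2.0f * std::tan(0.5f * fovYRadians));
        }

        // Passed in as a uniform, the vertex shader could then do something along the lines of
        //   gl_PointSize = worldRadius * uPointScale / gl_Position.w;
        // which shrinks sprites with distance instead of growing them.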

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D quad to the mouse cursor position. I'm able to: 1) get values into posX, posY, posZ, and 2) translate with the values from those 3 variables. But I'm not able to position the quad correctly, in such a way that the 2D quad is near the mouse cursor, using the values from those 3 variables, e.g. "posX, posY, posZ". I need the mouse cursor in the center of the 2D quad. I'm hoping someone can help me achieve this. I've tried searching around to no avail. Here is the function that is meant to do the snapping, but instead it creates weird flicker or shows nothing at all; only the 3D models show up:

        void display()
        {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            for (std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I)
            {
                glCallList(*I);
            }

            if (DrawArea == true)
            {
                glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
                cerr << winZ << endl;

                glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
                glGetDoublev(GL_PROJECTION_MATRIX, projection);
                glGetIntegerv(GL_VIEWPORT, viewport);
                gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h,
                             0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels);

                glEnable(GL_TEXTURE_2D);
                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);

                glTranslatef(posX, posY, posZ);

                glBegin(GL_QUADS);
                    glTexCoord2f(0.0, 0.0); glVertex3f(0.5, 0.5, 0);
                    glTexCoord2f(1.0, 0.0); glVertex3f(0, 0.5, 0);
                    glTexCoord2f(1.0, 1.0); glVertex3f(0, 0, 0);
                    glTexCoord2f(0.0, 1.0); glVertex3f(0.5, 0, 0);
                glEnd();
            }

            SwapBuffers(hDC);
        }

    I'm using OpenGL 3.0, the WIN32 API, C++ and GLSL. If you really want the full source, here it is - http://pastebin.com/1Ncm9HNf - it's pretty messy.

  • How important is a single-player mode in a 2-player game?

    - by Davy8
    So say you have a 2-player game, taking chess as an example (except it's an original game with no ready-to-go AI available). Let's say there's also a social aspect to the meta-game: say it's a chess game on Facebook where you can challenge your friends. How important is it to have a single-player mode, knowing that an AI will need to be created (I've done minimax AI for tic-tac-toe, but nothing too sophisticated)? Is it important enough that it should be in the initial launch of the game? Can it wait for a future iteration (knowing that being hosted on the web means the game can be updated at any time)?

  • Behavior Trees and Animations

    - by Tom
    I have started working on the AI for a game, but am confused about how I should handle animations. I will be using a behavior tree for AI behavior and Cocos2D for my game engine. Should my "PlayAnimationWalk" just be another node in the tree? Something similar to this:

        [Approach Player]
            - Play Walk animation
            - Move Towards player
            - Stop Walk animation

    Or should the node just update an AnimationState in the blackboard, and have some type of animation handler/component reference this to decide which animation should be playing? This has been driving me nuts :)
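
    For what it's worth, a rough sketch of the second option described above (a leaf node that only writes an animation request to a shared blackboard; every name here is hypothetical):

        #include <string>

        // Shared blackboard: the AI writes requests, an animation handler reads them.
        struct Blackboard {
            std::string requestedAnimation;
        };

        enum class Status { Success, Failure, Running };

        // Behavior tree leaf: request a clip and succeed immediately.
        // A separate animation component watches the blackboard and starts/stops the actual clip.
        struct PlayAnimationNode {
            std::string clipName;
            Status tick(Blackboard& bb) const {
                bb.requestedAnimation = clipName;
                return Status::Success;
            }
        };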

  • Splitting a texture atlas into separate images

    - by bigtunacan
    I'm doing a port of an existing game and the designer no longer has all of the original art; he only has the resulting texture atlases he used when developing for iPad. The tool I'm using won't support these files so I need to break them back out into separate PNG files. I'm hoping someone knows of a software tool that does this. PC software would be preferred in this case, but Mac would suffice.

  • Techniques for separating game model from presentation

    - by liortal
    I am creating a simple 2D game using XNA. The elements that make up the game world are what I refer to as the "model". For instance, in a board game, I would have a GameBoard class that stores information about the board. This information could be things such as:
    - Location
    - Size
    - Details about cells on the board (occupied/not occupied)
    - etc.
    This object should either know how to draw itself, or describe how to draw itself to some other entity (renderer) in order to be displayed. I believe that since the board only contains the data+logic for things regarding it or cells on it, it should not provide the logic of how to draw things (separation of concerns). How can I achieve a good partitioning and easily allow some other entity to draw it properly? My motivations for doing so are:
    - Allowing multiple "implementations" of presentation for a single game entity
    - Easier porting to other environments where the presentation code is not available (for example, porting my code to Unity or other game technology that does not rely on XNA)
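
    A minimal sketch of that kind of partitioning, shown in C++ for brevity (the shape is the same in XNA/C#; all names are made up):

        #include <vector>

        // Pure game state: no drawing code, no engine types.
        struct GameBoard {
            int width = 0, height = 0;
            std::vector<bool> occupied;   // one flag per cell
            bool isOccupied(int x, int y) const { return occupied[y * width + x]; }
        };

        // Presentation lives behind an interface; each environment supplies its own implementation.
        struct IBoardRenderer {
            virtual ~IBoardRenderer() = default;
            virtual void draw(const GameBoard& board) = 0;   // reads the model, never mutates it
        };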

  • Java Applet Tower Defence Game needs tweaking

    - by Ephiras
    Hello :) I have made a tower defence game for my computer science class as one of my major projects, but have encountered some rather fatal roadblocks. Here they are:
    - Creating a menu screen (class Menu) that can set the total number of enemies, the max number of towers, the starting money and the map. I tried creating a constructor in my Main class that sets all the values to whatever the Menu class passes in. I want the menu screen to close after a difficulty has been selected and the main class to begin.
    - Another problem I would really like some help with: instead of having to write entire arrays, I would like to create a small segment of code that runs through an entire picture and sets up an array based on each pixel's colour (see the sketch after this question). This way I can have multiple levels just dragged into a level folder and have the program read through them; users could even create their own. So a 1 if the pixel is yellow, a 2 if blue and a 3 if purple, then everything else = 0.
    You can download all the classes and code if you'd like here. Sorry about having to redirect you, but I wasn't sure how to efficiently add a code spoiler. Help is greatly appreciated.
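
    A rough sketch of the pixel-to-tile idea, written in C++ with the stb_image loader for illustration (in Java the same loop would read colours via BufferedImage.getRGB); the colour thresholds and names are placeholders:

        #define STB_IMAGE_IMPLEMENTATION
        #include "stb_image.h"   // assumed available; any image loader will do
        #include <vector>

        // Map each pixel of a level image to a tile id: yellow -> 1, blue -> 2, purple -> 3, else 0.
        std::vector<int> loadLevel(const char* path, int& width, int& height)
        {
            width = height = 0;
            int channels = 0;
            unsigned char* pixels = stbi_load(path, &width, &height, &channels, 3);  // force RGB
            if (!pixels) return {};

            std::vector<int> tiles(width * height, 0);
            for (int i = 0; i < width * height; ++i) {
                unsigned char r = pixels[i * 3 + 0], g = pixels[i * 3 + 1], b = pixels[i * 3 + 2];
                if (r > 200 && g > 200 && b < 60)        tiles[i] = 1;  // yellow
                else if (r < 60 && g < 60 && b > 200)    tiles[i] = 2;  // blue
                else if (r > 100 && g < 60 && b > 100)   tiles[i] = 3;  // purple
            }
            stbi_image_free(pixels);
            return tiles;
        }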

  • UDK/UnrealScript class interaction? HUD advice?

    - by Holly
    Beginner basics requested here. While I'm familiar with the basics of OOP programming, I've just started looking at UnrealScript for a game I had made in the UDK editor up to now. I have a class that extends UTHUD and another that extends UDKPawn. I have the pawn destroyed when it has been shot 3 times, and some basic hello-world text displaying in my HUD, but I'm completely lost as to how one would get some sort of feedback between the two classes going. What I would like to do to start off is have some text that says something like "Amount of baddies killed: 0" displayed on the HUD, which would then increment each time the player destroyed one of my pawns. I'm sorry if this is an inappropriate question, but I've never really worked within a framework like this before and wasn't sure where to go for help to get my footing. All advice appreciated!

  • 2D SAT Collision Detection not working when using certain polygons (With example)

    - by sFuller
    My SAT algorithm falsely reports that collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong: Here is the problematic code:

        std::vector<vec2> axesB = polygonB->GetAxes();
        // loop over axes B
        for (int i = 0; i < axesB.size(); i++)
        {
            float minA, minB, maxA, maxB;
            polygonA->Project(axesB[i], &minA, &maxA);
            polygonB->Project(axesB[i], &minB, &maxB);
            float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB);
            if (intervalDistance >= 0) return false; // Collision not occurring
        }

    This function retrieves axes from the polygon:

        std::vector<vec2> Polygon::GetAxes()
        {
            std::vector<vec2> axes;
            for (int i = 0; i < verts.size(); i++)
            {
                vec2 a = verts[i];
                vec2 b = verts[(i + 1) % verts.size()];
                vec2 edge = b - a;
                axes.push_back(vec2(-edge.y, edge.x).GetNormailzed());
            }
            return axes;
        }

    This function returns the normalized vector:

        vec2 vec2::GetNormailzed()
        {
            float mag = sqrt(x*x + y*y);
            return *this / mag;
        }

    This function projects a polygon onto an axis:

        void Polygon::Project(vec2* axis, float* min, float* max)
        {
            float d = axis->DotProduct(&verts[0]);
            float _min = d;
            float _max = d;
            for (int i = 1; i < verts.size(); i++)
            {
                d = axis->DotProduct(&verts[i]);
                _min = std::min(_min, d);
                _max = std::max(_max, d);
            }
            *min = _min;
            *max = _max;
        }

    This function returns the dot product of the vector with another vector:

        float vec2::DotProduct(vec2* other)
        {
            return (x*other->x + y*other->y);
        }

    Could anyone give me a pointer in the right direction to what could be causing this bug?

    Edit: I forgot this function, which gives me the interval distance:

        float Polygon::GetIntervalDistance(float minA, float maxA, float minB, float maxB)
        {
            float intervalDistance;
            if (minA < minB)
            {
                intervalDistance = minB - maxA;
            }
            else
            {
                intervalDistance = minA - maxB;
            }
            return intervalDistance; // A positive value indicates this axis can be separated.
        }

    Edit 2: I have recreated the problem in HTML5/Javascript: Demo

  • Calculate the intersection depth between a rectangle and a right triangle

    - by Celarix
    Hello, all. I'm working on a 2D platformer built in C#/XNA, and I'm having a lot of problems calculating the intersection depth between a standard rectangle (used for sprites) and a right triangle (used for sloping tiles). Ideally, the rectangle should collide with the solid edges of the triangle, and its bottom-center point should collide with the sloped edge. I've been fighting with this for a couple of days now, and I can't make sense of it. So far, the method detects intersections (somewhat), but it reports wildly wrong depths. How does one properly calculate the depth? Is there something I'm missing? Thanks!
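
    For the sloped edge, one common sketch is to compare the sprite's bottom-center point against the slope surface; shown here in C++ with a y-down convention and placeholder names (the XNA version is the same arithmetic):

        #include <algorithm>

        // Vertical penetration of the sprite's bottom-center point into a slope
        // running from (x0, y0) to (x1, y1); returns 0 when the point is above the surface.
        float slopeDepth(float bottomCenterX, float bottomY,
                         float x0, float y0, float x1, float y1)
        {
            float t = std::clamp((bottomCenterX - x0) / (x1 - x0), 0.0f, 1.0f);
            float surfaceY = y0 + t * (y1 - y0);   // slope height under the sprite
            return std::max(bottomY - surfaceY, 0.0f);
        }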

  • Maya Animated Character export for XNA 4.0 problem

    - by FahidK
    To begin with, I'm trying to export an animated character in .fbx format from Maya 2013 to XNA 4.0. In Maya, the model has a basic rig and the animations are in clips made in the Trax editor. The issue I'm having: after selecting the model and the root joint and then hitting export in .fbx format, for some reason when I open the exported .fbx file the joint system is detached from the model, with no animation. By the way, I have the animations in clips so that they can be called in code, for example "run", "walk", "attack". So, what can I do to solve this problem? Thank you.

  • Optimal sprite size for rotations

    - by Panda Pajama
    I am making a sprite-based game, and I have a bunch of images that I get in a ridiculously large resolution. I scale them to the desired sprite size (for example 64x64 pixels) before converting them to a game resource, so when I draw my sprite inside the game, I don't have to scale it. However, if I rotate this small sprite inside the game (engine-agnostically), some destination pixels will get interpolated, and the sprite will look smudged. This is of course dependent on the rotation angle as well as the interpolation algorithm, but regardless, there is not enough data to correctly sample a specific destination pixel. So there are two solutions I can think of. The first is to use the original huge image, rotate it to the desired angles, and then downscale all the resulting variations and put them in an atlas. This has the advantage of being quite simple to implement, but naively consumes twice as much sprite space for each rotation (each rotation must be inscribed in a circle whose diameter is the diagonal of the original sprite's rectangle, and the square bounding that circle has twice the area of the original rectangle, supposing square sprites). It also has the disadvantage of only having a predefined set of rotations available, which may or may not be okay depending on the game. So the other choice would be to store a larger image, and rotate and downscale while rendering, which leads to my question. What is the optimal size for this sprite? Optimal meaning that a larger image will have no effect on the resulting image. This definitely depends on the sprite size and the number of desired rotations, without data loss down to 1/256, which is the minimum representable color difference. I am looking for a theoretical general answer to this problem, because trying a bunch of sizes may be okay, but is far from optimal.
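
    (As a quick check on the area claim above: a square sprite of side s rotated by an arbitrary angle fits in a circle of diameter s√2, so its axis-aligned bounding square has side s√2 and area (s√2)² = 2s², i.e. twice the original.)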

  • Wrong faces culled in OpenGL when drawing a rectangular prism

    - by BadSniper
    I'm trying to learn OpenGL. I wrote some code to build a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But I keep getting back faces when viewing from the front, and sometimes sides vanish when rotating. Can someone point me in the right direction?

        glPolygonMode(GL_FRONT, GL_LINE); // draw wireframe polygons
        glColor3f(0, 1, 0);               // set color green
        glCullFace(GL_BACK);              // don't draw back faces
        glEnable(GL_CULL_FACE);           // don't draw back faces
        glTranslatef(-10, 1, 0);          // position
        glBegin(GL_QUADS);
            // face 1
            glVertex3f(0, -1, 0);
            glVertex3f(0, -1, 2);
            glVertex3f(2, -1, 2);
            glVertex3f(2, -1, 0);
            // face 2
            glVertex3f(2, -1, 2);
            glVertex3f(2, -1, 0);
            glVertex3f(2, 5, 0);
            glVertex3f(2, 5, 2);
            // face 3
            glVertex3f(0, 5, 0);
            glVertex3f(0, 5, 2);
            glVertex3f(2, 5, 2);
            glVertex3f(2, 5, 0);
            // face 4
            glVertex3f(0, -1, 2);
            glVertex3f(2, -1, 2);
            glVertex3f(2, 5, 2);
            glVertex3f(0, 5, 2);
            // face 5
            glVertex3f(0, -1, 2);
            glVertex3f(0, -1, 0);
            glVertex3f(0, 5, 0);
            glVertex3f(0, 5, 2);
            // face 6
            glVertex3f(0, -1, 0);
            glVertex3f(2, -1, 0);
            glVertex3f(2, 5, 0);
            glVertex3f(0, 5, 0);
        glEnd();

  • How to make it so units don't stack up in one location?

    - by Daggio
    So I'm making a game in AS3; it's a strategy DotA-like game (for a Flash equivalent, there's UDE). So far so good: I have the A* pathfinding algorithm all sorted out, and the minion units move to the desired location as I want them to. The problem arises when a unit stops in a node that is already occupied by another friendly unit. Both (or more than two) of them stack up in one location, so it looks like they're one unit. I want to add collision detection so that when they collide they don't stack up together. But now they stop when they collide on the way to a node. This isn't good, because they won't move at all midway (they won't respond to enemy attacks like that). I've added a delta time so they only stop for 2 seconds before they move again to their designated destination. This moves them again, but they flicker. Not how I want it. So, like the title says: how do I make more than one unit not stack up in a node? And if possible, how do I make them not flicker while moving? (It would be good if they could detect other friendly units on the way and avoid them accordingly.)
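
    One common technique for this is a per-frame separation pass that pushes overlapping friendly units apart instead of stopping them; a rough sketch (in C++ here, but the idea ports straight to AS3; names are placeholders):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Unit { float x, y, radius; };

        // Push each overlapping pair apart by half the overlap, so units spread out
        // around a crowded node instead of stacking or freezing in place.
        void separate(std::vector<Unit>& units)
        {
            for (std::size_t i = 0; i < units.size(); ++i) {
                for (std::size_t j = i + 1; j < units.size(); ++j) {
                    float dx = units[j].x - units[i].x;
                    float dy = units[j].y - units[i].y;
                    float dist = std::sqrt(dx * dx + dy * dy);
                    float minDist = units[i].radius + units[j].radius;
                    if (dist > 0.0f && dist < minDist) {
                        float push = 0.5f * (minDist - dist) / dist;
                        units[i].x -= dx * push;  units[i].y -= dy * push;
                        units[j].x += dx * push;  units[j].y += dy * push;
                    }
                }
            }
        }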

  • How to change LOD in geometry?

    - by ChaosDev
    I'm looking for a simple LOD algorithm to change geometry vertices and decrease frame time. I've created an octree, but now I want a vertex-modification algorithm for models or terrain, not for increasing detail (I'll look at tessellation later) but for decreasing it. I want something like this. Questions:
    - Can the same algorithm be applied to both models and terrain correctly?
    - Do the indices need to be modified?
    - Should I use the octree, or a simple distance check between the camera and the object, for the desired effect?
    - Is a new value of indexCount needed for the DrawIndexed function?
    Code:

        // m_LOD == 10 in the beginning
        // m_RawVerts - array of 3D vectors filled with values from the vertex buffer.
        void DecreaseLOD()
        {
            m_LOD--;
            if (m_LOD < 1) m_LOD = 1;
            RebuildGeometry();
        }

        void IncreaseLOD()
        {
            m_LOD++;
            if (m_LOD > 10) m_LOD = 10;
            RebuildGeometry();
        }

        void RebuildGeometry()
        {
            void* vertexRawData = new byte[m_VertexBufferSize];
            void* indexRawData = new DWORD[m_IndexCount];
            auto context = mp_D3D->mp_Context;

            D3D11_MAPPED_SUBRESOURCE data;
            ZeroMemory(&data, sizeof(D3D11_MAPPED_SUBRESOURCE));

            context->Map(mp_VertexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(vertexRawData, data.pData, m_VertexBufferSize);
            context->Unmap(mp_VertexBuffer->mp_buffer, 0);

            context->Map(mp_IndexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(indexRawData, data.pData, m_IndexBufferSize);
            context->Unmap(mp_IndexBuffer->mp_buffer, 0);

            DWORD* dwI = (DWORD*)indexRawData;
            int sz = (m_VertexStride / sizeof(float)); // size of vertex element

            // algorithm must be here.
            std::vector<Vector3d> vertices;
            int i = 0;
            for (int j = 0; j < m_VertexCount; j++)
            {
                float x1 = (((float*)vertexRawData)[0 + i]);
                float y1 = (((float*)vertexRawData)[1 + i]);
                float z1 = (((float*)vertexRawData)[2 + i]);
                Vector3d lv = Vector3d(x1, y1, z1);

                // my useless attempts
                if (j + m_LOD + 1 < m_RawVerts.size())
                {
                    float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD]]);
                    float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD + 1]]);
                    if (v1 > v2)
                        lv = m_RawVerts[dwI[j + 1]];
                    else if (v2 < v1)
                        lv = m_RawVerts[dwI[j + 2]];
                }

                (((float*)vertexRawData)[0 + i]) = lv.x;
                (((float*)vertexRawData)[1 + i]) = lv.y;
                (((float*)vertexRawData)[2 + i]) = lv.z;

                i += sz; // pass others vertex format values without change
            }

            for (int j = 0; j < m_IndexCount; j++)
            {
                // indices ?
            }

            // set vertexes to device
            UpdateVertexes(vertexRawData, mp_VertexBuffer->getSize());
            delete[] vertexRawData;
            delete[] indexRawData;
        }

  • std::vector::size with glDrawElements crashes?

    - by NoobScratcher
    (Win32 / OpenGL 3.3 / GLSL 330) After a long time of trying to build a graphical user interface using just OpenGL graphics, I decided to go back to a GUI toolkit, and so in the process I have had to port a lot of my code to Win32. But I have a problem with my glDrawElements call. My program compiles and runs fine until it gets to glDrawElements, then crashes, which is rather annoying. So I was trying to figure out why, and I found that the std::vector::size member is not returning the correct number of faces in my unsigned integer vector (vector<unsigned int> faces;). When I use cout << faces.size() << endl; I get 68 elements instead of the 24 you can see in this .obj file:

        # Blender v2.61 (sub 0) OBJ File: ''
        # www.blender.org
        v 1.000000 -1.000000 -1.000000
        v 1.000000 -1.000000 1.000000
        v -1.000000 -1.000000 1.000000
        v -1.000000 -1.000000 -1.000000
        v 1.000000 1.000000 -0.999999
        v 0.999999 1.000000 1.000001
        v -1.000000 1.000000 1.000000
        v -1.000000 1.000000 -1.000000
        s off
        f 1 2 3 4
        f 5 8 7 6
        f 1 5 6 2
        f 2 6 7 3   <--- 24 Faces not 68?
        f 3 7 8 4
        f 5 1 4 8

    I'm using a parser I created to get the faces/vertexes from my .obj file:

        char modelbuffer[20000];
        int MAX_BUFF = 20000;
        unsigned int face[3];
        FILE* pfile;
        pfile = fopen(szFileName, "rw");
        while (fgets(modelbuffer, MAX_BUFF, pfile) != NULL)
        {
            if ('v')
            {
                Point p;
                sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                points.push_back(p);
                cout << " p.x = " << p.x << " p.y = " << p.y << " p.z = " << p.x << endl;
            }
            if ('f')
            {
                sscanf(modelbuffer, "f %d %d %d %d", &face[0], &face[1], &face[2], &face[3]);
                cout << "face[0] = " << face[0] << " face[1] = " << face[1]
                     << " face[2] = " << face[2] << " face[3] = " << face[3] << "\n";
                faces.push_back(face[0] - 1);
                faces.push_back(face[1] - 1);
                faces.push_back(face[2] - 1);
                faces.push_back(face[3] - 1);
                cout << face[0] - 1 << face[1] - 1 << face[2] - 1 << face[3] - 1 << endl;
            }
        }

    I'm using this struct to store the x, y, z positions, and this vector was used with Point:

        vector<Point> points;
        struct Point
        {
            float x, y, z;
        };

    If someone could tell me why it's not working and how to fix it, that would be awesome. I also provide a pastebin with the full source code if you want a closer look: http://pastebin.com/gznYLVw7 - it's pretty messy.

  • Permanently Sync a wiimote with a computer

    - by Adam Geisweit
    I have tried to look up many ways to sync my Wiimotes to my computer so that I can program games with them, but every time it only syncs them temporarily, or if it says it can permanently sync them, it doesn't actually do it. It gets tiresome when I have to keep reconnecting every time I want to save battery life. How would I be able to sync my Wiimote to my computer so that if I turn off the Wiimote, I can just hit any button and it will automatically sync up again?

  • Rotation and translation like in GTA 1 OpenGL

    - by user1876377
    Okay, so I have a figure in the XZ plane. I want to move it forward/backward and rotate it around its own Y axis, then move forward again in the direction of the rotation, like the character in GTA 1. Code so far. Init:

        spaceship_position = glm::vec3(0, 0, 0);
        spaceship_rotation = glm::vec3(0, 0, 0);
        spaceship_scale = glm::vec3(1, 1, 1);

    Draw:

        glm::mat4 transform =
            glm::scale<float>(spaceship_scale) *
            glm::rotate<float>(spaceship_rotation.x, 1, 0, 0) *
            glm::rotate<float>(spaceship_rotation.y, 0, 1, 0) *
            glm::rotate<float>(spaceship_rotation.z, 0, 0, 1) *
            glm::translate<float>(spaceship_position);
        drawMesh(spaceship, texture, transform);

    Update:

        switch (key.keysym.sym)
        {
            case SDLK_UP:    spaceship_position.z += 0.1; break;
            case SDLK_DOWN:  spaceship_position.z -= 0.1; break;
            case SDLK_LEFT:  spaceship_rotation.y += 1;   break;
            case SDLK_RIGHT: spaceship_rotation.y -= 1;   break;
        }

    So this only moves on the Z axis, but how can I move the object on both the Z and X axes, in the direction the object is facing?
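
    As a point of reference, a minimal sketch of stepping a position along its current yaw (assuming the rotation above is stored in degrees; the axis signs depend on which way the model faces):

        #include <cmath>
        #include <glm/glm.hpp>

        // Move one step in the direction the object is facing on the XZ plane.
        glm::vec3 stepForward(const glm::vec3& position, float yawDegrees, float speed)
        {
            float yaw = yawDegrees * 3.1415926535f / 180.0f;
            return position + glm::vec3(std::sin(yaw), 0.0f, std::cos(yaw)) * speed;
        }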

  • Texture2D.GetData fails to return pixel colour data

    - by Chris Charabaruk
    Because I'm using sprite sheets instead of an individual texture per sprite, I need to pass in a Rectangle when calling Texture2D.GetData() in my collision detection for per-pixel tests. Unfortunately, without fail I get an ArgumentException percolated down from an internal method inside the Texture (not Texture2D) class. My code for getting the texture data looks like this:

        public override Color[] GetPixelData()
        {
            Color[] data = new Color[(int)size.Product()];
            Rectangle rect = new Rectangle(hframe * (int)size.X, vframe * (int)size.Y,
                                           (int)size.X, (int)size.Y);
        #if DEBUG
            if (sprite.Bounds.Contains(rect) && sprite.Format == SurfaceFormat.Color)
        #endif
                sprite.GetData(0, rect, data, 0, 1);
            return data;
        }

    Even with the check to ensure I'm grabbing a valid rectangle and that the texture format matches what I'm trying to get, I still get that exception, claiming "The size of the data passed in is too large or too small for this resource." Unfortunately, the debugger won't let me check the locals within the Texture.ValidateTotalSize() method where the exception originates. Has anyone else had this problem and knows how to fix it? I'm relying on AABB testing only for now, but that doesn't really work for some of my game's entities due to odd shapes, rotation and scaling.

  • Applying prerecorded animations to models with the same skeleton

    - by Jeremias Pflaumbaum
    Well, my question sounds a bit like "how do I apply mo-cap animations to my model", but that's not really it, I guess. The animations and the models share the same skeleton, but the models vary in size and proportion, and I still want to be able to apply any animation to any model. I think this should be possible since the models have the same skeleton bone structure and the bones are always in the same area; only their position varies from model to model. In particular, I'm trying to apply this to 2D characters that have 2 arms, 2 legs, a head and a body, but if you have anything related to this topic, even if it's 3D-related - keywords, articles, books, whatever - I'm grateful for everything, because I'm a bit stuck at the moment. Cheers, Jery
