Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so this means they are not just a billboard. Also, they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?
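
    A common way to get curving, continuous tails like these is a ribbon trail: keep a short history of each bullet's recent positions and stitch consecutive pairs into a camera-facing triangle strip, tapering the width toward the tail. A minimal C++ sketch of the bookkeeping (names are illustrative, not from Radiangames Inferno):

    #include <deque>
    #include <cstddef>

    struct Vec3 { float x, y, z; };

    // Ring buffer of recent bullet positions; consecutive pairs become one
    // quad of a camera-facing triangle strip (the "ribbon"). Because the
    // strip follows the stored path, it curves wherever the bullet curved.
    class RibbonTrail {
    public:
        explicit RibbonTrail(std::size_t maxPoints) : mMax(maxPoints) {}

        void record(const Vec3& bulletPos) {
            mHistory.push_front(bulletPos);
            if (mHistory.size() > mMax)
                mHistory.pop_back();   // oldest point falls off the tail
        }

        // Width tapers from head to tail so the ribbon fades out smoothly.
        float widthAt(std::size_t i) const {
            return mBaseWidth * (1.0f - float(i) / float(mMax));
        }

    private:
        std::deque<Vec3> mHistory;
        std::size_t mMax;
        float mBaseWidth = 0.5f;
    };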

    Read the article

  • Is there an easy and automatic way of converting a Windows XNA project into a MonoTouch MonoGame project?

    - by Krumelur
    I have just started with XNA development on Windows. But as I'm a fan of iOS, I had to try porting my test code over to MonoTouch on the Mac. I used these instructions: http://www.facepuncher.com/blogs/10parameters/?p=42 But this is so much (manual) work! And it really doesn't answer open questions like: why would I copy all the XNB files and, in addition, all the resources, like PNGs? Is there maybe a tool that automatically converts a Windows XNA project into a MonoTouch iOS project, or at least creates the correct folder structure?

    Read the article

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake 3 is handled using a key-binding relay, whereby each keypress is matched against a 'binding', which is then passed to the CLI along with a time stamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing your input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command. This seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed at a later date is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!
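
    For readers unfamiliar with the approach, here is a minimal C++ sketch of such a relay: keys map to command strings, '+' commands get a matching '-' command on release, and events are queued as text with their timestamp for later parsing. Names are illustrative, not Quake 3's actual code:

    #include <map>
    #include <queue>
    #include <string>
    #include <cstdint>
    #include <utility>

    class InputRelay {
    public:
        void bind(int key, std::string command) { mBindings[key] = std::move(command); }

        void onKeyEvent(int key, bool pressed, std::uint64_t timeMs) {
            auto it = mBindings.find(key);
            if (it == mBindings.end()) return;
            std::string cmd = it->second;
            if (!pressed && !cmd.empty() && cmd[0] == '+')
                cmd[0] = '-';   // "+fire" on press becomes "-fire" on release
            // The timestamp travels with the command so movement code can
            // credit the keypress from the exact moment it happened, which
            // also makes the stream replayable for journaling.
            mQueue.push(cmd + " " + std::to_string(timeMs));
        }

        std::queue<std::string>& pending() { return mQueue; }

    private:
        std::map<int, std::string> mBindings;
        std::queue<std::string> mQueue;
    };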

    Read the article

  • Collision detection of convex shapes on voxel terrain

    - by Dave
    I have some standard convex shapes (cubes, capsules) on a voxel terrain. It is very easy to detect single-vertex collisions. However, it becomes computationally expensive when many vertices are involved. To clarify: currently my algorithm represents a cube as multiple vertices covering every face of the cube, not just the corners. This is because the cubes can be much bigger than the voxels, so multiple sample points (vertices) are required (the distance between sample points must be at most the width of a voxel). This very rapidly becomes intractable. It would be great if there were some standard algorithm(s) for collision detection between convex shapes and arbitrary voxel-based terrain (like there is with OBBs and the separating axis theorem, etc.). Any help much appreciated.
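
    One standard alternative to surface sampling is to test the shape against the voxel cells it overlaps directly, so the cost follows the overlapped volume rather than an arbitrary sample density. A C++ sketch for the axis-aligned case; the grid interface is a hypothetical stand-in for your terrain, and a narrow-phase test could refine each hit:

    #include <cmath>

    struct VoxelGrid {
        float cellSize;
        bool solid(int x, int y, int z) const;  // supplied by your terrain
    };

    struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

    // Walk every voxel cell overlapped by the shape's bounding box instead
    // of sampling vertices on its surface.
    bool intersectsTerrain(const AABB& box, const VoxelGrid& grid) {
        const int x0 = int(std::floor(box.minX / grid.cellSize));
        const int x1 = int(std::floor(box.maxX / grid.cellSize));
        const int y0 = int(std::floor(box.minY / grid.cellSize));
        const int y1 = int(std::floor(box.maxY / grid.cellSize));
        const int z0 = int(std::floor(box.minZ / grid.cellSize));
        const int z1 = int(std::floor(box.maxZ / grid.cellSize));
        for (int z = z0; z <= z1; ++z)
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                    if (grid.solid(x, y, z))
                        return true;
        return false;
    }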

    Read the article

  • How should I unbind and delete OpenAL buffers?

    - by Joe Wreschnig
    I'm using OpenAL to play sounds. I'm trying to implement a fire-and-forget play function that takes a buffer ID, assigns it to a source from a pool I have previously allocated, and plays it. However, there is a problem with object lifetimes. In OpenGL, delete functions either automatically unbind things (e.g. textures), or automatically delete the thing when it eventually is unbound (e.g. shaders), so it's usually easy to manage deletion. However, alDeleteBuffers instead simply fails with AL_INVALID_OPERATION if the buffer is still bound to a source. Is there an idiomatic way to "delete" OpenAL buffers that allows them to finish playing, and then automatically unbinds and really deletes them? Do I need to tie buffer management more deeply into the source pool (e.g. deleting a buffer requires checking all the allocated sources as well)? Similarly, is there an idiomatic way to unbind (but not delete) buffers when they have finished playing? It would be nice if, when I was looking for a free source, I only needed to see if a buffer was attached at all, and not bother checking the source state. (I'm using C++, although approaches for C are also fine. Approaches assuming a GC'd language and using finalizers are probably not applicable.)
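
    A minimal sketch of one workable pattern, assuming a simple pool of fire-and-forget sources: each frame, detach buffers from sources that have stopped, then retry deletion of buffers queued for death until alDeleteBuffers stops failing with AL_INVALID_OPERATION. The SourcePool type and its update cadence are illustrative:

    #include <AL/al.h>
    #include <vector>

    struct SourcePool {
        std::vector<ALuint> sources;        // pre-allocated sources
        std::vector<ALuint> doomedBuffers;  // buffers waiting to die

        void update() {
            // Unbind buffers from any source that has finished playing, so
            // a free source can be recognised by AL_BUFFER being 0 alone.
            for (ALuint src : sources) {
                ALint state = 0;
                alGetSourcei(src, AL_SOURCE_STATE, &state);
                if (state == AL_STOPPED)
                    alSourcei(src, AL_BUFFER, 0);
            }
            // Deletion now succeeds for buffers no longer bound anywhere;
            // the rest stay queued and are retried next frame.
            for (auto it = doomedBuffers.begin(); it != doomedBuffers.end();) {
                alGetError();                 // clear any stale error
                alDeleteBuffers(1, &*it);
                if (alGetError() == AL_NO_ERROR)
                    it = doomedBuffers.erase(it);
                else
                    ++it;
            }
        }
    };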

    Read the article

  • Algorithm to find average position

    - by Simran kaur
    In the given diagram, I have the extreme left and right points, that is, -2 and 4 in this case. So, obviously, I can calculate the width, which is 6 in this case. What we know:

    • The number of partitions: 3 in this case
    • The partition number at any point, i.e. which one is the 1st, 2nd or 3rd partition (numbered starting from the left)

    What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So, basically, I just want a generalized formula to calculate the position of the average at any point.
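
    Assuming the purple line marks the midpoint of each partition, a general formula is: position = left + width * (k - 0.5) / n, for partition k of n, counted from the left. A small C++ version, with the question's numbers worked as a check (the function name is illustrative):

    // Midpoint ("average") of partition k out of n equal partitions spanning
    // [left, left + width], with partitions numbered 1..n from the left.
    float partitionAverage(float left, float width, int n, int k) {
        return left + width * (k - 0.5f) / n;
    }
    // With left = -2, width = 6, n = 3 (as in the question):
    //   partitionAverage(-2, 6, 3, 1) == -1
    //   partitionAverage(-2, 6, 3, 2) ==  1
    //   partitionAverage(-2, 6, 3, 3) ==  3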

    Read the article

  • Mobile 3D engine renders alpha as full-object transparency

    - by Nils Munch
    I am running an iOS project using the isgl3d framework for showing POD files. I have a stylish car with 0.5-alpha windows that I wish to render on a camera background, seeking some augmented-reality goodness. The alpha on the windows looks okay, but when I add the object, I notice that it renders the entire object transparently where the windows are, including the interior of the car. Like so (in the example, the keyboard can be seen through the dashboard, seats and so on, which should be solid). The car interior is a separate object with alpha 1.0. I would rather not show a "ghost car" in my project, but I haven't found a way around this. Has anyone encountered the same issue, and eventually reached a solution?
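
    A "ghost" interior like this is typically a draw-order problem: blended geometry drawn before the opaque geometry behind it, or with depth writes interfering. The usual cure is two passes, sketched generically below for OpenGL ES; the two draw functions are stand-ins for your scene traversal, not isgl3d API:

    #include <OpenGLES/ES2/gl.h>  // iOS GLES2 header; adjust to your setup

    void drawOpaque();                        // car body, interior, seats...
    void drawTransparentSortedBackToFront();  // the 0.5-alpha windows

    void renderScene() {
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        drawOpaque();
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);  // windows must not punch holes in the depth buffer
        drawTransparentSortedBackToFront();
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }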

    Read the article

  • Box2D networking

    - by spacevillain
    I am trying to make a simple sync between two Box2D rooms, where you can drag boxes using the mouse. So every time a player clicks (and holds the mouse down on) a box, I try to send the joint parameters to the server, and the server sends them to the other clients. When mouseup occurs, I send a command to delete the joint. The problem is that the sync breaks too often. Is my approach radically wrong, or does it just need some tweaks? http://www.youtube.com/watch?v=eTN2Gwj6_Lc Source code: https://github.com/agentcooper/Box2d-networking

    Read the article

  • How can I locate empty space next to polygon regions?

    - by Stephen
    Let's say I have the following area in a top-down map: The circle is the player, the black square is an obstacle, and the grey polygons with red borders are walk-able areas that will be used as a navigation mesh for enemies. Obstacles and grey polygons are always convex. The grey regions were defined using an algorithm when the world was generated at runtime. Notice the little white column. I need to figure out where any empty space like this is, if at all, after the algorithm builds the grey regions, so that I can fill the space with another region. Basically what I'm hoping for is an algorithm that can detect empty space next to a polygon.
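
    One brute-force way to detect such gaps is to grid-sample the map and flag sample points covered by neither an obstacle nor any nav region; the flagged cells can then be merged into new regions. A C++ sketch with illustrative types (the sampling resolution trades accuracy for speed):

    #include <vector>
    #include <cstddef>

    struct Vec2 { float x, y; };
    using ConvexPoly = std::vector<Vec2>;  // counter-clockwise vertices

    // Point-in-convex-polygon: the point must lie left of every CCW edge.
    bool contains(const ConvexPoly& poly, Vec2 p) {
        for (std::size_t i = 0; i < poly.size(); ++i) {
            Vec2 a = poly[i], b = poly[(i + 1) % poly.size()];
            float cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
            if (cross < 0.0f) return false;
        }
        return true;
    }

    // Report sample points covered by neither a nav region nor an obstacle.
    std::vector<Vec2> findUncovered(const std::vector<ConvexPoly>& regions,
                                    const std::vector<ConvexPoly>& obstacles,
                                    Vec2 mapMin, Vec2 mapMax, float step) {
        std::vector<Vec2> gaps;
        for (float y = mapMin.y; y < mapMax.y; y += step)
            for (float x = mapMin.x; x < mapMax.x; x += step) {
                Vec2 p{x + step / 2, y + step / 2};
                bool covered = false;
                for (const auto& r : regions)
                    if (contains(r, p)) { covered = true; break; }
                for (const auto& o : obstacles)
                    if (!covered && contains(o, p)) { covered = true; break; }
                if (!covered) gaps.push_back(p);
            }
        return gaps;
    }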

    Read the article

  • Converting a mouse click to a ray

    - by Will
    I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray intersection code with my world. I am using my own matrix, vector and ray classes, and they all work as expected. However, when I try to convert the ray to world coordinates, my far always ends up as 0,0,0, and so my ray goes from the mouse click to the centre of the object space, rather than through it. (The x and y coordinates of near and far are identical; they differ only in the z coordinates, where they are negatives of each other.)

    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT,vp);

    matrix_t mv, p;
    glGetFloatv(GL_MODELVIEW_MATRIX,mv.f);
    glGetFloatv(GL_PROJECTION_MATRIX,p.f);
    const matrix_t inv = (mv*p).inverse();

    const float unit_x = (2.0f*((float)(x-vp[0])/(vp[2]-vp[0])))-1.0f,
                unit_y = 1.0f-(2.0f*((float)(y-vp[1])/(vp[3]-vp[1])));

    const vec_t near(vec_t(unit_x,unit_y,-1)*inv);
    const vec_t far(vec_t(unit_x,unit_y,1)*inv);

    ray = ray_t(near,far-near);

    What have I got wrong? (How do you unproject the mouse-point?)
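
    For what it's worth, the classic cause of a far point collapsing toward the origin is a missing perspective divide: multiplying by the inverse matrix yields homogeneous coordinates that must be divided by w. A self-contained C++ sketch of the idea, using the same row-vector v * M convention as the question (the types here are illustrative, not the asker's classes):

    #include <array>

    struct Vec4 { float x, y, z, w; };
    using Mat4 = std::array<float, 16>;  // m[c*4 + r]: OpenGL column-major

    // Row-vector times matrix: out[c] = sum over i of in[i] * M(i, c).
    Vec4 mul(const Vec4& v, const Mat4& m) {
        const float in[4] = { v.x, v.y, v.z, v.w };
        float out[4] = { 0, 0, 0, 0 };
        for (int c = 0; c < 4; ++c)
            for (int i = 0; i < 4; ++i)
                out[c] += in[i] * m[c * 4 + i];
        return { out[0], out[1], out[2], out[3] };
    }

    // The missing step: unprojected points are homogeneous, so divide by w.
    Vec4 unproject(float ux, float uy, float uz, const Mat4& invProjModelview) {
        Vec4 p = mul({ ux, uy, uz, 1.0f }, invProjModelview);
        return { p.x / p.w, p.y / p.w, p.z / p.w, 1.0f };
    }

    // Ray through the mouse point:
    //   near      = unproject(unit_x, unit_y, -1.0f, inv)
    //   far       = unproject(unit_x, unit_y,  1.0f, inv)
    //   direction = far - near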

    Read the article

  • Corona SDK: Quality of support and resources?

    - by Nick Wiggill
    I've not used Corona SDK before and am looking into it for a friend. (He is also considering Unity.) I wonder what the support is like for Corona? While Unity has a great many customers and so the Unity team can often not address issues directly, the community at large is very helpful and there are many excellent resources: tutorials, forum posts, code resources. What is Corona like in this regard, and by comparison?

    Read the article

  • Avoid if statements in DirectX 10 shaders?

    - by PolGraphic
    I have heard that if statements should be avoided in shaders, because both branches of the statement will be executed and then the wrong one will be discarded (which harms performance). Is that still a problem in DirectX 10? Somebody told me that in it, only the correct branch will be executed. To illustrate, I have the code:

    float y1 = 5;
    float y2 = 6;
    float b1 = 2;
    float b2 = 3;

    if (x > 0.5) {
        x = 10 * y1 + b1;
    } else {
        x = 10 * y2 + b2;
    }

    Is there another way to make it faster? If so, how do I do it? Both branches look similar; the only difference is the values of the "constants" (y1, y2, b1, b2 are the same for all pixels in the pixel shader).
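
    If the branch does turn out to matter, a well-known rewrite selects the per-branch constants with a blend instead of control flow. Below is a C++ sketch of the idea; an HLSL version would typically express the same thing with step() and lerp():

    // Branchless form of: x = (x > 0.5) ? 10*y1 + b1 : 10*y2 + b2;
    // t is 1.0 when x > 0.5, else 0.0 (step(0.5, x) in HLSL).
    float branchlessSelect(float x) {
        const float y1 = 5.0f, y2 = 6.0f, b1 = 2.0f, b2 = 3.0f;
        const float t = x > 0.5f ? 1.0f : 0.0f;
        const float y = y2 + t * (y1 - y2);  // lerp(y2, y1, t)
        const float b = b2 + t * (b1 - b2);  // lerp(b2, b1, t)
        return 10.0f * y + b;
    }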

    Read the article

  • How to move the object around the screen

    - by Abhishek
    I am trying to move the object around the screen. I tried this code:

    -(void) move
    {
        CGFloat upperLimit = mWinSize.height - (mGunda.contentSize.height / 2.0);
        CGFloat upperLimit1 = mWinSize.height;
        CGFloat lowerLimit = (mGunda.contentSize.height / 2.0);
        CGFloat RightLimit = mWinSize.width - (mGunda.contentSize.width/2.0);
        CGFloat Right = (mGunda.contentSize.width/2.0);

        if ( mImageGoingUpward )
        {
            mGunda.position = ccp( mGunda.position.x, mGunda.position.y + 5);
            if ( mGunda.position.y >= upperLimit )
            {
                mImageGoingUpward = NO;
                mHori = NO;
            }
        }
        else
        {
            mGunda.position = ccp( mGunda.position.x, mGunda.position.y - 5);
            if ( mGunda.position.y <= lowerLimit )
            {
                mGunda.position = ccp(mGunda.position.x + 5, lowerLimit);
            }
            if ( mGunda.position.x >= RightLimit )
            {
                mGunda.position = ccp(mGunda.position.x, mGunda.position.y + 10);
                mHori = YES;
            }
            if ( mHori )
            {
                if ( mGunda.position.y >= upperLimit )
                {
                    mGunda.position = ccp(mGunda.position.x - 5, mGunda.position.y);
                }
            }
        }
    }

    It moves the object from bottom to top, top to bottom, bottom to right, and from the right up to the top right of the screen. Here is the problem I have: it does not continue from the top right to the left side of the screen; that part of the loop never happens. How can I do this?
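
    One way to make the motion continue all the way around the screen is to replace the accumulating boolean flags with an explicit edge state, as in this C++ sketch (the types and names are simplified stand-ins for the cocos2d ones):

    // Patrol the screen perimeter: one direction per edge; reaching a
    // corner advances to the next edge, so the loop closes by itself.
    enum class Edge { Up, Right, Down, Left };

    struct Patrol {
        float x = 0, y = 0;
        Edge edge = Edge::Up;

        void step(float speed, float width, float height) {
            switch (edge) {
            case Edge::Up:    y += speed; if (y >= height) { y = height; edge = Edge::Right; } break;
            case Edge::Right: x += speed; if (x >= width)  { x = width;  edge = Edge::Down;  } break;
            case Edge::Down:  y -= speed; if (y <= 0)      { y = 0;      edge = Edge::Left;  } break;
            case Edge::Left:  x -= speed; if (x <= 0)      { x = 0;      edge = Edge::Up;    } break;
            }
        }
    };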

    Read the article

  • Homemaking a 2D soft-body physics engine

    - by Griffin
    Hey, so I've decided to code my own 2D soft-body physics engine in C++, since apparently none exist, and I'm starting with only a general idea/understanding of how physics work and could be simulated: by giving points, and connections between points, properties such as elasticity, density, mass, shape retention, friction, stickiness, etc. What I want is a starting point: resources and helpful examples/sites that could give me the specifics needed to actually make this, such as equations and required physics knowledge. It would be great if anyone out there would also share their attempts or ideas. Finally, I was wondering if it is possible to... use the source code of an existing 3D engine such as Bullet and transform it to be 2D-based? Or use the source code of a 2D rigid-body physics engine such as Box2D as a starting point?
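
    For a concrete starting point, the usual core of such an engine is a mass-spring system: Hooke's-law springs with damping between point masses, integrated each frame. A minimal C++ sketch (semi-implicit Euler; names illustrative; a real engine adds area preservation, collisions, and so on):

    #include <vector>
    #include <cmath>

    struct Particle { float x, y, vx, vy, invMass; };

    struct Spring {
        int a, b;         // particle indices
        float restLength;
        float stiffness;  // "elasticity"
        float damping;
    };

    void step(std::vector<Particle>& ps, const std::vector<Spring>& springs, float dt) {
        for (const Spring& s : springs) {
            Particle &p = ps[s.a], &q = ps[s.b];
            float dx = q.x - p.x, dy = q.y - p.y;
            float len = std::sqrt(dx * dx + dy * dy);
            if (len < 1e-6f) continue;
            float nx = dx / len, ny = dy / len;
            // Hooke's law plus damping along the spring axis.
            float stretch = len - s.restLength;
            float relVel = (q.vx - p.vx) * nx + (q.vy - p.vy) * ny;
            float f = s.stiffness * stretch + s.damping * relVel;
            p.vx += f * nx * p.invMass * dt;  p.vy += f * ny * p.invMass * dt;
            q.vx -= f * nx * q.invMass * dt;  q.vy -= f * ny * q.invMass * dt;
        }
        for (Particle& p : ps) { p.x += p.vx * dt; p.y += p.vy * dt; }  // integrate
    }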

    Read the article

  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I thought I'd try applying the normal map as the diffuse map in my fragment shader to check that everything is OK. I commented out my normal map code and just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ It looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader; notice I commented out the normal map code so I could debug the normal map as a diffuse texture:

    "#version 330 \n \
    \n \
    layout(std140) uniform; \n \
    \n \
    const int MAX_LIGHTS = 8; \n \
    \n \
    struct Light \n \
    { \n \
        vec4 mLightColor; \n \
        vec4 mLightPosition; \n \
        vec4 mLightDirection; \n \
    \n \
        int mLightType; \n \
        float mLightIntensity; \n \
        float mLightRadius; \n \
        float mMaxDistance; \n \
    }; \n \
    \n \
    uniform UnifLighting \n \
    { \n \
        vec4 mGamma; \n \
        vec3 mViewDirection; \n \
        int mNumLights; \n \
    \n \
        Light mLights[MAX_LIGHTS]; \n \
    } Lighting; \n \
    \n \
    uniform UnifMaterial \n \
    { \n \
        vec4 mDiffuseColor; \n \
        vec4 mAmbientColor; \n \
        vec4 mSpecularColor; \n \
        vec4 mEmissiveColor; \n \
    \n \
        bool mHasDiffuseTexture; \n \
        bool mHasNormalTexture; \n \
        bool mLightingEnabled; \n \
        float mSpecularShininess; \n \
    } Material; \n \
    \n \
    uniform sampler2D unifDiffuseTexture; \n \
    uniform sampler2D unifNormalTexture; \n \
    \n \
    in vec3 frag_position; \n \
    in vec3 frag_normal; \n \
    in vec2 frag_texcoord; \n \
    in vec3 frag_tangent; \n \
    in vec3 frag_bitangent; \n \
    \n \
    out vec4 finalColor; "

    " \n \
    \n \
    void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \
    { \n \
        vec3 viewDirection = normalize(Lighting.mViewDirection); \n \
        vec3 halfAngle = normalize(dirToLight + viewDirection); \n \
    \n \
        float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \
        float exponent = angleNormalHalf / Material.mSpecularShininess; \n \
        exponent = -(exponent * exponent); \n \
    \n \
        gaussianTerm = exp(exponent); \n \
    } \n \
    \n \
    vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \
    { \n \
        if (light.mLightType == 1) // point light \n \
        { \n \
            vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \
            float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \
    \n \
            float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \
            attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \
    \n \
            vec3 dirToLight = normalize(positionDiff); \n \
            float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \
    \n \
            float gaussianTerm = 0.0; \n \
            if (angleNormal > 0.0) \n \
                CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \
    \n \
            return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \
                   (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \
        } \n \
        else if (light.mLightType == 2) // directional light \n \
        { \n \
            vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \
            float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \
    \n \
            float gaussianTerm = 0.0; \n \
            if (angleNormal > 0.0) \n \
                CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \
    \n \
            return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \
                   (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \
        } \n \
        else if (light.mLightType == 4) // ambient light \n \
            return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \
        else \n \
            return vec4(0.0); \n \
    } \n \
    \n \
    void main() \n \
    { \n \
        vec4 diffuseTexture = vec4(1.0); \n \
        if (Material.mHasDiffuseTexture) \n \
            diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \
    \n \
        vec3 normal = frag_normal; \n \
        if (Material.mHasNormalTexture) \n \
        { \n \
            diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \
            // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \
            // mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \
    \n \
            // normal = tangentToWorldSpace * normalTangentSpace; \n \
        } \n \
    \n \
        if (Material.mLightingEnabled) \n \
        { \n \
            vec4 accumLighting = vec4(0.0); \n \
    \n \
            for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \
                accumLighting += Material.mEmissiveColor * diffuseTexture + \n \
                                 CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \
    \n \
            finalColor = pow(accumLighting, Lighting.mGamma); \n \
        } \n \
        else { \n \
            finalColor = pow(diffuseTexture, Lighting.mGamma); \n \
        } \n \
    } \n";

    Here is my wrapper around a texture:

    OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight,
                                 TextureFormat textureFormat, TextureType textureType, Logger& logger) :
        mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
    {
        glGenTextures(1, &mTexture);
        CHECK_GL_ERROR(mLogger);
        glBindTexture(GL_TEXTURE_2D, mTexture);
        CHECK_GL_ERROR(mLogger);

        GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB :
                                 textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED);

        glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]);
        CHECK_GL_ERROR(mLogger);
        glGenerateMipmap(GL_TEXTURE_2D);
        CHECK_GL_ERROR(mLogger);
        glBindTexture(GL_TEXTURE_2D, 0);
        CHECK_GL_ERROR(mLogger);
    }

    OpenGLTexture::~OpenGLTexture()
    {
        glDeleteBuffers(1, &mTexture);
        CHECK_GL_ERROR(mLogger);
    }

    And here is the sampler I create, which is shared between the diffuse and normal textures:

    // texture sampler setup
    glGenSamplers(1, &mTextureSampler);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy);
    CHECK_GL_ERROR(mLogger);

    glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE);
    CHECK_GL_ERROR(mLogger);
    glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL);
    CHECK_GL_ERROR(mLogger);

    glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler);
    CHECK_GL_ERROR(mLogger);
    glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler);
    CHECK_GL_ERROR(mLogger);

    SetAnisotropicFiltering(mCurrentAnisotropy);

    The diffuse textures look like they should, but the normal map looks so weird. Why is this?

    Read the article

  • Setting up collision using a tilemap and cocos2d

    - by James
    I'm building my first platformer using cocos2d and a tilemap. I'm having trouble coming up with a decent way of determining whether the character is colliding with an object; more specifically, in which direction the character is colliding with an object. Following the tutorial here, I have made a separate "meta" layer of collidable tiles. The problem is that unless the character is in the tile, you can't detect the collision. Also, there's no way of telling WHERE the collision is occurring. The best solution would be one that could tell me if a character is up against a wall, or walking on top of a platform. However, I can't seem to figure out a good technique for this.
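
    A common technique for getting the collision direction out of a tilemap is minimum-penetration resolution: overlap the character's AABB with the tile's on both axes and push out along the shallower one. A C++ sketch with illustrative types:

    #include <cmath>

    struct AABB { float cx, cy, halfW, halfH; };  // centre plus half extents

    enum class Hit { None, Left, Right, Top, Bottom };  // side of the actor that hit

    Hit resolve(AABB& actor, const AABB& tile) {
        float dx = actor.cx - tile.cx;
        float dy = actor.cy - tile.cy;
        float px = (actor.halfW + tile.halfW) - std::fabs(dx);  // x penetration
        float py = (actor.halfH + tile.halfH) - std::fabs(dy);  // y penetration
        if (px <= 0 || py <= 0) return Hit::None;               // no overlap
        if (px < py) {
            // Shallower on x: a wall. Push out horizontally.
            actor.cx += (dx < 0 ? -px : px);
            return dx < 0 ? Hit::Right : Hit::Left;
        } else {
            // Shallower on y: floor or ceiling. Hit::Bottom means the actor
            // is standing on top of the tile (a platform).
            actor.cy += (dy < 0 ? -py : py);
            return dy < 0 ? Hit::Top : Hit::Bottom;
        }
    }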

    Read the article

  • Strange mesh import problem with Assimp and OpenGL

    - by Morgan
    I am using the Assimp library to import 3D data into an OpenGL application, and I get some strange problems regarding the indexing of the vertices. If I use the following code for importing vertex indices:

    for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
        const struct aiFace * face = &mesh->mFaces[t];
        if (face->mNumIndices == 3) {
            indices->push_back(face->mIndices[0]);
            indices->push_back(face->mIndices[1]);
            indices->push_back(face->mIndices[2]);
        }
    }

    I get the following result: Instead, if I use the following code:

    for(int k = 0; k < 2 ; k++) {
        for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
            const struct aiFace * face = &mesh->mFaces[t];
            if (face->mNumIndices == 3) {
                indices->push_back(face->mIndices[0]);
                indices->push_back(face->mIndices[1]);
                indices->push_back(face->mIndices[2]);
            }
        }
    }

    I get the correct result. Hence, adding the indices twice renders the correct result? The OpenGL buffer is populated like so:

    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices->size() * sizeof(unsigned int), indices->data(), GL_STATIC_DRAW);

    And rendered as follows:

    glDrawElements(GL_TRIANGLES, vertexCount*3, GL_UNSIGNED_INT, indices->data());
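
    For reference, here is a sketch of the indexed-draw path when an element array buffer is in use: once the buffer is bound, the last argument of glDrawElements is a byte offset into that buffer, not a client-memory pointer, and the count is the number of indices. The wrapper function and the GLEW include are illustrative assumptions:

    #include <GL/glew.h>
    #include <vector>

    // Draw an indexed mesh from a bound index buffer object (IBO).
    void drawMesh(GLuint indexBufferObject, const std::vector<unsigned int>& indices)
    {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                     indices.size() * sizeof(unsigned int),
                     indices.data(), GL_STATIC_DRAW);
        glDrawElements(GL_TRIANGLES,
                       static_cast<GLsizei>(indices.size()),  // index count, not face count
                       GL_UNSIGNED_INT,
                       nullptr);                              // byte offset 0 into the IBO
    }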

    Read the article

  • SpriteBatch.end() generating null pointer exception

    - by odaymichael
    I am getting a null pointer exception using libGDX that the debugger points at the SpriteBatch.end() line. I was wondering what could cause this. Here is the offending code block, specifically the batch.end() line:

    batch.begin();
    for (int j = 0; j < 3; j++)
        for (int i = 0; i < 3; i++)
            if (zoomgrid[i][j].getPiece().getImage() != null)
                zoomgrid[i][j].getPiece().getImage().draw(batch);
    batch.end();

    The top of the stack is actually a line that calls lastTexture.bind(); in the flush() method of com.badlogic.gdx.graphics.g2d.SpriteBatch. I appreciate any input; let me know if I haven't included enough information.

    Read the article

  • OpenGL font rendering

    - by DEElekgolo
    I am trying to make an OpenGL text-rendering class using FreeType. I was originally following this code, but it doesn't seem to work out for me. I get nothing regardless of what parameters I pass to Draw().

    class Font {
    public:
        Font() {
            if (FT_Init_FreeType(&ftLibrary)) {
                printf("Could not initialize FreeType library\n");
                return;
            }
            glGenBuffers(1,&iVerts);
        }

        bool Load(std::string sFont, unsigned int Size = 12.0f) {
            if (FT_New_Face(ftLibrary,sFont.c_str(),0,&ftFace)) {
                printf("Could not open font: %s\n",sFont.c_str());
                return true;
            }
            iSize = Size;
            FT_Set_Pixel_Sizes(ftFace,0,(int)iSize);
            FT_GlyphSlot gGlyph = ftFace->glyph;

            //Generating the texture atlas.
            //Rather than some amazing rectangular packing method, I'm just going
            //to have one long strip of letters with the height being that of the font size.
            int width = 0;
            int height = 0;
            for (int i = 32; i < 128; i++) {
                if (FT_Load_Char(ftFace,i,FT_LOAD_RENDER)) {
                    printf("Error rendering letter %c for font %s.\n",i,sFont.c_str());
                }
                width += gGlyph->bitmap.width;
                height += std::max(height,gGlyph->bitmap.rows);
            }

            //Generate the openGL texture
            glActiveTexture(GL_TEXTURE0);
            //if I texture exists then delete it.
            iTexture ? glDeleteBuffers(1,&iTexture):0;
            glGenTextures(1,&iTexture);
            glBindTexture(GL_TEXTURE_2D,iTexture);
            glPixelStorei(GL_UNPACK_ALIGNMENT,1);
            glTexImage2D(GL_TEXTURE_2D,0,GL_ALPHA,width,height,0,GL_ALPHA,GL_UNSIGNED_BYTE,0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            //load the glyphs and set the glyph data
            int x = 0;
            for (int i = 32; i < 128; i++) {
                if (FT_Load_Char(ftFace,i,FT_LOAD_RENDER)) {
                    //if it cant load the character
                    continue;
                }
                //load the glyph map into the texture
                glTexSubImage2D(GL_TEXTURE_2D,0,x,0,
                    gGlyph->bitmap.width,
                    gGlyph->bitmap.rows,
                    GL_ALPHA,
                    GL_UNSIGNED_BYTE,
                    gGlyph->bitmap.buffer);
                //move the "pen" down the strip
                x += gGlyph->bitmap.width;
                chars[i].ax = (float)(gGlyph->advance.x >> 6);
                chars[i].ay = (float)(gGlyph->advance.y >> 6);
                chars[i].bw = (float)gGlyph->bitmap.width;
                chars[i].bh = (float)gGlyph->bitmap.rows;
                chars[i].bl = (float)gGlyph->bitmap_left;
                chars[i].bt = (float)gGlyph->bitmap_top;
                chars[i].tx = (float)x/width;
            }
            printf("Loaded font: %s\n",sFont.c_str());
            return true;
        }

        void Draw(std::string sString, Vector2f vPos = Vector2f(0,0), Vector2f vScale = Vector2f(1,1)) {
            struct pPoint {
                pPoint() { x = y = s = t = 0; }
                pPoint(float a,float b,float c,float d) {
                    x = a; y = b;
                    s = c; t = d;
                }
                float x,y;
                float s,t;
            };
            pPoint* cCoordinates = new pPoint[6*sString.length()];
            int n = 0;
            for (const char *p = sString.c_str(); *p; p++) {
                float x2 = vPos.x() + chars[*p].bl * vScale.x();
                float y2 = -vPos.y() - chars[*p].bt * vScale.y();
                float w = chars[*p].bw * vScale.x();
                float h = chars[*p].bh * vScale.y();
                float x = vPos.x() + chars[*p].ax * vScale.x();
                float y = vPos.y() + chars[*p].ay * vScale.y();
                //skip characters with no pixels
                //still advances though
                if (!w || !h) {
                    continue;
                }
                //triangle one
                cCoordinates[n++] = pPoint( x2   , -y2   , chars[*p].tx                    , 0);
                cCoordinates[n++] = pPoint( x2+w , -y2   , chars[*p].tx + chars[*p].bw / w , 0);
                cCoordinates[n++] = pPoint( x2   , -y2-h , chars[*p].tx                    , chars[*p].bh / h);
                cCoordinates[n++] = pPoint( x2+w , -y2   , chars[*p].tx + chars[*p].bw / w , 0);
                cCoordinates[n++] = pPoint( x2   , -y2-h , chars[*p].tx                    , chars[*p].bh / h);
                cCoordinates[n++] = pPoint( x2+w , -y2-h , chars[*p].tx + chars[*p].bw / w , chars[*p].bh / h);
            }
            glBindBuffer(GL_ARRAY_BUFFER,iVerts);
            glBindBuffer(GL_TEXTURE_2D,iTexture);
            //Vertices
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(2,GL_FLOAT,sizeof(pPoint),&cCoordinates[0].x);
            //TexCoord 0
            glClientActiveTexture(GL_TEXTURE0);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glTexCoordPointer(2,GL_FLOAT,sizeof(pPoint),&cCoordinates[0].s);
            glCullFace(GL_NONE);
            glBufferData(GL_ARRAY_BUFFER,6*sString.length(),cCoordinates,GL_DYNAMIC_DRAW);
            glDrawArrays(GL_TRIANGLES,0,n);
            glCullFace(GL_BACK);
            glBindBuffer(GL_ARRAY_BUFFER,0);
            glBindBuffer(GL_TEXTURE_2D,0);
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        }

        ~Font() {
            glDeleteBuffers(1,&iVerts);
            glDeleteBuffers(1,&iTexture);
        }

    private:
        unsigned int iSize;
        //openGL texture atlas
        unsigned int iTexture;
        //openGL geometry buffer;
        unsigned int iVerts;
        FT_Library ftLibrary;
        FT_Face ftFace;

        struct Character {
            float ax,ay;  //Advance
            float bw,bh;  //bitmap size
            float bl,bt;  //bitmap left and top
            float tx;
        } chars[128];
    };

    Read the article

  • Any good C++ Component/Entity frameworks?

    - by Pat
    (Skip to the bold if you want to get straight to my question :) ) I've been dabbling in the different technologies available out there to use. I tried Unity and component-based design, managing to get a little guy up and running around a map with basic pathfinding. I really loved how easy it was to program using components, but I wanted a bit more control and something more 2D-friendly, so I went with libGDX. I looked around and found two good frameworks for Java, which are Artemis and Apollo. I didn't like Artemis much, so I went with Apollo, which I loved. I managed to integrate it with Box2D and get a little guy running around bouncing balls. Great! But since I want to try out most of the options, there is still C++/SFML that I haven't tried yet. Coming from a Java/C# background, I've always wanted to get my hands dirty with C++. But then, after some looking around, I noticed there aren't any component-based frameworks for me to use. There's a somewhat-finished port of Artemis, but, aside from not being complete, I didn't quite like Artemis even in Java. I found Apollo's approach much more... logical. So, my question is: are there any good component/entity frameworks for C++ that I can use that are similar to Artemis, or preferably, Apollo?
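
    For readers who just want the shape of the pattern in C++, here is a deliberately tiny component/entity sketch; it is not a framework recommendation, and all names are illustrative:

    #include <unordered_map>
    #include <typeindex>
    #include <memory>
    #include <utility>

    struct Component { virtual ~Component() = default; };

    class Entity {
    public:
        // Attach a component, constructed in place, keyed by its type.
        template <typename C, typename... Args>
        C& add(Args&&... args) {
            auto c = std::make_unique<C>(std::forward<Args>(args)...);
            C& ref = *c;
            mComponents[typeid(C)] = std::move(c);
            return ref;
        }
        // Look up a component by type; nullptr when absent.
        template <typename C>
        C* get() {
            auto it = mComponents.find(typeid(C));
            return it == mComponents.end() ? nullptr : static_cast<C*>(it->second.get());
        }
    private:
        std::unordered_map<std::type_index, std::unique_ptr<Component>> mComponents;
    };

    // Usage: struct Position : Component { float x = 0, y = 0; };
    //        Entity e; e.add<Position>().x = 10; auto* p = e.get<Position>();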

    Read the article

  • Can't click on a button with startDrag() active on stage

    - by Pedro
    I need to know how I can enable mouse clicks on a button when I have a MouseEvent listener on the stage. I have a MovieClip associated with the mouse cursor:

    Mouse.hide();
    scope.startDrag(true);

    And a MouseEvent on the stage:

    stage.addEventListener(MouseEvent.CLICK, FunctionXYZ);

    When I try to click on any button, they don't trigger the function that I created for those buttons... for example, buttons for fullscreen, exit, help, etc. Thank you very much. BR, Pedro

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light, and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel.

    At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t*d, where o is the origin and d the direction) to the next pixel, and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D. Thus, some tweak needs to be done to make this work. I would like to ask if someone knows of a way to do what I described above, i.e. to get from a 2D ray in texture-coordinate space to the 3D ray in screen space.

    I did implement a simple version of the idea, which you can see in the following video: video here Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader:

    #version 330 core

    uniform sampler2D DepthMap;
    uniform vec2 projAB;
    uniform mat4 projectionMatrix;

    const vec3 light_p = vec3(-30.0, 30.0, -10.0);

    noperspective in vec2 pass_TexCoord;
    smooth in vec3 viewRay;

    layout(location = 0) out float out_AO;

    vec3 CalcPosition(void){
        float depth = texture(DepthMap, pass_TexCoord).r;
        float linearDepth = projAB.y / (depth - projAB.x);
        vec3 ray = normalize(viewRay);
        ray = ray / ray.z;
        return linearDepth * ray;
    }

    void main(void){
        vec3 origin = CalcPosition();
        if(origin.z < -60) discard;
        vec2 pixOrigin = pass_TexCoord; //tex coords

        vec3 dir = normalize(light_p - origin);

        vec2 texel_size = vec2(1.0 / 600.0);
        float t = 0.1;

        ivec2 pixIndex = ivec2(pixOrigin / texel_size);
        out_AO = 1.0;

        while(true){
            vec3 ray = origin + t * dir;
            vec4 temp = projectionMatrix * vec4(ray, 1.0);
            vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
            ivec2 newIndex = ivec2(texCoord / texel_size);
            if(newIndex != pixIndex){
                float depth = texture(DepthMap, texCoord).r;
                float linearDepth = projAB.y / (depth - projAB.x);
                if(linearDepth > ray.z + 0.1){
                    out_AO = 0.2;
                    break;
                }
                pixIndex = newIndex;
            }
            t += 0.5;
            if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break;
        }
    }

    As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray, and project it to get the pixel coordinates, which is not really optimal. Eventually, I would like to optimize the code as much as possible, compare it with shadow mapping, and see how it scales with the number of lights.

    PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.

    Read the article

  • How important is programming for a Level Designer?

    - by WryGrin
    I'm currently attending school in a Level Design program, and I was wondering how important programming really is to being a Level Designer. I'm apparently incapable of learning programming (despite my best efforts), and tend to do very well in all other courses: 3D modelling, story/character design, narrative and dialogue writing, environmental and conceptual design, etc. I'm wondering if my strengths in the other areas are enough (with practice) to let me become a Level Designer, or whether I'm wasting my time if I can't program. I really want to be a designer, but I just can't seem to wrap my head around the "language" of programming in general (Java kicks my teeth in, even with tutoring and additional work on my own).

    Read the article

  • Convert rotation from a right-handed system to a left-handed one

    - by Hector Llanos
    I have Euler angles from a right-handed system that I am trying to convert to a left-handed system. All the information that I have read online says that to convert them, simply multiply the axes and the angles in the correct order and it should work. In other words, Z * Y * X. When I do this, what I see in Maya and what I see in the engine still do not match up. This is what I have so far:

    static Quaternion ConvertToRightHand(Vector3 Euler)
    {
        Quaternion x = Quaternion.AngleAxis(-Euler.x, Vector3.right);
        Quaternion y = Quaternion.AngleAxis(Euler.y, Vector3.up);
        Quaternion z = Quaternion.AngleAxis(Euler.z, Vector3.forward);
        return (z * y * x);
    }

    Keeping the -Euler.x helps keep the object pointing up correctly, but when I pass (0,0,0) to face in the -z direction, it faces in the +z. Help :/

    Read the article

  • Increase animation speed according to the swipe speed in Unity for Android

    - by rohit
    I have an animation done in Maya and brought the FBX file into Unity. Here is my code to calculate the speed of the swipe:

    Vector2 speedMeasuredInScreenWidthsPerSecond =
        (Input.touches[0].deltaPosition / Screen.width) * Input.touches[0].deltaTime;

    Now I wanted to take speedMeasuredInScreenWidthsPerSecond and use it to increase the animation speed accordingly, like this:

    animation["gmeChaAnimMiddle"].speed = Mathf.Round(speedMeasuredInScreenWidthsPerSecond);

    However, this results in an error that I need to convert Vector2 to float. How do I overcome it?

    Read the article
