Search Results

Search found 32114 results on 1285 pages for 'general development'.

  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like this:

        GLfloat vertices[] = {
             0.5, 0.25, 1.0,
             0.5, 0.75, 1.0,
            -0.5, 0.75, 1.0,
            -0.5, 0.25, 1.0,
             0.6, 0.35, 1.0,
             0.6, 0.85, 1.0,
            -0.6, 0.85, 1.0,
            -0.6, 0.35, 1.0
        };

        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glLinkProgram(psId);
        glBindAttribLocation(psId, 0, "Position");
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    My vertex shader is:

        #version 150
        in vec3 Position;
        void main()
        {
            gl_Position = vec4(Position, 1.0);
        }

    The geometry shader is:

        #version 150
        #extension GL_EXT_geometry_shader4 : enable
        in vec4 pos[3];
        void main()
        {
            gl_Position = pos[0];
            EmitVertex();
            gl_Position = pos[1];
            EmitVertex();
            gl_Position = pos[2];
            EmitVertex();
            gl_Position = pos[0] + vec4(0.3, 0.0, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }

    Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How does the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices are passed on to the geometry shader, and from those we construct a primitive of size 4 (a GL_TRIANGLE_STRIP of 4 vertices). Can somebody please throw some light on this?
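    For reference: with GL_ARB_geometry_shader4, the program must be told the maximum number of emitted vertices (GL_GEOMETRY_VERTICES_OUT_EXT) before linking, and glBindAttribLocation only takes effect at link time, so calling it after glLinkProgram does nothing. A minimal sketch of the host-side setup under those rules (psId and vertices as in the question); note the draw mode has to match the declared input type, not the output type:

        // Attribute locations and geometry-shader parameters are read at link time.
        glBindAttribLocation(psId, 0, "Position");
        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glProgramParameteriEXT(psId, GL_GEOMETRY_VERTICES_OUT_EXT, 4); // shader emits 4 vertices
        glLinkProgram(psId);
        glUseProgram(psId);

        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vertices);

        // The geometry shader consumes whole triangles, so draw GL_TRIANGLES:
        // one input triangle in, one 4-vertex strip (two triangles) out.
        glDrawArrays(GL_TRIANGLES, 0, 3);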

  • Collision detection via adjacent tiles - sprite too big

    - by BlackMamba
    I have managed to create a collision detection system for my tile-based jump'n'run game (written in C++/SFML), where on each update I check which values the surrounding tiles of the player contain, and then let the player move accordingly (i.e. move left when there is an obstacle on the right side). This works fine when the player sprite is not too big: given a tile size of 5x5 pixels, my solution worked quite well with sprite sizes of 3x4 and 5x5 pixels. My problem is that I actually need the player to be quite gigantic (34x70 pixels at the same tile size). When I try this, there seems to be an invisible, notably smaller bounding box where the player collides with obstacles, and the player also shakes strongly.

    Here are some images to show what I mean:

    Works: http://tinypic.com/r/207lvfr/8
    Doesn't work: http://tinypic.com/r/2yuk02q/8
    Another non-working example: http://tinypic.com/r/kexbwl/8 (the player isn't falling; he stays there in the corner)

    My code for getting the surrounding tiles looks like this (I removed some parts to make it more readable):

        std::vector<std::map<std::string, int> > Game::getSurroundingTiles(sf::Vector2f position)
        {
            // converting the pixel coordinates to tilemap coordinates
            sf::Vector2u pPos(static_cast<int>(position.x / tileSize.x),
                              static_cast<int>(position.y / tileSize.y));
            std::vector<std::map<std::string, int> > surroundingTiles;
            for (int i = 0; i < 9; ++i)
            {
                // calculating the relative position of the surrounding tile(s)
                int c = i % 3;
                int r = static_cast<int>(i / 3);
                // we subtract 1 to place the player in the middle of the 3x3 grid
                sf::Vector2u tilePos(pPos.x + (c - 1), pPos.y + (r - 1));
                // this tells us what kind of block this tile is
                int tGid = levelMap[tilePos.y][tilePos.x];
                // converts the coords from tile to world coords
                sf::Vector2u tileRect(tilePos.x * 5, tilePos.y * 5);
                // storing all the information
                std::map<std::string, int> tileDict;
                tileDict.insert(std::make_pair("gid", tGid));
                tileDict.insert(std::make_pair("x", tileRect.x));
                tileDict.insert(std::make_pair("y", tileRect.y));
                // adding the stored information to our vector
                surroundingTiles.push_back(tileDict);
            }
            // I organise the map so that it is arranged like the following:
            /*
             * 4 | 1 | 5
             * -- -- --
             * 2 | / | 3
             * -- -- --
             * 6 | 0 | 7
             */
            return surroundingTiles;
        }

    I then loop through the surrounding tiles, and if a tile's gid is 1 (which indicates an obstacle), I check for intersections with that adjacent tile. The problem I just can't overcome is that I think I need to store the values of all the tiles the sprite touches, not just the immediate neighbours, and then check against them. How? And might there be a better solution? Any help is appreciated.

    P.S.: My implementation derives from this blog entry; I mostly just translated it from Objective-C/Cocos2d.
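    A note on that last question: for a sprite larger than a tile, a common approach is to derive the whole rectangle of tiles the bounding box overlaps, instead of a fixed 3x3 neighbourhood. A rough sketch reusing the question's tileSize and levelMap (the helper name and the assumption that the box stays inside the map are mine):

        #include <SFML/Graphics/Rect.hpp>
        #include <vector>

        // Collect the tile coordinates of every tile overlapped by a pixel-space box.
        std::vector<sf::Vector2u> getOverlappedTiles(const sf::FloatRect& box)
        {
            std::vector<sf::Vector2u> tiles;
            unsigned x0 = static_cast<unsigned>(box.left / tileSize.x);
            unsigned y0 = static_cast<unsigned>(box.top / tileSize.y);
            unsigned x1 = static_cast<unsigned>((box.left + box.width - 1) / tileSize.x);
            unsigned y1 = static_cast<unsigned>((box.top + box.height - 1) / tileSize.y);
            for (unsigned y = y0; y <= y1; ++y)
                for (unsigned x = x0; x <= x1; ++x)
                    tiles.push_back(sf::Vector2u(x, y)); // test levelMap[y][x] against these
            return tiles;
        }

    A 34x70 sprite on 5x5 tiles then yields roughly a 7x15 block of candidate tiles rather than 8 neighbours, which also removes the "invisible smaller bounding box" effect.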

  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has been used frequently in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered around a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaching wink by a character, or other purposes. (Example from flasheff.com.) Your answer may or may not include a code sample; a language-agnostic explanation is considered enough.
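    A minimal sketch of the usual implementation, in C++ for concreteness (the buffer layout and names are illustrative): each pixel keeps its colour while its distance from the focal point is inside a radius that shrinks over time, and is masked to black outside it.

        #include <cmath>
        #include <cstdint>

        // t runs from 0 (iris fully open) to 1 (fully closed);
        // (focalX, focalY) is the wipe centre, e.g. the character's face.
        void irisWipe(uint32_t* pixels, int w, int h, float focalX, float focalY, float t)
        {
            // start radius: big enough to cover the frame from any focal point
            float maxR = std::sqrt(static_cast<float>(w * w + h * h));
            float r = (1.0f - t) * maxR;
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                {
                    float dx = x - focalX, dy = y - focalY;
                    if (dx * dx + dy * dy > r * r)
                        pixels[y * w + x] = 0xFF000000; // outside the iris: opaque black
                }
        }

    Animating t from 0 to 1 closes the iris on the focal point; reversing it opens the scene. A soft rim can be had by blending near the boundary instead of hard-masking.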

  • XNA Diffuse Shader Issue. Edge lighting problem. Image Attached

    - by adtither
    As you can see in this image, the diffuse shading is working correctly in some places, but in other places, such as the bottom of the sphere, you can see the squares/triangles of the mesh. Any idea what could be causing this? Let me know if you need any more information about the code; I can upload my normals calculations and shader effect if required.

    EDIT: Here's a link to the shader I'm using: http://pastebin.com/gymVc7CP
    Link to the normals calculations: http://pastebin.com/KnMGdzHP

    It seems to be an issue with edge lighting. I can't seem to see where I'm going wrong with the normals calculations, though.
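    Faceting like this often means the normals are per-face, or not renormalized, rather than smoothed per vertex. Without seeing the linked code, here is a generic sketch of smooth-normal generation for an indexed triangle mesh (Vector3, cross and normalize stand in for whatever math types the project uses):

        #include <vector>

        // Accumulate (area-weighted) face normals into each vertex, then normalize.
        void computeSmoothNormals(const std::vector<Vector3>& positions,
                                  const std::vector<int>& indices,
                                  std::vector<Vector3>& normals)
        {
            normals.assign(positions.size(), Vector3(0, 0, 0));
            for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
            {
                const Vector3& a = positions[indices[i]];
                const Vector3& b = positions[indices[i + 1]];
                const Vector3& c = positions[indices[i + 2]];
                Vector3 faceNormal = cross(b - a, c - a); // length ~ triangle area
                normals[indices[i]]     += faceNormal;    // shared vertices collect
                normals[indices[i + 1]] += faceNormal;    // contributions from all
                normals[indices[i + 2]] += faceNormal;    // adjacent faces
            }
            for (Vector3& n : normals)
                n = normalize(n);
        }

    Normalizing the interpolated normal again in the pixel shader is also worth checking, since interpolation shortens it between vertices.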

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything, really, but the main idea is to perform "ping-ponging": take the output of the first pass and then send it in for the second pass. In my example I want to draw to the R channel, then draw to the G channel, and produce a simple Venn diagram in the shader, but I need to detect the overlap. I can currently detect one or the other but not the overlap. There are a red and a green circle overlapping, and I want to put a dynamic texture map in the overlap region; currently I can put it in either one, but not both. Below is how it looks in the shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0 : TEXCOORD0;
            float2 tex1 : TEXCOORD1;
            float4 color : COLOR;
        };

        // Pixel Shader
        float4 main(PixelInputType input) : SV_TARGET
        {
            float4 textureColor0;
            float4 textureColor1;

            // Sample the pixel color from the texture using the sampler
            // at this texture coordinate location.
            textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
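    For the ping-ponging itself, here is a rough sketch of the D3D11 plumbing, assuming two textures that were each created with both a render-target view and a shader-resource view (the variable names are illustrative, not from the code above):

        #include <d3d11.h>
        #include <utility>

        // rtv[i]/srv[i] view the same texture i; context is the ID3D11DeviceContext.
        ID3D11RenderTargetView*   rtv[2]; // textures created with BIND_RENDER_TARGET
        ID3D11ShaderResourceView* srv[2]; // ... and BIND_SHADER_RESOURCE

        int src = 0, dst = 1;
        for (int pass = 0; pass < 2; ++pass)
        {
            // Unbind whatever was read last pass before binding it as a target.
            ID3D11ShaderResourceView* nullSRV = nullptr;
            context->PSSetShaderResources(0, 1, &nullSRV);

            context->OMSetRenderTargets(1, &rtv[dst], nullptr);
            context->PSSetShaderResources(0, 1, &srv[src]);

            // ... select the pass-specific pixel shader / constants, then draw ...
            context->Draw(4, 0); // fullscreen quad

            std::swap(src, dst); // last output becomes next input
        }

    The key constraint is that a texture can never be bound as input and output simultaneously; D3D11 silently unbinds one of the two, which is a common reason a second pass appears to do nothing.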

  • Creating practically solvable 15 puzzle inputs

    - by Ashwin
    I am developing a 15-puzzle game. I know the method for detecting unsolvable puzzles. But unlike the 8-puzzle, solving the 15-puzzle takes quite a long time for some input states, while other input states can be solved within 5 seconds. The problem is that I cannot give the user (the player) a puzzle whose solution takes more than 10 seconds (if he/she chooses to see the solution). So what I want is that, when I initially shuffle the puzzle, I only present configurations which can be solved within 10 seconds. There must be some way to determine the hardness of the puzzle; I tried searching the net but could not find one. Does anyone know a way of determining the hardness of a puzzle?

    NOTE: I am using the A* algorithm to find the solution, on a computer with 3 GB RAM and a 2.27 GHz processor.
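    One cheap proxy worth trying (an empirical heuristic, not a guaranteed bound on A* time): the sum of Manhattan distances of all tiles from their goal positions, which is also the usual admissible heuristic for A*. Shuffles scoring below an experimentally chosen cutoff tend to solve quickly. A sketch for a 4x4 board stored row-major, with 0 as the blank:

        #include <cstdlib>

        // Sum of Manhattan distances of every tile from its goal cell
        // (goal layout: 1..15 in order, blank last).
        int manhattanScore(const int board[16])
        {
            int total = 0;
            for (int i = 0; i < 16; ++i)
            {
                int v = board[i];
                if (v == 0) continue;                 // skip the blank
                int goal = v - 1;                     // goal index of tile v
                total += std::abs(i / 4 - goal / 4)   // row distance
                       + std::abs(i % 4 - goal % 4);  // column distance
            }
            return total;
        }

    Generating boards until manhattanScore(board) falls under a cutoff (tuned by timing the solver offline) filters out most pathologically slow cases. Alternatively, shuffling by a bounded number of random legal moves from the solved state bounds the solution depth directly.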

  • Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when I drag them. I am comparing my application to the standard TouchGestureSample from MSDN. For some reason, in my application the gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and both run in the standard Windows Phone emulator. I set my breakpoint immediately after this line of code in a simple Update method:

        GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app:

        Delta = {X:-13.56522 Y:4.166667}  Position = {X:184.6956 Y:417.7083}

    Typical values in the sample app:

        Delta = {X:7 Y:16}  Position = {X:497 Y:244}

    Has anyone seen this issue? Does anyone have any suggestions? Thank you.

  • blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to build a model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only lets you see the background pictures in orthographic mode, and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

  • How to keep balance / Unlock items / achievement rules

    - by Mark Knol
    I'm working on an engine for a game, to learn JavaScript and just because it's fun. I'm a Flash developer; I know how to build websites. Making games is a different challenge, and JavaScript is a challenge, but I'd love to learn how to structure code and what patterns are common. I don't mind if the game never gets finished; I'm mostly interested in the programming part of it. I don't have a particular end result in mind, so I'll see where it takes me.

    I currently have a system where you can buy items. The items cost a specified amount of gold, silver, diamonds, etc. When you have selected and bought an item, it takes time before you are rewarded. When the time is over, you are rewarded with other properties (gold, energy, diamonds). For example, you can buy an apple for 50 gold; it takes a minute; you get rewarded with 75 energy. Or if you go for a run, it costs 50 energy, takes 5 minutes, and the reward is 25 gold and 25 silver. These definitions are what I call actions. I already have a system where this works, and I can define as many actions with as many properties as I want. The definitions I have look kind of like this:

        {id:101, category:544, onInit:{gold:-75},  onComplete:{energy:75},  time:2000,  name:"Apple",   locked:false}
        {id:102, category:544, onInit:{gold:-135}, onComplete:{energy:145}, time:2000,  name:"Banana",  locked:false}
        {id:106, category:302, onInit:{energy:-50, power:-25},   onComplete:{gold:100, diamonds:2}, time:10000, name:"Run",     locked:false}
        {id:107, category:302, onInit:{energy:-70, silver:-55},  onComplete:{gold:100},             time:10000, name:"Dance",   locked:false}
        {id:108, category:302, onInit:{energy:-230, power:-355}, onComplete:{gold:70, silver:70},   time:10000, name:"Fitness", locked:false}

    Now I would love to add a system where I can lock/unlock the actions using achievement rules. Say, if you buy 10 apples, you unlock a new action, like bananas, which cost more and reward more. In the future I may also want to restrict achievements and actions to levels. I am kind of stuck on how to structure this. I have 2 questions:

    1. Which patterns are used to define achievements? How/where are they defined? Should they be part of the action, or should there be a separate controller? Is it a good idea to register all completed actions with it? I think I want multiple types of achievement rules; I'd love to hear some ideas on how to develop it.
    2. How do you create/find a good balance, so the user does not get stuck and cannot cheat by repeating a pattern of actions to get too many rewards?

    I know there is no simple answer and I'm lacking a good game concept, but I wonder if anyone has created such a game and how you dealt and played with it.
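    On question 1, one common pattern is to keep achievements out of the action definitions entirely and let a separate controller observe completed actions and count them against unlock rules. A sketch of the shape, written in C++ here for concreteness (the names are made up; the same structure translates directly to JavaScript objects):

        #include <map>
        #include <vector>

        // An unlock rule: "after N completions of action X, unlock action Y".
        struct AchievementRule {
            int requiredActionId; // e.g. 101 (Apple)
            int requiredCount;    // e.g. 10
            int unlocksActionId;  // e.g. 102 (Banana)
        };

        class AchievementController {
        public:
            void addRule(const AchievementRule& r) { rules.push_back(r); }

            // Call this whenever any action completes; flips locked flags.
            void onActionCompleted(int actionId, std::map<int, bool>& lockedById) {
                ++completedCount[actionId];
                for (const AchievementRule& r : rules)
                    if (r.requiredActionId == actionId &&
                        completedCount[actionId] >= r.requiredCount)
                        lockedById[r.unlocksActionId] = false; // unlock
            }

        private:
            std::vector<AchievementRule> rules;
            std::map<int, int> completedCount; // actionId -> completions
        };

    Registering all completions with one controller also gives a single place to later hang level restrictions or multi-condition rules.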

  • Flickering when accessing texture by offset

    - by TravisG
    I have a simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size, and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?

  • OpenGL fovx question

    - by Nick
    To boil my question down to the simplest form, I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and the actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, nick
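    For reference, the horizontal field of view does not scale linearly with the aspect ratio; the tangents scale, not the angles:

        fov_x = 2 * atan(aspect * tan(fov_y / 2))
              = 2 * atan(2 * tan(22.5 deg)) ≈ 79.3 deg   (not 90)

    and the visible half-width at distance 50 is

        50 * aspect * tan(fov_y / 2) = 50 * 2 * tan(22.5 deg) ≈ 41.4

    which matches the observed periphery at about x = 41.1.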

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by calculating the SH coefficients for all the bands you're using (for whatever accuracy you desire) in the direction of the surface normal, and scaling them by whatever you need to scale them by (e.g. light color and intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way, as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?

  • Simple project - make a 3D box tumble and fall to the ground [closed]

    - by Dominic Bou-Samra
    Possible duplicate: Resources to learn programming rigid body simulation

    Hi guys, I want to try learning rigid-body dynamics simulation. I have done a fluid and a cloth simulation before, but never anything rigid. My maths knowledge is limited in that I don't know the notation that well. Are there any good cliff notes, tutorials, or guides on how I would accomplish a simple task like this? I don't want a super complex PDF that's only a little relevant. Thanks.

  • How to properly add texture to multi-fixture/shape b2Body

    - by Blazej Wdowikowski
    Hello everyone, this is my first post here; I hope it will not be a false start. First I should say that I worked through part 1 of Ray's tutorial "How To Make A Game Like Fruit Ninja With Box2D and Cocos2D". But what about when I want to make a more complex body with a texture? Simple: just add n b2FixtureDefs to the same body. OK, but what about the texture? If I take the code from that tutorial, it only fills the last fixture - it probably does not take every b2Vec2 point into account. I was right; it did not. So, a quick refactor: from this

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original
        {
            // gather all the vertices from our Box2D shape
            b2Fixture *originalFixture = body->GetFixtureList();
            b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
            int vertexCount = shape->GetVertexCount();
            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];
            for (int i = 0; i < vertexCount; i++) {
                CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO,
                                shape->GetVertex(i).y * PTM_RATIO);
                [points addObject:[NSValue valueWithCGPoint:p]];
            }
            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    I came up with this:

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original
        {
            // count the b2Vec2 points over ALL fixtures
            int vertexCount = 0;
            b2Fixture *currentFixture = body->GetFixtureList();
            while (currentFixture) {
                b2PolygonShape *shape = (b2PolygonShape*)currentFixture->GetShape();
                vertexCount += shape->GetVertexCount();
                currentFixture = currentFixture->GetNext();
            }
            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];
            // gather the vertices from every fixture of our Box2D body
            b2Fixture *originalFixture = body->GetFixtureList();
            while (originalFixture) {
                b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
                int currentVertexCount = shape->GetVertexCount();
                for (int i = 0; i < currentVertexCount; i++) {
                    CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO,
                                    shape->GetVertex(i).y * PTM_RATIO);
                    [points addObject:[NSValue valueWithCGPoint:p]];
                }
                originalFixture = originalFixture->GetNext();
            }
            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    This works for a simple two-fixture body like:

        b2BodyDef bodyDef;
        bodyDef.type = b2_dynamicBody;
        bodyDef.position = position;
        bodyDef.angle = rotation;
        b2Body *body = world->CreateBody(&bodyDef);

        b2FixtureDef fixtureDef;
        fixtureDef.density = 1.0;
        fixtureDef.friction = 0.5;
        fixtureDef.restitution = 0.2;
        fixtureDef.filter.categoryBits = 0x0001;
        fixtureDef.filter.maskBits = 0x0001;

        b2Vec2 vertices[] = {
            b2Vec2( 0.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2( 0.0/PTM_RATIO,  0.0/PTM_RATIO),
            b2Vec2(50.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(60.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        b2PolygonShape shape;
        shape.Set(vertices, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

        b2Vec2 vertices2[] = {
            b2Vec2(20.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2(20.0/PTM_RATIO,  0.0/PTM_RATIO),
            b2Vec2(70.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(80.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        shape.Set(vertices2, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

    But if I place the second shape higher than the first, it starts to get weird: the texture goes crazy - and that's before even mentioning more complex shapes. What's more, if the shapes have one common point, the texture will not render for them at all. (For the shapes I use PhysicsEditor, as in part 1 of the tutorial.) BTW, I use PolygonSprite, and in the createWithWorld... method, other shapes. So my question is: why are the texture coordinates in such a mess? Is it my modified method, or just the wrong approach? Maybe I should remove duplicates from the points array?

  • How to design a leaderboard?

    - by PeterK
    This sounds like an easy thing, but consider the following:

    - many players
    - some have played many games, and some have just started
    - different types of statistics

    ...on what information should the actual ranking be based? I am planning to display the board in a UITableView, so there is limited space available per player. However, I am not bound to the UITableView if there is a better solution. This is a quiz game, and the information I currently capture per player is:

    - # games played in total
    - # games played per game type (the current version has only one game type)
    - # questions answered
    - # correct answers

    Maybe I should include additional information. I have been thinking about a leaderboard settings page where the player can decide on what basis the leaderboard displays information, but I would like to avoid that complexity. However, if it is needed, I will do it. Any advice on how to design the presentation of this would be highly appreciated.
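    On the ranking basis, one illustrative option (an assumption, not from the question) is to rank by answer accuracy, but shrink it toward a prior until a player has answered enough questions, so a newcomer at 3/3 doesn't outrank a veteran at 80/100:

        // Smoothed accuracy: pulls small samples toward priorMean.
        double leaderboardScore(int correct, int answered,
                                double priorMean = 0.5, double priorWeight = 20.0)
        {
            return (correct + priorMean * priorWeight) / (answered + priorWeight);
        }
        // e.g.  3/3    -> (3 + 10)  / (3 + 20)    = 0.57
        //       80/100 -> (80 + 10) / (100 + 20)  = 0.75

    A single smoothed-accuracy column plus games played then fits comfortably in one UITableView row.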

  • Blender: How to "meshify" an object I made from Bezier curves

    - by capcom
    I made a star shape using Bezier curves and extruded it (see pic below):

    What I want to do is give it a rounder look - not just around the edges by using beveling. I want it to kind of look like this (well, that shape anyway):

    How would I go about doing this? Please keep in mind that I am extremely new to Blender. I thought that I could somehow turn this star into one of those default shapes that have tonnes of squares which I could pull out, and apply a mirror to it so that the same thing happens on both sides. I really don't know how to do it and would appreciate your help.

  • Embed Unity3D and load multiple games from a single app

    - by Rafael Steil
    Is it possible to export an entire Unity3D project/game as an AssetBundle and load it on iOS/Android/Windows in an app that doesn't know anything about the game beforehand? What I have in mind is something like what the web plugin does - it loads a series of .unity3d files over HTTP and renders them inline in the browser window. Is it even possible to do something similar on iOS/Android? I have read a lot of docs so far, but still can't be sure:

    http://floored.com/blog/2013/integrating-unity3d-within-ios-native-application.html
    http://docs.unity3d.com/Manual/LoadingResourcesatRuntime.html
    http://docs.unity3d.com/Manual/AssetBundlesIntro.html

    The code from the post at http://forum.unity3d.com/threads/112703-Override-Unity-Data-folder-path?p=749108&viewfull=1#post749108 works for Android, but how about iOS and other platforms?

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working for a few months on a procedural spherical terrain generator with a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multifractal algorithm. For now I can only displace the vertices of the patches directly, using the output of the ridged multifractal. I don't understand how to generate height maps such that the vertices of a terrain patch can be mapped to pixels in the height map.

    The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating that position into (u, v) coordinates, determining the pixel position directly from the (u, v) coordinates, and determining the grayscale color based on the height. I feel as if this is the right approach, but there are a few things that further complicate my problem. First of all, I intend to use height maps with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning that for most of the pixels I don't have any vertices to sample for the RMF. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just displace vertices directly, as I currently am). I am pretty much following this paper very closely. This, essentially, is the part I am having trouble with:

        Using the cube-to-sphere mapping and the ridged multifractal algorithm previously
        described, a normalized height value ([0, 1]) is calculated. Using this height
        value, the terrain position is calculated and stored in the first three channels
        of the position map (RGB) - this will be used to calculate the normal map. The
        fourth channel (A) is used to store the height value itself, to be used in the
        height map.

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and which positions are sampled for the RMF to generate the pixels, if vertices alone cannot be used.
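    For what it's worth, the usual trick is to not involve the patch vertices at all: every height-map pixel has (u, v) coordinates within the patch, and the patch has known corner positions on its cube face, so each pixel can be bilinearly mapped onto the cube, normalized onto the sphere, and fed to the noise function directly. A sketch under those assumptions (Vector3, lerp, normalize and ridgedMultifractal are placeholders for the project's own types and noise):

        // Fill a texSize x texSize height map for one patch, given the patch's
        // corner positions on the cube face (corner00 is at (u,v) = (0,0), etc.).
        void fillPatchHeightmap(float* heights, int texSize,
                                const Vector3& corner00, const Vector3& corner10,
                                const Vector3& corner01, const Vector3& corner11)
        {
            for (int py = 0; py < texSize; ++py)
                for (int px = 0; px < texSize; ++px)
                {
                    // pixel centre -> (u,v) in [0,1] across the patch
                    float u = (px + 0.5f) / texSize;
                    float v = (py + 0.5f) / texSize;
                    // bilinear interpolation across the patch on the cube face...
                    Vector3 onCube = lerp(lerp(corner00, corner10, u),
                                          lerp(corner01, corner11, u), v);
                    // ...projected onto the unit sphere: the same mapping the vertices use
                    Vector3 onSphere = normalize(onCube);
                    // normalized height in [0, 1], as in the quoted paper
                    heights[py * texSize + px] = ridgedMultifractal(onSphere);
                }
        }

    This way the 192x192 pixels are 192x192 independent noise samples; the 16x16 vertex grid just samples the same continuous function at coarser spacing.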

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3D objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3D reflection vector in the classic way:

        half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb);

    I then use the vector to get the actual reflected color:

        half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0));

    The cubemap I am sampling from is a render target with 6 flat faces. So my question is: given the 3D world position of an arbitrary object, how can I make sure that I get accurate reflections of that object when I re-render the cubemap? Should I build the ViewProjection matrix in a specific way? Or is there some other approach?
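    For the re-rendering side, the conventional setup is one 90-degree square frustum per face, positioned at the cubemap's world-space centre. A sketch with generic math types (lookAt/perspective stand in for the project's matrix helpers; the up vectors follow the usual Direct3D cubemap convention, and some APIs flip them):

        struct CubeFaceCamera { Vector3 forward, up; };

        // Face order +X, -X, +Y, -Y, +Z, -Z.
        const CubeFaceCamera kFaces[6] = {
            { { 1, 0, 0}, {0, 1, 0} },  { {-1, 0, 0}, {0, 1, 0} },
            { { 0, 1, 0}, {0, 0,-1} },  { { 0,-1, 0}, {0, 0, 1} },
            { { 0, 0, 1}, {0, 1, 0} },  { { 0, 0,-1}, {0, 1, 0} }
        };

        // 90-degree FOV and aspect 1.0 make the six frusta tile the full sphere.
        Matrix proj = perspective(/*fovY=*/90.0f, /*aspect=*/1.0f, 0.1f, 1000.0f);
        for (int face = 0; face < 6; ++face)
        {
            Matrix view = lookAt(cubeCenter,
                                 cubeCenter + kFaces[face].forward,
                                 kFaces[face].up);
            // bind cubemap face `face` as the render target, then draw the scene
            renderScene(view, proj);
        }

    The one thing that must be exact is rendering from the reflector's centre, not the camera's: the reflection vector is looked up as a direction only, so nearby objects read back accurately only when the cube is captured from where the reflective surface sits.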

  • Overriding component behavior

    - by deft_code
    I was thinking about how to implement overriding of behaviors in a component-based entity system. A concrete example: an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage the character receives. Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has ever done this before, why do you think that is? Is there anything particularly wrong-headed about overriding component behaviors?

    Below is a rough sketch of how I imagine it would work. Components in an entity are ordered; those at the front get a chance to service an interface first. I don't detail how that is done, just assume it uses evil dynamic_casts (it doesn't, but the end effect is the same without the need for RTTI).

        class IHealth {
        public:
            virtual float get_health( void ) const = 0;
            virtual void do_damage( float amount ) = 0;
        };

        class Health : public Component, public IHealth {
        public:
            float get_health( void ) const { return m_health; }
            void do_damage( float amount ) { m_health -= amount; }
        private:
            float m_health;
        };

        class Armor : public Component, public IHealth {
        public:
            float get_health( void ) const { return next<IHealth>().get_health(); }
            void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); }
        };

        entity.add( new Health( 100 ) );
        entity.add( new Armor() );
        assert( entity.get<IHealth>().get_health() == 100 );
        entity.get<IHealth>().do_damage( 10 );
        assert( entity.get<IHealth>().get_health() == 95 );

    Is there anything particularly naive about the way I'm proposing to do this?

  • Can I become a Game Designer? [on hold]

    - by user32721
    This is my first time posting something on a forum in 4 years. I am posting this because I want to adjust my expectations and goals regarding game design. I am at college in Morocco (Al Akhawayn University) and just started my junior year. I am a communications major (school of humanities) and a gender studies minor. I want to become a video game designer; it is the only career I am interested in. I have been playing games ever since I was 5 and haven't stopped yet. Currently I don't have any noteworthy skills for becoming a designer. I don't know how to program (I don't really have the patience for it) and I can't draw to save my life. I haven't tried visual software like Maya or Max, so I can't comment on graphic design. So I basically want to know whether my current education is capable of helping me reach my goal. If not, should I take a master's in game design (in the U.S.?) or switch my minor to computer science? I am sorry that this post is long! I look forward to hearing your advice!

  • DrawIndexedPrimitives overdraws data in previous buffer if called in loop

    - by Daniel Excinsky
    I have posted this question on Stack Overflow as well, and will delete whichever copy doesn't end up giving me the answer. I have a Draw method in one of my renderers that loops through a dictionary and gets pre-collected and pre-initialized buffers. When the dictionary has only one element, everything is just fine. But with more elements, what I get on the screen is only the data from the last buffer (I suppose; I'm not sure). My Draw method:

        public void Draw(GameTime gameTime)
        {
            if (!_areStaticEffectsSet)
            {
                // blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas);
                blockEffect.Parameters["HorizonColor"].SetValue(World.HORIZONCOLOR);
                blockEffect.Parameters["NightColor"].SetValue(World.NIGHTCOLOR);
                blockEffect.Parameters["MorningTint"].SetValue(World.MORNINGTINT);
                blockEffect.Parameters["EveningTint"].SetValue(World.EVENINGTINT);
                blockEffect.Parameters["SunColor"].SetValue(World.SUNCOLOR);
                _areStaticEffectsSet = true;
            }
            blockEffect.Parameters["World"].SetValue(Matrix.Identity);
            blockEffect.Parameters["View"].SetValue(_player.CameraView);
            blockEffect.Parameters["Projection"].SetValue(_player.CameraProjection);
            blockEffect.Parameters["CameraPosition"].SetValue(_player.CameraPosition);
            blockEffect.Parameters["timeOfDay"].SetValue(_world.TimeOfDay);

            var viewFrustum = new BoundingFrustum(_player.CameraView * _player.CameraProjection);
            _graphicsDevice.BlendState = BlendState.Opaque;
            _graphicsDevice.DepthStencilState = DepthStencilState.Default;

            foreach (KeyValuePair<int, Texture2D> textureAtlas in textureAtlases)
            {
                blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas.Value);
                foreach (EffectPass pass in blockEffect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    // TODO: (original comment lost to a broken encoding)
                    foreach (Chunk chunk in _world.Chunks.Values)
                    {
                        if (chunk == null || chunk.IsDisposed)
                        {
                            continue;
                        }
                        if (chunk.BoundingBox.Intersects(viewFrustum) &&
                            chunk.GetBlockIndexBuffer(textureAtlas.Key) != null)
                        {
                            lock (chunk)
                            {
                                if (chunk.GetBlockIndexBuffer(textureAtlas.Key).IndexCount > 0)
                                {
                                    VertexBuffer vertexBuffer = chunk.GetBlockVertexBuffer(textureAtlas.Key);
                                    IndexBuffer indexBuffer = chunk.GetBlockIndexBuffer(textureAtlas.Key);
                                    //if (chunk.DrawIndex == new Vector3i(0, 0, 0))
                                    //{
                                    //    if (textureAtlas.Key == -1)
                                    //    {
                                    //        var varray = new[]
                                    //        {
                                    //            new VertexPositionTextureLight(new Vector3(0, 68, 0), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1)),
                                    //            new VertexPositionTextureLight(new Vector3(0, 68, 1), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1)),
                                    //            new VertexPositionTextureLight(new Vector3(1, 68, 0), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1))
                                    //        };
                                    //        var iarray = new short[] { 0, 1, 2 };
                                    //        vertexBuffer = new VertexBuffer(_graphicsDevice, typeof(VertexPositionTextureLight), varray.Length, BufferUsage.WriteOnly);
                                    //        indexBuffer = new IndexBuffer(_graphicsDevice, IndexElementSize.SixteenBits, iarray.Length, BufferUsage.WriteOnly);
                                    //        vertexBuffer.SetData(varray);
                                    //        indexBuffer.SetData(iarray);
                                    //    }
                                    //}
                                    _graphicsDevice.SetVertexBuffer(vertexBuffer);
                                    _graphicsDevice.Indices = indexBuffer;
                                    _graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                                        0, 0, vertexBuffer.VertexCount, 0, indexBuffer.IndexCount / 3);
                                }
                            }
                        }
                    }
                }
            }
        }

    Noteworthy things about the code:

    - The XNA version is 4.0.
    - I've commented out the debugging code in the loop, but left it in, as it may bring some insight.
    - I try to change not only the vertices/indices in the loop, but the textureAtlas also.

    Code in the shader for the textureAtlas:

        Texture TextureAtlas;
        sampler TextureAtlasSampler = sampler_state
        {
            texture = <TextureAtlas>;
            magfilter = POINT;
            minfilter = POINT;
            mipfilter = POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        struct VSInput
        {
            float4 Position : POSITION0;
            float2 TexCoords1 : TEXCOORD0;
            float SunLight : COLOR0;
            float3 LocalLight : COLOR1;
            float3 Normal : NORMAL0;
        };

    VertexPositionTextureLight is my own implementation of IVertexType. So, does anybody know about this problem, or see a mistake in my code (which is far more likely)?

  • How to monetize and/or protect framework rights?

    - by Arthur Wulf White
    I made a game engine/framework for ActionScript 3 that allows very efficient and flexible level design for platformers, tower defense games, RPGs, RTSes and racing games. The algorithms I used are new and are not available in any other level editor I've seen. What are the best ways to benefit myself and others with my new framework? It is written for ActionScript 3, so unless I translate it to C# I'm guessing it will be decompiled and used by others. I want a license that allows me to share the framework and still benefit from it. Any advice would be appreciated. This issue has been on my mind a lot this year, and I am hoping to find a solution that will bring me some relief.

  • Server-side Input

    - by Thomas
    Currently the client in my game is nothing but a renderer. When the input state changes, the client sends a packet to the server and moves the player as if it were processing the input, but the server has the final say on the position. This generally works really well, except for one big problem: falling off edges. Basically, if a player walks towards an edge, say a cliff, and stops right before going off it, sometimes a second later he'll be teleported off the edge anyway. This is because the "I stopped pressing W" packet arrives after the server has already processed the extra movement. Here's a lag diagram to help you understand what I mean: http://i.imgur.com/Prr8K.png

    I could just send a "W pressed" packet each frame for the server to process, but that seems a bandwidth-costly solution. Any help is appreciated!
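    One common middle ground (a protocol sketch, not the game's actual code): number each input transition and stamp it with the client tick it happened on, so the server can apply, or rewind to, the change at the right simulation tick instead of whenever the packet arrives. Resending the last few unacknowledged transitions keeps the cost far below a per-frame stream:

        #include <cstdint>
        #include <vector>

        // A single input transition, stamped with the client tick it occurred on.
        struct InputEvent {
            uint32_t sequence;   // monotonically increasing event id
            uint32_t clientTick; // simulation tick of the key-state change
            uint8_t  keyBits;    // bitmask of keys held AFTER the transition
        };

        // Each packet repeats every transition the server hasn't acknowledged,
        // so one lost packet can't strand the player in "still pressing W".
        struct InputPacket {
            uint32_t lastAckedSequence;
            std::vector<InputEvent> unacked; // usually only 1-3 entries
        };

    With the tick attached, the "stopped pressing W" case becomes: the server notices the stop happened at tick T, rewinds the player to its state at T, and re-simulates, so the cliff edge holds.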

  • How can I dynamically load the correct sprite from a sprite sheet?

    - by Leonard Challis
    I am making a simple card game in Unity. The game is based on a standard 52-card pack, with identical backs and unique faces. In my particular game, different cards are worth different values and have various special abilities. The game will have 52 cards on the table at all times (in the draw position, in the face-down deck, or in someone's hand), so this number won't change. I thought that making a Card prefab and instantiating 52 of them manually would be a bad idea. Even doing it in code, I thought, would be a bit OTT, and that I should instead only instantiate visual cards when they are face-up to the player. I have a sprite sheet of the 52 cards and the back, imported as a Sprite in multiple mode and sliced into a grid containing all the cards needed to play the game.

    The problem I now face is that, through my GameController script, I want to generate a shuffled pack, deal cards to each player, and then show those cards to a player. However, I am not sure of the best way, or even whether it's possible, to do this dynamically with the sprite sheet as it is. For instance, say I have the following:

        private CardRank rank;
        private CardSuite suite;

        private void Start()
        {
            this.rank = CardRank.Ace;
            this.suite = CardSuite.Spades;
        }

    This class would be instantiated by the game manager; I would have 52 of these in code. Whenever I have to visually show a card in the scene, I would use a card prefab, which is essentially a game object with a SpriteRenderer on it. I would need to dynamically load the correct sprite for this object from the sprite sheet. The sliced sprites from the sprite sheet actually have names in the format AS (ace of spades), 7H (seven of hearts), etc. - though this naming was a manual thing I did myself, of course. I have also tried various alternative solutions, including creating animations, having separate sprites not in a sprite sheet, and keeping the available sprites in an array with a specific index for each card, but none seem as elegant as loading the correct sprite at runtime, as I'm trying to. So, how do I load a specific sprite from a sprite sheet at runtime? I'm open to suggestions, even those that make me think differently about how to approach the problem.
