Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

  • QuadTree: store only points, or regions?

    - by alekop
    I am developing a quadtree to keep track of moving objects for collision detection. Each object has a bounding shape; let's say they are all circles (it's a 2D top-down game). I am unsure whether to store only the position of each object, or the whole bounding shape. If I work with points, insertion and subdivision are easy, because objects will never span multiple nodes. On the other hand, a proximity query for an object may miss collisions, because it won't take the objects' dimensions into account. How do you calculate the query region when you only have points? If I work with regions, how do I handle an object that spans multiple nodes? Should it be inserted in the nearest parent node that completely contains it, even if this exceeds the node's capacity? Thanks.
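    For what it's worth, a common compromise is to store regions: insert each object into the smallest node that fully contains its bounding box, letting objects that straddle a split line live in interior nodes (the alternative, storing points, requires inflating every query region by the largest object radius in the tree). A minimal sketch of that insertion rule in C#; the Rect and Entity types are hypothetical placeholders, not from any particular library:

        using System.Collections.Generic;

        // Hypothetical types: Rect is an AABB with Contains/Quadrant helpers,
        // Entity exposes its bounding box. Objects are kept in the smallest
        // node that *fully* contains them, so nothing lives in two leaves.
        class QuadNode
        {
            const int Capacity = 8;
            readonly Rect bounds;
            readonly List<Entity> items = new List<Entity>();
            QuadNode[] children; // null until subdivided

            public QuadNode(Rect bounds) { this.bounds = bounds; }

            public void Insert(Entity e)
            {
                if (children == null && items.Count >= Capacity)
                {
                    children = new QuadNode[4];
                    for (int i = 0; i < 4; i++)
                        children[i] = new QuadNode(bounds.Quadrant(i)); // NW/NE/SW/SE
                    // (redistribution of existing items omitted for brevity)
                }
                if (children != null)
                    foreach (var child in children)
                        if (child.bounds.Contains(e.Bounds)) // contains, not just overlaps
                        {
                            child.Insert(e);
                            return;
                        }
                items.Add(e); // straddles a split line, or this is a leaf: keep it here
            }
        }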

  • 3D Modeling Software for Programmer [closed]

    - by Pathachiever11
    I've recently learned how to make games with Unity3d, and now I want to start making games! I can't wait to start! However, before I can make 3D games, I need to learn 3D modeling for character design, level design, and some animation. What is the easiest 3D modeling software that is compatible with Unity3d? I do not want to spend too much time learning the software. From what I've heard, Blender is a bit complicated to use, while Maya and 3dsMax seem very powerful. Could someone point me in the right direction? I don't want to spend a lot of time learning. I know it's not that easy, but you all have experience and probably know which one is both easier and powerful. Could you recommend a package? Many thanks!

  • Tile-wide extent tracing on a grid

    - by Larolaro
    I'm currently working on A* pathfinding on a grid, and I'm looking to smooth the generated path while also considering the extent of the character moving along it. I'm using a grid for the pathfinding, but character movement is free-roaming, not strict tile-to-tile movement. To achieve a smoother, more efficient path, I do line traces on the grid to determine whether there are unwalkable tiles between tiles, so I can shave off unnecessary corners. However, because a line trace is zero-extent, it doesn't consider the extent of the character and gives bad results (it fails to return unwalkable tiles just missed by the line, causing unwanted collisions). So rather than a line algorithm that determines the tiles under it, I'm looking for one that determines the tiles under a line that is a tile wide. Does anyone have any ideas? I've been working with Bresenham's line and other alternatives, but I haven't yet figured out how to nail this specific problem.
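    One practical approach is to sweep the character's box along the segment: sample the line at sub-tile intervals and test every tile the box overlaps at each sample (an exact alternative is to run a supercover line over each of the two borders of the thick line). A hedged C# sketch; the grid callback, tile size and half-extent are placeholders:

        using System;
        using System.Numerics;

        // Samples the segment at half-tile intervals and tests every tile
        // covered by a box of the character's half-extent at each sample.
        static bool PathIsClear(Vector2 from, Vector2 to, float halfExtent,
                                float tileSize, Func<int, int, bool> isWalkable)
        {
            Vector2 delta = to - from;
            int steps = (int)(delta.Length() / (tileSize * 0.5f)) + 1;
            for (int i = 0; i <= steps; i++)
            {
                Vector2 p = from + delta * (i / (float)steps);
                int minX = (int)MathF.Floor((p.X - halfExtent) / tileSize);
                int maxX = (int)MathF.Floor((p.X + halfExtent) / tileSize);
                int minY = (int)MathF.Floor((p.Y - halfExtent) / tileSize);
                int maxY = (int)MathF.Floor((p.Y + halfExtent) / tileSize);
                for (int ty = minY; ty <= maxY; ty++)
                    for (int tx = minX; tx <= maxX; tx++)
                        if (!isWalkable(tx, ty)) return false; // blocked tile under the swept box
            }
            return true;
        }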

  • Vector collision detection/testing on polygons in 3D space?

    - by LRFLEW
    In the 3D FPS in Java I'm working on, I need a bullet to be fired and to tell if it hit someone. All visual objects in the game are defined through OpenGL, so the object it can collide with may be any drawable polygon (although they will most likely be triangles and rectangles anyway). The bullet is not an object, but will be treated as a vector that instantaneously moves all the way across the map (like the sniper rifle in Halo). What's the best way to detect/test collisions between that vector and the polygons? I have access to OpenCL, but I have absolutely no experience with it. I am very early in the development stage, so if you think there's a better way of going about this, feel free to tell me (I barely have a player model to collide with anyway, so I'm flexible). Thanks
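    Since the bullet is an instantaneous ray, the standard tool is a ray/triangle intersection per polygon (quads split into two triangles), keeping the nearest hit. A sketch of the well-known Möller-Trumbore test, written here in C# with System.Numerics, though the math ports directly to Java:

        using System.Numerics;

        // Möller-Trumbore ray/triangle intersection: returns true and the
        // distance t along dir if the ray from origin hits triangle (v0,v1,v2).
        static bool RayTriangle(Vector3 origin, Vector3 dir,
                                Vector3 v0, Vector3 v1, Vector3 v2, out float t)
        {
            t = 0f;
            const float Eps = 1e-7f;
            Vector3 e1 = v1 - v0, e2 = v2 - v0;
            Vector3 p = Vector3.Cross(dir, e2);
            float det = Vector3.Dot(e1, p);
            if (det > -Eps && det < Eps) return false;   // ray parallel to triangle
            float invDet = 1f / det;
            Vector3 s = origin - v0;
            float u = Vector3.Dot(s, p) * invDet;
            if (u < 0f || u > 1f) return false;
            Vector3 q = Vector3.Cross(s, e1);
            float v = Vector3.Dot(dir, q) * invDet;
            if (v < 0f || u + v > 1f) return false;
            t = Vector3.Dot(e2, q) * invDet;             // hit distance along dir
            return t > Eps;                              // hit in front of the origin
        }

    Run it over the triangles near the ray (a grid or BVH keeps that cheap) and keep the smallest t; if dir is normalized, also reject hits beyond the map extent.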

  • 2D Tile-based terrain generation

    - by a240
    As a summer project I decided it would be fun to make a Flash game. Right now I'm going for something like the look of http://www.terraria.org/. It's been a lot of fun, but today I've hit a snag: I need a way to generate my worlds. I've read up on Perlin noise ( http://freespace.virgin.net/hugo.elias/models/m_perlin.htm ) as a possibility, but my attempts have given me sporadic-looking results. What are some techniques used to generate these 2D tile-based worlds? Ideally I would like to be able to generate mountains, plains, and caves.
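    For the Terraria-style surface specifically, 1D midpoint displacement is an easy alternative to full 2D Perlin noise: displace a heightline recursively, halving the jitter each level, then fill tiles below the line. A minimal C# sketch (parameters are illustrative, tune to taste):

        using System;

        // 1D midpoint displacement: produces a jagged-but-coherent heightline.
        static int[] HeightLine(int width, int minH, int maxH, Random rng)
        {
            var h = new float[width];
            h[0] = rng.Next(minH, maxH);
            h[width - 1] = rng.Next(minH, maxH);
            Displace(h, 0, width - 1, (maxH - minH) * 0.5f, rng);
            var result = new int[width];
            for (int i = 0; i < width; i++) result[i] = (int)h[i];
            return result;
        }

        static void Displace(float[] h, int lo, int hi, float amp, Random rng)
        {
            if (hi - lo < 2) return;
            int mid = (lo + hi) / 2;
            // midpoint = average of endpoints plus random jitter in [-amp, amp]
            h[mid] = (h[lo] + h[hi]) * 0.5f + (float)(rng.NextDouble() * 2 - 1) * amp;
            Displace(h, lo, mid, amp * 0.5f, rng);
            Displace(h, mid, hi, amp * 0.5f, rng);
        }

    Caves are commonly carved in a second pass, e.g. fill the underground with random noise and run a few iterations of a cellular-automaton smoothing rule (a cell becomes solid if enough of its neighbours are solid).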

  • How can I mark a pixel in the stencil buffer?

    - by János Turánszki
    I never used the stencil buffer for anything until now, but I want to change this. I have an idea of how it should work: the GPU discards or keeps rasterized pixels before the pixel shader based on the stencil buffer value at the given position and some stencil operation. What I don't know is how I would mark a pixel in the stencil buffer with a specific value. For example, I draw my scene and want to mark everything drawn with a specific material (this material could be looked up from a texture, so ideally I should mark the pixel in the pixel shader), so that later, when I do some post-processing on my scene, I only do it on the marked pixels. I didn't find anything on the internet besides how to set up a stencil buffer and explanations of the different stencil operations. I was expecting to find a system-value semantic like SV_Depth to write to in the pixel shader (because the stencil buffer shares the same resource with the depth buffer in D3D11), but there is no such thing on MSDN. So how should I do this? If I am misunderstanding something, please help me clear that up.
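    For what it's worth: in D3D11 the pixel shader cannot output a stencil value (SV_StencilRef only arrived later, in D3D11.3); the reference value is fixed per draw call and stamped via the Replace stencil operation. So marking by material means drawing that material's geometry in a pass that stamps the stencil, and a texture-dependent mark is still possible by calling clip() in the pixel shader during that pass so rejected pixels never reach the output merger. A minimal sketch using XNA 4.0's DepthStencilState (the D3D11 equivalent is a D3D11_DEPTH_STENCIL_DESC plus OMSetDepthStencilState with the reference value):

        using Microsoft.Xna.Framework.Graphics;

        // Pass 1: draw the material's geometry with this state to stamp 1
        // into the stencil buffer wherever its pixels pass the depth test.
        var markState = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Always,
            StencilPass = StencilOperation.Replace,
            ReferenceStencil = 1,
        };

        // Pass 2 (post-processing): only pixels stamped with 1 are touched.
        var testState = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Equal,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };

        // usage: GraphicsDevice.DepthStencilState = markState; // ...draw material...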

  • GLSL normals not transforming properly

    - by instancedName
    I've been stuck on this problem for two days. I've read many articles about transforming normals, but I'm just totally stuck. I understand chopping off the W component to "turn off" translation, and doing the inverse/transpose transformation for the non-uniform scaling problem, but my bug seems to come from a different source. I've imported a simple ball into OpenGL. The only transformation I'm applying is a rotation over time. But when my ball rotates, the illuminated part of the ball moves around, just as it would if the light direction were changing. I just can't figure out what the problem is. Can anyone help me with this? Here's the GLSL code.

    Vertex shader:

        #version 440 core
        uniform mat4 World, View, Projection;
        layout(location = 0) in vec3 VertexPosition;
        layout(location = 1) in vec3 VertexColor;
        layout(location = 2) in vec3 VertexNormal;
        out vec4 Color;
        out vec3 Normal;

        void main()
        {
            Color = vec4(VertexColor, 1.0);
            vec4 n = World * vec4(VertexNormal, 0.0);
            Normal = n.xyz;
            gl_Position = Projection * View * World * vec4(VertexPosition, 1.0);
        }

    Fragment shader:

        #version 440 core
        uniform vec3 LightDirection = vec3(0.0, 0.0, -1.0);
        uniform vec3 LightColor = vec3(1.0);
        in vec4 Color;
        in vec3 Normal;
        out vec4 FragColor;

        void main()
        {
            float diffuse = max(0.0, dot(normalize(-LightDirection), normalize(Normal)));
            vec4 scatteredLight = vec4(LightColor * diffuse, 1.0);
            FragColor = min(Color * scatteredLight, vec4(1.0));
        }

  • Algorithm to map an area [on hold]

    - by user37843
    I want to create a crawler that starts in a room and moves north, east, west, and south from there until there aren't any new rooms to visit. I don't want duplicates, and I want the output format per line to be something like this: current room, neighbour 1, neighbour 2, ... In the end I want to apply the BFS algorithm to find the shortest path between two rooms. Can anyone offer me some suggestions on what to use? Thanks
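    For reference, the crawl and the shortest-path query can share the same machinery: BFS with a visited set handles both the "no duplicates" requirement and shortest paths in an unweighted room graph. A minimal C# sketch with a hypothetical Room type:

        using System.Collections.Generic;

        // Hypothetical room node: Neighbors holds the N/E/S/W exits (null = wall).
        class Room
        {
            public string Name;
            public Room[] Neighbors = new Room[4];
        }

        // BFS from start; cameFrom doubles as the visited set, then the path
        // is rebuilt by walking the predecessors back from the goal.
        static List<Room> ShortestPath(Room start, Room goal)
        {
            var cameFrom = new Dictionary<Room, Room> { [start] = null };
            var queue = new Queue<Room>();
            queue.Enqueue(start);
            while (queue.Count > 0)
            {
                var room = queue.Dequeue();
                if (room == goal) break;
                foreach (var next in room.Neighbors)
                {
                    if (next == null || cameFrom.ContainsKey(next)) continue; // wall or visited
                    cameFrom[next] = room;
                    queue.Enqueue(next);
                }
            }
            if (!cameFrom.ContainsKey(goal)) return null; // unreachable
            var path = new List<Room>();
            for (var r = goal; r != null; r = cameFrom[r]) path.Add(r);
            path.Reverse();
            return path;
        }

    The crawl/output step is the same loop without the goal check: when each room is dequeued, print it followed by its non-null neighbours.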

  • Best way to implement a simple bullet trajectory

    - by AirieFenix
    I searched and searched, and although it's a fairly simple question, I haven't found the proper answer, just general ideas (which I already have). I have a top-down game and I want to implement a gun which shoots bullets that follow a simple path (no physics nor change of trajectory; just a go-from-A-to-B thing). a: vector of the position of the gun/player. b: vector of the mouse position (cross-hair). w: the vector of the bullet's trajectory. So w = b - a, and the position of the bullet = [x = x0 + speed * time * normalized w.x, y = y0 + speed * time * normalized w.y]. I have the constructor:

        public Shot(int shipX, int shipY, int mouseX, int mouseY) {
            // I get the mouse with Gdx.input.getX()/getY()
            ...
            this.shotTime = TimeUtils.millis();
            this.posX = shipX;
            this.posY = shipY;
            // I used aVector = aVector.nor() here before, but for some reason it didn't work
            float tmp = (float) (Math.pow(mouseX - shipX, 2) + Math.pow(mouseY - shipY, 2));
            tmp = (float) Math.sqrt(Math.abs(tmp));
            this.vecX = (mouseX - shipX) / tmp;
            this.vecY = (mouseY - shipY) / tmp;
        }

    And here I update the position and draw the shot:

        public void drawShot(SpriteBatch batch) {
            this.lifeTime = TimeUtils.millis() - this.shotTime;
            // position = positionBefore + v*t
            this.posX = this.posX + this.vecX * this.lifeTime * speed * Gdx.graphics.getDeltaTime();
            this.posY = this.posY + this.vecY * this.lifeTime * speed * Gdx.graphics.getDeltaTime();
            ...
        }

    Now, the behavior of the bullet seems very awkward: it doesn't go exactly where my mouse is (it's like the mouse is 30px off), and it moves at a seemingly random speed. I know I probably need to open the old algebra book from college, but I'd like somebody to say whether I'm on the right track (or point me to it), and whether it's a calculation problem, a code problem, or both. Also, is it possible that Gdx.input.getX() gives me a non-precise position? Because when I draw the cross-hair it also draws off the cursor position. Sorry for the long post, and sorry if it's a very basic question. Thanks!
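    A likely culprit in drawShot: the displacement multiplies both lifeTime and the frame delta, so time is integrated twice and displacement grows quadratically, which reads as a random, accelerating speed; TimeUtils.millis() is also in milliseconds while speed is presumably tuned per second. A minimal sketch of per-frame integration (C# here, but it ports line-for-line to the libGDX Java):

        // Advance by velocity * dt each frame; do NOT also multiply by the
        // total lifetime, or time is integrated twice and the shot accelerates.
        class Shot
        {
            public float posX, posY;   // current position
            public float vecX, vecY;   // normalized direction, set in the constructor
            public float speed = 300f; // units per second (tune to taste)

            public void Update(float dt) // dt = frame time in seconds
            {
                posX += vecX * speed * dt;
                posY += vecY * speed * dt;
            }
        }

    As for the constant offset: Gdx.input.getY() is measured from the top-left of the screen with y pointing down, while world coordinates are usually y-up, so unproject the mouse position through the camera before computing the direction; that mismatch is a common source of exactly this kind of off-by-a-fixed-amount aiming error.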

  • What are the parameters of APEX destructible asset / actor, and what are the effect of them?

    - by Semih Kekül
    There are parameters of the NxDestructibleAsset such as: defaultBehaviorGroup.damageToRadius, destructibleParameters.fractureImpulseScale, p3BodyDescTemplate.density, structureSettings.useStressSolver, destructibleParameters.runtimeFracture.glass.firstSegmentSize, etc. However, I cannot find any document explaining these parameters. Are there any documents, videos, or code samples (anything) that explain them?

  • Game testing on Android - emulator or real devices?

    - by n00bfuscator
    I am working at a localization agency, and we have been approached by a client about testing their games on iOS as well as Android. Testing on iOS seems fairly easy, as we can just buy a couple of devices and we should be covered. For Android it seems to be completely different. From what I have found, the emulator can cover all API levels, screen sizes and such, but I hear it's buggy and that nothing can replace testing on real devices. With the vast number of Android devices out there and the rate at which new devices are released, it seems impossible to keep up. How can I test games (localization and functional) on Android, covering all compatible devices?

  • Manually updating HTML5 local storage?

    - by hustlerinc
    I'm just starting out in HTML5 game development (and game dev in general), and while watching all the videos and tutorials available, something has crossed my mind. Everyone keeps saying I should set the cookies (or cached files) to expire after a certain amount of time, so that when that time is reached the browser automatically downloads all the assets again, even if they are the same assets. Wouldn't it be possible to manually define the version of the game? For example, the user has downloaded all the files for version 1.01 of the game; when updating, I change a simple variable to 1.02. When the user logs in, it would compare his version to the current one, and only if they are not equal does it download the files. This could even be improved to download only specific files, depending on what needs to be updated. Would this be possible, or am I just dreaming? What are the possible downsides of this approach?

  • Passing data between engine layers

    - by spaceOwl
    I am building a software system (a game engine with networking support) that is made up of (roughly) these layers: Game Layer, Messaging Layer, Networking Layer. Game-related data is passed to the messaging layer (this could be anything game-specific), where it is converted into network-specific messages (which are then serialized to byte arrays). I'm looking for a way to convert "game" data into "network" data such that no strong coupling between these layers exists. As it looks now, the messaging layer sits between the other two (game and network) and "knows" both of them: it contains Converter objects that know how to translate between the data objects of both layers, back and forth. I am not sure this is the best solution. Is there a good design for passing objects between layers? I'd like to learn more about the different options.
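    One way to keep that middle layer from becoming a god object is one translator per message type behind a small interface, so neither outer layer ever references the other. A hedged C# sketch; all type names here are hypothetical placeholders:

        using System;
        using System.Collections.Generic;

        // One translator per game message type; only the messaging layer
        // references both the game types and the wire format.
        interface IMessageTranslator
        {
            bool CanTranslate(object gameEvent);
            byte[] ToWire(object gameEvent);   // game object -> network bytes
            object FromWire(byte[] payload);   // network bytes -> game object
        }

        class MessagingLayer
        {
            readonly List<IMessageTranslator> translators = new List<IMessageTranslator>();

            public void Register(IMessageTranslator t) => translators.Add(t);

            public byte[] Outgoing(object gameEvent)
            {
                foreach (var t in translators)
                    if (t.CanTranslate(gameEvent)) return t.ToWire(gameEvent);
                throw new InvalidOperationException("No translator registered");
            }
        }

    The game layer only ever hands plain event objects to MessagingLayer, and the network layer only ever sees byte arrays, which preserves the decoupling described above while keeping each conversion in its own small class.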

  • 2D scene graph not transforming relative to parent

    - by Dr.Denis McCracleJizz
    I am currently in the process of coding my own 2D scene graph, which is basically a port of Flash's render engine. The problem I have right now is that my rendering doesn't seem to be working properly. This code creates the localTransform property for each DisplayObject:

        Matrix m_transform = Matrix.CreateRotationZ(rotation) *
                             Matrix.CreateScale(scaleX, scaleY, 1) *
                             Matrix.CreateTranslation(new Vector3(x, y, z));

    This is my render code:

        float dRotation;
        Vector2 dPosition, dScale;
        Matrix transform;
        transform = this.localTransform;
        if (parent != null)
            transform = localTransform * parent.localTransform;
        DecomposeMatrix(ref transform, out dPosition, out dRotation, out dScale);
        spriteBatch.Draw(this.texture, dPosition, null, Color.White, dRotation,
            new Vector2(originX, originY), dScale, SpriteEffects.None, 0.0f);

    To test, I add the Stage, then a first DisplayObjectContainer to it, and then a second one inside the first. The problem is that I add the first DisplayObjectContainer at (400, 400) and the second one (the smallest one) within it at position (0, 0). So it should sit right over its parent, but for some reason it gets rendered offset within the parent by the same position the parent has (400, 400). It's just as if I apply the parent's localMatrix twice and then render the second cat there. This is the code I use to loop through all the children:

        base.Draw(spriteBatch);
        foreach (DisplayObject child in _childs)
        {
            child.Draw(spriteBatch);
        }
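    One thing worth checking in the render code: the draw composes only one level (localTransform * parent.localTransform), so anything deeper in the ancestry is lost, and the result depends on where each object's absolute position was baked in. The usual scene-graph rule is to compose with the parent's world transform recursively; a hedged C# sketch in the same XNA style, reusing the fields from the question:

        // World transform composes the local transform with the parent's
        // *world* transform, recursively up to the root, not just one level.
        public Matrix WorldTransform
        {
            get
            {
                Matrix local = Matrix.CreateRotationZ(rotation)
                             * Matrix.CreateScale(scaleX, scaleY, 1f)
                             * Matrix.CreateTranslation(x, y, z);
                return parent == null ? local : local * parent.WorldTransform;
            }
        }

    With this rule, each child stores only its position relative to its parent, so a child at (0, 0) inside a container at (400, 400) lands exactly on the container rather than at a doubled offset.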

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), to embed the game into a browser, and the server in C++ (an abstract part of which is already written and used with other games). Networking models may differ, but currently I'm leaning toward running the game's logic on both the client and the server, even though they're written in different languages; that is not the main problem, though.

    My previous game (a pretty big one, implemented by ~5 programmers over 1.5 years) was mainly "written" in spreadsheets, as structured objects with inheritance: we wrote a standalone tool which generated AS3 and C++ (the languages of the platforms the game was published on) from a specified spreadsheet file (.xls or .ods). That file contained ~50 tables with ~50 rows and ~50 columns each, and was mostly written by game designers who do not know any programming languages. But that game was single-player.

    Given that, for my current MMO I'm looking for some broad pipeline in which the following problems are solved: game object descriptions (which starships exist within the game, how much HP they have, how fast they move, what damage they deal...); action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells), where actions are transmitted through the server between clients; and influences (what happens when a specified action is applied to a specified object, e.g. "Ship A attacked Ship B: field 'HP' of Ship B is reduced by the amount of field 'damage' of Ship A"). Influences can be much more complicated, of course, e.g. "damage is doubled when the ship has >= 5 allies within a 200-unit range during the night", and so on.

    If such logic can be written within a "design document", it becomes easy to: let designers do their job without programmer intervention or any bug-prone programming; validate the described logic; and transfer (transform, convert) it to any programming language where it will be executed. Has anybody worked on something like that? Are there tools/engines/pipelines concerned with this? How do I handle all of these problems simultaneously in the best way, and am I framing my tasks and problems correctly?
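    Whichever tool generates the code, the runtime half of the "object descriptions" piece usually reduces to plain data records plus a generic loader, so designers edit tables and no game code changes. A hedged C# sketch; the CSV layout and field names are made up for illustration:

        using System.Collections.Generic;

        // Hypothetical ship definition row: Id;Hp;Speed;Damage
        class ShipDef
        {
            public string Id;
            public int Hp;
            public float Speed;
            public int Damage;
        }

        // Generic loader: one row per ship, designers edit the table only.
        static Dictionary<string, ShipDef> LoadShipDefs(string path)
        {
            var defs = new Dictionary<string, ShipDef>();
            foreach (var line in System.IO.File.ReadLines(path))
            {
                var f = line.Split(';');
                defs[f[0]] = new ShipDef
                {
                    Id = f[0],
                    Hp = int.Parse(f[1]),
                    Speed = float.Parse(f[2]),
                    Damage = int.Parse(f[3])
                };
            }
            return defs;
        }

    The actions and influences are the genuinely hard part; expressing them as data typically means designing a small rules DSL (conditions plus effects over named fields), which is exactly what the validation and multi-language generation steps then operate on.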

  • J2ME character animation with multiple sprite sheets

    - by Alex
    I'm working on a J2ME game and I want to have walking animations. Each direction of walking has a separate sprite sheet (i.e. one for walking up, one for walking right, etc.); I also have a static idle image for each direction, held together in a single file. I've tried to hold an array of sprites in my player class and then just draw the sprite corresponding to the current direction, but this doesn't seem to work. I'm aware that if I combine all the animations into one sprite sheet I could set up different animation sequences, but I want to be able to do it with separate images for each animation. Does anyone know of a way to achieve this? Ideally without too much extra code (as opposed to combining the sprites into one sheet).

  • Publishing a game -- any way to target both WP7 and Win8 Store?

    - by Rei Miyasaka
    I'm at a dilemma that, it seems, will soon face a lot of developers. If I build a game in XNA, I won't be able to publish it on the Windows 8 Store, as it would be a classic application, and classic applications can't be sold on the Store. If I build a game in Metro DirectX, I would be able to sell it on the Store, but porting it to Windows Phone would involve porting it to Reach XNA, which would likely take more effort than porting to OS X or Android, both of which support C++. Of all the WinRT APIs supported across C++/JS/.NET, DirectX can only be programmed from C++. It's also unlikely that Microsoft will update Windows 7 or Vista to support the new DirectX features, although that would make Metro DirectX the first new version of DirectX to stop supporting the immediately preceding OS. If I build a game in pre-Win8 DirectX 9/10/11, I won't be able to sell it on the Windows Store or Windows Phone, but I could sell it on something like Steam. It would also involve the most manual plumbing; in fact, DirectWrite, despite being part of DirectX 11, doesn't talk to Direct3D. I'm getting really tired of all these restrictions, artificial and otherwise, and I'm coming to a point where I'm considering switching to a platform with a less fragmented API, like Android or Mac/iOS. As far as bringing a game to market goes, excluding the actual market share of any platforms I might consider, what other factors would help me make a decision? Just a few years ago this question was a lot easier to answer: if you were primarily concerned with Windows platforms, all you had to decide was whether you wanted DirectX, XNA, or something like SlimDX. If you made the wrong decision, no biggie; all you really would have lost is Xbox and the fairly small Windows Phone market.

  • Intersection points of plane set forming convex hull

    - by Toji
    Mostly looking for a nudge in the right direction here. Given a set of planes (each defined as a normal and a distance from the origin) that form a convex hull, I would like to find the intersection points that form the corners of that hull. More directly, I'm looking for a way to generate a point cloud appropriate to provide to Bullet. Bonus points if someone knows of a way I could give Bullet the plane list directly, since I somewhat suspect that's what it's building on the back end anyway.
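    The classic brute-force recipe: intersect every combination of three planes, discard candidates that lie outside any plane, and deduplicate; what remains are the hull's corners. A C# sketch of the three-plane intersection (System.Numerics; planes stored as normal n and distance d, with the hull interior satisfying n·x <= d):

        using System;
        using System.Numerics;

        // Intersection point of three planes n·x = d, when it exists.
        // Keep a returned point only if n·p <= d + epsilon for ALL planes.
        static bool Intersect3(Vector3 n1, float d1, Vector3 n2, float d2,
                               Vector3 n3, float d3, out Vector3 p)
        {
            float det = Vector3.Dot(n1, Vector3.Cross(n2, n3));
            p = default;
            if (MathF.Abs(det) < 1e-6f) return false; // two planes (near-)parallel
            p = (d1 * Vector3.Cross(n2, n3)
               + d2 * Vector3.Cross(n3, n1)
               + d3 * Vector3.Cross(n1, n2)) / det;
            return true;
        }

    On the bonus question: if memory serves, Bullet ships a helper for exactly this in LinearMath, btGeometryUtil::getVerticesFromPlaneEquations, which is worth checking before rolling your own.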

  • Where can I find affordable legal advice for game software related inquiries?

    - by Steven Lu
    I am working on simulation middleware applicable to game engine implementations. I would like to make it freely available for all non-commercial purposes, while at the same time imposing some percentage of royalty on revenue (above a certain threshold) derived from my work; something very similar to Epic's UDK licensing model. To facilitate the use of my software, I plan to offer binaries (static libs) for several platforms, as well as obfuscated source code which I will freely distribute, in addition to documentation of the API. I simply want to impose the restriction that if you make money from it, I eventually get a cut. I'm wondering if there are online forums and such where I am likely to find people who can help me learn what I have to do to get this down in the right kinds of documents. So far a site like this seems to be the most promising.

  • 3D trajectory - calculate initial velocity

    - by Skoder
    Hey, I've got a 2D projectile code sample working, but I would like to extend it to 3D. How would I calculate the initial velocity on the Z-axis? At the moment, I've got: initVel.X = (float)Math.Cos(45.0); initVel.Y = (float)Math.Sin(45.0); How would I convert this to work in 3D (and add the initial velocity for the Z-axis)? In my example, X is across, Y is up and down, and Z is going into the screen. I also normalize the vector and multiply it by the speed. Thanks
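    One way to extend it is to treat the aim as a yaw (heading around the Y axis) plus a pitch (elevation). Note also that Math.Cos/Math.Sin take radians, so the 45.0 in the 2D sample is read as 45 radians, not 45 degrees. A C# sketch assuming an XNA-style Vector3 (any vector type works the same):

        // Build a unit direction from yaw and pitch, both in radians
        // (convert degrees first: rad = deg * Math.PI / 180).
        // Axes match the question: X across, Y up, Z into the screen.
        static Vector3 Direction(float yaw, float pitch)
        {
            float cp = (float)Math.Cos(pitch);
            return new Vector3(
                cp * (float)Math.Cos(yaw),  // X
                (float)Math.Sin(pitch),     // Y
                cp * (float)Math.Sin(yaw)); // Z
        }

        // usage: initVel = Direction(yaw, pitch) * speed;  // already unit length

    The cos(pitch) factor scales the horizontal (XZ) component so the whole vector stays unit length, which makes the final normalize-and-multiply step a no-op.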

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made one and it's time to publish, you wonder how much of your audience is covered by the game's technology demands. I'm directing this essentially at casual games, as I constantly see people with old laptops they are unable to replace; laptops with integrated cards whose OpenGL version doesn't even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience to consider giving the chance to play casual games, since they cannot play any blockbusters. A very noticeable example I've seen is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here), and still it uses such high-resolution textures that at least OpenGL 2.0 or so is needed, which locks out a lot of people. So, the actual question is: what is a good tradeoff here? Would it be better to sacrifice texture resolution for everyone, but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing a little performance? What else? Any ideas about this topic are welcome for discussion.

  • XNA HLSL tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering, and at the main draw phase it gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To be clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget. So now I need another piece of information to be stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255)! I tried saving the textures in PNG/TGA formats, and turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a diffuse map, a normal map and a specular map) I was only able to read alpha successfully from the diffuse map! Here is how I add the specular and normal maps to my model's material in the content processor:

        if (geometry.Material.Textures.ContainsKey(normalMapKey))
        {
            ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
            geometry.Material.Textures.Remove("NormalMap");
            geometry.Material.Textures.Add("NormalMap", texRef);
        }
        ...
        foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
        {
            if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap"))
                mat.Textures.Add(texture.Key, texture.Value);
        }

    In the shader I obviously use:

        float4 data = tex2D(specularMapSampler, TexCoords);

    So data.a is always 1 in my case. Could you suggest a reason?
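    One frequent cause, offered as a guess rather than a diagnosis: the XNA content pipeline compresses model textures to DXT by default, and DXT1 carries only 1-bit alpha, so any alpha that is not fully transparent reads back as 1.0; the source file format (PNG/TGA) makes no difference once the pipeline has recompressed it. A sketch of forcing uncompressed textures via a custom model processor (the processor name here is made up):

        using Microsoft.Xna.Framework.Content.Pipeline;
        using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
        using Microsoft.Xna.Framework.Content.Pipeline.Processors;

        // Keeps model textures uncompressed so the specular map's alpha
        // channel survives import (DXT1 stores only 1-bit alpha).
        [ContentProcessor(DisplayName = "Model (no texture compression)")]
        public class UncompressedModelProcessor : ModelProcessor
        {
            public override ModelContent Process(NodeContent input,
                                                 ContentProcessorContext context)
            {
                TextureFormat = TextureProcessorOutputFormat.Color; // full 8-bit RGBA
                return base.Process(input, context);
            }
        }

    This would also be consistent with the diffuse map working: diffuse textures with alpha tend to end up in a format that preserves it, while packed auxiliary maps get the default compression path.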

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows:

        struct GBuffer
        {
            float4 Depth    : DEPTH0;  // depth render target
            float4 Normal   : COLOR0;  // normal render target
            float4 Diffuse  : COLOR1;  // diffuse render target
            float4 Specular : COLOR2;  // specular render target
        };

    This works fine for flat surfaces, but I'm trying to implement relief mapping, which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target; however, this has no impact on z-culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

  • Understanding Unity3d physics: where is the force applied?

    - by Heisenbug
    I'm trying to understand the right way to apply forces to a RigidBody. I noticed that there are AddForce and AddRelativeForce methods, one applied in the world-space coordinate system and the other in local space. The thing I do not understand is the following: usually in physics libraries (e.g. Bullet) we can specify the force vector and also the force application point. How can I do this in Unity? Is it possible to apply a force vector at a specific point relative to the given RigidBody's coordinate system? Where does AddForce apply the force?
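    For reference: AddForce and AddRelativeForce both apply the force at the rigidbody's center of mass, so they never induce torque. Applying a force at a chosen point is what Rigidbody.AddForceAtPosition is for; it computes the resulting torque for you. A minimal Unity C# sketch, with the force and application point chosen purely for illustration:

        using UnityEngine;

        public class Thruster : MonoBehaviour
        {
            public Rigidbody body;

            void FixedUpdate()
            {
                // World-space force applied at a world-space point: produces
                // both linear acceleration and torque about the center of mass.
                Vector3 force = transform.forward * 10f;
                Vector3 point = transform.TransformPoint(new Vector3(0f, 0f, -1f));
                body.AddForceAtPosition(force, point);
            }
        }

    Since AddForceAtPosition takes world-space arguments, a point given relative to the body's own coordinate system is first converted with transform.TransformPoint, as above.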

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles about the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics, for example, points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002, however, and I'm wondering if things are different now. There has been some research into performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6); would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games: what algorithms are used, and why? In particular, is view-dependent continuous simplification used, or does the runtime overhead make discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm (e.g. vertex clustering) used to generate them offline, do artists manually create the models, or is a combination of both methods used?
