Search Results

Search found 19855 results on 795 pages for 'game console'.


  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources on this, but I haven't found one that matches my coordinate system, and I'm having massive trouble adjusting any of those solutions to my needs. What I've learned is that the best way to do this is with a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. The original post included an image of the coordinate system. How do I transform a point on screen to this coordinate system?
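
    Since the exact axes aren't visible here, a generic sketch of the matrix route (C#; the two axis vectors are assumptions you would read off your own layout): write down where one map step along x and one along y land in screen space, then invert that 2x2 matrix to go the other way. Note that for a *staggered* layout the result still needs a per-row parity correction afterwards, because the stagger itself isn't linear.

        // Where one map step along x and along y land in screen space.
        // The numbers below are assumptions (a 64x32 diamond), not from the question.
        static readonly float ax = 32f, ay = 16f;    // screen offset per map x step
        static readonly float bx = -32f, by = 16f;   // screen offset per map y step

        static void MapFromScreen(float sx, float sy, out float mapX, out float mapY)
        {
            float det = ax * by - bx * ay;           // determinant of the axis matrix
            mapX = (by * sx - bx * sy) / det;        // apply the 2x2 inverse
            mapY = (-ay * sx + ax * sy) / det;
        }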

    Read the article

  • Placing & deleting element(s) from an object (stack)

    - by Chris
    Hello, I'm initialising two stack objects:

        Stack s1 = new Stack(), s2 = new Stack();
        // s1 = 0 0 0 0 0 0 0 0 0 0 (an array of 10 elements, empty to start with), top: 0

        const int Rows = 10;
        int[] Table = new int[Rows];

        public void TableStack(int[] Table)
        {
            for (int i = 0; i < Table.Length; i++)
            {
            }
        }

    My question is: how exactly do I place an element on the stack (push) or take an element from the stack (pop), as in the following?

        // Push:
        s1.Push(5);             // s1 = 5 0 0 0 0 0 0 0 0 0 (top: 1)
        s1.Push(9);             // s1 = 5 9 0 0 0 0 0 0 0 0 (top: 2)

        // Pop:
        int number = s1.Pop();  // s1 = 5 0 0 0 0 0 0 0 0 0 (top: 1); 9 got removed

    Do I have to use get and set, and if so, how exactly do I implement this with an array? I'm not really sure what to use for this. Any hints highly appreciated. The program uses the following driver to test the Stack class (which cannot be changed or modified):

        public void ExecuteProgram()
        {
            Console.Title = "StackDemo";
            Stack s1 = new Stack(), s2 = new Stack();
            ShowStack(s1, "s1");
            ShowStack(s2, "s2");
            Console.WriteLine();
            int getal = TryPop(s1);
            ShowStack(s1, "s1");
            TryPush(s2, 17);
            ShowStack(s2, "s2");
            TryPush(s2, -8);
            ShowStack(s2, "s2");
            TryPush(s2, 59);
            ShowStack(s2, "s2");
            Console.WriteLine();
            for (int i = 1; i <= 3; i++)
            {
                TryPush(s1, 2 * i);
                ShowStack(s1, "s1");
            }
            Console.WriteLine();
            for (int i = 1; i <= 3; i++)
            {
                TryPush(s2, i * i);
                ShowStack(s2, "s2");
            }
            Console.WriteLine();
            for (int i = 1; i <= 6; i++)
            {
                getal = TryPop(s2); // use number
                ShowStack(s2, "s2");
            }
        } /*ExecuteProgram*/

    Regards.
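
    For what the question asks, a minimal array-backed sketch (no get/set properties needed; top always points at the next free slot, matching the traces above). The driver's TryPush/TryPop presumably wrap these calls with bounds checks instead of exceptions:

        public class Stack
        {
            private readonly int[] items = new int[10]; // 0 0 0 0 0 0 0 0 0 0
            private int top;                            // starts at 0

            public void Push(int value)
            {
                if (top == items.Length) throw new System.InvalidOperationException("stack full");
                items[top++] = value;   // store, then advance top
            }

            public int Pop()
            {
                if (top == 0) throw new System.InvalidOperationException("stack empty");
                int value = items[--top];
                items[top] = 0;         // optional: restore the "empty" look from the traces
                return value;
            }
        }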

    Read the article

  • Blender: How to "meshify" an object I made from Bezier curves

    - by capcom
    I made a star shape using Bezier curves and extruded it (the original post included a picture). What I want to do is give it a rounder look overall, not just around the edges by using beveling; I want it to look more like the second reference shape in the post. How would I go about doing this? Please keep in mind that I am extremely new to Blender. I thought that I could somehow turn this star into one of those default shapes that have tonnes of squares which I could pull out, and apply a mirror to it so that the same thing happens on both sides. I really don't know how to do it, and would appreciate your help.

    Read the article

  • Android Loading Screen: How do I use a stack to load elements?

    - by tom_mai78101
    I have some problems figuring out what value I should put in this call:

        int value_needed_to_figure_out = X;
        ProgressBar.incrementProgressBy(value_needed_to_figure_out);

    I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() inside a Handler.post(new Runnable()) call. I got most of the concept of using the Handler to update the ProgressBar while pretending to do some heavy crunching work, so I kept looking. I have read this thread: How do I load chunks of data from an asset manager during a loading screen? It said I could push everything that needs loading onto a stack, adding to a size counter as I add elements. What does that mean? This is the part where I'm totally stumped. If anyone can provide some hints, I'll gladly appreciate it. Thanks in advance.
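
    As far as I understand the suggestion, it means: push every loading step onto a stack up front, remember the count, and derive the increment from it. A sketch of the idea (C#-flavoured, hypothetical names throughout; the Android version would do the pop on the background thread and post the increment to the UI handler):

        Stack<Action> workItems = BuildLoadQueue();   // hypothetical: one entry per asset
        int total = workItems.Count;                  // the "size counter"
        int progressMax = 100;
        int incrementPerItem = progressMax / total;   // the value the question asks about
        int currentProgress = 0;

        while (workItems.Count > 0)
        {
            workItems.Pop()();                        // load one asset
            currentProgress += incrementPerItem;      // i.e. incrementProgressBy(incrementPerItem)
        }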

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue because the z component maps to blue, and normals point out of the surface along z, so blue dominates. But if that's true, shouldn't normal maps be exactly one color, i.e. pure blue, with no other shades at all (not even shades of blue)? Tangent space is by definition aligned with the surface at each point, so the stored normal should always point along z (the blue direction) with no x (red) or y (green) component. A "normal map" should then just record that normal (Z = blue component = 1, R = 0, G = 0), and the whole map should be a single flat blue with no shades in between. But normal maps are not like that; they contain gradients of shades. Why is this so?

    Read the article

  • Blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model something from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only lets you see the background pictures in orthographic mode, and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed. It should at least give an idea of how my ECS is currently laid out. As you can see in the diagram, I have basically split the HUD out of the ECS. The HUD pieces have their own set of things (HudLayer, HudComponent, etc.) and are handled differently. This is where I'm struggling, though. There are many instances in which the HUD will need to know about entities: not just data changing (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to query the HUD for data. Let's take a couple of examples. First, my equipment screen. On it I can change the equipment on a character (Entity). For that to happen, I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example would be my battle system: each "team" is given a 3x3 grid it can move around in. Skills target these cells, not the player, so I need a way for my systems to determine which cells are occupied and which are not. Basically, I need two-way communication between my Systems and my HUD. I know it's recommended (by some people, anyway) to take your HUD out of the ECS. Is that appropriate in my case?
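
    One decoupling that fits both examples: give each side a narrow interface instead of the whole object. A sketch (the type names are mine, not from the diagram):

        // The battle system implements this; the HUD only ever sees the interface.
        public interface IGridQuery
        {
            bool IsCellOccupied(int column, int row);
        }

        // The equipment screen asks for an entity by id through a similar seam,
        // rather than holding the entity itself.
        public interface IEntityLookup
        {
            Entity GetEntity(int entityId);
        }

    That keeps the HUD outside the ECS while still letting both sides ask their questions; the event dispatcher stays responsible for pushing changes the other way.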

    Read the article

  • How To Smoothly Animate From One Camera Position To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory: I have a scene with many cameras and I'd like to switch smoothly from one to another. I am not looking for a cross-fade effect, but rather for the camera to move and rotate so that it reaches the next camera's point of view, and so on. To this end I have tried the following code:

        firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime * smooth);
        firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime * smooth);
        firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime * smooth);
        firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime * smooth);

    But the result is actually not that good.
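
    For comparison, a sketch that interpolates position and rotation as wholes instead of component by component (Unity C#; duration and the captured start values are assumptions). Lerping each rotation component separately is the likely culprit for the poor result, since the x/y/z/w of a quaternion can't be blended independently:

        float t = 0f;
        float duration = 1f;        // seconds for the transition; an assumption
        Vector3 startPos;           // captured when the switch begins
        Quaternion startRot;        // likewise

        void Update()
        {
            t = Mathf.Min(t + Time.deltaTime / duration, 1f);  // 0 -> 1 over 'duration'
            transform.position = Vector3.Lerp(startPos, nextCamera.transform.position, t);
            transform.rotation = Quaternion.Slerp(startRot, nextCamera.transform.rotation, t);
        }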

    Read the article

  • Mouse Clicks, Reactive Extensions and StreamInsight Mashup

    I had an hour spare this afternoon, so I wanted to have another play with Reactive Extensions in .NET and StreamInsight. I also didn't want to simply use a console window as a way of gathering events, so I decided to use a Windows form instead. The task I set myself was this: whenever I click on my form, I want to subscribe to the event and output its location and timestamp to the console window. In addition, for every mouse click I do, I want to know how many mouse clicks have happened in the last 5 seconds.

    The second point here is really interesting. I have often found this when working with people on problems: it is how you ask the question that determines how you tackle the problem. I will show two ways of possibly answering the second question, depending on how the question is interpreted. As a side effect of this example I will show how time in StreamInsight can stand still. This is an important concept, and we can see it in the output later.

    Now to the code. I will break it all down in this blog post, but you can download the solution and see it all together. I created a console application and then instantiated a Windows form:

        frm = new Form();
        Thread g = new Thread(CallUI);
        g.SetApartmentState(ApartmentState.STA);
        g.Start();

    CallUI looks like this:

        static void CallUI()
        {
            System.Windows.Forms.Application.Run(frm);
            frm.Activate();
            frm.BringToFront();
        }

    Now what we need to do is create an observable from the MouseClick event on the form. For this we use the Reactive Extensions:

        var lblevt = Observable.FromEvent<MouseEventArgs>(frm, "MouseClick").Timestamp();

    As mentioned earlier, I have two objectives in this example, and to solve the first I am again going to use the Reactive Extensions. Let's subscribe to the MouseClick event and output the location and timestamp to the console:

        lblevt.Subscribe(evt =>
        {
            Console.WriteLine("Clicked: {0}, {1} ", evt.Value.EventArgs.Location, evt.Timestamp);
        });

    That takes care of objective #1, but what about the second objective? For that we need some temporal windowing, and that means StreamInsight. First we need to turn our observable collection of MouseClick events into a point stream:

        Server s = Server.Create("Default");
        Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("MouseClicks");
        var input = lblevt.ToPointStream(
            a,
            evt => PointEvent.CreateInsert(
                evt.Timestamp,
                new
                {
                    loc = evt.Value.EventArgs.Location.ToString(),
                    ts = evt.Timestamp.ToLocalTime().ToString()
                }),
            AdvanceTimeSettings.IncreasingStartTime);

    Now that we have created our point stream, we need to do something with it, and this is where we get to our second objective. It is pretty clear that we want some kind of windowing, but what kind? Here is one way of doing it. It might not be what you wanted, but again, it depends on how the second objective is interpreted:

        var q = from i in input.TumblingWindow(TimeSpan.FromSeconds(5), HoppingWindowOutputPolicy.ClipToWindowEnd)
                select new { CountOfClicks = i.Count() };

    The above code creates tumbling windows of 5 seconds and counts the number of events in each window. If there are no events in the window, then no result is output. Likewise, until an event (MouseClick) is issued, we do not see anything in the output (that is not strictly true, because it is the CTIs strapped to our MouseClick events that flush the events through the StreamInsight engine, not the events themselves). This approach is centred around the windows and not the events.
    Until the windows complete and a CTI is issued, no events are pushed through. An alternative way of answering our second question is below:

        var q = from i in input.AlterEventDuration(evt => TimeSpan.FromSeconds(5))
                               .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                select new { CountOfClicks = i.Count() };

    In this code we extend the duration of each MouseClick to five seconds and then create snapshot windows over those events. Snapshot windows are discussed in detail here. With this solution we are centred around the events; it is the events that are driving the output. Let's have a look at the output from this solution, as it may be a little confusing. First, though, let me show how we get the output from StreamInsight into the console window:

        foreach (var x in q.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti))
        {
            Console.WriteLine(x.Payload.CountOfClicks);
        }

    OK, so now to the output. One thing that will help in reading it: for our point stream we set the issuing of CTIs to IncreasingStartTime. This means each CTI is placed right at the start of its event, so it will not flush the event with which it was issued, but it will flush those prior to it. The original post illustrated the output with two tables: a timeline in which blue cells marked the clicks, yellow spans marked the five-second duration and boundaries of each event, and a row of numbers gave the count of events in each snapshot. Reconstructed as a list, the clicks and the counts they flushed were:

        Click at 22:40:16  ->  (no output yet)
        Click at 22:40:18  ->  1
        Click at 22:40:20  ->  2
        Click at 22:40:22  ->  3, 2
        Click at 22:40:24  ->  3, 2
        Click at 22:40:32  ->  3, 2, 1

    What we can see here is that the counts include all the end edges that have occurred between the mouse clicks. If we look specifically at the mouse click at 22:40:32, we see that three events are returned to us:

        End edge count at 22:40:25
        End edge count at 22:40:27
        End edge count at 22:40:29

    Another thing we notice is that until we actually issue a CTI at 22:40:32, those last three snapshot-window counts are never reported. Hopefully this has helped to explain a few concepts around StreamInsight and the IObservable pattern. You can download this solution from here and play. You will need the Reactive Framework from here and StreamInsight 1.1.

    Read the article

  • DrawIndexedPrimitives overdraws data in previous buffer if called in loop

    - by Daniel Excinsky
    I cross-posted this question from Stack Overflow, and will delete whichever copy doesn't end up producing the answer. I have a Draw method in one of my renderers that loops through a dictionary and gets pre-collected, pre-initialised buffers. When the dictionary has only one element, everything is just fine. But with more elements, what I get on the screen is only the data from the last buffer (I suppose; not sure). My Draw method:

        public void Draw(GameTime gameTime)
        {
            if (!_areStaticEffectsSet)
            {
                // blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas);
                blockEffect.Parameters["HorizonColor"].SetValue(World.HORIZONCOLOR);
                blockEffect.Parameters["NightColor"].SetValue(World.NIGHTCOLOR);
                blockEffect.Parameters["MorningTint"].SetValue(World.MORNINGTINT);
                blockEffect.Parameters["EveningTint"].SetValue(World.EVENINGTINT);
                blockEffect.Parameters["SunColor"].SetValue(World.SUNCOLOR);
                _areStaticEffectsSet = true;
            }
            blockEffect.Parameters["World"].SetValue(Matrix.Identity);
            blockEffect.Parameters["View"].SetValue(_player.CameraView);
            blockEffect.Parameters["Projection"].SetValue(_player.CameraProjection);
            blockEffect.Parameters["CameraPosition"].SetValue(_player.CameraPosition);
            blockEffect.Parameters["timeOfDay"].SetValue(_world.TimeOfDay);

            var viewFrustum = new BoundingFrustum(_player.CameraView * _player.CameraProjection);
            _graphicsDevice.BlendState = BlendState.Opaque;
            _graphicsDevice.DepthStencilState = DepthStencilState.Default;

            foreach (KeyValuePair<int, Texture2D> textureAtlas in textureAtlases)
            {
                blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas.Value);
                foreach (EffectPass pass in blockEffect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    // TODO: (non-ASCII comment garbled in the original) VertexBuffer / IndexBuffer
                    foreach (Chunk chunk in _world.Chunks.Values)
                    {
                        if (chunk == null || chunk.IsDisposed)
                        {
                            continue;
                        }
                        if (chunk.BoundingBox.Intersects(viewFrustum) && chunk.GetBlockIndexBuffer(textureAtlas.Key) != null)
                        {
                            lock (chunk)
                            {
                                if (chunk.GetBlockIndexBuffer(textureAtlas.Key).IndexCount > 0)
                                {
                                    VertexBuffer vertexBuffer = chunk.GetBlockVertexBuffer(textureAtlas.Key);
                                    IndexBuffer indexBuffer = chunk.GetBlockIndexBuffer(textureAtlas.Key);
                                    //if (chunk.DrawIndex == new Vector3i(0, 0, 0))
                                    //{
                                    //    if (textureAtlas.Key == -1)
                                    //    {
                                    //        var varray = new[]
                                    //        {
                                    //            new VertexPositionTextureLight(new Vector3(0, 68, 0), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1)),
                                    //            new VertexPositionTextureLight(new Vector3(0, 68, 1), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1)),
                                    //            new VertexPositionTextureLight(new Vector3(1, 68, 0), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1))
                                    //        };
                                    //        var iarray = new short[] { 0, 1, 2 };
                                    //        vertexBuffer = new VertexBuffer(_graphicsDevice, typeof(VertexPositionTextureLight), varray.Length, BufferUsage.WriteOnly);
                                    //        indexBuffer = new IndexBuffer(_graphicsDevice, IndexElementSize.SixteenBits, iarray.Length, BufferUsage.WriteOnly);
                                    //        vertexBuffer.SetData(varray);
                                    //        indexBuffer.SetData(iarray);
                                    //    }
                                    //}
                                    _graphicsDevice.SetVertexBuffer(vertexBuffer);
                                    _graphicsDevice.Indices = indexBuffer;
                                    _graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertexBuffer.VertexCount, 0, indexBuffer.IndexCount / 3);
                                }
                            }
                        }
                    }
                }
            }
        }

    Noteworthy things about the code: the XNA version is 4.0. I've commented out the debugging code in the loop but left it in, since it may offer some insight. In the loop I try to change not only the vertices/indices but also the textureAtlas.
    Code in the shader for the texture atlas:

        Texture TextureAtlas;

        sampler TextureAtlasSampler = sampler_state
        {
            texture = <TextureAtlas>;
            magfilter = POINT;
            minfilter = POINT;
            mipfilter = POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        struct VSInput
        {
            float4 Position : POSITION0;
            float2 TexCoords1 : TEXCOORD0;
            float SunLight : COLOR0;
            float3 LocalLight : COLOR1;
            float3 Normal : NORMAL0;
        };

    VertexPositionTextureLight is my own implementation of IVertexType. So, does anybody know about this problem, or see what's wrong in my code (which is far more likely)?

    Read the article

  • Implementing RLE in a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for the tiles in my 2D world; the third dimension comes in when moving down into caves and whatnot. That wasn't memory efficient, so I switched to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile can no longer occupy the same (x, y) space as a tile on another z level. My current structure gives each block its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    However, now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I try to use the same space on, say, the floor below, it won't let me. Does anyone have any ideas on how I can add a z axis to my 2D array? I'm using Java, but I reckon the concept carries across languages.

    Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array whose 4th dimension controls how many tiles to skip? Or would the x axis change altogether and have large gaps in between - for example, would [5][y][z] skip 5 tiles? Is there something really obvious here that I'm missing? The number of z levels I'm trying to have is around 66, and it would be preferable to allow up to (or more than) 1000 in x and y.
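
    To make the RLE idea concrete: rather than a 4D array, each (x, y) column stores a short list of runs, and the z lookup walks the runs. A sketch in C# (names are mine; the same shape works in Java with an ArrayList per column):

        struct Run
        {
            public ushort Length;   // how many consecutive z levels this run covers
            public byte BlockType;  // 0 = air, etc.
        }

        // One run list per column; 1000 z levels of mostly-identical blocks
        // usually collapse to a handful of runs.
        List<Run>[,] columns;

        byte BlockAt(int x, int y, int z)
        {
            foreach (Run run in columns[x, y])
            {
                if (z < run.Length) return run.BlockType;
                z -= run.Length;    // skip past this run
            }
            return 0;               // above the topmost run: air
        }

    Writes are the fiddly part (a changed block may split a run into two or three), which is why RLE pays off most for read-heavy maps.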

    Read the article

  • What is UVIndex and how do I use it in OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model also has a "UVIndex" and I'm not sure where I am supposed to put it. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);

        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);

        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file; I've never drawn a model that uses a UVIndex, so I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
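
    For context: .fbx data typically indexes positions and UVs separately; the vertex index buffer picks positions and the UVIndex picks texture coordinates, corner by corner. GL supports only one index buffer for all attributes, so the usual fix is to expand the arrays so a single index addresses both. A sketch of that expansion (shown in C# for illustration; the array names are assumptions, and triangulated data is assumed):

        var outPositions = new List<float>();
        var outUVs = new List<float>();
        var outIndices = new List<ushort>();

        for (int i = 0; i < vertexIndices.Length; i++)   // one entry per polygon corner
        {
            int v = vertexIndices[i];   // into positions, 3 floats per vertex
            int t = uvIndices[i];       // into uvs, 2 floats each; this is the UVIndex data
            outPositions.Add(positions[3 * v + 0]);
            outPositions.Add(positions[3 * v + 1]);
            outPositions.Add(positions[3 * v + 2]);
            outUVs.Add(uvs[2 * t + 0]);
            outUVs.Add(uvs[2 * t + 1]);
            outIndices.Add((ushort)i);  // indices become trivial after expansion
        }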

    Read the article

  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command (like the instructions in assembler). When parsing a line I have to map the requested command to actual code. My current solution looks like this:

        std::string op, param1, param2;
        // parse line, identify op, param1, param2
        ...
        // call command-specific code
        if (op == "MOV")      ExecuteMov(AsNumber(param1));
        else if (op == "ROT") ExecuteRot(AsNumber(param1));
        else if (op == "SZE") ExecuteSze(AsNumber(param1));
        else if (op == "POS") ExecutePos(AsNumber(param1), AsNumber(param2));
        else if (op == "DIR") ExecuteDir(AsNumber(param1), AsNumber(param2));
        else if (op == "SET") ExecuteSet(param1, AsNumber(param2));
        else if (op == "EVL") ...

    The more commands are supported, the more string comparisons I'll have to do to identify and call the associated method. Can you point me to a more efficient implementation for the described scenario?
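
    The usual replacement for a comparison chain is a hash table from opcode to handler; in C++ that would be std::unordered_map<std::string, std::function<...>>. The shape of the idea, sketched in C# to match the rest of this page (ReportUnknownOp is a hypothetical error handler):

        var dispatch = new Dictionary<string, Action<string, string>>
        {
            ["MOV"] = (p1, p2) => ExecuteMov(AsNumber(p1)),
            ["ROT"] = (p1, p2) => ExecuteRot(AsNumber(p1)),
            ["SZE"] = (p1, p2) => ExecuteSze(AsNumber(p1)),
            ["POS"] = (p1, p2) => ExecutePos(AsNumber(p1), AsNumber(p2)),
            ["DIR"] = (p1, p2) => ExecuteDir(AsNumber(p1), AsNumber(p2)),
            ["SET"] = (p1, p2) => ExecuteSet(p1, AsNumber(p2)),
        };

        // One amortised O(1) lookup per line instead of a growing if/else chain:
        if (dispatch.TryGetValue(op, out var handler))
            handler(param1, param2);
        else
            ReportUnknownOp(op);

    Build the table once at startup; interning or hashing the opcodes at tokenisation time squeezes out the remaining string cost.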

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision-detection broad phase:

    Unsorted arrays: check every object against every other object. O(n^2) time; O(log n) space. It's so slow it's useless unless n is really small.

        for (i = 1; i < objects; i++)
            for (j = 0; j < i; j++)
                narrowPhase(i, j);

    Sorted arrays: sort the objects, so that you get O(n^(2 - 1/k)) time for k dimensions, i.e. O(n^1.5) for 2D and O(n^1.67) for 3D, with O(n) space. Assuming the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check the bucket, its children and its parents. O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps, and vice versa?
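
    As a concrete version of the sorted-array option, here is a one-axis sweep-and-prune sketch (C#; an Obj type with MinX/MaxX bounds is assumed). Sorting costs O(n log n), the inner loop only runs while x-intervals actually overlap, and the extra space is O(1) beyond the list itself:

        objs.Sort((a, b) => a.MinX.CompareTo(b.MinX));
        for (int i = 0; i < objs.Count; i++)
        {
            // Only objects whose interval starts before ours ends can overlap on x.
            for (int j = i + 1; j < objs.Count && objs[j].MinX <= objs[i].MaxX; j++)
                NarrowPhase(objs[i], objs[j]);
        }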

    Read the article

  • 3D Paint tool in Maya

    - by Joris
    Hey everyone, could someone help me out with this question? When I try to texture objects in Maya (for example a barrel), I want to texture them with a brush. But when I use the 3D Paint Tool, everything turns black, even when I have an image file selected. Also, the resolution of the models and textures in Maya is very, very low, even though the texture image is very high quality. Can someone please help me? Thanks so much for helping.

    Read the article

  • Render full-screen gradient or texture

    - by Filip Skakun
    What's the simplest way to fill the background of the screen with a gradient or a texture in Direct3D 10/11? I'm building a Windows 8 Metro app in which the camera never moves and I render some content in D3D, but I need to fill the background with something other than a solid color. Do I need to figure out the size and position of a rectangle and position it in 3D space, or is there a simpler solution? I don't care about depth at all; I don't use any depth buffer, since all my content is sorted back to front, so I could just start by drawing the background.
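
    One simple route skips world-space geometry entirely: define the quad directly in clip space, where no camera or matrices apply, and let the pixel shader interpolate the corner colors into the gradient. A sketch of the vertex data (layout and colors are assumptions; the vertex shader just passes positions through):

        float[] fullscreenQuad =
        {
            //  x,   y,    r,     g,     b      (triangle strip; z is unused since
            -1f, -1f,  0.05f, 0.10f, 0.30f,  //  depth is ignored anyway)
             1f, -1f,  0.05f, 0.10f, 0.30f,
            -1f,  1f,  0.50f, 0.70f, 1.00f,
             1f,  1f,  0.50f, 0.70f, 1.00f,
        };

    Draw it first each frame with no depth buffer bound (as described above), then render the sorted content over it. A texture works the same way, with UVs in place of the per-corner colors.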

    Read the article

  • Group Matchmaking

    - by Simon Kérouack
    Consider different groups (of 1 or more players) queuing together. We want to make two opposing teams, each containing the same number of players, while keeping the groups intact. At the same time, we want to make both teams' average rankings as close as possible. Now also assume that our working set is the subset of groups currently queuing within a given ranking range. For example, say we have the following groups, ordered by queuing time:

        Id  Players  TotalRank  AvgRank
        0   3        126        42
        1   2        60         30
        2   1        25         25
        3   2        80         40
        4   1        40         40
        5   1        20         20
        6   3        150        50

    For this specific subset, the expected output should ideally be:

        team1: groups 0, 1    (total: 186)
        team2: groups 2, 5, 6 (total: 195)

    Up to now, the solution I have been using is to balance the teams by having each team pick the highest-ranking group in the subset, turn by turn. The team that picks is the one with the currently lower average rank, unless one team is already full; in that case the other team tries to complete itself with groups that make the rank gap as small as possible. This solution turns out to have issues with frequent edge cases, and I'm looking for a better solution, or some fine-tuning that could be made. In most cases players seem to want teams of 5 and queue in groups of 2. Our average subset when two teams of 5 are chosen is about 14 players, if that is of any help.
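
    For reference, the picking scheme described above written out (C#; Team/Group types with the obvious fields are assumed, and a real version needs a fallback for when a group fits neither team):

        var remaining = groups.OrderByDescending(g => g.AvgRank).ToList();
        foreach (var g in remaining)
        {
            Team pick;
            if (teamA.PlayerCount + g.PlayerCount > teamSize)      pick = teamB;
            else if (teamB.PlayerCount + g.PlayerCount > teamSize) pick = teamA;
            else pick = teamA.AvgRank <= teamB.AvgRank ? teamA : teamB; // lower average picks
            pick.Add(g);
        }

    The edge cases cluster around the tail of the loop, where capacity rather than rank dictates the pick; exhaustively evaluating the few possible final assignments and keeping the one with the smallest rank gap is one possible fine-tune.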

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes; an example of such a scene was pictured in the original post. I'm wondering which acceleration data structure would be most efficient in this case, since all objects are touching, but on the other hand the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box?

    A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact photon data structure they introduce, photon power is stored as 4 chars, which as I understand it is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux on the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). My other question about photon mapping is whether, in my case, it would be more efficient to store photons per object (or even per object triangle) instead of using a balanced kd-tree. And the same bounding-box question applies to photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?

    Read the article

  • Where can I find free or buy "next-gen" 3D Assets?

    - by Valmond
    Usually I buy 3D assets from sites like turbosquid.com or similar. My problem is that I have lately implemented glow, normal maps, specular (and specular power) maps and reflection maps, and I can't find any models that use those techniques. So where can I find or buy "next-gen" assets (at least models/items with a normal map)? I have checked for similar posts, but those I found are either about free assets only, or about 2D, or about 'ordinary' 3D, so I hope this is not a duplicate.

    Read the article

  • Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when I drag them. I am comparing my application to the standard TouchGestureSample from MSDN. For some reason, in my application the gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and both run on the standard Windows Phone emulator. I set my breakpoint immediately after this line of code in a simple Update method:

        GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app:

        Delta    = {X:-13.56522 Y:4.166667}
        Position = {X:184.6956 Y:417.7083}

    Typical values in the sample app:

        Delta    = {X:7 Y:16}
        Position = {X:497 Y:244}

    Has anyone seen this issue? Does anyone have any suggestions? Thank you.

    Read the article

  • Foreach loop with 2d array of objects

    - by Jacob Millward
    I'm using a 2D array of objects to store data about tiles, or "blocks", in my game world. I initialise the array, fill it with data, and then attempt to invoke the Draw method of each object:

        foreach (Block block in blockList)
        {
            block.Draw(spriteBatch);
        }

    I end up with an exception being thrown: "Object reference not set to an instance of an object". What have I done wrong?

    EDIT: This is the code used to define the array:

        Block[,] blockList;

    Then:

        blockList = new Block[screenRectangle.Width, screenRectangle.Height];

        // Fill with dummy data
        for (int x = 0; x <= screenRectangle.Width / texture.Width; x++)
        {
            for (int y = 0; y <= screenRectangle.Height / texture.Width; y++)
            {
                if (y >= screenRectangle.Height / (texture.Width * 2))
                {
                    blockList[x, y] = new Block(1, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
                else
                {
                    blockList[x, y] = new Block(0, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
            }
        }
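
    For what it's worth, a likely cause can be read off the code above: the array is allocated with one cell per *pixel* (Width x Height of the screen rectangle), while the fill loops step one cell per *tile*, so almost every element is still null when the foreach reaches it. A sketch of a tile-sized allocation (the +1 matches the <= bounds above; note the loops use texture.Width for the y bound as well):

        int cols = screenRectangle.Width / texture.Width + 1;
        int rows = screenRectangle.Height / texture.Width + 1;
        Block[,] blockList = new Block[cols, rows];

    Alternatively, keep the allocation and skip empty cells in the draw loop: if (block != null) block.Draw(spriteBatch);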

    Read the article

  • Rendering oily/polluted water?

    - by Fraser
    Do any shader wizards out there have an idea of how to achieve an oily/polluted-water effect, similar to the reference image in the original post? Ideally, the water would not be uniformly oily; instead, the oil would be generated from some source (such as a polluting drain from a chemical plant) and then diffuse throughout the water body. My thought for this part would be to keep an "oil map" as a 2D texture that determines the density of oil at each point on the water surface. It would diffuse and move naturally with the water velocity at each point (I have a wave-particle simulation for dynamic waves, and am already doing something similar for foam on the water surface). However, I'm not sure how physically correct that would be, since oil might not move at the same velocity as the water. And I have no idea how to make all those trippy colors :-). Thoughts?

    Read the article

  • Implementing top view physics using box2D

    - by humbleBee
    How can top-view physics games be done in Box2D? One idea I have is to set the linear velocity of an object manually, or to alter its linear and angular damping as it moves over different surfaces. For example, if my object is over a wet surface it will have less linear damping, and if it is over a rough surface it will have more. And to see whether my object has fallen over an edge, I'll try to use an AABB and check if it's still inside, or manually test object.x > boundary.x, etc. Is there any better way?
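
    A sketch of the damping idea (assuming a C# Box2D port where Body exposes a settable LinearDamping, as Farseer does; the Surface enum and lookup are mine): check the surface under the body each step and set damping from it, so ice coasts and rough ground bleeds speed quickly.

        enum Surface { Normal, Ice, Rough }          // hypothetical surface types

        float DampingFor(Surface s) =>
            s == Surface.Ice   ? 0.2f :
            s == Surface.Rough ? 4.0f :
                                 1.0f;               // default surface

        void UpdateBody(Body body)
        {
            Surface s = SurfaceAt(body.Position);    // hypothetical tile/surface lookup
            body.LinearDamping = DampingFor(s);
            // AngularDamping could follow the same rule for spin.
        }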

    Read the article

  • Deformation of Sphere using Transformations

    - by Mert Toka
    I have a graphics-related question. I need a transformation matrix, but I have no idea what it should be. The problem is to produce the deformed sphere on the right from the sphere on the left (the original post included the image; I created those images in Maya, but I need the matrices for a graphics course). Our professor told us to use some sine and cosine in our transformations, but I have no idea what he meant. My own thought was to intersect the sphere with a plane from the grid (the xz plane) and then scale down the resulting circle. Would that work? I also checked this paper; however, it looks a bit advanced for me, and I guess it isn't about the same kind of information I was looking for. It would be great if you could help me.

    Read the article

  • OpenGL fovx question

    - by Nick
    To boil my question down to its simplest form: I fear I am oversimplifying how mat4.perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and the actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, Nick
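
    For reference, perspective projections scale the tangent of the half-angle, not the angle itself, so the horizontal field of view satisfies

        \tan(\mathrm{fov}_x / 2) = \mathrm{aspect} \cdot \tan(\mathrm{fov}_y / 2)

    With fovy = 45 and aspect = 2, tan(fovx/2) = 2 tan(22.5 deg) ≈ 0.828, giving fovx ≈ 79.3 degrees rather than 90. At z = 50 the right edge of the frustum therefore sits at x = 50 * 0.828 ≈ 41.4, which lines up with the ≈ 41.1 observed.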

    Read the article
