Search Results

Search found 16473 results on 659 pages for 'game logic'.

Page 280 of 659

  • Render to other render targets, starting from one already rendered to

    - by JTulip
    I have to perform a double-pass convolution on a texture that is actually the color attachment of one render target, and store the result in the color attachment of ANOTHER render target. This must be done multiple times, always using the same texture as the starting point. What I do now is (a bit abstracted, but the parts I have abstracted are guaranteed to work individually):

        renderOnRT(firstTarget); // This is working.

        for each other RT currRT {
            glBindFramebuffer(GL_FRAMEBUFFER, currRT.frameBufferID);

            programX.use();
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, firstTarget.colorAttachmentID);
            programX.setUniform1i("colourTexture", 0);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, firstTarget.depthAttachmentID);
            programX.setUniform1i("depthTexture", 1);
            glBindBuffer(GL_ARRAY_BUFFER, quadBuffID); // quadBuffID is a VBO for a screen-aligned quad. It is fine.
            programX.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
            glDrawArrays(GL_QUADS, 0, 4);

            programY.use();
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, currRT.colorAttachmentID); // The second pass is done on the previous pass
            programY.setUniform1i("colourTexture", 0);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, currRT.depthAttachmentID);
            programY.setUniform1i("depthTexture", 1);
            glBindBuffer(GL_ARRAY_BUFFER, quadBuffID);
            programY.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    The problem is that I end up with black textures instead of the wanted result. The GLSL programs (programX and programY) work fine; they have already been tested on single targets. Is there something stupid I am missing? Even a hint is much appreciated, thanks!

    Read the article

  • XNA - Render texture to a rendertarget 2d via SpriteBatch error

    - by Jared B
    I've got some simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(targetScene, new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height), Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from Draw(GameTime gameTime): first drawScene, then drawPostProcessing. The thing is, I can't run the code, because I get "the render target must not be set on the device when it is used as a texture" at the line spriteBatch.Draw(targetScene, new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height), Color.White). I already found the solution, which is to copy the actual render target (targetScene) to a texture so it doesn't keep a reference to the bound render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(OutputTarget);
        SpriteBatch.Draw(InputTarget, ...);
        GraphicsDevice.SetRenderTarget(null);

    which runs into the exact same problem I'm having right now. So, the question I'm asking is: how would I render InputTarget to OutputTarget without reference issues?
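    A common way around this in XNA 4.0 is to make sure a target is only ever sampled after SetRenderTarget has switched away from it, ping-ponging through a second target before resolving to the backbuffer. A minimal C# sketch of that pattern (the field names sceneTarget/bloomTarget and the commented-out effect hookup are placeholders, not taken from the post):

        // Sketch inside a Game subclass: draw the scene into sceneTarget,
        // sample sceneTarget while bloomTarget is bound, then sample
        // bloomTarget while the backbuffer is bound.
        RenderTarget2D sceneTarget, bloomTarget;   // assumed to be created in LoadContent

        protected override void Draw(GameTime gameTime)
        {
            // Pass 1: scene -> sceneTarget
            GraphicsDevice.SetRenderTarget(sceneTarget);
            GraphicsDevice.Clear(Color.CornflowerBlue);
            // ... draw world geometry here ...

            // Pass 2: sceneTarget -> bloomTarget (sceneTarget is no longer bound, so it may be sampled)
            GraphicsDevice.SetRenderTarget(bloomTarget);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
            // blurEffect.CurrentTechnique.Passes[0].Apply();   // optional post effect
            spriteBatch.Draw(sceneTarget, GraphicsDevice.Viewport.Bounds, Color.White);
            spriteBatch.End();

            // Pass 3: bloomTarget -> backbuffer
            GraphicsDevice.SetRenderTarget(null);
            spriteBatch.Begin();
            spriteBatch.Draw(bloomTarget, GraphicsDevice.Viewport.Bounds, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }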

    Read the article

  • Does somebody know of test cases for libRocket?

    - by Bjorn
    Today I implemented the interfaces for libRocket in my engine using OpenGL 3.3. I got a standard RML file and some fonts and images which were needed in this RML file. It seems that the page/RML I'm rendering now is the correct one, but I'm not 100% sure, partly because I still don't know a lot of RML and it is a fairly complex file. So my question: are there any test cases, for example an RML file with images and fonts together with the result as it should be rendered, in a PNG or some other image format? If there aren't, would somebody who has implemented the libRocket interface correctly be so kind as to share a PNG of a rendering along with the RML that was rendered?

    Read the article

  • Camera lookAt target changes when rotating parent node

    - by Michael IV
    I have the following issue. I have a camera with a lookAt method which works fine. I have a parent node to which I parent the camera. If I rotate the parent node while keeping the camera's lookAt on the target, the camera's lookAt changes too. That is not what I want to achieve. I need it to work like in Adobe After Effects when you parent a camera to a null object: when the null object is rotated, the camera starts orbiting around the target while still looking at the target. What I do currently is multiply the parent's model matrix with the camera's model matrix, which is calculated from the lookAt() method. I am sure I need to decompose (or recompose) one of the matrices before multiplying them. The parent model or the camera model? Can anyone here show the right way of doing it? UPDATE: The parent is just a node. The child is the camera. The parented camera in After Effects works like this: if you rotate the parent node while the camera looks at the target, the camera actually starts orbiting around the target based on the parent rotation. In my case the parent rotation also changes the camera's lookAt direction, which IS NOT what I want. Hope it is clear now.
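    One way to get the After Effects-style behaviour is to let the parent node affect only the camera's position and rebuild the orientation from the lookAt constraint every frame; as a sketch (notation mine, not from the post):

    $$P_{cam}' = M_{parent}\,P_{cam}, \qquad V = \mathrm{lookAt}\big(P_{cam}',\ P_{target},\ \vec{up}\big)$$

    i.e. transform the camera position (as a point) by the parent's model matrix, ignore the rotational part of the parent for the camera's orientation, and recompute the view matrix with lookAt toward the unchanged target. The camera then orbits the parent's pivot while always facing the target.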

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture of what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left-hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generated backwards, in the wrong direction -- I think. But the problem is a little deeper -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    VERTEX:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    FRAGMENT:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting
            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); // max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert );

            vec4 LightingColor = ( DiffuseElement + AmbientElement );
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if( LightCoordinate.w > 0.0 )
            {
                ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor;
            // Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect
            gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st );
        }

    I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing unusual is going in. When I view from the light's point of view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct command at the end of the GLSL shader. That's the image when the last line is just gl_FragColor = LightingColor;. Maybe someone has some idea of what I screwed up?
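    For comparison, the usual form of the depth test (written with both depths remapped to the [0, 1] range, the same remapping the vertex shader already applies to xy; the exact bias handling here is an assumption, not taken from the post):

    $$s = \frac{\text{LightCoordinate}}{\text{LightCoordinate}_w}, \qquad \text{ShadowFactor} = \begin{cases} 0.5 & \text{if } d_{map}(s_{xy}) + \epsilon < s_z \\ 1.0 & \text{otherwise} \end{cases}$$

    with the final colour being the lit colour scaled by the shadow factor rather than multiplied by the shadow-map texel itself.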

    Read the article

  • Collision detection when pathfinding with pathnodes, UDK

    - by Dave Voyles
    I'm trying to create a class that allows my AIController to path find using pathnodes (NOT NavMeshes). It's doing a swell job of going from point to point in a set order (although I would like for it to be a random patrol at some point), but it gets caught up on collision from time to time. I.e. he'll walk the same set path, and when he runs into the blocks in the middle of the map he continues to rub against them until they end, and then continues on his merry way to the next path node. How can I prevent this from happening, or at least have him move away from the wall if he does a trace and detects that it is there? It looks like I need to use MoveToward() instead of MoveTo(), as MoveToward allows the pawn to adjust its course during movement. I'm just not sure how to use those parameters. Mougli has a decent tutorial on it, but I can't seem to get it to work correctly with my pathnode array.

        class PathfindingAIController extends UDKBot;

        var array<PathNode> Waypoints;
        var int _PathNode; // declare it at the start so you can use it throughout the script
        var int CloseEnough;

        simulated function PostBeginPlay()
        {
            local PathNode Current;

            super.PostBeginPlay();

            // add the pathnodes to the array
            foreach WorldInfo.AllActors(class'Pathnode', Current)
            {
                Waypoints.AddItem( Current );
            }
        }

        simulated function Tick(float DeltaTime)
        {
            local int Distance;
            local Rotator DesiredRotation;

            super.Tick(DeltaTime);

            if (Pawn != None)
            {
                // Smoothly rotate the pawn towards the focal point
                DesiredRotation = Rotator(GetFocalPoint() - Pawn.Location);
                Pawn.FaceRotation(RLerp(Pawn.Rotation, DesiredRotation, 3.125f * DeltaTime, true), DeltaTime);
            }

            Distance = VSize2D(Pawn.Location - Waypoints[_PathNode].Location);
            if (Distance <= CloseEnough)
            {
                _PathNode++;
            }
            if (_PathNode >= Waypoints.Length)
            {
                _PathNode = 0;
            }

            GoToState('Pathfinding');
        }

        auto state Pathfinding
        {
        Begin:
            if (Waypoints[_PathNode] != None) // make sure there is a pathnode to move to
            {
                MoveTo(Waypoints[_PathNode].Location); // move to it
                `log("STATE: Pathfinding");
            }
        }

        DefaultProperties
        {
            CloseEnough=400
            bIsplayer=True
        }

    Read the article

  • How to prevent 2D camera rotation if it would violate the bounds of the camera?

    - by Andrew Price
    I'm working on a Camera class and I have a rectangle field named Bounds that determines the bounds of the camera. I have it working for zooming and moving the camera so that the camera cannot exit its bounds. However, I'm a bit confused about how to do the same for rotation. Currently I allow rotation around the camera's Z-axis. However, if the camera is sufficiently zoomed out, rotating it can show areas of the screen outside the camera's bounds. I'd like to deny the rotation if it would mean that the newly rotated camera exposes areas outside the camera's bounds, but I'm not quite sure how. I'm still new to matrix and vector math, and I'm not quite sure how to check whether the newly rotated camera sees outside of its bounds and, if so, undo the rotation. Here's an image showing the problem: http://i.stack.imgur.com/NqprC.png The red is out of bounds, and as a result the camera should never be allowed to rotate itself like this. This seems like it would be a problem with all rotated values, but this is not the case when the camera is zoomed in enough. Here are the current member variables for the Camera class:

        private Vector2 _position = Vector2.Zero;
        private Vector2 _origin = Vector2.Zero;
        private Rectangle? _bounds = Rectangle.Empty;
        private float _rotation = 0.0f;
        private float _zoom = 1.0f;

    Is this possible to do? If so, could someone give me some guidance on how to accomplish this? Thanks. EDIT: I forgot to mention that I am using a transformation-matrix-style camera that I pass in to SpriteBatch.Begin. I am using the same transformation matrix from this tutorial.
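    One workable check is to build the candidate view matrix, unproject the four screen corners with its inverse, and reject the rotation if any corner lands outside Bounds. A C# sketch (it assumes the common translate/rotate/scale SpriteBatch camera matrix and a known viewport rectangle; neither is spelled out in the post):

        // Returns true if a camera with the given rotation would still see only points inside bounds.
        // viewport is the screen rectangle; the matrix layout mirrors the usual SpriteBatch camera tutorials.
        bool RotationKeepsCameraInBounds(float candidateRotation, Rectangle bounds, Rectangle viewport)
        {
            Matrix view =
                Matrix.CreateTranslation(new Vector3(-_position, 0f)) *
                Matrix.CreateRotationZ(candidateRotation) *
                Matrix.CreateScale(_zoom, _zoom, 1f) *
                Matrix.CreateTranslation(new Vector3(_origin, 0f));

            Matrix inverse = Matrix.Invert(view);

            Vector2[] screenCorners =
            {
                new Vector2(viewport.Left,  viewport.Top),
                new Vector2(viewport.Right, viewport.Top),
                new Vector2(viewport.Right, viewport.Bottom),
                new Vector2(viewport.Left,  viewport.Bottom),
            };

            foreach (Vector2 corner in screenCorners)
            {
                Vector2 world = Vector2.Transform(corner, inverse);
                if (!bounds.Contains((int)world.X, (int)world.Y))
                    return false;   // this corner would show an out-of-bounds area
            }
            return true;
        }

    The rotation property's setter can call this first and simply keep the old angle when it returns false.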

    Read the article

  • Entity component system -> handling components that depend on one another

    - by jtedit
    I really like the idea of an entity component system and feel it has great flexibility, but I have a question: how should dependent components be handled? I'm not talking about how components should communicate with other components they depend on (I have that sorted), but rather how to ensure components are present. For example, an entity cannot have a "velocity" component if it doesn't have a "position" component, in the same way that it can't have an "acceleration" component if it doesn't have a "velocity" component. My first idea was that every component class overrides an "onAddedToEntity(Entity ent)" function, and in that function it checks that prerequisite components are also added to the entity, e.g.:

        struct EntCompVelocity : public EntityComponent {
            // member variables here
            void onAddedToEntity(Entity ent) {
                if (!ent.hasComponent(EntCompPosition::Id)) {
                    ent.addComponent(new EntCompPosition());
                }
            }
        };

    This has the nice property that if the acceleration component adds the velocity component, the velocity component will itself add the position component to the entity, so dependency "trees" will sort themselves out. However, my concern is that if I do this, components will silently be added with default values and, in the example of adding position, many entities will appear at the origin. Another idea was to simply have the "Entity.addComponent();" function return false if the component's prerequisite components aren't already on the entity; this would force you to manually add the position component and set its value before adding the velocity component. Finally, I could simply not ensure that a component's prerequisite components are added: the "UpdatePosition" system only deals with entities with both a position and a velocity component, so adding a velocity component without having a position component won't be a problem (it won't cause crashes due to null pointers, etc.), but it does mean entities will carry useless unused data if you add components but not their prerequisite components. Does anyone have experience with this problem and/or any of these methods to solve it? How did you solve the problem?
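    As an illustration of the second idea (AddComponent refusing to add a component whose prerequisites are missing), a small C# sketch; the class names and the Prerequisites property are inventions for the example, not part of the post:

        using System;
        using System.Collections.Generic;

        // Each component type declares prerequisite component types,
        // and AddComponent refuses to add a component whose prerequisites are missing.
        abstract class EntityComponent
        {
            public virtual Type[] Prerequisites => Array.Empty<Type>();
        }

        class PositionComponent : EntityComponent { public float X, Y; }

        class VelocityComponent : EntityComponent
        {
            public float Dx, Dy;
            public override Type[] Prerequisites => new[] { typeof(PositionComponent) };
        }

        class Entity
        {
            private readonly Dictionary<Type, EntityComponent> components = new Dictionary<Type, EntityComponent>();

            public bool HasComponent<T>() where T : EntityComponent => components.ContainsKey(typeof(T));

            public bool AddComponent(EntityComponent component)
            {
                foreach (Type required in component.Prerequisites)
                    if (!components.ContainsKey(required))
                        return false;            // caller must add (and initialise) the prerequisite first

                components[component.GetType()] = component;
                return true;
            }
        }

    Here new Entity().AddComponent(new VelocityComponent()) returns false until a PositionComponent has been added and given a meaningful value, which is exactly the "force the caller to set it up first" behaviour described above.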

    Read the article

  • Transform 3D vectors between coordinate systems

    - by Nir Cig
    I've got 6 points in 3D space: A,B,C,D,E,F, that represent 4 vectors. AB is perpendicular to AC and DE is perpendicular to DF. I need to find a transformation matrix M, that transforms AB to DE and AC to DF. In other words: M·AB=DE, M·AC=DF If no scaling was involved, this could be solved with a simple rotation matrix. But since the ratios |AB|/|DE|, |AC|/|DF| might be different, I'm not sure how to proceed.
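    One direct construction (a sketch; it assumes column vectors and the usual right-handed cross product) is to extend each orthogonal pair with its cross product and solve for M in one step:

    $$A = \begin{bmatrix} \vec{AB} & \vec{AC} & \vec{AB}\times\vec{AC} \end{bmatrix}, \qquad B = \begin{bmatrix} \vec{DE} & \vec{DF} & \vec{DE}\times\vec{DF} \end{bmatrix}, \qquad M = B\,A^{-1}.$$

    By construction M·AB = DE and M·AC = DF; A is invertible because AB and AC are perpendicular and non-zero, so its three columns are linearly independent, and the differing ratios |DE|/|AB| and |DF|/|AC| simply show up as non-uniform scaling inside M.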

    Read the article

  • PIX for Visual Studio Express 2012 (Desktop)

    - by JohnB
    (Originally asked on Stack Overflow.) Using Visual C++ Express 2010 for Direct3D, you have to download the DirectX SDK, and there is a tool called PIX for debugging shaders, looking at 3D resources, etc. With Visual Studio 2012 Express, the DirectX SDK is included in the Windows SDK that comes with it, but this does not seem to include the winpix.exe tool. Is this very useful tool still available? I guess I can still use the one from the previous SDK, but it seems wrong to install the entire SDK just for that tool. Is there a version for VS2012 Express that I'm missing?

    Read the article

  • Any reliable polygon normal calculation code?

    - by Jenko
    Do you have any reliable face normal calculation code? I'm using this but it fails when faces are 90 degrees upright or similar.

        // the normal point
        var x:Number = 0;
        var y:Number = 0;
        var z:Number = 0;

        // if is a triangle with 3 points
        if (points.length == 3) {
            // read vertices of triangle
            var Ax:Number, Bx:Number, Cx:Number;
            var Ay:Number, By:Number, Cy:Number;
            var Az:Number, Bz:Number, Cz:Number;
            Ax = points[0].x; Bx = points[1].x; Cx = points[2].x;
            Ay = points[0].y; By = points[1].y; Cy = points[2].y;
            Az = points[0].z; Bz = points[1].z; Cz = points[2].z;

            // calculate normal of a triangle
            x = (By - Ay) * (Cz - Az) - (Bz - Az) * (Cy - Ay);
            y = (Bz - Az) * (Cx - Ax) - (Bx - Ax) * (Cz - Az);
            z = (Bx - Ax) * (Cy - Ay) - (By - Ay) * (Cx - Ax);

        // if is a polygon with 4+ points
        } else if (points.length > 3) {
            // calculate normal of a polygon using all points
            var n:int = points.length;
            x = 0;
            y = 0;
            z = 0;

            // ensure all points above 0
            var minx:Number = 0, miny:Number = 0, minz:Number = 0;
            for (var p:int = 0, pl:int = points.length; p < pl; p++) {
                var po:_Point3D = points[p] = points[p].clone();
                if (po.x < minx) { minx = po.x; }
                if (po.y < miny) { miny = po.y; }
                if (po.z < minz) { minz = po.z; }
            }
            if (minx > 0 || miny > 0 || minz > 0) {
                for (p = 0; p < pl; p++) {
                    po = points[p];
                    po.x -= minx;
                    po.y -= miny;
                    po.z -= minz;
                }
            }

            var cur:int = 1, prev:int = 0, next:int = 2;
            for (var i:int = 1; i <= n; i++) {
                // using Newell method
                x += points[cur].y * (points[next].z - points[prev].z);
                y += points[cur].z * (points[next].x - points[prev].x);
                z += points[cur].x * (points[next].y - points[prev].y);
                cur = (cur + 1) % n;
                next = (next + 1) % n;
                prev = (prev + 1) % n;
            }
        }

        // length of the normal
        var length:Number = Math.sqrt(x * x + y * y + z * z);

        // if area is 0
        if (length == 0) {
            return null;
        } else {
            // turn large values into a unit vector
            x = x / length;
            y = y / length;
            z = z / length;
        }
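    For reference, the summation the polygon branch above implements is Newell's method, with indices taken modulo n:

    $$N_x = \sum_{i=0}^{n-1} y_i\,(z_{i+1} - z_{i-1}), \qquad N_y = \sum_{i=0}^{n-1} z_i\,(x_{i+1} - x_{i-1}), \qquad N_z = \sum_{i=0}^{n-1} x_i\,(y_{i+1} - y_{i-1}),$$

    followed by normalizing N when its length is non-zero.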

    Read the article

  • Unity 4.3 Rigidbody2D unexpected force behaviour

    - by Lilz Votca Love
    So guys, I've edited the question and here is what my problem is. I have a player which has a Rigidbody2D attached to it. My player is able to double-jump in the air nicely, stick to walls when colliding with them, and slowly slide to the ground. All movement is handled through physics, with no transform manipulations. I did something similar to this in the FixedUpdate of my player:

        void FixedUpdate()
        {
            if (wall && Input.GetButtonDown("Jump"))
            {
                if (facingright) // player is facing the left side of the wall
                {
                    rigidbody2D.AddForce(new Vector2(-1f, 2f) * jumpforce);
                    /* Now the player should jump backwards following this directional vector
                       and should follow a smooth curve, which in this part works well */
                }
                else
                {
                    rigidbody2D.AddForce(new Vector2(1f, 2f) * jumpforce);
                    /* Now this is where everything gets complicated: as you should have noticed, this is
                       the same directional vector, only with the opposite x-axis value, and the same amount
                       of force is used, but it behaves like the red curve in the picture below */
                }
            }
        }

    [Image: bad behaviour, with the vector in red.] I tested the same thing (both AddForce calls) for a simple jump and they behave exactly as mentioned above in the picture. So here is my problem: jumping diagonally forward with rigidbody2D.AddForce() does not have the same impact and does not follow the same curve as jumping in the opposite direction with the same exact amount of force. If I could fix this or get past it, I could implement a wall-jump system like a ninja jumping in zigzag between two opposite walls to climb them. Any ideas or alternatives?
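    One thing worth ruling out (a hedged guess, not a confirmed diagnosis): if the body still carries horizontal velocity from pressing into the wall, the two mirrored forces are applied on top of different starting states, which produces different curves. A C# sketch that clears the velocity first so both directions start identically:

        void FixedUpdate()
        {
            if (wall && Input.GetButtonDown("Jump"))
            {
                // Start every wall jump from the same state so the mirrored
                // forces produce mirrored curves.
                rigidbody2D.velocity = Vector2.zero;

                float dir = facingright ? -1f : 1f;          // jump away from the wall
                rigidbody2D.AddForce(new Vector2(dir, 2f) * jumpforce);
            }
        }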

    Read the article

  • GLSL: How do I cast a float into an int?

    - by dugla
    In a GLSL fragment shader I am trying to cast a float into an int. The compiler has other ideas. It complains thusly:

        ERROR: 0:60: '=' : cannot convert from 'mediump float' to 'highp int'

    I am trying to do this:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = indexf;

    I (vainly) tried to raise the precision of the int above the float to appease the GL Gods, but no joy. Could someone please school me here? Thanks, Doug

    Read the article

  • How can I move a polygon edge 1 unit away from the center?

    - by Stephen
    Let's say I have a polygon class that is represented by a list of vector classes as vertices, like so:

        var Vector = function(x, y) {
            this.x = x;
            this.y = y;
        },
        Polygon = function(vectors) {
            this.vertices = vectors;
        };

    Now I make a polygon (in this case, a square) like so:

        var poly = new Polygon([
            new Vector(2, 2),
            new Vector(5, 2),
            new Vector(5, 5),
            new Vector(2, 5)
        ]);

    So, the top edge would be [poly.vertices[0], poly.vertices[1]]. I need to stretch this polygon by moving each edge away from the center of the polygon by one unit, along that edge's normal. The following example shows the first edge, the top, moved one unit up: The final polygon should look like this new one:

        var finalPoly = new Polygon([
            new Vector(1, 1),
            new Vector(6, 1),
            new Vector(6, 6),
            new Vector(1, 6)
        ]);

    It is important that I iterate, moving one edge at a time, because I will be doing some collision tests after moving each edge. Here is what I tried so far (simplified for clarity), which fails triumphantly:

        for (var i = 0; i < vertices.length; i++) {
            var a = vertices[i],
                b = vertices[i + 1] || vertices[0]; // in case of final vertex
            var ax = a.x, ay = a.y, bx = b.x, by = b.y;

            // get some new perpendicular vectors
            var a2 = new Vector(-ay, ax),
                b2 = new Vector(-by, bx);

            // make into unit vectors
            a2.convertToUnitVector();
            b2.convertToUnitVector();

            // add the new vectors to the original ones
            a.add(a2);
            b.add(b2);

            // the rest of the code, collision tests, etc.
        }

    This makes my polygon start slowly rotating and sliding to the left, instead of what I need. Finally, the example shows a square, but the polygons in question could be anything. They will always be convex, and always with vertices in clockwise order.
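    A sketch of the usual way to do this (notation is mine, not from the post): the offset direction should be the unit normal of the edge itself, computed from b - a rather than from each vertex's own position vector, and both endpoints of the edge receive the same offset:

    $$\vec{e} = b - a, \qquad \hat{n} = \frac{(e_y,\ -e_x)}{\lVert \vec{e} \rVert}, \qquad a' = a + \hat{n}, \quad b' = b + \hat{n},$$

    flipping $\hat{n}$ to $-\hat{n}$ if it points toward the polygon's centroid (for instance when $\hat{n} \cdot (m - c) < 0$, with $m$ the edge midpoint and $c$ the centroid), which makes the test independent of winding order and of whether the y-axis points up or down.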

    Read the article

  • Given the presentation model pattern, is the view, presentation model, or model responsible for adding child views to an existing view at runtime?

    - by Ryan Taylor
    I am building a Flex 4 based application using the presentation model design pattern. This application will have several different components to it, as shown in the image below. The MainView and DashboardView will always be visible, and they each have corresponding presentation models and models as necessary. These views are easily created by declaring their MXML in the application root:

        <s:HGroup width="100%" height="100%">
            <MainView width="75%" height="100%"/>
            <DashboardView width="25%" height="100%"/>
        </s:HGroup>

    There will also be many WidgetViewN views that can be added to the DashboardView by the user at runtime through a simple drop-down list. This will need to be accomplished via ActionScript. The drop-down list should always show which WidgetViewN views have already been added to the DashboardView, so some state about which WidgetViewN views have been created needs to be stored. Since the list of available WidgetViewN views, and which ones are added to the DashboardView, also needs to be accessible from other components in the system, I think this needs to be stored in a Model object. My understanding of the presentation model design pattern is that the view is very lean. It contains as close to zero logic as is practical. The view communicates/binds to the presentation model, which contains all the necessary view logic. The presentation model is effectively an abstract representation of the view, which supports low coupling and eases testability. The presentation model may have one or more models injected in, in order to display the necessary information. The models themselves contain no view logic whatsoever. So I have several questions around this design. Who should be responsible for creating the WidgetViewN components and adding these to the DashboardView? Is this the responsibility of the DashboardView, DashboardPresentationModel, DashboardModel or something else entirely? It seems like the DashboardPresentationModel would be responsible for creating/adding/removing any child views from its display, but how do you do this without passing the DashboardView in to the DashboardPresentationModel? The list of available and visible WidgetViewN components needs to be accessible to a few other components as well. Is it okay for a reference to a WidgetViewN to be stored/referenced in a model? Are there any good examples of the presentation model pattern online in Flex that also include creating child views at runtime?

    Read the article

  • UnrealScript error: Importing defaults for actor: Changing Role in defaultproperties illegal - what is it importing?

    - by user3079666
    I added the line var float Mass; to Actor and commented it out of the classes that inherit from actor and declare it, fixed all issues but I now get the error message: Error, Importing defaults for Actor: Changing Role in defaultproperties is illegal (was RemoteRole intended?) The thing is, I did not change anything related to Role or in defaultproperties. Also since it says Importing, I'm guessing it's some ini file.. any clues?

    Read the article

  • Best way to compute vertex normals from a triangle list

    - by nkint
    Hi, I'm a complete newbie in computer graphics, so sorry if it's a stupid question. I'm trying to make a simple 3D engine from scratch, more for educational purposes than for real use. I have a Surface object with a Triangle list inside. For now I compute normals inside the Triangle class, in this way:

        triangle.computeFaceNormals() {
            Vec3D u = v1.sub(v3)
            Vec3D v = v1.sub(v2)
            Vec3D normal = Vec3D.cross(u, v)
            normal.normalized()
            this.n1 = this.n2 = this.n3 = normal
        }

    and when building the surface:

        t = new Triangle(v1, v2, v3).computeFaceNormals()
        surface.addTriangle(t)

    and I think this is the best way to do that... isn't it? Now, what about vertex normals? I've found this simple algorithm: the flipcode vertex normal one. But hey, doesn't this algorithm have cubic complexity (if my memory of my computer science background doesn't fail me)? By the way, it has 3 nested loops, so I don't think it's the best way to do it. Any suggestions?
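    For the vertex normals, the usual linear-time approach is one pass over the triangles that accumulates each face normal into its three vertices, then one pass over the vertices to normalize. A small C# sketch (it assumes an indexed mesh, which the post doesn't specify):

        using System.Numerics;

        static class MeshNormals
        {
            public static Vector3[] ComputeVertexNormals(Vector3[] positions, int[] triangleIndices)
            {
                var normals = new Vector3[positions.Length];

                // One pass over the triangles: add each face normal to its three vertices.
                // Using the unnormalized cross product weights faces by their area.
                for (int i = 0; i < triangleIndices.Length; i += 3)
                {
                    int i0 = triangleIndices[i], i1 = triangleIndices[i + 1], i2 = triangleIndices[i + 2];
                    Vector3 faceNormal = Vector3.Cross(positions[i1] - positions[i0],
                                                       positions[i2] - positions[i0]);
                    normals[i0] += faceNormal;
                    normals[i1] += faceNormal;
                    normals[i2] += faceNormal;
                }

                // One pass over the vertices: normalize the accumulated sums.
                for (int v = 0; v < normals.Length; v++)
                    if (normals[v] != Vector3.Zero)
                        normals[v] = Vector3.Normalize(normals[v]);

                return normals;
            }
        }

    This is O(T + V) rather than a nested search over all triangles per vertex.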

    Read the article

  • Bloom shader makes it impossible to render black?

    - by Mathias Lykkegaard Lorenzen
    I am playing around with the bloom shader from the XNA sample page, to do some glow shading. I am rendering primitive vector-ish squares of linelists/linestrips, on a background. However, I am facing a few problems. With a black background and white squares, I can actually see the squares. However, with a white background and black squares, I can't see them at all. Why is this happening, and is there any way of me fixing it? Can I modify my bloom shader to also "glow" dark elements, if that's what is causing it?

    Read the article

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script that has two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method:

        void Start () {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    and first thought it didn't work. However, later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but right after that the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time:

        void Update () {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging, etc). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (2-part) question is: Why doesn't it take the scale transform when I do it at the beginning in the Start() method? Why do I have to keep scaling it in the Update() method? And why does it take so long for the scale to "apply/show up"?

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say that in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce bad precision and banding issues. Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB rendertarget is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically. But how does this work with multiple rendertargets? I only want to do these corrections to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. OK, so I can just not apply SRGBTexture = true; to my non-color textures, but when using SRGBWriteEnable = true; a gamma correction will be applied to all the values I write out to my rendertargets, no matter what I actually store there. I found some info on gamma over at Microsoft: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx "For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written." If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it isn't; it is applied to all rendertargets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have any negative impact on color correctness. E.g. if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion... Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know, I DO need it, because otherwise the texture filtering and mip sampling happen in sRGB space. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
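    For the manual route mentioned above, the standard per-channel sRGB transfer functions (for values in [0, 1]) are:

    $$\text{linear}(c) = \begin{cases} c / 12.92 & c \le 0.04045 \\ \left(\dfrac{c + 0.055}{1.055}\right)^{2.4} & c > 0.04045 \end{cases} \qquad \text{srgb}(l) = \begin{cases} 12.92\,l & l \le 0.0031308 \\ 1.055\,l^{1/2.4} - 0.055 & l > 0.0031308 \end{cases}$$

    so a shader can convert the sampled colour to linear, do its work, and convert back only for the colour output, leaving the normal, depth and specularity targets untouched.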

    Read the article

  • Powder games: how do they work?

    - by Marc Müller
    Hey guys, I recently found these two gems: http://powdertoy.co.uk/ and http://dan-ball.jp/en/javagame/dust/ My question is: how are the physics with so many elements handled efficiently? Am I just severely underestimating modern computing power, or is it possible to 'just' have a two-dimensional array, each cell of which describes what is placed at the corresponding position, and simulate each cell in every step? Or are there more complex things being done, like summarising large areas of the same kind into a single data set and separating said set as needed? Are there any open-source games like this I could look at?
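    For a sense of scale, the 'two-dimensional array, simulate each cell every step' approach really is the core of these games; a minimal C# falling-sand step over a dense grid looks something like this (the cell rules here are invented purely for illustration):

        enum Cell : byte { Empty, Wall, Sand }

        static class PowderSim
        {
            // One simulation step of a minimal falling-sand rule over a dense grid.
            // Scanning bottom-up keeps a falling grain from being moved twice in one step.
            public static void Step(Cell[,] grid)
            {
                int w = grid.GetLength(0), h = grid.GetLength(1);
                for (int y = h - 2; y >= 0; y--)            // y grows downward; skip the bottom row
                {
                    for (int x = 0; x < w; x++)
                    {
                        if (grid[x, y] != Cell.Sand) continue;

                        if (grid[x, y + 1] == Cell.Empty)                       // fall straight down
                        {
                            grid[x, y + 1] = Cell.Sand; grid[x, y] = Cell.Empty;
                        }
                        else if (x > 0 && grid[x - 1, y + 1] == Cell.Empty)     // slide down-left
                        {
                            grid[x - 1, y + 1] = Cell.Sand; grid[x, y] = Cell.Empty;
                        }
                        else if (x < w - 1 && grid[x + 1, y + 1] == Cell.Empty) // slide down-right
                        {
                            grid[x + 1, y + 1] = Cell.Sand; grid[x, y] = Cell.Empty;
                        }
                    }
                }
            }
        }

    A local rule like this touches each cell a constant number of times per frame, which is why even grids with hundreds of thousands of cells stay affordable on a single modern core.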

    Read the article

  • Smooth animation in Cocos2d for iOS

    - by MrDatabase
    I move a simple CCSprite around the screen of an iOS device using this code:

        [self schedule:@selector(update:) interval:0.0167];

        - (void) update:(ccTime) delta
        {
            CGPoint currPos = self.position;
            currPos.x += xVelocity;
            currPos.y += yVelocity;
            self.position = currPos;
        }

    This works; however, the animation is not smooth. How can I improve the smoothness of my animation? My scene is exceedingly simple (just one full-screen CCSprite with a background image and a relatively small CCSprite that moves slowly). I've logged the ccTime delta and it's not consistent (it's almost always greater than my specified interval of 0.0167... sometimes up to a factor of 4x). I've considered tailoring the motion in the update method to the delta time (larger delta = larger movement, etc.). However, given the simplicity of my scene, it seems there's a better way (and something basic that I'm probably missing).
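    For reference, the delta-scaled movement the post is considering is simply (with the velocity expressed in points per second instead of points per frame):

    $$x_{n+1} = x_n + v_x\,\Delta t, \qquad y_{n+1} = y_n + v_y\,\Delta t,$$

    which keeps the on-screen speed constant even when the delta varies from frame to frame, as the logging shows it does.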

    Read the article

  • Discovering path through unknown territory

    - by TravisG
    Let's say all the AI knows about its surroundings is a pixel map that it has, which clearly shows walkable terrain and obstacles. I want the AI to be able to traverse this terrain until it finds an exit point. There are some restrictions: There is always a way to the exit somewhere in the map the AI walks around in, but there may be dead ends. The path to the exit is always pretty random, meaning that if you stand at a crossroads, nothing indicates which direction would be the right one to go. It doesn't matter if the AI reaches a dead end, but it has to be able to walk back out of it to a previously uninspected location and continue its search there. Initially, the AI starts out knowing only the starting area of the whole map. As it walks around, new points corresponding to the AI's range of sight will be added to the pixel map (think of it as the AI clearing the fog of war). The problem is in 2D space, and all I have is the pixel map. There are no paths in the pixel map which are "too narrow"; the AI fits through everything. It shouldn't be a brute-force solution. E.g. it would be possible to simply find a path to each pixel in the pixel map that is as yet undiscovered (with A*, for example), which would lead to the AI discovering new pixels; this could be repeated until the end is reached. The path doesn't have to be the shortest path (this is impossible without knowing the entire map beforehand), but when movements within the visible area are calculated, the shortest and, from a human standpoint, most logical path should be taken (e.g. if you can see a way out of your room into a hallway, you would obviously go there instead of exploring the corner of your current room). What kind of approaches to solve this problem are there?
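    One family of approaches that fits these restrictions is frontier-based exploration: repeatedly plan a path to the nearest known-walkable cell that borders unknown space, walk it (revealing more of the map on the way), and stop as soon as the exit appears in the known area. A compact C# sketch of just the frontier search (the map encoding is an assumption; walking the path and the in-view A* are left out):

        using System.Collections.Generic;

        enum Knowledge : byte { Unknown, Walkable, Blocked }

        static class Explorer
        {
            // Breadth-first search over known-walkable cells; returns the first cell
            // found that has at least one Unknown neighbour (the nearest "frontier"),
            // or null if the known area is fully explored.
            public static (int x, int y)? NearestFrontier(Knowledge[,] map, (int x, int y) start)
            {
                int w = map.GetLength(0), h = map.GetLength(1);
                var queue = new Queue<(int x, int y)>();
                var visited = new HashSet<(int x, int y)>();
                queue.Enqueue(start);
                visited.Add(start);

                var dirs = new (int dx, int dy)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
                while (queue.Count > 0)
                {
                    var cell = queue.Dequeue();
                    foreach (var (dx, dy) in dirs)
                    {
                        int nx = cell.x + dx, ny = cell.y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;

                        if (map[nx, ny] == Knowledge.Unknown)
                            return cell;                       // cell borders unexplored space

                        if (map[nx, ny] == Knowledge.Walkable && visited.Add((nx, ny)))
                            queue.Enqueue((nx, ny));
                    }
                }
                return null;
            }
        }

    Because the frontier search is a breadth-first search from the agent's position, the chosen frontier is always the nearest one, which matches the "most logical path within the visible area" requirement reasonably well.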

    Read the article

  • Apply non-hierarchical transforms to a hierarchical skeleton?

    - by user975135
    I use Blender3D, but the answer might not be API-exclusive. I have some matrices I need to assign to PoseBones. The resulting pose looks fine when there is no bone hierarchy (parenting) and messed up when there is. I've uploaded an archive with a sample blend of the rigged models, the text animation importer and a test animation file here: http://www.2shared.com/file/5qUjmnIs/sample_files.html Import the animation by selecting an Armature and running the importer on the "sba" file. Do this for both Armatures. This is how I assign the poses in the real (complex) importer:

        matrix_basis = ... # matrix from file
        animation_matrix = matrix_basis * pose.bones['mybone'].matrix.copy()
        pose.bones[bonename].matrix = animation_matrix

    If I go to edit mode, select all bones and press Alt+P to undo parenting, the pose looks fine again. The API documentation says that PoseBone.matrix is in "object space" ("Final 4x4 matrix after constraints and drivers are applied (object space)"), but it seems clear to me from these tests that the matrices are relative to the parent bones. I tried doing something like this:

        matrix_basis = ... # matrix from file
        animation_matrix = matrix_basis * (pose.bones['mybone'].matrix.copy() * pose.bones[bonename].bone.parent.matrix_local.copy().inverted())
        pose.bones[bonename].matrix = animation_matrix

    But it looks worse. I experimented with the order of operations, with no luck at all. For the record, in the old 2.4 API this worked like a charm:

        matrix_basis = ... # matrix from file
        animation_matrix = armature.bones['mybone'].matrix['ARMATURESPACE'].copy() * matrix_basis
        pose.bones[bonename].poseMatrix = animation_matrix
        pose.update()

    Link to the Blender API reference: http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.BlendData.html#bpy.types.BlendData http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.PoseBone.html#bpy.types.PoseBone

    Read the article
