Search Results


  • Realtime rendering using a ray tracing engine

    - by Keyhan Asghari
    I want to render an object whose mesh has one million hexagonal elements (100 * 100 * 100). Lights, shadows, and textures are not important, and each element has a solid color. The only interactions I need are rotating the object, zooming, and panning. I am wondering which ray tracing engine is best suited to these requirements. Or do I have to take a different approach entirely? Any help will be appreciated.


  • Interpolating between two networked states?

    - by Vaughan Hilts
    I have many entities on the client side that are simulated (their velocities are added to their positions on a per-frame basis), and I let them dead-reckon themselves. They send updates about where they were last seen and about changes to their velocity. This works great, and other players see the movement fine. However, after a while these players begin to desync because of latency. I'd like to know how I can interpolate between states so they appear to be in the correct position. I know where the player was last seen and their current velocity, but interpolating toward the last seen state causes the player to visibly move backwards. I could ignore velocity entirely for other clients and simply lerp them toward the appropriate position, but I feel this would cause jerky movement. What are the alternatives?
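    A common remedy (my sketch below, not from the original question) is to keep dead reckoning exactly as described but fold each correction in gradually instead of lerping back to a stale state: when an update arrives, adopt the authoritative position immediately for simulation, store the difference as a visual error offset, and decay that offset toward zero over the next few frames. A minimal sketch in plain Java; all names are illustrative:

        /** Minimal smooth-correction sketch; illustrative names, fixed timestep assumed. */
        final class RemotePlayer {
            double x, y;          // position used for simulation
            double vx, vy;        // last known velocity from the network
            double errX, errY;    // residual error between prediction and server state

            /** Called when a state update arrives: record the error instead of snapping. */
            void onServerState(double sx, double sy, double svx, double svy) {
                errX = x - sx;
                errY = y - sy;
                x = sx; y = sy;       // adopt the authoritative state for simulation...
                vx = svx; vy = svy;
            }

            /** Called every frame: dead-reckon, then bleed off the stored error. */
            void update(double dt) {
                x += vx * dt;
                y += vy * dt;
                double decay = Math.pow(0.1, dt); // ~90% of the error gone after 1 second
                errX *= decay;
                errY *= decay;
            }

            double renderX() { return x + errX; } // ...but render with the decaying offset
            double renderY() { return y + errY; }
        }

    Because the rendered position is the simulated position plus a shrinking offset, corrections are spread over several frames and remote players never snap or visibly run backwards.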


  • Limiting isometric layer movement inside the map

    - by gronzzz
    I've created an isometric map and am now trying to limit the layer's movement. The main idea is that I have left-bottom, right-bottom, left-top and right-top points that the camera cannot move beyond, so the player never sees the map's out-of-bounds area. But I can't work out an algorithm for doing that. Here is my layer scale/move code:

        - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            _isTouchBegin = YES;
        }

        - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
            NSArray *allTouches = [[event allTouches] allObjects];
            UITouch *touchOne = [allTouches objectAtIndex:0];
            CGPoint touchLocationOne = [touchOne locationInView:[touchOne view]];
            CGPoint previousLocationOne = [touchOne previousLocationInView:[touchOne view]];

            // Scaling
            if ([allTouches count] == 2) {
                _isDragging = NO;
                UITouch *touchTwo = [allTouches objectAtIndex:1];
                CGPoint touchLocationTwo = [touchTwo locationInView:[touchTwo view]];
                CGPoint previousLocationTwo = [touchTwo previousLocationInView:[touchTwo view]];

                CGFloat currentDistance = sqrt(pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) +
                                               pow(touchLocationOne.y - touchLocationTwo.y, 2.0f));
                CGFloat previousDistance = sqrt(pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) +
                                                pow(previousLocationOne.y - previousLocationTwo.y, 2.0f));
                CGFloat distanceDelta = currentDistance - previousDistance;

                CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo);
                pinchCenter = [self convertToNodeSpace:pinchCenter];

                CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER);
                if ([self predictionScaleInBounds:predictionScale]) {
                    [self scale:predictionScale scaleCenter:pinchCenter];
                }
            } else {
                // Dragging
                _isDragging = YES;
                CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne];
                CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne];
                CGPoint delta = ccpSub(current, previous);
                self.position = ccpAdd(self.position, delta);
            }
        }

        - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event {
            _isDragging = NO;
            _isTouchBegin = NO;
            // Check if I need to bounce
            _touchLoc = [touch locationInNode:self];
        }

        #pragma mark - Update

        - (void)update:(CCTime)delta {
            CGPoint position = self.position;
            float scale = self.scale;
            static float friction = 0.92f; // 0.96f;

            if (_isDragging && !_isScaleBounce) {
                _velocity = ccp((position.x - _lastPos.x) / 2, (position.y - _lastPos.y) / 2);
                _lastPos = position;
            } else {
                _velocity = ccp(_velocity.x * friction, _velocity.y * friction);
                position = ccpAdd(position, _velocity);
                self.position = position;
            }

            if (_isScaleBounce && !_isTouchBegin) {
                float min = fabsf(self.scale - MIN_SCALE);
                float max = fabsf(self.scale - MAX_SCALE);
                int dif = max > min ? 1 : -1;
                if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) ||
                    (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) {
                    CGFloat newScale = scale + dif * (delta * friction);
                    [self scale:newScale scaleCenter:_touchLoc];
                } else {
                    _isScaleBounce = NO;
                }
            }
        }
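    For what it's worth, the usual pattern here (my sketch, not from the post) is to clamp the layer's position after every pan or zoom so the visible rectangle stays inside the map's bounding box; for a diamond-shaped isometric map you would clamp against the diamond's edges instead, but the clamp-after-move idea is the same. A Java-style sketch with illustrative names; the map sizes are the scaled pixel sizes:

        final class MapClamp {
            static double clamp(double v, double min, double max) {
                return Math.max(min, Math.min(max, v));
            }

            /**
             * Clamp the layer offset so the viewport never leaves the map.
             * mapW/mapH are the map's scaled pixel size, viewW/viewH the screen size;
             * the layer position is the offset applied to the map's origin.
             */
            static double[] clampLayerPosition(double posX, double posY,
                                               double mapW, double mapH,
                                               double viewW, double viewH) {
                // When the map is larger than the view, the offset may range from
                // (viewW - mapW) up to 0; if the map is smaller, pin it to 0.
                double x = clamp(posX, Math.min(viewW - mapW, 0), 0);
                double y = clamp(posY, Math.min(viewH - mapH, 0), 0);
                return new double[] { x, y };
            }
        }

    Calling something like this at the end of the move and update handlers (recomputing the map size from the current scale) keeps the camera inside bounds no matter how the position was changed.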


  • Wavefront mesh: determine which face a point belongs to?

    - by Mina Samy
    I have a 3D mesh in a Wavefront .obj file. Is there an algorithm that takes arbitrary point coordinates as input and determines which face of the mesh that point belongs to? The mesh is rendered on the screen and the user clicks on it; I want to determine which part of the mesh the user clicked on. Here's the code, using LibGDX:

        Vector3 intersection = new Vector3();
        Ray ray = camera.getPickRay(x, y);
        // vertices is an array that holds the coordinates of the mesh
        boolean ok = Intersector.intersectRayTriangles(ray, vertices, intersection);

    Thanks
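    One way to recover the face index (a sketch, under the assumption that vertices stores xyz triples, three vertices per triangle, as in the question) is to test each triangle individually with LibGDX's per-triangle intersector and keep the closest hit:

        import com.badlogic.gdx.math.Intersector;
        import com.badlogic.gdx.math.Vector3;
        import com.badlogic.gdx.math.collision.Ray;

        final class FacePicker {
            /** Returns the index of the closest triangle hit by the ray, or -1. */
            static int pickFace(Ray ray, float[] vertices, Vector3 camPos) {
                Vector3 a = new Vector3(), b = new Vector3(), c = new Vector3();
                Vector3 hit = new Vector3();
                int bestFace = -1;
                float bestDist2 = Float.MAX_VALUE;
                for (int i = 0; i + 8 < vertices.length; i += 9) {
                    a.set(vertices[i],     vertices[i + 1], vertices[i + 2]);
                    b.set(vertices[i + 3], vertices[i + 4], vertices[i + 5]);
                    c.set(vertices[i + 6], vertices[i + 7], vertices[i + 8]);
                    if (Intersector.intersectRayTriangle(ray, a, b, c, hit)) {
                        float d2 = hit.dst2(camPos);   // keep the hit nearest the camera
                        if (d2 < bestDist2) { bestDist2 = d2; bestFace = i / 9; }
                    }
                }
                return bestFace; // map this index back to the .obj face list
            }
        }

    Intersector.intersectRayTriangles only reports whether any triangle was hit, so the per-triangle loop is what recovers the index.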


  • Slick 2d scrolling off screen

    - by Peter
    I have something scrolling into and out of the screen. When it goes off screen, I want it to scroll back into the screen at another location. What I do is grab the last pixel row at the screen's edge using g.copyArea and then g.drawImage it at the edge of the screen. Then I do a g.translate to make room for the next row on the next render cycle. My problem is that I only get a single pixel row, which is not accumulated onto the canvas, whereas I want each row to be added and then translated, so that the image that scrolled off screen is recreated on the other side of the screen. Here is my code; maybe there is a better way of doing this. I'm open to any suggestions, because I'm totally stuck.

        @Override
        public void render(GameContainer gc, Graphics g) throws SlickException {
            //g.setClip(0, 0, 300, gc.getHeight());
            g.translate(0, y);
            g.drawImage(image, 0, 200);
            g.resetTransform();
            //g.clearClip();
            g.copyArea(rightImage, 0, gc.getHeight() - 1);
            g.drawImage(rightImage, 300, 0);
            g.translate(0, y);
            y = y + 3;
        }
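    If the goal is an endless wrap, a simpler route (my sketch, not from the post; the asset path and x offset of 300 are illustrative) is to skip copyArea entirely and draw the image twice, offset by the scroll value modulo the image height, so whatever leaves one edge reappears at the other:

        import org.newdawn.slick.*;

        // Sketch: same BasicGame structure as the question, with assumed fields.
        public class WrapScroll extends BasicGame {
            private Image image;
            private float y;

            public WrapScroll() { super("wrap scroll sketch"); }

            @Override
            public void init(GameContainer gc) throws SlickException {
                image = new Image("strip.png"); // illustrative asset path
            }

            @Override
            public void update(GameContainer gc, int delta) {
                y += 0.1f * delta; // scroll speed
            }

            @Override
            public void render(GameContainer gc, Graphics g) throws SlickException {
                float h = image.getHeight();
                float offset = y % h;                // wraps every full image height
                g.drawImage(image, 300, offset - h); // copy entering the screen
                g.drawImage(image, 300, offset);     // copy scrolling out
            }
        }

    No pixels are copied back from the framebuffer; the wrap falls out of drawing the same image at two offsets.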


  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't let you sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender Toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs to be updated, because Flash does not provide information about when a particular area of the screen or a particular display object needs to be re-rendered. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient; that is why I am trying to do it through a blendShader, so that Flash determines when it needs rendering automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs to be re-rendered? The limitation is mentioned without explanation in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."


  • SDL mouse wheel not picking up

    - by Chris
    Running Ubuntu 11.04 with SDL 1.2, I'm trying to pick up mouse wheel up/down movement with this (stripped-down) code:

        int main(int argc, char **argv)
        {
            SDL_MouseButtonEvent *mousebutton = NULL;
            while (!done) {
                if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_LEFT)
                    yrot += 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_RIGHT)
                    yrot -= 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELUP) {
                    xrot += 0.75f;
                } else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELDOWN) {
                    xrot -= 0.75f;
                }

                while (SDL_PollEvent(&event)) {
                    switch (event.type) {
                    case SDL_MOUSEBUTTONDOWN:
                        mousebutton = &event.button;
                        break;
                    case SDL_MOUSEBUTTONUP:
                        mousebutton = NULL;
                        break;
                    default:
                        break;
                    }
                }
            }
            return 0;
        }

    The strange thing is that scrolling the mouse wheel does nothing, but if I hold down a mouse button or two and then move the mouse, it hits the SDL_BUTTON_WHEEL code occasionally. This honestly reeks of a pointer issue, which would make sense since I've been spoiled by C# for the past couple of years, but I am just not seeing it. How do I correctly detect mouse scroll events in SDL?


  • DX10 sprite and pixel shader

    - by Alex Farber
    I am using ID3DX10Sprite to draw a 2D image on the screen. The 3D scene contains only one textured sprite, placed over the whole window area. The render method looks like this:

        m_pDevice->ClearRenderTargetView(...);
        m_pSprite->Begin(D3DX10_SPRITE_SORT_TEXTURE);
        m_pSprite->DrawSpritesImmediate(&m_SpriteDefinition, 1, 0, 0);
        m_pSprite->End();

    Now I want to apply some transformations to the sprite texture in a shader. Currently the program doesn't use a shader at all. How is it possible to add a pixel shader to a program with this structure? Inside the shader, I need to set all color channels equal to the red channel and multiply the pixel values by some coefficient. Something like this:

        float4 TexturePixelShader(PixelInputType input) : SV_Target
        {
            float4 textureColor;
            textureColor = shaderTexture.Sample(SampleType, input.tex);
            textureColor.x = textureColor.x * coefficient;
            textureColor.y = textureColor.x;
            textureColor.z = textureColor.x;
            return textureColor;
        }


  • How to automatically render all opaque meshes with a specific shader?

    - by dsilva.vinicius
    I have a specular outline shader that I want to be used on all opaque meshes of the scene whenever a specific camera renders. The shader works properly when it is manually applied to a material. It is as follows:

        Shader "Custom/Outline" {
            Properties {
                _Color ("Main Color", Color) = (.5,.5,.5,1)
                _OutlineColor ("Outline Color", Color) = (1,0.5,0,1)
                _Outline ("Outline width", Range (0.0, 0.1)) = .05
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
                _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
                _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
            }
            SubShader {
                Tags { "Queue"="Overlay" "RenderType"="Opaque" }
                Pass {
                    Name "OUTLINE"
                    Tags { "LightMode" = "Always" }
                    Cull Off
                    ZWrite Off
                    // Uncomment to show outline always.
                    //ZTest Always
                    CGPROGRAM
                    #pragma target 3.0
                    #pragma vertex vert
                    #pragma fragment frag
                    #include "UnityCG.cginc"

                    struct appdata {
                        float4 vertex : POSITION;
                        float3 normal : NORMAL;
                    };

                    struct v2f {
                        float4 pos : POSITION;
                        float4 color : COLOR;
                    };

                    float _Outline;
                    float4 _OutlineColor;

                    v2f vert(appdata v) {
                        // just make a copy of incoming vertex data but scaled according to normal direction
                        v2f o;
                        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                        float3 norm = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
                        float2 offset = TransformViewToProjection(norm.xy);
                        o.pos.xy += offset * o.pos.z * _Outline;
                        o.color = _OutlineColor;
                        return o;
                    }

                    float4 frag(v2f fromVert) : COLOR {
                        return fromVert.color;
                    }
                    ENDCG
                }
                UsePass "Specular/FORWARD"
            }
            FallBack "Specular"
        }

    The camera used for the effect just has a script component that sets up the shader replacement:

        using UnityEngine;
        using System.Collections;

        public class DetectiveEffect : MonoBehaviour {

            public Shader EffectShader;

            // Use this for initialization
            void Start () {
                this.camera.SetReplacementShader(EffectShader, "RenderType=Opaque");
            }

            // Update is called once per frame
            void Update () {
            }
        }

    Unfortunately, whenever I use this camera I just see the background color. Any ideas?


  • Rendering order in an Entity System

    - by Daedalus
    Say I use a basic ES approach, where inside each System I hold a list of all the entities that the System is required to process. How do I maintain this list of entities in the desired rendering order, e.g. for a simple 2D RenderingSystem? I saw this discussion, and what they suggest is to do something like Z-ordering. What I would probably do is store a "layer" int in the DrawableComponent and then, inside the RenderingSystem, sort entities by that "layer" whenever the entity list for the RenderingSystem changes. They also say one could just delete and recreate an entity whenever it should be on top, but that seems too inflexible to me. How is this problem usually solved?
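    For what it's worth, the layer-sort idea stays cheap if the system only re-sorts when membership or a layer value actually changes. A minimal Java sketch (the Entity/component API is illustrative, not from any particular framework):

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Illustrative ES types; Entity.get stands in for your component lookup.
        class Entity {
            private final Map<Class<?>, Object> components = new HashMap<>();
            <T> T get(Class<T> type) { return type.cast(components.get(type)); }
            <T> void add(T c) { components.put(c.getClass(), c); }
        }

        class DrawableComponent { int layer; }

        class RenderingSystem {
            private final List<Entity> drawables = new ArrayList<>();
            private boolean orderDirty = true;

            void onEntityAdded(Entity e)   { drawables.add(e); orderDirty = true; }
            void onEntityRemoved(Entity e) { drawables.remove(e); orderDirty = true; }
            void onLayerChanged()          { orderDirty = true; }

            void render() {
                if (orderDirty) { // sort lazily, not every frame
                    drawables.sort(Comparator.comparingInt(
                            e -> e.get(DrawableComponent.class).layer));
                    orderDirty = false;
                }
                for (Entity e : drawables) {
                    draw(e); // submit in back-to-front layer order
                }
            }

            private void draw(Entity e) { /* issue the actual draw call */ }
        }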


  • How to perform efficient 2D picking in HTML5?

    - by jSepia
    I'm currently using R-Trees for both picking and collision testing. Each entity on screen has a bounding box for collisions and a separate one for picking. Since entities may change position very frequently, both trees must be updated/reordered once per frame. While this is very efficient for collisions, because the tree is used in hundreds of collision queries every frame, I'm finding it too costly for picking: the picking tree only gets queried when the user clicks, so most of its updates are wasted. What would be a more efficient way to implement picking, without as much overhead?
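    One low-overhead option (a sketch with illustrative types, not the only answer) is to drop the picking tree entirely: clicks are rare, so a linear scan over the pick bounds at click time is usually far cheaper than re-sorting a second R-Tree every frame:

        import java.util.List;

        // Illustrative entity API: pickBounds() is the per-entity picking box.
        interface Bounds { boolean contains(float x, float y); }
        interface Pickable { Bounds pickBounds(); int zOrder(); }

        final class Picker {
            static Pickable pick(List<Pickable> entities, float px, float py) {
                Pickable best = null;
                for (Pickable e : entities) {
                    if (e.pickBounds().contains(px, py)) {
                        // topmost entity wins when several overlap the click
                        if (best == null || e.zOrder() > best.zOrder()) best = e;
                    }
                }
                return best; // null when the click hit empty space
            }
        }

    If entity counts grow to the point where the scan hurts, a spatial structure rebuilt lazily on the first click after a change is still cheaper than per-frame maintenance.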


  • Drawing a sprite or text causes the OpenGL rendering to 'disappear' in SFML

    - by Ken
    I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; drawing the sprite or text causes the OpenGL output to disappear. The following code shows what I'm trying to do:

        sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

        while (window.pollEvent(Event)) {
            // event handling...

            // begin drawing
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glBegin(GL_TRIANGLES);
            glColor3f(col.x, col.y, col.z);
            for (int i = 0; i < 3; i++)
                glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
            glEnd();

            // adding this line causes all the previous opengl triangles not to appear
            window.draw("Sometext");

            window.display();
        }


  • How can I get a rotation vector from a Matrix4x4 in XNA?

    - by mr.Smyle
    I want to get a rotation vector from a matrix in order to build a parent-child system for models.

        Matrix bonePos = link.Bone.Transform * World;
        Matrix m = Matrix.CreateTranslation(link.Offset)
            * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z)
            * Matrix.CreateFromYawPitchRoll(
                MathHelper.ToRadians(link.gameObj.Rotation.Y),
                MathHelper.ToRadians(link.gameObj.Rotation.X),
                MathHelper.ToRadians(link.gameObj.Rotation.Z))
            // need the rotation vector from the bone matrix here
            // (currently it's the global model rotation vector)
            * Matrix.CreateFromYawPitchRoll(
                MathHelper.ToRadians(Rotation.Y),
                MathHelper.ToRadians(Rotation.X),
                MathHelper.ToRadians(Rotation.Z))
            * Matrix.CreateTranslation(bonePos.Translation);
        link.gameObj.World = m;

    Here, link is a struct with the child model's settings (position, rotation, etc.), and link.Bone is the parent bone.


  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer that renders everything. I'm currently aiming for the second, for the following reasons:

      - The list can be sorted so that each shader is bound only once. Otherwise every object would have to bind its shader itself, since it cannot know whether that shader is already active.
      - The objects can be sorted and grouped.
      - It is easier to swap APIs. With a few macro lines, it can be easy to switch between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
      - Rendering code is easier to manage.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work. First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I worry this may end up carrying a lot of extra pointer weight, though I could sort the list of pointers every so often. Second idea: the entire list of entities is passed to the renderer on each render call. The renderer then sorts the list (each call, or maybe once?) and gets what it wants. That's a lot of passing and/or sorting, however. Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
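    A sketch of the first idea (illustrative types, nothing framework-specific): components register themselves with the central renderer, which keeps the list sorted by shader key so each shader is bound at most once per frame:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        interface RenderComponent {
            int shaderId();   // sort key: which shader this component needs
            void submit();    // issue the actual draw call
        }

        final class Renderer {
            private final List<RenderComponent> items = new ArrayList<>();
            private boolean dirty = true;

            void register(RenderComponent rc)   { items.add(rc); dirty = true; }
            void unregister(RenderComponent rc) { items.remove(rc); dirty = true; }

            void renderAll() {
                if (dirty) { // re-sort only when registration changes
                    items.sort(Comparator.comparingInt(RenderComponent::shaderId));
                    dirty = false;
                }
                int bound = -1;
                for (RenderComponent rc : items) {
                    if (rc.shaderId() != bound) { // bind each shader at most once
                        bindShader(rc.shaderId());
                        bound = rc.shaderId();
                    }
                    rc.submit();
                }
            }

            private void bindShader(int id) { /* glUseProgram(id) or equivalent */ }
        }

    The dirty flag is what keeps the "extra pointer weight" worry in check: registration is rare compared to rendering, so the sort almost never runs.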


  • How do I show a minimap in a 3D world?

    - by Bubblewrap
    Got a really typical use case here. I have a large map made up of hexagons, and at any given time only a small section of it is visible. To provide an overview of the complete map, I want to show a small 2D representation of it in a corner of the screen. What is the recommended approach for this in libgdx? Keep in mind that the minimap must be updated when the currently visible section changes and when the map itself is updated. I've found SpriteBatch (info here), but the warning label on it made me think twice: "A SpriteBatch is a pretty heavy object so you should only ever have one in your program." I'm not sure I'm supposed to use the one SpriteBatch I'm allowed on the minimap, and I'm also not sure how to interpret "heavy" in this context. Another thing to possibly keep in mind is that the minimap will probably be part of a larger UI; is there any way to integrate the two?
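    One plausible setup (my sketch; drawMap and the camera setup are assumptions, not LibGDX API) reuses the single SpriteBatch and renders the same map twice: once with the world camera and once with a minimap camera into a corner-sized GL viewport. The "heavy" in the warning mostly refers to the batch's vertex buffers, which is exactly why you share one batch instead of creating a second:

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.ScreenAdapter;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        class MapScreen extends ScreenAdapter {
            SpriteBatch batch;           // the single shared batch
            OrthographicCamera worldCam; // follows the player
            OrthographicCamera miniCam;  // frames the entire hex map

            @Override
            public void render(float delta) {
                // main view
                batch.setProjectionMatrix(worldCam.combined);
                batch.begin();
                drawMap(batch);
                batch.end();

                // minimap: same draw code, different camera, corner GL viewport
                Gdx.gl.glViewport(10, 10, 200, 150);
                batch.setProjectionMatrix(miniCam.combined);
                batch.begin();
                drawMap(batch);
                batch.end();

                // restore the full-screen viewport for whatever renders next
                Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
            }

            void drawMap(SpriteBatch b) { /* draw the hex tiles with b.draw(...) */ }
        }

    Because the minimap is re-rendered from the live map data each frame, it picks up both camera movement and map edits without any extra bookkeeping, and the same pattern slots into a larger UI pass.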


  • XNA - Am I screwing up the LoadContent for Texture2D?

    - by Bombcode
    I've read forum threads and questions, and I've done just about everything, so I need to know what I'm screwing up here. Here's the code in the constructor:

        Content.RootDirectory = "GameStateContent";
        //Content.RootDirectory = "Content";

    And this is in the LoadContent method:

        menu = this.Content.Load<Texture2D>("mainmenu");

    And here's a screenshot of the folder structure: http://i.imgur.com/HnndE.png Any help with this? Thanks.


  • Quaternion LookAt for camera

    - by Homar
    I am using the following code to rotate entities to look at points:

        glm::vec3 forwardVector = glm::normalize(point - position);
        float dot = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector);
        float rotationAngle = (float)acos(dot);
        glm::vec3 rotationAxis = glm::normalize(
            glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector));
        rotation = glm::normalize(glm::quat(rotationAxis * rotationAngle));

    This works fine for my usual entities. However, when I use this on my Camera entity, I get a black screen. If I flip the subtraction in the first line, so that I take the forward vector to be the direction from the point to my camera's position, then my camera works, but naturally my entities rotate to look in the opposite direction of the point. I compute the transformation matrix for the camera and then take the inverse to be the View Matrix, which I pass to my OpenGL shaders:

        glm::mat4 viewMatrix = glm::inverse(cameraTransform->GetTransformationMatrix());

    The orthographic projection matrix is created using glm::ortho. What's going wrong?


  • Unable to Call Instantiate in Class Member Function

    - by onguarde
    The following JavaScript (UnityScript) is attached to a GameObject:

        var instance : GameObject;

        class eg_class {
            function eg_func() {
                var thePrefab : GameObject;
                instance = Instantiate(thePrefab);
            }
        }

    This produces the errors "Unknown identifier: 'instance'" and "Unknown identifier: 'Instantiate'". My questions:

      1) Why can't "instance" be accessed from within the class? Isn't it supposed to be a public variable?
      2) The "Instantiate" function works in the Start()/Update() root functions. Is there a way to make it work from within member functions?

    Thanks in advance!


  • Is StringToHash() safe to use in Unity?

    - by Sebastian Krysmanski
    I'm currently browsing through the Unity tutorials and saw that they recommend using Animator.StringToHash("some string") to create unique IDs for animation properties (see here). Since I'm a programmer, to me the word "hash" doesn't imply something unique. As the Java documentation for hashCode() states: "It is not required that if two objects are unequal [...], then calling the hashCode method on each of the two objects must produce distinct integer results." So, according to this (and my definition of "hash"), two strings may have the same hash value. (You can also argue that there are an infinite number of possible strings but only 2^32 possible int values.) So, is there a possibility that StringToHash() will give me an ID that actually belongs to another property than the one I requested the hash for?
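    The pigeonhole concern is real in general. As an illustration (Java's String.hashCode, not Unity's algorithm), even two-character strings already collide:

        // Illustration only: Java's String.hashCode, not Unity's StringToHash.
        public class HashCollision {
            public static void main(String[] args) {
                System.out.println("Aa".hashCode()); // 2112
                System.out.println("BB".hashCode()); // 2112, distinct strings, same hash
            }
        }

    Whether Unity's particular implementation collides on realistic property names is a separate question, but the math guarantees collisions exist once there are more than 2^32 candidate strings.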


  • OpenGL setup on Windows

    - by kevin james
    I have been trying to use OpenGL for two days now, first on Mac, then on Windows. The problem with the Mac is that it doesn't support the newer versions of OpenGL. I ran a tutorial that actually did get some things working, but it only works in Xcode (i.e., I can't create a new file, paste in the same code, and get it to work). Because of these issues, I moved to Windows. My Windows 7 machine has OpenGL 4.3, which is the same version used in a lot of other tutorials. However, not one of these tutorials gives any instruction on how to set it up for the first time. I have come across some vague posts saying that some libraries need to be linked. But WHAT libraries, and HOW do I link them? Please help; I am pretty desperate to set this up, as this project is due for work soon. I have actually used OpenGL before at my university, but the computers there already had everything set up. The project itself is very easy, but setting up OpenGL is not something I know how to do.


  • Object detection in a bitmap JavaScript canvas

    - by user1538127
    I want to detect clicks on canvas elements that are drawn using paths. So far my idea is to store each element's path in a JavaScript data structure and then check which element's coordinates match the coordinates of the hit. I believe there is already an algorithm for this kind of coordinate search; re-rendering each element's path and testing for hits would be inefficient when the number of elements is large. Can anyone point me to it?


  • New to CG shader programming: what program should I use to write and test shaders?

    - by Notbad
    I have started writing some shaders. The first ones were fairly easy to write in Notepad, but now I need something with a bit more meat. I have checked out RenderMonkey, which seems to support CG, but it is really old and I don't know if it is a good option. On the other hand there is FX Composer 2.0, but it seems like something that could really distract me from learning shaders, because it is a pretty deep program. Are there any other possibilities? There's a really nice alternative for writing shaders named ShaderToy, but it just supports GLSL. Any information will be really welcome. Thanks in advance.


  • Where to start? (3D Modeling)

    - by herfus
    I'm looking for a good resource to start learning 3D modeling, something that starts with the basics (e.g., terminology: what quads, triangles, etc. are) before or while going into the actual modeling. A book, website, or video, anything will do; I'm only concerned with the quality of the tutorials and how thorough they are. I have experience with texturing, level design and so on, but I've never created anything more than simple shapes or edits of existing assets.


  • Which Graphics/Geometry abstraction to choose?

    - by Robz
    I've been thinking about the design for a browser app on the HTML5 canvas that simulates a 2D robot zooming around, sensing the world around it. I decided to do this from scratch just for fun. I need shapes, like polygons, circles, and lines in order to model the robot and the world it lives in. These shapes need to be drawn with different appearance attributes, like border/fill style/width/color. I also need to have geometry functions to detect intersections and containment for the robot's sensors and so that the robot doesn't go inside stuff. One idea for functions is to have two totally separate libraries, one to implement graphics (like drawShape(context, shape)) and one for geometry operations (like shapeIntersectsShape(shape1, shape2)). Or, in a more object-oriented approach, the shape objects themselves could implement methods to do their own graphics (shape.draw(context)) and geometry operations (shape1.intersects(shape2)). Then there is the data itself: whether the data to draw a shape and the data to do geometric operations on that shape should be encapsulated within the same object, or be separate structures (where one would contain the other, or both be contained inside another structure). How do existing applications that do graphics/geometry stuff deal with this? Is there one model that is best, or is each good for certain applications? Should the fact that I'm using Javascript instead of a more classical language change how I approach the design?
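    To make the trade-off concrete, here is a small sketch of the separated style the question describes (my own, and in Java rather than the JavaScript the app would use): plain shape data, geometry as pure functions, and drawing kept in its own module next to the appearance attributes:

        // Shape data only: no drawing or math methods attached.
        final class Circle {
            final double cx, cy, r;
            Circle(double cx, double cy, double r) { this.cx = cx; this.cy = cy; this.r = r; }
        }

        // Appearance attributes live beside the shape, not inside it.
        final class Style {
            String strokeColor = "black";
            String fillColor = "white";
            double lineWidth = 1.0;
        }

        // Geometry library: pure functions, reusable for sensors and collisions.
        final class Geometry {
            static boolean contains(Circle c, double px, double py) {
                double dx = px - c.cx, dy = py - c.cy;
                return dx * dx + dy * dy <= c.r * c.r;
            }
            static boolean intersects(Circle a, Circle b) {
                double dx = b.cx - a.cx, dy = b.cy - a.cy, rs = a.r + b.r;
                return dx * dx + dy * dy <= rs * rs;
            }
        }

        // Graphics library: the only module that knows about the canvas.
        final class Painter {
            void draw(Circle c, Style s) {
                // issue context2d-style calls here (arc, stroke, fill, ...)
            }
        }

    The object-oriented alternative folds contains/intersects/draw into the shape itself; the split above keeps the geometry unit-testable without a rendering context, which tends to matter for a simulation like this one.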


  • Movement of body after applying weld joint

    - by ved
    I have two rectangular bodies, and I've successfully applied a WeldJoint to them. Now I want to move the joined body by applying a linear impulse. After the weld joint, these two bodies become a single body, right? How do I apply a force/impulse to the joined body? I am using Box2D with LibGDX. I've tried this:

        polygon1.applyLinearImpulse(new Vector2(-5, 0), polygon1.getWorldCenter(), true);

    I thought that if I moved polygon1 then polygon2 would also move because of the weld joint, but it is not working properly. Why don't they move together after being welded?
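    For what it's worth, a weld joint constrains two bodies but does not merge them into one, so an impulse sized for a single body may barely move the pair. A hedged LibGDX Box2D sketch (the helper and its mass scaling are my assumption; the API calls themselves are real):

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;

        final class WeldedMover {
            /** Push a welded pair with a desired velocity change (vx, vy). */
            static void push(Body polygon1, Body polygon2, float vx, float vy) {
                // Scale the impulse by the combined mass so the pair actually
                // reaches the intended speed; the joint carries the second body.
                float totalMass = polygon1.getMass() + polygon2.getMass();
                Vector2 impulse = new Vector2(vx * totalMass, vy * totalMass);

                // Apply at the pair's combined center of mass to avoid adding spin.
                Vector2 com = polygon1.getWorldCenter().cpy().scl(polygon1.getMass())
                        .add(polygon2.getWorldCenter().cpy().scl(polygon2.getMass()))
                        .scl(1f / totalMass);
                polygon1.applyLinearImpulse(impulse, com, true);
            }
        }

    WeldedMover.push(polygon1, polygon2, -5f, 0f) reproduces the question's push but with momentum sized for both bodies; also make sure both bodies are dynamic, since a weld joint to a static body pins the pair in place.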

