Search Results

Search found 1449 results on 58 pages for 'coordinate geometry'.


  • Tkinter on Ubuntu 14.04 seems not to work

    - by empedokles
    I receive the following traceback:

        Traceback (most recent call last):
          File "tkinter_basic_frame.py", line 4, in <module>
            from Tkinter import Tk, Frame, BOTH
          File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 42, in <module>
            raise ImportError, str(msg) + ', please install the python-tk package'
        ImportError: No module named _tkinter, please install the python-tk package

    This is the demo script I'm trying to run:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        from Tkinter import Tk, Frame, BOTH

        class Example(Frame):
            def __init__(self, parent):
                Frame.__init__(self, parent, background="white")
                self.parent = parent
                self.initUI()

            def initUI(self):
                self.parent.title("Simple")
                self.pack(fill=BOTH, expand=1)

        def main():
            root = Tk()
            root.geometry("250x150+300+300")
            app = Example(root)
            root.mainloop()

        if __name__ == '__main__':
            main()

    From my knowledge Tkinter should be included in Python 2.7. Why do I receive the traceback? Doesn't Ubuntu contain the standard Python distribution? This is solved. I had to install it manually in Synaptic (got the hint in the meantime from another forum), see here: Wikipedia says: "Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk GUI toolkit and is Python's de facto standard GUI, and is included with the standard Windows and Mac OS X install of Python." - Not good that it isn't included in Ubuntu as well. Tkinter on Wikipedia

    Read the article

  • How can I find the right UV coordinates for interpolating a bezier curve?

    - by ssb
    I'll let this picture do the talking. I'm trying to create a mesh from a bezier curve and then add a texture to it. The problem here is that the interpolation points along the curve do not increase linearly, so points farther from the control point (near the endpoints) stretch and those in the bend contract, causing the texture to be uneven across the curve, which can be problematic when using a pattern like stripes on a road. How can I determine how far along the curve the vertices actually are so I can give a proper UV coordinate? EDIT: Allow me to clarify that I'm not talking about the trapezoidal distortion of the roads. That I know is normal and I'm not concerned about. I've updated the image to show more clearly where my concerns are. Interpolating over the curve I get 10 segments, but each of these 10 segments is not spaced at an equal point along the curve, so I have to account for this in assigning UV data to vertices or else the road texture will stretch/shrink depending on how far apart vertices are at that particular part of the curve.
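
    One way to get evenly spaced U coordinates is arc-length reparameterization: sample the curve finely, build a cumulative-length table, and assign each vertex a U equal to its cumulative length divided by the total length. A minimal sketch in Python, assuming a quadratic Bezier in 2D with control points p0, p1, p2 (the names and sample counts are illustrative, not from any particular engine):

        import math

        def bezier(p0, p1, p2, t):
            # Point on a quadratic Bezier at parameter t.
            x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
            y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
            return x, y

        def arc_length_us(p0, p1, p2, segments=10, samples=200):
            # Cumulative arc-length table from a fine sampling of the curve.
            pts = [bezier(p0, p1, p2, i / samples) for i in range(samples + 1)]
            cum = [0.0]
            for a, b in zip(pts, pts[1:]):
                cum.append(cum[-1] + math.hypot(b[0] - a[0], b[1] - a[1]))
            total = cum[-1]
            # U for each segment boundary placed at uniform t: its fraction of the total length.
            return [cum[round(i * samples / segments)] / total for i in range(segments + 1)]

        print(arc_length_us((0, 0), (60, 80), (120, 0)))

    Each vertex generated at uniform t then takes its U from this table instead of i / segments, so the stripes stay evenly spaced along the road even where the curve bunches up.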

    Read the article

  • Firefox 4 crashing in VNC

    - by MacThePenguin
    On my desktop PC I've been using Kubuntu 10.04 for about a year. Recently I've updated Firefox to version 4 using aptitude and the mozilla-team/firefox-stable repository. Since then, I can't run it when I'm logged in through a VNC session. Firefox crashes immediately: when I try to run it from the console I get this error:

        ###!!! ABORT: X_ShmPutImage: BadShmSeg (invalid shared segment parameter); 3 requests ago: file /build/buildd/firefox-4.0.1+build1+nobinonly/build-tree/mozilla/toolkit/xre/nsX11ErrorHandler.cpp, line 203
        ###!!! ABORT: X_ShmPutImage: BadShmSeg (invalid shared segment parameter); 3 requests ago: file /build/buildd/firefox-4.0.1+build1+nobinonly/build-tree/mozilla/toolkit/xre/nsX11ErrorHandler.cpp, line 203

    Firefox works fine when I run it directly from the PC. Firefox 3.x worked fine also from a VNC session. I tried to turn off the hardware acceleration from the Firefox preferences, but that doesn't fix the problem. firefox --sync, firefox -safe-mode and firefox -ProfileManager also crash the same way. Any idea how to troubleshoot this? Thanks. Edit: additional info. I run vnc (RealVNC 4.1.1) from xinetd, this is the config I use:

        service Xvnc
        {
            type        = UNLISTED
            disable     = no
            socket_type = stream
            protocol    = tcp
            wait        = yes
            user        = root
            server      = /usr/bin/Xvnc4
            server_args = -inetd :1 -desktop vnc5901 -query localhost -geometry 1160x675 -depth 16 -once -DisconnectClients=0 -NeverShared passwordFile=/path/to/vnc/password -render
            port        = 5901
        }

    Read the article

  • How can I better implement A star algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with nodejs in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found this javascript A* library which I've modified to be used on the server as a nodejs module. The library utilizes a Binary Heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100 it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests, on an 800x600 field, the path-finding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy will only do a path-finding call once per second or even as slow as once every 2 seconds (they have to keep updating their path because the players can move freely). But even with this long interval, there are noticeable lag spikes or chops every couple of seconds as the enemies update their paths. I'm willing to approach the problem of path-finding differently, if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.
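
    One common mitigation, under the assumption that enemies don't need pixel-perfect paths: run A* on a much coarser navigation grid (say one cell per 20x20 pixels) and only steer per-pixel between the returned waypoints. A generic sketch, not tied to the linked library:

        CELL = 20  # pixels per navigation cell (assumed granularity)

        def to_cell(px, py):
            # Map a pixel coordinate to a coarse grid cell.
            return px // CELL, py // CELL

        def to_pixel(cx, cy):
            # Centre of a cell, used as a waypoint for per-pixel steering.
            return cx * CELL + CELL // 2, cy * CELL + CELL // 2

        def plan(enemy_pos, player_pos, astar_search, nav_grid):
            # astar_search is whatever pathfinder is already in use; it now sees a
            # 40x30 grid for an 800x600 field instead of 480,000 pixel nodes.
            start, goal = to_cell(*enemy_pos), to_cell(*player_pos)
            cells = astar_search(nav_grid, start, goal)
            return [to_pixel(cx, cy) for cx, cy in cells]

    Staggering when each enemy replans (a different offset inside the one- or two-second interval) also spreads the cost out so the spikes don't all land on the same tick.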

    Read the article

  • Per-vertex animation with VBOs: VBO per character or VBO per animation?

    - by charstar
    Goal: To leverage the richness of well vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs), if possible.

    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load-time, that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will likely be at different frames of the same animation at the same time). Assume color and texture coordinate buffers are static.

    Current Considerations:
    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC)

    Known Trade-Offs:
    In 1 above: Each VBO would be small but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive.
    In 2 above: There would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large.

    Are there any pitfalls to number 2 (aside from finite memory)? I've found a lot of information on what you can do, but no real best practices. Are there other considerations or methods that I am missing?
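
    For option 2, the per-frame offset is plain arithmetic on the interleaved layout. A minimal sketch, assuming tightly packed interleaved position + normal floats (the constants are assumptions, not values from the question):

        FLOATS_PER_VERTEX = 6   # 3 position + 3 normal, assumed interleaved
        BYTES_PER_FLOAT = 4

        def frame_offset_bytes(frame_number, num_vertices):
            # Byte offset of a given animation frame inside the concatenated static VBO.
            return frame_number * num_vertices * FLOATS_PER_VERTEX * BYTES_PER_FLOAT

        # e.g. two characters on frames 12 and 3 of the same 5000-vertex animation
        print([frame_offset_bytes(f, 5000) for f in (12, 3)])

    Each character then binds the same VBO and points its vertex attributes at its own offset before drawing, so no per-frame vertex copies are needed.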

    Read the article

  • OpenGL camera moves faster than player

    - by opiop65
    I have a side scroller game made in OpenGL, and I'm trying to center the player in the viewport when he moves. I know how to do it:

        cameraX = Width / 2 / TileSize - playerPosX
        cameraY = Height / 2 / TileSize - playerPosY

    However, I have a problem. The player and "camera" move, but the player moves faster than the "camera" scrolls. So, the player can actually move out of the screen. Some code, this is how I translate the camera:

        public Camera(){
        }

        public void update(Player p){
            glTranslatef(-p.getPos().x - Main.WIDTH / 64 / 2, -p.getPos().y - Main.HEIGHT / 64 / 2, 1);
        }

    Here's how I move the player:

        public void update(){
            if(Keyboard.isKeyDown(Keyboard.KEY_D)){
                this.move(MOVESPEED, 0);
            }
            if(Keyboard.isKeyDown(Keyboard.KEY_A)){
                this.move(-MOVESPEED, 0);
            }
        }

    The move method:

        public void move(float x, float y){
            this.getPos().set(this.getPos().x + x, this.getPos().y + y);
        }

    And then after I move the player, I update the player's geometry, which shouldn't matter. What am I doing wrong here? This seems like such a simple problem, yet it doesn't work!
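
    One thing that can produce exactly this symptom is that glTranslatef is cumulative: unless the modelview matrix is reset each frame, every update call stacks another translation on top of the previous one, so the world scrolls faster than the player moves. A minimal sketch of the intended per-frame math in plain Python (pure arithmetic rather than the LWJGL calls above), using the centring formula from the top of the post:

        def camera_offset(player_x, player_y, width, height, tile_size):
            # Offset that puts the player at the centre of the viewport,
            # recomputed from scratch every frame (no accumulation).
            cam_x = width / 2 / tile_size - player_x
            cam_y = height / 2 / tile_size - player_y
            return cam_x, cam_y

        # Each frame: reset the transform first, then translate by this offset exactly once.
        print(camera_offset(player_x=40.0, player_y=10.0, width=800, height=600, tile_size=32))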

    Read the article

  • How to fix bad Collada produced by FBX?

    - by David
    I tried to use the FBX SDK (2011.3.1) to load FBX files and save them as Collada files in order to be able to import FBX files in Panda3D. Unfortunately the resulting Collada files are not usable for several reasons, among them:

    There's a Maya specific extra technique diffuse:

        <diffuse>
          <texture texture="Map__2-image" texcoord="CHANNEL0">
            <extra>
              <technique profile="MAYA">
                <wrapU sid="wrapU0">TRUE</wrapU>
                <wrapV sid="wrapV0">TRUE</wrapV>
                <blend_mode>ADD</blend_mode>
              </technique>
            </extra>
          </texture>
        </diffuse>

    It assigns a texcoord channel name that isn't referenced anywhere else in the file (in the previous code sample, no geometry uses "CHANNEL0"...). Every polygon is exported twice, a first time with a basic material (only diffuse color, specular color, etc.) and a second time with a textured material -- this doubles the number of polygons of each model without any valuable reason. Anyway, the resulting Collada file cannot be opened correctly either with OpenCOLLADA or Panda3D's "dae2egg". Does anyone have any experience on how to "fix" it and make it understandable by common and well-reputed Collada importers such as OpenCOLLADA?

    Read the article

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working with upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of occluded objects within walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are on the outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be batched in render batches, and also a lot of objects that can be rendered with instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects in the same VBO, it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip the culling on the CPU, a lot of objects will be processed by the GPU while they are not visible. If you skip batching completely in order to more easily cull on the CPU, there will be an unwanted high number of render calls. I have done some research into existing techniques and theories as to how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room or within a radius of n meters. This could be simplified and optimized through use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state of the art modern graphics engines?
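
    A minimal sketch of the proximity-batching idea from the last paragraph, assuming static objects with a world-space position and a shared material key (a uniform grid stands in for the octree purely for brevity):

        from collections import defaultdict

        BATCH_CELL = 10.0  # metres; assumed batch cell size

        def build_batches(objects):
            # objects: iterable of (material_id, (x, y, z)) tuples. Objects sharing a
            # material and the same coarse spatial cell go into one batch, so a whole
            # batch can be frustum/occlusion culled with a single bounds test.
            batches = defaultdict(list)
            for material_id, (x, y, z) in objects:
                cell = (int(x // BATCH_CELL), int(y // BATCH_CELL), int(z // BATCH_CELL))
                batches[(material_id, cell)].append((x, y, z))
            return batches

        print(build_batches([("chair", (1.0, 0.0, 2.0)), ("chair", (3.0, 0.0, 2.5))]))

    Each batch then carries its own bounding box, so the CPU cull is per batch rather than per object, and the draw-call count is bounded by materials times occupied cells instead of by object count.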

    Read the article

  • Suitability of ground fog using layered alpha quads?

    - by Nick Wiggill
    A layered approach would use a series of massive alpha-textured quads arranged parallel to the ground, intersecting all intervening terrain geometry, to provide the illusion of ground fog quite effectively from high up, looking down, and somewhat less effectively when inside the fog and looking toward the horizon (see image below). Alternatively, a shader-heavy approach would instead calculate density as function of view distance into the ground fog substrate, and output the fragment value based on that. Without having to performance-test each approach myself, I would like first to hear others' experiences (not speculation!) on what sort of performance impact the layered alpha texture approach is likely to have. I ask specifically due to the oft-cited impacts of overdraw (not sure how fill-rate bound your average desktop system is). A list of games using this approach, particularly older games, would be immensely useful: if this was viable on pre DX9/OpenGL2 hardware, it is likely to work fine for me. One big question is in regards to this sort of effect: (Image credit goes to Lume of lume.com) Notice how the vertical fog gradation is continuous / smooth. OTOH, using textured quad layers, I can only assume that layers would be mighty obvious when walking through them -- the more sparse they were, the more obvious this would be. This is in contrast to where fog planes are aligned to face the player every frame, where this coarseness would be much less obvious.

    Read the article

  • AutoVue 20.2 for Agile Released

    - by Kerrie Foy
    I saw an important post on Oracle's AutoVue Enterprise Visualization Blog that I wanted to share with you all in the Agile community. This was originally posted by Angus Graham here.

    AutoVue 20.2 for Agile Released

    Oracle’s AutoVue 20.2 for Agile PLM is now available on Oracle’s Software Delivery Cloud. This latest release allows Agile PLM customers to take advantage of new AutoVue 20.2 features in the following Agile PLM environments: 9.3.1.x; 9.3.0. AutoVue 20.2 delivers improvements in the following areas.

    New Format Support: AutoVue 20.2 adds support for the latest versions of popular file formats including:
    ECAD: Cadence Concept HDL 16.5, Allegro Layout 16.5, Orcad Capture 16.5, Board Station ASCII Symbol Geometry, Cadence Cell Library
    MCAD: CATIA V5 R21, PTC Creo Parametric 1.0, Creo Element\Direct Modeling 17.10, 17.20, 17.25, 17.30, 18.00, SolidWorks 2012, SolidEdge ST3 & ST4, PLM XML
    2D CAD: Creo Element/Direct Drafting 17.10 to 18.00
    Office: MS Office 2010: Word, Excel, PowerPoint, Outlook

    Enhancements to AutoVue enterprise readiness: reliability and performance improvements, as well as security enhancements which adhere to Oracle’s Software Security Assurance standards.
    Updated version of AutoVue Document Print Service offerings, which include the ability to select CAD layers for printing.

    For further details, check out the What’s New in AutoVue 20.2 datasheet.

    Read the article

  • Camera field of view: 3D projections & trigonometry

    - by Thomas O
    Okay, here goes. I have a camera at (Xc, Yc, Zc.) The Xc and Yc coordinates are latitude/longitude, and the Zc coordinate is an altitude in metres. I have a point at (Xp, Yp, Zp) and a field of view on the camera (Th1, Th2) - where Th1 is horizontal FOV and Th2 is vertical FOV. Given this information, I'd like to: test if the point is visible (i.e. in the camera's FOV) project the point as the camera would see it I've figured out already that the camera's horizontal view at any given distance is tan(Th1) * distance, but I don't know how to test if the point is visible. Accuracy is not critical. I would prefer a simple solution over a complicated solution, if it works well enough. The computations will be performed by a small microcontroller, which isn't very fast at things like trig functions. P.S. this is not homework, I'm doing this for some game development. It will be integrated with the real world, hence the latitude/longitude/altitude. It involves flying real RC planes through virtual hoops (or chasing virtual targets), so I have to project the positions of these hoops on a display.
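
    A minimal sketch of the visibility test and projection in Python, assuming the point has already been converted to a metre-based offset from the camera (east, north, up) and that the camera's heading and pitch are known; all names and conventions here are assumptions, not part of the original question:

        import math

        def visible_and_projected(dx, dy, dz, heading, pitch, h_fov, v_fov, screen_w, screen_h):
            # dx, dy, dz: point minus camera, in metres (east, north, up).
            # heading/pitch in radians; h_fov/v_fov are full field-of-view angles.
            az = math.atan2(dx, dy) - heading              # horizontal angle off the view axis
            az = (az + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
            el = math.atan2(dz, math.hypot(dx, dy)) - pitch
            visible = abs(az) <= h_fov / 2 and abs(el) <= v_fov / 2
            # Simple pinhole projection of the two angles onto the screen.
            x = screen_w / 2 * (1 + math.tan(az) / math.tan(h_fov / 2))
            y = screen_h / 2 * (1 - math.tan(el) / math.tan(v_fov / 2))
            return visible, (x, y)

        print(visible_and_projected(10, 50, 2, 0.0, 0.0,
                                    math.radians(60), math.radians(40), 320, 240))

    If the trig is too slow on the microcontroller, the same in/out test can be done by comparing slopes (dx/dy against the tangent of the half-angle, precomputed once), which avoids atan2 entirely.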

    Read the article

  • Issue in understanding how to compare performance of classifier using ROC

    - by user1214586
    I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x. The results of the classifier are probabilistic values. There are 3 classes/categories with labels A, B, C which classify hand gestures, with 100 samples for each class to be classified (single feature and data length = 100). The data are different time series (x coordinate vs time). The training set is used to assign probabilities indicating which gesture has occurred how many times. So, out of 10 training samples, if gesture A appeared 6 times then the probability that a gesture falls under category A is P(A)=0.6; similarly P(B)=0.3 and P(C)=0.1. Now, I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA) and a neural network. On what basis, with what parameters and by what method should I do this if I use ROC curves or cross-validation? Since the features for my classifier are the probabilistic values for the ROC plot, what should the features be for k-NN, Bayes classification and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help. I am in a fix.
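
    For the ROC comparison specifically, the usual multi-class recipe is one-vs-rest: binarize the labels and feed each classifier's per-class scores (probabilities or decision values) into the same ROC routine, so every model is judged on identical ground. A small illustrative sketch with scikit-learn (the arrays are placeholders, not the gesture data):

        import numpy as np
        from sklearn.preprocessing import label_binarize
        from sklearn.metrics import roc_curve, auc

        classes = ['A', 'B', 'C']
        y_true = np.array(['A', 'B', 'C', 'A', 'B', 'C'])   # placeholder labels
        y_score = np.array([[0.6, 0.3, 0.1],                # placeholder per-class scores
                            [0.2, 0.7, 0.1],
                            [0.1, 0.2, 0.7],
                            [0.5, 0.4, 0.1],
                            [0.3, 0.6, 0.1],
                            [0.2, 0.2, 0.6]])

        y_bin = label_binarize(y_true, classes=classes)     # one-vs-rest ground truth
        for i, label in enumerate(classes):
            fpr, tpr, _ = roc_curve(y_bin[:, i], y_score[:, i])
            print(label, 'AUC =', auc(fpr, tpr))

    k-NN, naive Bayes and a PCA-plus-classifier pipeline all expose per-class probabilities (predict_proba) in scikit-learn, so the same loop works for each of them; k itself is normally chosen by cross-validation rather than from the number of classes.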

    Read the article

  • Tessellating to a curve?

    - by Avi
    I'm creating a game engine, and I'm trying to define a 3D model format I want to use. I haven't come across a format that quite does what I want. My game engine assumes a shader model 5+ environment. By the time I'm finished with it, that won't be a very unreasonable requirement. Because it assumes such a modern environment, I'm going to try and exploit tessellation. The most popular way, it seems, to procedurally increase geometry through tessellation is to tessellate to a height map. This works for a lot of things, but has limitations in that height maps still use up VRAM and also only have finite scalability. So I want to be able to use curves to define what a mesh should tessellate to. The thing is, I have no idea what definition of curves I should use, how I should store it, and how I should tessellate to it. Do I use NURBS curves? Bezier? Hermite? And once I figure that out, is there an algorithm to determine how the tessellation shader should produce and move vertices to match the curve as closely as possible? Is the infinite scalability and lower memory usage when compared to height maps worth the added computational complexity? I'm sorry I'm kind of ignorant as to these matters. I just don't know where to start.
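
    As a reference point for the "how do I tessellate to it" part, here is a CPU-side sketch of adaptive subdivision of a cubic Bezier using a flatness test; a tessellation shader would do the same job per patch, choosing the subdivision level from screen-space error instead of recursing. Assumptions: 2D points and cubic control points p0..p3, purely illustrative:

        def lerp(a, b, t):
            return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

        def flat_enough(p0, p1, p2, p3, tol):
            # Distance of the inner control points from the chord, as a flatness measure.
            def dist(p, a, b):
                ax, ay = b[0] - a[0], b[1] - a[1]
                return abs((p[0] - a[0]) * ay - (p[1] - a[1]) * ax) / max((ax * ax + ay * ay) ** 0.5, 1e-9)
            return max(dist(p1, p0, p3), dist(p2, p0, p3)) <= tol

        def tessellate(p0, p1, p2, p3, tol=0.25):
            # De Casteljau split until each piece is flat enough, then emit its endpoints.
            if flat_enough(p0, p1, p2, p3, tol):
                return [p0, p3]
            a, b, c = lerp(p0, p1, 0.5), lerp(p1, p2, 0.5), lerp(p2, p3, 0.5)
            d, e = lerp(a, b, 0.5), lerp(b, c, 0.5)
            m = lerp(d, e, 0.5)
            return tessellate(p0, a, d, m, tol)[:-1] + tessellate(m, e, c, p3, tol)

        print(len(tessellate((0, 0), (0, 100), (100, 100), (100, 0))))

    The storage question is separate: the control points themselves are tiny compared to a height map, which is where the memory win over height-map displacement comes from.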

    Read the article

  • Can't use the hardware scissor any more, should I use the stencil buffer or manually clip sprites?

    - by Alex Ames
    I wrote a simple UI system for my game. There is a clip flag on my widgets that you can use to tell a widget to clip any children that try to draw outside their parent's box (for scrollboxes for example). The clip flag uses glScissor, which is fed an axis aligned rectangle. I just added arbitrary rotation and transformations to my widgets, so I can rotate or scale them however I want. Unfortunately, this breaks the scissor that I was using as now my clip rectangle might not be axis aligned. There are two ways I can think of to fix this: either by using the stencil buffer to define the drawable area, or by having a wrapper function around my sprite drawing function that will adjust the vertices and texture coords of the sprites being drawn based on the clipper on the top of a clipper stack. Of course, there may also be other options I can't think of (something fancy with shaders possibly?). I'm not sure which way to go at the moment. Changing the implementation of my scissor functions to use the stencil buffer probably requires the smallest change, but I'm not sure how much overhead that has compared to the coordinate adjusting or if the performance difference is even worth considering.
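
    If the wrapper-function route wins out, the core of it is a convex polygon clip that interpolates UVs alongside positions. A purely illustrative sketch (plain Python, Sutherland-Hodgman, counter-clockwise winding assumed; not tied to any particular sprite batcher):

        def clip_textured_polygon(verts, uvs, clip_poly):
            # Clip a textured polygon against a convex clipper (e.g. the parent widget's
            # rotated rectangle). UVs are interpolated so the texture is cropped, not squashed.
            verts, uvs = list(verts), list(uvs)
            def inside(p, a, b):
                return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
            def intersect(p, q, pu, qu, a, b):
                dx, dy = q[0] - p[0], q[1] - p[1]
                ex, ey = b[0] - a[0], b[1] - a[1]
                t = (ex * (p[1] - a[1]) - ey * (p[0] - a[0])) / (ey * dx - ex * dy)
                return ((p[0] + t * dx, p[1] + t * dy),
                        (pu[0] + t * (qu[0] - pu[0]), pu[1] + t * (qu[1] - pu[1])))
            for a, b in zip(clip_poly, clip_poly[1:] + clip_poly[:1]):
                out_v, out_uv = [], []
                edges = zip(zip(verts, uvs), zip(verts[1:] + verts[:1], uvs[1:] + uvs[:1]))
                for (p, pu), (q, qu) in edges:
                    if inside(q, a, b):
                        if not inside(p, a, b):
                            v, u = intersect(p, q, pu, qu, a, b)
                            out_v.append(v); out_uv.append(u)
                        out_v.append(q); out_uv.append(qu)
                    elif inside(p, a, b):
                        v, u = intersect(p, q, pu, qu, a, b)
                        out_v.append(v); out_uv.append(u)
                verts, uvs = out_v, out_uv
                if not verts:
                    break
            return verts, uvs

    The stencil route avoids this CPU work entirely at the cost of an extra masked draw per clipper; which is cheaper in practice depends on how many clipped widgets are on screen at once.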

    Read the article

  • Does Unity have existing support for Timelines?

    - by Raven Dreamer
    I am planning the development of a game in Unity3D, and trying to come to terms with what the engine has already provided, and what I must code myself. The game itself is going to be a rhythm game, which means synching audio and graphical events so that they always play when they're supposed to. What I'm looking to avoid is a potential scenario of lag where either the audio or the graphics starts to progress faster than the other. When we discussed this type of coordinating system in my game design class back at university, my professor called this type of design a "Timeline" class. The idea being that you can instantiate one or more of these to progress at different rates, schedule things to happen in the future, and synch up periodic events. However, calling this a "Timeline" class seems to have been limited to my professor himself, as googling for whether a certain API features "Timeline" functionality has been a fruitless endeavor. Is there some more common name for this kind of functionality? Does Unity have any pre-existing methods to coordinate the scheduling of events like this, or is this the kind of thing that needs to be built onto the engine? And if it does, I'd appreciate being pointed towards some tutorials!
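
    For what the professor called a "Timeline", the more common names are a scheduler or sequencer, and the core of it is small. A language-agnostic sketch of the idea in Python (this is not a Unity API; inside Unity the equivalent would typically be driven from a MonoBehaviour's Update using Time.deltaTime):

        import heapq, itertools

        class Timeline:
            def __init__(self, rate=1.0):
                self.rate = rate               # progress speed relative to real time
                self.now = 0.0
                self._events = []              # heap of (fire_time, seq, callback)
                self._seq = itertools.count()  # tie-breaker so callbacks are never compared

            def schedule(self, delay, callback):
                heapq.heappush(self._events, (self.now + delay, next(self._seq), callback))

            def advance(self, dt):
                # Called once per frame with the frame's delta time.
                self.now += dt * self.rate
                while self._events and self._events[0][0] <= self.now:
                    _, _, callback = heapq.heappop(self._events)
                    callback()

        beats = Timeline()
        beats.schedule(0.5, lambda: print("beat"))
        beats.advance(0.6)  # fires the scheduled callback

    Several timelines can run side by side at different rates, which is the "one or more of these progressing at different rates" behaviour described above; keeping the audio clock authoritative and scheduling the graphics off it is the usual way to stop the two drifting apart in a rhythm game.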

    Read the article

  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light indexed deferred rendering from here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there was one point that I failed to understand. It is in fact one of the most interesting ones, as it explains how to implement transparency with this approach: Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a “shadow volume like” technique of rendering back faces – incrementing stencil where depth is greater than – then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment. Can anyone explain how exactly you need to render only front faces? Another question is why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap at this pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?

    Read the article

  • Xna Equivalent of Viewport.Unproject in a draw call as a matrix transformation

    - by Nick Crowther
    I am making a 2D sidescroller and I would like to draw my sprite to world space instead of client space, so I do not have to lock it to the center of the screen; when the camera stops, the sprite will walk off screen instead of being stuck at the center. In order to do this I wanted to make a transformation matrix that goes in my draw call. I have seen something like this: http://stackoverflow.com/questions/3570192/xna-viewport-projection-and-spritebatch I have seen Matrix.CreateOrthographic() used to go from world space to client space, but how would I go about using it to go from client space to world space? I was going to try putting my returns from the Viewport.Unproject method I have into a scale matrix such as:

        blah = Matrix.CreateScale(unproject.X, unproject.Y, 0);

    however, that doesn't seem to work correctly. Here is what I'm calling in my draw method (where X is the coordinate my camera should follow):

        Vector3 test = screentoworld(X, graphics);
        var clienttoworld = Matrix.CreateScale(test.X, test.Y, 0);
        animationPlayer.Draw(theSpriteBatch, new Vector2(X.X, X.Y), false, false, 0, Color.White, new Vector2(1, 1), clienttoworld);

    Here is my code in my unproject method:

        Vector3 screentoworld(Vector2 some, GraphicsDevice graphics)
        {
            Vector2 Position = new Vector2(some.X, some.Y);
            var project = Matrix.CreateOrthographic(5*graphicsdevice.Viewport.Width, graphicsdevice.Viewport.Height, 0, 1);
            var viewMatrix = Matrix.CreateLookAt(
                new Vector3(0, 0, -4.3f), new Vector3(X.X, X.Y, 0), Vector3.Up);
            //I have also tried substituting (cam.Position.X, cam.Position.Y, 0) in for the (0, 0, -4.3f)

            Vector3 nearSource = new Vector3(Position, 0f);
            Vector3 nearPoint = graphicsdevice.Viewport.Unproject(nearSource, project, viewMatrix, Matrix.Identity);
            return nearPoint;
        }
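
    Conceptually, unprojecting is applying the inverse of the combined view and projection transform to a point in normalized device coordinates, which is what Viewport.Unproject does internally, so the result is a world-space position rather than something to feed into a scale matrix. A small numpy sketch of that idea, purely to illustrate the math (the orthographic matrix and screen sizes here are assumptions, not XNA's API):

        import numpy as np

        def ortho(width, height, near=0.0, far=1.0):
            # Orthographic projection mapping [0,width] x [0,height] (y down) to NDC [-1,1].
            return np.array([
                [2.0 / width, 0.0, 0.0, -1.0],
                [0.0, -2.0 / height, 0.0, 1.0],
                [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
                [0.0, 0.0, 0.0, 1.0],
            ])

        def screen_to_world(px, py, width, height, view):
            # Screen pixel -> NDC -> world, via the inverse of projection * view.
            ndc = np.array([2.0 * px / width - 1.0, 1.0 - 2.0 * py / height, 0.0, 1.0])
            world = np.linalg.inv(ortho(width, height) @ view) @ ndc
            return world[:3] / world[3]

        print(screen_to_world(400, 300, 800, 600, np.eye(4)))

    In SpriteBatch terms the usual shortcut goes the other direction: build a world-to-client camera matrix (a translation by minus the camera position, offset so the camera target sits at the screen centre), pass it to SpriteBatch.Begin, and draw every sprite at its world position.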

    Read the article

  • How to convert eps file to a large jpeg image

    - by Anand
    Hello, I am using Linux. I want to convert an EPS file to a JPEG file. I find that I can use the "convert" command. However, the resulting image looks very small. I want to enlarge the JPEG file with the -resize option. It seems not to work: the resulting image is purely black. Does anyone have the same problem? Here are more details:

    1: if I use

        convert -scale 1000x1000 your.eps your.jpg

    the resulting image looks like a low quality image. The EPS vector image is not scaled properly.

    2: if I use

        convert -geometry 300% your.eps your.jpg

    I get a pure black image. Here is my pdf file: 2shared.com/document/RXl2Be-g/askquestions.html and my eps file: 2shared.com/file/qrmwKegj/askquestions.html Thank you very much for your help!
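
    A likely explanation, offered as an assumption rather than a confirmed diagnosis: ImageMagick rasterizes the EPS at a default density before -resize or -geometry is applied, so scaling afterwards only enlarges an already low-resolution raster. Setting the read density first usually produces a large, sharp JPEG; a small sketch driving the same convert command from Python:

        import subprocess

        def eps_to_jpeg(eps_path, jpg_path, dpi=300):
            # -density must come before the input file so the EPS is rasterized at
            # high resolution; -resize then scales the already-sharp raster.
            subprocess.run(
                ["convert", "-density", str(dpi), eps_path,
                 "-resize", "1000x1000", jpg_path],
                check=True,
            )

        eps_to_jpeg("your.eps", "your.jpg")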

    Read the article

  • Blur gets displaced compared to original image

    - by user1294203
    I have implemented SSAO and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel since that was my noise kernel in SSAO. The following is the blurring shader:

        float result = 0.0;
        for(int i = 0; i < 4; i++){
            for(int j = 0; j < 4; j++){
                vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                result += texture(aoSampler, TexCoord + offset).r;
            }
        }
        out_AO = vec4(vec3(0.0), result * 0.0625);

    Where TEXEL_SIZE is one over my window resolution. I was thinking that this was an error based on how OpenGL counts the texel center, so I tried displacing the texture coordinate I was using by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has wrap parameters:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how neighboring pixels are sampled. Any thoughts?
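
    One property of that loop worth checking, shown with plain arithmetic below: the sixteen offsets run from 0 to +3 texels in each direction, so the averaged window is centred 1.5 texels away from the fragment, which shows up as exactly this kind of displacement. Centring the offsets (a hedged sketch of the idea, not a drop-in shader change) means sampling at -1.5 to +1.5 texels instead:

        TEXEL = 1.0 / 1024.0  # example texel size for a 1024-wide AO buffer

        uncentred = [TEXEL * i for i in range(4)]          # 0, 1, 2, 3 texels
        centred = [TEXEL * (i - 1.5) for i in range(4)]    # -1.5, -0.5, 0.5, 1.5 texels

        # The mean offset is the apparent shift of the blurred image.
        print(sum(uncentred) / 4)  # 1.5 texels off to one side -> visible displacement
        print(sum(centred) / 4)    # 0.0 -> the blur stays centred on the fragment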

    Read the article

  • Rendering Text with the HTML5 Canvas

    - by dwahlin
    In a previous post I walked through the fundamentals of rendering shapes such as squares and circles using the HTML5 Canvas API. In this post I’ll provide a simple example of rendering and rotating text. To render text you can use the fillText() or strokeText() functions, which take the text to render as well as the x and y coordinates of where to render it. To rotate text you can use the transform functions available with the HTML5 Canvas such as save(), rotate(), and restore(). To run the live demos that follow click the Result tab in the blue bar of each demo.

    Rendering Text
    This example provides a simple look at how text can be rendered using the HTML5 Canvas. It iterates through a loop, updates the text and font size dynamically, measures the width of the text using the measureText() function, and then calls fillText() to render the text with the desired font size to the screen. Here’s what the code above renders:

    Rotating Text
    This example shows how text can be rendered and even rotated by using transform functions built into the HTML5 Canvas. The code starts by rendering text the standard way using fillText(). It then saves the state of the canvas, performs an x,y coordinate transform (moves to 100, 300 respectively) and then rotates the canvas –90 degrees using the rotate() function. After the text is rendered, the canvas is reverted back to its existing state (saved by calling the save() function) by calling the restore() function. An additional line of text is then rendered. Here’s what the code above renders:

    If you’re interested in learning more about the HTML5 Canvas and how it can be used in your Web or Windows 8 applications, check out my HTML5 Canvas Fundamentals course from Pluralsight.

    Read the article

  • Does Unity's "Transparent Bumped Specular" translate to "semi-shiny must be semi-transparent"?

    - by Shivan Dragon
    Unity's documentation for the "Transparent Bumped Specular" shader/material-type is simply a concatenation of the descriptions of its Transparent and Specular shaders (and also Bumped, but that doesn't apply to the question):

    Transparent Properties: This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent while 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque. (...)

    Specular Properties: (...) Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called "gloss map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection.

    To me this translates to:
    - I have a mesh representing a car tire.
    - The texture needs to be very shiny on the rim parts, and almost not shiny at all for the rubber parts.
    - Also, since the rim is really complex (with cut-out decorations and such), I will not build that into the mesh, but fake it with transparency in the texture.
    - I can't do all this using Unity's "Transparent Bumped Specular" shader, because the "rubber" part of the texture will become semi-transparent due to me painting the alpha channel dark grey (because I want it to also be less shiny).

    Is this correct? If not, how can I make this work?

    Read the article

  • What's the best algorithm for... [closed]

    - by Paska
    Hi programmers! Today a little problem came up. I have an array of coordinates (latitude and longitude) made in this way:

        [0] = "45.01234,9.12345"
        [1] = "46.11111,9.12345"
        [2] = "47.22222,9.98765"
        [...] etc

    In a loop, I convert these coordinates to meters (UTM northing / UTM easting) and after that I convert these coords to pixels (X / Y) on screen (the output device is an iPhone) to draw a route line on a custom map.

        [0] = "512335.00000,502333.666666"
        [...] etc

    The resulting pixels are passed to a method that draws a line on screen (simulating a route calculation).

        [0] = "20,30"
        [1] = "21,31"
        [2] = "25,40"
        [...] etc

    As there are too many coordinates (lat/lon), I need to truncate the lat/lon array, eliminating the values that don't fall inside the map bounds (the visible part of the map on screen). The map bounds are 2 couples of lat/lon coords, upper left and lower right. Now, what is the best way to loop over this array (NOT SORTED), check whether a value is in bounds or not, and remove the values that are outside, so that a clean array containing only the coords visible on screen is returned? Note: the coords array is a very big array, 4000/5000 couples of items. This is a method that should be run on every drag or zoom. Does anyone have an idea to optimize the search and checks on this array? Many thanks, A
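
    A minimal sketch of the straightforward linear filter in Python; even at 4000-5000 points a single pass like this is cheap, and the first optimisation is usually just to parse the strings into numbers once instead of on every drag (the names are illustrative):

        def visible_coords(coords, upper_left, lower_right):
            # coords: list of "lat,lon" strings; bounds are (lat, lon) tuples.
            max_lat, min_lon = upper_left
            min_lat, max_lon = lower_right
            kept = []
            for c in coords:
                lat, lon = map(float, c.split(","))
                if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
                    kept.append(c)
            return kept

        route = ["45.01234,9.12345", "46.11111,9.12345", "47.22222,9.98765"]
        print(visible_coords(route, upper_left=(47.0, 9.0), lower_right=(45.5, 10.0)))

    If that single pass ever shows up in profiling, bucketing the points into a coarse lat/lon grid ahead of time lets each drag or zoom visit only the cells that overlap the current bounds.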

    Read the article

  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working with a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites, however from what I can determine the draw order is really the only thing that will determine which textures (sprites) appear above or below their neighbors. That is, the "z-index" is a function of the order in which the textures are drawn as opposed to the z coordinate on the vertex array being drawn. So.. I have a texture atlas to save binding multiple textures for each draw call but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of a reasonable size), then I can't maintain a "draw order" across atlases unless I want to bind/unbind the atlas textures more than once.. which kinda defeats the purpose. Does anyone have any clues as to what the best approach is here? Currently I'm running under an assumption that I will have to declare different fixed "depths" (e.g foreground, background etc) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases) I'd love to hear what other people are doing.
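
    A small sketch of the fixed-depth idea from the last paragraph: give each sprite a depth layer and an atlas id, sort by (layer, atlas), and bind an atlas only when it changes; within one layer the relative order of sprites from the same atlas is assumed not to matter. Illustrative Python, with the GL calls abstracted away:

        def draw_sprites(sprites, bind_atlas, draw):
            # sprites: list of dicts with 'layer', 'atlas' and whatever draw() needs.
            current_atlas = None
            for s in sorted(sprites, key=lambda s: (s["layer"], s["atlas"])):
                if s["atlas"] != current_atlas:
                    bind_atlas(s["atlas"])      # texture binds happen at most once
                    current_atlas = s["atlas"]  # per atlas per layer
                draw(s)

        sprites = [
            {"layer": 1, "atlas": "characters.png", "pos": (40, 32)},
            {"layer": 0, "atlas": "background.png", "pos": (0, 0)},
            {"layer": 1, "atlas": "props.png", "pos": (10, 8)},
        ]
        draw_sprites(sprites, bind_atlas=print, draw=print)

    The worst case is still one bind per atlas per layer, so the layer count, not the sprite count, is what bounds the number of texture switches.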

    Read the article

  • How to remove seams from a tile map in 3D?

    - by Grimshaw
    I am using my OpenGL custom engine to render a tilemap made with Tiled, using a well-spread tileset from the web. There is nothing fancy going on. I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera in my 3D world and at Z=0 there is a plane showing me my tiles. Everything is working correctly but I get ugly seams between the tiles. I've tried orthographic and perspective cameras, and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show, but otherwise they are there 99% of the time in multiple patterns, depending on the zoom and camera parameters like field of view. Here's a screenshot of the artifact being shown: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE values. I've looked through many articles on the internet and nothing helped. I tried UV correction so the UVs fall at half a texel, rather than at the edge of the texel, to prevent interpolating with the neighbour value (which is transparency). I tried debugging with my geometry and I verified that with no texture and a random color in each tile, I don't seem to see any seams. All vertices have integer coordinates, i.e. the first tile is a quad from (0,0) to (1,1) and so on. I tried adding a little offset both to the UVs and to the vertices to see if the gaps cease to exist. Disabled multisampling too. Nothing fixed it so far. Thanks.
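
    For reference, here is the half-texel inset arithmetic in its usual form as a plain Python sketch (the tile and atlas sizes are assumed example values); the inset is applied to all four corners of every tile's UV rectangle, measured in atlas texels:

        def tile_uvs(tile_x, tile_y, tile_px, atlas_w, atlas_h, inset=0.5):
            # UV rectangle for a tile, pulled in by `inset` texels on every side so
            # filtering never reads the neighbouring tile in the atlas.
            u0 = (tile_x * tile_px + inset) / atlas_w
            v0 = (tile_y * tile_px + inset) / atlas_h
            u1 = ((tile_x + 1) * tile_px - inset) / atlas_w
            v1 = ((tile_y + 1) * tile_px - inset) / atlas_h
            return u0, v0, u1, v1

        # Tile (3, 1) in a 256x256 atlas of 32px tiles.
        print(tile_uvs(3, 1, 32, 256, 256))

    If the inset is already in place and GL_NEAREST is set, the remaining usual suspects are sub-pixel misalignment between projection and viewport and precision differences between adjacent quads; giving each tile a duplicated 1-2 pixel border in the atlas is the robust fix that works regardless of filtering.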

    Read the article
