Search Results

Search found 5920 results on 237 pages for 'hand drawn'.


  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function.) I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define in a shader some number of lights to be used, right? So if I want to stick with 8, should I write my general purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use? Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied here, or read when needed? My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix and apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can make very quick vertex translations. Is this approach sound? Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed-function to using shaders. Thanks!
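    A common pattern for both concerns is to declare a fixed-size uniform light array plus a uniform light count in the shader, and to keep the matrix stack entirely on the CPU, uploading only the top matrix once per draw call. Below is a minimal C++ sketch of such a client-side stack, assuming column-major matrices and an illustrative uniform name u_modelView (not from the question):

        // Minimal sketch of a client-side matrix stack for a GLES 2.0 port.
        // Mat4, MatrixStack, and "u_modelView" are illustrative names only.
        #include <vector>
        #include <GLES2/gl2.h>

        struct Mat4 {
            float m[16]; // column-major, matching OpenGL conventions
            static Mat4 identity() {
                Mat4 r = {};
                r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
                return r;
            }
            Mat4 operator*(const Mat4& b) const {
                Mat4 r = {};
                for (int col = 0; col < 4; ++col)
                    for (int row = 0; row < 4; ++row)
                        for (int k = 0; k < 4; ++k)
                            r.m[col * 4 + row] += m[k * 4 + row] * b.m[col * 4 + k];
                return r;
            }
        };

        class MatrixStack {
        public:
            MatrixStack() : stack_(1, Mat4::identity()) {}
            void push() { stack_.push_back(stack_.back()); }          // like glPushMatrix
            void pop()  { if (stack_.size() > 1) stack_.pop_back(); } // like glPopMatrix
            void multiply(const Mat4& m) { stack_.back() = stack_.back() * m; }
            const float* top() const { return stack_.back().m; }
        private:
            std::vector<Mat4> stack_;
        };

        // Upload the current top-of-stack once per draw call; the vertex shader
        // then performs a single mat4 * vec4 per vertex.
        void applyModelView(GLuint program, const MatrixStack& stack) {
            GLint loc = glGetUniformLocation(program, "u_modelView");
            glUniformMatrix4fv(loc, 1, GL_FALSE, stack.top());
        }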

    Read the article

  • LL(∞) and left-recursion

    - by Peregring-lk
    I want to understand the relation between LL/LR grammars and the left-recursion problem. (For each question I partially know the answer, but I ask as if I knew nothing, because I am a little confused now and prefer complete answers.) I'm happy with synthesized or short and direct answers (or just links resolving them unambiguously): What type of language isn't an LL(∞) language? Do LL(k) and LL(∞) have problems with left-recursion, or only LL(k) parsers? Do LALR(1) parsers have trouble with left or right recursion? What kind of trouble? Only in terms of the LL/LALR comparison: which is better, Bison (LALR(1)) or Boost.Spirit (LL(∞))? (Let's suppose their other features are irrelevant to this question.) Why does GCC use a (hand-made) LL(∞) parser? Only for the error-handling problem? A sketch illustrating the left-recursion issue follows below.
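    To make the left-recursion issue concrete, here is a small illustrative C++ sketch (not from the question): a recursive-descent (top-down, LL-style) function for the left-recursive rule expr -> expr '+' term | term calls itself before consuming any input and never terminates, while the standard rewrite expr -> term ('+' term)* parses the same strings iteratively:

        // Illustrative sketch: why left recursion breaks a top-down parser,
        // and the standard grammar rewrite that fixes it.
        #include <cctype>
        #include <cstddef>
        #include <string>

        struct Parser {
            std::string input;
            std::size_t pos = 0;

            // Left-recursive rule: expr -> expr '+' term | term
            // A naive recursive-descent function recurses before consuming
            // anything, so it never terminates:
            // bool exprLeftRecursive() { return exprLeftRecursive() && match('+') && term(); }

            // Rewritten rule: expr -> term ('+' term)*
            bool expr() {
                if (!term()) return false;
                while (match('+')) {
                    if (!term()) return false;
                }
                return true;
            }

            bool term() {  // term -> a single digit, for brevity
                if (pos < input.size() && std::isdigit(static_cast<unsigned char>(input[pos]))) {
                    ++pos;
                    return true;
                }
                return false;
            }

            bool match(char c) {
                if (pos < input.size() && input[pos] == c) { ++pos; return true; }
                return false;
            }
        };
        // Usage: Parser p{"1+2+3"}; bool ok = p.expr() && p.pos == p.input.size();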

    Read the article

  • How to remove space between application and workspace switcher on Unity launcher?

    - by Tyler Marengo
    I just downloaded Ubuntu 11.10 for the first time. I really enjoy the launcher on the left-hand side but have a problem. After playing around with adding applications to it, there are large spaces between the last application and the workspace switcher. If I try to drag a new application into the launcher, everything moves farther down instead of filling in the space between apps. When I right-click on a blank spot in the launcher, it reads "drop to add application." I'm looking for a way to have all of the applications close together with no spaces, like when I first downloaded Ubuntu.

    Read the article

  • When to update jQuery?

    - by epaulk
    When do you recommend updating jQuery/jQuery UI? Or in other words: what are the best practices for updating jQuery/jQuery UI? I'm working on a long project that will take at least one more year. In that time span, I'm sure that jQuery/jQuery UI will be updated many times. Do you recommend updating my jQuery/jQuery UI files every time an update is released? Or is it better to stick with a particular version until the end of the project? I'm afraid of "breaking" code changes, and every time an update is released I have to test everything. That takes too much time. But on the other hand, if I don't update, I'm afraid of bugs that will bite me in the rear later. The project is ASP.NET MVC and I use jQuery a lot. Any thoughts?

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x, y, z coordinates of the Hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the output x, y, z coordinates fluctuate from -0 to 1 but not more than that, so I assume I will need to multiply them back by the screen width and height. Even so, it still doesn't animate according to the original x, y, z points. Are there any transformations I might be missing?

        while ((line = r.ReadLine()) != null)
        {
            string[] temp = line.Split(',');
            int x = (int)(float.Parse(temp[0]) * maxWidth);
            int y = (int)(float.Parse(temp[1]) * maxHeight);
        }

    Read the article

  • Is it possible that Unity would some day switch back to Mutter?

    - by David
    I remember that the first Unity was indeed built on Mutter but later ported to Compiz due to poor performance. I also know Canonical has practically incorporated Compiz development to work closely with future Unity, so this is getting less likely. But Compiz just seems pretty outdated now that GNOME 3/GTK 3/Mutter is becoming more mainstream, and it is known to have some performance issues; on the other hand, Mutter seems pretty good and is still steadily developing. I'm just wondering if anyone related to the project is still testing and evaluating the possibility of Unity on Mutter? Not that you have to tell me now whether you're going to do it or not. I just wanna know if anyone is considering it. Thanks.

    Read the article

  • How can I mark a pixel in the stencil buffer?

    - by János Turánszki
    I never used the stencil buffer for anything until now, but I want to change this. I have an idea of how it should work: the GPU discards or keeps rasterized pixels before the pixel shader based on the stencil buffer value at the given position and some stencil operation. What I don't know is how I would mark a pixel in the stencil buffer with a specific value. For example, I draw my scene and want to mark everything drawn with a specific material (this material could be looked up from a texture, so ideally I should mark the pixel in the pixel shader), so that later when I do some post-processing on my scene I would only do it on the marked pixels. I didn't find anything on the internet besides how to set up a stencil buffer and explanations of the different stencil operations. I was expecting to find a system-value semantic like SV_Depth to write to in the pixel shader (because the stencil buffer shares the same resource with the depth buffer in D3D11), but there is no such thing on MSDN. So how should I do this? If I am misunderstanding something, please help me clear that up.
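    As far as I know, plain D3D11 exposes no pixel-shader output for the stencil value (unlike SV_Depth); the value written is the reference passed to OMSetDepthStencilState, and clip()/discard in the pixel shader can reject pixels of other materials so they never reach the output merger and never get marked. A hedged C++ sketch of that setup (function names and the reference value are illustrative):

        // Sketch: create a depth-stencil state that writes the bound reference
        // value wherever a pixel passes; pixels rejected with clip()/discard in
        // the pixel shader are never marked.
        #include <d3d11.h>

        ID3D11DepthStencilState* CreateMarkState(ID3D11Device* device) {
            D3D11_DEPTH_STENCIL_DESC desc = {};
            desc.DepthEnable                  = TRUE;
            desc.DepthWriteMask               = D3D11_DEPTH_WRITE_MASK_ALL;
            desc.DepthFunc                    = D3D11_COMPARISON_LESS;
            desc.StencilEnable                = TRUE;
            desc.StencilReadMask              = 0xFF;
            desc.StencilWriteMask             = 0xFF;
            // Replace the stencil value with the reference wherever the pixel passes.
            desc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
            desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
            desc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;
            desc.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
            desc.BackFace                     = desc.FrontFace;

            ID3D11DepthStencilState* state = nullptr;
            device->CreateDepthStencilState(&desc, &state);
            return state;
        }

        // While drawing the "marked" material, bind with reference value 1:
        //   context->OMSetDepthStencilState(markState, 1);
        // In the post-process pass, bind a second state whose StencilFunc is
        // D3D11_COMPARISON_EQUAL (and stencil write mask 0) with the same
        // reference, so only the marked pixels are shaded.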

    Read the article

  • How to solve dual monitor issue, which happens only during X start?

    - by tamashumi
    When X is loading and two monitors are connected, instead of a login screen I see this: ...after clicking OK, a selection appears: Then I go to a console login, disconnect the secondary monitor cable by hand, and restart lightdm with the command sudo service lightdm restart ...and voila! The system loads fine. If I disconnect the cable before boot, X loads fine too. It's not a nice 'feature' that I have to disconnect the cable on each boot or X restart. I tried deleting monitors.xml but it didn't help. The situation relates to my notebook with an Intel integrated GPU. The same happens on two different pairs of monitors: at the office and at home. How can I fix this? Ubuntu 12.04 x64 Desktop with the default Unity GUI.

    Read the article

  • Java: How to Make a Player Class in a Tile-Based RPG

    - by A.K.
    So I've been following a JavaHub tutorial that basically uses a pixel engine similar to MiniCraft. I've attempted to make a Player class as such, and I'm basically making a mock Pokemon game for learning's sake:

        package pokemon.entity;

        import java.awt.Rectangle;
        import pokemon.gfx.Screen;
        import pokemon.levelgen.Tile;
        import pokemon.entity.SpritesManage;

        public class Player {
            int x, y;
            int vx, vy;
            public Rectangle AshRec;
            public Sprite AshSprite;
            Screen screen;
            Sprite[][] AshSheet;

            public Player() {
                AshSprite = SpritesManage.AshSheet[1][0];
                AshRec = new Rectangle(0, 0, 16, 16);
                x = 0;
                y = 0;
                vx = 1;
                vy = 1;
                screen.renderSprite(0, 0, AshSprite);
            }

            public void update() {
                move();
                checkCollision();
            }

            private void checkCollision() {
            }

            private void move() {
                AshRec.x += vx;
                AshRec.y += vy;
            }

            public void render(Screen screen, int x, int y) {
                screen.renderSprite(x, y, AshSprite);
            }
        }

    I guess what I really want to do is have the Player centered on the screen and have the sprite drawn based on an Input Handler. I'm just stumped as to how to sync these together.

    Read the article

  • GLES2.0 3D Android game performance and multi threading the update?

    - by Ofer
    I have profiled my mixed Java/C++ Android game and I got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink section is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. The thing is, I generate DrawLists in C++ and then send them to Java to process and draw using GLES 2.0. Since then I was able to improve update from 9ms down to about 7ms, but I would like to ask whether I would benefit from multi-threading the update? As I understand from that diagram, the function that takes the most time is the one whose color you see on the timeline. So the pink area is taken mostly by update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it's not drawn to the top of the diagram, can I conclude other things are happening at the same time that use the CPU? Or even GPU work that isn't shown in this diagram? I am not sure how the GPU works on this. Does it calculate things in parallel to the CPU? Or is it part of the CPU usage as in a SoC? I am not sure. Anyway, in case GPU things DO happen in parallel to the CPU, then I would guess that if I run this C++ update in parallel to the thread that makes the OpenGL calls, I might make use of "dead CPU time" due to GPU stalling, or maybe have the GPU calls processed earlier because they won't have to wait for the update to finish? How do you suggest I improve performance based on that? Thanks.
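    One way to try this is to double-buffer the draw lists so the C++ update for frame N+1 runs while the GL thread is still submitting frame N. A rough, illustrative C++ sketch (DrawCommand and the thread loops are placeholders, not the game's real types):

        // Illustrative: hand off completed draw lists from an update thread to
        // the render (GL) thread so the two can overlap.
        #include <condition_variable>
        #include <mutex>
        #include <vector>

        struct DrawCommand { int meshId; float transform[16]; };

        class DrawListExchange {
        public:
            // Called by the update thread when a new frame's commands are ready.
            void publish(std::vector<DrawCommand>&& commands) {
                std::lock_guard<std::mutex> lock(mutex_);
                pending_ = std::move(commands);
                hasPending_ = true;
                ready_.notify_one();
            }

            // Called by the render (GL) thread; blocks until a full list is available.
            std::vector<DrawCommand> acquire() {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] { return hasPending_; });
                hasPending_ = false;
                return std::move(pending_);
            }

        private:
            std::mutex mutex_;
            std::condition_variable ready_;
            std::vector<DrawCommand> pending_;
            bool hasPending_ = false;
        };

        // Update thread: while (running) { exchange.publish(buildDrawList()); }
        // Render thread: while (running) { auto list = exchange.acquire(); submitToGL(list); }
        // The update for frame N+1 can now run while the GL thread still draws frame N.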

    Read the article

  • Is having 'Util' classes a cause for concern? [closed]

    - by Matt Fenwick
    I sometimes create 'Util' classes which primarily serve to hold methods and values that don't really seem to belong elsewhere. But every time I create one of these classes, I think "uh-oh, I'm gonna regret this later ...", because I read somewhere that it's bad. But on the other hand, there seem to be two compelling (at least for me) cases for them: implementation secrets that are used in multiple classes within a package providing useful functionality to augment a class, without cluttering its interface Am I on the way to destruction? What you say !! Should I refactor?

    Read the article

  • What language has the best/most library bindings?

    - by Rook
    A library binding allows a programming language to use a library written in another language. Most commonly you want to access a C library like libcurl from a language like PHP or Python. Not all bindings are created equally, for instance the libcurl binding for Python was abandoned almost 3 years ago and their sf.net bug tracker is overrun with unsolved problems. PHP on the other hand has very good libcurl bindings that are actively maintained. So here is my question: What language has the best and/or the most bindings?

    Read the article

  • What is the value of a let expression

    - by Grzegorz Slawecki
    From what I understand, all code in F# is an expression, including let bindings. Say we have the following code:

        let a = 5
        printfn "%d" a

    I've read that this would be seen by the compiler as

        let a = 5 in ( printfn "%d" a )

    and so the value of all this would be the value of the inner expression, which is the value of printfn. On the other hand, in F# Interactive:

        > let a = 5;;
        val a : int = 5

    which clearly indicates that the value of the let expression is the value bound to the identifier. Q: Can anyone explain what the value of a let expression is? Can it be different in compiled code than in F# Interactive?

    Read the article

  • Making it myself vs. modifying someone else's code as a beginner

    - by JamesGold
    I just started getting into open source projects mainly for the learning experience. I've made a few tiny contributions to some small projects. Most of my time has been spent just reading over other people's code and trying to understand how it works. Often times I find myself frustrated by a lack of documentation and unit tests. There are also times where I think I can see a more intuitive solution to a problem, but implementing it would require large restructuring of code. I see all this and wonder to myself why I don't just start clean on the whole thing by myself and do things "the right way"? I'd also enjoy the experience of building it from scratch, as it would force me to learn skills that I might not learn by working on other people's code. On the other hand, working on other people's code is also a great experience because it requires me to understand and work with other people's code and collaborate with them. It's just harder, IMO. Thoughts?

    Read the article

  • Flowchart for solving programming problems

    - by nurne
    I noticed that every developer implements a somewhat different flowchart for solving programming problems. By flowchart I mean a defined system of techniques that the developer goes through in a certain sequence, trying to solve the problem at hand. Some examples for techniques: Google "how to..." or "... tutorial". Search the java/msdn/apple/etc API doc for the specific class or method. Search in stack overflow the exact problem with some tags like [iphone]/[java] etc. Take a nap and let the subconscious work. Debug. Draw the algorithm or system. Google the logged error message. Ask a colleague or manager. Ask a new question in stack overflow. From your experience, what is the best flowchart for solving a programming problem?

    Read the article

  • Radeon Open Source Drivers Configuration

    - by Andy Turfer
    How does one configure the Radeon Open Source drivers? I have just installed Ubuntu 12.10 and want to try the Open Source drivers instead of the proprietary AMD binaries. After the installation, the driver seems to be installed, I have wobbly windows working (can't use a PC without wobbly windows!), and life is generally good. I have a problem when I connect a secondary monitor. Performance is killed (everything becomes laggy and jerky) and my laptop sits on the right-hand side of the monitor, not the left. I'd like to know how to turn off the Laptop's monitor so I'm just using the external monitor. How can I do this using the Open Source Radeon drivers? I can't find a GUI management tool, and there's no longer an xorg.conf. What to do?

    Read the article

  • Beginner: How to Make Explorer Always Show the Full Path in Windows 8

    - by Taylor Gibb
    In older versions of Windows the Title Bar used to display your current location in the file system. In Windows 8 this is not the default behavior, however, you can enable it if you wish to. Display the Full Path in the Windows Explorer Title Bar Press the Windows + E keyboard combination to open Windows Explorer and then switch over to the View tab. On the right-hand side click on options and then select Change folder and search options from the drop-down. When the Folder Options dialog opens, switch over to the View options. Here you will need to tick the Display the full path in the title bar check box. That's all there is to it.

    Read the article

  • Why are my file selection dialogs so big? How do I make them smaller?

    - by Amanda
    When I use an external monitor, my file selection dialog boxes seem to be huge -- wider than my larger screen. I assume it is a Nautilus issue since it happens whether I'm trying to open a file to upload (in Firefox) or attach (in Thunderbird) or just open it in LibreOffice. See screenshot: The browser window fills my left-hand monitor, the "open" dialog is wider than one screen, and wider than the window that spawned it. It's huge. It didn't use to be huge. Is there some way to force dialog windows to be smaller by default? Whenever I try to open/attach/upload a file I have to resize the dialog before I can see what I'm looking at. I don't understand why it is defaulting to such a huge window.

    Read the article

  • Confusion about Rotation matrices from Euler Angles

    - by xEnOn
    I am trying to learn more about Euler angles so as to help myself understand how I can control my camera better in the game. I came across the following formula that converts Euler angles to rotation matrices: In the equation, I could see that the first matrix from the left is the rotation matrix about the x-axis, the second is about the y-axis and the third is about the z-axis. From my understanding of ordinary matrix transformations, the later transformation is always applied on the right-hand side. And if I'm right about this, then the above equation should have a rotation order starting with rotating about the z-axis, then the y-axis, then finally the x-axis. But from the symbols it seems that the rotation order starts with rotating about the x-axis, then the y-axis, then finally the z-axis. What should the actual order of the rotation be? Also, I am confused about whether the input vector, in this case, would be a row vector on the left or a column vector on the right?
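    The formula image is not reproduced in this excerpt; a plausible reconstruction of the usual textbook form, assuming the column-vector convention, is

        R = R_x(\phi)\, R_y(\theta)\, R_z(\psi), \qquad v' = R\,v = R_x(\phi)\,\bigl(R_y(\theta)\,(R_z(\psi)\,v)\bigr)

    With a column vector multiplied on the right, the matrix nearest the vector acts first, so the left-to-right product R_x R_y R_z applies the rotations in the order z, then y, then x.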

    Read the article

  • Education and Career Resources from Microsoft and the Community

    - by KKline
    Sometimes I'm timely in getting the news out on useful resources. And, other times, I'm a bit slower on the draw. As I told my friends back at New Year's Day, "As an official member of the Procrastinators Club, welcome to 2008!" On the other hand, it's always good to remind folks of great resources that are still available and on the shelf. Why? Well, the Internet hits us with such a deluge of constantly new material, that we often forget about the old(ish) stuff that's still really useful. Darth...(read more)

    Read the article

  • Checking for collisions on a 3D heightmap

    - by Piku
    I have a 3D heightmap drawn using OpenGL (which isn't important). It's represented by a 2D array of height data. To draw this I go through the array using each point as a vertex. Three vertices are wound together to form a triangle, two triangles to make a quad. To stop the whole mesh being tiny I scale this by a certain amount called 'gridsize'. This produces a fairly nice and lumpy, angular terrain kind of similar to something you'd see in old Atari/Amiga or DOS '3D' games (think Virus/Zarch on the Atari ST). I'm now trying to work out how to do collision with the terrain, testing to see if the player is about to collide with a piece of scenery sticking upwards or fall into a hole. At the moment I am simply dividing the player's co-ordinates by the gridsize to find which vertex the player is on top of, and it works well when the player is exactly over the corner of a triangle piece of terrain. However... How can I make it more accurate for the bits between the vertices? I get confused since they don't exist in my heightmap data; they're a product of the GPU trying to draw a triangle between three points. I can calculate the height of the point closest to the player, but not the space between them. I.e. if the player is hovering over the centre of one of these 'quads', rather than over the corner vertex of one, how do I work out the height of the terrain below them? Later on I may want the player to slide down the slopes in the terrain.
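    A hedged C++ sketch of the interpolation step (function name and array layout are illustrative): find the cell from the divided coordinates, take the fractional offsets inside it, pick which of the cell's two triangles the point falls in, and interpolate on that triangle's plane. It assumes the cell's diagonal joins the (x+1, z) and (x, z+1) corners; flip the test and planes if your triangulation uses the other diagonal.

        // Height of the terrain under an arbitrary (x, z) point on a
        // triangulated heightmap stored row-by-row in 'heights'.
        #include <cmath>

        float heightAt(const float* heights, int width, int depth,
                       float gridSize, float x, float z) {
            float gx = x / gridSize;
            float gz = z / gridSize;
            int cx = static_cast<int>(std::floor(gx));
            int cz = static_cast<int>(std::floor(gz));
            if (cx < 0 || cz < 0 || cx >= width - 1 || cz >= depth - 1) return 0.0f;

            float fx = gx - cx;   // fractional position inside the cell, 0..1
            float fz = gz - cz;

            float h00 = heights[cz * width + cx];            // corner heights
            float h10 = heights[cz * width + cx + 1];
            float h01 = heights[(cz + 1) * width + cx];
            float h11 = heights[(cz + 1) * width + cx + 1];

            if (fx + fz <= 1.0f) {
                // Triangle containing the (0,0) corner: plane through h00, h10, h01.
                return h00 + (h10 - h00) * fx + (h01 - h00) * fz;
            } else {
                // Triangle containing the (1,1) corner: plane through h11, h10, h01.
                return h11 + (h01 - h11) * (1.0f - fx) + (h10 - h11) * (1.0f - fz);
            }
        }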

    Read the article

  • Programming *into* a language vs. writing C code in Ruby

    - by bastibe
    Code Complete states that you should always code into a language as opposed to coding in it. By that, they mean Don't limit your programming thinking only to the concepts that are supported automatically by your language. The best programmers think of what they want to do, and then they assess how to accomplish their objectives with the programming tools at their disposal. (chapter 34.4) Doesn't this lead to using one style of programming in every language out there, regardless of the particular strengths and weaknesses of the language at hand? Or, to put the question in a more answerable format: Would you propose that one should try to encode one's problem as neatly as possible with the particulars of one's language, or should one rather search for the most elegant solution overall, even if that means implementing possibly awkward constructs that do not exist natively in one's language?

    Read the article

  • JSIL - a Dot Net to JavaScript translator

    - by TATWORTH
    JSIL is described at http://jsil.org/ as: "JSIL is a compiler that transforms .NET applications and libraries from their native executable format - CIL bytecode - into standards-compliant, cross-browser JavaScript. You can take this JavaScript and run it in a web browser or any other modern JavaScript runtime. Unlike other cross-compiler tools targeting JavaScript, JSIL produces readable, easy-to-debug JavaScript that resembles the code a developer might write by hand, while still maintaining the behavior and structure of the original .NET code. Because JSIL transforms bytecode, it can support most .NET-based languages - C# to JavaScript and VB.NET to JavaScript work right out of the box."

    Read the article

  • Roll Your Own Solaris Blogroll

    - by Larry Wake
    Something handy I just ran across: There are lots of people here who blog about Solaris, either as their main topic, or as the occasional tangent. If the blogger has tagged their post appropriately, here's a quick way to find them: Articles tagged Solaris Articles tagged ZFS Articles tagged IPS Articles tagged DTrace Articles tagged Zones Articles tagged Studio Articles tagged Cluster Note that this is a little different from using the "word cloud" you can find in the right-hand column on this page, since that only finds articles tagged in this blog. The above links will find all tagged blogs.oracle.com posts. Some topics are a little trickier to nail down, because there may not be a standardized tag for the topic, so building a more conventional "blogroll" is on my to-do list. In the meantime, you can also refer to the post Markus Weber made of interesting Solaris 11 launch-related posts.

    Read the article

  • Why does Unity in 2D mode employ scaling and the default orthographic size the way it does?

    - by Neophyte
    I previously used SFML, XNA, MonoGame, etc. to create 2D games, where if I display a 100px sprite on the screen, it will take up 100px. If I use 128px tiles to create a background, the first tile will be at (0,0) while the second will be at (129,0). Unity, on the other hand, has its own odd unit system: scaling on all transforms, pixels-to-units, orthographic size, etc. So my question is two-fold, namely: Why does Unity have this system by default for 2D? Is it for mobile dev? Is there a big benefit I'm not seeing? How can I set up my environment so that if I have a 128x128 sprite in Photoshop, it displays as a 128x128 sprite in Unity when I run my game? Note that I am targeting desktop exclusively.
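    A worked example, assuming the common setup: Camera.orthographicSize in Unity is half the vertical view height in world units, and a sprite's Pixels Per Unit import setting (default 100) maps its pixels to world units. For 1:1 pixel mapping, set orthographicSize = screenHeightInPixels / (2 × pixelsPerUnit); e.g. for a 768-pixel-tall window at the default 100 PPU, 768 / (2 × 100) = 3.84, and a 128x128 sprite then covers exactly 128 screen pixels.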

    Read the article
