Search Results

Search found 3875 results on 155 pages for 'opengl es lighting'.

Page 107/155 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • How to: Simulate keystroke inputs from a shell to an app running on an embedded target

    - by fzkl
    I am writing an automation script that runs on an embedded Linux target. Part of the script involves running an app on the target and obtaining some data from its stdout. Stdout here is the ssh terminal connection I have to the target. However, this data is only written to stdout when certain keys are pressed, and the key press has to happen on the keyboard connected to the embedded target, not on the host system from which I have ssh'd into the target. Is there any way to simulate this? Edit: Elaborating on what I need - I have an OpenGL app that I run on the embedded Linux target (it works like regular Linux). It displays some graphics on the embedded system's display device. Pressing f on the keyboard connected to the target outputs the fps data onto the ssh terminal from which I control the target. Since I am automating the process of running this OpenGL app and obtaining the fps scores, I can't expect a keyboard to be connected to the target, let alone expect a user to input a keystroke on the embedded target's keyboard. How do I go about this? Thanks.

    Read the article
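
    One commonly suggested route for the question above is to inject the keystroke through the kernel's uinput interface, so the OpenGL app receives an 'f' exactly as if it came from a physical keyboard. The following is a minimal, illustrative C++ sketch, not a tested solution: it assumes the target kernel exposes /dev/uinput (CONFIG_INPUT_UINPUT) and that the app reads keyboard input through the normal evdev/X input stack; the device name and the one-second settle delay are placeholders.

        // Sketch: create a virtual keyboard via /dev/uinput and tap the 'f' key once.
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <cstring>
        #include <linux/uinput.h>

        static void emit(int fd, int type, int code, int value) {
            input_event ev{};
            ev.type = type;
            ev.code = code;
            ev.value = value;
            write(fd, &ev, sizeof(ev));
        }

        int main() {
            int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
            if (fd < 0) return 1;                      // /dev/uinput missing or no permission

            ioctl(fd, UI_SET_EVBIT, EV_KEY);           // this device emits key events
            ioctl(fd, UI_SET_KEYBIT, KEY_F);           // ... specifically the 'f' key

            uinput_user_dev dev{};
            std::strncpy(dev.name, "virtual-kbd", UINPUT_MAX_NAME_SIZE);
            dev.id.bustype = BUS_VIRTUAL;
            write(fd, &dev, sizeof(dev));
            ioctl(fd, UI_DEV_CREATE);                  // register the virtual keyboard
            sleep(1);                                  // give udev/X time to pick it up

            emit(fd, EV_KEY, KEY_F, 1);                // key down
            emit(fd, EV_SYN, SYN_REPORT, 0);
            emit(fd, EV_KEY, KEY_F, 0);                // key up
            emit(fd, EV_SYN, SYN_REPORT, 0);

            ioctl(fd, UI_DEV_DESTROY);
            close(fd);
            return 0;
        }

    Cross-compiled for the target and run over the same ssh session, this would make the app print its fps data without anyone touching the real keyboard. If the app instead reads keys directly from its controlling terminal rather than the input stack, a pseudo-terminal tool such as expect would be the simpler route.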

  • How to use Python to create a GUI application with cool animations/effects under Linux (like a 3D wall or Compiz)

    - by sgon00
    Hi, I am not sure if my question title makes sense to you or not. I see many cool applications which have cool animations/effects, and I would like to learn how to use Python to create that kind of GUI application under Linux. By "cool animations/effects" I mean things like the 3D wall in Cooliris, which is written in Flash, and Compiz effects with OpenGL. I have also heard of Python GUI libraries like wxPython and PyQt. Since I am completely new to Python GUI programming, can anyone suggest where to start and what I should learn to create such an application? Maybe learn PyQt with its OpenGL feature? The PyOpenGL binding? I have no clue where to start. Thank you very much for your time and suggestions. By the way, in case someone needs to know what kind of application I am going to create: just about any kind - maybe a photo explorer with a 3D wall, maybe an IM client, maybe a Facebook client, etc.

    Read the article

  • C/C++ macro/template black magic to generate unique names

    - by anon
    Macros are fine. Templates are fine. Pretty much whatever works is fine. The example is OpenGL, but the technique is C++-specific and relies on no knowledge of OpenGL. Precise problem: I want an expression E where I do not have to specify a unique name, such that a constructor is called where E is defined, and a destructor is called where the block containing E ends. For example, consider: class GlTranslate { GlTranslate(float x, float y, float z) { glPushMatrix(); glTranslatef(x, y, z); } ~GlTranslate() { glPopMatrix(); } }; Manual solution: { GlTranslate foo(1.0, 0.0, 0.0); // I had to give it a name ..... } // auto popmatrix Now, I have this not only for glTranslate, but for lots of other PushAttrib/PopAttrib calls too. I would prefer not to have to come up with a unique name for each variable. Is there some trick involving macros, templates ... or something else that will automatically create a variable whose constructor is called at the point of definition and whose destructor is called at the end of the block? Thanks!

    Read the article
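
    A minimal sketch of the usual answer to the question above: paste __LINE__ (or __COUNTER__, where the compiler supports it) into the variable name so every expansion of the macro declares a uniquely named guard object. The GlTranslate below is a stand-in that only prints, so the snippet compiles without OpenGL; the macro names are illustrative.

        #include <iostream>

        struct GlTranslate {                         // placeholder for the real push/pop RAII wrapper
            GlTranslate(float, float, float) { std::cout << "push\n"; }
            ~GlTranslate()                   { std::cout << "pop\n"; }
        };

        // Two levels of indirection so __LINE__ expands before token pasting.
        #define SCOPED_CONCAT_INNER(a, b) a##b
        #define SCOPED_CONCAT(a, b) SCOPED_CONCAT_INNER(a, b)
        #define GL_TRANSLATE(x, y, z) \
            GlTranslate SCOPED_CONCAT(glTranslateGuard_, __LINE__)(x, y, z)

        int main() {
            {
                GL_TRANSLATE(1.0f, 0.0f, 0.0f);      // no name to invent
                // ... draw calls ...
            }                                        // the guard's destructor (the "pop") runs here
            return 0;
        }

    The same two concatenation macros can wrap any of the PushAttrib/PopAttrib-style guards; the only caveat is that two such macros on the same source line need __COUNTER__ rather than __LINE__ to stay unique.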

  • Custom UITableViewCell trouble with UIAccessibility elements

    - by ojreadmore
    No matter what I try, I can't keep my custom UITableViewCell from acting as it does under the default rules for UIAccessibility. I don't want this cell to act like an accessibility container (per se), so following this guide I should be able to make all of my subviews accessible, right?! It says to make each element accessible separately and to make sure the cell itself is not accessible. - (BOOL)isAccessibilityElement { return NO; } - (NSString *)accessibilityLabel { return nil; } - (NSInteger)accessibilityElementCount { return 0; } - (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier //cells use this reusage stuff { if (self = [super initWithStyle:style reuseIdentifier:reuseIdentifier]) { [self setIsAccessibilityElement:NO]; sub1 = [[UILabel alloc] initWithFrame:CGRectMake(0,0,1,1)]; [sub1 setAccessibilityLanguage:@"es"]; [sub1 setIsAccessibilityElement:YES]; [sub1 setAccessibilityLabel:sub1.text]; sub2 = [[UILabel alloc] initWithFrame:CGRectMake(0,0,1,1)]; [sub2 setAccessibilityLanguage:@"es"]; [sub2 setIsAccessibilityElement:YES]; [sub2 setAccessibilityLabel:sub2.text]; The VoiceOver system reads the contents of the whole cell all at once, even though I'm trying to stop that behavior. I could say [sub2 setIsAccessibilityElement:NO]; but that would make this element entirely unreadable. I want to keep it readable, but not have the whole cell treated like a container (and assumed to be in the English language). There does not appear to be a lot of information out there on this, so at the very least I'd like to document it.

    Read the article

  • Is this XML file correct for RSS feeding?

    - by Hermet
    Hi guys, I am generating an XML/RSS file from PHP. The output, for example, looks like this: <?xml version="1.0" encoding="iso-8859-1"?> <rss version="2.0"> <channel> <title>Mi web mola</title> <link>http://www.dominio.com/blog.php</link> <language>es-ES</language> <description>Mallas y eso</description> <generator>Autor</generator> <item> <title>Articulo de prueba</title> <link>http://www.midominio.com/2342</link> <pubDate>14/06/2010</pubDate> <description><![CDATA[Descripcion de prueba bla bla bla]]></description> <content:encoded><![CDATA[Contenido prueba]]></content:encoded> </item> </channel> </rss> ... and all I can see in the Firefox preview is the title and the description of the blog, not the items, although in the source everything appears correctly, so I've been thinking it must be a parse error or something like that. What could be wrong? Again, excuse my bad English, and thank you very much.

    Read the article

  • An html input box isn't being displayed, Firebug says it has style="display: none" but I haven't set that

    - by Ankur
    I have placed a form on a page which looks like this: <form id="editClassList" name="editClassList" method="get" action="EditClassList"> <label> <input name="class-to-add" id="class-to-add" size="42" type="text"> </label> <label> <input name="save-class-btn" id="save-class-btn" value="Save Class(es)" type="submit"> </label> </form> But when it gets rendered by a browser it comes out like this: <form id="editClassList" name="editClassList" method="get" action="EditClassList"> <label> <input style="display: none;" name="class-to-add" id="class-to-add" size="42" type="text"> </label> <label> <input name="save-class-btn" id="save-class-btn" value="Save Class(es)" type="submit"> </label> </form> For some reason style="display: none;" is being added, and I can't understand why. This results in the text box not being displayed.

    Read the article

  • Error: expected specifier-qualifier-list before 'QTVisualContextRef'

    - by Moonlight293
    Hi everyone, I am currently getting this error message in my header code, and I'm not sure why: "Error: expected specifier-qualifier-list before 'QTVisualContextRef'" #import <Cocoa/Cocoa.h> #import <QTKit/QTKit.h> #import <OpenGL/OpenGL.h> #import <QuartzCore/QuartzCore.h> #import <CoreVideo/CoreVideo.h> @interface MyRecorderController : NSObject { IBOutlet QTCaptureView *mCaptureView; IBOutlet NSPopUpButton *videoDevicePopUp; NSMutableDictionary *namesToDevicesDictionary; NSString *defaultDeviceMenuTitle; CVImageBufferRef mCurrentImageBuffer; QTCaptureDecompressedVideoOutput *mCaptureDecompressedVideoOutput; QTVisualContextRef qtVisualContext; // the context the movie is playing in // filters for CI rendering CIFilter *colorCorrectionFilter; // hue saturation brightness control through one CI filter CIFilter *effectFilter; // zoom blur filter CIFilter *compositeFilter; // composites the timecode over the video CIContext *ciContext; QTCaptureSession *mCaptureSession; QTCaptureMovieFileOutput *mCaptureMovieFileOutput; QTCaptureDeviceInput *mCaptureDeviceInput; } @end In the examples I have seen in other code (e.g. the Cocoa Video Tutorial) I have not seen any difference between their code and mine. If anyone could point out how this error might have arisen, that would be great. Thanks heaps! :)

    Read the article

  • Zend_Router requirements mismatch

    - by elbicho
    Hello all: I have two routes that match a URL with the same apparent pattern; the difference lies in $actionRoute, which should only match if its :action variable equals 'myaction'. If I go to /en/mypage/whatever/myaction it goes, as expected, through $actionRoute. If I go to /en/mypage/whatever/blahblah it gets rejected by $actionRoute and matched by $genRoute. If I go to /en/mypage/whatever it should be matched by $genRoute, but it gets matched by $actionRoute instead, throwing an exception because the action noactionAction() does not exist. I don't know what I'm doing wrong; I'd appreciate your help. $genRoute = new Zend_Controller_Router_Route( ':lang/mypage/:var1/:var2', array( 'lang' => '', 'module' => 'mymodule', 'controller' => 'index', 'action' => 'index', 'var1' => 'noone', 'var2' => 'no' ), array( 'var1' => '[a-z\-]+?', 'lang' => '(es|en|fr|de){1}' ) ); $actionRoute = new Zend_Controller_Router_Route( ':lang/mypage/:var1/:action', array( 'lang' => '', 'module' => 'mymodule', 'controller' => 'index', 'action' => 'noaction', 'var1' => 'noone', ), array( 'action' => '(myaction)+?', 'var' => '[a-z\-]+?', 'lang' => '(es|en|fr|de){1}', ) ); $router->addRoute('genroute',$genRoute); $router->addRoute('actionroute',$actionRoute);

    Read the article

  • Problem with Java Scanner sc.nextLine();

    - by Jonathan B
    Hi, sorry about my English. :) I'm new to Java programming and I have a problem with Scanner. I need to read an int, show some stuff, and then read a string, so I use sc.nextInt(), show my stuff with showMenu(), and then try to read a string with palabra=sc.nextLine(); Someone told me I need to use an sc.nextLine() after sc.nextInt(), but I don't understand why you have to do that. :( Here is my code: public static void main(String[] args) { // TODO code application logic here Scanner sc = new Scanner(System.in); int respuesta = 1; showMenu(); respuesta = sc.nextInt(); sc.nextLine(); //Why is this line necessary for second scan to work? switch (respuesta){ case 1: System.out.println("=== Palindromo ==="); String palabra = sc.nextLine(); if (esPalindromo(palabra) == true) System.out.println("Es Palindromo"); else System.out.println("No es Palindromo"); break; } } Thank you so much for your time and help. :D

    Read the article

  • MS SideWinder Force Feedback Wheel under Win7 x64 - steering works but force feedback not

    - by user24752
    I just bought this second-hand, ancient but professional steering wheel: the Microsoft SideWinder Force Feedback Wheel. I hooked it up to my Win7 x64 machine, which recognized it without installing anything, and it did show up in the "Devices and Printers" section. Via right-click I could calibrate it, and I could use it under Flatout2 right away. However, force feedback does not seem to work. The steering wheel has a force button. If it is set to use force feedback, it should light up according to the manual (originally written for Win98). However, instead of lighting up, it blinks. The manual does not associate anything with blinking. I have never used any game controllers before on any version of Windows. Is there a way to check/calibrate force feedback?

    Read the article

  • Seeking glass LCD monitors with LED backlight

    - by dlamblin
    The only LCD monitors with glass fronts and LED back-lighting I can find are the ones by Apple. And they only sell a 24" one at 2.4x the price of any other 24" monitor at 1920x1200, and a 30" one, which honestly I can't put on my desk. Oh, and the 24" one uses a Mini DisplayPort plug only, so I'd be out of luck until a display-side adapter is available. I am generally looking for a 16:10 or 4:3 rather than a 16:9 monitor. It would be awesome if someone could find another, cheaper monitor that isn't fronted by a plastic film but by glass. It would be doubly awesome if said monitor were also 120Hz so that I can use nVidia's 3D goggles. Update: One month and 16 days later, I seem not to be the only one who can't find another glass-fronted computer LCD monitor. LED backlighting is available, though.

    Read the article

  • How can I broadcast video live (preferably wirelessly)?

    - by Blixt
    Update: I've gotten plenty of feedback on the software side, and the unanimous suggestion for a handheld recording device is to use a mobile phone (I was hoping there'd be some webcam-like device with wifi support...). I'd appreciate more hardware suggestions now; that is, which mobile phones have good video recording quality (and battery life)? I'm looking for a solution to broadcast video live on the internet from a location (an apartment), with a device that can be carried around. What options are there? I'm looking for complete solutions (i.e., what hardware to use, what software to use, and how it should all be set up). Currently, I have my mobile phone (a Nokia N95 8GB) with Qik installed and connected to wifi, but unfortunately the videos end up with bad quality (especially since it's indoors with poor lighting), plus the battery gets used up quickly.

    Read the article

  • How to prevent laptop screen brightness from changing when un/plugging battery power

    - by Nomad
    When I am using my laptop, I continually adjust the screen's brightness based on the lighting conditions in the room (e.g. how much light is coming in from windows, etc.). But if I unplug the laptop or plug it back in, Windows looks at the default brightness setting in the power profile for "on battery" or "plugged in" and changes the brightness accordingly. This is a jarring experience and then I have to hunt down the ideal brightness for my current situation again, rather than getting on with my work. I would like to make it so that plugging or unplugging the battery is not a trigger that adjusts the screen brightness at all. The screen brightness should only change when I adjust it myself. Does anybody know how this might be accomplished?

    Read the article

  • How to disable iSight auto adjustments?

    - by George Profenza
    The built-in iSight cam on my MacBook keeps re-adjusting the lighting (and focus, I think). I need to set those manually, but I found nothing of any use in System Preferences or System Profiler. Is there any way to access the settings? Any magic terminal commands that allow access to the camera? Does anyone have a driver that allows for any camera access? 'mac - it just works'... sure, if you want to use it like a kid. The second you actually want to do something with your mac other than the basic things you can do on ANY regular machine anyway, your 'rights' are done with, as Apple seems to only encourage dumb clients. I'm not saying this applies to all mac users, but the 'typical/average' one in my view is only going to use it for media (music, video) and web (facebook, blogging, all that) and maybe podcasting, webcasting, etc. ... ok, this is turning into a rant, so I will finish here.

    Read the article

  • Why Does My iMac Keep Setting the Screen Brightness to 'Full'?

    - by TomB
    I have a 2-week-old 24" iMac running Mac OS X 10.6. It is the primary monitor, with the menu bar at the top of the display. I have an external monitor as well, a 19" ViewSonic LCD. The LCD sits to the left of the iMac, rotated 90 degrees CCW, and has the Dock along the far left edge. When I restart, the screen brightness on the iMac reverts to full brightness. The ViewSonic LCD retains the setting I have for it. I am using a Mini DVI to DVI cable for the external display. I even tried setting my Huey Pro to do automatic screen adjustment based on ambient lighting, but the iMac still jumps back to full brightness on a reboot. I am sure it is something dumb I have overlooked.

    Read the article

  • Black screen after scheduled resume from sleep

    - by macbirdie
    I have a problem with my two Windows 7 setups at home. Whenever a scheduled task wakes up the machines from sleep, a black screen with a mouse cursor appears. If I move the mouse or press any key, the desktop appears. The biggest problem is that during that black-screen phase the display, and sometimes the PC too, doesn't go to standby after the period set in the power settings. If I get the desktop to show up, standby functions work fine: the display goes blank after a few minutes, and as soon as the task doesn't need the PC, it goes back to sleep. It's frustrating that this "feature" wastes energy and keeps lighting up the PC in the middle of the night, only for me to find it that way after I wake up. I found a handful of threads on the intertubes about the issue, but there were no answers in any of them. It sometimes happens on my XP machine at work as well.

    Read the article

  • How do I blend 2 lightmaps for day/night cycle in Unity?

    - by Timothy Williams
    Before I say anything else: I'm using dual lightmaps, meaning I need to blend both a near and a far lightmap. I've been working on this for a while now; I have a whole day/night cycle set up for renderers and lighting, and everything is working fine and is not process-intensive. The only problem I'm having is figuring out how to blend two lightmaps together. I've figured out how to switch lightmaps, but the switch looks kind of abrupt and interrupts the experience. I've done hours of research on this and tried all kinds of shaders, pixel-by-pixel blending, and everything else, to no real avail. Pixel-by-pixel blending in C# turned out to be a bit too process-intensive for my liking, though I'm still working on cleaning it up and making it run more smoothly. Shaders looked promising, but I couldn't find a shader that could properly blend two lightmaps. Does anyone have any leads on how I could accomplish this? I just need some sort of smooth transition between my daytime and nighttime lightmaps. Perhaps I could overlay the two textures and use an alpha channel? Or something like that?

    Read the article
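
    For what it's worth, the blend the question above asks for is just a per-texel linear interpolation between the two baked maps, driven by the time of day. Assuming both lightmaps are sampled with the same lightmap UVs and t runs from 0 (full day) to 1 (full night), the transition a shader or script would evaluate is:

        \[ L(\mathbf{uv},\,t) \;=\; (1-t)\,L_{\text{day}}(\mathbf{uv}) \;+\; t\,L_{\text{night}}(\mathbf{uv}), \qquad 0 \le t \le 1 \]

    Animating t over a few in-game minutes, rather than flipping the lightmap reference in one frame, is what removes the abrupt switch; the same lerp applies independently to the near and far maps of the dual-lightmap setup.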

  • SQLAuthority News – Happy Deepavali and Happy New Year

    - by pinaldave
    Diwali, or Deepavali, is popularly known as the festival of lights. It literally means “array of light” or “row of lamps”. Today we make small clay lamps, fill them with oil and light them up. The significance of lighting the lamp is the triumph of good over evil. I work every single day of the year, but today I am spending my time with family and my little one. I make sure that my daughter is aware of our culture and learns to celebrate the festival with the same passion and values that I have. Every year on this day, I do not write a long blog post but rather a small post with various SQL tips and tricks. After reading them you should quickly get back to your friends and family - it is the most important festival day. Here are a few tips and tricks: Take regular full backups of your database. Avoid cursors if they can be replaced by a set-based process. Keep your index maintenance scripts handy and execute them at intervals. Consider a Solid State Drive (SSD) for crucial database and tempdb placement. Update statistics for OLTP transactions at intervals. I guess that’s it for today. If you still have more time to learn, here are a few things you should consider: get FREE books by signing up for tomorrow’s webcast by Rick Morelan, watch the SQL in Sixty Seconds series (FREE SQL learning), and read my earlier 2300+ articles. Well, I am sure that will keep you busy for the rest of the day! Happy Diwali to all of you! Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Rendering shadow sprites in cocos2d-x

    - by lukeluke
    I am writing a 2D game with cocos2d-x. I want to put a "shadow" sprite on a background sprite using the equation: MAX(0, Cd*1 - Cs*S), where Cd is the destination color (that is, a background pixel), Cs is the source color (the shadow pixel), and S is the scale factor (between 0 and 1). The MAX() function is used to avoid negative results. This is a lighting effect: when the shadow sprite pixel is 0, there is no effect on the background pixel; otherwise, the background pixel becomes darker. Now, the only way that comes to my mind is to change the blending equation to GL_FUNC_SUBTRACT, but it doesn't compile with cocos2d-x (the symbol can't be found)... I would subclass the CCSprite class and override the draw() method so as to change the blending equation when needed, call the original draw() method, and restore the blending equation to its previous state at the end of the method. So my questions are two: how do I use glBlendEquation() with cocos2d-x? Keep in mind that I am writing a game for iPhone/Android/Windows. And are shadows handled this way in 2D games? Thx

    Read the article
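
    A hedged sketch of the fixed-function route described above. Two details worth noting: the equation MAX(0, Cd - Cs*S) is GL's "reverse subtract" blend equation (destination minus source), not GL_FUNC_SUBTRACT (which is source minus destination), and GL already clamps the blend result to [0,1], which supplies the MAX. On OpenGL ES 2.0 (the renderer recent cocos2d-x versions use) glBlendEquation, glBlendFuncSeparate and glBlendColor are core; on ES 1.1 they exist only as the OES-suffixed entry points from OES_blend_subtract, which is one plausible reason the plain name fails to resolve. The helper functions below are illustrative, not part of cocos2d-x.

        #include "cocos2d.h"   // pulls in the platform's GL headers via cocos2d-x

        // Shadow pass: ask GL to compute dst = clamp(dst*1 - src*S, 0, 1).
        void beginShadowBlend(float s)                      // s = shadow scale factor in [0,1]
        {
            glEnable(GL_BLEND);
            glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);      // result = dst*dstFactor - src*srcFactor
            glBlendFuncSeparate(GL_CONSTANT_COLOR, GL_ONE,  // colour: dst*1 - src*S
                                GL_ZERO, GL_ONE);           // leave destination alpha untouched
            glBlendColor(s, s, s, 1.0f);                    // S lives in the constant blend colour
        }

        // Restore the usual alpha blending afterwards (GL_SRC_ALPHA for non-premultiplied textures).
        void endShadowBlend()
        {
            glBlendEquation(GL_FUNC_ADD);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        }

    In the CCSprite subclass the question mentions, draw() would call beginShadowBlend(), then the base-class CCSprite::draw(), then endShadowBlend(). And yes, darkening the destination with a subtractive or multiplicative blend is a common way to fake soft shadows in 2D games.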

  • WPF vs WinForms for a VB programmer? [closed]

    - by Jeroen
    I have been asked by a client to develop an application that is basically a screen on which the user can choose between several items to pass the time (used in holding cells in mental hospitals, for example). The basic idea is as follows: TV (choosing this will provide the user with a number of TV streams from the interweb), Radio (...), Games (several Flash games, also from the interweb), Music (play local music or streams), Draw something (not the game), Create an email, Choose lighting settings for the room, etc. I am torn between WinForms and WPF for this project. It seems that WPF is the way to go since there is quite a bit of rich media involved, but I have a 15-year VB background. The project obviously has a deadline and a certain budget that I cannot cross, and if I can avoid starting from scratch with something new, that will be nice. Is WPF worth it in this particular case, or can I use WinForms with the incorporation of WPF controls? I would very much like to hear your thoughts/comments/suggestions!

    Read the article

  • Manually writing a dx11 tessellation shader

    - by Tudor
    I am looking for resources on the steps needed to manually implement tessellation (I'm using Unity cg). Today it seems to be all the rage to hide most of the GPU code far away and use rather rigid simplifications such as Unity's SURFace shaders, which seems useless unless you're doing superficial stuff. A little background: I have procedurally generated meshes (using marching cubes) which have quality normals but no UVs and no tangents. I have successfully written a custom vertex and fragment shader to do triplanar texture and bumpmap projection as well as some custom stuff (custom lighting, procedurally warping the texture for variation, etc.). I am using the GPU Gems book as a reference. Now I need to implement tessellation, but it seems I must calculate the tangents at runtime by swizzling normals (ctrl+f this in Gems: <normal.z, normal.y, -normal.x>) before the tessellator gets them. And I also need to keep my custom vert+frag setup (with my custom parameters/textures being passed between them) - so apparently I cannot use surface shaders. Can anyone provide some guidance?

    Read the article

  • HTG Explains: What’s a Solid State Drive and What Do I Need to Know?

    - by Jason Fitzpatrick
    Solid State Drives (SSDs) are the lightning-fast new kid on the hard drive block, but are they a good match for you? Read on as we demystify SSDs. The last few years have seen a marked increase in the availability of SSDs and a decrease in price (although it certainly may not feel that way when comparing prices between SSDs and traditional HDDs). What is an SSD? In what ways do you benefit the most from paying the premium for an SSD? What, if anything, do you need to do differently with an SSD? Read on as we cut through the new-product haze surrounding Solid State Drives.

    Read the article

  • How to Get MacBook-Style Finger Gestures on Ubuntu Linux

    - by Zainul Franciscus
    Apple users have been swiping, pinching, and rotating Mac’s user interfaces to their fingers’ content. In today’s article, we’ll show you how to do groovy things like expanding and reducing windows, and changing desktops, using finger gestures. To accomplish this, we’ll use a piece of software called TouchEgg, which enhances Ubuntu’s multi-touch capability by allowing us to configure actions for the finger gestures that TouchEgg supports. If you’re a Windows user and like the idea of finger gestures, we also wrote a tutorial on how to enable MacBook-style finger gestures on Windows.

    Read the article

  • Watson Ties Against Human Jeopardy Opponents

    - by ETC
    In January we showed you a video of Watson in a practice round against Jeopardy champions Ken Jennings and Brad Rutter. Last night they squared off in a real round of Jeopardy, with Watson ending in a tie with Rutter. Watson held his own against the two champions, leveraging the 90 IBM Power 750 servers, 2,880 processors, and 16TB of memory driving him to full advantage. It was impressive to watch the round unfold and to see where Watson shined and where he faltered. Check out the video below for footage of Watson in training and then in action on Jeopardy. Pay special attention to the things that trip him up. Watson answers cut-and-dried questions with absolute lightning speed but stumbles when it comes to nuances in language - like finis vs. terminus in the train question that Jennings answered correctly. Watch Part 2 of the video above here.

    Read the article

  • Computing pixel's screen position in a vertex shader: right or wrong?

    - by cubrman
    I am building a deferred rendering engine and I have a question. The article I took the sample code from suggested computing the screen position of the pixel as follows: VertexShaderFunction() { ... output.Position = mul(worldViewProj, input.Position); output.ScreenPosition = output.Position; } PixelShaderFunction() { input.ScreenPosition.xy /= input.ScreenPosition.w; float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1); ... } The question is: what if I compute the position in the vertex shader instead (which should improve performance, since the vertex shader runs significantly fewer times than the pixel shader) - would I get a per-vertex result instead? Here is how I want to do it: VertexShaderFunction() { ... output.Position = mul(worldViewProj, input.Position); output.ScreenPosition.xy = output.Position / output.Position.w; } PixelShaderFunction() { float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1); ... } What exactly happens to the data I pass from VS to PS? How exactly is it interpolated? Will it give me the right per-pixel result in this case? I tried launching the game both ways and saw no visual difference. Is my assumption right? Thanks. P.S. I am optimizing the point light shader, so I actually pass a sphere geometry into the VS.

    Read the article
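
    A hedged way to see why the two variants above are not strictly equivalent. The true screen-space position at a fragment is the screen-space (linear) interpolation of the per-vertex values x_i/w_i, and that is exactly what the per-pixel divide reconstructs: the rasterizer interpolates both the passed clip position and its w perspective-correctly, and the two corrections cancel in the division. An attribute that has already been divided by w in the vertex shader, on the other hand, gets the default perspective-correct interpolation on its own. With \lambda_i the screen-space barycentric weights of the fragment:

        \[ s_{\text{divide in PS}} \;=\; \sum_i \lambda_i \, \frac{x_i}{w_i}, \qquad\qquad s_{\text{divide in VS}} \;=\; \frac{\sum_i \lambda_i \,(x_i/w_i)\,/\,w_i}{\sum_i \lambda_i \,/\, w_i} \]

    The two expressions agree only when every w_i is equal across the triangle (e.g. a screen-aligned quad), so the sphere light volume can happen to look identical in practice even though the per-vertex divide is not guaranteed to be exact; a noperspective interpolation modifier, where the shader model offers one, would make the per-vertex divide correct.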
