Search Results

Search found 2838 results on 114 pages for 'graphic effects'.


  • radeon display driver clones monitors while using Xinerama

    - by gregmuellegger
    I'm trying to get my two Radeon HD 4770 cards working with three monitors. Xinerama works so far, in the sense that I have two fully working monitors where I can move windows from one to the other. My problem is that my third monitor is a clone of my second monitor (displaying the exact same thing). These two monitors are connected to the same graphics card ("Screen Middle" and "Screen Right" in the xorg.conf below). Here is my xorg.conf:

        Section "ServerLayout"
            Identifier "ThreeMonitors"
            Screen "Screen Left" 0 0
            Screen "Screen Middle" RightOf "Screen Left"
            Screen "Screen Right" RightOf "Screen Middle"
            Option "Xinerama"
        EndSection

        Section "Monitor"
            Identifier "Monitor Left"
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor Middle"
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor Right"
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device Left"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:3:0:0"
            Screen 0
        EndSection

        Section "Screen"
            Identifier "Screen Left"
            Device "Device Left"
            Monitor "Monitor Left"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Device"
            Identifier "Device Middle"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:2:0:0"
            Screen 0
        EndSection

        Section "Screen"
            Identifier "Screen Middle"
            Device "Device Middle"
            Monitor "Monitor Middle"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Device"
            Identifier "Device Right"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:2:0:1"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "Screen Right"
            Device "Device Right"
            Monitor "Monitor Right"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    I'm using a fresh Kubuntu 10.10 installation with proposed-updates enabled, since that repo contains an xorg fix for using multiple graphics cards. I hope someone can help me out. Many thanks!

    Read the article

  • How do I get HDMI output working on a Dell XPS 15 L502x?

    - by Jones
    Recently I bought my dream notebook, a Dell XPS 15, but since then the dream has become a kind of endless nightmare. I'm almost going crazy trying to make my graphics card driver work properly, but it seems to be impossible. Yes, I have a 2GB NVIDIA GeForce GT 540M (Optimus) in it! It simply doesn't work. Every time I generate the xorg.conf, Ubuntu hangs while starting up, which forces me to remove the file to be able to start the notebook with the standard graphics settings. Another problem is that the Dell XPS 15 does NOT have a VGA output, only HDMI. So, to be able to use a second monitor I have to configure it through the NVIDIA X Server Settings, which only works if the driver is properly initialized with the xorg.conf. I've also tried to make it work with Bumblebee, but unfortunately it didn't help me much with the HDMI output. Do you guys have any idea how to solve this deadlock? Is there any way for me to use my second monitor?

    Read the article

  • DirectX vs XNA - Which is better for me? [closed]

    - by tristo
    Recently I moved from Visual Studio 2010 to Visual Studio 2012, although I did not expect VS 2012 to be designed the way it is. Anyway, I am pleased with some of the VS 2012 technology and have moved all of my projects to it. Since I got VS 2012 I have been making Windows applications and doing other non-game work, although I have recently gotten back into the spirit of game development. I am planning to make a 3D comical game: shader effects, not too complicated meshes, but it requires a lot of lighting effects to emphasise certain parts of the game. When I was using VS 2010 I had a great time making 2D games with XNA; it uses a great language and has a very awesome system. But I no longer have XNA with me, and the workarounds described on Stack Overflow always give me errors while using XNA. Anyway, it seems that Microsoft has stuffed XNA up with the weirdness of Windows 8, XNA being only available on PC and Xbox. For these reasons I have decided to work with DirectX and Direct3D to produce my new game, although the overflowing credits after each DirectX game give me the shivers, and the low-level coding DirectX requires also puts me on thin ice with my games, leaving me in a confused mess about what decision I should make. I don't know anything about DirectX or Direct3D. I am an indie developer, but I am planning to take on a lot of professional aspects of games. I don't have heaps of time (2-3 hours a day). I don't mind the complexity of how DirectX works, as long as I can learn how to make the fundamentals of a game in a week. I am also unsure whether DirectX is really for my situation, or whether I should keep with XNA game development. If anyone can tell me which technology is best for me, that would be great.

    Read the article

  • Ubuntu 3D does not work on Dell system with an AMD Radeon HD 6470M

    - by VeeKay
    I am running 64-bit Ubuntu on a Dell with a 1GB graphics card. I log in with "Ubuntu" hoping to see Unity 3D, but it doesn't start; Unity 2D runs instead. When I type echo "$DESKTOP_SESSION" it confirms the Unity 2D session. I've checked the System Info, and the graphics row shows itself as empty. So I presumed that the graphics drivers weren't detected, and hence I went to Additional Drivers and installed the fglrx driver that the UI suggested. Even after installing it, the graphics entry in the System Info details still shows nothing, and Unity 2D still runs in spite of all the effort. Please help! How can I get my Unity 3D back?

    Hardware info:
    Video Card: AMD Radeon™ HD 6470M - 1GB (For ICC)
    RAM: 6GB (1 x 2GB + 1 x 4GB) 2 DIMM DDR3 1333MHz
    OS: 64-bit Ubuntu 11.10

    Edit: Output of /usr/lib/nux/unity_support_test -p:

        X Error of failed request: BadRequest (invalid request code or no such operation)
        Major opcode of failed request: 155 (GLX)
        Minor opcode of failed request: 19 (X_GLXQueryServerString)
        Serial number of failed request: 21
        Current serial number in output stream: 21

    Read the article

  • Ubuntu live CD: black screen and blinking cursor

    - by IFasel
    I'm trying to install Ubuntu 12.04 on my computer. I can get to the purple screen on the live CD, but then, if I choose "Install Ubuntu", I get a black screen with a blinking cursor (and nothing else happens). My PC: Acer Aspire M3920, CPU i5-2300, 8 GB RAM, NVIDIA GT 405. What I have already tried: 12.04 and the 13.04 daily build; a live USB and a live DVD; the boot options nomodeset and acpi=off. I googled a lot and it seems that it could be a graphics card problem. Do you know any other boot options that I could try?

    UPDATE: This is not a duplicate: I've tried all the common boot options (nomodeset, noacpi...) and they don't change anything. With the option "nosplash" (instead of "quiet splash"), I can see what happens before the forever-blinking cursor:

        [sdg] no caching mode present
        [sdg] assuming drive cache: write through
        ata8.00: exception Emask 0x52 ... frozen
        ata8: SError: { RecovData RecovComm UnrecovData ... }
        ata8.00: failed command: IDENTIFY PACKET DEVICE ...
        ata8.00: status: { DRDY }
        ata8: hard resetting link

    Does somebody know what this means? N.B. astonishingly, Puppy Linux boots fine (but Debian, Fedora and Ubuntu do not).

    Solution: In fact, it was not a graphics card problem. I had to disconnect the DVD drive and connect it to another free SATA connector (I don't really understand why Ubuntu had trouble with that connector and Windows 7 did not). After that, everything worked fine.

    Read the article

  • Enabling GTX 570

    - by Silas
    Hello, I just built my new system:

    ASRock Z77 Extreme4
    Intel i7 3770K
    16 GB Corsair RAM
    Zotac NVIDIA GTX 570
    be quiet! 630W power supply
    120 GB SSD

    After I installed Ubuntu 12.04 64-bit, it ran smoothly, and I downloaded and installed all the recommended updates. But when I checked the System Details, the GTX 570 didn't show up as the graphics unit, so I figured I needed to download the drivers. I did, but being a complete newbie to Linux I didn't succeed in installing them (I think). Anyway, after several tries and errors I shut down the PC and restarted it, resulting in no signal to my screen. After trying to reboot and trying all the monitor outputs with no result, I took out the graphics card, and now it boots normally, but after booting it says there is a problem with the system and the graphics can't be recognized. So:

    Question A: What do I do? I like Linux, but the arbitrariness of errors that occur without any changes to the system scares me.
    Question B: Is there a beginner's guide to Ubuntu where I could start from scratch? Because I really want this to work.
    Question C: Now that the system is (suddenly) showing these graphics errors, so far without visible consequence despite the error message, should I reinstall the GPU and give the driver installation another try, or the other way around?

    I'll be very grateful for any help. Thank you in advance!

    Read the article

  • Movement on the X and Z axes is combined?

    - by Magicaxis
    This is probably a stupid question, but I'm trying to simply move a 3D object up, down, left, and right (not forward or backward). The Y axis works fine, but when I increment the object's X position, the object moves BOTH right and backwards! When I decrement X: left and forwards!

        setPosition(getPosition().X + 2 /*times deltatime*/, getPosition().Y, getPosition().Z);

    I was astonished that XNA doesn't have its own SetPosition function, so I made a parent class for all objects with a SetPosition and Draw function. SetPosition simply edits a variable mPosition and passes it to the common Draw function:

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[block.Bones.Count];
        block.CopyAbsoluteBoneTransformsTo(transforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in block.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(MathHelper.ToRadians(mOrientation.Y))
                    * Matrix.CreateTranslation(mPosition);
                effect.View = game1.getView();
                effect.Projection = game1.getProjection();
            }
            // Draw the mesh, using the effects set above.
            mesh.Draw();
        }

    I tried to work it out by incrementing and decrementing the Z axis, but nothing happens! So changing the X axis moves the object along both the X and Z axes, while changing Z does nothing. Great. So how do I separate the X and Z axis movement?
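
    A minimal sketch of one way to decouple "screen right/up" from the world axes, assuming the symptom comes from the camera viewing the scene at an angle (game1.getView() follows the question's code; the helper itself is invented for illustration): derive the movement axes from the inverse of the view matrix, so +X input always moves the object to the right as seen on screen:

        using Microsoft.Xna.Framework;

        static class CameraRelativeMovement
        {
            // Returns a world-space displacement for screen-relative input.
            // 'view' is the camera's view matrix; inputX/inputY are -1..1.
            public static Vector3 Displacement(Matrix view, float inputX, float inputY, float speed, float dt)
            {
                Matrix cameraWorld = Matrix.Invert(view); // camera's world transform
                return (cameraWorld.Right * inputX + cameraWorld.Up * inputY) * speed * dt;
            }
        }

    The returned vector can then be added to mPosition via setPosition; if movement still leaks onto an unexpected axis after this, the problem is more likely in how mPosition feeds the world matrix than in the input handling.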

    Read the article

  • Ubuntu 11.10 very slow compared to Windows

    - by Patrick
    I'm new to Ubuntu/Linux. Since my PC is very old and not very fast with Windows 7, I decided to give Ubuntu a try, so I downloaded and installed Ubuntu 11.10 today. When I first started it, I had a bad 800x600 resolution and it was very slow and annoying. So I installed a driver for my graphics card, and now everything looks very nice (1280x1024), but I think it's still far slower than Windows 7. I tried to run it the way a few people suggested on the forum, but when I log in I get a black screen saying something like "this video mode cannot be displayed". I get that same screen when booting Ubuntu, by the way, but after about 15 seconds it disappears and Ubuntu just starts. I also installed other drivers for my graphics card, but everything stayed the same. I noticed that when I open, e.g., Firefox or System Settings, it takes about 5 seconds until it opens (while Windows 7 takes under 1 second to start, e.g., Chrome), and while it opens my CPU usage goes to 100% for a short time.

    Computer specs: Memory: 2GB RAM; Processor: Intel Pentium 4 2.8GHz; Graphics card: Nvidia GeForce 6800 400MHz.

    I read on various forums that 11.04 works flawlessly on many PCs where 11.10 is very slow. Should I install 11.04, or could anybody help me with this problem?

    Read the article

  • How to boot or "enter" into Ubuntu 11.04 (Natty Narwhal) on PS3 after installing it?

    - by Xdm
    After failing to install the Ubuntu 11.04 desktop version (burned on a DVD, because the ISO's file size was 726 MB), I tried the alternate version (about 686 MB), which I burned onto a CD. During the installation process, I manually partitioned the hard drive using ext3. After the installation, the CD ejects itself and I hit "continue". At the kboot prompt I hit "enter" on the keyboard. I see a welcome message on a dark background (a kind of full-screen terminal/console, a CLI I think) and am asked to enter my user name and password. After that, instead of booting into the beautiful desktop of Ubuntu, I see this:

        ubuntu@my_name$

    It is a command line, I think, where you can use "sudo" commands; it is called non-graphical mode, I believe. My REAL problem is getting into Ubuntu 11.04 with the chocolate-colored background and the African drum music. I tried many commands to boot into the graphical mode of the system, such as Ctrl+Alt+F7, but nothing happened; in that case, I just saw numbers showing available blocks. I DON'T UNDERSTAND. I also tried startx and sudo startx without result. I DID buy a PS3 BOTH for gaming and for Ubuntu, because I can't afford a PC. Please help me with my trouble, dear Ubuntu 11.04 users. I am counting on you. All the best, Xdm

    Read the article

  • Use Advanced Font Ligatures in Office 2010

    - by Matthew Guay
    Fonts can help your documents stand out and be easier to read, and Office 2010 helps you take your fonts even further with support for OpenType ligatures, stylistic sets, and more. Here's a quick look at these new font features in Office 2010.

    Introduction

    Starting with Windows 7, Microsoft has made an effort to support more advanced font features across their products. Windows 7 includes support for advanced OpenType font features and laid the groundwork for advanced font support in programs with the new DirectWrite subsystem. It also includes the new font Gabriola, which includes an incredible number of beautiful stylistic sets and ligatures.

    Now, with the upcoming release of Office 2010, Microsoft is bringing advanced typographical features to the Office programs we love. This includes support for OpenType ligatures, stylistic sets, number forms, contextual alternate characters, and more. These new features are available in Word, Outlook, and Publisher 2010, and work the same on Windows XP, Vista, and Windows 7.

    Please note that Windows does include several OpenType fonts with these advanced features. Calibri, Cambria, Constantia, and Corbel all include multiple number forms, while Consolas, Palatino Linotype, and Gabriola (Windows 7 only) include all the OpenType features. And, of course, these new features will work great with any other OpenType fonts you have that contain advanced ligatures, stylistic sets, and number forms.

    Using advanced typography in Word

    To use the new font features, open a new document, select an OpenType font, and enter some text. Here we have Word 2010 in Windows 7 with some random text in the Gabriola font. Click the arrow at the bottom of the Font section of the ribbon to open the font properties; alternately, select the text and click Font. Now, click on the Advanced tab to see the OpenType features. You can change the ligatures setting, choose Proportional or Tabular number spacing, and even select Lining or Old-style number forms. Here's a comparison of Lining and Old-style number forms in Word 2010 with the Calibri font.

    Finally, you can choose various stylistic sets for your font. The dialog always shows 20 styles, whether or not your font includes that many; most include only 1 or 2, while Gabriola includes 6. Here's lorem ipsum text using the Gabriola font with stylistic set 6. Impressive, huh? The font ligatures change based on context, so they will automatically change as you are typing. Watch the transition as we type the word Microsoft in Word with Gabriola stylistic set 6. Here's another example, showing the fi and tt ligatures in Calibri.

    These effects work great in Word 2010 on XP, too. And, since Outlook uses Word as its editing engine, you can use the same options in Outlook 2010. Note that these font effects may not show up the same if the recipient's email client doesn't support advanced OpenType typography. It will, of course, display perfectly if the recipient is using Outlook 2010.

    Using advanced typography in Publisher 2010

    Publisher 2010 includes the same advanced font features. This is especially nice for those using Publisher for professional layout and design. Simply insert a text box, enter some text, select it, and click the arrow at the bottom of the font box, as in Word, to open the font properties. This font options dialog is actually more advanced than Word's: you can preview your font changes on sample text right in the properties box. You can also choose to add or remove a swash from your characters.

    Conclusion

    Advanced typographical effects are a welcome addition to Word and Publisher 2010, and they are very impressive when coupled with modern fonts such as Gabriola. From designing elegant headers to using old-style numbers, these features are very useful and fun. Do you have a favorite OpenType font that includes advanced typographical features? Let us know in the comments!

    More Reading

    Advances in typography in Windows 7 – Engineering 7 Blog
    New features in Microsoft Word 2010
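
    For readers who want to drive these same settings from code, here is a minimal sketch using the Word 2010 interop object model. The Font.Ligatures, NumberForm, NumberSpacing, and StylisticSet properties are new to the Word 2010 object model, but treat the exact enum member names below as assumptions to verify in the Visual Studio object browser:

        using Word = Microsoft.Office.Interop.Word;

        class TypographyDemo
        {
            static void Main()
            {
                var app = new Word.Application { Visible = true };
                Word.Document doc = app.Documents.Add();
                Word.Range range = doc.Content;

                range.Text = "Microsoft office typography";
                range.Font.Name = "Gabriola";
                range.Font.Size = 24;

                // OpenType features added in the Word 2010 object model.
                // Enum member names are assumptions -- check the object browser.
                range.Font.Ligatures = Word.WdLigatures.wdLigaturesStandardContextual;
                range.Font.NumberForm = Word.WdNumberForm.wdNumberFormOldstyle;
                range.Font.NumberSpacing = Word.WdNumberSpacing.wdNumberSpacingProportional;
                range.Font.StylisticSet = Word.WdStylisticSet.wdStylisticSet06;
            }
        }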

    Read the article

  • Parallelism in .NET – Part 8, PLINQ’s ForAll Method

    - by Reed
    Parallel LINQ extends LINQ to Objects, and is typically very similar. However, as I previously discussed, there are some differences. Although the standard way to handle simple data parallelism is via Parallel.ForEach, it's possible to do the same thing via PLINQ. PLINQ adds a new method unavailable in standard LINQ which provides new functionality…

    LINQ is designed to provide a much simpler way of handling querying, including filtering, ordering, grouping, and many other benefits. Reading the description of LINQ to Objects on MSDN, it becomes clear that the thinking behind LINQ deals with retrieval of data. LINQ works by adding a functional programming style on top of .NET, allowing us to express filters in terms of predicate functions, for example.

    PLINQ is, generally, very similar. Typically, when using PLINQ, we write declarative statements to filter a dataset or perform an aggregation. However, PLINQ adds one new method which serves a very different purpose: ForAll.

    The ForAll method is defined on ParallelEnumerable, and will work upon any ParallelQuery<T>. Unlike the sequence operators in LINQ and PLINQ, ForAll is intended to cause side effects. It does not filter a collection, but rather invokes an action on each element of the collection.

    At first glance, this seems like a bad idea. For example, Eric Lippert clearly explained two philosophical objections to providing an IEnumerable<T>.ForEach extension method, one of which still applies when parallelized. The sole purpose of this method is to cause side effects, and as such, I agree that the ForAll method "violates the functional programming principles that all the other sequence operators are based upon", in exactly the same manner an IEnumerable<T>.ForEach extension method would violate these principles. Eric Lippert's second reason for disliking a ForEach extension method does not necessarily apply to ForAll – replacing ForAll with a call to Parallel.ForEach has the same closure semantics, so there is no loss there.

    Although ForAll may have philosophical issues, there is a pragmatic reason to include this method. Without ForAll, we would take a fairly serious performance hit in many situations. Often, we need to perform some filtering or grouping, then perform an action using the results of our filter. Using a standard foreach statement to perform our action would avoid this philosophical issue:

        // Filter our collection
        var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

        // Now perform an action
        foreach (var item in filteredItems)
        {
            // These will now run serially
            item.DoSomething();
        }

    This would cause a loss in performance, since we lose any parallelism in place, and cause all of our actions to run serially.

    We could easily use a Parallel.ForEach instead, which adds parallelism to the actions:

        // Filter our collection
        var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

        // Now perform an action once the filter completes
        Parallel.ForEach(filteredItems, item =>
        {
            // These will now run in parallel
            item.DoSomething();
        });

    This is a noticeable improvement, since both our filtering and our actions run parallelized. However, there is still a large bottleneck in place here. The problem lies with my comment "perform an action once the filter completes". Here, we're parallelizing the filter, then collecting all of the results, blocking until the filter completes. Once the filtering of every element is completed, we then repartition the results of the filter, reschedule into multiple threads, and perform the action on each element. By moving this into two separate statements, we potentially double our parallelization overhead, since we're forcing the work to be partitioned and scheduled twice as many times.

    This is where the pragmatism comes into play. By violating our functional principles, we gain the ability to avoid the overhead and cost of rescheduling the work:

        // Perform an action on the results of our filter
        collection
            .AsParallel()
            .Where( i => i.SomePredicate() )
            .ForAll( i => i.DoSomething() );

    The ability to avoid the scheduling overhead is a compelling reason to use ForAll. This really goes back to one of the key points I discussed in data parallelism: partition your problem in a way that places the most work possible into each task. Here, this means leaving the statement attached to the expression, even though it causes side effects and is not standard usage for LINQ.

    This leads to my one guideline for using ForAll: the ForAll extension method should only be used to process the results of a parallel query, as returned by a PLINQ expression. Any other usage scenario should use Parallel.ForEach instead.
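
    To make the pattern concrete, here is a small, self-contained sketch you can compile and run (the filter and action are placeholders for the SomePredicate/DoSomething calls used above):

        using System;
        using System.Linq;

        class ForAllDemo
        {
            static void Main()
            {
                int[] numbers = Enumerable.Range(1, 20).ToArray();

                // Filter and act in one parallel pipeline. ForAll invokes the
                // action on each partition's worker thread as results are
                // produced, so there is no merge/repartition step between the
                // filter and the action.
                numbers.AsParallel()
                       .Where(n => n % 2 == 0)
                       .ForAll(n => Console.WriteLine("processed " + n));
            }
        }

    Note that the output order is nondeterministic, which is exactly the point: no ordering or merging work is done between the filter and the action.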

    Read the article

  • SnagIt Live Writer Plug-in Updated

    - by Rick Strahl
    Ah, I love SnagIt from TechSmith and I use the heck out of it almost every day. So no surprise that I decided some time ago to integrate SnagIt into a few applications that require screen shots extensively. It's been a while since I've posted an update to my small SnagIt Windows Live Writer plug-in. A few nagging issues have crept up with changes in the way SnagIt handles captures in recent versions, and they have been addressed in this update.

    Personally I love SnagIt and use it extensively, mostly for blogging, but also for writing documentation and articles, etc. While there are many other (and also free) tools out there for basic screen captures, SnagIt continues to be the most convenient tool for me, with its nice built-in capture and effects editor that makes creating professional-looking captures childishly simple. And maybe even more importantly: SnagIt has a COM interface that can be automated, which makes it super easy to embed into other applications. I've built plugins for SnagIt as well as for one of my company's own tools, Html Help Builder.

    If you use the Windows Live Writer offline weblog editor to write blog posts and have a copy of SnagIt, it's probably worth your while to check this out if you haven't already. In case you haven't: this plugin integrates SnagIt with Live Writer so you can easily capture and edit content and embed it into a post. Captures are shown in the SnagIt preview editor, where you can edit the image and apply image markup or effects before selecting Finish (or Cancel). The final image can then be pasted directly into your Live Writer post.

    When installed, the SnagIt plug-in shows up in the plug-in list or in the Plug-Ins toolbar shortcut. Once you select the plug-in you get the capture window, which lets you customize the capture process and includes most of the useful SnagIt capture options. Once you're done capturing, the image shows up in the SnagIt image editor, where you can crop, mark up, and apply effects. When done, you click the Finish button and the image is embedded right into your blog post. Easy - how do you think the images in this blog entry got in here? The beauty of SnagIt is that it's all easily integrated: capturing, editing, and embedding only take a few seconds, especially if you save image effect presets in SnagIt.

    What's updated

    The main issue addressed in this update has to do with how the plug-in manages the Live Writer window. When a capture starts, Live Writer gets minimized to get out of the way and let you pick your capture source. When the capture is complete and the image has been embedded, Live Writer is activated once again. Recent versions of SnagIt, however, changed SnagIt's window positioning, so that Live Writer ended up popping back up behind the SnagIt window, which was pretty annoying. This update pushes Live Writer back to the top of the window stack using some delaying tactics in the code. There have also been a few small changes to the way the code interacts with the COM object, which is more reliable if a capture fails, or if SnagIt blows up or is locked because it's already in a capture outside of the automation interface.

    Source Code

    SnagIt automation is something I actually use a lot. As mentioned, I've integrated this automation into Live Writer as well as my documentation tool Html Help Builder, which I use just about daily. The SnagIt integration has a similar interface in that application and provides similar functionality. Because it's quite useful to embed SnagIt into other apps, there's source code that you can download and embed into your own applications. The code includes both the dialog class that is automated from Live Writer, as well as the basic capture component that captures images to a disk file.

    Resources

    Download the SnagIt Capture Plug-in Installer - An MSI installer that will install the plug-in into Live Writer's PlugIns directory.
    Source Code to the SnagIt Capture Plug-in - Contains the plug-in assembly, as well as the source code to the plug-in and the setup project.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Live Writer, WebLog.

    Read the article

  • Spolskism or Twitterism: A Doctor writes...

    - by Phil Factor
    "I never realized I had a problem. I just 'twittered' because it was a social thing to do. All my mates were doing it. It made me feel good to have 'followers'; it bolstered my self-esteem. Of course, you don't think of the long-term effects on your work and on the way you think. There's no denying that it impairs your judgment…" Yes, this story is typical. Hundreds of people are waking up to the long term effects of twittering, and seeking help. Dave, who wishes to remain anonymous, told our reporter… "I started using Twitter at work. Just a few minutes now and then, throughout the day. A lot of my colleagues were doing it and I thought 'Well, that's cool; it must be part of what I should be doing at work'. Soon, I was avidly reading every twitter that came my way, and counting the minutes between my own twitters. I tried to kid myself that it was all about professional development and getting other people to help you with work-related problems, but in truth I had become addicted to the buzz of the social network. The worse thing was that it made me seem busy even when I was really just frittering my time away. Inevitably, I started to get behind with my real work." Experts have identified the syndrome and given it a name: 'Twitterism', sometimes referred to as 'Spolskism', after the person who first drew attention to the pernicious damage to well-being that the practice caused, and who had the courage to take the pledge of rejecting it. According to one expert… "The occasional Twitter does little harm to the participant, and can be an adaptive way of dealing with stress. Unfortunately, it rarely stops there. The addictive qualities of the practice have put a strain on the caring professions who are faced with a flood of people making that first bold step to seeking help". Dave is one of those now seeking help for his addiction… "I had lost touch with reality. Even though I twittered my work colleagues constantly, I found I actually spoke to them less and less. Even when out socializing, I would frequently disengage from the conversation, in order to twitter. I stopped blogging. I stopped responding to emails; the only way to reach me was through the world of Twitter. Unfortunately, my denial about the harm that twittering was doing to me, my friends, and my work-colleagues was so strong that I truly couldn't see that I had a problem." Like other addictions, the help and support of others who are 'taking the cure' is important. There is a common bond between those who have 'been through hell and back' and are once more able to experience the joys of actually conversing and socializing, rather than the false comfort of solitary 'twittering'. Complete abstinence is essential to the cure. Most of those who risk even an occasional twitter face a headlong slide back into 'binge' twittering. Tom, another twitterer who has managed to kick the habit explains… "My twittering addiction now seems more like a bad dream. You get to work, and switch on the PC. You say to yourself, just open up the browser, just for a minute, just to see what people are saying on Twitter. The next thing you know, half the day has gone by. The worst thing is that when you're addicted, you get good at covering up the habit; I spent so much time looking at the screen and typing on the keyboard, people just assumed I was working hard.I know that I must never forget what it was like then, and what it's like now that I've kicked the habit. I now have more time for productive work and a real social life." 
Like many addictions, Spolskism has its most detrimental effects on family, friends and workmates, rather than the addict. So often nowadays, we hear the sad stories of Twitter-Widows; tales of long lonely evenings spent whilst their partners are engrossed in their twittering into their 'mobiles' or indulging in their solitary spolskistic habits in privacy, under cover of 'having to do work at home'. Workmates suffer too, when the addicts even take their laptops or mobiles into meetings in order to 'twitter' with their fellow obsessives, even stooping to complain to their followers how boring the meeting is. No; The best advice is to leave twittering to the birds. You know it makes sense.

    Read the article

  • BizTalk – Routing failure on Delivery Notifications (BizTalk 2006 R2 to 2013)

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/11/11/biztalk-routing-failure-on-delivery-notifications.aspx

    This is a detailed explanation of something I posted a few months ago on Stack Overflow, concerning a weird behavior (a bug, really…) of delivery notifications in BizTalk.

    Reminder: what are delivery notifications?

    Mechanism: BizTalk has the ability to automatically publish positive acknowledgments (ACKs) when it has succeeded in transmitting a message, or negative acknowledgments (NACKs) in case of a transmission failure. Orchestrations can use delivery notifications to subscribe to those ACKs and NACKs in order to know whether a message sent on a one-way send port has been successfully transmitted. Delivery notifications can be "activated" in two ways:

    - The most common and easy way is to set the Delivery Notification property of a logical send port (in the orchestration designer) to Transmitted.
    - Another way is to set the BTS.AckRequired context property of the message to be sent to true.

    NOTE: fundamentally, those two methods are strictly equivalent, since setting Delivery Notification to Transmitted on the send port only tells BizTalk to set the BTS.AckRequired context property to true on the outgoing message.

    Related context properties: ACKs and NACKs have a common set of promoted context properties:

    - AckType – ACK when successful, NACK otherwise
    - AckID – MessageID of the message concerned by the acknowledgment
    - AckOwnerID – InstanceID of the instance associated with the acknowledgment
    - AckSendPortID – ID of the send port
    - AckSendPortName – Name of the send port
    - AckOutboundTransportLocation – URI of the send port
    - AckReceivePortID – ID of the port the message came from
    - AckReceivePortName – Name of the port the message came from
    - AckInboundTransportLocation – URI of the port the message came from

    Detailed behavior: The way delivery notifications are handled by BizTalk is peculiar compared to the standard behavior of the Message Box: if no active subscription exists for the acknowledgment, it is simply discarded. The direct consequence of this is that there can be no routing failure for an acknowledgment, and an acknowledgment cannot be suspended. Moreover, when a message is sent to a send port where Delivery Notification = Transmitted, a correlation set is initialized and a correlation token is attached to the message (context property: CorrelationToken). This correlation token will also be attached to the acknowledgment, so when the acknowledgment is issued, it is automatically routed to the source orchestration. Finally, when a NACK is received by the source orchestration, a DeliveryFailureException is thrown, which can be caught in a Catch section.

    Context of the problem: Consider this scenario: in an orchestration, delivery notifications are activated on a one-way send port. In case of a transmission failure, the messaging instance is suspended and the orchestration catches an exception (DeliveryFailureException). When the exception is caught, the orchestration does some logging and then terminates (thanks to a Terminate shape). That leaves only the suspended messaging instance, waiting to be resumed.

    Symptoms: Once the problem that caused the transmission failure is solved, the messaging instance is resumed. Considering what was said in the reminder, we would expect the instance to complete, leaving no active or suspended instance. Nevertheless, the result is that the messaging instance is once more suspended, this time because of a routing failure. The routing failure report shows that the suspended message has the acknowledgment context properties listed above attached to it.

    Explanation: Those properties clearly indicate that the message being suspended is an acknowledgment (an ACK in this case), which was published in the Message Box and was suspended because no subscribers were found. This makes sense, since the source orchestration was terminated before we resumed the messaging instance: its subscription to the acknowledgments was no longer active when the ACK was published, which explains the routing failure. But this behavior is in direct contradiction with what was said earlier: an acknowledgment must be discarded when no subscriber is found, and therefore should not be suspended.

    Cause: It is indeed an outright bug, which appeared with SP1 of BizTalk 2006 R2 and was never corrected since: not in the next 4 CUs, not in BizTalk 2009, not in 2010, and not even in 2013 – though I haven't tested CU1 and CU2 for this last edition, I bet there is nothing to be expected from those CUs on this particular point.

    Side effects: This bug can have pretty nasty side effects, because the behavior can be propagated to other ports by the routing mechanisms. For instance: you have configured the ESB Toolkit and have activated "Enable routing failure for failed messages". The result will be that the ESB Exception SQL send port will also try to publish ACKs or NACKs concerning its own messaging instances. In itself this is already messy, but remember that those acknowledgments will also have the source correlation token attached to them… See how far it goes? Well, actually there is more: in SQL send ports, transactions will be rolled back because of the routing failure (I guess it also happens with other adapters, like Oracle, but I haven't tested them). Again, think of what happens when the send port is the ESB Exception send port: your BizTalk box is going mad, but you have no idea, since no exception can be written to the exception database! All of this can be tricky to diagnose, I can tell you that…

    Solution: There is no real solution, only a work-around, and it won't cure all of the problems and side effects. The idea is to create an orchestration which subscribes to all acknowledgments. That is to say:

    - The message type of the incoming message will be XmlDocument
    - The BTS.AckType property exists
    - The logical receive port will use direct binding

    By doing so, all acknowledgments will be consumed by an instance of this orchestration, thus avoiding the routing failure. Here is an example of what this orchestration could look like. In order not to pollute the HAT and the DTA database (after all, this orchestration is only meant to be a palliative for a faulty internal BizTalk mechanism, so there should be no trace of its execution), all tracking must be deactivated:

    Read the article

  • XNA - Use Mouse To Rotate & Arrow Keys To Scroll A Linearly Wrapped Texture

    - by The Thing
    Using XNA, I'm working on my first, relatively simple, video game for the PC. At the moment my game window is 1024 x 768, and I have a 'starfield' linearly wrapped background texture, 1280 x 1280 in size, whose origin has been set to its center point (width / 2, height / 2). This texture is drawn on screen using (graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2) to place the origin in the center of the window.

    I want to be able to use the horizontal movement of the mouse to rotate my texture left or right, and use the arrow keys to scroll the texture in four directions. From my own related coding experiments I have found that once I rotate the texture, it no longer scrolls in the direction I want; it's as if the XNA framework's 'sense of direction' has been 'rotated' along with the texture. As an example: say I rotate the texture 45 degrees to the right; then pressing the up arrow key results in the texture scrolling diagonally from top-right to bottom-left. This is not what I want. Regardless of the degree or direction of rotation, I want my texture to scroll straight up, straight down, or to the left or right, depending on which arrow key was pressed. How do I go about accomplishing this? Any help or guidance is appreciated.

    To finish up, there are two points I'd like to clarify: [1] The reason I'm using linear wrapping on my starfield texture is that it gives a nice impression of an endless starfield. [2] Using a texture at least 1280 x 1280 in conjunction with a game window of 1024 x 768 means that at no point in its rotation will the edges of the texture become visible. Thanks for reading.

    Update #1, as requested by RCIX: The code below is what I was referring to earlier when I mentioned 'related coding experiments'. As you can see, I am scrolling a linearly wrapped texture in the direction I've moved the mouse relative to the center of the screen. This works perfectly if I don't rotate the texture, but once I do rotate it, the direction of the scrolling gets messed up for some reason.

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            int x;
            int y;
            float z = 250f;
            Texture2D Overlay;
            Texture2D RotatingBackground;
            Rectangle? sourceRectangle;
            Color color;
            float rotation;
            Vector2 ScreenCenter;
            Vector2 Origin;
            Vector2 scale;
            Vector2 Direction;
            SpriteEffects effects;
            float layerDepth;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void Initialize()
            {
                graphics.PreferredBackBufferWidth = 1024;
                graphics.PreferredBackBufferHeight = 768;
                graphics.ApplyChanges();
                Direction = Vector2.Zero;
                IsMouseVisible = true;
                ScreenCenter = new Vector2(graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2);
                Mouse.SetPosition((int)graphics.PreferredBackBufferWidth / 2, (int)graphics.PreferredBackBufferHeight / 2);
                sourceRectangle = null;
                color = Color.White;
                rotation = 0.0f;
                scale = new Vector2(1.0f, 1.0f);
                effects = SpriteEffects.None;
                layerDepth = 1.0f;
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                Overlay = Content.Load<Texture2D>("Overlay");
                RotatingBackground = Content.Load<Texture2D>("Background");
                Origin = new Vector2((int)RotatingBackground.Width / 2, (int)RotatingBackground.Height / 2);
            }

            protected override void UnloadContent() { }

            protected override void Update(GameTime gameTime)
            {
                float timePassed = (float)gameTime.ElapsedGameTime.TotalSeconds;
                MouseState ms = Mouse.GetState();
                Vector2 MousePosition = new Vector2(ms.X, ms.Y);
                Direction = ScreenCenter - MousePosition;
                if (Direction != Vector2.Zero)
                {
                    Direction.Normalize();
                }
                x += (int)(Direction.X * z * timePassed);
                y += (int)(Direction.Y * z * timePassed);
                // No rotation = texture scrolls as intended. With rotation = texture no
                // longer scrolls in the direction of the mouse. My update method needs
                // to somehow compensate for this.
                //rotation += 0.01f;
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.LinearWrap, null, null);
                spriteBatch.Draw(RotatingBackground, ScreenCenter, new Rectangle(x, y, RotatingBackground.Width, RotatingBackground.Height), color, rotation, Origin, scale, effects, layerDepth);
                spriteBatch.Draw(Overlay, Vector2.Zero, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }
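
    A minimal sketch of one way to fix this, given the code above: the (x, y) offset is applied to the source rectangle, i.e. in the texture's own space, so a screen-space scroll direction has to be rotated back through the sprite's current rotation before it is accumulated. The helper name here is invented for illustration:

        using Microsoft.Xna.Framework;

        static class ScrollHelper
        {
            // Converts a screen-space scroll direction (e.g. (0, -1) for "up")
            // into a source-rectangle offset for a sprite drawn with 'rotation'
            // radians, by rotating the direction back into the texture's own space.
            public static Vector2 ToTextureSpace(Vector2 screenDirection, float rotation)
            {
                return Vector2.Transform(screenDirection, Matrix.CreateRotationZ(-rotation));
            }
        }

    In Update, the mouse (or arrow-key) direction would then become something like Vector2 texDir = ScrollHelper.ToTextureSpace(Direction, rotation); before being scaled by z * timePassed and added to x and y.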

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0 and I came up with a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg — this effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent the light; using other light shapes gives different results. I'm using a PrelightingRenderer class (the generic type arguments and the sampler texture bindings below were eaten by the original post's HTML and have been restored from context):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Dhpoware;
        using Microsoft.Xna.Framework.Content;

        namespace XNAFirstPersonCamera
        {
            public class PrelightingRenderer
            {
                // Normal, depth, and light map render targets
                RenderTarget2D depthTarg;
                RenderTarget2D normalTarg;
                RenderTarget2D lightTarg;

                // Depth/normal effect and light mapping effect
                Effect depthNormalEffect;
                Effect lightingEffect;

                // Point light (sphere) mesh
                Model lightMesh;

                // List of models, lights, and the camera
                public List<CModel> Models { get; set; }
                public List<PPPointLight> Lights { get; set; }
                public FirstPersonCamera Camera { get; set; }

                GraphicsDevice graphicsDevice;
                int viewWidth = 0, viewHeight = 0;

                public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content)
                {
                    viewWidth = GraphicsDevice.Viewport.Width;
                    viewHeight = GraphicsDevice.Viewport.Height;

                    // Create the three render targets
                    depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24);
                    normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);
                    lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);

                    // Load effects
                    depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal");
                    lightingEffect = Content.Load<Effect>(@"Effects\PPLight");

                    // Set effect parameters to light mapping effect
                    lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth);
                    lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight);

                    // Load point light mesh and set light mapping effect to it
                    lightMesh = Content.Load<Model>(@"Models\PPLightMesh");
                    lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect;

                    this.graphicsDevice = GraphicsDevice;
                }

                public void Draw()
                {
                    drawDepthNormalMap();
                    drawLightMap();
                    prepareMainPass();
                }

                void drawDepthNormalMap()
                {
                    // Set the render targets to 'slots' 1 and 2
                    graphicsDevice.SetRenderTargets(normalTarg, depthTarg);

                    // Clear the render target to 1 (infinite depth)
                    graphicsDevice.Clear(Color.White);

                    // Draw each model with the PPDepthNormal effect
                    foreach (CModel model in Models)
                    {
                        model.CacheEffects();
                        model.SetModelEffect(depthNormalEffect, false);
                        model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position);
                        model.RestoreEffects();
                    }

                    // Un-set the render targets
                    graphicsDevice.SetRenderTargets(null);
                }

                void drawLightMap()
                {
                    // Set the depth and normal map info to the effect
                    lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg);
                    lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg);

                    // Calculate the view * projection matrix
                    Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix;

                    // Set the inverse of the view * projection matrix to the effect
                    Matrix invViewProjection = Matrix.Invert(viewProjection);
                    lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection);

                    // Set the render target to the graphics device
                    graphicsDevice.SetRenderTarget(lightTarg);

                    // Clear the render target to black (no light)
                    graphicsDevice.Clear(Color.Black);

                    // Set render states to additive (lights will add their influences)
                    graphicsDevice.BlendState = BlendState.Additive;
                    graphicsDevice.DepthStencilState = DepthStencilState.None;

                    foreach (PPPointLight light in Lights)
                    {
                        // Set the light's parameters to the effect
                        light.SetEffectParameters(lightingEffect);

                        // Calculate the world * view * projection matrix and set it to the effect
                        Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection;
                        lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp);

                        // Determine the distance between the light and camera
                        float dist = Vector3.Distance(Camera.Position, light.Position);

                        // If the camera is inside the light-sphere, invert the cull mode
                        // to draw the inside of the sphere instead of the outside
                        if (dist < light.Attenuation)
                            graphicsDevice.RasterizerState = RasterizerState.CullClockwise;

                        // Draw the point-light-sphere
                        lightMesh.Meshes[0].Draw();

                        // Revert the cull mode
                        graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
                    }

                    // Revert the blending and depth render states
                    graphicsDevice.BlendState = BlendState.Opaque;
                    graphicsDevice.DepthStencilState = DepthStencilState.Default;

                    // Un-set the render target
                    graphicsDevice.SetRenderTarget(null);
                }

                void prepareMainPass()
                {
                    foreach (CModel model in Models)
                        foreach (ModelMesh mesh in model.Model.Meshes)
                            foreach (ModelMeshPart part in mesh.MeshParts)
                            {
                                // Set the light map and viewport parameters to each model's effect
                                if (part.Effect.Parameters["LightTexture"] != null)
                                    part.Effect.Parameters["LightTexture"].SetValue(lightTarg);
                                if (part.Effect.Parameters["viewportWidth"] != null)
                                    part.Effect.Parameters["viewportWidth"].SetValue(viewWidth);
                                if (part.Effect.Parameters["viewportHeight"] != null)
                                    part.Effect.Parameters["viewportHeight"].SetValue(viewHeight);
                            }
                }
            }
        }

    It uses three effects. PPDepthNormal.fx:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float3 Normal : NORMAL0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 Depth : TEXCOORD0;
            float3 Normal : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 viewProjection = mul(View, Projection);
            float4x4 worldViewProjection = mul(World, viewProjection);
            output.Position = mul(input.Position, worldViewProjection);
            output.Normal = mul(input.Normal, World);

            // Position's z and w components correspond to the distance
            // from camera and distance of the far plane respectively
            output.Depth.xy = output.Position.zw;
            return output;
        }

        // We render to two targets simultaneously, so we can't
        // simply return a float4 from the pixel shader
        struct PixelShaderOutput
        {
            float4 Normal : COLOR0;
            float4 Depth : COLOR1;
        };

        PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
        {
            PixelShaderOutput output;

            // Depth is stored as distance from camera / far plane distance
            // to get value between 0 and 1
            output.Depth = input.Depth.x / input.Depth.y;

            // Normal map simply stores X, Y and Z components of normal
            // shifted from (-1 to 1) range to (0 to 1) range
            output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;

            // Other components must be initialized to compile
            output.Depth.a = 1;
            output.Normal.a = 1;
            return output;
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPLight.fx:

        float4x4 WorldViewProjection;
        float4x4 InvViewProjection;

        texture2D DepthTexture;
        texture2D NormalTexture;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 LightColor;
        float3 LightPosition;
        float LightAttenuation;

        // Include shared functions
        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 LightPosition : TEXCOORD0;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = mul(input.Position, WorldViewProjection);
            output.LightPosition = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Find the pixel coordinates of the input position in the depth
            // and normal textures
            float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Extract the normal from the normal map and move from
            // 0 to 1 range to -1 to 1 range
            float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2;

            // Perform the lighting calculations for a point light
            float3 lightDirection = normalize(LightPosition - position);
            float lighting = clamp(dot(normal, lightDirection), 0, 1);

            // Attenuate the light to simulate a point light
            float d = distance(LightPosition, position);
            float att = 1 - pow(d / LightAttenuation, 6);

            return float4(LightColor * lighting * att, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPShared.vsi has some common functions:

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

    And finally, in the Game class I set things up in LoadContent with:

        effect = Content.Load<Effect>(@"Effects\PPModel");
        models[0] = new CModel(Content.Load<Model>(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
        house = new CModel(Content.Load<Model>(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
        models[0].SetModelEffect(effect, true);
        house.SetModelEffect(effect, true);
        renderer = new PrelightingRenderer(GraphicsDevice, Content);
        renderer.Models = new List<CModel>();
        renderer.Models.Add(house);
        renderer.Models.Add(models[0]);
        renderer.Lights = new List<PPPointLight>() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) };

    where PPModel.fx is:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        texture2D BasicTexture;
        sampler2D basicTextureSampler = sampler_state
        {
            texture = <BasicTexture>;
            addressU = wrap;
            addressV = wrap;
            minfilter = anisotropic;
            magfilter = anisotropic;
            mipfilter = linear;
        };
        bool TextureEnabled = true;

        texture2D LightTexture;
        sampler2D lightSampler = sampler_state
        {
            texture = <LightTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 AmbientColor = float3(0.15, 0.15, 0.15);
        float3 DiffuseColor;

        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 worldViewProjection = mul(World, mul(View, Projection));
            output.Position = mul(input.Position, worldViewProjection);
            output.PositionCopy = output.Position;
            output.UV = input.UV;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Sample model's texture
            float3 basicTexture = tex2D(basicTextureSampler, input.UV);
            if (!TextureEnabled)
                basicTexture = float4(1, 1, 1, 1);

            // Extract lighting value from light map
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
            float3 light = tex2D(lightSampler, texCoord);
            light += AmbientColor;

            return float4(basicTexture * DiffuseColor * light, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    I don't have any idea what's wrong... Googling the web, I found that this tutorial may have a bug, but I don't know if the fault lies in the light mesh (the sphere), in one of the shaders, or in the PrelightingRenderer class. Any help is very appreciated; thank you for reading!
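
    Not a fix, but a way to narrow a problem like this down, assuming you can modify PrelightingRenderer: dump each intermediate render target to disk right after its pass and check which one first shows the sphere-shaped artifact. If the light map already contains it, the depth reconstruction in PPLight.fx is the prime suspect; if the depth or normal map is already wrong, look at PPDepthNormal.fx instead. A minimal sketch (XNA 4.0 on Windows):

        using System.IO;
        using Microsoft.Xna.Framework.Graphics;

        static class RenderTargetDebug
        {
            // Saves a render target to a PNG so each deferred pass (depth,
            // normal, light map) can be inspected in isolation. Call after the
            // corresponding SetRenderTarget(null) in PrelightingRenderer.Draw().
            public static void Dump(RenderTarget2D target, string path)
            {
                using (FileStream stream = File.Create(path))
                {
                    target.SaveAsPng(stream, target.Width, target.Height);
                }
            }
        }

    For example, RenderTargetDebug.Dump(lightTarg, "lightmap.png") called once (e.g. on a key press) after drawLightMap() will show exactly what the main pass is sampling.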

    Read the article

  • Recommended resources for learning MS Expression Blend

    - by fastmonkeywheels
    I've been designing applications using C# for some time now, but I have the need to create a more custom application, and Expression Blend was recommended to me. I've downloaded the free trial, but it's a little fancier than I expected. I'm not a graphic designer and can't use Photoshop to save my life; however, I do have images provided by a graphic designer to use for this application. I'm looking for some good resources for learning Blend and using it as a front end to a C# application, much as I would a regular C# form.

    Read the article

  • Format of compiled DirectX 9 shader files?

    - by JB
    Is the format of compiled pixel and vertex shader object files, as produced by fxc.exe, documented anywhere, either officially or unofficially? I'd like to be able to read the constant-name-to-register assignments from the shader files. I know that the effects framework in D3DX can do this, but I need to avoid using D3DX, as it may not be installed on users' machines, and I don't need it for anything else, so I want to avoid making them run the DirectX update. If the effects framework can do it, then so can I, if I can find out the file format, but I can't seem to find it documented anywhere.
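
    For what it's worth, a sketch of how the constant table can be located without D3DX, based on unofficial descriptions of the D3D9 token stream (e.g. in the Wine sources): the compiled shader is a sequence of 32-bit tokens; comment tokens carry 0xFFFE in their low 16 bits and their payload length (in DWORDs) in bits 16-30, and the constant table is a comment whose payload starts with the FOURCC 'CTAB'. Treat the layout details as assumptions to verify against real fxc output:

        using System;

        static class CtabLocator
        {
            const uint CommentOpcode = 0xFFFE;
            const uint CtabFourCC = 0x42415443; // 'CTAB' as a little-endian DWORD

            // Returns the byte offset of the constant-table header inside a
            // compiled d3d9 shader, or -1 if none is found. Note: a fully robust
            // parser must also skip instruction operands (SM2+ encodes an
            // instruction's length in bits 24-27); this naive scan relies on the
            // CTAB comment appearing near the start, after the version token.
            public static int FindCtabOffset(byte[] shader)
            {
                uint[] tokens = new uint[shader.Length / 4];
                Buffer.BlockCopy(shader, 0, tokens, 0, tokens.Length * 4);

                for (int i = 1; i < tokens.Length; i++) // token 0 is the version token
                {
                    uint token = tokens[i];
                    if ((token & 0xFFFF) != CommentOpcode)
                        continue;
                    int length = (int)((token >> 16) & 0x7FFF); // payload size in DWORDs
                    if (length > 0 && i + 1 < tokens.Length && tokens[i + 1] == CtabFourCC)
                        return (i + 2) * 4; // constant table header starts after the FOURCC
                    i += length; // skip this comment's payload
                }
                return -1;
            }
        }

    The structure at that offset is what D3DX exposes through its constant-table API (names, register sets, and register indices); its layout is likewise only unofficially documented, so cross-check any parser against shaders whose constants you already know.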

    Read the article

  • Will html5 change everything for designers?

    - by Sean Thompson
    What impact do you think HTML5 will have on the workflow and the way graphic design is done for the web? Right now most designers stay in an Adobe tool, doing most of the design work there, and implement some elements with graphics and some with code. Checking out http://www.apple.com/html5/ it seems that almost everything done in a graphic can now be done in code. Will designers have to learn very advanced HTML5 and do the actual design work in the browser, or do you see a more "designer friendly" GUI being made for HTML/graphics work? Will tools like Photoshop evolve in a way that handles this new lack of image files?

    Read the article

  • Git: How do I push a project that was downloaded from source?

    - by JZ
    I worked with a graphic designer who did not clone from my GitHub account; he downloaded the project source as an archive rather than using the command git clone. A month has gone by since he pulled his files, and I now want to do the following tasks:

    - Create a new branch
    - Push the graphic designer's project into that branch
    - Merge his branch with master

    I've tried following the GitHub forking guide without much luck; when I attempt to push the files into a new branch, I get an error:

        fatal: Not a git repository (or any of the parent directories): .git

    How do I do this?

    Read the article

  • Is it possible to drag windows between workspaces when using Compiz?

    - by Mike Stone
    When I used Metacity, I could drag windows between workspaces using the small icon view of my workspaces. I recently started using Compiz for all the cool desktop effects, however this drag and drop feature isn't working. I use the cube effect for switching workspaces, but I noticed the wall effect doesn't allow it either. Is this just a missing feature from Compiz or is there a setting somewhere that I can enable it? I know that I can enable dragging windows across edges to the next workspace, and the expose feature to drag windows between workspaces. However, the drag and drop on the icon view is really powerful, and I would love to have it back along with all the great Compiz special effects.

    Read the article

  • What's the best version control system for handling projects with graphics?

    - by acrosman
    I'm part of a small team (usually just two people); I handle the code, he handles the graphic design. In the past I've used CVS to handle version control of the code files, and while we've included the graphics in the repository, he hasn't derived nearly as much value from it as I have. Are there other packages that provide better features for supporting graphics? The system would need an easy-to-use GUI interface, as I don't think it's fair to expect a graphic designer to learn command-line tools. Additional aspect: the client software needs to run smoothly on OS X (for the designer) and Windows (for the programmer).

    Read the article

  • Programming user interface advice?

    - by onurozcelik
    Hi, in my project I am going to generate a user interface through programming, and scalability of this UI is a very important requirement. So far I am using two-dimensional graphics to generate the UI. There may be different solutions, but for the moment I know only two. The first is supplying the X,Y coordinates of each two-dimensional graphic on my UI. (I do not prefer this solution because I do not want to calculate the X,Y coordinates of each graphic; for the moment I don't have a logic for doing this easily.) The second, which I am currently using, is layouts, which organize their contents according to the size of each item. With this solution I don't have to calculate the X,Y coordinates of each item; the layout does this for me. But this approach may have its own pitfalls. I am very new to user interface programming. Can you give me advice about this issue?
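
    A minimal sketch of the second approach, in C# (the types are invented for illustration): a vertical layout that derives each item's X,Y from the accumulated sizes of the items before it, so no coordinates are ever supplied by hand:

        using System.Collections.Generic;

        struct ItemSize { public float Width, Height; public ItemSize(float w, float h) { Width = w; Height = h; } }
        struct ItemPosition { public float X, Y; public ItemPosition(float x, float y) { X = x; Y = y; } }

        class VerticalLayout
        {
            public float Spacing = 4f;

            // Stacks items top-to-bottom from 'origin'; each position is
            // derived from the heights of the items above it.
            public List<ItemPosition> Arrange(IList<ItemSize> items, ItemPosition origin)
            {
                var positions = new List<ItemPosition>(items.Count);
                float y = origin.Y;
                foreach (ItemSize item in items)
                {
                    positions.Add(new ItemPosition(origin.X, y));
                    y += item.Height + Spacing; // next item starts below this one
                }
                return positions;
            }
        }

    Nesting such layouts (a horizontal one holding vertical ones, and so on) is essentially how toolkit layout managers avoid hand-computed coordinates.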

    Read the article

  • Confusion Matrix with number of classified/misclassified instances on it (Python/Matplotlib)

    - by Pinkie
    I am plotting a confusion matrix with matplotlib with the following code:

        from numpy import *
        import matplotlib.pyplot as plt
        from pylab import *

        conf_arr = [[33, 2, 0, 0, 0, 0, 0, 0, 0, 1, 3],
                    [3, 31, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 4, 41, 0, 0, 0, 0, 0, 0, 0, 1],
                    [0, 1, 0, 30, 0, 6, 0, 0, 0, 0, 1],
                    [0, 0, 0, 0, 38, 10, 0, 0, 0, 0, 0],
                    [0, 0, 0, 3, 1, 39, 0, 0, 0, 0, 4],
                    [0, 2, 2, 0, 4, 1, 31, 0, 0, 0, 2],
                    [0, 1, 0, 0, 0, 0, 0, 36, 0, 2, 0],
                    [0, 0, 0, 0, 0, 0, 1, 5, 37, 5, 1],
                    [3, 0, 0, 0, 0, 0, 0, 0, 0, 39, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 38]]

        # Normalize each row so the colors reflect per-class rates
        norm_conf = []
        for i in conf_arr:
            a = 0
            tmp_arr = []
            a = sum(i, 0)
            for j in i:
                tmp_arr.append(float(j) / float(a))
            norm_conf.append(tmp_arr)

        plt.clf()
        fig = plt.figure()
        ax = fig.add_subplot(111)
        res = ax.imshow(array(norm_conf), cmap=cm.jet, interpolation='nearest')
        cb = fig.colorbar(res)
        savefig("confmat.png", format="png")

    But I want the confusion matrix to show the numbers on it, like this graphic (the right one): http://i48.tinypic.com/2e30kup.jpg. How can I plot the numbers from conf_arr on the graphic?

    Read the article
