Search Results

Search found 16032 results on 642 pages for 'everyday problems'.


  • Why is a fully transparent pixel still rendered?

    - by Mr Bell
    I am trying to make a pixel shader that achieves an effect similar to this video: http://www.youtube.com/watch?v=f1uZvurrhig&feature=related

    My basic idea is:

    1. Render the scene to a temp render target.
    2. Render the previously rendered image, with a slight fade, onto another temp render target.
    3. Draw the current scene on top of that.
    4. Draw the results onto a render target that persists between draws.
    5. Draw the results onto the screen.

    But I am having problems with the fading portion. If I have my pixel shader return a color with its A component set to 0, shouldn't that basically amount to drawing nothing? (Assuming that the sprite batch blend mode is set to AlphaBlend.) To test this I have my pixel shader return a transparent red color. Instead of nothing being drawn, it draws a partially transparent red box. I hope that my question makes sense, but if it doesn't, please ask me to clarify.

    Here is the drawing code:

        public override void Draw(GameTime gameTime)
        {
            GraphicsDevice.SamplerStates[1] = SamplerState.PointWrap;

            drawImageOnClearedRenderTarget(presentationTarget, tempRenderTarget, fadeEffect);
            drawImageOnRenderTarget(sceneRenderTarget, tempRenderTarget);
            drawImageOnClearedRenderTarget(tempRenderTarget, presentationTarget);

            GraphicsDevice.SetRenderTarget(null);
            drawImage(backgroundTexture);
            drawImage(presentationTarget);

            base.Draw(gameTime);
        }

        private void drawImage(Texture2D image, Effect effect = null)
        {
            spriteBatch.Begin(0, BlendState.AlphaBlend, SamplerState.PointWrap, null, null, effect);
            spriteBatch.Draw(image, new Rectangle(0, 0, width, height), Color.White);
            spriteBatch.End();
        }

        private void drawImageOnRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            drawImage(image, effect);
        }

        private void drawImageOnClearedRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            GraphicsDevice.Clear(Color.Transparent);
            drawImage(image, effect);
        }

    Here is the fade pixel shader:

        sampler TextureSampler : register(s0);

        float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 c = 0;
            c = tex2D(TextureSampler, texCoord);
            //c.a = clamp(c.a - 0.05, 0, 1);
            c.r = 1;
            c.g = 0;
            c.b = 0;
            c.a = 0;
            return c;
        }

        technique Fade
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }
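
    A likely explanation, offered as a hedged aside rather than as part of the original post: XNA 4.0's BlendState.AlphaBlend assumes premultiplied alpha, meaning the blend computes result = source.rgb + dest.rgb * (1 - source.a). A color of (1, 0, 0, 0) therefore still adds full red to the destination even though its alpha is zero. Under that assumption, a fade shader should scale all four channels together, along these lines (TextureSampler as declared in the shader above):

        // Minimal sketch, assuming XNA 4.0's premultiplied-alpha AlphaBlend state.
        // Scaling rgb and a together makes the accumulated image decay each pass.
        float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 c = tex2D(TextureSampler, texCoord);
            return c * 0.95; // fade color and alpha together, not alpha alone
        }

    Alternatively, BlendState.NonPremultiplied makes alpha behave the way the question expects.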


  • Language Club

    - by Ben Griswold
    We started a language club at work this week. Thus far, we have a collective interest in a number of languages: Python, Ruby, F#, Erlang, Objective-C, Scala, Clojure, Haskell and Go. There are more, but these 9 received the most votes. During the first few meetings we are going to determine which language we should tackle first. To help make our selection, each member will provide a quick overview of their favored language by answering the following set of questions:

    1. Why are you interested in learning "your" language(s)? (There's lots of work, I'm an MS shill, it's hip and fun, etc.)
    2. What type of language is it? (OO, dynamic, functional, procedural, declarative, etc.)
    3. What types of problems is your language best suited to solve? (Algorithms over big data, rapid application development, modeling, merely academic, etc.)
    4. Can you provide examples of where/how it is being used? If it isn't being used, why not? (Erlang was invented at Ericsson to provide an extremely fault-tolerant, concurrent system.)
    5. Quick history: who created/sponsored the language? When was it created? Is it currently active?
    6. Does the language have hardware support (an attempt was made at one point to create processor instruction sets specific to Prolog), or can it run as an interpreted language inside another language (like Ruby on the JVM)?
    7. Are there facilities for programs written in this language to communicate with other languages? How does this affect its utility?
    8. Does the language have IDE tool support? (Think Eclipse or Visual Studio.)
    9. How well is the language supported in terms of books, community and documentation?
    10. What's the number one thing that differentiates the language from others? (i.e., why is it cool?)
    11. How applicable is the language to us as consultants? What would the impact be of using the language in terms of cost, maintainability, personnel costs, etc.?

    This should provide a decent introduction into nearly a dozen languages and give us enough context to decide which single language deserves our undivided attention for the weeks to come. Stay tuned for the winner…


  • How to Disable the New Geolocation Feature in Google Chrome

    - by Asian Angel
    The latest release of Google Chrome has geolocation enabled by default, and if you are worried about privacy or just don't want websites to prompt you for your location, we've got the quick details on how to turn it off.

    Readers should note that the new geolocation feature doesn't give out your details by default, so don't panic. It's also only active, at the time of this writing, in the Dev channel builds of Chrome—so if you are using the regular stable build this feature won't arrive for a while anyway. Note: If you're a Firefox user, be sure to check out our guide to disabling geolocation in Firefox 3.x.

    What's this Geolocation Feature About?

    Geolocation is a way for your browser to tell a website about your physical location, so you can get results tailored to where you actually are—for example, if you visit Google Maps it can ask you for your location to give you an accurate picture of where you are. To use this feature in Google Maps, you would click on the small white icon to activate it. As soon as you have clicked on the small white icon, a thin green toolbar will appear at the top of the webpage, asking you to Allow or Deny.

    How to Turn Chrome's Geolocation Off

    If you want to turn geolocation off you will need to open the "Chrome Options Window", navigate to the third tab, and click on the "Content settings…" button. When the "Content Settings Window" opens, go to the "Location Tab" and select "Do not allow any site to track my physical location". Once that is done, close out of the "Content Settings" and "Chrome Options" windows.

    When you go back to Google Maps and try using the small white icon again, this is the message that you will see at the top of the page. Now that is much better! If you are unhappy with geolocation being enabled by default in the latest Dev Channel release then this will help get the problem sorted out nicely.


  • Installing Forms and Reports on a development system

    - by Duncan Mills
    By popular demand I've resurrected / updated one of the old blog postings from Jan Carlin's Blog on GroundSide here. A recent (lengthy) post on the Forms forums chronicles the problems some of you have had installing F&R on a development machine. See the link in the headline of this post for the main one. When installing, here are some points to bear in mind:

    - Download and install WebLogic Server first: http://www.oracle.com/technology/software/products/middleware/index.html. Find the Forms and Reports (and Disco and Portal) zip files there. Download them to the desktop (or some other temporary directory of your choosing).
    - Unzip both of the two zip files into the same new directory (maybe called 'stage') and check that you have 4 directories in the stage dir when you are finished unzipping: 'Disk1', 'Disk2', 'Disk3' and 'Disk4'. These folders are specified in the zip file structure and must be preserved for the setup executable to work.
    - If you use WinZip and have a right-click menu option that says "Extract to here", use that by right click-dragging the zip file onto the newly created directory. Don't use the "Extract to folder %HOME%\Desktop\ofm_pfrd...disk_1of2" option. That will get you into the trouble that was reported early in this thread.
    - Free up as much memory as you can. Stop services and background processes and virus scanners and databases (you don't need a DB to install Forms) and other things lurking about on your machine. You can restart them when the install is done. Around 1.5 GB free real memory should do it. If it doesn't, free up more if you can. Don't change the swap space unless you know what you are doing; let Windows handle it. A 1 GB machine will likely not be enough. You will likely need at least 2 GB of RAM.
    - Start the install with setup.exe from the 'Disk1' directory.
    - Choose the Install and Configure option unless you have a good reason not to.
    - Choose a unique instance name even if you deinstalled and removed the last install. I suggest using 'asinst_20090722_1' (today's date in ISO format with a running incremented number at the end if you install more than two times on a particular day).
    - Unselect Portal and Discoverer and select the Builders you want.
    - Unselect WebCache.
    - Unselect OHS.
    - Unselect the single sign-on option.
    - Check for any failures and choose the retry option if any occur. If that doesn't fix the problem, call Oracle Customer Support.


  • top tweets SOA Partner Community – June 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - Oracle SOA: Learn how Business Rules are used in Oracle SOA Suite. New free self-study course - Oracle Univ. #soa #oraclesoa http://pub.vitrue.com/ll9B
    - OPITZ CONSULTING: How do #BPM and #SOA fit together? Watch the 100-Seconds-Video-Lesson by @Rolfbaer - http://ow.ly/luSjK @soacommunity
    - Andrejus Baranovskis: Customized BPM 11g PS6 Workspace Application http://fb.me/2ukaSBXKs
    - Mark Nelson: Case Management Samples Released http://wp.me/pgVeO-Lv
    - Mark Nelson: Instance Patching Demo for BPM 11.1.1.7 http://wp.me/pgVeO-Lx
    - Simone Geib: Antony Reynolds: Target Verification #oraclesoa https://blogs.oracle.com/reynolds/
    - OPITZ CONSULTING: "It's all about Integration - Developing with Oracle #Cloud Services" @t_winterberg files: http://ow.ly/ljtEY #cloudworld @soacommunity
    - Arun Pareek: Functional Testing Business Processes In Oracle BPM Suite 11g http://wp.me/pkPu1-pc via @arrunpareek
    - SOA Proactive: Want to get started with Human Workflow? Check out the introductory video on OTN, http://pub.vitrue.com/enIL
    - C2B2 Consulting: Free tech workshop, London, 6th of Jun: Diagnosing Performance & Scalability Problems in Oracle SOA Suite http://www.c2b2.co.uk/oracle_fusion_middleware_performance_seminar @soacommunity
    - Oracle BPM: Must-have technologies for delivering effective #CX: #BPM #Social #Mobile > #OracleBPM Whitepaper http://pub.vitrue.com/6pF6
    - OracleBlogs: Introduction to Web Forms - Basic Tutorial http://ow.ly/2wQLTE
    - OTNArchBeat: Complete State of SOA podcast now available w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/PZFw
    - Ronald Luttikhuizen: VENNSTER Blog | Article published - Fault Handling and Prevention - Part 2 | http://blog.vennster.nl/2013/05/article-published-fault-handling-and.html
    - Mark Nelson: Getting to know Maven http://wp.me/pgVeO-Lk
    - gschmutz: Cool! Our 2nd article has just been published: "Fault Handling and Prevention for Services in Oracle Service Bus" http://pub.vitrue.com/jMOy
    - David Shaffer: Interesting SOA Development and Delivery post on A-Team Redstack site - http://bit.ly/18oqrAI. Would be great to get others to contribute!
    - Mark Nelson: BPM PS6 video showing process lifecycle in more detail (30min) http://wp.me/pgVeO-Ko
    - SOA Proactive: Webcast: 'Introduction and Troubleshooting of the SOA 11g Database Adapter', May 9th. Register now at http://pub.vitrue.com/8In7
    - Mark Nelson: SOA Development and Delivery http://wp.me/pgVeO-Kd
    - Oracle BPM: Manoj Das, VP Product Management, talks about the new #OracleBPM release #BPM #processmanagement http://pub.vitrue.com/FV3R
    - OTNArchBeat: Podcast: The State of SOA w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/OK2M
    - gschmutz: New article series on Industrial SOA started on OTN and Service Technology Magazine: http://guidoschmutz.wordpress.com/2013/04/22/first-two-chapters-of-industrial-soa-articles-series-have-been-published-both-on-otn-and-service-technology-magazine/ #industrialSOA
    - Danilo Schmiedel: Article series #industrialSOA published on OTN and Service Technology Magazine http://inside-bpm-and-soa.blogspot.de/2013/04/industrial-soa_22.html @soacommunity @OC_WIRE

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


  • SSIS Reporting Pack update

    - by jamiet
    It's been a while since I last posted anything in regard to SSIS Reporting Pack; the most recent release was on 27th May 2012, so here is a short update.

    There is still lots of work to do on SSIS Reporting Pack: lots more features to add, lots of performance work to be done, and a few bug fixes too. I have also been (fairly) hard at work on a framework to be used in conjunction with SSIS 2012 that I refer to as the Restart Framework (currently residing at http://ssisrestartframework.codeplex.com/). There is still much work to be done on the Restart Framework (not least some useful documentation on how to use it), which is why I haven't mentioned it publicly before now, although I am actively checking in changes. One thing I am considering is amalgamating the two projects into one; this would mean I could build a suite of reports that work both against the SSIS Catalog (what you currently know as "SSIS Reporting Pack") and also against this Restart Framework thing. No decision has been made as yet though.

    There have been a number of bug reports and feature suggestions for SSIS Reporting Pack added to the Issue Tracker. Thank you to everyone that has submitted something; rest assured I am not going to ignore them forever. My time is at a premium right now unfortunately due to … well … life … so working on these items isn't near the top of my priority list.

    Lastly, I am actively using SSIS Reporting Pack in a production environment right now and I'm happy to report that it is proving to be very useful. One of the reports that I have put a lot of time into is execution executable duration.rdl, and it's proving very adept at easily identifying bottlenecks in our SSIS 2012 executions. The report allows you to browse through the hierarchy of executables in each execution, and each bar represents the duration of an executable relative to all the other executables; longer bars are a good indication of where problems might lie. The colour of the bar indicates whether it was successful or not (green = success). Hovering over a bar brings up a tooltip showing more information about that executable. Clicking on a bar allows you to compare this particular instance of the executable against other executions.

    Please do let me know if you are using SSIS Reporting Pack. I would like to hear any anecdotes you might have, good or bad.

    @Jamiet


  • XNA extending an existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data, and therefore we need a custom type containing the Song and some additional data:

        // Project AudioGameLibrary
        namespace AudioGameLibrary
        {
            public class GameTrack
            {
                public Song Song;
                public string Extra;
            }
        }

    We've added a Content Pipeline extension:

        // Project GameTrackProcessor
        namespace GameTrackProcessor
        {
            [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")]
            public class GameTrackContent
            {
                public SongContent SongContent;
                public string Extra;
            }

            [ContentProcessor(DisplayName = "GameTrack Processor")]
            public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent>
            {
                public GameTrackProcessor() {}

                public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
                {
                    return new GameTrackContent()
                    {
                        SongContent = new SongProcessor().Process(input, context),
                        Extra = "Some extra data" // Here we can do our processing on 'input'
                    };
                }
            }
        }

    Both the library and the pipeline extension are added to the game solution, and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems, however:

        // Project AudioGame
        protected override void LoadContent()
        {
            AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack");
            MediaPlayer.Play(gameTrack.Song);
        }

    The error message:

        Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song
        but trying to load as AudioGameLibrary.GameTrack.

    AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references?

    EDIT: Selecting the correct content processor helped; it loads the audio file correctly. However, when I try to process some data, e.g.:

        public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
        {
            int count = input.Data.Count; // With this commented out it works fine
            return new GameTrackContent()
            {
                SongContent = new SongProcessor().Process(input, context)
            };
        }

    it crashes with the following error:

        Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem in
        'C:\Users\Maarten\Documents\Visual Studio 2010\Projects\AudioGame\DebugPipeline\bin\Debug\DebugPipeline.exe'.
        Additional Information: A call to PInvoke function
        'Microsoft.Xna.Framework.Content.Pipeline!Microsoft.Xna.Framework.Content.Pipeline.UnsafeNativeMethods+AudioHelper::OpenAudioFile'
        has unbalanced the stack. This is likely because the managed PInvoke signature does not match
        the unmanaged target signature. Check that the calling convention and parameters of the
        PInvoke signature match the target unmanaged signature.

    Information from the logger right before the crash:

        Using "BuildContent" task from assembly "Microsoft.Xna.Framework.Content.Pipeline,
        Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553".
        Task "BuildContent"
        Building gametrack.mp3 -> bin\x86\Debug\Content\gametrack.xnb
        Rebuilding because asset is new
        Importing gametrack.mp3 with Microsoft.Xna.Framework.Content.Pipeline.Mp3Importer

    I'm experiencing exactly this: http://forums.create.msdn.com/forums/t/75996.aspx
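
    A hedged side note for anyone who hits the first error above: the fix mentioned in the EDIT (selecting the correct content processor) can also be expressed directly in the content project file. The element layout below follows the standard XNA 4.0 .contentproj format; the asset and processor names are the ones from this post:

        <!-- Sketch of the relevant .contentproj entry; if Processor is left at the
             MP3 default (SongProcessor), the built .xnb really does contain a Song,
             which is exactly what the error message reports. -->
        <Compile Include="gametrack.mp3">
          <Name>gametrack</Name>
          <Importer>Mp3Importer</Importer>
          <Processor>GameTrackProcessor</Processor>
        </Compile>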


  • Why isn't this driver install working (sudo code)?

    - by Nick
    I have a soundcard that I'd like to use and I've been trying to install it, and being a new Ubuntu user, I get about half way through this in the Terminal and it stops cooperating with me... See the link (soundcard hyperlink), but basically what I have here:

    I do the following and it works:

        sudo apt-get install subversion
        svn co https://line6linux.svn.sourceforge.net/svnroot/line6linux

    Change to the directory:

        cd line6linux/driver/trunk

    Time to build from the source, but first make sure you have the latest build and headers:

        sudo apt-get install build-essential
        sudo apt-get install linux-headers

    Then after this point it says "must specify file to install". Not sure how to do this or what it means. Then, running make gives the following output:

        ./set_revision.sh
        ./set_revision.sh: 9: test: https://line6linux.svn.sourceforge.net/svnroot/line6linux/driver/trunk: unexpected operator
        make -C /lib/modules/3.2.0-29-generic-pae/build CONFIG_LINE6_USB=m SUBDIRS=/home/nick/line6linux/driver/trunk modules
        make[1]: Entering directory `/usr/src/linux-headers-3.2.0-29-generic-pae'
        CC [M] /home/nick/line6linux/driver/trunk/audio.o
        /home/nick/line6linux/driver/trunk/audio.c: In function ‘line6_init_audio’:
        /home/nick/line6linux/driver/trunk/audio.c:30:57: error: ‘THIS_MODULE’ undeclared (first use in this function)
        /home/nick/line6linux/driver/trunk/audio.c:30:57: note: each undeclared identifier is reported only once for each function it appears in
        make[2]: *** [/home/nick/line6linux/driver/trunk/audio.o] Error 1
        make[1]: *** [_module_/home/nick/line6linux/driver/trunk] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-29-generic-pae'
        make: *** [default] Error 2

    This is in Ubuntu 12.04.1 LTS.

    Another thing, semi-related: cut, copy, paste? It seems to be different from program to program. I was in the terminal, hit Ctrl-C, and then Ctrl-Shift-V in Firefox, and it won't paste, but in the terminal it will paste. I'm confused.

    Looks like these folks also had similar problems: http://ubuntuforums.org/showthread.php?t=1163608&page=3
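
    A hedged guess for anyone who lands here with the same build error, based on the compiler message alone: audio.c never includes the header that defines THIS_MODULE, and on 3.x kernel headers it is no longer pulled in indirectly by other includes. The usual fix is a one-line include near the top of driver/trunk/audio.c. This is an assumption about the line6linux source, not a confirmed patch; the forum thread linked above reports the same symptom.

        /* THIS_MODULE is made available via linux/module.h; older code often
         * relied on other headers dragging it in, which 3.x headers stopped doing. */
        #include <linux/module.h>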


  • The Politics of Junk Filtering

    - by mikef
    If the national postal service, such as the Royal Mail in the UK, were to go through your letters and throw away all the stuff it considered to be junk instead of delivering it to you, you might be rather pleased until you discovered that it took a too-liberal view of what was junk. Catalogs you'd asked for? Junk! Requests from charities? Who needs them! Parcels from competing carriers? Toss them away! The possibility of abuse by an agency in a monopolistic position is just too scary to tolerate. After all, the postal service could charge 'consultancy fees' to any sender who wanted to guarantee that his stuff got delivered, or it could even farm this out to other companies.

    Because Microsoft Outlook is just about the only email client used by the international business community in the West, its spam filter is the final arbiter as to what gets read. My Outlook 2007, set to the default settings, junks all the perfectly innocent email newsletters that I subscribe to. Whereas Google Mail, Yahoo, and LIVE are all pretty accurate in detecting spam, Outlook makes all sorts of silly mistakes. The documentation speaks techno-babble about 'advanced heuristics', but the result boils down to an inaccurate mess. The more that Microsoft fiddles with it, the stickier the mess. To make matters worse, it still lets through obvious spam. The filter is occasionally updated along with other automatic 'security' updates if you opt for automatic updates.

    As an editor for a popular online publication that provides a newsletter service, this is an obvious source of frustration. We follow all the best practices we know about. We ensure that it is a trivial task to opt out of receiving it. We format the newsletter to the requirements of the service providers. We follow up on, and resolve, every complaint. As a result, it gets delivered. It is galling to discover that, after all that effort, Outlook then often judges the contents to be junk on a whim, so you don't get to see it.

    A few days ago, Microsoft published the PST file format specification, under pressure from a European Union interoperability investigation by ECIS (the European Committee for Interoperable Systems). The objective was that other applications could then access existing PST files so as to migrate from existing Outlook installations to other solutions. Joaquín Almunia, the current competition commissioner, should now turn his attention to the more subtle problems of Microsoft Outlook. The junk problem seems to have come from a clumsy implementation of client-side spam filtering rather than from deliberate exploitation of a monopoly on the desktop email client for businesses, but it is a growing problem nonetheless.

    Cheers,
    Michael Francis


  • Gparted can't create partition table

    - by William
    Here's what the problem is. About a day or so ago I used the GParted live CD to create 3 NTFS primary partitions on my external 500 GB GoFlex, and one extended partition with 2 logical partitions. I had planned to install Windows 8 on the first partition, then Ubuntu and Kubuntu on the other 2.

    After I finished partitioning my drive with GParted, I booted into Windows Vista to make my bootable Windows 8 USB to install it with, and I also decided to check that all my partitions were working properly. Then I found they were, and they weren't. My 50 GB first partition I had planned to install Windows on showed up normal, and so did the 300 GB of space left in the extended partition; the rest showed up as RAW.

    So I figured alright, something went awry while making the partitions, so I booted up GParted once again. Then to my surprise GParted showed the entire drive as unallocated, and when I refreshed the list, it showed all the partitions I had made earlier, but with an exclamation mark by them all. So I figured ok, it might be a problem with the partition table, as I'd seen a similar problem in the past on a drive that was not partitioned at all, so I decided to create a new partition table and take the time out again to sit and wait. Then I got a message saying GParted could not create the partition table, followed by it showing the entire drive as formatted NTFS.

    After that I figured I'd take a break and come back in an hour, in case it was something I did. So an hour later I came back after having booted up Windows, and plugged the drive in to see if by some miracle Windows could access it. In Disk Management, when I plugged the drive in, it would freeze attempting to read the drive, as I'd seen in the past with RAW disks, yet when I unplugged it I got a glimpse of Disk Management showing a perfectly fine NTFS file system on the drive, followed by a "you must format disk K in order to use it". I was then assured the disk was RAW, as that is what had happened in the past, to be fixed by a new partition table through GParted and a 10-hour format in Windows.

    So I once again booted up GParted, only to get the message "error fsyncing/closing /dev/sdg: input/output error" followed by "error opening /dev/sdg: No such file or directory" after I refreshed, and somehow a brief view of the disk showing up as perfectly fine NTFS. I then tried to create a new partition table to wipe out all my problems and start over again. Now GParted only shows the drive in about 1 out of 10 refreshes; the rest of the time I get the directory error.

    If anybody can assist me in any way, shape or form I will be thankful.
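
    A hedged suggestion for anyone debugging something similar: the "fsyncing/closing ... input/output error" usually points at the drive or its USB bridge failing rather than at GParted itself, so it can be worth checking what the kernel reports before re-creating partition tables. The commands below are standard tools; /dev/sdg is the device name from the question:

        sudo dmesg | tail -n 30             # look for USB resets or I/O errors on sdg
        sudo parted /dev/sdg unit s print   # what the kernel currently sees on the disk
        sudo smartctl -a /dev/sdg           # SMART health, if smartmontools is installed

    If dmesg shows repeated I/O errors, no partitioning tool will get consistent results until the hardware problem (cable, enclosure or disk) is sorted out.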


  • Hey You! Stop Using the Apply Button and Just Click OK! [Geek Rants]

    - by The Geek
    As a computer geek, I often find myself helping people, and watching them change settings on their PC… and they almost always click the Apply button, and then the OK button. Why?

    Whenever you encounter a dialog box in Windows, there are the standard OK, Cancel and Apply buttons—but you don't actually have to click the Apply button first. The OK button does the same thing: it saves the settings and then closes the dialog box… saving you an extra click. Don't believe me? Try it out for yourself. Only the worst possible application won't behave that way, and you probably don't want to use that type of application to begin with.

    The only exception to this rule is a multiple-tab dialog box in a badly written application. Sometimes your settings on one tab won't stick unless you click Apply. In a well-written dialog window, you can make changes in any one of the tabs and they will carry through without having to click Apply.

    So now that you know better, you can tell us… do you always use the Apply button first? Have you ever found an instance where it behaves differently?


  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs. It seems like we spend more time on performance than just about anything else. In many cases, we use scripts or tools that point out performance bottlenecks, but we don't have any way to fix them. For example, what do you do when you need to speed up a query that is already tuned as well as possible? Or what do you do when you aren't allowed to make changes for a database supporting a purchased application?

    Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio founder/CEO and technology management team.

    Iridium uses deduplication technology to both shrink the databases and boost I/O performance. It is intriguing to see it work. It will deduplicate a live database as it is running transactions; you can watch the database get smaller while user queries are running.

    Iridium is a simple tool to use. After installing the software, you click an "Analyze" button, which will spend a minute or two on each database and estimate both your storage and performance savings. Next, you click an "Activate" button to turn on Iridium I/O for your selected databases. You don't need to reboot the operating system or restart the database during any part of the process.

    As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed. The 'revert' process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing.

    I was impressed, enjoyed playing with the software, and encourage all of you to try it out. Here is the link to the website to download Iridium for free.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Travelling MVP #3: Community event in Varna, Bulgaria

    - by DigiMortal
    The second stop on my DevReach 2012 trip was Varna. We did not have much time to hang around there, but this problem will get fixed next year if not before. Still, we had sessions there with Dimitar Georgijev, and I also had the chance to meet local techies. Next time we will have more tech and beers for sure!

    We started in the morning from Bucharest and travelled through Ruse, Razgrad and Shumen to Varna. It's about 275 km. We used a cab, a local bus and Dimitar's father's car. We had one food stop in Ruse and after that we went directly to Varna.

    Varna is a Bulgarian city on the western coast of the Black Sea. I have been there once before this trip and it's a good place to have a vacation under the sun. Also, autumn there is milder than here in Estonia (the third day of snow is going on). Bulgaria has some good beers, my favorite mankind-killer called rakia, and very good national cuisine. The food is made of fresh stuff and it is a damn good experience. [Photo captions from the original post: old bus "monument" in Razgrad; stuffed peppers, Bulgarian national cuisine; the Infra-red community having a good time and beers.]

    We gave our sessions in a study classroom at Varna Technical University. It's a little bit old-style as universities go, but everything we needed was there and we had no problems with the machinery. The sessions were the same as in Bucharest. The user group in Varna is brand new, and hopefully it will become something bigger one good day. At least I'll try to make my commits so they get on their feet quicker. As we had not much time to announce the event, there were about 15 guys listening to us, and I'm happy that it was not a hyped event, because I was still getting my first experiences with foreign audiences.

    After the sessions we took our stuff to the hotel and went to hang around with the local techies. We had a good time there and made some new friends. Next time I go to Varna I will go back as a more experienced speaker, and I plan to do one tougher and highly challenging session there. Maybe somebody from the Estonian community will join me, and then it will be a well-planned surprise attack on Varna :)


  • JavaOne Rock Star – Adam Bien

    - by Janice J. Heiss
    Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine's 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions:

    - TUT3907 - Java EE 6/7: The Lean Parts
    - CON3906 - Stress-Testing Java EE 6 Applications Without Stress
    - CON3908 - Building Serious JavaFX 2 Applications
    - CON3896 - Interactive Onstage Java EE Overengineering

    I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. "This is a very good sign," said Bien. "Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn." Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle, based on the Da Vinci Machine (JSR 292), and expected to be available for Java 8.

    Bien expressed concern about a common misconception regarding Java's supposedly mediocre productivity. "The problem is not Java," explained Bien, "but rather systems built with ancient patterns and approaches. Sometimes it really is 'Cargo Cult Programming.' Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are 'Ivory Towers' and not Java's lack of productivity."

    Bien remarked that if there is one thing he wanted Java developers to understand it is that, "Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize."

    He advised Java EE developers to, "Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: 'Why?' If there is no clear answer -- just don't do it. Most well-known best practices are outdated. Focus your efforts on the domain problem and not the technology."

    Looking ahead, Bien remarked, "I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify the deployment and operations."

    Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here.


  • Visual Studio 2010 and Target Framework Version

    - by Scott Dorman
    Almost two years ago, I wrote about a Visual Studio macro that allows you to change the Target Framework version of all projects in a solution. If you don't know, the Target Framework version is what tells the compiler which version of the .NET Framework to compile against (more information is available here) and can be set to one of the following values:

    - .NET Framework 2.0
    - .NET Framework 3.0
    - .NET Framework 3.5
    - .NET Framework 3.5 Client Profile
    - .NET Framework 4.0
    - .NET Framework 4.0 Client Profile

    This can be easily accomplished by editing the project properties. The problem with this approach is that if you need to change a lot of projects at one time it becomes rather unwieldy. One possible solution is to edit the project files by hand in a text editor and change the <TargetFrameworkVersion /> and <TargetFrameworkProfile /> properties to the correct values. For example, for the .NET Framework 4.0 Client Profile, these values would be:

        <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
        <TargetFrameworkProfile>Client</TargetFrameworkProfile>

    Again, this is not only time-consuming but can also be error-prone. The better solution is to automate this through the use of a Visual Studio macro. Since I had already created a macro to do this for Visual Studio 2008, I updated that macro to work with Visual Studio 2010 and .NET 4.0. It prompts you for the target framework version you want to set for all of the projects and then loops through each project in the solution and makes the change. If you select one of the Framework versions that support a Client Profile, it will ask if you want to use the Client Profile or the Full Profile. It is smart enough to skip project types that don't support this property and projects that are already at the correct version. This version also incorporates the changes suggested by George (in the comments).

    The macro is available on my SkyDrive account. Download it to your <UserProfile>\Documents\Visual Studio 2010\Projects\VSMacros80\MyMacros folder, open the Visual Studio Macro IDE (Alt-F11) and add it as an existing item to the "MyMacros" project. I make no guarantees or warranties on this macro. I have tested it on several solutions and projects and everything seems to work without causing any problems, but, as always, use with caution. Since it is a macro, you have the full source code available to investigate and see what it's actually doing. If you find any bugs or make any useful changes, please let me know and I'll update the macro.
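
    Since the post links to the macro rather than inlining it, here is a minimal sketch of the kind of loop it describes. This is not Scott's actual macro, just an illustration of the VS 2010 automation model it relies on; "TargetFrameworkMoniker" is the automation property name, and everything else (names, error handling) is invented for the example:

        ' Hypothetical sketch: set every project in the open solution to the
        ' .NET Framework 4.0 Client Profile via the automation model.
        Sub SetTargetFrameworkForSolution()
            For Each proj As EnvDTE.Project In DTE.Solution.Projects
                Try
                    proj.Properties.Item("TargetFrameworkMoniker").Value = _
                        ".NETFramework,Version=v4.0,Profile=Client"
                Catch ex As Exception
                    ' Skip project types that don't expose the property.
                End Try
            Next
        End Sub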


  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand.

    I'm currently building a music notation and tablature editor (in JavaScript). I've come to a point where the core parts of the program are more or less there, and all functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation. Basically, I pass the parts of the editor's state to VexFlow to build the graphical representation of the score. (A rough and stripped-down UML diagram showing the outline of the program accompanied the original post.)

    In essence, a Part has many Measures, which have many Notes, which have many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional.

    There are a few problems with my design, because my Measure class contains the majority of the entire application's view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and what order they are supposed to be created in, for both the VexFlowStaff and VexFlowNotes.

    I'm not looking for a specific answer, as you'd need a much deeper understanding of my code. Just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extra logic for their creation. But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything, so that ScoreView.render() would cascade down to each PartView, calling render for each one, and then cascade down into each MeasureView, etc.?

    Again, I just have no idea what direction to go in. The more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.
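
    One way to read the ScoreView idea in the last paragraph, sketched in plain JavaScript. Every name here is invented for illustration; none of it is taken from the project or from VexFlow's API:

        // Hypothetical sketch: a parallel view tree that mirrors Part/Measure,
        // so the model classes stop holding VexFlow objects themselves.
        class ScoreView {
          constructor(score, vexFlowFactory) {
            this.partViews = score.parts.map(
              (part) => new PartView(part, vexFlowFactory));
          }
          render(context) {
            // Cascades: ScoreView -> PartView -> MeasureView -> NoteView.
            this.partViews.forEach((partView) => partView.render(context));
          }
        }

        class PartView {
          constructor(part, vexFlowFactory) {
            this.measureViews = part.measures.map(
              (measure) => new MeasureView(measure, vexFlowFactory));
          }
          render(context) {
            this.measureViews.forEach((measureView) => measureView.render(context));
          }
        }

    The design choice this illustrates is that each view owns only its children and its own graphical objects, so the Measure model class no longer needs to direct creation order for the whole score.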


  • ACORD LOMA 2010: Building Insurance Companies in the Clouds

    - by [email protected]
    Chuck Johnston, vice president of global strategy and alliances for Oracle Insurance, participated in a featured speaking session at ACORD LOMA 2010. He provides an update on his discussions with insurers at the show and after his presentation.

    Every year I make a point of walking the show floor at the ACORD LOMA technology conference to visit with colleagues and competitors, and try to get a feel for which way the industry will move over the next 12 months. Insurers are looking for substance in cloud (computing), trying to mix business with pleasure (monetizing social networks), and expect differentiation through commodity (Software as a Service). The disconnect at this show is that most vendors are still struggling with creating a clear path from Facebook to customer intimacy, SaaS to core cost savings, and clouds to ubiquitous presence. Vendors need to find new ways to help insurers find the real value in these potentially disruptive technologies by understanding the changes coming to the insurance business and how these new technologies impact the new insurance business.

    Oracle's approach to understanding the evolving insurance industry comes from a discussion with our customers in our Insurance CIO Council, where one of our customers suggested we buy an insurance company to really understand our customers. We have decided to do the next best thing and build our own model of an insurance company, Alamere Insurance, that uses the latest technologies to transform its own business. Alamere will never issue an actual policy, but it does give us a framework to consider the impacts of changes in the insurance landscape and how Oracle technology meets the challenge or needs to evolve to help our customers be successful.

    In preparing for my talk at the conference using Alamere as my organizing theme, I found myself reading actuarial memoranda on CSO table changes and articles on underwriting theory that really made me think about my customers' problems first and foremost, and then how Oracle technology can provide answers. As much as I prefer techno-thrillers and sci-fi novels to actuarial papers for plane reading, I got very excited about the idea of putting myself back in the customer shoes I haven't worn in a decade, and really looking at how Oracle can power the Adaptive Insurance Enterprise.

    Talking to customers and industry people after the session, the idea of Alamere seemed to excite people, and I got a lot of suggestions as to what lines of business we should model and where we should focus first on technology uptake. One customer said to a colleague that Oracle's attempt to "share their pain" was unique among vendors. More about Alamere, and the Adaptive Insurance Enterprise, next time.

    Chuck Johnston is vice president of global strategy and alliances for Oracle Insurance.


  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow my mouse around on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM.) The best idea I've come up with is:

    Get the coordinates within the window with glfwGetCursorPos. The window was created with

        window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);

    and the code to get coordinates is

        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);

    Next, I use GLM's unProject to get the coordinates in "object space":

        glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
        glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
        glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);

    There are two potential problems I can already see. The viewport is fine, as the initial x, y coordinates of the lower left are indeed 0,0, and it's indeed a 1024x768 window. However, the position vector I create doesn't seem right. The z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the third dimension of the window coordinates even means (since computer screens are 2D).

    Then, I am not sure if I am using unProject correctly. Assume the View, Model and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear.

    Finally, for each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since the position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a value of 0.0 for z.

    Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.
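
    A hedged sketch of the standard approach to this situation, using the same View, Model, Projection and viewport names as the question: GLFW reports the cursor with the origin at the top-left, while glm::unProject expects the bottom-left origin, and the window z in [0, 1] is a depth-buffer value rather than a world offset. Unprojecting at both z = 0 and z = 1 turns the cursor into a ray, which can then be intersected with a plane to position the object:

        // Fragment meant for the input/render loop; assumes the matrices and
        // viewport from the question are in scope.
        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);
        float winY = 768.0f - (float)ypos;   // flip y for OpenGL's convention

        glm::vec3 nearPt = glm::unProject(glm::vec3(xpos, winY, 0.0f),
                                          View * Model, Projection, viewport);
        glm::vec3 farPt  = glm::unProject(glm::vec3(xpos, winY, 1.0f),
                                          View * Model, Projection, viewport);

        // The cursor corresponds to the ray nearPt + t * (farPt - nearPt).
        // For an object on the z = 0 plane, solve for t where the ray hits it:
        float t = -nearPt.z / (farPt.z - nearPt.z);
        glm::vec3 worldPos = nearPt + t * (farPt - nearPt);

    This also explains the observation that un[2] never changes: it is determined entirely by the constant 0.0f passed as the window-space z.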


  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently.

    "Hi Pinal,

    In our system, when we create a new database, by default it is created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them.

    1) Creating a Policy; when it is violated, the recovery model can be fixed
    2) Triggers at the server level
    3) An automated job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, then the job will list the databases and change their recovery model

    Also, we have a situation where we need a database in the Simple Recovery Model as well. How do we white-list them? Please suggest the best method."

    Indeed, an interesting email! The answer to their question, i.e., which is the best method to fit their needs (white list, default, etc.)? It will be NONE of the above. Here is the solution in one line, and also the easiest way: just go to your model database. Path in SSMS >> Databases >> System Databases >> model >> Right Click >> Properties >> Options >> Recovery Model, and select Full from the dropdown.

    Every newly created database takes its base template from the model database. If you create a custom SP in the model database, when you create a new database, it will automatically exist in that database. Any database that was created before making changes in the model database will not be affected at all.

    Creating a Policy is also a good method, and I will blog about this in a separate blog post, but looking at the current specifications of the reader, I think the model database should be modified to have the Full Recovery option. While writing this blog post, I remembered another of my blog posts where the model database log file was growing drastically even though there were no transactions: SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big.

    NOTE: Please do not touch the model database unnecessarily. It is a strict "No." If you want to create an object that you need in all the databases, then instead of creating it in the model database, I suggest that you create a new database called maintenance and create the object there.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
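
    For readers who prefer scripts to SSMS dialogs, the same change can be expressed in T-SQL. A minimal sketch of the idea described above; dbname is a placeholder:

        -- Change the template database; databases created afterwards inherit it.
        ALTER DATABASE model SET RECOVERY FULL;

        -- Verify with a new database.
        CREATE DATABASE dbname;
        SELECT name, recovery_model_desc
        FROM sys.databases
        WHERE name IN ('model', 'dbname');   -- both should report FULL

    For the white-list case in the email, the occasional database that should stay Simple can simply be altered back after creation with ALTER DATABASE ... SET RECOVERY SIMPLE.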


  • What's your advice on a potential legal suit? [closed]

    - by ohho
    I, the developer of [xxx app], received an email from Apple saying that a developer (of [yyy app]) believes I am "infringing their copyright."

    Description of Issue: "[xxx developer] copied my application (my application is [yyy]) feature by feature. Even their donation model is completely copied from my application. Their first release was significantly later than mine, which implies copying of the application rather than parallel development. I suffered significant financial losses because of their actions, in addition to promotion problems, as many people are confused with their product. My advertising was based around the idea of a 'free [yyy] application for an iPhone' and they have just taken that as a title for their application. I would appreciate it if someone takes a look at their release schedule and compares it to my releases. Additionally, please take a look at their functionality and how it point by point copies the functionality of my older releases. I am asking Apple to remove their application from the App Store, and ban them from resubmitting it. Thank you for your time! [yyy developer], the developer of the [yyy] application."

    My response was: "The code of [xxx] is written by myself, using Apple public APIs. The graphics elements are designed by myself. The user interface and app control are independently designed and different from other [similar type] apps (please judge yourself). In-App Purchase is a standard Apple iOS API. iAd is a standard Apple iOS API. I don't think features can be owned exclusively. In fact, my app comes with fewer features, as I prefer minimalist design. I don't think an idea can be owned exclusively."

    Apple responded: "Thank you for your response. Unfortunately, Apple cannot serve as arbiter for disputes among third parties. Please contact [yyy developer] directly regarding your actions. You can reach [yyy developer] through: [...]. We look forward to confirmation from both parties that this issue has been resolved. If this issue is not resolved shortly, Apple may be forced to pull your application(s) from the App Store."

    I then sent my response above to [yyy developer]. [yyy developer] then asked me "to provide (my) legal address and contact details that (his) lawyer requires to file a copyright infringement suit."

    IMO, I don't think the [yyy developer]'s claim of a "feature by feature" copy is valid. I have fewer features and a completely different user interface design. However, I don't think I can afford legal action for an app with so little financial return. So what's your advice on this? Should I just let Apple pull my app? Or is there any alternative I can consider? (The original post ended with screenshots comparing the UI of [xxx app] and [yyy app].)


  • Equal Gifts Algorithm Problem

    - by 7Aces
    Problem Link - http://opc.iarcs.org.in/index.php/problems/EQGIFTS

    It is Lavanya's birthday and several families have been invited for the birthday party. As is customary, all of them have brought gifts for Lavanya as well as her brother Nikhil. Since their friends are all of the erudite kind, everyone has brought a pair of books. Unfortunately, the gift givers did not clearly indicate which book in the pair is for Lavanya and which one is for Nikhil. Now it is up to their father to divide up these books between them. He has decided that from each of these pairs, one book will go to Lavanya and one to Nikhil. Moreover, since Nikhil is quite a keen observer of the value of gifts, the books have to be divided in such a manner that the total value of the books for Lavanya is as close as possible to the total value of the books for Nikhil. Since Lavanya and Nikhil are kids, no book that has been gifted will have a value higher than 300 Rupees...

    For the problem, I couldn't think of anything except recursion. The code I wrote is given below, but it is time-inefficient and gives TLE (Time Limit Exceeded) for 9 out of 10 test cases! What would be a better approach to the problem?

    Code -

        #include <cstdio>
        #include <climits>
        #include <algorithm>
        using namespace std;

        int n, g[150][2];

        int diff(int a, int b, int f)
        {
            ++f;
            if (f == n)
            {
                if (a > b) { return a - b; }
                else       { return b - a; }
            }
            return min(diff(a + g[f][0], b + g[f][1], f),
                       diff(a + g[f][1], b + g[f][0], f));
        }

        int main()
        {
            int i;
            scanf("%d", &n);
            for (i = 0; i < n; ++i)
            {
                scanf("%d%d", &g[i][0], &g[i][1]);
            }
            printf("%d", diff(g[0][0], g[0][1], 0));
        }

    Note - It is just a practice question, and is not part of a competition.
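
    Since the question asks for a better approach, here is one standard way to think about it, offered as a hedged sketch rather than an official solution: each pair shifts the running difference (Lavanya's total minus Nikhil's total) by plus or minus (a - b), so a subset-sum style DP over all reachable differences replaces the exponential recursion. The 150-pair and 300-rupee bounds below come from the array size in the post and the problem statement:

        // Hypothetical alternative (not the poster's code): bitset DP over every
        // reachable value of (Lavanya's total - Nikhil's total).
        #include <bitset>
        #include <cstdio>
        #include <cstdlib>

        const int OFF = 45000;   // 150 pairs * 300 rupees max skew per pair

        int main()
        {
            int n;
            std::scanf("%d", &n);
            std::bitset<2 * OFF + 1> dp;   // bit (OFF + d) set => difference d reachable
            dp[OFF] = 1;                   // difference 0 before assigning any pair
            for (int i = 0; i < n; ++i)
            {
                int a, b;
                std::scanf("%d%d", &a, &b);
                int d = std::abs(a - b);   // each pair shifts the difference by +d or -d
                dp = (dp << d) | (dp >> d);
            }
            for (int k = 0; k <= OFF; ++k) // smallest absolute difference reachable
            {
                if (dp[OFF + k] || dp[OFF - k])
                {
                    std::printf("%d\n", k);
                    return 0;
                }
            }
        }

    With at most 150 pairs and a 90001-bit bitset, the whole run is a few hundred thousand word operations, comfortably inside any time limit.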


  • Non-English Character Display in Oracle SQL Developer

    - by thatjeffsmith
    I get a variation on this question at least once a week, if not more frequently:

    "I'm from Israel, and the language on the databases is Hebrew. When I use the old and deprecated SQL*Plus (Windows rich client) I can see the Hebrew clearly; when I use the latest SQL Developer, I get gibberish."

    This question appears on the forums about every week or so as well. So what's the deal? Well, it starts with a basic misunderstanding of NLS client parameters. These should accurately reflect the language and locality setup on your LOCAL machine. DO NOT COPY what's set in the database. These parameters work together with the database so that information can be transferred back and forth correctly. Having the wrong NLS parameters locally can be bad.

    [ORACLE DOCS] Setting the NLS_LANG parameter properly is essential to proper data conversion. The character set that is specified by the NLS_LANG parameter should reflect the setting for the client operating system. Setting NLS_LANG correctly enables proper conversion from the client operating system character encoding to the database character set. When these settings are the same, Oracle Database assumes that the data being sent or received is encoded in the same character set as the database character set, so character set validation or conversion may not be performed. This can lead to corrupt data if conversions are necessary.

    OK, so what are you supposed to do? Set the font! Nine times out of ten, this preference fixes the problem with display issues. Make sure you set a font that supports the characters you're trying to display. It's as simple as that. This preference defines the font used to display characters in the editors and the data grids. If it is set to a font that doesn't have Hebrew character support, you're not going to see Hebrew in SQL Developer. A few years ago… wow, like 15 years ago, I learned that the Tahoma font is pretty Unicode-friendly. (The original post contrasts two screenshots: a bad font selection, using a font that isn't non-English friendly, and a good one showing the exact same text rendered with the Tahoma font.)

    Summary: Having problems seeing non-English text in SQL Developer? Check the font! And do not start messing with NLS parameters without talking to your DBA first.
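
    For the curious, and in keeping with the "talk to your DBA first" advice above, there are standard dictionary views for inspecting (not changing) the character-set settings involved. A minimal sketch of the checks, run from any SQL client:

        -- What the current session has negotiated with the server.
        SELECT * FROM NLS_SESSION_PARAMETERS;

        -- What the database itself is using.
        SELECT PARAMETER, VALUE
        FROM NLS_DATABASE_PARAMETERS
        WHERE PARAMETER LIKE '%CHARACTERSET%';

    On the client side, NLS_LANG has the form LANGUAGE_TERRITORY.CHARACTERSET (for example, AMERICAN_AMERICA.AL32UTF8); it should describe the client machine, which is the point the post is making.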

    Read the article

• Tomboy error while trying to sync with Ubuntu One; can anyone help?

    - by Michael Chapman
So I'm sure you've heard the song before, but after trying to sync my notes with Ubuntu One (on 10.10 AMD64) I get "Could not synchronize notes. Check the details below and try again." Of course, the problem is that there are no details, and trying again doesn't help. So I ran tomboy -debug and compared my error to anything I could find about similar problems (such as the post here), but found nothing useful. Anyway, here's my first error, which I got using Preferences > Synchronization > Ubuntu One:

    [ERROR 21:08:42.271] Synchronization failed with the following exception: String was not recognized as a valid DateTime.
    at System.DateTime.Parse (System.String s, IFormatProvider provider, DateTimeStyles styles) [0x00000] in <filename unknown>:0
    at System.DateTime.Parse (System.String s, IFormatProvider provider) [0x00000] in <filename unknown>:0
    at System.DateTime.Parse (System.String s) [0x00000] in <filename unknown>:0
    at Tomboy.WebSync.Api.NoteInfo.ParseJson (Hyena.Json.JsonObject jsonObj) [0x00000] in <filename unknown>:0
    at Tomboy.WebSync.Api.UserInfo.ParseJsonNoteArray (Hyena.Json.JsonArray jsonArray) [0x00000] in <filename unknown>:0
    at Tomboy.WebSync.Api.UserInfo.ParseJsonNotes (System.String jsonString, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
    at Tomboy.WebSync.Api.UserInfo.GetNotes (Boolean includeContent, Int32 sinceRevision, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
    at Tomboy.WebSync.WebSyncServer.GetNoteUpdatesSince (Int32 revision) [0x00000] in <filename unknown>:0
    at Tomboy.Sync.SyncManager.SynchronizationThread () [0x00000] in <filename unknown>:0

The next thing I tried was Preferences > Synchronization > Tomboy Web with the default 'http://one.ubuntu.com/notes/', and got the same error plus one more:

    [ERROR 21:12:31.949] System.ObjectDisposedException: The object was used after being disposed.
    at System.Net.HttpListener.CheckDisposed () [0x00000] in <filename unknown>:0
    at System.Net.HttpListener.EndGetContext (IAsyncResult asyncResult) [0x00000] in <filename unknown>:0
    at Tomboy.WebSync.WebSyncPreferencesWidget.<OnAuthButtonClicked>m__1 (IAsyncResult localResult) [0x00000] in <filename unknown>:0

    [ERROR 21:13:19.245] Synchronization failed with the following exception: String was not recognized as a valid DateTime.
    (followed by the same DateTime.Parse stack trace as the first error)

I have also tried removing and then re-adding my computer from my Ubuntu One account, but that did not help either.

The only other thing I have noticed is that under System > Preferences > Ubuntu One > Services, "Notes" is not listed as a service. I don't know if this is normal or not. Thanks for any help, and please let me know if anything is confusing.

    Read the article

  • Atmospheric scattering sky from space artifacts

    - by ollipekka
I am in the process of implementing atmospheric scattering for planets seen from space. I have been using Sean O'Neil's shaders from http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html as a starting point. I have pretty much the same problem related to fCameraAngle, except with the SkyFromSpace shader as opposed to the GroundFromSpace shader, as described here: http://www.gamedev.net/topic/621187-sean-oneils-atmospheric-scattering/

I get strange artifacts with the SkyFromSpace shader when not using fCameraAngle = 1 in the inner loop. What is the cause of these artifacts? The artifacts disappear when fCameraAngle is limited to 1. I also seem to lack the hue that is present in O'Neil's sandbox (http://sponeil.net/downloads.htm).

[Screenshot] Camera position X=0, Y=0, Z=500. GroundFromSpace on the left, SkyFromSpace on the right.
[Screenshot] Camera position X=500, Y=500, Z=500. GroundFromSpace on the left, SkyFromSpace on the right.

I've found that the camera angle seems to be handled very differently depending on the source. In the original shaders, the camera angle in the SkyFromSpace shader is calculated as:

    float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;

whereas in the GroundFromSpace shader the camera angle is calculated as:

    float fCameraAngle = dot(-v3Ray, v3Pos) / length(v3Pos);

However, various sources online tinker with negating the ray. Why is this?

Here is a C# Windows.Forms project that demonstrates the problem and that I've used to generate the images: https://github.com/ollipekka/AtmosphericScatteringTest/

Update: I have found out from the ScatterCPU project on O'Neil's site that the camera ray is negated when the camera is above the point being shaded, so that the scattering is calculated from the point to the camera. Changing the ray direction does indeed remove the artifacts, but introduces other problems, as illustrated here:

[Screenshot]

Furthermore, in the ScatterCPU project, O'Neil guards against situations where the optical depth for light is less than zero:

    float fLightDepth = Scale(fLightAngle, fScaleDepth);
    if (fLightDepth < float.Epsilon)
    {
        continue;
    }

As pointed out in the comments, along with these new artifacts this still leaves the question: what is wrong with the images where the camera is positioned at 500, 500, 500? It feels like the halo is focused on a completely wrong part of the planet. One would expect the light to be closer to the spot where the sun hits the planet, rather than where it changes from day to night.

The github project has been updated to reflect the changes in this update.

    Read the article

• Visual Studio 2010 plus Help Index: have your cake and eat it too

    - by Adrian Hara
Although the team's intentions might have been good, the new help system in Visual Studio 2010 is a huge step backwards (more like a cannonball-shot-kind-of-leap, really) from the one we all know (and love?) in Visual Studio 2008 and 2005 (and heck, even VS6).

Its biggest problem, from my point of view, is the total and complete lack of the Help Index feature: you know...the thing where you just go and type in what you're looking for and it filters down the list of results automatically. For me this was the number one productivity feature in the "old" help system, allowing me to find stuff very quickly.

Number two is that it's entirely web based and runs, by default, in the browser. So imagine: when you press F1, a new tab opens in your default browser pointing to the help entry. While this is wrong in many ways, it's also extremely annoying; cleaning up tabs in the browser becomes a chore, which represents a serious productivity hit.

These and many other problems were discussed extensively (and rather vocally) on Connect, but it seems MS chose to ignore them and release the new help system anyway, with the promise that more features will be added in a later release. Again, it kind of amazes me that they chose to ship a product with FEWER features than the previous one and, what's worse, missing KEY features, just so it's "standards based" and "extensible". To be honest, I couldn't care less about the help system's implementation; I just want it to be usable, and I would've thought that by now the software community, and especially MS, would've learned this lesson.

In the end, what kind of saddens me is that MS regards these basic features as ones for the "power help user". I mean, come on! a) it's not like my aunt's using Visual Studio 2010 and she represents the regular user, b) all software developers are, by definition, power users, and c) it's a freakin' help system, not rocket science! As you can tell, I'm pretty pissed. Even more so because I really feel that the VS2010 & co. release is otherwise a great one, with a lot of effort going into the various platforms and frameworks, most (if not all) of them being really, REALLY good products. And then they go and screw up the help! How lame is that?!

Anyway, it's not all gloom and doom. Luckily there is a desktop app by Robert Chandler (to whom I hereby declare eternal gratitude) that presents a UI over the new help system very close to what was there in VS2008. It still has some minor issues, but I'll take it over the browser version of the help any day. It's free, pretty quick (on my machine ;)) and nicely usable. So, if you hate the new help system (passionately) like I do, download H3Viewer now.

    Read the article
