Search Results

Search found 549 results on 22 pages for 'nathan stoddard'.

Page 9/22

  • Upgrading PS1 Light Gun [on hold]

    - by Nathan Taylor
    Is there any possible way to upgrade the retro G-Con light gun for the PS1 to work with HD TVs? I'm aware that they were designed purely for tube (CRT) TVs, but I'd be happy to know of any hardware that could convert the light so it registers against the pixels of an LCD TV. If not, is there any other light gun that works with PS1 games but has newer hardware that can interact with a higher-resolution LCD TV?

    Read the article

  • MS Powerpoint 2007

    - by Nathan
    I am using MS PowerPoint 2007 for a training presentation, with links to other files and objects on slides that can be moved around for demonstration purposes. How can I deploy the training presentation while preventing unauthorized changes from being saved?

    Read the article

  • Can I tell Chrome not to redirect mistyped URLs?

    - by Nathan Long
    If I mistype a URL, Chrome sometimes redirects me to a search. For instance, typing "example_url_i_sometimes_mistype.conm" into the location bar gets me: Your search - example_url_i_sometimes_mistype.conm - did not match any documents. Not only is this annoying, but if the mistyped URL was one on private DNS, I've now just told Google that the domain exists. (A small concern, but bad in principle.) Can I configure Chrome to just show an error and not blab to Google Search about it?

    Read the article

  • Is it possible to not trigger my normal search highlighting when using search as a movement?

    - by Nathan Long
    When I do a search in vim, I like to have my results highlighted and super-visible, so I give them a bright yellow background and a black foreground in my .vimrc:

        " When highlighting search terms, make sure text is a contrasting color
        :highlight Search guibg=yellow guifg=black

    (This is for GUI versions of vim, like MacVim or Gvim; for the command line, you'd use ctermbg and ctermfg.) But I sometimes use search as a movement, as in c/foo - "change from the cursor to the next occurrence of foo." In that case, I don't want all the occurrences of foo to be highlighted. Can I turn off highlighting in cases where search is used as a movement?
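
    For now, my stopgap (just a sketch, not a real answer to the question) is a mapping that dismisses the leftover highlights on demand, since :nohlsearch clears them without disabling 'hlsearch' itself:

        " Sketch: quickly clear highlights left over after a c/foo motion.
        " This doesn't prevent the highlight, it just dismisses it on demand.
        nnoremap <silent> <leader>h :nohlsearch<CR>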

    Read the article

  • AD reset user passwords for a security group

    - by Nathan C
    I'm not quite sure if this is possible or not, but I need to force a certain security group's users to have their passwords expire so they'll be forced to change them at next login. The reason for this is that I applied a FGPP (fine-grained password policy) to this particular group in order to enforce strong passwords. Many users have really weak passwords, and they won't be changed unless the users are forced to. Is there a way to do this without resetting everyone to a single password?
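
    A hedged PowerShell sketch of what I'm imagining (the group name is a placeholder): flag every user in the group to change their password at next logon, using the ActiveDirectory module:

        # Expire the password of every user in the target security group.
        # "StrongPwdUsers" is a hypothetical group name - substitute your own.
        Import-Module ActiveDirectory
        Get-ADGroupMember -Identity "StrongPwdUsers" -Recursive |
            Where-Object { $_.objectClass -eq 'user' } |
            ForEach-Object { Set-ADUser -Identity $_ -ChangePasswordAtLogon $true }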

    Read the article

  • Magical moving desktop icons

    - by Nathan Taylor
    I have encountered a very strange behavior in Windows 7 that I cannot seem to identify and have never seen or heard of on any system configuration. Whenever I move my mouse to the left-most edge of my primary display (centered in a 3-display setup), my desktop icons magically move away from the cursor (up or down and to the right). It only happens when my desktop has focus and the mouse is positioned on the left, top, or bottom edge of the main display. Moving the mouse all the way to the right edge of my right secondary display causes the desktop icons to snap back into their correct positions. Ridiculous video of the issue. My setup is 3 displays on two display adapters. The main display is running at 2560x1600, connected to the machine via a USB-powered DVI-D to DisplayPort adapter and driven by an NVIDIA NVS 3100M video card. The secondary displays are running at 1440x900 and 1200x1920 and are driven by integrated Intel HD Graphics (mobile). It seems like some kind of panning behavior, but it's obviously not working as expected. I have updated all of my drivers, but no change. It's probably worth noting that the desktop icons are set to auto-arrange.

    Read the article

  • Converting an external hard drive to internal

    - by Nathan DeWitt
    I have a WD Elements 1 TB external drive. I'm in a pinch and I need an internal SATA drive. How do I find out if the drive in here is actually a SATA drive? Edit: The WD Elements is a really nice external drive. I gently pried open the top and slid out a 1 TB Caviar Green hard drive. There were four black rubber brackets that slid off easily. One screw removed the SATA-to-USB adapter board, and then I slid the drive into my computer.

    Read the article

  • jQuery Validate - require at least one field in a group to be filled

    - by Nathan Long
    I'm using the excellent jQuery Validate Plugin to validate some forms. On one form, I need to ensure that the user fills in at least one of a group of fields. I think I've got a pretty good solution, and wanted to share it. Please suggest any improvements you can think of. Finding no built-in way to do this, I searched and found Rebecca Murphey's custom validation method, which was very helpful. I improved it in three ways:

    1. To let you pass in a selector for the group of fields
    2. To let you specify how many of that group must be filled for validation to pass
    3. To show all inputs in the group as passing validation as soon as one of them passes validation

    So you can say "at least X inputs that match selector Y must be filled." The end result is a rule like this:

        partnumber: {
            require_from_group: [2, ".productinfo"]
        }
        // The partnumber input will validate if
        // at least 2 `.productinfo` inputs are filled

    For best results, put this rule AFTER any formatting rules for that field (like "must contain only numbers", etc). This way, if the user gets an error from this rule and starts filling out one of the fields, they will get immediate feedback about the formatting required without having to fill another field first. Item #3 assumes that you're adding a class of .checked to your error messages upon successful validation. You can do this as follows, as demonstrated here:

        success: function(label) {
            label.html(" ").addClass("checked");
        }

    As in the demo linked above, I use CSS to give each span.error an X image as its background, unless it has the class .checked, in which case it gets a check mark image. Here's my code so far:

        jQuery.validator.addMethod("require_from_group", function(value, element, options) {
            // From the options array, find out what selector matches
            // our group of inputs and how many of them should be filled.
            var numberRequired = options[0];
            var selector = options[1];

            var commonParent = $(element).parents('form');
            var numberFilled = 0;
            commonParent.find(selector).each(function() {
                // Look through fields matching our selector and total up
                // how many of them have been filled
                if ($(this).val()) {
                    numberFilled++;
                }
            });

            if (numberFilled >= numberRequired) {
                // This part is a bit of a hack - we make these
                // fields look like they've passed validation by
                // hiding their error messages, etc. Strictly speaking,
                // they haven't been re-validated, though, so it's possible
                // that we're hiding another validation problem. But there's
                // no way (that I know of) to trigger actual re-validation,
                // and in any case, any other errors will pop back up when
                // the user tries to submit the form.
                // If anyone knows a way to re-validate, please comment.
                //
                // For inputs matching our selector, remove the error class
                // from their text.
                commonParent.find(selector).removeClass('error');

                // Also look for inserted error messages and mark them
                // with class 'checked'
                var remainingErrors = commonParent.find(selector)
                    .next('label.error').not('.checked');
                remainingErrors.text("").addClass('checked');

                // Tell the Validate plugin that this test passed
                return true;
            }
            // The {0} in the next line is the 0th item in the options array
        }, jQuery.format("Please fill out at least {0} of these fields."));

    Questions? Comments?

    Read the article

  • Entity Framework with MySQL - Timeout Expired while Generating Model

    - by Nathan Taylor
    I've constructed a database in MySQL and I am attempting to map it out with Entity Framework, but I start running into "GenerateSSDLException"s whenever I try to add more than about 20 tables to the EF context:

        An exception of type 'Microsoft.Data.Entity.Design.VisualStudio.ModelWizard.Engine.ModelBuilderEngine+GenerateSSDLException' occurred while attempting to update from the database. The exception message is: 'An error occurred while executing the command definition. See the inner exception for details.'
        Fatal error encountered during command execution. Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    There's nothing special about the affected tables, and it's never the same table(s); it's just that after a certain (unspecific) number of tables have been added, the context can no longer be updated without the "Timeout expired" error. Sometimes there's only one table left over, and sometimes there are three; the results are pretty unpredictable. Furthermore, the variance in the number of tables that can be added before the error suggests to me that the problem lies in the size of the query being generated to update the context, which includes both the existing table definitions and the new tables being added. Essentially, the SQL query is getting too large and failing to execute for some reason. If I generate the model with EdmGen2 it works without any errors, but the generated EDMX file cannot be updated within Visual Studio without producing the aforementioned exception. In all likelihood the source of this problem lies in the tool within Visual Studio, given that EdmGen2 works fine, but I'm hoping others can offer advice on how to approach this issue, because it seems like I'm not the only person experiencing it. One suggestion a colleague offered was maintaining two separate EDMX files with some table crossover, but that seems like a pretty ugly fix in my opinion. I suppose this is what I get for trying to use "new technology". :(
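
    One avenue I intend to check (an assumption on my part, not a confirmed fix) is the command timeout the designer inherits from the connection string: MySQL Connector/NET supports a "default command timeout" option, in seconds, so raising it might let the oversized update query finish. A sketch, with placeholder credentials:

        <!-- App.config sketch: server/database/credentials are placeholders.
             "default command timeout" is a Connector/NET connection option. -->
        <connectionStrings>
          <add name="MyDb"
               connectionString="server=localhost;database=mydb;uid=user;pwd=pass;default command timeout=300"
               providerName="MySql.Data.MySqlClient" />
        </connectionStrings>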

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message:

        sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.

    Ok, I switched to Unicode strings. Then I started getting the message:

        sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós'

    when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós'. Note: my console was set to display in 'latin_1', as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 without mangling it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update.

    Summary of answers

    Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like:

        str.decode('source_encoding').encode('desired_encoding')

    or if the str is already in unicode:

        str.encode('desired_encoding')

    For sqlite I didn't actually want to encode it again; I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python:

    1. The encoding of the string you want to work with, and the encoding you want to get it to.
    2. The system encoding.
    3. The console encoding.
    4. The encoding of the source file.

    Elaboration:

    (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on.

    (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well, here's the file I added:

        # sitecustomize.py
        # this file can be anywhere in your Python path,
        # but it usually goes in ${pythondir}/lib/site-packages/
        import sys
        sys.setdefaultencoding('utf_8')

    This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding.

    (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working.

    (4) If you want to have some test strings, e.g. test_str = "ó", in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file:

        # -*- coding: utf_8 -*-

    If you don't have this information, python attempts to parse your code as ascii by default, and so:

        SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

    Once your program is working correctly, or if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function, pasted at the bottom of this gigantic mess, that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run:

        '?' = original char <type 'str'> repr(char)='\xf3'
        '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data
        'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
        '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Now I will change the system and console encoding to latin_1, and I get this output for the same input:

        'ó' = original char <type 'str'> repr(char)='\xf3'
        'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
        'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
        '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8:

        '?' = original char <type 'str'> repr(char)='\xf3'
        '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
        '?' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
        '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Here everything still works the same as last time, but the console can't display the output correctly. Etc. The function below also displays more information than this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert; I put this together to help my own understanding. I kept building on it when I should probably have started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes; they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows.

        #!/usr/bin/env python
        # -*- coding: utf_8 -*-

        import os
        import sys

        def encodingDemo(str):
            validStrings = ()
            try:
                print "str =", str, "{0} repr(str) = {1}".format(type(str), repr(str))
                validStrings += ((str, ""),)
            except UnicodeEncodeError as uee:
                print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
                print uee

            try:
                x = unicode(str)
                print "unicode(str) = ", x
                validStrings += ((x, " decoded into unicode by the default system encoding"),)
            except UnicodeDecodeError as ude:
                print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string."
                print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()),
                print ude
            except UnicodeEncodeError as uee:
                print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
                print uee

            try:
                x = str.decode('latin_1')
                print "str.decode('latin_1') =", x
                validStrings += ((x, " decoded with latin_1 into unicode"),)
                try:
                    print "str.decode('latin_1').encode('utf_8') =", str.decode('latin_1').encode('utf_8')
                    validStrings += ((x, " decoded with latin_1 into unicode and encoded into utf_8"),)
                except UnicodeDecodeError as ude:
                    print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t",
                    print ude
            except UnicodeDecodeError as ude:
                print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t",
                print ude
            except UnicodeEncodeError as uee:
                print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
                print uee

            try:
                x = str.decode('utf_8')
                print "str.decode('utf_8') =", x
                validStrings += ((x, " decoded with utf_8 into unicode"),)
                try:
                    print "str.decode('utf_8').encode('latin_1') =", str.decode('utf_8').encode('latin_1')
                    validStrings += ((x, " decoded with utf_8 into unicode and encoded into latin_1"),)
                except UnicodeDecodeError as ude:
                    print "str.decode('utf_8').encode('latin_1') didn't work. The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t",
                    print ude
            except UnicodeDecodeError as ude:
                print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t",
                print ude
            except UnicodeEncodeError as uee:
                print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", uee

            print
            print "Printing information about each character in the original string."
            for char in str:
                try:
                    print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char))
                except UnicodeDecodeError as ude:
                    print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee)

                try:
                    x = unicode(char)
                    print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x))
                except UnicodeDecodeError as ude:
                    print "\t'?' = unicode(char) ERROR: {0}".format(ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)

                try:
                    x = char.decode('latin_1')
                    print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x))
                except UnicodeDecodeError as ude:
                    print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)

                try:
                    x = char.decode('utf_8')
                    print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x))
                except UnicodeDecodeError as ude:
                    print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)

                print

        x = 'ó'
        encodingDemo(x)

    Much thanks for the answers below and especially to @John Machin for answering so thoroughly.

    Read the article

  • C# Printing and Zebra Printer

    - by Nathan
    I wrote a library which creates a bitmap image from some user input. This bitmap is then printed on a Zebra printer. The problem I am running into is that everything on the image printed by the Zebra printer is very faint and blurry, but if I print the same bitmap on a laser printer it looks perfectly normal. Has anyone run into this before, and if so, how did you fix it? I have tried nearly everything I can think of, printer-settings-wise.
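
    One theory I'm exploring (an assumption, since it may not match my rendering code): Zebra thermal printers are effectively 1-bit devices, so anti-aliased grey pixels come out faint or dithered. A sketch that renders the bitmap without smoothing, so every pixel is pure black or white (the font, text, and dimensions are placeholder values):

        // Hedged sketch: draw the label with anti-aliasing disabled so the
        // thermal printer gets hard black/white pixels.
        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Text;

        var bmp = new Bitmap(800, 600);
        using (var g = Graphics.FromImage(bmp))
        {
            g.Clear(Color.White);
            g.SmoothingMode = SmoothingMode.None;                             // no edge smoothing
            g.InterpolationMode = InterpolationMode.NearestNeighbor;          // no blur when scaling
            g.TextRenderingHint = TextRenderingHint.SingleBitPerPixelGridFit; // crisp 1-bit text
            using (var labelFont = new Font("Arial", 12))
            {
                g.DrawString("SAMPLE LABEL", labelFont, Brushes.Black, 10, 10);
            }
        }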

    Read the article

  • F# Powerpack's Metadata doesn't recognize FSharp.Core as an F# library

    - by Nathan Sanders
    Here's my test code to isolate the problem:

        open Microsoft.FSharp.Metadata

        [<EntryPoint>]
        let main args =
            let core = FSharpAssembly.FromFile @"C:\Program Files\FSharp-2.0.0.0\bin\FSharp.Core.dll"
            let core2 = FSharpAssembly.FSharpLibrary
            let core3 =
                System.AppDomain.CurrentDomain.GetAssemblies()
                |> Seq.find (fun a -> a.FullName.Contains "Core")
                |> FSharpAssembly.FromAssembly
            core.Entities |> Seq.iter (printfn "%A")
            0

    All three lets should give me the same FSharpAssembly. Instead, all three throw an exception that FSharp.Core is not an F# assembly (details below, re-formatted for readability). Two more clues:

    - Using the core3 method, I get the same error for the test F# assembly itself.
    - I don't get the error at FSI after doing #r @"C:\Program Files...\FSharp.Powerpack.Metadata.dll".

    Any ideas? I'm using Visual Studio 2008, F# 2.0 and the F# PowerPack 2.0.0.0 (May 20, 2010) release on an oldish XP VM; I think it's updated to SP3, though. (I got the error this morning with PowerPack 1.9.9.9, so I upgraded to 2.0.0.0, thinking that if 1.9.9.9 didn't recognize F# 2.0.0.0's assemblies, maybe bugfixes in PowerPack 2.0.0.0 would help.)

        Unhandled Exception: System.TypeInitializationException: The type initializer
        for 'Microsoft.FSharp.Metadata.AssemblyLoader' threw an exception.
        ---> System.TypeInitializationException: The type initializer for
        '<StartupCode$FSharp-PowerPack-Metadata>.$Metadata' threw an exception.
        ---> System.ArgumentException: could not produce an FSharpAssembly object
        for the assembly 'FSharp.Core' because this is not an F# assembly
        Parameter name: name
           at Microsoft.FSharp.Metadata.AssemblyLoader.Add(String name, Assembly assembly)
           at <StartupCode$FSharp-PowerPack-Metadata>.$Metadata..cctor()
           --- End of inner exception stack trace ---
           at Microsoft.FSharp.Metadata.AssemblyLoader..cctor()
           --- End of inner exception stack trace ---
           at Microsoft.FSharp.Metadata.AssemblyLoader.Get(Assembly assembly)
           at Microsoft.FSharp.Metadata.FSharpAssembly.FromAssembly(Assembly assembly)
           at Program.main(String[] args) in C:\Documents an...\FSMetadataTest\Program.fs:line 11

    Read the article

  • jQuery Ajax call - Set variable value on success.

    - by Nathan
    Hey all, I have an application that I am writing that modifies data on a cached object on the server. The modifications are performed through an ajax call that basically updates properties of that object. When the user is done working, a basic 'Save Changes' button allows them to save the data and flush the cached object. To protect the user, I want to warn them if they try to navigate away from the page while modifications to the server object have not been saved. So, I created a web service method called IsInitialized that returns true or false based on whether or not changes have been saved. If they have not been saved, I want to prompt the user and give them a chance to cancel their navigation request. Here's my problem - although I have the calls working correctly, I can't seem to get the ajax success callback to set the variable value. Here's the code I have now:

        //// Catches the users to keep them from navigating off the page w/o saved changes...
        window.onbeforeunload = CheckSaveStatus;
        var IsInitialized;

        function CheckSaveStatus() {
            var temp = $.ajax({
                type: "POST",
                url: "URL.asmx/CheckIfInstanceIsInitilized",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(result) {
                    IsInitialized = result.d;
                },
                error: function(xmlHttpRequest, status, err) {
                    alert(xmlHttpRequest.statusText + " " + xmlHttpRequest.status +
                          " : " + xmlHttpRequest.responseText);
                }
            });

            if (IsInitialized) {
                return "You currently have unprocessed changes for this Simulation.";
            }
        }

    I feel that I might be trying to use the success callback in an inappropriate manner. How do I set a javascript variable in the success callback so that I can decide whether or not the user should be prompted with the unsaved-changes message? As was just pointed out, I am making an asynchronous call, which means the rest of the code runs before my method returns. Is there a way to use that ajax call but still catch the window.onunload event (without making synchronous ajax)?
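
    One approach I'm considering, sketched below with hypothetical endpoint names, is to sidestep the race entirely: mirror the saved/unsaved state in a client-side flag that each modifying call updates, so onbeforeunload can check it synchronously with no server round-trip:

        // Hedged sketch: keep a client-side "dirty" flag instead of asking the
        // server at unload time. "ModifyInstance" and "SaveChanges" are
        // hypothetical endpoint names standing in for the real web methods.
        var hasUnsavedChanges = false;

        function modifyServerObject(data) {
            $.ajax({
                type: "POST",
                url: "URL.asmx/ModifyInstance",
                data: JSON.stringify(data),
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function () { hasUnsavedChanges = true; }
            });
        }

        function saveChanges() {
            $.ajax({
                type: "POST",
                url: "URL.asmx/SaveChanges",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function () { hasUnsavedChanges = false; }
            });
        }

        window.onbeforeunload = function () {
            // Synchronous check against local state - no ajax needed here.
            if (hasUnsavedChanges) {
                return "You currently have unprocessed changes for this Simulation.";
            }
        };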

    Read the article

  • Cannot Open Shared Object cygmpfr-1.dll

    - by Nathan Campos
    I'm testing CeGCC, a GCC build for cross-compiling applications to Windows CE devices. As everyone does to test a compiler, I've written a Hello World program:

        #include <stdio.h>

        int main() {
            printf("Hello, World!");
            return 0;
        }

    As I'm on Windows now (this is my other laptop), I'm using Cygwin. But when I tried to compile I got some errors, as you can see in the terminal log:

        C:\Dev\WinCE\Test> arm-mingw32ce-gcc test.c
        /opt/mingw32ce/libexec/gcc/arm-mingw32ce/4.4.0/cc1.exe: error while loading shared libraries: cygmpfr-1.dll: cannot open shared object file: No such file or directory
        C:\Dev\WinCE\Test>

    What can I do?
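
    A hedged guess at the cause: cc1.exe appears to have been linked against Cygwin's MPFR library, and cygmpfr-1.dll ships with Cygwin's libmpfr package, so installing that package (via Cygwin's setup.exe, or a helper like apt-cyg) and making sure the DLL is on PATH may resolve it. The package name below is an assumption - check what your Cygwin mirror actually offers:

        # Sketch from a Cygwin shell; "libmpfr4" is a guessed package name.
        apt-cyg install libmpfr4
        # Then confirm the DLL is now present:
        ls /usr/bin/cygmpfr*.dll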

    Read the article

  • WPF Binding with a Border

    - by Nathan
    I have a group of borders that make up a small map. Ideally I'd like to be able to bind the border's background property to a property in a custom list and when that property changes it changes the background. The tricky thing is, I have to do this in code behind. Could someone point me in the right direction? Thanks.
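
    For reference, a hedged sketch of what a code-behind binding can look like (mapCell, TileBrush, and myBorder are hypothetical names; the list item's class would need to implement INotifyPropertyChanged so the background updates when the property changes):

        // Bind Border.Background to a Brush property on a list item, from code-behind.
        // "TileBrush" is a hypothetical property name on a hypothetical item class.
        var binding = new System.Windows.Data.Binding("TileBrush")
        {
            Source = mapCell,   // the item from the custom list
            Mode = System.Windows.Data.BindingMode.OneWay
        };
        myBorder.SetBinding(System.Windows.Controls.Border.BackgroundProperty, binding);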

    Read the article

  • Revision Control For Windows CE

    - by Nathan Campos
    I have an HP Jornada 720 with Windows CE 3, called Handheld PC 2000. And as a good developer, I want to turn it into a fully-featured Scheme development environment. I already have Pocket Scheme on it, but now I need revision control for my pocket development environment. So I want to know: where can I get it?

    Read the article

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, First off, I am not an AS/400 guy - at all. So please forgive me for asking any noobish questions here. Basically, I am working on a .NET application that needs to access the AS/400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the first request against a SPROC on the AS/400, it takes ~14 seconds to get the full data set. After that initial call, any subsequent calls usually take only ~1 second to return. This performance improvement remains for ~20 minutes or so before it takes 14 seconds again. The interesting part is that if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution-plan issue, but I can only apply my SQL Server logic to the AS/400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS/400 acts this way when accessed through the iSeries Data Provider for .NET? Is there a better access method I should use? Just in case, here's the code I am using to connect to the AS/400:

        Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
        Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
        Cmd.CommandType = CommandType.StoredProcedure

        Using Conn
            Conn.Open()
            Dim Reader = Cmd.ExecuteReader()
            Using Reader
                While Reader.Read()
                    'Do Something
                End While
                Reader.Close()
            End Using
            Conn.Close()
        End Using

    Read the article

  • ELMAH - Filtering 404 Errors

    - by Nathan Taylor
    I am attempting to configure ELMAH to filter 404 errors, and I am running into difficulties with the XML-provided filter rules in my Web.config file. I followed the tutorials here and here and added an <is-type binding="BaseException" type="System.IO.FileNotFoundException" /> declaration under my <test><or>... declaration, but that completely failed. When I tested it locally I stuck a breakpoint in protected void ErrorLog_Filtering() {} of the Global.asax and found that the System.Web.HttpException that ASP.NET fires for a 404 doesn't have a base type of System.IO.FileNotFoundException; it is simply a System.Web.HttpException. Next I decided to try <regex binding="BaseException.Message" pattern="The file '/[^']+' does not exist" /> in the hope that any exception matching the pattern "The file '/foo.ext' does not exist" would get filtered, but that too had no effect. As a last resort I tried <is-type binding="BaseException" type="System.Exception" />, and even that is entirely disregarded. I'm inclined to think there's a configuration error with ELMAH, but I fail to see any. Am I missing something blatantly obvious? Here's the relevant stuff from my web.config:

        <configuration>
          <configSections>
            <sectionGroup name="elmah">
              <section name="security" requirePermission="false" type="Elmah.SecuritySectionHandler, Elmah"/>
              <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah"/>
              <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah"/>
              <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah"/>
            </sectionGroup>
          </configSections>
          <elmah>
            <errorFilter>
              <test>
                <or>
                  <equal binding="HttpStatusCode" value="404" type="Int32" />
                  <regex binding="BaseException.Message" pattern="The file '/[^']+' does not exist" />
                </or>
              </test>
            </errorFilter>
            <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data/logs/elmah" />
          </elmah>
          <system.web>
            <httpModules>
              <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
              <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah"/>
            </httpModules>
          </system.web>
          <system.webServer>
            <modules>
              <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
              <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah"/>
            </modules>
          </system.webServer>
        </configuration>

    Read the article

  • C# Winform bring control to front

    - by Nathan
    Are there any methods of bringing a control to the front other than control.BringToFront()? I have a series of labels on a user control, and when I try to bring one of them to the front it doesn't work. I have even looped through all the controls and sent them all to the back except for the one I am interested in, and it doesn't change a thing.
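
    For reference, a hedged sketch of the main alternative I know of (label1 is a placeholder): setting the child index directly on the parent's control collection, which is essentially what BringToFront() does internally - in WinForms, index 0 is the top of the z-order:

        // Move label1 to the front of its parent's z-order without BringToFront().
        label1.Parent.Controls.SetChildIndex(label1, 0);

    If neither call has any visible effect, it may be that the labels don't share the same parent container - z-order only applies among siblings, so a control inside a different panel can't be brought in front of its neighbors this way.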

    Read the article

  • Linq To Objects Auto Increment Number

    - by Nathan
    This feels like a completely basic question, but, for the life of me, I can't seem to work out an elegant solution. Basically, I am doing a LINQ query and creating a new object from it. In the new object, I want to generate an auto-incremented number so I can keep a selection order for later use (named Iter in my example). Here is my current solution, which does what I need:

        Dim query2 = From x As DictionaryEntry In MasterCalendarInstance _
                     Order By x.Key _
                     Select New With {.CalendarId = x.Key, .Iter = 0}

        For i = 0 To query2.Count - 1
            query2(i).Iter = i
        Next

    Is there a way to do this within the context of the LINQ query (so that I don't have to loop over the collection after the query)? Thanks!
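
    A hedged sketch of one possibility (assuming MasterCalendarInstance is a non-generic dictionary, hence the Cast): the indexed overload of Enumerable.Select supplies each element's position in the sequence, so the counter is produced inside the query itself, and ToList() materializes the results so Iter stays stable across enumerations:

        Dim query2 = MasterCalendarInstance.Cast(Of DictionaryEntry)() _
            .OrderBy(Function(x) x.Key) _
            .Select(Function(x, i) New With {.CalendarId = x.Key, .Iter = i}) _
            .ToList()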

    Read the article
