Search Results

Search found 17841 results on 714 pages for 'non ascii characters'.

Page 14/714 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • On Windows 7, dir or tree can't show unicode characters, even starting cmd with cmd /U

    - by Jian Lin
    On Windows 7, dir and tree can't show Unicode characters, even when cmd is started with cmd /U. I press Windows Key + R and type cmd /U so that the console might handle Unicode, but when I then run dir or tree /F, filenames containing Unicode characters are not displayed correctly. (In Windows Explorer, the same names display fine.) Is there a way to handle this? To get Unicode characters for test filenames, you can go to http://news.google.com/news?edchanged=1&ned=tw and you will be able to copy many Unicode (UTF-8) characters from there.
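
    A minimal sketch of the usual workaround (my assumption, not from the question): cmd /U mainly affects how built-in commands encode their output when it is redirected; to actually see the characters on screen you generally also need a Unicode-capable code page and a TrueType console font such as Lucida Console or Consolas, and even then only the glyphs that font covers will render.

        REM switch this console session to the UTF-8 code page
        chcp 65001

        REM then list the files; names the chosen font can render should now display
        dir
        tree /F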

    Read the article

  • SQL SERVER – PREEMPTIVE and Non-PREEMPTIVE – Wait Type – Day 19 of 28

    - by pinaldave
    In this blog post, we are going to talk about a very interesting subject. I often get questions related to the various preemptive wait types in SQL Server 2008 Books Online - what these wait types are and how they should be interpreted. To get the current wait types on your system, you can read this article and run the script: SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28.

    Before we continue, let us first look at what PREEMPTIVE and non-PREEMPTIVE waits mean in SQL Server.

    PREEMPTIVE: Simply put, this wait means non-cooperative. While SQL Server is executing a task, the operating system (OS) interrupts it, forcing SQL Server to give up execution involuntarily in favor of other, higher-priority tasks. This is not good for SQL Server, because an external process is making SQL Server yield. This kind of wait can reduce performance drastically and needs to be investigated properly.

    Non-PREEMPTIVE: In simple terms, this wait means cooperative. SQL Server manages the scheduling of its threads itself. When SQL Server, rather than the OS, manages the scheduling, it looks after its own priorities; SQL Server decides the priority and one thread yields to another thread voluntarily.

    In versions earlier than SQL Server 2005, preemptive wait types were not listed and the tasks associated with them had a status of suspended. In SQL Server 2005, preemptive wait types were still not listed, but the associated task status was marked as running. In SQL Server 2008, preemptive wait types are properly listed and their associated task status is also marked as running.

    SQL Server runs in non-preemptive mode by default, and it works fine that way. When CLR code, extended stored procedures and other external components run, they run in preemptive mode, leading to the creation of these wait types. There is a wide variety of preemptive wait types. If you see consistently high values for preemptive wait types, I strongly suggest that you look into the wait type and try to find the root cause. If you are still not sure, you can send me an email or leave a comment about it and I will do my best to help you reduce this wait type.

    Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
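
    As a rough illustration (my own query, not from the post), the preemptive waits accumulated since the last restart can be pulled from the sys.dm_os_wait_stats DMV by filtering on the wait-type name:

        -- Aggregate wait statistics for preemptive waits only
        SELECT  wait_type,
                waiting_tasks_count,
                wait_time_ms,
                max_wait_time_ms
        FROM    sys.dm_os_wait_stats
        WHERE   wait_type LIKE N'PREEMPTIVE%'
          AND   waiting_tasks_count > 0
        ORDER BY wait_time_ms DESC;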

    Read the article

  • Cannot copy non-latin characters from PDF document

    - by user17381
    Hi, I have a PDF file which contains some non-Latin European characters. If I copy some text with the highlight tool and paste it into another program (Word, Notepad), the 'special' characters do not transfer correctly (I get other odd characters in their place). I have tried copying the text from both Acrobat Reader and Foxit. Is there anything I can do to copy this text correctly? Thanks

    Read the article

  • Non-Unicode strings in VB.NET? (7 replies)

    I've been reading the MSDN documentation on the System.Char and System.String types, and they mention Unicode throughout without even mentioning non-Unicode versions. How do I get a good ol' one-byte char and a non-Unicode string in .NET? Thanks, Alain
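
    .NET strings and chars are always UTF-16 internally; there is no built-in single-byte string type. A minimal sketch of the usual workaround - converting to and from a byte[] with a single-byte encoding (shown in C#, but the same Encoding APIs are callable from VB.NET; code page 1252 is just an example, and on newer .NET runtimes legacy code pages need the CodePages encoding provider registered first):

        using System;
        using System.Text;

        class SingleByteDemo
        {
            static void Main()
            {
                // Pick a single-byte legacy encoding (Windows-1252 here, purely as an example).
                Encoding cp1252 = Encoding.GetEncoding(1252);

                string text = "Café";                           // stored as UTF-16 inside .NET
                byte[] oneBytePerChar = cp1252.GetBytes(text);  // 4 bytes, one per character

                Console.WriteLine(oneBytePerChar.Length);       // 4

                // Convert back to a System.String when you need to work with it again.
                string roundTripped = cp1252.GetString(oneBytePerChar);
                Console.WriteLine(roundTripped == text);        // True
            }
        }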

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question.

    For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is used equally by developers and domain experts, to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, in conversations between any team members, in functional specs, and so on.

    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to decide between:

    1. Writing code that reflects the Ubiquitous Language in the natural language of the domain.
    2. Translating the Ubiquitous Language to English and no longer communicating in the natural language of the domain.
    3. Defining a table that maps the Ubiquitous Language to its English translation.

    Here are some of my thoughts on these options:

    1. I have a strong aversion to mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most technical literature, design-pattern names etc. are in English as well. Therefore, in most cases there is just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway.
    2. This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly.
    3. In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English and, most importantly, write code using the English translation of the UL.

    I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    UPDATE: Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared, and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience:

    - Translating the UL makes you pay more attention to defining the UL, and even the domain itself, especially when you don't know how to translate a term and you have to start looking through dictionaries etc. This has even caused me to reconsider domain modeling decisions a few times.
    - It helps you make your knowledge of the English language more profound.
    - Obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.
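
    As a purely hypothetical illustration of option 3 (the domain, the Dutch terms and every identifier below are invented for this sketch, not taken from the question): the code carries the English translation of the UL, while comments and the translation table keep the link back to the terms the domain experts actually use.

        using System;

        // Hypothetical UL translation table (maintained together with the domain experts):
        //   "Overeenkomst"  -> Agreement
        //   "Opdrachtgever" -> Principal
        //   "Looptijd"      -> Term
        public class Principal { public string Name { get; set; } }
        public class DateRange { public DateTime Start { get; set; } public DateTime End { get; set; } }

        public class Agreement                                   // UL term: "Overeenkomst"
        {
            public Principal Principal { get; private set; }     // UL term: "Opdrachtgever"
            public DateRange Term { get; private set; }          // UL term: "Looptijd"

            public Agreement(Principal principal, DateRange term)
            {
                Principal = principal;
                Term = term;
            }
        }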

    Read the article

  • SEO non-English domain name advice

    - by Dominykas Mostauskis
    I'm starting a website that is meant for a non-English region, using an alphabet that is a bit different from that of English. The current plan is as follows: the website name and the domain name will be in the local language (not English); however, the domain name will be spelled in the English alphabet, while the website's title will be the same word(s), but spelled properly with accents. E.g.: 'www.litterat.fr' and 'Littérat'. Does the difference in character use between the domain name and the website name influence the site's SEO? Is it better, SEO-wise, to choose a name that can be spelled the same way in the English alphabet? From my experience, when searching online, the English alphabet is invariably used, no matter the language, so people will still be searching for 'litterat' (without accents and such).

    Edit - to sum up:

    Things have been said about IDNs (Internationalized Domain Names). To keep it simple, they are second-level domain names that contain language-specific characters (LSP), e.g. www.café.fr. Here you can check which top-level domains support which LSPs. Check initall's answer for more info on using LSPs in paths and queries.

    To answer my question about how and whether search engines relate keywords spelled with and without language-specific characters: Google can potentially tell that series and séries are the same keyword. However (most relevant for words that are spelled differently across languages and have different meanings, like séries), for Google to make the connection (or lack thereof) between e and é, it has to deduce two things:

    1. The language that you are searching in, i.e. the language of your query. You can specify it manually through Advanced Search, or Google guesses it - sometimes wrongly, I presume. The more keywords specific to that language you use, the higher Google's chance of guessing the language.
    2. The language of the crawled document, against which the ASCII version of the word will be compared (in this example, series). Again, check initall's answer for how to help Google understand what language your document is in.

    Once it has both, Google can tell whether or not the two spellings should be treated as the same keyword. Google has to understand that even though you're not using French-specific characters (in this example), you're searching in French. The reason I used the French word séries in this example is that it demonstrates this very well: you have it in French, and you have it in English without the accent. So if your search query is ambiguous, like our series, then unless Google has something more to go on, it will presume that there's no correlation between your search and séries in French documents. If you extend your query to series romantiques (try it), Google will understand that you're searching in French, and among your results you'll see séries as well. But this does not always work; you should test it with your own keywords first. For example, if you search for series francaises, Google will associate francaises with françaises, but it will not associate series with séries. It depends on the words.

    Note: it is worth stressing that this problem is very relevant to words that, written in plain ASCII, might have some other meaning in another language; it is less relevant to words that clearly belong to just one language.

    Tip: I've noticed that sometimes, even if my non-accented search query doesn't get associated with the properly spelled word in a document (especially if it's the title or an important keyword in the doc), the page still comes up. I followed the link, did a Ctrl-F search for my non-accented query and found nothing, then checked the meta tags in the source: the page's title was there in both accented and non-accented forms. So if you have meta tags that can be spelled with and without language-specific characters, put in both.

    Footnote: I hope this helps. If you have anything to add or correct, go ahead.

    Read the article

  • How to handle non-existent subdirectories?

    - by Question Overflow
    I have a dynamic website with friendly URLs. For example:

    - Instead of /user.php?id=123, I have /user/123
    - Instead of /index.php?category=fishes, I have /fishes

    But how do I handle non-existent subdirectories such as /about/123? Currently it gives a 200 success instead of a 404 not found error. Is there a way to deal with non-existent subdirectories in the Apache config and at the same time allow for friendly URLs, or do I have to handle this individually in each PHP script?
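
    One common approach (a sketch, assuming Apache with mod_rewrite and an .htaccess file - not necessarily how this particular site is set up) is to make the rewrite rules match only the URL shapes you actually serve, so anything else falls through to Apache's normal handling and produces a 404 on its own:

        RewriteEngine On

        # Map only the friendly URL patterns you actually support
        RewriteRule ^user/([0-9]+)$  user.php?id=$1            [L,QSA]
        RewriteRule ^fishes/?$       index.php?category=fishes [L,QSA]

        # Anything that matches no rule and no real file or directory
        # is handled by Apache itself and returns 404.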

    Read the article

  • xp_cmdshell for Non-System Admin Individuals

    There may be times when you want to allow non-System Admin logins to execute the xp_cmdshell extended stored procedure. In this article, Greg Larson shows you how to set up xp_cmdshell so that non-System Admins can use this extended stored procedure.
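
    Very roughly, the setup usually involves enabling xp_cmdshell, defining the proxy Windows account under which non-sysadmin calls run, and granting EXECUTE on the procedure. The sketch below uses made-up login, user and account names; see the article for the full procedure and the security caveats:

        -- 1. Enable xp_cmdshell (it is off by default)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'xp_cmdshell', 1;
        RECONFIGURE;

        -- 2. Create the proxy credential used when a non-sysadmin calls xp_cmdshell
        EXEC sp_xp_cmdshell_proxy_account 'MYDOMAIN\LowPrivWinAccount', 'StrongPasswordHere';

        -- 3. Give the non-sysadmin login a user in master and grant execute
        USE master;
        CREATE USER LimitedUser FOR LOGIN LimitedLogin;
        GRANT EXECUTE ON xp_cmdshell TO LimitedUser;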

    Read the article

  • Ubuntu 12.10 is slow and some programs go to a non-responding state

    - by user99631
    Ubuntu 12.10 is very slow and a lot of applications go to a non-responding state. For example, whenever I open Skype it goes to a non-responding state, then back to normal after a while - even the Software Centre does this - and the system process is eating the CPU. I don't know if compiz is the problem, but issuing the command compiz --replace restores the applications from the non-responding state. CPU: Intel Celeron D 3.4 GHz, RAM: 1 GB, VGA: Intel G45. Please help.

    Read the article

  • Unable to convert file to UTF-8

    - by antoniocs
    I am on Windows XP SP3 and I am trying to convert a file from ASCII to UTF-8. I use Notepad++ to do this: I go to Encoding → Convert to UTF-8 without BOM. I save the file, reopen it, and it is still shown as ASCII. I am using this file in a web page and I need the file to be UTF-8, because I have UTF-8 strings in it and I am seeing little squares with ? in them.

    Read the article

  • ASP.NET Web Service - how to handle special characters in strings?

    - by Vlorg
    To show this fundamental issue in .NET and the reason for this question, I have written a simple test web service with one method (EditString), and a consumer console app that calls it. They are both standard web service/console applications created via File/New Project, etc., so I won't list the whole code - just the methods in question.

    Web method:

        [WebMethod]
        public string EditString(string s, bool useSpecial)
        {
            return s + (useSpecial ? ((char)19).ToString() : "");
        }

    [You can see it simply returns the string s if useSpecial is false. If useSpecial is true, it returns s + char 19.]

    Console app:

        TestService.Service1 service = new SCTestConsumer.TestService.Service1();

        string response1 = service.EditString("hello", false);
        Console.WriteLine(response1);

        string response2 = service.EditString("hello", true); // fails!
        Console.WriteLine(response2);

    [The second response fails, because the method returns hello + a special character (ASCII code 19 for argument's sake).] The error is: There is an error in XML document (1, 287). Inner exception: "'', hexadecimal value 0x13, is an invalid character. Line 1, position 287."

    A few points worth mentioning:

    - The web method itself WORKS FINE when browsing directly to the ASMX file (e.g. http://localhost:2065/service1.asmx) and running the method through this (with the same parameters as in the console application) - i.e. it displays XML with the string hello + char 19.
    - Checking the serialized XML in other ways shows the special character is being encoded properly (the SERVER SIDE seems to be ok, which is GOOD).
    - So it seems the CLIENT SIDE has the issue - i.e. the .NET-generated proxy class code doesn't handle special characters.
    - This is part of a bigger project where objects are passed in and out of the web methods - objects that contain string attributes - and these are what need to work properly; i.e. we're de/serializing classes.

    Any suggestions for a workaround and how to implement it? Or have I completely missed something really obvious!? Thanks in advance...

    PS. I've not had much luck getting it to use CDATA tags (does .NET support these out of the box?).
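
    One workaround that is often used in this situation (a sketch of a general technique, not the poster's code or a prescribed fix) is to keep the SOAP payload XML-safe: either Base64-encode the raw string and decode it on the client, or strip the characters that XML 1.0 does not allow before returning. A stripping helper might look like this:

        using System.Text;

        public static class XmlSafeText
        {
            // Removes characters that are not legal in an XML 1.0 document
            // (legal: #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD).
            // Note: this simple check also drops surrogate pairs, i.e. characters above U+FFFF.
            public static string StripInvalidXmlChars(string input)
            {
                if (string.IsNullOrEmpty(input))
                    return input;

                var sb = new StringBuilder(input.Length);
                foreach (char c in input)
                {
                    bool valid =
                        c == '\t' || c == '\n' || c == '\r' ||
                        (c >= 0x20 && c <= 0xD7FF) ||
                        (c >= 0xE000 && c <= 0xFFFD);

                    if (valid)
                        sb.Append(c);   // keep only XML-safe characters
                }
                return sb.ToString();
            }
        }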

    Read the article

  • Gawker Passwords

    - by Nick Harrison
    There has been much news about the hack of the Gawker web sites. There has even been an analysis of the most common passwords found. This list is embarrassing in many ways: the most common password was "123456"; the second most common was "password".

    Much has also been written providing advice on how to create good passwords. This article provides some interesting advice, none of which should be taken. Anyone reading my blog probably already knows the importance of strong passwords, so I am not going to reiterate the reasons here. My target audience is more the folks defining password complexity requirements. A user cannot come up with a strong password if we have complexity requirements that don't make sense. With that in mind, here are a few guidelines.

    Long Passwords

    Insist on long passwords. In some cases, you may need to make changes to allow a long password. I have seen many places that cap passwords at 8 characters. Passwords need to be at least 8 characters at a minimum. Consider how much stronger the passwords would be if you doubled the length. Passwords that are 15-20 characters will be that much harder to crack. There is no need to limit passwords to 8 characters.

    Don't Require Special Characters

    Many complexity rules will require that your password include a capital letter, a lower-case letter, a number, and one of the "special" characters - the shifted symbols above the number keys. The problem with such rules is that the resulting passwords are harder to remember. It also means that you will have a smaller set of characters in the resulting passwords. If you must include one of the 10 digits and one of the 10 "special" characters, then you have dramatically reduced the character set that will make up the final password: two characters will each be one of 10 possible values instead of one of 70, and two more will each be one of 26 possible characters instead of one of 70. If you limit passwords to 8 characters, you are left with only 4 characters drawing on the full set of 70 potential values.

    With these character restrictions in place, there are about 1.6 x 10^12 possible passwords. Without these special-character restrictions, but still allowing numbers and special characters, you get a total of 5.76 x 10^14 possible passwords. Even if you only allowed upper- and lower-case letters and digits, you would still have 2.18 x 10^14 passwords. You can do the math any number of ways; requiring special characters will always weaken passwords. Now imagine the number of passwords when you require more than 8 characters.

    If you are responsible for defining complexity rules, I urge you to take these guidelines into account. What other guidelines do you follow?
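
    The arithmetic behind those figures is easy to verify; here is a throwaway sketch (my own, assuming the post's 70-character alphabet of 26 upper-case letters, 26 lower-case letters, 10 digits and 8 specials, and - like the post - fixing which positions hold the required character classes, which slightly understates the true count):

        using System;

        class PasswordSpace
        {
            static void Main()
            {
                // 8-character passwords over a 70-character alphabet, no composition rules
                double unrestricted = Math.Pow(70, 8);                    // ~5.76e14

                // Force one upper, one lower, one digit, one special in fixed positions:
                // 26 * 26 * 10 * 10 choices for those four, 70^4 for the remaining four
                double restricted = 26d * 26 * 10 * 10 * Math.Pow(70, 4); // ~1.62e12

                // Upper- and lower-case letters plus digits only (62-character alphabet)
                double alphanumeric = Math.Pow(62, 8);                    // ~2.18e14

                Console.WriteLine($"{unrestricted:E2}  {restricted:E2}  {alphanumeric:E2}");
            }
        }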

    Read the article

  • Handling special characters with FOR XML PATH('')

    - by Rob Farley
    Because I hate seeing &gt; or &amp; in my results… Since SQL Server 2005, we’ve been able to use FOR XML PATH('') to do string concatenation. I’ve blogged about it before several times. But I don’t think I’ve blogged about the fact that it all goes a bit wrong if you have special characters in the strings you’re concatenating. Generally, I don’t even worry about this. I should, but I don’t, particularly when the solution is so easy. Suppose I want to concatenate the list of user databases...(read more)
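
    The excerpt cuts off before the fix, but the pattern most people land on (a sketch of the commonly used technique, not necessarily Rob's exact code) is to add TYPE and read the value back out with .value(), so the special characters are decoded instead of showing up as &gt; or &amp;:

        -- Comma-separated list of user database names, without XML-entitised characters
        SELECT STUFF(
            (SELECT N', ' + name
             FROM sys.databases
             WHERE database_id > 4          -- skip the system databases
             FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
            1, 2, N'');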

    Read the article

  • Getting a Web Resource Url in non WebForms Applications

    - by Rick Strahl
    WebResources in ASP.NET are a pretty useful feature. WebResources are resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources. For example you can do:

        ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE);

    GetWebResourceUrl requires a type (which is used for the assembly lookup in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this:

        WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212

    While lately excessive resource usage has been frowned upon, especially by MVC developers who tend to opt for content distributed as files, I still think that Web Resources have their place even in non-WebForms applications. Also, if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even plain old Razor Web Pages.

    Where's my Page object, Dude?

    Unfortunately, ASP.NET doesn't natively have a mechanism for retrieving WebResource URLs outside of the WebForms engine. It's a feature that's specifically baked into WebForms and that relies specifically on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods), and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant. While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). Even if it did, the internal constructor it does have requires a Page reference. No good…

    So, can we get access to a WebResource URL generically without running in a WebForms Page instance? We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript, at least for retrieving resources and resource URLs, so it's easy to create an instance of a Page, for example in a static method. For our needs of retrieving resource URLs, or even actually retrieving script resources, we can use a canned, non-configured Page instance we create on our own. The following works just fine:

        public static string GetWebResourceUrl(Type type, string resource)
        {
            Page page = new Page();
            return page.ClientScript.GetWebResourceUrl(type, resource);
        }

    A slight optimization for this might be to cache the created Page instance. Page tends to be a pretty heavy object to create each time a URL is required, so you might want to cache the instance:

        public class WebUtils
        {
            private static Page CachedPage
            {
                get
                {
                    if (_CachedPage == null)
                        _CachedPage = new Page();
                    return _CachedPage;
                }
            }
            private static Page _CachedPage;

            public static string GetWebResourceUrl(Type type, string resource)
            {
                return CachedPage.ClientScript.GetWebResourceUrl(type, resource);
            }
        }

    You can now use GetWebResourceUrl in a Razor page like this:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.JQUERY_SCRIPT_RESOURCE)">
            </script>
        </head>
        <body>
            <div class="errordisplay">
                <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.WARNING_ICON_RESOURCE)" />
                This is only a Test!
            </div>
        </body>
        </html>

    And voilà - there you have WebResources served from a non-Page based application. WebResources may be on the way out, but legacy apps have them embedded, and for some situations, like fallback scripts and some common image resources, I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, MVC

    Read the article

  • When Your Favorite Video Game Characters go Trick-or-Treating [Video]

    - by Asian Angel
    Halloween has arrived and all of your favorite video game characters are out and about collecting lots of candy goodness. The question is whether or not all will be successful in collecting treats or if the tricks will be on them! Note: Video contains some language that may be considered inappropriate. Videogame Trick-or-Treating [Dorkly]

    Read the article

  • Favorite, Well-Known Characters as Pacman Ghosts [Humorous Image]

    - by Asian Angel
    Can you name them all? Note: Ryan has a complete list of all the characters at his Flickr page if you find any that you are unable to identify. Pacman Ghosts – Ryan “Dash” Coleman (Flickr) [via Neatorama]

    Read the article

  • XNA 4.0 SpriteFont not displaying all Characters

    - by Iain Brown
    I'm looking for a little help. I'm trying to use a SpriteFont in my XNA 4.0 game, but when I display the string "This is a test", all that appears on the screen is "This is st" - the "a te" is missing. The space for the characters is there, but the letters are not. The code I'm using is:

        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
        spriteBatch.DrawString(font, "this is a test", new Vector2(692, 372), Color.White);
        spriteBatch.Draw(texture, new Rectangle(0, 0, 100, 100), Color.White);
        spriteBatch.End();

    Any help with this would be great!

    Read the article

  • How to show other characters in online 2D rpg

    - by Loligans
    I have Player 1 and Player 2, and I am using JSON to send and retrieve player data between the client and the server. When another player logs in and is in the same map, how would I send that data to both players so the graphics engine can show the two characters on the map?

    About my game:

    - it is a 2D RPG, tile-based, 24x15 tiles
    - it is real-time action
    - it should work anywhere between 10-150 ping
    - players interact with each other when in the same map and can see each other moving around
    - the game world is persistent and is saved when the server shuts down

    Right now the server just sends each player only their own information, inside a JSON object. Here is an example of what I am talking about: notice there are 2 separate characters in 2 separate clients, but they are running on the same server. I am trying to get them to show up on both clients, but I don't know how I should accomplish this. Should I send it as an added value in the JSON object? Also, what is the name of this process, so I can look it up and find more info on it?
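
    The usual name for this is state broadcasting, often combined with interest management (only send each client the entities it can see - here, the players on its map). A very rough server-side sketch, assuming a .NET server and System.Text.Json; the Player type and SendToClient method are placeholders, not from the question:

        using System.Collections.Generic;
        using System.Linq;
        using System.Text.Json;

        public class Player
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public int X { get; set; }
            public int Y { get; set; }
            public int MapId { get; set; }
        }

        public class MapStateBroadcaster
        {
            // Whenever someone joins, leaves or moves, send every client on the map
            // the full list of players that share that map - not just its own player.
            public void BroadcastMap(int mapId, IEnumerable<Player> allPlayers)
            {
                List<Player> playersOnMap = allPlayers.Where(p => p.MapId == mapId).ToList();

                string payload = JsonSerializer.Serialize(new
                {
                    type = "map_state",        // lets the client tell this apart from its own login data
                    players = playersOnMap     // the client draws every entry, including its own character
                });

                foreach (Player player in playersOnMap)
                    SendToClient(player.Id, payload);   // placeholder for whatever networking layer is in use
            }

            private void SendToClient(int playerId, string json)
            {
                // write `json` to the socket/connection associated with playerId
            }
        }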

    Read the article

  • How do you write a macro for a special character in LibreOffice?

    - by JasperKov
    Does anyone know how to write a macro for a special character? I know LibreOffice currently doesn't have a way to assign a special character to a keyboard shortcut. However, I want to work around this with a macro. My plan is to create a macro for a special character and then set a keyboard shortcut for that macro. The problem is I don't know the first thing about writing a macro. Does anyone have a template or something that works? I also know about the compose key, but I guess I am lazy and want to insert special characters with as few keys as possible.

    Read the article

  • how to insert multiple characters into a string in C [closed]

    - by John Li
    I wish to insert some characters into a string in C. Example:

        char string[20] = "20120910T090000";

    I want to make it something like "2012-09-10-T-0900-00". My code so far:

        void append(char subject[], char insert[], int pos)
        {
            char buf[100];
            strncpy(buf, subject, pos);
            int len = strlen(buf);
            strcpy(buf + len, insert);
            len += strlen(insert);
            strcpy(buf + len, subject + pos);
            strcpy(subject, buf);
        }

    When I call this the first time I get:

        2012-0910T090000

    However, when I call it a second time I get:

        2012-0910T090000-10T090000

    Any help is appreciated.
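
    The culprit is almost certainly that strncpy does not null-terminate buf when subject is longer than pos, so the following strlen(buf) runs into whatever was left on the stack - which is why the second call picks up leftovers from the first. A corrected sketch of the same function, with a terminator added and a destination buffer big enough for the final 20-character result:

        #include <stdio.h>
        #include <string.h>

        /* Insert `insert` into `subject` at index `pos`.
           `subject` must have room for the combined result. */
        void append(char subject[], const char insert[], int pos)
        {
            char buf[100];

            strncpy(buf, subject, pos);
            buf[pos] = '\0';              /* strncpy adds no '\0' when the source is longer than pos */
            strcat(buf, insert);          /* the inserted text */
            strcat(buf, subject + pos);   /* the rest of the original string */
            strcpy(subject, buf);
        }

        int main(void)
        {
            char string[21] = "20120910T090000";   /* 21 bytes: final string is 20 chars + '\0' */

            append(string, "-", 4);
            append(string, "-", 7);
            append(string, "-", 10);
            append(string, "-", 12);
            append(string, "-", 17);

            printf("%s\n", string);                /* prints 2012-09-10-T-0900-00 */
            return 0;
        }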

    Read the article

  • "Adding Esperanto circumflexes (supersigno)" option has no effect

    - by Gardel
    I'm used to using the keyboard layout option to type the accented characters of Esperanto. This option is in System Settings → Keyboard Layout → Options → Adding Esperanto circumflexes (supersigno) → To the corresponding key in a QWERTY keyboard. It worked great, but since I upgraded to Ubuntu 12.04 this option has no effect. I'm using Ubuntu in French with the keyboard layout French (variant) and GNOME Shell. I tested on another computer with Unity and there is the same issue. Is it a known issue? I did not find anything about it…

    Read the article

  • Convert extended ASCII characters to their correct representation using the Console.ReadKey() method and a ConsoleKeyInfo variable

    - by mishamosher
    I've read for about 30 minutes and didn't find anything specific about this on this site. Suppose the following, in a C# console application:

        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        Console.WriteLine(cki.KeyChar.ToString()); // or Console.WriteLine(cki.KeyChar) as well
        Console.ReadKey(true);

    Now, let's type ¿ at the console and assign it to cki via Console.ReadKey(true). What is shown isn't the ¿ symbol; the ¨ symbol is shown instead. And the same happens with many other characters. Examples: ñ shows ¤, ¡ shows -, ´ shows ï.

    Now, let's take the same snippet and add some things for a more Console.ReadLine()-like behavior:

        string data = string.Empty;
        ConsoleKeyInfo cki;
        for (int i = 0; i < 10; i++)
        {
            cki = Console.ReadKey(true);
            data += cki.KeyChar;
        }
        Console.WriteLine(data);
        Console.ReadKey(true);

    The question: how do I handle this the right way, so that the right characters end up stored in data and printed, not things like ¨, ¤, -, ï, etc.? Please note that I want a solution that works with ConsoleKeyInfo and Console.ReadKey(), not other variable types or read methods.

    EDIT: Because the ReadKey() method of the Console class depends on Kernel32.dll and definitively mishandles extended ASCII and Unicode, it's no longer an option to just find a valid conversion for what it returns. The only valid way to handle the bad behavior of ReadKey() is to use the cki.Key property written by the cki = Console.ReadKey(true) call and apply a switch to it, then return the right values depending on which key was pressed. For example, to handle the Ñ key press:

        string data = string.Empty;
        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        switch (cki.Key)
        {
            case ConsoleKey.Oem3:
                // Could add handlers for Alt and Control, not included here to keep the code small and simple
                if (cki.Modifiers.ToString().Contains("Shift"))
                    data += "Ñ";
                else
                    data += "ñ";
                break;
        }
        Console.WriteLine(data);
        Console.ReadKey(true);

    So now the question has a wider focus... Which other functions complete their execution with only one key pressed and return what's pressed (a substitute for ReadKey())? I think there are no such substitutes, but a confirmed answer would be useful.

    EDIT2: HA! Found the way, thanks to something I used so many times back in Windows 98 SE. There are the code pages, which are responsible for how the info is presented in the console. ReadLine() reconfigures the code page to use the extended ASCII and Unicode characters properly; ReadKey() leaves it at the EN-US default (code page 850). Just use a code page that prints the characters you want, and that's all. Refer to http://en.wikipedia.org/wiki/Code_page for some of them :)

    So, for the Ñ key press, the solution is this:

        Console.OutputEncoding = Encoding.GetEncoding(1252); // 28591 also works for the Ñ key, and others too
        string data = string.Empty;
        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        data += cki.KeyChar;
        Console.WriteLine(data);
        Console.ReadKey(true);

    Simple :) Now I'm kicking myself... how could I forget those code pages!? Question answered, so, no more about this!

    Read the article
