Search Results

Search found 1490 results on 60 pages for 'rxvt unicode'.

Page 9/60

  • Weird error using preg_match and unicode

    - by Thorpe Obazee
    if (preg_match('(\p{Nd}{4}/\p{Nd}{2}/\p{Nd}{2}/\p{L}+)', '2010/02/14/this-is-something')) { // do stuff } The above code works. However this one doesn't. if (preg_match('/\p{Nd}{4}/\p{Nd}{2}/\p{Nd}{2}/\p{L}+/u', '2010/02/14/this-is-something')) { // do stuff } Maybe someone could shed some light on why the second one doesn't work. This is the error that is being produced: A PHP Error was encountered Severity: Warning Message: preg_match() [function.preg-match]: Unknown modifier '\'
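
    The "Unknown modifier" warning is a delimiter problem rather than a Unicode one: the first pattern uses ( and ) as delimiters, so its literal slashes are harmless, while the second uses / as the delimiter, so PCRE stops at the first unescaped / inside the pattern and treats everything after it as modifiers. A minimal sketch of two working spellings (same test string as above):

        // keep / as the delimiter, but escape the literal slashes
        if (preg_match('/\p{Nd}{4}\/\p{Nd}{2}\/\p{Nd}{2}\/\p{L}+/u', '2010/02/14/this-is-something')) { /* do stuff */ }

        // or pick a delimiter that never appears in the pattern
        if (preg_match('#\p{Nd}{4}/\p{Nd}{2}/\p{Nd}{2}/\p{L}+#u', '2010/02/14/this-is-something')) { /* do stuff */ }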

    Read the article

  • Unicode string turns to garbage on the server side

    - by this. __curious_geek
    I have a situation. I have a label in ASP.NET 2.0 (C#). The label should display the text "Sähköpostiosoite". I tried setting Label.Text both from markup and from code-behind, but what I see in the browser response is "Sähköpostiosoite": the originally assigned string "Sähköpostiosoite" gets replaced with "SÃ¤hkÃ¶postiosoite". I have no idea why this happens; can you please help me diagnose the problem?
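
    The corruption pattern (each "ä" arriving as "Ã¤") means the UTF-8 bytes of the string are being re-interpreted as Latin-1/Windows-1252 somewhere between the source file and the browser. A minimal sketch of the usual first thing to check in ASP.NET 2.0, the globalization settings in web.config, assuming the .aspx/.cs files themselves are actually saved as UTF-8:

        <!-- web.config: treat source files, requests and responses as UTF-8 -->
        <system.web>
          <globalization fileEncoding="utf-8" requestEncoding="utf-8" responseEncoding="utf-8" />
        </system.web>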

    Read the article

  • Apache htdocs in folder with unicode name

    - by Zsolti
    I have my Apache (for Windows) htdocs in a folder like c:\anything1\????\anything2. The problem is that in this case PHP won't execute any scripts from there and will display an error message like this: Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0 Fatal error: Unknown: Failed opening required 'c:/anything1/????/anything2/index.php' (include_path='.;C:\php5\pear') in Unknown on line 0 If I try to open an HTML file, it is served by Apache, so it seems that the problem appears only with PHP. Do you have an idea how to solve this?

    Read the article

  • Flex 3 - Full unicode support fonts and CSS

    - by BS_C3
    Hi! I'm developing a web application that will be used in Europe and in Asia (especially Japan -Hiragana, Kanji and Katakana-, China and Korea). I'm using the following fonts: - ericssonga628.TTF - HelveticaNeueLTStd-Lt.otf - HelveticaNeueLTStd-LtEx.otf - HelveticaNeueLTStd-Bd.otf - HelveticaNeueLTStd-BdEx.otf When I try to display Japanese characters, I don't get anything. I guess these fonts don't support East Asian characters... Do you know of any equivalent fonts? Also, I was thinking of creating a CSS for each language (or pack of languages) and switching when the user changes the display language. For example, if the user selects "Japanese", I'll use the Japanese stylesheet. However, how do I switch from one CSS to another? Thanks in advance for your answers. Regards,

    Read the article

  • reading unicode

    - by user121196
    I'm using Java IO to retrieve text from a server that might output characters such as é. When I then output them using System.err, they turn out to be '?'. I am using UTF-8 encoding. What's wrong? int len = 0; char[] buffer = new char[1024]; OutputStream os = sock.getOutputStream(); InputStream is = sock.getInputStream(); os.write(query.getBytes("UTF8")); // or "iso8859_1" Reader reader = new InputStreamReader(is, Charset.forName("UTF-8")); do { len = reader.read(buffer); if (len > 0) { if (outstring == null) outstring = new StringBuffer(); outstring.append(buffer, 0, len); } } while (len > 0); System.err.println(outstring);
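
    Assuming the reading side is correct once it compiles (the len > 0 tests), the usual remaining suspect is the output side: System.err encodes text with the platform default charset, which on many systems cannot represent é and substitutes '?'. A short sketch of re-wrapping stderr as UTF-8 (needs java.io.PrintStream, and the constructor throws UnsupportedEncodingException; the terminal must also be able to display the characters):

        // make System.err emit UTF-8 instead of the platform default charset
        System.setErr(new PrintStream(System.err, true, "UTF-8"));
        System.err.println(outstring);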

    Read the article

  • Unicode escaping in C/C++

    - by Geo
    Hi guys! I'm having a dispute with a colleague of mine. She says that the following: char* a = "\x000aaxz"; will/can be seen by the compiler as "\x000aa". I do not agree with her, as I think you can have a maximum number of 4 hex characters after the \x. Can you have more than 4 hex chars? Who is right here?
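
    For what it's worth, a hexadecimal escape in C and C++ consumes every hex digit that follows it (the four-digit limit belongs to \u, not \x), so the colleague is right: the compiler reads \x000aa, since the second 'a' is a hex digit but the following 'x' is not, and will typically complain that the value does not fit in a char. The usual sketch of a fix is string-literal concatenation to terminate the escape explicitly:

        /* "\x0a" ends after two hex digits because the next literal starts fresh */
        const char *a = "\x0a" "axz";   /* a newline byte followed by "axz" */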

    Read the article

  • Unicode filenames on windows in ruby

    - by delivarator
    I have a piece of code that looks like this: Dir.new(path).each do |entry| puts entry end The problem comes when I have a file named ???????.txt in the directory that I list. On a Windows 7 machine I get the output: ???????.txt From googling around, properly reading this filename on windows seems to be an impossible task. Any suggestions?

    Read the article

  • Joomla 1.5 & Indic Unicode Fonts - How-to?

    - by Ganesh
    I am using an Inscript keyboard to type directly into TinyMCE. However, when I click save, all the characters appear as question marks on the website and even in the article list on the admin side. How should I solve the problem? I am specifically talking about Marathi, but the solution is probably the same for all Devanagari scripts. Thanks in advance.
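
    Literal question marks after saving (as opposed to empty boxes) usually mean the characters are being destroyed in the database layer: if the MySQL table or connection character set cannot represent Devanagari, MySQL substitutes '?' on INSERT. A hedged sketch of the usual repair, assuming MySQL and the default Joomla 1.5 table prefix (take a backup first, and repeat for the other text tables):

        -- store article text as UTF-8 instead of latin1
        ALTER TABLE jos_content CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
        -- and make sure the client connection talks UTF-8 as well
        SET NAMES utf8;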

    Read the article

  • How to do proper Unicode and ANSI output redirection on cmd.exe?

    - by Sorin Sbarnea
    If you are doing automation on Windows and you are redirecting the output of different commands (internal to cmd.exe or external), you'll discover that your log files contain mixed Unicode and ANSI output (meaning that they are invalid and will not load well in viewers/editors). Is it possible to make cmd.exe work with UTF-8? This question is not about display, it's about stdin/stdout/stderr redirection and Unicode. I am looking for a solution that would allow you to: redirect the output of internal commands to a file as UTF-8; redirect the output of external commands that support Unicode to files, but encoded as UTF-8. If it is impossible to obtain this kind of consistency using batch files, is there another way of solving the problem, like using Python scripting? In this case, I would like to know if it is possible to do the Unicode detection alone (the user of the script should not have to remember whether the called tools output Unicode or not; it should just convert the output to UTF-8). For simplicity we'll assume that if a tool's output is not Unicode, it will be treated as UTF-8 (no codepage conversion).
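
    There is no single setting that makes everything consistent, but two partial workarounds exist: chcp 65001 switches the console session to the UTF-8 code page, and cmd /U makes internal commands write UTF-16LE when their output is redirected. A sketch of the batch side (some_tool.exe is a placeholder for any external command):

        @echo off
        rem switch this session to UTF-8 before redirecting internal commands
        chcp 65001 >nul
        dir > log.txt
        some_tool.exe >> log.txt

        rem alternative: have internal commands emit UTF-16LE to files and pipes
        rem cmd /U /C "dir > log-utf16.txt"

    For the detection side of the question, a Python wrapper could sniff a UTF-16 BOM (or attempt a UTF-16 decode) on each command's captured output and transcode it to UTF-8 before appending it to the log; per the stated assumption, anything that fails the check is written through as UTF-8 unchanged.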

    Read the article

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers! Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back? Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets? Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.) Can we simply migrate the database and let Delphi handle the Unicode character sets automatically, or will we have to change all character field types in every DataModule (DFM and source code) too? Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally. Update: one problem I found now is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. It looks like a lot of work lies ahead. While we could try to avoid persistent fields (and add calculated fields at run time), we would of course prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • Using unicodedata.normalize in Python 2.7

    - by dpitch40
    Once again, I am very confused with a unicode question. I can't figure out how to successfully use unicodedata.normalize to convert non-ASCII characters as expected. For instance, I want to convert the string u"Cœur" To u"Coeur" I am pretty sure that unicodedata.normalize is the way to do this, but I can't get it to work. It just leaves the string unchanged. >>> s = u"Cœur" >>> unicodedata.normalize('NFKD', s) == s True What am I doing wrong?
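
    Nothing is wrong with the call: U+0153 (œ) has no decomposition mapping in the Unicode character database - it is a distinct letter, not a compatibility ligature like U+FB01 (ﬁ) - so both NFD and NFKD leave it untouched. Getting "oe" therefore needs an explicit substitution before (or after) normalizing; a minimal Python 2.7 sketch:

        import unicodedata

        def asciify(s):
            # NFKD only splits characters that actually have decompositions
            # (accented letters, compatibility forms); oe/OE must be mapped by hand
            s = s.replace(u'\u0153', u'oe').replace(u'\u0152', u'OE')
            s = unicodedata.normalize('NFKD', s)
            return s.encode('ascii', 'ignore')

        print asciify(u'C\u0153ur')   # prints: Coeur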

    Read the article

  • Where can I find a useful multi-language Unicode font for Mac OS X?

    - by Stephen Jennings
    On every browser I've tried (Firefox, Safari, Chrome, and Omniweb), when I go to a web page containing somewhat less-common characters, I can't see the glyphs. For example, on the Wikipedia page for the Bengali Language, the very first line contains a string of squares; on Windows, I can see the Bengali writing. Firefox does display code points on the Coptic Language article, but not Bengali. I'm not sure why. On Windows, as long as I have the Arial Unicode MS font installed, these characters fall back to that font and display properly. Mac OS X doesn't seem to ship with a font containing these Unicode characters (it has Arial Unicode MS, but it must be a subset of the Windows version because Bengali doesn't display in that font). I checked on my Snow Leopard DVD and I installed "Additional Fonts" from the Optional Installs package, but I'm still missing many languages. Is there any good, free font that contains a large collection of languages? I know creating fonts is difficult and time-consuming, but it seems like including at least one font like this with operating systems should be standard by now.

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, Win32 APIs, Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF, 𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T, 𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO, 𠂊 (U+2008A) Han Character. You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad: Opera has problems editing them (deleting required 2 presses of Backspace). Notepad can't deal with them correctly (deleting required 2 presses of Backspace). File-name editing in Windows dialogs is broken (deleting required 2 presses of Backspace). All Qt3 applications can't deal with them - they show two empty squares instead of one symbol. Python encodes such characters incorrectly when they are used directly: u'X' != unicode('X','utf-16') on some platforms when X is a character outside the BMP. Python 2.5's unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings. StackOverflow seems to remove these characters from the text if they are edited directly as Unicode characters (these characters are shown using HTML Unicode escapes). A WinForms TextBox may generate an invalid string when limited with MaxLength. It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful?
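
    The underlying point - that one code point can occupy two UTF-16 code units - is easy to demonstrate in any of the environments listed; a short Java sketch using the G clef from the list above:

        public class SurrogateDemo {
            public static void main(String[] args) {
                String s = "\uD834\uDD1E";   // U+1D11E MUSICAL SYMBOL G CLEF as a surrogate pair
                System.out.println(s.length());                        // 2: UTF-16 code units
                System.out.println(s.codePointCount(0, s.length()));   // 1: actual code points
                // naive charAt()/substring() loops split the pair; step by code point instead
                for (int i = 0; i < s.length(); i += Character.charCount(s.codePointAt(i))) {
                    System.out.printf("U+%X%n", s.codePointAt(i));     // U+1D11E
                }
            }
        }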

    Read the article

  • How can I check if PHP was compiled with the UNICODE version of the Win32 API?

    - by Wesley Murch
    This is related to this Stack Overflow post: glob() can't find file names with multibyte characters on Windows? I'm having issues with PHP and files that have multibyte characters on Windows. Here's my test case: print_r(scandir('./uploads/')); print_r(glob('./uploads/*')); Correct Output on remote UNIX server: Array ( [0] => . [1] => .. [2] => filename-äöü.jpg [3] => filename.jpg [4] => test?test.jpg [5] => ??? ?????.jpg [6] => ?????????.jpg [7] => ???.jpg ) Array ( [0] => ./uploads/filename-äöü.jpg [1] => ./uploads/filename.jpg [2] => ./uploads/test?test.jpg [3] => ./uploads/??? ?????.jpg [4] => ./uploads/?????????.jpg [5] => ./uploads/???.jpg ) Incorrect Output locally on Windows: Array ( [0] => . [1] => .. [2] => ??? ?????.jpg [3] => ???.jpg [4] => ?????????.jpg [5] => filename-äöü.jpg [6] => filename.jpg [7] => test?test.jpg ) Array ( [0] => ./uploads/filename-äöü.jpg [1] => ./uploads/filename.jpg ) Here's a relevant excerpt from the answer I chose to accept (which actually is a quote from an article that was posted online over 2 years ago): From the comments on this article: http://www.rooftopsolutions.nl/blog/filesystem-encoding-and-php The output from your PHP installation on Windows is easy to explain: you installed the wrong version of PHP, and used a version not compiled to use the Unicode version of the Win32 API. For this reason, the filesystem calls used by PHP will use the legacy "ANSI" API and so the C/C++ libraries linked with this version of PHP will first try to convert your UTF-8-encoded PHP string into the local "ANSI" codepage selected in the running environment (see the CHCP command before starting PHP from a command line window). Your version of Windows is MOST PROBABLY NOT responsible for this weird thing. Actually, this is YOUR version of PHP which is not compiled correctly, and that uses the legacy ANSI version of the Win32 API (for compatibility with the legacy 16-bit versions of Windows 95/98 whose filesystem support in the kernel actually had no direct support for Unicode, but used an internal conversion layer to convert Unicode to the local ANSI codepage before using the actual ANSI version of the API). Recompile PHP using the compiler option to use the UNICODE version of the Win32 API (which should be the default today, and anyway always the default for PHP installed on a server that will NEVER be Windows 95 or Windows 98...) I can't confirm whether this is my problem or not. I used phpinfo() and did not find anything interesting, but I wasn't sure what to look for. I've been using XAMPP for easy installations, so I'm really not sure exactly how it was installed. I'm using Windows 7, 64 bit - so forgive my ignorance, but I'm not even sure if "Win32" is relevant here. How can I check if my current version of PHP was compiled with the configuration mentioned above? PHP Version: 5.3.8 System: Windows NT WES-PC 6.1 build 7601 (Windows 7 Home Premium Edition Service Pack 1) i586 Build Date: Aug 23 2011 11:47:20 Compiler: MSVC9 (Visual C++ 2008) Architecture: x86 Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--disable-isapi" "--enable-debug-pack" "--disable-isapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=D:\php-sdk\oracle\instantclient11\sdk,shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze"

    Read the article

  • Unicode To ASCII Conversion [closed]

    - by Yuvaraj
    Hi all, I'm creating a small application in Delphi 2009. The problem I have is that when I run my application on Windows XP it works, but it does not work on Windows 95. I know the problem is that 95 does not support Unicode. If anyone knows a solution, please tell me. I also have one more idea: converting Unicode to ASCII. Is that possible? If so, please tell me how to do it. Thanks in advance. Warm regards, Yuvaraj

    Read the article

  • Unicode in PostgreSQL 8.4

    - by user8382
    I installed the "postgresql-8.4" package with default options. Everything worked fine, however I can't seem to manage to create unicode databases: -- This doesn't work createdb test1 --encoding UNICODE -- This works createdb test2 The error message "createdb: database creation failed: ERROR: new encoding (UTF8) is incompatible with the encoding of the template database (SQL_ASCII)" is a bit puzzling because (afaik) I don't use a template for creating the new db, or is it implicitely referring to the default "postgres" database for some reason ? Or maybe I'm missing a setting in a .conf file ?

    Read the article

  • PHP PDF library with unicode support. anybody??

    - by soden
    I tried dompdf. It's a lot easier than the other libraries, but it doesn't have Unicode support. Well, it has Unicode support, but it requires another library called PDFlib (the $1k version). So I'm just wondering if anybody has ever stumbled upon or used a PHP PDF library which is easy to use and has Unicode support. Thanks,

    Read the article

  • How do I best remove the unicode characters that XHTML regards as non-valid using php?

    - by Andrew Stacey
    I run a forum designed to support an international mathematics group. I've recently switched it to unicode for better support of international characters. In debugging this conversion, I've discovered that not all unicode characters are considered as valid XHTML (the relevant website appears to be http://www.w3.org/TR/unicode-xml/). One of the steps that the forum software goes through before presenting the posts to the browser is an XHTML validation/sanitisation step. It seems a reasonable idea that at that stage it should remove any unicode characters that XHTML doesn't like. So my question is: Is there a standard (or best) way of doing this in PHP? (The forum is written in PHP, by the way.) I guess that the failsafe would be a simple str_replace (if that's also the best, do I need to do anything extra to make sure it works properly with unicode?) but that would involve me having to go through the XHTML DTD (or the above-referenced W3 page) carefully to figure out what characters to list in the search part of str_replace, so if this is the best way, has someone already done that so that I can steal, err, copy, it? (Incidentally, the character that caused the problem was U+000C, the 'formfeed', which (according to the W3 page) is valid HTML but invalid XHTML!)
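
    Rather than maintaining a character list for str_replace, the usual approach is a single preg_replace in UTF-8 mode that drops everything outside the XML 1.0 "Char" production (which XHTML inherits, and which is exactly what excludes U+000C). A sketch, assuming the input is already valid UTF-8 (strip_invalid_xhtml_chars is just an illustrative name):

        function strip_invalid_xhtml_chars($text) {
            // keep tab, LF, CR and the ranges allowed by the Char production;
            // everything else (other C0 controls, U+FFFE/U+FFFF, stray surrogates) is removed
            return preg_replace(
                '/[^\x{0009}\x{000A}\x{000D}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]/u',
                '',
                $text
            );
        }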

    Read the article

  • Is the Unicode prefix N still needed in SQL Compact Edition?

    - by Dave
    At least in previous versions of SQL Server, you had to prefix Unicode string constants with an "N" to make them be treated as Unicode. Thus, select foo from bar where fizz = N'buzz' (See "Server-Side Programming with Unicode" for SQL Server 2005 "from the horse's mouth" documentation.) We have an application that is using SQL Compact Edition and I am wondering if that is still necessary. From the testing I am doing, it appears to be unneeded. That is, the following SQL statements both behave identically in SQL CE, but the second one fails in SQL Server 2005: select foo from bar where foo=N'???' select foo from bar where foo='???' (I hope I'm not swearing in some language I don't know about...) I'm wondering if that is because all strings are treated as Unicode in SQL CE, or if perhaps the default code page is now Unicode-aware. If anyone has seen any official documentation, either yea or nay, I'd appreciate it. I know I could go the safe route and just add the "N"'s, but there's a lot of code that will need changed, and if I don't need to, I don't want to! Thanks for your help!

    Read the article

  • Why does Python sometimes upgrade a string to unicode and sometimes not?

    - by samtregar
    I'm confused. Consider this code working the way I expect: >>> foo = u'Émilie and Juañ are turncoats.' >>> bar = "foo is %s" % foo >>> bar u'foo is \xc3\x89milie and Jua\xc3\xb1 are turncoats.' And this code not at all working the way I expect: >>> try: ... raise Exception(foo) ... except Exception as e: ... foo2 = e ... >>> bar = "foo2 is %s" % foo2 ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128) Can someone explain what's going on here? Why does it matter whether the unicode data is in a plain unicode string or stored in an Exception object? And why does this fix it: >>> bar = u"foo2 is %s" % foo2 >>> bar u'foo2 is \xc3\x89milie and Jua\xc3\xb1 are turncoats.' I am quite confused! Thanks for the help! UPDATE: My coding buddy Randall has added to my confusion in an attempt to help me! Send in the reinforcements to explain how this is supposed to make sense: >>> class A: ... def __str__(self): return "string" ... def __unicode__(self): return "unicode" ... >>> "%s %s" % (u'niño', A()) u'ni\xc3\xb1o unicode' >>> "%s %s" % (A(), u'niño') u'string ni\xc3\xb1o' Note that the order of the arguments here determines which method is called!
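
    A compressed version of what Python 2 is doing: with a byte-string format, %s converts each argument with str() (the result is only promoted to unicode when the argument itself is a unicode string), whereas with a unicode format, %s uses unicode(). str() of an exception carrying a non-ASCII unicode message has to ASCII-encode it and raises UnicodeEncodeError; unicode() of the same exception does not - which is why the u"..." version works, and why Randall's example switches methods depending on whether a unicode argument has already forced the promotion. A small sketch of the two safe spellings (Python 2):

        foo = u'\xc9milie and Jua\xf1 are turncoats.'
        foo2 = Exception(foo)

        print repr(u"foo2 is %s" % foo2)           # unicode format string -> unicode(foo2)
        print repr("foo2 is %s" % unicode(foo2))   # or convert explicitly first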

    Read the article

  • What is the best way to use Unicode in C++ on iPhone?

    - by Olli Wang
    Hi, I want to create my C++ libraries with Unicode support so they can be reused on other platforms. I have found the ICU (International Components for Unicode) project, but I also found a discussion about Apple rejecting apps for using ICU (see http://tinyurl.com/y86phfb). So how do you guys use Unicode in C++ on iPhone? Thanks.

    Read the article

  • Problem copying text (Indian language - Gujarati) from a Word document into a web page text area

    - by Avinash
    Hi all, I am developing a site in an Indian language (Gujarati). My problem is as follows: my client wants to be able to copy Gujarati text from a Word document and paste it into the text area. But when I copy text from the Word doc and paste it into the text area, it gets converted to English letters. http://www.chanakyanipothi.com/gujchanakya/Gopika.ttf Above is the link to the font I am using. I can provide demo code if you want to work on it. Is there something special I am missing? I hope I am clear. I am running PHP and Apache. Thanks, Avinash

    Read the article

  • How to get number of bytes read from QTextStream

    - by user261882
    I am using the following code to find the number of bytes read from a QFile. With some files it gives the correct file size, but with others it gives a value that is approximately fileCSV.size()/2. I have two files that contain the same number of characters but have different file sizes. Should I use some other objects for reading the QFile? QFile fileCSV("someFile.txt"); if ( !fileCSV.open(QIODevice::ReadOnly | QIODevice::Text)) emit errorOccurredReadingCSV(this); QTextStream textStreamCSV( &fileCSV ); // use a text stream int fileCSVSize = fileCSV.size(); qint64 reconstructedCSVFileSize = 0; while ( !textStreamCSV.atEnd() ) { QString line = textStreamCSV.readLine(); // line of text excluding '\n' if (!line.isEmpty()) { reconstructedCSVFileSize += line.size(); // this doesn't always work reconstructedCSVFileSize += 2; } else reconstructedCSVFileSize += 2; } I know that using the size of the QString is wrong; please suggest other solutions if you can. Thank you.
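
    The halving is most likely an encoding effect rather than a reading bug: QString::size() counts UTF-16 code units ("characters"), not bytes on disk, so a UTF-16 encoded file (two bytes per character, which QTextStream auto-detects from its BOM) reconstructs to roughly half its byte size, and the hard-coded +2 only matches CRLF line endings. If the goal really is a byte count, a sketch that avoids the text layer altogether and counts raw bytes, newlines included:

        QFile f("someFile.txt");
        if (f.open(QIODevice::ReadOnly)) {      // no QIODevice::Text, so \r\n is untouched
            qint64 bytes = 0;
            while (!f.atEnd())
                bytes += f.readLine().size();   // QByteArray line, trailing newline included
            // bytes now equals f.size()
        }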

    Read the article
