Search Results

Search found 13217 results on 529 pages for 'non unicode'.


  • How to use Unicode characters in a vim script?

    - by Thomas
    I'm trying to get vim to display my tabs as ? so they cannot be mistaken for actual characters. I'd hoped the following would work:

        if has("multi_byte")
          set lcs=tab:?
        else
          set lcs=tab:>-
        endif

    However, this gives me E474: Invalid argument: lcs=tab:? The file is UTF-8 encoded and includes a BOM. Googling "vim encoding" or similar gives me many results about the encoding of edited files, but nothing about the encoding of executed scripts. How do I get this character into my .vimrc so that it is properly displayed?
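
    A minimal sketch of the usual fix, assuming a UTF-8 capable Vim: declare the encoding of the script itself with :scriptencoding before using a multibyte character in an option value. The "»·" glyphs below are stand-ins, since the character in the question did not survive this page's encoding:

        " sketch for a .vimrc, not the asker's exact settings
        if has("multi_byte")
          if &encoding !~? '^u'
            set encoding=utf-8       " must be set before scriptencoding
          endif
          scriptencoding utf-8       " tells Vim how this very file is encoded
          set listchars=tab:»·       " any multibyte glyph works from here on
        else
          set listchars=tab:>-
        endif
        set list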

    Read the article

  • Unicode special characters - Dingbats - Appear differently in Firefox vs. Chrome/IE

    - by Oren
    Hi there, I'm trying to find a way to make dingbats appear exactly the same in Firefox, Chrome, Safari and IE. I noticed that the dingbats appear the same in IE/Chrome/Safari; HOWEVER, in Firefox they look "thinner". For example, visit the following page: http://en.wikipedia.org/wiki/Dingbat You'll notice that when viewing that page in Firefox, the characters look different in comparison to Chrome/IE. Does anybody know why, and how I can cause Firefox to display the characters EXACTLY like they appear in Chrome/IE? Thx, Oren.

    Read the article

  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: In terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably believe that I won't run into any issues in a program reading these incorrectly? I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard sets of ;, |, and similar ilk have been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing with one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using the set of characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything. But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I recently became concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something that is expected to be written one way is possibly written another, then maybe the whole system will fail, and I can't really let that happen. So is it safe to use this kind of char for background delimiters?
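
    A minimal C# sketch of the round trip under discussion (the field names and tier layout are hypothetical): as long as the string stays Unicode end to end (.NET strings are UTF-16, and the database column is nvarchar or a UTF-8 text type), characters like ‡ and ¦ survive Split/Join exactly like ASCII delimiters do. The risk is a non-Unicode column or a code-page conversion somewhere in between, not the characters themselves.

        // Sketch only: two hypothetical tiers packed into one string and back.
        using System;
        using System.Linq;

        class DelimiterSketch
        {
            const char Tier1 = '\u2021';   // ‡ double dagger, between groups
            const char Tier2 = '\u00A6';   // ¦ broken bar, within a group

            static string Encode(string[][] groups) =>
                string.Join(Tier1.ToString(),
                    groups.Select(g => string.Join(Tier2.ToString(), g)));

            static string[][] Decode(string packed) =>
                packed.Split(Tier1).Select(g => g.Split(Tier2)).ToArray();

            static void Main()
            {
                var data = new[] { new[] { "alpha", "beta" }, new[] { "gamma" } };
                string packed = Encode(data);            // "alpha¦beta‡gamma"
                Console.WriteLine(Decode(packed)[0][1]); // beta
            }
        }

    If there is any chance the payload itself could one day contain these characters, length-prefixing or escaping is safer than hunting for ever more exotic delimiters.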

    Read the article

  • Does Postgresql varchar count using unicode character length or ASCII character length?

    - by bennylope
    I tried importing a database dump from a SQL file, and the insert failed when inserting the string Mér into a field defined as varying(3). I didn't capture the exact error, but it pointed to that specific value with the constraint of varying(3). Given that I considered this unimportant to what I was doing at the time, I just changed the value to Mer, it worked, and I moved on. Does a varying field's limit take the length of the byte string into account? What really boggles my mind is that this was dumped from another PostgreSQL database, so it doesn't make sense that the constraint allowed the value to be written in the first place.
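
    For reference, PostgreSQL's varchar(n) limit counts characters in the database encoding, not bytes, so 'Mér' is three characters in a UTF-8 database even though it is four bytes. One classic way an import like this fails is a client_encoding mismatch during restore, which makes the é arrive as two separate characters. A quick probe (sketch, table name hypothetical):

        -- 'é' is one character but two bytes in UTF-8
        SELECT char_length('Mér')  AS characters,   -- 3
               octet_length('Mér') AS bytes;        -- 4

        -- character varying(3) constrains characters, so this succeeds when the
        -- client_encoding matches the bytes actually sent:
        SHOW client_encoding;
        CREATE TEMP TABLE t_demo (v varchar(3));
        INSERT INTO t_demo VALUES ('Mér');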

    Read the article

  • How to test if a string has a certain unicode char?

    - by Ruben Trancoso
    Suppose you have a command-line executable that receives arguments. This executable is widechar ready and you want to test whether one of these arguments starts with a hyphen, in which case it's an option: command -o foo. How could you test this inside your code if you don't know the charset being used by the host? Shouldn't it be possible for a given console to produce the same hyphen representation with another char in the widechar forest? (In such a case it would be a wild char :P)

        int _tmain(int argc, _TCHAR* argv[])
        {
            std::wstring inputFile(argv[1]);
            if (inputFile.c_str() <is a HYPHEN>)
            {
                _tprintf(_T("First argument cannot be an option"));
            }
        }
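
    One way the check is usually written (a sketch, assuming a _UNICODE build on Windows): shells pass options with the plain ASCII hyphen-minus U+002D, so comparing the first code unit against _T('-') is enough; other Unicode dashes (U+2010, U+2212 and so on) are distinct characters and would not normally be produced by a console as an option prefix.

        // Sketch: detect an option-style first argument in a _UNICODE build.
        #include <tchar.h>
        #include <cstdio>
        #include <string>

        int _tmain(int argc, _TCHAR* argv[])
        {
            if (argc > 1)
            {
                std::wstring first(argv[1]);
                if (!first.empty() && first[0] == _T('-'))   // U+002D only
                {
                    _tprintf(_T("First argument cannot be an option\n"));
                    return 1;
                }
            }
            return 0;
        }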

    Read the article

  • Storing and displaying unicode string (सूर्योदय) using PHP and MySQL

    - by Anirudh Goel
    I have to store Hindi text in a MySQL database, fetch it using a PHP script and display it on a webpage. I did the following: I created a database and set its encoding to UTF-8 and also the collation to utf8_bin. I added a varchar field in the table and set it to accept UTF-8 text in the charset property. Then I set about adding data to it. Here I had to copy data from an existing site. The Hindi text looks like this: सूर्योदय:05:30. I directly copied this text into my database and used the PHP code echo(utf8_encode($string)) to display the data. Upon doing so, the browser showed me "??????". However, when I inserted the UTF equivalent of the text (found by going to "view source" in the browser), it worked: सूर्योदय translates into &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;. If I enter and store &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351; in the database, it converts perfectly. So what I want to know is how I can directly store सूर्योदय into my database, fetch it and display it on my webpage using PHP. Also, can anyone help me understand if there's a script which, when I type in सूर्योदय, gives me &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;? Solution found: I wrote the following sample script which worked for me. Hope it helps someone else too.

        <html>
        <head><title>Hindi</title></head>
        <body>
        <?php
        include("connection.php");               // simple connection setting
        $result = mysql_query("SET NAMES utf8"); // the main trick
        $cmd = "select * from hindi";
        $result = mysql_query($cmd);
        while ($myrow = mysql_fetch_row($result)) {
            echo ($myrow[0]);
        }
        ?>
        </body>
        </html>

    The dump for my database storing Hindi UTF strings is:

        CREATE TABLE `hindi` (
          `data` varchar(1000) character set utf8 collate utf8_bin default NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `hindi` VALUES ('सूर्योदय');

    Now my question is, how did it work without specifying "META" or header info? Thanks!
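
    On the closing question: without a header or meta tag, the browser is guessing the encoding from the bytes, and it usually guesses UTF-8 correctly for Devanagari, but it is still guesswork. A small sketch of making it explicit (mysql_set_charset is the era-appropriate, per-connection equivalent of the SET NAMES trick; connection.php is the asker's include):

        <?php
        // Sketch: state the encoding instead of relying on browser detection.
        header('Content-Type: text/html; charset=utf-8'); // HTTP response header
        include("connection.php");
        mysql_set_charset('utf8');   // same effect as "SET NAMES utf8"
        ?>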

    Read the article

  • What feature is at play when Ctrl+Shift+Alt+U,E "types" an unprintable hex 000E?

    - by Peter.O
    I tend to use Ctrl+Shift+Alt for my customized system-wide keybindings. When I tried Ctrl+Shift+Alt+U it printed an underscored u and waited for more keyboard input!... Some keys were accepted and some were not... eg. Numbers were accepted and they too were underlined, but only a few keys allowed me to break out. I then tried Ctrl+Shift+Alt+U immediately followed by Ctrl+Shift+Alt+E. This produced an unprintable hex 000E(?) and broke out of the loop... The unprintable character got me thinking that this may be Unicode related. If so, how so? What is happening here? Is this underscored u a trigger for an Input Method Editor? This behaviour occurs: Here (as I type), "gedit", text-edit fields... (but not in the Terminal)... and "gvim" reported "pattern not found"...

    Read the article

  • Won't read Unicode characters over NFS mount?

    - by Julz
    Hello, I'm getting a strange issue when trying to play mp3s containing Unicode characters (accents) over NFS on OS X. It's all good over AFP, but I'm set up with NFS because it's a Linux server on the other end. This is my Disk Utility setup: nfs://192.168.1.112/Music, advanced mount parameters: -P, nolocks, nosuid. The strange thing is that I can see those files in the Finder (with the accents), but I can't play them! So I'm wondering if it's a Unicode issue, since I can see the files properly, or a permission issue, since I can't play them; but then it wouldn't make sense that I can't play ONLY the files containing accents. Help? Thanks

    Read the article

  • Font problems after changing non-Unicode locale to Japanese

    - by Kawa
    I recently set my system locale for non-Unicode programs to Japanese and switched it back to American English shortly afterwards. Now, just about all my non-Unicode programs use a larger dialog font without any anti-aliasing instead of Segoe UI 9, as if running on a Japanese locale. One in particular has them switched, using Segoe UI when running on a Japanese locale (be it AppLocale or system) and the Japanese font on American English! Reinstalling that particular program, as a Google search suggested, didn't do anything but reset the toolbar positions. Using Windows 7 Home Premium, and quite confused by all this.

    Read the article

  • UnicodeEncodeError: 'ascii' codec can't encode character [...]

    - by user1461135
    I have read the HOWTO on Unicode from the official docs and a full, very detailed article as well. Still, I don't get why it throws me this error. Here is what I attempt: I open an XML file that contains chars out of the ASCII range (but inside the allowed XML range). I do that with cfg = codecs.open(filename, encoding='utf-8', mode='r'), which runs fine. Looking at the string with repr() also shows me a unicode string. Now I go ahead and read that with parseString(cfg.read().encode('utf-8')). Of course, my XML file starts with this: <?xml version="1.0" encoding="utf-8"?>. Although I suppose it is not relevant, I also defined utf-8 for my python script, but since I am not writing unicode characters directly in it, this should not apply here. Same for the following line: from __future__ import unicode_literals, which also is right at the beginning. Next, I pass the generated object to my own class, where I read tags into variables like this: xmldata.getElementsByTagName(tagName)[0].firstChild.data and assign it to a variable in my class. Now what works perfectly are these commands (obj is an instance of the class): for element in obj: print element And this command works as well: print obj.__repr__() I defined __iter__() to just yield every variable, while __repr__() uses the typical printf stuff: "%s" % self.varname Both commands print perfectly and can output the unicode character. What does not work is this: print obj And now I am stuck, because this throws the dreaded UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 47. So what am I missing? What am I doing wrong? I am looking for a general solution; I always want to handle strings as unicode, just to avoid any possible errors and write a compatible program. Edit: I also defined this: def __str__(self): return self.__repr__() def __unicode__(self): return self.__repr__() From the documentation I got that this...
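
    A minimal Python 2 sketch of what is most likely going on (the Tag class here is hypothetical): the print statement calls str(obj), and routing __str__ back to a unicode-returning __repr__ forces Python to encode with the ASCII codec; returning explicitly UTF-8-encoded bytes from __str__ avoids that, while __unicode__ keeps the text form available:

        # -*- coding: utf-8 -*-
        class Tag(object):
            def __init__(self, text):
                self.text = text                      # a unicode value, e.g. u'f\xfcr'

            def __unicode__(self):
                return u"Tag(%s)" % self.text         # stays unicode

            def __str__(self):
                return unicode(self).encode('utf-8')  # bytes, so print never
                                                      # falls back to ASCII

        obj = Tag(u'f\xfcr')
        print obj            # prints UTF-8 bytes; shows Tag(für) on a UTF-8 terminal
        print unicode(obj)   # works when stdout's encoding can represent it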

    Read the article

  • Functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well. In the recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?) but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell. In spite of my strong interest for FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has a better support for FP? For example, why should I be interested to have lambdas in Java if I can switch to Scala where I have much more FP concepts and access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)? Java was designed to be OO. Fine. I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java? So, my main point is: why are non-functional programming languages being extended with some functional concept? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for it than in a language in which one paradigm was only added later? The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Also, I asked myself if my impression is just plainly wrong due to lack of knowledge. E.g., do C# and C++11 support FP as extensively as, say, Scala or Caml do? In this case, my question would be simply non-existent. Or can it be that many non-FP programmers are not really interested in using functional programming, but they find it practically convenient to adopt certain FP-idioms in their non-FP language? 
IMPORTANT NOTE Just in case (because I have seen several language wars on this site): I mentioned the languages I know better, this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it even though there exist languages that were / are specifically designed to support FP.

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team member, functional specs and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code in English completely, including comments, but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to: (1) write code reflecting the Ubiquitous Language in the natural language of the domain; (2) translate the Ubiquitous Language to English and stop communicating in the natural language of the domain; or (3) define a table that maps the Ubiquitous Language to English. Here are some of my thoughts based on these options: 1) I have a strong aversion to mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages. 2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly. 3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL. I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    Read the article

  • Non-blocking ORM issues

    - by Nikolay Fominyh
    Once I had a question on SO, and found that there are no non-blocking ORMs for my favorite framework. I mean an ORM with callback support for asynchronous retrieval: the ORM would be supplied with a callback or some such to "activate" when data has been received. Otherwise the ORM needs to be split off into a separate thread to guarantee UI responsiveness. I want to create one, but I have some questions that are blocking me from starting development: What issues can we meet when developing an ORM? Does putting the word "non-blocking" before "ORM" dramatically increase the complexity of the ORM? Why aren't there many non-blocking ORMs around? Update: It looks like I have to improve my question. We already have solutions that allow us to receive data in a non-blocking way, and I believe that not all companies that use such solutions are using raw SQL. We want to create a more generic solution that we can reuse in future projects. What difficulties can we meet?
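
    A rough sketch of the callback shape being described (everything here is hypothetical, not an existing library): the blocking query runs on a worker thread and the callback fires when the data arrives, which is the minimum a "non-blocking ORM" has to offer before tackling the harder parts such as connection pooling, transactions, and error propagation into the callback:

        import threading

        def fetch_async(query_fn, callback):
            """Run the blocking query_fn() off-thread, hand its result to callback."""
            def worker():
                result = query_fn()        # the blocking ORM / SQL call
                callback(result)           # "activate" once data has been received
            t = threading.Thread(target=worker)
            t.daemon = True
            t.start()

        # hypothetical usage:
        # fetch_async(lambda: session.query(User).all(),
        #             lambda users: render_user_list(users))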

    Read the article

  • Non-English-based programming languages

    - by Jaime Soto
    The University of Antioquia in Colombia teaches its introductory programming courses in Lexico, a Spanish-based, object-oriented .NET language. The intent is to teach programming concepts in the students' native language before introducing English-based mainstream languages. There are many other Non-English-based programming languages and there is even a related question in Stack Overflow. I have several questions regarding these languages: Has anyone on this site learned to program using a non-English-based language? If so, how difficult was the transition to the first English-based language? Is there any research-based evidence that non-English speakers learn programming faster/better using languages with keywords in their native language instead of English-based languages?

    Read the article

  • Teaching programming to a non-CS graduate

    - by Shahzada
    I have a couple of friends interested in computer programming, but they're non-CS graduates; some of them have very little experience in the software testing field (some of them took basic software testing courses). I am going to be working with them on teaching basic computer programming and computer science fundamentals (data structures etc.). My questions are: What language should I start with? What are the essential computer science topics that I should cover before having them jump into computer programming? What readings can I incorporate to make the topic interesting and non-overwhelming? If we want to spend a year on it, what topics should take priority and must be covered in 12 months? Again, these are non-computer-science folks, and I want to keep the learning as much fun as possible. Thanks everyone.

    Read the article

  • Are there any non-english programming languages? [closed]

    - by samarudge
    Possible Duplicate: Non-English-based programming languages Not sure if this is the right place to ask this, but I'm going to anyway. Without fail, every programming language I've ever seen, used or heard of has its keywords based around English: if, else, while, for, query, foreach, image, path, extension; the list goes on, and they are all based around English words. Are there any languages, or ports of languages, that base their core keywords on non-English words to lower the wall for non-English-speaking programmers? This is mostly out of interest (English is my first language, so it's no problem for me). Are these languages popular locally (i.e., might a software development house in Germany use a programming language based on German over one based on English)?

    Read the article

  • Unicode collations problem?

    - by Bayonian
    (.NET 3.5 SP1, VS 2008, VB.NET, MSSQL Server 2008) I'm writing a small web app to test Khmer Unicode and Lao Unicode. I have a table that stores text in Khmer Unicode with the following structure:

        [t_id]   [int] IDENTITY(1,1) NOT NULL
        [t_chid] [int] NOT NULL
        [t_vn]   [int] NOT NULL
        [t_v]    [nvarchar](max) NOT NULL

    I can use LINQ to SQL to do CRUD normally. The text displays properly on the web page, even though I didn't change the default collation of MSSQL Server 2008. When it comes to searching the column [t_v], the page takes a very long time to load and, in fact, it loads every row of that column. It never compares against the "keyword" criteria that I use for the search. Here's my query for the search:

        Public Shared Function SearchTestingKhmerTable(ByVal keyword As String) As DataTable
            Dim db As New BibleDataClassesDataContext()
            Dim query = From b In db.khmer_books _
                        From ch In db.khmer_chapters _
                        From v In db.testing_khmers _
                        Where v.t_v.Contains(keyword) And ch.kh_book_id = b.kh_b_id And v.t_chid = ch.kh_ch_id _
                        Select b.kh_b_id, b.kh_b_title, ch.kh_ch_id, ch.kh_ch_number, v.t_id, v.t_vn, v.t_v

            Dim dtDataTableOne = New DataTable("dtOne")
            dtDataTableOne.Columns.Add("bid", GetType(Integer))
            dtDataTableOne.Columns.Add("btitle", GetType(String))
            dtDataTableOne.Columns.Add("chid", GetType(Integer))
            dtDataTableOne.Columns.Add("chn", GetType(Integer))
            dtDataTableOne.Columns.Add("vid", GetType(Integer))
            dtDataTableOne.Columns.Add("vn", GetType(Integer))
            dtDataTableOne.Columns.Add("verse", GetType(String))

            For Each r In query
                dtDataTableOne.Rows.Add(New Object() {r.kh_b_id, r.kh_b_title, r.kh_ch_id, r.kh_ch_number, r.t_id, r.t_vn, r.t_v})
            Next

            Return dtDataTableOne
        End Function

    Please note that I use the exact same code and database design with Lao Unicode and it works just fine; I get the returned query as expected for the search. I can't figure out what the problem is with searching the Khmer table.
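
    A hedged T-SQL probe (the collation names are standard SQL Server 2008 ones, but whether this is the cause here is an assumption): if the server's default collation assigns no sort weight to Khmer characters, a Khmer keyword compares equal to the empty string and LIKE N'%keyword%' matches every row, which would explain the query returning the whole column. Forcing a 100-level Windows collation on the comparison is one way to test that theory:

        -- Does the current collation give this Khmer letter any weight?
        SELECT CASE WHEN N'ក' = N'' COLLATE SQL_Latin1_General_CP1_CI_AS
                    THEN 'no weight: every LIKE will match'
                    ELSE 'character is distinct' END AS probe;

        -- Retry the search with an explicit 100-level collation (@keyword stands
        -- for the same parameter the LINQ query passes in):
        SELECT t_id, t_v
        FROM   testing_khmers
        WHERE  t_v COLLATE Latin1_General_100_CI_AS LIKE N'%' + @keyword + N'%';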

    Read the article

  • Robocopy unilog output is gibberish

    - by miro
    I tried to get robocopy in Windows 7 to generate a Unicode log, since I have files with Unicode characters. The command I used: robocopy C:\mysource D:\mydest /mir /unilog:backup.log /tee. While the copy works and the onscreen output is correct, the log file itself just contains gibberish. This is regardless of whether I use the Command Prompt or PowerShell. What gives? Am I doing something wrong?
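
    One thing worth checking (a sketch, assuming the log really was written by /unilog): that switch produces UTF-16LE output, so an editor or Get-Content call that assumes ANSI/OEM text will show it as gibberish even when the file is fine. Reading it back with the Unicode encoding makes that easy to verify:

        # Read the /unilog output as UTF-16LE ("Unicode" in PowerShell terms)
        Get-Content .\backup.log -Encoding Unicode | Select-Object -First 20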

    Read the article

  • Delphi - How can I prevent the main form capturing keystrokes in a TMemo on another non-modal form?

    - by user89691
    I have an app that opens a non-modal form from the main form. The non-modal form has a TMemo on it. The main form menu uses "space" as one of its accelerator characters. When the non-modal form is open and the memo has focus, every time I try to enter a space into the memo on the non-modal form, the main form event for the "space" shortcut fires! I have tried setting MainForm.KeyPreview := False while the other form is open, but no dice. Any ideas? TIA

    Read the article
