Search Results

Search found 1649 results on 66 pages for 'unicode normalization'.

Page 15/66 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Won't read Unicode characters over NFS mount?

    - by Julz
    Hello, I'm getting a strange issue when trying to play MP3s containing Unicode characters (accents) over an NFS mount on OS X. It all works over AFP, but I'm set up with NFS because it's a Linux server on the other end. This is my Disk Utility setup: nfs://192.168.1.112/Music, with advanced mount parameters: -P, nolocks, nosuid. The strange thing is that I can see those files in the Finder (with the accents), but I can't play them! So I'm wondering whether it's a Unicode issue, since I can see the files properly, or a permission issue, since I can't play them; but then it wouldn't make sense that ONLY the files containing accents fail to play. Help? Thanks
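
    A plausible culprit (an assumption, not something confirmed in the thread): OS X's HFS+ stores file names in decomposed Unicode (NFD), while Linux servers usually hold precomposed (NFC) names, so a name that looks identical in the Finder can be a different byte sequence on the wire. A minimal Python sketch of the mismatch:

        # Sketch: the same visible name in the two Unicode normalization forms.
        # NFC and NFD "café" are different code-point sequences, so a lookup
        # by one form can miss a file stored under the other.
        import unicodedata

        name = u"caf\u00e9"                       # "café" with a precomposed é
        nfc = unicodedata.normalize("NFC", name)
        nfd = unicodedata.normalize("NFD", name)  # é becomes "e" + combining accent
        print(nfc == nfd)                         # False: equal to the eye, not byte-wise
        print(len(nfc), len(nfd))                 # 4 vs. 5 code points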

    Read the article

  • Font problems after changing non-Unicode locale to Japanese

    - by Kawa
    I recently set my system locale for non-Unicode programs to Japanese and switched it back to American English shortly afterwards. Now, just about all my non-Unicode programs use a larger dialog font without any anti-aliasing instead of Segoe UI 9, as if running on a Japanese locale. One in particular has them switched, using Segoe UI when running on a Japanese locale (be it AppLocale or system) and the Japanese font on American English! Reinstalling that particular program, as a Google search suggested, didn't do anything but reset the toolbar positions. Using Windows 7 Home Premium, and quite confused by all this.

    Read the article

  • UnicodeEncodeError: 'ascii' codec can't encode character [...]

    - by user1461135
    I have read the HOWTO on Unicode from the official docs, and a full, very detailed article as well. Still, I don't get why it throws this error. Here is what I attempt: I open an XML file that contains characters outside the ASCII range (but inside the allowed XML range). I do that with cfg = codecs.open(filename, encoding='utf-8', mode='r'), which runs fine. Looking at the string with repr() also shows me a unicode string. Now I go ahead and read it with parseString(cfg.read().encode('utf-8')). Of course, my XML file starts with this: <?xml version="1.0" encoding="utf-8"?>. Although I suppose it is not relevant, I also declared utf-8 as the encoding of my Python script, but since I am not writing unicode characters directly in it, this should not apply here. Same for the following line, which is also right at the beginning: from __future__ import unicode_literals. Next, I pass the generated object to my own class, where I read tags like this: xmldata.getElementsByTagName(tagName)[0].firstChild.data, and assign them to variables in my class. Now, what works perfectly are these commands (obj is an instance of the class): for element in obj: print element. And this command works as well: print obj.__repr__(). I defined __iter__() to just yield every variable, while __repr__() uses the typical printf stuff: "%s" % self.varname. Both commands print perfectly and can output the unicode character. What does not work is this: print obj. And now I am stuck, because this throws the dreaded UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 47. So what am I missing? What am I doing wrong? I am looking for a general solution; I always want to handle strings as unicode, just to avoid any possible errors and write a compatible program. Edit: I also defined def __str__(self): return self.__repr__() and def __unicode__(self): return self.__repr__(). From the documentation I got that this
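
    For what it's worth, the usual explanation of this Python 2 behaviour (hedged, since the asker's full class isn't shown): print obj calls str(obj), and because __str__ here returns a unicode object, Python 2 silently encodes it with the default ascii codec, which fails on u'\xfc'. print obj.__repr__() works because the unicode object reaches print directly and is encoded with the terminal's encoding instead. A minimal Python 2 sketch of the classic fix, using a hypothetical stand-in class:

        # Python 2 sketch: route __str__ through __unicode__ and encode
        # explicitly, so printing never falls back to the ascii codec.
        class Record(object):              # hypothetical stand-in for the asker's class
            def __init__(self, varname):
                self.varname = varname

            def __unicode__(self):
                return u"%s" % self.varname

            def __str__(self):
                return unicode(self).encode("utf-8")

        r = Record(u"T\xfcr")              # u"Tür"
        print r                            # no UnicodeEncodeError: bytes are ready-made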

    Read the article

  • PASS Precon Countdown… See some of you Monday, and others on Tuesday Night

    - by drsql
    As I finish up the plans for Monday's database design precon, I am getting pretty excited for the day. This is the third time I have done this precon, and while the base slides are very similar, I have a few new twists in mind. One of my big ideas for my Database Design Workshop precon has always been to give people the chance to do some actual design. So even now I am going through and whittling down the slides to make sure that we have the time for design. If you are attending, be prepared to be a team player....(read more)

    Read the article

  • Upcoming Database Design Pre-Cons

    - by drsql
    In July and October, I will be doing my "How To Design a Relational Database" full-day precon in two places: first on July 26 for the East Iowa SQL Saturday, and then for the big daddy, the SQLPASS Summit in Charlotte, NC, on October 14. You can see the entire abstract here on the SQL PASS site. It is essentially the same concept as last year, but this year I am making a few big changes to really give the people what they have desired (and I am truly glad to have a swing at it several months...(read more)

    Read the article

  • If all variables are a subset of the superkey, is the database design 5NF? [migrated]

    - by Lukazoid
    I have a table called LogMessages, which has the following columns:
        Level - a numeric value representing Trace, Debug, Info, Warning, Error or Fatal
        Time - a UTC time
        Message - a foreign key to a Messages table
        Source - a foreign key to a Sources table
        User - a foreign key to a Users table
    From what I can see, all of these columns are part of the super key: if any single value differs from an existing row, a new row can be created. My question is, does this design comply with fifth normal form? I am unsure, as some groups of data will repeat; however, I don't believe this violates 5NF. (Correct me if I'm wrong.)

    Read the article

  • Consolidating hotel data from various booking sites with different IDs or references

    - by Victor
    In one of my projects I have data for hotels, and several booking sites are able to book each hotel. For example: Hotel A - Booking (ID = 4002), Expedia (ID = 123), Priceline (ID = 147). The three booking engines each use their own ID to refer to Hotel A, so I would need to check manually and make the right reference to the hotel. If I have 100,000 hotels, do I have to check manually 300,000 times (assuming 3 booking sites)? They might provide an API, and then I could cross-check the name, address, or latitude/longitude, but if those differ even a little I might attach the wrong reference to the wrong hotel. I'm sure there are better ways to do this. There are many travel sites out there which check hotel prices on many booking sites; how do they make sure they are checking the right hotel on each one? Does anyone have any experience with this?
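
    One standard approach (my suggestion, not from the thread) is record linkage: score each candidate pair on fuzzy name similarity plus geographic distance, auto-accept the clear matches, auto-reject the clear non-matches, and only review the ambiguous middle band by hand. A rough Python sketch, where the field names and thresholds are illustrative assumptions:

        import math
        from difflib import SequenceMatcher

        def name_similarity(a, b):
            # Fuzzy match on lowercased hotel names, 0.0..1.0.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def distance_km(lat1, lon1, lat2, lon2):
            # Haversine great-circle distance between two coordinates.
            dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = (math.sin(dlat / 2) ** 2
                 + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
                 * math.sin(dlon / 2) ** 2)
            return 6371 * 2 * math.asin(math.sqrt(a))

        def same_hotel(h1, h2):
            # Accept automatically only when both signals agree;
            # anything borderline goes to a manual-review queue.
            sim = name_similarity(h1["name"], h2["name"])
            dist = distance_km(h1["lat"], h1["lon"], h2["lat"], h2["lon"])
            if sim > 0.90 and dist < 0.2:
                return True          # near-certain duplicate
            if sim < 0.50 or dist > 2.0:
                return False         # near-certain distinct
            return None              # ambiguous: queue for human review

        a = {"name": "Hotel Continental", "lat": 48.8584, "lon": 2.2945}
        b = {"name": "HOTEL CONTINENTAL", "lat": 48.8585, "lon": 2.2946}
        print(same_hotel(a, b))      # True: identical after lowercasing, ~13 m apart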

    Read the article

  • Unicode collations problem?

    - by Bayonian
    (.NET 3.5 SP1, VS 2008, VB.NET, MSSQL Server 2008) I'm writing a small web app to test Khmer Unicode and Lao Unicode. I have a table that stores text in Khmer Unicode, with the following structure:
        [t_id] [int] IDENTITY(1,1) NOT NULL
        [t_chid] [int] NOT NULL
        [t_vn] [int] NOT NULL
        [t_v] [nvarchar](max) NOT NULL
    I can use LINQ to SQL to do CRUD normally. The text displays properly on the web page, even though I didn't change the default collation of MSSQL Server 2008. When it comes to searching the column [t_v], the page takes a very long time to load; in fact, it loads every row of that column. It never compares against the keyword criteria that I use for the search. Here's my query for the search:
        Public Shared Function SearchTestingKhmerTable(ByVal keyword As String) As DataTable
            Dim db As New BibleDataClassesDataContext()
            Dim query = From b In db.khmer_books _
                        From ch In db.khmer_chapters _
                        From v In db.testing_khmers _
                        Where v.t_v.Contains(keyword) And ch.kh_book_id = b.kh_b_id And v.t_chid = ch.kh_ch_id _
                        Select b.kh_b_id, b.kh_b_title, ch.kh_ch_id, ch.kh_ch_number, v.t_id, v.t_vn, v.t_v
            Dim dtDataTableOne = New DataTable("dtOne")
            dtDataTableOne.Columns.Add("bid", GetType(Integer))
            dtDataTableOne.Columns.Add("btitle", GetType(String))
            dtDataTableOne.Columns.Add("chid", GetType(Integer))
            dtDataTableOne.Columns.Add("chn", GetType(Integer))
            dtDataTableOne.Columns.Add("vid", GetType(Integer))
            dtDataTableOne.Columns.Add("vn", GetType(Integer))
            dtDataTableOne.Columns.Add("verse", GetType(String))
            For Each r In query
                dtDataTableOne.Rows.Add(New Object() {r.kh_b_id, r.kh_b_title, r.kh_ch_id, r.kh_ch_number, r.t_id, r.t_vn, r.t_v})
            Next
            Return dtDataTableOne
        End Function
    Please note that I use the exact same code and database design with Lao Unicode, and it works just fine: I get the expected results for the search. I can't figure out what the problem is with searching the Khmer table.

    Read the article

  • Robocopy unilog output is gibberish

    - by miro
    I tried to get robocopy in Windows 7 to generate a Unicode log, since I have files with Unicode characters. The command I used:
        robocopy C:\mysource D:\mydest /mir /unilog:backup.log /tee
    The copy works and the on-screen output is correct, but the log file itself contains only gibberish. This happens regardless of whether I use the Command Prompt or PowerShell. What gives? Am I doing something wrong?

    Read the article

  • Get Unicode characters with curl in PHP

    - by saturngod
    I tried to fetch a URL with curl. The return value contains Unicode characters, but they arrive as \u escape sequences instead of the characters themselves. How do I get the actual Unicode characters with curl? This is my code:
        <?php
        $ch = curl_init();
        // set URL and other appropriate options
        curl_setopt($ch, CURLOPT_URL, "http://www.ornagai.com/index.php/api/word/q/test/format/json");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $return = curl_exec($ch);
        echo $return;
        ?>
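
    Worth noting (an observation about JSON, not something stated in the thread): the \u sequences come from the server's JSON encoder, not from curl, so decoding the JSON (json_decode() in PHP) restores the real characters. The same round trip, sketched in Python:

        # Sketch: \uXXXX in a JSON payload is escaping, not data loss;
        # decoding the JSON restores the original characters.
        import json

        payload = '{"word": "caf\\u00e9"}'   # escaped, as an API would return it
        print(json.loads(payload)["word"])   # prints: café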

    Read the article

  • Difference between WinMain and wWinMain

    - by Sherwood Hu
    The only difference is that WinMain takes char* for the lpCmdLine parameter, while wWinMain takes wchar_t*. On Windows XP, if an application's entry point is WinMain, does Windows convert the command line from Unicode to ANSI and pass it to the application? If the command line must be in Unicode (for example, a Unicode file name, where conversion would lose some characters), does that mean I must use wWinMain as the entry function?

    Read the article

  • Saving a file as a CSV type in Excel always removes the BOM

    - by rickp
    I've been trying (unsuccessfully) to find a reasonable solution or explanation for why Excel defaults to removing the BOM when saving a file as the CSV type. Please forgive me if you find this a duplicate of this question; that one handles reading CSV files with non-ASCII encodings, but it doesn't cover saving the file back out (which is where the biggest issue lies). Here is my current situation (which I gather is common among localized software dealing with Unicode characters and the CSV format): we export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE), and we validate with a hex editor after the file is generated to ensure it was set correctly. Opening the file in Excel (for this example we're exporting Japanese characters) shows that Excel loads the file with the correct encoding. Attempting to save the file prompts a warning indicating that the file may contain features that are not compatible with Unicode encoding, and asks if you'd like to save anyway. If you open the Save As dialog, it immediately asks you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file, it removes the BOM (obviously along with all the Japanese characters). Why would this happen? Is there a solution to this problem, or is this a known 'bug'/limitation of Excel? Additionally (as a side issue), it appears that Excel, when loading UTF-16LE-encoded CSV files, only uses TAB delimiters. Again, is this another known 'bug'/limitation of Excel?
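
    One pragmatic workaround (an assumption on my part, not from the thread): stop fighting the CSV type and produce what Excel itself calls "Unicode Text", i.e. UTF-16 with a BOM and tab delimiters, which round-trips Japanese characters. A Python sketch:

        # Sketch: write a tab-delimited "Unicode Text" file the way Excel
        # saves one: UTF-16 with a BOM, tabs between fields, CRLF line ends.
        import io

        rows = [[u"id", u"name"], [u"1", u"\u65e5\u672c\u8a9e"]]   # "日本語"

        with io.open("export.txt", "w", encoding="utf-16", newline="") as f:
            for row in rows:
                f.write(u"\t".join(row) + u"\r\n")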

    Read the article

  • T-SQL - Date rounding and normalization

    - by arun prakash
    Hi: I have a stored procedure that rounds a datetime column (yyyy-mm-dd hh:mm:ss) to the nearest 10-minute mark (yyyy-mm-dd hh:mm): 20100303 09:46:30 -> 20100303 09:50. But I want to change it to round to the nearest 15-minute mark instead: 20100303 09:46:30 -> 20100303 09:45. Here is my code:
        IF OBJECT_ID(N'[dbo].[SPNormalizeAddWhen]') IS NOT NULL
            DROP PROCEDURE [dbo].[SPNormalizeAddWhen]
        GO
        CREATE PROCEDURE [dbo].[SPNormalizeAddWhen]
        AS
            declare @colname nvarchar(20)
            set @colname = 'Normalized Add_When'
            if not exists (select * from syscolumns where id = object_id('Risk') and name = @colname)
                exec('alter table Risk add [' + @colname + '] datetime')
            declare @sql nvarchar(500)
            set @sql = 'update Risk set [' + @colname + ']=cast(DATEPART(yyyy,[add when]) as nvarchar(4)) + ''-'' + cast(DATEPART(mm,[add when]) as nvarchar(2)) + ''-'' + cast(DATEPART(dd,[add when]) as nvarchar(2)) + '' '' + cast(DATEPART(Hh,[add when]) as nvarchar(2)) + '':'' + cast(round(DATEPART(Mi,[add when]),-1) as nvarchar(2))'
            print @sql
            exec(@sql)
        GO
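
    The reason round(DATEPART(Mi, ...), -1) gives 10-minute steps is that it rounds at the tens digit; there is no digit position for 15, so the usual trick is divide by the step size, round, and multiply back. The arithmetic, sketched in Python (the same idea ports to DATEADD/DATEDIFF in T-SQL):

        from datetime import datetime, timedelta

        def round_to_15(dt):
            # Divide the offset within the hour by 15 minutes, round to the
            # nearest step, then multiply back: 09:46:30 -> 09:45, 09:53 -> 10:00.
            step = timedelta(minutes=15)
            base = dt.replace(minute=0, second=0, microsecond=0)
            steps = round((dt - base) / step)     # nearest multiple of 15 minutes
            return base + steps * step

        print(round_to_15(datetime(2010, 3, 3, 9, 46, 30)))   # 2010-03-03 09:45:00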

    Read the article

  • Database normalization question

    - by chchrist
    Hi all, I am trying to build a fashion boutique site. On this site, each product (t-shirt, jeans, etc.) belongs to a collection, and each collection has looks (t-shirt, jeans, accessories). A product can belong to one collection and to multiple looks. How should I design the database?

    Read the article

  • Database normalization and duplicate values

    - by bretddog
    Consider a Parent / Child / GrandChild structure in a database table schema, or an even deeper hierarchy, all within the same aggregate. One table, DAYS, keeps a single row per day and has a "Date" field. This is the root table, or maybe a child of the root. No row can ever be deleted from this table. In this case, however complex my table schema looks, and however far away in the hierarchy any other table is, is there any reason why any other table would hold a Date value? Couldn't it instead just have an FK to the DAYS table? I assume, obviously, that these date fields are not created before the corresponding date exists in the DAYS table. I'm thinking of just the date part as relevant here, not the time part; not all databases may be able to store these separately, which is maybe relevant, but not really the focus of the question.

    Read the article

  • Unicode To ASCII Conversion

    - by Yuvaraj
    Hi all, I am creating a small application in Delphi 2009. Here is my problem: when I run my application on Windows XP it works, but it does not work on Windows 95. I know the problem is that 95 does not support Unicode. If anyone knows a solution, please tell me. I also have one more idea: converting Unicode to ASCII. Is that possible? Please tell me how to do it. Thanks in advance. Warm regards, Yuvaraj
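
    Converting Unicode to ASCII is inherently lossy, but a common best effort is to decompose accented letters and drop the combining marks; characters with no ASCII equivalent simply disappear. The idea, sketched in Python (a Delphi port would need an equivalent normalization routine):

        # Sketch: best-effort Unicode -> ASCII via NFKD decomposition.
        # "é" decomposes to "e" + combining accent; the accent is dropped.
        # Characters with no ASCII decomposition (e.g. CJK) are lost entirely.
        import unicodedata

        def to_ascii(text):
            decomposed = unicodedata.normalize("NFKD", text)
            return decomposed.encode("ascii", "ignore").decode("ascii")

        print(to_ascii(u"Cr\xe8me br\xfbl\xe9e"))   # "Creme brulee"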

    Read the article

  • Normalizing two types of customers into one table

    - by JDewzy
    I am trying to model a sales situation where you can sell either to a person or to a business with a contact person, and I cannot figure out the proper way to do this. Two tables seems incorrect, but how do I model a Customer table that can hold either a business or a person? Would I just have a boolean for "business" and an additional "business_name" field that defaults to NULL? But then I have to do an if/then on the columns, and that seems like poor design. Any advice, direction, or links are appreciated.

    Read the article

  • Why do we need to put N before strings in Microsoft SQL Server?

    - by user61752
    I'm learning T-SQL. From the examples I've seen, to insert text into a varchar() column I can just write the string, but for nvarchar() columns every example prefixes the strings with the letter N. I tried the following query on a table which has nvarchar() columns, and it works fine, so the N prefix appears not to be required:
        insert into [TableName] values ('Hello', 'World')
    Why are the strings prefixed with N in every example I've seen? What are the pros and cons of using this prefix?
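
    The short version of why the N matters (standard SQL Server behaviour, summarized here): a literal without N is a varchar in the server's default code page before it is ever assigned to the nvarchar column, so characters outside that code page are already lost; 'Hello' and 'World' survive only because they are plain ASCII. A Python stand-in for that lossy step:

        # Sketch (Python stand-in): squeezing text through a single-byte
        # code page, as happens to a non-N'...' literal, silently destroys
        # anything the code page cannot represent.
        text = u"\u039a\u03b1\u03bb\u03b7\u03bc\u03ad\u03c1\u03b1"   # Greek "Καλημέρα"
        lossy = text.encode("cp1252", "replace")    # like varchar in a Latin code page
        print(lossy.decode("cp1252"))               # "????????" - the data is gone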

    Read the article

  • Error while zipping files with unicode characters in names with Win7's "send to > compressed (zipped) folder"

    - by user1306322
    When I try to zip files containing Unicode characters such as © or ™ in their names, I get the following error:
        [Window Title] Compressed (zipped) Folders Error
        [Content] 'C:\Asd™.txt' cannot be compressed because it includes characters that cannot be used in a compressed folder, such as ™. You should rename this file or directory.
        [OK]
    This only became a problem when I reinstalled Windows 7. I probably used to have whatever resources were needed for this error to be resolved automatically, but it's an almost clean installation now and I can't zip files. How do I fix this? UPD: Some time has passed since I posted this question. I have installed some of my usual applications, but the problem still exists, and I'm not sure whether it can be fixed by installing some specific application from before.
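
    One workaround (my suggestion, not from the thread): bypass Explorer's zip handler entirely; a third-party archiver such as 7-Zip copes with these names, and so does a short script. A Python sketch, using the path from the error message:

        # Sketch: zip a file whose name contains characters Explorer's
        # "Compressed (zipped) folder" handler rejects, such as the TM sign.
        import zipfile

        with zipfile.ZipFile("out.zip", "w", zipfile.ZIP_DEFLATED) as zf:
            zf.write(u"C:\\Asd\u2122.txt", arcname=u"Asd\u2122.txt")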

    Read the article

  • Cannot set "Language for Non-Unicode Programs" in Regional and Language Settings

    - by cornjuliox
    I'm trying to set the Language for Non-Unicode Programs from English to Japanese (I'm using Windows XP SP 3), but it won't let me. It looks like I've got the East Asian Language packs installed, but when I select "Japanese" from the drop-down box and hit "Apply" I get an error that says "Setup was unable to install the chosen locale. Please contact your system administrator". I'm already logged in as the administrator, and I've restarted several times but it still won't let me. Can anyone give me an idea as to how to solve this problem? Reinstalling Windows is absolutely out of the question.

    Read the article

  • Problems with vim/locale as non-root user on Solaris

    - by Lyle
    I do some work on a Solaris 10 machine, and my .vimrc is set up to show Unicode characters for tabs and line endings:
        set listchars=tab:?\ ,eol:¬
    This works out of the box on my OS X machine. On Linux as well as on Solaris, I get the following error when I start vim:
        Error detected while processing /home/lhanson/.vimrc:
        line 17:
        E474: Invalid argument: listchars=tab:?~V?\ ,eol:¬
    I solved this on my Linux box by setting LANG=en_US.utf8 ('locale -a' shows this as being an option). On Solaris, however, 'locale -a' shows only the following:
        C
        POSIX
        iso_8859_1
    Setting LANG to C or POSIX yields the same error, and even though iso_8859_1 probably wouldn't work, it doesn't successfully change the locale anyway. As a non-root user, is there any way I can get my Unicode characters to show up?

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >