Search Results

Search found 1714 results on 69 pages for 'utf8 decode'.

Page 48/69 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • Storing apostrophes, exclamation marks, etc. in mysql database

    - by rein
    I changed from latin1 to utf8. Although all sorts of text were displaying fine, I noticed that non-English characters were stored in the database as weird symbols. I spent a day trying to fix that, and now non-English characters display as non-English characters in the database and display the same in the browser. However, I noticed that apostrophes are stored as &#39; and exclamation marks as &#33;. Is this normal, or should they appear as ' and ! in the database instead? If so, what would I need to do to fix that?

    Read the article
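
    A likely cause (an assumption, not stated in the question): the text is passed through htmlentities()/htmlspecialchars() with ENT_QUOTES before the INSERT, so the entities themselves end up in the table. A minimal PHP sketch of the usual pattern - store raw text, escape only on output (table and column names are made up):

      <?php
      // Store the raw text; a prepared statement handles quoting safely.
      $pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'pass');
      $stmt = $pdo->prepare('INSERT INTO comments (body) VALUES (?)');
      $stmt->execute(["Don't shout!"]);   // stored exactly as typed, with ' and !

      // Escape only when printing HTML, so the database keeps plain characters.
      $row = $pdo->query('SELECT body FROM comments ORDER BY id DESC LIMIT 1')->fetch();
      echo htmlspecialchars($row['body'], ENT_QUOTES, 'UTF-8');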

  • Unique constraint with nullable column

    - by Álvaro G. Vicario
    I have a table that holds nested categories. I want to avoid duplicate names among same-level items (i.e., categories with the same parent). I've come up with this: CREATE TABLE `category` ( `category_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `category_name` varchar(100) NOT NULL, `parent_id` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`category_id`), UNIQUE KEY `category_name_UNIQUE` (`category_name`,`parent_id`), KEY `fk_category_category1` (`parent_id`,`category_id`), CONSTRAINT `fk_category_category1` FOREIGN KEY (`parent_id`) REFERENCES `category` (`category_id`) ON DELETE SET NULL ON UPDATE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish_ci Unfortunately, category_name_UNIQUE does not enforce my rule for root-level categories (those where parent_id is NULL). Is there a reasonable workaround?

    Read the article
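
    The behaviour is by design: MySQL never treats two NULLs in a UNIQUE index as equal, so root categories can repeat names. One hedged workaround, assuming MySQL 5.7+ for generated columns (on older servers a dummy root row with id 0 used as a sentinel parent achieves the same effect):

      -- Map NULL parents to 0 so root-level duplicates collide in the unique key.
      ALTER TABLE `category`
        ADD COLUMN `parent_key` INT UNSIGNED AS (COALESCE(`parent_id`, 0)) STORED;

      ALTER TABLE `category`
        DROP KEY `category_name_UNIQUE`,
        ADD UNIQUE KEY `category_name_UNIQUE` (`category_name`, `parent_key`);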

  • Flask Admin didn't show all fields

    - by twoface88
    I have a model like this: class User(db.Model): __tablename__ = 'users' __table_args__ = {'mysql_engine' : 'InnoDB', 'mysql_charset' : 'utf8'} id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) _password = db.Column('password', db.String(80)) def __init__(self, username = None, email = None, password = None): self.username = username self.email = email self._set_password(password) def _set_password(self, password): self._password = generate_password_hash(password) def _get_password(self): return self._password def check_password(self, password): return check_password_hash(self._password, password) password = db.synonym("_password", descriptor=property(_get_password, _set_password)) def __repr__(self): return '<User %r>' % self.username I have a ModelView: class UserAdmin(sqlamodel.ModelView): searchable_columns = ('username', 'email') excluded_list_columns = ['password'] list_columns = ('username', 'email') form_columns = ('username', 'email', 'password') But no matter what I do, Flask-Admin doesn't show the password field when I'm editing user info. Is there any way, even just to edit the hash? UPDATE: https://github.com/mrjoes/flask-admin/issues/78

    Read the article
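
    A hedged explanation: Flask-Admin scaffolds its forms from mapped columns, and password here is a synonym/property, so the scaffolder drops it. The field can be declared by hand; the attribute names below come from the modern flask_admin.contrib.sqla API and may differ in the older sqlamodel module used in the question:

      from flask_admin.contrib.sqla import ModelView
      from wtforms import PasswordField

      class UserAdmin(ModelView):
          column_list = ('username', 'email')
          column_searchable_list = ('username', 'email')
          form_columns = ('username', 'email', 'password')
          # Declare the field explicitly so the scaffolder does not skip the synonym.
          form_extra_fields = {'password': PasswordField('Password')}

          def on_model_change(self, form, model, is_created):
              # The model's password setter hashes the value (see the model above).
              if form.password.data:
                  model.password = form.password.data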

  • Howto convert to string and read data from TCP packet

    - by salime
    I used SharpPcap to capture TCP packets. Now I want to reconstruct an HTTP packet from the TCP packets, but I don't know how. I read somewhere that I can find the start of the HTTP packet in the TCP data... I tried to convert the byte[] TCP data to a string using this code: string s = System.Text.Encoding.UTF8.GetString(tcp_pack.Data); but the string isn't readable, like a binary file opened with Notepad. Is it because the data is encrypted, or is the code incorrect? How can I reconstruct an HTTP packet from TCP packets?

    Read the article
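
    Two hedged observations: if the capture is HTTPS the payload is encrypted and will never decode to text, and even plain HTTP bodies are often gzip-compressed, so only the header block is reliably readable. TCP is also a byte stream, so the segments of one connection have to be concatenated in sequence order before looking for the message. A C# sketch that reassembles ordered payloads and extracts the header block:

      using System.Collections.Generic;
      using System.Linq;
      using System.Text;

      static class HttpReassembly
      {
          // payloads: tcp_pack.Data for each segment of one connection, already ordered.
          public static string ExtractHeaders(IEnumerable<byte[]> payloads)
          {
              byte[] stream = payloads.SelectMany(p => p).ToArray();
              byte[] marker = Encoding.ASCII.GetBytes("\r\n\r\n");
              for (int i = 0; i + marker.Length <= stream.Length; i++)
              {
                  if (stream.Skip(i).Take(marker.Length).SequenceEqual(marker))
                      return Encoding.ASCII.GetString(stream, 0, i);  // header block only
              }
              return null;  // no complete header yet; wait for more segments
          }
      }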

  • Turkish characters are not displayed correctly

    - by tfeseas
    The MySQL database uses utf-8 encoding and the data is stored correctly. I run a SET NAMES utf8 query to make sure the data comes back UTF-8 encoded. All variables from the database work fine as long as the header charset is utf-8, but the static HTML characters do not display properly. When I set the header charset to ISO-8859-9, the variables are displayed differently while the HTML characters work OK. Can anyone help me? <?php header('Content-Type: text/html; charset=ISO-8859-9'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head><title>noname</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Read the article
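
    This symptom usually means the .php file itself is saved in a different encoding (here ISO-8859-9) than the UTF-8 coming from the database, so one of the two is always wrong whichever header is sent. A hedged sketch of the usual fix - re-save the file as UTF-8 and declare UTF-8 end to end (mysqli is used for illustration; the question does not say which API it uses):

      <?php
      header('Content-Type: text/html; charset=UTF-8');

      $db = new mysqli('localhost', 'user', 'pass', 'mydb');
      $db->set_charset('utf8');   // same effect as a SET NAMES utf8 query
      ?>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      <!-- Static Turkish text in this file now matches the declared charset,
           provided the file is saved as UTF-8 in the editor. -->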

  • Writing Lucene StandardAnalyzer results to text file with OutputStreamWriter

    - by user3693192
    I'm getting ONLY the last result written to "outputStreamFile.txt". I can't figure out how to revise the code so that ALL results are written to the text file. Sample input text: "1st line of text\n" "2nd line of text \n" results in only the 2nd line being written (and not the 1st line), as: "2nd line text\n" private static void analyze(String text) throws IOException { analyzer = new StandardAnalyzer(Version.LUCENE_30); Reader r = new StringReader(text); TokenStream ts = (TokenStream) analyzer.tokenStream("", r); TermAttribute term = ts.addAttribute(TermAttribute.class); File outfile = new File("C:\\Users\\Desktop\\outputStreamFile.txt"); FileOutputStream fileOutputStream = new FileOutputStream(outfile); OutputStreamWriter outputStreamWriter = new OutputStreamWriter(fileOutputStream, "UTF8"); while(ts.incrementToken()) { //System.out.print(term.term() + " "); outputStreamWriter.write(term.term().toString() + "\r\n"); } outputStreamWriter.close(); }

    Read the article
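
    A hedged guess, assuming analyze() is called once per input line: new FileOutputStream(outfile) truncates the file on every call, so each line overwrites the previous one and only the last survives. Opening the stream in append mode (or opening the writer once, outside the per-line loop) keeps all results. A minimal Java sketch of the file-handling fix only:

      import java.io.*;

      class AppendDemo {
          static void writeTokens(File outfile, String line) throws IOException {
              // true = append; repeated calls no longer overwrite earlier output.
              try (OutputStreamWriter w = new OutputStreamWriter(
                      new FileOutputStream(outfile, true), "UTF-8")) {
                  w.write(line + "\r\n");
              }
          }
      }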

  • Glib::ustring and Japanese characters

    - by user294787
    Glib::ustring is supposed to work well with UTF-8, but I have a problem when working with Japanese strings. If you compare those two strings, "???" and "???", using the == operator or the compare method, it will answer that the two strings are equal. I don't understand why. How does Glib::ustring work? The only way I found to get false from the comparison is to compare strings of different sizes, for example "?????" and "????". Very strange...

    Read the article
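
    A hedged guess at the cause: if the source file is not compiled as UTF-8, both literals can reach Glib::ustring as the same bytes (for example a run of '?', as in this snippet), and reporting them equal is then correct. Writing the characters as explicit UTF-8 byte sequences makes the comparison behave as expected:

      #include <glibmm/ustring.h>
      #include <iostream>

      int main() {
          Glib::ustring a = "\xE6\x97\xA5\xE6\x9C\xAC\xE8\xAA\x9E";  // 日本語 as UTF-8 bytes
          Glib::ustring b = "\xE4\xB8\xAD\xE5\x9B\xBD\xE8\xAA\x9E";  // 中国語 as UTF-8 bytes
          std::cout << a.validate() << ' ' << b.validate() << '\n';  // 1 1 (well-formed)
          std::cout << (a == b) << ' ' << a.raw().size() << '\n';    // 0 9 (not equal, 9 bytes)
          return 0;
      }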

  • Table character encoding - exception in application

    - by zgnilec
    I have this code: CREATE TABLE IF NOT EXISTS Person ( name varchar(24) ... ) CHARACTER SET utf8 COLLATE utf8_polish_ci; This works OK in my application, but I read that if someone puts into the name field a string that contains a character whose code is greater than 127, the database will use 2 bytes (or more) to store that character. So I thought I would change the character set to utf16: CHARACTER SET utf16 COLLATE utf16_polish_ci; But now when I run my application, an exception appears: KeyNotFoundException. It appears exactly at these instructions: MySqlCommand komenda = baza.Polaczenie.CreateCommand (); komenda.CommandText = zapytanie; MySqlDataReader dr = komenda.ExecuteReader (); // HERE, at the ExecuteReader method if (dr.Read ()) ... 1) Has anyone had a similar problem? 2) Any idea how to always use 2 bytes/char in a database field?

    Read the article
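
    A hedged explanation: older MySQL Connector/NET releases do not know the utf16 character set introduced in MySQL 5.5, so the charset lookup fails with KeyNotFoundException. If a fixed two-byte encoding is really wanted, it can be applied per column with ucs2 while the table and connection stay utf8 (utf8 itself only spends extra bytes on the characters that need them, so the gain is doubtful):

      CREATE TABLE IF NOT EXISTS Person (
          name VARCHAR(24) CHARACTER SET ucs2 COLLATE ucs2_polish_ci
      ) DEFAULT CHARACTER SET utf8 COLLATE utf8_polish_ci;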

  • Socket receive buffer size

    - by Kanishka
    Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response, but I am not sure of the receive buffer size. IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"),20060); Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); server.Connect(ipep); String OutStr= "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50"; byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray(); int byteCount = server.Send(temp); byte[] bytes = new byte[255]; int res=0; res = server.Receive(bytes); return Encoding.UTF8.GetString(bytes);

    Read the article
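
    A hedged note: the length of the reply generally cannot be known up front; Receive() reports how many bytes actually arrived, so the usual pattern is to loop and append (Socket.ReceiveBufferSize only describes the OS-level buffer, not the message size). A C# sketch of such a read loop:

      using System.Net.Sockets;
      using System.Text;

      static class SocketReadAll
      {
          public static string ReadToEnd(Socket server)
          {
              var sb = new StringBuilder();
              var buffer = new byte[4096];
              int n;
              while ((n = server.Receive(buffer)) > 0)      // 0 = remote side closed
              {
                  sb.Append(Encoding.UTF8.GetString(buffer, 0, n));
                  if (server.Available == 0) break;         // heuristic: nothing more queued
              }
              return sb.ToString();
          }
      }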

  • sql server bulk copy out/postgres copy from infile

    - by Chris Curvey
    I'm starting a conversion of a system from MS SQL Server to Postgres. I have the table structures converted, and I use "bcp" to get the data out of SQL Server. When I load the resulting file with the Postgres COPY command, I get: ERROR: invalid byte sequence for encoding "UTF8": 0x80 HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". CONTEXT: COPY cm_outgoing, line 200: "200 c:\temp\200.xml 2009-10-10 01:50:44.000 1900-01-01 00:00:00.000" I've already used "sed" to get rid of the NUL (0x00) entries in the file, and I can't find any instances of 0x80 in the file that I'm trying to import. Any thoughts? Is there an easier way?

    Read the article
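
    A hedged reading of the error: 0x80 is a legal byte in Windows-1252 (bcp's typical output encoding on Windows) but never valid in UTF-8, so the file's encoding rather than its content is the likely culprit; the byte can also hide inside a field that text editors render harmlessly. Declaring the real encoding to Postgres, or converting the file first, are the two usual fixes (the WIN1252 guess should be checked against what bcp actually produced):

      -- Tell the server what the dump file really contains:
      SET client_encoding = 'WIN1252';
      COPY cm_outgoing FROM '/path/to/cm_outgoing.dat';

      -- Or convert up front and keep client_encoding at UTF8:
      --   iconv -f WINDOWS-1252 -t UTF-8 cm_outgoing.dat > cm_outgoing.utf8.dat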

  • Replacing whitespace with sed in a CSV (to use w/ postgres copy command)

    - by Wells
    I iterate through a collection of CSV files in bash, running: iconv --from-code=ISO-8859-1 --to-code=UTF-8 ${FILE} | \ sed -e 's/\"//g' | \ sed -e 's/, /,/g' \ > ${FILE}.utf8 Running iconv to fix UTF-8 characters, then the first sed call removes the double quote characters, and the final sed call is supposed to remove leading and trailing whitespace around the commas. HOWEVER, I still have a line like this in the saved file: FALSE,,,, 2.40,, The COPY command in postgres is kind of dumb, so it thinks " 2.40" is not valid syntax for a numeric value. Where am I going wrong w/ my processing of the CSV file? Thanks!

    Read the article
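
    One hedged observation: 's/, /,/g' removes exactly one ASCII space after a comma, so runs of spaces, tabs, or other whitespace survive, and that is the " 2.40" COPY is rejecting. A stricter cleanup that trims whitespace on both sides of every comma and at line ends:

      iconv --from-code=ISO-8859-1 --to-code=UTF-8 "${FILE}" \
        | sed -e 's/"//g' \
              -e 's/[[:space:]]*,[[:space:]]*/,/g' \
              -e 's/^[[:space:]]*//; s/[[:space:]]*$//' \
        > "${FILE}.utf8"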

  • headers already sent displays after moving files from one server to another

    - by PERR0_HUNTER
    Hey there! I have a project running on DreamHost hosting and it's working fine, but since DH has been getting really slow I'm moving the project to my new dedicated server. The thing is that after I move all of my files over to the new dedicated server (Ubuntu 8.04) I see warnings all over the place telling me that the headers have already been sent. The first thing I tried was moving the files via FTP: download to my machine, upload to the server. That didn't work. The second try was to tar.gz the folder on the first server and untar it on the new one; that didn't work either. I tried changing the encoding to ANSI and the warnings stop, but most of my files contain accents, so ANSI is not an option; I need UTF-8 for all my files. Any ideas on how to fix this? I'm sure it must be some sort of config.

    Read the article
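
    A hedged diagnosis: "headers already sent" that disappears when files are saved as ANSI usually points at a UTF-8 byte-order mark (EF BB BF) at the start of each file; PHP emits those three bytes as output before header() can run, and the old server's output_buffering setting probably hid it. Re-saving as "UTF-8 without BOM" keeps the accents and stops the warnings. A sketch for finding and stripping the BOMs (GNU grep/sed; the path is made up, test on copies first):

      # list PHP files that start with a UTF-8 BOM
      grep -rlI $'^\xEF\xBB\xBF' /var/www/project --include='*.php'

      # strip the BOM in place
      find /var/www/project -name '*.php' -exec sed -i $'1s/^\xEF\xBB\xBF//' {} +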

  • What does addListener do in node.js?

    - by Jeffrey
    I am trying to understand the purpose of addListener in node.js. Can someone explain please? Thanks! A simple example would be: var tcp = require('tcp'); var server = tcp.createServer(function (socket) { socket.setEncoding("utf8"); socket.addListener("connect", function () { socket.write("hello\r\n"); }); socket.addListener("data", function (data) { socket.write(data); }); socket.addListener("end", function () { socket.write("goodbye\r\n"); socket.end(); }); }); server.listen(7000, "localhost");

    Read the article
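
    In short: addListener(event, callback) registers a handler for a named event on an EventEmitter, and it is the same method as the shorter on(); the snippet above just subscribes to the socket's "connect", "data" and "end" events. The same echo server written against the current net module (the old "tcp" module was later renamed):

      var net = require('net');

      var server = net.createServer(function (socket) {
        socket.setEncoding('utf8');
        socket.on('data', function (data) {   // identical to socket.addListener('data', ...)
          socket.write(data);
        });
        socket.on('end', function () {
          console.log('client disconnected');
        });
      });

      server.listen(7000, 'localhost');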

  • No difference between nullable:true and nullable:false in Grails 1.3.6?

    - by knorv
    The following domain model definition .. class Test { String a String b static mapping = { version(false) table("test_table") a(nullable: false) b(nullable: true) } } .. yields the following MySQL schema .. CREATE TABLE test_table ( id bigint(20) NOT NULL AUTO_INCREMENT, a varchar(255) NOT NULL, b varchar(255) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; Please note that a and b get identical MySQL column definitions despite the fact a is defined as non-nullable and b is nullable in the GORM mappings. What am I doing wrong? I'm running Grails 1.3.6.

    Read the article
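
    A hedged explanation: nullable is a constraint, not a mapping setting, so inside static mapping Grails ignores it and every property keeps the default NOT NULL. Moving it to a constraints block should produce the expected DDL (sketch, not verified against 1.3.6 specifically):

      class Test {
          String a
          String b

          static mapping = {
              version false
              table 'test_table'
          }

          static constraints = {
              a nullable: false
              b nullable: true   // column b becomes nullable in the generated schema
          }
      }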

  • Best way to have unique key over 500M varchar(255) records in mysql/innodb?

    - by taw
    I have a url column with a unique key over it - but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't all fit in memory. So I was thinking, how about adding a column of md5(url) with 16 bytes of binary data and unique-keying that instead. What would be the best datatype for that? I'd love to be able to just see the 32-character hex hash, while MySQL would convert it to/from 16 binary bytes and index that, as programs using the database might have some trouble with arbitrary binary data that I'd rather avoid if possible (also I'm a bit afraid that MySQL might get some strange ideas about character sets and, for example, overallocate storage for it by 3:1 because it thinks it might need utf8 - how do I avoid that for sure?).

    Read the article
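
    A hedged sketch of that approach (the table name urls is made up): BINARY(16) has no character set at all, so there is no 3:1 utf8 inflation, and HEX()/UNHEX() give the readable 32-character form at the edges while the index only ever sees 16 fixed-width bytes:

      ALTER TABLE urls ADD COLUMN url_md5 BINARY(16);
      UPDATE urls SET url_md5 = UNHEX(MD5(url));
      ALTER TABLE urls ADD UNIQUE KEY uk_url_md5 (url_md5);

      -- hex in, hex out
      SELECT HEX(url_md5) FROM urls
      WHERE url_md5 = UNHEX(MD5('http://example.com/some/path'));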

  • Working with Japanese filenames in PHP 5.3 and Windows Vista?

    - by Jon
    I'm currently trying to write a simple script that looks in a folder, and returns a list of all the file names in an RSS feed. However I've hit a major wall... Whenever I try to read filenames with Japanese characters in them, it shows them as ?'s. I've tried the solutions mentioned here: http://stackoverflow.com/questions/482342/php-readdir-problem-with-japanese-language-file-name - however they do not work for some reason, even with: header('Content-Type: text/html; charset=UTF-8'); setlocale(LC_ALL, 'en_US.UTF8'); mb_internal_encoding("UTF-8"); At the top (Exporting as plain text until I can sort this out). What can I do? I need this to work and I don't have much time.

    Read the article
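
    A hedged workaround that has been reported to help on older PHP: the filesystem functions in PHP 5.3 on Windows go through the ANSI code page, so names outside it degrade to '?'. Reading the directory through COM (which is Unicode-aware) sidesteps that; PHP 7.1+ later added native UTF-8 filename support, making this unnecessary. The path below is made up:

      <?php
      $fso   = new COM('Scripting.FileSystemObject', null, CP_UTF8);
      $files = array();
      foreach ($fso->GetFolder('C:\\feeds\\incoming')->Files as $f) {
          $files[] = $f->Name;   // file name delivered as UTF-8
      }
      print_r($files);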

  • Why does Term::Size seem to mess up Perl's output encoding?

    - by sid_com
    Hello! The Term::Size module jumbles up the encoding. How can I fix this? #!/usr/bin/env perl use warnings; use strict; use 5.010; use utf8; binmode STDOUT, ':encoding(UTF-8)'; use Term::Size; my $string = 'Hällö'; say $string; my $columns = ( Term::Size::chars *STDOUT{IO} )[0]; say $columns; say $string; Output: Hällö 140 H?ll?

    Read the article
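
    A hedged workaround that has been reported for this: the XS call inside Term::Size::chars appears to reset STDOUT's PerlIO layers, dropping the :encoding(UTF-8) layer. Re-applying binmode right after the call (or switching to Term::Size::Any) restores the output:

      #!/usr/bin/env perl
      use warnings;
      use strict;
      use 5.010;
      use utf8;
      use Term::Size;

      binmode STDOUT, ':encoding(UTF-8)';
      my $string = 'Hällö';
      say $string;

      my $columns = ( Term::Size::chars *STDOUT{IO} )[0];
      binmode STDOUT, ':encoding(UTF-8)';   # restore the layer dropped by Term::Size
      say $columns;
      say $string;                          # prints Hällö again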

  • Should I convert overly-long UTF-8 strings to their shortest normal form?

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle overly-long UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean utf8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?

    Read the article
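
    For illustration, a hedged sketch of the danger the RFC has in mind: the risk lies not in the normalisation itself but in any validation that runs on the raw bytes before it, because an overlong sequence can hide a character from byte-level filters:

      use strict;
      use warnings;

      # "/" is U+002F; its only valid UTF-8 encoding is the single byte 0x2F.
      # The two-byte sequence 0xC0 0xAF is an overlong encoding of the same character.
      my $overlong = "\xC0\xAF";

      print "raw bytes contain '/': ", ($overlong =~ m{/} ? "yes" : "no"), "\n";   # no
      # A path or quoting filter applied to these raw bytes sees nothing suspicious,
      # yet a lenient decoder downstream would turn them into "/".  So the conversion
      # to shortest form must happen before any security checks, never after them.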

  • Find node level in a tree

    - by Álvaro G. Vicario
    I have a tree (nested categories) stored as follows: CREATE TABLE `category` ( `category_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `category_name` varchar(100) NOT NULL, `parent_id` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`category_id`), UNIQUE KEY `category_name_UNIQUE` (`category_name`,`parent_id`), KEY `fk_category_category1` (`parent_id`,`category_id`), CONSTRAINT `fk_category_category1` FOREIGN KEY (`parent_id`) REFERENCES `category` (`category_id`) ON DELETE SET NULL ON UPDATE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish_ci I need to feed my client-side language (PHP) with node information (child+parent) so it can build the tree in memory. I can tweak my PHP code but I think the operation would be way simpler if I could just retrieve the rows in such an order that all parents come before their children. I could do that if I knew the level for each node: SELECT category_id, category_name, parent_id FROM category ORDER BY level -- No `level` column so far :( Can you think of a way (view, stored routine or whatever...) to calculate the node level? I guess it's okay if it's not real-time and I need to recalculate it on node modification.

    Read the article
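
    A hedged option: on MySQL 8.0+ the level can be computed on the fly with a recursive CTE, so no stored column or recalculation is needed (on the 5.x servers current when this was asked, a stored routine that walks parent_id, or a single pass in PHP, does the same job):

      WITH RECURSIVE cat_tree AS (
          SELECT category_id, category_name, parent_id, 1 AS level
          FROM category
          WHERE parent_id IS NULL
          UNION ALL
          SELECT c.category_id, c.category_name, c.parent_id, t.level + 1
          FROM category c
          JOIN cat_tree t ON c.parent_id = t.category_id
      )
      SELECT category_id, category_name, parent_id, level
      FROM cat_tree
      ORDER BY level, category_name;   -- parents always come before their children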

  • Ensuring uniqueness on a varchar greater than 255 in MYSQL/InnoDB

    - by Vijay Boyapati
    I have a table which contains HTML entries for news pages. When I initially designed it I used the URL as the primary key. I've learned the error of my ways because left-joining is super slow. So I want to redesign the table with an integer (id) primary key, but still keep the rows unique based on the URL. The problem is that I've found URLs longer than 255 characters, and MySQL isn't letting me create a key on the URL. I'm using an InnoDB/UTF8 table. From what I understand it's using multiple bytes per character with a limit of 767 bytes for the key (in InnoDB). I would really love suggestions on an elegant way of keeping the rows unique based on URL, while using an integer primary key. Thanks!

    Read the article
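
    A hedged sketch of the usual pattern (table and column names are made up): keep the integer surrogate key and enforce URL uniqueness through a fixed-width digest, which fits well inside InnoDB's index-size limit no matter how long the URL gets:

      CREATE TABLE news_page (
          id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
          url      TEXT NOT NULL,
          url_sha1 BINARY(20) NOT NULL,
          html     MEDIUMTEXT,
          PRIMARY KEY (id),
          UNIQUE KEY uk_url_sha1 (url_sha1)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

      INSERT INTO news_page (url, url_sha1, html)
      VALUES ('http://example.com/a-very-long-url',
              UNHEX(SHA1('http://example.com/a-very-long-url')),
              '<html>...</html>');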

  • Why my mysql DISTINCT doesn't work ?

    - by belaz
    Hello, why do the first two queries below return duplicate member_id values while the third does not? I need the second query to work with DISTINCT. Any time I run a GROUP BY, this query is incredibly slow and the resultset doesn't return the same values as DISTINCT (the values are wrong). SELECT member_id, id FROM ( SELECT * FROM table1 ORDER BY created_at desc ) as u LIMIT 5 +-----------+--------+ | member_id | id | +-----------+--------+ | 11333 | 313095 | | 141831 | 313094 | | 141831 | 313093 | | 12013 | 313092 | | 60821 | 313091 | +-----------+--------+ SELECT distinct member_id, id FROM ( SELECT * FROM table1 ORDER BY created_at desc ) as u LIMIT 5 +-----------+--------+ | member_id | id | +-----------+--------+ | 11333 | 313095 | | 141831 | 313094 | | 141831 | 313093 | | 12013 | 313092 | | 60821 | 313091 | +-----------+--------+ SELECT distinct member_id FROM ( SELECT * FROM table1 ORDER BY created_at desc ) as u LIMIT 5 +-----------+ | member_id | +-----------+ | 11333 | | 141831 | | 12013 | | 60821 | | 64980 | +-----------+ My table sample: CREATE TABLE `table1` ( `id` int(11) NOT NULL AUTO_INCREMENT, `member_id` int(11) NOT NULL, `s_type_id` int(11) NOT NULL, `created_at` datetime DEFAULT NULL, PRIMARY KEY (`id`), KEY `s_FI_1` (`member_id`), KEY `s_FI_2` (`s_type_id`) ) ENGINE=InnoDB AUTO_INCREMENT=313096 DEFAULT CHARSET=utf8;

    Read the article
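
    A hedged explanation plus workaround: DISTINCT applies to the whole selected row, and every (member_id, id) pair is already unique, so nothing collapses; dropping id (the third query) is what makes rows merge. If the goal is "the latest id per member", that is a grouping problem, and assuming id increases along with created_at it can lean on the s_FI_1 index:

      SELECT t.member_id, t.id
      FROM table1 t
      JOIN (SELECT member_id, MAX(id) AS max_id
            FROM table1
            GROUP BY member_id) latest
        ON latest.member_id = t.member_id AND latest.max_id = t.id
      ORDER BY t.id DESC
      LIMIT 5;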

  • Ruby: Parse Excel 95-2003 files?

    - by Larry K
    Is there a way to read Excel 97-2003 files from Ruby? Background I'm currently using the Ruby Gem parseexcel -- http://raa.ruby-lang.org/project/parseexcel/ But it is an old port of the perl module. It works fine, but the latest format it parses is Excel 95. And guess what? Excel 2007 will not produce the Excel 95 format. John McNamara has taken over duties as the maintainer for the Perl Excel parser, see http://search.cpan.org/~jmcnamara/Spreadsheet-ParseExcel-0.55/lib/Spreadsheet/ParseExcel.pm The current version will parse Excel 95-2003 files. But is there a port to Ruby? My other thought is to build some Ruby to Perl glue code to enable use of the Perl library itself from Ruby. Eg, see http://stackoverflow.com/questions/451636/whats-the-best-way-to-export-utf8-data-into-excel/620612#620612 (I think it would be much faster to write the glue code than to port the parser.) Thanks, Larry

    Read the article
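
    One hedged alternative to Perl glue: the 'spreadsheet' Ruby gem (the usual successor to parseexcel) reads Excel 97-2003 .xls files natively, so no bridge is needed. The file name below is made up:

      require 'spreadsheet'

      Spreadsheet.client_encoding = 'UTF-8'
      book  = Spreadsheet.open('report.xls')
      sheet = book.worksheet(0)
      sheet.each do |row|
        puts row.to_a.join("\t")
      end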

  • Encoding regarding Term::Size

    - by sid_com
    Hello! The Term::Size module jumbles up the encoding. How can I fix this? #!/usr/bin/env perl use warnings; use strict; use 5.010; use utf8; binmode STDOUT, ':encoding(UTF-8)'; use Term::Size; my $string = 'Hällö'; say $string; my $columns = ( Term::Size::chars *STDOUT{IO} )[0]; say $columns; say $string; Output: Hällö 140 H?ll?

    Read the article

  • Why isn't this message subject encoded properly? (php mail)

    - by Camran
    I use this code to send an email: $headers="MIME-Version: 1.0"."\n"; $headers.="Content-type: text/plain; charset=UTF-8"."\n"; $headers.="From: $name <$email>"."\n"; mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]'); If I use the special characters Å Ä Ö from the Swedish alphabet, they are not encoded properly, so they turn up garbled instead of, for example, ö. However, this doesn't happen if I change the $to variable to a Gmail account address; then they are shown correctly. Anybody got any idea? Thanks UPDATE: When I echo $name, the name is displayed correctly, in utf8, with all special chars nicely shown.

    Read the article
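
    A hedged sketch of a more defensive setup: the subject is already Base64-encoded, but the From display-name is not, and no Content-Transfer-Encoding is declared; either can trip up mail clients that are stricter than Gmail. mb_encode_mimeheader is a standard PHP function, and the \r\n header endings follow RFC 2822 (some local MTAs prefer \n):

      <?php
      mb_internal_encoding('UTF-8');

      $headers  = "MIME-Version: 1.0\r\n";
      $headers .= "Content-Type: text/plain; charset=UTF-8\r\n";
      $headers .= "Content-Transfer-Encoding: 8bit\r\n";
      $headers .= 'From: ' . mb_encode_mimeheader($name, 'UTF-8', 'B') . " <$email>\r\n";

      mail($to, mb_encode_mimeheader($subject, 'UTF-8', 'B'), $text, $headers);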
