Search Results

Search found 1693 results on 68 pages for 'ascii'.


  • Cooler ASCII Spinners?

    - by Jason
    In a console app, an ASCII spinner can be used, like the GUI wait cursor, to indicate that work is being done. A common spinner cycles through these four characters: '|', '/', '-', '\'. What are some other cyclical animation sequences to spice up a console application?
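
    A few alternatives, with a minimal Python sketch to try them out (the frame sets are just suggestions; any cycle of equal-width strings works):

        import itertools
        import sys
        import time

        # Each entry is one candidate animation cycle; all frames in a
        # cycle share the same width so '\r' redraws cleanly in place.
        SPINNERS = [
            ['|', '/', '-', '\\'],                        # the classic
            ['.  ', '.. ', '...', ' ..', '  .', '   '],   # crawling dots
            ['b', 'd', 'q', 'p'],                         # tumbling letter
            ['(o   )', '( o  )', '(  o )', '(   o)'],     # bouncing ball
        ]

        for frame in itertools.islice(itertools.cycle(SPINNERS[1]), 60):
            sys.stdout.write('\r' + frame)
            sys.stdout.flush()
            time.sleep(0.1)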

    Read the article

  • Unicode To ASCII Conversion

    - by Yuvaraj
    Hi all, I am creating a small application in Delphi 2009, and I have hit a problem: when I run my application on Windows XP it works, but it does not work on Windows 95. I know the cause: Windows 95 does not support Unicode. If anyone knows a solution, please tell me. I also have one more idea: converting the Unicode to ASCII. Is that possible? If so, please tell me how to do it. Thanks in advance. Warm regards, Yuvaraj

    Read the article

  • Convert extended ASCII characters to their correct representation using the Console.ReadKey() method and a ConsoleKeyInfo variable

    - by mishamosher
    I read for about 30 minutes and didn't find anything specific to this on this site. Suppose the following in a C# console application:

        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        Console.WriteLine(cki.KeyChar.ToString()); // or Console.WriteLine(cki.KeyChar) as well
        Console.ReadKey(true);

    Now, let's type ¿ at the console and assign it to cki via Console.ReadKey(true). What gets shown isn't the ¿ symbol; the ¨ symbol is shown instead. The same happens with many other characters. Examples: ñ shows ¤, ¡ shows -, ´ shows ï.

    Now, let's take the same code snippet and add a few things for behavior more like Console.ReadLine():

        string data = string.Empty;
        ConsoleKeyInfo cki;
        for (int i = 0; i < 10; i++)
        {
            cki = Console.ReadKey(true);
            data += cki.KeyChar;
        }
        Console.WriteLine(data);
        Console.ReadKey(true);

    The question: how do I handle this the right way, and end up printing the correct characters that should be stored in data, not things like ¨, ¤, -, ï, etc.? Please note that I want a solution that works with ConsoleKeyInfo and Console.ReadKey(), not other variable types or read methods.

    EDIT: Because the ReadKey() method of the Console class depends on Kernel32.dll, and it definitively mishandles extended ASCII and Unicode, it is no longer an option to just find a valid conversion for what it returns. The only valid way to handle the bad behavior of ReadKey() is to take the cki.Key property written by the cki = Console.ReadKey(true) call, apply a switch to it, and return the right values depending on which key was pressed. For example, to handle pressing the Ñ key:

        string data = string.Empty;
        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        switch (cki.Key)
        {
            case ConsoleKey.Oem3:
                // Handlers for Alt and Control could be added too,
                // but are left out to keep the code small and simple.
                if (cki.Modifiers.ToString().Contains("Shift"))
                    data += "Ñ";
                else
                    data += "ñ";
                break;
        }
        Console.WriteLine(data);
        Console.ReadKey(true);

    So now the question has a wider focus... Which other functions complete their execution with only one key pressed and return what was pressed (a substitute for ReadKey())? I think there are no such substitutes, but a confirmed answer would be useful.

    EDIT 2: HA! Found the answer in something I used so many times back on Windows 98 SE. There are the code pages, which are responsible for how the console presents information. ReadLine() reconfigures the code page to use extended ASCII and Unicode characters properly; ReadKey() leaves it at the EN-US default (code page 850). Just use a code page that prints the characters you want, and that's all. Refer to http://en.wikipedia.org/wiki/Code_page for some of them :) So, for the Ñ key press, the solution is this:

        Console.OutputEncoding = Encoding.GetEncoding(1252); // 28591 also works for the Ñ key, among others
        string data = string.Empty;
        ConsoleKeyInfo cki;
        cki = Console.ReadKey(true);
        data += cki.KeyChar;
        Console.WriteLine(data);
        Console.ReadKey(true);

    Simple :) Now I'm kicking myself... how could I forget those code pages!? Question answered, so, no more about this!

    Read the article

  • How do I make MSBuild execute command-line calls in ASCII, not Unicode?

    - by Ben L
    I'm attempting to target VC7.1 (Visual Studio 2003 SP1) from Visual Studio 2010. I'm so close to getting it to work, but when I build, I get this error:

        1>------ Build started: Project: AnExample, Configuration: Release Win32 ------
        1>  Microsoft (R) 32-bit C/C++ Standard Compiler Version 13.10.6030 for 80x86
        1>  Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.
        1>
        1>  cl ÿ_/
        1>
        1>cl : Command line warning D4024: unrecognized source file type 'ÿ_/', object file assumed
        1>  Microsoft (R) Incremental Linker Version 7.10.6030
        1>  Copyright (C) Microsoft Corporation. All rights reserved.
        1>
        1>  /out:.exe
        1>   ¦/
        1>LINK : fatal error LNK1181: cannot open input file ' ¦/.obj'

    I know this is unsupported, but I thought I'd give it a go. Does anyone know how to force the output from MSBuild to be ASCII, or whether this is even the problem? There were some errors like this years ago related to the DDK, according to some other forums. Thanks.
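
    For what it's worth, garbage like ÿ_/ at the start of a command line is often the signature of UTF-16 text, whose byte order mark reads as ÿþ in an 8-bit encoding, being fed to a tool that expects ASCII. A hypothetical Python sketch that re-encodes a response file if it carries a UTF-16 BOM (the .rsp path is an assumption):

        from pathlib import Path

        p = Path('AnExample.rsp')                     # hypothetical response file
        data = p.read_bytes()
        if data[:2] in (b'\xff\xfe', b'\xfe\xff'):    # UTF-16 BOM present?
            # decode('utf-16') honors the BOM; re-save as plain ASCII (lossy).
            p.write_bytes(data.decode('utf-16').encode('ascii', 'replace'))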

    Read the article

  • PHP: Replace umlauts with the closest 7-bit ASCII equivalent in a UTF-8 string

    - by BlaM
    What I want to do is remove all accents and umlauts from a string, turning "lärm" into "larm" or "andré" into "andre". What I tried was to utf8_decode the string and then use strtr on it, but since my source file is saved as a UTF-8 file, I can't enter the ISO-8859-15 characters for the umlauts; the editor inserts the UTF-8 characters. Obviously one solution would be an include saved as an ISO-8859-15 file, but there must be a better way than another required include?

        echo strtr(utf8_decode($input),
                   'ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ',
                   'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy');

    UPDATE: Maybe I was a bit inaccurate about what I am trying to do: I do not actually want to remove the umlauts, but to replace them with their closest one-character ASCII equivalent.
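
    The same idea sketched in Python for comparison: compatibility decomposition (NFKD) splits each accented letter into its base letter plus combining marks, which an ASCII encode then drops, so no hand-maintained character table is needed:

        import unicodedata

        def strip_accents(text):
            # NFKD separates 'ä' into 'a' + combining diaeresis;
            # the combining marks are then dropped by the ASCII encode.
            decomposed = unicodedata.normalize('NFKD', text)
            return decomposed.encode('ascii', 'ignore').decode('ascii')

        print(strip_accents('lärm andré'))   # larm andre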

    Read the article

  • YII Mail generates unwanted ASCII characters in HTML mail

    - by CedSha
    I use YII-Mail just by copying the sample, but I always get some extra ASCII characters in my generated links. Where do they come from, and how do I avoid them?

        $message = new YiiMailMessage;
        $message->view = 'mail';
        $message->setBody(array('model' => $model), 'text/html');
        $message->subject = Yii::t('tr', 'my subject');
        $message->addTo('[email protected]');
        $message->from = '[email protected]';
        Yii::app()->mail->send($message);

    And in the view file 'mail':

        <h1><?php echo(Yii::t('tr', 'This is HTML mail')); ?></h1>
        <?php echo CHtml::link('Mylink', array('controller/view', 'id' => $model->id)); ?>

    The resulting email source looks like this:

        <h1>This is HTML mail</h1>
        <a href=3D"/testdrive/index.php?r=3D ....
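
    A likely explanation, for what it's worth: =3D is the quoted-printable escape for the = sign, so the link is intact and mail clients decode it transparently; it only looks wrong in the raw source. A quick Python check:

        import quopri

        raw = b'<a href=3D"/testdrive/index.php?r=3D'
        print(quopri.decodestring(raw))
        # b'<a href="/testdrive/index.php?r='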

    Read the article

  • Sanitize a string from ASCII art

    - by Toto
    I need to sanitize article titles when (creative) users try to attract attention with some bad "ASCII art". Examples:

        Buy my product !!!!!!!!!!!!!!!!!!!!!!!!
        Buy my product !? !? !? !? !? !?
        Buy my product !!!!!!!!!.......!!!!!!!!
        Buy my product <-----------

    An acceptable solution would be to reduce the repetition of non-alphanumeric sequences to 2, which would give:

        Buy my product !!
        Buy my product !? !?
        Buy my product !!..!!
        Buy my product <--

    This solution did not work that well:

        preg_replace('/(\W{2,})(?=\1+)/', '', $title)

    Any idea how to do it in PHP with regex? Other, better solutions are also welcome (I cannot strip all the non-alphanumeric characters, as they can carry meaning).
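
    One candidate pattern, sketched in Python (PHP's preg_replace accepts the same PCRE-style syntax): capture the shortest repeating non-alphanumeric unit, and collapse three or more consecutive copies down to two:

        import re

        def squash_ascii_art(title):
            # (\W+?) grabs the shortest repeating unit ('!', '.', '!? ', '-');
            # \1{2,} matches two or more further copies; keep just two units.
            return re.sub(r'(\W+?)\1{2,}', r'\1\1', title)

        print(squash_ascii_art('Buy my product !!!!!!!!!.......!!!!!!!!'))
        # Buy my product !!..!!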

    Read the article

  • C++: converting a binary (P5) image to an ASCII (P2) image (.pgm)

    - by tubby
    I am writing a simple program to convert a grayscale binary (P5) .pgm to grayscale ASCII (P2), but am having trouble reading in the binary data and converting it to int.

        #include <iostream>
        #include <fstream>
        #include <sstream>
        using namespace std;

        int usage(char* arg) {
            // exit program
            cout << arg << ": Error" << endl;
            return -1;
        }

        int main(int argc, char* argv[]) {
            int rows, cols, size, greylevels;
            string filetype;

            // open stream in binary mode
            ifstream istr(argv[1], ios::in | ios::binary);
            if (istr.fail()) return usage(argv[1]);

            // parse header
            istr >> filetype >> rows >> cols >> greylevels;
            size = rows * cols;

            // check data
            cout << "filetype: " << filetype << endl;
            cout << "rows: " << rows << endl;
            cout << "cols: " << cols << endl;
            cout << "greylevels: " << greylevels << endl;
            cout << "size: " << size << endl;

            // parse data values
            int* data = new int[size];
            int fail_tracker = 0; // find which pixel we fail on
            for (int* ptr = data; ptr < data + size; ptr++) {
                char t_ch;
                // read in binary char
                istr.read(&t_ch, sizeof(char));
                // convert to integer
                int t_data = static_cast<int>(t_ch);
                // check if legal pixel
                if (t_data < 0 || t_data > greylevels) {
                    cout << "Failed on pixel: " << fail_tracker << endl;
                    cout << "Pixel value: " << t_data << endl;
                    return usage(argv[1]);
                }
                // if it passes, add value to data array
                *ptr = t_data;
                fail_tracker++;
            }

            // close the stream
            istr.close();

            // write a new P2 ascii image
            ofstream ostr("greyscale_ascii_version.pgm");
            // write header
            ostr << "P2 " << rows << cols << greylevels << endl;
            // write data
            int line_ctr = 0;
            for (int* ptr = data; ptr < data + size; ptr++) {
                // print pixel value
                ostr << *ptr << " ";
                // endl every ~20 pixels for some readability
                if (++line_ctr % 20 == 0) ostr << endl;
            }
            ostr.close();

            // clean up
            delete[] data;
            return 0;
        }

    Sample image: pulled from an old post. I removed the comment within the image file, as I am not worried about that functionality now. When compiled with g++ I get this output:

        $> ./a.out a.pgm
        filetype: P5
        rows: 1024
        cols: 768
        greylevels: 255
        size: 786432
        Failed on pixel: 1
        Pixel value: -110
        a.pgm: Error

    The image is a little duck, and there's no way the pixel value can be -110... where am I going wrong? Thanks.
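
    A likely culprit, for what it's worth: plain char is signed on most platforms, so any raw byte of 0x80 or above turns negative when widened to int; reading into an unsigned char (or casting through one) avoids that. The reported value fits, as this Python check of byte 0x92 reinterpreted as signed 8-bit shows. (Pixel 0 probably "passes" because the single whitespace byte left after the header, 0x0A = 10, is read as the first pixel.)

        b = 0x92                                # a raw pixel byte >= 0x80
        signed = b - 256 if b > 127 else b      # reinterpret as signed 8-bit
        print(signed)                           # -110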

    Read the article

  • Python - Finding Unicode/ASCII problems

    - by user330739
    Hi all, I am using csv.reader to pull in info from a very long sheet. I do some work on that data set, and then I use the xlwt package to give me a workable Excel file. However, I get this error:

        UnicodeDecodeError: 'ascii' codec can't decode byte 0x92 in position 34: ordinal not in range(128)

    My question to you all is: how can I find exactly where that error is in my data set? Also, is there some code I can write that will look through my data set and find where the issues lie (because some data sets run without the above error and others have problems)?
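
    A minimal scanning sketch (Python 3; the filename is a placeholder): a failed ASCII decode carries the byte offset of the offending character, so each problem can be reported with its line number and column. As an aside, 0x92 specifically is Windows-1252's curly apostrophe:

        def find_non_ascii(path):
            with open(path, 'rb') as f:
                for line_num, line in enumerate(f, start=1):
                    try:
                        line.decode('ascii')
                    except UnicodeDecodeError as err:
                        bad = line[err.start:err.start + 1]
                        print('line %d, column %d: byte %r' % (line_num, err.start, bad))

        find_non_ascii('data.csv')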

    Read the article

  • How to enable reading non-ASCII characters in servlets

    - by Daziplqa
    How do I make a servlet accept non-ASCII (Arabic, Chinese, etc.) characters passed from JSPs? I've tried adding the following to the top of the JSPs:

        <%@page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>

    and adding the following to each post/get method in the servlet:

        request.setCharacterEncoding("UTF-8");
        response.setCharacterEncoding("UTF-8");

    I've also tried adding a Filter that executes the above two statements instead of doing it in the servlet. To be quite honest, this was working in the past, but now it doesn't work anymore. I am using Tomcat 5.0.28/6.x.x on JDK 1.6 on both Windows and Linux boxes.

    Read the article

  • How does YouTube enable ASCII videos?

    - by acidzombie24
    Just by messing around a little, it seems that the video stream itself is not ASCII; I tested by downloading the stream. It would be insane if it were, given how many videos there are, so that can't be it. YouTube seems not to work with JavaScript disabled (not counting mobile, if that's true). So how is it being done? Is it JavaScript magic? Is the SWF running the video through a filter in real time? (I doubt it's a native filter, so how is the filter compiled?) It's really cool. I can't imagine how this runs in real time, yet it does!

    Read the article

  • How to initialize static const char array for ASCII codes [C++]

    - by Janney
    I want to initialize a static const char array with ASCII codes in a constructor. Here's my code:

        class Card {
        public:
            Suit(void) {
                static const char *Suit[4] = {0x03, 0x04, 0x05, 0x06}; // here's the problem
                static const string *Rank[13] = {'A', '2', '3', '4', '5', '6', '7',
                                                 '8', '9', '10', 'J', 'Q', 'K'}; // and here
            }
        };

    However, I get a whole lot of errors stating:

        'initializing' : cannot convert from 'char' to 'const std::string *'
        'initializing' : cannot convert from 'int' to 'const std::string *'

    Please help me! Thank you so much.

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated ecommerce management system on my hosting service. For that product, no support is provided by the vendor anymore.

    Brief explanation: the shop website, which claims to run under the LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:

      - MySQL contains the description cut off at the € sign, so it's not PHP's fault.
      - The Access databases contain the full descriptions with the € sign, so it's not the webmaster writing bad descriptions, nor eDisplay cutting them.
      - The SQL script that runs once the site is uploaded, as stored on my local machine before upload, contains the € sign.
      - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
      - The vsftpd log reports (obfuscated for privacy):

            Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c

        which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
      - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
      - [Add] Uploading the SQL file manually via SFTP messes up the euro as well.
      - [Add 2] Uploading manually with the Xftp client in explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server: an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to manually upload the files using FileZilla, or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
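
    A side note that may help the diagnosis: the euro sign has a different byte value in every encoding plausibly involved here, so comparing the raw bytes of the file before and after transfer shows exactly where the conversion happens. A quick Python reference for the candidate encodings:

        euro = '€'
        print(euro.encode('utf-8'))        # b'\xe2\x82\xac'
        print(euro.encode('cp1252'))       # b'\x80'  (the Windows "ANSI" code page)
        print(euro.encode('iso-8859-15'))  # b'\xa4'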

    Read the article

  • Problem with extended ASCII characters in web page/master page

    - by Oyvind Brathen
    I have some localization problems on my web page. There are basically two problems (that I suspect have different solutions, but they are conceptually linked).

    The first problem is this: I have a website that uses a master page. All text from the page itself is fine, but all text that comes from the master page file gets scrambled Norwegian characters. For example, Ø shows up as Ø. It seems that all characters in the extended ASCII table get scrambled this way. If I then open the master page in Notepad, the Ø looks normal; but if I remove the Ø, type a new Ø manually, save the file from Notepad, and open the website in the browser, it looks fine and the Ø is shown properly. So it seems that Visual Studio saves the characters wrongly in the master file, but correctly for the aspx file. Any clue here?

    The second issue is Norwegian characters coming from jQuery. All of these characters get replaced by a question mark with a black box around it. Here, modifying the js file in Notepad does not help; it still displays scrambled in the browser. Any input would be appreciated.
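
    For reference, Ø turning into Ø is the classic signature of UTF-8 bytes being rendered as Windows-1252/Latin-1, which points at a file saved in one encoding but served or declared as another. The round trip is easy to reproduce in Python:

        # Encode as UTF-8, then misread the bytes as Windows-1252.
        mojibake = 'Ø'.encode('utf-8').decode('cp1252')
        print(mojibake)   # Ã˜ - the two UTF-8 bytes shown as two cp1252 characters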

    Read the article

  • Java UTF-8 to ASCII conversion with supplements

    - by bozo
    Hi, we are accepting all sorts of national characters in a UTF-8 string on the input, and we need to convert them to an ASCII string on the output for some legacy use. (We don't accept Chinese and Japanese characters, only European languages.) We have a small utility to get rid of all the diacritics:

        public static final String toBaseCharacters(final String sText) {
            if (sText == null || sText.length() == 0)
                return sText;
            final char[] chars = sText.toCharArray();
            final int iSize = chars.length;
            final StringBuilder sb = new StringBuilder(iSize);
            for (int i = 0; i < iSize; i++) {
                String sLetter = new String(new char[] { chars[i] });
                sLetter = Normalizer.normalize(sLetter, Normalizer.Form.NFC);
                try {
                    byte[] bLetter = sLetter.getBytes("UTF-8");
                    sb.append((char) bLetter[0]);
                } catch (UnsupportedEncodingException e) {
                }
            }
            return sb.toString();
        }

    The question is how to replace the German sharp s (ß) and characters like Đ that get through the above normalization method with their supplements (for ß the supplement would probably be "ss", and for Đ it would be either "D" or "Dj"). Is there some simple way to do it, without a million .replaceAll() calls? So, for example: Đonardan = Djonardan, Blaß = Blass, and so on. We could replace all "problematic" chars with empty space, but we would like to avoid this, to keep the output as similar to the input as possible. Thank you for your answers, Bozo
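
    Multi-character supplements like "ss" are exactly what plain decomposition cannot produce, so they need a small explicit table. Here is the combined approach sketched in Python for comparison (the table entries are illustrative, not exhaustive):

        import unicodedata

        # Hand-picked supplements for characters NFKD cannot decompose.
        SPECIALS = {'ß': 'ss', 'Đ': 'Dj', 'đ': 'dj'}

        def to_ascii(text):
            text = ''.join(SPECIALS.get(ch, ch) for ch in text)
            decomposed = unicodedata.normalize('NFKD', text)  # split off accents
            return decomposed.encode('ascii', 'ignore').decode('ascii')

        print(to_ascii('Đonardan Blaß'))   # Djonardan Blass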

    Read the article

  • Displaying non-ASCII characters using HttpClient

    - by Abdullah Gheith
    So, I am using this code to get the whole HTML of a website, but I don't seem to get the non-ASCII characters with it. All I get is diamonds with a question mark: characters like å appear as ?. I doubt it's because of the charset; what else could it be?

        Log.e("HTML", "henter htmlen..");
        String url = "http://beep.tv2.dk";
        HttpClient client = new DefaultHttpClient();
        client.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1);
        client.getParams().setParameter(CoreProtocolPNames.HTTP_ELEMENT_CHARSET, "UTF-8");
        HttpGet request = new HttpGet(url);
        HttpResponse response = client.execute(request);

        String html = "";
        InputStream in = response.getEntity().getContent();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        StringBuilder str = new StringBuilder();
        String line = null;
        while ((line = reader.readLine()) != null) {
            str.append(line);
        }
        in.close();
        //b = false;
        html = str.toString();
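
    One hedged observation: a reader constructed without an explicit charset, as in new InputStreamReader(in) above, decodes with the platform default rather than the page's actual encoding, which is a common source of ? characters. The same pitfall expressed in Python terms (URL from the question; that the page really is UTF-8 is an assumption):

        import urllib.request

        raw = urllib.request.urlopen('http://beep.tv2.dk').read()
        # Decode with an explicit charset instead of relying on a default guess.
        html = raw.decode('utf-8', errors='replace')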

    Read the article

  • Converting Source ASCII Files to JPEGs

    - by CommonsWare
    I publish technical books in print, PDF, and Kindle/MOBI, with EPUB on the way. The Kindle does not support monospace fonts, which are kinda useful for source code listings. The only way to do monospace fonts is to convert the text (Java source, HTML, XML, etc.) into JPEG images. More specifically, due to pagination issues, a given input ASCII file needs to be split into slices of ~6 lines each, with each slice turned into a JPEG, so listings can span a screen. This is a royal pain.

    My current mechanism to do that involves:

      - running expand to set a consistent 2-space tab size, which pipes to...
      - a2ps, which pipes to...
      - a small Perl snippet to add a "%%LanguageLevel: 3\n" line, which pipes to...
      - ImageMagick's convert, to take the (E)PS and make a JPEG out of it, with an appropriate background, cropped to 575x148+5+28, etc.

    That used to work 100% of the time. It now works 95% of the time. The rest of the time, I get "convert: geometry does not contain image" errors, which I cannot seem to get rid of, in part because I don't understand what the problem is.

    Before this process, I used a pretty-print engine (source-highlight) to get HTML out of the source code... but then the only thing I could find to convert the HTML into JPEGs was to automate screen grabs from an embedded Gecko engine. Reliability stank, which is why I switched to my current mechanism.

    So, if you were me, and you needed to turn source listings into JPEG images in an automated fashion, how would you do it? Bonus points if it offers some sort of pretty-print process (e.g., bolded keywords)! Or, if you know what typically causes "convert: geometry does not contain image", that might help. My current process is ugly, but if I could get it back to 100% reliability, that'd be just fine for now. Thanks in advance!
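
    One possible replacement pipeline, sketched with Python and Pillow rather than the a2ps/convert chain (the font path, sizes, and geometry are assumptions, and a pretty-printer would still need to be bolted on): render each ~6-line slice straight onto an image, with no PostScript intermediate:

        from PIL import Image, ImageDraw, ImageFont

        FONT = ImageFont.truetype('DejaVuSansMono.ttf', 14)   # hypothetical font path

        lines = open('Listing.java').read().expandtabs(2).splitlines()
        for n, start in enumerate(range(0, len(lines), 6)):
            img = Image.new('RGB', (575, 148), 'white')       # geometry from the old crop
            draw = ImageDraw.Draw(img)
            draw.text((5, 5), '\n'.join(lines[start:start + 6]), font=FONT, fill='black')
            img.save('slice_%03d.jpg' % n, quality=90)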

    Read the article

  • upload form only works in Firefox when uploading ASCII .stl 3D files

    - by NathanPDX
    uploadform.html and upload_file.php (below) work fine in Firefox but fail in Chrome, IE, and Safari when uploading ASCII .stl 3D files. The error message is "Invalid file", and the problem occurs with multiple computers and multiple .stl files. When I modify the code to support other file types like JPG and PDF, it allows those file types in all the browsers. Also, Firefox only allows the .stl upload if I include application/octet-stream in the MIME-type checks. Why doesn't this work outside of Firefox?

    uploadform.html:

        <!doctype html>
        <html>
        <body>
        <form action="upload_file.php" method="post" enctype="multipart/form-data">
          <label for="file">Filename:</label>
          <input type="file" name="file" id="file" />
          <br />
          <input type="submit" name="submit" value="Submit" />
        </form>
        </body>
        </html>

    upload_file.php:

        <!doctype html>
        <html>
        <body>
        <?php
        $allowedExts = array("stl");
        $extension = end(explode(".", $_FILES["file"]["name"]));
        if ( ( ($_FILES["file"]["type"] == "application/sla")
            || ($_FILES["file"]["type"] == "application/octet-stream")
            || ($_FILES["file"]["type"] == "text/plain")
            || ($_FILES["file"]["type"] == "application/unknown") )
          && ($_FILES["file"]["size"] < 2000000)
          && in_array($extension, $allowedExts) )
        {
          if ($_FILES["file"]["error"] > 0)
          {
            echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
          }
          else
          {
            echo "Upload: " . $_FILES["file"]["name"] . "<br />";
            echo "Size: " . ($_FILES["file"]["size"] / 1024) . " KB<br />";
            if (file_exists("upload/" . $_FILES["file"]["name"]))
            {
              echo $_FILES["file"]["name"] . " already exists. ";
            }
            else
            {
              move_uploaded_file($_FILES["file"]["tmp_name"], "upload/" . $_FILES["file"]["name"]);
              echo "successful upload";
            }
          }
        }
        else
        {
          echo "Invalid file";
        }
        ?>
        </body>
        </html>

    Read the article

  • Fastest way to read data from a lot of ASCII files

    - by Alsenes
    Hi guys, for a college exercise that I've already submitted I needed to read a .txt file which contained the names of images, one per line. Then I needed to open each image as an ASCII file, read its data (the images were in PPM format), and do a series of things with it. The thing is, I noticed my program was spending 70% of its time in the part that reads the data from the file, instead of in the other calculations I was doing (finding the number of repetitions of each pixel with a hash table, finding the pixels that differ between two images, etc.), which I found quite odd to say the least.

    This is how the PPM format looks:

        P3          // this value can be ignored when reading; all images are correctly formatted
        4 4
        255         // this value can also be ignored; it is always 255
        0  0  0    0  0  0    0  0  0   15  0 15
        0  0  0    0 15  7    0  0  0    0  0  0
        0  0  0    0  0  0    0 15  7    0  0  0
        15 0 15    0  0  0    0  0  0    0  0  0

    This is how I was reading the data from the files:

        ifstream fdatos;
        fdatos.open(argv[1]);          // open the file with the names of all the images
        const int size = 128;
        char file[size];               // where I'll put each image name
        Image *img;

        while (fdatos >> file) {       // while there are image names left, continue
            ifstream fimagen;
            fimagen.open(file);        // open image file
            img = new Image(fimagen);  // create new image object from its data file
            // ... rest of the calculations with that image ...
            delete img;                // delete image object when done
            fimagen.close();           // close image file when done
        }
        fdatos.close();

    And inside the Image object I read the data like this:

        const int tallafirma = 100;
        char firma[tallafirma];
        fich_in >> std::setw(100) >> firma;       // read the P3 part; can be ignored

        int maxvalue, numpixels;
        fich_in >> height >> width >> maxvalue;   // read the next three values
        numpixels = height * width;
        datos = new Pixel[numpixels];

        int r, g, b;   // don't need to be ints; max value is 256, so an unsigned char would be ok
        for (int i = 0; i < numpixels; i++) {
            fich_in >> r >> g >> b;
            datos[i] = Pixel(r, g, b);
        }

    This last part is the slow one. I think I should be able to read all this data in one single read into a buffer or something, stored in an array of unsigned chars, and then I'd only need to do:

        buffer[0]   // pixel 1 - red data
        buffer[1]   // pixel 1 - green data
        buffer[2]   // pixel 1 - blue data

    So, any ideas? I think I can improve it quite a bit by reading everything into an array in one single call, I just don't know how that is done. Also, is it possible to know how many images will be in the index file? Is it possible to know the number of lines a file has? (Because there's one file name per line...) Thanks!!
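
    For what it's worth, the technique being asked about looks like this in Python; the same shape works in C++ by sizing a buffer from the file length and issuing a single istream::read. A sketch assuming no comment lines, as in the sample above:

        # One I/O call brings in the whole file; all parsing then happens in memory.
        with open('image.ppm', 'rb') as f:
            tokens = f.read().split()

        magic = tokens[0]                                   # b'P3'
        width, height, maxval = map(int, tokens[1:4])
        values = list(map(int, tokens[4:4 + 3 * width * height]))
        pixels = [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]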

    Read the article

  • cleaning up pdftotext font issues

    - by mankoff
    I'm using pdftotext to make an ASCII version of a PDF document (made with LaTeX), because collaborators prefer a simple document in MS Word. The plain text version I see looks good, but upon closer inspection the f character seems to be frequently mis-converted depending on what characters follow. For example, fi and fl often become one special character: the ligatures ﬁ and ﬂ. What is the best way to clean up the output of pdftotext? I am thinking sed might be the right tool, but am not sure how to detect these special characters.
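
    A sketch of the cleanup in Python rather than sed, since the ligature code points are easier to name there (the table covers the common Latin ligatures; as an aside, NFKC compatibility normalization expands them too):

        import unicodedata

        LIGATURES = {'ﬁ': 'fi', 'ﬂ': 'fl', 'ﬀ': 'ff', 'ﬃ': 'ffi', 'ﬄ': 'ffl'}

        def expand_ligatures(text):
            for lig, plain in LIGATURES.items():
                text = text.replace(lig, plain)
            return text

        # Equivalent result via compatibility normalization:
        print(unicodedata.normalize('NFKC', 'ﬁle ﬂow'))   # file flow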

    Read the article

  • Unconvert Text File from Binary Format

    - by Hammer Bro.
    I've got a rather large CSV file (~700 MB) which I know consists of lines of 27-character alphanumeric hashes; no commas or anything fancy. Somehow, during its migration from Windows to Linux (via WinSCP and then a few regular SCPs), it has been converted into some kind of binary format I am unfamiliar with. If I open the file in vi, everything appears fine, and it says [converted] at the bottom, although I know it's not a line-endings issue (and dos2unix doesn't help). If I head the file, it looks proper except for a "ÿþ" at the beginning of the first line. If I open the file in nano, however, I see the "ÿþ" at the start and then "^@" before every character (even newlines and EOF). If I try to re-save or copy the file (say via head file.csv > short.txt), this special encoding is preserved. I copied the first ten lines out of vi (which displays it properly) via my SSH client into my Windows clipboard, then pasted them into a new text file, test.txt. This file is visually identical when opened in vi (and similar through head, minus the "ÿþ"), although it's roughly half the file size. Additionally:

        file test.txt
        test.txt: ASCII text
        file short.txt
        short.txt:

    I have no idea what format this once-text file got converted to (it's notoriously hard to search the internet for symbols), but surely there must be some way to convert it back. Any ideas?
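
    For what it's worth, those symptoms all point one way: "ÿþ" is the UTF-16 little-endian byte order mark, and the "^@" before every character is the interleaved zero byte of UTF-16-encoded ASCII (which is also why the pasted copy came out at half the size). On the command line, iconv -f UTF-16LE -t UTF-8 does the conversion; a Python sketch of the same:

        # Read as UTF-16 (the BOM is detected and stripped), rewrite as plain ASCII.
        with open('file.csv', encoding='utf-16') as src:
            text = src.read()
        with open('fixed.csv', 'w', encoding='ascii') as dst:
            dst.write(text)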

    Read the article

  • Rails 2.3.5, Ruby 1.9, SQLite 3 incompatible character encodings: UTF-8 and ASCII-8BIT

    - by Daniil Harik
    Hello, I know that a question with the same title was asked almost 6 months ago. I have Googled this problem and have not found any working solution. Have there been any fixes for this very critical problem? I need to get my website running ASAP. Just to get the site up and running, I'm even ready to add UTF-8 conversion methods to all my variables, or to risk upgrading to the Rails 3 beta. Thank you in advance!

    Read the article
