Search Results

Search found 17841 results on 714 pages for 'non ascii characters'.

  • Converting Source ASCII Files to JPEGs

    - by CommonsWare
    I publish technical books, in print, PDF, and Kindle/MOBI, with EPUB on the way. The Kindle does not support monospace fonts, which are kinda useful for source code listings. The only way to do monospace fonts is to convert the text (Java source, HTML, XML, etc.) into JPEG images. More specifically, due to pagination issues, a given input ASCII file needs to be split into slices of ~6 lines each, with each slice turned into a JPEG, so listings can span a screen. This is a royal pain. My current mechanism to do that involves: Running expand to set a consistent 2-space tab size, which pipes to... a2ps, which pipes to... a small Perl snippet to add a "%%LanguageLevel: 3\n" line, which pipes to... ImageMagick's convert, to take the (E)PS and make a JPEG out of it, with an appropriate background, cropped to 575x148+5+28, etc. That used to work 100% of the time. It now works 95% of the time. The rest of the time, I get convert: geometry does not contain image errors, which I cannot seem to get rid of, in part because I don't understand what the problem is. Before this process, I used to use a pretty-print engine (source-highlight) to get HTML out of the source code...but then the only thing I could find to convert the HTML into JPEGs was to automate screen-grabs from an embedded Gecko engine. Reliability stank, which is why I switched to my current mechanism. So, if you were in my shoes, and you needed to turn source listings into JPEG images in an automated fashion, how would you do it? Bonus points if it offers some sort of pretty-print process (e.g., bolded keywords)! Or, if you know what typically causes convert: geometry does not contain image, that might help. My current process is ugly, but if I could get it back to 100% reliability, that'd be just fine for now. Thanks in advance!
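
    For what it's worth, ImageMagick usually raises "convert: geometry does not contain image" when a -crop rectangle does not intersect the actual image, which could happen here if a2ps occasionally emits an empty or smaller-than-expected page. One way to keep the process automated while making the failing stage visible is to drive the same pipeline from a script and check each intermediate result. Below is a minimal sketch in Python; the a2ps and convert flags are illustrative assumptions, not the asker's exact options:

        import subprocess

        def listing_to_jpeg(src_path, out_path):
            # 1. Normalize tabs to a consistent 2-space tab size.
            expanded = subprocess.run(["expand", "-t", "2", src_path],
                                      check=True, capture_output=True).stdout
            # 2. Render plain text to PostScript (flags are illustrative).
            ps = subprocess.run(["a2ps", "--output=-", "--no-header", "--portrait"],
                                input=expanded, check=True, capture_output=True).stdout
            # 3. Patch in the language-level line the asker's Perl snippet adds
            #    (assumes a2ps output starts with the usual DSC header).
            ps = ps.replace(b"%!PS-Adobe-3.0", b"%!PS-Adobe-3.0\n%%LanguageLevel: 3", 1)
            # 4. Rasterize and crop; +repage drops the virtual canvas left by -crop.
            subprocess.run(["convert", "ps:-", "-background", "white", "-flatten",
                            "-crop", "575x148+5+28", "+repage", out_path],
                           input=ps, check=True)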

    Read the article

  • Manipulating both unicode and ASCII character set in C#

    - by Murlex
    I have this mapping in my C# application: string[,] unicode2Ascii = { { "&#3001;", "\x86" } }; &#3001; is the Unicode value for the Tamil literal "ஹ". This is the raw hex literal for the Unicode value saved by MS Word as a byte sequence. I am trying to map these Unicode value "strings" to a hex value under 255 (so as to accommodate non-Unicode supported systems). I am trying to use string.Replace like this: S = S.Replace(unicode2Ascii[0,0], unicode2Ascii[0,1]); However, the resultant output has a ? instead of the actual hex 0x86 stored. Any pointers on how I could set the encoding for the second element of that array to something like windows-1252? Or is there a better way to do this conversion? Thanks in advance
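
    The ? usually means the replacement itself worked, but the final conversion to bytes used a codec that cannot represent the character, so the encoding has to be chosen explicitly at output time. A sketch of the idea in Python (illustrative; in C# the analogous step is applying Encoding.GetEncoding("windows-1252") or a similar single-byte code page once, when producing the output bytes):

        # Map the Tamil letter HA (U+0BB9) to a chosen single-byte stand-in.
        mapping = {"\u0bb9": "\x86"}

        text = "before \u0bb9 after"
        for uni, substitute in mapping.items():
            text = text.replace(uni, substitute)

        # Encode once, at output, with a codec that passes bytes 0x00-0xFF through.
        data = text.encode("latin-1")
        assert data == b"before \x86 after"

        # Caveat: in windows-1252 the byte 0x86 is defined as the dagger sign
        # (U+2020), so a strict windows-1252 encoder maps "\u2020" -- not
        # "\x86" -- to that byte.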

    Read the article

  • non-static method cannot be referenced from a static context

    - by David
    First some code:

        import java.util.*;
        ...
        class TicTacToe {
            ...
            public static void main (String[] arg) {
                Random Random = new Random();
                toerunner(); // this leads to a path of methods that eventually gets us to the rest of the code
            }
            ...
            public void CompTurn (int type, boolean debug) {
                ...
                boolean done = true;
                int a = 0;
                while (!done) {
                    a = Random.nextInt(10);
                    if (debug) {
                        int i = 0;
                        while (i < 20) { System.out.print(a + ", "); i++; }
                    }
                    if (possibles[a] == 1) done = true;
                }
                this.board[a] = 2;
            }
            ...
        } // to close the class

    Here is the error message:

        TicTacToe.java:85: non-static method nextInt(int) cannot be referenced from a static context
                a = Random.nextInt(10) ;
                    ^

    What exactly went wrong? What does the error message "non-static method cannot be referenced from a static context" mean?

    Read the article

  • Certain Japanese characters aren't displayed properly

    - by Nisto
    On the following site: http://www.nciku.com/search/radical the first 2 characters on the second row of the "Step 2" table aren't displayed properly. All other characters look fine. I tried re-installing the Asian fonts via the checkboxes regarding Asian fonts in the "Regional and Language Options" control panel applet. I have tried removing every single font from the Fonts folder (some were of course not possible to remove) and re-installing them all again. I did this by running cmd, closing down the explorer process, using the command DEL /F /S /Q * in the Fonts folder, putting in my XP SP3 Retail disc, and using expand -r *.tt_ in the I386 folder on the XP disc (and in the I386\LANG folder for the other font files). I also tried installing this pack from Microsoft, but that solved nothing either. I even tried running my browser (Firefox) through AppLocale, and changing the character encoding -- again, that does not help. I've also tried viewing the page in Internet Explorer. What could be wrong? I have checked my Fonts folder to make sure that every single font available on the XP disc is available in WINDOWS\Fonts. What shows in the first square on the second row - I can't really tell what it's supposed to look like (but it's not the proper character)... but the second square shows a rectangular symbol containing hex code. I've been in this situation before -- and it has been when I've been missing fonts. But how could I possibly be missing a necessary font? Shouldn't it be provided in the Asian "font packages"? I've talked to some other users who have viewed the page, and they had no problems displaying those characters on the second row - even though they're only using the fonts provided on the Windows installation disc. Windows XP Professional Service Pack 3 (x86 - with latest updates), Firefox 3.6.15

    Read the article

  • Batch file to create many files with special characters

    - by MollyO
    Essential info: I have a file "DB_OUTPUT.TXT" with 304 lines that I need to turn into 304 files (one per line). Each line contains many special characters and may be up to tens of thousands of characters long. For these reasons, I'm having difficulty using a cmd.exe batch file (which limits the amount of input) and the echo command (which would try to execute each special character, short of me having to escape them all). I also have a file "DB_OUTPUT_FILENAMES.TXT" containing a distinct filename for each line-soon-to-be-file from "db_output.txt". So line 1 of DB_OUTPUT.TXT needs to be the body of a new file with a name equal to line 1 of DB_OUTPUT_FILENAMES.TXT. Extra info: As you may have guessed, DB_OUTPUT.TXT is output from a database; it contains 304 records with 6 or 7 columns at a fixed width with the last column being a SQL query. Each of these lines (db records) will be used as a script to create new database objects, which is why the special characters need to be preserved. Question: Is there a way to do this in a batch-like fashion? I'd be happy with either a Windows solution or a Linux one.
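
    Since any batch-like approach is acceptable, here is a minimal sketch in Python (an assumption of convenience; PowerShell's Get-Content/Set-Content pairing would work too) that pairs line N of DB_OUTPUT.TXT with line N of DB_OUTPUT_FILENAMES.TXT and writes each line out verbatim, so no special character ever passes through a shell or an echo command:

        # Pair line N of the content file with line N of the filename file.
        with open("DB_OUTPUT.TXT", encoding="utf-8") as bodies, \
             open("DB_OUTPUT_FILENAMES.TXT", encoding="utf-8") as names:
            for body, name in zip(bodies, names):
                with open(name.strip(), "w", encoding="utf-8") as out:
                    out.write(body.rstrip("\r\n"))  # line content, without its newline

    The encoding is an assumption; if the database export uses another code page, open both files with that encoding instead.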

    Read the article

  • How to display escaped characters in tmux status bar

    - by walrus
    I am running tmux from a tty on an embedded Linux device (NOT a terminal emulator). Because the screen is rather small, I want to add some "icons" to the tmux status bar. To achieve this, I have simply created a font with the appropriate glyphs for things like battery or wifi. I can load the font, and display the characters with calls that use an escape to the line-drawing characters, like so: echo -e "\xe\234\xf" Here \xe escapes me into line-drawing character mode, \234 is my created character, and \xf returns me to normal character mode so my terminal doesn't start getting goofy. This works perfectly if I enter the command at the terminal, whether tmux is started or not. The issue arises if I then try to use it in my ~/.tmux.conf file for the status bar. I currently have a line like this: set -g status-right "#(echo -e "\xe\234\xf") #(/script/to/output/powerlevel)" This simply outputs \xe\234\xf powerlevel It goes the same if I try printf instead of echo. This is the output I would expect to get on the terminal if I made the call without passing -e to echo, or without enclosing the statement in quotes. I then decided to wrap the calls to echo or printf in a shell script. Again, the script works when called from the terminal, but not in tmux's status bar. Now I get the unprintable character "?" instead of my icon, like this: ? powerlevel This is what I would expect if I did not use the line-drawing escapes mentioned above, or if I tried to copy and paste the character as text within tmux. In addition, calling these character scripts screws up the rest of my status-right, as the clock shows about six digits for minutes when it is called (though it correctly only updates two of them). How can I make tmux respect the escape characters? Any help or insight is greatly appreciated.

    Read the article

  • upload form only works in Firefox when uploading ASCII .stl 3D files

    - by NathanPDX
    uploadform.html and upload_file.php (below) work fine in Firefox but fail in Chrome, IE, and Safari when uploading ASCII .stl 3D files. The error message is "Invalid file", and the problem occurs with multiple computers and multiple .stl files. When I modify the code to support other file types like JPG and PDF it allows those file types in all three web browsers. Also, Firefox only allows the .stl upload if I include application/octet-stream in the mime types section. Why doesn't this work outside of Firefox?

    uploadform.html:

        <!doctype html>
        <html>
        <body>
        <form action="upload_file.php" method="post" enctype="multipart/form-data">
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file" />
        <br />
        <input type="submit" name="submit" value="Submit" />
        </form>
        </body>
        </html>

    upload_file.php:

        <!doctype html>
        <html>
        <body>
        <?php
        $allowedExts = array("stl");
        $extension = end(explode(".", $_FILES["file"]["name"]));
        if ( ( ($_FILES["file"]["type"] == "application/sla")
            || ($_FILES["file"]["type"] == "application/octet-stream")
            || ($_FILES["file"]["type"] == "text/plain")
            || ($_FILES["file"]["type"] == "application/unknown") )
            && ($_FILES["file"]["size"] < 2000000)
            && in_array($extension, $allowedExts) ) {
            if ($_FILES["file"]["error"] > 0) {
                echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
            } else {
                echo "Upload: " . $_FILES["file"]["name"] . "<br />";
                echo "Size: " . ($_FILES["file"]["size"] / 1024) . " KB<br />";
                if (file_exists("upload/" . $_FILES["file"]["name"])) {
                    echo $_FILES["file"]["name"] . " already exists. ";
                } else {
                    move_uploaded_file($_FILES["file"]["tmp_name"], "upload/" . $_FILES["file"]["name"]);
                    echo "successful upload";
                }
            }
        } else {
            echo "Invalid file";
        }
        ?>
        </body>
        </html>
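
    A plausible culprit: $_FILES["file"]["type"] is simply the Content-Type header the browser chose to send, and browsers disagree for an extension they do not recognize, such as .stl; the asker's own observation that Firefox needs application/octet-stream on the list points the same way, and echoing $_FILES["file"]["type"] in each browser would confirm it. The usual advice is to validate from things the server can check itself (extension, size, and ideally the file contents), not the client-supplied type. The shape of that check, sketched in Python purely to illustrate the logic:

        import os

        ALLOWED_EXTS = {".stl"}
        MAX_BYTES = 2_000_000

        def is_acceptable(filename: str, size: int) -> bool:
            # Trust only what the server can verify: the name and the size.
            ext = os.path.splitext(filename)[1].lower()
            return ext in ALLOWED_EXTS and 0 < size < MAX_BYTES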

    Read the article

  • Fastest way to read data from a lot of ASCII files

    - by Alsenes
    Hi guys, for a college exercise that I've already submitted, I needed to read a .txt file which contained a lot of names of images (one on each line). Then I needed to open each image as an ASCII file and read its data (the images were in ppm format), and do a series of things with them. The thing is, I noticed my program was spending 70% of its time in the part that reads the data from the file, instead of in the other calculations I was doing (finding the number of repetitions of each pixel with a hash table, finding different pixels between 2 images, etc.), which I found quite odd to say the least. This is what the ppm format looks like:

        P3          // This value can be ignored when reading the file, because all images will be correctly formatted
        4 4
        255         // This value can also be ignored; it will always be 255
        0  0  0    0  0  0    0  0  0   15  0 15
        0  0  0    0 15  7    0  0  0    0  0  0
        0  0  0    0  0  0    0 15  7    0  0  0
       15  0 15    0  0  0    0  0  0    0  0  0

    This is how I was reading the data from the files:

        ifstream fdatos;
        fdatos.open(argv[1]);          // Open the file with the names of all the images
        const int size = 128;
        char file[size];               // Where I'll put each image name
        Image *img;
        while (fdatos >> file) {       // While there are image names left, continue
            ifstream fimagen;
            fimagen.open(file);        // Open image file
            img = new Image(fimagen);  // Create new image object from its data file
            ...                        // Rest of the calculations with that image
            delete img;                // Delete image object when done
            fimagen.close();           // Close image file when done
        }
        fdatos.close();

    And inside the Image object I read the data like this:

        const int tallafirma = 100;
        char firma[tallafirma];
        fich_in >> std::setw(100) >> firma;      // Read the P3 part; can be ignored
        int maxvalue, numpixels;
        fich_in >> height >> width >> maxvalue;  // Read the next three values
        numpixels = height * width;
        datos = new Pixel[numpixels];
        int r, g, b;  // Don't need to be ints; max value is 255, so an unsigned char would be OK
        for (int i = 0; i < numpixels; i++) {
            fich_in >> r >> g >> b;
            datos[i] = Pixel(r, g, b);
        }
        // This last part is the slow one.
        // I think I should be able to read all this data in one single read
        // into a buffer or something, stored in an array of unsigned chars,
        // and then I'd only need to do:
        //   buffer[0] -> Pixel 1 - Red data
        //   buffer[1] -> Pixel 1 - Green data
        //   buffer[2] -> Pixel 1 - Blue data

    So, any ideas? I think I can improve it quite a bit by reading everything into an array in one single call, I just don't know how that is done. Also, is it possible to know how many images will be in the "index file"? Is it possible to know the number of lines a file has? (Because there's one file name per line.) Thanks!!
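
    Since the bottleneck is thousands of small formatted extractions (each fich_in >> goes through the stream machinery once per number), the usual cure is exactly what the asker guesses: read the whole file in one call, then parse in memory. A minimal sketch of that idea in Python (illustrative only; the C++ analogue is reading the file into a std::string or std::vector<char> and scanning that buffer):

        def read_ppm(path):
            # One read() pulls the entire file into memory; tokenizing then
            # happens at memory speed instead of one stream extraction per number.
            with open(path, "rb") as f:
                tokens = f.read().split()
            assert tokens[0] == b"P3"                 # magic number, ignored afterwards
            width, height, maxval = (int(t) for t in tokens[1:4])
            flat = [int(t) for t in tokens[4:4 + 3 * width * height]]
            # Regroup the flat value list into (r, g, b) pixels.
            return width, height, list(zip(flat[0::3], flat[1::3], flat[2::3]))

    As for knowing the number of images up front: the index file has one name per line, so its line count is the image count (e.g. sum(1 for line in open(index_path))); there is no way to know it without scanning the file once.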

    Read the article

  • Non-perfect maze generation algorithm

    - by Shylux
    I want to generate a maze with the following properties: The maze is non-perfect, meaning it has loops and multiple ways to reach the exit. The maze should be random; the algorithm should output different mazes for different input parameters. The maze doesn't have to be braided, meaning dead-ends are allowed and appreciated. I just can't find the right resources on Google. The closest I found was this description of the different types of algorithms: http://www.astrolog.org/labyrnth/algrithm.htm. All other algorithms were for perfect mazes. Can anyone give me a website where I can look this up, or maybe an algorithm directly?
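
    One common recipe (an assumption; it is not the only approach on the page linked above) is to carve a perfect maze first, then knock out a fraction of the remaining walls: every removed wall creates a loop, most dead-ends survive, and a seed parameter makes the output reproducible per input. A sketch in Python:

        import random

        def make_maze(w, h, extra_openings=0.1, seed=None):
            # walls[(a, b)] is True while the wall between adjacent cells a and b stands.
            rng = random.Random(seed)   # different seed -> different maze
            walls = {}
            for x in range(w):
                for y in range(h):
                    if x + 1 < w:
                        walls[((x, y), (x + 1, y))] = True
                    if y + 1 < h:
                        walls[((x, y), (x, y + 1))] = True

            def neighbours(c):
                x, y = c
                for n in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                    if 0 <= n[0] < w and 0 <= n[1] < h:
                        yield n

            # Step 1: depth-first backtracker -> a perfect maze
            # (exactly one path between any two cells).
            start = (rng.randrange(w), rng.randrange(h))
            visited, stack = {start}, [start]
            while stack:
                cell = stack[-1]
                unvisited = [n for n in neighbours(cell) if n not in visited]
                if not unvisited:
                    stack.pop()
                    continue
                nxt = rng.choice(unvisited)
                walls[tuple(sorted((cell, nxt)))] = False   # carve through the wall
                visited.add(nxt)
                stack.append(nxt)

            # Step 2: make it non-perfect -- removing surviving walls adds loops
            # and multiple routes, while leaving most dead-ends intact.
            standing = [k for k, up in walls.items() if up]
            for k in rng.sample(standing, int(len(standing) * extra_openings)):
                walls[k] = False
            return walls    # every key with value False is an open passage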

    Read the article

  • Non-Dom Element Event Binding with jQuery

    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake events on objects that are hookable. jQuery makes it real easy to bind events on DOM elements and, with a little bit of extra work (that I didn't know about), you can also set up bindings to non-DOM element events. Assume for a second that you have a simple JavaScript object like this: var item = { sku: "wwhelp", foo: function() { alert('orginal foo function'); } }; and...

    Read the article

  • Using RegEx to replace invalid characters

    - by yeahumok
    Hello, I have a directory with lots of folders and subfolders, all with files in them. The idea of my project is to recurse through the entire directory, gather up all the names of the files, and replace invalid characters (invalid for a SharePoint migration). However, I'm completely unfamiliar with regular expressions. The characters I need to get rid of in filenames are: ~, #, %, &, *, { }, \, /, :, <, ?, -, | and "" I want to replace these characters with a blank space. I was hoping to use a string.Replace() method to look through all these file names and do the replacement. So far, the only code I've gotten to is the recursion. I was thinking of the recursion scanning the drive, fetching the names of these files and putting them in a List. Can anybody help me with how to find and replace those specific invalid characters with RegEx?
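
    A single character class covers the whole list; a sketch in Python (the same pattern string works with .NET's Regex.Replace in the C# recursion the asker already has). The backslash and the hyphen are escaped because they are special inside a class; the rest can appear literally:

        import re

        # ~ # % & * { } \ / : < ? - | and the double quote, each replaced by a space.
        INVALID = re.compile(r'[~#%&*{}\\/:<?\-|"]')

        def clean_name(name: str) -> str:
            return INVALID.sub(" ", name)

        print(clean_name("C&A Report: Q1 {draft} #2"))  # 'C A Report  Q1  draft   2'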

    Read the article

  • How to hire a web-programmer : for non-programmer

    - by 0Complex
    I am a non-programmer that has used the services of freelancer, oDesk, etc. I've tried asking for what I need, but I can't find anyone who can show me any type of example similar to what I request in the specs for the web programming. They have front ends and back ends, but they don't fulfill true "live" website requirements; "live" as in ready to support traffic, keys in hand, can be updated constantly by me, ... How do I figure out how to evaluate a programmer? How do I bid the appropriate price for the services?

    Read the article

  • Detect CJK characters in PHP

    - by Jasie
    Hello, I've got an input box that allows UTF-8 characters -- can I detect programmatically whether the characters are Chinese, Japanese, or Korean (part of some Unicode range, perhaps)? I would change search methods depending on whether MySQL's fulltext searching would work (it won't work for CJK characters). Thanks!
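
    The usual trick is exactly the range test the asker suspects: look for any code point in the main CJK blocks. A sketch in Python (the same ranges work in PHP via preg_match with the /u modifier and \x{...} escapes); the block list is illustrative and omits rarer extensions:

        import re

        # CJK Unified Ideographs, Extension A, Hiragana/Katakana, Hangul syllables.
        CJK = re.compile(r"[\u4e00-\u9fff\u3400-\u4dbf\u3040-\u30ff\uac00-\ud7a3]")

        def contains_cjk(text: str) -> bool:
            return CJK.search(text) is not None

        print(contains_cjk("hello"))  # False
        print(contains_cjk("你好"))    # True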

    Read the article

  • Game Center alternatives for non-iOS development

    - by Eat at Joes
    I have completed a game for iOS which integrates GameKit. I am happy with Game Center; however, my game also has an HTML5 web version and will soon have an Android version. My question is: what alternatives do I have for non-iOS platforms, primarily for Android and to a lesser extent a JavaScript/web SDK? I looked at OpenFeint a year ago and it seemed to be a good solution back then, but I am not sure if this is still the case. Note, I have no plans to replace what I already have in my iOS game, and I understand the leaderboards, users, and achievements won't be shared out of Game Center.

    Read the article

  • Unicode characters in URLs

    - by Pekka
    In 2010, would you serve URLs containing UTF-8 characters in a large web portal? Unicode characters are forbidden as per the RFC on URLs (see here). They would have to be percent-encoded to be standards compliant. My main point, though, is serving the unencoded characters for the sole purpose of having nice-looking URLs, so percent encoding is out. All major browsers seem to parse those URLs okay no matter what the RFC says. My general impression, though, is that it gets very shaky when leaving the domain of web browsers: URLs getting copy+pasted into text files, e-mails, even web sites with a different encoding; HTTP client libraries; exotic browsers, RSS readers. Is my impression correct that trouble is to be expected here, and thus it's not a practical solution (yet) if you're serving a non-technical audience and it's important that all your links work properly even if quoted and passed on? Is there some magic way of serving nice-looking URLs in HTML http://www.example.com/düsseldorf?neighbourhood=Lörick that can be copy+pasted with the special characters intact, but work correctly when re-used in older clients?
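
    The practical middle ground the question circles around is to treat the readable form as an IRI and percent-encode its UTF-8 bytes wherever the URL travels as text; browsers that support IRIs then display the decoded form in the address bar while the wire format stays RFC-clean. A sketch of that mapping in Python (illustrative):

        from urllib.parse import quote

        # IRI -> URI: percent-encode the UTF-8 bytes of the non-ASCII parts.
        path = quote("/düsseldorf", safe="/")
        query = "neighbourhood=" + quote("Lörick")
        print("http://www.example.com" + path + "?" + query)
        # http://www.example.com/d%C3%BCsseldorf?neighbourhood=L%C3%B6rick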

    Read the article

  • Google cleared by the CJEU in the case brought by Louis Vuitton; the notion of host non-liability

    Google cleared by the CJEU in the case brought by Louis Vuitton; the notion of the hosting provider's non-liability is under debate. The Court of Justice of the European Union has just delivered its verdict in the case pitting Google against Louis Vuitton (the luxury trunk-maker, a subsidiary of the very powerful LVMH group). In this particularly complicated case, the applicable European legislation had to be examined from every angle before a ruling could be made. Two points demanded the greatest attention: first, "the use of keywords corresponding to the trademarks of others within an internet referencing service", and then "the liability of the referencing service provider". In resp...

    Read the article

  • Blender to Collada to Assimp - Rigid (Non-skinned) Animation

    - by gareththegeek
    I am trying to get simple animations to work, exporting from Blender and importing into my application. My first attempt was as follows: Open Blender at factory settings. Select the default cube and insert a location keyframe. Select another frame and move the cube. Insert a second location keyframe. Export to Collada. When I open the Collada file using Assimp it contains zero animations, even though in Blender the cube animates correctly. On my next attempt, I inserted a bone armature with a single bone, made it the parent of the cube, and animated the bone instead. Again the animation worked correctly in Blender. Assimp now lists one animation, but both keyframes have the position [0, 0, 0]. Does anyone have any suggestions as to how I can get animated (non-skinned) meshes from Blender into Assimp? My ultimate goal here is to export animated meshes from Blender, process them offline into my own model format, and load them into my SharpDX-based graphics engine.

    Read the article

  • Unable to Retrieve Simplified Chinese Characters From Form

    - by Bullines
    I have a page that displays content retrieved from XML with no problems: <?xml version="1.0" encoding="UTF-8"?> <Root> <Fields> <NamePrompt>??</NamePrompt> </Fields> </Root> Page encoding is set to GB18030 and it displays perfectly. However, when I retrieve inputted text from HttpContext.Current.Request.Form that's been entered with double-byte characters, the retrieved string contains unreadable characters. Single-byte characters are fine, obviously. I've tried the following to no avail: byte[] valueBytes = Encoding.UTF8.GetBytes(HttpContext.Current.Request.Form["fullName"]); string value = Encoding.UTF8.GetString(valueBytes); I don't see this problem with other double-byte languages like Japanese or Korean. How can I successfully retrieve double-byte characters from a page that's GB18030 encoded?
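
    The byte[]/string round trip in the question cannot help: Encoding.UTF8.GetBytes followed by Encoding.UTF8.GetString is an identity operation on a string that has already been decoded, and the damage happens earlier, when the posted bytes are decoded into the Form collection with the wrong charset. In ASP.NET that decoding is governed by the globalization settings (e.g. requestEncoding in web.config) or Request.ContentEncoding, which would need to match the page's GB18030. The byte-level situation, sketched in Python for illustration:

        # A GB18030 page posts these bytes for the form value:
        wire = "名字".encode("gb18030")

        print(wire.decode("gb18030"))     # '名字' -- decoded with the page's charset
        garbled = wire.decode("latin-1")  # what a wrong single-byte decode produces
        print(garbled)                    # mojibake, one character per byte

        # A wrong-but-lossless decode can still be repaired by reversing it:
        print(garbled.encode("latin-1").decode("gb18030"))  # '名字' again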

    Read the article

  • How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing

    - by Oppositional
    Is there any easy/general way to clean an XML-based data source prior to using it in an XmlReader so that I can gracefully consume XML data that is non-conformant to the hexadecimal character restrictions placed on XML? Note: The solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding at the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal encoded values, as you can often find href values in data that happen to contain a string that would be a match for a hexadecimal character. Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but want to be able to consume data sources that have been published with invalid hexadecimal characters per the XML specification. In .NET, if you have a Stream that represents the XML data source and attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.
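
    For the stripping step itself, the safe order is the one the question is already driving at: decode the stream with its declared encoding first, then remove the offending code points from the decoded text, then hand the cleaned text to the XML parser. That way the character encoding is never mangled, and string data that merely resembles an escape (an href, say) is untouched, because the test is on real code points rather than substrings. A sketch in Python using the XML 1.0 Char production ranges:

        import re

        # Everything NOT allowed by the XML 1.0 Char production.
        _INVALID_XML = re.compile(
            "[^\u0009\u000a\u000d\u0020-\ud7ff\ue000-\ufffd"
            "\U00010000-\U0010ffff]"
        )

        def strip_invalid_xml_chars(text: str) -> str:
            # 'text' must already be decoded with the document's declared encoding.
            return _INVALID_XML.sub("", text)

        print(strip_invalid_xml_chars("ok\x0bstill ok\x1f"))  # 'okstill ok'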

    Read the article

  • weird characters displayed during serial communication OSX

    - by nemo
    I have tried communicating via serial (OSX with Prolific drivers - USB RS232 adapter - Tx, Rx and GND pins on the device's serial TTL port) to a device, and done so successfully using screen /dev/tty.usbserial 115200 8N1; I get to log in and use it as if I were SSHed or Telnetted in. However, whenever I try to go into system recovery mode (holding CTRL+1) while the device is powering on, it starts displaying weird characters, and it continues showing weird characters until I close the screen session: Of course, when we tried doing the same thing on my boss' MacBook running Windows and PuTTY, everything worked fine; even in system recovery mode, characters were displayed properly. What gives? I'd like to learn the right intuition to use here, because up till now I had concluded that since I can boot into the system and see characters normally, everything about the connection should be fine, and it must have been the recovery partition that was broken. This was wrong, of course... Niko

    Read the article

  • Non-zero exit status for clean exit

    - by trinithis
    Is it acceptable to return a non-zero exit code if the program in question ran properly? For example, say I have a simple program that (only) does the following: the program takes N arguments and returns an exit code of min(N, 255). Note that any N is valid for the program. A more realistic program might return different codes for successfully run programs that signify different things. Should these programs write this information to a stream instead, such as stdout?
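
    Whatever the answer on style, the described program itself is tiny; a sketch in Python for concreteness:

        import sys

        # Exit code = number of arguments, capped at 255: POSIX exit statuses are
        # one byte, so the cap also prevents silent wraparound for large N.
        sys.exit(min(len(sys.argv) - 1, 255))

    Running it as "python argc.py a b c" and then checking the shell's $? yields 3.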

    Read the article

  • No norwegian characters in LaTeX

    - by DreamCodeR
    Hi, I have translated a document from English to Norwegian in the LaTeX format, and while using Norwegian special characters, I get an error using \usepackage[utf8x]{inputenc} to try and display the Norwegian (Scandinavian) special characters in PostScript/PDF/DVI format, saying Package utf8x Error: MalformedUTF-8sequence. So while that didn't work, I tried out another possible solution: \usepackage{ucs} \usepackage[norsk]{babel} And when I tried to save that in Emacs I get this message: These default coding systems were tried to encode text in the buffer `lol.tex': (utf-8-unix (905 . 4194277) (916 . 4194245) (945 . 4194278) (950 . 4194277) (954 . 4194296) (990 . 4194277) (1010 . 4194277) (1013 . 4194278) (1051 . 4194277) (1078 . 4194296) (1105 . 4194296)) However, each of them encountered characters it couldn't encode: utf-8-unix cannot encode these: \345 \305 \346 \345 \370 \345 \345 \346 \345 \370 ... Thanks to Emacs I have the possibility to check out the properties of those characters, and the first one tells me: character: \345 (4194277, #o17777745, #x3fffe5) preferred charset: eight-bit (Raw bytes 128-255) code point: 0xE5 syntax: w which means: word buffer code: #xE5 file code: not encodable by coding system utf-8-unix display: not encodable for terminal Which doesn't tell me much. When I try to build this with texi2dvi --dvipdf filename.tex I get a perfectly fine PDF, all without the special Norwegian characters. When I am about to save, Emacs also asks me: "Select coding system (default raw-text):" and I type in utf-8 to choose its coding system. I have also tried choosing the default raw-text to see if I get some different result. But nothing. At last I tried \lstset{inputencoding=utf8x, extendedchars=\true} ... a code I came across while trying to google the solution to this problem. Which gives me this error: Undefined control sequence. So basically, I have tried every encoding option I have been able to find and nothing works. I am desperately trying to make this work since the Norwegian translation must be published before the deadline. As additional information I may add that I found out later on that I only had en_US.UTF-8 in my locale, so I added nb_NO.UTF-8 and nb_NO.ISO-8859-15 and ran locale-gen + reboot without any changes. I hope I provided enough information to get some assistance; the characters in question are æ, ø and å.
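
    One observation from the Emacs dump above: the unencodable bytes \345, \305, \346 and \370 are exactly å, Å, æ and ø in ISO-8859-1 (0xE5, 0xC5, 0xE6, 0xF8), which suggests the .tex file on disk is actually Latin-1 rather than UTF-8; utf8x then chokes on the first such byte. Under that assumption, a minimal preamble sketch is to either declare the file's real encoding or re-save it as genuine UTF-8:

        % If the .tex file is really ISO-8859-1 (Latin-1), declare that:
        \usepackage[latin1]{inputenc}
        \usepackage[T1]{fontenc}   % font encoding with proper glyphs for ae, o-slash, a-ring
        \usepackage[norsk]{babel}  % Norwegian hyphenation and names

        % ...or re-save the file as genuine UTF-8 (e.g. C-x RET f utf-8 in Emacs,
        % then save) and keep:
        % \usepackage[utf8]{inputenc}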

    Read the article

  • PHP: Cyrillic characters not displayed correctly

    - by user295502
    Recently I switched hosting from one provider to another, and I have problems displaying Cyrillic characters. The characters which are read from the database are displayed correctly, but characters which are hardcoded in the PHP file aren't (they are displayed as question marks). The files which contain the PHP source code are saved in UTF-8 form. Can anybody help?

    Read the article

  • Some special characters defined in "ISO-8859-1" can't be shown when encoding with "UTF-8"

    - by Mike.Huang
    I need to get a string from the URL request of a browser, and then create a text image from the requested text. I know the default encoding of Java net transmission is ISO-8859-1; it works normally with all characters defined in ISO-8859-1. But when I request a multi-byte Unicode character (e.g. Chinese, or something like ¤?), I need to decode it as UTF-8 rather than ISO-8859-1. My code looks like: String result = new String(requestString.getBytes("ISO-8859-1"), "UTF-8"); Everything is fine, but I found that some characters in ISO-8859-1 are no longer shown, namely the characters 0x80 - 0xFF (as defined in ISO-8859-1); i.e. the characters after 0x80 (in ISO-8859-1) are not shown when converted from ISO-8859-1 to UTF-8. Is there another method that can solve this?
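
    The disappearing range is expected: bytes 0x80-0xFF are complete characters in ISO-8859-1, but standing alone they are invalid UTF-8, so reinterpreting genuine ISO-8859-1 text as UTF-8 destroys exactly those characters. The new String(s.getBytes("ISO-8859-1"), "UTF-8") trick is only safe when the underlying bytes really are UTF-8 (a UTF-8-encoded request read with the ISO-8859-1 default); it cannot be applied to everything. Illustrated in Python:

        # Bytes 0x80-0xFF are single characters in ISO-8859-1 ...
        raw = "café".encode("iso-8859-1")   # é -> the single byte 0xE9
        print(raw.decode("iso-8859-1"))     # 'café'

        # ... but on their own they are invalid UTF-8:
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError as e:
            print(e)                        # 0xE9 is not a valid UTF-8 sequence

        # The reinterpretation trick only works on bytes that really are UTF-8:
        utf8 = "café".encode("utf-8")       # é -> the two bytes 0xC3 0xA9
        print(utf8.decode("utf-8"))         # 'café'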

    Read the article
