Search Results

Search found 5371 results on 215 pages for 'church encoding'.

  • Ivar definitions show 'long' type encoding as 'long long' type encoding

    - by Frank C.
    I've found what I think may be a bug with Ivar and the Objective-C runtime. I'm using XCode 3.2.1 and associated libraries, developing a 64 bit app on X86_64 (MacBook Pro). Where I would expect the type encoding for the following "longVal" to be 'l', the Ivar encoding is showing a 'q' (which is a 'long long'). Anyone else seeing this? Simplified code and output follows:

    Code:

        #import <Foundation/Foundation.h>
        #import <objc/runtime.h>

        @interface Bug : NSObject {
            long longVal;
            long long longerVal;
        }
        @property (nonatomic,assign) long longVal;
        @property (nonatomic,assign) long long longerVal;
        @end

        @implementation Bug
        @synthesize longVal,longerVal;
        @end

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            unsigned int ivarCount=0;
            Ivar *ivars = class_copyIvarList([Bug class], &ivarCount);
            for(unsigned int x=0; x<ivarCount; x++) {
                NSLog(@"Name [%@] encoding [%@]",
                      [NSString stringWithCString:ivar_getName(ivars[x]) encoding:NSUTF8StringEncoding],
                      [NSString stringWithCString:ivar_getTypeEncoding(ivars[x]) encoding:NSUTF8StringEncoding]);
            }
            [pool drain];
            return 0;
        }

    And here is the output from the debug console:

        This GDB was configured as "x86_64-apple-darwin".
        tty /dev/ttys000
        Loading program into debugger…
        sharedlibrary apply-load-rules all
        Program loaded.
        run
        [Switching to process 6048]
        Running…
        2010-03-17 22:16:29.138 ivarbug[6048:a0f] Name [longVal] encoding [q]
        2010-03-17 22:16:29.146 ivarbug[6048:a0f] Name [longerVal] encoding [q]
        (gdb) continue

    Not a pretty picture! -- Frank

    Read the article

  • file-name encoding problems

    - by tenhouse
    I googled this topic but couldn't find what I was looking for... the following happened to me: I had my files stored on an NTFS USB hard disk; because of space problems I moved them to an ext3 filesystem. Somehow the filename encoding got screwed up (the content is still OK as far as I can tell). My files now look like this:

        Kküken   <--- should have an "ü"
        Jäger   <--- should be an "ä"
        Zwölf     <--- should be an "ö"
        fünfte    <--- should be an "ü"

    These are just examples, but they already give me my first question: why does the "ü" have two different representations? (Maybe I screwed up before, and now I have a mix of several different encoding layers? :) ) I tried the following command:

        convmv -r -f UTF-8 -t ISO-8859-1 *

    This command works for some files (for example Zwölf) but not for all: ISO-8859-1 doesn't cover all the characters needed for "fünfte". So I guess it must be another encoding - but which? How can I find that out? And is there any way I can still fix all of this?
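    A single layer of this kind of mix-up (UTF-8 bytes decoded as ISO-8859-1/Latin-1) can be reproduced and reversed in a few lines. A minimal Python sketch, assuming the damage is limited to one such round trip:

        # 'ü' stored as UTF-8 (0xC3 0xBC) but decoded as Latin-1 shows up as two characters
        broken = 'ü'.encode('utf-8').decode('latin-1')
        print(broken)   # Ã¼

        # reversing the mistake: re-encode with the wrong charset, decode with the right one
        fixed = broken.encode('latin-1').decode('utf-8')
        print(fixed)    # ü

    If some names went through that round trip twice (which would explain seeing two different renderings of the same umlaut), the repair has to be applied twice as well, and the -f/-t direction given to convmv has to match the actual damage rather than the reverse.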

    Read the article

  • How to auto detect text file encoding?

    - by ???
    There are many plain text files encoded in various charsets. I want to convert them all to UTF-8, but before running iconv I need to know each file's original encoding. Most browsers have an Auto Detect option for encodings; however, I can't check these text files one by one because there are too many. Only once I know the original encoding can I convert the text with iconv -f DETECTED_CHARSET -t utf-8. Is there any utility to detect the encoding of plain text files? It doesn't have to be 100% correct, but it should recognize most of them.
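    The detection heuristic that browsers use is available as a library; a minimal Python sketch, assuming the third-party chardet package is installed (pip install chardet):

        import sys
        import chardet

        # read the raw bytes and let the detector guess the charset
        with open(sys.argv[1], 'rb') as f:
            raw = f.read()

        guess = chardet.detect(raw)   # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73, ...}
        print(guess['encoding'], guess['confidence'])

    The detected name can then be passed to iconv -f DETECTED_CHARSET -t utf-8 exactly as described above; the chardetect command-line tool that ships with the package, or utilities such as enca and uchardet, do the same job without any scripting.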

    Read the article

  • Ripping Blu-Ray for Xbox 360 with Minimal Encoding

    - by Adam Haile
    What's the best way to rip a Blu-ray disc to an Xbox 360 compatible format, while preferably maintaining surround sound and doing as little video encoding as possible? As far as I can tell, the 360 technically supports both AVC and VC-1 (though whether it does at those bit rates is questionable), so I'm hoping it's possible to do this without re-encoding the video at all and instead just processing the audio and re-muxing everything together in a new file.

    Read the article

  • How to set the mechanize page encoding?

    - by Juan Medín
    Hi, I'm trying to get a page with ISO-8859-1 encoding by clicking on a link, so the code is similar to this:

        page_result = page.link_with( :text => 'link_text' ).click

    So far I get the result with the wrong encoding, so I see characters like 'T?tulo:' instead of 'Título:'. I've tried several approaches, including:

    Stating the encoding in the first request using the agent:

        @page_search = @agent.get(
          :url => 'http://www.server.com',
          :headers => { 'Accept-Charset' => 'ISO-8859-1' }
        )

    Stating the encoding for the page itself:

        page_result.encoding = 'ISO-8859-1'

    But I must be doing something wrong: a simple puts always shows the wrong characters. Do you know how to set the encoding? Thanks in advance.

    Added - executable example:

        require 'rubygems'
        require 'mechanize'

        WWW::Mechanize::Util::CODE_DIC[:SJIS] = "ISO-8859-1"

        @agent = WWW::Mechanize.new
        @page = @agent.get(
          :url => 'http://www.mcu.es/webISBN/tituloSimpleFilter.do?cache=init&layout=busquedaisbn&language=es',
          :headers => { 'Accept-Charset' => 'utf-8' }
        )
        puts @page.body

    Read the article

  • Encoding over SSH Issues

    - by user1104160
    I have a Linux machine and a Windows machine, both using Vim with the Powerline plugin. They both work great with patched fonts. Next, I want to SSH into an OS X 10.6 machine and also use Powerline in the terminal with Vim. However, I get weird symbols in normal mode ("^^B" in one area) and in fancy mode ("~@" and "~B" spread throughout the bar). I thought this mix-up was an encoding issue, but when I look at PuTTY's encoding it is using UTF-8, and the same goes for the Ubuntu terminal. Additionally, on the OS X machine, "locale" returns "en_US.UTF-8" for all variables (I set it that way in order to troubleshoot). However, the symbols still show up. I am using a patched font (Inconsolata, the same one as in the Ubuntu terminal) for the OS X terminal, so I am stumped. Is there a missing component in this equation? Are there additional problems that can arise from SSH encoding? The same symbols also appear on the OS X end itself, so it may not even be related to SSH, and therefore I'm totally lost.

    Read the article

  • Does FFMpeg support gpu acceleration of media encoding/decoding?

    - by Jason123
    I was wondering if FFmpeg supports GPU acceleration. I was reading their websites and came across contradictory information.

    http://www.ffmpeg.org/general.html#Video-Codecs
        - H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (VDPAU acceleration)

    http://ffmpeg.org/trac/ffmpeg/wiki/x264EncodingGuide
        - Will a graphics card make x264 encode faster? No. libx264 doesn't use them (at least not yet). There are some proprietary encoders that utilize the GPU, but that does not mean they are well optimized, though encoding time may be faster; and they might be worse than x264 anyway, and possibly slower. Regardless, FFmpeg today doesn't support any means of GPU encoding, outside of libx264.

    If not, is there any way to add GPU acceleration to H.264 encoding/decoding?

    Read the article

  • sql and web encoding problem

    - by Marki
    Guys, I've got an encoding problem, I believe. I have upgraded from phpBB2 to phpBB3. The old databases were in latin1; the new ones have utf8 encoding. Already during the upgrade process some rows of the DB were only partly read into the new version, because of strange characters, as it turned out. When I use PHP's mb_convert_encoding() function to convert those strings to UTF-8 they end up e.g. as 0x0093, i.e. they must have been some kind of double quotes. Even after doing this conversion, they still show up as 0x0093 in the browser (the squares with 0093 in them that the browser shows when it does not know what to display). Can someone explain the problem here? I'm a little confused, and afraid I don't see all the dependencies that need to work together to get the correct encodings and the correct display thereof...
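    The 0x93 value is itself a strong hint about where the problem is; a short Python sketch of the reasoning, assuming the old phpBB2 data was really Windows-1252 rather than plain Latin-1 (a very common situation for "smart" quotes):

        raw = b'\x93quoted\x94'   # 0x93/0x94 are curly double quotes in Windows-1252

        print(raw.decode('cp1252'))    # “quoted” - the text as originally intended
        print(raw.decode('latin-1'))   # maps 0x93/0x94 to invisible C1 control characters,
                                       # which browsers render as the 0093 boxes described above

    In other words, converting from ISO-8859-1 carries those bytes over as control characters; declaring the source as Windows-1252/CP1252 during the conversion is usually what recovers the curly quotes.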

    Read the article

  • Video encoding is very slow on Amazon EC2 instance

    - by Timka
    We are using an Amazon EC2 m1.xlarge instance for video re-encoding and it looks like the actual encoding process takes a very long time. For an average 250 MB video file it takes about an hour to encode.

        Instance: m1.xlarge (Xeon E5645, 15 GB RAM)
        Windows Server 2008 R2 64-bit
        AviSynth version 2.5 (32-bit) + ffms2 plugin (FFmpegSource 1.21)
        FFmpeg SVN-r13712
          libavutil   3213056
          libavcodec  3356930
          libavformat 3411456
          libavdevice 3407872
        Number of parallel jobs: 3
        Average CPU utilization: ~96%

    Update #1

    Source video: mp4/h.264

    Parameters for ffmpeg:

        --enable-memalign-hack --enable-avisynth --enable-libxvid --enable-libx264
        --enable-libgsm --enable-libfaac --enable-libfaad --enable-liba52
        --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-pthreads
        --enable-swscale --enable-gpl

    Video files are encoded to mp4/h.264 with the following extra command line options:

        -threads 0 -coder 0 -bf 0 -refs 1 -level 30 -maxrate 10000000 -bufsize 10000000

    Read the article

  • c# HTTPListener encoding issue

    - by Rob Griffin
    I have a Java application sending HTTP requests to a C# application. The C# app uses HttpListener to listen for requests and respond. On the Java side I'm encoding the URL using UTF-8. When I send a \ character it gets encoded as %5C as expected, but on the C# side it becomes a / character. The encoding of the request object is Windows-1252, which I think may be causing the problem. How do I set the default encoding to UTF-8? Currently I'm doing this to convert the encoding:

        foreach (string key in request.QueryString.Keys)
        {
            if (key != null)
            {
                byte[] sourceBytes = request.ContentEncoding.GetBytes(request.QueryString[key]);
                string value = Encoding.UTF8.GetString(sourceBytes);
            }
        }

    This handles the non-ASCII characters I'm also sending, but doesn't fix the slash problem. Examining request.QueryString[key] in the debugger shows that the / is already there.

    Read the article

  • International JRE6 or JDK6 or reading a file in "cp037" encoding scheme

    - by Reddy
    I have been trying to read a file in the "cp037" encoding scheme using Java. I am able to read files in basic encoding schemes like UTF-8, UTF-16 etc. After a bit of research on the internet I learned that we need charsets.jar, or the international version of the JRE installed, to support extended encoding schemes. Can anyone send me a link for the international version of JRE6 or JDK6? Or is there any better way I could read a file in the cp037 encoding scheme? P.S.: cp037 is a character encoding scheme used on IBM mainframes. All I need is to display, on Windows, a file generated on an IBM mainframe machine, using a Java program. Thanks in advance for your help... :-)
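    As a point of comparison only (not a fix for the Java setup itself), CP037 is a standard EBCDIC code page and other platforms ship a codec for it out of the box; a minimal Python sketch, assuming a hypothetical file name and plain CP037 text with no packed-decimal or binary fields:

        # decode an EBCDIC (CP037) text file and re-save it as UTF-8 for viewing on Windows
        with open('mainframe_output.txt', encoding='cp037') as src:
            text = src.read()

        with open('mainframe_output_utf8.txt', 'w', encoding='utf-8') as dst:
            dst.write(text)

    Note that mainframe datasets are often transferred as fixed-length records without line terminators, so line breaks may need to be reinserted separately regardless of which language does the decoding.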

    Read the article

  • What encoding to use for exporting to CSV?

    - by Michael Borgwardt
    I'm developing a Java app that exports data to CSV files intended to be opened in Excel by end users. We just noticed that the export function uses Java's platform default encoding. This causes umlaut characters to be lost and unit tests to fail on the build server (which is configured with US-ASCII as its platform default encoding exactly to catch such potential problems). The question is: which would be the best encoding to use? How does Excel determine what encoding to use? Does it use something platform-specific that presumably matches Java's platform default? I'm currently leaning towards hardcoding Cp1252 - that should cover the target machines (the deployment environment is actually specified) and would fix the test problem. From googling around, Excel does not seem to handle UTF-8 well, so that's out, and sticking to the platform default encoding would require some sort of workaround hack for the tests.
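    For reference, the two usual choices look like this in code; a minimal Python sketch of the idea (the same applies to a Java OutputStreamWriter with an explicit charset), where option 2 writes a UTF-8 byte-order mark, which is what typically makes Excel pick up UTF-8 instead of falling back to the ANSI code page:

        import csv

        rows = [['Müller', 'Köln'], ['Schäfer', 'Zürich']]

        # option 1: hardcode the Windows Western European code page (covers umlauts, not e.g. CJK)
        with open('export_cp1252.csv', 'w', encoding='cp1252', newline='') as f:
            csv.writer(f).writerows(rows)

        # option 2: UTF-8 with a byte-order mark ("utf-8-sig")
        with open('export_utf8.csv', 'w', encoding='utf-8-sig', newline='') as f:
            csv.writer(f).writerows(rows)

    Hardcoding Cp1252 matches a known Windows deployment and keeps the tests deterministic; the BOM variant is the usual escape hatch if characters outside that code page ever appear in the data.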

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL starting here and I am confused by how the Reed-Solomon encoding used for ECC limits the available transfer rate as much as it does (nearly half). This PDF on the same subject contains the following:

        A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps.

    How is this nearly halving the throughput? Is all that extra bandwidth the overhead of the RS encoding? 15240000 bps (15.24Mbps) - 8160000 bps (8.16Mbps) = 7080000 bps (7.08Mbps). Where has that 7Mbps of throughput gone?

    EDIT: I tried to read the wiki page on Reed-Solomon but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255-byte codewords, because that may be the maximum codeword size that still maintains accuracy during transmission; but I don't understand why that means less data is sent?
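    Both headline numbers can be reproduced from the figures in the quoted passage, which also shows where the gap comes from: the 15.24Mbps figure assumes every sub-carrier carries its full 15 bits, while the 8.16Mbps figure assumes each of the 4000 DMT frames per second carries at most one 255-byte Reed-Solomon codeword. A small worked sketch in Python (the one-codeword-per-frame reading is an interpretation of the quoted text, not something it states outright):

        FRAMES_PER_SEC = 4000   # ADSL DMT data frame rate
        SUBCARRIERS    = 254    # usable downstream sub-carriers
        BITS_PER_TONE  = 15     # maximum bits modulated per sub-carrier
        RS_CODEWORD    = 255    # maximum Reed-Solomon codeword size, in bytes

        # theoretical ceiling: every tone fully loaded on every frame
        print(SUBCARRIERS * BITS_PER_TONE * FRAMES_PER_SEC)   # 15240000 bps -> 15.24 Mbps

        # architectural ceiling: one 255-byte codeword per frame
        print(RS_CODEWORD * 8 * FRAMES_PER_SEC)               # 8160000 bps -> 8.16 Mbps

    On that reading, the missing 7.08Mbps is not Reed-Solomon parity overhead at all: 255 bytes is only 2040 bits per frame, while the 254 tones could in principle carry 3810 bits, so the framing simply cannot address the rest.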

    Read the article

  • Email encoding on IIS7

    - by Ivanhoe123
    All emails sent from the server display Cyrillic letters as weird characters, for example: Можно. Regular alphabet letters are rendered properly. I searched all across the web but was not able to find any solution. Here is some information about the system:

        Dedicated server with Windows 2008 and IIS7
        Applications are in PHP (run as FastCGI)
        If it is of any importance, SmarterMail is installed on the server

    The emails are sent using PHP's mail() function through a Drupal website. Encoding on that site is set up properly and there are no display issues on the front end. Where is the problem? How can I get the Cyrillic letters properly encoded? Any help is greatly appreciated. Thanks!

    UPDATE

    Here are the email headers:

        Received: from SERVERNAME (mail.domain.com [12.123.123.123]) by mail.domain.com with SMTP;
            Fri, 16 Nov 2012 00:00:00 +0100
        From: [email protected]
        To: [email protected]
        Subject: Email subject
        Date: Fri, 16 Nov 2012 00:00:00 +0100
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: quoted-printable
        X-Mailer: Drupal
        Sender: [email protected]
        Return-Path: [email protected]
        Message-ID: f98b801988c642ef911ef46f7cace92b@com
        X-SmarterMail-Spam: SPF_None, ISpamAssassin 8 [raw: 5], DK_None, DKIM_None, Custom Rules []
        X-SmarterMail-TotalSpamWeight: 8
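    The garbled sample narrows the problem down by itself: it is exactly what UTF-8 encoded Cyrillic looks like when displayed as Windows-1252/Latin-1, i.e. the bytes are fine but the charset applied on display is wrong. A small Python sketch of that check (the word below is used purely as a test string):

        # UTF-8 bytes of a Cyrillic word, mis-decoded as Windows-1252
        mangled = 'Можно'.encode('utf-8').decode('cp1252')
        print(mangled)   # Можно - the same garbage shown in the question

    Whether the mismatch happens in the sending code or in the receiving client, the underlying bytes are valid UTF-8; the charset label actually honoured at display time is what needs to be tracked down.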

    Read the article

  • Force encoding with IIS 7

    - by Cédric Boivin
    I am trying to force the encoding with IIS 7. When I add the key Content-Type with the value charset=utf-8 to the HTTP response headers, I get this header:

        content-type: text/html,content-type=utf-8

    Is there a way to remove the comma?

    Thanks Justin for your answer, but it doesn't seem to work. Here is my config; I need to do this for classic ASP.

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <remove fileExtension=".html" />
              <remove fileExtension=".hxt" />
              <remove fileExtension=".htm" />
              <remove fileExtension=".asp" />
              <mimeMap fileExtension=".htm" mimeType="text/html" />
              <mimeMap fileExtension=".hxt" mimeType="text/html" />
              <mimeMap fileExtension=".html" mimeType="text/html" />
              <mimeMap fileExtension=".asp" mimeType="text/html; charset=UTF-8" />
            </staticContent>
          </system.webServer>
        </configuration>

    Read the article

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it is currently outputting Latin1 (ISO-8859-1) to the web browser. These are the components:

        MySQL - all data is stored with the Latin1 character set
        PHP   - all PHP text files are stored on disk with Latin1 encoding
        HTML  - the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag

    So, I'm trying to understand how the encoding of the different parts comes into play in my workflow. If I open a PHP script, change its encoding within the text editor to UTF-8, save it back to disk and reload the web browser, the text is all messed up - unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in Latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML code, the browser will read it incorrectly.

    So yeah, I realise that if I want to "upgrade" to UTF-8, I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.
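    For the last question (bulk-converting the PHP source files on disk), the mechanical part is small enough to script; a minimal Python sketch, assuming every file really is Latin1 today and that the tree is under version control or backed up (a shell loop around iconv or recode would do the same job):

        from pathlib import Path

        root = Path('/path/to/php/tree')   # hypothetical location of the codebase

        for path in root.rglob('*.php'):
            text = path.read_text(encoding='latin-1')   # Latin-1 can decode any byte sequence
            path.write_text(text, encoding='utf-8')     # write the same text back as UTF-8

    The database is the part that needs more care: MySQL's ALTER TABLE ... CONVERT TO CHARACTER SET utf8 converts correctly only when the column's declared charset matches what is actually stored, which is exactly the "mixed layers" risk described above.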

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication, as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Of the approximately 180,000 Words in use in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] encoded = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, though, a Word has both a Spelling & a Meaning. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure.

    What are other examples of "Lexical Encoding" techniques?

    If you are interested in where the word-usage statistics come from: http://www.wordcount.org
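    Frequency-ranked word IDs of this kind are easy to prototype, and they are essentially what modern tokenizers and word-embedding vocabularies do; a minimal Python sketch with a made-up ranking (the IDs are illustrative, not the wordcount.org ranks):

        # toy word-level encoder: known words map to a frequency rank, punctuation to constants
        VOCAB = {'how': 93, 'are': 22, 'you': 14, 'today': 330}
        QUERY = -1    # sentence-final question mark
        UNKNOWN = 0   # out-of-vocabulary words

        def encode(sentence):
            words = sentence.lower().rstrip('?!.').split()
            ids = [VOCAB.get(w, UNKNOWN) for w in words]
            if sentence.endswith('?'):
                ids.append(QUERY)
            return ids

        print(encode("How are you today?"))   # [93, 22, 14, 330, -1]

    The hard part of the proposal is not this mapping but the "language-neutral atomic elements of meaning"; interlingua representations of that kind, and the word-sense disambiguation they require, are a much larger problem than the encoding itself.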

    Read the article

  • How to retain similar character encoding

    - by Mystere Man
    I have a logfile that contains the half character ½. I need to process this log file and rewrite certain lines, which contain that character, to a new file. However, when I write out the file, the characters appear incorrectly in Notepad. I know this is some kind of encoding issue, and I'm not sure if it's just that the files I'm writing don't contain the correct BOM or what. I've tried reading and writing the file with all the available options in the Encoding enumeration. I'm using this code:

        string line;
        // Note I've used every value of the Encoding enumeration here
        using (StreamReader sr = new StreamReader(file, Encoding.Unicode))
        using (StreamWriter sw = new StreamWriter(newfile, false, Encoding.Unicode))
        {
            while ((line = sr.ReadLine()) != null)
            {
                // processing code; I do not alter the lines, they are copied verbatim,
                // but I do not write every line that I read.
                sw.WriteLine(line);
            }
        }

    When I view the original log in Notepad, the half character displays correctly. When I view the new file, it does not. Can anyone help me solve this?

    Read the article

  • 2 pass encoding or not?

    - by marco.ragogna
    I would like to back up some movies from DVD with File Factory. In the output settings, the 2-pass encoding option is disabled by default. Do I need to enable it for better quality, and is it worth it?

    Read the article

  • How to correct character encoding in IE8 native json ?

    - by mike_t2e
    I am using JSON with Unicode text, and I am having a problem with the IE8 native JSON implementation.

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <script>
            var stringified = JSON.stringify("สวัสดี olé");
            alert(stringified);
        </script>

    Using json2.js or Firefox's native JSON, the alert() string is the same as the original one. IE8, on the other hand, returns Unicode escape values rather than the original text: \u0e2a\u0e27\u0e31\u0e2a\u0e14\u0e35 ol\u00e9. Is there an easy way to make IE behave like the others, or to convert this string back to how it should be? And would you regard this as a bug in IE? I thought native JSON implementations were supposed to be drop-in identical replacements for json2.js.
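    Both outputs are equivalent JSON: escaping non-ASCII characters as \uXXXX is optional for a serializer, and any compliant parser turns the escapes back into the same characters. A short sketch of the same two styles using Python's json module (purely as an illustration of the serializer behaviour, not a fix for IE8):

        import json

        s = "สวัสดี olé"

        escaped = json.dumps(s)                       # "\u0e2a\u0e27... ol\u00e9" - the IE8-style output
        literal = json.dumps(s, ensure_ascii=False)   # "สวัสดี olé" - the json2.js / Firefox-style output

        # both decode back to the identical string
        assert json.loads(escaped) == json.loads(literal) == s

    So the data is not being corrupted on the IE8 side; if the escaped form is unwanted for display, running the string back through the JSON parser restores the original characters.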

    Read the article

  • Encoding movie files into h264

    - by Shiki
    Found some topics about archiving into h264, but those were about generic questions (is it worth it, which codec to use). I want to use h264 (with CUDA, if possible). So far the only usable x264 encoder I have found is Avidemux, but it produces an unwatchable, really blurry video file after encoding (using the best profile, with all settings maxed out). Please describe in detail what to use, where to get it (whether it's free doesn't matter), what settings to use, etc. Thanks in advance. (OS: Windows 7 Ultimate x64; the VGA is VP2 capable with CUDA, GTX260 XFX.) Of course, if there is an up-to-date duplicate, just comment with the link and I'll remove the question ASAP.

    Read the article
