Search Results

Search found 5303 results on 213 pages for 'encoding'.

Page 15/213

  • Is this a good approach to address double-base64-encoding?

    - by Freiheit
    My software understands attachments, like PNGs attached to user records. These attachments are usually sent in from outside sources as a Base64-encoded string. The database stores whatever data it is given, Base64-encoded or not. When I serve up the attachment for download I do this:

        if (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    There is a potential for data that is double-encoded. For instance, the sender of a message had Base64-encoded data, then encoded it again when building the message to send to me. I think the following code would address that circumstance:

        while (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    So if data is encoded multiple times, it would be decoded until it's in its 'raw' state and then served up for download. Is this approach an acceptable way to address that problem? Ideally some sort of checking would happen at the edge when I receive attachment data, but that will take more time; this looping seems to be a faster way to do it. The 'Base64' library is Apache Commons: http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Base64.html — I trust it to properly identify Base64-encoded data.
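
    A minimal sketch of the same decode-until-raw loop, in Python rather than Commons Codec; strict validation plays the role of isBase64 here:

        import base64
        import binascii

        def fully_decode(data):
            # Peel Base64 layers until the payload no longer validates as Base64.
            while True:
                try:
                    decoded = base64.b64decode(data, validate=True)
                except (binascii.Error, ValueError):
                    return data              # not Base64: this is the raw payload
                if decoded == data:
                    return data              # e.g. empty input; avoid spinning forever
                data = decoded

    The residual risk is the same one the loop inherits from isBase64: a short raw payload can coincidentally be valid Base64 and get one decode too many, which is why validating at the edge on receipt is still the better long-term fix.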

    Read the article

  • PHP - ___ encoding to UTF-8 - is there an end-all solution?

    - by Kerry
    I've looked across the web, through SO, through the PHP documentation and more. It seems like a ridiculous problem not to have a standard solution to. If you get text in an unknown character set that contains strange characters (like curly English quotes), is there a standard way to convert it to UTF-8? I've seen many messy solutions using a plethora of functions and checks, and none of them is guaranteed to work. Has anyone come up with a function or a solution that always works?
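
    There is no end-all solution — bytes do not carry their encoding with them — but the usual heuristic is a try-decode cascade from strictest to most permissive codec. A Python sketch of the idea (a PHP version would chain mb_check_encoding/iconv the same way):

        def to_utf8(raw):
            # Try strict decoders first; fall through to more permissive ones.
            for enc in ("utf-8", "cp1252", "iso-8859-1"):
                try:
                    return raw.decode(enc)
                except UnicodeDecodeError:
                    continue
            # iso-8859-1 maps every byte, so this line is effectively unreachable
            return raw.decode("utf-8", errors="replace")

    Ordering matters: text is rarely valid UTF-8 by accident, while Latin-1 accepts any byte sequence, so it goes last as the catch-all.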

    Read the article

  • How do I do raw URL encoding/decoding in JavaScript and Ruby so that both give the same result?

    - by Mo
    Hi, I'm working on a web application where I have to encode and decode a string on the JavaScript side and in the Ruby backend. The only problem is that the escape methods in JavaScript and Ruby differ slightly: JavaScript encodes a space as "%20", but Ruby encodes a space as "+". Is there any way to work around this — another Ruby method that does raw URL encoding? Thank you.
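
    The split the asker is seeing is the classic form-encoding ("+" for space) versus RFC 3986 percent-encoding ("%20" for space). The same pair exists in Python's standard library, which makes the difference easy to demonstrate:

        from urllib.parse import quote, quote_plus

        s = "two words"
        print(quote(s))       # 'two%20words' — RFC 3986 style, like JS encodeURIComponent
        print(quote_plus(s))  # 'two+words'   — form-encoding style, like Ruby's CGI.escape

    Picking the %20 flavor on both sides is usually the safer choice, since '+' only means space inside form-encoded query strings.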

    Read the article

  • Japanese characters stored in a SQL Server DB by an ASP page that assumed ISO-8859-1 encoding

    - by Vishal Seth
    We have a legacy ASP-based product that allowed the UI and data languages of user groups to be configured according to their locations; the CodePage and CharSet of the ASP pages collecting data were set accordingly. I've noticed a few instances in the SQL Server DB where users posted Japanese characters into an ASP page that assumed the incoming stream was ISO-8859-1/Western, and as a result the data in the SQL table is garbled. While upgrading the client to our new product, I want to back-convert those "garbage" Japanese (in some instances Chinese) characters to their actual form. Can I create some utility ASP page that would go through such data values, "fix" the wrongly-encoded strings and store everything back as UTF-8 strings? In any case, I don't want to affect the French/Spanish/English characters that might be there as well.
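
    The standard repair for this kind of mojibake is to reverse the wrong decode: re-encode the stored string with the codec it was mistaken for, then decode the resulting bytes as what they really were. A Python sketch of the round trip, assuming the original bytes were UTF-8 misread as Latin-1 (if the Japanese browsers submitted Shift_JIS instead, swap that codec in):

        def repair(value):
            # Undo the wrong decode, then apply the right one.
            try:
                return value.encode("latin-1").decode("utf-8")
            except (UnicodeEncodeError, UnicodeDecodeError):
                return value  # never was mojibake; leave this row alone

    This is conveniently self-limiting for the French/Spanish/English rows: a plain accented Latin-1 string almost never re-encodes into valid UTF-8, so the except branch returns it untouched.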

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs which correspond to strings containing diacritics (á, ü, ...). I believe what we see mostly are URLs where diacritic characters were converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen with the below URL representing the Bayern München string as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php However, what I've also noticed is that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen, which the browser renders with the ü restored (FC_Bayern_München). Therefore I'm considering the following approach for creating URL slugs:

    (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München - bayern-muenchen
    (2) also convert strings to percent-encoding: Bayern München - bayern-m%C3%BCnchen
    (3) create a 301 redirect from version (1) to version (2)

    Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
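
    A sketch of step (1), the language-aware transliteration, in Python; the substitution table is the part that has to be per-language (German here, and the table is a hypothetical illustration), which is exactly why a generic strip-the-accents pass gets München wrong:

        import re
        import unicodedata
        from urllib.parse import quote

        # Hypothetical per-language table; German is the example from the question.
        GERMAN = {"ä": "ae", "ö": "oe", "ü": "ue", "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss"}

        def slugify(title, table):
            for src, dst in table.items():  # language-aware replacements first
                title = title.replace(src, dst)
            # Then strip any remaining combining marks and normalize separators.
            title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
            return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

        print(slugify("Bayern München", GERMAN))  # bayern-muenchen     -> version (1)
        print(quote("bayern-münchen"))            # bayern-m%C3%BCnchen -> version (2)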

    Read the article

  • How to resolve a NULL cString crash

    - by hanumanDev
    I'm getting a crash with the following encoding fix I'm trying to implement:

        // encoding fix
        NSString *correctStringTitle = [NSString stringWithCString:[[item objectForKey:@"main_tag"] cStringUsingEncoding:NSISOLatin1StringEncoding]
                                                          encoding:NSUTF8StringEncoding];
        cell.titleLabel.text = [correctStringTitle capitalizedString];

    My crash log output states:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '*** +[NSString stringWithCString:encoding:]: NULL cString'

    Thanks for any help.

    Read the article

  • Convert a .NET String object into a Base64-encoded string

    - by chester89
    I have a question: which encoding should I use when Base64-encoding a .NET string? I know strings are UTF-16 encoded on Windows, so is my way of encoding the right one?

        public static String ToBase64String(this String source)
        {
            return Convert.ToBase64String(Encoding.Unicode.GetBytes(source));
        }
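
    Any encoding works as long as the decoding side uses the same one; the choice only changes the bytes, and therefore the Base64 text and its length. UTF-16 (.NET's Encoding.Unicode) roughly doubles the payload for ASCII-heavy text, which is why UTF-8 is the more common convention. A Python sketch of the trade-off:

        import base64

        s = "hello é"
        for enc in ("utf-8", "utf-16-le"):  # utf-16-le matches .NET's Encoding.Unicode
            payload = base64.b64encode(s.encode(enc)).decode("ascii")
            assert base64.b64decode(payload).decode(enc) == s  # round-trips only with the same enc
            print(enc, len(payload), payload)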

    Read the article

  • How to detect UTF-8-based encoded strings [closed]

    - by Diego Sendra
    A customer of ours asked us to build a multi-language VB6 scraper, for which we needed to detect UTF-8 encoded strings and decode them later for proper display in the application UI. This need arises from VB6's inability to natively support UTF-8 in its controls, contrary to .NET, where you can tell a control to expect UTF-8 encoding. VB6 natively supports only ISO 8859-1 and/or Windows-1252, so textboxes, dropdowns, listview controls and others can't be set to expect UTF-8; instead we would see weird symbols such as é or è, among others, making a whole mess of the display.

    So, the function below contains the UTF-8 encoded punctuation marks and symbols of languages like Spanish, Italian, German, Portuguese, French and others, based on an excellent UTF-8 list we got from this link: http://home.telfort.nl/~t876506/utf8tbl.html

    Basically, the function checks whether each of the listed UTF-8 encoded sequences, separated by | (pipe), is found in the passed string, doing a substring search first. If it's not found, it makes an alternative ASCII-value-based search to get a match. Say, a string like "Societé" (Society in English) returns FALSE from isUTF8("Societé"), while isUTF8("SocietÈ") returns TRUE, since È is the UTF-8 encoded representation of é. Once you have TRUE or FALSE, you can decode the string with the DecodeUTF8() function for proper display — a function we found elsewhere some time ago and have also included in this post.

        Function isUTF8(ByVal ptstr As String)
            Dim tUTFencoded As String
            Dim tUTFencodedaux
            Dim tUTFencodedASCII As String
            Dim ptstrASCII As String
            Dim iaux, iaux2 As Integer
            Dim ffound As Boolean

            ffound = False

            ptstrASCII = ""
            For iaux = 1 To Len(ptstr)
                ptstrASCII = ptstrASCII & Asc(Mid(ptstr, iaux, 1)) & "|"
            Next

            tUTFencoded = "Ä|Ã…|Ç|É|Ñ|Ö|ÃŒ|á|Ã|â|ä|ã|Ã¥|ç|é|è|ê|ë|í|ì|î|ï|ñ|ó|ò|ô|ö|õ|ú|ù|û|ü|â€|°|¢|£|§|•|¶|ß|®|©|â„¢|´|¨|â‰|Æ|Ø|∞|±|≤|≥|Â¥|µ|∂|∑|âˆ|Ï€|∫|ª|º|Ω|æ|ø|¿|¡|¬|√|Æ’|≈|∆|«|»|…|Â|À|Ã|Õ|Å’|Å“|–|—|“|â€|‘|’|÷|â—Š|ÿ|Ÿ|â„|€|‹|›|ï¬|fl|‡|·|‚|„|‰|Â|Ú|Ã|Ë|È|Ã|ÃŽ|Ã|ÃŒ|Ó|Ô||Ã’|Ú|Û|Ù|ı|ˆ|Ëœ|¯|˘|Ë™|Ëš|¸|Ë|Ë›|ˇ" & _
                "Å|Å¡|¦|²|³|¹|¼|½|¾|Ã|×|Ã|Þ|ð|ý|þ" & _
                "â‰|∞|≤|≥|∂|∑|âˆ|Ï€|∫|Ω|√|≈|∆|â—Š|â„|ï¬|fl||ı|˘|Ë™|Ëš|Ë|Ë›|ˇ"

            tUTFencodedaux = Split(tUTFencoded, "|")
            If UBound(tUTFencodedaux) > 0 Then
                iaux = 0
                Do While Not ffound And Not iaux > UBound(tUTFencodedaux)
                    If InStr(1, ptstr, tUTFencodedaux(iaux), vbTextCompare) > 0 Then
                        ffound = True
                    End If
                    If Not ffound Then
                        'ASCII numeric search
                        tUTFencodedASCII = ""
                        For iaux2 = 1 To Len(tUTFencodedaux(iaux))
                            'gets ASCII numeric sequence
                            tUTFencodedASCII = tUTFencodedASCII & Asc(Mid(tUTFencodedaux(iaux), iaux2, 1)) & "|"
                        Next
                        'tUTFencodedASCII = Left(tUTFencodedASCII, Len(tUTFencodedASCII) - 1)
                        'compares numeric sequences
                        If InStr(1, ptstrASCII, tUTFencodedASCII) > 0 Then
                            ffound = True
                        End If
                    End If
                    iaux = iaux + 1
                Loop
            End If

            isUTF8 = ffound
        End Function

        Function DecodeUTF8(s)
            Dim i
            Dim c
            Dim n

            s = s & " "
            i = 1
            Do While i <= Len(s)
                c = Asc(Mid(s, i, 1))
                If c And &H80 Then
                    n = 1
                    Do While i + n < Len(s)
                        If (Asc(Mid(s, i + n, 1)) And &HC0) <> &H80 Then
                            Exit Do
                        End If
                        n = n + 1
                    Loop
                    If n = 2 And ((c And &HE0) = &HC0) Then
                        c = Asc(Mid(s, i + 1, 1)) + &H40 * (c And &H1)
                    Else
                        c = 191
                    End If
                    s = Left(s, i - 1) + Chr(c) + Mid(s, i + n)
                End If
                i = i + 1
            Loop
            DecodeUTF8 = s
        End Function
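
    For what it's worth, on platforms with real codec support this whole symbol table collapses into a strict try-decode, which is also more reliable than substring matching (plain ASCII is trivially valid UTF-8, and a symbol table can false-positive on legitimate Windows-1252 text). A Python sketch of that check:

        def looks_like_utf8(raw):
            # Strict UTF-8 validation: multi-byte sequences must be well formed.
            try:
                raw.decode("utf-8")
                return True
            except UnicodeDecodeError:
                return False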

    Read the article

  • UITableView NSString memory leak on iPhone when encoding with NSUTF8StringEncoding

    - by vince
    My UITableView has a serious memory leak, but only when the NSString is NOT encoded with NSASCIIStringEncoding.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"cell";
            UILabel *textLabel1;
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
                textLabel1 = [[UILabel alloc] initWithFrame:CGRectMake(105, 6, 192, 22)];
                textLabel1.tag = 1;
                textLabel1.textColor = [UIColor whiteColor];
                textLabel1.backgroundColor = [UIColor blackColor];
                textLabel1.numberOfLines = 1;
                textLabel1.adjustsFontSizeToFitWidth = NO;
                [textLabel1 setFont:[UIFont boldSystemFontOfSize:19]];
                [cell.contentView addSubview:textLabel1];
                [textLabel1 release];
            } else {
                textLabel1 = (UILabel *)[cell.contentView viewWithTag:1];
            }
            NSDictionary *tmpDict = [listOfInfo objectForKey:[NSString stringWithFormat:@"%@", indexPath.row]];
            textLabel1.text = [tmpDict objectForKey:@"name"];
            return cell;
        }

        - (void)readDatabase {
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDir = [documentPaths objectAtIndex:0];
            databasePath = [documentsDir stringByAppendingPathComponent:[NSString stringWithFormat:@"%@", myDB]];
            sqlite3 *database;
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                const char *sqlStatement = [[NSString stringWithFormat:@"select id,name from %@ order by orderid", myTable] UTF8String];
                sqlite3_stmt *compiledStatement;
                if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
                    while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                        NSString *tmpid = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)];
                        NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding];
                        [listOfInfo setObject:[[NSMutableDictionary alloc] init] forKey:tmpid];
                        [[listOfInfo objectForKey:tmpid] setObject:[NSString stringWithFormat:@"%@", tmpname] forKey:@"name"];
                    }
                }
                sqlite3_finalize(compiledStatement);
                debugNSLog(@"sqlite closing");
            }
            sqlite3_close(database);
        }

    When I change the line

        NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding];

    to

        NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSASCIIStringEncoding];

    the memory leak is gone. I tried stringWithUTF8String and it still leaks. I've also tried:

        NSData *dtmpname = [NSData dataWithBytes:sqlite3_column_blob(compiledStatement, 1) length:sqlite3_column_bytes(compiledStatement, 1)];
        NSString *tmpname = [[[NSString alloc] initWithData:dtmpname encoding:NSUTF8StringEncoding] autorelease];

    and the problem remains; the leak occurs when you start scrolling the table view. I've actually tried other encodings, and it seems that only NSASCIIStringEncoding works (no memory leak). Any idea how to get rid of this problem?

    Read the article

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64 encode data to put it in a URL and then decode it within my HttpHandler. I have found that Base64 encoding allows a '/' character, which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" from Wikipedia:

        A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters
        of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer
        necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use
        in relational databases, web forms, and object identifiers in general.

    Using .NET, I want to modify my current code from basic Base64 encoding and decoding to the "modified Base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like:

        string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
        // Append '=' char(s) if necessary - how best to do this?
        // My normal base64 decoding now uses encodedText

    But I need to potentially add one or two '=' chars to the end, which looks a little more complex. My encoding logic should be a little simpler:

        // Perform normal base64 encoding
        byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText);
        string base64EncodedText = Convert.ToBase64String(encodedBytes);

        // Apply URL variant
        string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_');

    I have seen the Guid to Base64 for URL StackOverflow entry, but that has a known length and therefore they can hardcode the number of equal signs needed at the end.
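
    The missing padding is fully determined by the length: a padded Base64 string is always a multiple of 4 characters, so (-len) mod 4 gives the number of '=' to append (never 3 for valid input). A Python sketch of both directions using the standard library's URL-safe alphabet:

        import base64

        def b64url_encode(data):
            # urlsafe_b64encode already swaps '+/' for '-_'; just strip the padding.
            return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

        def b64url_decode(text):
            text += "=" * (-len(text) % 4)   # restore 0, 1 or 2 '=' chars
            return base64.urlsafe_b64decode(text)

        assert b64url_decode(b64url_encode(b"any/data+here")) == b"any/data+here"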

    Read the article

  • Fast or asynchronous AS3 JPEG encoding

    - by Bart van Heukelom
    I'm currently using the JPGEncoder from the AS3 core lib to encode a bitmap to JPEG:

        var enc:JPGEncoder = new JPGEncoder(90);
        var jpg:ByteArray = enc.encode(bitmap);

    Because the bitmap is rather large (3000 x 2000) the encoding takes a long while (about 20 seconds), causing the application to seemingly freeze while encoding. To solve this, I need either:

    - An asynchronous encoder, so I can keep updating the screen (with a progress bar or something) while encoding, or
    - An alternative encoder which is simply faster.

    Is either possible?

    Read the article

  • Perl's use encoding pragma breaking UTF strings

    - by Karel Bílek
    I have a problem with Perl and the encoding pragma. (I use UTF-8 everywhere: in input, output, and the Perl scripts themselves. I don't want to use any other encoding, ever.) However, when I write

        binmode(STDOUT, ':utf8');
        use utf8;
        $r = "\x{ed}";
        print $r;

    I see the string "í" (which is what I want — U+00ED is that character). But when I add the "use encoding" pragma like this

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';
        $r = "\x{ed}";
        print $r;

    all I see is a box character. Why? Moreover, when I add Data::Dumper and let it print the new string like this

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';
        $r = "\x{ed}";
        use Data::Dumper;
        print Dumper($r);

    I see that Perl changed the string to "\x{fffd}". Why?

    Read the article

  • Python: converting a UTF-16LE file to UTF-8

    - by Qiao
    I have a big file with UTF-16LE (BOM) encoding. Is it possible to convert it to the usual UTF-8 with Python? Something like:

        file_old = open('old.txt', mode='r', encoding='utf_16_le')
        file_new = open('new.txt', mode='w', encoding='utf-8')
        text = file_old.read()
        file_new.write(text.encode('utf-8'))

    http://docs.python.org/release/2.3/lib/node126.html (utf_16_le — UTF-16LE)

    This is not working: I can't understand the "TypeError: must be str, not bytes" error. (Python 3)
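
    The TypeError is the clue: in Python 3, a file opened in text mode with an encoding= argument expects str and does the encoding itself, so handing it the bytes from text.encode() fails. A sketch of the fix; using 'utf-16' (with BOM autodetection) rather than 'utf_16_le' also keeps the BOM character out of the output:

        import shutil

        # Text-mode files do the codec work; hand them str, not bytes.
        with open('old.txt', 'r', encoding='utf-16') as old, \
             open('new.txt', 'w', encoding='utf-8') as new:
            shutil.copyfileobj(old, new)  # chunked copy, suitable for big files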

    Read the article

  • Ruby string encoding problem

    - by John Prideaux
    I've looked at the other Ruby/encoding-related posts but haven't been able to figure out why the following is not working. Likely just because I'm dense, but here's the situation. Using Ruby 1.9 on Windows. I have a set of CSV files that need some data appended to the end of each line. Whenever I run my script, the appended characters are gibberish. The input text appears to be IBM437 encoded, whereas the string I'm appending starts as US-ASCII. Nothing I've tried with respect to forcing the encoding on the input strings or the appended string seems to change the resulting output. I'm stumped. The current encoding version is simply the last one I tried.

        def append_salesperson(txt, salesperson)
          if txt.length > 2
            return txt.chomp.force_encoding('US-ASCII') + %(, "", "", "#{salesperson}")
          end
        end

        salespeople = Hash["fname", "Record Manager"]
        outfile = File.open("ActData.csv", "w:US-ASCII")
        salespeople.each do |filename, recordManager|
          infile = File.open("#{filename}.txt")
          infile.each do |line|
            outfile.puts append_salesperson(line, recordManager)
          end
          infile.close
        end
        outfile.close

    Read the article

  • URL parameters aren't encoded correctly

    - by Ivan90
    I'm using ASP.NET MVC version 1.0 and I have a problem with a parameter in a URL. My URL looks like http://localhost:2282/Tags/PostList/c#:

        routes.MapRoute(
            "TagsRoute",
            "Tags/PostList/{tag}",
            new { controller = "Tags", Action = "PostList", tag = "" }
        );

    The problem is that the tag parameter isn't encoded, so the # symbol is ignored. I am using an ActionLink, but maybe version 1.0 doesn't encode the parameter directly:

        <%= Html.ActionLink(itemtags.Tags.TagName, "PostList", "Tags", new { tag = itemtags.Tags.TagName }, new { style = "color:red;" }) %>

    With this ActionLink only whitespace is encoded correctly; in fact "asp.net mvc" becomes "asp.net%20mvc" and works fine. But "c#" isn't encoded. So I tried Server.UrlEncode, and something does happen: "c#" becomes "c%2523", but that isn't correct either, because the hexadecimal for # is %23. Do you have any solutions? Route constraints? Thanks
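
    The stray 25 in %2523 is the signature of double encoding: one pass turned '#' into '%23', and a second pass encoded the '%' itself as '%25'. The effect is easy to reproduce (sketched here in Python, since the mechanics are framework-independent):

        from urllib.parse import quote

        print(quote("c#"))         # c%23   — encoded once, what the URL needs
        print(quote(quote("c#")))  # c%2523 — the '%' of '%23' got re-encoded

    So the fix is to make sure the value is percent-encoded exactly once on the way out and decoded exactly once on the way in.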

    Read the article

  • Problem using AudioRecord with 8-bit encoding in Android

    - by maxsap
    Hello, I have made an application that records from the phone's microphone using AudioRecord and 16-bit encoding, and I am able to play back the recording. For compatibility reasons I need to use 8-bit encoding, but when I try to run the same program with that encoding I keep getting an invalid audio format error. My code is:

        int bufferSize = AudioRecord.getMinBufferSize(11025,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_8BIT);

        AudioRecord recordInstance = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                11025,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_8BIT,
                bufferSize);

    Does anyone know what the problem is? According to the documentation, AudioRecord is capable of 8-bit encoding. Thanks in advance, maxsap.

    Read the article

  • Running SQL*Plus with bash causes wrong encoding

    - by Petr Mensik
    I have a problem with running SQL*Plus in bash. Here is my code:

        #!/bin/bash
        #curl http://192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql > script.sql
        wget -O script.sql 192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql
        set NLS_LANG=_.UTF8
        sqlplus /nolog << ENDL
        connect login/password
        set sqlblanklines on
        start script.sql
        exit
        ENDL

    I download the insert statements from our intranet, put them into an SQL file and run it through SQL*Plus. This works fine. My problem is that when I save the file script.sql, the encoding goes wrong. All special characters (like íášc) are broken, and that causes wrong characters to be inserted into my DB. The encoding of the file is UTF-8, and UTF-8 is also set on the XSQL page on our intranet, so I really don't know where the problem could be. Any advice regarding my script is also welcome; I am a total newbie at Linux scripting :-)

    Read the article

  • Apache gzip with chunked encoding

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to its HTTP response headers, it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    The problem is that when I ask the server for a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk arrived, but after ungzipping I see that the response is partial. So my question is: is this a common Apache issue? Maybe one of its mod_deflate plugins or something? Ask questions if you need more info. Thanks.

    Read the article

  • Firefox or Chrome - how to force a specific encoding for a page

    - by Mike
    Hi, I am accessing an intranet site built by amateurs, constructed to be "best viewed by IE" (arghhh!). The site is in Portuguese. All accented letters are jammed and do not appear as they should. As I create sites myself, I know that the best way to build a site in Portuguese or other Latin languages is to use "charset=iso-8859-1" in the page's HTML encoding declaration; this ensures cross-browser and cross-platform compatibility. But I have no way to change this, because I am a visitor on this site, and I don't know which encoding they are using. What I ask is: is there a way to force my browser (Chrome or Firefox) to redecode the page using the correct charset? I need this to work on Ubuntu.

    Read the article

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing and a Squid cluster for caching; application data lives on an IIS cluster. We have HAProxy balance by URI to optimize the Squid hit rate, but we know that Squid holds a different copy of each page for each Accept-Encoding header the browser passes, so IE (gzip, deflate) will have a different copy of a cached page than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding headers, and I think the best place to do so would be HAProxy. I'd appreciate it if someone could offer some ideas on how to accomplish this without breaking support for clients without gzip or deflate support.
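
    One way to express this is to collapse every gzip-capable client onto one identical header and leave non-gzip clients untouched, so Squid only ever sees two variants per URI. A sketch against HAProxy 1.x's req* directives — untested, so verify the syntax against the docs for your version:

        # frontend/listen section: normalize Accept-Encoding for gzip-capable clients
        acl accepts_gzip hdr_sub(Accept-Encoding) gzip
        reqidel ^Accept-Encoding:       if accepts_gzip
        reqadd  Accept-Encoding:\ gzip  if accepts_gzip

    Clients advertising neither gzip nor deflate fall through with no Accept-Encoding changes, so they keep getting uncompressed responses; clients advertising only deflate (rare in practice) would also pass through unnormalized, which is a judgment call.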

    Read the article

  • How to read special characters from stdin in Python?

    - by erickrf
    I'm having trouble reading special characters from stdin. Here are my attempts:

        >>> import os
        >>> dir = raw_input("Dir name: ")
        Dir name: c:/á
        >>> os.chdir(dir)
        WindowsError: [Error 2] The system cannot find the file specified: 'c:/\x81\xe1'

    OK, so I tried to get the default system encoding and recode the string from stdin:

        >>> import locale
        >>> encoding = locale.getdefaultlocale()[1]
        >>> print encoding
        cp1252
        >>> unicode(dir, encoding)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "c:\Python26\lib\encodings\cp1252.py", line 15, in decode
            return codecs.charmap_decode(input,errors,decoding_table)
        UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 3: character maps to <undefined>

    Now I don't know how to solve this. Nor do I understand why there is a problem when I try to access a directory whose name is written in the system default encoding itself.
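
    The catch on Windows is that the interactive console does not use the ANSI code page that locale.getdefaultlocale() reports (cp1252) but an OEM code page such as cp850, which is why the typed 'á' arrives as bytes cp1252 cannot map. A sketch of the usual Python 2 fix, assuming input really comes from an interactive console:

        import os
        import sys

        raw = raw_input("Dir name: ")
        # sys.stdin.encoding reflects the real console code page (e.g. cp850),
        # not the ANSI code page that getdefaultlocale() returns.
        dir_name = raw.decode(sys.stdin.encoding or "mbcs")
        os.chdir(dir_name)  # a unicode path sidesteps the byte-level mismatch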

    Read the article

  • Job queueing in Toast Titanium 10?

    - by moonslug
    I have a bunch of .MP4 video files I'm burning to DVD-Video using Toast Titanium 10 on my MacBook Pro. Right now I'm doing them one at a time; because my computer is several years old, encoding the video for a single DVD takes approximately six hours. I've discovered that I can apparently encode the video directly to a .toast format; however, I have yet to figure out whether I can burn these directly to DVD. Also, I have quite a bit of video left to burn, and even that method would require me to intervene manually to start a new encoding or burning job every six hours. Would it be possible to queue up multiple DVD-Video encoding jobs and have the computer work through them automatically? The actual writing to disc doesn't take nearly as long, and my job would go a lot quicker if all my video were already encoded to begin with. Maybe this can be accomplished with a different piece of software?

    Read the article
