Search Results

Search found 5433 results on 218 pages for 'escaped characters'.


  • How to read a Text File Hidden Characters?

    - by balexandre
    Hi guys, I've created a text file from an application that I developed. When I send the text file to a SYSTEM validation, the 3rd-party system says that the file is invalid: it contains 3 characters at the beginning of the file that are not allowed, and the special characters are not correct. They also say I need to use either ISO 8859-1 or PC850. Well, I'm using Notepad++ and I can't see any of that! What is the best text file reader for this kind of problem? EDITED I also have a Mac, and on a thought I remembered to open the file in TextMate ... WOW! Now I know what they are talking about! How can I have the same in Windows?
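    Three unexpected bytes at the start of a file are commonly a UTF-8 byte-order mark (EF BB BF), which neither ISO 8859-1 nor PC850 allows. A minimal Python sketch to dump the leading bytes and see exactly what the validator sees (the filename is a placeholder):

        import binascii

        # Dump the first bytes of the file in hex; a UTF-8 BOM shows up
        # as "efbbbf" at the very start.
        with open("output.txt", "rb") as f:  # hypothetical filename
            print(binascii.hexlify(f.read(16)))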

    Read the article

  • Non-printing characters in Word 2011 not showing even when enabled

    - by Henrik Söderlund
    I have a document I work on often: my resume. I have created a few different styles that I use, and for some reason the non-printing characters have stopped showing properly. I have the option enabled (the reversed P) and the proper settings checked in the preferences. Here is a screenshot of the current view: basically, only the tab stops and the returns are showing. Upon doing an experiment of creating a new document, all characters (especially the spaces) show up nicely: I can copy this line and paste it into my resume document and it shows up there too. It seems my styles are doing something...

    Read the article

  • nginx regex characters that require quoting?

    - by Michael Louis Thaler
    So I was configuring nginx today and I hit a weird problem. I was trying to match a location like this:

        location ~ ^/([0-9]+)/(.*) {
            # do proxy redirects
        }

    ...for URLs like "http://my.domain.com/0001/index.html". This rule never matched, despite the fact that by all rights it should. It took me a while to figure out, based on this documentation, that some characters in regexes need to be quoted. The problem is, the documentation is for rewrites, and it specifically calls out curly braces, not square brackets. After a fair bit of experimentation that involved a lot of swearing, I discovered that I could fix the problem by quoting the regex like so:

        location ~ "^/([0-9]+)/(.*)" {
            # do proxy redirects
        }

    Is there a list somewhere of the characters that require quoting in nginx regexes? Or could there be something else going on here that I'm totally missing? This is my first nginx configuration job, so it's very possible I've misunderstood something...
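    The pattern itself is sound; a quick check outside nginx suggests the quoting requirement comes from nginx's configuration parser rather than from the regex engine (a sketch using Python's re module for comparison):

        import re

        # The same pattern matches the example URL path just fine here,
        # pointing at the config parser, not the regex, as the culprit.
        print(re.match(r"^/([0-9]+)/(.*)", "/0001/index.html").groups())
        # -> ('0001', 'index.html')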

    Read the article

  • max length of url 257 characters for mod_rewrite?

    - by Daniel
    My url scheme is /foo/var1-var2-var3.../bar and I am using these mod_rewrite rules:

        RewriteBase /foo/
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [PT,L]

    If the length of the string 'var1-var2...' is greater than 257 characters, then a 403 Forbidden error and a 404 are returned. However, if the 'var1-var2...' string is 257 characters or less and subsequently followed by a slash, the remainder of the url may be any length. How does one overcome this limit?
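    One plausible explanation (an assumption from the symptoms, not confirmed for this setup): most filesystems cap a single path component at 255 bytes (NAME_MAX), and the -f/-d RewriteCond checks stat() the requested filename, which fails outright for longer names. A Python sketch of the underlying limit:

        import os

        # stat() refuses path components longer than NAME_MAX (usually 255),
        # which is roughly where the ~257-character URL limit appears.
        try:
            os.stat("/tmp/" + "a" * 300)
        except OSError as e:
            print(e)  # e.g. [Errno 36] File name too long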

    Read the article

  • Message-ID contains multiple '@' characters

    - by Thomaschaaf
    I am looking at my server's SpamAssassin setup and have stumbled across the "problem" that my outgoing email sends out its own X-Report and X-Score in the header (the programs used are Outlook 2007 as the client, and exim4 with the vexim plugin and SpamAssassin). On the one hand I want to get rid of the X-Report that gets sent out with every email, and on the other hand I still want to have it for incoming mail. While I was trying to fix this (which I still haven't), I stumbled across this "error", which makes my email less trustworthy:

        1.4 MSGID_MULTIPLE_AT Message-ID contains multiple '@' characters

    How do I get rid of the multiple '@' characters?

    Read the article

  • remove words containing non-alpha characters

    - by dnkb
    Given a text file with a space-separated string and a tab-separated integer, I'd like to get rid of all words that have non-alpha characters, but keep the words consisting of alpha-only characters plus the tab and the integer afterwards. My attempts, like the ones below, didn't yield anything good. What I was trying to express is something like: "replace anything within word boundaries that starts and ends with 0 or more whatever and has at least one :digits: or :punct: in between".

        sed 's/\b.[:digits::punct:]+.\b//g'
        sed 's/\b.[^:alpha:]+.\b//g'

    What am I missing? See sample input data below. Thank you!

        asdf 754m	563
        a2a 754mm	291
        754n	463
        754 ppp	1409
        754pin	4652
        pin pin	462
        754pins	652
        754 ppp	1409
        754pin	4652
        pi$n pin	462
        754/p ins	652
        754 pp+p	1409
        754 p=in	4652
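    For reference, POSIX character classes are only valid inside a bracket expression, so the patterns above would need the form [[:digit:][:punct:]] rather than [:digits::punct:], and the + quantifier needs sed -E (or \+). A Python sketch of the intended filtering, assuming one record per line with a trailing tab and integer:

        # Keep only purely alphabetic words, then re-attach the tab + integer.
        line = "pi$n pin\t462"  # example record
        words, count = line.rsplit("\t", 1)
        kept = [w for w in words.split() if w.isalpha()]
        print("\t".join([" ".join(kept), count]))  # -> "pin\t462"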

    Read the article

  • Public IP shows strange characters and Facebook registers logged-in session to a different location

    - by Stuart Kershaw
    I'm encountering some IP strangeness today and hoping to find an explanation. In short, I'm based in Seattle, WA, with Comcast as my ISP. While browsing Facebook's account settings, I noticed that my active session was placed in Mount Laurel, NJ. At that point I ran a search in Google for 'my public IP', which returned an interesting result: a string of characters in the following format: 2601:8:b000:xxx:xxxx:xxxx:xxxx:xxxx. Normally, a search for my IP returns something like 67.xxx.xx.xxx. A phone call to Comcast got me nowhere, but using Comcast's phone-menu debugging tools I was able to send a 'refresh signal' to my modem. After that, the search for 'my public IP' yielded the expected result... for about 5 minutes, and then it returned to the new string of characters. Does anyone know of an explanation for this?
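    The colon-separated string is not corruption: it has the exact shape of an IPv6 address, where the dotted form is IPv4. A small Python sketch for telling the two apart (the address below is an illustrative example, not the poster's real one):

        import ipaddress

        # ip_address() parses both families and reports which one it got.
        addr = ipaddress.ip_address(u"2601:8:b000:1:2:3:4:5")
        print(addr.version)  # -> 6; a dotted quad like "67.0.0.1" gives 4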

    Read the article

  • Why are unicode characters not rendering correctly

    - by sw1nn
    Background: I have some unicode characters in my prompt (git status markers, essentially). I'm running urxvt under xfce on Arch Linux, using the DejaVu Sans Mono for Powerline font, specified via this .Xresources line: URxvt*font: xft:DejaVu Sans Mono for Powerline:pixelsize=14 When I start urxvt, the unicode characters do not render correctly. For example, ? renders as â. However, if I then start a new urxvt from inside the first terminal, everything renders correctly. There doesn't appear to be any difference in the environment between the two terminals. What could be the difference between the first invocation and the nested invocation? I suspect the font is not correct in the 'outer' instance, but I'm unsure how to check the font of a running X window. The screenshot demonstrates the problem. Note: I moved this question from serverfault.com - I hope this site is more appropriate

    Read the article

  • How to type accented characters in Ubuntu 10.04 with an Apple Aluminum Keyboard

    - by jfmessier
    I installed the latest Ubuntu 10.04, and I used to have the Command, Option or right Ctrl keys as compose keys to write accented characters. But I find that under Ubuntu 10.04 the compose key is not working, even if I specify the proper Apple keyboard. Since I cannot work with keyboard layouts other than the plain USA one along with compose keys (I never learned, and I hate, the French layout), this is about my only way to input accented characters. I still have to try it with a regular keyboard to see whether there is a difference. Thanks :-)

    Read the article

  • I can see markup characters in vim `:help`

    - by Relax
    I just created a .txt file inside .vim/doc for documenting one little function of my .vimrc, ran :helptags ~/.vim/doc, and apparently the whole vim help system went wild. Now, if I open for example :help help, I see things like:

        This also works together with other characters, for example to find
        help for CTRL-V in Insert mode: >
                :help i^V
        <

    (notice the > and < characters). I can also see the ~ at the end of headlines and the modeline at the end of the help page (things like vim:tw=78:ts=8:ft=help:norl:). I have no idea what happened or how to fix it. Any clue? Thanks in advance!

    Read the article

  • iPhone SDK - Comparing characters in string

    - by Karl Daniel
    Basically, what I'm trying to do is compare 2 strings: one from a plist and one from the user's input. I use a while loop to step through each character and compare it, and if they match I increase an integer; once the loop has finished I work out the percentage correct / similarity of the plist answer and the user's answer. I seemed to be having a problem, however, as the only return I was getting is 0. The code below is all functioning and the question no longer requires answering...

    Working code:

        answerLength = boxAnswer.length;   // Number of characters in the first string.
        plistLength = plistAnswer.length;  // Number of characters in the second string.
        characterRange = 0;                // Which character to look at.
        charactersCorrect = 0;             // Number of matching characters so far.
        unichar answerCharacter;           // Declares a unichar for the first string.
        unichar plistCharacter;            // Declares a unichar for the second string.

        while (answerLength > 0 && plistLength > 0) {
            // Gets the character of each string at the index of the range integer.
            answerCharacter = [boxAnswer characterAtIndex:characterRange];
            plistCharacter = [plistAnswer characterAtIndex:characterRange];

            answerLength--;    // Reduces number of characters left to compare.
            plistLength--;
            characterRange++;  // Tells it to look at the next character in the string.

            // Checks to see if the characters of the two strings match.
            if (answerCharacter == plistCharacter) {
                charactersCorrect++;  // If true, increases the number correct.
            }
        }

        // Works out the percentage of matching characters out of the total string.
        totalChar = plistAnswer.length;
        totalPercentage = (charactersCorrect / totalChar) * 100;
        percentageCorrect.text = [NSString stringWithFormat:@"%i%%", totalPercentage];

    Variable declarations:

        int answerLength;
        int plistLength;
        int characterRange;
        double totalChar;
        double charactersCorrect;
        int totalPercentage;

    Read the article

  • Sql Server Management Studio: Change Prefix or Suffix characters

    - by PhantomDrummer
    I have an instance of SSMS 2008 for which the option to edit data in a table doesn't work. If I right-click on any table in the Object Explorer and select 'Edit top 200 rows', I get an error dialog: 'Invalid prefix or suffix characters. (MS Visual Database Tools)'. The error seems to be associated specifically with SSMS, not with SQL Server (this SSMS instance gives the same error no matter what database I connect to, but I've verified I can connect to some of the same databases using SSMS on other machines without the error). However, our firewall prevents me from using SSMS on other machines for some crucial tasks, so I do need to fix the problem. Googling for the error suggests that I should change the prefix, suffix or escape character, but without any indication of how to make that change in SSMS. I'd also note that I'm not aware of having done any customization on SSMS since installing it, so I would be surprised at having to make such a change now. Does anyone have any idea what the error message means or what I can do about it? Or how I can change the prefix/suffix/escape characters, if that is really the problem?

    Read the article

  • Excel 2007 - Adding line breaks in a cell and no line over 50 characters

    - by Richard Drew
    I have notes stored in an Excel cell; I add line breaks and dates every time I add a new note. I need to copy this to another program, but it has a line limit of 50 characters. I want a line break for each new date, and for when each date's comment goes over 50 characters. I'm able to do one or the other, but I can't figure out how to do both. I'd prefer words not to be split up, but at this point I don't care. Below is some sample input. If needed for a SUBSTITUTE or REPLACE function, I could add a ~ before each date in my input as a delimiter.

    Sample input:

        07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates and locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected]
        05/14 - Copy sent to John Public and [email protected]

    Ideal output:

        07/03 - FU on query. Copies and history included.
        CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong
        PO on query form. Responded with inv sent dates an
        d locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected]
        om
        05/14 - Copy sent to John Public and email@custome
        r.com
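    Outside Excel, the transformation is straightforward to sketch; a hypothetical Python version that starts a new line at each date marker and then hard-wraps at 50 characters (mid-word, as in the ideal output above):

        import re

        notes = "07/03 - FU on query. ... 06/29 - Cust claiming ..."  # cell contents
        # Split out each entry beginning with an "MM/DD - " marker, then
        # slice each entry into 50-character lines.
        entries = re.findall(r"\d\d/\d\d - .*?(?=\d\d/\d\d - |$)", notes, re.S)
        for entry in entries:
            entry = entry.strip()
            for i in range(0, len(entry), 50):
                print(entry[i:i + 50])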

    Read the article

  • Korean characters not appearing in Korean Windows XP computer

    - by user13267
    I am using a Korean software package (with a partial English interface) on a Korean version of Windows XP SP3. However, in parts of the software, even when I change the interface to Korean, Korean letters show up as random characters, as shown here: This is happening in other parts of the software as well, and I am not sure what the difference is between the places where this happens and the places where it does not. For example, a command button where Korean letters show up properly is shown below: This software is a video conferencing application and has a chat feature as well. When I type into the chat box, I can see the Korean letters appear properly on my side, but when I press Enter and send the message, it changes into random characters as shown above in the chat box. What could be the issue here? Could it be a missing font on my computer? Since this is a Korean Windows installation, I was hoping everything would work properly by default. What can be done here? EDIT 1: I asked some other people who are using this software, and they think the problem is at my end, and that playing around with the Regional and Language Settings might solve it. They also suggested I install all the language packs related to Korean display. But it looks like all the language packs have been installed, my location is set to Korea in Regional and Language Settings in Control Panel, and I still have this problem. Also, I have had similar problems displaying Korean on an English Windows XP computer. This answer suggested some solutions, but I still do not quite understand exactly what I have to do (at that time I had not fixed the problem, as I later on changed the computer). If I follow that answer, what fonts exactly do I need to install?

    Read the article

  • Limiting Number of Characters in a ContentEditable div

    - by Bill Dami
    Hi guys, I am using contenteditable div elements in my web application, and I am trying to come up with a solution to limit the number of characters allowed in the area; once the limit is hit, attempting to enter characters simply does nothing. This is what I have so far:

        var content_id = 'editable_div';
        // binding keyup/down events on the contenteditable div
        $('#'+content_id).keyup(function(){ check_charcount(content_id, max); });
        $('#'+content_id).keydown(function(){ check_charcount(content_id, max); });

        function check_charcount(content_id, max)
        {
            if($('#'+content_id).text().length > max)
            {
                $('#'+content_id).text($('#'+content_id).text().substring(0, max));
            }
        }

    This DOES limit the number of characters to the number specified by 'max', but once the area's text is set by the jQuery .text() function, the cursor resets itself to the beginning of the area. So if the user keeps on typing, the newly entered characters will be inserted at the beginning of the text and the last character of the text will be removed. So really, I just need some way to keep the cursor at the end of the contenteditable area's text.

    Read the article

  • How to read Unicode characters from command-line arguments in Python on Windows

    - by Craig McQueen
    I want my Python script to be able to read Unicode command line arguments in Windows. But it appears that sys.argv is a string encoded in some local encoding, rather than Unicode. How can I read the command line in full Unicode? Example code (argv.py):

        import sys

        first_arg = sys.argv[1]
        print first_arg
        print type(first_arg)
        print first_arg.encode("hex")
        print open(first_arg)

    On my PC set up for the Japanese code page, I get:

        C:\temp>argv.py "PC·??????08.09.24.doc"
        PC·??????08.09.24.doc
        <type 'str'>
        50438145835c83748367905c90bf8f9130382e30392e32342e646f63
        <open file 'PC·??????08.09.24.doc', mode 'r' at 0x00917D90>

    That's Shift-JIS encoded I believe, and it "works" for that filename. But it breaks for filenames with characters that aren't in the Shift-JIS character set—the final "open" call fails:

        C:\temp>argv.py Jörgen.txt
        Jorgen.txt
        <type 'str'>
        4a6f7267656e2e747874
        Traceback (most recent call last):
          File "C:\temp\argv.py", line 7, in <module>
            print open(first_arg)
        IOError: [Errno 2] No such file or directory: 'Jorgen.txt'

    Note—I'm talking about Python 2.x, not Python 3.0. I've found that Python 3.0 gives sys.argv as proper Unicode. But it's a bit early yet to transition to Python 3.0 (due to lack of 3rd party library support). Update: A few answers have said I should decode according to whatever sys.argv is encoded in. The problem with that is that it's not full Unicode, so some characters are not representable. Here's the use case that gives me grief: I have enabled drag-and-drop of files onto .py files in Windows Explorer. I have file names with all sorts of characters, including some not in the system default code page. My Python script doesn't get the right Unicode filenames passed to it via sys.argv in all cases, when the characters aren't representable in the current code page encoding. There is certainly some Windows API to read the command line with full Unicode (and Python 3.0 does it). I assume the Python 2.x interpreter is not using it.
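    For reference, the Windows API in question is GetCommandLineW / CommandLineToArgvW; a sketch of calling it from Python 2.x via ctypes (an illustration of the approach, only meaningful on Windows):

        import ctypes

        def win32_unicode_argv():
            # Fetch the full wide-character command line and split it the
            # way the C runtime would, bypassing the ANSI sys.argv.
            GetCommandLineW = ctypes.windll.kernel32.GetCommandLineW
            GetCommandLineW.restype = ctypes.c_wchar_p

            CommandLineToArgvW = ctypes.windll.shell32.CommandLineToArgvW
            CommandLineToArgvW.restype = ctypes.POINTER(ctypes.c_wchar_p)

            argc = ctypes.c_int(0)
            argv = CommandLineToArgvW(GetCommandLineW(), ctypes.byref(argc))
            return [argv[i] for i in range(argc.value)]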

    Read the article

  • Problem with extended ASCII characters in web page/master page

    - by Oyvind Brathen
    I have some localization problems on my web page. There are basically two problems (that I suspect have different solutions, but they are conceptually linked). The first problem is this: I have a website that is using a master page. All text from the page is fine, but all text that comes from the master page file gets scrambled Norwegian characters. For example, Ø shows up as Ã˜. It seems that all characters in the extended ASCII table get scrambled this way. Afterwards, if I open the master page in Notepad the Ø looks normal, but if I remove the Ø and write a new Ø manually, then save the file from Notepad and open the website in the browser, it looks fine and the Ø is shown properly. So it seems that Visual Studio saves the characters wrongly in the master file, but correctly for the aspx file. Any clue here? The second issue is Norwegian characters coming from jQuery. All of these characters get replaced by a question mark with a black box around it. Here, modifying the js file in Notepad does not help, and it still displays scrambled in the browser. Any input here would be appreciated.
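    The one-character-becomes-two symptom is the classic signature of UTF-8 bytes being decoded as Windows-1252/Latin-1; a small Python sketch of the mechanism (illustrative, not the poster's code):

        # 'Ø' is two bytes in UTF-8 (0xC3 0x98); read back as Windows-1252
        # those bytes decode to two separate characters.
        s = u"\u00d8"  # Ø
        print(s.encode("utf-8").decode("cp1252"))  # mojibake: two characters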

    Read the article

  • Capture *all* display-characters in JavaScript?

    - by Jean-Charles
    I was given an unusual request recently that I'm having the most difficult time addressing, involving capturing all display characters when typed into a text box. The setup is as follows: I have a text box that has a maxlength of 10 characters. When the user attempts to type more than 10 characters, I need to notify them that they're typing beyond the character count limit. The simplest solution would be to specify a maxlength of 11, test the length on every keyup, and truncate back down to 10 characters, but this solution seems a bit kludgy. What I'd prefer to do is capture the character before keyup and, depending on whether or not it is a display character, present the notification to the user and prevent the default action. A white-list would be challenging since we handle a lot of international data. I've played around with every combination of keydown, keypress, and keyup, reading event.keyCode, event.charCode, and event.which, but I can't find a single combination that works across all browsers. The best I could manage is the following, which works properly in >=IE6, Chrome5, and FF3.6, but fails in Opera (NOTE: the following code utilizes jQuery):

        $(function(){
            $('#textbox').keypress(function(e){
                var $this = $(this);
                var key = ('undefined'==typeof e.which?e.keyCode:e.which);
                if ($this.val().length==($this.attr('maxlength')||10))
                {
                    switch(key){
                        case 13: //return
                        case 9:  //tab
                        case 27: //escape
                        case 8:  //backspace
                        case 0:  //other non-alphanumeric
                            break;
                        default:
                            alert('no - '+e.charCode+' - '+e.which+' - '+e.keyCode);
                            return false;
                    };
                }
            });
        });

    I'll grant that what I'm doing is likely over-engineering the solution, but now that I'm invested in it, I'd like to know of a solution. Thanks for your help!

    Read the article

  • java BufferedReader specific length returns NUL characters

    - by Bastien
    I have a TCP socket client receiving messages (data) from a server. Messages are of the type length (2 bytes) + data (length bytes), delimited by STX and ETX characters. I'm using a BufferedReader to retrieve the first two bytes, decode the length, then read again from the same BufferedReader the appropriate length and put the result in a char array. Most of the time I have no problem, but SOMETIMES (1 out of thousands of messages received), when attempting to read (length) bytes from the reader, I get only part of it, the rest of my array being filled with NUL characters. I imagine it's because the buffer has not yet been filled.

        char[] bufLen = new char[2];
        _bufferedReader.read(bufLen);
        int len = decodeLength(bufLen);

        char[] _rawMsg = new char[len];
        _bufferedReader.read(_rawMsg);
        return _rawMsg;

    I solved the problem in several iterative ways: first I tested the last char of my array; if it wasn't ETX I would read chars from the BufferedReader one by one until I reached ETX, then start over my regular routine. The consequence is that I would basically DROP one message. Then, in order to still retrieve that message, I would find the first occurrence of the NUL char in my "truncated" message, read & store additional characters one at a time until I reached ETX, and append them to my "truncated" message, confirming the length is ok. That works also, but I'm really thinking there's something I could do better, like checking whether the total number of characters I need are available in the buffer before reading it, but I can't find the right way to do it... any idea / pointer? thanks!

    Read the article

  • pdfmark for docinfo metadata in pdf is not accepting accented characters in Keywords or Subject

    - by rpilkey
    I am inserting metadata into PostScript files with a program, to be distilled to PDF with Adobe Distiller. I am using this code, which I grabbed from Thomas Merz's "Web Publishing with Acrobat/PDF":

        /pdfmark where {pop} {userdict /pdfmark /cleartomark load put} ifelse
        [ /Title (mot accenté)
          /Author (mot accenté)
          /Subject (mot accenté)
          /Keywords (mot accenté)
          /DOCINFO pdfmark

    When you look at the metadata, the accented characters turn into "?" in the Subject and Keywords fields, but not the Title and Author fields. The characters are the same, ASCII 233. I tried replacing them with octal encoding (\351), which came out the same (Title and Author okay, Subject and Keywords messed up). The file encoding is latin-1, with unix eols. I found a mention on the Adobe forums, but the answer didn't make sense to me: http://forums.adobe.com/message/1165593 I changed the encoding to utf-8, inserted the characters directly as raw bytes (in vim: <Ctrl-v>u00e9), no change. I tried inserting the BOM in a few places; it didn't work. This is with Acrobat Pro 9; I didn't notice this problem with Acrobat Pro 7. Does anybody know of a workaround to get the accented characters into ALL the metadata fields when modifying a PostScript file, or can you tell me if I'm doing it wrong? It seems weird that different fields would not accept the same bytes.

    Read the article

  • Python line file iteration and strange characters

    - by muckabout
    I have a huge gzipped text file which I need to read line by line. I go with the following:

        import codecs
        import gzip

        for i, line in enumerate(codecs.getreader('utf-8')(gzip.open('file.gz'))):
            print i, line

    At some point late in the file, the Python output diverges from the file. This is because lines are getting broken due to weird special characters that Python thinks are newlines. When I open the file in vim, they are correct, but the suspect characters are formatted weirdly. Is there something I can do to fix this? I've tried other codecs, including utf-16 and latin-1. I've also tried with no codec. I looked at the file using od. Sure enough, there are \n characters where they shouldn't be. But the "wrong" ones are preceded by a weird character. I think there's some encoding here with some characters being 2 bytes, the trailing byte being a \n if not viewed properly. If I replace:

        gzip.open('file.gz')

    with:

        os.popen('zcat file.gz')

    it works fine (and is actually quite a bit faster). But I'd like to know where I'm going wrong.
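    One plausible explanation (an assumption from the symptoms, not confirmed): the readline machinery behind codecs.getreader() splits on every character that Unicode considers a line break, not just \n, so bytes such as 0x85 (NEL) in a mis-decoded file silently become newlines. A small sketch of the behaviour in Python 2:

        # splitlines(), which the codecs stream reader relies on, treats
        # U+0085 (NEL) and U+2028 as line breaks, unlike a plain split('\n').
        u = u"first\x85second\u2028third"
        print u.splitlines()  # three "lines" despite having no \n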

    Read the article

  • Foreign/accented characters in sql query

    - by FromCanada
    I'm using Java and Spring's JdbcTemplate class to build an SQL query in Java that queries a Postgres database. However, I'm having trouble executing queries that contain foreign/accented characters. For example, the (trimmed) code:

        JdbcTemplate select = new JdbcTemplate(postgresDatabase);
        String query = "SELECT id FROM province WHERE name = 'Ontario';";
        Integer id = select.queryForObject(query, Integer.class);

    will retrieve the province id, but if I instead use name = 'Québec' then the query fails to return any results (this value is in the database, so the problem isn't that it's missing). I believe the source of the problem is that the database I am required to use has the default client encoding set to SQL_ASCII, which according to this prevents automatic character set conversions. (The Java environment's encoding is set to 'UTF-8' while I'm told the database uses 'LATIN1' / 'ISO-8859-1'.) I was able to manually indicate the encoding when the result sets contained values with foreign characters, as a solution to a previous problem of a similar nature. Ex:

        String provinceName = new String(resultSet.getBytes("name"), "ISO-8859-1");

    But now that the foreign characters are part of the query itself, this approach hasn't been successful. (I suppose since the query has to be saved in a String before being executed anyway, breaking it down into bytes and then changing the encoding only muddles the characters further.) Is there a way around this without having to change the properties of the database or reconstruct it? PostScript: I found a function on StackOverflow when making up the title, but it didn't seem to work (I might not have used it correctly, but even if it did work, it doesn't seem like it could be the best solution).

    Read the article

  • How to store unlimited characters in Oracle 11g?

    - by vicky21
    We have a table in Oracle 11g with a varchar2 column. We use a proprietary programming language where this column is defined as a string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to store more than 2000 characters (in fact, unlimited characters). The DBAs don't like the BLOB or LONG datatypes for maintenance reasons. The solution I can think of is to remove this column from the original table, have a separate table for this column, and then store each character in a row in order to get unlimited characters. This table would be joined with the original table for queries. Is there any better solution to this problem? UPDATE: The proprietary programming language allows us to define variables of type string and blob; there is no option of CLOB. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.

    Read the article
