Search Results

Search found 5776 results on 232 pages for 'forbidden characters'.

Page 86/232

  • Flash CS5 font is largest part of the SWF

    - by dev.e.loper
    I'm transferring a project from CS4 to CS5 and (without any changes) my SWF file gets to be 10 times bigger: it was 7kb and now it's 77kb. I generated a size report and it looks like the font is taking up most of the space. I haven't changed any settings, and I'm not sure why the font is taking up so much space. Is there a way around this? Here is my size report:

        Font Name          Bytes   Characters
        -----------------  ------  ------------
        _sans              12
        MilkyWell          317     .blsu
        Calibri-Bold Bold  75960   %.0123456789

    As you can see, Calibri-Bold is taking up 75kb and I only have 12 characters in it.

    Read the article

  • Changing Emacs Forward-Word Behaviour

    - by gvkv
    As the title says, how does one change the behaviour of the emacs forward-word function? For example, suppose [] is the cursor. Then:

        my $abs_target_path[]= abs_path($target);
        <M-f>
        my $abs_target_path = abs[_]path($target);

    I know I could just use M-f M-b, but as far as I'm concerned that shouldn't be necessary, and I'd like to change it. In particular, I want two things:

    1. When I press M-f, I want to go to the first character of the next word, regardless of whether the point is within a word, within a group of spaces, or somewhere else.
    2. The ability to customize word characters on a mode-by-mode basis. After all, moving around in CPerl mode is different than, say, TeX mode.

    So, in the above example, item 1 would have the cursor move to the 'a' (with the point to its left) after hitting M-f. Item 2 would allow me to define underscores and sigils as word characters.

    Read the article

  • putenv/setenv using substitutions

    - by vinaym
    I need the user to define all the environment variables needed for my program in a text file, as shown below:

        MyDLLPath = C:\MyDLLPath
        MyOption = Option1
        PATH = %MyDLLPath%;%PATH%;

    In my program I read each line and call putenv with the string. The problem is that the environment substitutions (%MyDLLPath%) are not being expanded. I am guessing at the following fix, sketched after this list:

    1. Check each line for % characters.
    2. Get the text between two consecutive % characters.
    3. Call getenv using the text from step 2.
    4. Replace the value obtained above into the line, then call putenv.

    Is there a better way to do it?
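
    One way to sketch steps 1-4 (in Python for brevity, though the asker's program is in C; on Windows, the Win32 ExpandEnvironmentStrings API performs this same substitution natively):

        import os
        import re

        def expand_percent_vars(line):
            # Replace each %NAME% with its value from the environment;
            # leave the token unchanged if NAME is not defined.
            return re.sub(r'%([^%]+)%',
                          lambda m: os.environ.get(m.group(1), m.group(0)),
                          line)

        os.environ["MyDLLPath"] = r"C:\MyDLLPath"
        print(expand_percent_vars(r"PATH = %MyDLLPath%;%PATH%;"))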

    Read the article

  • XML Output is Truncated in SQL

    - by Muhammad Akhtar
    Hi, I need to return my result set as XML. This works fine, but when the number of records increases, my XML output is truncated. Here is my query:

        select t.id, t.name, t.address
        from test t
        FOR XML AUTO, ROOT('Response'), ELEMENTS

    I have set some options to increase the output result set, like:

        Tools --> Options --> Query Results --> SQL Server --> Results to Text --> Maximum number of characters displayed in each column
        Tools --> Options --> Results --> Maximum characters per column

    but I am still unable to get my desired result. Please suggest a solution. Thanks.

    Read the article

  • Pattern Matching in Scheme

    - by kunjaan
    How do I accept the following input: a list of 0 or more characters that ends with 3, or a list of 1 or more characters followed by a 4 and 0 or more characters after the 4? Something like:

        (match (list 3))         -> #t
        (match (list 1 2 3))     -> #t
        (match (list 1 2 3 4))   -> #t
        (match (list 1 2 3 4 5)) -> #t
        (match (list 4))         -> #f

    EDIT: This is not my homework. I am trying to write something like ELIZA from PAIP, but I only know how to write a pattern that begins with a word.
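
    For clarity, the predicate being described could be sketched like this (in Python rather than Scheme, purely as an illustration of the logic):

        def match(seq):
            # Accept a list that ends with 3, or a list in which a 4
            # appears with at least one element before it.
            if seq and seq[-1] == 3:
                return True
            return 4 in seq and seq.index(4) >= 1

        assert match([3]) and match([1, 2, 3])
        assert match([1, 2, 3, 4]) and match([1, 2, 3, 4, 5])
        assert not match([4])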

    Read the article

  • Make NSFormatter validate NSTextFieldCell continuously

    - by harms
    In Cocoa, I have an NSOutlineView where the cells are NSTextFieldCell. The cells display string values that are formatted according to certain rules (such as floats, or pairs of floats with a space in between). I have made a custom NSFormatter to validate the text, and this seems to work with no problem. However, the cell (or the outline view; I'm unsure which is causing this) only seems to use the formatter at the moment my editing would end. If I type some alphabetic characters into the text field (which violates the formatting rules), these characters show up -- the only way I notice the formatter doing its job is that I'm now prevented from moving keyboard focus away from this cell. If I return the contents of the cell to a valid form, then I can move focus away. I have set both the cell and the outline view to be "continuous". It would be better if I were unable to enter invalid text into the cell in the first place. Is it possible to make it behave that way, and if so, how?

    Read the article

  • PHPMailer safe practices - Send escaped / sanitized variables or not ?

    - by FreekOne
    I'm using the PHPMailer-Lite class to build an email-sending script, and I'm not sure if I should use addslashes() on the $name variable when adding it to the constructor. If somebody's last name were O'Riley (or any other name that contains characters which would normally be sanitized before handling) and I sent it unescaped, wouldn't it mess with the script or the email sending? Is it safe to send it unescaped? As a side note, I would also like to avoid having my message body say "Hello, O\'Riley". Looking at the source, I saw that it only trims whitespace and line-ending (\r\n) characters from the received $name variable, so any advice on this would be more than welcome. Thank you all in advance!

    Read the article

  • Invert regexp in vim

    - by Chris J
    There are a few "how do I invert a regexp" questions here on Stack Overflow, but I can't find one for vim (if it does exist, my google-fu is lacking today). In essence, I want to match all non-printable characters and delete them. I could write a short script, or drop to a shell and use tr or something similar, but a vim solution would be dandy :-) Vim has the atom \p to match printable characters; however, trying :s/[^\p]//g to match the inverse failed and just left me with every 'p' in the file. I've seen the (?!xxx) sequence in other questions, but vim seems to not recognise this sequence, and I've not found an atom for non-printable chars. In the interim, I'm going to drop to external tools, but if anyone's got any trick up their sleeve to do this, it'd be welcome :-) Ta!
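
    The external-tool fallback the asker mentions might look something like this (a minimal sketch in Python, standing in for tr; it assumes "printable" means printable ASCII plus tab, newline, and carriage return):

        import re
        import sys

        # Delete everything outside printable ASCII and common whitespace.
        text = sys.stdin.read()
        sys.stdout.write(re.sub(r'[^\x20-\x7e\t\n\r]', '', text))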

    Read the article

  • Converting from ANSI to Unicode

    - by Rayne
    Hi all, I'm using Visual Studio .NET 2003, and I'm trying to convert a program written purely with ANSI characters to be independent of Unicode/multi-byte characters. The program has a callback function for pcap_loop, called "got_packet". It's defined as:

        void got_packet(u_char *user, const struct pcap_pkthdr *header, const u_char *cpacket)
        {
            USES_CONVERSION;
            _TUCHAR *packet;
            packet = A2T(cpacket);
            ...
        }

    However, I get the error message:

        error C2440: 'type cast': cannot convert from 'const u_char *' to 'ATL::CA2WEX<>'

    How do I fix this? Thank you. Regards, Rayne

    Read the article

  • DataAnnotation attributes buddy class strangeness - ASP.NET MVC

    - by JK
    Given this POCO class, which was automatically generated by an EntityFramework T4 template (it has not been and cannot be manually edited in any way):

        public partial class Customer
        {
            [Required]
            [StringLength(20, ErrorMessage = "Customer Number - Please enter no more than 20 characters.")]
            [DisplayName("Customer Number")]
            public virtual string CustomerNumber { get; set; }

            [Required]
            [StringLength(10, ErrorMessage = "ACNumber - Please enter no more than 10 characters.")]
            [DisplayName("ACNumber")]
            public virtual string ACNumber { get; set; }
        }

    Note that "ACNumber" is a badly named database field, so the generator is unable to produce the correct display name and error message, which should be "Account Number". So we manually create this buddy class to add custom attributes that could not be automatically generated:

        [MetadataType(typeof(CustomerAnnotations))]
        public partial class Customer { }

        public class CustomerAnnotations
        {
            [NumberCode] // This line does not work
            public virtual string CustomerNumber { get; set; }

            [StringLength(10, ErrorMessage = "Account Number - Please enter no more than 10 characters.")]
            [DisplayName("Account Number")]
            public virtual string ACNumber { get; set; }
        }

    where [NumberCode] is a simple regex-based attribute that allows only digits and hyphens:

        [AttributeUsage(AttributeTargets.Property)]
        public class NumberCodeAttribute : RegularExpressionAttribute
        {
            private const string REGX = @"^[0-9-]+$";
            public NumberCodeAttribute() : base(REGX) { }
        }

    Now, when I load the page, the DisplayName attribute works correctly: it shows the display name from the buddy class, not the generated class. The StringLength attribute does not work correctly: it shows the error message from the generated class ("ACNumber" instead of "Account Number"). But the [NumberCode] attribute in the buddy class does not even get applied to the CustomerNumber property:

        foreach (ValidationAttribute attrib in prop.Attributes.OfType<ValidationAttribute>())
        {
            // This collection correctly contains all the [Required] and [StringLength]
            // attributes BUT does not contain the [NumberCode] attribute
            ApplyValidation(generator, attrib);
        }

    Why does the prop.Attributes.OfType<ValidationAttribute>() collection not contain the [NumberCode] attribute? NumberCode inherits from RegularExpressionAttribute, which inherits from ValidationAttribute, so it should be there. If I manually move the [NumberCode] attribute to the autogenerated class, then it is included in the collection. So what I don't understand is why this particular attribute does not work when in the buddy class, when other attributes in the buddy class do work, and why this attribute works in the autogenerated class but not in the buddy. Any ideas? Also, why does DisplayName get overridden by the buddy when StringLength does not?

    Read the article

  • Finding text's bounding rect in Core Text

    - by Mo
    I'm trying to find the boundaries of a line of text in Core Text. For simplicity, assume it has a single character. At the moment I'm using the following method:

        line = CTLineCreateWithAttributedString(attrString);
        rect = CTLineGetImageBounds(line, context);

    It works most of the time, but for some characters, like math italic d (Unicode: 0x1D451) or math italic q (Unicode: 0x1D45E), the width is a bit short. I tried using CTLineGetTypographicBounds() or CTFramesetterSuggestFrameSizeWithConstraints(), but they didn't help either (I think they use the glyph's advance to find the width, not its graphical width). As the font itself isn't italic, I also can't use the slant angle to correct this. I tried accessing the glyphs directly and using CTFontCreatePathForGlyph(), but failed, as CGGlyph and UniChar are both 16-bit and I need 32-bit characters. Does anyone know if I'm doing anything wrong? If so, what's the right way?

    Read the article

  • to escape or not to escape: well formed XHTML with diacritics

    - by andresmh
    Say that you have an XHTML document in English, but it has accented characters (e.g. meta name="author" content="José"). Let's say you have no control over the HTTP headers. Should the characters be replaced with their corresponding named entities (e.g. &eacute;, etc.)? Should the doctype and the xml:lang attribute be set to English? I know I can check the W3C recommendation, but I am asking from a more practical point of view.
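
    As an aside, escaping non-ASCII characters mechanically is straightforward if numeric (rather than named) character references are acceptable; a small sketch in Python:

        # The xmlcharrefreplace error handler emits numeric character references
        # such as &#233; (named entities like &eacute; need a lookup table instead).
        text = u"Jos\u00e9"
        print(text.encode("ascii", "xmlcharrefreplace").decode("ascii"))  # Jos&#233;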

    Read the article

  • C#, string replace russian to english

    - by Fabio Beoni
    Hello, I have a strange problem replacing chars in a string... I read a .txt file containing Russian text, and starting from a list of letter mappings from Russian to English (ru=en), I loop over the list and would like to replace the Russian characters with English characters. The problem: in the debugger I can see the Russian text being read correctly, and the English as well, but with myWord = myWord.Replace(ruChar, enChar) the string is not replaced. My txt file uses UTF-8 encoding. Any suggestions? Thank you all...

    Read the article

  • pyparsing ambiguity

    - by Claudiu
    I'm trying to parse some text using pyparsing. The problem is that I have names that can contain white space. So my input might look like this:

        Joe
        Bob
        Jimmy Foo

        Joe decides to eat.
        Bob decides to not eat.
        Jimmy Foo decides to eat.

    How can I create a parser for the "decides to eat" line? If I create my name parser naively, meaning with alphabetic characters plus space characters, it will match the entire line.
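
    Assuming the names are known up front (as the roster at the top of the input suggests), one sketch is to build the name parser from the roster itself; pyparsing's oneOf checks longer alternatives first, so "Jimmy Foo" wins over a greedy word-plus-space pattern:

        from pyparsing import Literal, Optional, oneOf

        names = ["Joe", "Bob", "Jimmy Foo"]
        name = oneOf(names)  # matches the longest listed name first

        sentence = (name("who")
                    + Literal("decides to")
                    + Optional(Literal("not"))("negated")
                    + Literal("eat."))

        result = sentence.parseString("Jimmy Foo decides to eat.")
        print(result["who"])  # Jimmy Foo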

    Read the article

  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF-8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users knew their files even were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting the encoding before decoding. This seems like such a universal problem that I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms? I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate their encoding, and this isn't useful for most non-UTF files. My questions boil down to:

    1. Is BOM-aware detection sufficient for the vast majority of files?
    2. In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
    3. Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
    4. Is there a repository anywhere that has a multitude of files with various encodings to use for testing?

    While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this. So far I've found:

    - A "valid" UTF-16 file with Ctrl-S characters caused encoding to UTF-8 to throw an exception (illegal character?) -- that was an XML encoding exception.
    - Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?

    Currently I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in them. Hence "valid" files may not be "pure". Oh joy. Thanks.
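
    Since the asker names Python as one of the target languages: a minimal sketch of BOM-aware detection with a trial-decode fallback. The fallback order matters, because ISO-8859-1 accepts every byte sequence and must come last (which is also why question 2 above is so hard in general):

        import codecs

        def detect_and_decode(data):
            # 1. Check for a BOM. UTF-32 must be tested before UTF-16,
            #    since the UTF-32 LE BOM starts with the UTF-16 LE BOM bytes.
            for bom, enc in ((codecs.BOM_UTF32_LE, "utf-32"),
                             (codecs.BOM_UTF32_BE, "utf-32"),
                             (codecs.BOM_UTF8, "utf-8-sig"),
                             (codecs.BOM_UTF16_LE, "utf-16"),
                             (codecs.BOM_UTF16_BE, "utf-16")):
                if data.startswith(bom):
                    return data.decode(enc), enc
            # 2. No BOM: try strict decoders from most to least picky.
            for enc in ("utf-8", "iso-8859-1"):
                try:
                    return data.decode(enc), enc
                except UnicodeDecodeError:
                    continue
            raise ValueError("undecodable input")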

    Read the article

  • Purpose of Trigraph sequences in C++?

    - by Kirill V. Lyadvinsky
    According to the C++'03 Standard 2.3/1:

        Before any other processing takes place, each occurrence of one of the following sequences of three characters ("trigraph sequences") is replaced by the single character indicated in Table 1.

        ----------------------------------------------------------------------------
        | trigraph | replacement | trigraph | replacement | trigraph | replacement |
        ----------------------------------------------------------------------------
        |   ??=    |      #      |   ??(    |      [      |   ??<    |      {      |
        |   ??/    |      \      |   ??)    |      ]      |   ??>    |      }      |
        |   ??'    |      ^      |   ??!    |      |      |   ??-    |      ~      |
        ----------------------------------------------------------------------------

    In real life that means that this code:

        printf( "What??!\n" );

    will print What|, because ??! is a trigraph sequence that is replaced with the | character. My question is: what is the purpose of trigraphs? Is there any practical advantage to using them?

    UPDATE: Answers mention that some European keyboards don't have all the punctuation characters, so non-US programmers have to use trigraphs in everyday life?

    UPDATE 2: Visual Studio 2010 has trigraph support turned off by default.

    Read the article

  • Why do C compilers prepend underscores to external names?

    - by Michael Burr
    I've been working in C for so long that the fact that compilers typically add an underscore to the start of an extern is just understood... However, another SO question today got me wondering about the real reason why the underscore is added. A Wikipedia article claims that one reason is:

        It was common practice for C compilers to prepend a leading underscore to all external scope program identifiers to avert clashes with contributions from runtime language support

    I think there's at least a kernel of truth to this, but it also seems not to really answer the question, since if the underscore is added to all externs it won't help much with preventing clashes. Does anyone have good information on the rationale for the leading underscore? Is the added underscore part of the reason that the Unix creat() system call doesn't end with an 'e'? I've heard that early linkers on some platforms had a limit of 6 characters for names. If that's the case, then prepending an underscore to external names would seem a downright crazy idea (now I only have 5 characters to play with...).

    Read the article

  • Unescape _xHHHH_ XML escape sequences using Python

    - by John Machin
    I'm using Python 2.x [not negotiable] to read XML documents [created by others] that allow the content of many elements to contain characters that are not valid XML characters, by escaping them using the _xHHHH_ convention: e.g. ASCII BEL, aka U+0007, is represented by the 7-character sequence u"_x0007_". Neither the functionality that allows representation of any old character in the document nor the manner of escaping is negotiable. I'm parsing the documents using cElementTree or lxml [semi-negotiable]. Here is my best attempt at unescaping the parser output as efficiently as possible:

        import re

        def unescape(s,
                     subber=re.compile(r'_x[0-9A-Fa-f]{4}_').sub,
                     repl=lambda mobj: unichr(int(mobj.group(0)[2:6], 16)),
                     ):
            if "_" in s:
                return subber(repl, s)
            return s

    The above is biased by observing a very low frequency of "_" in typical text, and by a better-than-doubling of speed from avoiding the regex apparatus where possible. The question: any better ideas out there?
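
    For reference, the function above behaves like this (Python 2.x, per the question):

        >>> unescape(u"bell=_x0007_!")
        u'bell=\x07!'
        >>> unescape(u"no escapes here")
        u'no escapes here'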

    Read the article

  • How to specify character encoding for Ant Task parameters in Java

    - by räph
    I'm writing an Ant task in Java. In my build.xml I specify parameters which should be read from my Java class. Problems occur when I use special characters, like German umlauts (Ö, Ä, Ü), in these parameters: in my Java task they appear as ?-characters (using System.out.print). All my files are encoded as UTF-8, and my build.xml has the corresponding declaration:

        <?xml version="1.0" encoding="UTF-8" ?>

    For the details of writing the task, I follow http://ant.apache.org/manual/develop.html (especially point 5, nested elements). I have nested elements in my task like:

        <parameter name="test" value="ÖÄÜtest"/>

    and a Java method to read the parameter values:

        public void addConfiguredParameter(Parameter prop) {
            System.out.println(prop.getValue()); // prints ???test
        }

    Read the article

  • PHP: Replace umlauts with closest 7-bit ASCII equivalent in an UTF-8 string

    - by BlaM
    What I want to do is remove all accents and umlauts from a string, turning "lärm" into "larm" or "andré" into "andre". What I tried was to utf8_decode() the string and then use strtr() on it, but since my source file is saved as UTF-8, I can't enter the ISO-8859-15 characters for all the umlauts: the editor inserts the UTF-8 characters. Obviously one solution would be to have an include that is an ISO-8859-15 file, but there must be a better way than another required include?

        echo strtr(utf8_decode($input),
                   'ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ',
                   'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy');

    UPDATE: Maybe I was a bit inaccurate about what I am trying to do: I do not actually want to remove the umlauts, but to replace them with their closest one-character ASCII equivalent.
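
    One common way to express this folding is Unicode decomposition -- sketched here in Python rather than PHP, purely to illustrate the idea behind the strtr table above:

        import unicodedata

        def fold_accents(text):
            # NFKD splits accented letters into base letter + combining mark;
            # dropping the marks leaves the closest ASCII base letter.
            # Caveat: letters with no decomposition (e.g. 'ß', 'Ø') survive
            # unchanged and still need an explicit mapping.
            decomposed = unicodedata.normalize("NFKD", text)
            return u"".join(c for c in decomposed if not unicodedata.combining(c))

        print(fold_accents(u"l\u00e4rm andr\u00e9"))  # larm andre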

    Read the article

  • tchar safe functions -- count parameter for UTF-8 constants

    - by Dustin Getz
    I'm porting a library from char to TCHAR. The count parameter of this fragment, according to MSDN, is the number of multibyte characters, not the number of bytes. So, did I get this right?

        _tcsncmp(access, TEXT("ftp"), 3); // or do I want _tcsnccmp?

    "Supported on Windows platforms only, _mbsncmp and _mbsnbcmp are multibyte versions of strncmp. _mbsncmp will compare at most count multibyte characters and _mbsnbcmp will compare at most count bytes. They both use the current multibyte code page. _tcsnccmp and _tcsncmp are the corresponding generic functions for _mbsncmp and _mbsnbcmp, respectively. _tccmp is equivalent to _tcsnccmp."

    Read the article

  • Python code formatting

    - by Curious2learn
    In response to another question of mine, someone suggested that I avoid long lines in my code and use the PEP-8 rules when writing Python. One of the PEP-8 rules suggests avoiding lines longer than 80 characters. I changed a lot of my code to comply with this requirement without any problems. However, changing the following line in the manner shown below breaks the code. Any ideas why? Does it have to do with the fact that what follows the return command has to be on a single line?

    The line longer than 80 characters:

        def __str__(self):
            return "Car Type \n"+"mpg: %.1f \n" % self.mpg + "hp: %.2f \n" %(self.hp) + "pc: %i \n" %self.pc + "unit cost: $%.2f \n" %(self.cost) + "price: $%.2f "%(self.price)

    The line changed by using the Enter key and spaces as necessary:

        def __str__(self):
            return "Car Type \n"+"mpg: %.1f \n" % self.mpg + "hp: %.2f \n" %(self.hp) +
                "pc: %i \n" %self.pc + "unit cost: $%.2f \n" %(self.cost) +
                "price: $%.2f "%(self.price)
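
    The second version fails because Python only continues a statement across lines inside brackets (or after a backslash); a bare newline ends the return statement. A sketch of the same method with the expression wrapped in parentheses (behavior unchanged, since each %-format still binds to its own string literal):

        def __str__(self):
            return ("Car Type \n" +
                    "mpg: %.1f \n" % self.mpg +
                    "hp: %.2f \n" % self.hp +
                    "pc: %i \n" % self.pc +
                    "unit cost: $%.2f \n" % self.cost +
                    "price: $%.2f " % self.price)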

    Read the article

  • UTF-8 to ISO-8859-1 mapping / lossless conversion libraries in Java

    - by Pawel Krupinski
    I need to convert characters from UTF-8 to ISO-8859-1 in Java without losing, for example, all of the UTF-8-specific punctuation. Ideally I would like these to be converted to their equivalents in ISO (e.g. there are probably 5 different single quotes in Unicode, and I would like them all converted to the ISO single-quote character). String.getBytes("ISO-8859-1") just won't do the trick in this case, as it will lose the UTF-8-specific chars. Do you know of any ready mappings or libraries in Java that would map UTF-8-specific characters to ISO?
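
    The mapping idea itself is small; here is a sketch in Python (a Java version would be the same table as a Map plus String.replace), covering only a few illustrative characters -- a real table would need to be much larger:

        PUNCT_MAP = {
            u"\u2018": u"'",  u"\u2019": u"'",   # curly single quotes
            u"\u201c": u'"',  u"\u201d": u'"',   # curly double quotes
            u"\u2013": u"-",  u"\u2014": u"-",   # dashes
            u"\u2026": u"...",                   # horizontal ellipsis
        }

        def to_latin1(text):
            # Fold known punctuation first, then encode; '?' marks anything
            # that still has no ISO-8859-1 equivalent.
            for src, dst in PUNCT_MAP.items():
                text = text.replace(src, dst)
            return text.encode("iso-8859-1", "replace")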

    Read the article
