Search Results

Search found 84 results on 4 pages for 'crlf'.

Page 3 of 4

  • pop3 multiline problem

    - by stupid_idiot
    hi everyone, i'm making a client for pop3 and somehow i can't figure out how to handle multiline responses. There is no difference in the first response from the server whether it is single or multiline; it always ends with CRLF (considering the usual case), so how do I know if I should call recv() once more?
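
    Whether another recv() is needed is determined by the command, not by the first reply: POP3 commands such as RETR, LIST or UIDL get multiline responses, and those always terminate with a line containing only ".". A minimal Python sketch of that loop (names and buffer size are illustrative):

        import socket

        def read_multiline(sock: socket.socket) -> bytes:
            # Keep reading until the CRLF "." CRLF terminator arrives;
            # the returned data still starts with the +OK status line.
            buf = b""
            while not buf.endswith(b"\r\n.\r\n"):
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("server closed before terminator")
                buf += chunk
            body = buf[:-3]                            # drop the "." line
            return body.replace(b"\r\n..", b"\r\n.")   # undo dot-stuffing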

    Read the article

  • Proper line-ending for an open-source PHP project

    - by Mahdi
    What is the proper line-ending preference for an open-source web project? Obviously it includes source code in PHP, HTML, CSS and JavaScript. The source code is managed via GitHub now, and there are Windows (8 & 7), Linux (Ubuntu) and OSX developers on the team, which means all the major operating systems. P.S. We are using "Windows" CRLF line endings, plus "UTF-8 without BOM", right now without facing any problem; however, I think it might be better to use the "*nix/OSX" LF style. I have heard some stories about problems caused by the additional "CR" on Linux or OS X.
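
    A common arrangement for a mixed-OS team is to keep LF in the repository and let Git normalize at checkout; a hedged sketch of a .gitattributes file expressing that (the file patterns are illustrative):

        # Normalize all text files to LF in the repository
        * text=auto
        # Force LF in working copies for source files on every platform
        *.php  text eol=lf
        *.js   text eol=lf
        *.css  text eol=lf
        *.html text eol=lf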

    Read the article

  • testing ssl cert for smtps => "secure connection could not be established with this website"

    - by cc young
    testing ssl cert on server using a web service. https, imaps and pop3s all check, but smtps yields the message "we advise you not to submit any confidential or personal data to this website because a secure connection could not be established with this website." running postfix tls logging: connect from s097.networking4all.com[213.249.64.242] lost connection after UNKNOWN from s097.networking4all.com[213.249.64.242] disconnect from s097.networking4all.com[213.249.64.242] these work correctly: telnet mydomain.net 587 openssl s_client -starttls smtp -crlf -connect mydomain.net:587 but cannot get email using ssl to log into either 587 or 564 - get same "UNKNOWN" problem. email smtp w/o ssh works fine. the test site is http://www.networking4all.com/en/support/tools/site+check/

    Read the article

  • How to remove line breaks (or carriage returns) only from certain parts of a block of text?

    - by Luke Allen
    Whenever I copy formatted text from a PDF file which is formatted to have line breaks (or carriage returns), I need to find a way to remove these line breaks without removing the paragraph format. To do this I need to use RegEx (regular expressions) to only remove the line breaks which aren't preceded by a period. So for example, if a string of text has a line break right after a period, that is obviously almost always a legitimate line break which will start a new paragraph. If a string of text has a line break mid-word or after a word with no period, it's simply part of the bad formatting I need to get rid of. My problem is that I don't know how to write a RegEx that removes only the ^p marks in Word, or CRLF, or line breaks in any format, on the condition that it skips the ones following a period.
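
    One way to express that condition is a negative lookbehind, so the break is removed only when the character before it is not a period; a minimal Python sketch (the same pattern works in editors that accept PCRE-style regexes):

        import re

        def unwrap(text: str) -> str:
            # Replace line breaks NOT preceded by a period with a space,
            # keeping breaks after '.', '!' or '?' (paragraph ends).
            return re.sub(r'(?<![.!?])\r?\n', ' ', text)

        print(unwrap("broken in\nthe middle.\nNext paragraph"))
        # -> "broken in the middle.\nNext paragraph"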

    Read the article

  • MIME "Content-Type" folding and parameter question regarding RFCs?

    - by BastiBense
    Hello, I'm trying to implement a basic MIME parser for multipart/related in C++/Qt. So far I've been writing some basic parser code for headers, and I'm reading the RFCs to get an idea how to do everything as close to the specification as possible. Unfortunately there is a part in the RFC that confuses me a bit: From RFC 822 Section 3.1.1: Each header field can be viewed as a single, logical line of ASCII characters, comprising a field-name and a field-body. For convenience, the field-body portion of this conceptual entity can be split into a multiple-line representation; this is called "folding". The general rule is that wherever there may be linear-white-space (NOT simply LWSP-chars), a CRLF immediately followed by AT LEAST one LWSP-char may instead be inserted. Thus, the single line Alright, so I simply parse a header field, and if a CRLF follows with linear whitespace, I concatenate the lines in a useful manner to result in a single header line. Let's proceed... From RFC 2045 Section 5.1: In the Augmented BNF notation of RFC 822, a Content-Type header field value is defined as follows: content := "Content-Type" ":" type "/" subtype *(";" parameter) ; Matching of media type and subtype ; is ALWAYS case-insensitive. [...] parameter := attribute "=" value attribute := token ; Matching of attributes ; is ALWAYS case-insensitive. value := token / quoted-string token := 1*<any (US-ASCII) CHAR except SPACE, CTLs, or tspecials> Okay. So it seems if you want to specify a Content-Type header with parameters, you simply do it like this: Content-Type: multipart/related; foo=bar; something=else ... and a folded version of the same header would look like this: Content-Type: multipart/related; foo=bar; something=else Correct? Good. As I kept reading the RFCs, I came across the following in RFC 2387 Section 5.1 (Examples): Content-Type: Multipart/Related; boundary=example-1 start="<[email protected]>"; type="Application/X-FixedRecord" start-info="-o ps" --example-1 Content-Type: Application/X-FixedRecord Content-ID: <[email protected]> [data] --example-1 Content-Type: Application/octet-stream Content-Description: The fixed length records Content-Transfer-Encoding: base64 Content-ID: <[email protected]> [data] --example-1-- Hmm, this is odd. Do you see the Content-Type header? It has a number of parameters, but not all have a ";" as parameter delimiter. Maybe I just didn't read the RFCs correctly, but if my parser works strictly like the specification defines, the type and start-info parameters would result in a single string or, worse, a parser error. Guys, what are your thoughts on this? Just a typo in the RFCs? Or did I miss something? Thanks!
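
    For the folding and parameter mechanics themselves, a deliberately simplified Python sketch (it ignores ";" inside quoted strings, and a strict parser like this would indeed choke on the RFC 2387 example exactly as described):

        import re

        def unfold(header: str) -> str:
            # RFC 822 folding: a CRLF followed by whitespace is removed,
            # restoring the single logical header line.
            return re.sub(r'\r\n(?=[ \t])', '', header)

        def parse_content_type(value: str):
            # Strict reading of RFC 2045: type/subtype, then ";"-separated
            # attribute=value parameters (quoted-string values unquoted).
            parts = [p.strip() for p in value.split(';')]
            media_type = parts[0].lower()
            params = {}
            for p in parts[1:]:
                attr, _, val = p.partition('=')
                if attr:
                    params[attr.strip().lower()] = val.strip().strip('"')
            return media_type, params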

    Read the article

  • Best way to produce automated exports in tab-delimited form from Teradata?

    - by Cade Roux
    I would like to be able to produce a file by running a command or batch which basically exports a table or view (SELECT * FROM tbl), in text form (default conversions to text for dates, numbers, etc. are fine), tab-delimited, with NULLs converted to an empty field (i.e. a NULL column would have no space between tab characters), with appropriate line termination (CRLF, i.e. Windows-style), and preferably also with column headings. This is the same export I can get in SQL Assistant 12.0 by choosing the export option, using a tab delimiter, setting my NULL value to '' and including column headings. I have been unable to find the right combination of options - the closest I have gotten is by building a single column with CAST and '09'XC, but the rows still have a leading 2-byte length indicator in most settings I have tried. I would prefer not to have to build large strings for the various different tables.
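
    Not the Teradata-native answer (BTEQ/FastExport options are what is being asked for here), but for reference the same file can be produced from any DB-API connection with a few lines of Python; driver, table and path names below are placeholders:

        import csv

        def export_tab_delimited(conn, table, out_path):
            # SELECT * and write: header row, tab delimiter, CRLF line
            # endings, NULL -> empty field (nothing between the tabs).
            cur = conn.cursor()
            cur.execute("SELECT * FROM " + table)
            with open(out_path, "w", newline="", encoding="utf-8") as f:
                w = csv.writer(f, delimiter="\t", lineterminator="\r\n")
                w.writerow([d[0] for d in cur.description])
                for row in cur:
                    w.writerow(["" if v is None else v for v in row])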

    Read the article

  • extract payload from tcpflow output

    - by Felipe Alvarez
    Tcpflow outputs a bunch of files, many of which are HTTP responses from a web server. Inside, they contain HTTP headers, including Content-type: , and other important ones. I'm trying to write a script that can extract just the payload data (i.e. image/jpeg; text/html; et al.) and save it to a file [optional: with an appropriate name and file extension]. The EOL chars are \r\n (CRLF) and so this makes it difficult to use in GNU distros (in my experience). I've been trying something along the lines of: sed /HTTP/,/^$/d to delete all text from the beginning of HTTP (incl) to the end of \r\n\r\n (incl) but have had no luck. I'm looking for help from anyone with good experience in sed and/or awk. I have zero experience with Perl, so I'd prefer to use common GNU command line utilities for this. Find a sample tcpflow output file here. Thanks, Felipe
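
    The usual catch is that the "blank" line separating headers from body still ends with a carriage return, so /^$/ never matches; matching the CR explicitly (e.g. a /^\r$/ address in GNU sed) fixes it. A byte-level Python sketch of the same split (naming and extension handling left out):

        def http_body(raw: bytes) -> bytes:
            # Split at the first CRLF CRLF; everything after it is payload.
            head, sep, body = raw.partition(b"\r\n\r\n")
            return body if sep else raw

        with open("flow-file", "rb") as f:        # a tcpflow output file
            payload = http_body(f.read())
        with open("payload.bin", "wb") as out:
            out.write(payload)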

    Read the article

  • Replace newline from MySQL TEXT field to parse w/ JSON

    - by dr3w
    Hi, "replace newline" seems to be a question asked here and there like hundred times already. But however, i haven't found any working solution for myself yet. I have a textarea that i use to save data into DB. Then using AJAX I want to get data from the DB in the backend that is in TEXT field and to pass it to frontend using JSON. But pasing JSON returns an error, as new lines from DB are not valid JSON syntax, I guess i should use \n instead... But how do i replace newlinew from DB with \n? I've tried this $t = str_replace('<br />', '\n', nl2br($t)); and this $t = preg_replace("/\r\n|\n\r|\r|\n/", "\n", $t); and using CHAR(13) and CHAR(10), and still I get an error the new line in textarea is equivalent to, i guess $t = 'text with a newline'; it gives the same error. And in notepad i clearly see that it is crlf

    Read the article

  • Windows secure pinned website tile

    - by Stijn de Voogd
    I'm currently working on a pinned website tile for my website and instead of using a static XML file i'm linking the tile to a web api that returns user specific XML. My question is: Is it possible to secure this tile so that a user needs to be logged in before the data loads? The pinned website livetile doesn't send any security request headers/ cookies: - Http: Request, GET /v1/livetile/firsttile Command: GET + URI: /v1/livetile/firsttile ProtocolVersion: HTTP/1.1 UserAgent: Microsoft-WNS/6.3 Host: 192.168.14.109:2089 Cache-Control: no-cache HeaderEnd: CRLF Sidenote: Notice how it's not even sending an accept header even though it only wants xml. Info: http://msdn.microsoft.com/en-US/library/ie/dn455106 http://msdn.microsoft.com/en-us/library/ie/hh761491.aspx# Thanks in advance!

    Read the article

  • manipulating strings, search text

    - by alhambraeidos
    Hi all, I'll try to explain my issue: note 1: I have only strings, not files, ONLY strings. I have a string like this (NOTE: I include line numbers to explain better) The line separator is \r\n (CRLF) string allText = 1 Lorem ipsum Lorem ipsum 2 == START 001partXXX.sql == 3 Lorem ipsum TEXT Lorem ipsum 4 == END 001partXXX.sql == 5 Lorem ipsum TEXT Lorem ipsum 6 == START 002partzzz.sql == 7 Lorem ipsum TEXT Lorem ipsum 8 == END 002partzzz.sql == I have content strings like this: string contents1 = == START 001partXXX.sql == Lorem ipsum TEXT Lorem ipsum == END 001partXXX.sql == the other content string: string contents2 = == START 002partzzz.sql == Lorem ipsum TEXT Lorem ipsum == END 002partzzz.sql == Then, allText.IndexOf(contents1) != -1 allText.IndexOf(contents2) != -1 I need a function that receives 3 parameters: allText, Contents, and the text to find in contents, and it returns the line number of Text To Find in AllText For example, input: allText, contents2, "TEXT" output = line number 7 Another sample, input: allText, contents1, "TEXT" output = line number 3 Another sample, input: allText, contents1, "TEXT NOT FOUND" output = line number -1 How can I implement this function? Any help is very useful for me. Thanks in advance.
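
    A sketch of the index arithmetic, written in Python for brevity (the same steps translate directly to String.IndexOf in C#): locate the needle inside the contents block, locate the block inside allText, and count the CRLFs before the combined offset.

        def line_of(all_text: str, contents: str, needle: str) -> int:
            # Returns the 1-based line number of `needle` within `all_text`,
            # restricted to the `contents` block; -1 if either lookup fails.
            in_contents = contents.find(needle)
            block_start = all_text.find(contents)
            if in_contents == -1 or block_start == -1:
                return -1
            return all_text.count("\r\n", 0, block_start + in_contents) + 1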

    Read the article

  • How to read line by line a CR-only file with Perl?

    - by Subb
    Hi, I'm trying to read a file which has only CR as the line delimiter. I'm using Mac OS X and Perl v.5.8.8. This script should run on every platform, for every kind of line delimiter (CR, LF, CRLF). My current code is the following: open(FILE, "test.txt"); while($record = <FILE>){ print $record; } close(TEST); This currently prints only the last line (or worse). What is going on? Obviously, I would like to not convert the file. Is it possible?
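
    In Perl the line delimiter is whatever $/, the input record separator, is set to; it defaults to \n, which is why a CR-only file comes back as a single "line". For a truly delimiter-agnostic read, slurping the file and splitting on /\r\n|\r|\n/ works on every platform. Python's universal-newlines mode does that splitting automatically, as this hedged sketch shows for comparison:

        def read_records(path):
            # newline=None (the default) treats CR, LF and CRLF alike,
            # translating each to '\n' as lines are read.
            with open(path, newline=None, encoding="utf-8") as f:
                for line in f:
                    yield line.rstrip("\n")

        for record in read_records("test.txt"):
            print(record)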

    Read the article

  • AutoIt simulate new script line

    - by Renato Böhler
    I need some way to loop in a single line. Is there a way to simulate new lines in AutoIt? Because if I try While 1 MsgBox (0,1,2) Wend It will not work. So I was wondering if there is a way to simulate a new line, something like While 1 - MsgBox (0,1,2) - Wend Or some function to do this. I also already tried to make this: Func repeat($func, $limit) $i = 0 While $i <= $limit Execute($func) $i = $i + 1 WEnd EndFunc But it only executes Execute($func) once, even if I change While $i <= $limit for While 1. I have tried Execute("While $i <= 5" & @LF & "MsgBox(0, 1, 24)" & @LF & "$i = $i + 1" & @LF & "WEnd") too, it doesn't work even if I change @LF for @CRLF, @CR, Chr(13), \n, \r... Any ideas?

    Read the article

  • How do I count the number of bytes read by TextReader.ReadLine()?

    - by Steve Guidi
    I am parsing a very large file of records (one per line, each of varying length), and I'd like to keep track of the number of bytes I've read in the file so that I may recover in the event of a failure. I wrote the following: string record = myTextReader.ReadLine(); bytesRead += record.Length; ParseRecord(record); However this doesn't work since ReadLine() strips any CR/LF characters in the line. Furthermore, a line may be terminated by either CR, LF, or CRLF characters, which means I can't just add 1 to bytesRead. Is there an easy way to get the actual line length, or do I write my own ReadLine() method in terms of the granular Read() operations?
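
    One workaround, shown here as a Python sketch of the idea rather than a .NET answer: read each line in a form that keeps its terminator, add the raw length to a running offset, and strip the CR/LF only afterwards for parsing (this sketch assumes LF or CRLF terminators).

        def records_with_offsets(path):
            # Binary iteration keeps the b"\n" / b"\r\n" terminator, so
            # len(raw) is exactly the number of bytes consumed by the line.
            offset = 0
            with open(path, "rb") as f:
                for raw in f:
                    offset += len(raw)
                    yield offset, raw.rstrip(b"\r\n").decode("utf-8")

        # Persist `offset` after each record; after a failure, seek to the
        # last saved offset and resume from there.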

    Read the article

  • Any way to find out which line break char(s) to use in Javascript?

    - by Irro
    I'm trying to parse some text into a textarea control and at the same time replace all with ordinary line break chars. I have been able to do it on Windows by replacing with CR (it didn't work with CRLF strangely enough, it gave me a line break + empty space) but I'm afraid that this code won't work on Unix/Mac because they use LF for line breaks. Is there any way to use the system default line break char in JavaScript? Something similar to Environment.NewLine in .NET (I wasn't able to write backslash in this editor, but I use /r for CR and /n for LF; replace / with backslash)

    Read the article

  • how do i detect \r\n in a u_char type of buffer?

    - by aDi Adam
    i am trying to construct http content from packet sniffing in C. right now i am able to save all the packets in a file, but i want to get rid of the headers in the first packet. they are also being saved since they are part of the tcp payload. the actual body after the header starts after a double "crlf" or \r\n\r\n in the http response. how do i detect \r\n so that i save only the following part of the buffer to the file? the buffer is u_char type. i can't figure out the right call. i looked on google and other places but i mostly find c# commands, nothing in C.

    Read the article

  • problem using getline with a unicode file

    - by hamishmcn
    UPDATE: Thank you to @Potatoswatter and @Jonathan Leffler for comments - rather embarrassingly I was caught out by the debugger tool tip not showing the value of a wstring correctly - however it still isn't quite working for me and I have updated the question below: If I have a small multibyte file I want to read into a string, I use the following trick - I use getline with a delimiter of '\0' e.g. std::string contents_utf8; std::ifstream inf1("utf8.txt"); getline(inf1, contents_utf8, '\0'); This reads in the entire file including newlines. However if I try to do the same thing with a wide character file it doesn't work - my wstring only reads to the first line. std::wstring contents_wide; std::wifstream inf2(L"ucs2-be.txt"); getline( inf2, contents_wide, wchar_t(0) ); //doesn't work For example, if my unicode file contains the chars A and B separated by CRLF, the hex looks like this: FE FF 00 41 00 0D 00 0A 00 42 Based on the fact that with a multibyte file getline with '\0' reads the entire file, I believed that getline( inf2, contents_wide, wchar_t(0) ) should read in the entire unicode file. However it doesn't - with the example above my wide string would contain the following two wchar_ts: FF FF (If I remove the wchar_t(0), it reads in the first line as expected, i.e. FE FF 00 41 00 0D 00.) Why doesn't wchar_t(0) work as a delimiting wchar_t of "00 00"? Thank you

    Read the article

  • HtmlAgilityPack giving problems with malformed html

    - by Kapil
    I want to extract meaningful text out of an html document and I was using html-agility-pack for the same. Here is my code: string convertedContent = HttpUtility.HtmlDecode(ConvertHtml(HtmlAgilityPack.HtmlEntity.DeEntitize(htmlAsString))); ConvertHtml: public string ConvertHtml(string html) { HtmlDocument doc = new HtmlDocument(); doc.LoadHtml(html); StringWriter sw = new StringWriter(); ConvertTo(doc.DocumentNode, sw); sw.Flush(); return sw.ToString(); } ConvertTo: public void ConvertTo(HtmlAgilityPack.HtmlNode node, TextWriter outText) { string html; switch (node.NodeType) { case HtmlAgilityPack.HtmlNodeType.Comment: // don't output comments break; case HtmlAgilityPack.HtmlNodeType.Document: foreach (HtmlNode subnode in node.ChildNodes) { ConvertTo(subnode, outText); } break; case HtmlAgilityPack.HtmlNodeType.Text: // script and style must not be output string parentName = node.ParentNode.Name; if ((parentName == "script") || (parentName == "style")) break; // get text html = ((HtmlTextNode)node).Text; // is it in fact a special closing node output as text? if (HtmlNode.IsOverlappedClosingElement(html)) break; // check the text is meaningful and not a bunch of whitespaces if (html.Trim().Length > 0) { outText.Write(HtmlEntity.DeEntitize(html) + " "); } break; case HtmlAgilityPack.HtmlNodeType.Element: switch (node.Name) { case "p": // treat paragraphs as crlf outText.Write("\r\n"); break; } if (node.HasChildNodes) { foreach (HtmlNode subnode in node.ChildNodes) { ConvertTo(subnode, outText); } } break; } } Now in some cases when the html pages are malformed (for example the following page - http://rareseeds.com/cart/products/Purple_of_Romagna_Artichoke-646-72.html has a malformed meta-tag like <meta content="text/html; charset=uft-8" http-equiv="Content-Type">) [Note "uft" instead of utf] my code is puking at the time I am trying to load the html document. Can someone suggest me how can I overcome these malformed html pages and still extract relevant text out of a html document? Thanks, Kapil

    Read the article

  • Delphi: Alternative to using Assign/ReadLn for text file reading

    - by Ian Boyd
    i want to process a text file line by line. In the olden days i loaded the file into a StringList: slFile := TStringList.Create(); slFile.LoadFromFile(filename); for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; Problem with that is that once the file gets to be a few hundred megabytes, i have to allocate a huge chunk of memory; when really i only need enough memory to hold one line at a time. (Plus, you can't really indicate progress when the system is locked up loading the file in step 1). Then i tried using the native, and recommended, file I/O routines provided by Delphi: var f: TextFile; begin Assign(filename, f); while ReadLn(f, oneLine) do begin //process the line end; Problem with Assign is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former stringlist example doesn't support no-lock either, unless you change it to LoadFromStream: slFile := TStringList.Create; stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone); slFile.LoadFromStream(stream); stream.Free; for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; So now even though i've gained no locks being held, i'm back to loading the entire file into memory. Is there some alternative to Assign/ReadLn, where i can read a file line-by-line, without taking a sharing lock? i'd rather not get directly into Win32 CreateFile/ReadFile, and having to deal with allocating buffers and detecting CR, LF, CRLF's. i thought about memory mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to map views (pieces) of the file at a time. Starts to get ugly. i just want Assign with fmShareDenyNone!

    Read the article

  • Delphi: Fast(er) widestring concatenation

    - by Ian Boyd
    i have a function whose job is to convert an ADO Recordset into html: class function RecordsetToHtml(const rs: _Recordset): WideString; And the guts of the function involve a lot of wide string concatenation: while not rs.EOF do begin Result := Result+CRLF+ '<TR>'; for i := 0 to rs.Fields.Count-1 do Result := Result+'<TD>'+VarAsString(rs.Fields[i].Value)+'</TD>'; Result := Result+'</TR>'; rs.MoveNext; end; With a few thousand results, the function takes what any user would feel is too long to run. The Delphi Sampling Profiler shows that 99.3% of the time is spent in widestring concatenation (@WStrCatN and @WstrCat). Can anyone think of a way to improve widestring concatenation? i don't think Delphi 5 has any kind of string builder. And Format doesn't support Unicode. And to make sure nobody tries to weasel out: pretend you are implementing the interface: IRecordsetToHtml = interface(IUnknown) function RecordsetToHtml(const rs: _Recordset): WideString; end; Update One I thought of using an IXMLDOMDocument to build up the HTML as XML. But then i realized that the final HTML would be xhtml and not html - a subtle, but important, difference. Update Two Microsoft knowledge base article: How To Improve String Concatenation Performance
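
    Whatever the language, the usual cure is to stop growing the result string one piece at a time: collect the fragments and do a single join at the end, or preallocate the final length and fill it in place. A Python sketch of the builder pattern, keeping the shape of the loop above:

        def recordset_to_html(rows):
            # Appending to a list and joining once is linear; repeated
            # `Result := Result + ...` style concatenation is quadratic.
            parts = []
            for row in rows:
                parts.append("\r\n<TR>")
                for value in row:
                    parts.append("<TD>%s</TD>" % value)
                parts.append("</TR>")
            return "".join(parts)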

    Read the article

  • Python3 and ftplib uploading files

    - by Teifion
    My python2 script uploads files nicely using this method but python3 is presenting problems and I'm stuck as to where to go next (googling hasn't helped). from ftplib import FTP ftp = FTP(ftp_host, ftp_user, ftp_pass) ftp.storbinary('STOR myfile.txt', open('myfile.txt')) The error I get is Traceback (most recent call last): File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload ftp.storlines('STOR myfile.txt', open('myfile.txt')) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 454, in storbinary conn.sendall(buf) TypeError: must be bytes or buffer, not str I tried altering the code to from ftplib import FTP ftp = FTP(ftp_host, ftp_user, ftp_pass) ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt')) But instead I got this Traceback (most recent call last): File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt')) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 450, in storbinary conn = self.transfercmd(cmd) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 358, in transfercmd return self.ntransfercmd(cmd, rest)[0] File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 329, in ntransfercmd resp = self.sendcmd(cmd) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 244, in sendcmd self.putcmd(cmd) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 179, in putcmd self.putline(line) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 172, in putline line = line + CRLF TypeError: can't concat bytes to str Can anybody point me in the right direction
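
    The traceback points at the local file, not the command string: in Python 3, storbinary() reads bytes from the file object, so it must be opened in binary mode. A minimal sketch (host and credentials are placeholders):

        from ftplib import FTP

        ftp = FTP("ftp.example.com", "user", "password")
        # Open the file in binary mode; the default text mode yields str,
        # which is what triggers "must be bytes or buffer, not str".
        with open("myfile.txt", "rb") as fh:
            ftp.storbinary("STOR myfile.txt", fh)
        ftp.quit()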

    Read the article

  • Delphi: Alternative to using Reset/ReadLn for text file reading

    - by Ian Boyd
    i want to process a text file line by line. In the olden days i loaded the file into a StringList: slFile := TStringList.Create(); slFile.LoadFromFile(filename); for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; Problem with that is that once the file gets to be a few hundred megabytes, i have to allocate a huge chunk of memory; when really i only need enough memory to hold one line at a time. (Plus, you can't really indicate progress when the system is locked up loading the file in step 1). Then i tried using the native, and recommended, file I/O routines provided by Delphi: var f: TextFile; begin Reset(f, filename); while ReadLn(f, oneLine) do begin //process the line end; Problem with Assign is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former stringlist example doesn't support no-lock either, unless you change it to LoadFromStream: slFile := TStringList.Create; stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone); slFile.LoadFromStream(stream); stream.Free; for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; So now even though i've gained no locks being held, i'm back to loading the entire file into memory. Is there some alternative to Assign/ReadLn, where i can read a file line-by-line, without taking a sharing lock? i'd rather not get directly into Win32 CreateFile/ReadFile, and having to deal with allocating buffers and detecting CR, LF, CRLF's. i thought about memory mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to map views (pieces) of the file at a time. Starts to get ugly. i just want Reset with fmShareDenyNone!

    Read the article

  • jQuery validation plugin - removing elements

    - by d3020
    I'm using the jQuery validation plugin. On most of my input type... tags I have class='required'. When I submit the page, via JavaScript, the controls on the page that have this class are found. However, there are a handful of checkboxes that I don't need to validate. I've tried removing the class code completely from the input tag, and also tried class='cancel' and class='required:false'. When I do any of those things, though, the form can't find the checkbox control when it submits. How do I keep the ability to do Request.Form and find my checkbox object, but at the same time not apply validation to this particular control when the form submits? Thank you. Edit here. This is what I'm using without the "checked" code and ternary operator. In my input tag I'm calling a function like this... sb.Append(" " + crlf); Inside that function is where I check for the True or False coming back, like this. case "chkFlashedCarton": strResultValue = pst.FlashedCarton.ToString(); if (strResultValue == "True") { strResultValue = " checked"; } break; strResultValue is what is returned. Does this help? Thank you.

    Read the article

  • JAVA - Download PDF file from Webserver

    - by Augusto Picciani
    I need to download a pdf file from a webserver to my pc and save it locally. I used Httpclient to connect to the webserver and get the content body: HttpEntity entity=response.getEntity(); InputStream in=entity.getContent(); String stream = CharStreams.toString(new InputStreamReader(in)); int size=stream.length(); System.out.println("stringa html page LENGTH:"+stream.length()); System.out.println(stream); SaveToFile(stream); Then i save the content in a file: //check CRLF (i don't know if i need to do this) String[] fix=stream.split("\r\n"); File file=new File("C:\\Users\\augusto\\Desktop\\progetti web\\test\\test2.pdf"); PrintWriter out = new PrintWriter(new FileWriter(file)); for (int i = 0; i < fix.length; i++) { out.print(fix[i]); out.print("\n"); } out.close(); I also tried to save the String content to a file directly: OutputStream out=new FileOutputStream("pathPdfFile"); out.write(stream.getBytes()); out.close(); But the result is always the same: I can open the pdf file but i see white pages only. Is the mistake around the pdf stream and endstream charset encoding? Does the pdf content between stream and endstream need to be manipulated in some other way?
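
    The white pages come from round-tripping the PDF through a Reader/String: a PDF is binary data, so it has to be copied byte for byte from the InputStream to an OutputStream, never decoded or split on line endings. The same byte-for-byte copy as a Python sketch (URL and output path are placeholders):

        import shutil
        import urllib.request

        url = "http://example.com/report.pdf"
        with urllib.request.urlopen(url) as resp, open("report.pdf", "wb") as out:
            # Stream the raw bytes straight to disk; no text decoding anywhere.
            shutil.copyfileobj(resp, out)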

    Read the article

  • Custom ASP.net UserControl List<T> Property, having trouble setting declaratively

    - by Chris McCall
    I'm developing a custom UserControl to inject JQuery hotkeys into a page declaratively on the server side. Here's the control (the important parts anyway): [AspNetHostingPermission(SecurityAction.Demand, Level = AspNetHostingPermissionLevel.Minimal), AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal), DefaultProperty("HotKeys"), ParseChildren(true, "HotKeys"), ToolboxData("<{0}:HotKeysControl runat=\"server\"> </{0}:HotKeysControl>")] public partial class HotKeysControl : System.Web.UI.WebControls.WebControl { private string crlf = Environment.NewLine; public List<HotKey> _HotKeys; public HotKeysControl() { if (_HotKeys == null) { _HotKeys = new List<HotKey>(); } // if I uncomment this line, script is injected into the page // _HotKeys.Add(new HotKey("ctrl+r","thisControl")); } [ Category("Behavior"), Description("The hotkeys collection"), DesignerSerializationVisibility( DesignerSerializationVisibility.Content), Editor(typeof(HotKeyCollectionEditor), typeof(UITypeEditor)), PersistenceMode(PersistenceMode.InnerDefaultProperty) ] public List<HotKey> HotKeys { set { _HotKeys = value; } get { return _HotKeys; } } Here's the .aspx code: <%@ Register Assembly="MyCompany.ProductName.WebControls" Namespace="MyCompany.ProductName.WebControls" TagPrefix="uc" %> ... <uc:HotKeysControl ID="theHotkeys" runat="server" Visible="false"> <uc:HotKey ControlName="firstControl" KeyCode="ctrl+1" /> <uc:HotKey ControlName="thirdControl" KeyCode="ctrl+2" /> </uc:HotKeysControl> Nothing happens, as if no HotKeys objects are being added to the property collection. What Am I doing wrong? If I uncomment out the line above and "manually" add items, it works. It's something about how I'm declaratively adding hotkeys to the page. Any ideas?

    Read the article

  • IE textarea wrap bug?

    - by user2227033
    It seems that IE, starting from IE7 up to IE10, wraps text in the textarea control incorrectly when using \n (or \r\n - doesn't matter - results are the same). Is this a bug in IE, or do they treat the html standard differently than other browsers - who is right? I have defined: <textarea id="TextArea1" runat="server" style="width: 190px; height: 390px; white-space: normal; word-wrap: normal; overflow: scroll" ></textarea> When I try to add a long string like "VeryLongStringEndingWithNewLine\n" by using JavaScript code (obj.value += text;), the text is shown in one line with scroll (this is ok) but with an additional empty line (\r\n) added - why? When I try to add a short string like "Short\n" multiple times, again via JavaScript code, the text stays on the same line (it should be on separate lines because normal wrapping should be applied). Moreover, when I do a postback, all \r\n's are replaced with spaces (why?) and then the text is parsed correctly (assuming that with spaces instead of crlf, normal wrapping with spaces only wraps when the text does not fit in the area). When using FF or Chrome the same control behaves correctly - long lines are shown without an additional empty line, short lines are on different lines, and there is no replacement with spaces when doing a postback. I know I could probably use other options or white space characters, but I feel that the above is not correct in IE. Any comments? Mindaugas

    Read the article
