Search Results

Search found 7251 results on 291 pages for 'pdf parsing'.

  • Postback of delimited text from javascript and parsing on server side

    - by Alt_Doru
    In my ASP.NET page, I have a JavaScript object, like this: var args = new Object(); args.Data1 = document.getElementById("Data1").value; args.Data2 = document.getElementById("Data2").value; args.Data3 = document.getElementById("Data3").value; The object is populated on the client side from user input. I am passing the data to a C# method through an Ajax request: someObj.AjaxRequest(args.Data1 + "|" + args.Data2 + "|" + args.Data3) Finally, I need to obtain the data in my C# code: string data1 = [JS args.Data1] string data2 = [JS args.Data2] string data3 = [JS args.Data3] My question is what's the best solution for this? As I am concatenating bits of user input, I don't think it's best to use "|" as a delimiter. Also, it's not clear to me how to actually parse the data in my C# code to populate the three variables with the original data.
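
    A minimal server-side sketch (not from the original question, names are illustrative): if each field is URL-encoded on the client with encodeURIComponent before being joined, a user-typed "|" can no longer break the split, and the C# side only has to Split and decode.

        // Hypothetical helper: parse the pipe-delimited payload sent by the Ajax call.
        using System;
        using System.Web;

        public static class ArgsParser
        {
            public static string[] Parse(string payload)
            {
                string[] parts = payload.Split('|');            // "data1|data2|data3"
                for (int i = 0; i < parts.Length; i++)
                    parts[i] = HttpUtility.UrlDecode(parts[i]); // undo encodeURIComponent
                return parts;
            }
        }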

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis, only problem is each season the data their isn't 100% consistent. I grab the json file from the site, then wish to save it to a csv with the first line in the csv containing the heading for that column, so the heading would be essentially the key from the python data type. #!/usr/bin/env python import requests import json import csv base_url = 'http://www.afl.com.au/api/cfs/afl/' token_url = base_url + 'WMCTok' player_url = base_url + 'matchItems/round' def printPretty(data): print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': '))) session = requests.Session() # session makes it simple to use the token across the requests token = session.post(token_url).json()['token'] # get the token session.headers.update({'X-media-mis-token': token}) # set the token Season = 2014 Roundno = 4 if Roundno<10: strRoundno = '0'+str(Roundno) else: strRoundno = str(Roundno) # get some data (could easily be a for loop, might want to put in a delay using Sleep so that you don't get IP blocked) data = session.get(player_url + '/CD_R'+str(Season)+'014'+strRoundno) # print everything printPretty(data.json()) with open('stats_game_test.csv', 'w', newline='') as csvfile: spamwriter = csv.writer(csvfile, delimiter="'",quotechar='|', quoting=csv.QUOTE_ALL) for profile in data.json()['items']: spamwriter.writerow(['%s' %(profile)]) #for key in data.json().keys(): # print("key: %s , value: %s" % (key, data.json()[key])) The above code grabs the json and writes it to a csv, but it puts the key in each individual cell next to the value (eg 'venueId': 'CD_V190'), the key needs to be just across the first row as a heading. It gives me a csv file with data in the cells like this Column A B 'tempInCelsius': 17.0 'totalScore': 32 'tempInCelsius': 16.0 'totalScore': 28 What I want is the data like this tempInCelsius totalScore 17 32 16 28 As I mentioned up the top, the data isn't always consistent so if I define what fields to grab with spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']]) then it will error out on certain data grabs. This is why I'm now trying the above method so it just grabs everything regardless of what data is there.
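
    A possible approach (a sketch, not the poster's code): collect the union of keys across all items and let csv.DictWriter write the header row once, filling missing fields with blanks so inconsistent seasons don't raise errors.

        import csv

        def items_to_csv(items, path):
            """Write a list of dicts to CSV; the header is the union of all keys."""
            fieldnames = sorted({key for item in items for key in item})
            with open(path, 'w', newline='') as csvfile:
                writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval='')
                writer.writeheader()
                writer.writerows(items)

        # e.g. items_to_csv(data.json()['items'], 'stats_game_test.csv')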

  • problem with generating PDF and session_start()

    - by gillian
    Hi all, I've been using the PDF class produced by R&OS successfully in a number of recent developments. I'd like to use it in a page that performs a database query before generating a PDF (it's basically a summary of the session, so I'm using session_id() as part of the query). All is fine if I run this in Firefox - not fine in IE. I think session_start() is doing something with the headers that upsets IE, as it appears unable to load the page (comment out session_start() and the page loads fine). I'm getting a little concerned as, on further investigation, it appears that R&OS is not supported ... a bad newbie learning experience, and I really don't want to have to try to adopt another class this late in the day. Any thoughts as to what I could try next? Thanks, G
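
    One thing worth trying (a hedged sketch, not a confirmed fix): session_start() sends no-cache headers by default, and IE is known to refuse inline documents served with Pragma: no-cache, so relaxing the cache limiter before starting the session sometimes helps. The variable name below is illustrative.

        <?php
        // Relax the default "nocache" headers that session_start() would send.
        session_cache_limiter('private_no_expire');   // or 'private'
        session_start();

        // ... query the database using session_id() and build the PDF here ...

        header('Content-Type: application/pdf');
        header('Content-Disposition: inline; filename="summary.pdf"');
        echo $pdfData;   // raw PDF bytes produced by the R&OS class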

  • parsing position files in ruby

    - by john
    I have a sample position file like below. 789754654 COLORA SOMETHING1 19370119FYY076 2342423234SS323423 742784897 COLORB SOMETHING2 20060722FYY076 2342342342SDFSD3423 I am interested in positions 54-61 (4th column). I want to change the date to be a different format. So the final outcome will be: 789754654 COLORA SOMETHING1 01191937FYY076 2342423234SS323423 742784897 COLORB SOMETHING2 07222006FYY076 2342342342SDFSD3423 The columns are separated by spaces, not tabs. And the final file should have the exact number of spaces as the original file... the only thing changing should be the date format. How can I do this? I wrote a script but it will lose the original spaces and positioning will be messed up. file.each_line do |line| dob = line.split(" ") puts dob[3] #got the date. change its format 5.times { puts "**" } end Can anyone suggest a better strategy so that positioning in the original file remains the same?
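
    One way to keep every other character in place (a sketch, assuming the date really sits in columns 54-61 as YYYYMMDD; the file name is illustrative): slice the line by index instead of splitting on whitespace, and overwrite only those eight characters.

        # Columns 54-61 are zero-based offsets 53..60.
        File.foreach("input.txt") do |line|
          next unless line.length >= 61           # guard against short lines
          date = line[53, 8]                      # e.g. "19370119" (YYYYMMDD)
          line[53, 8] = date[4, 4] + date[0, 4]   # becomes "01191937" (MMDDYYYY)
          puts line                               # redirect stdout to keep the original intact
        end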

  • Question creating PDF document in Zend Framework

    - by deaddancer
    I need to take a ZF-rendered view and create a PDF that should look pretty much exactly the same, and email it. The major issue I have right now is getting the HTML created by the view into a string that I can then process with the Zend_PDF::parse method. The view I need to turn into a PDF is the result of a posted form. I've tried grabbing the contents of ob_get_contents into a string after a successful post, but for some reason it's not in there. Should I press on with this angle? Any help would be greatly appreciated!
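
    A sketch of one way to capture the markup without output buffering (Zend Framework 1 assumed; controller, script and variable names are illustrative): render the view script explicitly to a string from the action that handles the post. Note that Zend_Pdf::parse() expects PDF data rather than HTML, so the string would still need an HTML-to-PDF step before emailing.

        <?php
        class ReportController extends Zend_Controller_Action
        {
            public function summaryAction()
            {
                $this->view->order = $this->getRequest()->getPost();

                // Render the same script the normal response would use,
                // but capture the markup as a string instead of echoing it.
                $html = $this->view->render('report/summary.phtml');

                // $html can now be fed to an HTML-to-PDF converter and emailed.
                $this->_helper->viewRenderer->setNoRender(true);
                $this->getResponse()->setBody($html);
            }
        }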

  • Parsing data without HTML tags

    - by user296507
    Hi, I need to extract the actual phone number from the HTML listed below, but I'm not really sure how to do it with Nokogiri CSS selectors since there are no HTML tags around it. When I use at_css('.phonetitle') it only parses "Phone" and not the number. <div class="detail"> <span class="address">Corner of Toorak Road and Chapel Street, South Yarra</span><br> <span class="phonetitle">Phone</span> 95435 34341 <br><br> </div>
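
    A sketch of one way to reach the bare text: the number sits in a text node immediately after the span, so step to the span's next sibling instead of selecting by tag.

        require 'nokogiri'

        doc = Nokogiri::HTML(html)   # html holds the markup shown above
        phone = doc.at_css('.phonetitle').next_sibling.text.strip
        puts phone                   # => "95435 34341"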

  • parsing ssid with iwconfig in c

    - by user1781595
    I am building a bar for DWM (Ubuntu Linux), showing Wi-Fi details such as the SSID. That's my code: #include <stdio.h> #include <stdlib.h> int main( int argc, char *argv[] ) { FILE *fp; int status; char path[1035]; /* Open the command for reading. */ fp = popen("iwconfig", "r"); if (fp == NULL) { printf("Failed to run command\n" ); exit; } char s[500]; /* Read the output a line at a time - output it. */ while (fgets(path, sizeof(path)-1, fp) != NULL) { sprintf(s,"%s%s",s, path); } //printf("%s",s); /* close */ pclose(fp); char delimiter[1] = "s"; char *ptr; ptr = strtok(s, delimiter); printf("SSID: %s\n", ptr); return 0; } I am getting overflow errors and don't know what to do. I don't think that's a good way to get the SSID either... :/ Suggestions?
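
    A sketch of a safer approach (not the poster's code): scan each line of the iwconfig output for the ESSID:"..." field instead of concatenating everything into an uninitialised buffer.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *fp = popen("iwconfig 2>/dev/null", "r");
            if (fp == NULL) {
                perror("popen");
                return 1;
            }

            char line[512];
            while (fgets(line, sizeof(line), fp) != NULL) {
                char *essid = strstr(line, "ESSID:\"");
                if (essid != NULL) {
                    essid += strlen("ESSID:\"");
                    char *end = strchr(essid, '"');   /* closing quote */
                    if (end != NULL) {
                        *end = '\0';
                        printf("SSID: %s\n", essid);
                    }
                }
            }
            pclose(fp);
            return 0;
        }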

  • Parsing a Multi-Index Excel File in Pandas

    - by rhaskett
    I have a time series Excel file with a tri-level column MultiIndex that I would like to successfully parse if possible. There are some results on Stack Overflow on how to do this for an index, but not for the columns, and the parse function's header argument does not seem to take a list of rows. The ExcelFile looks like the following: Column A is all the time series dates starting on A4 Column B has top_level1 (B1) mid_level1 (B2) low_level1 (B3) data (B4-B100+) Column C has null (C1) null (C2) low_level2 (C3) data (C4-C100+) Column D has null (D1) mid_level2 (D2) low_level1 (D3) data (D4-D100+) Column E has null (E1) null (E2) low_level2 (E3) data (E4-E100+) ... So there are two low_level values, many mid_level values, and a few top_level values, but the trick is that the top and mid level values are null and are assumed to be the values to the left. So, for instance, all the columns above would have top_level1 as the top multi-index value. My best idea so far is to use transpose, but then it fills Unnamed: # everywhere and doesn't seem to work. In Pandas 0.13 read_csv seems to have a header parameter that can take a list, but this doesn't seem to work with parse.
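
    A sketch that may help with a recent pandas (file name illustrative): read_excel accepts a list of header rows, which builds the column MultiIndex directly, and any blank merged cells that survive as "Unnamed: ..." labels can be forward-filled by hand from the column to their left.

        import pandas as pd

        df = pd.read_excel("timeseries.xlsx", header=[0, 1, 2], index_col=0)

        # Forward-fill "Unnamed: ..." header cells from the previous column.
        filled, previous = [], (None, None, None)
        for col in df.columns:
            col = tuple(prev if str(level).startswith("Unnamed") else level
                        for level, prev in zip(col, previous))
            previous = col
            filled.append(col)
        df.columns = pd.MultiIndex.from_tuples(filled)

        print(df["top_level1", "mid_level1", "low_level1"].head())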

  • Parsing query strings in Java

    - by Will
    J2EE has ServletRequest.getParameterValues(). On non-EE platforms, URL.getQuery() simply returns a string. What's the normal way to properly parse the query string in a URL when not on J2EE?
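
    Outside a servlet container there is no built-in helper, so a hand-rolled sketch along these lines is common (UTF-8 assumed for the encoding; class name is illustrative):

        import java.io.UnsupportedEncodingException;
        import java.net.URLDecoder;
        import java.util.*;

        public final class QueryStrings {
            /** Splits "a=1&b=2&a=3" into {a=[1, 3], b=[2]}. */
            public static Map<String, List<String>> parse(String query)
                    throws UnsupportedEncodingException {
                Map<String, List<String>> params = new LinkedHashMap<>();
                if (query == null || query.isEmpty()) return params;
                for (String pair : query.split("&")) {
                    int eq = pair.indexOf('=');
                    String name  = URLDecoder.decode(eq >= 0 ? pair.substring(0, eq) : pair, "UTF-8");
                    String value = URLDecoder.decode(eq >= 0 ? pair.substring(eq + 1) : "", "UTF-8");
                    params.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
                }
                return params;
            }
        }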

  • Manually extracting portions of strings contained in a list (parsing)

    - by user1652011
    I'm aware that there are modules that fully simplify this function, but saying that I am running from a base install of python (standard modules only), how would I extract the following: I have a list. This list is the contents, line by line, of a webpage. Here is a mock up list (unformatted) for informative purposes: <script> link = "/scripts/playlists/1/" + a.id + "/0-5417069212.asx"; <script> "<a href="/apps/audio/?feedId=11065"><span class="px13">Eastern Metro Area Fire</span>" From the above string, I need the following extracted. The feedId (11065), which is incidentally a.id in the code above., "/scripts/playlists/1/" and "/0-5417069212.asx". Remembering that each of these lines is just contents from objects in a list, how would I go about extracting that data? Here is the full list: contents = urllib2.urlopen("http://www.radioreference.com/apps/audio/?ctid=5586") Pseudo: from urllib2 import urlopen as getpage page_contents = getpage("http://www.radioreference.com/apps/audio/?ctid=5586") feedID = % in (page_contents.search() for "/apps/audio/?feedId=%") titleID = % in (page_contents.search() for "<span class="px13">%</span>") playlistID = % in (page_contents.search() for "link = "%" + a.id + "*.asx";") asxID = * in (page_contents.search() for "link = "*" + a.id + "%.asx";") streamURL = "http://www.radioreference.com/" + playlistID + feedID + asxID + ".asx" I plan to format it as such that streamURL should = : http://www.radioreference.com/scripts/playlists/1/11065/0-5417067072.asx
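
    A stdlib-only sketch (Python 2, matching the question's urllib2; the patterns are guesses based on the snippets shown and re.search will return None if the page layout differs):

        import re
        import urllib2

        page = urllib2.urlopen("http://www.radioreference.com/apps/audio/?ctid=5586").read()

        feed_id = re.search(r'/apps/audio/\?feedId=(\d+)', page).group(1)
        link = re.search(r'link\s*=\s*"([^"]+)"\s*\+\s*a\.id\s*\+\s*"([^"]+\.asx)"', page)
        prefix, suffix = link.group(1), link.group(2)

        stream_url = "http://www.radioreference.com" + prefix + feed_id + suffix
        print stream_url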

  • Parsing complicated query parameters

    - by Will
    My Python server receives jobs that contain a list of the items to act against, rather like a search query term; an example input: (Customer:24 OR Customer:24 OR (Group:NW NOT Customer:26)) To complicate matters, customers can join and leave groups at any time, and the job should be updated live when this happens. What is the best way to parse, apply, and store (in my RDBMS) this kind of list of constraints?
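
    One stdlib-only way to start on the parsing side (a sketch, not a recommendation for the storage schema): tokenise the string and build a nested tuple tree with a tiny recursive-descent parser; the tree can then be serialised (e.g. to JSON) in the database and re-evaluated whenever group membership changes. Operators here are applied left-to-right with no precedence, and NOT is treated as a binary "and not", which matches the example.

        import re

        TOKEN = re.compile(r'\(|\)|AND|OR|NOT|\w+:\w+')

        def parse(text):
            tokens = TOKEN.findall(text)
            tree, rest = parse_expr(tokens)
            if rest:
                raise ValueError("trailing tokens: %r" % rest)
            return tree

        def parse_expr(tokens):
            left, tokens = parse_term(tokens)
            while tokens and tokens[0] in ("AND", "OR", "NOT"):
                op, tokens = tokens[0], tokens[1:]
                right, tokens = parse_term(tokens)
                left = (op, left, right)
            return left, tokens

        def parse_term(tokens):
            if tokens[0] == "(":
                inner, tokens = parse_expr(tokens[1:])
                return inner, tokens[1:]          # drop the closing ")"
            field, value = tokens[0].split(":")
            return ("match", field, value), tokens[1:]

        print(parse("(Customer:24 OR Customer:24 OR (Group:NW NOT Customer:26))"))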

  • Parsing string based on initial format

    - by Kayla
    I'm trying to parse a set of lines and extract certain parts of the string based on an initial format (reading a configuration file). A little more explanation: the format can contain up to 4 parts to be formatted. This case, %S will skip the part, %a-%c will extract the part and will be treated as a string, %d as int. What I am trying to do now is to come up with some clever way to parse it. So far I came up with the following prototype. However, my pointer arithmetic still needs some work to skip/extract the parts. Ultimately each part will be stored on an array of structs. Such as: struct st_temp { char *parta; char *partb; char *partc; char *partd; }; ... #include <stdio.h> #include <string.h> #define DIM(x) (sizeof(x)/sizeof(*(x))) void process (const char *fmt, const char *line) { char c; const char *src = fmt; while ((c = *src++) != '\0') { if (c == 'S'); // skip part else if (c == 'a'); // extract %a else if (c == 'b'); // extract %b else if (c == 'c'); // extract %c else if (c == 'd'); // extract %d (int) else { printf("Unknown format\n"); exit(1); } } } static const char *input[] = { "bar 200.1 / / (zaz) - \"bon 10\"", "foo 100.1 / / (baz) - \"apt 20\"", }; int main (void) { const char *fmt = "%S %a / / (%b) - \"%c %d\""; size_t i; for(i = 0; i < DIM (input); i++) { process (fmt, input[i]); } return (0); }
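
    A sketch of one way to fill in the skip/extract logic (buffer sizes and the struct layout are assumptions, and the parts are kept in fixed arrays rather than malloc'd): walk the format and the line in parallel, using the literal character that follows each specifier as the terminator for the token being read.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct st_temp { char parta[64]; char partb[64]; char partc[64]; int partd; };

        /* Copy characters from line into out (if out != NULL) until stop or end. */
        static const char *grab(const char *line, char stop, char *out, size_t cap)
        {
            size_t n = 0;
            while (*line && *line != stop) {
                if (out && n + 1 < cap) out[n++] = *line;
                line++;
            }
            if (out) out[n] = '\0';
            return line;
        }

        static int process(const char *fmt, const char *line, struct st_temp *t)
        {
            while (*fmt) {
                if (*fmt == '%') {
                    char spec = fmt[1];   /* S, a, b, c or d                 */
                    char stop = fmt[2];   /* literal ending this token       */
                    char buf[64];
                    fmt += 2;             /* leave fmt on the stop literal   */
                    switch (spec) {
                    case 'S': line = grab(line, stop, NULL, 0);        break;
                    case 'a': line = grab(line, stop, t->parta, 64);   break;
                    case 'b': line = grab(line, stop, t->partb, 64);   break;
                    case 'c': line = grab(line, stop, t->partc, 64);   break;
                    case 'd': line = grab(line, stop, buf, 64);
                              t->partd = atoi(buf);                    break;
                    default:  return -1;
                    }
                } else if (*line++ != *fmt++) {
                    return -1;            /* literal in format did not match */
                }
            }
            return 0;
        }

        int main(void)
        {
            struct st_temp t;
            if (process("%S %a / / (%b) - \"%c %d\"",
                        "bar 200.1 / / (zaz) - \"bon 10\"", &t) == 0)
                printf("a=%s b=%s c=%s d=%d\n", t.parta, t.partb, t.partc, t.partd);
            return 0;
        }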

  • Parsing ip addresses in php

    - by user2938780
    I am trying to get the number of active connections (Real Time) from a log file by IP connected and having a Play status but instead, it's giving me the total number of IP with status play. The number doesn't decrease at all. Keeps on increasing as soon as a new ip is added. How can I fix it? Here my code: $stringToParse = file_get_contents('wowzamediaserver_access.log'); preg_match_all('/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/', $stringToParse, $matchOP); echo "Number of connections: ".sizeof(array_unique($matchOP[0])); HERE IS THE LOG: 2013-10-30 14:54:36 CET stop stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) - 2013-10-30 14:56:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) - 2013-10-30 14:58:23 CET stop stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) - 2013-10-30 14:58:39 CET play stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) - 2013-10-30 14:59:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) - I want to be able to count the IP whenever it has a "PLAY" status and don't count it whenever it's "STOP" 2013-10-30 14:59:00 CET play stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 0.315 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 3455 1 0 0 0 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive - 2013-10-30 14:59:04 CET stop stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 4.75 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 512571 1 7222 0 503766 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive - Any solutions? I have even tried the first answer solution but getting "0" play connections. $stringToParse = file_get_contents('wowzamediaserver_access.log'); $pattern = '~^.* play.* ( ([0-9]{1,3}+\.){3,3}[0-9]{1,3}).*$~m'; preg_match_all($pattern, $stringToParse, $matches); echo count($matches[1]) . ' play actions'; But whenever I use my code, I am getting "Number of connections: xxxxx(actual count of IPs). My concern is that I only need the count of IPs with PLAY status. If that IP changes to STOP then it wont count.
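
    A sketch of the replay idea (the play/stop field position and the IP pattern are assumptions based on the sample lines): add an IP to a set on "play" and drop it on "stop", so whatever is left after the last line is the number of currently active connections.

        <?php
        $active = array();
        foreach (file('wowzamediaserver_access.log') as $line) {
            if (!preg_match('/\b(play|stop)\s+stream\b/', $line, $action)) {
                continue;                       // not a play/stop event
            }
            if (!preg_match('/\b\d{1,3}(?:\.\d{1,3}){3}\b/', $line, $ip)) {
                continue;                       // no dotted client IP on this line
            }
            if ($action[1] === 'play') {
                $active[$ip[0]] = true;         // connection opened
            } else {
                unset($active[$ip[0]]);         // connection closed
            }
        }
        echo "Number of connections: " . count($active) . "\n";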

  • Problems parsing Google Data Booksearch API XML in Ruby

    - by FrogBot
    I'm trying to parse some XML I've gotten from the Google Data Booksearch API and I'm having trouble trying to target a specific element. Currently my code looks like so: require 'gdata' client = GData::Client::BookSearch.new feed = client.get("http://books.google.com/books/feeds/volumes?q=Foundation").to_xml books = [] feed.elements.each('entry') do |entry| book = { :title => entry.elements['title'].text, :author => entry.elements['dc:creator'].text, :book_id => entry.elements['dc:identifier'].text } books.push(book) end p books and that all works fine, but I want to add a thumbnail URL to the book hash. The tag with each book's thumbnail URL looks like so: <feed> <entry> ... <link rel="http://schemas.google.com/books/2008/thumbnail" type="image/x-unknown" href="http://bks6.books.google.com/books?id=ID5P7xbmcO8C&printsec=frontcover&img=1&zoom=5&edge=curl&source=gbs_gdata"/> ... </entry> </feed> I want to grab the contents of the href attribute from this element and I'm not exactly sure how. Can anyone help me out here?
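
    A sketch of one way to pick that link out with the same REXML-style API the snippet already uses (reusing the feed and books variables from the question): match on the rel attribute rather than the tag name alone.

        THUMB_REL = 'http://schemas.google.com/books/2008/thumbnail'

        feed.elements.each('entry') do |entry|
          thumbnail = nil
          entry.elements.each('link') do |link|
            thumbnail = link.attributes['href'] if link.attributes['rel'] == THUMB_REL
          end
          book = {
            :title     => entry.elements['title'].text,
            :author    => entry.elements['dc:creator'].text,
            :book_id   => entry.elements['dc:identifier'].text,
            :thumbnail => thumbnail
          }
          books.push(book)
        end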

  • Parsing a string for dates in PHP

    - by nickf
    Given an arbitrary string, for example ("I'm going to play croquet next Friday" or "Gadzooks, is it 17th June already?"), how would you go about extracting the dates from there? If this is looking like a good candidate for the too-hard basket, perhaps you could suggest an alternative. I want to be able to parse Twitter messages for dates. The tweets I'd be looking at would be ones which users are directing at this service, so they could be coached into using an easier format, however I'd like it to be as transparent as possible. Is there a good middle ground you could think of?
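
    A deliberately rough sketch of a middle ground (the pattern only covers weekday names and day-month phrases, and whether strtotime() accepts a given phrase is worth testing case by case): pull out date-looking substrings and let strtotime() resolve each one relative to the tweet's timestamp.

        <?php
        function extract_dates($text, $now = null)
        {
            $now = $now ?: time();
            $pattern = '/\b(?:next\s+)?(?:mon|tues|wednes|thurs|fri|satur|sun)day\b'
                     . '|\b\d{1,2}(?:st|nd|rd|th)?\s+(?:jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)[a-z]*\b'
                     . '|\b(?:jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)[a-z]*\s+\d{1,2}(?:st|nd|rd|th)?\b/i';
            preg_match_all($pattern, $text, $matches);

            $dates = array();
            foreach ($matches[0] as $candidate) {
                $ts = strtotime($candidate, $now);
                if ($ts !== false) {
                    $dates[$candidate] = date('Y-m-d', $ts);
                }
            }
            return $dates;
        }

        print_r(extract_dates("I'm going to play croquet next Friday"));
        print_r(extract_dates("Gadzooks, is it 17th June already?"));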

  • word wrap in tcpdf

    - by ChuckO
    I'm using tcpdf to creat a pdf version of the html table below. How do I word wrap the text in the cells? <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <style type="text/css"> table.frm { width: 960px; Height:400px; margin-left: auto; margin-right: auto; border-width: 0px 0px 0px 0px; border-spacing: 0px; border-style: solid solid solid solid; border-color: gray gray gray gray; border-collapse: collapse; background-color: white; font-family: Verdana,Arial,Helvetica,sans-serif; font-size: 11px; } table.frm th { Width: 120px; border-width: 1px 1px 1px 1px; padding: 1px 1px 1px 1px; border-style: solid solid solid solid; border-collapse: collapse; border-color: gray gray gray gray; background-color: white; } table.frm td { width: 120px; height: 80px; vertical-align: top; border-width: 1px 1px 1px 1px; padding: 2px 2px 2px 2px; border-style: solid solid solid solid; border-collapse: collapse; border-color: gray gray gray gray; background-color: white; } </style> <title>Weekly Menu</title> </head> <body> <table class="frm"> <tr> <th align="center" colspan="8"><b>WEEKLY MENU</b></th> </tr> <tr> <th align="center" colspan="8"><b>Your Name Here</b></th> </tr> <tr> <th></th> <th>Monday</th> <th>Tuesday</th> <th>Wednesday</th> <th>Thursday</th> <th>Friday</th> <th>Saturday</th> <th>Sunday</th> </tr> <tr> <td><b>Breakfast</b></td> <td>Scrambled Eggs Black Coffee</td> <td>Vegetable Omelet Black Coffee</td> <td>2 slices Toast Black Coffee</td> <td>Cereal w/milk Black Coffee</td> <td>Orange Juice Black Coffee</td> <td>Cereal w/milk Black Coffee</td> <td>Pancakes w/syrup Black Coffee</td> </tr> <tr> <td><b>Lunch</b></td> <td>Tuna Salad Sandwich Diet Coke</td> <td>Greek Salad Black Coffee</td> <td></td> <td>Amer Cheese Sandwich Orange Juice</td> <td></td> <td></td> <td></td> </tr> <tr> <td><b>Dinner</b></td> <td>Burger Fried Onions Diet Coke</td> <td>Steak Fries Diet Sprite</td> <td></td> <td>Chicken Cutlet Baked Potato Peas</td> <td></td> <td></td> <td></td> </tr> <tr> <td><b>Snack</b></td> <td>Apple</td> <td>Orange</td> <td>Sm bag of chips</td> <td>Celery Sticks</td> <td></td> <td></td> <td></td> </tr> </table> </body> </html> This is the tcpdf code: $pdf = new TCPDF('Landscape', 'mm', '', true, 'UTF-8', false); $pdf->SetTitle('Weekly Menu'); $pdf->SetMargins(15, 7.5, 12.5); $pdf->SetAutoPageBreak(TRUE, PDF_MARGIN_BOTTOM); $pdf->SetPrintHeader(false); $pdf->SetPrintFooter(false); $pdf->AddPage(); $pdf->setFormDefaultProp(array('lineWidth'=>0, 'borderStyle'=>'dot', 'fillColor'=>array(235, 235, 255), 'strokeColor'=>array(255,255,250))); $pdf->SetFont('times', 'BU', 12); $pdf->cell(250, 8, 'Weekly Menu', 0, 1, 'C'); $pdf->cell(250, 8, $yourname, 0, 1, 'C'); $pdf->SetFont('times', '', 10); $cw=35; $ch=25; $pdf->SetXY(15,50); $pdf->cell(25,5,'',1,0,'L'); $pdf->cell($cw,5,$day1,1,0,'C'); $pdf->cell($cw,5,$day2,1,0,'C'); $pdf->cell($cw,5,$day3,1,0,'C'); $pdf->cell($cw,5,$day4,1,0,'C'); $pdf->cell($cw,5,$day5,1,0,'C'); $pdf->cell($cw,5,$day6,1,0,'C'); $pdf->cell($cw,5,$day7,1,1,'C'); $pdf->cell(25,$ch,'Breakfast',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->breakfast,1,0,'L',0,0,false,'','T'); 
$pdf->cell($cw,$ch,$record[6]->breakfast,1,1,'L',0,0,false,'','T'); $pdf->cell(25,$ch,'Lunch',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[6]->lunch,1,1,'L',0,0,false,'','T'); $pdf->cell(25,$ch,'Dinner',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[6]->dinner,1,1,'L',0,0,false,'','T'); $pdf->cell(25,$ch,'Snack',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[6]->snack,1,1,'L',0,0,false,'','T'); EOD;
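
    The usual answer (hedged, only the first few parameters are shown): Cell() clips text to a single line, while MultiCell() wraps within the cell width, so each data cell can be drawn with MultiCell using ln = 0 to stay on the same row and ln = 1 on the last column to end it. For example, the breakfast row above could become:

        $pdf->MultiCell(25,  $ch, 'Breakfast',           1, 'L', false, 0);
        $pdf->MultiCell($cw, $ch, $record[0]->breakfast, 1, 'L', false, 0);
        // ... one MultiCell per weekday ...
        $pdf->MultiCell($cw, $ch, $record[6]->breakfast, 1, 'L', false, 1); // ln = 1 ends the row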

  • Resizing pages in Adobe Acrobat Pro

    - by KP
    I am using Acrobat Professional 8 to assemble a document from two imported page images (JPGs). Acrobat seems to interpret the images at screen resolution, and creates a document that's about 35" by 46" instead of 8.5" x 11". How can I scale down the page size within Acrobat?

  • Parsing HTML Documents with the Html Agility Pack

    Screen scraping is the process of programmatically accessing and processing information from an external website. For example, a price comparison website might screen scrape a variety of online retailers to build a database of products and what various retailers are selling them for. Typically, screen scraping is performed by mimicking the behavior of a browser - namely, by making an HTTP request from code and then parsing and analyzing the returned HTML. The .NET Framework offers a variety of classes for accessing data from a remote website, namely the WebClient class and the HttpWebRequest class. These classes are useful for making an HTTP request to a remote website and pulling down the markup from a particular URL, but they offer no assistance in parsing the returned HTML. Instead, developers commonly rely on string parsing methods like String.IndexOf, String.Substring, and the like, or through the use of regular expressions. Another option for parsing HTML documents is to use the Html Agility Pack, a free, open-source library designed to simplify reading from and writing to HTML documents. The Html Agility Pack constructs a Document Object Model (DOM) view of the HTML document being parsed. With a few lines of code, developers can walk through the DOM, moving from a node to its children, or vice versa. Also, the Html Agility Pack can return specific nodes in the DOM through the use of XPath expressions. (The Html Agility Pack also includes a class for downloading an HTML document from a remote website; this means you can both download and parse an external web page using the Html Agility Pack.) This article shows how to get started using the Html Agility Pack and includes a number of real-world examples that illustrate this library's utility. A complete, working demo is available for download at the end of this article. Read on to learn more! Read More >
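
    A short sketch of the pattern described above (the URL and XPath are placeholders): load a page with HtmlWeb, then select nodes from the DOM with an XPath expression.

        using System;
        using HtmlAgilityPack;

        class Scraper
        {
            static void Main()
            {
                var web = new HtmlWeb();
                HtmlDocument doc = web.Load("http://example.com/");

                // Every anchor that has an href attribute.
                var links = doc.DocumentNode.SelectNodes("//a[@href]");
                if (links == null) return;   // SelectNodes returns null when nothing matches

                foreach (HtmlNode link in links)
                    Console.WriteLine(link.GetAttributeValue("href", ""));
            }
        }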

  • Tip: Keeping the ADF Mobile PDF Guide up to date

    - by Chris Muir
    This is a little tip for customers using Oracle's ADF Mobile. If you're like me, it's possible you don't rely on the online HTML version of the Mobile Developer's Guide for ADF, but rather download a PDF version of the file to use locally (look to the "PDF" link to the top right of the guide).  For me the convenience of the PDF is it's faster, I can search the whole document easily, I can split read the document across two pages on my home monitor, if I lose my internet connection the document is still available, and it's easy to read on my iPad (especially on long haul flights to the US across the Pacific where there is no internet connection!). The trigger point for me to download the Oracle PDF documentation has always been on a new point release of JDeveloper.  However in the case of ADF Mobile, as an extension to JDeveloper it is releasing at a much faster and independent schedule to JDeveloper and this includes updates to the documentation. As such the 11.1.2.4.0 ADF Mobile PDF guide you have locally might be out of date and you should take the opportunity to download the latest version.  This is also particularly important for ADF Mobile as not only are many new features being added for each release and included in the new documentation, but the guide is under rapid improvement to clarify much of what has been written to date.  Our documentation teams are super responsive to suggestions on how to improve the guides and this often shows per point release. How do you tell you've the latest guide? Look to the document part number which right now is "E24475-03".  This is a unique ID per release for the document, the first part being the document number, and the part after the dash the revision number.  If the website document number has a higher revision number, time to download a new up to date PDF. One last thing to share, you can follow the ADF Mobile guide document manager Brian Duffield on Twitter to keep abreast of updates. Image courtesy of Stuart Miles / FreeDigitalPhotos.net

  • PDF export printing in Internet Explorer [closed]

    - by user619804
    protected static byte[] exportReportToPdf(JasperPrint jasperPrint) throws JRException { JRPdfExporter exporter = new JRPdfExporter(); ByteArrayOutputStream baos = new ByteArrayOutputStream(); exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint); exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, baos); exporter.setParameter(JRPdfExporterParameter.PDF_JAVASCRIPT, "this.print({bUI: true,bSilent: false,bShrinkToFit: true});"); exporter.exportReport(); return baos.toByteArray(); } We are using code like this to export a PDF document from a Jasper application. The line exporter.setParameter(JRPdfExporterParameter.PDF_JAVASCRIPT, "this.print({bUI: true,bSilent: false,bShrinkToFit: true});"); adds JavaScript to send the PDF document directly to the printer. The expected behavior is that a print dialog will come up with a preview of the PDF document. This works fine most of the time - except I am having problems about one out of every 5-6 times in Internet Explorer 8 and Firefox. What happens is - the print preview dialog with the PDF document does not appear or it appears with a blank document in the preview window. -I've tried a number of different JavaScripts (different params to this.print() via exporter.setParameter -I've tried setting different response headers such as response.setContentType("application/pdf"); response.setHeader("Content-disposition","inline; filename=\"" + reportName + "\""); response.setContentLength(baos.size()); these did not seem to help This seems to be an IE and FF issue. Has anyone ever dealt with this problem? I need to get it to work across all browsers 100% of the time. Perhaps a different approach to accomplish the goal of sending the PDF document export directly to the printer? or a third party library that will work across browsers?
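
    One hedged variation that is often tried for this symptom (helper name and parameters are illustrative, not from the question): make the response unambiguous about type, length and caching before the bytes are written, since IE 8 in particular can show a blank preview when inline PDFs arrive with no-cache headers, especially over HTTPS.

        import java.io.OutputStream;
        import javax.servlet.http.HttpServletResponse;

        public final class PdfResponses {
            public static void write(HttpServletResponse response, byte[] pdf, String name)
                    throws java.io.IOException {
                response.reset();
                response.setContentType("application/pdf");
                response.setHeader("Content-Disposition", "inline; filename=\"" + name + "\"");
                response.setContentLength(pdf.length);
                // Avoid "no-cache"/"no-store", which IE dislikes for inline documents.
                response.setHeader("Cache-Control", "private");
                response.setHeader("Pragma", "private");
                OutputStream out = response.getOutputStream();
                out.write(pdf);
                out.flush();
            }
        }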

  • FPDI - Select which PDFs to show

    - by NORM
    Is there a way to offer an option to select which PDFs to show with the FPDI function? This is the regular code: $pdf->AddPage(); // set the sourcefile $pdf->setSourceFile('h.pdf'); // import page 1 $tplIdx = $pdf->importPage(1); // use the imported page and place it at point 10,10 with a width of 100 mm $pdf->useTemplate($tplIdx, 0, 0, 0); Is there a way to make this $pdf->setSourceFile('h.pdf'); an option for users who visit the website? For example: have $pdf->setSourceFile('h.pdf') and $pdf->setSourceFile('g.pdf'), then let the visitor select which one to include in the PDF via FPDI. I would prefer something like an input. Any ideas, or something similar? Help is very much appreciated! :D
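
    A sketch of one way to do it (the request parameter name and file list are illustrative): map a form value onto a whitelist of template files and hand the chosen one to setSourceFile(), so visitors can only ever select PDFs you have listed.

        <?php
        $templates = array('h' => 'h.pdf', 'g' => 'g.pdf');
        $choice = isset($_GET['template']) ? $_GET['template'] : 'h';
        if (!isset($templates[$choice])) {
            $choice = 'h';                      // fall back rather than trusting user input
        }

        $pdf = new FPDI();
        $pdf->AddPage();
        $pdf->setSourceFile($templates[$choice]);
        $tplIdx = $pdf->importPage(1);
        $pdf->useTemplate($tplIdx, 10, 10, 100);
        $pdf->Output();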

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a solution to password protect a bunch of pdf files in a directory in batch. I came across PDFtk and it looks like it might do what I need, but I've reviewed the command line PDFtk examples and can't figure out if there is a way to do it in batch without having to individually specify the output file name for every file. I'm hoping a command-line guru can take a look at the PDFtk syntax and tell me if there is some trick / command that will allow me to password protect a directory of pdf files (e.g., *.pdf) and overwrite the existing files using the same name, or consistently rename the individual output files without having to specify each output name individually. Here's a link to the PDFtk command line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help. I think I've answered my own question. Here's a bash script that appears to do the trick. I'd welcome help evaluating why the code I've commented out doesn't work... #!/bin/bash # Created by Dave, 2012-02-23 # This script uses PDFtk to password protect every PDF file # in the directory specified. The script creates a directory named "protected_[DATE]" # to hold the password protected version of the files. # # I'm using the "user_pw" parameter, # which means no one will be able to open or view the file without # the password. # # PDFtk must be installed for this script to work. # # Usage: ./protect_with_pdftk.bsh [FILE(S)] # [FILE(S)] can use wildcard expansion (e.g., *.pdf) # This part isn't working.... ignore. The goal is to avoid errors if the # directory to be created already exists by only attempting to create # it if it doesn't exists # #TARGET_DIR="protected_$(date +%F)" #if [ -d "$TARGET_DIR" ] #then #echo # echo "$TARGET_DIR directory exists!" #else #echo # echo "$TARGET_DIR directory does not exist!" #fi # mkdir protected_$(date +%F) for i in *pdf ; do pdftk "$i" output "./protected_$(date +%F)/$i" user_pw [PASSWORD]; done echo "Complete. Output is in the directory: ./protected_$(date +%F)"
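
    On the commented-out block, a hedged note: mkdir fails when the directory already exists, and mkdir -p makes that a non-issue, which also removes the need for the existence test. A trimmed sketch of the same script:

        #!/bin/bash
        # Password-protect every PDF in the current directory with pdftk.
        target="protected_$(date +%F)"
        mkdir -p "$target"                      # no error if it already exists

        for f in *.pdf; do
            pdftk "$f" output "$target/$f" user_pw "PASSWORD"
        done

        echo "Complete. Output is in: $target"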
