Search Results

Search found 978 results on 40 pages for 'feeds'.


  • How to add a class to selected <li> elements in a generated menu

    - by Vikram
    Hi I am looking for a solution to add a class to every list item <li> which has a child item with a class of <span class="separator"> and a different class to <li> with an anchor link. I use Joomla and the menu is being generated somewhat like this: <ul class="menu"> <li class="item1"><a href="<!-- link goes here -->"><span>Home</span></a></li> <li class="parent item59"><span class="separator"><span>Demo</span></span></li> <li class="item62"><a href="<!-- link goes here -->"><span>Article</span></a></li> <li id="current" class="parent active item27"><a href="<!-- link goes here -->"><span>CMS</span></a> <ul> <li class="item50"><a href="<!-- link goes here -->"><span>The News</span></a></li> <li class="item48"><a href="<!-- link goes here -->"><span>Web Links</span></a></li> <li class="item65"><span class="separator"><span /></span></li> <li class="item49"><a href="<!-- link goes here -->"><span>News Feeds</span></a></li> <li class="item66"><span class="separator"><span /></span></li> <li class="item67"><span class="separator"><span /></span></li> <li class="item68"><span class="separator"><span /></span></li> </ul> </li> <li class="item71"><span class="separator"><span>Help</span></span></li> </ul> What I want is to add class "anclink" or "seplink" to the <li> depending on their child item so that the final output looks like below. <ul class="menu"> <li class="item1 anclink"><a href="<!-- link goes here -->"><span>Home</span></a></li> <li class="parent item59 seplink"><span class="separator"><span>Demo</span></span></li> <li class="item62 anclink"><a href="<!-- link goes here -->"><span>Article</span></a></li> <li id="current" class="parent active item27" anclink><a href="<!-- link goes here -->"><span>CMS</span></a> <ul> <li class="item50 anclink"><a href="<!-- link goes here -->"><span>The News</span></a></li> <li class="item48 anclink"><a href="<!-- link goes here -->"><span>Web Links</span></a></li> <li class="item65 seplink"><span class="separator"><span /></span></li> <li class="item49 anclink"><a href="<!-- link goes here -->"><span>News Feeds</span></a></li> <li class="item66 seplink"><span class="separator"><span /></span></li> <li class="item67 seplink"><span class="separator"><span /></span></li> <li class="item68 seplink"><span class="separator"><span /></span></li> </ul> </li> <li class="item71 seplink"><span class="separator"><span>Help</span></span></li> </ul> How can I achieve this using PHP or even a jQuery solution will be fine. Kindly help.
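
    A hedged sketch of the selection logic, written in Python with BeautifulSoup purely to illustrate the rule (in jQuery the same test would be a selector on a direct child span.separator, and a PHP DOM version follows the same shape); the function name is an assumption:

        import bs4  # third-party: pip install beautifulsoup4

        def tag_menu_items(html):
            soup = bs4.BeautifulSoup(html, "html.parser")
            for li in soup.find_all("li"):
                # Look only at direct children so nested <ul> items are judged on their own.
                if li.find("span", class_="separator", recursive=False) is not None:
                    li["class"] = li.get("class", []) + ["seplink"]
                elif li.find("a", recursive=False) is not None:
                    li["class"] = li.get("class", []) + ["anclink"]
            return str(soup)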

    Read the article

  • linux locale unset

    - by naugtur
    I have an ARM-based machine with an Ubuntu distro on it, and it often feeds me this while running various commands: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "pl_PL.UTF-8" This is the output of the locale command: locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory LANG=pl_PL.UTF-8 LC_CTYPE="pl_PL.UTF-8" LC_NUMERIC="pl_PL.UTF-8" LC_TIME="pl_PL.UTF-8" LC_COLLATE="pl_PL.UTF-8" LC_MONETARY="pl_PL.UTF-8" LC_MESSAGES="pl_PL.UTF-8" LC_PAPER="pl_PL.UTF-8" LC_NAME="pl_PL.UTF-8" LC_ADDRESS="pl_PL.UTF-8" LC_TELEPHONE="pl_PL.UTF-8" LC_MEASUREMENT="pl_PL.UTF-8" LC_IDENTIFICATION="pl_PL.UTF-8" LC_ALL= What should I do to stop this from popping up now and then, and to configure it properly for ąęśćżńół [characters that are important to me]?

    Read the article

  • How to create "recurData" in Google Calendar in C#.NET?

    - by Pari
    Hi, I want to create recurring Calendar events using the Google API. I am following this link: Google Calendar API. I don't understand how to create the "recurData"; I can't just modify a string and pass it as the parameter. I also tried DDay.iCal version 0.80 (DDay.iCal). There is some example code given and I tried it; I am able to create an ".ics" file, but when I pass the file content as "recurData" I get this error: {"Execution of request failed: http://www.google.com/calendar/feeds/[email protected]/private/full?gsessionid=AHItK5wrSIoJVawFjGt-0g"} My .ics file content is: BEGIN:VCALENDAR VERSION:2.0 PRODID:-//DDay.iCal//NONSGML ddaysoftware.com//EN BEGIN:VEVENT CREATED:20100309T132930Z DESCRIPTION:The event description DTEND:20100310T020000 DTSTAMP:20100309T132930Z DTSTART:20100309T080000 LOCATION:Event location SEQUENCE:0 SUMMARY:18 hour event summary UID:396c6b22-277f-4496-bbe1-d3692dc1b223 END:VEVENT BEGIN:VEVENT CREATED:20100309T132930Z DTEND;VALUE=DATE:20100315 DTSTAMP:20100309T132930Z DTSTART;VALUE=DATE:20100314 SEQUENCE:0 SUMMARY:All-day event UID:ac25cdaf-4e95-49ad-a770-f04f3afc1a2f END:VEVENT END:VCALENDAR I made it using "Example6".
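
    For what it is worth, the gd:recurrence value in the Calendar API is expected to hold just the iCal recurrence properties (DTSTART, DTEND, RRULE and friends, per RFC 2445), not a complete VCALENDAR/VEVENT document, so passing the whole .ics file is a likely cause of the 400 above. The recurrence data would look roughly like this (the dates and rule below are placeholders):

        DTSTART;VALUE=DATE:20100314
        DTEND;VALUE=DATE:20100315
        RRULE:FREQ=WEEKLY;BYDAY=SU;UNTIL=20100606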

    Read the article

  • How to upload a video to YouTube with Ruby

    - by viatropos
    I am trying to upload a youtube video using the GData gem (I have seen the youtube_g gem but would like to make it work with pure GData if possible), but I keep getting this error: GData::Client::BadRequestError in 'MyProject::Google::YouTube should upload the actual video to youtube (once it does, mock this test out)' request error 400: No file found in upload request. I am using this code: def metadata data = <<-EOF <?xml version="1.0"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007"> <media:group> <media:title type="plain">Bad Wedding Toast</media:title> <media:description type="plain"> I gave a bad toast at my friend's wedding. </media:description> <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category> <media:keywords>toast, wedding</media:keywords> </media:group> </entry> EOF end @yt = GData::Client::YouTube.new @yt.clientlogin("name", "pass") @yt.developer_key = "myKey" url = "http://uploads.gdata.youtube.com/feeds/api/users/name/uploads" mime_type = "multipart/related" file_path = "sample_upload.mp4" @yt.post_file(url, file_path, mime_type, metadata) What is the recommended/standard way for uploading videos to youtube with ruby, what is your method? Update After applying the changes to wrapped_entry, the string it produces looks like this: --END_OF_PART_59003 Content-Type: application/atom+xml; charset=UTF-8 <?xml version="1.0"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007"> <media:group> <media:title type="plain">Bad Wedding Toast</media:title> <media:description type="plain"> I gave a bad toast at my friend's wedding. </media:description> <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category> <media:keywords>toast, wedding</media:keywords> </media:group> </entry> --END_OF_PART_59003 Content-Type: multipart/related Content-Transfer-Encoding: binary ... 
and inspecting the request and response looks like this: Request: <GData::HTTP::Request:0x1b8bb44 @method=:post @url="http://uploads.gdata.youtube.com/feeds/api/users/lancejpollard/uploads" @body=#<GData::HTTP::MimeBody:0x1b8c738 @parts=[#<GData::HTTP::MimeBodyString:0x1b8c058 @bytes_read=0 @string="--END_OF_PART_30909\r\nContent-Type: application/atom+xml; charset=UTF-8\r\n\r\n <?xml version=\"1.0\"?>\n<entry xmlns=\"http://www.w3.org/2005/Atom\"\n xmlns:media=\"http://search.yahoo.com/mrss/\"\n xmlns:yt=\"http://gdata.youtube.com/schemas/2007\">\n <media:group>\n <media:title type=\"plain\">Bad Wedding Toast</media:title>\n <media:description type=\"plain\">\n I gave a bad toast at my friend's wedding.\n </media:description>\n <media:category scheme=\"http://gdata.youtube.com/schemas/2007/categories.cat\">People</media:category>\n <media:keywords>toast wedding</media:keywords>\n </media:group>\n</entry> \n\r\n--END_OF_PART_30909\r\nContent-Type: multipart/related\r\nContent-Transfer-Encoding: binary\r\n\r\n"> #<File:/Users/Lance/Documents/Development/git/thing/spec/fixtures/sample_upload.mp4> #<GData::HTTP::MimeBodyString:0x1b8c044 @bytes_read=0 @string="\r\n--END_OF_PART_30909--"] @current_part=0 @boundary="END_OF_PART_30909" @headers={"Slug"="sample_upload.mp4" "User-Agent"="GoogleDataRubyUtil-AnonymousApp" "GData-Version"="2" "X-GData-Key"="key=AI39si7jkhs_ECjF4unOQz8gpWGSKXgq0KJpm8wywkvBSw4s8oJd5p5vkpvURHBNh-hiYJtoKwQqSfot7KoCkeCE32rNcZqMxA" "Content-Type"="multipart/related; boundary=\"END_OF_PART_30909\"" "MIME-Version"="1.0"} Response: #<GData::HTTP::Response:0x1b897e0 @body="No file found in upload request." @headers={"cache-control"=>"no-cache no-store must-revalidate" "connection"=>"close" "expires"=>"Fri 01 Jan 1990 00:00:00 GMT" "content-type"=>"text/plain; charset=utf-8" "date"=>"Fri 11 Dec 2009 02:10:25 GMT" "server"=>"Upload Server Built on Nov 30 2009 13:21:18 (1259616078)" "x-xss-protection"=>"0" "content-length"=>"32" "pragma"=>"no-cache"} @status_code=400> Still not working, I'll have to check it out more with those changes.

    Read the article

  • Parsing a UTF-16 encoded XML file in Ruby with REXML

    - by Matthew Toohey
    Hello, I'm trying to parse the following UTF-16 encoded xml file in REXML: http://www.abc.net.au/triplej/feeds/playout/triplejsydneyplayout.xml?_523525 REXML encounters an error after the following: >> require 'rexml/document' => true >> include REXML => Object >> require 'net/http' => true >> triplejString = Net::HTTP.get('www.abc.net.au', '/triplej/feeds/playout/triplejsydneyplayout.xml?_523525') => "\377\376<\000?\000x\000m\000l\000 \000v\000e\000r\000s\000i\000o\000n\000=\000\"\0001\000.\0000\000\"\000 \000e\000n\000c\000o\000d\000i\000n\000g\000=\000\"\000u\000t\000f\000-\0001\0006\000\"\000?\000>\000<\000a\000b\000c\000m\000u\000s\000i\000c\000_\000p\000l\000a\000y\000o\000u\000t\000>\000<\000c\000h\000a\000n\000n\000e\000l\000>\000J\000J\000J\000<\000/\000c\000h\000a\000n\000n\000e\000l\000>\000<\000p\000u\000b\000l\000i\000s\000h\000t\000i\000m\000e\000>\000F\000r\000i\000,\000 \0003\0000\000 \000A\000p\000r\000 \0002\0000\0001\0000\000 \0001\0001\000:\0005\0007\000:\0001\0007\000 \000G\000M\000T\000<\000/\000p\000u\000b\000l\000i\000s\000h\000t\000i\000m\000e\000>\000<\000i\000t\000e\000m\000s\000>\000<\000i\000t\000e\000m\000>\000<\000p\000l\000a\000y\000i\000n\000g\000>\000n\000o\000w\000<\000/\000p\000l\000a\000y\000i\000n\000g\000>\000<\000t\000i\000t\000l\000e\000>\000D\000o\000c\000t\000o\000r\000,\000 \000D\000o\000c\000t\000o\000r\000<\000/\000t\000i\000t\000l\000e\000>\000<\000t\000r\000a\000c\000k\000i\000d\000>\000<\000/\000t\000r\000a\000c\000k\000i\000d\000>\000<\000p\000l\000a\000y\000e\000d\000t\000i\000m\000e\000>\000F\000r\000i\000,\000 \0003\0000\000 \000A\000p\000r\000 \0002\0000\0001\0000\000 \0001\0001\000:\0005\0007\000:\0001\0007\000 \000G\000M\000T\000<\000/\000p\000l\000a\000y\000e\000d\000t\000i\000m\000e\000>\000<\000p\000u\000b\000l\000i\000s\000h\000e\000r\000>\000<\000/\000p\000u\000b\000l\000i\000s\000h\000e\000r\000>\000<\000d\000a\000t\000e\000c\000o\000p\000y\000r\000i\000g\000h\000t\000e\000d\000>\0002\0000\0000\0003\000<\000/\000d\000a\000t\000e\000c\000o\000p\000y\000r\000i\000g\000h\000t\000e\000d\000>\000<\000d\000u\000r\000a\000t\000i\000o\000n\000>\0001\0006\0003\000<\000/\000d\000u\000r\000a\000t\000i\000o\000n\000>\000<\000a\000u\000s\000t\000>\000N\000o\000<\000/\000a\000u\000s\000t\000>\000<\000t\000r\000a\000c\000k\000n\000o\000t\000e\000>\000<\000/\000t\000r\000a\000c\000k\000n\000o\000t\000e\000>\000<\000t\000r\000a\000c\000k\000l\000i\000n\000k\000>\000<\000/\000t\000r\000a\000c\000k\000l\000i\000n\000k\000>\000<\000s\000h\000o\000w\000>\000<\000/\000s\000h\000o\000w\000>\000<\000t\000a\000l\000e\000n\000t\000>\000<\000/\000t\000a\000l\000e\000n\000t\000>\000<\000a\000l\000b\000u\000m\000>\000<\000a\000l\000b\000u\000m\000n\000a\000m\000e\000>\000D\000r\000i\000v\000i\000n\000g\000 \000F\000o\000r\000 \000T\000h\000e\000 \000S\000t\000o\000r\000m\000/\000D\000o\000c\000t\000o\000r\000 \000D\000o\000c\000t\000o\000r\000<\000/\000a\000l\000b\000u\000m\000n\000a\000m\000e\000>\000<\000a\000l\000b\000u\000m\000i\000d\000>\0008\0003\000-\0004\0002\0002\0006\0009\000<\000/\000a\000l\000b\000u\000m\000i\000d\000>\000<\000a\000l\000b\000u\000m\000i\000m\000a\000g\000e\000>\000h\000t\000t\000p\000:\000/\000/\000w\000w\000w\000.\000a\000b\000c\000.\000n\000e\000t\000.\000a\000u\000/\000t\000r\000i\000p\000l\000e\000j\000/\000c\000o\000v\000e\000r\000s\000/\000G\000y\000r\000o\000s\000c\000o\000p\000e\000 \000-\000 \000D\000r\000i\000v\000i\000n\000g\000 \000F\000o\000r\000 \000T\000h\000e\000 
\000S\000t\000o\000r\000m\000/\000D\000o\000c\000t\000o\000r\000 \000D\000o\000c\000t\000o\000r\000 \000(\0002\0000\0000\0003\000)\000.\000j\000p\000g\000<\000/\000a\000l\000b\000u\000m\000i\000m\000a\000g\000e\000>\000<\000/\000a\000l\000b\000u\000m\000>\000<\000a\000r\000t\000i\000s\000t\000>\000<\000a\000r\000t\000i\000s\000t\000n\000a\000m\000e\000>\000G\000y\000r\000o\000s\000c\000o\000p\000e\000<\000/\000a\000r\000t\000i\000s\000t\000n\000a\000m\000e\000>\000<\000a\000r\000t\000i\000s\000t\000i\000d\000>\000<\000/\000a\000r\000t\000i\000s\000t\000i\000d\000>\000<\000a\000r\000t\000i\000s\000t\000n\000o\000t\000e\000>\000<\000/\000a\000r\000t\000i\000s\000t\000n\000o\000t\000e\000>\000<\000a\000r\000t\000i\000s\000t\000l\000i\000n\000k\000>\000<\000/\000a\000r\000t\000i\000s\000t\000l\000i\000n\000k\000>\000<\000/\000a\000r\000t\000i\000s\000t\000>\000<\000/\000i\000t\000e\000m\000>\000<\000/\000i\000t\000e\000m\000s\000>\000<\000/\000a\000b\000c\000m\000u\000s\000i\000c\000_\000p\000l\000a\000y\000o\000u\000t\000>\000" >> xmlDoc = REXML::Document.new(triplejString) REXML::ParseException: #<REXML::ParseException: malformed XML: missing tag start Line: Position: Last 80 unconsumed characters: <?xml version="1.0" encoding="utf-16"?><a> /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/parsers/baseparser.rb:356:in `pull' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/parsers/treeparser.rb:22:in `parse' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/document.rb:227:in `build' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/document.rb:43:in `initialize' (irb):19:in `new' (irb):19:in `irb_binding' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/irb/workspace.rb:52:in `irb_binding' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/irb/workspace.rb:52 ... malformed XML: missing tag start Line: Position: Last 80 unconsumed characters: <?xml version="1.0" encoding="utf-16"?><a Line: Position: Last 80 unconsumed characters: <?xml version="1.0" encoding="utf-16"?><a from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/parsers/treeparser.rb:92:in `parse' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/document.rb:227:in `build' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/document.rb:43:in `initialize' from (irb):19:in `new' from (irb):19 Any ideas?
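
    The usual fix is to transcode the UTF-16 body to UTF-8 (Iconv on Ruby 1.8, String#encode on 1.9+) and adjust the XML declaration to match before handing it to the parser. The same idea sketched in Python, using the feed URL from the question without its cache-busting query string:

        import urllib.request
        import xml.etree.ElementTree as ET

        url = "http://www.abc.net.au/triplej/feeds/playout/triplejsydneyplayout.xml"
        raw = urllib.request.urlopen(url).read()    # UTF-16LE bytes with a BOM

        text = raw.decode("utf-16")                 # the codec consumes the BOM
        utf8 = text.replace('encoding="utf-16"', 'encoding="utf-8"').encode("utf-8")

        root = ET.fromstring(utf8)
        print(root.findtext("channel"))             # e.g. "JJJ"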

    Read the article

  • Comparing datafeeds from different networks (Affiliate Marketing)

    - by Logistetica
    Hi, I am working on integrating affiliate sales into a few existing sites. We are using a few merchants who work via different networks (CJ, ShareASale, LinkShare, AvantLink). My observation is that all these networks provide data feeds in different formats, but that's not a big problem. My main concern is merchants using different titles for the same products. I don't want to run into these situations: a) two listings of the SAME product from N merchants (if the titles are just a bit different); b) one listing for N different products (if we don't use a strict comparison algorithm). We want to automate everything as much as possible and avoid operators having to review questionable listings all the time. How is this problem typically handled?
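
    In practice this is usually handled by matching on a shared identifier first (UPC/EAN/MPN, when a feed carries one) and falling back to fuzzy title matching, with a manual-review queue for the grey zone so operators only see the ambiguous cases. A minimal sketch of the fuzzy part in Python; the thresholds are assumptions to tune against real data:

        import re
        from difflib import SequenceMatcher

        def normalize(title):
            title = re.sub(r"[^a-z0-9 ]+", " ", title.lower())   # drop punctuation and case
            return " ".join(sorted(title.split()))               # ignore word order

        def similarity(a, b):
            return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

        def classify(title_a, title_b, hi=0.90, lo=0.70):
            score = similarity(title_a, title_b)
            if score >= hi:
                return "same product"        # merge automatically
            if score <= lo:
                return "different products"  # list separately
            return "needs manual review"     # queue for an operator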

    Read the article

  • NSXMLParser rss issue NSXMLParserInvalidCharacterError

    - by Chris Van Buskirk
    NSXMLParserInvalidCharacterError #9: this is the error I get when I hit a weird character (like quotes copied and pasted from Word into the web form that end up in the feed). The feed I am using does not declare an encoding, and there is no hope of getting them to change that. This is all I get in the header: <?xml version="1.0"?> <rss version="2.0"> What can I do about illegal characters when parsing feeds? Do I sweep the data prior to parsing? Is there something I am missing in the API? Has anyone dealt with this issue?
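
    A likely cause: with no encoding declared, the parser assumes UTF-8 and trips over Windows-1252 punctuation pasted in from Word, so sweeping the bytes before the parse is the usual workaround (in Cocoa that would mean rebuilding the data through NSString with a fallback such as NSWindowsCP1252StringEncoding). The idea, sketched in Python:

        def to_clean_utf8(raw_bytes):
            """Decode the feed, falling back to Windows-1252 for the smart quotes
            and dashes that Word tends to paste in, then re-encode as UTF-8."""
            for encoding in ("utf-8", "cp1252", "latin-1"):
                try:
                    return raw_bytes.decode(encoding).encode("utf-8")
                except UnicodeDecodeError:
                    continue
            # Last resort: drop the undecodable bytes entirely.
            return raw_bytes.decode("utf-8", errors="ignore").encode("utf-8")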

    Read the article

  • How to produce a merged RSS feed (from DokuWiki and Serendipity)

    - by symcbean
    Hi, I've got an application developed on top of DokuWiki. I'd like to provide a 'News' page showing the latest updates from the internal RSS feed, from some other feeds maintained in Serendipity, and potentially from other locations. Although it's trivial to attach a feed parser to each one individually, I'd like to aggregate this into a single list (possibly a single RSS feed). Neither the DokuWiki nor the Serendipity server is connected to the internet, so I can't use an external service for this; I'm looking for code. Anybody got any ideas? TIA C.
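
    Since both servers are reachable locally, the merge can be done in code: fetch each feed, pool the entries, sort by publication date and re-emit the top of the list (in PHP, SimplePie can be pointed at an array of feed URLs and does this merge itself). A rough sketch of the approach in Python with feedparser; the two feed URLs are placeholders:

        import time
        import feedparser    # third-party: pip install feedparser

        FEEDS = [
            "http://wiki.example.local/feed.php",                      # DokuWiki changes feed (assumed URL)
            "http://blog.example.local/index.php?/feeds/index.rss2",   # Serendipity feed (assumed URL)
        ]

        def merged_entries(urls, limit=20):
            entries = []
            for url in urls:
                entries.extend(feedparser.parse(url).entries)
            # Newest first; entries without a date sort last.
            entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
                         reverse=True)
            return entries[:limit]

        for e in merged_entries(FEEDS):
            print(e.get("published", ""), e.get("title", ""), e.get("link", ""))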

    Read the article

  • Downloading RSS using Python

    - by Vojtech R.
    Hi, I have a list of 200 RSS feeds which I have to download. It's a continuous process: I have to download every post, nothing can be missing, but there must also be no duplicates. So is the best practice to remember the last update of each feed and check it for changes at some x-hour interval? And how do I handle the downloader being restarted? It should remember what has already been downloaded and not download it again... Is this implemented somewhere already? Or any tips or articles? Thanks
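
    A common pattern is a conditional GET per feed (ETag/Last-Modified) plus a persisted set of already-seen entry GUIDs, so nothing is fetched twice and a restart picks up where it left off. A minimal sketch with feedparser; the state-file layout is an assumption:

        import json, os
        import feedparser                      # third-party: pip install feedparser

        STATE_FILE = "feed_state.json"         # survives restarts

        def load_state():
            if os.path.exists(STATE_FILE):
                with open(STATE_FILE) as fh:
                    return json.load(fh)
            return {"etag": {}, "modified": {}, "seen": []}

        def poll(url, state):
            d = feedparser.parse(url, etag=state["etag"].get(url),
                                 modified=state["modified"].get(url))
            if getattr(d, "status", None) == 304:      # unchanged since last poll
                return []
            state["etag"][url] = getattr(d, "etag", None)
            state["modified"][url] = getattr(d, "modified", None)
            seen = set(state["seen"])
            fresh = [e for e in d.entries
                     if (e.get("id") or e.get("link")) not in seen]
            seen.update(e.get("id") or e.get("link") for e in d.entries)
            state["seen"] = list(seen)
            return fresh

        state = load_state()
        for entry in poll("http://example.com/rss.xml", state):
            print(entry.get("title"))
        with open(STATE_FILE, "w") as fh:
            json.dump(state, fh)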

    Read the article

  • Designing an API wrapper for Twitter, Facebook, YouTube, etc.

    - by John Stewart
    I am looking for some pointers on how to design a wrapper for these social networking sites. Ideally I want to create a black box that exposes an interface other libraries can call to interact with these sites. I am planning on using OAuth for most of them, and I already have that layer designed in PHP. The other layer I need is the ability to push and pull content. For example, I can pull feeds for users from each of these networks, but should I then cache them on my end? How would I cache all the Twitter, Facebook, etc. activity feeds and be able to account for resyncing? The networks I am looking at are: Twitter, YouTube, Facebook, LinkedIn, Vimeo, Flickr. I am looking for ideas on how to tackle this in PHP. Any suggestions, or open-source systems that I can learn from?
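
    The structure is language-agnostic even if the implementation ends up in PHP: one adapter per network behind a shared interface, with a cache in front that stores each user's activity feed and refreshes it after a TTL, so a resync is just letting the cache entry expire. A rough sketch of that shape in Python; the method names and TTL are assumptions:

        import time
        from abc import ABC, abstractmethod

        class SocialNetwork(ABC):
            """One adapter per network (Twitter, Facebook, ...); callers only see this interface."""
            @abstractmethod
            def fetch_feed(self, user_id): ...
            @abstractmethod
            def post(self, user_id, content): ...

        class FeedCache:
            """Caches feeds per (network, user) and refetches after a TTL."""
            def __init__(self, ttl_seconds=300):
                self.ttl = ttl_seconds
                self._store = {}                    # (network name, user) -> (timestamp, entries)

            def get(self, network, user_id):
                key = (type(network).__name__, user_id)
                stamp, entries = self._store.get(key, (0, None))
                if entries is None or time.time() - stamp > self.ttl:
                    entries = network.fetch_feed(user_id)   # cache miss or stale: hit the API
                    self._store[key] = (time.time(), entries)
                return entries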

    Read the article

  • Correct syntax for PHP inside a feed request

    - by Simon Hume
    Hi guys, I have a very basic query string which passes an ID to a receiving page. On that page, I need to dynamically call the YouTube API, giving it my playlist ID. I'm having to use PHP for this, and it's a little out of my comfort zone, so hopefully someone can wade in with a quick fix for me. Here is my variable: $playlist; And I need to replace the 77DC230FBBCE4D58 below with that variable: $feedURL = 'http://gdata.youtube.com/feeds/api/playlists/77DC230FBBCE4D58?v=2'; Any help, as always, greatly appreciated!

    Read the article

  • Accessing the feed/entry/id field of an ATOM 1.0 feed with the ROME library

    - by PartlyCloudy
    Hi, I feel a bit stupid asking this question, but I don't know how I can access the ID field of an entry when using ROME to parse an Atom feed. ROME provides its own meta level of feeds/items, i.e. SyndFeed and SyndEntry. Being an abstraction over RSS and Atom, they only contain elements that both formats support. Thus, there is no method to get the ID of an entry. There are also low-level packages for the distinct formats, and the Atom package contains com.sun.syndication.feed.atom.Entry, which provides getId(). However, I don't know how I can convert my SyndEntry into an Entry; I have not found a way to do it. The (outdated) tutorials show a conversion, but only for output. So how can I easily access the ID field? Thanks in advance.

    Read the article

  • Integrating a blog into an ASP.NET website

    - by ScottK
    I have a website that I would like to integrate a blog into. I have seen lots of options available and am not sure which one to jump into. What I want to do is have the most recent post on my home page and have users navigate to www.mysite.com/blog to see all posts. I would also like to have a sidebar on the homepage with links to the 10 most recent posts. Where should I start? Should I use WordPress or an ASP.NET engine? Should I use RSS feeds to get the information onto the homepage?

    Read the article

  • Parsing XML with dom4j or JDOM (or anything else)

    - by c0mrade
    Hello, I want to read feed entries and I'm stuck now. Take this for example: http://stackoverflow.com/feeds/question/2084883 Let's say I want to read the value of every summary node inside each entry node in the document. How do I do that? I've tried many variations of code; this one is closest to what I want to achieve, I think: Element entryPoint = document.getRootElement(); Element elem; for(Iterator iter = entryPoint.elements().iterator(); iter.hasNext();){ elem = (Element)iter.next(); System.out.println(elem.getName()); } It goes through all the nodes in the XML file and writes their names. What I wanted to do next is if(elem.getName() == "entry") to get only the entry nodes. How do I get the child elements of an entry node, and how do I get, let's say, summary and its value? Thanks

    Read the article

  • [Gdata] GetAuthSubToken returns None

    - by Matt
    Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app: client = gdata.service.GDataService() gdata.alt.appengine.run_on_appengine(client) sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri) client.UpgradeToSessionToken(sessionToken) logging.info(client.GetAuthSubToken()) What gets logged is "None", which doesn't seem right :-( If I use this: temp = client.upgrade_to_session_token(sessionToken) logging.info(dump(temp)) I get this: {'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'} So I can see that I am getting an AuthSub token, and I guess I could just parse that and grab the token, but that doesn't seem like the way things should work. If I try to use AuthSubTokenInfo I get this: Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__ handler.get(*groups) File "controllers/indexController.py", line 47, in get logging.info(client.AuthSubTokenInfo()) File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo token = self.token_store.find_token(scopes[0]) TypeError: 'NoneType' object is unsubscriptable So it looks like my token_store is not getting filled in correctly; is that something I should be doing myself? Also, I am using gdata 2.0.9. Thanks, Matt
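
    For reference, the usual AuthSub order with the gdata Python service object is to set the one-time token on the client first and then upgrade it in place, rather than passing it straight to UpgradeToSessionToken; whether this matches gdata 2.0.9 exactly is an assumption, so treat it as a sketch:

        import gdata.auth
        import gdata.service
        import gdata.alt.appengine

        def store_session_token(client, current_url):
            """current_url is self.request.uri inside the webapp handler's get()."""
            one_time_token = gdata.auth.extract_auth_sub_token_from_url(current_url)
            if one_time_token:
                client.SetAuthSubToken(one_time_token)   # make it the service's current token
                client.UpgradeToSessionToken()           # swap it for a session token in the token store
            return client.GetAuthSubToken()              # should no longer be None

        client = gdata.service.GDataService()
        gdata.alt.appengine.run_on_appengine(client)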

    Read the article

  • Retrieve Google Calendar Events

    - by Don
    Hi, I'm using the Java API for Google Calendar. The documentation shows the following example of how to retrieve events from a calendar: URL feedUrl = new URL("http://www.google.com/calendar/feeds/[email protected]/private/full"); CalendarService myService = new CalendarService("exampleCo-exampleApp-1"); myService.setUserCredentials("[email protected]", "mypassword"); // Send the request and receive the response: CalendarEventFeed myFeed = myService.getFeed(feedUrl, CalendarEventFeed.class); This will retrieve all events from the primary calendar of the [email protected] account. However, I need to retrieve events from a secondary calendar. I already have a reference to the CalendarEntry object that represents the secondary calendar, but I still can't figure out how to get events from it. I suspect I can do this using the same code as above and just need to change the URL to something else. Thanks, Donal

    Read the article

  • Simple non-network concurrency with Twisted

    - by Rince
    Dear Pythoners, I have a problem with using Twisted for simple concurrency in Python. The problem is that I don't know how to do it, and all the online resources are about Twisted's networking abilities, so I am turning to SO gurus for some guidance. Python 2.5 is used. A simplified version of my problem runs as follows: (1) a bunch of scientific data; (2) a function that munches on the data and creates output; (3) ??? <- here enters concurrency: it takes chunks of data from 1 and feeds them to 2; (4) the output from 3 is joined and stored. My guess is that the Twisted reactor can do the number three job. But how? Thanks a lot for any help and suggestions.
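
    If the goal is just to fan chunks out to a worker function and join the results, the reactor's thread pool via deferToThread plus a DeferredList covers steps 3 and 4 (for purely CPU-bound crunching in CPython, threads will not run in parallel because of the GIL, so multiprocessing may serve better). A minimal sketch; the chunking and the munch function are placeholders:

        from twisted.internet import reactor
        from twisted.internet.defer import DeferredList
        from twisted.internet.threads import deferToThread

        def munch(chunk):
            return sum(chunk)                    # stand-in for the real computation

        def store(results):
            # results is a list of (success, value) pairs from the DeferredList
            output = [value for ok, value in results if ok]
            print("joined output:", output)
            reactor.stop()

        chunks = [range(0, 100), range(100, 200), range(200, 300)]
        deferreds = [deferToThread(munch, chunk) for chunk in chunks]   # run in the thread pool
        DeferredList(deferreds).addCallback(store)
        reactor.run()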

    Read the article

  • Looping through feed entries with ROME

    - by Gandalf StormCrow
    I'm trying to loop through Atom feed entries and get, let's say, the title attribute. I found this article and tried this snippet of code: for (final Iterator iter = feeds.getEntries.iterator(); iter.hasNext(); ) { element = (Element)iter.next(); key = element.getAttributeValue("href"); if ((key != null) && (key.length() > 0)) { marks.put(key, key); } } But I get an exception saying: java.lang.ClassCastException: com.sun.syndication.feed.synd.SyndEntryImpl cannot be cast to org.jdom.Element at com.emir.altantbh.FeedReader.main(FeedReader.java:47) What did I do wrong? Can anyone direct me towards a better tutorial or show me where I made the mistake? I need to loop through the entries and extract the title tag value. Thank you

    Read the article

  • How to limit a Google Calendar XML/RSS feed by date range (not working)

    - by Phil
    For the life of me I cannot get my Google Calendar XML feed to only display events within a certain date range. I know that start-min and start-max are supposed to limit the output (according to these posts: links to posts deleted because I am a newbie and can only post one hyperlink, argh) BUT I CAN'T GET IT TO WORK. It keeps showing lots of things outside the range. I created a sample calendar and made it public; it has some events in the first week of April. Can anyone show me how to construct a request that only returns those three events from the first week of April? I'll GLADLY and GRATEFULLY PayPal $10 to anyone who helps me break through on this. Here is the calendar's public feed: http://www.google.com/calendar/feeds/66m31c36sj9u5k8kekrvt2lpr8%40group.calendar.google.com/public/basic
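
    The request that usually does this combines start-min/start-max (RFC 3339 timestamps; start-min is inclusive, start-max exclusive) with singleevents=true, so recurring events are expanded into instances that the range can actually filter; without it the parent recurring events tend to leak through. A sketch of building such a URL against the sample calendar (the full projection and the exact dates are assumptions):

        from urllib.parse import urlencode

        CAL_ID = "66m31c36sj9u5k8kekrvt2lpr8%40group.calendar.google.com"
        base = "http://www.google.com/calendar/feeds/%s/public/full" % CAL_ID

        params = {
            "start-min": "2010-04-01T00:00:00",   # inclusive
            "start-max": "2010-04-08T00:00:00",   # exclusive
            "singleevents": "true",               # expand recurrences into datable instances
            "orderby": "starttime",
            "sortorder": "ascending",
        }
        print(base + "?" + urlencode(params))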

    Read the article

  • Rails 3 time output

    - by Oluf Nielsen
    Hi, I'm now working on the output of the feeds I'm taking in from a site. What I'm currently handling is the time, and I want it displayed in a slightly special way, like this: today, 14:12 yesterday, 15:34 27/12, 15:24 I have this in my code: = news.entry_published.strftime("%d/%m, %H:%M") That gives me an error saying: undefined method `strftime' for "2010-12-30 19:26:00.000000":String And it doesn't do what I want with the days. Edit: - @date = DateTime.strptime(news.entry_published, "%Y-%m-%d %H:%M:%S") = @date.strftime("%d/%m, %H:%M") now works and gives this output: 30/12, 19:26 But I still have to check whether it is today, yesterday or just another day. Cheers, Oluf.

    Read the article

  • RDF Usage Rates for Syndication

    - by David in Dakota
    Is RDF still widely used for content syndication? Specifically, I know only of Slashdot as a large-scale website syndicating content in that format (versus, say, RSS). Understandably this might seem vague to answer, so more specifically: Can anyone list any larger sites, similar in scale to Amazon or CNN, using it? Any web-based publishing platforms (WordPress, Joomla, etc.) that generate syndication feeds with this XML vocabulary? Any other more quantifiable evidence that it is used for syndication online? I understand that RDF may be a parent specification, but in this case I'm talking about sites that syndicate content using <rdf as the root element and heavily leveraging elements from the RDF namespace: http://www.w3.org/1999/02/22-rdf-syntax-ns#

    Read the article

  • Does the GData API work in Android 2.0 SDK and up?

    - by user266361
    I used the GData API to pull in Calendar info. It works fine if I use 1.6, but with the same code, if I change to Android 2.0 and up, it throws an AuthenticationException. Below is my code for your reference: CalendarService myService = new CalendarService("My Application"); myService.setUserCredentials(args[0],args[1]); // Set up the URL and the object that will handle the connection: URL feedUrl = new URL("http://www.google.com/calendar/feeds/"+args[0]+"/private/full"); args[0] and args[1] are the credentials. The AuthenticationException is thrown when calling myService.setUserCredentials(). Does anybody have any clue?

    Read the article

  • How to make a moving news bar in a Windows Forms application without a timer

    - by Ehab Sutan
    I'm making a desktop application in C# which contains moving news bar labels. I'm using a timer to move these labels, but the problem is that when I make the timer interval low (1-10, for example) the application uses a very high percentage of CPU, and when I make it higher (200-500) the movement of the labels becomes intermittent and not smooth, to the point that the user may not be able to read the news comfortably. (More information) It is a Windows Forms application. The way I move the labels is as follows: the news items from RSS feeds are represented as a group of LinkLabels, all of which are added to a FlowLayout container, and the timer moves the whole FlowLayout container. As far as I know this is the best way of making the news bar. If you have a better idea or solution, please help.

    Read the article

  • Make xargs execute the command once for each line of input

    - by Readonly
    How can I make xargs execute the command exactly once for each line of input given? Its default behavior is to chunk the lines and execute the command once per chunk, passing multiple lines to each instance. From http://en.wikipedia.org/wiki/Xargs: find /path -type f -print0 | xargs -0 rm In this example, find feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist. This is more efficient than this functionally equivalent version: find /path -type f -exec rm '{}' \; I know that find has the "exec" flag. I am just quoting an illustrative example from another resource.

    Read the article
