Search Results

Search found 3147 results on 126 pages for 'rss feed'.

Page 19/126

  • Yahoo Pipes: filter items in a feed based on words in a text file

    - by pufferfish
    I have a pipe that filters an RSS feed and removes any item that contains "stopwords" that I've chosen. Currently I've manually created a filter for each stopword in the pipe editor, but the more logical way is to read these from a file. I've figured out how to read the stopwords out of the text file, but how do I apply the filter operator to the feed, once for every stopword? The documentation states explicitly that operators can't be applied within the loop construct, but hopefully I'm missing something here.

    Read the article

  • JSON Feed Appears to be XHR when it should be JS

    - by Oscar Godson
    I don't get why it's doing this with the second feed (it appears as an XHR call rather than just JS, looking at it in Firefox/Firebug). The second feed has the exact same MIME type as Flickr's JSON feed, yet the PortlandOregon.gov one shows up as XHR, and I get a null callback when using $.getJSON; if I use $.ajax with a 'json' or 'jsonp' type I get nothing at all. If I do the Flickr one I get the normal "[object Object]" callback. What's going on? Please help! This has been such a headache for about a week. I have authorization to change the feed, but I have to request the change, so if anyone knows for absolute sure, let me know that!

    Flickr's API ( http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=? ) [JS]:

    Response headers:

        Date: Mon, 15 Mar 2010 21:56:06 GMT
        P3P: policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified: Mon, 15 Mar 2010 21:52:17 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 3647
        Connection: close
        Content-Type: application/x-javascript; charset=utf-8

    Request headers:

        Host: api.flickr.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://oscargodson.com/dev/addWidget/test.html
        Cookie: BX=4lflj455amesp&b=3&s=iv; fltoto=0%2C0%2C0%2C0%2C1%2C0%3B0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%3B1%3B0%3B; search_z=t; localization=en-us%3Bus%3Bus

    PortlandOregon.gov ( http://www.portlandonline.com/shared/cfm/json.cfm?c=27321 ) [XHR]:

    Response headers:

        Connection: close
        Date: Mon, 15 Mar 2010 21:57:49 GMT
        Server: Microsoft-IIS/6.0
        Set-Cookie: CONTACT_ID=0;path=/ LAST_USER=;path=/ BIGipServercgis_pol_web_pool-http=1191537418.20480.0000; path=/
        Content-Type: application/x-javascript; charset=utf-8

    Request headers:

        Host: www.portlandonline.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept: application/json, text/javascript, */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://oscargodson.com/dev/addWidget/test.html
        Origin: http://oscargodson.com
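    A note on the difference visible in those headers: Flickr's endpoint wraps its JSON in the callback function named by the jsoncallback parameter, so jQuery can load it with a <script> tag (hence Firebug lists it as JS and the request's Accept is */*). If the Portland URL has no callback=? in it, $.getJSON issues a plain XHR instead, which the same-origin policy blocks from another domain; and even with dataType 'jsonp', the response presumably isn't wrapped in your callback, so nothing runs. The sketch below, written in PHP purely for illustration (the real feed is ColdFusion, and $feedItems is dummy data), shows what a callback-aware endpoint does:

        <?php
        // Illustrative only: how a JSONP-capable feed endpoint behaves.
        $feedItems = array(
            array('title' => 'Example item', 'link' => 'http://example.com/1'),
        );
        $json = json_encode($feedItems);

        $callback = isset($_GET['callback']) ? $_GET['callback'] : null;
        if ($callback !== null && preg_match('/^[A-Za-z_][A-Za-z0-9_.]*$/', $callback)) {
            // JSONP: wrap the JSON in the requested callback so it can load via a <script> tag.
            header('Content-Type: application/javascript; charset=utf-8');
            echo $callback . '(' . $json . ');';
        } else {
            // Bare JSON: a page on another domain cannot read this via XHR.
            header('Content-Type: application/json; charset=utf-8');
            echo $json;
        }

    If the PortlandOregon.gov feed added support like this, a cross-domain request such as $.getJSON("...json.cfm?c=27321&callback=?") would start behaving the same way the Flickr one does.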

    Read the article

  • Paginating an iTunes podcast feed?

    - by drozzy
    How in the world do I get the next page of results for this feed? I've tried everything! Grrr... When I go to the Security Now feed page, there is no "next" link of any kind, and a "page=100" URL parameter does nothing: http://leoville.tv/podcasts/sn.xml I get only one page of results, of about 20 episodes. However, my Google Reader can successfully retrieve episodes that are earlier than that.

    Read the article

  • Using jQuery to pull a WordPress post feed into a static page

    - by JCHASE11
    Just like using the many Twitter or Facebook widgets out there that pull a feed and display it in a nice widget, I want to create a "widget" that pulls the feed from a WordPress blog I have and displays it on a static, non-WordPress page. Before I try getting my hands dirty with jQuery, do you know if there are any pre-existing code snippets or plugins out there that I can use?
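    For what it's worth, if the "static" page can be served as PHP, one common alternative to a client-side widget is rendering the feed server-side. A minimal sketch with SimpleXML (the blog URL is a placeholder; WordPress exposes the post feed at /feed/):

        <?php
        // Minimal server-side sketch: fetch the WordPress RSS feed and print a simple list.
        // Assumes this page can run PHP and that allow_url_fopen is enabled.
        $feed = simplexml_load_file('http://example.com/blog/feed/');
        if ($feed === false) {
            exit('could not load feed');
        }

        echo "<ul class=\"blog-widget\">\n";
        foreach ($feed->channel->item as $item) {
            // title and link are plain RSS 2.0 elements, so no namespace handling is needed
            printf(
                "  <li><a href=\"%s\">%s</a></li>\n",
                htmlspecialchars((string) $item->link, ENT_QUOTES),
                htmlspecialchars((string) $item->title)
            );
        }
        echo "</ul>\n";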

    Read the article

  • Google Calendar feed API: deleted events

    - by hsmit
    I'm syncing Google Calendar with my application (I store events in a database). When an event is updated, I can easily find the latest updates by sorting the event feed in 'updated' order. However, if an event is removed/deleted, how can I pick up that change from the feed?

    Read the article

  • Tomcat + Publish RSS XML

    - by Panther24
    I've created an XML feed and want to publish it from my Tomcat server. How can I achieve this? NOTE: I have validated the XML feed file at http://webdesign.about.com/od/validators/l/bl_validation.htm#rssvalidator and it was fine.

    Read the article

  • VLC (Server) re-stream Security Camera Feed

    - by Aaron
    I purchased a Swann home security DVR system and was hoping for some help on how to duplicate the streaming video on my server. In order to get their web view (streaming video in the browser) to work, I had to install the following plugins: HiDvrPlugin.dmg for Mac, Hidvrocx.cab for Windows. I was originally thinking this was a sign of some form of DRM? Maybe, maybe not. HTML-wise, the following code is in the source of the Safari version of the web view, and it seems to be the main display area:

        <embed pluginspage="SurveilClient.dmg" width="10px" height="10px" type="application/x-scplugin" id="MacDiv" style="height: 592px; width: 720px; left: 278px; top: 61px; ">

    Using Wireshark, I am able to see that the video stream is on port 9000. However, I have no idea what type of stream it is. I've tried opening it in VLC with no luck:

        http://dvr_ip:9000
        tcp://dvr_ip:9000

    My hope was to do something like the following to redistribute the feed:

        vlc dvr_ip:9000 --sout h264-version-on-localhost:3000

    TL;DR: trying to re-distribute a stream from a security camera (can't tell the format) using VLC (re-distribute via H.264 / HTML5). Not sure how to accomplish this. Is it possible that the software has some type of DRM that only the plugins can decode?

    Read the article

  • Pointcast Alternatives?

    - by kellyllek
    I used to love Pointcast in the early days of the internet; it was a screensaver that, like RSS feeds, brought in all your favorite sources, with news articles and stock quotes along with various other info, and turned them into a visually interesting screensaver. It was before broadband, though, and I heard so many workplaces had to ban it that the company eventually folded. I wondered if there are any modern-day equivalents. I certainly have feed readers and some awesome desktop gadgets that bring in various things like news and weather, and I don't want a whole screensaver like Pointcast. But I wondered if anyone out there had other suggestions worth taking a look at: some kind of feed display of interest, or some other way to turn favorite content into a more animated desktop background, etc.

    Read the article

  • PHP DOM vs SimpleXML for Atom GData feed parsing

    - by Geoff Adams
    I'm building a library to access the Google Analytics Data Export API. All the data the library accesses is in Atom format and utilises numerous different namespaces throughout. My experiments with the API have used SimpleXML for parsing so far, especially as all I have been doing is accessing the data held within the feed. Now that I'm coming to write a library, I am wondering whether forging ahead with SimpleXML will be adequate, or whether the enhanced functionality of the DOM module in PHP would be of benefit in the future. I haven't written much code for this part of the library yet, so the choice is still open. I have read that the PHP DOM module can be a better choice if you need to build an XML DOM on the fly or modify an existing one, but I'm not entirely sure I would need that functionality anyway, due to the nature of the API (no pushing data to the server, for instance). SimpleXML is certainly easier to use, and I have seen people saying that for read-only situations it is all you need. Essentially the question is: what would you use? Compatibility will not be an issue, as the server configuration will match the application's requirements. Is it worth building the library with PHP DOM in mind, or should I stick with SimpleXML for now? Update: here are two examples of the kind of feeds I will be dealing with: Account feed, Data feed.
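    For reference, SimpleXML can reach the namespaced elements of a GData Atom feed through children() with a namespace URI, which covers the read-only case described above. A minimal sketch, assuming the response body has been loaded into a string and that the Analytics elements live in a dxp namespace (that URI and the dimension/metric structure are assumptions, not taken from the question):

        <?php
        // Minimal read-only sketch: walk a GData Atom feed with SimpleXML and
        // pull out both the plain Atom elements and the namespaced ones.
        $atom = 'http://www.w3.org/2005/Atom';
        $dxp  = 'http://schemas.google.com/analytics/2009'; // assumed Analytics namespace URI

        $xml  = file_get_contents('analytics-feed.xml');    // placeholder for the API response body
        $feed = simplexml_load_string($xml);

        foreach ($feed->children($atom)->entry as $entry) {
            echo (string) $entry->children($atom)->title, "\n";

            // Elements such as dxp:dimension / dxp:metric sit in the dxp namespace;
            // their name/value attributes are un-namespaced, so attributes() with no
            // argument reaches them. (Assumed structure; check against a real feed.)
            foreach ($entry->children($dxp) as $name => $element) {
                $attrs = $element->attributes();
                echo '  ', $name, ': ', (string) $attrs['value'], "\n";
            }
        }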

    Read the article

  • Ruby and RSS2 Feed not displaying image

    - by pcasa
    I'm trying to create a simple RSS 2.0 feed that I could later pass on to FeedBurner, but I can't get the feed to display images at all. Also, from what I have read, having xml.instruct! at the top might cause IE to complain that it's not a valid feed. Is this true? My code looks like:

        xml.instruct!
        xml.rss "version" => "2.0", "xmlns:dc" => "http://purl.org/dc/elements/1.1/" do
          xml.channel do
            xml.title "Store"
            xml.link url_for :only_path => false, :controller => 'products'
            xml.description "Store"
            xml.pubDate @products.first.updated_at.rfc822 if @products.any?
            @products.each do |product|
              xml.item do
                xml.title product.name
                xml.pubDate (product.updated_at.rfc822)
                xml.image do
                  xml.url domain_host + product.product_image.url(:small)
                  xml.title "Store"
                  xml.link url_for :only_path => false, :controller => 'products'
                end
                xml.link url_for :only_path => false, :controller => 'products', :action => 'show', :id => product.permalink
                xml.description product.fine_print
                xml.guid url_for :only_path => false, :controller => 'products', :action => 'show', :id => product.permalink
              end
            end
          end
        end

    Read the article

  • PHP's SimpleXML: How to use colons in names

    - by nute
    I am trying to generate an RSS feed for Google Merchant using SimpleXML. The sample given by Google is:

        <?xml version="1.0"?>
        <rss version="2.0" xmlns:g="http://base.google.com/ns/1.0">
          <channel>
            <title>The name of your data feed</title>
            <link>http://www.example.com</link>
            <description>A description of your content</description>
            <item>
              <title>Red wool sweater</title>
              <link> http://www.example.com/item1-info-page.html</link>
              <description>Comfortable and soft, this sweater will keep you warm on those cold winter nights.</description>
              <g:image_link>http://www.example.com/image1.jpg</g:image_link>
              <g:price>25</g:price>
              <g:condition>new</g:condition>
              <g:id>1a</g:id>
            </item>
          </channel>
        </rss>

    My code has things like:

        $product->addChild("g:condition", 'new');

    which generates:

        <condition>new</condition>

    I read online that I should instead use:

        $product->addChild("g:condition", 'new', 'http://base.google.com/ns/1.0');

    which now generates:

        <g:condition xmlns:g="http://base.google.com/ns/1.0">new</g:condition>

    This seems very counter-intuitive to me, as now the xmlns declaration is on almost EVERY line of my RSS feed instead of just once in the root element. Am I missing something?
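    A repeated per-element xmlns:g usually means the namespace was never declared on the root. A minimal sketch, assuming the feed is built with SimpleXMLElement: declare xmlns:g once on the root <rss> element and keep passing the namespace URI to addChild(); because the prefix is already declared on an ancestor, libxml reuses it instead of re-declaring it on every child.

        <?php
        // Declare the g: namespace once on the root, then pass the URI to addChild().
        $rss = new SimpleXMLElement(
            '<rss version="2.0" xmlns:g="http://base.google.com/ns/1.0"/>'
        );
        $channel = $rss->addChild('channel');
        $channel->addChild('title', 'The name of your data feed');

        $item = $channel->addChild('item');
        $item->addChild('title', 'Red wool sweater');
        $item->addChild('g:condition', 'new', 'http://base.google.com/ns/1.0');
        $item->addChild('g:price', '25', 'http://base.google.com/ns/1.0');

        // g:condition and g:price keep their prefix; xmlns:g appears only on <rss>.
        echo $rss->asXML();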

    Read the article

  • YouTube API Get all videos uploaded feed

    - by Paul
    Hi guys, I can't seem to retrieve ALL the videos from a particular channel on YouTube, despite the API giving example code that should do just that. I'm using Java.

        http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads

    The above RSS feed is the URL they suggest using, along with the following sample code:

        /* init the list */
        String feedUrl = "http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads";
        VideoFeed videoFeeder = null;
        videoFeeder = serviceobject.getFeed(new URL(feedUrl), VideoFeed.class);

    Looping over this with a for loop suggests 25 entries (as per the RSS). However, the actual number of videos uploaded is significantly larger (662 at the time of writing). My question is how on earth you retrieve everything with the API, not just a subset of the data. Any ideas on where I'm going wrong? Should I be using a different URL? http://www.youtube.com/GoogleDevelopers#g/a

    Read the article

  • XML RSS to HTML parser doesn't work

    - by mstr
    I'm using MCX (I don't know whether anyone here is familiar with it; it's a pretty unknown derivative of COBOL and Fortran, look it up on Google if you don't believe me). Note: I'm running MCX on the MCX-WebServices server, as it supports neither Apache nor IIS; maybe that is part of the problem. The thing is that I want to use the XML library to read in an XML file and convert it into an output format readable by the user. The XML lib already has all the functions I need for that, yet my program fails.

        #!usr/bin/mcx

        $PGRM.ID: index.mcx
        $PGRM.AT: /mstr

        SHOWERROR:
        WRITE XML.LastError --> OUTPUT
        DO_FLUSH
        xcit
        end\

        MAIN:
        IMPORT Extras.XML
        USE Extras
        $XML_RSS_FILE: XML.ReadIn "rss.xml"
        ! $XML_RSS_FILE --> GOTO SHOWERROR
        $XML_RSS: XML.FormatRSS1 <-- $XML_RSS_FILE
        ! $XML_RSS --> GOTO SHOWERROR
        WRITE $XML_RSS --> OUTPUT
        DO_FLUSH
        FLUSH
        xcit
        end\

    Program output: nothing. The rss.xml file 100% exists and is readable. Thanks in advance.

    Read the article

  • Firefox add-on needed for quickly adding RSS feeds to Thunderbird

    - by alehro
    Actually, the need grew out of using SE. It's quite bothersome to: right-click an RSS link, choose "Copy", switch to Thunderbird, right-click the RSS folder, choose "Subscribe", push "Add", paste, OK. I'd prefer to just right-click the RSS link and choose the RSS folder. The second motive for the question is that I'd like to look at an implementation of interprocess control of Thunderbird. Maybe nothing does exactly this, but something similar might exist.

    Read the article

  • Looping through feed entries with ROME

    - by Gandalf StormCrow
    I'm trying to loop through Atom feed entries and get, say, the title. I found this article and tried this snippet of code:

        for (final Iterator iter = feeds.getEntries.iterator(); iter.hasNext(); ) {
            element = (Element) iter.next();
            key = element.getAttributeValue("href");
            if ((key != null) && (key.length() > 0)) {
                marks.put(key, key);
            }
        }

    But I get an exception saying:

        java.lang.ClassCastException: com.sun.syndication.feed.synd.SyndEntryImpl cannot be cast to org.jdom.Element
            at com.emir.altantbh.FeedReader.main(FeedReader.java:47)

    What did I do wrong? Can anyone direct me towards a better tutorial, or show me where I made the mistake? I need to loop through the entries and extract the title tag value. Thank you.

    Read the article

  • Django DRY Feeds

    - by Mandx
    I'm using the Django Feeds Framework and it's really nice, very intuitive and easy to use. But I think there is a problem when creating links to feeds in HTML. For example:

        <link rel="alternate" type="application/rss+xml" title="{{ feed_title }}" href="{{ url_of_feed }}" />

    The link's href attribute can easily be found out: just use reverse(). But what about the title attribute? Where should the template engine look for this? Even more, what if the feed is built up dynamically and the title depends on parameters (like this)? I can't come up with a solution that "seems" DRY to me. All I can come up with is using context processors or template tags, but it gets messy when the context processor/template tag has to find parameters to construct the Feed class, and writing this I realize I don't even know how to create a Feed instance myself within the view. If I put all this logic in the view, it would not be just one view. Also, the value for title would be in the view AND in the feed.

    Read the article

  • AJAX to PHP to cURL and back

    - by pfunc
    I am trying to make an AJAX call to a PHP script. The PHP script calls an RSS feed using cURL, gets the data, and returns the data to the function. I keep getting an error: "Warning: Wrong parameter count for curl_error() in" .... Here is my PHP code:

        $ch = curl_init() or die(curl_error());
        curl_setopt($ch, CURLOPT_URL, $feed);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $data1 = curl_exec($ch) or die(curl_error());
        echo $data1;

    and the AJAX call:

        $.ajax({
            url: "getSingleFeed.php",
            type: "POST",
            data: "feedURL=" + window.feedURL,
            success: function(feed){
                alert(feed);
            }
        });

    I tested all the variables; they are being passed correctly and I can echo them out. But this line:

        $data1 = curl_exec($ch) or die(curl_error());

    is what is giving me the error. I am doing the same thing with cURL on other pages, just without AJAX, and it is working fine. Is there anything special I need to do with AJAX to make this work?
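    This doesn't look AJAX-related: the warning says curl_error() is being called with no arguments, and it requires the cURL handle. A minimal sketch of the same fetch with the handle passed in (assuming getSingleFeed.php receives feedURL via POST, as in the AJAX call above):

        <?php
        // Sketch of getSingleFeed.php: fetch the posted feed URL and echo the body back.
        $feed = isset($_POST['feedURL']) ? $_POST['feedURL'] : null;
        if ($feed === null) {
            die('no feedURL given');
        }

        $ch = curl_init($feed);                       // keep the handle; curl_error() needs it
        if ($ch === false) {
            die('could not initialise cURL');
        }
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  // return the body instead of printing it

        $data = curl_exec($ch);
        if ($data === false) {
            die(curl_error($ch));                     // the handle is a required argument
        }
        curl_close($ch);
        echo $data;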

    Read the article

  • Javascript works great locally, but not on my server

    - by Jonathan Cohen
    I'm teaching myself JavaScript by creating a script for displaying an external RSS feed on a webpage. The code I patched together works great locally: it populates all the information inside the section "Blog: Shades of Gray", except for "tagged", which I hard-coded. But when I upload the site files to my server, the code doesn't work at all. This feels like I'm not getting something really basic about how JavaScript works locally vs. on the server. I did my half hour of googling for an answer and no trails look promising, so I'd really appreciate your help. This is my site (under construction): http://jonathangcohen.com Below is the code, which can also be found at http://jonathangcohen.com/grabFeeds.js.

        /* JavaScript for displaying an external RSS feed on a webpage.
           Grabs attributes from an RSS feed and assigns them to IDs for display.
           The code references my Tumblr blog, but it'll extend to any RSS feed. */
        window.onload = writeRSS;

        function writeRSS(){
          writeBlog();
        }

        function writeBlog(){
          if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
            xmlhttp = new XMLHttpRequest();
          } else { // code for IE6, IE5
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          }
          xmlhttp.open("GET", "http://blog.jonathangcohen.com/rss.xml", false);
          xmlhttp.send();
          xmlDoc = xmlhttp.responseXML;
          var x = xmlDoc.getElementsByTagName("item");
          // append category to link
          for (i = 0; i < 3; i++) {
            if (i == 0) {
              // print category
              var blogTumblrCategory = x[i].getElementsByTagName("category")[0].childNodes[0].nodeValue;
              document.getElementById("getBlogCategory1").innerHTML = '<a class="BlogTitleLinkStyle" href="http://blog.jonathangcohen.com/tagged/' + blogTumblrCategory + '">' + blogTumblrCategory + '</a>';
              // print date
              var k = x[i].getElementsByTagName("pubDate")[0].childNodes[0].nodeValue;
              thisDate = new Date();
              thisDate = formatTumblrDate(k);
              document.getElementById("getBlogPublishDate1").innerHTML = thisDate;
              // print title
              var blogTumblrTitle = x[i].getElementsByTagName("title")[0].childNodes[0].nodeValue;
              var blogTumblrLink = x[i].getElementsByTagName("link")[0].childNodes[0].nodeValue;
              document.getElementById("getBlogTitle1").innerHTML = '<a class="BlogTitleLinkStyle" href="' + blogTumblrLink + '">' + blogTumblrTitle + '</a>';
            }
            if (i == 1) {
              // print category
              var blogTumblrCategory = x[i].getElementsByTagName("category")[0].childNodes[0].nodeValue;
              document.getElementById("getBlogCategory2").innerHTML = '<a class="BlogTitleLinkStyle" href="http://blog.jonathangcohen.com/tagged/' + blogTumblrCategory + '">' + blogTumblrCategory + '</a>';
              // print date
              var k = x[i].getElementsByTagName("pubDate")[0].childNodes[0].nodeValue;
              thisDate = new Date();
              thisDate = formatTumblrDate(k);
              document.getElementById("getBlogPublishDate2").innerHTML = thisDate;
              // print title
              var blogTumblrTitle = x[i].getElementsByTagName("title")[0].childNodes[0].nodeValue;
              var blogTumblrLink = x[i].getElementsByTagName("link")[0].childNodes[0].nodeValue;
              document.getElementById("getBlogTitle2").innerHTML = '<a class="BlogTitleLinkStyle" href="' + blogTumblrLink + '">' + blogTumblrTitle + '</a>';
            }
            if (i == 2) {
              // print category
              var blogTumblrCategory = x[i].getElementsByTagName("category")[0].childNodes[0].nodeValue;
              document.getElementById("getBlogCategory3").innerHTML = '<a class="BlogTitleLinkStyle" href="http://blog.jonathangcohen.com/tagged/' + blogTumblrCategory + '">' + blogTumblrCategory + '</a>';
              // print date
              var k = x[i].getElementsByTagName("pubDate")[0].childNodes[0].nodeValue;
              thisDate = new Date();
              thisDate = formatTumblrDate(k);
              document.getElementById("getBlogPublishDate3").innerHTML = thisDate;
              // print title
              var blogTumblrTitle = x[i].getElementsByTagName("title")[0].childNodes[0].nodeValue;
              var blogTumblrLink = x[i].getElementsByTagName("link")[0].childNodes[0].nodeValue;
              document.getElementById("getBlogTitle3").innerHTML = '<a class="BlogTitleLinkStyle" href="' + blogTumblrLink + '">' + blogTumblrTitle + '</a>';
            }
          }
        }

        function formatTumblrDate(k){
          d = new Date(k);
          var curr_date = d.getDate();
          var curr_month = d.getMonth();
          curr_month++;
          var curr_year = d.getFullYear();
          printDate = (curr_month + "/" + curr_date + "/" + curr_year);
          return printDate;
        }

    Thank you!
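    The most likely culprit when an XMLHttpRequest script works locally but not once deployed is the browser's same-origin policy: the page lives on jonathangcohen.com while the feed lives on blog.jonathangcohen.com, which counts as a different origin. One common workaround is a tiny same-origin proxy that fetches the feed server-side. A minimal sketch in PHP, assuming the host can run PHP (proxy.php is a made-up name):

        <?php
        // Hypothetical proxy.php, served from the same origin as the page.
        // It fetches the cross-origin feed server-side and relays it, so the
        // browser's XMLHttpRequest only ever talks to jonathangcohen.com.
        $ch = curl_init('http://blog.jonathangcohen.com/rss.xml');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

        $xml = curl_exec($ch);
        if ($xml === false) {
            header('HTTP/1.1 502 Bad Gateway');
            exit;
        }
        curl_close($ch);

        header('Content-Type: text/xml; charset=utf-8');
        echo $xml;

    The script above would then call xmlhttp.open("GET", "proxy.php", false) instead of the blog URL, and everything else stays the same.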

    Read the article
