Search Results

Search found 1421 results on 57 pages for 'atom feeds'.

Page 42/57

  • Simple non-network concurrency with Twisted

    - by Rince
    Dear pythoners, I have a problem with using Twisted for simple concurrency in python. The problem is - I don't know how to do it and all online resources are about Twisted networking abilities. So I am turning to SO-gurus for some guidance. Python 2.5 is used. Simplified version of my problem runs as follows: A bunch of scientific data A function that munches on the data and creates output ??? < here enters concurrency, it takes chunks of data from 1 and feeds it to 2 Output from 3 is joined and stored My guess is that Twisted reactor can do the number three job. But how? Thanks a lot for any help and suggestions.

    Read the article

  • How to limit a Google Calendar XML/RSS feed by date range (not working!!)

    - by Phil
    For the life of me I cannot get my Google Calendar XML feed to show only events within a certain date range. I know that start-min and start-max are supposed to limit the output (according to several posts; links deleted because I am a newbie and can only post one hyperlink, argh), BUT I CAN'T GET IT TO WORK. It keeps showing lots of events outside the range. I created a sample calendar, made it public, and put some events in the first week of April. Can anyone show me how to construct a request that returns only those three events from the first week in April? I'll GLADLY and GRATEFULLY PayPal $10 to anyone who helps me break through on this. Here is the calendar's public feed: http://www.google.com/calendar/feeds/66m31c36sj9u5k8kekrvt2lpr8%40group.calendar.google.com/public/basic
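
    A hedged sketch of the request the GData docs describe: start-min is inclusive, start-max is exclusive, and both must be full RFC 3339 timestamps; passing plain dates is a common reason the filter appears to be ignored. The sketch queries the full projection rather than basic, and the year is an assumption taken from the post's era (the Calendar GData feeds themselves have long since been retired).

      import urllib.parse
      import urllib.request

      # Calendar ID taken from the post; %40 is the escaped '@'.
      base = ("http://www.google.com/calendar/feeds/"
              "66m31c36sj9u5k8kekrvt2lpr8%40group.calendar.google.com/public/full")
      params = urllib.parse.urlencode({
          "start-min": "2010-04-01T00:00:00Z",  # inclusive lower bound
          "start-max": "2010-04-08T00:00:00Z",  # exclusive upper bound
          "max-results": "25",
      })
      with urllib.request.urlopen(base + "?" + params) as resp:
          print(resp.read().decode("utf-8"))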

    Read the article

  • Rails 3 time output

    - by Oluf Nielsen
    Hi, I'm now working on the output of the feeds I'm taking in from some sites. What I'm currently formatting is the time, and I want it displayed in a slightly special way, like this:

      today, 14:12
      yesterday, 15:34
      27/12, 15:24

    I have this in my code:

      = news.entry_published.strftime("%d/%m, %H:%M")

    That gives me an error saying:

      undefined method `strftime' for "2010-12-30 19:26:00.000000":String

    And it doesn't do what I want with the days. Edit:

      - @date = DateTime.strptime(news.entry_published, "%Y-%m-%d %H:%M:%S")
      = @date.strftime("%d/%m, %H:%M")

    This now works and gives the output 30/12, 19:26, but I still have to check whether the date is today, yesterday, or just another day. Cheers, Oluf.

    Read the article

  • RDF Usage Rates for Syndication

    - by David in Dakota
    Is RDF still widely used for content syndication? Specifically, I know of only Slashdot as a large-scale website syndicating content in that format (versus, say, RSS). Understandably this might seem vague to answer, so more specifically:
    1. Can anyone list any larger sites, similar in scale to Amazon or CNN, using it?
    2. Are there any web-based publishing platforms (WordPress, Joomla, etc.) that generate syndication feeds with this XML vocabulary?
    3. Is there any other, more quantifiable evidence that it is used for syndication online?
    I understand that RDF may be a parent specification, but in this case I'm talking about sites that syndicate content using <rdf as a root element and heavily leveraging elements from the RDF namespace: http://www.w3.org/1999/02/22-rdf-syntax-ns#

    Read the article

  • Does the GData API work on Android 2.0 SDK & up?

    - by user266361
    I used the GData API to pull in calendar info. It works fine if I use Android 1.6, but with the same code on Android 2.0 and up it throws an AuthenticationException. Below is my code for your reference:

      CalendarService myService = new CalendarService("My Application");
      myService.setUserCredentials(args[0], args[1]);
      // Set up the URL and the object that will handle the connection:
      URL feedUrl = new URL("http://www.google.com/calendar/feeds/" + args[0] + "/private/full");

    args[0] and args[1] are the credentials. The AuthenticationException is thrown when calling myService.setUserCredentials(). Does anybody have any clue?

    Read the article

  • How to make a moving news bar in a Windows Forms application without a timer

    - by Ehab Sutan
    I'm making a desktop application in C# that contains a moving news bar of labels. I'm using a timer to move these labels, but the problem is that when I make the timer interval low (1-10, for example) the application takes a very high percentage of CPU usage, and when I make it higher (200-500) the movement of the labels becomes intermittent and jerky, to the point that the user may not be able to read the news comfortably. Some more information: it is a Windows Forms application. The way I move the labels is as follows: the news items from RSS feeds are represented as a group of LinkLabels, all of which are added to a FlowLayoutPanel container, and the timer moves the whole container. As far as I know this is the best way to build the news bar; if you have a better idea or solution, please help.

    Read the article

  • Make xargs execute the command once for each line of input

    - by Readonly
    How can I make xargs execute the command exactly once for each line of input given? Its default behavior is to chunk the lines and execute the command once, passing multiple lines to each instance. From http://en.wikipedia.org/wiki/Xargs:

      find /path -type f -print0 | xargs -0 rm

    In this example, find feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist. This is more efficient than the functionally equivalent version:

      find /path -type f -exec rm '{}' \;

    I know that find has the -exec flag. I am just quoting an illustrative example from another resource.

    Read the article

  • RSS feed for HD video from a YouTube channel?

    - by Praveen Chandrasekaran
    Suggest to me how to get the HD video URLs for a YouTube channel's uploads as an RSS feed. It is to play on Android phones, so the formats should be:

      Format       File type
      H.263        3GPP (.3gp) and MPEG-4 (.mp4)
      H.264 AVC    3GPP (.3gp) and MPEG-4 (.mp4)
      MPEG-4 SP    3GPP (.3gp)

    If I use this link: http://gdata.youtube.com/feeds/api/users/youtube/uploads I get only the 3GPP-format media:content tags. I want a high-definition MP4 video link, and the resolution must be 320x480. How can I get it? Any idea?
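
    A small sketch (assuming the feedparser library) that lists which media formats the GData uploads feed actually advertises for each entry. Be warned that the old GData feeds generally exposed only a handful of RTSP/3GPP media:content entries and did not hand out direct HD MP4 URLs, so the format wanted here may simply not be present in this feed.

      import feedparser

      feed = feedparser.parse("http://gdata.youtube.com/feeds/api/users/youtube/uploads")
      for entry in feed.entries:
          print(entry.title)
          # media:content elements from the Media RSS namespace, if any
          for media in entry.get("media_content", []):
              print("  url=%s type=%s" % (media.get("url"), media.get("type")))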

    Read the article

  • What is a good open source job scheduler in Java?

    - by Boaz
    Hi, in an application harvesting (many) RSS feeds, I want to dynamically schedule the feed downloaders based on the following criteria:
    1. The speed at which content is generated - sources that produce content at a higher rate need to be visited more often. After downloading a feed, its content is analyzed, and based on the current rate of publication the next run time is determined for the feed. Note that this changes dynamically: sometimes a feed is very active and sometimes it is slow.
    2. Every RSS feed should be visited at least once an hour.
    While the second criterion is handled by most schedulers, the first one is more problematic. Which Java-based open source scheduler would you recommend for the task? (And why? ;) ) Thanks! Boaz
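
    The adaptive policy in criterion 1 is independent of whichever scheduler library is chosen; here is a sketch of computing the next visit from the observed publication rate, capped at one hour. It is written in Python purely for brevity (the question asks about Java), and the half-the-average-gap heuristic and five-minute floor are assumptions, not part of the question.

      from datetime import timedelta

      MAX_INTERVAL = timedelta(hours=1)    # criterion 2: visit at least hourly
      MIN_INTERVAL = timedelta(minutes=5)  # assumed floor so hot feeds don't spin

      def next_run(publish_times, now):
          """Pick the next fetch time from the feed's recent item timestamps."""
          if len(publish_times) < 2:
              return now + MAX_INTERVAL              # no rate estimate yet
          span = max(publish_times) - min(publish_times)
          avg_gap = span / (len(publish_times) - 1)  # mean time between posts
          # Revisit at half the average gap, clamped to the allowed window.
          interval = min(max(avg_gap / 2, MIN_INTERVAL), MAX_INTERVAL)
          return now + interval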

    Read the article

  • How to loop over nodes in an XML feed using Scrapy (Python)

    - by Kour ipm
    Hi, I am working with Scrapy and trying XML feeds for the first time. Below is my code:

      class TestxmlItemSpider(XMLFeedSpider):
          name = "TestxmlItem"
          allowed_domains = {"http://www.nasinteractive.com"}
          start_urls = [
              "http://www.nasinteractive.com/jobexport/advance/hcantexasexport.xml"
          ]
          iterator = 'iternodes'
          itertag = 'job'

          def parse_node(self, response, node):
              title = node.select('title/text()').extract()
              job_code = node.select('job-code/text()').extract()
              detail_url = node.select('detail-url/text()').extract()
              category = node.select('job-category/text()').extract()
              print title, ";;;;;;;;;;;;;;;;;;;;;"
              print job_code, ";;;;;;;;;;;;;;;;;;;;;"
              item = TestxmlItem()
              item['title'] = node.select('title/text()').extract()
              .......
              return item

    The result:

      File "/usr/lib/python2.7/site-packages/Scrapy-0.14.3-py2.7.egg/scrapy/item.py", line 56, in __setitem__
          (self.__class__.__name__, key))
      exceptions.KeyError: 'TestxmlItem does not support field: title'

    In total there are 200+ items, so I need to loop over the nodes and assign each node's text to an item, but all the results are printed at once. How exactly do I loop over the nodes when scraping XML files with XMLFeedSpider?
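
    The traceback points at the item definition rather than the looping: XMLFeedSpider already calls parse_node once per <job> node, so the iteration is taken care of, but a scrapy Item only accepts keys that were declared as fields. A minimal sketch of the missing declaration (field names inferred from the XPaths above) for items.py:

      from scrapy.item import Item, Field

      class TestxmlItem(Item):
          title = Field()
          job_code = Field()
          detail_url = Field()
          category = Field()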

    Read the article

  • Regular expressions and matching URLs with metacharacters

    - by James P.
    I'm having trouble finding a regular expression that matches the following string:

      Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1

    One problem is escaping the question mark. Java's pattern matcher doesn't seem to accept \? as a valid escape sequence, but it also fails to work with the tester at myregexp.com. Here's what I have so far:

      ([a-zA-Z0-9])+;http://([a-zA-Z0-9./-]+);[0-9]+

    Any suggestions? Edit: the original intent was to match all URLs that could be found after the first semicolon.
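
    Two likely culprits, offered as guesses: in a Java source string the backslash itself must be escaped ("\\?" in the literal produces the regex \?, while "\?" is a compile error), and the character class [a-zA-Z0-9./-] does not allow ? or =, so the match stops before the query string. A sketch of a pattern that sidesteps both by matching "everything up to the next semicolon" (shown in Python for brevity; the regex itself is Java-compatible):

      import re

      line = "Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1"
      # name ; url (anything that is not a semicolon) ; number
      pattern = r"([A-Za-z0-9]+);(http://[^;]+);([0-9]+)"
      m = re.match(pattern, line)
      if m:
          print(m.group(2))  # the full URL, query string included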

    Read the article

  • In Google Analytics, what is 'ga:accountName' for?

    - by Chez
    In Google Analytics, what is 'ga:accountName' for? It might seem like a straightforward question, but I can't find any documentation that tells me what ga:accountName is supposed to return. If I run Google's code from the Java example:

      private static void getAccountFeed(AnalyticsService analyticsService)
              throws IOException, MalformedURLException, ServiceException {
          // Construct query from a string.
          URL queryUrl = new URL(
              "https://www.google.com/analytics/feeds/accounts/default?max-results=10");
          // Make request to the API.
          AccountFeed accountFeed = analyticsService.getFeed(queryUrl, AccountFeed.class);
          // Output the data to the screen.
          System.out.println("-------- Account Feed Results --------");
          for (AccountEntry entry : accountFeed.getEntries()) {
              System.out.println(
                  "\nAccount Name = " + entry.getProperty("ga:accountName") +
                  "\nProfile Name = " + entry.getTitle().getPlainText() +
                  "\nProfile Id = " + entry.getProperty("ga:profileId") +
                  "\nTable Id = " + entry.getTableId().getValue());
          }
      }

    it returns my website. Can anybody help? Thanks.

    Read the article

  • Browser Not Reading Entire XML File

    - by Chris
    I have an XML file that is written by a PHP script. The data for the XML file is gathered from several different RSS feeds, and the script is invoked every 5 minutes by a cron job. The script takes maybe 5-10 seconds to write the XML file. Here's the problem: after the XML file is written, I can open it through Dreamweaver and read everything just fine - but when I enter the XML file's URL into my web browser (IE or Firefox), I get an "XML Parsing Error: not well-formed" error. When I do View Source in the browser, the XML file appears incomplete - but when I open the file directly off the server, it is complete. Does anyone know what's going on here?
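
    Those symptoms - complete on disk afterwards, truncated over HTTP - are consistent with the browser fetching the file while the cron job is still writing it. A common remedy, sketched here in Python although the post's script is PHP, is to write to a temporary file and atomically rename it over the live one, so readers only ever see a complete document:

      import os
      import tempfile

      def publish_xml(xml_text, dest_path):
          # Create the temp file next to the destination so the rename
          # stays on one filesystem (rename is only atomic within one).
          dir_name = os.path.dirname(dest_path) or "."
          fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".xml")
          with os.fdopen(fd, "w", encoding="utf-8") as f:
              f.write(xml_text)
          os.replace(tmp_path, dest_path)  # readers see old or new, never partial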

    Read the article

  • How can I create a Google spreadsheet using C#?

    - by Kev
    I've installed the latest Google Data API for .NET, but I cannot figure out how to create a spreadsheet using C# code; it's not included in the sample programs. I've tried this:

      SpreadsheetsService ss = new SpreadsheetsService("Spreadsheet Example");
      ss.setUserCredentials("[email protected]", "password");
      SpreadsheetEntry se = new SpreadsheetEntry();
      se.Title.Text = "new";
      ss.Insert(new Uri("http://spreadsheets.google.com/feeds/spreadsheets/private/full"), se);

    However, it doesn't work! Is there some way to do this job? Thank you!

    Read the article

  • Get only new RSS entries with a PHP script?

    - by ArneRie
    What I'm trying to do: fetch X number of RSS feeds from my blogs and echo only the new entries. My problem is knowing which items have already been parsed. My solution so far: fetch each feed every 5 hours and store all titles in a database table or flat file; on the next run, check whether the title is already in the database, and if not, print it and save it. But I am not sure whether this is best practice. If someone knows a faster way, that would be great. Sorry for my poor English.
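
    One refinement worth hedging: RSS items usually carry a <guid> (Atom: <id>), which is a more reliable dedup key than the title, since titles get edited and can collide. A small sketch of the same idea in Python with the feedparser library (the post uses PHP, where the structure would be identical; the feed URL is a placeholder):

      import feedparser
      import sqlite3

      db = sqlite3.connect("seen.db")
      db.execute("CREATE TABLE IF NOT EXISTS seen (guid TEXT PRIMARY KEY)")

      def new_entries(url):
          for entry in feedparser.parse(url).entries:
              guid = entry.get("id") or entry.get("link")  # prefer the feed's own id
              if not guid:
                  continue
              cur = db.execute("INSERT OR IGNORE INTO seen (guid) VALUES (?)", (guid,))
              if cur.rowcount:  # rowcount is 1 only when the guid was not seen before
                  db.commit()
                  yield entry

      for entry in new_entries("http://example.com/feed"):
          print(entry.get("title", "(no title)"))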

    Read the article

  • Windows desktop gadget: colour-coding RSS feed items

    - by padjo
    I have a desktop gadget that pulls RSS feeds from a website. Each feed item carries information about an issue - i.e. priority, time, description. The feed items are displayed on the desktop; however, I need to colour-code them according to their priority, i.e. 1 = red, etc. I'm using the substr function - is there a better way to do this in JavaScript/HTML? At the moment I've hacked together something like the following, but is there a more elegant solution?

      var priority = feed.item.description.substr(10, 1);
      var colour;
      if (priority === "1") {
          colour = "red";
      } else if (priority === "2") {
          colour = "yellow";
      } else {
          colour = "green";
      }
      document.write('<a style="color:' + colour + '" href="' + item + '">');

    Read the article

  • Getting the number of FeedBurner subscribers?

    - by iMaster
    I'm currently using this code to get the number of subscribers to my blog:

      $whaturl = "http://api.feedburner.com/awareness/1.0/GetFeedData?uri=http://feeds.feedburner.com/DesignDeluge";
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_URL, $whaturl);
      $data = curl_exec($ch);
      curl_close($ch);
      $xml = new SimpleXMLElement($data);
      $fb = $xml->feed->entry['circulation'];

    but when I echo $fb, the whole page that $fb is echoed on fails to render. Any ideas why this isn't working?

    Read the article

  • Detecting Available Qualities of YouTube Videos

    - by Langdon
    I'm writing a Boxee app that makes use of YouTube videos, and I want to display the highest-quality version available. I was looking through the YouTube API, but I can't seem to find a way to detect whether 720p and/or 1080p versions of a video are available. Does anyone know how to do this? I'm already using the Data API to collect information about the video, but there doesn't seem to be anything in the payload about the different qualities consumable on the web: http://gdata.youtube.com/feeds/api/videos/NWHfY_lvKIQ I could just hard-code fmt=22 and let it fall back to a lesser-quality version, but then I miss out on 1080p (fmt=37).

    Read the article

  • Which sites/blogs would you recommend for learning advanced CSS techniques?

    - by metal-gear-solid
    Which sites or blogs (not books) would you recommend for learning advanced CSS techniques (not basics)? Ideally a site that publishes new pure-CSS techniques and articles (no client-side or server-side scripting) on a daily or weekly basis, or any pure-CSS RSS feeds you can suggest. My aim is to learn one new technique or trick daily. I know some of the well-known blogs, but do you know any good blogs or sites that are not so well known? I want to discover some hidden treasures.

    Read the article

  • How to customise the TextView inside a Spinner?

    - by Janusz
    I have a Spinner with an ArrayAdapter that feeds values into it. The problem is that the text is too long for the view, and the result is a very, very ugly spinner, as can be seen in the screenshot. I tried to pass the ID of my own TextView into the adapter, but every time the spinner should be shown I get an exception saying the ID I supplied is not valid:

      04-26 17:38:39.695: ERROR/AndroidRuntime(4276): android.content.res.Resources$NotFoundException: Resource ID #0x7f09003a type #0x12 is not valid

    Where do I have to define the TextView? In a separate XML file? With a surrounding ViewGroup? It would help me a lot to see an example of the adapter initialization and the TextView definition.

    Read the article

  • Unable to upload large files to Google Docs

    - by Preeti
    Hi, I am uploading a document to Google Docs like this:

      DocumentsService myService = new DocumentsService("");
      myService.setUserCredentials("[email protected]", password);
      DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");

    But when I try to upload a file of 3 MB it results in an exception:

      An unhandled exception of type 'Google.GData.Client.GDataRequestException' occurred in Google.GData.Client.dll
      Additional information: Execution of request failed: http://docs.google.com/feeds/documents/private/full

    How can I upload large files to Google Docs? I am using Google API version 2. Thanks.

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help of two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O- or CPU-bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

      <?php
      libxml_use_internal_errors(true);
      ini_set('max_execution_time', 0);
      ini_set('max_input_time', 0);
      set_time_limit(0);

      $hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org");

      function doDomStuff($singleHref, $childPid) {
          $html = new DOMDocument();
          $html->loadHtmlFile($singleHref);
          $xPath = new DOMXPath($html);
          $domQuery = '//div[@id="slogan"]/h2';
          $domReturn = $xPath->query($domQuery);
          foreach ($domReturn as $return) {
              $slogan = $return->nodeValue;
              echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
          }
      }

      $pids = array();
      foreach ($hrefArray as $singleHref) {
          $pid = pcntl_fork();
          if ($pid == -1) {
              die("Couldn't fork, error!");
          } elseif ($pid > 0) {
              // We are the parent
              $pids[] = $pid;
          } else {
              // We are the child
              $childPid = posix_getpid();
              doDomStuff($singleHref, $childPid);
              exit(0);
          }
      }

      foreach ($pids as $pid) {
          pcntl_waitpid($pid, $status);
      }

      // Clear the libxml buffer so it doesn't fill up
      libxml_clear_errors();

    Which raises the following questions:

    1) Given that my hrefArray contains 4 URLs - if the array were to contain, say, 1,000 product URLs, this code would spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and, again with 1,000 URLs as an example, split the child workload to 100 products per child (10 x 100)?

    2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing - so spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

      <?php
      libxml_use_internal_errors(true);
      ini_set('max_execution_time', 0);
      ini_set('max_input_time', 0);
      set_time_limit(0);

      $maxChildWorkers = 10;

      $html = new DOMDocument();
      $html->loadHtmlFile('http://xxxx');
      $xPath = new DOMXPath($html);
      $domQuery = '//div[@id=productDetail]/a';
      $domReturn = $xPath->query($domQuery);
      $hrefsArray[] = $domReturn->getAttribute('href');

      function doDomStuff($singleHref) {
          // Do stuff here with each product
      }

      // To figure out: split href array into $maxChildWorkers # of workArray1, workArray2 ... workArray10.
      $pids = array();
      foreach ($workArray(1,2,3 ... 10) as $singleHref) {
          $pid = pcntl_fork();
          if ($pid == -1) {
              die("Couldn't fork, error!");
          } elseif ($pid > 0) {
              // We are the parent
              $pids[] = $pid;
          } else {
              // We are the child
              $childPid = posix_getpid();
              doDomStuff($singleHref);
              exit(0);
          }
      }

      foreach ($pids as $pid) {
          pcntl_waitpid($pid, $status);
      }

      // Clear the libxml buffer so it doesn't fill up
      libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes, i.e. my hrefsArray gets built in the master and in each subsequent child process. I am sure I am going about this totally wrong, so I would greatly appreciate a general nudge in the right direction.
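
    For question 1, the usual shape of the fix is: build the URL list once in the parent, cap the worker count, and hand each worker a slice. Here is that pattern sketched in Python's multiprocessing for compactness (the post is PHP, where the same structure would be spelled out with pcntl_fork over array_chunk'ed slices; the URLs are placeholders):

      from multiprocessing import Pool

      def do_dom_stuff(href):
          # stand-in for the per-product DOM scraping work
          return href

      if __name__ == "__main__":
          # Built once, in the parent only.
          hrefs = ["http://example.com/product/%d" % i for i in range(1000)]
          with Pool(processes=10) as pool:  # never more than 10 workers
              # chunksize=100 hands each worker batches of 100 URLs.
              for result in pool.imap_unordered(do_dom_stuff, hrefs, chunksize=100):
                  pass  # store result in the database here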

    Read the article

  • Django equivalent to paster for backend processes

    - by intractelicious
    I use Pylons at my job, but I'm new to Django. I'm making an RSS-filtering application, so I'd like to have two backend processes that run on a schedule: one to crawl RSS feeds for each user, and another to determine the relevance of individual posts relative to users' past preferences. In Pylons, I'd just write paster commands to update the DB with that data. Is there an equivalent in Django? E.g., is there a way to run the equivalent of python manage.py shell in a non-interactive mode?
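
    Django's counterpart to a paster command is a custom management command: a module placed under <app>/management/commands/ whose file name becomes a manage.py subcommand that runs non-interactively (e.g. from cron). A minimal sketch - the app and model names here are made up for illustration:

      # myapp/management/commands/crawlfeeds.py
      from django.core.management.base import BaseCommand

      class Command(BaseCommand):
          help = "Crawl every user's RSS feeds and store new posts."

          def handle(self, *args, **options):
              from myapp.models import Feed  # hypothetical model
              for feed in Feed.objects.all():
                  self.stdout.write("crawling %s\n" % feed.url)
                  # fetch, parse, and store new items here

    Invoked as: python manage.py crawlfeeds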

    Read the article

  • Free RSS feed caching

    - by cherouvim
    Hello, I've got an application that serves an RSS feed of headlines, and I need to provide this feed to other consumers. I don't want to serve the RSS directly from my server, due to limited server resources, so I need to proxy (cache) it through some service that can handle the load. Assuming the RSS feed URL of my application is http://example.com/rss, I initially provided my consumers with the URL http://ajax.googleapis.com/ajax/services/feed/load?v=1.0&q=http%3A%2F%2Fexample.com%2Frss which solved my server-load problem but introduced a liveness problem: the headlines arrive minutes to hours later than the actual feed (I haven't measured exactly how much). I've also tried distributing through FeedBurner, so the URL became something like http://feeds.feedburner.com/example123?format=xml, but the liveness problem still exists. Is there a public and free solution for this? Anything below 5 minutes of liveness delay would be totally acceptable. Thanks.

    Read the article
