Search Results

Search found 978 results on 40 pages for 'feeds'.

Page 28/40 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • How can I set an invitee in Google Calendar through Python?

    - by Dhaval dave
    I am creating Google Calendar events via Python with the following code, taken from the Google API samples:

        def _InsertQuickAddEvent(self,
                content="Tennis with dddddd on 5/19/2010 4am-5:30am"):
            """Creates an event with the quick_add property set to true so the
            content is processed as quick add content instead of as an event
            description."""
            event = gdata.calendar.CalendarEventEntry()
            who = whois("[email protected]")
            event.content = atom.Content(text=content)
            event.quick_add = gdata.calendar.QuickAdd(value='true')
            new_event = self.cal_client.InsertEvent(event,
                '/calendar/feeds/default/private/full')
            return new_event

    Can anyone suggest how to add an invitee to this? The relevant documentation is here: http://code.google.com/apis/calendar/data/1.0/developers_guide_python.html
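    One possible approach, sketched below: older gdata-python-client versions expose attendees through a Who element appended to event.who. This is a sketch under two assumptions (that gdata.calendar.Who carries email/rel attributes, and that invitees belong on a fully specified event rather than a quick-add one, since quick-add content is parsed server-side), not a verified recipe:

        import atom
        import gdata.calendar

        def insert_event_with_invitee(cal_client, title, invitee_email):
            # Build a full event instead of a quick-add one, so the
            # attendee element is not ignored by the quick-add parser.
            event = gdata.calendar.CalendarEventEntry()
            event.title = atom.Title(text=title)
            # Append an invitee; the rel value marks them as an attendee.
            attendee = gdata.calendar.Who()
            attendee.email = invitee_email
            attendee.rel = 'http://schemas.google.com/g/2005#event.attendee'
            event.who.append(attendee)
            return cal_client.InsertEvent(
                event, '/calendar/feeds/default/private/full')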

    Read the article

  • RSS feed for HD video for a YouTube channel?

    - by Praveen Chandrasekaran
    Can anyone suggest how to get the HD video URLs for a YouTube channel's uploads as an RSS feed? The videos are to be played on Android phones, so the formats should be: H.263 in 3GPP (.3gp) and MPEG-4 (.mp4); H.264 AVC in 3GPP (.3gp) and MPEG-4 (.mp4); or MPEG-4 SP in 3GPP (.3gp). If I use this link: http://gdata.youtube.com/feeds/api/users/youtube/uploads I get only the 3GPP format in the media:content tag. I want a high-definition MP4 video link; the resolution must be 320x480. How can I get it? Any ideas?
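    A first diagnostic step is to dump every media:content element the feed actually exposes and check whether an MP4 variant is present at all. A minimal Python sketch using feedparser, which maps Media RSS elements into entry.media_content (whether an MP4/HD variant appears depends on the feed itself, not on this code):

        import feedparser

        FEED_URL = 'http://gdata.youtube.com/feeds/api/users/youtube/uploads'

        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            print(entry.title)
            # Each <media:content> becomes a dict; the MIME type shows
            # which container formats this upload is offered in.
            for content in entry.get('media_content', []):
                print('  url=%s type=%s' % (content.get('url'),
                                            content.get('type')))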

    Read the article

  • RSS feed for a YouTube channel?

    - by Praveen Chandrasekaran
    Can anyone suggest how to get the HD video URLs for a YouTube channel's uploads as an RSS feed? The videos are to be played on Android phones, so the formats should be: H.263 in 3GPP (.3gp) and MPEG-4 (.mp4); H.264 AVC in 3GPP (.3gp) and MPEG-4 (.mp4); or MPEG-4 SP in 3GPP (.3gp). If I use this link: http://gdata.youtube.com/feeds/api/users/youtube/uploads I get only the 3GPP format in the media:content tag. I want a high-definition MP4 video link; the resolution must be 320x480. How can I get it? Any ideas?

    Read the article

  • What is a good open source job scheduler in Java?

    - by Boaz
    Hi, in an application harvesting many RSS feeds, I want to dynamically schedule the feed downloaders based on the following criteria: (1) the speed at which content is generated; sources that produce content at a higher rate need to be visited more often. After downloading a feed, its content is analyzed, and based on the current rate of publication the next run time is determined for that feed. Note that this changes dynamically: sometimes a feed is very active and sometimes it is slow. (2) Every RSS feed should be visited at least once an hour. The second criterion is handled by most schedulers, but the first one is more problematic. What Java-based open source scheduler would you recommend for the task (and why)? Thanks! Boaz
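    The adaptive half of this is mostly arithmetic that any reschedulable scheduler can host; Quartz, for instance, lets a job replace its own trigger after each run. As a hedged illustration (the names, the clamping bounds, and the halve-the-gap heuristic are all invented for the example), here is one way to compute the next run time in Python:

        from datetime import datetime, timedelta

        MAX_INTERVAL = timedelta(hours=1)    # visit every feed at least hourly
        MIN_INTERVAL = timedelta(minutes=5)  # don't hammer very active feeds

        def next_run(entry_timestamps, now=None):
            """Schedule the next fetch at half the average gap between
            recent posts, clamped to [MIN_INTERVAL, MAX_INTERVAL]."""
            now = now or datetime.utcnow()
            stamps = sorted(entry_timestamps)
            if len(stamps) < 2:
                return now + MAX_INTERVAL  # no rate estimate yet
            gaps = [b - a for a, b in zip(stamps, stamps[1:])]
            avg_gap = sum(gaps, timedelta()) / len(gaps)
            interval = min(max(avg_gap / 2, MIN_INTERVAL), MAX_INTERVAL)
            return now + interval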

    Read the article

  • In Google Analytics, what is 'ga:accountName' for?

    - by Chez
    In Google Analytics, what is 'ga:accountName' for? It might seem like a straightforward question, but I can't find any documentation that tells me what ga:accountName is supposed to return. If I run Google's code from the Java example:

        private static void getAccountFeed(AnalyticsService analyticsService)
                throws IOException, MalformedURLException, ServiceException {
            // Construct query from a string.
            URL queryUrl = new URL(
                "https://www.google.com/analytics/feeds/accounts/default?max-results=10");

            // Make request to the API.
            AccountFeed accountFeed = analyticsService.getFeed(queryUrl, AccountFeed.class);

            // Output the data to the screen.
            System.out.println("-------- Account Feed Results --------");
            for (AccountEntry entry : accountFeed.getEntries()) {
                System.out.println(
                    "\nAccount Name = " + entry.getProperty("ga:accountName") +
                    "\nProfile Name = " + entry.getTitle().getPlainText() +
                    "\nProfile Id = " + entry.getProperty("ga:profileId") +
                    "\nTable Id = " + entry.getTableId().getValue());
            }
        }

    it does return my website. Can anybody help? Thanks

    Read the article

  • Regular expressions and matching URLs with metacharacters

    - by James P.
    I'm having trouble finding a regular expression that matches the following string: Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1 One problem is escaping the question mark. Java's pattern matcher doesn't seem to accept \? as a valid escape sequence, but the pattern also fails to work with the tester at myregexp.com. Here's what I have so far: ([a-zA-Z0-9])+;http://([a-zA-Z0-9./-]+);[0-9]+ Any suggestions? Edit: The original intent was to match all URLs that could be found after the first semicolon.
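    Two separate problems are plausible here: in a Java string literal the backslash itself must be escaped ("\\?"), and the character class above admits neither ? nor =, so the query string can never match. One way to sidestep both, sketched in Python, is to match the URL as "everything up to the next semicolon" instead of enumerating allowed characters:

        import re

        line = ("Korben;"
                "http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind"
                "?format=xml;1")

        # name ; url (anything that isn't a semicolon) ; numeric id
        pattern = re.compile(r'([a-zA-Z0-9]+);(https?://[^;]+);([0-9]+)')

        match = pattern.match(line)
        if match:
            name, url, feed_id = match.groups()
            print(url)  # the full URL, query string included

    The same character-class trick carries over to Java, where [^;]+ needs no escaping at all.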

    Read the article

  • How to loop over nodes in an XML feed using Scrapy (Python)

    - by Kour ipm
    Hi, I am working with Scrapy and trying XML feeds for the first time. Below is my code:

        class TestxmlItemSpider(XMLFeedSpider):
            name = "TestxmlItem"
            allowed_domains = {"http://www.nasinteractive.com"}
            start_urls = [
                "http://www.nasinteractive.com/jobexport/advance/hcantexasexport.xml"
            ]
            iterator = 'iternodes'
            itertag = 'job'

            def parse_node(self, response, node):
                title = node.select('title/text()').extract()
                job_code = node.select('job-code/text()').extract()
                detail_url = node.select('detail-url/text()').extract()
                category = node.select('job-category/text()').extract()
                print title, ";;;;;;;;;;;;;;;;;;;;;"
                print job_code, ";;;;;;;;;;;;;;;;;;;;;"
                item = TestxmlItem()
                item['title'] = node.select('title/text()').extract()
                .......
                return item

    The result:

        File "/usr/lib/python2.7/site-packages/Scrapy-0.14.3-py2.7.egg/scrapy/item.py",
            line 56, in __setitem__
            (self.__class__.__name__, key))
        exceptions.KeyError: 'TestxmlItem does not support field: title'

    There are 200+ items in total, so I need to loop over the nodes and assign each node's text to an item, but when I print, all the results are displayed at once. How can I loop over nodes when scraping XML files with XMLFeedSpider?
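    The KeyError indicates that the TestxmlItem class does not declare a 'title' field; Scrapy items reject any key that was not declared as a Field. Note also that XMLFeedSpider already calls parse_node once per <job> node, so no explicit loop is needed. A sketch of a matching item definition and parse_node (the field names mirror the question; everything else is illustrative):

        from scrapy.item import Item, Field

        class TestxmlItem(Item):
            # Every key assigned as item['key'] must be declared here.
            title = Field()
            job_code = Field()
            detail_url = Field()
            category = Field()

    and in the spider:

        def parse_node(self, response, node):
            # Called once per <job> node; return one populated item each time.
            item = TestxmlItem()
            item['title'] = node.select('title/text()').extract()
            item['job_code'] = node.select('job-code/text()').extract()
            item['detail_url'] = node.select('detail-url/text()').extract()
            item['category'] = node.select('job-category/text()').extract()
            return item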

    Read the article

  • How can I create a Google spreadsheet using C#?

    - by Kev
    I've installed the latest Google Data API for .NET, but I cannot figure out how to create a spreadsheet using C# code; it's not included in the sample programs. I've tried this:

        SpreadsheetsService ss = new SpreadsheetsService("Spreadsheet Example");
        ss.setUserCredentials("[email protected]", "password");
        SpreadsheetEntry se = new SpreadsheetEntry();
        se.Title.Text = "new";
        ss.Insert(new Uri("http://spreadsheets.google.com/feeds/spreadsheets/private/full"), se);

    However, it doesn't work! Is there some way to do this job? Thank you!

    Read the article

  • Get only new RSS entries with a PHP script?

    - by ArneRie
    What I'm trying to do: fetch X RSS feeds from my blogs and echo only the new entries. My problem is knowing which items have already been parsed. My solution so far: fetch each feed every 5 hours and store all titles in a database table or flat file; on the next run, check whether each title is already in the database, and if not, print it and save it to the database. But I am not sure this is best practice. If someone knows a faster way, that would be great. Sorry for my poor English.
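    Titles are not guaranteed unique, so feed readers usually key on each entry's GUID (falling back to the link) rather than the title, and use HTTP conditional GETs so an unchanged feed is not re-downloaded at all. The question is about PHP, but the bookkeeping is language-agnostic; here is the idea as a small Python sketch with feedparser (the file name and JSON storage are illustrative choices):

        import json
        import feedparser

        SEEN_FILE = 'seen_ids.json'

        def fetch_new_entries(url, seen):
            feed = feedparser.parse(url)
            new = []
            for entry in feed.entries:
                # Prefer the feed's unique id; fall back to the link.
                uid = entry.get('id') or entry.get('link')
                if uid and uid not in seen:
                    seen.add(uid)
                    new.append(entry)
            return new

        try:
            with open(SEEN_FILE) as f:
                seen = set(json.load(f))
        except IOError:
            seen = set()

        for entry in fetch_new_entries('http://example.com/rss', seen):
            print(entry.title)

        with open(SEEN_FILE, 'w') as f:
            json.dump(sorted(seen), f)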

    Read the article

  • Browser Not Reading Entire XML File

    - by Chris
    I have an XML file that is written by a PHP script. The data for the XML file is gathered from several different RSS feeds, and the script is invoked every 5 minutes by a cron job. The script takes maybe 5-10 seconds to write the XML file. Here's the problem: after the XML file is written, I can open it through Dreamweaver and read everything just fine, but when I enter the XML file's URL into my web browser (IE or Firefox), I get an "XML Parsing Error: not well-formed" error in the browser. When I do View Source in the browser, the XML file appears incomplete, but when I open the file directly off the server, it is complete. Anyone know what's going on here?
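    A common cause of exactly these symptoms is a non-atomic write: during the 5-10 seconds the script spends writing, any HTTP request sees a truncated file, while a copy opened later is complete. The usual fix is to write to a temporary file and rename it over the target, which swaps the content in one atomic step on the same filesystem. A Python sketch of the pattern (PHP's rename() is used the same way):

        import os
        import tempfile

        def write_atomically(path, data):
            # Write the new content to a temp file in the same directory...
            directory = os.path.dirname(path) or '.'
            fd, tmp_path = tempfile.mkstemp(dir=directory)
            with os.fdopen(fd, 'w') as tmp:
                tmp.write(data)
            # ...then rename it into place: readers always see either the
            # old complete file or the new complete file, never a partial.
            os.replace(tmp_path, path)

        write_atomically('feed.xml', '<?xml version="1.0"?><rss>...</rss>')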

    Read the article

  • Getting the number of FeedBurner subscribers?

    - by iMaster
    I'm currently using this code to get the number of subscribers to my blog:

        $whaturl = "http://api.feedburner.com/awareness/1.0/GetFeedData?uri=http://feeds.feedburner.com/DesignDeluge";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_URL, $whaturl);
        $data = curl_exec($ch);
        curl_close($ch);
        $xml = new SimpleXMLElement($data);
        $fb = $xml->feed->entry['circulation'];

    but when I echo $fb, the whole page that $fb is echoed on doesn't appear at all. Any ideas why this isn't working?

    Read the article

  • Windows desktop gadget: colour-coding RSS feed items

    - by padjo
    I have a desktop gadget that pulls RSS feeds from a website. The feed contains information about issues: priority, time, and description. The feed items are displayed on the desktop; however, I need to colour-code them according to their priority (e.g. 1 = red) using the substr function. Is there a better way to do this in JavaScript/HTML? At the moment I've hacked together something like this, but is there a more elegant solution?

        var priority = feed.item.description.substr(10, 1);
        if (priority == "1") {
            document.write('<a style="color:red" href=' + item + '>');
        } else if (priority == "2") {
            document.write('<a style="color:yellow" href=' + item + '>');
        } else {
            document.write('<a style="color:green" href=' + item + '>');
        }

    Read the article

  • Detecting Available Qualities of YouTube Videos

    - by Langdon
    I'm writing a Boxee app that makes use of YouTube videos, and I want to be able to display the highest-quality version available. I was looking through the YouTube API, but I can't seem to find a way to detect whether 720p and/or 1080p versions of a video are available. Does anyone know how to do this? I'm already using the Data API to collect information about the video, but there doesn't seem to be anything in the payload about the different qualities consumable on the web: http://gdata.youtube.com/feeds/api/videos/NWHfY_lvKIQ I could just hardcode fmt=22 and let it default to a lesser-quality version, but then I miss out on 1080p (fmt=37).

    Read the article

  • Which sites/blogs would you recommend for learning advanced CSS techniques?

    - by metal-gear-solid
    Which sites/blogs (not books) would you recommend for learning advanced CSS techniques (not the basics)? Ideally a site that publishes new pure-CSS techniques and articles (no client-side or server-side scripting) on a daily or weekly basis, or any pure-CSS RSS feeds you can suggest. My aim is to learn one new technique/trick daily. I know some of the well-known blogs, but do you know any good blogs/sites that are not so well known? I want to find some hidden treasures.

    Read the article

  • How to customise the TextView inside a Spinner?

    - by Janusz
    I have a Spinner with an ArrayAdapter that feeds values into it. The problem is that the text is too long for the view, and the result is a very, very ugly spinner, as can be seen in the screenshot. I tried to pass the id of my own TextView into the adapter, but every time the spinner is shown I get an exception that the id I supplied is not valid:

        04-26 17:38:39.695: ERROR/AndroidRuntime(4276):
            android.content.res.Resources$NotFoundException:
            Resource ID #0x7f09003a type #0x12 is not valid

    Where do I have to define the TextView? In a separate XML file? With a surrounding ViewGroup? It would help me a lot to see an example of the adapter initialization and the TextView definition.

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help of two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O- or CPU-bound, just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org",
                           "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions: 1) Given that my hrefArray contains 4 URLs, if the array contained, say, 1,000 product URLs, this code would spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and with 1,000 URLs as an example, split the workload into 100 products per child (10 x 100)? 2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing, spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: split href array into $maxChildWorkers # of
        // workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it to the child processes. Currently, everything I've tried causes loops in the child processes; i.e. my hrefsArray gets built in the master and in each subsequent child process. I am sure I am going about this totally wrong, so I would greatly appreciate a general nudge in the right direction.
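    One common shape for the answer to both questions: build the full URL list once in the parent, split it into one chunk per worker, and fork exactly one child per chunk, so the worker count is capped and each child only ever touches its own slice. The sketch below shows that pattern in Python with os.fork, which maps one-to-one onto pcntl_fork (POSIX-only, like the original; the URLs are placeholders and this illustrates the structure rather than being a drop-in PHP replacement):

        import os

        MAX_WORKERS = 10

        def chunk(items, n):
            """Split items into n roughly equal slices (done in the parent)."""
            return [items[i::n] for i in range(n)]

        def scrape(url):
            print("PID %d scraping %s" % (os.getpid(), url))

        # Build the full list ONCE, in the parent, before any fork happens.
        hrefs = ["http://example.com/product/%d" % i for i in range(1000)]

        pids = []
        for work in chunk(hrefs, MAX_WORKERS):
            pid = os.fork()
            if pid == 0:
                # Child: process only this chunk, then _exit so control
                # never falls back into the parent's loop (the symptom
                # described as "loops in the child processes").
                for url in work:
                    scrape(url)
                os._exit(0)
            else:
                pids.append(pid)

        for pid in pids:
            os.waitpid(pid, 0)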

    Read the article

  • Free RSS feed caching

    - by cherouvim
    Hello, I've got an application which serves an RSS feed of headlines, and I need to provide this feed to other consumers. I don't want to serve the RSS directly from my server, though, due to limited server resources, so I need to proxy (cache) it through some service which will handle the load. Assuming the RSS feed URL of my application is http://example.com/rss, I initially provided my consumers with the URL http://ajax.googleapis.com/ajax/services/feed/load?v=1.0&q=http%3A%2F%2Fexample.com%2Frss which solved my server-load problem but introduced a liveness problem: the headlines are minutes to hours behind the actual feed (I haven't measured exactly how much). I've also tried distributing through FeedBurner, so the URL became something like http://feeds.feedburner.com/example123?format=xml, but the liveness problem still exists. Is there a public and free solution for this problem? Anything below 5 minutes of delay would be totally acceptable. Thanks

    Read the article

  • Unable to upload large files to Google Docs

    - by Preeti
    Hi, I am uploading a document to Google Docs like this:

        DocumentsService myService = new DocumentsService("");
        myService.setUserCredentials("[email protected]", password);
        DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");

    But when I try to upload a file of 3 MB, it results in an exception:

        An unhandled exception of type 'Google.GData.Client.GDataRequestException'
        occurred in Google.GData.Client.dll
        Additional information: Execution of request failed:
        http://docs.google.com/feeds/documents/private/full

    How can I upload a large file to Google Docs? I am using Google API ver 2. Thanks

    Read the article

  • Django equivalent to paster for backend processes

    - by intractelicious
    I use Pylons in my job, but I'm new to Django. I'm making an RSS filtering application, so I'd like to have two backend processes that run on a schedule: one to crawl the RSS feeds for each user, and another to determine the relevance of individual posts relative to users' past preferences. In Pylons, I'd just write paster commands to update the db with that data. Is there an equivalent in Django? E.g., is there a way to run the equivalent of python manage.py shell in a non-interactive mode?
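    Django's counterpart to a paster command is a custom management command: a module placed under yourapp/management/commands/ whose class extends BaseCommand, which then runs non-interactively via manage.py and is easy to call from cron. A minimal sketch (the app name, command name, and body are placeholders):

        # yourapp/management/commands/crawl_feeds.py
        from django.core.management.base import BaseCommand

        class Command(BaseCommand):
            help = "Crawl each user's RSS feeds and store new posts."

            def handle(self, *args, **options):
                # The full ORM is available here, exactly as in
                # `python manage.py shell`.
                # for user in User.objects.all(): crawl_feeds_for(user)
                self.stdout.write("crawl complete\n")

    Invoked as python manage.py crawl_feeds, for example from a crontab entry.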

    Read the article

  • Facebook Graph API shows different results in me/home

    - by elekatonio
    Hi, when I do a GET with my browser (already logged in at Facebook): https://graph.facebook.com/me/home?access_token={token} the results are different from doing the same via an FB app using the Facebook C# SDK. Specifically, what the API is not returning are feed items posted by other applications. Why is this happening? Can't an application retrieve updates from other applications even if it has the read_stream permission? I even requested additional permissions: read_stream, user_activities, friends_activities, friends_likes, user_likes, read_requests, but nothing has changed. What I need is to get ALL the stories, the same ones a user would see in his FB news feed.

    Read the article

  • How to remove $ from an associative array using json_decode in PHP?

    - by Chase
    I am trying to use the YouTube API to pull down some videos for my site. Currently I am running this code:

        // YouTube videos pulldown
        $youtubeURL = "http://gdata.youtube.com/feeds/api/videos?alt=json&q=cats+cradle+chapel+hill&orderby=published&max-results=10&v=2";
        $youtubeSearch = file_get_contents($youtubeURL, true);
        $youtubeArray = json_decode($youtubeSearch, true);

    I'm not having any problems accessing certain elements of the associative array; however, YouTube's API puts $ in many of its array keys, such as [media$group]. Any time I try to access an array element whose key contains $, it doesn't work. Suggestions? I have tried preg_replace but can't seem to get my expression right.

    Read the article

  • What does it take to get the "LyricArtist" from this XML feed using Nokogiri?

    - by fail.
    First, the XML: http://api.chartlyrics.com/apiv1.asmx//GetLyric?lyricId=90&lyricCheckSum=9600c891e35f602eb6e1605fb7b5229e

        doc = Nokogiri::XML(open("http://api.chartlyrics.com/apiv1.asmx//GetLyric?lyricId=90&lyricCheckSum=9600c891e35f602eb6e1605fb7b5229e"))

    successfully grabs the document content. After this point I am unable to get at the data, and I am not sure why. For example, I would expect doc.xpath("//LyricArtist") to return the artist, but it does not. I have tried the same thing with other feeds, such as the default RSS feed that any WordPress installation provides, and if I do something like doc.xpath("//link") I get a list of all the links. I am definitely missing something and would love your input. Thank you!
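    A likely culprit is a default XML namespace on the ChartLyrics response: the XPath //LyricArtist only matches elements in no namespace, whereas //link works on WordPress RSS because those elements are unnamespaced. In Nokogiri the quick workaround is doc.remove_namespaces!; the namespace-aware version of the query is sketched below in Python with lxml (the namespace URI is a guess to be replaced with whatever the document's root actually declares):

        from urllib.request import urlopen
        from lxml import etree

        URL = ('http://api.chartlyrics.com/apiv1.asmx//GetLyric'
               '?lyricId=90&lyricCheckSum=9600c891e35f602eb6e1605fb7b5229e')

        doc = etree.parse(urlopen(URL))

        # Inspect the root's namespace declarations to see what to register.
        print(doc.getroot().nsmap)

        # Bind a prefix to the document's default namespace and query with it
        # (the URI below is an assumption; substitute what nsmap reports).
        ns = {'cl': 'http://api.chartlyrics.com/'}
        for artist in doc.xpath('//cl:LyricArtist', namespaces=ns):
            print(artist.text)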

    Read the article

  • Can't switch tab, replace an input's value, and call a function at the same time

    - by marharépa
    Hi! I feel bad asking all my little questions here, but I can't find the answer via Google. :( I'd like to switch tabs, replace an input value, and call a function with one click. The JS:

        function ApplyTableId(id) {
            var $tabs = $('#tabs').tabs();
            $('a.stat').click(function() {
                $tabs.tabs('select', 2); // switch to third tab
            });
            $('tableId').val('ga:' + id); // replace the input with id=tableId's val
            getAccountFeed();             // call another function
        }

    The other JS, which should be called by the first script:

        function getAccountFeed() {
            var myFeedUri = 'https://www.google.com/analytics/feeds/accounts/default?max-results=50';
            myService.getAccountFeed(myFeedUri, handleAccountFeed, handleError);
        }

    This is what I want to call, and here is the HTML:

        TAB1: <a class="stat" onClick="return ApplyTableId(this.getAttribute('id'));" id="7777777">asd</a>
        TAB3: <input type="text" value="asd" id="tableId"/>

    Please tell me what I did wrong. :(

    Read the article

  • RSS Reader php (have already read related articles)

    - by lightingwrist
    Hey there. I've read all the related articles on here and can't find one that is specific to what I am looking for. I am new to RSS and am looking for the following kind of reader, if anyone knows the right direction to throw me in: an RSS reader that I can put on my page that does NOT require a MySQL database; a fairly light chunk of code to which I can add as many .xml/rss.php links/addresses as I like; something I can wrap divs around, to style each segment as specifically as possible; and the ability to manually limit the number of feed items that are read, so the page's content output stays within my design. Thanks in advance!

    Read the article

  • Creating folders in Google Docs using PHP

    - by Isaac
    Hi, currently I am working on a project integrating Google Docs with my application using PHP. However, there is only version 1 of the PHP client library, and I am not well versed in REST web services. I am required to create a folder using the API. I wonder if anyone knows how to do it? Below is the protocol example for the creation of a folder (the request body is an Atom entry whose category marks it as a folder). If anyone knows how to do it, I would be glad of your assistance. Thank you in advance.

        POST /feeds/default/private/full HTTP/1.1
        Host: docs.google.com
        GData-Version: 3.0
        Authorization: <your authorization header here>
        Content-Length: 245
        Content-Type: application/atom+xml

        <?xml version='1.0' encoding='UTF-8'?>
        <entry xmlns="http://www.w3.org/2005/Atom">
          <category scheme="http://schemas.google.com/g/2005#kind"
              term="http://schemas.google.com/docs/2007#folder"/>
          <title>Example Folder</title>
        </entry>
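    For reference while wiring this up by hand, a short Python sketch that issues the same POST with that Atom body (the Authorization value is a placeholder; obtaining a valid token is a separate step):

        import urllib.request

        FOLDER_ENTRY = """<?xml version='1.0' encoding='UTF-8'?>
        <entry xmlns="http://www.w3.org/2005/Atom">
          <category scheme="http://schemas.google.com/g/2005#kind"
              term="http://schemas.google.com/docs/2007#folder"/>
          <title>Example Folder</title>
        </entry>"""

        request = urllib.request.Request(
            'https://docs.google.com/feeds/default/private/full',
            data=FOLDER_ENTRY.encode('utf-8'),
            headers={
                'GData-Version': '3.0',
                'Content-Type': 'application/atom+xml',
                'Authorization': 'Bearer YOUR_TOKEN',  # placeholder credential
            })
        print(urllib.request.urlopen(request).read())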

    Read the article
