TypeError: coercing to Unicode: need string or buffer, User found

Posted by Clemens on Stack Overflow
Published on 2010-04-13T12:37:57Z

Hi,

I have to crawl last.fm for users (a university exercise). I'm new to Python and I get the following error:

 Traceback (most recent call last):
  File "crawler.py", line 23, in <module>
    for f in user_.get_friends(limit='200'):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 2717, in get_friends
    for node in _collect_nodes(limit, self, "user.getFriends", False):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 3409, in _collect_nodes
    doc = sender._request(method_name, cacheable, params)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 969, in _request
    return _Request(self.network, method_name, params).execute(cacheable)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 721, in __init__
    self.sign_it()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 727, in sign_it
    self.params['api_sig'] = self._get_signature()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 740, in _get_signature
    string += self.params[name]
TypeError: coercing to Unicode: need string or buffer, User found
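
If I read the last line right, this is just Python 2's message for concatenating a unicode string with a non-string object. A tiny standalone snippet (with a made-up User class, unrelated to pylast, just to show the message) reproduces it:

class User(object):
    pass

# concatenating unicode with a non-string object raises the same error
u'api_sig' + User()
# TypeError: coercing to Unicode: need string or buffer, User found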

I use the pylast lib for crawling. What I want to do:

I want to get a user's friends and the friends of those friends. The error occurs when I have a for loop inside another for loop. Here's the code:

import pylast

# API_KEY, API_SECRET, username and password_hash are defined earlier
network = pylast.get_lastfm_network(api_key=API_KEY, api_secret=API_SECRET,
                                    username=username, password_hash=password_hash)
user = network.get_user("vidarnelson")

friends = user.get_friends(limit='200')

i = 1

for friend in friends:
    user_ = network.get_user(friend)
    print '#%d %s' % (i, friend)
    i = i + 1

    # this is the call that raises the TypeError (line 23 in the traceback)
    for f in user_.get_friends(limit='200'):
        print f
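
My guess is that get_friends() already returns pylast.User objects, so network.get_user(friend) hands a User object to a call that expects a username string. If that's right, iterating on the friend object directly (or passing friend.get_name()) should avoid the error. A minimal sketch of that variant, reusing the names from the code above:

i = 1

for friend in friends:
    print '#%d %s' % (i, friend)
    i = i + 1

    # friend is already a pylast.User; no second lookup via network.get_user()
    # should be needed (alternatively: network.get_user(friend.get_name()))
    for f in friend.get_friends(limit='200'):
        print f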

Any advice?

Thanks in advance. Regards!

