Search Results

Search found 5416 results on 217 pages for 'urls py'.


  • Will ranking be affected with a mobile XML sitemap for a mobile site with the same URLs as the desktop site?

    - by Emil Rasmussen
    We have a site with both a desktop version and a mobile version. Most of the content is the same, and both versions share the same URLs, but the HTML generated is device-specific. Looking at Google's recommendations for smartphone-optimized sites, one could get the impression that the mobile XML sitemap is only for sites with different URLs. Will ranking be affected - negatively or positively - if we add a mobile XML sitemap that is effectively a duplicate of the desktop sitemap?

    Read the article

  • How to Remove Extensions From URLs and Force a Trailing Slash at the End?

    - by Kronbernkzion
    Example of the current file structure:

        example.com/foo.php
        example.com/bar.html
        example.com/directory/
        example.com/directory/foo.php
        example.com/directory/bar.html
        example.com/cgi-bin/directory/foo.cgi

    I would like to remove the HTML, PHP, and CGI extensions and force a trailing slash at the end of URLs, so it would look like this:

        example.com/foo/
        example.com/bar/
        example.com/directory/
        example.com/directory/foo/
        example.com/directory/bar/
        example.com/cgi-bin/directory/foo/

    I am very frustrated: I've searched 17 hours straight for a solution and visited more than a few hundred pages on various blogs and forums. I'm not joking. So I think I've done my research. Here is the code that sits in my .htaccess file right now:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$
        RewriteRule (.*)$ /$1/ [R=301,L]

    As you can see, this code only removes .html (and I'm not very happy with it, because I think it could be done a lot simpler). I can remove the extension from PHP files if I rename them to .html through .htaccess, but that's not what I want: I want to remove it directly. This is the first thing I don't know how to do.

    The second thing is actually very annoying. With the code above, my .htaccess appends .html/ to every string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (and /bar obviously doesn't exist, since foo is a file), instead of just displaying a "page not found" message, it converts the request to example.com/directory/foo/bar.html/, searches for a file for a few seconds, and only then displays the "not found" message. This, of course, is bad behavior.

    So, once again, I need the .htaccess code to do the following:

    - Remove the .html extension
    - Remove the .php extension
    - Remove the .cgi extension
    - Force a trailing slash at the end of URLs
    - Handle requests correctly (no appending of trailing slashes or extensions when the file or directory doesn't exist on the server)
    - Be as simple as possible

    I would very much appreciate any help. To the first person who gives me the solution, I'll send two $50 iTunes Store gift cards for the US store. If this offends anyone, I am truly sorry and I apologize. Thanks in advance.

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer for human beings to use. My question is whether they actually make a difference to the ranking algorithms.)

    Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post addresses the worst case, where URL rewriting is done poorly and you'd be better off sticking with a dynamic URL than with a mangled static "SEO-friendly" URL. There's no question that dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given two otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"?

        A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking
        B) http://stackoverflow.com?question=505793 (a fake URL, for comparison only)

    Read the article

  • Can't install psycopg2 (Python2.6 Ubuntu9.10 PostgreSQL8.4.2)

    - by yuv
        ~/psycopg2-2.0.13$ python setup.py install
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-2.6
        creating build/lib.linux-x86_64-2.6/psycopg2
        copying lib/__init__.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/pool.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/extensions.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/errorcodes.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/extras.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/tz.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/psycopg1.py -> build/lib.linux-x86_64-2.6/psycopg2
        running build_ext
        error: No such file or directory

    or

        $ sudo easy_install psycopg2==2.0.13
        Searching for psycopg2==2.0.13
        Reading http://pypi.python.org/simple/psycopg2/
        Reading http://initd.org/projects/psycopg2
        Reading http://initd.org/pub/software/psycopg/
        Best match: psycopg2 2.0.13
        Downloading http://initd.org/pub/software/psycopg/psycopg2-2.0.13.tar.gz
        Processing psycopg2-2.0.13.tar.gz
        Running psycopg2-2.0.13/setup.py -q bdist_egg --dist-dir /tmp/easy_install-QuViWt/psycopg2-2.0.13/egg-dist-tmp-P2W9Dh
        error: Setup script exited with error: No such file or directory

    Thanks in advance.

    Read the article

  • Python: shutil.rmtree fails on Windows with 'Access is denied'

    - by Sridhar Ratnakumar
    In Python, when running shutil.rmtree over a folder that contains a read-only file, the following exception is printed:

        File "C:\Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\Python26\lib\shutil.py", line 221, in rmtree
          onerror(os.remove, fullname, sys.exc_info())
        File "C:\Python26\lib\shutil.py", line 219, in rmtree
          os.remove(fullname)
        WindowsError: [Error 5] Access is denied: 'build\\tcl\\tcl8.5\\msgs\\af.msg'

    Looking in the File Properties dialog, I noticed that the af.msg file is set to read-only. So the question is: what is the simplest workaround/fix for this, given that my intention is to do the equivalent of rm -rf build/ on Windows? (Without using third-party tools like unxutils or cygwin - this code is targeted to run on a bare Windows install with Python 2.6 and PyWin32.)
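    A common workaround is to pass an onerror callback that clears the read-only attribute and retries the failed operation. A minimal sketch (the helper name is mine):

        import os
        import stat
        import shutil

        def handle_remove_readonly(func, path, exc_info):
            # Clear the read-only attribute, then retry the os call that failed.
            os.chmod(path, stat.S_IWRITE)
            func(path)

        shutil.rmtree('build', onerror=handle_remove_readonly)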

    Read the article

  • Python C extension: problems with dlopen on Mac OS

    - by Jason Sundram
    I've taken a library that is distributed as a binary lib (.a) and header, written some C++ code against it, and want to wrap the result up in a Python module. I've done this here. The problem is that when importing this module on Mac OS X (I've tried 10.5 and 10.6), I get the following error:

        dlopen(/Library/Python/2.5/site-packages/dirac.so, 2): Symbol not found: _DisposePtr
          Referenced from: /Library/Python/2.5/site-packages/dirac.so
          Expected in: dynamic lookup

    It looks like symbols defined in the Carbon framework aren't being properly resolved, but I'm not sure what to do about that. I am supplying -framework Carbon to distutils.core.Extension's extra_link_args parameter, so I'm not sure what else I should do. Any help would be much appreciated.
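    One thing worth double-checking is how that flag is passed: distutils hands each extra_link_args entry to the linker as a separate token, so "-framework Carbon" as a single string will not link correctly. A minimal setup.py sketch - the module and source names are hypothetical:

        from distutils.core import setup, Extension

        dirac = Extension(
            'dirac',
            sources=['dirac_wrap.cpp'],
            # Two separate list items, not the single string '-framework Carbon'.
            extra_link_args=['-framework', 'Carbon'],
        )

        setup(name='dirac', ext_modules=[dirac])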

    Read the article

  • How do I generate coverage xml report for a single package?

    - by Wraith
    I'm using nose and coverage to generate coverage reports. I only have one package right now, ae, so I specify to cover only that:

        nosetests -w tests/unit --with-xunit --with-coverage --cover-package=ae

    And here are the results, which look good:

        Name      Stmts   Exec  Cover   Missing
        ----------------------------------------
        ae            1      1   100%
        ae.util     253    224    88%   39, 63-65, 284, 287, 362, 406
        ----------------------------------------
        TOTAL       263    234    88%
        ----------------------------------------------------------------------
        Ran 68 tests in 5.292s

    However, when I run coverage xml, coverage pulls in more packages than necessary, including the Python email and logging packages, which have nothing to do with my code. If I run coverage xml ae, I get this error:

        No source for code: '/home/wraith/dev/projects/trimurti/src/ae': [Errno 21] Is a directory: '/home/wraith/dev/projects/trimurti/src/ae'

    Is there a way to generate the XML for just the ae package?
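    If the installed coverage.py is reasonably recent, its reporting API accepts include/omit filename patterns, which avoids pulling in the stdlib packages. A sketch under that assumption (the include pattern is a guess at the source layout):

        import coverage

        cov = coverage.coverage()
        cov.load()  # read the .coverage data file left behind by the test run
        cov.xml_report(include=['*/ae/*'], outfile='coverage.xml')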

    Read the article

  • How can I process command line arguments in Python?

    - by photographer
    What would be an easy expression to process command line arguments, if I'm expecting anything like 001 or 999 (let's limit expectations to the 001...999 range for this time) plus a few other arguments, and would like to ignore anything unexpected? I understand that if, for example, I need to find out whether "debug" was passed among the parameters, it'll be something like this:

        if 'debug' in argv[1:]:
            print 'Will be running in debug mode.'

    How do I find out whether 009 or 575 was passed? All of these are expected calls:

        python script.py
        python script.py 011
        python script.py 256 debug
        python script.py 391 xls
        python script.py 999 debug pdf

    At this point I don't care about calls like these:

        python script.py 001 002 245 568
        python script.py some unexpected argument
        python script.py 0001
        python script.py 02

    ...the first, because of more than one "numeric" argument; the second because of... well, unexpected arguments; the third and fourth because of non-3-digit arguments.
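    One straightforward shape for this - a sketch rather than a full validator - is to pick out the three-digit argument with a regex and treat the rest as flags:

        import re
        import sys

        args = sys.argv[1:]

        # Arguments that are exactly three digits, in the 001-999 range.
        numbers = [a for a in args if re.match(r'^\d{3}$', a) and a != '000']
        number = numbers[0] if len(numbers) == 1 else None  # ignore ambiguous calls

        debug = 'debug' in args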

    Read the article

  • Can I debug with the Python debugger when using py.test?

    - by Joel
    I am using py.test for unit testing my Python program. I wish to debug my test code with the Python debugger in the normal way (by which I mean pdb.set_trace() in the code), but I can't make it work. Putting pdb.set_trace() in the code doesn't work (it raises IOError: reading from stdin while output is captured). I have also tried running py.test with the option --pdb, but that doesn't seem to do the trick if I want to explore what happens before my assertion: it breaks when an assertion fails, and moving on from that line means terminating the program. Does anyone know a way to get debugging to work, or are debugging and py.test just not meant to be together?
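    For what it's worth, the IOError comes from py.test's output capturing, and pdb.set_trace() generally works once capturing is disabled with the -s flag (shorthand for --capture=no). A minimal sketch - the file and function names are mine:

        # test_example.py -- run with: py.test -s test_example.py
        import pdb

        def compute():
            return 41 + 1

        def test_something():
            value = compute()
            pdb.set_trace()  # drops into pdb, since -s leaves stdin/stdout alone
            assert value == 42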

    Read the article

  • How can I efficiently group a large list of URLs by their host name in Perl?

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign URLs to groups, based on host address:

        {
            'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
            'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
        }

    My current basic solution takes about 600 MB of RAM to do this (the size of the file is about 300 MB). Could you provide some more efficient approaches? My current solution simply reads line by line, extracts the host address by regex, and puts the URL into a hash.

    EDIT: Here is my implementation (I've cut off irrelevant things):

        while ($line = <STDIN>) {
            chomp($line);
            $line =~ /(http:\/\/.+?)(\/|$)/i;
            $host = "$1";
            push @{$urls{$host}}, $line;
        }
        store \%urls, 'out.hash';

    Read the article

  • Why can't I get Python's urlopen() method to work?

    - by froadie
    Why isn't this simple Python code working?

        import urllib
        file = urllib.urlopen('http://www.google.com')
        print file.read()

    This is the error that I get:

        Traceback (most recent call last):
          File "C:\workspace\GarchUpdate\src\Practice.py", line 26, in <module>
            file = urllib.urlopen('http://www.google.com')
          File "C:\Python26\lib\urllib.py", line 87, in urlopen
            return opener.open(url)
          File "C:\Python26\lib\urllib.py", line 206, in open
            return getattr(self, name)(url)
          File "C:\Python26\lib\urllib.py", line 345, in open_http
            h.endheaders()
          File "C:\Python26\lib\httplib.py", line 892, in endheaders
            self._send_output()
          File "C:\Python26\lib\httplib.py", line 764, in _send_output
            self.send(msg)
          File "C:\Python26\lib\httplib.py", line 723, in send
            self.connect()
          File "C:\Python26\lib\httplib.py", line 704, in connect
            self.timeout)
          File "C:\Python26\lib\socket.py", line 514, in create_connection
            raise error, msg
        IOError: [Errno socket error] [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

    I've tried it with several different pages, but I can never get the urlopen method to execute correctly.
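    Error 10060 is a plain connection timeout, which usually points at a firewall or a mandatory proxy on the network rather than at the code itself. If a proxy is required, Python 2's urllib can be told about it explicitly - a sketch, with a placeholder proxy address:

        import urllib

        # Placeholder address; substitute the actual proxy for your network.
        proxies = {'http': 'http://myproxy.example.com:8080'}
        f = urllib.urlopen('http://www.google.com', proxies=proxies)
        print f.read()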

    Read the article

  • How can this verbose, unpythonic routine be improved?

    - by fmark
    Is there a more pythonic way of doing this? I am trying to find the eight neighbours of an integer coordinate lying within an extent. I am interested in reducing its verbosity without sacrificing execution speed.

        def fringe8((px, py), (x1, y1, x2, y2)):
            f = [(px - 1, py - 1), (px - 1, py), (px - 1, py + 1),
                 (px, py - 1), (px, py + 1),
                 (px + 1, py - 1), (px + 1, py), (px + 1, py + 1)]
            f_inrange = []
            for fx, fy in f:
                if fx < x1:
                    continue
                if fx >= x2:
                    continue
                if fy < y1:
                    continue
                if fy >= y2:
                    continue
                f_inrange.append((fx, fy))
            return f_inrange
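    For what it's worth, the loop collapses naturally into a single list comprehension. A sketch that preserves the original signature and result order:

        def fringe8((px, py), (x1, y1, x2, y2)):
            # The eight neighbours of (px, py), clipped to the half-open
            # extent [x1, x2) x [y1, y2).
            return [(fx, fy)
                    for fx in (px - 1, px, px + 1)
                    for fy in (py - 1, py, py + 1)
                    if (fx, fy) != (px, py)
                    and x1 <= fx < x2 and y1 <= fy < y2]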

    Read the article

  • What is the base open source Java package to filter/match URLs?

    - by Boaz
    Hi, I have a high-performance application that deals with URLs. For every URL it needs to retrieve the appropriate settings from a predefined pool. Every settings object is associated with a URL pattern that indicates which URLs should use these settings. The matching rules are as follows:

    - "google.com" should match all URLs pointing to the google domain (thus, maps.google.com and www.google.com/match are matched).
    - "*.google.com" should match all URLs pointing to a subdomain of google.com (thus, maps.google.com matches, but google.com and www.google.com don't).
    - "maps.google.com" should match all URLs pointing to this specific subdomain.

    Apart from the above rules, every match rule can contain a path, which means that the path part of the URL should start with the match rule's path. So "*.google.com/maps" matches "maps.google.com/maps" but not "maps.google.com/advanced".

    As you can see, the rules above overlap. When two rules match the same URL, the most specific one should apply; the list above is ranked from least specific to most specific. This seems like such a standard problem that I was hoping to use a ready-made library rather than program it myself. Google reveals a couple of options, but without a clear way to choose between them. What would you recommend as a good library for this task? Thanks, Boaz
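    The question asks for a Java library, but pinning down the semantics helps when evaluating one. Here is a rough, illustrative Python sketch of the rules above; the "longest matching pattern wins" specificity heuristic and the treatment of www as the bare domain are my reading of the spec:

        from urlparse import urlsplit  # Python 2

        def rule_matches(rule, url):
            # rule: e.g. "google.com", "*.google.com", "*.google.com/maps"
            host_pat, _, path_pat = rule.partition('/')
            parts = urlsplit(url if '://' in url else 'http://' + url)
            host = parts.hostname or ''
            path = parts.path.lstrip('/')
            if host_pat.startswith('*.'):
                base = host_pat[2:]
                # Proper subdomains only; www counts as the bare domain here.
                if not host.endswith('.' + base) or host == 'www.' + base:
                    return False
            elif host != host_pat and not host.endswith('.' + host_pat):
                return False
            # ''.startswith('') is True, so rules without a path always pass.
            return path.startswith(path_pat)

        def best_rule(rules, url):
            # Crude specificity: the longest matching pattern wins.
            matches = [r for r in rules if rule_matches(r, url)]
            return max(matches, key=len) if matches else None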

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped CSV into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    - Every record read from the CSV is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the CSV header), and some values converted (to datetime, to integers, to floats, etc.).
    - For each record read from the CSV, a lookup is made into the timezones collection to map the record location to the respective time zone.
    - If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. Still, the process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT: The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT 2: OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelizing the task.

    EDIT 3: Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT 4: I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
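    Since consecutive records frequently share the same or nearby locations, one cheap mitigation is to memoize the time-zone lookup, keyed by coordinates rounded to the lookup radius, so repeat queries never reach mongo at all. A sketch only - the cache granularity and helper name are my assumptions:

        from bson.son import SON  # SON as used in read_csv_zip above

        _tz_cache = {}

        def lookup_tz(timezones, lng, lat, radius):
            # Round to the lookup radius so nearby points share a cache entry.
            key = (round(lng / radius), round(lat / radius))
            if key not in _tz_cache:
                _tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - radius, lat - radius],
                    [lng + radius, lat + radius]]}}}))
            return _tz_cache[key]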

    Read the article

  • How can I use cURL to open multiple URLs simultaneously with PHP?

    - by Rob
    Here is my current code:

        $SQL = mysql_query("SELECT url FROM urls") or die(mysql_error()); // query the urls table
        while ($resultSet = mysql_fetch_array($SQL)) {  // put each url into one variable
            // Now for some cURL to run it.
            $ch = curl_init($resultSet['url']);    // load the url
            curl_setopt($ch, CURLOPT_TIMEOUT, 2);  // no need to wait for it to load; execute it and go
            curl_exec($ch);                        // execute
            curl_close($ch);                       // close it off
        }

    I'm relatively new to cURL - by relatively new, I mean this is my first time using cURL. Currently it loads one URL for two seconds, then the next one for two seconds, then the next. However, I want to make it load ALL of them at the same time. I'm sure it's possible, I'm just unsure how. If someone could point me in the right direction, I'd appreciate it.

    Read the article

  • How to merge or copy anonymous session data into user data when user logs in?

    - by benhoyt
    This is a general question, or perhaps a request for pointers to other open source projects to look at: I'm wondering how people merge an anonymous user's session data into the authenticated user's data when that user logs in. For example, someone is browsing around your website saving various items as favourites. He's not logged in, so they're saved to anonymous session data. Then he logs in, and we need to merge all that data into his (possibly existing) user data. Is this done in an ad-hoc fashion, differently for each application? Or are there best practices or other projects people can direct me to?
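    For one concrete shape this can take: in Django (1.3 and later), a natural hook for the merge is the user_logged_in signal, which fires with both the request (and thus the old session) and the now-authenticated user. A sketch only - the 'favourites' session key and the favourites related manager are hypothetical:

        from django.contrib.auth.signals import user_logged_in

        def merge_session_favourites(sender, request, user, **kwargs):
            # Move favourites saved on the anonymous session onto the user.
            for item_id in request.session.pop('favourites', []):
                user.favourites.get_or_create(item_id=item_id)

        user_logged_in.connect(merge_session_favourites)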

    Read the article

  • How to correctly migrate urls from custom asp.net solution to Wordpress?

    - by Marek
    I have a web site built with ASP.NET, with ugly URLs like /DisplayContent.aspx?id=789564. I know how to migrate the database, but the WordPress URLs will (naturally) be different. Can I simply write some mapping, or do I have to include a rewrite rule for each of the 300 subpages in .htaccess? Should I provide a rewrite rule for each existing page that transforms the full old URL into the known new URL, for example:

        /DisplayContent.aspx?id=789798 -> /2010-5-10/Title-Of-The-Post

    Even if I manage to migrate the URLs, the HTML structure of the new content will naturally be different. How does this affect SEO? Should I run ASP.NET and WordPress side by side and issue the redirects from the ASP.NET application? What is the most efficient solution to this kind of URL migration without doing PHP programming?

    Read the article

  • Django 1.2 + South 0.7 + django-annoying's AutoOneToOneField leads to TypeError: 'LegacyConnection'

    - by konrad
    I'm using Django 1.2 trunk with South 0.7 and an AutoOneToOneField copied from django-annoying. South complained that the field does not have rules defined, and the new version of South no longer has an automatic field type parser. So I read the South documentation and wrote the following definition (basically an exact copy of the OneToOneField rules):

        rules = [
            (
                (AutoOneToOneField),
                [],
                {
                    "to": ["rel.to", {}],
                    "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}],
                    "related_name": ["rel.related_name", {"default": None}],
                    "db_index": ["db_index", {"default": True}],
                },
            )
        ]

        from south.modelsinspector import add_introspection_rules
        add_introspection_rules(rules, ["^myapp"])

    Now South raises the following error when I do a schemamigration:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "django/core/management/__init__.py", line 438, in execute_manager
            utility.execute()
          File "django/core/management/__init__.py", line 379, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "django/core/management/base.py", line 196, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "django/core/management/base.py", line 223, in execute
            output = self.handle(*args, **options)
          File "South-0.7-py2.6.egg/south/management/commands/schemamigration.py", line 92, in handle
            (k, v) for k, v in freezer.freeze_apps([migrations.app_label()]).items()
          File "South-0.7-py2.6.egg/south/creator/freezer.py", line 33, in freeze_apps
            model_defs[model_key(model)] = prep_for_freeze(model)
          File "South-0.7-py2.6.egg/south/creator/freezer.py", line 65, in prep_for_freeze
            fields = modelsinspector.get_model_fields(model, m2m=True)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 322, in get_model_fields
            args, kwargs = introspector(field)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 271, in introspector
            arg_defs, kwarg_defs = matching_details(field)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 187, in matching_details
            if any([isinstance(field, x) for x in classes]):
        TypeError: 'LegacyConnection' object is not iterable

    Is this related to a recent change in Django 1.2 trunk? How do I fix this? I use this field as follows:

        class Bar(models.Model):
            foo = AutoOneToOneField("foo.Foo", primary_key=True, related_name="bar")

    For reference, the field code from django-tagging:

        class AutoSingleRelatedObjectDescriptor(SingleRelatedObjectDescriptor):
            def __get__(self, instance, instance_type=None):
                try:
                    return super(AutoSingleRelatedObjectDescriptor, self).__get__(instance, instance_type)
                except self.related.model.DoesNotExist:
                    obj = self.related.model(**{self.related.field.name: instance})
                    obj.save()
                    return obj

        class AutoOneToOneField(OneToOneField):
            def contribute_to_related_class(self, cls, related):
                setattr(cls, related.get_accessor_name(), AutoSingleRelatedObjectDescriptor(related))
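    A detail worth checking, though not a confirmed fix: the failing South line iterates over the first element of each rule (for x in classes), and (AutoOneToOneField) is just the class itself - the parentheses don't make it a tuple. Making that element genuinely iterable may be all it takes:

        rules = [
            (
                [AutoOneToOneField],  # a list, so South can iterate over the classes
                [],
                {
                    "to": ["rel.to", {}],
                    "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}],
                    "related_name": ["rel.related_name", {"default": None}],
                    "db_index": ["db_index", {"default": True}],
                },
            )
        ]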

    Read the article

  • fabric deploy problem

    - by alexarsh
    Hi, I'm trying to deploy a Django app with fabric and get the following error:

        Alexs-MacBook:fabric alex$ fab config:instance=peergw deploy -H <ip> -u <username> -p <password>
        [192.168.2.93] run: cat /etc/issue
        Traceback (most recent call last):
          File "build/bdist.macosx-10.6-universal/egg/fabric/main.py", line 419, in main
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 37, in deploy
            checkup()
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 140, in checkup
            if not 'Ubuntu' in run('cat /etc/issue'):
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 382, in host_prompting_wrapper
          File "build/bdist.macosx-10.6-universal/egg/fabric/operations.py", line 414, in run
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 65, in __getitem__
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 140, in connect
          File "build/bdist.macosx-10.6-universal/egg/paramiko/client.py", line 149, in load_system_host_keys
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 154, in load
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 66, in from_line
          File "build/bdist.macosx-10.6-universal/egg/paramiko/rsakey.py", line 61, in __init__
        paramiko.SSHException: Invalid key
        Alexs-MacBook:fabric alex$

    I can't connect to the server via ssh. What could be my problem? Regards, Arshavski Alexander.

    Read the article

  • installing paramiko

    - by fixxxer
    This may sound like a repeated question on SF, but I could not find a clear answer to it yet. So: I installed Paramiko 1.7 with the "setup.py install" command, and while running the demo.py program I got this error:

        Traceback (most recent call last):
          File "C:\Documents and Settings\fixavier\Desktop\paramiko-1.7\demos\demo.py", line 33, in <module>
            import paramiko
          File "C:\Python26\lib\site-packages\paramiko\__init__.py", line 69, in <module>
            from transport import randpool, SecurityOptions, Transport
          File "C:\Python26\lib\site-packages\paramiko\transport.py", line 32, in <module>
            from paramiko import util
          File "C:\Python26\lib\site-packages\paramiko\util.py", line 31, in <module>
            from paramiko.common import *
          File "C:\Python26\lib\site-packages\paramiko\common.py", line 99, in <module>
            from Crypto.Util.randpool import PersistentRandomPool, RandomPool
        ImportError: No module named Crypto.Util.randpool

    I'm getting this error even after installing PyCrypto 2.1. On running test.py (which comes with the installation), I got the following error:

        Traceback (most recent call last):
          File "C:\Documents and Settings\fixavier\Desktop\pycrypto-2.0.1\pycrypto-2.0.1\test.py", line 18, in <module>
            from Crypto.Util import test
          File "C:\Documents and Settings\fixavier\Desktop\pycrypto-2.0.1\pycrypto-2.0.1\build/lib.win32-2.6\Crypto\Util\test.py", line 17, in <module>
            import testdata
          File "C:\Documents and Settings\fixavier\Desktop\pycrypto-2.0.1\pycrypto-2.0.1\test\testdata.py", line 450, in <module>
            from Crypto.Cipher import AES
        ImportError: cannot import name AES

    I don't have the confidence to go ahead and install AES after all this - for all I know, I may get another ImportError! Please advise. Is it the way of installation that's problematic?

    Read the article

  • FancyURLopener failing since moving to Python 3.1.2

    - by Andrew Shepherd
    I had an application that was downloading a .CSV file from a password-protected website and then processing it further. I was using FancyURLopener and simply hardcoding the username and password (obviously, security is not a high priority in this particular instance). Since moving to Python 3.1.2, this code has stopped working. Does anyone know what changes to the implementation have happened? Here is a cut-down version of the code:

        import urllib.request

        class TracOpener(urllib.request.FancyURLopener):
            def prompt_user_passwd(self, host, realm):
                return ('andrew_ee', '_my_unenctryped_password')

        csvUrl = 'http://mysite/report/19?format=csv@USER=fred_nukre'
        opener = TracOpener()
        f = opener.open(csvUrl)
        s = f.read()
        f.close()
        s

    For the sake of completeness, here's the entire call stack:

        Traceback (most recent call last):
          File "C:\reporting\download_csv_file.py", line 12, in <module>
            f = opener.open(csvUrl);
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
            return getattr(self, name)(url)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
            return self._open_generic_http(http.client.HTTPConnection, url, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1624, in _open_generic_http
            response.status, response.reason, response.msg, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1640, in http_error
            result = method(url, fp, errcode, errmsg, headers)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1878, in http_error_401
            return getattr(self,name)(url, realm)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1950, in retry_http_basic_auth
            return self.open(newurl)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
            return getattr(self, name)(url)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
            return self._open_generic_http(http.client.HTTPConnection, url, data)
          File "C:\Program Files\Python31\lib\urllib\request.py", line 1590, in _open_generic_http
            auth = base64.b64encode(user_passwd).strip()
          File "C:\Program Files\Python31\lib\base64.py", line 56, in b64encode
            raise TypeError("expected bytes, not %s" % s.__class__.__name__)
        TypeError: expected bytes, not str
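    The last two frames show base64.b64encode being handed a str by FancyURLopener's basic-auth retry path, which looks like a 3.1.2 regression rather than a bug in the code above. One way to sidestep that code path entirely is urllib.request's auth handlers - a rough sketch reusing the URL and credentials from the question:

        import urllib.request

        csvUrl = 'http://mysite/report/19?format=csv@USER=fred_nukre'

        password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        password_mgr.add_password(None, csvUrl, 'andrew_ee', '_my_unenctryped_password')
        opener = urllib.request.build_opener(
            urllib.request.HTTPBasicAuthHandler(password_mgr))

        f = opener.open(csvUrl)
        s = f.read()
        f.close()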

    Read the article
