Search Results

Search found 14828 results on 594 pages for 'settings py'.


  • Python: eliminating stack traces into library code?

    - by Mark Harrison
    When I get a runtime exception from the standard library, it's almost always a problem in my code and not in the library code. Is there a way to truncate the exception stack trace so that it doesn't show the guts of the library package? For example, I would like to get this:

        Traceback (most recent call last):
          File "./lmd3-mkhead.py", line 71, in <module>
            main()
          File "./lmd3-mkhead.py", line 66, in main
            create()
          File "./lmd3-mkhead.py", line 41, in create
            headver1[depotFile]=rev
        TypeError: Data values must be of type string or None.

    and not this:

        Traceback (most recent call last):
          File "./lmd3-mkhead.py", line 71, in <module>
            main()
          File "./lmd3-mkhead.py", line 66, in main
            create()
          File "./lmd3-mkhead.py", line 41, in create
            headver1[depotFile]=rev
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 276, in __setitem__
            _DeadlockWrap(wrapF)  # self.db[key] = value
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/dbutils.py", line 68, in DeadlockWrap
            return function(*_args, **_kwargs)
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 275, in wrapF
            self.db[key] = value
        TypeError: Data values must be of type string or None.
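
    One possible approach (a hedged sketch, not from the question): install a custom sys.excepthook that drops frames whose source file lives under the standard library path, so only the application's own frames are printed.

        import os
        import sys
        import traceback

        LIB_PREFIX = os.path.dirname(os.__file__)  # assumption: "library code" means anything under the stdlib path

        def short_excepthook(exc_type, exc_value, tb):
            # keep only frames whose source file is outside the library prefix
            frames = [f for f in traceback.extract_tb(tb) if not f[0].startswith(LIB_PREFIX)]
            sys.stderr.write("Traceback (most recent call last):\n")
            for line in traceback.format_list(frames):
                sys.stderr.write(line)
            for line in traceback.format_exception_only(exc_type, exc_value):
                sys.stderr.write(line)

        sys.excepthook = short_excepthook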

    Read the article

  • Testing variable types in Python

    - by Jasper
    Hello, I'm creating an initialising function for the class 'Room', and found that the program wouldn't accept the tests I was doing on the input variables. Why is this?

        def __init__(self, code, name, type, size, description, objects, exits):
            self.code = code
            self.name = name
            self.type = type
            self.size = size
            self.description = description
            self.objects = objects
            self.exits = exits
            #Check for input errors:
            if type(self.code) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 110'
            elif type(self.name) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 111'
            elif type(self.type) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 112'
            elif type(self.size) != type(int()):
                print 'Error found in module rooms.py!'
                print 'Error number: 113'
            elif type(self.description) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 114'
            elif type(self.objects) != type(list()):
                print 'Error found in module rooms.py!'
                print 'Error number: 115'
            elif type(self.exits) != type(tuple()):
                print 'Error found in module rooms.py!'
                print 'Error number: 116'

    When I run this I get this error:

        Traceback (most recent call last):
          File "/Users/Jasper/Development/Programming/MyProjects/Game Making Challenge/Europa I/rooms.py", line 148, in <module>
            myRoom = Room(101, 'myRoom', 'Basic Room', 5, '<insert description>', myObjects, myExits)
          File "/Users/Jasper/Development/Programming/MyProjects/Game Making Challenge/Europa I/rooms.py", line 29, in __init__
            if type(self.code) != type(str()):
        TypeError: 'str' object is not callable
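
    The traceback suggests the parameter named type is shadowing the built-in type() inside __init__. A hedged sketch of one way around it, renaming the parameter and using isinstance() for the checks (only two of the checks are shown):

        class Room(object):
            def __init__(self, code, name, room_type, size, description, objects, exits):
                # 'room_type' is a renamed stand-in for the original 'type' parameter
                self.code = code
                self.name = name
                self.type = room_type
                self.size = size
                self.description = description
                self.objects = objects
                self.exits = exits
                # isinstance() avoids calling the (shadowed) built-in type()
                if not isinstance(self.code, str):
                    print 'Error found in module rooms.py!'
                    print 'Error number: 110'
                elif not isinstance(self.size, int):
                    print 'Error found in module rooms.py!'
                    print 'Error number: 113'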

    Read the article

  • Broken Vista. Can't open Windows settings.

    - by serena
    My neighbor has a Lenovo laptop with Windows Vista Home Basic. She's a noob and just uses the laptop for internet purposes. She said she had to shut down Windows improperly (some time ago, maybe 6 months) because of a system freeze. She realized there was something wrong with her Windows when she tried to open the Windows Update settings. I took a look at the system and observed the following errors: when I click on Windows Updates, a bare white window opens for a second and closes immediately; when I try to open Computer Properties, the same thing happens (Windows+Break doesn't work either); when I try to open Bluetooth settings, the same thing happens. So Vista won't let me open any Windows settings, but installed programs work correctly (games, applications, etc.). She has no Windows Vista discs since the laptop came with preinstalled genuine Vista. She also has no recovery discs. I don't think there is a system restore point from the time the system was stable. Now what can we do to solve this big problem?

    Read the article

  • Are there any tools to migrate your files, applications, and settings to a new Windows computer?

    - by calbar
    I've decided to upgrade my laptop on a regular basis and one of my main concerns is recreating my entire Windows 7 environment every time I do this. I'm talking toolbar positions, login settings, start menu items, applications and all their customizations... everything but my drivers. It literally takes weeks to fully recreate my working environment, not to mention the risk of user error or just simply forgetting "how I liked it." I'm assuming I won't find something as painless as Apple's Migration Assistant for Windows, but maybe there's something out there that can at least package up your apps and their settings? Bonus points if you can point it to your personal files, too - whatever's the quickest way to get from one machine to the next. I intend to install Windows fresh to remove bloatware on every machine that I buy, then selectively install the drivers I need. Something that accommodates loading my old apps into this newly prepared environment would be ideal. One random point of concern is in regard to application settings that refer to old hardware. I'm not sure if there's anything that can be done about this. If you have any thoughts, feel free to share. Thanks for your help!

    Read the article

  • When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?

    - by andrew311
    In the case of multiple layers (physical drives - md - dm - lvm), how do the schedulers, readahead settings, and other disk settings interact? Imagine you have several disks (/dev/sda - /dev/sdd) all part of a software RAID device (/dev/md0) created with mdadm. Each device (including physical disks and /dev/md0) has its own setting for IO scheduler (changed like so) and readahead (changed using blockdev). When you throw in things like dm (crypto) and LVM you add even more layers with their own settings.

    For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which is honored when I do a read from /dev/md0? Does the md driver attempt a 64-block read which the physical device driver then translates to a read of 128 blocks? Or does the RAID readahead "pass through" to the underlying device, resulting in a 64-block read?

    The same kind of question holds for schedulers: do I have to worry about multiple layers of IO schedulers and how they interact, or does /dev/md0 effectively override the underlying schedulers?

    In my attempts to answer this question, I've dug up some interesting data on schedulers and tools which might help figure this out:

        Linux Disk Scheduler Benchmarking (from Google)
        blktrace - generate traces of the i/o traffic on block devices
        Relevant Linux kernel mailing list thread

    Read the article

  • How do I use django settings in my logging.ini file?

    - by slypete
    I have a BASE_DIR setting in my settings.py file:

        BASE_DIR = os.path.dirname(os.path.abspath(__file__))

    I need to use this variable in my logging.ini file to set up my file handler paths. The initialization of logging happens in the same file, the settings.py file, below my BASE_DIR variable:

        LOG_INIT_DONE=False
        if not LOG_INIT_DONE:
            logging.config.fileConfig(LOGGING_INI)
        LOG_INIT_DONE=True

    Thanks, Pete
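
    One possibility (a sketch, not taken from the question): pass BASE_DIR to fileConfig() through its defaults argument; fileConfig hands that dict to the underlying ConfigParser, so the .ini file can interpolate it. The LOGGING_INI path below is an assumption.

        # settings.py (sketch)
        import logging.config
        import os

        BASE_DIR = os.path.dirname(os.path.abspath(__file__))
        LOGGING_INI = os.path.join(BASE_DIR, 'logging.ini')  # assumed location of the ini file

        # the defaults dict becomes available to the ini file for interpolation
        logging.config.fileConfig(LOGGING_INI, defaults={'BASE_DIR': BASE_DIR})

    The logging.ini could then refer to the value with ConfigParser interpolation, for example args=('%(BASE_DIR)s/app.log', 'a') in a handler section; this is a sketch of the idea rather than a drop-in config.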

    Read the article

  • ubuntu 13.10 kvm binary is deprecated, please use qemu-system-x86_64

    - by ??1986
    I just upgraded from 13.04 to 13.10 and I have this issue when I run my KVM:

        Unable to complete install: 'internal error: process exited while connecting to monitor: W: kvm binary is deprecated, please use qemu-system-x86_64 instead
        char device redirected to /dev/pts/10 (label charserial0)
        failed to initialize KVM: Device or resource busy

    Detailed error:

        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper
            callback(asyncjob, *args, **kwargs)
          File "/usr/share/virt-manager/virtManager/create.py", line 1983, in do_install
            guest.start_install(False, meter=meter)
          File "/usr/lib/python2.7/dist-packages/virtinst/Guest.py", line 1246, in start_install
            noboot)
          File "/usr/lib/python2.7/dist-packages/virtinst/Guest.py", line 1314, in _create_guest
            dom = self.conn.createLinux(start_xml or final_xml, 0)
          File "/usr/lib/python2.7/dist-packages/libvirt.py", line 2892, in createLinux
            if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
        libvirtError: internal error: process exited while connecting to monitor: W: kvm binary is deprecated, please use qemu-system-x86_64 instead
        char device redirected to /dev/pts/8 (label charserial0)
        failed to initialize KVM: Device or resource busy

    Read the article

  • problem using pydoc in python

    - by rohanag
    I'm using pydoc in Python 2.7.3 to generate documentation for a file called PreProcessingAPI.py, which contains a class called PreProcessingAPI. At the beginning of PreProcessingAPI.py I have the following imports:

        from __future__ import division
        from re import *
        from nltk.stem import porter

    The problem is that in the documentation generated by pydoc, nltk.stem.porter is shown as a module. There is also a DATA heading with all sorts of variables I do not know about. Is there a way to avoid these variables and avoid showing nltk.stem.porter in the modules? I'm running the following command to generate the documentation:

        python pydoc.py -w PreProcessingAPI.py

    I've put the file pydoc.py in the directory containing my file. Here is the file generated: https://www.dropbox.com/s/4rb6ut99o25mwly/PreProcessingAPI.html
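
    A hedged guess at the cause: from re import * pulls re's module-level names into PreProcessingAPI's namespace, and pydoc documents whatever it finds there. One thing worth trying (a sketch, with a hypothetical method just to keep it self-contained) is to import modules by name and declare __all__ so pydoc has an explicit list of public names:

        # PreProcessingAPI.py (sketch)
        from __future__ import division
        import re                       # import the module instead of star-importing its contents
        from nltk.stem import porter

        __all__ = ['PreProcessingAPI']  # hint to pydoc about what this module exports

        class PreProcessingAPI(object):
            def stem(self, word):
                # hypothetical method, only here to make the sketch runnable
                return porter.PorterStemmer().stem(word)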

    Read the article

  • What are the default mount settings for mount / fstab?

    - by John Craick
    What are the default mounting options for a non-root partition? The man entry for mount says:

        defaults - use default options: rw, suid, dev, exec, auto, nouser, and async.

    ... so that might be what we expect to see. But, unless I'm missing something, that's not what happens. I have an ext3 partition labelled "NewHome20G" which is seen as /dev/sdc6 by the system. This we can see from:

        root@john-pc1204:~# blkid | grep NewHome20G
        /dev/sdc6: LABEL="NewHome20G" UUID="d024bad5-906c-46c0-b7d4-812daf2c9628" TYPE="ext3"

    I have an entry in fstab as follows:

        root@john-pc1204:~# cat /etc/fstab | grep NewHome
        LABEL=NewHome20G /media/NewHome20G ext3 rw,nosuid,nodev,exec,users 0 2

    Note the option settings that are specified in that fstab line. Now I look at how the partition is actually mounted after boot up:

        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    ... so, when the filesystem gets mounted, the exec & users options I specified seem to have been ignored. Just to be sure, I unmount sdc6, remount it and look at the mount options again:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    ... same result. Now I unmount the partition again, remount it specifying the exec option and look at the result:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6 -o exec
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,nosuid,nodev) [NewHome20G]

    ... and here the exec option has finally taken effect and the noexec setting has vanished. Just for interest, I re-mount the partition with the defaults option:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6 -o defaults
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    The noexec is back, so it looks very like rw,noexec,nosuid,nodev are the default options, which is NOT what man says. Why does this matter? I have a folder full of useful scripts stored on a data disk. Because that disk is mounted noexec those scripts won't run, even though they have all been set with chmod 777. I can work round this in several ways but it's disappointing that the man entry seems to be wrong. Have I missed something obvious here, or have the default options in Ubuntu changed from what they were a few versions ago?

    Read the article

  • Nautilus ignores / misinterprets view size

    - by BlueZero4
    I noticed that a lot of my folders had suddenly switched to higher view sizes than I had specified. I was assuming that somehow nautilus had suddenly decided to create per-folder entries for said folders with incorrect view sizes. So I found this question: How to reset all per-folder view settings in nautilus? I found the folder specified in the answer (~/.local/share/gvfs-metadata) and found that it was actually important to delete the files INSIDE the folder, because deleting the folder itself didn't work for some reason. After doing that, I discovered that the odd setting was in the default view settings, not in a handful of files. Nautilus actually handles the per-folder settings like it should, but it ignores the global folder settings.

    I want Nautilus to, by default, display all non-specified folders as compact view, 50%. My folders are using the compact setting like I want, but they are not down to 50%. At a guess, they are at 100%. Altering the view size of the icon view can set the compact view to 33%, but I'm not sure by what mechanism this functions. I haven't extensively tested the other view sizes because I don't plan on using them much at all.

    Next I looked up questions like How do I reset nautilus to the default configuration? I'm expecting the problem to be a corrupted config file or something of the sort, so I hunted down directories like ~/.nautilus, ~/.gconf/apps/nautilus, and ~/.gnome2/nautilus. (I don't have a ~/.nautilus directory, so I'm assuming that's only for older versions.) I attempted to remove the contents of each, but I can't seem to force Nautilus back to default configuration settings. Actually viewing Nautilus's preferences in GConf made the settings look like they were what I wanted them to be, which is odd.

    I'd like to force Nautilus to default settings, basically. Though if something else will fix it, I'll take it too. I'm not interested in doing a full uninstall and reinstall of Nautilus if I don't have to.

    ==EDIT1== Turns out that Nautilus just writes the settings in GConf for the heck of it. Nautilus only really uses the settings that it stores in DConf. I did gsettings reset-recursively org.gnome.nautilus, which actually did reset Nautilus to default, but it still doesn't like my view size settings.

    Read the article

  • How can I load .obj files in the Soya3D engine?

    - by John Riselvato
    I recently just found soya3d. I want to import .obj files, but it seems to only accept .data files. How can I import .obj files? Importing a .obj file named "house" produces this error:

        Traceback (most recent call last):
          File "introduction.py", line 7, in <module>
            model = soya.Model.get("house")
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
            return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
            dirname = klass._get_directory_for_loading_and_check_export(filename)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
            dirname = klass._get_directory_for_loading(filename, ext)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
            raise ValueError("Cannot find a %s named %s!" % (klass, filename))
        ValueError: Cannot find a <class 'soya.Model'> named house!
        * Soya3D * Quit...

    Read the article

  • Building a Debian package with two build systems

    - by queueoverflow
    I have a package that needs to be built with both a regular makefile and a setup.py. The thing is that the Debian packaging magic that is invoked via debuild would recognize a makefile and do the right make, make install DESTDIR=??? thing and get it working right. When I only have a setup.py sitting there and have

        dh $@ --with python3 --buildsystem pybuild

    in debian/rules, it will correctly install the Python module with

        python3 setup.py build
        python3 setup.py install --install-layout deb --root=???

    I do not know all those flags. And I think that I do not need to. I just want the makefile magic to happen, and then the setup.py magic. How can I tell debuild to do both? When I do the following in debian/rules

        %:
            dh $@
            dh $@ --with python3 --buildsystem pybuild

    it will only put the first one into the resulting package. I tried to delete the debhelper.log between those, but that did not change much.

    Read the article

  • Python urllib.urlopen IOError

    - by Michael
    So I have the following lines of code in a function:

        sock = urllib.urlopen(url)
        html = sock.read()
        sock.close()

    and they work fine when I call the function by hand. However, when I call the function in a loop (using the same urls as earlier) I get the following error:

        Traceback (most recent call last):
          File "./headlines.py", line 256, in <module>
            main(argv[1:])
          File "./headlines.py", line 37, in main
            write_articles(headline, output_folder + "articles_" + term +"/")
          File "./headlines.py", line 232, in write_articles
            print get_blogs(headline, 5)
          File "/Users/michaelnussbaum08/Documents/College/Sophmore_Year/Quarter_2/Innovation/Headlines/_code/get_content.py", line 41, in get_blogs
            sock = urllib.urlopen(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 87, in urlopen
            return opener.open(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 203, in open
            return getattr(self, name)(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 314, in open_http
            if not host: raise IOError, ('http error', 'no host given')
        IOError: [Errno http error] no host given

    Any ideas?
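
    urllib raises 'no host given' when the URL it receives has no scheme or host, which in a loop often means a relative link or stray whitespace slipped into the list. A hedged sketch of a guard around the call:

        # sketch: reject relative or whitespace-padded URLs before calling urlopen
        import urllib
        import urlparse

        def fetch(url):
            url = url.strip()                    # stray '\n' or spaces can leave urllib without a host
            parts = urlparse.urlparse(url)
            if not parts.netloc:                 # relative links have no host -> "no host given"
                raise ValueError("not an absolute URL: %r" % url)
            sock = urllib.urlopen(url)
            try:
                return sock.read()
            finally:
                sock.close()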

    Read the article

  • Testing sample code in python modules

    - by Andrew Walker
    I'm in the process of writing a python module that includes some samples. These samples aren't unit-tests, and they are too long and complex to be doctests. I'm interested in best practices for automatically checking that these samples run. My current project layout is pretty standard, except that there is an extra top level makefile that has build, install, unittest, coverage and profile targets, that delegate responsibility to setup.py and nose as required.

        projectname/
            Makefile
            README
            setup.py
            samples/
                foo-sample
                foobar-sample
            projectname/
                __init__.py
                foo.py
                bar.py
            tests/
                test-foo.py
                test-bar.py

    I've considered adding a sampletest module, or adding nose.tools.istest decorators to the entry-point functions of the samples, but for a small number of samples, these solutions sound a bit ugly. This question is similar to http://stackoverflow.com/questions/301365/automatically-unit-test-example-code, but I assume python best practices will differ from C#
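
    One approach worth considering (a sketch under the assumption that the samples are ordinary Python scripts): a single test module that discovers everything under samples/ and asserts that each script exits cleanly, so nose or unittest picks the samples up like any other test.

        # tests/test_samples.py (sketch)
        import glob
        import os
        import subprocess
        import sys
        import unittest

        SAMPLES_DIR = os.path.join(os.path.dirname(__file__), '..', 'samples')

        class SampleScriptsTest(unittest.TestCase):
            def test_samples_run_cleanly(self):
                for path in sorted(glob.glob(os.path.join(SAMPLES_DIR, '*-sample'))):
                    # run each sample with the same interpreter and fail on a non-zero exit code
                    ret = subprocess.call([sys.executable, path])
                    self.assertEqual(ret, 0, "%s exited with %d" % (path, ret))

        if __name__ == '__main__':
            unittest.main()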

    Read the article

  • "AttributeError: fileno" when attemping to import from pyevolve

    - by Corey Sunwold
    I just installed Pyevolve using easy_install and I am getting errors trying to run my first program. I first tried copy and pasting the source code of the first example but this is what I receive when I attempt to run it:

        Traceback (most recent call last):
          File "/home/corey/CTest/first_intro.py", line 3, in <module>
            from pyevolve import G1DList
          File "/usr/lib/python2.6/site-packages/Pyevolve-0.5-py2.6.egg/pyevolve/__init__.py", line 15, in <module>
          File "/usr/lib/python2.6/site-packages/Pyevolve-0.5-py2.6.egg/pyevolve/Consts.py", line 240, in <module>
            import Selectors
          File "/usr/lib/python2.6/site-packages/Pyevolve-0.5-py2.6.egg/pyevolve/Selectors.py", line 12, in <module>
          File "/usr/lib/python2.6/site-packages/Pyevolve-0.5-py2.6.egg/pyevolve/GPopulation.py", line 11, in <module>
          File "/usr/lib/python2.6/site-packages/Pyevolve-0.5-py2.6.egg/pyevolve/FunctionSlot.py", line 14, in <module>
          File "/usr/lib/python2.6/site-packages/Pyevolve-0.5-py2.6.egg/pyevolve/Util.py", line 20, in <module>
        AttributeError: fileno

    I am running python 2.6 on Fedora 11 X86_64.

    Read the article

  • Python and App Engine project structure

    - by Joel
    Hello, I am relatively new to python and app engine, and I just finished my first project. It consists of several *.py files (usually a py file for every page on the site) and corresponding template files for each py file. In addition, I have one big PY file with many functions that are common to a lot of pages, in which I also declared the db.Model classes (that is, the datastore kinds). My question is: what is the convention (if there is one) for arranging these files? If I create a model.py with the datastore classes, should it be in a different package? Where should I put my template files and all of the py files that handle every page (should they be in the same directory as the one big common PY file)? I have tried to look for MVC and such implementations online but there are very few. Thanks, Joel

    Read the article

  • How can I put all twill commands together into one piece of code in a .py file?

    - by brilliant
    Hello everybody! I have just started exploring twill. Twill is an amazing scripting language for web browsing and it does all I want! So far I've been using twill from a Python shell (IDLE (Python GUI), to be precise) and I execute commands one by one: I type one command, run it, then type the next command. But I don't know how to put all these commands together in one .py file, so that they would all be executed one by one automatically. It seems that there is such a possibility in twill: the example from the twill documentation page (you can see it HERE) shows one piece of code consisting of several commands. So, my question is: how can I put all these commands together in twill?
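
    For what it's worth, twill's commands are ordinary Python functions in the twill.commands module, so a .py file can import and call them in sequence; the URL and form fields below are made up purely for illustration.

        # browse.py (sketch; hypothetical URL and form fields)
        from twill.commands import go, showforms, fv, submit, code

        go("http://example.com/login")
        showforms()                     # list the forms on the page
        fv("1", "username", "alice")    # fill field 'username' of form 1
        fv("1", "password", "secret")
        submit()
        code(200)                       # assert the response status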

    Read the article

  • How to finish a broken data upload to the production Google App Engine server?

    - by WooYek
    I was uploading data to App Engine (not the dev server) through the loader class and remote api, and I hit the quota in the middle of a CSV file. Based on the logs and the progress sqlite db, how can I select the remaining portion of the data to be uploaded? Going through tens of records to determine which was and which was not transferred is not an appealing task, so I am looking for some way to limit the number of records I need to check. Here's the relevant (IMO) log portion; how do I interpret the work item numbers?

        [DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
        [DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
        <cut>
        [DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
        [DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
        <cut>
        [DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
        [DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
        [ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable

    Read the article

  • Testing a Django view cause "AttributeError: 'NoneType' object has no attribute 'handler500'" error

    - by jack
    I just wanted to start testing a Django view using the code below:

        from django.test.client import Client
        c = Client()
        response = c.get('/search/keyword')
        print response.content

    It just throws out the following error message:

        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 286, in get
          response = self.request(**r)
        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 230, in request
          response = self.handler(environ)
        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 74, in __call__
          response = self.get_response(request)
        File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 143, in get_response
          return self.handle_uncaught_exception(request, resolver, exc_info)
        File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 178, in handle_uncaught_exception
          callback, param_dict = resolver.resolve500()
        File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py", line 268, in resolve500
          return self._resolve_special('500')
        File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py", line 258, in _resolve_special
          callback = getattr(self.urlconf_module, 'handler%s' % view_type)
        AttributeError: 'NoneType' object has no attribute 'handler500'

    The view works in the browser. What's wrong with the above code?
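
    A hedged guess: the test client is being run outside manage.py, so when the view raises an error Django cannot even look up a 500 handler because no settings/urlconf are loaded. A sketch of a standalone script, with 'myproject' standing in for the real project name:

        # sketch; 'myproject' is a made-up project name
        import os
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

        from django.test.client import Client

        c = Client()
        response = c.get('/search/keyword')
        print response.status_code    # inspect the status before trusting the content
        print response.content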

    Read the article

  • importing same module more than once

    - by wallacoloo
    So after a few hours, I discovered the cause of a bug in my application. My app's source is structured like:

        main/
            __init__.py
            folderA/
                __init__.py
                fileA.py
                fileB.py

    Really, there are about 50 more files. But that's not the point. In main/__init__.py, I have this code:

        from folderA.fileA import *

    In folderA/__init__.py I have this code:

        sys.path.append(pathToFolderA)

    In folderA/fileB.py I have this code:

        from fileA import *

    The problem is that fileA gets imported twice. However, I only want it imported once. The obvious way to fix this (to me at least) is to change certain paths from path to folderA.path. But I feel like Python should not even have this error in the first place. What other workarounds are there that don't require each file to know its absolute location?
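
    A hedged explanation of the symptom: thanks to the sys.path.append, the same file is importable both as folderA.fileA and as plain fileA, and Python keeps one sys.modules entry per name, so the module body executes once for each name. A sketch of the package-qualified style that avoids the duplicate (the sys.path.append line would then be unnecessary):

        # folderA/fileB.py (sketch)
        # import fileA under its one canonical package name so sys.modules
        # only ever holds a single copy of it
        from folderA.fileA import *

        # a quick way to check for duplicates at runtime:
        import sys
        dupes = [name for name in sys.modules if name.endswith('fileA')]
        print dupes   # two entries here would mean a double import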

    Read the article

  • web2py error while using distinct in the queries

    - by Steve
    Hi, I am using web2py with GAE. Some of the queries which have a distinct clause make GAE throw an error. I have pasted the traceback below; can someone please help me out with this?

        In FILE: /base/data/home/apps/panneersoda/1.341206242889687944/applications/init/controllers/default.py
        Traceback (most recent call last):
          File "/base/data/home/apps/panneersoda/1.341206242889687944/gluon/restricted.py", line 173, in restricted
            exec ccode in environment
          File "/base/data/home/apps/panneersoda/1.341206242889687944/applications/init/controllers/default.py:profileview", line 263, in <module>
          File "/base/data/home/apps/panneersoda/1.341206242889687944/gluon/globals.py", line 96, in <lambda>
            self._caller = lambda f: f()
          File "/base/data/home/apps/panneersoda/1.341206242889687944/applications/init/controllers/default.py:profileview", line 97, in profileview
          File "/base/data/home/apps/panneersoda/1.341206242889687944/gluon/contrib/gql.py", line 675, in select
            (items, tablename, fields) = self._select(*fields, **attributes)
          File "/base/data/home/apps/panneersoda/1.341206242889687944/gluon/contrib/gql.py", line 624, in _select
            raise SyntaxError, 'invalid select attribute: %s' % key
        SyntaxError: invalid select attribute: distinct

    Thanks

    Read the article

  • Python - weird behavior

    - by orokusaki
    I've done what I shouldn't have done and written 4 modules (6 hours or so) without running any tests along the way. I have a method inside of /mydir/__init__.py called get_hash(), and a class inside of /mydir/utils.py called SpamClass. /mydir/utils.py imports get_hash() from /mydir/__init__. /mydir/__init__.py imports SpamClass from /mydir/utils.py. Both the class and the method work fine on their own, but for some reason if I try to import /mydir/ I get an import error saying "Cannot import name get_hash" from /mydir/__init__.py. The only stack trace is the line saying that __init__.py imported SpamClass. The next line is where the error occurs, in SpamClass, when trying to import get_hash. Why is this?
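
    This looks like a circular import: while /mydir/__init__.py is still executing its import of SpamClass, utils.py asks the half-initialised package for get_hash, which is not defined yet. A hedged sketch of one common workaround, deferring the import until it is needed (the fingerprint method is made up for illustration):

        # mydir/utils.py (sketch)
        class SpamClass(object):
            def fingerprint(self):
                # import inside the method so it runs only after mydir/__init__.py
                # has finished defining get_hash()
                from mydir import get_hash
                return get_hash(self)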

    Read the article

  • Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py

    - by crgwbr
    I'm writing a FastCGI application that makes use of SQLAlchemy & MySQL for persistent data storage. I have no problem connecting to the DB and setting up ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But as soon as I query the DB (and push any changes from memory to storage) I get a 500 Internal Server Error and my error.log records

        malformed header from script. Bad header=FROM tags : index.py

    where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.
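
    A hedged reading of that log line: the raw SQL (FROM tags ...) is being written to stdout, where the web server expects CGI headers; SQLAlchemy's engine echo is a common source of that. A sketch of the usual guards, with a made-up connection string:

        # sketch; the connection string is hypothetical
        import logging
        import sys
        from sqlalchemy import create_engine

        # 1. don't echo SQL at all ...
        engine = create_engine('mysql://user:pw@dbhost/appdb', echo=False)

        # 2. ... or, if the SQL log is wanted, send it to stderr instead of stdout
        #    so it cannot be mistaken for part of the HTTP response
        logging.basicConfig(stream=sys.stderr)
        logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)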

    Read the article

  • Google App Engine Application Error 5

    - by Sam
    I frequently get this Application error. What does this mean?

        File "/base/data/home/apps/0xxopdp/10.347467753731922836/matrices.py", line 215, in insert_into_db
          obj.put()
        File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 895, in put
          return datastore.Put(self._entity, config=config)
        File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 404, in Put
          return _GetConnection().async_put(config, entities, extra_hook).get_result()
        File "/base/python_runtime/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 601, in get_result
          self.check_success()
        File "/base/python_runtime/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 572, in check_success
          rpc.check_success()
        File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 502, in check_success
          self.__rpc.CheckSuccess()
        File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 126, in CheckSuccess
          raise self.exception
        ApplicationError: ApplicationError: 5

    I do make many calls to the datastore. What caused this problem?
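
    For what it's worth, this error code is commonly reported as a datastore timeout on the write. A hedged sketch of a small retry wrapper around the put() call, assuming the ext.db API shown in the traceback:

        # sketch: retry a flaky datastore write a few times before giving up
        import time
        from google.appengine.ext import db

        def put_with_retries(entity, attempts=3):
            for attempt in range(attempts):
                try:
                    return entity.put()
                except db.Timeout:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(0.1 * (attempt + 1))   # brief backoff between tries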

    Read the article

  • User Crontab + Python + Random wallpapers = Not working?

    - by Andrew Bolster
    I have a python script that correctly sets the desktop wallpaper via gconf to a random picture in a given folder. I then have the following entry in my crontab:

        * * * * * python /home/bolster/bin/change-background.py

    And syslog correctly reports execution:

        Apr 26 14:11:01 bolster-desktop CRON[9751]: (bolster) CMD (python /home/bolster/bin/change-background.py)
        Apr 26 14:12:01 bolster-desktop CRON[9836]: (bolster) CMD (python /home/bolster/bin/change-background.py)
        Apr 26 14:13:01 bolster-desktop CRON[9860]: (bolster) CMD (python /home/bolster/bin/change-background.py)
        Apr 26 14:14:01 bolster-desktop CRON[9905]: (bolster) CMD (python /home/bolster/bin/change-background.py)
        Apr 26 14:15:01 bolster-desktop CRON[9948]: (bolster) CMD (python /home/bolster/bin/change-background.py)
        Apr 26 14:16:01 bolster-desktop CRON[9983]: (bolster) CMD (python /home/bolster/bin/change-background.py)

    But no desktopy changey. Any ideas?

    Read the article
