Search Results

Search found 14399 results on 576 pages for 'python noob'.

Page 267 of 576

  • Is there any framework for rich web clients on top of html/css?

    - by iamgopal
    Some RAD tools like OpenObject use rich web clients: the client-side code resides inside the browser and talks to the server only via XML-RPC or JSON-RPC, changing the view accordingly; all the JavaScript and CSS are transferred only once. Such rich web clients would increase productivity in enterprise-class web applications that have lots of processes, forms, etc. I would like to use such a rich web client inside my own application. I tried to search but found only openerp-web, which is tightly integrated with its server. Is there any other rich web client framework available? If not, is there any design detail I can look into to create my own? Thanks.

    Edit: A browser is a client which uses HTTP and similar protocols to talk to a web server, which serves pages that the client displays. A rich web client is a client which sits on top of the browser; it talks to the server, sends and receives data along with information about how to update the view, and applies those updates. Similar to Vaadin, such a rich web client eliminates any code requirement on the client side; all the coding is done on the server side. Below are such thin clients:

    - pjax (jQuery)
    - Vaadin (Java)
    - OpenObject web client (Python)
    - Nagare (Python)
    - Seaside (Smalltalk)
    - p4a (PHP)

    These are all clients that, once properly set up, allow you to code only on the server and still provide a great Ajax-like experience.

    Read the article

  • Moses v1.0 multi language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything worked fine, but now there is version 1.0 and nothing is the same as before. Here is my situation: I want multi-language translation, from Arabic to English and from English to Arabic. All the data and the configuration file I have work with version 0.91 of mosesserver. Here is my config file:

        #########################
        ### MOSES CONFIG FILE ###
        #########################

        # D - decoding path, R - reordering model, L - language model
        [translation-systems]
        ar-en D 0 R 0 L 0
        en-ar D 1 R 1 L 1

        # input factors
        [input-factors]
        0

        # mapping steps
        [mapping]
        0 T 0
        1 T 1

        # translation tables: table type (hierarchical(0), textual (0), binary (1)),
        # source-factors, target-factors, number of scores, file
        # OLD FORMAT is still handled for back-compatibility
        # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file
        # OLD FORMAT a binary table type (1) is assumed
        [ttable-file]
        1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table
        1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table

        # no generation models, no generation-file section

        # language models: type(srilm/irstlm), factors, order, file
        [lmodel-file]
        1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm
        1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm

        # limit on how many phrase translations e for each phrase f are loaded
        # 0 = all elements loaded
        [ttable-limit]
        20

        # distortion (reordering) files
        [distortion-file]
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz

        # distortion (reordering) weight
        [weight-d]
        0.3
        0.3

        # lexicalised distortion weights
        [weight-lr]
        0.3 0.3 0.3 0.3 0.3 0.3
        0.3 0.3 0.3 0.3 0.3 0.3

        # language model weights
        [weight-l]
        0.5000
        0.5000

        # translation model weights
        [weight-t]
        0.2 0.2 0.2 0.2 0.2
        0.2 0.2 0.2 0.2 0.2

        # no generation models, no weight-generation section

        # word penalty
        [weight-w]
        -1
        -1

        [distortion-limit]
        12

    So please, can someone help me and rewrite this config file so it can work in version 1.0? I also need some Python sample code for translation. I am using XML-RPC in Python, and earlier I sent an HTTP request with:

        import xmlrpclib
        client = xmlrpclib.ServerProxy('http://localhost:8080')
        client.translate({'text': 'some text', 'system': 'en-ar'})

    but now it seems there is no more 'system' parameter and Moses always uses the default settings.

    Read the article

  • Get current timepoint from Totem application

    - by ??O?????
    I want to find the exact time point where a media file is currently paused at (or playing) in a running Totem instance, using D-Bus. To be precise, what I want is available from the Totem Python console (if the plugin exists and is enabled) with the following command:

        >>> print totem_object.props.current_time
        732616

    which I understand is in milliseconds. So far: I've never used D-Bus before, so I'm in the process of going through the D-Bus and python-dbus documentation. I've also fired up D-Feet and found the org.gnome.Totem bus name and the /Factory object, on which I can use the org.freedesktop.DBus.Properties interface methods. I'm currently at this point:

        >>> import dbus
        >>> seb = dbus.SessionBus()
        >>> t = seb.get_object('org.gnome.Totem', '/Factory')
        >>> tif = dbus.Interface(t, 'org.freedesktop.DBus.Properties')
        >>> tif.GetAll('')
        dbus.Dictionary({}, signature=dbus.Signature('sv'))

    I can't find even a proper how-to, so any help will be greatly appreciated.
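    One avenue worth trying, sketched below: newer Totem builds expose the standard MPRIS2 interface over D-Bus, whose Position property reports the playback position in microseconds. The bus and object names here are assumptions that hold only if Totem's MPRIS plugin is enabled:

        import dbus

        bus = dbus.SessionBus()
        # Assumed MPRIS2 names; valid only with Totem's MPRIS plugin enabled.
        player = bus.get_object('org.mpris.MediaPlayer2.totem',
                                '/org/mpris/MediaPlayer2')
        props = dbus.Interface(player, 'org.freedesktop.DBus.Properties')
        # MPRIS2 reports Position in microseconds, not milliseconds.
        position_us = props.Get('org.mpris.MediaPlayer2.Player', 'Position')
        print position_us // 1000  # milliseconds, comparable to current_time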

    Read the article

  • vtk 3D glyphs, independent color and rotation

    - by user3684219
    I am trying to display, using vtk (the Python wrapper), several glyphs in a scene, each with its own colour and rotation. Unfortunately, only the rotation (using vtkTensorGlyph) is taken into consideration by vtk. Conversely, only the colour is taken into consideration when I use a vtkGlyph3D. Here is a ready-to-use piece of code with a vtkTensorGlyph. Each cube should have a random colour, but they will all be the same colour. I have read and re-read the vtk docs but found no solution. Thanks in advance for any ideas.

        #!/usr/bin/env python
        # -*- coding: utf-8 -*-
        import vtk
        import scipy.linalg as sc
        import random as ra
        import numpy as np
        import itertools

        points = vtk.vtkPoints()        # where to locate each glyph in the scene
        tensors = vtk.vtkDoubleArray()  # rotation for each glyph
        tensors.SetNumberOfComponents(9)
        colors = vtk.vtkUnsignedCharArray()  # should be the color for each glyph
        colors.SetNumberOfComponents(3)

        # let's make 10 cubes in the scene
        for i in range(0, 50, 5):
            points.InsertNextPoint(i, i, i)  # position of a glyph
            # pick a random color
            colors.InsertNextTuple3(ra.randint(0, 255),
                                    ra.randint(0, 255),
                                    ra.randint(0, 255))
            # random rotation matrix (row major)
            rot = list(itertools.chain(*np.reshape(
                sc.orth(np.random.rand(3, 3)).transpose(), (1, 9)).tolist()))
            tensors.InsertNextTuple9(*rot)

        polydata = vtk.vtkPolyData()  # create the polydata
        polydata.SetPoints(points)
        polydata.GetPointData().SetTensors(tensors)
        polydata.GetPointData().SetScalars(colors)

        cubeSource = vtk.vtkCubeSource()
        cubeSource.Update()

        glyphTensor = vtk.vtkTensorGlyph()
        glyphTensor.SetColorModeToScalars()  # does this really work?
        try:
            glyphTensor.SetInput(polydata)
        except AttributeError:
            glyphTensor.SetInputData(polydata)
        glyphTensor.SetSourceConnection(cubeSource.GetOutputPort())
        glyphTensor.ColorGlyphsOn()  # should this not color each cube independently?
        glyphTensor.ThreeGlyphsOff()
        glyphTensor.ExtractEigenvaluesOff()
        glyphTensor.Update()

        # next is the usual vtk code
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(glyphTensor.GetOutputPort())
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)
        ren = vtk.vtkRenderer()
        ren.SetBackground(0.2, 0.5, 0.3)
        ren.AddActor(actor)
        renwin = vtk.vtkRenderWindow()
        renwin.AddRenderer(ren)
        iren = vtk.vtkRenderWindowInteractor()
        iren.SetInteractorStyle(vtk.vtkInteractorStyleTrackballCamera())
        iren.SetRenderWindow(renwin)
        renwin.Render()
        iren.Initialize()
        renwin.Render()
        iren.Start()

    Read the article

  • cx_Oracle makes subprocess give OSError

    - by Shrikant Sharat
    I am trying to use the cx_Oracle module with Python 2.6.6 on Ubuntu Maverick, with Oracle 11gR2 Enterprise Edition. I am able to connect to my Oracle DB just fine, but once I do, the subprocess module no longer works. Here is an IPython session that reproduces the problem:

        In [1]: import subprocess as sp, cx_Oracle as dbh

        In [2]: sp.call(['whoami'])
        sharat
        Out[2]: 0

        In [3]: con = dbh.connect('system', 'password')

        In [4]: con.close()

        In [5]: sp.call(['whomai'])
        ---------------------------------------------------------------------------
        OSError                                   Traceback (most recent call last)
        /home/sharat/desk/calypso-launcher/<ipython console> in <module>()
        /usr/lib/python2.6/subprocess.pyc in call(*popenargs, **kwargs)
            468         retcode = call(["ls", "-l"])
            469     """
        --> 470     return Popen(*popenargs, **kwargs).wait()
            471
            472
        /usr/lib/python2.6/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
            621                                 p2cread, p2cwrite,
            622                                 c2pread, c2pwrite,
        --> 623                                 errread, errwrite)
            624
            625         if mswindows:
        /usr/lib/python2.6/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
           1134
           1135             if data != "":
        -> 1136                 _eintr_retry_call(os.waitpid, self.pid, 0)
           1137                 child_exception = pickle.loads(data)
           1138                 for fd in (p2cwrite, c2pread, errread):
        /usr/lib/python2.6/subprocess.pyc in _eintr_retry_call(func, *args)
            453     while True:
            454         try:
        --> 455             return func(*args)
            456         except OSError, e:
            457             if e.errno == errno.EINTR:

        OSError: [Errno 10] No child processes

    So, the call to sp.call works fine before connecting to Oracle, but breaks after that, even though I have closed the connection to the database. Looking around, I found http://bugs.python.org/issue1731717 as somewhat related to this issue, but I am not dealing with threads here; I don't know if cx_Oracle is. Moreover, the above issue mentions that adding a time.sleep(1) fixes it, but that didn't help me. Any help appreciated. Thanks.
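    A hedged guess at a workaround, based on reports that some Oracle client libraries install their own SIGCHLD handling (which would break the os.waitpid() call visible in the traceback): restore the default handler after connecting and see if subprocess recovers.

        import signal
        import subprocess
        import cx_Oracle

        con = cx_Oracle.connect('system', 'password')
        # Undo any SIGCHLD handler the Oracle client may have installed;
        # otherwise waitpid() can fail with ECHILD ("No child processes").
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        subprocess.call(['whoami'])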

    Read the article

  • Settings module not found deploying django on a shared server

    - by mcanes
    I'm trying to deploy my Django project on shared hosting, as described here. I have my project in /home/user/www/testa and I'm using this script:

        #!/usr/bin/python
        import sys, os
        sys.path.append("/home/user/bin/python")
        sys.path.append('/home/user/www/testa')
        os.chdir("/home/user/www/testa")
        os.environ['DJANGO_SETTINGS_MODULE'] = "settings.py"
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(method="threaded", daemonize="false")

    And here's the error I get when trying to run it from the shell:

        WSGIServer: missing FastCGI param REQUEST_METHOD required by WSGI!
        WSGIServer: missing FastCGI param SERVER_NAME required by WSGI!
        WSGIServer: missing FastCGI param SERVER_PORT required by WSGI!
        WSGIServer: missing FastCGI param SERVER_PROTOCOL required by WSGI!
        Traceback (most recent call last):
          File "build/bdist.linux-i686/egg/flup/server/fcgi_base.py", line 558, in run
          File "build/bdist.linux-i686/egg/flup/server/fcgi_base.py", line 1118, in handler
          File "/home/user/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 230, in __call__
            self.load_middleware()
          File "/home/user/lib/python2.4/site-packages/django/core/handlers/base.py", line 33, in load_middleware
            for middleware_path in settings.MIDDLEWARE_CLASSES:
          File "/home/user/lib/python2.4/site-packages/django/utils/functional.py", line 269, in __getattr__
            self._setup()
          File "/home/usr/lib/python2.4/site-packages/django/conf/__init__.py", line 40, in _setup
            self._wrapped = Settings(settings_module)
          File "/home/user/lib/python2.4/site-packages/django/conf/__init__.py", line 75, in __init__
            raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
        ImportError: Could not import settings 'settings.py' (Is it on sys.path? Does it have syntax errors?): No module named settings.py
        Content-Type: text/html

        Unhandled Exception
        An unhandled exception was thrown by the application.

    What am I doing wrong? Running the script from the browser just gives me an internal server error.
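    Worth noting when reading that traceback: DJANGO_SETTINGS_MODULE takes a Python module path, not a filename, so the first thing to try is almost certainly:

        os.environ['DJANGO_SETTINGS_MODULE'] = "settings"  # module name, no ".py"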

    Read the article

  • What's the purpose of "import package"?

    - by codethief
    As I just found out, import package does not make the package's modules available through package.module. The same obviously holds true for from package import subpackage, as well as from package import *. What's the purpose of importing a package at all, then, if I can't access its submodules but only the objects defined in __init__.py? It makes sense to me that from package import * would bloat the namespace, which, however, doesn't apply in the case of the other two forms! I also understand that loading all submodules might take a long time. But I don't know what these unwanted side effects, "that should only happen when the sub-module is explicitly imported", are, which the author of the previous link mentions. To me it looks like doing an import package[.subpackage] (or from package import subpackage) makes absolutely no sense if I don't exactly want to access objects provided in __init__.py. Are those unwanted side effects really so serious that the language actually has to protect the programmer from causing them? Actually, I thought Python was a little more about "If the programmer wants to do it, let him do it." In my case, I really do want to import all submodules with the single statement from package import subpackage, because I need all of them! Telling Python in the __init__.py file which submodules I'm talking about (all of them!) is quite cumbersome from my point of view. Please enlighten me. :)
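    For reference, a minimal sketch of the cumbersome workaround described above, with hypothetical submodule names:

        # package/__init__.py -- explicitly pull submodules into the
        # package namespace so they are reachable after "import package".
        from package import submodule_a, submodule_b

        # Controls what "from package import *" exposes.
        __all__ = ['submodule_a', 'submodule_b']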

    Read the article

  • Swig: No Constructor defined

    - by wheaties
    I added %allowexcept to my *.i file when building a Python <-- C++ bridge using SWIG. Then I removed that preprocessing directive. Now I can't get the Python-produced code to recognize the constructor of a C++ class. Here's the class file:

        #include <exception>

        class Swig_Foo{
        public:
            Swig_Foo(){}
            virtual ~Swig_Foo(){}
            virtual int getInt() = 0;
            virtual void throwException() { throw std::exception(); }
        };

    And here's the code SWIG produces from it:

        class Swig_Foo:
            __swig_setmethods__ = {}
            __setattr__ = lambda self, name, value: _swig_setattr(self, Swig_Foo, name, value)
            __swig_getmethods__ = {}
            __getattr__ = lambda self, name: _swig_getattr(self, Swig_Foo, name)
            def __init__(self): raise AttributeError, "No constructor defined"
            __repr__ = _swig_repr
            __swig_destroy__ = _foo.delete_Swig_Foo
            __del__ = lambda self : None;
            def getInt(*args): return apply(_foo.Swig_Foo_getInt, args)
            def throwOut(*args): return apply(_foo.Swig_Foo_throwOut, args)
        Swig_Foo_swigregister = _foo.Swig_Foo_swigregister
        Swig_Foo_swigregister(Swig_Foo)

    The problem is the def __init__(self): raise AttributeError, "No constructor defined" portion. It never did this before I added the %allowexception, and now that I've removed it, I can't get it to create a real constructor. All the other classes have actual constructors. Quite baffled. Does anyone know how I can get it to stop doing this?

    Read the article

  • Can I compare a template variable to an integer in App Engine templates?

    - by matt b
    Using Django templates in Google App Engine (on Python), is it possible to compare a template variable to an integer in an {% if %} block?

    views.py:

        class MyHandler(webapp.RequestHandler):
            def get(self):
                foo_list = db.GqlQuery(...)
                ...
                template_values['foos'] = foo_list
                template_values['foo_count'] = len(foo_list)
                handler.response.out.write(template.render(...))

    My template:

        {% if foo_count == 1 %}
            There is one foo.
        {% endif %}

    This blows up with 'if' statement improperly formatted. What I was attempting to do in my template was build a simple if/elif/else tree, to be grammatically correct, that could state:

    - foo_count == 0: There are no foos.
    - foo_count == 1: There is one foo.
    - else: There are {{ foos|length }} foos.

    Browsing the Django template documents (the link provided in the GAE documentation appears to be for versions of Django far newer than what is supported on GAE), it appears as if I can only use boolean operators (if in fact boolean operators are supported in this older version of Django) with strings or other template variables. Is it not possible to compare variables to integers or non-strings with Django templates? I'm sure there is an easy workaround (build up the message string on the Python side rather than within the template), but this seems like such a simple operation that you ought to be able to handle it in a template. It sounds like I should be switching to a more advanced templating engine, but as I am new to Django (templates or any part of it), I'd just like some confirmation first.
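    For context, the template engine of that era had no comparison operators in {% if %}; the usual template-side workaround was {% ifequal foo_count 1 %} ... {% endifequal %}. The poster's own fallback, building the message in the handler, is a perfectly sound sketch too:

        # Build the grammatically-correct message server-side, sidestepping
        # the old template engine's missing comparison operators.
        count = len(foo_list)
        if count == 0:
            template_values['foo_msg'] = "There are no foos."
        elif count == 1:
            template_values['foo_msg'] = "There is one foo."
        else:
            template_values['foo_msg'] = "There are %d foos." % count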

    Read the article

  • gevent install on x86_64 fails: "undefined symbol: evhttp_accept_socket"

    - by digitala
    I'm trying to install gevent on a fresh EC2 CentOS 5.3 64-bit system. Since the libevent version available in yum was too old for another package (beanstalkd), I compiled/installed libevent-1.4.13-stable manually using the following command:

        ./configure --prefix=/usr && make && make install

    This is the output from installing gevent:

        [gevent-0.12.2]# python setup.py build --libevent /usr/lib
        Using libevent 1.4.13-stable: libevent.so
        running build
        running build_py
        running build_ext
        Linking /usr/src/gevent-0.12.2/build/lib.linux-x86_64-2.6/gevent/core.so to /usr/src/gevent-0.12.2/gevent/core.so
        [gevent-0.12.2]# cd /path/to/my/project
        [project]# python myscript.py
        Traceback (most recent call last):
          File "myscript.py", line 9, in <module>
            from gevent.wsgi import WSGIServer as GeventServer
          File "/usr/lib/python2.6/site-packages/gevent/__init__.py", line 32, in <module>
            from gevent.core import reinit
        ImportError: /usr/lib/python2.6/site-packages/gevent/core.so: undefined symbol: evhttp_accept_socket

    I've followed exactly the same steps on a local VirtualBox instance (32-bit) and I'm not seeing any errors. How would I fix this?

    Read the article

  • Why use Django on Google App Engine?

    - by Travis Bradshaw
    When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google.

    To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skill set in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project, and our existing experience is with TurboGears, not Django. It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack.

    Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and needed a platform to target for the exodus. I'd be extremely appreciative of help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE is also valuable to me. Thanks in advance for your time!

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do.

    I have a list of tests, with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

    - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
    - the list of items is sent to the URI (using POST or stdin);
    - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
    - the results are stored in the DB;
    - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backup went well;
    - DNS: hosted on the local machine, called via stdin, checks if the machines' FQDNs have valid DNS entries.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)? (A dependency-driven scheduling sketch follows below.)
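    A minimal sketch of one way to run such dependency-ordered tests in parallel in Python (all names are hypothetical; each test is reduced to a set of prerequisite names and a callable; cycles and unsatisfiable dependencies are not handled):

        import threading
        from Queue import Queue  # "queue" on Python 3

        def run_tests(tests, num_workers=4):
            # tests: {name: (set_of_dependency_names, callable)}
            if not tests:
                return
            done, queued = set(), set()
            ready = Queue()
            lock = threading.Lock()

            def schedule_ready():
                # Queue every not-yet-queued test whose dependencies are all done.
                for name, (deps, _) in tests.items():
                    if name not in queued and deps <= done:
                        queued.add(name)
                        ready.put(name)

            def worker():
                while True:
                    name = ready.get()
                    if name is None:
                        return  # poison pill: all work finished
                    tests[name][1]()  # run the actual test
                    with lock:
                        done.add(name)
                        schedule_ready()
                        if len(done) == len(tests):
                            for _ in range(num_workers):
                                ready.put(None)

            schedule_ready()
            workers = [threading.Thread(target=worker) for _ in range(num_workers)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()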

    Read the article

  • How to correctly relay TCP traffic between sockets?

    - by flukes1
    I'm trying to write some Python code that will establish an invisible relay between two TCP sockets. My current technique is to set up two threads, each one reading and subsequently writing 1kb of data at a time in a particular direction (i.e. one thread for A to B, one thread for B to A).

    This works for some applications and protocols, but it isn't foolproof: sometimes particular applications will behave differently when running through this Python-based relay. Some even crash. I think this is because when I finish performing a read on socket A, the program running there considers its data to have already arrived at B, when in fact I (the devious man in the middle) have yet to send it to B. In a situation where B isn't ready to receive the data (whereby send() blocks for a while), we are now in a state where A believes it has successfully sent data to B, yet I am still holding the data, waiting for the send() call to complete. I think this is the cause of the difference in behaviour that I've found in some applications while using my current relaying code.

    Have I missed something, or does that sound correct? If so, my real question is: is there a way around this problem? Is it possible to only read from socket A when we know that B is ready to receive data? Or is there another technique I can use to establish a truly 'invisible' two-way relay between [already open & established] TCP sockets?
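    A minimal single-threaded sketch of the "only read when the peer is writable" idea, using select() (error handling omitted; sendall() can still block briefly if a read returns more than the peer's send buffer accepts):

        import select

        def relay(sock_a, sock_b, bufsize=1024):
            # Map each socket to its peer so one loop serves both directions.
            peers = {sock_a: sock_b, sock_b: sock_a}
            socks = [sock_a, sock_b]
            while True:
                # Only consider reading from a socket whose peer can accept a
                # write right now; unread bytes stay in the kernel buffer, so
                # the sender's own send() blocks instead of us holding its data.
                _, writable, _ = select.select([], socks, [])
                watch = [peers[w] for w in writable]
                readable, _, _ = select.select(watch, [], [], 1.0)  # timeout to re-check peers
                for s in readable:
                    data = s.recv(bufsize)
                    if not data:
                        return  # one side closed; tear down the relay
                    peers[s].sendall(data)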

    Read the article

  • documenting class attributes

    - by intuited
    I'm writing a lightweight class whose attributes are intended to be publicly accessible, and only sometimes overridden in specific instantiations. There's no provision in the Python language for creating docstrings for class attributes, or any sort of attributes, for that matter. What is the accepted way, should there be one, to document these attributes? Currently I'm doing this sort of thing:

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow.

            Attributes:
            """
            flight_speed = 691
            __doc__ += """
                flight_speed (691)
                  The maximum speed that such a bird can attain.
            """

            nesting_grounds = "Raymond Luxury-Yacht"
            __doc__ += """
                nesting_grounds ("Raymond Luxury-Yacht")
                  The locale where these birds congregate to reproduce.
            """

            def __init__(self, **keyargs):
                """Initialize the Albatross from the keyword arguments."""
                self.__dict__.update(keyargs)

    Although this style doesn't seem to be expressly forbidden in the docstring style guidelines, it's also not mentioned as an option. The advantage here is that it provides a way to document attributes alongside their definitions, while still creating a presentable class docstring, and avoids having to write comments that reiterate the information from the docstring. I'm still kind of annoyed that I have to actually write the attributes twice; I'm considering using the string representations of the values in the docstring to at least avoid duplicating the default values.

    Is this a heinous breach of the ad hoc community conventions? Is it okay? Is there a better way? For example, it's possible to create a dictionary containing values and docstrings for the attributes and then add the contents to the class __dict__ and docstring towards the end of the class declaration; this would alleviate the need to type the attribute names and values twice.

    Edit: this last idea is, I think, not actually possible, at least not without dynamically building the class from data, which seems like a really bad idea unless there's some other reason to do that.

    I'm pretty new to Python and still working out the details of coding style, so unrelated critiques are also welcome.
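    As a point of comparison, documentation tools took their own route here: Sphinx's autodoc, for instance, recognizes both #: comments before an assignment and a bare string after one as attribute documentation, which avoids writing the attribute name twice. A sketch:

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow."""

            #: The maximum speed that such a bird can attain.
            flight_speed = 691

            nesting_grounds = "Raymond Luxury-Yacht"
            """The locale where these birds congregate to reproduce."""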

    Read the article

  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sorting algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK, and I should get a nice N*logN curve as a result. But the picture differs a bit. Things I've tried to investigate the issue:

    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess: debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae: nothing special or suspicious.
    - I had another implementation (a recursive one using the same merge function); it acts in a similar way.

    The more full test cycles I create, the more "jumps" there are in the curve. So how can this behaviour be explained and, hopefully, fixed?

    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2GHz notebook. I tried to turn off most other processes: the number of spikes went down, but at least one of them was still there.

    Read the article

  • documenting class properties

    - by intuited
    I'm writing a lightweight class whose properties are intended to be publicly accessible, and only sometimes overridden in specific instantiations. There's no provision in the Python language for creating docstrings for class properties, or any sort of properties, for that matter. What is the accepted way, should there be one, to document these properties? Currently I'm doing this sort of thing:

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow.

            Properties:
            """
            flight_speed = 691
            __doc__ += """
                flight_speed (691)
                  The maximum speed that such a bird can attain
            """

            nesting_grounds = "Throatwarbler Man Grove"
            __doc__ += """
                nesting_grounds ("Throatwarbler Man Grove")
                  The locale where these birds congregate to reproduce.
            """

            def __init__(self, **keyargs):
                """Initialize the Albatross from the keyword arguments."""
                self.__dict__.update(keyargs)

    Although this style doesn't seem to be expressly forbidden in the docstring style guidelines, it's also not mentioned as an option. The advantage here is that it provides a way to document properties alongside their definitions, while still creating a presentable class docstring, and avoids having to write comments that reiterate the information from the docstring. I'm still kind of annoyed that I have to actually write the properties twice; I'm considering using the string representations of the values in the docstring to at least avoid duplicating the default values.

    Is this a heinous breach of the ad hoc community conventions? Is it okay? Is there a better way? For example, it's possible to create a dictionary containing values and docstrings for the properties and then add the contents to the class __dict__ and docstring towards the end of the class declaration; this would alleviate the need to type the property names and values twice.

    I'm pretty new to Python and still working out the details of coding style, so unrelated critiques are also welcome.

    Read the article

  • Ajax form submission in Google App Engine with jQuery

    - by user271785
    I could not figure out why this is not working: I need to send a request to the server, generate a fragment of HTML in Python with the meanCal method, and then have that fragment embedded into the submitting HTML file (served by the calculation method), dynamically shown in the dyContent div. Everything should happen with a single click on the submit button in a form. Any suggestions? Thanks in advance. The submitting HTML:

        <div id="dyContent" style="height: 200px;">
            waiting for user... {{ mgs }}
        </div>
        <div id="leturetext">
            <form id="mean" method="post" action="/calculation">
                <select name="meanselect">
                    <option value=10>example</option>
                    <option value=11>exercise</option>
                </select>
                <input type="button" name="btnMean" value="Check Results" />
            </form>
        </div>

        <script type="text/javascript">
            $(document).ready(function() {
                //$("#btnMean").live("click", function() {
                $("#mean").submit(function(){
                    $.ajax({
                        type: "POST",
                        cache: false,
                        url: "/meanCal",
                        success: function(html) {
                            $("#dyContent").html(html);
                        }
                    });
                    return false;
                });
            });
        </script>

    Python:

        class MainHandler(webapp.RequestHandler):
            def get(self):
                path = self.request.path
                if doRender(self, path):
                    return
                doRender(self, 'index.htm')

        class calculationHandler(webapp.RequestHandler):
            def post(self):
                doRender(self, 'Diagnostic_stats.htm', {
                    'mgs': "refreshed.",
                })
            def get(self):
                doRender(self, 'Diagnostic_stats.htm')

        class meanHandler(webapp.RequestHandler):
            def get(self):
                global GL
                index = self.request.get('meanselect'.value)
                if (index == 10):
                    allData = GL.exampleData
                    dataString = ','.join(map(str, allData))
                    dataMean = (str)(stats.lmean(allData))
                    doRender(self, 'Result.htm', {
                        'dataIn': dataString,
                        'MEAN': "Example Mean is: " + dataMean,
                    })
                    return
                else:
                    allData = GL.exerciseData
                    dataString = ','.join(map(str, allData))
                    dataMean = (str)(stats.lmean(allData))
                    doRender(self, 'Result.htm', {
                        'dataIn': dataString,
                        'MEAN': "Exercise Mean is: " + dataMean,
                    })

        def main():
            global GL
            GL = GlobalVariables()
            application = webapp.WSGIApplication(
                [('/calculation', calculationHandler),
                 ('/meanCal', meanHandler),
                 ('.*', MainHandler),
                 ], debug=True)
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Read the article

  • Code bacteria: evolving mathematical behavior

    - by Stefano Borini
    It would not be my intention to put a link to my blog here, but I don't have any other way to clarify what I really mean. The article is quite long, and it's in three parts (1, 2, 3), but if you are curious, it's worth the read.

    A long time ago (5 years, at least) I wrote a Python program which generated "mathematical bacteria". These bacteria are Python objects with a simple opcode-based genetic code. You can feed them a number and they return a number, according to the execution of their code. I generate their genetic codes at random and apply environmental selection to those objects producing a result similar to a predefined expected value. Then I let them duplicate, introduce mutations, and evolve them. The result is quite interesting, as their genetic code basically learns how to solve simple equations, even for values outside the training dataset.

    Now, this thing is just a toy. I had time to waste and I wanted to satisfy my curiosity. However, I assume that something, in terms of research, has been done... I am reinventing the wheel here, I hope. Are you aware of more serious attempts at creating in-silico bacteria like the one I programmed?

    Please note that this is not really "genetic algorithms". Genetic algorithms is when you use evolution/selection to improve a vector of parameters against a given scoring function. This is kind of different: I optimize the code, not the parameters, against a given scoring function.

    Read the article

  • Using Pylint with Django

    - by rcreswick
    I would very much like to integrate pylint into the build process for my Python projects, but I have run into one show-stopper: one of the error types that I find extremely useful, E1101: %s %r has no %r member, constantly reports errors when using common Django fields, for example:

        E1101:125:get_user_tags: Class 'Tag' has no 'objects' member

    which is caused by this code:

        def get_user_tags(username):
            """
            Gets all the tags that username has used.

            Returns a query set.
            """
            return Tag.objects.filter(  ## This line triggers the error.
                tagownership__users__username__exact=username).distinct()

        # Here is the Tag class, models.Model is provided by Django:
        class Tag(models.Model):
            """
            Model for user-defined strings that help categorize Events on
            a per-user basis.
            """
            name = models.CharField(max_length=500, null=False, unique=True)

            def __unicode__(self):
                return self.name

    How can I tune pylint to properly take fields such as objects into account? (I've also looked into the Django source, and I have been unable to find the implementation of objects, so I suspect it is not "just" a class field. On the other hand, I'm fairly new to Python, so I may very well have overlooked something.)

    Edit: The only way I've found to tell pylint not to warn about these is by blocking all errors of the type (E1101), which is not an acceptable solution, since that is (in my opinion) an extremely useful error. If there is another way, without augmenting the pylint source, please point me to the specifics. :)

    See here for a summary of the problems I've had with pychecker and pyflakes; they've proven to be far too unstable for general use. (In pychecker's case, the crashes originated in the pychecker code, not the source it was loading/invoking.)
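    One narrower option than disabling E1101 globally, sketched here: pylint accepts per-line pragmas, so the check can be silenced only where Django's dynamically-added managers trip it (older pylint releases spell the pragma disable-msg instead of disable):

        def get_user_tags(username):
            """Gets all the tags that username has used."""
            # Silence the no-member check on this statement only.
            return Tag.objects.filter(  # pylint: disable=E1101
                tagownership__users__username__exact=username).distinct()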

    Read the article

  • Listing serial (COM) ports on Windows?

    - by Eli Bendersky
    Hello, I'm looking for a robust way to list the available serial (COM) ports on a Windows machine. There's this post about using WMI, but I would like something less .NET-specific: I want to get the list of ports in a Python or a C++ program, without .NET. I currently know of two other approaches:

    - Reading the information in the HARDWARE\\DEVICEMAP\\SERIALCOMM registry key. This looks like a great option, but is it robust? I can't find a guarantee online or in MSDN that this registry cell indeed always holds the full list of available ports.
    - Trying to call CreateFile on COMN, with N a number from 1 to something. This isn't good enough, because some COM ports aren't named COMN. For example, some virtual COM ports created are named CSNA0, CSNB0, and so on, so I wouldn't rely on this method.

    Any other methods/ideas/experience to share? Edit: by the way, here's a simple Python implementation of reading the port names from the registry:

        import _winreg as winreg
        import itertools

        def enumerate_serial_ports():
            """ Uses the Win32 registry to return an
                iterator of serial (COM) ports
                existing on this computer.
            """
            path = 'HARDWARE\\DEVICEMAP\\SERIALCOMM'
            try:
                key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
            except WindowsError:
                raise IterationError

            for i in itertools.count():
                try:
                    val = winreg.EnumValue(key, i)
                    yield (str(val[1]), str(val[0]))
                except EnvironmentError:
                    break
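    For what it's worth, a hypothetical usage loop for the generator above (the port name comes first, then the registry's internal device name):

        for port, device_name in enumerate_serial_ports():
            print port, device_name  # e.g. COM3 \Device\Serial0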

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yields the following error message:

        pickle.PicklingError: Can't pickle : it's not found as __builtin__.PySwigObject

    Does anyone know of a way around this, or of an alternative library without this problem? Thanks.

    Edit/Update: I am now training and attempting to save the classifier with the following code:

        mc = multi.OneAgainstRest(SVM());
        mc.train(dataset_pyml, saveSpace=False);
        for i, classifier in enumerate(mc.classifiers):
            filename = os.path.join(prefix, labels[i] + ".svm");
            classifier.save(filename);

    Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed saveSpace=False to the training function. However, I am still getting an error:

        ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False)

    However, I am passing saveSpace=False... so, how do I save the classifier(s)?

    P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"; that will get you this error. Also, a note on version information:

        [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python
        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import PyML
        >>> print PyML.__version__
        0.7.0

    Read the article

  • Any high-level languages that can use C libraries?

    - by Isaiah
    I know this question could be in vain, but it's just out of curiosity, and I'm still very much a newb. ^^ Anyway, I've been loving Python for some time while learning it. My problem is obviously speed. I'd like to get into indie game creation, and for the short term, 2D and pygame will work. But I'd eventually like to branch into 3D, and Python is really too slow to make anything 3D and professional. So I'm wondering if there has ever been work to create a high-level language able to import and use C libraries? I've looked at Genie and it seems able to use certain libraries, but I'm not sure to what extent. Will I be able to use it for OpenGL programming, or in a C game engine? I do know some Lisp and enjoy it a lot, but there aren't a great many libraries out there for it. Which leads to the problem: I can't stand C syntax, but C has libraries galore that I could need! And game engines like Irrlicht. Is there any language that can be used in place of C around C? Thanks so much, guys.
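    Worth noting in this context: Python itself can already call into C shared libraries through the standard ctypes module, no C syntax required. A minimal sketch (the library name is platform-specific; this one assumes Linux):

        import ctypes

        # Load the C standard library; on Windows this would be
        # something like ctypes.CDLL('msvcrt') instead.
        libc = ctypes.CDLL('libc.so.6')

        # Declaring the signature lets ctypes convert arguments correctly.
        libc.printf.argtypes = [ctypes.c_char_p]
        libc.printf(b'hello from C, via Python\n')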

    Read the article

  • 404 when getting private YouTube video even when logged in with the owner's account using gdata-pyth

    - by Gonzalo
    If a YouTube video is set as private and I try to fetch it using the gdata Python API, a 404 RequestError is raised, even though I have done a programmatic login with the account that owns that video:

        from gdata.youtube import service

        yt_service = service.YouTubeService(email=my_email,
                                            password=my_password,
                                            client_id=my_client_id,
                                            source=my_source,
                                            developer_key=my_developer_key)
        yt_service.ProgrammaticLogin()
        yt_service.GetYouTubeVideoEntry(video_id='IcVqemzfyYs')

        ---------------------------------------------------------------------------
        RequestError                              Traceback (most recent call last)
        <ipython console>
        /usr/lib/python2.4/site-packages/gdata/youtube/service.pyc in GetYouTubeVideoEntry(self, uri, video_id)
            203         elif video_id and not uri:
            204             uri = '%s/%s' % (YOUTUBE_VIDEO_URI, video_id)
        --> 205         return self.Get(uri, converter=gdata.youtube.YouTubeVideoEntryFromString)
            206
            207     def GetYouTubeContactFeed(self, uri=None, username='default'):
        /usr/lib/python2.4/site-packages/gdata/service.pyc in Get(self, uri, extra_headers, redirects_remaining, encoding, converter)
           1100                 'body': result_body}
           1101         else:
        -> 1102             raise RequestError, {'status': server_response.status,
           1103                 'reason': server_response.reason, 'body': result_body}
           1104

        RequestError: {'status': 404, 'body': 'Video not found', 'reason': 'Not Found'}

    This happens every time, unless I go into my YouTube account (through the YouTube website) and set it public; after that I can set it as private and back to public using the Python API. Am I missing a step, or is there another (or any) way to fetch a YouTube video set as private from the API? Thanks in advance.

    Read the article

  • What is the point of heterogeneous arrays?

    - by aharon
    I know that more-dynamic-than-Java languages, like Python and Ruby, often allow you to place objects of mixed types in arrays, like so:

        ["hello", 120, ["world"]]

    What I don't understand is why you would ever use a feature like this. If I want to store heterogeneous data in Java, I'll usually create an object for it. For example, say a User has an int ID and a String name. While I see that in Python/Ruby/PHP you could do something like this:

        [["John Smith", 000], ["Smith John", 001], ...]

    this seems a bit less safe/OO than creating a class User with attributes ID and name and then having your array:

        [<User: name="John Smith", id=000>, <User: name="Smith John", id=001>, ...]

    where those <User ...> things represent User objects. Is there a reason to use the former over the latter in languages that support it? Or is there some bigger reason to use heterogeneous arrays?

    N.B. I am not talking about arrays that include different objects that all implement the same interface or inherit from the same parent, e.g.:

        class Square extends Shape
        class Triangle extends Shape

        [new Square(), new Triangle()]

    because that is, to the programmer at least, still a homogeneous array, as you'll be doing the same thing with each shape (e.g., calling the draw() method), only via the methods commonly defined between the two.
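    One illustration of where such mixed lists arise naturally in Python (an editorial aside, not from the original post): data deserialized from external formats, where the list mirrors the wire format rather than any class hierarchy:

        import json

        # A JSON array is heterogeneous by nature; parsing it yields a
        # Python list with mixed types and no class declarations needed.
        record = json.loads('["John Smith", 1000, {"active": true}]')
        name, user_id, extra = record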

    Read the article

  • Fixing a multi-threaded pycurl crash.

    - by Rook
    If I run pycurl in a single thread, everything works great. If I run pycurl in two threads, Python will access-violate. The first thing I did was report the problem to pycurl, but the project died about three years ago, so I'm not holding my breath. My (hackish) solution is to build a second version of pycurl called "pycurl_thread" which will only be used by the second thread. I downloaded the pycurl module from SourceForge and made a total of four line changes. But Python is still crashing. My guess is that even though this is a module with a different name (import pycurl_thread), it is still sharing memory with the original module (import pycurl). How should I solve this problem?

    Changes in pycurl.c:

        initpycurl(void)

    to

        initpycurl_thread(void)

    and

        m = Py_InitModule3("pycurl", curl_methods, module_doc);

    to

        m = Py_InitModule3("pycurl_thread", curl_methods, module_doc);

    Changes in setup.py:

        PACKAGE = "pycurl"
        PY_PACKAGE = "curl"

    to

        PACKAGE = "pycurl_thread"
        PY_PACKAGE = "curl_thread"

    Here is the segfault I'm getting. This is happening within the C function do_curl_perform():

        *** longjmp causes uninitialized stack frame ***: python2.7 terminated
        ======= Backtrace: =========
        /lib/libc.so.6(__fortify_fail+0x37)[0x7f209421b537]
        /lib/libc.so.6(+0xff4c9)[0x7f209421b4c9]
        /lib/libc.so.6(__longjmp_chk+0x33)[0x7f209421b433]
        /usr/lib/libcurl.so.4(+0xe3a5)[0x7f20931da3a5]
        /lib/libpthread.so.0(+0xfb40)[0x7f209532eb40]
        /lib/libc.so.6(__poll+0x53)[0x7f20941f6203]
        /usr/lib/libcurl.so.4(Curl_socket_ready+0x116)[0x7f2093208876]
        /usr/lib/libcurl.so.4(+0x2faec)[0x7f20931fbaec]
        /usr/local/lib/python2.7/dist-packages/pycurl.so(+0x892b)[0x7f209342c92b]
        python2.7(PyEval_EvalFrameEx+0x58a1)[0x4adf81]
        python2.7(PyEval_EvalCodeEx+0x891)[0x4af7c1]
        python2.7(PyEval_EvalFrameEx+0x538b)[0x4ada6b]
        python2.7(PyEval_EvalFrameEx+0x65f9)[0x4aecd9]
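    Separate from the module-renaming hack, the __longjmp_chk frame in that backtrace points at a known libcurl behaviour: it uses signals and longjmp to time out synchronous DNS lookups, and that machinery is not thread-safe. libcurl's own threading guidance is to set CURLOPT_NOSIGNAL on every handle used from threads; in pycurl that looks like:

        import pycurl

        c = pycurl.Curl()
        # Disable libcurl's signal/longjmp timeout machinery, which is
        # unsafe when handles run in multiple threads.
        c.setopt(pycurl.NOSIGNAL, 1)
        c.setopt(pycurl.URL, 'http://example.com/')
        c.perform()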

    Read the article
