Search Results

Search found 15224 results on 609 pages for 'parallel python'.


  • How to make the parser print the help message rather than an error and exit

    - by fluter
    Hi, I am using argparse to handle command-line arguments. If no arguments are specified, I want to print the help message, but right now the parser prints an error and then exits. My code is:

        def main():
            print "in abing/start/main"
            parser = argparse.ArgumentParser(prog="abing")  #, usage="%(prog)s <command> [args] [--help]")
            parser.add_argument("-v", "--verbose", action="store_true", default=False, help="show verbose output")
            subparsers = parser.add_subparsers(title="commands")
            bkr_subparser = subparsers.add_parser("beaker", help="beaker inspection")
            bkr_subparser.set_defaults(command=beaker_command)
            bkr_subparser.add_argument("-m", "--max", action="store", default=3, type=int, help="max resubmit count")
            bkr_subparser.add_argument("-g", "--grain", action="store", default="J", choices=["J", "RS", "R", "T", "job", "recipeset", "recipe", "task"], type=str, help="resubmit selection granularity")
            bkr_subparser.add_argument("job_ids", nargs=1, action="store", help="list of job id to be monitored")
            et_subparser = subparsers.add_parser("errata", help="errata inspection")
            et_subparser.set_defaults(command=errata_command)
            et_subparser.add_argument("-w", "--workflows", action="store_true", help="generate workflows for the erratum")
            et_subparser.add_argument("-r", "--run", action="store_true", help="generate workflows, and run for the erratum")
            et_subparser.add_argument("-s", "--start-monitor", action="store_true", help="start monitor the errata system")
            et_subparser.add_argument("-d", "--daemon", action="store_true", help="run monitor into daemon mode")
            et_subparser.add_argument("erratum", action="store", nargs=1, metavar="ERRATUM", help="erratum id")
            if len(sys.argv) == 1:
                parser.print_help()
                return
            args = parser.parse_args()
            args.command(args)
            return

    How can I do that? Thanks.
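    One common workaround, sketched here rather than taken from the original post: subclass ArgumentParser and override error() so that any parse failure prints the full help text instead of only the short usage line; the explicit len(sys.argv) check above still covers the no-arguments case.

        import argparse
        import sys

        class HelpOnErrorParser(argparse.ArgumentParser):
            """Print the full help message whenever parsing fails."""
            def error(self, message):
                sys.stderr.write("error: %s\n" % message)
                self.print_help()
                sys.exit(2)

        parser = HelpOnErrorParser(prog="abing")
        # ... add_argument()/add_subparsers() calls as in the question ...
        if len(sys.argv) == 1:   # no arguments at all: show help and stop
            parser.print_help()
            sys.exit(0)
        args = parser.parse_args()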

    Read the article

  • Paramiko ssh output stops at --more--

    - by Anesh
    The output stops printing at --More--. Any idea how to get the rest of the output?

        >>> import paramiko
        >>> ssh = paramiko.SSHClient()
        >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        >>> conn = ssh.connect("ipaddress", username="user", password="pass")
        >>> channel = ssh.invoke_shell()
        >>> channel.send("en\n")
        3
        >>> channel.send("password\n")
        9
        >>> channel.send("show security local-user-list\n")
        30
        >>> results = ''
        >>> channel.send("\n")
        1
        >>> results += channel.recv(5000)
        >>> print results
        bluecoat>en
        Password:
        bluecoat#show security local-user-list
        Default List: local_user_database
        Append users loaded from file to default list: false
        local_user_database Lockout parameters: Max failed attempts: 60 Lockout duration: 3600 Reset interval: 7200 Users: Groups:
        admin_local Lockout parameters: Max failed attempts: 60 Lockout duration: 3600 Reset interval: 7200 Users: <username> Hashed Password: Enabled: true Groups:
        <username> Hashed Password: Enabled: true
        **--More--**

    As you can see above, the output stops printing at --More--. Any idea how to get the output to print all the way to the end?
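    A sketch of one way around the device's pager, assuming the prompt text is literally "--More--": keep reading while data is available and send a space whenever the prompt appears. Many appliances also let you disable paging for the session before running the command, but the exact command is device-specific.

        import time

        def read_until_done(channel, page_prompt="--More--", idle_timeout=5.0):
            """Collect shell output, pressing space whenever the pager prompts."""
            output = ""
            last_data = time.time()
            while time.time() - last_data < idle_timeout:
                if channel.recv_ready():
                    chunk = channel.recv(5000)
                    output += chunk
                    last_data = time.time()
                    if page_prompt in chunk:
                        channel.send(" ")          # advance the pager one page
                else:
                    time.sleep(0.2)
            return output.replace(page_prompt, "")

        results = read_until_done(channel)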

    Read the article

  • Appengine filter inequality and ordering fails

    - by davezor
    I think I'm overlooking something simple here; I can't imagine this is impossible to do. I want to filter by a datetime attribute and then order the result by a ranking integer attribute. When I try to do this:

        query.filter("submitted >=", thisweek).order("ranking")

    I get the following:

        BadArgumentError: First ordering property must be the same as inequality filter property,
        if specified for this query; received ranking, expected submitted

    Huh? What am I missing? Thanks.
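    This is a datastore restriction rather than a query bug: when a query has an inequality filter, the first sort order must be on that same property. A sketch of the usual workaround, ordering on the filtered property and sorting by ranking in memory (the fetch limit here is an arbitrary assumption):

        results = (query.filter("submitted >=", thisweek)
                        .order("submitted")
                        .fetch(1000))
        results.sort(key=lambda entity: entity.ranking)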

    Read the article

  • A RAM error of big array

    - by flint
    I have a big file, more than 400 MB. That file holds 13496*13496 numbers, meaning 13496 rows and 13496 columns, and I want to read them into an array. This is my code:

        _L1 = [[0 for col in range(13496)] for row in range(13496)]
        _L1file = open('distanceCMD.function.txt')
        while (i<13496):
            print "i="+str(i)
            _strlf = _L1file.readline()
            _strlf = _strlf.split('\t')
            _strlf = _strlf[:-1]
            _L1[i] = _strlf
            i += 1
        _L1file.close()

    And this is my error message:

        MemoryError:
        File "D:\research\space-function\ART3.py", line 30, in <module>
            _strlf = _strlf.split('\t')
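    A sketch of a lower-memory approach, assuming the values fit in float32 and roughly 700 MB of RAM is available for the array itself: preallocate a single NumPy array and fill it row by row, instead of building nested Python lists of strings, which cost far more memory per element.

        import numpy as np

        N = 13496
        data = np.empty((N, N), dtype=np.float32)   # ~13496*13496*4 bytes
        with open('distanceCMD.function.txt') as f:
            for i, line in enumerate(f):
                # mirror the original parsing: tab-separated values, trailing tab dropped
                data[i] = np.array(line.rstrip('\t\n').split('\t'), dtype=np.float32)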

    Read the article

  • django admin site - filtering available objects for user

    - by JPG
    I have models that belong to some 'group' (Company class). I want to add users who also belong to one group and who should only be able to edit/manage/add objects that are members of the associated group. Something like:

        class Company():
            ...

        class Something():
            company = ForeignKey(Company)

        user Microsoft_admin
            company = ForeignKey(Company)

    This user should only see and edit objects belonging to the associated Company in the admin interface. How do I accomplish that?
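    A sketch of the usual approach, restricting the changelist per request by overriding the admin queryset; the profile attribute linking a User to a Company is an assumption, and on older Django versions the hook is named queryset() rather than get_queryset():

        from django.contrib import admin

        class SomethingAdmin(admin.ModelAdmin):
            def get_queryset(self, request):
                qs = super(SomethingAdmin, self).get_queryset(request)
                if request.user.is_superuser:
                    return qs
                # assumes a profile object that ties the logged-in user to a Company
                return qs.filter(company=request.user.profile.company)

        admin.site.register(Something, SomethingAdmin)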

    Read the article

  • How to discover table properties from SQLAlchemy mapped object

    - by ssaboum
    Hi, the point is that I have a class mapped to a table, in my case in a declarative way, and I want to "discover" the table properties, columns, names and relations from this class:

        engine = create_engine('sqlite:///' + databasePath, echo=True)

        # setting up root class for declarative declaration
        Base = declarative_base(bind=engine)

        class Ship(Base):
            __tablename__ = 'ships'
            id = Column(Integer, primary_key=True)
            name = Column(String(255))

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return "<Ship('%s')>" % (self.name)

    So now my goal is to get the table columns and their properties for the "Ship" class from another piece of code. I guess I could deal with it using instrumentation, but is there any way provided by the SQLAlchemy API? Thank you.
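    A minimal sketch of introspecting the mapped table through the declarative class; the __table__ attribute is attached by the declarative layer, and newer SQLAlchemy versions (0.8+) also offer sqlalchemy.inspect() for the mapper-level view:

        table = Ship.__table__
        print table.name                      # 'ships'
        for column in table.columns:
            print column.name, column.type, column.primary_key

        # mapper-level inspection on newer versions:
        # from sqlalchemy import inspect
        # mapper = inspect(Ship)
        # print mapper.columns.keys(), mapper.relationships.keys()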

    Read the article

  • Can this django query be improved?

    - by Hobhouse
    Given a model structure like this:

        class Book(models.Model):
            user = models.ForeignKey(User)

        class Readingdate(models.Model):
            book = models.ForeignKey(Book)
            date = models.DateField()

    One book may have several readingdates. How do I list books having at least one readingdate within a specific year? I can do this:

        from_date = datetime.date(2010,1,1)
        to_date = datetime.date(2010,12,31)
        book_ids = Readingdate.objects\
            .filter(date__range=(from_date,to_date))\
            .values_list('book_id', flat=True)
        books_read_2010 = Book.objects.filter(id__in=book_ids)

    Is it possible to do this with one queryset, or is this the best way?
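    A sketch of a single-queryset version that follows the reverse relation from Book to Readingdate; distinct() guards against duplicate books when one book has several 2010 reading dates:

        books_read_2010 = Book.objects.filter(
            readingdate__date__year=2010
        ).distinct()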

    Read the article

  • Removing the port number from URL

    - by DrewSSP
    I'm new to anything related to servers and am trying to deploy a Django application. Today I bought a domain name for the app and am having trouble configuring it so that the base URL does not need the port number at the end. I have to type www.trackthecharts.com:8001 to see the website when I only want to use www.trackthecharts.com. I think the problem is somewhere in my nginx, gunicorn or supervisor configuration.

    gunicorn_config.py:

        command = '/opt/myenv/bin/gunicorn'
        pythonpath = '/opt/myenv/top-chart-app/'
        bind = '162.243.76.202:8001'
        workers = 3

    nginx config:

        server {
            server_name 162.243.76.202;
            access_log off;

            location /static/ {
                alias /opt/myenv/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    supervisor config:

        [program:top_chart_gunicorn]
        command=/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py djangoTopChartApp.wsgi
        autostart=true
        autorestart=true
        stderr_logfile=/var/log/supervisor_gunicorn.err.log
        stdout_logfile=/var/log/supervisor_gunicorn.out.log

    Thanks for taking a look.
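    A sketch of the likely missing pieces, assuming DNS for www.trackthecharts.com already points at 162.243.76.202: nginx should answer on port 80 for the domain name (so no port is needed in the URL), and gunicorn only needs to listen on localhost because nginx proxies to it.

        # nginx: serve the domain on the default HTTP port
        server {
            listen 80;
            server_name trackthecharts.com www.trackthecharts.com;
            # ... same location blocks as above ...
        }

        # gunicorn_config.py: bind to localhost only; nginx handles the public side
        bind = '127.0.0.1:8001'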

    Read the article

  • How to make django test framework read from live database?

    - by lfborjas
    I realize there's a similar question here, but this one has a different approach: I have a Django app that runs queries over data indexed with djapian. I'd like to write unit tests for this app's search component, and obviously I'd need the Django settings module and all connections to the database active, so the test runner that Django provides seems ideal. However, the Django testing framework creates a dummy database, and I'd hate to dump all my data to a fixture and then index it (the tests would take forever!). My data isn't at risk because the tests would only read from the database, so how could this be achieved? I'm new to this whole unit-testing thing, so the solution of writing a new test runner that I read about in that similar question doesn't enlighten me a bit, at least not without some details.

    Read the article

  • problem installing mysqldb for python2.6

    - by apoorva
    Hi. My MySQL database is located on a remote machine, so I don't have any local copy of MySQL on my local machine. I get a registry key error (file not found):

        serverKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, options['registry_key'])
        WindowsError: [Error 2] The system cannot find the file specified

    I think the installer expects a local copy of MySQL. How do I install MySQLdb for a database residing on another machine?

    Read the article

  • Unable to plot graph using matplotlib

    - by Aman Deep Gautam
    I have the following code, which searches all the directories in the current directory and then takes data from the files in them to plot a graph. The data is read correctly (verified by printing), but no points are plotted on the graph.

        import argparse
        import os
        import matplotlib.pyplot as plt

        #find the present working directory
        pwd=os.path.dirname(os.path.abspath(__file__))

        #find all the folders in the present working directory.
        dirs = [f for f in os.listdir('.') if os.path.isdir(f)]

        plt.figure()
        plt.xlim(0, 20000)
        plt.ylim(0, 1)

        for directory in dirs:
            os.chdir(os.path.join(pwd, directory));
            chd_dir = os.path.dirname(os.path.abspath(__file__))
            files = [ fl for fl in os.listdir('.') if os.path.isfile(fl) ]
            print files
            for f in files:
                f_obj = open(os.path.join(chd_dir, f), 'r')
                list_x = []
                list_y = []
                for i in xrange(0,4):
                    f_obj.next()
                for line in f_obj:
                    temp_list = line.split()
                    print temp_list
                    list_y.append(temp_list[0])
                    list_x.append(temp_list[1])
                print 'final_lsit'
                print list_x
                print list_y
                plt.plot(list_x, list_y, 'r.')
                f_obj.close()
            os.chdir(pwd)

        plt.savefig("test.jpg")

    The input files look like the following:

        5 865 14709 15573
        14709 1.32667e-06 664 0.815601
        14719 1.55333e-06 674 0.813277
        14729 1.82667e-06 684 0.810185
        14739 1.4e-06 694 0.808459

    Can anybody help me with why this is happening? Being new to this, I would also like to know of a tutorial covering this kind of plotting, since the one I was following left me stuck here. Any help appreciated.
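    One thing worth ruling out, sketched against the inner loop above: split() yields strings, so list_x and list_y hold text rather than numbers, and with the fixed axis limits the points may not be placed where expected. Converting the fields to floats before appending removes that ambiguity:

        for line in f_obj:
            temp_list = line.split()
            list_y.append(float(temp_list[0]))   # y value as a number
            list_x.append(float(temp_list[1]))   # x value as a number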

    Read the article

  • Finding a list of indices from master array using secondary array with non-unique entries

    - by fideli
    I have a master array of length n of id numbers that apply to other analogous arrays with corresponding data for elements in my simulation that belong to those id numbers (e.g. data[id]). Were I to generate a list of id numbers of length m separately and need the information in the data array for those ids, what is the best method of getting a list of indices idx of the original array of ids in order to extract data[idx]? That is, given:

        a=numpy.array([1,3,4,5,6]) # master array
        b=numpy.array([3,4,3,6,4,1,5]) # secondary array

    I would like to generate:

        idx=numpy.array([1,2,1,4,2,0,3])

    The array a is typically in sequential order but it's not a requirement. Also, array b will most definitely have repeats and will not be in any order. My current method of doing this is:

        idx=numpy.array([numpy.where(a==bi)[0][0] for bi in b])

    I timed it using the following test:

        a=(numpy.random.uniform(100,size=100)).astype('int')
        b=numpy.repeat(a,100)

        timeit method1(a,b)
        10 loops, best of 3: 53.1 ms per loop

    Is there a better way of doing this?
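    A vectorized sketch using numpy.searchsorted, assuming every id in b actually occurs in a: sort a once, locate each element of b in the sorted copy, and map the positions back through the sort order.

        import numpy as np

        a = np.array([1, 3, 4, 5, 6])
        b = np.array([3, 4, 3, 6, 4, 1, 5])

        order = np.argsort(a)                       # no-op when a is already sorted
        idx = order[np.searchsorted(a[order], b)]
        # idx -> array([1, 2, 1, 4, 2, 0, 3])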

    Read the article

  • django url matching

    - by ben
    Can anyone see why this isn't working? I'm fairly new to Django, so any help would be much appreciated.

    Actual URL: http://127.0.0.1:8000/2010/may/12/my-second-blog-post/

    urls.py:

        (r'(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/(?P<slug>[-w]+)/$',
         'object_detail',
         dict(info_dict, slug_field='slug', template_name='blog/detail.html')),
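    Assuming the pattern really is written as shown (and the missing characters are not just lost formatting), the character classes have dropped their backslashes: d, w and [-w] should be \d, \w and [-\w]. A corrected sketch of the same entry:

        (r'^(?P<year>\d{4})/(?P<month>[a-z]{3})/(?P<day>\w{1,2})/(?P<slug>[-\w]+)/$',
         'object_detail',
         dict(info_dict, slug_field='slug', template_name='blog/detail.html')),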

    Read the article

  • argparse coding issue

    - by Carl Skonieczny
    Write a script that takes two optional boolean arguments, "--verbose" and "--live", and two required string arguments, "base" and "pattern". Please set up the command-line processing using argparse.

    This is the code I have so far for the question. I know I am getting close, but something is not quite right. Any help is much appreciated. Thanks for all the quick, useful feedback.

        def main():
            import argparse
            parser = argparse.ArgumentParser(description='')
            parser.add_argument('base', type=str)
            parser.add_arguemnt('--verbose', action='store_true')
            parser.add_argument('pattern', type=str)
            parser.add_arguemnt('--live', action='store_true')
            args = parser.parse_args()
            print(args.base(args.pattern))
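    A corrected sketch for comparison: the two calls spelled add_arguemnt raise AttributeError, and args.base(args.pattern) tries to call a string; printing the parsed values separately is probably what was intended.

        import argparse

        def main():
            parser = argparse.ArgumentParser(description='')
            parser.add_argument('base', type=str)
            parser.add_argument('pattern', type=str)
            parser.add_argument('--verbose', action='store_true')
            parser.add_argument('--live', action='store_true')
            args = parser.parse_args()
            print(args.base, args.pattern, args.verbose, args.live)

        if __name__ == '__main__':
            main()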

    Read the article

  • Problem opening Solr *.jsp pages with urllib2.urlopen.

    - by nestling
    I'm trying to open a page at http://localhost:8983/solr/admin/stats.jsp, but urllib2.urlopen returns a blank string. It works fine for solr/ and solr/admin, but for all the pages above /solr/admin/ I get nothing but a blank string.

        76]: t = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp')
        77]: s = t.read()
        78]: s
        78]:
        79]: type(s)
        79]: <type 'str'>
        80]: urllib2.urlopen('http://localhost:8983/solr/admin/registry.jsp').read()
        80]:
        In [84]: urllib2.urlopen('http://localhost:8983/solr/admin/schema.jsp').read()
        Out[84]:

    I know this isn't a problem with urllib2, but beyond that I am at a loss. I wish Solr (or Jetty) had an easy-to-get-to log file, so that perhaps it could tell me its side of the story.

    Read the article

  • Function for averages of tuples in a dictionary

    - by Billy Mann
    I have a string and a dictionary in the form:

        ('the head', {'exploded': (3.5, 1.0), 'the': (5.0, 1.0), "puppy's": (9.0, 1.0), 'head': (6.0, 1.0)})

    Each parenthesized pair is a tuple corresponding to (score, standard deviation). I'm taking the average of just the first value in each tuple. I've tried this:

        def score(string, d):
            for word in d:
                (score, std) = d[word]
                d[word] = float(score), float(std)
                if word in string:
                    word = string.lower()
                    number = len(string)
            return sum([v[0] for v in d.values()]) / float(len(d))
            if len(string) == 0:
                return 0

    When I run:

        print score('the head', {'exploded': (3.5, 1.0), 'the': (5.0, 1.0), "puppy's": (9.0, 1.0), 'head': (6.0, 1.0)})

    I should get 5.5, but instead I'm getting 5.875. I can't figure out what in my function is keeping me from getting the correct answer.
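    A sketch of where the difference comes from: the return line averages every score in the dictionary, (3.5 + 5.0 + 9.0 + 6.0) / 4 = 5.875, while the intended result averages only the words that appear in the string, (5.0 + 6.0) / 2 = 5.5. One way to restrict the average to those words:

        def score(string, d):
            words = string.lower().split()
            scores = [float(d[w][0]) for w in words if w in d]
            if not scores:
                return 0
            return sum(scores) / len(scores)

        # score('the head', {...}) -> (5.0 + 6.0) / 2 == 5.5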

    Read the article

  • Django dictionary in templates: Grab key from another objects attribute

    - by Jordan Messina
    I have a dictionary called number_devices that I'm passing to a template; the dictionary keys are the ids of a list of objects I'm also passing to the template (called implementations). I'm iterating over the list of objects and then trying to use the object's id to get a value out of the dict, like so:

        {% for implementation in implementations %}
            {{ number_devices.implementation.id }}
        {% endfor %}

    Unfortunately number_devices.implementation is evaluated first, and then .id is evaluated on the result, obviously returning and displaying nothing. I can't use parentheses like:

        {{ number_devices.(implementation.id) }}

    because I get a parse error. How do I get around this annoyance in Django templates? Thanks for any help!
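    A sketch of the usual workaround, a small custom template filter that does the dictionary lookup; the templatetags module name used here is an assumption:

        # myapp/templatetags/dict_helpers.py
        from django import template

        register = template.Library()

        @register.filter
        def get_item(dictionary, key):
            """Return dictionary[key], or None if the key is missing."""
            return dictionary.get(key)

    and in the template:

        {% load dict_helpers %}
        {% for implementation in implementations %}
            {{ number_devices|get_item:implementation.id }}
        {% endfor %}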

    Read the article

  • SQLAlchemy sessions - DetachedInstanceError?

    - by benjaminhkaiser
    I have a function that attempts to take a list of usernames, look each one up in a user table, and then add them to a membership table. If even one username is invalid, I want the entire list to be rolled back, including any users that have already been processed. I thought that using sessions was the best way to do this, but I'm running into a DetachedInstanceError:

        DetachedInstanceError: Instance <Organization at 0x7fc35cb5df90> is not bound to a Session;
        attribute refresh operation cannot proceed

    Full stack trace is here. The error seems to trigger when I attempt to access the user (model) object that is returned by the query. From my reading I understand that it has something to do with there being multiple sessions, but none of the suggestions I saw on other threads worked for me. Code is below:

        def add_members_in_bulk(organization_eid, users):
            """Add users to an organization in bulk - helper function for add_member()"""
            """Returns "success" on success and id of first failed student on failure"""
            session = query_session.get_session()
            session.begin_nested()
            users = users.split('\n')
            for u in users:
                try:
                    user = user_lookup.by_student_id(u)
                except ObjectNotFoundError:
                    session.rollback()
                    return u
                if user:
                    membership.add_user_to_organization(
                        user.entity_id,
                        organization_eid,
                        '',
                        []
                    )
                    session.flush()
            session.commit()
            return 'success'

    Here's membership.add_user_to_organization:

        def add_user_to_organization(user_eid, organization_eid, title, tag_ids):
            """Add a User to an Organization with the given title"""
            user = user_lookup.by_eid(user_eid)
            organization = organization_lookup.by_eid(organization_eid)
            new_membership = OrganizationMembership(
                organization_eid=organization.entity_id,
                user_eid=user.entity_id,
                title=title)
            new_membership.tags = [get_tag_by_id(tag_id) for tag_id in tag_ids]
            crud.add(new_membership)

    and here is the lookup by ID query:

        def by_student_id(student_id, include_disabled=False):
            """Get User by RIN"""
            try:
                return get_query_set(include_disabled).filter(User.student_id == student_id).one()
            except NoResultFound:
                raise ObjectNotFoundError("User with RIN %s does not exist." % student_id)

    Read the article

  • Setting environment variables for an application under supervisord

    - by user1434844
    I'm running an application from supervisord and I have to set up an environment for it. There are about 30 environment variables that need to be set. I've tried putting them all on one big environment= line, and that doesn't seem to work. I've also tried multiple environment= lines, and that doesn't seem to work either. I've tried both with and without ' around the env values. What's the best way to set up my environment so that it remains intact under supervisord control? Should I be calling my actual program (tornado, fwiw) from a shell script with the environment preloaded there? Ideally, I'd like to put all of the environment variables into an include file and load them with supervisor, but I'm open to doing it another way.
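    A sketch of the two approaches that usually hold up, with placeholder names throughout: keep environment= as a single comma-separated list of KEY="value" pairs (double-quoting the values, and escaping any literal % as %%), or move the variables into a small wrapper script that exports them and then execs the real program.

        [program:tornado_app]
        command=/opt/app/bin/run_tornado.sh
        environment=FIRST_VAR="one",SECOND_VAR="two",THIRD_VAR="three"

    run_tornado.sh (placeholder paths):

        #!/bin/sh
        set -a
        . /opt/app/env.sh          # exports the ~30 variables
        exec /opt/app/bin/tornado_server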

    Read the article

  • Get local network interface addresses using only proc?

    - by Matt Joiner
    How can I obtain the (IPv4) addresses for all network interfaces using only proc? After some extensive investigation I've discovered the following:

    - ifconfig makes use of SIOCGIFADDR, which requires open sockets and advance knowledge of all the interface names. It also isn't documented in any manual pages on Linux.
    - proc contains /proc/net/dev, but this is a list of interface statistics.
    - proc contains /proc/net/if_inet6, which is exactly what I need, but for IPv6.
    - Generally interfaces are easy to find in proc, but actual addresses are very rarely used except where explicitly part of some connection.
    - There's a system call called getifaddrs, which is very much a "magical" function you'd expect to see in Windows. It's also implemented on BSD. However, it's not very text-oriented, which makes it difficult to use from non-C languages.

    Read the article

  • Convert an int to a list of individual digits, faster?

    - by user478514
    All, I want to define a converter between an int and a list of its digits: int(987654321) <=> [9, 8, 7, 6, 5, 4, 3, 2, 1]. If the number has fewer than 9 digits, for example 10, the list should be zero-padded on the left: [0, 0, 0, 0, 0, 0, 0, 1, 0]. If it has more than 9 digits, for example 9987654321, the list should be [9, 9, 8, 7, 6, 5, 4, 3, 2, 1].

        >>> i
        987654321
        >>> l
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> z = [0]*(len(unit) - len(str(l)))
        >>> z.extend(l)
        >>> l = z
        >>> unit
        [100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1]
        >>> sum([x*y for x,y in zip(l, unit)])
        987654321
        >>> int("".join([str(x) for x in l]))
        987654321
        >>> l1 = [int(x) for x in str(i)]
        >>> z = [0]*(len(unit) - len(str(l1)))
        >>> z.extend(l1)
        >>> l1 = z
        >>> l1
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> a = [i//x for x in unit]
        >>> b = [a[x] - a[x-1]*10 for x in range(9)]
        >>> if len(b) == len(a): b[0] = a[0]  # fix the a[-1] issue
        >>> b
        [9, 8, 7, 6, 5, 4, 3, 2, 1]

    I tested the solutions above but found they may not be as fast or as simple as I'd like, and they may have a length-related bug inside. Can anyone share a better solution for this kind of conversion? Thanks!
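    A short sketch that covers both cases (left-padding shorter numbers to 9 digits, leaving longer numbers untruncated), assuming non-negative integers:

        def digits(i, width=9):
            """Digits of i, left-padded with zeros up to `width`, never truncated."""
            return [int(c) for c in str(i).zfill(width)]

        # digits(987654321)  -> [9, 8, 7, 6, 5, 4, 3, 2, 1]
        # digits(10)         -> [0, 0, 0, 0, 0, 0, 0, 1, 0]
        # digits(9987654321) -> [9, 9, 8, 7, 6, 5, 4, 3, 2, 1]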

    Read the article

  • Which class should store the lookup table?

    - by max
    The world contains agents at different locations, with only a single agent at any location. Each agent knows where he's at, but I also need to quickly check if there's an agent at a given location. Hence, I also maintain a map from locations to agents. I have a problem deciding where this map belongs: class World, class Agent (as a class attribute), or elsewhere.

    In the following I put the lookup table, agent_locations, in class World. But now agents have to call world.update_agent_location every time they move. This is very annoying; what if I decide later to track other things about the agents, apart from their locations? Would I need to add calls back to the world object all across the Agent code?

        class World:
            def __init__(self, n_agents):
                # ...
                self.agents = {}
                self.agent_locations = {}
                for id in range(n_agents):
                    x, y = self.find_location()
                    agent = Agent(self, x, y)
                    self.agents.append(agent)
                    self.agent_locations[x, y] = agent

            def update_agent_location(self, agent, x, y):
                del self.agent_locations[agent.x, agent.y]
                self.agent_locations[x, y] = agent

            def update(self):
                # next step in the simulation
                for agent in self.agents:
                    agent.update()  # next step for this agent
            # ...

        class Agent:
            def __init__(self, world, x, y):
                self.world = world
                self.x, self.y = x, y

            def move(self, x1, y1):
                self.world.update_agent_location(self, x1, y1)
                self.x, self.y = x1, y1

            def update():
                # find a good location that is not occupied and move there
                for x, y in self.valid_locations():
                    if not self.location_is_good(x, y):
                        continue
                    if self.world.agent_locations[x, y]:
                        # location occupied
                        continue
                    self.move(x, y)

    I can instead put agent_locations in class Agent as a class attribute, but that only works when I have a single World object. If I later decide to instantiate multiple World objects, the lookup tables would need to be world-specific. I am sure there's a better solution...

    EDIT: I added a few lines to the code to show how agent_locations is used. Note that it's only used from inside Agent objects, but I don't know if that would remain the case forever.

    Read the article

  • How to show readable output in my code; my code is: os.urandom(64)

    - by zjm1126
    My code is:

        print os.urandom(64)

    which outputs:

        "D:\Python25\pythonw.exe" "D:\zjm_code\a.py"
        \xd0\xc8=<\xdbD' \xdf\xf0\xb3>\xfc\xf2\x99\x93 =S\xb2\xcd'\xdbD\x8d\xd0\\xbc{&YkD[\xdd\x8b\xbd\x82\x9e\xad\xd5\x90\x90\xdcD9\xbf9.\xeb\x9b>\xef#n\x84

    which isn't readable, so I tried this:

        print os.urandom(64).decode("utf-8")

    but then I get:

        "D:\Python25\pythonw.exe" "D:\zjm_code\a.py"
        Traceback (most recent call last):
          File "D:\zjm_code\a.py", line 17, in <module>
            print os.urandom(64).decode("utf-8")
          File "D:\Python25\lib\encodings\utf_8.py", line 16, in decode
            return codecs.utf_8_decode(input, errors, True)
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 0-3: invalid data

    What should I do to get human-readable output?
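    A sketch of the usual answer: os.urandom returns raw random bytes, not text in any encoding, so decoding will regularly fail; encode the bytes into a printable form instead.

        import os
        import binascii
        import base64

        raw = os.urandom(64)
        print binascii.hexlify(raw)    # hex digits, two characters per byte
        print base64.b64encode(raw)    # shorter, still printable ASCII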

    Read the article

  • How to change the OSX menubar in wxPython without any opened window?

    - by gyim
    I am writing a wxPython application that remains open after closing all of its windows - so you can still drag & drop new files onto the OSX dock icon (I do this with myApp.SetExitOnFrameDelete(False)). Unfortunately if I close all the windows, the OSX menubar will only contain a "Help" menu. I would like to add at least a File/Open menu item, or just keep the menubar of the main window. Is this somehow possible in wxPython? In fact, I would be happy with a non-wxPython hack as well (for example, setting the menu in pyobjc). wxPython development in OSX is such a hack anyway ;)

    Read the article

  • Reset selection of wx.lib.calendar.Calendar control?

    - by Joseph
    I have a wx.lib.calendar.Calendar control (not wx.lib.calendar.CalendarCtrl!). I am selecting a number of days using the following function call:

        self.cal.AddSelect([days], 'green', 'white')

    This works, and draws the days highlighted. However, I cannot work out how to reverse this (i.e., clear the selection so the days go back to their normal colouring). Any hints, please?

    Read the article
