Search Results

Search found 13596 results on 544 pages for 'mechanize python'.

Page 167/544

  • Python - output from functions?

    - by Seafoid
    Hi, I have a very rudimentary question. Assume I define a function, e.g.:

        def foo():
            x = 'hello world'

    How do I get the function to return x in such a way that I can use it as the input for another function, or use the variable within the body of a program? When I use return and then refer to the variable from another function I get a NameError. Thanks, S :-)
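
    A minimal sketch of the usual pattern (the names are illustrative, not from the linked question): return the value and bind it at the call site.

        def foo():
            x = 'hello world'
            return x              # return hands x's value back to the caller

        def shout(message):
            return message.upper()

        result = foo()            # capture the returned value in a variable
        print(shout(result))      # pass it straight into another function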

    Read the article

  • Python OpenGL Can't Redraw Scene

    - by RobbR
    I'm getting started with OpenGL and shaders using GLUT and PyOpenGL. I can draw a basic scene, but for some reason I can't get it to update. E.g. any changes I make during idle(), display(), or reshape() are not reflected. Here are the methods:

        def display(self):
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()
            glUseProgram(self.shader_program)
            self.m_vbo.bind()
            glEnableClientState(GL_VERTEX_ARRAY)
            glVertexPointerf(self.m_vbo)
            glDrawArrays(GL_TRIANGLES, 0, len(self.m_vbo))
            glutSwapBuffers()
            glutReportErrors()

        def idle(self):
            test_change += .1
            self.m_vbo = vbo.VBO(
                array([
                    [ test_change, 1, 0 ],  # triangle
                    [ -1, -1, 0 ],
                    [  1, -1, 0 ],
                    [  2, -1, 0 ],          # square
                    [  4, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2,  1, 0 ],
                ], 'f')
            )
            glutPostRedisplay()

        def begin(self):
            glutInit()
            glutInitWindowSize(400, 400)
            glutCreateWindow("Simple OpenGL")
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
            glutDisplayFunc(self.display)
            glutReshapeFunc(self.reshape)
            glutMouseFunc(self.mouse)
            glutMotionFunc(self.motion)
            glutIdleFunc(self.idle)
            self.define_shaders()
            glutMainLoop()

    I'd like to implement a time step in idle(), but even basic changes to the vertices or translations and rotations on the MODELVIEW matrix don't display. It just puts up the initial state and does not update. Am I missing something?
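
    A hedged, self-contained sketch of a GLUT loop that does animate, illustrating two likely culprits in the code above: the display mode is requested before the window is created and matches the buffer-swap call (GLUT_DOUBLE pairs with glutSwapBuffers()), and the animation state lives on the instance (a bare test_change += .1 inside idle() raises UnboundLocalError). It assumes PyOpenGL and freeglut are installed; the shader/VBO details are omitted.

        from OpenGL.GL import *
        from OpenGL.GLUT import *

        class Demo:
            def __init__(self):
                self.angle = 0.0

            def display(self):
                glClear(GL_COLOR_BUFFER_BIT)
                glLoadIdentity()
                glRotatef(self.angle, 0, 0, 1)
                glBegin(GL_TRIANGLES)
                glVertex3f(0.0, 0.5, 0.0)
                glVertex3f(-0.5, -0.5, 0.0)
                glVertex3f(0.5, -0.5, 0.0)
                glEnd()
                glutSwapBuffers()                 # pairs with GLUT_DOUBLE below

            def idle(self):
                self.angle += 0.1                 # state persists because it lives on self
                glutPostRedisplay()

            def begin(self):
                glutInit()
                glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)   # before glutCreateWindow
                glutInitWindowSize(400, 400)
                glutCreateWindow(b"Simple OpenGL")
                glutDisplayFunc(self.display)
                glutIdleFunc(self.idle)
                glutMainLoop()

        Demo().begin()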

    Read the article

  • Python. Draw rectangle in basemap

    - by user2928318
    I need to add several rectangles to my basemap. I need four rectangles with lat and lon ranges as below:

        1) llcrnrlon=-10,  urcrnrlon=10, llcrnrlat=35, urcrnrlat=60
        2) llcrnrlon=10.5, urcrnrlon=35, llcrnrlat=35, urcrnrlat=60
        3) llcrnrlon=35.5, urcrnrlon=52, llcrnrlat=30, urcrnrlat=55
        4) llcrnrlon=-20,  urcrnrlon=35, llcrnrlat=20, urcrnrlat=34.5

    My script is below. I found the "polygon" package for adding lines, but I do not know exactly how to use it; a sketch of one approach follows. Please help me! Thanks a lot for your help in advance!

        from mpl_toolkits.basemap import Basemap

        m = basemaputpart.Basemap(llcrnrlon=-60, llcrnrlat=20, urcrnrlon=60, urcrnrlat=70,
                                  resolution='i', projection='cyl', lon_0=0, lat_0=45)

        lon1 = np.array([[-180. + j * 0.5 for j in range(721)] for i in range(181)])
        lat1 = np.array([[i * 0.5 for j in range(721)] for i in range(181)])
        Nx1, Ny1 = m(lon1, lat1, inverse=False)

        toplot = data[:, :]
        toplot[data == 0] = np.nan
        toplot = np.ma.masked_invalid(toplot)

        plt.pcolor(Nx1, Ny1, np.log(toplot), vmin=0, vmax=5)
        cbar = plt.colorbar()
        m.drawcoastlines(zorder=2)
        m.drawcountries(zorder=2)
        plt.show()
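
    A hedged sketch (draw_box is a helper name of my own, not a Basemap API): with projection='cyl' the map coordinates equal lon/lat, so each box can be added as an unfilled matplotlib Polygon patch on the current axes.

        import matplotlib.pyplot as plt
        from matplotlib.patches import Polygon
        from mpl_toolkits.basemap import Basemap

        m = Basemap(llcrnrlon=-60, llcrnrlat=20, urcrnrlon=60, urcrnrlat=70,
                    resolution='i', projection='cyl')

        def draw_box(m, ax, llcrnrlon, urcrnrlon, llcrnrlat, urcrnrlat):
            lons = [llcrnrlon, urcrnrlon, urcrnrlon, llcrnrlon]
            lats = [llcrnrlat, llcrnrlat, urcrnrlat, urcrnrlat]
            x, y = m(lons, lats)                 # identity for 'cyl', kept for generality
            ax.add_patch(Polygon(list(zip(x, y)), closed=True, fill=False,
                                 edgecolor='red', linewidth=2))

        ax = plt.gca()
        for box in [(-10, 10, 35, 60), (10.5, 35, 35, 60),
                    (35.5, 52, 30, 55), (-20, 35, 20, 34.5)]:
            draw_box(m, ax, *box)

        m.drawcoastlines()
        m.drawcountries()
        plt.show()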

    Read the article

  • Python: concatenate generator and item

    - by TarGz
    I have a generator (numbers) and a value (number). I would like to iterate over these as if they were one sequence:

        i for i in tuple(my_generator) + (my_value,)

    The problem is, as far as I understand, this creates 3 tuples only to immediately discard them, and it also copies the items in my_generator once. A better approach would be:

        def con(seq, item):
            for i in seq:
                yield i
            yield item

        i for i in con(my_generator, my_value)

    But I was wondering whether it is possible to do it without that function definition.
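
    A standard-library approach that avoids both the temporary tuples and the helper generator: itertools.chain (shown here with a stand-in generator).

        from itertools import chain

        def numbers():
            yield 1
            yield 2

        my_value = 3
        for i in chain(numbers(), (my_value,)):
            print(i)        # 1, 2, 3 -- the generator is still consumed lazily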

    Read the article

  • python tarfile adding files without directory hierarchy

    - by theactiveactor
    When I invoke add() on a tarfile object with a file path, the file is added to the tarball with its directory hierarchy. In other words, if I extract the tarfile, the directories in the original hierarchy are reproduced. Is there a way to simply add a plain file without directory info, so that untarring the resulting tarball produces a flat list of files?
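
    A hedged sketch: tarfile's add() takes an arcname argument, so storing just the basename flattens the hierarchy inside the archive (the paths below are illustrative).

        import os
        import tarfile

        paths = ['data/raw/a.txt', 'data/processed/b.txt']   # example files
        for p in paths:
            os.makedirs(os.path.dirname(p), exist_ok=True)
            open(p, 'w').write(p + '\n')

        with tarfile.open('flat.tar.gz', 'w:gz') as tar:
            for p in paths:
                tar.add(p, arcname=os.path.basename(p))      # store only the basename

        # extracting flat.tar.gz now yields a.txt and b.txt with no directories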

    Read the article

  • How to lazy load a data structure (python)

    - by Anton Geraschenko
    I have some way of building a data structure (out of some file contents, say):

        def loadfile(FILE):
            return # some data structure created from the contents of FILE

    So I can do things like:

        puppies = loadfile("puppies.csv") # wait for loadfile to work
        kitties = loadfile("kitties.csv") # wait some more
        print len(puppies)
        print puppies[32]

    In the above example, I wasted a bunch of time actually reading kitties.csv and creating a data structure that I never used. I'd like to avoid that waste without constantly checking if not kitties whenever I want to do something. I'd like to be able to do:

        puppies = lazyload("puppies.csv") # instant
        kitties = lazyload("kitties.csv") # instant
        print len(puppies)                # wait for loadfile
        print puppies[32]

    So if I don't ever try to do anything with kitties, loadfile("kitties.csv") never gets called. Is there some standard way to do this? After playing around with it for a bit, I produced the following solution, which appears to work correctly and is quite brief. Are there some alternatives? Are there drawbacks to using this approach that I should keep in mind?

        class lazyload:
            def __init__(self, FILE):
                self.FILE = FILE
                self.F = None
            def __getattr__(self, name):
                if not self.F:
                    print "loading %s" % self.FILE
                    self.F = loadfile(self.FILE)
                return object.__getattribute__(self.F, name)

    What might be even better is if something like this worked:

        class lazyload:
            def __init__(self, FILE):
                self.FILE = FILE
            def __getattr__(self, name):
                self = loadfile(self.FILE)  # this never gets called again
                                            # since self is no longer a
                                            # lazyload instance
                return object.__getattribute__(self, name)

    But this doesn't work because self is local. It actually ends up calling loadfile every time you do anything.
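
    A hedged sketch of one simple alternative: keep the laziness explicit in a small caching wrapper instead of proxying attribute access (implicit special-method lookups such as len() bypass __getattr__ on new-style classes, which is a common trap with the proxy approach). loadfile below is a stand-in for the real loader.

        class LazyLoad:
            def __init__(self, path, loader):
                self._path = path
                self._loader = loader
                self._data = None

            def get(self):
                if self._data is None:                       # load on first use only
                    self._data = self._loader(self._path)
                return self._data

        def loadfile(path):                                  # stand-in loader
            return ["%s line %d" % (path, i) for i in range(100)]

        puppies = LazyLoad("puppies.csv", loadfile)          # instant
        kitties = LazyLoad("kitties.csv", loadfile)          # instant, never loaded below
        print(len(puppies.get()))                            # triggers the real load once
        print(puppies.get()[32])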

    Read the article

  • Python: Lists containing tuples and long ints

    - by Yasmin
    I have a list containing tuples of long integers. The list looks like this:

        table = [(1L,), (1L,), (1L,), (2L,), (2L,), (2L,), (3L,), (3L,)]

    How do I convert the table so that the output is a flat list of strings?

        table = ['1', '1', '1', '2', '2', '2', '3', '3']

    For information purposes, the data was obtained from a MySQL database.
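
    A minimal sketch: unpack each single-element tuple and convert the value to a string (shown without the L suffix, which Python 2 prints but does not require).

        table = [(1,), (1,), (1,), (2,), (2,), (2,), (3,), (3,)]
        flat = [str(value) for (value,) in table]
        print(flat)     # ['1', '1', '1', '2', '2', '2', '3', '3']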

    Read the article

  • Text to a PNG on App Engine (Python)

    - by Bemmu
    Note: I am cross-posting this from the App Engine group because I got no answers there. As part of my site about Japan, I have a feature where the user can get a large PNG, for use as a desktop background, that shows the user's name in Japanese. After switching my site hosting entirely to App Engine, I removed this particular feature because I could not find any way to render text to a PNG using the Images API. In other words, how would you go about outputting a Unicode string on top of an image of known dimensions (1024x768, for example), so that the text is as large as possible horizontally and centered vertically? Is there a way to do this in App Engine, or is there some external service besides App Engine that you could recommend to make this easier (besides running ImageMagick on your own server)?
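
    A hedged sketch using Pillow rather than the App Engine Images API (which has no text-drawing call). It assumes a recent Pillow is available in the runtime (textbbox() needs Pillow >= 8) and that a CJK-capable font file ships with the app; the font filename is an assumption.

        from PIL import Image, ImageDraw, ImageFont

        def name_background(text, width=1024, height=768,
                            font_path="NotoSansJP-Regular.otf"):
            img = Image.new("RGB", (width, height), "white")
            draw = ImageDraw.Draw(img)
            size = 8
            while True:   # grow the font until the text is ~95% of the image width
                font = ImageFont.truetype(font_path, size + 8)
                left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
                if right - left > width * 0.95:
                    break
                size += 8
            font = ImageFont.truetype(font_path, size)
            left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
            w, h = right - left, bottom - top
            draw.text(((width - w) // 2 - left, (height - h) // 2 - top),
                      text, font=font, fill="black")
            return img

        name_background(u"山田太郎").save("background.png")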

    Read the article

  • Python: combine two neighbor list components

    - by kame
    When I use this code I get elements which each contain only one digit or letter. How do I combine every two neighbors?

        data = '4D41544C414220352E30204D41542D66696C652C20506C6174666F726D3A20504357494E2C2043726561746564206F6E3A20576564204D61792030352031363A31393A3337203230313020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020200001494D0F00000026000000789CE36360607000623620E680D220C00AE53343312310BA00692620E604F351010025BE00C8'
        data2 = list(data)
        print data2
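
    A minimal sketch: slice the string in steps of two instead of listing single characters (shown here on just the first few bytes of the string above).

        data = '4D41544C4142'
        pairs = [data[i:i + 2] for i in range(0, len(data), 2)]
        print(pairs)                                      # ['4D', '41', '54', '4C', '41', '42']
        print(''.join(chr(int(p, 16)) for p in pairs))    # the same pairs decoded as ASCII: 'MATLAB'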

    Read the article

  • Python: NameError: 'self' is not defined

    - by Rosarch
    I must be doing something stupid. I'm running this in Google App Engine:

        def render(self, template_name, template_data):
            path = os.path.join(os.path.dirname(__file__), 'static/templates/%s.html' % template_name)
            self.response.out.write(template.render(path, template_data))

    This gives an error:

        Traceback (most recent call last):
          File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3192, in _HandleRequest
            self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
          File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3135, in _Dispatch
            base_env_dict=env_dict)
          File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 516, in Dispatch
            base_env_dict=base_env_dict)
          File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2394, in Dispatch
            self._module_dict)
          File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2304, in ExecuteCGI
            reset_modules = exec_script(handler_path, cgi_path, hook)
          File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2200, in ExecuteOrImportScript
            exec module_code in script_module.__dict__
          File "main.py", line 22, in <module>
            class MainHandler(webapp.RequestHandler):
          File "main.py", line 38, in MainHandler
            self.writeOut(template.render(path, template_data))
        NameError: name 'self' is not defined

    What am I doing wrong?
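
    A hedged guess based on the traceback (File "main.py", line 38, in MainHandler): the self.writeOut(...) call executes while the class body is being defined, not inside a method, and class bodies have no self. A minimal illustration with made-up names:

        class Handler:
            def render(self, template_name, template_data):
                return "%s: %r" % (template_name, template_data)

            def get(self):                      # inside a method, self is the first parameter
                return self.render("index", {"user": "guest"})

            # self.render("index", {})          # at class level this raises
            #                                   # NameError: name 'self' is not defined

        print(Handler().get())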

    Read the article

  • python multiprocessing.Process.Manager not producing consistent results?

    - by COpython
    I've written the following code to illustrate the problem I'm seeing. I'm trying to use a multiprocessing.Manager().list() to keep track of a list and increment random indices of that list. Each time, 100 processes are spawned, and each process increments a random index of the list by 1. Therefore, one would expect the SUM of the resulting list to be the same each time, correct? I get something between 203 and 205.

        from multiprocessing import Process, Manager
        import random

        class MyProc(Process):
            def __init__(self, A):
                Process.__init__(self)
                self.A = A

            def run(self):
                i = random.randint(0, len(self.A)-1)
                self.A[i] = self.A[i] + 1

        if __name__ == '__main__':
            procs = []
            M = Manager()
            a = M.list(range(15))

            print('A: {0}'.format(a))
            print('sum(A) = {0}'.format(sum(a)))

            for i in range(100):
                procs.append(MyProc(a))

            map(lambda x: x.start(), procs)
            map(lambda x: x.join(), procs)

            print('A: {0}'.format(a))
            print('sum(A) = {0}'.format(sum(a)))
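
    A hedged sketch of one explanation: self.A[i] = self.A[i] + 1 is a read-modify-write that is not atomic across processes, so concurrent increments can overwrite each other. Serialising the update with a shared Lock (added here; not in the original code) makes the total come out to exactly +100.

        import random
        from multiprocessing import Process, Manager, Lock

        class MyProc(Process):
            def __init__(self, A, lock):
                Process.__init__(self)
                self.A = A
                self.lock = lock

            def run(self):
                i = random.randint(0, len(self.A) - 1)
                with self.lock:                    # serialise the read-modify-write
                    self.A[i] = self.A[i] + 1

        if __name__ == '__main__':
            manager = Manager()
            a = manager.list(range(15))
            lock = Lock()
            before = sum(a)
            procs = [MyProc(a, lock) for _ in range(100)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(sum(a) - before)                 # 100 every time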

    Read the article

  • Python Terminated Thread Cannot Restart

    - by Mel Kaye
    Hello, I have a thread that gets executed when some action occurs. Given the logic of the program, the thread cannot possibly be started while another instance of it is still running. Yet when I call it a second time, I get a "RuntimeError: thread already started" error. I added a check to see if it is actually alive using the Thread.is_alive() function, and it is actually dead. What am I doing wrong? I can provide more details as are needed.
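
    A minimal sketch of the usual explanation: threading.Thread objects are one-shot, so start() can only ever be called once per instance, even after the thread has finished. Create a fresh Thread for each run (the factory function below is illustrative).

        import threading

        def work():
            print("doing the work")

        def start_worker():
            t = threading.Thread(target=work)   # a new Thread object each time
            t.start()
            return t

        t1 = start_worker()
        t1.join()
        t2 = start_worker()                     # no RuntimeError: it is a different instance
        t2.join()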

    Read the article

  • Python unittest: Generate multiple tests programmatically?

    - by Rosarch
    I have a function to test, under_test, and a set of expected input/output pairs:

        [
            (2, 332),
            (234, 99213),
            (9, 3),
            # ...
        ]

    I would like each one of these input/output pairs to be tested in its own test_* method. Is that possible? This is sort of what I want, but forcing every single input/output pair into a single test:

        class TestPreReqs(unittest.TestCase):
            def setUp(self):
                self.expected_pairs = [(23, 55), (4, 32)]

            def test_expected(self):
                for exp in self.expected_pairs:
                    self.assertEqual(under_test(exp[0]), exp[1])

        if __name__ == '__main__':
            unittest.main()
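
    A hedged sketch of the classic approach: generate one test_* method per pair with a closure factory and attach them to the TestCase with setattr (under_test below is a stand-in, and the pairs are made up to match it).

        import unittest

        def under_test(x):                      # stand-in for the real function
            return x * 2

        class TestPreReqs(unittest.TestCase):
            pass

        def make_test(arg, expected):
            def test(self):
                self.assertEqual(under_test(arg), expected)
            return test

        for i, (arg, expected) in enumerate([(2, 4), (9, 18), (234, 468)]):
            setattr(TestPreReqs, 'test_pair_%d' % i, make_test(arg, expected))

        if __name__ == '__main__':
            unittest.main()                     # reports each pair as its own test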

    Read the article

  • is there a better way of replacing duplicates in a list (python)

    - by myeu2
    Given a list:

        l1 = ['a', 'b', 'c', 'a', 'a', 'b']

    desired output:

        ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']

    I created the following code to get the output. It's messy...

        for index in range(len(l1)):
            counter = 1
            list_of_duplicates_for_item = [dup_index for dup_index, item in enumerate(l1)
                                           if item == l1[index] and l1.count(l1[index]) > 1]
            for dup_index in list_of_duplicates_for_item[1:]:
                l1[dup_index] = l1[dup_index] + '_' + str(counter)
                counter = counter + 1

    Is there a more pythonic way of doing this? I couldn't find anything on the web.
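
    A hedged sketch of one tidier approach: a single pass with a per-item counter keeps the first occurrence unchanged and numbers the repeats (number_duplicates is an illustrative name).

        from collections import defaultdict

        def number_duplicates(items):
            seen = defaultdict(int)
            out = []
            for item in items:
                if seen[item]:                              # a repeat: add the suffix
                    out.append('%s_%d' % (item, seen[item]))
                else:                                       # first occurrence: keep as-is
                    out.append(item)
                seen[item] += 1
            return out

        print(number_duplicates(['a', 'b', 'c', 'a', 'a', 'b']))
        # ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']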

    Read the article

  • Python: Comparing specific columns in two csv files

    - by coder999
    Say that I have two CSV files (file1 and file2) with contents as shown below:

        file1: fred,43,Male,"23,45",blue,"1, bedrock avenue"
        file2: fred,39,Male,"23,45",blue,"1, bedrock avenue"

    I would like to compare these two CSV records to see if columns 0, 2, 3, 4, and 5 are the same. I don't care about column 1. What's the most pythonic way of doing this? EDIT: Some example code would be appreciated.
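
    A hedged sketch using the csv module, so the quoted "23,45" fields stay intact, comparing every column except index 1 (rows_match is an illustrative helper, not a library function).

        import csv

        def rows_match(line1, line2, ignore=(1,)):
            row1 = next(csv.reader([line1]))
            row2 = next(csv.reader([line2]))
            keep = lambda row: [v for i, v in enumerate(row) if i not in ignore]
            return keep(row1) == keep(row2)

        a = 'fred,43,Male,"23,45",blue,"1, bedrock avenue"'
        b = 'fred,39,Male,"23,45",blue,"1, bedrock avenue"'
        print(rows_match(a, b))      # True: only column 1 (the age) differs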

    Read the article

  • Python ctypes in_dll string assignment

    - by ackdesha
    I could use some help assigning to a global C variable in a DLL using ctypes. The following is an example of what I'm trying. test.c contains the following:

        #include <stdio.h>

        char name[60];

        void test(void)
        {
            printf("Name is %s\n", name);
        }

    On Windows (cygwin) I build a DLL (Test.dll) as follows:

        gcc -g -c -Wall test.c
        gcc -Wall -mrtd -mno-cygwin -shared -W1,--add-stdcall-alias -o Test.dll test.o

    When trying to modify the name variable and then calling the C test function using the ctypes interface, I get the following:

        >>> from ctypes import *
        >>> dll = windll.Test
        >>> dll
        <WinDLL 'Test', handle ... at ...>
        >>> f = c_char_p.in_dll(dll, 'name')
        >>> f
        c_char_p(None)
        >>> f.value = 'foo'
        >>> f
        c_char_p('foo')
        >>> dll.test()
        Name is Name is 48+? 13

    Why does the test function print garbage in this case?
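
    A hedged sketch of one likely fix: in the DLL, name is a 60-byte char array rather than a char pointer, and assigning to a c_char_p's .value only rebinds the Python object without touching the DLL's memory. Mapping the symbol as a ctypes array copies the bytes in place.

        from ctypes import c_char, windll

        dll = windll.Test                        # same DLL as above
        buf = (c_char * 60).in_dll(dll, 'name')  # view onto the array inside the DLL
        buf.value = b'foo'                       # copies the bytes (plus a NUL) in place
        dll.test()                               # expected to print: Name is foo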

    Read the article

  • Python stdout, \r progress bar and sshd with Putty not updating regularly

    - by Kyle MacFarlane
    I have a dead simple progress "bar" using something like the following:

        import sys
        from time import sleep

        current = 0
        limit = 50
        while current <= limit:
            sys.stdout.write('\rSynced %s/%s orders' % (current, limit))
            current += 1
            sleep(1)

    It works fine, except over ssh with PuTTY. PuTTY only updates every 3 minutes or when a line ends with \n. Is this a PuTTY setting, an sshd_config setting, or can I code around it?
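
    A likely cause is output buffering rather than a PuTTY setting: partial \r lines can sit in the stdout buffer until a newline arrives. A minimal sketch that flushes after each update:

        import sys
        from time import sleep

        limit = 50
        for current in range(limit + 1):
            sys.stdout.write('\rSynced %s/%s orders' % (current, limit))
            sys.stdout.flush()      # push the partial line out over the ssh channel now
            sleep(1)
        sys.stdout.write('\n')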

    Read the article

  • Efficient way to build a MySQL update query in Python

    - by ensnare
    I have a class variable called attributes which lists the instance variables I want to update in a database:

        attributes = ['id', 'first_name', 'last_name', 'name', 'name_url',
                      'email', 'password', 'password_salt', 'picture_id']

    Each of the class attributes is updated upon instantiation. I would like to loop through each of the attributes and build a MySQL update query in the form of:

        UPDATE members SET id = self._id, first_name = self._first_name ...

    Thanks.
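
    A hedged sketch (the underscore-prefixed attribute names and the members table are assumptions carried over from the question): build the SET clause from the attribute list but keep the values as query parameters, so the database driver handles quoting and escaping.

        attributes = ['id', 'first_name', 'last_name', 'name', 'name_url',
                      'email', 'password', 'password_salt', 'picture_id']

        def build_update(obj, table='members', key='id'):
            columns = [a for a in attributes if a != key]
            set_clause = ', '.join('%s = %%s' % c for c in columns)
            sql = 'UPDATE %s SET %s WHERE %s = %%s' % (table, set_clause, key)
            params = [getattr(obj, '_' + c) for c in columns] + [getattr(obj, '_' + key)]
            return sql, params

        class Member(object):                       # stand-in object for illustration
            pass

        member = Member()
        for a in attributes:
            setattr(member, '_' + a, 'x')           # stand-in values
        sql, params = build_update(member)
        print(sql)                                  # UPDATE members SET first_name = %s, ... WHERE id = %s
        # cursor.execute(sql, params)               # with e.g. MySQLdb / PyMySQL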

    Read the article

  • Python - Removing duplicates from a string

    - by Daniel
        def remove_duplicates(strng):
            """
            Returns a string which is the same as the argument except only the
            first occurrence of each letter is present. Upper and lower case
            letters are treated as different. Only duplicate letters are removed;
            other characters such as spaces or numbers are not changed.
            >>> remove_duplicates('apple')
            'aple'
            >>> remove_duplicates('Mississippi')
            'Misp'
            >>> remove_duplicates('The quick brown fox jumps over the lazy dog')
            'The quick brown fx jmps v t lazy dg'
            >>> remove_duplicates('121 balloons 2 u')
            '121 balons 2 u'
            """
            s = strng.split()
            return strng.replace(s[0], "")

    I'm writing a function to get rid of duplicate letters, but so far I have been playing around for an hour and can't get anything to work. Help would be appreciated, thanks.
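
    A hedged sketch of one way to satisfy the docstring: track the letters already seen and keep everything that is not a repeated letter.

        def remove_duplicates(strng):
            seen = set()
            out = []
            for ch in strng:
                if ch.isalpha():
                    if ch in seen:
                        continue            # drop repeated letters only
                    seen.add(ch)
                out.append(ch)              # non-letters are always kept
            return ''.join(out)

        print(remove_duplicates('Mississippi'))         # 'Misp'
        print(remove_duplicates('121 balloons 2 u'))    # '121 balons 2 u'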

    Read the article

  • Accessing items from a dictionary using pickle efficiently in Python

    - by user248237
    I have a large dictionary mapping keys (which are strings) to objects. I pickled this large dictionary, and at certain times I want to pull out only a handful of entries from it. The dictionary usually has thousands of entries in total. When I load the dictionary using pickle, as follows:

        from cPickle import *

        # my dictionary from pickle, containing thousands of entries
        mydict = load(open('mypickle.pickle'))

        # accessing only a handful of entries here
        for entry in relevant_entries:
            # find relevant entry
            value = mydict[entry]

    I notice that it can take up to 3-4 seconds to load the entire pickle, which I don't need, since I access only a tiny subset of the dictionary entries later on (shown above). How can I make it so pickle only loads the entries that I need from the dictionary, to make this faster? Thanks.
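
    A hedged sketch of one common alternative: store the mapping with the standard-library shelve module, which pickles each value separately, so looking up a handful of keys does not deserialise the whole dictionary (the data below is a stand-in).

        import shelve

        mydict = {'key%d' % i: list(range(i)) for i in range(1000)}   # stand-in data
        relevant_entries = ['key3', 'key500']

        # One-off conversion: write the dictionary into a shelf on disk.
        db = shelve.open('mydict.shelf')
        for key, value in mydict.items():
            db[key] = value
        db.close()

        # Later: open the shelf and unpickle only the entries that are accessed.
        db = shelve.open('mydict.shelf')
        values = dict((entry, db[entry]) for entry in relevant_entries)
        db.close()
        print(values['key3'])     # [0, 1, 2]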

    Read the article
