Search Results

Search found 14486 results on 580 pages for 'python idle'.


  • Extracting a .app from a zip file in Python, using ZipFile

    - by Yakattak
    I'm trying to extract new revisions of Chromium.app from their snapshots. I can download the file fine, but when it comes to extracting it, ZipFile either extracts the chrome-mac folder within as a file, says that directories don't exist, etc. I am very new to Python, so these errors make little sense to me. Here is what I have so far:

    ```python
    import urllib2

    response = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/LATEST')
    latestRev = response.read()
    print latestRev

    # we have the revision, now we need to download the zip and extract it
    latestZip = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/%i/chrome-mac.zip' % (int(latestRev)),
                                '~/Desktop/ChromiumUpdate/%i-update' % (int(latestRev)))

    # declare some vars that hold paths
    workingDir = '/Users/slehan/Desktop/ChromiumUpdate/'
    chromiumZipPath = '%s%i-update.zip' % (workingDir, (int(latestRev)))
    chromiumAppPath = 'chrome-mac/'  # the path of the chromium executable within the zip file
    chromiumAppExtracted = '%s/Chromium.app' % (workingDir)  # path of the extracted executable

    output = open(chromiumZipPath, 'w')  # delete any current file there
    output.write(latestZip.read())
    output.close()

    # we have the .zip, now we need to extract the Chromium.app file; it's in ziproot/chrome-mac/Chromium.app
    import zipfile, os
    zippedFile = open(chromiumZipPath)
    zippedChromium = zipfile.ZipFile(zippedFile, 'r')
    zippedChromium.extract(chromiumAppPath, workingDir)
    #print zippedChromium.namelist()
    zippedChromium.close()
    ```

    Any ideas?
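    One thing worth noting: ZipFile.extract only pulls out a single member, so directories inside the archive need their members extracted one by one (or all at once with extractall). A minimal sketch of that idea, assuming the snapshot layout described above; the revision number in the path is hypothetical:

    ```python
    import zipfile

    def extract_chromium(zip_path, dest_dir, prefix='chrome-mac/'):
        # Extract every member under chrome-mac/, recreating the directory tree.
        archive = zipfile.ZipFile(zip_path, 'r')
        members = [name for name in archive.namelist() if name.startswith(prefix)]
        archive.extractall(dest_dir, members)
        archive.close()

    extract_chromium('/Users/slehan/Desktop/ChromiumUpdate/12345-update.zip',
                     '/Users/slehan/Desktop/ChromiumUpdate/')
    ```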


  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected Python scripts and runs them in a cmd window next to the GUI window. I can get my launcher to work on my Windows XP 32-bit laptop, but when I upload it to the server (64-bit Windows, IIS 7) I run into some issues: the script runs, to my knowledge, but spits no information back into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together to get it to work, and I am fairly certain that this is a very bad example of the subprocess module. Just wondering if I could get a hand. My question is: how do I have to alter my code to work on a 64-bit Windows server?

    ```python
    from Tkinter import *
    import pickle, subprocess, errno, time, sys, os

    PIPE = subprocess.PIPE

    if subprocess.mswindows:
        from win32file import ReadFile, WriteFile
        from win32pipe import PeekNamedPipe
        import msvcrt
    else:
        import select
        import fcntl

    def recv_some(p, t=.1, e=1, tr=5, stderr=0):
        if tr < 1:
            tr = 1
        x = time.time() + t
        y = []
        r = ''
        pr = p.recv
        if stderr:
            pr = p.recv_err
        while time.time() < x or r:
            r = pr()
            if r is None:
                if e:
                    raise Exception(message)
                else:
                    break
            elif r:
                y.append(r)
            else:
                time.sleep(max((x - time.time()) / tr, 0))
        return ''.join(y)

    def send_all(p, data):
        while len(data):
            sent = p.send(data)
            if sent is None:
                raise Exception(message)
            data = buffer(data, sent)
    ```

    The code above isn't mine.

    ```python
    def Run():
        print filebox.get(0)
        location = filebox.get(0)
        location = location.__str__().replace(listbox.get(ANCHOR).__str__(), "")
        theTime = time.asctime(time.localtime(time.time()))
        lastbox.delete(0, END)
        lastbox.insert(END, theTime)
        for line in CookieCont:
            if listbox.get(ANCHOR) in line and len(line) > 4:
                line[4] = theTime
            else:
                "Fill In the rip Details to record the time"
        if __name__ == '__main__':
            if sys.platform == 'win32' or sys.platform == 'win64':
                shell, commands, tail = ('cmd', ('cd "' + location + '"', listbox.get(ANCHOR).__str__()), '\r\n')
            else:
                return "Please use contact admin"
            a = Popen(shell, stdin=PIPE, stdout=PIPE)
            print recv_some(a)
            for cmd in commands:
                send_all(a, cmd + tail)
                print recv_some(a)
            send_all(a, 'exit' + tail)
            print recv_some(a, e=0)
    ```

    The code above is mine. :)
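    Not an answer to the 64-bit question itself, but for comparison, here is a minimal sketch of driving cmd with nothing but the standard subprocess module (no win32file/win32pipe and no custom recv/send helpers), which sidesteps the platform-specific pipe handling; the command list and working directory are hypothetical:

    ```python
    import subprocess

    def run_in_cmd(commands, cwd=None):
        # Feed a batch of commands to one cmd.exe process and collect its output.
        proc = subprocess.Popen('cmd', cwd=cwd,
                                stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        script = '\r\n'.join(commands) + '\r\nexit\r\n'
        out, _ = proc.communicate(script)
        return out

    if __name__ == '__main__':
        print run_in_cmd(['cd "C:\\scripts"', 'python myscript.py'])
    ```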


  • Python virtualenv questions

    - by orokusaki
    I'm using VirtualEnv on Windows XP. I'm wondering if I have my brain wrapped around it correctly.

    I ran virtualenv ENV and it created C:\WINDOWS\system32\ENV. I then changed my PATH variable to include C:\WINDOWS\system32\ENV\Scripts instead of C:\Python27\Scripts. Then I checked out Django into C:\WINDOWS\system32\ENV\Lib\site-packages\django-trunk, updated my PYTHON_PATH variable to point to the new Django directory, and continued to easy_install other things (which of course go into my new C:\WINDOWS\system32\ENV\Lib\site-packages directory).

    I understand why I should use VirtualEnv so I can run multiple versions of Django and other libraries on the same machine, but does this mean that to switch between environments I basically have to change my PATH and PYTHON_PATH variables? So I go from developing one Django project which uses Django 1.2 in an environment called ENV, and then change my PATH and such so that I can use an environment called ENV2 which has the dev version of Django? Is that basically it, or is there some better way to do all this automatically (I could update my path in Python code, but that would require me to write machine-specific code in my application)?

    Also, how does this process compare to using VirtualEnv on Linux (I'm quite the beginner at Linux)?


  • Internationalizing a Python 2.6 application via Babel

    - by Malcolm
    We're evaluating Babel 0.9.5 [1] under Windows for use with Python 2.6 and have the following questions that we've been unable to answer through reading the documentation or googling.

    1) I would like to use an _-like abbreviation for ungettext. Is there a consensus on whether one should use n_ or N_ for this? n_ does not appear to work: Babel does not extract the text. N_ appears to partially work: Babel extracts text like it does for gettext, but does not format for ngettext (missing plural argument and msgstr[n]).

    2) Is there a way to set the initial msgstr fields like the following when creating a POT file? I suspect there may be a way to do this via Babel cfg files, but I've been unable to find documentation on the Babel cfg file format.

        "Project-Id-Version: PROJECT VERSION\n"
        "Language-Team: en_US \n"

    3) Is there a way to preserve 'obsolete' msgid/msgstr's in our PO files? When I use the Babel update command, newly created obsolete strings are marked with #~ prefixes, but existing obsolete message strings get deleted.

    Thanks, Malcolm

    [1] http://babel.edgewall.org/


  • Simple continuously running XMPP client in python

    - by tom
    I'm using python-xmpp to send Jabber messages. Everything works fine except that every time I want to send messages (every 15 minutes) I need to reconnect to the Jabber server, and in the meantime the sending client is offline and cannot receive messages. So I want to write a really simple, indefinitely running XMPP client that is online the whole time and can send (and receive) messages when required. My trivial (non-working) approach:

    ```python
    import time
    import xmpp

    class Jabber(object):
        def __init__(self):
            server = 'example.com'
            username = 'bot'
            passwd = 'password'
            self.client = xmpp.Client(server)
            self.client.connect(server=(server, 5222))
            self.client.auth(username, passwd, 'bot')
            self.client.sendInitPresence()
            self.sleep()

        def sleep(self):
            self.awake = False
            delay = 1
            while not self.awake:
                time.sleep(delay)

        def wake(self):
            self.awake = True

        def auth(self, jid):
            self.client.getRoster().Authorize(jid)
            self.sleep()

        def send(self, jid, msg):
            message = xmpp.Message(jid, msg)
            message.setAttr('type', 'chat')
            self.client.send(message)
            self.sleep()

    if __name__ == '__main__':
        j = Jabber()
        time.sleep(3)
        j.wake()
        j.send('[email protected]', 'hello world')
        time.sleep(30)
    ```

    The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so, how would I best go about that? EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.
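    As written, sleep() blocks the only thread, so nothing is ever able to call wake(). A minimal sketch of the concurrency idea using only the standard library (a threading.Event in place of the busy-wait flag); the XMPP calls are left out, this only illustrates the wake-up mechanism:

    ```python
    import threading
    import time

    class Sleeper(object):
        def __init__(self):
            self._wakeup = threading.Event()

        def sleep(self):
            # Block until some other thread calls wake(); no busy-waiting.
            self._wakeup.wait()
            self._wakeup.clear()

        def wake(self):
            self._wakeup.set()

    if __name__ == '__main__':
        s = Sleeper()
        worker = threading.Thread(target=s.sleep)
        worker.start()
        time.sleep(1)
        s.wake()        # works because wake() runs in a different thread
        worker.join()
    ```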


  • Catch a thread's exception in the caller thread in Python

    - by Mikee
    Hi everyone, I'm very new to Python and multithreaded programming in general. Basically, I have a script that will copy files to another location. I would like this to be placed in another thread so I can output "...." to indicate that the script is still running. The problem I am having is that if the files cannot be copied it will throw an exception. This is OK if running in the main thread; however, the following code does not work:

    ```python
    try:
        threadClass = TheThread(param1, param2, etc.)
        threadClass.start()   ##### **Exception takes place here**
    except:
        print "Caught an exception"
    ```

    In the thread class itself, I tried to re-throw the exception, but it does not work. I have seen people on here ask similar questions, but they all seem to be doing something more specific than what I am trying to do (and I don't quite understand the solutions offered). I have seen people mention the usage of sys.exc_info(), however I do not know where or how to use it. All help is greatly appreciated! EDIT: The code for the thread class is below:

    ```python
    class TheThread(threading.Thread):
        def __init__(self, sourceFolder, destFolder):
            threading.Thread.__init__(self)
            self.sourceFolder = sourceFolder
            self.destFolder = destFolder

        def run(self):
            try:
                shul.copytree(self.sourceFolder, self.destFolder)
            except:
                raise
    ```
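    For reference, a minimal sketch of the sys.exc_info() idea: the worker thread records the exception, and the caller re-raises it after join(). The class name ExcThread and the placeholder copy function are hypothetical (Python 2 syntax, matching the question):

    ```python
    import sys
    import threading

    class ExcThread(threading.Thread):
        def __init__(self, target, *args):
            threading.Thread.__init__(self)
            self.target = target
            self.args = args
            self.exc_info = None

        def run(self):
            try:
                self.target(*self.args)
            except Exception:
                self.exc_info = sys.exc_info()   # remember it for the caller

        def join_and_reraise(self):
            self.join()
            if self.exc_info:
                raise self.exc_info[0], self.exc_info[1], self.exc_info[2]

    def copy_files(src, dst):
        raise IOError("pretend the copy failed")

    t = ExcThread(copy_files, "/src", "/dst")
    t.start()
    try:
        t.join_and_reraise()
    except IOError, e:
        print "Caught in the caller:", e
    ```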


  • Difference in regex between Python and Rubular?

    - by Rosarch
    In Rubular, I have created a regular expression: `(Prerequisite|Recommended): (\w|-| )*`

    It matches the bolded portion of:

        Recommended: good comfort level with computers and some of the arts. Summer. 2 credits. Prerequisite: pre-freshman standing or permission of instructor. Credit may not be applied toward engineering degree. S-U grades only.

    Here is a use of the regex in Python:

    ```python
    note_re = re.compile(r'(Prerequisite|Recommended): (\w|-| )*', re.IGNORECASE)

    def prereqs_of_note(note):
        match = note_re.match(note)
        if not match:
            return None
        return match.group(0)
    ```

    Unfortunately, the code returns None instead of a match:

    ```python
    >>> import prereqs
    >>> result = prereqs.prereqs_of_note("Summer. 2 credits. Prerequisite: pre-freshman standing or permission of instructor. Credit may not be applied toward engineering degree. S-U grades only.")
    >>> print result
    None
    ```

    What am I doing wrong here?
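    For reference, re.match only matches at the beginning of the string, while Rubular behaves like re.search and scans the whole string. A minimal sketch of the difference:

    ```python
    import re

    note_re = re.compile(r'(Prerequisite|Recommended): (\w|-| )*', re.IGNORECASE)
    note = "Summer. 2 credits. Prerequisite: pre-freshman standing or permission of instructor."

    print note_re.match(note)            # None -- 'Prerequisite' is not at position 0
    print note_re.search(note).group(0)  # matches starting at 'Prerequisite: ...'
    ```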


  • how to implement a really efficient bitvector sorting in python

    - by xiao
    Hello guys! This is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here. What I am interested in is just how fast the implementation could be in Python. I have done a naive implementation with the BitVector module. The code is as follows:

    ```python
    from BitVector import BitVector
    import timeit
    import random
    import time
    import sys

    def sort(input_li):
        return sorted(input_li)

    def vec_sort(input_li):
        bv = BitVector(size=len(input_li))
        for i in input_li:
            bv[i] = 1
        res_li = []
        for i in range(len(bv)):
            if bv[i]:
                res_li.append(i)
        return res_li

    if __name__ == "__main__":
        test_data = range(int(sys.argv[1]))
        print 'test_data size is:', sys.argv[1]
        random.shuffle(test_data)

        start = time.time()
        sort(test_data)
        elapsed = (time.time() - start)
        print "sort function takes " + str(elapsed)

        start = time.time()
        vec_sort(test_data)
        elapsed = (time.time() - start)
        print "vec_sort function takes " + str(elapsed)
    ```

    I have tested array sizes from 100 to 10,000,000 on my MacBook (2 GHz Intel Core 2 Duo, 2 GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348
        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841
        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392
        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878
        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the built-in sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?
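    One common way to speed this kind of thing up is to cut down on per-bit Python-level calls. A minimal sketch using a plain bytearray as the bit map (standard library only, same assumptions as the question: the input values are unique and lie in range(n)):

    ```python
    def bytearray_sort(input_li):
        n = len(input_li)
        bits = bytearray((n + 7) // 8)      # one bit per possible value
        for v in input_li:
            bits[v >> 3] |= 1 << (v & 7)    # set bit v
        out = []
        append = out.append
        for v in xrange(n):
            if bits[v >> 3] & (1 << (v & 7)):
                append(v)
        return out
    ```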


  • Python Multiword Index

    - by Manab Chetia
    ```python
    index = {'Michael': [['mj.com', 1], ['Nine.com', 9], ['i.com', 34]],
             'Jackson': [['One.com', 4], ['mj.com', 2], ['Nine.com', 10], ['i.com', 45]],
             'Thriller': [['Seven.com', 7], ['Ten.com', 10], ['One.com', 5], ['mj.com', 3]]}
    ```

    In this dictionary (index), each entry has the form KEYWORD: [[link in which KEYWORD is present, position of KEYWORD in the page given by that link], ...]. For example, Michael is present in mj.com, Nine.com, and i.com at positions 1, 9 and 34 of the respective pages. Please help me with a Python procedure which takes index and KEYWORDS as input.

    When I enter 'MICHAEL', the result should be: ['mj.com', 'nine.com', 'i.com'].

    When I enter 'MICHAEL JACKSON', the result should be: ['mj.com', 'Nine.com'], as 'Michael' and 'Jackson' are present at mj.com and nine.com consecutively, i.e. in positions (1,2) and (9,10) respectively. The result should not show i.com even though it contains both keywords, because they are not placed consecutively.

    When I enter 'MICHAEL JACKSON THRILLER', the result should be ['mj.com'], as the three words 'MICHAEL', 'JACKSON' and 'THRILLER' are placed consecutively in mj.com, i.e. at positions (1, 2, 3) respectively.

    If I enter 'THRILLER JACKSON' or 'THRILLER FEDERER', the result should be None.
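    A minimal sketch of the phrase lookup in pure Python (the function name phrase_search is hypothetical): for each page containing the first word, check that the remaining words appear at strictly consecutive positions.

    ```python
    def phrase_search(index, query):
        words = query.lower().split()
        lowered = dict((k.lower(), v) for k, v in index.items())
        postings = []                       # one {page: position} dict per query word
        for w in words:
            if w not in lowered:
                return None
            postings.append(dict((page, pos) for page, pos in lowered[w]))
        hits = []
        for page, first_pos in postings[0].items():
            if all(page in p and p[page] == first_pos + i
                   for i, p in enumerate(postings)):
                hits.append(page)
        return hits or None

    index = {'Michael': [['mj.com', 1], ['Nine.com', 9], ['i.com', 34]],
             'Jackson': [['One.com', 4], ['mj.com', 2], ['Nine.com', 10], ['i.com', 45]],
             'Thriller': [['Seven.com', 7], ['Ten.com', 10], ['One.com', 5], ['mj.com', 3]]}

    print phrase_search(index, 'MICHAEL JACKSON')           # ['mj.com', 'Nine.com'] (order may vary)
    print phrase_search(index, 'MICHAEL JACKSON THRILLER')  # ['mj.com']
    print phrase_search(index, 'THRILLER JACKSON')          # None
    ```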


  • Querying the Datastore in python

    - by Ray
    Greetings! I am trying to work with a single column in the datastore. I can view and display the contents, like this:

    ```python
    q = test.all()
    q.filter("adjclose =", "adjclose")
    q = db.GqlQuery("SELECT * FROM test")
    results = q.fetch(5)
    for p in results:
        p1 = p.adjclose
        print "The value is --> %f" % (p.adjclose)
    ```

    However, I need to calculate the historical values from the adjclose column, and I am not able to get past this error:

        for c in range(len(p1)-1):
        TypeError: object of type 'float' has no len()

    Here is my code:

    ```python
    for c in range(len(p1)-1):
        p1.append(p1[c+1]-p1[c]/p1[c])
        p2 = (p1[c+1]-p1[c]/p1[c])
        print "the p1 value<-- %f" % (p2)
    print "dfd %f" % (p1)
    ```

    I'm new to Python; any help will be greatly appreciated! Thanks in advance, Ray.

    HERE IS THE COMPLETE CODE:

    ```python
    class CalHandler(webapp.RequestHandler):
        def get(self):
            que = db.GqlQuery("SELECT * from test")
            user_list = que.fetch(limit=100)
            doRender(self, 'memberscreen2.htm', {'user_list': user_list})
            q = test.all()
            q.filter("adjclose =", "adjclose")
            q = db.GqlQuery("SELECT * FROM test")
            results = q.fetch(5)
            for p in results:
                p1 = p.adjclose
                print "The value is --> %f" % (p.adjclose)
                for c in range(len(p1)-1):
                    p1.append(p1[c+1]-p1[c]/p1[c])
                    print "the p1 value<--> %f" % (p2)
                    print "dfd %f" % (p1)
    ```
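    The TypeError arises because p1 is rebound to a single float on each loop iteration, not to a list. A minimal sketch of collecting the column into a list first and then computing the change from one row to the next; the model name test and field adjclose are taken from the question, and the (next - current) / current formula is an assumption about the intent:

    ```python
    from google.appengine.ext import db

    results = db.GqlQuery("SELECT * FROM test").fetch(5)
    closes = [p.adjclose for p in results]      # list of floats, one per row

    changes = []
    for c in range(len(closes) - 1):
        changes.append((closes[c + 1] - closes[c]) / closes[c])

    for r in changes:
        print "change --> %f" % r
    ```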


  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    Hi, I am reading from several files; each file is divided into 2 pieces, first a header section of a few thousand lines followed by a body of a few thousand. My problem is I need to concatenate these files into one file where all the headers are at the top, followed by the bodies. Currently I am using two loops: one to pull out all the headers and write them, and the second to write the body of each file (I also include a tmp_count variable to limit the number of lines loaded into memory before dumping to file). This is pretty slow - about 6 min for a 13 GB file. Can anyone tell me how to optimize this, or if there is a faster way to do this in Python? Thanks! Here is my code:

    ```python
    def cat_files_sam(final_file_name, work_directory_master, file_count):
        final_file = open(final_file_name, "w")
        if len(file_count) > 1:
            file_count = sort_output_files(file_count)
        # only for @ headers
        for bowtie_file in file_count:
            #print bowtie_file
            tmp_list = []
            tmp_count = 0
            for line in open(os.path.join(work_directory_master, bowtie_file)):
                if line.startswith("@"):
                    if tmp_count == 1000000:
                        final_file.writelines(tmp_list)
                        tmp_list = []
                        tmp_count = 0
                    tmp_list.append(line)
                    tmp_count += 1
                else:
                    final_file.writelines(tmp_list)
                    break
        for bowtie_file in file_count:
            #print bowtie_file
            tmp_list = []
            tmp_count = 0
            for line in open(os.path.join(work_directory_master, bowtie_file)):
                if line.startswith("@"):
                    continue
                if tmp_count == 1000000:
                    final_file.writelines(tmp_list)
                    tmp_list = []
                    tmp_count = 0
                tmp_list.append(line)
                tmp_count += 1
            final_file.writelines(tmp_list)
        final_file.close()
    ```
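    For comparison, a minimal sketch of the same two-pass idea that skips the manual tmp_list buffering and lets the file objects do their own buffering (same assumptions as above: header lines start with "@" and sit at the top of each file):

    ```python
    import os

    def cat_sam_files(final_file_name, work_dir, file_names):
        with open(final_file_name, "w") as out:
            # pass 1: headers only
            for name in file_names:
                with open(os.path.join(work_dir, name)) as src:
                    for line in src:
                        if not line.startswith("@"):
                            break
                        out.write(line)
            # pass 2: bodies only
            for name in file_names:
                with open(os.path.join(work_dir, name)) as src:
                    for line in src:
                        if not line.startswith("@"):
                            out.write(line)
    ```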


  • Python optimization problem?

    - by user342079
    Alright, I had this homework recently (don't worry, I've already done it, but in C++), but I got curious how I could do it in Python. The problem is about 2 light sources that emit light; I won't get into the details. Here's the code (which I've managed to optimize a bit in the latter part):

    ```python
    import math, array
    import numpy as np
    from PIL import Image

    size = (800, 800)
    width, height = size
    s1x = width * 1./8
    s1y = height * 1./8
    s2x = width * 7./8
    s2y = height * 7./8
    r, g, b = (255, 255, 255)

    arr = np.zeros((width, height, 3))
    hy = math.hypot
    print 'computing distances (%s by %s)' % size,
    for i in xrange(width):
        if i % (width/10) == 0:
            print i,
        if i % 20 == 0:
            print '.',
        for j in xrange(height):
            d1 = hy(i - s1x, j - s1y)
            d2 = hy(i - s2x, j - s2y)
            arr[i][j] = abs(d1 - d2)
    print ''

    arr2 = np.zeros((width, height, 3), dtype="uint8")
    for ld in [200, 116, 100, 84, 68, 52, 36, 20, 8, 4, 2]:
        print 'now computing image for ld = ' + str(ld)
        arr2 *= 0
        arr2 += abs(arr % ld - ld/2) * (r, g, b) / (ld/2)
        print 'saving image...'
        ar2img = Image.fromarray(arr2)
        ar2img.save('ld' + str(ld).rjust(4, '0') + '.png')
        print 'saved as ld' + str(ld).rjust(4, '0') + '.png'
    ```

    I have managed to optimize most of it, but there's still a huge performance gap in the part with the two for loops, and I can't seem to think of a way to bypass that using common array operations... I'm open to suggestions :D
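    A minimal sketch of vectorising the double loop with NumPy broadcasting, which replaces the per-pixel math.hypot calls with a handful of whole-array operations (same s1/s2 coordinates as above):

    ```python
    import numpy as np

    width, height = 800, 800
    s1x, s1y = width * 1./8, height * 1./8
    s2x, s2y = width * 7./8, height * 7./8

    i, j = np.ogrid[0:width, 0:height]      # index grids, shapes (width, 1) and (1, height)
    d1 = np.hypot(i - s1x, j - s1y)         # distance to source 1, all pixels at once
    d2 = np.hypot(i - s2x, j - s2y)         # distance to source 2
    diff = np.abs(d1 - d2)                  # shape (width, height)

    # broadcast into the (width, height, 3) array used by the original code
    arr = np.repeat(diff[:, :, np.newaxis], 3, axis=2)
    ```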


  • how to multithread on a python server

    - by user3732790
    HELP please, I have this code:

    ```python
    import socket
    from threading import *
    import time

    HOST = ''    # Symbolic name meaning all available interfaces
    PORT = 8888  # Arbitrary non-privileged port
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print ('Socket created')
    s.bind((HOST, PORT))
    print ('Socket bind complete')
    s.listen(10)
    print ('Socket now listening')

    def listen(conn):
        odata = ""
        end = 'end'
        while end == 'end':
            data = conn.recv(1024)
            if data != odata:
                odata = data
                print(data)
            if data == b'end':
                end = ""
                print("conection ended")
                conn.close()

    while True:
        time.sleep(1)
        conn, addr = s.accept()
        print ('Connected with ' + addr[0] + ':' + str(addr[1]))
        Thread.start_new_thread(listen,(conn))
    ```

    I would like it so that whenever a person connects to the server they get their own thread, but I can't get it to work. Please, someone help me. :_( Here is the error:

        Socket created
        Socket bind complete
        Socket now listening
        Connected with 127.0.0.1:61475
        Traceback (most recent call last):
          File "C:\Users\Myles\Desktop\test recever - Copy.py", line 29, in <module>
            Thread.start_new_thread(listen,(conn))
        AttributeError: type object 'Thread' has no attribute 'start_new_thread'

    I am on Python version 3.4.0, and here is the user's code:

    ```python
    import socket  # for sockets
    import time

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('Socket Created')

    host = 'localhost'
    port = 8888
    remote_ip = socket.gethostbyname(host)
    print('Ip address of ' + host + ' is ' + remote_ip)

    # Connect to remote server
    s.connect((remote_ip, port))
    print ('Socket Connected to ' + host + ' on ip ' + remote_ip)

    while True:
        message = input("> ")
        # Set the whole string
        s.send(message.encode('utf-8'))
        print ('Message send successfully')
        data = s.recv(1024)
        print(data)
    s.close
    ```
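    For reference, start_new_thread belongs to the low-level _thread module, not to the Thread class. A minimal sketch of the usual pattern, replacing the accept loop above with threading.Thread (note that args must be a tuple, so a single argument needs a trailing comma):

    ```python
    import threading

    while True:
        conn, addr = s.accept()
        print('Connected with ' + addr[0] + ':' + str(addr[1]))
        t = threading.Thread(target=listen, args=(conn,))  # (conn,) is a 1-tuple
        t.daemon = True   # don't keep the interpreter alive for open connections
        t.start()
    ```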


  • Python and the self parameter

    - by Svend
    I'm having some issues with the self parameter, and some seemingly inconsistent behavior in Python is annoying me, so I figure I better ask some people in the know.

    I have a class, Foo. This class will have a bunch of methods, m1 through mN. For some of these, I will use a standard definition, like in the case of m1 below. But for others, it's more convenient to just assign the method name directly, like I've done with m2 and m3.

    ```python
    import os

    def myfun(x, y):
        return x + y

    class Foo():
        def m1(self, y, z):
            return y + z + 42
        m2 = os.access
        m3 = myfun

    f = Foo()
    print f.m1(1, 2)
    print f.m2("/", os.R_OK)
    print f.m3(3, 4)
    ```

    Now, I know that os.access does not take a self parameter (seemingly), and it still has no issues with this type of assignment. However, I cannot do the same for my own modules (imagine myfun defined off in mymodule.myfun). Running the above code yields the following output:

        3
        True
        Traceback (most recent call last):
          File "foo.py", line 16, in <module>
            print f.m3(3, 4)
        TypeError: myfun() takes exactly 2 arguments (3 given)

    The problem is that, due to the framework I work in, I cannot avoid having a class Foo at least. But I'd like to avoid having my mymodule stuff in a dummy class. In order to do this, I need to do something like

    ```python
    def m3(self, a1, a2):
        return mymodule.myfun(a1, a2)
    ```

    which is hugely redundant when you have like 20 of them. So, the question is, either how do I do this in a totally different and obviously much smarter way, or how can I make my own modules behave like the built-in ones, so it does not complain about receiving 1 argument too many.
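    For reference, a minimal sketch of the usual workaround: wrapping the plain function in staticmethod() so Python does not insert self on the call. (The difference with os.access is that, in Python 2, built-in functions are not turned into unbound methods when stored as class attributes, while plain Python functions are.)

    ```python
    def myfun(x, y):
        return x + y

    class Foo(object):
        m3 = staticmethod(myfun)   # no self is passed on the call

    f = Foo()
    print f.m3(3, 4)               # 7
    ```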


  • Validating and filling default values in XML based on XSD in Python

    - by PoltoS
    I have an XML document like:

    ```xml
    <a>
      <b/>
      <b c="2"/>
    </a>
    ```

    I have my XSD:

    ```xml
    <xs:element name="a">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="b" maxOccurs="unbounded">
            <xs:attribute name="c" default="1"/>
          </xs:element>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    ```

    I want to use my XSD to validate my original XML and fill in all default values:

    ```xml
    <a>
      <b c="1"/>
      <b c="2"/>
    </a>
    ```

    How do I do this in Python? Validation itself is not a problem (e.g. XMLSchema); the problem is the default values.


  • python gui generate math equation

    - by Nero Dietrich
    I have a homework question for one specific item with Python GUIs. My goal is to create a GUI that asks a random mathematical equation, and if the equation is evaluated correctly then I will receive a message stating that it is correct. My main problem is finding out where to place my statements so that they show up in the labels; I have 1 textbox which generates the random equation, the next textbox is blank for me to enter the solution, and then an "Enter" button at the end to evaluate my solution. It looks like this:

        [randomly generated equation][Empty space to enter solution] [ENTER]

    I've managed to get the layout and the evaluate parameters, but I don't know where to go from here. This is my code so far:

    ```python
    class Equation(Frame):
        def __init__(self, parent=None):
            Frame.__init__(self, parent)
            self.pack()
            Equation.make_widgets(self)
            Equation.new_problem(self)

        def make_widgets(self):
            Label(self).grid(row=0, column=1)
            ent = Entry(self)
            ent.grid(row=0, column=1)
            Label(self).grid(row=0, column=2)
            ent = Entry(self)
            ent.grid(row=0, column=2)
            Button(self, text='Enter', command=self.evaluate).grid(row=0, column=3)

        def new_problem(self):
            pass

        def evaluate(self):
            result = eval(self.get())
            self.delete(0, END)
            self.insert(END, result)
            print('Correct')
    ```


  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert it to a BufferedImage. In abbreviated code for the client:

    ```java
    public String writeAndReadSocket(String request) {
        // Write text to the socket
        BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
        bufferedWriter.write(request);
        bufferedWriter.flush();

        // Read text from the socket
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        // Read the prefixed size
        int size = Integer.parseInt(bufferedReader.readLine());

        // Get that many bytes from the stream
        char[] buf = new char[size];
        bufferedReader.read(buf, 0, size);
        return new String(buf);
    }

    public BufferedImage stringToBufferedImage(String imageBytes) {
        return ImageIO.read(new ByteArrayInputStream(s.getBytes()));
    }
    ```

    and the server:

    ```python
    # Twisted server code here
    # The analog of the following method is called with the proper client
    # request and the result is written to the socket.
    def worker_thread():
        img = draw_function()
        buf = StringIO.StringIO()
        img.save(buf, format="PNG")
        img_string = buf.getvalue()
        return "%i\r%s" % (sys.getsizeof(img_string), img_string)
    ```

    This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case.

    Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the bytestream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.


  • linear combinations in python/numpy

    - by nmaxwell
    Greetings, I'm not sure if this is a dumb question or not. Let's say I have 3 numpy arrays, A1, A2, A3, and 3 floats, c1, c2, c3, and I'd like to evaluate

        B = A1*c1 + A2*c2 + A3*c3

    Will numpy compute this as, for example,

        E1 = A1*c1
        E2 = A2*c2
        E3 = A3*c3
        D1 = E1 + E2
        B = D1 + E3

    or is it more clever than that? In C++ I had a neat way to abstract this kind of operation. I defined a series of general 'LC' template functions, LC for linear combination, like:

    ```cpp
    template<class T, class D>
    void LC( T & R, T & L0, D C0, T & L1, D C1, T & L2, D C2)
    {
        R = L0*C0 + L1*C1 + L2*C2;
    }
    ```

    and then specialized this for various types, so for instance, for an array the code looked like

    ```cpp
    for (int i = 0; i < L0.length; i++)
        R.array[i] = L0.array[i]*C0 + L1.array[i]*C1 + L2.array[i]*C2;
    ```

    thus avoiding having to create new intermediate arrays. This may look messy but it worked really well. I could do something similar in Python, but I'm not sure if it's necessary. Thanks in advance for any insight. -nick
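    For reference, NumPy does evaluate the expression pairwise with temporary arrays, much like the E1/E2/E3 breakdown above. A minimal sketch of keeping the intermediates down with in-place accumulation (and an equivalent single-call form); the array sizes and coefficients are made up:

    ```python
    import numpy as np

    A1, A2, A3 = (np.random.rand(1000) for _ in range(3))
    c1, c2, c3 = 0.5, 1.5, -2.0

    # accumulate into one output array instead of building the full expression at once
    B = A1 * c1
    B += A2 * c2
    B += A3 * c3

    # equivalent single call: coefficients dotted against the stacked arrays
    B2 = np.dot(np.array([c1, c2, c3]), np.vstack([A1, A2, A3]))
    ```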


  • Python FTP grabbing and saving images issue

    - by PylonsN00b
    OK, so I have been messing with this all day long, and I am fairly new to Python FTP. I have searched through here and came up with this:

    ```python
    images = notions_ftp.nlst()
    for image_name in image_names:
        if found_url == False:
            try:
                for image in images:
                    ftp_image_name = "./%s" % image_name
                    if ftp_image_name == image:
                        found_url = True
                        image_name_we_want = image_name
            except:
                pass

    # We failed to find an image for this product, it will have to be done manually
    if found_url == False:
        log.info("Image ain't there baby -- SKU: %s" % sku)
        return False

    # Hey we found something! Open the image....
    notions_ftp.retrlines("RETR %s" % image_name_we_want, open(image_name_we_want, "rb"))
    1/0
    ```

    I have narrowed the error down to the line before I divide by zero. Here is the error:

        Traceback (most recent call last):
          File "<console>", line 6, in <module>
          File "<console>", line 39, in insert_image
        IOError: [Errno 2] No such file or directory: '411483CC-IT,IM.jpg'

    If you follow the code you will see that the image IS in the directory, because image_name_we_want is only set when the name is found in the directory listing built on the first line of my code. And I KNOW it's there because I am looking at the FTP site myself and it's freakin' there. At some point during all of this I got the image to save locally, which is what I actually want, but I have long since forgotten what I used to make it do that. Either way, why does it think that the image isn't there when it clearly finds it in the listing?
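    For reference, the IOError comes from open(image_name_we_want, "rb"), which tries to read a local file that does not exist yet; the second argument to retrlines/retrbinary should be a callback. A minimal sketch of downloading an image with ftplib.retrbinary (the host, credentials and filename are hypothetical):

    ```python
    from ftplib import FTP

    ftp = FTP('ftp.example.com')          # hypothetical host
    ftp.login('user', 'password')         # hypothetical credentials

    image_name = '411483CC-IT,IM.jpg'
    local_file = open(image_name, 'wb')   # open locally for *writing*, binary mode
    ftp.retrbinary('RETR %s' % image_name, local_file.write)
    local_file.close()
    ftp.quit()
    ```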


  • Python: Copying files with special characters in path

    - by erikderwikinger
    Hi, is there any possibility in Python 2.5 to copy files having special chars (Japanese chars, Cyrillic letters) in their path? shutil.copy cannot handle this. Here is some example code:

    ```python
    import copy, os, shutil, sys

    fname = os.getenv("USERPROFILE") + "\\Desktop\\testfile.txt"
    print fname
    print "type of fname: " + str(type(fname))

    fname0 = unicode(fname, 'mbcs')
    print fname0
    print "type of fname0: " + str(type(fname0))

    fname1 = unicodedata.normalize('NFKD', fname0).encode('cp1251', 'replace')
    print fname1
    print "type of fname1: " + str(type(fname1))

    fname2 = unicode(fname, 'mbcs').encode(sys.stdout.encoding)
    print fname2
    print "type of fname2: " + str(type(fname2))

    shutil.copy(fname2, 'C:\\')
    ```

    The output on a Russian Windows XP:

        C:\Documents and Settings\+????????????\Desktop\testfile.txt
        type of fname: <type 'str'>
        C:\Documents and Settings\?????????????\Desktop\testfile.txt
        type of fname0: <type 'unicode'>
        C:\Documents and Settings\+????????????\Desktop\testfile.txt
        type of fname1: <type 'str'>
        C:\Documents and Settings\?????????????\Desktop\testfile.txt
        type of fname2: <type 'str'>
        Traceback (most recent call last):
          File "C:\Test\getuserdir.py", line 23, in <module>
            shutil.copy(fname2, 'C:\\')
          File "C:\Python25\lib\shutil.py", line 80, in copy
            copyfile(src, dst)
          File "C:\Python25\lib\shutil.py", line 46, in copyfile
            fsrc = open(src, 'rb')
        IOError: [Errno 2] No such file or directory: 'C:\\Documents and Settings\\\x80\xa4\xac\xa8\xad\xa8\xe1\xe2\xe0\xa0\xe2\xae\xe0\\Desktop\\testfile.txt'


  • Python: two loops at once

    - by Stephan Meijer
    I've got a problem: I am new to Python and I want to run multiple loops. I want to run a WebSocket client (Autobahn) and I also want to run a loop which reports the files that are edited in a specific folder (pyinotify, or alternatively Watchdog). Both run forever - great. Is there a way to run them at once and send a message via the WebSocket connection while the file watcher is running, for example with callbacks, multithreading, multiprocessing or just separate files?

    ```python
    factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
    factory.protocol = self.webSocket
    connectWS(factory)
    reactor.run()
    ```

    If we run this, it works. But if we run this:

    ```python
    factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
    factory.protocol = self.webSocket
    connectWS(factory)
    reactor.run()

    # Websocket client running now, running the filewatcher
    wm = pyinotify.WatchManager()
    mask = pyinotify.IN_DELETE | pyinotify.IN_CREATE  # watched events

    class EventHandler(pyinotify.ProcessEvent):
        def process_IN_CREATE(self, event):
            print "Creating:", event.pathname

        def process_IN_DELETE(self, event):
            print "Removing:", event.pathname

    handler = EventHandler()
    notifier = pyinotify.Notifier(wm, handler)
    wdd = wm.add_watch('/tmp', mask, rec=True)
    notifier.loop()
    ```

    this would create 2 loops, but since reactor.run() is already a blocking loop, the code after it never runs at all. For your information: this project is going to be a sync client. Thanks a lot!


  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code.

    The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard code the column names, mostly to eliminate it as a possibility, and the answer was no).

    The CSV files are in a fairly standard format; the header row is marked with a # then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical; so I was thinking of doing something along these lines:

        Read in each line, character by character
        If the character is not a comma or a space
            Append character to temporary string
        If the character is a comma
            Append the temporary string to a list
            Empty string
        Once a line has been read
            Create a dictionary using the header row as the key (somehow!)
            Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it.

    Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
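    For readers, the pseudo-code above is essentially what the standard library's csv module already does. A minimal sketch with csv.DictReader and the sample file format shown above (the '#' prefix on the header row is stripped by hand; the file name is hypothetical):

    ```python
    import csv

    def load_rows(path):
        with open(path, 'rb') as f:
            header = f.next().lstrip('#').strip()    # "# Column1, Column2, ..." -> names
            fieldnames = [name.strip() for name in header.split(',')]
            reader = csv.DictReader(f, fieldnames=fieldnames, skipinitialspace=True)
            return list(reader)                      # one dict per data row

    rows = load_rows('training.csv')
    print [row['Column1'] for row in rows]           # every value of one column
    ```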


  • Python: Serial Transmission

    - by Silent Elektron
    I have an image stack of 500 JPEG images of 640x480. I intend to collect 500 pixels (the 1st pixel of each image) into a list and then send that via COM1 to an FPGA, where I do my further processing. I have a couple of questions here: How do I import all 500 images at a time into Python, and how do I store them? How do I send the 500-pixel list via COM1 to the FPGA?

    I tried the following: I converted each JPEG image to intensity values (each pixel is denoted by a number between 0 and 255) in MATLAB, saved the intensity values in a text file, and read that file using readlines(). But it became too cumbersome to make the intensity-value files for all 500 images! I then used NumPy to put the read files in a matrix and pick the first pixel of all images. But when I send it, it arrives looking like: [56, 61, 78, ..., 71, 91]. Is there a way to eliminate the [ ] and , while sending the data serially? Thanks in advance! :)
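    A minimal sketch of both steps with PIL and pySerial, assuming the images sit in one folder as frame*.jpg (file names, port settings and baud rate are hypothetical); sending a bytearray avoids the bracket-and-comma text representation of a Python list:

    ```python
    import glob
    from PIL import Image
    import serial

    # 1) load the first pixel of every image as an intensity value 0-255
    first_pixels = []
    for path in sorted(glob.glob('frames/frame*.jpg')):
        img = Image.open(path).convert('L')      # 'L' = 8-bit greyscale
        first_pixels.append(img.getpixel((0, 0)))

    # 2) ship the 500 raw bytes over COM1, no brackets or commas
    port = serial.Serial('COM1', baudrate=115200)
    port.write(bytearray(first_pixels))
    port.close()
    ```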


  • Calculating a total cost in Python

    - by Sérgio Lourenço
    I'm trying to create a trip planner in Python, but after defining all the functions I'm not able to call and combine them in the last function, tripCost(). In tripCost, I want to enter the days and travel destination (city) and have the program run the earlier functions and give me the combined result of all three. Code:

    ```python
    def hotelCost():
        days = raw_input("How many nights will you stay at the hotel?")
        total = 140 * int(days)
        print "The total cost is", total, "dollars"

    def planeRideCost():
        city = raw_input("Wich city will you travel to\n")
        if city == 'Charlotte':
            return "The cost is 183$"
        elif city == 'Tampa':
            return "The cost is 220$"
        elif city == 'Pittsburgh':
            return "The cost is 222$"
        elif city == 'Los Angeles':
            return "The cost is 475$"
        else:
            return "That's not a valid destination"

    def rentalCarCost():
        rental_days = raw_input("How many days will you rent the car\n")
        discount_3 = 40 * int(rental_days) * 0.2
        discount_7 = 40 * int(rental_days) * 0.5
        total_rent3 = 40 * int(rental_days) - discount_3
        total_rent7 = 40 * int(rental_days) - discount_7
        cost_day = 40 * int(rental_days)
        if int(rental_days) >= 3:
            print "The total cost is", total_rent3, "dollars"
        elif int(rental_days) >= 7:
            print "The total cost is", total_rent7, "dollars"
        else:
            print "The total cost is", cost_day, "dollars"

    def tripCost():
        travel_city = raw_input("What's our destination\n")
        days_travel = raw_input("\nHow many days will you stay\n")
        total_trip_cost = hotelCost(int(day_travel)) + planeRideCost(str(travel_city)) + rentalCost(int(days_travel))
        return "The total cost with the trip is", total_trip_cost

    tripCost()
    ```


  • Python: Errors saving and loading objects with pickle module

    - by Peterstone
    Hi, I am trying to load and save objects with this piece of code, which I got from a question I asked a week ago: Python: saving and loading objects and using pickle. The piece of code is this:

    ```python
    class Fruits:
        pass

    banana = Fruits()
    banana.color = 'yellow'
    banana.value = 30

    import pickle
    filehandler = open("Fruits.obj", "wb")
    pickle.dump(banana, filehandler)
    filehandler.close()

    file = open("Fruits.obj", 'rb')
    object_file = pickle.load(file)
    file.close()

    print(object_file.color, object_file.value, sep=', ')
    ```

    At first glance the piece of code works well: I can load the saved object and see its 'color' and 'value'. But what I'm after is to close a session, open a new one, and load what I saved in the previous session. If I close the session right after the filehandler.close() line, open a new one and run the rest of the code, then after object_file = pickle.load(file) I get this error:

        Traceback (most recent call last):
          File "<pyshell#5>", line 1, in <module>
            object_file = pickle.load(file)
          File "C:\Python31\lib\pickle.py", line 1365, in load
            encoding=encoding, errors=errors).load()
        AttributeError: 'module' object has no attribute 'Fruits'

    Can anyone explain what this error message means and tell me how to solve the problem? Thanks so much and happy new year!!
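    For reference, pickle stores a reference to the class (its module and name), not the class definition itself, so the class has to be importable in every session that loads the pickle. A minimal sketch under that assumption, with the class moved into a hypothetical fruits.py module and imported both when saving and when loading:

    ```python
    # fruits.py -- the class lives in an importable module
    class Fruits:
        pass
    ```

    ```python
    # session 1: pickle an instance (the pickle records 'fruits.Fruits')
    import pickle
    from fruits import Fruits

    banana = Fruits()
    banana.color = 'yellow'
    banana.value = 30
    with open("Fruits.obj", "wb") as f:
        pickle.dump(banana, f)
    ```

    ```python
    # session 2 (later): import the class again, then load
    import pickle
    from fruits import Fruits

    with open("Fruits.obj", "rb") as f:
        banana = pickle.load(f)
    print(banana.color, banana.value, sep=', ')
    ```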

