Search Results

Search found 15187 results on 608 pages for 'boost python'.

Page 164/608

  • How do I overwrite a file currently being read by Python

    - by Brian
    Hi guys, I am not too sure of the best way to word this, but what I want to do is read a PDF file, make various modifications, and save the modified PDF over the original file. As of now, I am able to save the modified PDF to a separate file, but I want to replace the original, not create a new file. Here is my current code:

        from pyPdf import PdfFileWriter, PdfFileReader

        output = PdfFileWriter()
        input = PdfFileReader(file('input.pdf', 'rb'))
        blank = PdfFileReader(file('C:\\BLANK.pdf', 'rb'))

        # Copy the input pdf to the output.
        for page in range(int(input.getNumPages())):
            output.addPage(input.getPage(page))

        # Add a blank page if needed.
        if (input.getNumPages() % 2 != 0):
            output.addPage(blank.getPage(0))

        # Write the output to pdf.
        outputStream = file('input.pdf', 'wb')
        output.write(outputStream)
        outputStream.close()

    If I change the outputStream to a different file name, it works fine; I just can't save over the input file because it is still being used. I have tried to .close() the stream, but that gave me errors as well. I have a feeling this has a fairly simple solution, I just haven't had any luck finding it. Thanks!
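
    A minimal sketch of one way around the locked-file problem (not from the original post): build the output while the source is still open, write it to a temporary file, and only then swap it over the original. pyPdf pulls page data lazily, so the source handle has to stay open until output.write() has finished.

        import os
        import tempfile
        from pyPdf import PdfFileReader, PdfFileWriter

        def pad_to_even_pages(path, blank_path):
            src = open(path, 'rb')
            blank = open(blank_path, 'rb')
            try:
                reader = PdfFileReader(src)
                writer = PdfFileWriter()
                for i in range(reader.getNumPages()):
                    writer.addPage(reader.getPage(i))
                if reader.getNumPages() % 2 != 0:
                    writer.addPage(PdfFileReader(blank).getPage(0))
                # Write to a temporary file while the sources are still open,
                # because pyPdf reads page data from them on demand.
                handle, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
                with os.fdopen(handle, 'wb') as out:
                    writer.write(out)
            finally:
                src.close()
                blank.close()
            os.remove(path)            # the original is no longer held open
            os.rename(tmp_path, path)  # the temporary file takes its name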


  • about python scripting

    - by kmitnick
    I have this code:

        class HNCS(ThreadingTCPServer):
            def verify_request(self, request, client_address):
                for key in connections:
                    if connections[key].client_address[0] == client_address[0]:
                        if client_address[0] != '127.0.0.1':
                            return False
                return True

            def welcome(self):
                return '''______________________________________________________
        ------------------------------------------------------
        %s
        ______________________________________________________
        ------------------------------------------------------
        * Server started %s
        * Waiting for connections on port %i
        ''' % (gpl, ctime(), PORT)

    The only line I can't figure out is if connections[key].client_address[0] == client_address[0]. How come we can use client_address as an attribute right after a dictionary lookup?
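
    A small illustrative sketch (the names below are hypothetical, not from the original code): the dictionary's values are objects, so connections[key] first returns the stored object and .client_address then reads an attribute of that object.

        class Connection(object):
            def __init__(self, client_address):
                # same (host, port) shape the socket server hands out
                self.client_address = client_address

        connections = {'conn-1': Connection(('192.0.2.7', 51234))}

        # dict lookup first, attribute access second:
        print(connections['conn-1'].client_address[0])   # -> '192.0.2.7'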


  • python to display the special characters

    - by Suhail
    Hi, I am facing issues with special characters like ° and ®, which represent the degree (Fahrenheit) sign and the registered trademark sign. When I print a string that contains these special characters, it gives output like this: Preheat oven to 350&deg F Welcome to Lorem Ipsum Inc&reg Is there a way I can output the actual characters and not their codes? Please let me know.
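
    Those are HTML character entities rather than raw characters. A minimal sketch of one way to decode them, assuming Python 3's standard html module (it also recognizes common legacy forms such as &deg and &reg without a trailing semicolon):

        import html

        s = 'Preheat oven to 350&deg F Welcome to Lorem Ipsum Inc&reg'
        print(html.unescape(s))
        # -> Preheat oven to 350° F Welcome to Lorem Ipsum Inc®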


  • How to bind an ip address to telnetlib in Python

    - by jack
    The code below binds a source IP address for urllib, urllib2, etc.:

        import socket

        true_socket = socket.socket

        def bound_socket(*a, **k):
            sock = true_socket(*a, **k)
            sock.bind((sourceIP, 0))
            return sock

        socket.socket = bound_socket

    Is it also possible to bind an IP address for telnetlib this way?
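
    A sketch of the same idea applied to telnetlib, on the assumption that Telnet builds its connection through the standard socket module (CPython's telnetlib does), so the monkey-patch above affects it as well. The source address and host below are placeholders.

        import socket
        import telnetlib

        source_ip = '192.0.2.10'          # hypothetical local address to bind to
        true_socket = socket.socket

        def bound_socket(*args, **kwargs):
            sock = true_socket(*args, **kwargs)
            sock.bind((source_ip, 0))     # fix the source address, let the OS pick the port
            return sock

        socket.socket = bound_socket      # everything built on the socket module is affected
        tn = telnetlib.Telnet('example.com', 23, timeout=10)   # hypothetical host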


  • Python: Small regex problem

    - by user316758
    Hi, when I try to extract this video ID (AIiMa2Fe-ZQ) with a regex, I can't get the dash and all the characters after it. Can someone help me, please? Thanks

        >>> id = re.search('(?<=\?v\=)\w+', 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ')
        >>> print id.group(0)
        AIiMa2Fe
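
    A minimal sketch of a fix: \w matches letters, digits and underscore but not '-', so widen the character class.

        import re

        url = 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ'
        match = re.search(r'(?<=\?v=)[\w-]+', url)   # add '-' to the allowed characters
        if match:
            print(match.group(0))   # -> AIiMa2Fe-ZQ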


  • Logical python question - handling directories and files in them

    - by Konstantin
    Hello! I'm using this function to extract files from a .zip archive and store them on the server:

        def unzip_file_into_dir(file, dir):
            import sys, zipfile, os, os.path
            os.makedirs(dir, 0777)
            zfobj = zipfile.ZipFile(file)
            for name in zfobj.namelist():
                if name.endswith('/'):
                    os.mkdir(os.path.join(dir, name))
                else:
                    outfile = open(os.path.join(dir, name), 'wb')
                    outfile.write(zfobj.read(name))
                    outfile.close()

    And the usage: unzip_file_into_dir('/var/zips/somearchive.zip', '/var/www/extracted_zip'). somearchive.zip has this structure:

        somearchive.zip
            1.jpeg
            2.jpeg
            another.jpeg

    or, sometimes, this one:

        somearchive.zip
            somedir/
                1.jpeg
                2.jpeg
                another.jpeg

    The question is: how do I modify my function so that my extracted_zip directory always contains just the images, not images inside another subdirectory, even if the images are stored in somedir inside the archive?
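
    A minimal sketch of one way to flatten the archive (not the original function): skip directory entries and keep only each member's base name. Note that two members with the same base name in different folders would overwrite each other.

        import os
        import zipfile

        def unzip_flat(zip_path, dest_dir):
            if not os.path.isdir(dest_dir):
                os.makedirs(dest_dir)
            zfobj = zipfile.ZipFile(zip_path)
            for name in zfobj.namelist():
                if name.endswith('/'):                 # skip directory entries
                    continue
                flat_name = os.path.basename(name)     # 'somedir/1.jpeg' -> '1.jpeg'
                outfile = open(os.path.join(dest_dir, flat_name), 'wb')
                outfile.write(zfobj.read(name))
                outfile.close()
            zfobj.close()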


  • Obfuscate strings in Python

    - by Caedis
    I have a password string that must be passed to a method. Everything works fine, but I don't feel comfortable storing the password in clear text. Is there a way to obfuscate the string, or to truly encrypt it? I'm aware that obfuscation can be reverse engineered, but I think I should at least try to cover up the password a bit. At the very least it won't be visible to an indexing program, or to a stray eye giving a quick look at my code. I am aware of pyobfuscate, but I don't want the whole program obfuscated, just one string and possibly the whole line where the variable is defined. Target platform is GNU/Linux, generic (if that makes a difference).
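
    A minimal sketch of the lightweight-obfuscation route using base64 from the standard library; this only hides the password from a casual glance, since anyone with the source can decode it. The password shown is a placeholder.

        import base64

        # Store only the encoded form in the source.
        OBFUSCATED = base64.b64encode(b'not-a-real-password')

        def get_password():
            # Decode it just before handing it to the method that needs it.
            return base64.b64decode(OBFUSCATED).decode('utf-8')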


  • Sorting numbers in string format with Python

    - by prosseek
    I have a list of chapter numbers stored as strings. When I sort it with the sort function, it gives me the wrong order:

        keys = ['1.1', '1.2', '2.1', '10.1']
        keys.sort()
        print keys
        ['1.1', '1.2', '10.1', '2.1']

    How can I use the sort function to get ['1.1', '1.2', '2.1', '10.1']? And what if the list has something like ['1.1.1', '1.2.1', '10.1', '2.1'], which should sort to ['1.1.1', '1.2.1', '2.1', '10.1']?
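
    A minimal sketch of one way to do it: strings sort lexicographically, so give sort() a key that turns each chapter number into a tuple of integers.

        def chapter_key(s):
            # '10.1.2' -> (10, 1, 2): tuples compare element by element, numerically
            return tuple(int(part) for part in s.split('.'))

        keys = ['1.1.1', '1.2.1', '10.1', '2.1']
        keys.sort(key=chapter_key)
        print(keys)   # -> ['1.1.1', '1.2.1', '2.1', '10.1']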


  • Comparing a time delta in python

    - by Alpesh Patel
    I have a variable which is <type 'datetime.timedelta'> and I would like to compare it against certain values. Let's say d holds the datetime.timedelta value 0:00:01.782000. I would like to compare it like this:

        # if d is greater than 1 minute
        if d > 1:00:
            print "elapsed time is greater than 1 minute"

    I have tried converting it with datetime.timedelta.strptime(), but that doesn't seem to work. Is there an easier way to compare this value?
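
    A minimal sketch of the direct comparison: timedelta objects compare against other timedelta objects, so build one for the threshold.

        from datetime import timedelta

        d = timedelta(seconds=1, microseconds=782000)   # the 0:00:01.782000 from the question

        if d > timedelta(minutes=1):
            print("elapsed time is greater than 1 minute")
        else:
            print("elapsed time is at most 1 minute")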


  • Memory problems while code is running (Python, Networkx)

    - by MIN SU PARK
    I wrote some code to generate a graph with 379,613,734 edges, but it couldn't finish because of memory: it takes about 97% of the server's memory by the time it gets through 62 million lines, so I killed it. Do you have any idea how to solve this problem? My code is like this:

        import os, sys
        import time
        import networkx as nx

        G = nx.Graph()
        ptime = time.time()
        j = 1

        for line in open("./US_Health_Links.txt", 'r'):
        #for line in open("./test_network.txt", 'r'):
            follower = line.strip().split()[0]
            followee = line.strip().split()[1]
            G.add_edge(follower, followee)
            if j%1000000 == 0:
                print j*1.0/1000000, "million lines done", time.time() - ptime
                ptime = time.time()
            j += 1

        DG = G.to_directed()
        # P = nx.path_graph(DG)
        Nn_G = G.number_of_nodes()
        N_CC = nx.number_connected_components(G)
        LCC = nx.connected_component_subgraphs(G)[0]
        n_LCC = LCC.nodes()
        Nn_LCC = LCC.number_of_nodes()
        inDegree = DG.in_degree()
        outDegree = DG.out_degree()
        Density = nx.density(G)
        # Diameter = nx.diameter(G)
        # Centrality = nx.betweenness_centrality(PDG, normalized=True, weighted_edges=False)
        # Clustering = nx.average_clustering(G)

        print "number of nodes in G\t" + str(Nn_G) + '\n' + "number of CC in G\t" + str(N_CC) + '\n' + "number of nodes in LCC\t" + str(Nn_LCC) + '\n' + "Density of G\t" + str(Density) + '\n'
        # sys.exit()
        # j += 1

    The edge data looks like this (one follower/followee pair per line):

        1000 1001
        1000245 1020191
        1000 10267352
        1000653 10957902
        1000 11039092
        1000 1118691
        10346 11882
        1000 1228281
        1000 1247041
        1000 12965332
        121340 13027572
        1000 13075072
        1000 13183162
        1000 13250162
        1214 13326292
        1000 13452672
        1000 13844892
        1000 14061830
        12340 1406481
        1000 14134703
        1000 14216951
        1000 14254402
        12134 14258044
        1000 14270791
        1000 14278978
        12134 14313332
        1000 14392970
        1000 14441172
        1000 14497568
        1000 14502775
        1000 14595635
        1000 14620544
        1000 14632615
        10234 14680596
        1000 14956164
        10230 14998341
        112000 15132211
        1000 15145450
        100 15285998
        1000 15288974
        1000 15300187
        1000 1532061
        1000 15326300

    Lastly, does anybody have experience analyzing Twitter link data? It's quite hard for me to take a directed graph and calculate the average/median indegree and outdegree of the nodes. Any help or ideas?
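
    A hedged sketch of one way to sidestep the memory problem for the degree statistics specifically (not the original approach): since each input line is one directed edge, in-degree and out-degree can be counted in a single streaming pass with plain counters, without ever materializing the 379-million-edge graph.

        from collections import Counter

        in_degree = Counter()
        out_degree = Counter()

        with open('US_Health_Links.txt') as edges:
            for line in edges:
                parts = line.split()
                if len(parts) < 2:
                    continue
                follower, followee = parts[0], parts[1]
                out_degree[follower] += 1
                in_degree[followee] += 1

        nodes = set(in_degree) | set(out_degree)
        print('nodes:', len(nodes))
        print('average in-degree:', sum(in_degree.values()) / float(len(nodes)))
        print('average out-degree:', sum(out_degree.values()) / float(len(nodes)))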


  • python lxml problem

    - by David ???
    I'm trying to print/save a certain element's HTML from a web page. I've retrieved the element's XPath from Firebug, and all I want is to save this element to a file, but I don't seem to succeed in doing so (I tried the XPath with and without a /text() at the end). I would appreciate any help, or past experience. Thanks, David

        import urllib2, StringIO
        from lxml import etree

        url = 'http://www.tutiempo.net/en/Climate/Londres_Heathrow_Airport/12-2009/37720.htm'
        seite = urllib2.urlopen(url)
        html = seite.read()
        seite.close()

        parser = etree.HTMLParser()
        tree = etree.parse(StringIO.StringIO(html), parser)

        xpath = "/html/body/table/tbody/tr/td[2]/div/table/tbody/tr[6]/td/table/tbody/tr/td[3]/table/tbody/tr[3]/td/table/tbody/tr/td/table/tbody/tr/td/table/tbody/text()"
        elem = tree.xpath(xpath)
        print elem[0].strip().encode("utf-8")
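
    A hedged sketch of one likely culprit and a workaround: Firebug reports the browser's DOM, which inserts <tbody> elements that are usually absent from the raw HTML lxml parses, so an absolute path copied from Firebug often matches nothing. Searching relative to the tables themselves tends to be more robust:

        import urllib2
        from lxml import html

        url = 'http://www.tutiempo.net/en/Climate/Londres_Heathrow_Airport/12-2009/37720.htm'
        doc = html.fromstring(urllib2.urlopen(url).read())

        # Pick tables with a relative expression instead of a copied absolute path.
        tables = doc.xpath('//table')
        print(len(tables))

        # Serialise one element's HTML to a file.
        with open('element.html', 'wb') as out:
            out.write(html.tostring(tables[0]))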


  • Drawing a Dragons curve in Python

    - by Connor Franzoni
    I am trying to work out how to draw the dragon curve with Python's turtle, using an L-system (Lindenmayer system). I know the specification is something like: initial state = 'F', replacement rule = replace 'F' with 'F+F-F', number of replacements = 8, length = 5, angle = 60. But I have no idea how to put that into code.
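
    A minimal sketch of a generic L-system interpreter with turtle, using the parameters from the question (start with fewer than 8 replacements while experimenting, since the string triples in length on every pass):

        import turtle

        def expand(axiom, rules, iterations):
            # Apply the replacement rule to every character, repeatedly.
            for _ in range(iterations):
                axiom = ''.join(rules.get(ch, ch) for ch in axiom)
            return axiom

        def draw(commands, length, angle):
            for ch in commands:
                if ch == 'F':
                    turtle.forward(length)
                elif ch == '+':
                    turtle.left(angle)
                elif ch == '-':
                    turtle.right(angle)

        path = expand('F', {'F': 'F+F-F'}, 4)   # 4 instead of 8 keeps the drawing small
        turtle.speed(0)
        draw(path, 5, 60)
        turtle.done()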


  • Fast iterating over first n items of an iterable (not a list) in python

    - by martinthenext
    Hello! I'm looking for a Pythonic way of iterating over the first n items of an iterable (update: not a list in the general case; for lists things are trivial), and it's quite important to do this as fast as possible. This is how I do it now:

        count = 0
        for item in iterable:
            do_something(item)
            count += 1
            if count >= n:
                break

    Doesn't seem neat to me. Another way of doing this is:

        for item in itertools.islice(iterable, n):
            do_something(item)

    This looks good; the question is whether it is fast enough to use with generators. For example:

        pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2)

        for item in itertools.islice(pair_generator(iterable), n):
            do_something(item)

    Will it run as fast as the first method? Is there some easier way to do it?
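
    A small measurement sketch for settling the speed question on your own data, assuming nothing beyond the standard library (the numbers will vary by machine):

        import itertools
        import timeit

        def manual_loop(iterable, n):
            count = 0
            for item in iterable:
                count += 1
                if count >= n:
                    break

        def sliced_loop(iterable, n):
            for item in itertools.islice(iterable, n):
                pass

        N = 100000
        print(timeit.timeit(lambda: manual_loop(iter(range(N * 2)), N), number=200))
        print(timeit.timeit(lambda: sliced_loop(iter(range(N * 2)), N), number=200))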


  • [Python] Download an image embedded in a MIME multipart message

    - by michele
    Hi, I have to download some images from links. These links return a file containing a multipart MIME message with a TIFF image embedded in it. I have written this code, but it downloads the file with the MIME headers still in it. How can I strip the MIME wrapper from this file and get just the image? Can I do this with wget or curl? My code:

        def download(url, local):
            import urllib
            urllib.urlretrieve(url, local)
            urllib.urlcleanup()

    Thanks a lot.
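
    A minimal sketch of one way to unwrap the image in Python 3, assuming the downloaded body is a well-formed MIME message and the image part has an image/* content type (both assumptions, since the actual responses aren't shown):

        import email
        from urllib.request import urlopen

        def download_image(url, local_path):
            raw = urlopen(url).read()
            msg = email.message_from_bytes(raw)              # parse the MIME wrapper
            for part in msg.walk():
                if part.get_content_maintype() == 'image':   # e.g. image/tiff
                    with open(local_path, 'wb') as out:
                        out.write(part.get_payload(decode=True))
                    return True
            return False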


  • Python __subclasses__() not listing subclasses

    - by Mridang Agarwalla
    I can't seem to list all derived classes using the __subclasses__() method. Here's my directory layout:

        import.py
        backends
            __init__.py
            --digger
                __init__.py
                base.py
                test.py
                --plugins
                    plugina_plugin.py

    From import.py I'm calling test.py. test.py in turn iterates over all the files in the plugins directory and loads all of them. test.py looks like this:

        import os
        import sys
        import re

        sys.path.append(os.path.join(os.path.abspath(os.path.dirname(os.path.abspath( __file__ )))))
        sys.path.append(os.path.join(os.path.abspath(os.path.dirname(os.path.abspath( __file__ ))), 'plugins'))

        from base import BasePlugin

        class TestImport:
            def __init__(self):
                print 'heeeeello'

        PLUGIN_DIRECTORY = os.path.join(os.path.abspath(os.path.dirname(os.path.abspath( __file__ ))), 'plugins')

        for filename in os.listdir(PLUGIN_DIRECTORY):
            # Ignore subfolders
            if os.path.isdir(os.path.join(PLUGIN_DIRECTORY, filename)):
                continue
            else:
                if re.match(r".*?_plugin\.py$", filename):
                    print ('Initialising plugin : ' + filename)
                    __import__(re.sub(r".py", r"", filename))

        print ('Plugin system initialized')
        print BasePlugin.__subclasses__()

    The problem is that the __subclasses__() method doesn't show any derived classes. All plugins in the plugins directory derive from the base class in base.py. base.py looks like this:

        class BasePlugin(object):
            """ Base """
            def __init__(self):
                pass

    plugina_plugin.py looks like this:

        from base import BasePlugin

        class PluginA(BasePlugin):
            """ Plugin A """
            def __init__(self):
                pass

    Could anyone help me out with this? What am I doing wrong? I've racked my brains over this but I can't seem to figure it out. Thanks.
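
    One hedged diagnostic sketch (not from the original code): a class only appears in BasePlugin.__subclasses__() if it derives from that exact class object. If base.py ends up imported under two different module names, there are two separate BasePlugin classes, and the plugins may be subclassing the other one. This snippet lists every loaded module that came from base.py together with the identity of its BasePlugin:

        import sys

        for name, module in sorted(sys.modules.items()):
            if module is None:
                continue
            if (getattr(module, '__file__', '') or '').endswith('base.py'):
                print(name, id(getattr(module, 'BasePlugin', None)))
        # Two entries with different ids would explain the empty __subclasses__() list.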


  • Get python tarfile to skip files without read permission

    - by chris
    I'm trying to write a function that backs up a directory containing files with different permissions to an archive on Windows XP. I'm using the tarfile module to tar the directory. Currently, as soon as the program encounters a file it does not have read permission for, it stops with the error IOError: [Errno 13] Permission denied: 'path to file'. I would like it to just skip over the files it cannot read rather than end the tar operation. This is the code I am using now:

        def compressTar():
            """Build and gzip the tar archive."""
            folder = 'C:\\Documents and Settings'
            tar = tarfile.open ("C:\\WINDOWS\\Program\\archive.tar.gz", "w:gz")
            try:
                print "Attempting to build a backup archive"
                tar.add(folder)
            except:
                print "Permission denied attempting to create a backup archive"
                print "Building a limited archive containing files with read permissions."
                for root, dirs, files in os.walk(folder):
                    for f in files:
                        tar.add(os.path.join(root, f))
                    for d in dirs:
                        tar.add(os.path.join(root, d))
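
    A minimal sketch of the per-file approach (not the original function): walk the tree yourself and wrap each add() in its own try/except, so one unreadable file only skips that file instead of aborting the whole archive.

        import os
        import tarfile

        def compress_tree(folder, archive_path):
            tar = tarfile.open(archive_path, "w:gz")
            for root, dirs, files in os.walk(folder):
                for name in files:
                    path = os.path.join(root, name)
                    try:
                        tar.add(path, recursive=False)
                    except (IOError, OSError) as err:   # unreadable file: note it and move on
                        print("skipping %s: %s" % (path, err))
            tar.close()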


  • How to convert an HTML table to an array in python

    - by user345660
    I have an HTML document, and I want to pull the tables out of it and return them as arrays. I'm picturing two functions: one that finds all the HTML tables in a document, and a second one that turns an HTML table into a 2-dimensional array. Something like this:

        htmltables = get_tables(htmldocument)
        for table in htmltables:
            array = make_array(table)

    There are two catches: 1. The number of tables varies from day to day. 2. The tables have all kinds of weird extra formatting, like bold and blink tags, randomly thrown in. Thanks!
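
    A minimal sketch of the two functions using BeautifulSoup (a third-party library, so its availability is an assumption); get_text() flattens any bold or blink markup inside a cell:

        from bs4 import BeautifulSoup

        def get_tables(html_document):
            soup = BeautifulSoup(html_document, 'html.parser')
            return soup.find_all('table')

        def make_array(table):
            return [[cell.get_text(strip=True) for cell in row.find_all(['td', 'th'])]
                    for row in table.find_all('tr')]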


  • Python large variable RAM usage

    - by PPTim
    Hi, say there is a dict variable that grows very large during runtime, up into millions of key:value pairs. Does this variable get stored in RAM, effectively using up all the available memory and slowing down the rest of the system? Asking the interpreter to display the entire dict is a bad idea, but would it be okay as long as only one key is accessed at a time? Tim
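
    For scale: a CPython dict lives entirely in the process's memory regardless of how it is accessed. A hedged sketch of one standard-library alternative if it outgrows RAM is a disk-backed mapping such as shelve, which only loads the entries you touch (keys must be strings):

        import shelve

        db = shelve.open('big_mapping.db')     # stored on disk, not in RAM
        db['some_key'] = {'count': 42}         # values are pickled transparently
        print(db['some_key'])
        db.close()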


  • Using arrays with other arrays in Python.

    - by Scott
    Trying to find an efficient way to remove from one array every item that appears in another. For example:

        array1 = ["abc", "def", "ghi", "jkl"]
        array2 = ["abc", "ghi", "456", "789"]

    Array 1 is the list of items that need to be removed from array 2, so array 2 should end up as ["456", "789"]. I know how to do this, just not in an efficient manner.
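
    A minimal sketch of the usual efficient approach: put the items to remove into a set, so each membership test is constant time, and rebuild the second list in one pass.

        array1 = ["abc", "def", "ghi", "jkl"]
        array2 = ["abc", "ghi", "456", "789"]

        to_remove = set(array1)                          # O(1) lookups
        array2 = [item for item in array2 if item not in to_remove]
        print(array2)   # -> ['456', '789']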


  • Types in Python - Google Appengine

    - by Chris M
    Getting a bit peeved now; I have a model and a class that's just storing a GET request in the database; basic tracking.

        class SearchRec(db.Model):
            WebSite = db.StringProperty()  # required=True
            WebPage = db.StringProperty()
            CountryNM = db.StringProperty()
            PrefMailing = db.BooleanProperty()
            DateStamp = db.DateTimeProperty(auto_now_add=True)
            IP = db.StringProperty()

        class AddSearch(webapp.RequestHandler):
            def get(self):
                searchRec = SearchRec()
                searchRec.WebSite = self.request.get('WEBSITE')
                searchRec.WebPage = self.request.get('WEBPAGE')
                searchRec.CountryNM = self.request.get('COUNTRY')
                searchRec.PrefMailing = bool(self.request.get('MAIL'))
                searchRec.IP = self.request.get('IP')

    Bool has my biscuit; I thought that setting bool(self.reque....) would set the type from the string, but no matter what I pass it, it still stores TRUE in the database. I had the same issue with using required=True on strings for the model; the damn thing kept saying that nothing was being passed... but it had. Ta
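
    A minimal sketch of why this happens and one way around it (the helper below is illustrative, not from the original code): request.get() returns a string, and bool() of any non-empty string is True, even 'false' or '0', so the text has to be checked explicitly.

        def to_bool(raw):
            # bool('false') is True because the string is non-empty;
            # compare the text instead of relying on truthiness.
            return raw.strip().lower() in ('1', 'true', 'yes', 'on')

        print(bool('false'))      # -> True, which is the surprise
        print(to_bool('false'))   # -> False
        print(to_bool(''))        # -> False
        print(to_bool('true'))    # -> True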


  • Faster float to int conversion in Python

    - by culebrón
    Here's the piece of code that takes the most time in my program, according to timeit statistics. It's a dirty function to convert floats in the [-1.0, 1.0] interval into unsigned integers in [0, 2**32]. How can I accelerate floatToInt?

        piece = []
        rng = range(32)
        for i in rng:
            piece.append(1.0/2**i)

        def floatToInt(x):
            n = x + 1.0
            res = 0
            for i in rng:
                if n >= piece[i]:
                    res += 2**(31-i)
                    n -= piece[i]
            return res
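
    A hedged sketch of a closed-form replacement: the loop is effectively building the binary expansion of x + 1.0 with 31 fractional bits, which a single multiply and truncation also does. One difference worth checking is the endpoint, where the loop tops out at 2**32 - 1 while the arithmetic version returns 2**32 for x == 1.0; verify against the original on your own inputs.

        def float_to_int(x):
            # scale [-1.0, 1.0] -> [0.0, 2.0] -> [0, 2**32] in one step
            return int((x + 1.0) * (1 << 31))

        print(float_to_int(-1.0))   # -> 0
        print(float_to_int(0.0))    # -> 2147483648
        print(float_to_int(1.0))    # -> 4294967296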


  • regular expression search in python

    - by Richard
    Hello all, I am trying to parse some data and have just started reading up on regular expressions, so I am pretty new to them. This is the code I have so far:

        String = "MEASUREMENT 3835 303 Oxygen: 235.78 Saturation: 90.51 Temperature: 24.41 DPhase: 33.07 BPhase: 29.56 RPhase: 0.00 BAmp: 368.57 BPot: 18.00 RAmp: 0.00 RawTem.: 68.21"
        String = String.strip('\t\x11\x13')
        String = String.split("Oxygen:")
        print String[1]
        String[1].lstrip
        print String[1]

    What I am trying to do is pull out the oxygen value (235.78) and put it in its own variable using a regular expression search. I realize there should be an easy solution, but I am trying to figure out how regular expressions work and they are making my head hurt. Thanks for any help. Richard
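
    A minimal sketch of the regular-expression route: capture the number that follows the "Oxygen:" label and convert it to a float.

        import re

        data = ("MEASUREMENT 3835 303 Oxygen: 235.78 Saturation: 90.51 "
                "Temperature: 24.41 DPhase: 33.07 BPhase: 29.56 RPhase: 0.00 "
                "BAmp: 368.57 BPot: 18.00 RAmp: 0.00 RawTem.: 68.21")

        match = re.search(r'Oxygen:\s*([\d.]+)', data)   # capture the digits after the label
        if match:
            oxygen = float(match.group(1))
            print(oxygen)   # -> 235.78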


  • Python/YACC: Resolving a shift/reduce conflict

    - by Rosarch
    I'm using PLY. Here is one of my states from parser.out:

        state 3

            (5) course_data -> course .
            (6) course_data -> course . course_list_tail
            (3) or_phrase -> course . OR_CONJ COURSE_NUMBER
            (7) course_list_tail -> . , COURSE_NUMBER
            (8) course_list_tail -> . , COURSE_NUMBER course_list_tail

          ! shift/reduce conflict for OR_CONJ resolved as shift
            $end             reduce using rule 5 (course_data -> course .)
            OR_CONJ          shift and go to state 7
            ,                shift and go to state 8
          ! OR_CONJ          [ reduce using rule 5 (course_data -> course .) ]

            course_list_tail shift and go to state 9

    I want to resolve this as:

        if OR_CONJ is followed by COURSE_NUMBER:
            shift and go to state 7
        else:
            reduce using rule 5 (course_data -> course .)

    How can I fix my parser file to reflect this? Do I need to handle a syntax error by backtracking and trying a different rule?

