Search Results

Search found 15000 results on 600 pages for 'python csv'.

Page 24/600 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Looping in Python and keeping the current line after a subroutine [migrated]

    - by Brendan
    I've been trying to nut out an issue when looping in Python 3. When returning from the subroutine, the "line" variable has not incremented. How do I get the script to return the latest readline from the subroutine? Code below:

    def getData(line):
        #print(line)
        #while line in sTSDP_data:
        while "/service/content/test" not in line:
            line = sTSDP_data.readline()

    import os, sys

    sFileTSDP = "d:/ess/redo/Test.log"
    sTSDP_data = open(sFileTSDP, "r")
    for line in sTSDP_data:
        if "MOBITV" in line:
            getData(line)  #call sub routine
            print(line)
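
    A minimal sketch of one way to restructure this so the caller actually receives the line the subroutine read (names kept from the question; untested against the original log file):

    def getData(data_file, line):
        # Keep reading until the marker appears, then hand the last line back.
        while "/service/content/test" not in line:
            line = data_file.readline()
            if not line:          # end of file reached
                break
        return line

    sTSDP_data = open("d:/ess/redo/Test.log", "r")
    for line in sTSDP_data:
        if "MOBITV" in line:
            line = getData(sTSDP_data, line)   # use the returned value
            print(line)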

    Read the article

  • Automating release management and CI on Python projects under Mercurial VCS

    - by ms4py
    I have a set of Python projects which are under the Mercurial VCS. I would like to automate the following tasks: run the test suite for every commit (CI); make a source distribution for every commit which has a tag in Mercurial (this is regarded as a new release); copy the distribution to a special repository. Jenkins has been proposed for similar questions, but I'm not sure whether it can handle the release management as intended.
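
    Jenkins aside, a rough illustration of the tag-triggered release step: ask Mercurial whether the working revision carries a tag and, if so, build a source distribution. This is only a sketch under the assumption that hg and a setup.py are available; the copy to the special repository is left out.

    import subprocess

    def tags_of_current_revision():
        # '{tags}' is a Mercurial template keyword; 'tip' is not a real release tag.
        out = subprocess.check_output(
            ["hg", "log", "-r", ".", "--template", "{tags}"])
        return [t for t in out.decode().split() if t != "tip"]

    if tags_of_current_revision():
        # Tagged commit: treat it as a new release and build an sdist.
        subprocess.check_call(["python", "setup.py", "sdist"])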

    Read the article

  • How do I change my PYTHONPATH to make 3.2 my default Python instead of 2.7.2?

    - by max
    I have python3.2 located in /usr/lib/python3.2. I am not sure if that means it's installed, but I assume it is for now. Some facts about my system:

    $ which python
    /usr/local/bin/python

    When I type python in a terminal I get the following:

    $ python
    Python 2.7.2 (default, Dec 19 2011, 11:12:13)
    [GCC 4.4.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.

    Then to find the path I do:

    >>> sys.info
    >>> sys.path
    ['', '/usr/local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/usr/local/lib/python2.7/site-packages/pip-1.0.2-py2.7.egg', '/usr/local/lib/python2.7/site-packages/PIL-1.1.7-py2.7-linux-x86_64.egg', '/usr/local/lib/python27.zip', '/usr/local/lib/python2.7', '/usr/local/lib/python2.7/plat-linux2', '/usr/local/lib/python2.7/lib-tk', '/usr/local/lib/python2.7/lib-old', '/usr/local/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/site-packages']

    So knowing all of this, how do I change my default system Python from 2.7.2 to 3.2?

    Read the article

  • To make or not to make...python-nautilus a dependency?

    - by George Edison
    That is the question! Okay, all silliness aside, I really am forced to make a difficult decision here. My application is written in C++ and allows other scripts to invoke methods via XML-RPC. One of these scripts is a Nautilus extension written in Python. The extension is packaged with the rest of the application and copied to the appropriate place when installed (/usr/share/nautilus-python/extensions). Now the problem is that the Nautilus extension requires the python-nautilus package to be installed to be operational. I therefore have four options:

    Make the python-nautilus package a dependency. This option will ensure that anyone who installs my package will be able to use the Nautilus extension. However, this option will not be attractive to XFCE or KDE users - a ton of python-nautilus's dependencies will be installed on their machines and take up a lot of space - even if they never use Nautilus.

    Put the python-nautilus package in the suggests: or recommends: field. This option provides the end-user with a way to avoid installing the python-nautilus package (by providing the --no-install-suggests or --no-install-recommends argument to apt-get). However, this won't work when the user installs the package in the Software Center. (I always get mixed up as to which of those two fields are installed by default.)

    Prompt the user when the application is installed or first launched. This option is more complicated than the others but offers the best compromise between making it easy for the user to install python-nautilus (without going into a technical explanation) and not installing it when the user doesn't need it (or want it). I guess the best way to implement this is a simple prompt that invokes apt-get if the user would like the package installed.

    Don't install the package at all. This option ensures that nobody has python-nautilus installed on their machine unless they want it. However, this also means that my Nautilus extension will simply not run on the end-user's machine unless they manually install the package.

    Which of these options seems the best choice? Have I missed any pros and cons for each of the options?

    Read the article

  • Learning Python, what to learn next to build a site?

    - by user1476145
    Hi everyone, thanks for your help. I am currently learning Python through Think Python: How to Think Like a Computer Scientist. It is a great book, and is free online. I will learn more about Python through Udacity and books online. After I learn Python, what will I need to learn next to build a site? I want to build an interactive photo site. I will have to learn HTML. Do any of you know a good book or website for learning HTML? I also need to learn how to use Python with HTML. As you know, I am using an editor now, but I need to learn how to use Python to build a site. I am sorry that I am so ignorant. I just need some good books to read or courses to take. Thank you so much, Luke

    Read the article

  • Converting a lib from another language to Python, and rights issues

    - by Arruda
    If I take a program and basically translate its source from some language to Python, with some small changes, can I call it an entirely new lib, or do I have to make it a "version" of the old one? Would this be a copy of the first, or a new lib based on the ideas of the first? Note: consider that the original lib uses the Apache License v2. Not sure if this is well explained, but I can't see for now how to make it clearer.

    Read the article

  • Syntax error in Maya Python Script [on hold]

    - by Enchanter
    OK, this error is immensely frustrating, as it is obviously a simple syntax issue. Basically I've written two lines of Maya script in Python designed to create a list of the names of all the joints of a model currently selected in the model viewer. Here are the two lines of script:

    import maya.cmds
    joints = ls(selection = true, type = 'joint')

    Upon compiling the code, the script editor says there is a syntax error in the second line, but I do not see any reason why this code should not execute.
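
    For comparison, here is a sketch of what these two lines usually look like when they run cleanly in Maya's script editor, with a capitalised True and the cmds. prefix (assumes the standard maya.cmds module and is only meaningful inside Maya):

    import maya.cmds as cmds

    # Names of the joints in the current selection.
    joints = cmds.ls(selection=True, type='joint')
    print(joints)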

    Read the article

  • Prettify working in Python IDLE but not when run as a Python script

    - by Loclip
    So when I use print soup.prettify() in IDLE it prettifies my HTML, but when I call the script from the server, prettify() does not work.

    #!/usr/bin/python
    import cgi, cgitb, urllib2, sys
    from bs4 import BeautifulSoup

    styles = []
    i = 1
    site = "www.example.com"
    page = urllib2.urlopen(site)
    soup = BeautifulSoup(page)
    table = soup.find('table', {'class' :'style5'})
    alltd = table.findAll('td')
    allspans = table.findAll('span')
    while i<55:
        styles.append("style" + str(i))
        i+=1
    for td in alltd:
        if any(x in td["class"] for x in styles):
            td["class"] = "align"
    for span in allspans:
        span.replaceWithChildren()
    print "Content-type: text/html"
    print
    print "<!DOCTYPE html>"
    print "<html>"
    print "<head>"
    print '<meta http-equiv="content-type" content="text/html; charset=utf-8">'
    print '<link rel="stylesheet" href="lightbox/css/lightbox.css" type="text/css" media="screen">'
    print '<style type="text/css">'
    print "body {background-color:#b0c4de;}"
    print '.align {text-align: center;}'
    print "</style>"
    print "</head>"
    print "<body>"
    print table.pretiffy()
    print "</body>"
    print "</html>"
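
    One difference worth checking against the IDLE session: the script prints table.pretiffy() rather than prettify(). A standalone sketch using BeautifulSoup's documented method name (with a toy document, not the original page):

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<table><tr><td>hi</td></tr></table>")
    table = soup.find('table')
    print table.prettify()   # the script above calls table.pretiffy()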

    Read the article

  • Mac OS X: Update Python for Shell

    - by Nathan G.
    So, I see similar questions, but none of the answers work for me. I updated Python to 3.1.3 from 2.6.1. Everything works, except: when I type python into Terminal, I get:

    Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
    [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>>

    So, how do I change the version of Python that runs in the shell? I've tried the script that they provide. It adds their directory to my $PATH, but it still doesn't change the version that's displayed in Terminal. Here's what I get when I echo $PATH:

    /Library/Frameworks/Python.framework/Versions/3.1/bin:/Library/Frameworks/Python.framework/Versions/3.1/bin:/Library/Frameworks/Python.framework/Versions/3.1/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

    It appears that the provided script has added their directory every time I ran it (I tried it a few times, naturally). I'll give links to caps of what is in the other relevant folders it mentions: /Library/Frameworks/Python.framework/Versions/3.1/bin, /usr/local/bin, /usr/bin. Thanks in advance for any ideas!

    Read the article

  • Convert CSV/XLS to JSON

    - by mkoryak
    Does anyone know if there is an application that will let you convert, preferably from XLS, to JSON? I'll also settle for a converter from CSV, since that's what I'll probably end up having to write myself if there is nothing around.
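
    For the CSV half at least, the standard library is enough. A minimal sketch using csv.DictReader and json (the file names are placeholders; for .xls input a third-party reader such as xlrd would be needed first):

    import csv
    import json

    # Read the CSV; the header row becomes the JSON keys.
    with open('input.csv') as f:
        rows = list(csv.DictReader(f))

    # Write a list of objects, one per CSV row.
    with open('output.json', 'w') as f:
        json.dump(rows, f, indent=2)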

    Read the article

  • Python pdb not breaking in files properly?

    - by YGA
    Hi folks, I wish I could provide a simple sample case that occurs using standard library code, but unfortunately it only happens when using one of our in-house libraries, which in turn is built on top of SQLAlchemy. Basically, the problem is that this break command:

    (Pdb) print sqlalchemy.engine.base.__file__
    /prod/eggs/SQLAlchemy-0.5.5-py2.5.egg/sqlalchemy/engine/base.py
    (Pdb) break /prod/eggs/SQLAlchemy-0.5.5-py2.5.egg/sqlalchemy/engine/base.py:946

    is just being totally ignored, it seems, by pdb. As in, even though I am positive the code is being hit (both because I can see log messages, and because I've used sys.settrace to check which lines in which files are being hit), pdb is just not breaking there. I suspect that somehow the use of an egg is confusing pdb as to what files are being used (I can't reproduce the error if I use a non-egg'ed library, like pickle; there everything works fine). It's a shot in the dark, but has anyone come across this before? Thanks, /YGA

    Read the article

  • MDB to CSV batch conversion

    - by darhipo
    Does anybody know what I could use to write a script to convert all MDB files in a directory to CSV files? I'm working on Windows but have been using Cygwin for some work.
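
    As a rough sketch only: the mdbtools package (common on Linux, and buildable under Cygwin) ships the mdb-tables and mdb-export commands, which can be driven from Python along these lines (untested; assumes mdbtools is on the PATH and that output names derived from the table names are acceptable):

    import glob
    import subprocess

    for mdb in glob.glob('*.mdb'):
        # '-1' prints one table name per line.
        tables = subprocess.check_output(['mdb-tables', '-1', mdb]).decode().split('\n')
        for table in filter(None, tables):
            csv_name = '%s_%s.csv' % (mdb[:-4], table)
            with open(csv_name, 'w') as out:
                # mdb-export writes the table as CSV to stdout.
                subprocess.check_call(['mdb-export', mdb, table], stdout=out)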

    Read the article

  • Fastest way to convert a tab-delimited file to CSV in Linux

    - by andrewj
    I have a tab-delimited file that has over 200 million lines. What's the fastest way in Linux to convert this to a CSV file? This file does have multiple lines of header information which I'll need to strip out down the road, but the number of lines of header is known. I have seen suggestions for sed and gawk, but I wonder if there is a "preferred" choice.
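
    If a Python approach is acceptable alongside the sed/gawk suggestions, the csv module handles the quoting edge cases while streaming the file; a minimal sketch (file names and the header count are placeholders):

    import csv

    HEADER_LINES = 3  # known number of header lines to skip

    with open('input.tsv') as src, open('output.csv', 'w') as dst:
        reader = csv.reader(src, delimiter='\t')
        writer = csv.writer(dst)
        for i, row in enumerate(reader):
            if i < HEADER_LINES:
                continue
            writer.writerow(row)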

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

    1. Read input data (from a file, database, tcp connection, etc.).
    2. Run calculations on the input data, where each calculation is independent of any other calculation.
    3. Write results of calculations (to a file, database, tcp connection, etc.).

    We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can also run independently: Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems like a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

    1. Process the input file into raw data (lists/iterables of integers)
    2. Calculate the sums of the data, in parallel
    3. Output the sums

    Below is a traditional, single-process bound Python program which solves these three tasks:

    #!/usr/bin/env python
    # -*- coding: UTF-8 -*-
    # basicsums.py
    """A program that reads integer values from a CSV file and
    writes out their sums to another CSV file.
    """

    import csv
    import optparse
    import sys

    def make_cli_parser():
        """Make the command line interface parser."""
        usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                __doc__,
                """
    ARGUMENTS:
        INPUT_CSV: an input CSV file with rows of numbers
        OUTPUT_CSV: an output file that will contain the sums\
    """])
        cli_parser = optparse.OptionParser(usage)
        return cli_parser

    def parse_input_csv(csvfile):
        """Parses the input CSV and yields tuples with the index of the row
        as the first element, and the integers of the row as the second
        element. The index is zero-index based.

        :Parameters:
        - `csvfile`: a `csv.reader` instance

        """
        for i, row in enumerate(csvfile):
            row = [int(entry) for entry in row]
            yield i, row

    def sum_rows(rows):
        """Yields a tuple with the index of each input list of integers
        as the first element, and the sum of the list of integers as the
        second element. The index is zero-index based.

        :Parameters:
        - `rows`: an iterable of tuples, with the index of the original row
          as the first element, and a list of integers as the second element

        """
        for i, row in rows:
            yield i, sum(row)

    def write_results(csvfile, results):
        """Writes a series of results to an outfile, where the first column
        is the index of the original row of data, and the second column is
        the result of the calculation. The index is zero-index based.

        :Parameters:
        - `csvfile`: a `csv.writer` instance to which to write results
        - `results`: an iterable of tuples, with the index (zero-based) of
          the original row as the first element, and the calculated result
          from that row as the second element

        """
        for result_row in results:
            csvfile.writerow(result_row)

    def main(argv):
        cli_parser = make_cli_parser()
        opts, args = cli_parser.parse_args(argv)
        if len(args) != 2:
            cli_parser.error("Please provide an input file and output file.")
        infile = open(args[0])
        in_csvfile = csv.reader(infile)
        outfile = open(args[1], 'w')
        out_csvfile = csv.writer(outfile)
        # gets an iterable of rows that's not yet evaluated
        input_rows = parse_input_csv(in_csvfile)
        # sends the rows iterable to sum_rows() for results iterable, but
        # still not evaluated
        result_rows = sum_rows(input_rows)
        # finally evaluation takes place as a chain in write_results()
        write_results(out_csvfile, result_rows)
        infile.close()
        outfile.close()

    if __name__ == '__main__':
        main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, which needs to be fleshed out to address the parts in the comments:

    #!/usr/bin/env python
    # -*- coding: UTF-8 -*-
    # multiproc_sums.py
    """A program that reads integer values from a CSV file and
    writes out their sums to another CSV file, using multiple processes
    if desired.
    """

    import csv
    import multiprocessing
    import optparse
    import sys

    NUM_PROCS = multiprocessing.cpu_count()

    def make_cli_parser():
        """Make the command line interface parser."""
        usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                __doc__,
                """
    ARGUMENTS:
        INPUT_CSV: an input CSV file with rows of numbers
        OUTPUT_CSV: an output file that will contain the sums\
    """])
        cli_parser = optparse.OptionParser(usage)
        cli_parser.add_option('-n', '--numprocs', type='int',
                default=NUM_PROCS,
                help="Number of processes to launch [DEFAULT: %default]")
        return cli_parser

    def main(argv):
        cli_parser = make_cli_parser()
        opts, args = cli_parser.parse_args(argv)
        if len(args) != 2:
            cli_parser.error("Please provide an input file and output file.")
        infile = open(args[0])
        in_csvfile = csv.reader(infile)
        outfile = open(args[1], 'w')
        out_csvfile = csv.writer(outfile)

        # Parse the input file and add the parsed data to a queue for
        # processing, possibly chunking to decrease communication between
        # processes.

        # Process the parsed data as soon as any (chunks) appear on the
        # queue, using as many processes as allotted by the user
        # (opts.numprocs); place results on a queue for output.
        #
        # Terminate processes when the parser stops putting data in the
        # input queue.

        # Write the results to disk as soon as they appear on the output
        # queue.

        # Ensure all child processes have terminated.

        # Clean up files.
        infile.close()
        outfile.close()

    if __name__ == '__main__':
        main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all:

    Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?

    Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?

    Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()?

    Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
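
    Not the full queue-based design the skeleton asks for, but as one hedged illustration of the pool route raised in the last questions: multiprocessing.Pool.imap can consume the parsed rows lazily while the parent process writes results as they arrive. A sketch for the same toy problem (error handling and finer-grained chunking omitted):

    import csv
    import multiprocessing
    import sys

    def sum_row(indexed_row):
        # Worker: turn one CSV row of strings into (row_index, sum of ints).
        i, row = indexed_row
        return i, sum(int(entry) for entry in row)

    def main(in_path, out_path, numprocs=None):
        pool = multiprocessing.Pool(numprocs or multiprocessing.cpu_count())
        with open(in_path) as infile, open(out_path, 'w') as outfile:
            reader = csv.reader(infile)
            writer = csv.writer(outfile)
            # imap pulls rows from the reader lazily and yields results in order,
            # so the parent keeps writing while the workers keep summing.
            for i, total in pool.imap(sum_row, enumerate(reader), chunksize=100):
                writer.writerow([i, total])
        pool.close()
        pool.join()

    if __name__ == '__main__':
        main(sys.argv[1], sys.argv[2])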

    Read the article

  • Using httplib2 in Python 3 properly? (Timeout problems)

    - by Sho Minamimoto
    Hey, first-time post; I'm really stuck on httplib2. I've been reading up on it from diveintopython3.org, but it mentions nothing about a timeout function. I looked up the documentation, but the only thing I see is the ability to pass a timeout int, and there are no units specified (seconds? milliseconds? What's the default if None?). This is what I have (I also have code to check what the response is and try again, but it's never tried more than once):

    h = httplib2.Http('.cache', timeout=None)
    for url in list:
        response, content = h.request(url)
        more stuff...

    So the Http object stays around until some arbitrary time, but I'm downloading a ton of pages from the same server, and after a while, it hangs on getting a page. No errors are thrown, the thing just hangs at a page. So then I try:

    h = httplib2.Http('.cache', timeout=None)
    for url in list:
        try:
            response, content = h.request(url)
        except:
            h = httplib2.Http('.cache', timeout=None)
        more stuff...

    But then it recreates another Http object every time (goes down the 'except' path)... I don't understand how to keep getting with the same object until it expires and I make another. Also, is there a way to set a timeout on an individual request? Thanks for the help!
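
    For what it's worth, httplib2 hands the timeout value to the underlying socket, so it is in seconds (None falls back to the global socket default). A rough sketch with a finite timeout and a simple retry that keeps reusing one Http object (the retry count and URL list are placeholders):

    import httplib2

    h = httplib2.Http('.cache', timeout=30)   # seconds

    def fetch(url, retries=3):
        for attempt in range(retries):
            try:
                return h.request(url)          # (response, content)
            except Exception:
                if attempt == retries - 1:
                    raise                      # give up after the last try

    # for url in urls:
    #     response, content = fetch(url)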

    Read the article

  • YAML to CSV converter - does one exist already?

    - by Joel
    Hi folks, I want to migrate contacts from one CRM to another. The first only exports all the data I need in YAML. The second only imports CSV. Is there a converter out there that already exists, or do I need to build one specific to this job? Thanks!
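
    No ready-made converter comes to mind, but the translation is short with PyYAML plus the csv module. A rough sketch, under the assumption that the export is a list of flat mappings (the file names and that assumption are mine, not from the CRMs):

    import csv
    import yaml   # PyYAML

    with open('contacts.yaml') as f:
        records = yaml.safe_load(f)    # expected: a list of dicts

    # Union of all keys seen across records becomes the CSV header.
    fieldnames = sorted({key for record in records for key in record})

    with open('contacts.csv', 'w') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)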

    Read the article

  • Regex to parse CSV

    - by mike
    I'm looking for a regex that will parse a line at a time from a CSV file. Basically, what string.readline() does, but it will allow line breaks if they are within double quotes. Or is there an easier way to do this?
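
    On the "easier way" point: Python's csv module already treats a quoted field containing line breaks as part of the same record, so a regex may not be needed. A small sketch (Python 3 shown; 'data.csv' is a placeholder):

    import csv

    with open('data.csv', newline='') as f:   # newline='' lets csv handle embedded line breaks
        for record in csv.reader(f):
            print(record)   # one logical row, even if it spanned several physical lines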

    Read the article

  • Number of lines in csv.DictReader

    - by Alan Harris-Reid
    Hi there, I have a csv.DictReader object (using Python 3.1), but I would like to know the number of lines/rows contained in the reader before I iterate through it. Something like the following...

    myreader = csv.DictReader(open('myFile.csv', newline=''))
    totalrows = ?
    rowcount = 0
    for row in myreader:
        rowcount += 1
        print("Row %d/%d" % (rowcount, totalrows))

    I know I could get the total by iterating through the reader, but then I couldn't run the 'for' loop. I could iterate through a copy of the reader, but I cannot find how to copy an iterator. I could also use

    totalrows = len(open('myFile.csv').readlines())

    but that seems an unnecessary re-opening of the file. I would rather get the count from the DictReader if possible. Any help would be appreciated. Alan
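
    One low-tech sketch of the trade-off described above: materialise the reader into a list, which gives the count up front at the cost of holding every row in memory (fine for modest files):

    import csv

    with open('myFile.csv', newline='') as f:
        rows = list(csv.DictReader(f))   # reads everything once

    totalrows = len(rows)
    for rowcount, row in enumerate(rows, start=1):
        print("Row %d/%d" % (rowcount, totalrows))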

    Read the article

  • CSV file enclosed with double quotes not stripping quotes

    - by sjw
    I am generating a CSV download from my web server and, to be safe, I have enclosed each field in double quotes, i.e.:

    "Field1","Field2","Field3","Field4"
    "row1_field1","row1_field2","row1_field3","row1_field4"
    "row2_field1","row2_field2","row2_field3","row2_field4"

    The problem is that when the file is opened in Excel, it does not strip all quotes. Therefore, some fields are appearing as row1_field1 whereas others are appearing as "row1_field2". What am I not doing to ensure that Excel strips all surrounding quotes?
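
    Without seeing the generating code it is hard to say why Excel keeps some of the quotes (a stray byte-order mark, a space before an opening quote, or an unescaped embedded quote are common culprits). For reference, a sketch of producing the same layout with csv.writer, whose quoting Excel is normally happy with:

    import csv
    import sys

    writer = csv.writer(sys.stdout, quoting=csv.QUOTE_ALL)
    writer.writerow(["Field1", "Field2", "Field3", "Field4"])
    writer.writerow(["row1_field1", "row1_field2", "row1_field3", "row1_field4"])
    writer.writerow(["row2_field1", "row2_field2", "row2_field3", "row2_field4"])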

    Read the article

  • Generating permutations in Python with a specific rule

    - by twfx
    Let's say a = [A, B, C, D], where each element has a weight w and is set to 1 if selected, 0 otherwise. I'd like to generate the permutations in the order below:

    1,1,1,1
    1,1,1,0
    1,1,0,1
    1,1,0,0
    1,0,1,1
    1,0,1,0
    1,0,0,1
    1,0,0,0

    Let's say w = [1,2,3,4] for items A, B, C, D ... and max_weight = 4. For each permutation, if the accumulated weight exceeds max_weight, stop the calculation for that permutation and move to the next one. For example:

    1,1,1 --> 6 > 4, exceeded, stop, move to next
    1,1,1 --> 6 > 4, exceeded, stop, move to next
    1,1,0,1 --> 7 > 4 finished, move to next
    1,1,0,0 --> 3 finished, move to next
    1,0,1,1 --> 8 > 4, finished, stop, move to next
    1,0,1,0 --> 4 finished, move to next
    1,0,0,1 --> 5 > 4 finished, move to next
    1,0,0,0 --> 1 finished, move to next

    [1,0,1,0] is the best combination which does not exceed max_weight 4. My questions are:

    1. What's the algorithm which generates the required permutations? Or any suggestion on how I could generate them?
    2. As the number of elements can be up to 10000, and the calculation stops if the accumulated weight for the branch exceeds max_weight, it is not necessary to generate all permutations first before the calculation. How can the algorithm in (1) generate permutations on the fly?
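
    One way to sketch this in Python without materialising every combination first: itertools.product emits the 0/1 selections lazily, and with [1, 0] as the pool it yields exactly the 1,1,1,1 ... 1,0,0,0 order listed above; each candidate is abandoned as soon as its running weight passes max_weight. (For 10000 elements a branch-pruning recursive generator would be needed to avoid enumerating doomed prefixes; this is the simpler illustration.)

    from itertools import product

    w = [1, 2, 3, 4]          # weights for A, B, C, D
    max_weight = 4

    best = None
    best_weight = -1
    for selection in product([1, 0], repeat=len(w)):
        total = 0
        for selected, weight in zip(selection, w):
            total += selected * weight
            if total > max_weight:       # exceeded: stop scoring this candidate
                total = None
                break
        if total is not None and total > best_weight:
            best, best_weight = selection, total

    print(best, best_weight)   # expected: (1, 0, 1, 0) with weight 4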

    Read the article

  • How to build SQLite for Python 2.4?

    - by Verrtex
    I would like to use the pysqlite interface between Python and an SQLite database. I already have Python and SQLite on my computer. But I have trouble with the installation of pysqlite. During the installation I get the following error message:

    error: command 'gcc' failed with exit status 1

    As far as I understand, the problem appears because my Python version is 2.4.3, and SQLite has been integrated into Python since 2.5. However, I also found out that it IS possible to build SQLite support for Python 2.4 (using some tricks, probably). Does anybody know how to build it for Python 2.4? As another option I could try to install a higher version of Python. However, I do not have root privileges. Does anybody know what would be the easiest way to solve the problem (build SQLite support for Python 2.4, or install a newer version of Python)? I have to mention that I would not like to overwrite the old version of Python. Thank you in advance.

    Read the article

  • How is the 'is' keyword implemented in Python?

    - by Srikanth
    ... the is keyword that can be used for equality in strings.

    >>> s = 'str'
    >>> s is 'str'
    True
    >>> s is 'st'
    False

    I tried both __is__() and __eq__() but they didn't work.

    >>> class MyString:
    ...     def __init__(self):
    ...         self.s = 'string'
    ...     def __is__(self, s):
    ...         return self.s == s
    ...
    >>> m = MyString()
    >>> m is 'ss'
    False
    >>> m is 'string'  # <--- Expected to work
    False
    >>>
    >>> class MyString:
    ...     def __init__(self):
    ...         self.s = 'string'
    ...     def __eq__(self, s):
    ...         return self.s == s
    ...
    >>> m = MyString()
    >>> m is 'ss'
    False
    >>> m is 'string'  # <--- Expected to work, but again failed
    False
    >>>

    Thanks for your help!
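
    For context alongside the snippets above: is compares object identity and is not hooked by any special method, while == is the operator that dispatches to __eq__. A short sketch illustrating the difference (not the original poster's code):

    class MyString:
        def __init__(self):
            self.s = 'string'
        def __eq__(self, other):        # used by ==, never by 'is'
            return self.s == other

    m = MyString()
    print(m == 'string')   # True  -> __eq__ is called
    print(m is 'string')   # False -> different objects, identity differs
    print(m is m)          # True  -> same object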

    Read the article

  • Python try...except comma vs 'as' in except

    - by peter
    What is the difference between ',' and 'as' in except statements, e.g.:

    try:
        pass
    except Exception, exception:
        pass

    and:

    try:
        pass
    except Exception as exception:
        pass

    Is the second syntax legal in 2.6? It works in CPython 2.6 on Windows, but the 2.5 interpreter in Cygwin complains that it is invalid. If they are both valid in 2.6, which should I use?
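
    For code that has to run on both 2.5 and 2.6 (the as form only arrived in 2.6), one version-neutral sketch is to take the exception object from sys.exc_info() instead of naming it in the except clause:

    import sys

    try:
        1 / 0
    except ZeroDivisionError:
        exception = sys.exc_info()[1]   # works on 2.5, 2.6 and 3.x alike
        print(exception)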

    Read the article
