Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.

Page 108/548

  • Python: wxpython wx.media.MediaCtrl - millisecond seek capability

    - by PPTim
    I've been searching for a media player that can display sub-second resolution in videos. Some pointed me to the frame-stepping functionality in MPC, but I'd like even more than that. I know from previous experience with wxPython that wx.media.MediaCtrl both displays and (as fast as I can click with the mouse, anyway) stops the video with millisecond precision. The code is here, and runs with no problem with Python plus the wxPython module. Has anyone come across other video players that handle this functionality, or seen a more robust/developed video player written with wxPython that allows this level of precision? This is possibly a one-off task, so I'd like to use existing solutions if possible. Thanks.

    Read the article

  • Not-quite-JSON string deserialization in Python

    - by cpharmston
    I get the following text as a string from an XML-based REST API:

        'd':4 'ca':5 'sen':1 'diann':2,6,8 'feinstein':3,7,9

    that I'm looking to deserialize into a pretty little Python dictionary:

        { 'd': [4], 'ca': [5], 'sen': [1], 'diann': [2, 6, 8], 'feinstein': [3, 7, 9] }

    I'm hoping to avoid using regular expressions or heavy string manipulation, as this format isn't documented and may change. The best I've been able to come up with:

        members = {}
        for m in elem.text.split(' '):
            m = m.split(':')
            members[m[0].replace("'", '')] = map(int, m[1].split(','))
        return members

    Obviously a terrible approach, but it works, and that's better than anything else I've got right now. Any suggestions on better approaches?
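
    A slightly tidier version of the same split-based idea, as a minimal sketch (it assumes the input stays whitespace-separated key:value tokens, as in the example):

        def parse_index(text):
            """Turn "'d':4 'diann':2,6,8" into {'d': [4], 'diann': [2, 6, 8]}."""
            members = {}
            for token in text.split():
                key, _, values = token.partition(':')
                members[key.strip("'")] = [int(v) for v in values.split(',')]
            return members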

    Read the article

  • Generate fixed length hash in python for url parameter

    - by LeRoy
    I am working in Python on App Engine. I am trying to create the equivalent of the "v" value in YouTube URLs (http://www.youtube.com/watch?v=XhMN0wlITLk) for retrieving specific entities. The datastore auto-generates a key, but it is way too long (34 digits). I have experimented with hashlib to build my own, but again I get a long string. I would like to keep it under 11 digits (I am not dealing with a huge number of entities), and letters and numbers are acceptable. It seems like there should be a pretty standard solution; I am probably just missing it.
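
    One common pattern is to hash a stable string (the datastore key, for instance) and keep a URL-safe prefix of the digest; a sketch using the 11-character length from the question (truncating raises the collision odds, though they stay small for modest entity counts):

        import base64
        import hashlib

        def short_id(value, length=11):
            """Derive a short, URL-safe identifier from a stable string such as a datastore key."""
            digest = hashlib.sha1(value.encode('utf-8')).digest()
            return base64.urlsafe_b64encode(digest)[:length].decode('ascii')

        print(short_id('some-unique-entity-key'))   # 11 URL-safe characters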

    Read the article

  • How would you adblock using Python?

    - by regomodo
    I'm slowly building a web browser in PyQt4 and like the speed I'm getting out of it. However, I want to combine easylist.txt with it. I believe Adblock uses this to block HTTP requests by the browser. How would you go about it using Python/PyQt4?

    [edit1] OK, I think I've set up Privoxy. I haven't set up any additional filters and it seems to work. The PyQt4 code I've tried looks like this:

        self.proxyIP = "127.0.0.1"
        self.proxyPORT = 8118
        proxy = QNetworkProxy()
        proxy.setType(QNetworkProxy.HttpProxy)
        proxy.setHostName(self.proxyIP)
        proxy.setPort(self.proxyPORT)
        QNetworkProxy.setApplicationProxy(proxy)

    However, this does absolutely nothing; I cannot make sense of the docs and cannot find any examples.

    [edit2] I've just noticed that if I change self.proxyIP to my actual local IP rather than 127.0.0.1, the page doesn't load. So something is happening.
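
    If the goal is to drop ad requests inside the browser itself rather than through Privoxy, one route is a custom QNetworkAccessManager on the web view's page. This is only a sketch: the naive substring matching stands in for real EasyList rule parsing, which is considerably richer.

        from PyQt4.QtCore import QUrl
        from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest

        class FilteringNetworkManager(QNetworkAccessManager):
            """Blank out any request whose URL contains a blocked fragment."""
            def __init__(self, blocked, parent=None):
                QNetworkAccessManager.__init__(self, parent)
                self._blocked = blocked

            def createRequest(self, operation, request, device=None):
                url = unicode(request.url().toString())
                if any(fragment in url for fragment in self._blocked):
                    request = QNetworkRequest(QUrl())   # fetch nothing for blocked URLs
                return QNetworkAccessManager.createRequest(self, operation, request, device)

        # webview.page().setNetworkAccessManager(FilteringNetworkManager(['/ads/', 'doubleclick.net']))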

    Read the article

  • Generate n-dimensional random numbers in Python

    - by Magsol
    I'm trying to generate random numbers from a Gaussian distribution. Python has the very useful random.gauss() method, but this is only a one-dimensional random variable. How could I programmatically generate random numbers from this distribution in n dimensions? For example, in two dimensions, the return value of this method is essentially distance from the mean, so I would still need (x, y) coordinates to determine an actual data point. I suppose I could generate two more random numbers, but I'm not sure how to set up the constraints. I appreciate any insights. Thanks!
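
    If NumPy is available, numpy.random.multivariate_normal draws whole points at once; a minimal sketch with an illustrative mean and covariance (with independent components you could equally call random.gauss once per dimension):

        import numpy as np

        mean = [0.0, 0.0]                  # centre of the distribution
        cov = [[1.0, 0.0],
               [0.0, 1.0]]                 # identity covariance: independent x and y

        points = np.random.multivariate_normal(mean, cov, size=5)
        print(points)                      # five rows, each an (x, y) sample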

    Read the article

  • Convert data retrieved from MySQL database into JSON object using Python/Django

    - by rohanbk
    I have a MySQL database called People which contains the following schema: <id, name, foodchoice1, foodchoice2>. The database contains a list of people and the two choices of food they wish to have at a party (for example). I want to create some kind of Python web service that will output a JSON object. An example of the output would look like:

        {
            "guestlist": [
                {"id": 1, "name": "Bob", "choice1": "chicken", "choice2": "pasta"},
                {"id": 2, "name": "Alice", "choice1": "pasta", "choice2": "chicken"}
            ],
            "partyname": "My awesome party",
            "day": "1",
            "month": "June",
            "2010": "null"
        }

    Basically every guest is stored inside 'guestlist' along with their choices of food. At the end of the JSON object is just some additional information that only needs to be mentioned once. My question is about the method I need to use to grab the data from my database and create the JSON object. Do I need to use the standard Model/View structure of Django, or can I get away with something much simpler, since what I need to do is really simple?
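
    A plain Django view can be enough here; a rough sketch (the Guest model, its field names, and the party details are placeholders for whatever the People table maps to):

        import json

        from django.http import HttpResponse
        from myapp.models import Guest   # hypothetical model for the People table

        def guestlist(request):
            guests = [{'id': g.id, 'name': g.name,
                       'choice1': g.foodchoice1, 'choice2': g.foodchoice2}
                      for g in Guest.objects.all()]
            payload = {'guestlist': guests,
                       'partyname': 'My awesome party',
                       'day': '1',
                       'month': 'June'}
            return HttpResponse(json.dumps(payload), content_type='application/json')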

    Read the article

  • Python encoding - Nothing works

    - by Luiz Fernando
    I've been looking at the answers here on this web site, but nothing has worked so far. The problem is: in the database, strings are saved like this one: at &#8730;s = 7 TeV with. The reason is that the JavaScript "escape" function was used. I have not been able to "unescape" these strings in Python yet. I tried to use "eval", "decode", "re.sub" and others, but without success. So please, which function can I use to get it right?
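
    Assuming the stored strings contain HTML numeric character references like the &#8730; in the example, a small regex substitution is usually enough; a sketch for Python 2, matching the question's environment:

        import re

        def decode_numeric_entities(text):
            """Replace references like &#8730; with the character they stand for."""
            return re.sub(r'&#(\d+);', lambda m: unichr(int(m.group(1))), text)

        print(decode_numeric_entities(u'at &#8730;s = 7 TeV with'))   # u'at \u221as = 7 TeV with'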

    Read the article

  • in-memory database in Python

    - by Claudiu
    I'm doing some queries in Python on a large database to get some stats out of it. I want these stats to be in memory so other programs can use them without going to the database. I was thinking of how to structure them, and after trying to set up some complicated nested dictionaries, I realized that a good representation would be an SQL table. I don't want to store the data back into the persistent database, though. Are there any in-memory implementations of an SQL database that support querying the data with SQL syntax?
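
    The standard library's sqlite3 module fits this description: connecting to ':memory:' gives a full SQL database that lives only in RAM. A minimal sketch with made-up stats:

        import sqlite3

        conn = sqlite3.connect(':memory:')          # nothing is written to disk
        conn.execute('CREATE TABLE stats (name TEXT, value REAL)')
        conn.executemany('INSERT INTO stats VALUES (?, ?)',
                         [('mean_latency', 12.5), ('error_rate', 0.02)])

        for name, value in conn.execute('SELECT name, value FROM stats WHERE value > ?', (1.0,)):
            print(name, value)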

    Read the article

  • Sending mail from Python using SMTP

    - by Eli Bendersky
    I'm using the following method to send mail from Python using SMTP. Is it the right method to use, or are there gotchas I'm missing?

        from smtplib import SMTP
        import datetime

        debuglevel = 0

        smtp = SMTP()
        smtp.set_debuglevel(debuglevel)
        smtp.connect('YOUR.MAIL.SERVER', 26)
        smtp.login('USERNAME@DOMAIN', 'PASSWORD')

        from_addr = "John Doe <[email protected]>"
        to_addr = "[email protected]"
        subj = "hello"
        date = datetime.datetime.now().strftime("%d/%m/%Y %H:%M")

        message_text = "Hello\nThis is a mail from your server\n\nBye\n"

        msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" % (from_addr, to_addr, subj, date, message_text)

        smtp.sendmail(from_addr, to_addr, msg)
        smtp.quit()
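
    One common refinement is to build the message with the standard email package instead of hand-formatting the headers; a sketch reusing the placeholders from the question:

        import smtplib
        from email.mime.text import MIMEText
        from email.utils import formatdate

        msg = MIMEText("Hello\nThis is a mail from your server\n\nBye\n")
        msg['Subject'] = 'hello'
        msg['From'] = 'John Doe <[email protected]>'
        msg['To'] = '[email protected]'
        msg['Date'] = formatdate(localtime=True)    # RFC 2822 date instead of a hand-rolled one

        smtp = smtplib.SMTP('YOUR.MAIL.SERVER', 26)
        smtp.login('USERNAME@DOMAIN', 'PASSWORD')
        smtp.sendmail(msg['From'], [msg['To']], msg.as_string())
        smtp.quit()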

    Read the article

  • Python character count

    - by user74283
    I have been going over Python tutorials in this resource. Everything is pretty clear in the code below, which counts the number of characters. The only section that I don't understand is the one where counts is assigned a list multiplied by 128. Can anyone explain the purpose of this in plain English, please?

        def display(i):
            if i == 10:
                return 'LF'
            if i == 13:
                return 'CR'
            if i == 32:
                return 'SPACE'
            return chr(i)

        infile = open('alice_in_wonderland.txt', 'r')
        text = infile.read()
        infile.close()

        counts = 128 * [0]
        for letter in text:
            counts[ord(letter)] += 1

        outfile = open('alice_counts.dat', 'w')
        outfile.write("%-12s%s\n" % ("Character", "Count"))
        outfile.write("=================\n")
        for i in range(len(counts)):
            if counts[i]:
                outfile.write("%-12s%d\n" % (display(i), counts[i]))
        outfile.close()
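
    The line in question, counts = 128 * [0], relies on the fact that multiplying a list by an integer repeats it: it builds a list of 128 zeros, one counter per ASCII code, which the loop then indexes by ord(letter). For example:

        >>> 3 * [0]
        [0, 0, 0]
        >>> counts = 128 * [0]
        >>> counts[ord('a')] += 1      # bump the counter for character code 97
        >>> counts[97]
        1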

    Read the article

  • Easy to use time-stamps in Python

    - by Morlock
    I'm working on a journal-type application in Python. The application basically permits the user to write entries in the journal and adds a time-stamp for later querying the journal. As of now, I use the time.ctime() function to generate time-stamps that are visually friendly. The journal entries thus look like:

        Thu Jan 21 19:59:47 2010  Did something
        Thu Jan 21 20:01:07 2010  Did something else

    Now, I would like to be able to use these time-stamps to do some searching/querying. I need to be able to search, for example, for "2010", or "feb 2010", or "23 feb 2010". My questions are:

    1) What time module(s) should I use: time vs. datetime?
    2) What would be an appropriate way of creating and using the time-stamp objects?

    Many thanks!
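
    datetime is probably the friendlier choice here, since it gives an object you can both format for display and query by year/month/day; a small sketch:

        import datetime

        stamp = datetime.datetime.now()
        print(stamp.strftime('%a %b %d %H:%M:%S %Y'))   # similar visual style to time.ctime()

        # Querying becomes attribute comparisons instead of string searching:
        if stamp.year == 2010 and stamp.month == 2 and stamp.day == 23:
            print('entry from 23 Feb 2010')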

    Read the article

  • python unichr problem

    - by jacob
    I've got a problem with unichr() on my server. Please see below.

    On my server (Ubuntu 9.04):

        >>> print unichr(255)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        UnicodeEncodeError: 'ascii' codec can't encode character u'\xff' in position 0: ordinal not in range(128)

    On my desktop (Ubuntu 9.10):

        >>> print unichr(255)
        ÿ

    I'm fairly new to Python, so I don't know how to solve this. Anyone care to help? Thanks.
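
    The difference is most likely the terminal encoding: on the desktop sys.stdout can represent u'\xff', while on the server it falls back to ASCII, which cannot. A sketch of one way to make the print independent of that default (Python 2, matching the question):

        import sys

        ch = unichr(255)
        encoding = sys.stdout.encoding or 'utf-8'      # server terminals may report ASCII or None
        print ch.encode(encoding, 'replace')           # encode explicitly instead of relying on the default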

    Read the article

  • Next step for Python app using Sqlite db

    - by ChrisC
    I want to write a db program in Python using Sqlite. I have the db table structure planned, and am ready to move to the next step, which I think is to work any bugs out of the db structure. I am totally inexperienced in development except for writing the original db (written in MS Access), and an Intro to C++ class (OOP concepts and console C++ programs). Is it time to test the db structure? If so, what's the best way, and what tool(s) should I use? Thank you.

    Read the article

  • Python UTF-16 encoding hex representation

    - by Romeno
    I have a string in Python 2.7.2, say u"\u0638". When I write it to a file:

        f = open("J:\\111.txt", "w+")
        f.write(u"\u0638".encode('utf-16'))
        f.close()

    in hex it looks like:

        FF FE 38 06

    When I print such a string to stdout I see '\xff\xfe8\x06'. The question: where is \x38 in the string output to stdout? In other words, why is the string output to stdout not '\xff\xfe\x38\x06'?

    If I write the string to the file twice:

        f = open("J:\\111.txt", "w+")
        f.write(u"\u0638".encode('utf-16'))
        f.write(u"\u0638".encode('utf-16'))
        f.close()

    the hex representation in the file contains the byte order mark (BOM) \xff\xfe twice:

        FF FE 38 06 FF FE 38 06

    I wonder what technique avoids writing the BOM in UTF-16 encoded strings?
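
    Two separate things are going on here. First, '\xff\xfe8\x06' does contain the \x38 byte: repr() shows printable bytes as characters, and '8' is \x38. Second, every call to encode('utf-16') emits its own BOM; encoding with 'utf-16-le' (or 'utf-16-be') leaves the BOM out, so you can write it yourself once, as in this sketch:

        import codecs

        f = open('J:\\111.txt', 'wb')
        f.write(codecs.BOM_UTF16_LE)               # one BOM at the start of the file (optional)
        f.write(u'\u0638'.encode('utf-16-le'))     # 38 06, with no extra BOM
        f.write(u'\u0638'.encode('utf-16-le'))
        f.close()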

    Read the article

  • Python: Plot some data (matplotlib) without GIL

    - by BandGap
    Hello all, my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results) But the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place). I can only display the plot, wait till the user closes it and commence calculations after that. A waste of time obviously. I already tried the subprocess and multiprocessing modules but can't seem to get them to work. Any thoughts on this one? Thanks
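
    The blocking here comes mainly from plt.show() waiting on the GUI event loop rather than from the GIL as such; matplotlib's interactive mode lets the window stay up while the script keeps running. A sketch (the random data is just a stand-in for real intermediate results, and plt.pause needs a reasonably recent matplotlib):

        import numpy as np
        import matplotlib.pyplot as plt

        plt.ion()                                    # interactive mode: drawing calls return immediately
        fig, ax = plt.subplots()

        for step in range(5):
            chunk = np.random.randn(200).cumsum()    # pretend this is the latest batch of results
            ax.plot(chunk)
            plt.draw()
            plt.pause(0.5)                           # lets the GUI process events, then control returns
        # ...analysis continues here while the plot window stays open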

    Read the article

  • Resizing image with Python with locked aspect ratio

    - by David Vinklar
    How should I resize an image with a Python script so that it automatically adjusts the height to the width used? I'm using the following code:

        def Do(Environment):
            # Resize
            App.Do( Environment, 'Resize', {
                'AspectRatio': 1.33333,
                'CurrentDimensionUnits': App.Constants.UnitsOfMeasure.Pixels,
                'CurrentResolutionUnits': App.Constants.ResolutionUnits.PixelsPerIn,
                'Height': 1440,
                'MaintainAspectRatio': True,
                'Resample': True,
                'ResampleType': App.Constants.ResampleType.SmartSize,
                'ResizeAllLayers': True,
                'Resolution': 72,
                'Width': 1920,
                })

    This code works perfectly if the aspect ratio of an image is the same as the one defined in the code, i.e. 1.33333. But how should I make it work with images that do not have this ratio? What is important for me is that the new width is 1920; the height has to adjust automatically. Any ideas which part of my code should be altered, and how?
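
    If stepping outside the App.Do scripting interface is an option, the same width-locked resize is straightforward with PIL/Pillow; a sketch with made-up filenames (newer Pillow spells the filter Image.LANCZOS):

        from PIL import Image

        img = Image.open('input.jpg')
        target_width = 1920
        target_height = int(round(img.size[1] * target_width / float(img.size[0])))
        img.resize((target_width, target_height), Image.ANTIALIAS).save('output.jpg')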

    Read the article

  • Converting HTML special characters into their value using Python

    - by tipu
    I have a file that's littered with HTML entities like the ones listed at http://www.utexas.edu/learn/html/spchar.html: for example – is &ndash;, — is &mdash;, ¡ is &iexcl;, and so on. Is it possible in Python to natively convert these characters back into their values, so any occurrence of &ndash; will appear as – instead? My current approach was just to make a dict of HTML entities and their UTF-8 values and do search-and-replace, but I was wondering if there are any libraries that can take care of this for me.
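
    The standard library already knows the entity table: on Python 3 html.unescape() does the whole job, and on Python 2 the htmlentitydefs mapping can back a small regex substitution. A sketch for Python 2, handling named and decimal numeric references:

        import re
        from htmlentitydefs import name2codepoint

        def unescape_entities(text):
            """Swap named (&ndash;) and decimal numeric (&#8211;) entities for the real characters."""
            def replace(match):
                name = match.group(1)
                if name.startswith('#'):
                    return unichr(int(name[1:]))
                return unichr(name2codepoint.get(name, 0x3F))   # '?' for unknown names
            return re.sub(r'&(#?\w+);', replace, text)

        print(unescape_entities(u'dashes &ndash; and &mdash; plus &iexcl;'))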

    Read the article

  • difference in logging mechanism: API and application(python)

    - by zubin71
    I am currently writing an API and an application which uses the API. I have gotten suggestions from people stating that I should perform logging using handlers in the application and use a "logger" object for logging from the API.

    In light of the advice I received above, is the following implementation correct?

        class test:
            def __init__(self, verbose):
                self.logger = logging.getLogger("test")
                self.logger.setLevel(verbose)

            def do_something(self):
                # do something
                self.logger.log("something")
                # by doing this I get the error message "No handlers could be found for logger "test""

    The implementation I had in mind was as follows:

        #!/usr/bin/python
        """
        ....
        create a logger with a handler
        ....
        """
        myobject = test()
        try:
            myobject.do_something()
        except SomeError:
            logger.log("cant do something")

    I'd like to get my basics right; I'd be grateful for any help and suggestions for code you might recommend I look up. Thanks!
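
    The usual split is exactly what was suggested: the API module only obtains a named logger and logs to it, while the application decides once where that output goes by configuring handlers. A sketch (names are illustrative):

        import logging

        # --- in the API/library module ---
        logger = logging.getLogger('mylib')
        logger.addHandler(logging.NullHandler())   # Python 2.7+/3.1+: silences "No handlers could be found"

        class Test(object):
            def do_something(self):
                logger.info('doing something')     # log the event; don't decide where it goes

        # --- in the application ---
        if __name__ == '__main__':
            logging.basicConfig(level=logging.INFO,
                                format='%(asctime)s %(name)s %(levelname)s: %(message)s')
            Test().do_something()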

    Read the article

  • Unescape _xHHHH_ XML escape sequences using Python

    - by John Machin
    I'm using Python 2.x [not negotiable] to read XML documents [created by others] that allow the content of many elements to contain characters that are not valid XML characters, by escaping them using the _xHHHH_ convention, e.g. ASCII BEL aka U+0007 is represented by the 7-character sequence u"_x0007_". Neither the functionality that allows representation of any old character in the document nor the manner of escaping is negotiable. I'm parsing the documents using cElementTree or lxml [semi-negotiable]. Here is my best attempt at unescaping the parser output as efficiently as possible:

        import re

        def unescape(s,
                     subber=re.compile(r'_x[0-9A-Fa-f]{4,4}_').sub,
                     repl=lambda mobj: unichr(int(mobj.group(0)[2:6], 16)),
                     ):
            if "_" in s:
                return subber(repl, s)
            return s

    The above is biased by observing a very low frequency of "_" in typical text and a better-than-doubling of speed by avoiding the regex apparatus where possible. The question: any better ideas out there?

    Read the article

  • Vim: Good AutoCompletion Plugin for Python and PHP

    - by Rafid K. Abdullah
    I use Vim with ctags for development. I found ctags to be very useful for going to definitions, but I don't know a good plugin that makes use of ctags for clever auto-completion. It seems that the default Vim auto-completion is not good. When I run :set omnifunc? in Vim, I get this: omnifunc=pythoncomplete#Complete. I do know about OmniComplete for C++, but I don't know any good plugin for Python and PHP. Does anybody have an idea?

    Read the article

  • math syntax checker written in python

    - by neurino
    All I need is to check, using Python, whether a string is a valid math expression or not. For simplicity, let's say I just need the + - * / operators (+ and - as unary too) with numbers and nested parentheses. I'll also add simple variable names for completeness. So I should be able to test this way:

        test("-3 * (2 + 1)")   # valid
        test("-3 * ")          # NOT valid
        test("v1 + v2")        # valid
        test("v2 - 2v")        # NOT valid ("2v" is not a valid variable name)

    I tried pyparsing, but just trying the example "simple algebraic expression parser, that performs +,-,*,/ and ^ arithmetic operations", invalid input gets accepted, and even after trying to fix it I always see wrong syntax being parsed without any exception being raised. Just try:

        >>> test('9', 9)
        9 qwerty = 9.0
        ['9'] => ['9']
        >>> test('9 qwerty', 9)
        9 qwerty = 9.0
        ['9'] => ['9']

    Both tests pass... o_O Any advice?
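
    One dependency-free alternative is to let Python itself parse the string with the ast module and then whitelist the node types you allow; a sketch (node class names shift a little in very recent Python versions, e.g. Num vs Constant):

        import ast

        ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Num, ast.Name, ast.Load,
                   ast.Add, ast.Sub, ast.Mult, ast.Div, ast.UAdd, ast.USub)

        def is_valid_expression(text):
            """Accept numbers, variable names, parentheses and + - * / (unary +/- included)."""
            try:
                tree = ast.parse(text, mode='eval')
            except SyntaxError:
                return False
            return all(isinstance(node, ALLOWED) for node in ast.walk(tree))

        print(is_valid_expression('-3 * (2 + 1)'))   # True
        print(is_valid_expression('-3 * '))          # False
        print(is_valid_expression('v2 - 2v'))        # False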

    Read the article

  • vectorized approach to binning with numpy/scipy in Python

    - by user248237
    I am binning a 2D array (x by y) in Python into the bins of its x value (given in "bins"), using np.digitize:

        elements_to_bins = digitize(vals, bins)

    where "vals" is a 2D array, i.e.:

        vals = array([[1, v1], [2, v2], ...])

    elements_to_bins just says what bin each element falls into. What I then want to do is get a list whose length is the number of bins in "bins", where each element holds the y-dimension values of "vals" that fall into that bin. I do it this way right now:

        points_by_bins = []
        for curr_bin in range(min(elements_to_bins), max(elements_to_bins) + 1):
            curr_indx = where(elements_to_bins == curr_bin)[0]
            curr_bin_vals = vals[:, curr_indx]
            points_by_bins.append(curr_bin_vals)

    Is there a more elegant/simpler way to do this? All I need is a list of lists of the y-values that fall into each bin. Thanks.
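
    With digitize applied to just the x column, the per-bin selection can be written as a single comprehension over boolean masks; a sketch with made-up (x, y) rows:

        import numpy as np

        vals = np.array([[0.5, 10.0], [1.5, 20.0], [1.7, 40.0], [2.5, 30.0]])   # rows of (x, y)
        bins = np.array([0.0, 1.0, 2.0, 3.0])

        bin_index = np.digitize(vals[:, 0], bins)            # bin number of each row's x value
        points_by_bin = [vals[bin_index == b, 1]             # y values whose x landed in bin b
                         for b in range(1, len(bins))]
        print(points_by_bin)    # [array([10.]), array([20., 40.]), array([30.])]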

    Read the article

  • Python - Converting CSV to Objects - Code Design

    - by victorhooi
    Hi, I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data. We read in the data (import_gd_dump), and create an Employees object, containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on Employees, which calls clean_phone_number() on each Employee, as well as lookup_all_supervisors(), on Employees.

        import csv
        import re
        import sys

        #class CSVLoader:
        #    """Virtual class to assist with loading in CSV files."""
        #    def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
        #        gd_extract = csv.DictReader(open(input_file), dialect='excel')
        #        employees = []
        #        for row in gd_extract:
        #            curr_employee = Employee(row)
        #            employees.append(curr_employee)
        #        return employees
        #        #self.employees = {row['dbdirid']:row for row in gd_extract}

        # Previously, this was inside a (virtual) class called "CSVLoader".
        # However, according to here (http://tomayko.com/writings/the-static-method-thing) - the idiomatic way
        # of doing this in Python is not with a class-function but with a module-level function
        def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
            """Return a list ('employee') of dict objects, taken from a Group Directory CSV file."""
            gd_extract = csv.DictReader(open(input_file), dialect='excel')
            employees = []
            for row in gd_extract:
                employees.append(row)
            return employees

        def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
            """Read in an Employees() object, and write out each Employee() inside this to a CSV file"""
            gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid',
                                    'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail',
                                    'SupervisorFirstName', 'SupervisorSurname')
            try:
                gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''),
                                              fieldnames=gd_output_fieldnames,
                                              extrasaction='ignore', dialect='excel')
            except IOError:
                print('Unable to open file, IO error (Is it locked?)')
                sys.exit(1)
            headers = {n:n for n in gd_output_fieldnames}
            gd_formatted.writerow(headers)
            for employee in employees_dict.employee_list:
                # We're using the employee object's inbuilt __dict__ attribute - hmm, is this good practice?
                gd_formatted.writerow(employee.__dict__)

        class Employee:
            """An Employee in the system, with employee attributes (name, email, cost-centre etc.)"""
            def __init__(self, employee_attributes):
                """We use the Employee constructor to convert a dictionary into instance attributes."""
                for k, v in employee_attributes.items():
                    setattr(self, k, v)

            def clean_phone_number(self):
                """Perform some rudimentary checks and corrections, to make sure numbers are in the right format.
                Numbers should be in the form 0XYYYYYYYY, where X is the area code, and Y is the local number."""
                if self.telephoneNumber is None or self.telephoneNumber == '':
                    return '', 'Missing phone number.'
                else:
                    standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
                    if standard_format.search(self.telephoneNumber):
                        result = standard_format.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
                    elif extra_zero.search(self.telephoneNumber):
                        result = extra_zero.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate. '
                    elif missing_hyphen.search(self.telephoneNumber):
                        result = missing_hyphen.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate. '
                    else:
                        return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

        class Employees:
            def __init__(self, import_list):
                self.employee_list = []
                for employee in import_list:
                    self.employee_list.append(Employee(employee))

            def clean_all_phone_numbers(self):
                for employee in self.employee_list:
                    # Should we just set this directly in Employee.clean_phone_number() instead?
                    employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

            # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
            def lookup_all_supervisors(self):
                for employee in self.employee_list:
                    if employee.hrreportsto is not None and employee.hrreportsto != '':
                        for supervisor in self.employee_list:
                            if supervisor.hrid == employee.hrreportsto:
                                (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                                break
                        else:
                            (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.')
                    else:
                        (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.')

            # Is there a more pythonic way of doing this?
            def print_employees(self):
                for employee in self.employee_list:
                    print(employee.__dict__)

        if __name__ == '__main__':
            db_employees = Employees(import_gd_dump())
            db_employees.clean_all_phone_numbers()
            db_employees.lookup_all_supervisors()
            #db_employees.print_employees()
            write_gd_formatted(db_employees)

    Firstly, my preamble question is: can you see anything inherently wrong with the above, from either a class-design or Python point of view? Is the logic/design sound?

    Anyhow, to the specifics. The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad? Originally, I wrapped the clean_phone_number() and lookup_supervisor() methods in a single function, with a single for-loop inside it. clean_phone_number is O(n), I believe, and lookup_supervisor is O(n^2); is it OK splitting it into two loops like this? In clean_all_phone_numbers(), I'm looping over the Employee objects and setting their values using return/assignment; should I be setting this inside clean_phone_number() itself?

    There are also a few things that I sort of hacked out and I'm not sure if they're bad practice, e.g. print_employee() and gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes. I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost them split up (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less fairly tightly related). Cheers, Victor
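
    On the O(n^2) point specifically, one common pattern is to build a dictionary keyed by hrid once, which turns the supervisor search into constant-time lookups. A rough sketch of lookup_all_supervisors along those lines (same attribute names as the code above; a sketch rather than a tested drop-in):

        def lookup_all_supervisors(self):
            by_hrid = dict((e.hrid, e) for e in self.employee_list)
            for employee in self.employee_list:
                if not getattr(employee, 'hrreportsto', None):
                    details = ('Supervisor not set.',) * 3
                else:
                    supervisor = by_hrid.get(employee.hrreportsto)
                    if supervisor is None:
                        details = ('Supervisor not found.',) * 3
                    else:
                        details = (supervisor.mail, supervisor.givenName, supervisor.sn)
                (employee.SupervisorEmail,
                 employee.SupervisorFirstName,
                 employee.SupervisorSurname) = details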

    Read the article

  • How to evaluate javascript code in Python

    - by overboming
    I need to fetch some results from a webpage which uses some JavaScript code to generate the part I am interested in, like the following:

        eval(function(p,a,c,k,e,d){e=function(c){return c};if(!''.replace(/^/,String)){while(c--)d[c]=k[c]||c;k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1;};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p;}('5 11=17;5 12=["/3/2/1/0/13.4","/3/2/1/0/15.4","/3/2/1/0/14.4","/3/2/1/0/7.4","/3/2/1/0/6.4","/3/2/1/0/8.4","/3/2/1/0/10.4","/3/2/1/0/9.4","/3/2/1/0/23.4","/3/2/1/0/22.4","/3/2/1/0/24.4","/3/2/1/0/26.4","/3/2/1/0/25.4","/3/2/1/0/18.4","/3/2/1/0/16.4","/3/2/1/0/19.4","/3/2/1/0/21.4"];5 20=0;',10,27,'40769|54|Images|Files|png|var|imanhua_005_140430179|imanhua_004_140430179|imanhua_006_140430226|imanhua_008_140430242|imanhua_007_140430226|len|pic|imanhua_001_140429664|imanhua_003_140430117|imanhua_002_140430070|imanhua_015_140430414||imanhua_014_140430382|imanhua_016_140430414|sid|imanhua_017_140430429|imanhua_010_140430289|imanhua_009_140430242|imanhua_011_140430367|imanhua_013_140430382|imanhua_012_140430367'.split('|'),0,{}))

    The result of eval() is valuable to me. I am writing a Python script; is there any library I can use to virtually run this piece of JavaScript code and get the output? Thanks.
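
    One option (it needs a JavaScript runtime such as Node installed, and the third-party PyExecJS package postdates this question a little) is to evaluate the expression inside the eval() and read back its value; a sketch with a placeholder filename:

        import execjs   # third-party: pip install PyExecJS

        packed = open('packed_snippet.js').read()        # the eval(function(p,a,c,k,e,d){...}) blob
        expression = packed.lstrip()[len('eval'):]       # drop the outer eval, keep the (...) expression
        unpacked = execjs.eval(expression)               # the decoded JavaScript source, as a Python string
        print(unpacked)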

    Read the article

  • Is Using Python to MapReduce for Cassandra Dumb?

    - by UltimateBrent
    Since Cassandra doesn't have MapReduce built in yet (I think it's coming in 0.7), is it dumb to try to MapReduce with my Python client, or should I just use CouchDB or Mongo or something? The application is stats collection, so I need to be able to sum values with grouping to increment counters. I'm not, but pretend I'm making Google Analytics, so I want to keep track of which browsers appear, which pages they went to, and visits vs. pageviews. I would just atomically update my counters on write, but Cassandra isn't very good at counters either. Maybe Cassandra just isn't the right choice for this? Thanks!

    Read the article
