I am using XML as the backend for my application, and lxml is used to parse the XML.
How can I encrypt this XML file to make sure that the data is protected?
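For illustration, this is roughly the sort of thing I'm imagining, assuming a symmetric cipher such as Fernet from the cryptography package is acceptable (file names are placeholders):

from cryptography.fernet import Fernet
from lxml import etree

key = Fernet.generate_key()        # would have to be stored somewhere safe
cipher = Fernet(key)

tree = etree.parse("data.xml")     # "data.xml" is a placeholder name
plain = etree.tostring(tree)       # serialize the XML tree to bytes
token = cipher.encrypt(plain)      # encrypt the serialized document

with open("data.xml.enc", "wb") as out:
    out.write(token)

# reading it back: etree.fromstring(cipher.decrypt(token))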
Thanks in advance.
I have a game loop like this:
# The velocity of the object
velocity_x = 0.09
velocity_y = 0.03

# If the location of the object is over 5, bounce off.
if loc_x > 5:
    velocity_x = velocity_x * -1
if loc_y > 5:
    velocity_y = velocity_y * -1

# Every frame, set the object's position to the old position plus the velocity
obj.setPosition([(loc_x + velocity_x), (loc_y + velocity_y), 0])
Basically, my problem is that in the if blocks I change the velocity variables to the inverse of their old values. But because I declare their values at the beginning of the script, they get reset every frame and don't stay on what I change them to.
I need a way to change the variables' values so the change persists between frames.
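For illustration, this is roughly what I think I need: keep the velocities outside the per-frame code so the sign flip survives into the next frame (a sketch only; obj and the frame driver are placeholders from my engine):

velocity_x = 0.09
velocity_y = 0.03

def update_frame(obj):
    global velocity_x, velocity_y
    loc_x, loc_y, loc_z = obj.getPosition()   # assuming the engine exposes a getter
    # Bounce once the object passes 5 on either axis
    if loc_x > 5:
        velocity_x = -velocity_x
    if loc_y > 5:
        velocity_y = -velocity_y
    obj.setPosition([loc_x + velocity_x, loc_y + velocity_y, loc_z])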
Thank you!
Is it possible to get the full path of the file on the user's computer being uploaded to my site?
Using os.path.abspath(fileitem.filename) simply gets me the address of where my script is executing from on my shared hosting server.
FYI: fileitem = form['file'] and form = cgi.FieldStorage()
I have a class which is being operated on by two functions. One function creates a list of widgets and writes it into the class:
def updateWidgets(self):
    widgets = self.generateWidgetList()
    self.widgets = widgets
The other function deals with the widgets in some way:
def workOnWidgets(self):
    for widget in self.widgets:
        self.workOnWidget(widget)
Each of these functions runs in its own thread. The question is: what happens if the updateWidgets() thread executes while the workOnWidgets() thread is running?
I am assuming that the iterator created as part of the for...in loop keeps a reference to the old self.widgets object, so I will finish iterating over the old list, but I'd love to know for sure.
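For what it's worth, here is the small single-threaded experiment I was thinking of running to check the behaviour (just rebinding the attribute mid-iteration):

class Holder(object):
    def __init__(self):
        self.widgets = ["a", "b", "c"]

h = Holder()
for w in h.widgets:              # the loop holds a reference to the current list object
    if w == "a":
        h.widgets = ["x", "y"]   # rebinding the attribute does not affect the running loop
    print w                      # prints a, b, c from the old list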
I have a heavy I/O-bound operation: parsing large files and converting them from one format to another. Initially I did it serially, parsing one file after another, and performance was very poor (it took 90+ seconds). So I decided to use threading to improve performance, creating one thread per file (4 threads):
ts = []
for file in file_list:
    t = threading.Thread(target=self.convertfile, args=(file,))
    t.start()
    ts.append(t)

for t in ts:
    t.join()
But to my astonishment, there is no performance improvement whatsoever; it still takes around 90+ seconds to complete the task. Since this is an I/O-bound operation, I expected threading to improve performance. What am I doing wrong?
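I was also wondering whether a multiprocessing version would behave differently; this is only a sketch of what I have in mind, assuming the conversion can live in a module-level function (bound methods don't pickle cleanly on Python 2):

from multiprocessing import Pool

def convertfile(path):
    # parse `path` and write out the converted file (placeholder body)
    pass

if __name__ == "__main__":
    file_list = ["a.xml", "b.xml", "c.xml", "d.xml"]   # placeholder names
    pool = Pool(processes=4)           # one worker process per file
    pool.map(convertfile, file_list)   # blocks until every conversion finishes
    pool.close()
    pool.join()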
When I try to write a field that includes whitespace in it, it gets split into multiple fields on the space. What's causing this? It's driving me insane. Thanks
import csv

data = open("file.csv", "wb")
w = csv.writer(data)
w.writerow(['word1', 'word2'])
w.writerow(['word 1', 'word2'])
data.close()
I get 2 fields (word1, word2) for the first row and 3 (word, 1, word2) for the second.
Hi,
I'm trying to modify Guido's multimethod (dynamic dispatch) code:
http://www.artima.com/weblogs/viewpost.jsp?thread=101605
to handle inheritance and possibly out-of-order arguments.
E.g. (the inheritance problem):
class A(object):
    pass

class B(A):
    pass

@multimethod(A, A)
def foo(arg1, arg2):
    print 'works'

foo(A(), A())  # works
foo(A(), B())  # fails
Is there a better way than iteratively checking the superclasses of each argument until a registered combination is found?
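To illustrate what I mean by iteratively checking, here is a rough sketch that walks the MRO of each argument's class until a registered combination matches (registry stands in for the multimethod's typemap):

import itertools

def find_impl(registry, *args):
    # try combinations of each argument's class and its bases, starting from the exact types
    for types in itertools.product(*(type(arg).__mro__ for arg in args)):
        impl = registry.get(types)
        if impl is not None:
            return impl
    return None

# e.g. find_impl(registry, A(), B()) would fall back to the (A, A) registration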
E.g. (the argument-ordering problem): I was thinking of this from a collision-detection standpoint, where
foo(Car(), Truck())
foo(Truck(), Car())
should both trigger the implementation registered as foo(Car, Truck). (Note: @multimethod(Truck, Car) will throw an exception if @multimethod(Car, Truck) was registered first?)
I'm looking specifically for an 'elegant' solution. I know that I could just brute force my way through all the possibilities, but I'm trying to avoid that. I just wanted to get some input/ideas before sitting down and pounding out a solution.
Thanks
Is there a convenient way to calculate percentiles for a sequence or single-dimensional numpy array?
I am looking for something similar to Excel's percentile function.
I looked in NumPy's statistics reference, and couldn't find this. All I could find is the median (50th percentile), but not something more specific.
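For reference, this is what I'm currently doing by hand (linear interpolation between the two nearest ranks), which is why I'm hoping there is a built-in:

import numpy as np

def percentile(seq, pct):
    data = np.sort(np.asarray(seq, dtype=float))
    rank = (len(data) - 1) * (pct / 100.0)   # fractional index of the percentile
    lo = int(np.floor(rank))
    hi = int(np.ceil(rank))
    return data[lo] + (data[hi] - data[lo]) * (rank - lo)

print percentile([15, 20, 35, 40, 50], 40)   # 29.0, same as Excel's PERCENTILE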
Hi
I have this code that fetches some text from a page using BeautifulSoup
soup = BeautifulSoup(html)
body = soup.find('div', {'id': 'body'})
print body
I would like to make this into a reusable function that takes some HTML text and the tags to match, like the following:
def parse(html, atrs):
    soup = BeautifulSoup(html)
    body = soup.find(atrs)
    return body
But if I make a call like this:
parse(htmlpage, ('div', {'id': 'body'})) or like
parse(htmlpage, ['div', {'id': 'body'}])
I get only the first div element; the {'id': 'body'} attribute filter seems to get ignored.
Is there a way to fix this?
I'm not even sure what the right words are to search for. I want to display parts of the error object in an except block (similar to the Err object in VBScript, which has Err.Number and Err.Description). For example, I want to show the values of my variables, then show the exact error. Clearly, I am causing a divide-by-zero error below, but how can I print that fact?
try:
    x = 0
    y = 1
    z = y / x
    z = z + 1
    print "z=%d" % (z)
except:
    print "Values at Exception: x=%d y=%d " % (x, y)
    print "The error was on line ..."
    print "The reason for the error was ..."
I'm downloading a long list of my email subject lines, with the intent of finding email lists that I was a member of years ago and would want to purge from my Gmail account (which is getting pretty slow).
I'm specifically thinking of newsletters that often come from the same address, and repeat the product/service/group's name in the subject.
I'm aware that I could search/sort by the common occurrence of items from a particular email address (and I intend to), but I'd like to correlate that data with repeating subject lines....
Now, many subject lines would fail a string match, but
"Google Friends : Our latest news"
"Google Friends : What we're doing today"
are more similar to each other than to a random subject line, as are:
"Virgin Airlines has a great sale today"
"Take a flight with Virgin Airlines"
So, how can I start to automagically extract trends/examples of strings that may be more similar?
Approaches I've considered and discarded ('because there must be some better way'):
Extracting all the possible substrings and ordering them by how often they show up, and manually selecting relevant ones
Stripping off the first word or two and then counting the occurrence of each substring
Comparing Levenshtein distance between entries
Some sort of string similarity index ...
Most of these were rejected for massive inefficiency or the likelihood that a vast amount of manual intervention would be required. I guess I need some sort of fuzzy string matching?
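To give an example of the kind of similarity index I mean (the last item in the list above), difflib from the standard library already produces a usable score; this is just an illustration, not a full solution:

import difflib

a = "Google Friends : Our latest news"
b = "Google Friends : What we're doing today"
c = "Take a flight with Virgin Airlines"

# SequenceMatcher.ratio() returns a similarity score between 0 and 1
print difflib.SequenceMatcher(None, a, b).ratio()   # noticeably higher...
print difflib.SequenceMatcher(None, a, c).ratio()   # ...than this unrelated pair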
In the end, I can think of kludgy ways of doing this, but I'm looking for something more generic, so that I'm adding to my set of tools rather than special-casing for this data set.
After this, I'd be matching the occurrence of particular subject strings with 'From' addresses. I'm not sure whether there's a good way of building a data structure that represents how likely it is that two messages are part of the same email list, or of filtering all my email subjects/from addresses into pools of likely related emails, but that's a problem to solve after this one.
Any guidance would be appreciated.
I have two functions:
def f(a, b, c=g(b)):
    blabla

def g(n):
    blabla
c is an optional argument to function f. If the user does not specify its value, the program should compute g(b) and use that as the value of c. But this code doesn't even run; it says name 'b' is not defined. How do I fix that?
Someone suggested:
def g(b):
    blabla

def f(a, b, c=None):
    if c is None:
        c = g(b)
    blabla
But this doesn't work for me, because the user may have intended c to be None, in which case c would silently get another value.
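I did wonder whether a dedicated sentinel object would get around the "user really meant None" problem; something along these lines (just a sketch):

_MISSING = object()   # unique sentinel no caller can pass by accident

def g(b):
    pass  # blabla

def f(a, b, c=_MISSING):
    if c is _MISSING:   # only true when the caller left c out entirely
        c = g(b)
    # blabla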
I am trying to install a package via pip, but there were files missing from the zip file, so I copied them in and compiled with gcc. Now I cannot continue the installation by calling pip install, because pip sees a pre-existing build directory and will not proceed.
This is with pip version 1.5.6, though I thought earlier versions of pip were less fussy about this.
What are the remaining steps to complete the package installation?
Say I've got a matrix that looks like this:
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
How can I print it on separate lines:
[[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]]
and then remove the commas, etc.:
0 0 0 0 0
And also, how can I make the cells blank instead of 0's, so that numbers can be put in later? In the end it would look like:
_ 1 2 _ 1 _ 1
(spaces not underscores)
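Roughly the sort of thing I've been fiddling with, in case it makes the goal clearer (printing each row on its own line, spaces instead of commas, blanks for zeros):

matrix = [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]

for row in matrix:
    # show a space for empty (zero) cells so numbers can be filled in later
    print " ".join(str(cell) if cell != 0 else " " for cell in row)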
Thanks!
Hello everyone. My question is whether we can assign/bind some value to a certain item and hide that value (or do the same thing in another way).
Example: let's say the columns on the ListCtrl are "Name" and "Description":
self.lc = wx.ListCtrl(self, -1, style=wx.LC_REPORT)
self.lc.InsertColumn(0, 'Name')
self.lc.InsertColumn(1, 'Description')
And when I add an item, I want it to show the Name parameter and the description:
num_items = self.lc.GetItemCount()
self.lc.InsertStringItem(num_items, "Randomname")
self.lc.SetStringItem(num_items, 1, "Some description here")
Now what I want to do is basically assign something to that item that is not shown, so I can access it later in the app.
So I would like to attach something that is not shown in the app but belongs to the item, like:
hiddendescription = "Somerandomthing"
Still not clear? Well, let's say I add a button that creates an item, with some TextCtrls to set its parameters, and the TextCtrl parameters are:
"Name"
"Description"
"Hiddendescription"
So the user fills these TextCtrls out and clicks the button to create the item, and I basically want to show only the Name and Description and hide the "Hiddendescription", but keep it around so I can use it later.
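What I've been considering so far is just keeping a plain dict alongside the control, keyed by the row index, for the value I don't want displayed; something like this (a sketch using the same calls as above):

self.hidden = {}   # row index -> hidden description

num_items = self.lc.GetItemCount()
self.lc.InsertStringItem(num_items, "Randomname")
self.lc.SetStringItem(num_items, 1, "Some description here")
self.hidden[num_items] = "Somerandomthing"   # never shown in the UI

# later, e.g. in a selection event handler:
#     row = event.GetIndex()
#     print self.hidden[row]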
Sorry for explaining this more than once in the post, but I want to make sure you understand what I intend to do.
Hello,
I have never worked with web programming, and I've recently been asked to write a web-based application to manage assets and tasks, to be used by more than 900 people.
What are the recommended modules, frameworks, and libraries for this task?
It would also be highly appreciated if you could recommend some books and articles that might help me. Thanks in advance.
I need to open multiple files (2 input and 2 output files), do complex manipulations on the lines from the input files, and then append the results to the 2 output files. I am currently using the following approach:
in_1 = open(input_1)
in_2 = open(input_2)
out_1 = open(output_1, "w")
out_2 = open(output_2, "w")
# Read one line from each 'in_' file
# Do many operations on the DNA sequences included in the input files
# Append one line to each 'out_' file
in_1.close()
in_2.close()
out_1.close()
out_2.close()
The files are huge (each potentially approaching 1 GB), which is why I am reading through the input files one line at a time. I am guessing that this is not a very Pythonic way to do things. :) Would using the following form be good?
with open("file1") as f1:
with open("file2") as f2: # etc.
If yes, could I do it while avoiding the highly indented code that would result? Thanks for the insights!
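In case it helps, the shape I'd ideally like is a single with statement for all four files, which I believe is legal from Python 2.7 onwards (using the same variable names as above):

with open(input_1) as in_1, open(input_2) as in_2, \
     open(output_1, "w") as out_1, open(output_2, "w") as out_2:
    # read one line from each 'in_' file, do the manipulations,
    # then append one line to each 'out_' file
    pass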
I want to verify that the HTML tags present in a source string are also present in a target string.
For example:
>>> source = "<em>Hello</em><label>What's your name</label>"
>>> verify_target('<em>Hi</em><label>My name is Jim</label>')
True
>>> verify_target('<label>My name is Jim</label><em>Hi</em>')
True
>>> verify_target('<em>Hi<label>My name is Jim</label></em>')
False
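What I have so far only checks that the same tags occur the same number of times; it gets the first two examples right, but it would wrongly accept the re-nested third example, which is exactly where I'm stuck:

import re

source = "<em>Hello</em><label>What's your name</label>"

def tags(html):
    # collect opening and closing tag names, ignoring their order
    return sorted(re.findall(r'</?\s*([a-zA-Z0-9]+)', html))

def verify_target(target):
    return tags(target) == tags(source)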
I have an HTML table that I'd like to be able to export to an Excel file. I already have an option to export the table into an IQY file, but I'd prefer something that didn't allow the user to refresh the data via Excel. I just want a feature that takes a snapshot of the table at the time the user clicks the link/button.
I'd prefer it if the feature was a link/button on the HTML page that allows the user to save the query results displayed in the table. Is there a way to do this at all? Or, something I can modify with the IQY?
I can try to provide more details if needed. Thanks in advance.
query = "SELECT * FROM mytable WHERE time=%s", (mytime)
Then, I want to add a limit %s to it. How can I do that without messing up the %s in mytime?
Edit: I want to concatenate query2, which has "LIMIT %s, %s".
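This is roughly what I'm trying to end up with, keeping the SQL and the parameters separate and extending both when the LIMIT is added (offset, row_count, and cursor are placeholders):

query = "SELECT * FROM mytable WHERE time=%s"
params = (mytime,)

query += " LIMIT %s, %s"
params += (offset, row_count)   # hypothetical values for the limit clause

cursor.execute(query, params)   # the driver fills in every %s safely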
Hello. I have an HTML file, and I have to replace all the text between markers like this: [%anytext%]. As I understand it, parsing HTML is very easy with BeautifulSoup, but what should the regular expression be, and how do I remove the text and write the result back?
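A sketch of the pure-regex route I'm considering (no BeautifulSoup needed just for the placeholders); the file name and replacement text are placeholders:

import re

with open("page.html") as f:
    html = f.read()

# replace everything of the form [% ... %], non-greedily
html = re.sub(r"\[%.*?%\]", "REPLACEMENT TEXT", html)

with open("page.html", "w") as f:
    f.write(html)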
Hello, everyone
I cannot find a way to terminate a thread that is hung in a socket.recvfrom() call. For example, Ctrl+C, which should trigger a KeyboardInterrupt exception, can't be caught. Here is the script I've used for testing:
from socket import *
from threading import Thread
import sys

class TestThread(Thread):
    def __init__(self, host="localhost", port=9999):
        self.sock = socket(AF_INET, SOCK_DGRAM)
        self.sock.bind((host, port))
        super(TestThread, self).__init__()

    def run(self):
        while True:
            try:
                recv_data, addr = self.sock.recvfrom(1024)
            except (KeyboardInterrupt, SystemExit):
                sys.exit()

if __name__ == "__main__":
    server_thread = TestThread()
    server_thread.start()
    while True:
        pass
The main thread (the one that executes the infinite loop) exits, but the thread that I explicitly create keeps hanging in recvfrom().
Please, help me resolve this.
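One thing I was wondering about (a sketch only; the stop_requested flag is hypothetical): give the socket a timeout so recvfrom() wakes up periodically and the thread can check whether it should stop.

def run(self):
    self.sock.settimeout(1.0)          # recvfrom() now raises socket.timeout every second
    while not self.stop_requested:     # hypothetical flag set from the main thread
        try:
            recv_data, addr = self.sock.recvfrom(1024)
        except timeout:                # `timeout` comes in via `from socket import *`
            continue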
Is there any way to show why a "try" failed and skipped to "except", without writing out all the possible errors by hand, and without ending the program?
Example:
try:
    1/0
except:
    # some way to show:
    #   Traceback (most recent call last):
    #     File "<pyshell#0>", line 1, in <module>
    #       1/0
    #   ZeroDivisionError: integer division or modulo by zero
    pass
I don't want to do if: print error 1, elif: print error 2, elif: etc. I want to see the error that would have been shown had the try not been there.
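The sort of thing I'm after, in case it makes the question clearer: let the traceback module format whatever actually went wrong (just a sketch):

import traceback

try:
    1/0
except:
    traceback.print_exc()   # prints the same traceback the interpreter would have shown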
I have this small script that sorts the contents of a text file:
# The built-in function `open` opens a file and returns a file object.
# Read mode opens a file for reading only.
try:
    f = open("tracks.txt", "r")
    try:
        # Read the entire contents of a file at once.
        # string = f.read()
        # OR read one line at a time.
        # line = f.readline()
        # OR read all the lines into a list.
        lines = f.readlines()
        lines.sort()
        f = open('tracks.txt', 'w')
        f.writelines(lines)  # Write a sequence of strings to the file
    finally:
        f.close()
except IOError:
    pass
The only problem is that the sorted text ends up at the bottom of the file every time it's sorted.
I assume it also sorts the blank lines... does anybody know why?
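Maybe I need to drop the empty lines before sorting; something like this is what I had in mind (same file name as above):

with open("tracks.txt") as f:
    lines = [line for line in f if line.strip()]   # skip blank lines

lines.sort()

with open("tracks.txt", "w") as f:
    f.writelines(lines)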
Thanks in advance.
Hi, I am working with Scrapy and trying XML feeds for the first time. Below is my code:
class TestxmlItemSpider(XMLFeedSpider):
    name = "TestxmlItem"
    allowed_domains = {"http://www.nasinteractive.com"}
    start_urls = [
        "http://www.nasinteractive.com/jobexport/advance/hcantexasexport.xml"
    ]
    iterator = 'iternodes'
    itertag = 'job'

    def parse_node(self, response, node):
        title = node.select('title/text()').extract()
        job_code = node.select('job-code/text()').extract()
        detail_url = node.select('detail-url/text()').extract()
        category = node.select('job-category/text()').extract()
        print title, ";;;;;;;;;;;;;;;;;;;;;"
        print job_code, ";;;;;;;;;;;;;;;;;;;;;"
        item = TestxmlItem()
        item['title'] = node.select('title/text()').extract()
        # .......
        return item
result:
File "/usr/lib/python2.7/site-packages/Scrapy-0.14.3-py2.7.egg/scrapy/item.py", line 56, in __setitem__
(self.__class__.__name__, key))
exceptions.KeyError: 'TestxmlItem does not support field: title'
In total there are 200+ items, so I need to loop over them and assign the node text to the item.
But all the results are displayed at once when I print; how can I loop over the nodes when scraping XML files with XMLFeedSpider?
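I suspect my item class needs to declare its fields, and that this is what the KeyError is complaining about; is something like this (a sketch of what I think TestxmlItem should look like) the missing piece?

from scrapy.item import Item, Field

class TestxmlItem(Item):
    title = Field()
    job_code = Field()
    detail_url = Field()
    category = Field()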