Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.


  • Getting unpredictable data into a tabular format

    - by Acorn
    The situation: each page I scrape has <input> elements with a title= and a value=, and I don't know what is going to be on the page. I want to have all my collected data in a single table at the end, with a column for each title. So basically, I need each row of data to line up with all the others, and if a row doesn't have a certain element, then it should be blank (but there must be something there to keep the alignment).

    For example, the first page has:

        {animal: cat, colour: blue, fruit: lemon, day: monday}

    The second page has:

        {animal: fish, colour: green, day: saturday}

    The third page has:

        {animal: dog, number: 10, colour: yellow, fruit: mango, day: tuesday}

    Then my resulting table should be:

        animal | number | colour | fruit | day
        cat    | none   | blue   | lemon | monday
        fish   | none   | green  | none  | saturday
        dog    | 10     | yellow | mango | tuesday

    It would also be good to keep the order of the title/value pairs, which I know dictionaries won't do. So basically, I need to generate columns from all the titles (kept in order but somehow merged together). What would be the best way of going about this without knowing all the possible titles and explicitly specifying an order for the values to be put in?
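
    A minimal sketch of one approach (the sample data is taken from the question; nothing here is from the original post): keep each page's (title, value) pairs as an ordered list, build the union of titles in first-seen order, then emit one aligned row per page.

        pages = [
            [('animal', 'cat'), ('colour', 'blue'), ('fruit', 'lemon'), ('day', 'monday')],
            [('animal', 'fish'), ('colour', 'green'), ('day', 'saturday')],
            [('animal', 'dog'), ('number', '10'), ('colour', 'yellow'), ('fruit', 'mango'), ('day', 'tuesday')],
        ]

        columns = []                      # merged titles, kept in first-seen order
        for page in pages:
            for title, _ in page:
                if title not in columns:
                    columns.append(title)

        print(' | '.join(columns))
        for page in pages:
            row = dict(page)              # lookup table for this page only
            print(' | '.join(row.get(title, 'none') for title in columns))

    Missing titles simply fall back to 'none', so every row stays aligned with the merged header.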

  • Programmatically sync the db in Django

    - by Attila Oláh
    I'm trying to sync my db from a view, something like this:

        from django import http
        from django.core import management

        def syncdb(request):
            management.call_command('syncdb')
            return http.HttpResponse('Database synced.')

    The issue is that it will block the dev server by asking for user input from the terminal. How can I pass it the '--noinput' option to prevent it from asking me anything? I have other ways of marking users as superusers, so there's no need for the user input, but I really need to call syncdb (and flush) programmatically, without logging on to the server via ssh. Any help is appreciated.
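
    A hedged sketch of one way out: call_command() generally accepts a command's options as keyword arguments, and --noinput corresponds to interactive=False (worth verifying against the Django version in use).

        from django import http
        from django.core import management

        def syncdb(request):
            # interactive=False is the keyword form of --noinput, so the
            # command should not prompt on the terminal
            management.call_command('syncdb', interactive=False)
            management.call_command('flush', interactive=False)
            return http.HttpResponse('Database synced.')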

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar.

    My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
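
    A rough, untested sketch of the association-object pattern the linked docs describe: map the property_values table to its own class so the value column can be written, instead of relation(..., secondary=...), which only fills in the two foreign keys. The PropertyValue class and the composite primary_key are assumptions layered on top of the question's schema.

        class PropertyValue(object):
            """One (word, property, value) row in property_values."""
            def __init__(self, property, value):
                self.property = property
                self.value = value

        mapper(PropertyValue, property_values,
               primary_key=[property_values.c.word_id, property_values.c.property_id],
               properties={'property': relation(Property)})

        mapper(Word, words, properties={
            'property_values': relation(PropertyValue, cascade='all, delete-orphan')
        })

        # usage sketch
        word = Word('foo', 42)
        word.property_values.append(PropertyValue(Property('bar'), 'yes'))
        s.add(word)
        s.commit()

    Dictionary-style access like word['bar'] = 'yes' can then be layered on top with an association proxy keyed by the property name, as described on the documentation page cited in the question.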

  • How to retrieve view of MultiIndex DataFrame

    - by Henry S. Harrison
    This question was inspired by this question. I had the same problem, updating a MultiIndex DataFrame by selection. The drop_level=False solution in Pandas 0.13 will allow me to achieve the same result, but I am still wondering why I cannot get a view from the MultiIndex DataFrame. In other words, why does this not work?

        >>> sat = d.xs('sat', level='day', copy=False)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python27\lib\site-packages\pandas\core\frame.py", line 2248, in xs
            raise ValueError('Cannot retrieve view (copy=False)')
        ValueError: Cannot retrieve view (copy=False)

    Of course it could be only because it is not implemented, but is there a reason? Is it somehow ambiguous or impossible to implement? Returning a view is more intuitive to me than returning a copy and then later updating the original. I looked through the source, and it seems this situation is checked explicitly in order to raise an error. Alternatively, is it possible to get the same sort of view from any of the other indexing methods? I've experimented but have not been successful.

    [edit] Some potential implementations are discussed here. I guess with the last question above I'm wondering what the current best solution is for indexing into arbitrary MultiIndex slices and cross-sections.
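
    As a hedged aside, one way to avoid needing a view at all is to write through the original frame with .loc and a mask over the relevant index level; a small sketch with made-up data:

        import pandas as pd

        idx = pd.MultiIndex.from_tuples(
            [(1, 'sat'), (1, 'sun'), (2, 'sat'), (2, 'sun')], names=['id', 'day'])
        d = pd.DataFrame({'value': [1.0, 2.0, 3.0, 4.0]}, index=idx)

        # boolean mask over the 'day' level, applied to the original frame
        mask = d.index.get_level_values('day') == 'sat'
        d.loc[mask, 'value'] = 99.0
        print(d)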

  • mod_wsgi daemon mode vs threaded fastcgi

    - by t0ster
    Can someone explain the difference between Apache mod_wsgi in daemon mode and Django FastCGI in threaded mode? They both use threads for concurrency, I think. Suppose that I'm using nginx as a front end to Apache mod_wsgi. UPDATE: I'm comparing Django's built-in FastCGI (./manage.py method=threaded maxchildren=15) and mod_wsgi in 'daemon' mode (WSGIDaemonProcess example threads=15). They both use threads and acquire the GIL, am I right?

  • Help calling a class from a class above it

    - by wtzolt
    Hello, how do I call from class oneThread back to class fun? That is, how do I address a class written below it? Is it possible?

        class oneThread(threading.Thread):
            def __init__(self):
                threading.Thread.__init__(self)
                self.start()

            def run(self):
                print "1"
                time.sleep(1)
                print "2"
                time.sleep(1)
                print "3"
                self.wTree.get_widget("entryResult").set_text("Done with One.")
                # How to call from here back to class fun, which of course is below...?

        class fun:
            wTree = None

            def __init__(self):
                self.wTree = gtk.glade.XML("main.glade")
                self.wTree.signal_autoconnect({"on_buttonOne": self.one})
                gtk.main()

            def one(self, widget):
                oneThread()

        gtk.gdk.threads_init()
        do = fun()
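
    A hedged sketch of the usual pattern: hand the thread a reference to the fun instance, and push GUI updates back through gobject.idle_add so they run in the GTK main loop (the on_done method is an assumed addition, not part of the original code).

        import threading
        import time
        import gobject

        class oneThread(threading.Thread):
            def __init__(self, owner):
                threading.Thread.__init__(self)
                self.owner = owner              # the fun instance
                self.start()

            def run(self):
                for n in ("1", "2", "3"):
                    print n
                    time.sleep(1)
                # schedule the GUI update on the main loop, not on this thread
                gobject.idle_add(self.owner.on_done, "Done with One.")

        # inside class fun:
        #     def one(self, widget):
        #         oneThread(self)               # pass the instance down
        #     def on_done(self, text):
        #         self.wTree.get_widget("entryResult").set_text(text)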

  • How do I add a new object with suds?

    - by Jerome
    I'm trying to use suds but have so far been unsuccessful at figuring this out. Hopefully it's something simple that I'm missing. Any help would be highly appreciated. This is supposed to be the raw SOAP message that I need to achieve:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:api="http://api.service.apimember.soapservice.com/">
          <soapenv:Header/>
          <soapenv:Body>
            <api:insertOrUpdateMemberByObj>
              <token>t67GFCygjhkjyUy8y9hkjhlkjhuii</token>
              <member>
                <dynContent>
                  <entry>
                    <key>FIRSTNAME</key>
                    <value>hhhhbbbbb</value>
                  </entry>
                </dynContent>
                <email>[email protected]</email>
              </member>
            </api:insertOrUpdateMemberByObj>
          </soapenv:Body>
        </soapenv:Envelope>

    So I use suds to create the member object:

        member = client.factory.create('member')

    which produces:

        (apiMember){
          attributes = (attributes){
            entry[] = <empty>
          }
        }

    How exactly do I append an 'entry'? I try this:

        member.attributes.entry.append({'key':'FIRSTNAME','value':'test'})

    and that produces this:

        (apiMember){
          attributes = (attributes){
            entry[] = {
              value = "test"
              key = "FIRSTNAME"
            },
          }
        }

    However, what I actually need is:

        (apiMember){
          attributes = (attributes){
            entry[] = (entry){
              value = "test"
              key = "FIRSTNAME"
            },
          }
        }

    How do I achieve this?
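
    A hedged guess at the fix: build the entry with the factory as well, so suds knows its schema type instead of serialising a plain dict (the exact type name to pass to create() depends on the WSDL; 'entry' here is an assumption).

        member = client.factory.create('member')
        entry = client.factory.create('entry')
        entry.key = 'FIRSTNAME'
        entry.value = 'test'
        member.attributes.entry.append(entry)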

  • How to speed up the code?

    - by kaushik
    I have a very large program, about 600 lines plus; I can't post the whole thing here. But a particular code snippet is taking so much time that it is causing problems. I post that part of the code here; please tell me how to speed up the processing, and suggest which parts may be the reason and how to improve them, if this small piece of code is understandable on its own.

        using_data={}

        def join_cost(a , b):
            global using_data
            #print a
            #print b
            save_a=[]
            save_b=[]
            print 1
            #for i in range(len(m)):
            #if str(m[i][0])==str(a):
            save_a=database_index[a]
            #for i in range(len(m)):
            #    if str(m[i][0])==str(b):
            #print 'save_a',save_a
            #print 'save_b',save_b
            print 2
            save_b=database_index[b]
            using_data[save_a[0]]=save_a
            s=str(save_a[1]).replace('phone','text')
            s=str(s)+'.pm'
            p=os.path.join("c:/begpython/wavnk/",s)
            x=open(p , 'r')
            print 3
            for i in range(6):
                x.readline()
            k2='a'
            j=0
            o=[]
            while k2 is not '':
                k2=x.readline()
                k2=k2.rstrip('\n')
                oj=k2.split(' ')
                o=o+[oj]
                #print o[j]
                j=j+1
            #print j
            #print o[2][0]
            temp=long(1232332)
            end_time=save_a[4]
            #print end_time
            k=(j-1)
            for i in range(k):
                diff=float(o[i][0])-float(end_time)
                if diff<0:
                    diff=diff*(-1)
                if temp>diff:
                    temp=diff
                    pm_row=i
            #print pm_row
            #print temp
            #print o[pm_row]
            #pm_row=3
            q=[]
            print 4
            l=str(p).replace('.pm','.mcep')
            z=open(l ,'r')
            for i in range(pm_row):
                z.readline()
            k3=z.readline()
            k3=k3.rstrip('\n')
            q=k3.split(' ')
            #print q
            print 5
            s=str(save_b[1]).replace('phone','text')
            s=str(s)+'.pm'
            p=os.path.join("c:/begpython/wavnk/",s)
            x=open(p , 'r')
            for i in range(6):
                x.readline()
            k2='a'
            j=0
            o=[]
            while k2 is not '':
                k2=x.readline()
                k2=k2.rstrip('\n')
                oj=k2.split(' ')
                o=o+[oj]
                #print o[j]
                j=j+1
            #print j
            #print o[2][0]
            temp=long(1232332)
            strt_time=save_b[3]
            #print strt_time
            k=(j-1)
            for i in range(k):
                diff=float(o[i][0])-float(strt_time)
                if diff<0:
                    diff=diff*(-1)
                if temp>diff:
                    temp=diff
                    pm_row=i
            #print pm_row
            #print temp
            #print o[pm_row]
            #pm_row=3
            w=[]
            l=str(p).replace('.pm','.mcep')
            z=open(l ,'r')
            for i in range(pm_row):
                z.readline()
            k3=z.readline()
            k3=k3.rstrip('\n')
            w=k3.split(' ')
            #print w
            cost=0
            for i in range(12):
                #print q[i]
                #print w[i]
                h=float(q[i])-float(w[i])
                cost=cost+math.pow(h,2)
            j_cost=math.sqrt(cost)
            #print cost
            return j_cost

        def target_cost(a , b):
            a=(b+1)*3
            b=(a+1)*2
            t_cost=(a+b)*5/2
            return t_cost

        r1='shht:ra_77'
        r2='grx_18'
        g=[]
        nodes=[]
        nodes=nodes+[[r1]]
        for i in range(len(y_in_db_format)):
            g=y_in_db_format[i]
            #print g
            #print g[0]
            g.remove(str(g[0]))
            nodes=nodes+[g]
        nodes=nodes+[[r2]]
        print nodes
        print "lenght of nodes",len(nodes)
        lists=[]
        #lists=lists+[r1]
        for i in range(len(nodes)):
            for j in range(len(nodes[i])):
                lists=lists+[nodes[i][j]]
        #lists=lists+[r2]
        print lists
        distance={}
        for i in range(len(lists)):
            if i==0:
                distance[str(lists[i])]=0
            else:
                distance[str(lists[i])]=long(123231223)
        #print distance
        group_dist=[]
        infinity=long(123232323)
        for i in range(len(nodes)):
            distances=[]
            for j in range(len(nodes[i])):
                #distances=[]
                if i==0:
                    distances=distances+[[nodes[i][j], 0]]
                else:
                    distances=distances+[[nodes[i][j],infinity]]
            group_dist=group_dist+[distances]
            #print distances
        print "group_distances",group_dist
        #print "check",group_dist[0][0][1]
        #costs={}
        #for i in range(len(lists)):
        #if i==0:
        #    costs[str(lists[i])]=1
        #else:
        #    costs[str(lists[i])]=get_selfcost(lists[i])
        path=[]
        for i in range(len(nodes)):
            mini=[]
            if i!=(len(nodes)-1):
                #temp=long(123234324)
                #Now calculate the cost between the current node and each of its neighbour
                for k in range(len(nodes[(i+1)])):
                    for j in range(len(nodes[i])):
                        current=nodes[i][j]
                        #print "current_node",current
                        j_distance=join_cost( current , nodes[i+1][k])
                        #t_distance=target_cost( current , nodes[i+1][k])
                        t_distance=34
                        #print distance
                        #print "distance between current and neighbours",distance
                        total_distance=(.5*(float(group_dist[i][j][1])+float(j_distance))+.5*(float(t_distance)))
                        #print "total distance between the intial_nodes and current neighbour",total_distance
                        if int(group_dist[i+1][k][1]) > int(total_distance):
                            group_dist[i+1][k][1]=total_distance
                            #print "updated distance",group_dist[i+1][k][1]
                            a=current
                            #print "the neighbour",nodes[i+1][k],"updated the value",a
                            mini=mini+[[str(nodes[i+1][k]),a]]
                print mini
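
    A hedged observation with a small sketch (helper names are mine): join_cost() re-opens and re-parses the same .pm and .mcep files on every call, so memoising the parsed rows per path, and letting min() find the closest timestamp, usually removes most of the cost.

        _pm_cache = {}

        def load_rows(path):
            """Parse a .pm file once, skip its 6-line header, and memoise the rows."""
            if path not in _pm_cache:
                with open(path) as f:
                    for _ in range(6):
                        f.readline()
                    _pm_cache[path] = [line.rstrip('\n').split(' ')
                                       for line in f if line.strip()]
            return _pm_cache[path]

        def closest_row(path, target_time):
            """Index of the row whose first field is nearest to target_time."""
            rows = load_rows(path)
            return min(range(len(rows)),
                       key=lambda i: abs(float(rows[i][0]) - float(target_time)))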

  • How to speed up the code?

    - by kaushik
    In my program I have a method which requires about 4 files to be open each time it is called, as I need to read some data from them. All the data from the files I have been storing in lists for manipulation. I need to call this method approximately 10,000 times, which is making my program very slow. Are there better ways of handling these files, and is storing the whole data in lists time consuming? What are better alternatives to lists? I can give some code, but my previous question was closed because the code alone only confused everyone; it is part of a big program and would need to be explained completely to be understood. So I am not giving any code here; please suggest approaches, treating this as a general question. Thanks in advance.

  • How to reload Django models without losing my locals in an interactive session?

    - by Gj
    I'm doing some research with an interactive shell and using a Django app (shell_plus) for storing data and browsing it using the convenient admin. Occasionally I add or change some of the app models, and run a syncdb (or South migration when changing a model). The changes to the models don't take effect in my interactive session even if I re-import the app models. Thus I'm forced to restart the shell_plus and lose my precious locals() in the process. Is there any way to reload the models during a session? Thanks!!
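
    A hedged sketch, with the usual caveat that Django caches model classes and existing instances keep pointing at the old ones, so results can be surprising; the app name below is a placeholder.

        import myapp.models                  # hypothetical app module
        reload(myapp.models)                 # Python 2 builtin; re-executes the module in place
        from myapp.models import MyModel     # re-bind the names used in the session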

  • Convert object to DateRange

    - by user655832
    I'm querying an underlying PostgreSQL database using Pandas 0.8. Pandas is returning the DataFrame properly, but the underlying timestamp column in my database is being returned as a generic "object" type in Pandas. As I would eventually like to do seasonal normalization of my data, I am curious how to convert this generic "object" column to something that is appropriate for analysis. Here is my current code to retrieve the data:

        # get records from db example
        import pandas.io.sql as psql
        import psycopg2

        # define query to get all subs created this year
        QRY = """
        select
            i i,
            i * random() f,
            case when random() > 0.5 then true else false end t,
            (current_date - (i*random())::int)::timestamp with time zone tsz
        from generate_series(1,1000) as s(i)
        order by 4
        ;
        """

        CONN_STRING = "host='localhost' port=5432 dbname='postgres' user='postgres'"

        # connect to db
        conn = psycopg2.connect(CONN_STRING)

        # get some data set index on relid column
        df = psql.frame_query(QRY, con=conn)
        print "Row count retrieved: %i" % (len(df),)

    Thanks for any help you can render. M
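
    A hedged sketch of the conversion step: parse the object column into datetime64 values and use it as the index, so resampling and seasonal adjustments work on a proper time series (the column name 'tsz' comes from the query above; to_datetime may require a reasonably recent pandas).

        import pandas as pd

        df['tsz'] = pd.to_datetime(df['tsz'])    # object -> datetime64
        df = df.set_index('tsz')                 # DatetimeIndex for resampling
        print(df.index.dtype)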

  • What is the Simplest Possible Payment Gateway to Implement? (using Django)

    - by b14ck
    I'm developing a web application that will require users to either make one time deposits of money into their account, or allow users to sign up for recurring billing each month for a certain amount of money. I've been looking at various payment gateways, but most (if not all) of them seem complex and difficult to get working. I also see no real active Django projects which offer simple views for making payments. Ideally, I'd like to use something like Amazon FPS, so that I can see online transaction logs, refund money, etc., but I'm open to other things. I just want the EASIEST possible payment gateway to integrate with my site. I'm not looking for anything fancy, whatever does the job, and requires < 10 hours to get working from start to finish would be perfect. I'll give answer points to whoever can point out a good one. Thanks!

  • sqlite3 'database is locked' won't go away with retries

    - by Azarias
    I have a sqlite3 database that is accessed by a few threads (3-4). I am aware of the general limitations of sqlite3 with regard to concurrency, as stated at http://www.sqlite.org/faq.html#q6, but I am convinced that is not the problem. All of the threads both read and write from this database. Whenever I do a write, I have the following construct:

        try:
            Cursor.execute(q, params)
            Connection.commit()
        except sqlite3.IntegrityError:
            Notify
        except sqlite3.OperationalError:
            print sys.exc_info()
            print("DATABASE LOCKED; sleeping for 3 seconds and trying again")
            time.sleep(3)
            Retry

    On some runs, I won't even hit this block, but when I do, it never comes out of it (it keeps retrying, but I keep getting the 'database is locked' error from exc_info). If I understand the reader/writer lock usage correctly, some amount of waiting should help with the contention. What this sounds like is deadlock, but I do not use any transactions in my code, and every SELECT or INSERT is simply a one-off. Some threads, however, keep the same connection when they do their operations (which include a mix of SELECTs and INSERTs and other modifiers). I would appreciate it if you could shed some light on this, and also suggest ways of fixing it (besides using a different database engine).
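
    A hedged sketch of one common arrangement: give each thread its own connection and let sqlite3 wait for the lock itself via the timeout parameter, rather than sleeping and retrying around a shared connection that may still hold an open read cursor.

        import sqlite3
        import threading

        _local = threading.local()

        def get_conn(path='app.db'):
            if not hasattr(_local, 'conn'):
                # timeout: seconds sqlite waits for a lock before raising
                _local.conn = sqlite3.connect(path, timeout=30)
            return _local.conn

        def write(q, params):
            conn = get_conn()
            with conn:                  # commits on success, rolls back on error
                conn.execute(q, params)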

  • How to repeatedly show a Dialog with PyGTK / Gtkbuilder?

    - by Julian
    I have created a PyGTK application that shows a Dialog when the user presses a button. The dialog is loaded in my __init__ method with:

        builder = gtk.Builder()
        builder.add_from_file("filename")
        builder.connect_signals(self)
        self.myDialog = builder.get_object("dialog_name")

    In the event handler, the dialog is shown with the command self.myDialog.run(), but this only works once, because after run() the dialog is automatically destroyed. If I click the button a second time, the application crashes. I read that there is a way to use show() instead of run() where the dialog is not destroyed, but I feel like this is not the right way for me, because I would like the dialog to behave modally and to return control to the code only after the user has closed it. Is there a simple way to repeatedly show a dialog using the run() method and GtkBuilder? I tried reloading the whole dialog using the builder, but that did not really seem to work; the dialog was missing all child elements (and I would prefer to use the builder only once, at the beginning of the program).

    [SOLUTION] As pointed out by the answer below, using hide() does the trick. But one has to take care that the dialog is in fact destroyed if one does not catch its "delete-event". A simple example that works is:

        import pygtk
        import gtk

        class DialogTest:
            def rundialog(self, widget, data=None):
                self.dia.show_all()
                result = self.dia.run()

            def destroy(self, widget, data=None):
                gtk.main_quit()

            def closedialog(self, widget, data=None):
                self.dia.hide()
                return True

            def __init__(self):
                self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
                self.window.connect("destroy", self.destroy)
                self.dia = gtk.Dialog('TEST DIALOG', self.window,
                                      gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
                self.dia.vbox.pack_start(gtk.Label('This is just a Test'))
                self.dia.connect("delete-event", self.closedialog)
                self.button = gtk.Button("Run Dialog")
                self.button.connect("clicked", self.rundialog, None)
                self.window.add(self.button)
                self.button.show()
                self.window.show()

        if __name__ == "__main__":
            testApp = DialogTest()
            gtk.main()

  • How to make Universal Feed Parser only parse feeds?

    - by piquadrat
    I'm trying to get content from external feeds on my Django web site with Universal Feed Parser. I want to have some user error handling, e.g. if the user supplies a URL that is not a feed. When I tried out how feedparser responds to faulty input, I was surprised to see that feedparser does not throw any Exceptions at all. E.g. on HTML content it tries to parse some information from the HTML code, and on non-existing domains it returns a mostly empty dictionary:

        {'bozo': 1,
         'bozo_exception': URLError(gaierror(-2, 'Name or service not known'),),
         'encoding': 'utf-8',
         'entries': [],
         'feed': {},
         'version': None}

    Other faulty inputs manifest themselves in the status_code or the namespaces values in the returned dictionary. So, what's the best approach to have sane error checking without resorting to an endless cascade of if .. elif .. elif ...?
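
    A hedged sketch of a compact check: the returned dictionary carries version (empty when nothing was recognised as a feed) and, when something went wrong, bozo_exception, so those two fields cover most of the user-facing failure cases.

        import feedparser

        def fetch_feed(url):
            parsed = feedparser.parse(url)
            # version is empty/None when the content is not a recognised feed format
            if not parsed.version:
                detail = parsed.get('bozo_exception', 'unrecognised feed format')
                raise ValueError("%r is not a usable feed: %s" % (url, detail))
            return parsed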

  • SQLAlchemy declarative syntax with autoload in Pylons

    - by Juliusz Gonera
    I would like to use autoload to use an existing database. I know how to do it without declarative syntax (model/__init__.py):

        def init_model(engine):
            """Call me before using any of the tables or classes in the model"""
            t_events = Table('events', Base.metadata, schema='events',
                             autoload=True, autoload_with=engine)
            orm.mapper(Event, t_events)
            Session.configure(bind=engine)

        class Event(object):
            pass

    This works fine, but I would like to use declarative syntax:

        class Event(Base):
            __tablename__ = 'events'
            __table_args__ = {'schema': 'events', 'autoload': True}

    Unfortunately, this way I get:

        sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's MetaData.
        Pass an engine to the Table via autoload_with=<someengine>, or associate the
        MetaData with an engine via metadata.bind=<someengine>

    The problem here is that I don't know where to get the engine from (to use it in autoload_with) at the stage of importing the model (it's available in init_model()). I tried adding meta.Base.metadata.bind(engine) to environment.py but it doesn't work. Has anyone found an elegant solution?
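
    A hedged workaround sketch: bind the declarative metadata to the engine inside init_model() (note that bind is assigned, not called) and declare the reflected class only after that, so autoload has an engine available.

        def init_model(engine):
            Base.metadata.bind = engine          # assignment, not Base.metadata.bind(engine)

            global Event
            class Event(Base):
                __tablename__ = 'events'
                __table_args__ = {'schema': 'events', 'autoload': True}

            Session.configure(bind=engine)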

  • grabbing a substring while scraping with Python2.6

    - by Diego
    Hey, can someone help with the following? I'm trying to scrape a site that has the following information. I need to pull just the number after the </strong> tag.

        [<li><strong>ISBN-13:</strong> 9780375853401</li>,
         <li><strong>Pub. Date: </strong> 05/11/2010</li>]

        [<li><strong>UPC:</strong> 490355000372</li>,
         <li><strong>Catalog No:</strong> 15024/25</li>,
         <li><strong>Label:</strong> CAMERATA</li>]

    Here's a piece of the code I've been using to grab the above data with mechanize and BeautifulSoup. I'm stuck here, as it won't let me use the find() function on a list.

        br_results = mechanize.urlopen(br_results)
        html = br_results.read()
        soup = BeautifulSoup(html)
        local_links = soup.findAll("a", {"class" : "down-arrow csa"})
        upc_code = soup.findAll("ul", {"class" : "bc-meta3"})
        for upc in upc_code:
            upc_text = upc.contents.contents
            print upc_text
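
    A hedged sketch in the same BeautifulSoup 3 / Python 2 style as the question: walk each matched <ul>, and for every <li> take the text node that follows the <strong> label.

        for ul in soup.findAll("ul", {"class": "bc-meta3"}):
            for li in ul.findAll("li"):
                if li.strong is not None and li.strong.nextSibling is not None:
                    label = li.strong.string                  # e.g. u'ISBN-13:'
                    value = str(li.strong.nextSibling).strip()
                    print label, value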

  • Django on App Engine

    - by aks
    I am impressed with Django. I am currently a Java developer. I want to make some cool websites for myself, but I want to host them in some third-party environment. Now the question is: can I host a Django application on App Engine? If yes, how? Are there any sites built using Django which are already hosted on App Engine?

  • How can I log in to a website with Python?

    - by Shady
    How can I do it? I was trying to fetch a specific link (with urllib), but to do that, I need to log in first. I have this source from the site:

        <form id="login-form" action="auth/login" method="post">
          <div>
            <!--label for="rememberme">Remember me</label><input type="checkbox" class="remember" checked="checked" name="remember me" /-->
            <label for="email" id="email-label" class="no-js">Email</label>
            <input id="email-email" type="text" name="handle" value="" autocomplete="off" />
            <label for="combination" id="combo-label" class="no-js">Combination</label>
            <input id="password-clear" type="text" value="Combination" autocomplete="off" />
            <input id="password-password" type="password" name="password" value="" autocomplete="off" />
            <input id="sumbitLogin" class="signin" type="submit" value="Sign In" />

    Is it possible?
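
    A hedged sketch using the standard library of that era (urllib2 plus cookielib): POST the form fields named above ('handle' and 'password') to the form's action URL and keep the cookie jar around so later requests stay logged in. The base URL and credentials are placeholders.

        import cookielib
        import urllib
        import urllib2

        cookies = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))

        form_data = urllib.urlencode({
            'handle': 'you@example.com',
            'password': 'your-combination',
        })
        response = opener.open('http://example.com/auth/login', form_data)

        # the same opener now carries the session cookie
        page = opener.open('http://example.com/some/protected/page').read()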
