Search Results

Search found 15224 results on 609 pages for 'parallel python'.


  • Is it possible in SQLAlchemy to filter by a database function or stored procedure?

    - by Rico Suave
    We're using SQLAlchemy in a project with a legacy database. The database has functions/stored procedures. In the past we used raw SQL and could use these functions as filters in our queries; I would like to do the same in SQLAlchemy queries if possible. I have read about @hybrid_property, but some of these functions need one or more parameters. For example, I have a User model that joins to a bunch of historical records. These historical records have a date, a debit field and a credit field, so we can look up a user's balance at a specific point in time by doing a SUM(credit) - SUM(debit) up until the given date. We have a database function for that called dbo.Balance(user_id, date_time). I would like to use this as a criterion in a query, to select only users that have a negative balance at a specific date/time:

        selection = users.filter(
            coalesce(Users.status, 0) == 1,
            coalesce(Users.no_reminders, 0) == 0,
            dbo.pplBalance(Users.user_id, datetime.datetime.now()) < -0.01,
        ).all()

    This is of course a non-working example, just so you get the gist of what I'd like to do. Hybrid properties looked like the solution, but as mentioned above they only work without parameters (they are properties, not methods). Any suggestions on how to implement something like this (if it's even possible) are welcome. Thanks.
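
    A possible approach, sketched below with names (session, Users) assumed from the question: SQLAlchemy's generic func construct accepts dotted names, so the database function can appear directly in the WHERE clause without a hybrid property.

        import datetime
        from sqlalchemy import func

        # Sketch only: func.dbo.Balance(...) renders as "dbo.Balance(user_id, :now)"
        # in the generated SQL, so it works like any other filter expression.
        selection = (
            session.query(Users)
            .filter(
                func.coalesce(Users.status, 0) == 1,
                func.coalesce(Users.no_reminders, 0) == 0,
                func.dbo.Balance(Users.user_id, datetime.datetime.now()) < -0.01,
            )
            .all()
        )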

    Read the article

  • How to quickly parse a list of strings

    - by math
    If I want to split a list of words separated by a delimiter character, I can use:

        >>> 'abc,foo,bar'.split(',')
        ['abc', 'foo', 'bar']

    But how can I easily and quickly do the same thing if I also want to handle quoted strings which can contain the delimiter character? In: 'abc,"a string, with a comma","another, one"' Out: ['abc', 'a string, with a comma', 'another, one'] Related question: How can I parse a comma-delimited string into a list (caveat)?
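
    A minimal sketch using the standard library's csv module, which already understands quoted fields that contain the delimiter:

        import csv

        s = 'abc,"a string, with a comma","another, one"'
        # csv.reader expects an iterable of lines, so wrap the single string in a list.
        fields = next(csv.reader([s], delimiter=',', quotechar='"'))
        print(fields)  # ['abc', 'a string, with a comma', 'another, one']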

    Read the article

  • Get node name with minidom

    - by Alex
    Is it possible to get the name of a node using minidom? For example, I have a node:

        <heading><![CDATA[5 year]]></heading>

    What I'm trying to do is store the name heading so that I can use it as a key in a dictionary. The closest I can get is something like [<DOM Element: heading at 0x11e6d28>]. I'm sure I'm overlooking something very simple here, thanks!
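
    A short sketch, assuming the XML is available as a string: each DOM element exposes its name via tagName (or nodeName), and the CDATA text is reachable through firstChild.data.

        from xml.dom import minidom

        doc = minidom.parseString('<root><heading><![CDATA[5 year]]></heading></root>')
        result = {}
        for node in doc.getElementsByTagName('heading'):
            # tagName is the element's name; firstChild.data is its CDATA/text content.
            result[node.tagName] = node.firstChild.data
        print(result['heading'])  # 5 year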

    Read the article

  • Does dict.update affect a function's argspec?

    - by sbox32
        import inspect

        class Test:
            def test(self, p, d={}):
                d.update(p)
                return d

        print inspect.getargspec(getattr(Test, 'test'))[3]
        print Test().test({'1': True})
        print inspect.getargspec(getattr(Test, 'test'))[3]

    I would expect the argspec for Test.test not to change, but because of dict.update it does. Why?
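
    The default dict is created once, at function definition time, and that same object is what getargspec reports in the defaults tuple; d.update(p) mutates it in place, so the reported default changes. A minimal sketch of the usual fix, using None as a sentinel so each call gets a fresh dict:

        import inspect

        class Test:
            def test(self, p, d=None):
                # Build a new dict per call instead of sharing one default object.
                if d is None:
                    d = {}
                d.update(p)
                return d

        print inspect.getargspec(getattr(Test, 'test'))[3]   # (None,)
        print Test().test({'1': True})                       # {'1': True}
        print inspect.getargspec(getattr(Test, 'test'))[3]   # still (None,)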

    Read the article

  • Filtering SQLAlchemy query on attribute_mapped_collection field of relationship

    - by bsa
    I have two classes, Tag and Hardware, defined with a simple parent-child relationship (see the full definition at the end). Now I want to filter a query on Tag using the version field in Hardware through an attribute_mapped_collection, e.g.:

        def get_tags(order_code=None, hardware_filters=None):
            session = Session()
            query = session.query(Tag)
            if order_code:
                query = query.filter(Tag.order_code == order_code)
            if hardware_filters:
                for k, v in hardware_filters.iteritems():
                    query = query.filter(getattr(Tag.hardware, k).version == v)
            return query.all()

    But I get:

        AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object
        associated with Tag.hardware has an attribute 'baseband'

    The same thing happens if I strip it back by hard-coding the attribute, e.g.:

        query.filter(Tag.hardware.baseband.version == v)

    I can do it this way:

        query = query.filter(Tag.hardware.any(artefact=k, version=v))

    But why can't I filter directly through the attribute?

    Class definitions:

        class Tag(Base):
            __tablename__ = 'tag'

            tag_id = Column(Integer, primary_key=True)
            order_code = Column(String, nullable=False)
            version = Column(String, nullable=False)
            status = Column(String, nullable=False)
            comments = Column(String)

            hardware = relationship(
                "Hardware",
                backref="tag",
                collection_class=attribute_mapped_collection('artefact'),
            )

            __table_args__ = (
                UniqueConstraint('order_code', 'version'),
            )

        class Hardware(Base):
            __tablename__ = 'hardware'

            hardware_id = Column(Integer, primary_key=True)
            tag_id = Column(String, ForeignKey('tag.tag_id'))
            product_id = Column(String, nullable=True)
            artefact = Column(String, nullable=False)
            version = Column(String, nullable=False)
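
    For reference, a hedged sketch of the explicit-join alternative, reusing the models and Session from the question: attribute_mapped_collection only shapes the collection on loaded instances, so class-level filtering has to go through the relationship (any(), as above) or a join.

        from sqlalchemy.orm import aliased

        def get_tags(order_code=None, hardware_filters=None):
            session = Session()
            query = session.query(Tag)
            if order_code:
                query = query.filter(Tag.order_code == order_code)
            for artefact, version in (hardware_filters or {}).iteritems():
                hw = aliased(Hardware)  # one alias per filter so repeated joins don't collide
                query = query.join(hw, Tag.hardware).filter(
                    hw.artefact == artefact,
                    hw.version == version,
                )
            return query.all()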

    Read the article

  • Can pydoc/help() hide the documentation for inherited class methods and attributes?

    - by EOL
    When declaring a class that inherits from a specific class:

        class C(dict):
            added_attribute = 0

    the documentation for class C lists all the methods of dict (either through help(C) or pydoc). Is there a way to hide the inherited methods from the automatically generated documentation (the docstring can refer to the base class for non-overridden methods), or is it impossible? This would be useful: pydoc lists the functions defined in a module after its classes. Thus, when the classes have very long documentation, a lot of less-than-useful information is printed before the new functions provided by the module are presented, which makes the documentation harder to use (you have to skip all the documentation for the inherited methods until you reach something specific to the module being documented).
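
    As far as I know, pydoc and help() offer no switch for this. A hedged workaround sketch, with a hypothetical helper own_members_doc() that summarises only the attributes defined directly on the class:

        import inspect

        class C(dict):
            """A dict with one extra attribute; see dict for inherited methods."""
            added_attribute = 0

        def own_members_doc(cls):
            """Build help text from cls.__dict__ only, skipping inherited members."""
            parts = ['%s: %s' % (cls.__name__, inspect.getdoc(cls) or '')]
            for name, value in sorted(vars(cls).items()):
                if name.startswith('__'):
                    continue
                summary = inspect.getdoc(value) if callable(value) else repr(value)
                parts.append('    %s: %s' % (name, summary))
            return '\n'.join(parts)

        print own_members_doc(C)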

    Read the article

  • Directory and file related questions

    - by kaushik
    I have a directory with around 1000 files, and I want to run the same code for each of these files. My code requires the file name as input; it copies the information from one file into another in a different format. Please suggest a method to process all 1000 files one by one, without needing to change the file name by hand every time. Also, I have a field serial_num which needs to be continuous, i.e. if the first file goes up to 30, then while copying the next file it should continue from 30, not start from 0 again. Suggestions please, thanks.
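
    A minimal sketch, assuming a hypothetical convert(input_path, output_path, start_serial) function standing in for the existing conversion code, which returns the last serial number it wrote:

        import os

        input_dir = 'input_files'        # hypothetical directory names
        output_dir = 'converted_files'

        serial_num = 0
        for name in sorted(os.listdir(input_dir)):
            in_path = os.path.join(input_dir, name)
            out_path = os.path.join(output_dir, name + '.out')
            # convert() is assumed to return the last serial number it used,
            # so numbering continues across files instead of restarting at 0.
            serial_num = convert(in_path, out_path, start_serial=serial_num)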

    Read the article

  • Feedback on availability with Google App Engine

    - by Ron
    We've had some good experiences building an app on Google App Engine. The first app's target audience is Google Apps users, so there are no issues there in terms of it being hosted on Google infrastructure. We like it so much that we would like to investigate using it for another app; however, this next project is for a client who is not really interested in what technology it sits on, they just want it to work, and work all of the time. In this scenario, given that we have the technology applicability and capability side covered, are there any concerns that this stuff is still relatively new and that we may not be as much "in control" as if we had it done with traditional hosting?

    Read the article

  • Twisted: how to bind a server to a specified IP address? (solved)

    - by daccle
    I want to have a Twisted service (started via twistd) which listens for TCP/POST requests on a specified port on a specified IP address. At the moment I have a Twisted application which listens on port 8040 on localhost. It is running fine, but I want it to listen only on a certain IP address, say 10.0.0.78. How do I manage that? This is a snippet of my code:

        application = service.Application('SMS_Inbound')

        smsInbound = resource.Resource()
        smsInbound.putChild('75sms_inbound', ReceiveSMS(application))
        smsInboundServer = internet.TCPServer(8001, webserver.Site(smsInbound))

        smsInboundServer.setName("SMS Handling")
        smsInboundServer.setServiceParent(application)
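
    internet.TCPServer passes extra keyword arguments through to reactor.listenTCP, so binding to one address is an interface= keyword away. A sketch based on the snippet above, reusing its names:

        # Bind only to 10.0.0.78 instead of all interfaces (the default, '').
        smsInboundServer = internet.TCPServer(
            8001,
            webserver.Site(smsInbound),
            interface='10.0.0.78',
        )
        smsInboundServer.setName("SMS Handling")
        smsInboundServer.setServiceParent(application)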

    Read the article

  • Using SQLAlchemy, how can I return a count with multiple columns

    - by Andy
    I am attempting to run a query like this:

        SELECT comment_type_id, name, count(comment_type_id)
        FROM comments, commenttypes
        WHERE comment_type_id = commenttypes.id
        GROUP BY comment_type_id

    Without the join between comments and commenttypes for the name column, I can do this using:

        session.query(
            Comment.comment_type_id,
            func.count(Comment.comment_type_id),
        ).group_by(Comment.comment_type_id).all()

    However, if I try to do something like this, I get incorrect results:

        session.query(
            Comment.comment_type_id,
            Comment.comment_type,
            func.count(Comment.comment_type_id),
        ).group_by(Comment.comment_type_id).all()

    I have two problems with the results:

        (1, False, 82920)
        (2, False, 588)
        (3, False, 4278)
        (4, False, 104370)

    The False is not correct, and the counts are wrong. My expected results are:

        (1, 'Comment Type 1', 13820)
        (2, 'Comment Type 2', 98)
        (3, 'Comment Type 2', 713)
        (4, 'Comment Type 2', 17395)

    How can I adjust my command to pull the correct name value and the correct count?
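
    A sketch of one way to do it, assuming a CommentType model mapped to the commenttypes table with id and name columns (names guessed from the SQL): join the two tables explicitly and group by every non-aggregate column you select.

        from sqlalchemy import func

        results = (
            session.query(
                Comment.comment_type_id,
                CommentType.name,
                func.count(Comment.comment_type_id),
            )
            .join(CommentType, Comment.comment_type_id == CommentType.id)
            .group_by(Comment.comment_type_id, CommentType.name)
            .all()
        )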

    Read the article

  • Twitter API post rate limit

    - by Xavier
    Does anyone know Twitter's rate limit on posting? Looking at their web page, they claim not to have one, but I get an exception thrown if my program posts too fast. Any help is appreciated.

    Read the article

  • Sleeping a thread blocking stdin

    - by Sid
    Hey, I'm running a function which evaluates commands passed in using stdin, and another function which runs a bunch of jobs. I need to make the latter function sleep at regular intervals, but that seems to be blocking stdin. Any advice on how to resolve this would be appreciated. The source code for the functions is:

        def runJobs(comps, jobQueue, numRunning, limit, lock):
            while len(jobQueue) >= 0:
                print(len(jobQueue))
                if len(jobQueue) > 0:
                    comp, tasks = find_computer(comps, 0)
                    # do something
                time.sleep(5)

        def manageStdin():
            print "Global Stdin Begins Now"
            for line in fileinput.input():
                try:
                    print(eval(line))
                except Exception, e:
                    print e

    Thanks
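
    A minimal sketch of one way to keep the two loops from blocking each other, reusing the functions and variables above: run the job loop in a daemon thread, so time.sleep() only pauses that thread while the main thread keeps reading stdin.

        import threading

        # The arguments are stand-ins for whatever runJobs actually needs.
        worker = threading.Thread(
            target=runJobs,
            args=(comps, jobQueue, numRunning, limit, lock),
        )
        worker.daemon = True   # don't keep the process alive once stdin handling ends
        worker.start()

        manageStdin()          # main thread stays responsive on stdin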

    Read the article

  • Element-wise lookup from one ndarray into another ndarray of a different shape

    - by fahhean
    Hi, I am new to numpy. I wonder, is there a way to do a lookup between two ndarrays of different shapes? For example, I have two ndarrays as below:

        X = array([[0, 3, 6],
                   [3, 3, 3],
                   [6, 0, 3]])

        Y = array([[0, 100],
                   [3, 500],
                   [6, 800]])

    I would like to look up each element of X in the first column of Y, and return the corresponding value from the second column of Y:

        Z = array([[100, 500, 800],
                   [500, 500, 500],
                   [800, 100, 500]])

    Thanks, fahhean
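
    A short sketch, assuming Y's first column is sorted and covers every value that appears in X: np.searchsorted maps each element of X to its row index in Y, and fancy indexing then pulls out the second column.

        import numpy as np

        X = np.array([[0, 3, 6], [3, 3, 3], [6, 0, 3]])
        Y = np.array([[0, 100], [3, 500], [6, 800]])

        # For every element of X, find the row of Y whose key (first column) matches,
        # then take that row's value (second column). Z has the same shape as X.
        idx = np.searchsorted(Y[:, 0], X)
        Z = Y[idx, 1]
        print(Z)
        # [[100 500 800]
        #  [500 500 500]
        #  [800 100 500]]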

    Read the article

  • How to check if a file is a text file?

    - by daniels
    I have a folder full of files, and I want to search for a string inside them. The issue is that some files may be zip, exe, ogg, etc. Can I somehow check what kind of file each one is, so that I only open and search through txt, php, etc. files? I can't rely on the file extension.
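
    A hedged heuristic sketch, roughly what grep-like tools do: treat a file as binary if a sample of it contains NUL bytes or too many non-printable characters.

        import string

        TEXT_CHARS = set(string.printable)

        def looks_like_text(path, sample_size=1024):
            """Rough heuristic: no NUL bytes and mostly printable characters."""
            with open(path, 'rb') as f:
                sample = f.read(sample_size)
            if not sample:
                return True              # empty files count as text
            if '\x00' in sample:
                return False             # NUL bytes almost always mean binary
            printable = sum(1 for c in sample if c in TEXT_CHARS)
            return printable / float(len(sample)) > 0.95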

    Read the article

  • Should I use a class for this: reading an XML file using lxml?

    - by PulpFiction
    Hi everyone. This question is a continuation of my previous question, in which I asked about passing around an ElementTree. I only need to read the XML files, and to solve this I decided to create a global ElementTree and then use it wherever required. My question is: is this an acceptable practice? I heard global variables are bad. If I don't make it global, it was suggested that I make a class, but do I really need to create a class? What benefits would I get from that approach? Note that I would be handling only one ElementTree instance per run, and the operations are read-only. If I don't use a class, how and where do I declare that ElementTree so that it is available globally? (Note that I would be importing this module.) Please answer this question bearing in mind that I am a beginner to development, and at this stage I can't figure out whether to use a class or just go with the functional-style programming approach.
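
    A minimal sketch of the module-as-singleton approach, with hypothetical names (an xmlconfig module and a get_tree() helper): callers import the function instead of touching a global directly, which keeps a single shared ElementTree without needing a class.

        # xmlconfig.py -- hypothetical module name
        from lxml import etree

        _tree = None   # module-level cache; the file is parsed at most once per run

        def get_tree(path='config.xml'):
            """Parse the XML file on first use and return the same tree afterwards."""
            global _tree
            if _tree is None:
                _tree = etree.parse(path)
            return _tree

        # elsewhere:
        #   from xmlconfig import get_tree
        #   root = get_tree().getroot()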

    Read the article

  • AppEngine: Can I write a Dynamic property (db.Expando) with a name chosen at runtime?

    - by MarcoB
    If I have an entity derived from db.Expando, I can write a dynamic property by just assigning a value to a new property, e.g. "y" in this example:

        class MyEntity(db.Expando):
            x = db.IntegerProperty()

        my_entity = MyEntity(x=1)
        my_entity.y = 2

    But suppose I have the name of the dynamic property in a variable. How can I (1) read and write to it, and (2) check if the dynamic variable exists in the entity's instance? E.g.:

        class MyEntity(db.Expando):
            x = db.IntegerProperty()

        my_entity = MyEntity(x=1)

        # choose a var name:
        var_name = "z"

        # assign a value to the dynamic variable whose name is in var_name:
        my_entity.property_by_name[var_name] = 2

        # also, check if such a property exists
        if my_entity.property_exists(var_name):
            # read the value of the dynamic property whose name is in var_name
            print my_entity.property_by_name[var_name]

    Thanks...
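
    A sketch using Python's built-in attribute functions, with db.Expando as in the question (property_by_name and property_exists above are placeholders, not real APIs):

        class MyEntity(db.Expando):
            x = db.IntegerProperty()

        my_entity = MyEntity(x=1)
        var_name = "z"

        setattr(my_entity, var_name, 2)          # write the dynamic property by name

        if hasattr(my_entity, var_name):         # check whether it exists
            print getattr(my_entity, var_name)   # read it back; prints 2

        # my_entity.dynamic_properties() lists the names of all dynamic properties.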

    Read the article

  • Own params to PeriodicTask run() method in Celery

    - by Alex Isayko
    Hello all! I am writing a small Django application, and for each model object I need to be able to create a periodic task which is executed at a certain interval. I'm using Celery for this, but there is one thing I can't understand:

        class ProcessQueryTask(PeriodicTask):
            run_every = timedelta(minutes=1)

            def run(self, query_task_pk, **kwargs):
                logging.info('Process celery task for QueryTask %d' % query_task_pk)
                task = QueryTask.objects.get(pk=query_task_pk)
                task.exec_task()
                return True

    Then I do the following:

        >>> from tasks.tasks import ProcessQueryTask
        >>> result1 = ProcessQueryTask.delay(query_task_pk=1)
        >>> result2 = ProcessQueryTask.delay(query_task_pk=2)

    The first call succeeds, but the subsequent periodic calls return the error TypeError: run() takes exactly 2 non-keyword arguments (1 given) in the celeryd server. So, can I pass my own params to PeriodicTask's run()? Thanks!
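
    A hedged sketch of the usual workaround (class name ProcessAllQueryTasks is hypothetical; PeriodicTask, timedelta, logging and QueryTask as in the question): the beat scheduler invokes a periodic task with no positional arguments, so the task cannot require query_task_pk; instead, one periodic task can loop over the model objects and run the per-object work itself.

        class ProcessAllQueryTasks(PeriodicTask):
            run_every = timedelta(minutes=1)

            def run(self, **kwargs):
                # The scheduler calls run() without positional args, so fetch the
                # objects here instead of expecting a primary key to be passed in.
                for query_task in QueryTask.objects.all():
                    logging.info('Process celery task for QueryTask %d' % query_task.pk)
                    query_task.exec_task()
                return True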

    Read the article

  • Keep PyGTK Button from Resizing on Label Change

    - by Cap
    I'm working on a PyGTK app with some buttons that, when clicked, show a text entry dialog and then set the text on the button to whatever was entered in the box. The problem is that if the text is longer than the button can show, the button changes size to accommodate it. How do I keep GTK buttons from resizing when the text changes?
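
    A small sketch of one common fix: give the button a fixed size request and ellipsize its label, so longer text is truncated instead of growing the widget (the 120x30 size is arbitrary).

        import gtk
        import pango

        button = gtk.Button("Click me")
        button.set_size_request(120, 30)           # freeze the button's requested size

        label = button.get_children()[0]           # the button's internal gtk.Label
        label.set_ellipsize(pango.ELLIPSIZE_END)   # show "long te..." instead of resizing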

    Read the article

  • SQL select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids of a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as:

        SELECT ... --complicated stuff
        WHERE ... --more stuff
        AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    That is, I construct a huge IN clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
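
    A hedged sketch of the temporary-table alternative, using the raw DB-API cursor mentioned in the question (table and column names are assumptions): load the IDs into a temp table once, then let PostgreSQL join against it instead of parsing a huge IN list.

        # cursor: the DB-API cursor from the question; ids: the list of Foo ids.
        cursor.execute("CREATE TEMPORARY TABLE foo_ids (id integer PRIMARY KEY)")
        cursor.executemany("INSERT INTO foo_ids (id) VALUES (%s)", [(i,) for i in ids])

        cursor.execute("""
            SELECT foo.*              -- complicated stuff goes here
            FROM foo
            JOIN foo_ids ON foo_ids.id = foo.id
            -- more stuff (the complicated WHERE clauses) goes here
        """)
        rows = cursor.fetchall()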

    Read the article

  • Pickled my dictionary from ZODB, but the copy is smaller in size?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs', so I open the root dictionary of 'database_1.fs' and pickle.dump it into a file. Then I pickle.load it into a dictionary variable, and finally I update the root dictionary of the other 'database_2.fs' with that dictionary variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of 'database_2.fs', even though they are copies of each other.

        def openstorage(store):
            # opens the database
            data = {}
            data['file'] = FileStorage(store)
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):
            # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then this is what I do:

        import pickle

        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)

        loc2 = 'G:\\database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)

        >>> len(root1)
        215
        >>> len(root2)
        0

        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))  # now item is a dictionary
        root2.update(item)

        closestorage(op1)
        closestorage(op2)

        # after I open both of the databases
        # I get the same keys in both databases,
        # but database_2.fs is smaller than database_1.fs in size.

        >>> len(root2) == len(root1) == 215  # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both of them have the same length and the same indexes.
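
    A likely explanation, hedged: ZODB's FileStorage is append-only and keeps old object revisions, so a long-lived database_1.fs carries history, while pickle/update only copies the current state into database_2.fs. A quick way to test that theory is to pack the original and compare sizes again (db.pack() is standard ZODB; the wiring reuses openstorage/closestorage from above):

        import os

        op1 = openstorage('G:\\database_1.fs')
        op1['db'].pack()   # drop old object revisions kept by the append-only storage
        closestorage(op1)

        print os.path.getsize('G:\\database_1.fs')
        print os.path.getsize('G:\\database_2.fs')   # sizes should now be much closer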

    Read the article
