Search Results

Search found 16680 results on 668 pages for 'python datetime'.


  • How do I code this relationship in SQLAlchemy?

    - by Martin Del Vecchio
    I am new to SQLAlchemy (and SQL, for that matter). I can't figure out how to code the idea I have in my head. I am creating a database of performance-test results. A test run consists of a test type and a number (this is class TestRun below). A test suite consists of the version string of the software being tested, and one or more TestRun objects (this is class TestSuite below). A test version consists of all test suites with the given version name. Here is my code, as simple as I can make it:

        from sqlalchemy import *
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import relationship, backref, sessionmaker

        Base = declarative_base()

        class TestVersion(Base):
            __tablename__ = 'versions'
            id = Column(Integer, primary_key=True)
            version_name = Column(String)

            def __init__(self, version_name):
                self.version_name = version_name

        class TestRun(Base):
            __tablename__ = 'runs'
            id = Column(Integer, primary_key=True)
            suite_directory = Column(String, ForeignKey('suites.directory'))
            suite = relationship('TestSuite', backref=backref('runs', order_by=id))
            test_type = Column(String)
            rate = Column(Integer)

            def __init__(self, test_type, rate):
                self.test_type = test_type
                self.rate = rate

        class TestSuite(Base):
            __tablename__ = 'suites'
            directory = Column(String, primary_key=True)
            version_id = Column(Integer, ForeignKey('versions.id'))
            version_ref = relationship('TestVersion', backref=backref('suites', order_by=directory))
            version_name = Column(String)

            def __init__(self, directory, version_name):
                self.directory = directory
                self.version_name = version_name

        # Create a v1.0 suite
        suite1 = TestSuite('dir1', 'v1.0')
        suite1.runs.append(TestRun('test1', 100))
        suite1.runs.append(TestRun('test2', 200))

        # Create another v1.0 suite
        suite2 = TestSuite('dir2', 'v1.0')
        suite2.runs.append(TestRun('test1', 101))
        suite2.runs.append(TestRun('test2', 201))

        # Create another suite
        suite3 = TestSuite('dir3', 'v2.0')
        suite3.runs.append(TestRun('test1', 102))
        suite3.runs.append(TestRun('test2', 202))

        # Create the in-memory database
        engine = create_engine('sqlite://')
        Session = sessionmaker(bind=engine)
        session = Session()
        Base.metadata.create_all(engine)

        # Add the suites in
        version1 = TestVersion(suite1.version_name)
        version1.suites.append(suite1)
        session.add(suite1)

        version2 = TestVersion(suite2.version_name)
        version2.suites.append(suite2)
        session.add(suite2)

        version3 = TestVersion(suite3.version_name)
        version3.suites.append(suite3)
        session.add(suite3)

        session.commit()

        # Query the suites
        for suite in session.query(TestSuite).order_by(TestSuite.directory):
            print "\nSuite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len(suite.runs))
            for run in suite.runs:
                print "  Test '%s', result %d" % (run.test_type, run.rate)

        # Query the versions
        for version in session.query(TestVersion).order_by(TestVersion.version_name):
            print "\nVersion %s has %d test suites:" % (version.version_name, len(version.suites))
            for suite in version.suites:
                print "  Suite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len(suite.runs))
                for run in suite.runs:
                    print "    Test '%s', result %d" % (run.test_type, run.rate)

    The output of this program:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 1 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200

        Version v1.0 has 1 test suites:
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This is not correct, since there are two TestVersion objects with the name 'v1.0'. I hacked my way around this by adding a private list of TestVersion objects, and a function to find a matching one:

        versions = []

        def find_or_create_version(version_name):
            # Find existing
            for version in versions:
                if version.version_name == version_name:
                    return version
            # Create new
            version = TestVersion(version_name)
            versions.append(version)
            return version

    Then I modified my code that adds the records to use it:

        # Add the suites in
        version1 = find_or_create_version(suite1.version_name)
        version1.suites.append(suite1)
        session.add(suite1)

        version2 = find_or_create_version(suite2.version_name)
        version2.suites.append(suite2)
        session.add(suite2)

        version3 = find_or_create_version(suite3.version_name)
        version3.suites.append(suite3)
        session.add(suite3)

    Now the output is what I want:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 2 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This feels wrong to me; it doesn't feel right that I am manually keeping track of the unique version names, and manually adding the suites to the appropriate TestVersion objects. Is this code even close to being correct? And what happens when I'm not building the entire database from scratch, as in this example? If the database already exists, do I have to query the database's TestVersion table to discover the unique version names? Thanks in advance. I know this is a lot of code to wade through, and I appreciate the help.
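    One way to avoid hand-maintaining that list is to ask the session (and through it, the database) for an existing TestVersion before creating one. Below is a minimal, untested sketch of that idea; the helper name get_or_create_version is invented here, and it assumes the mapped classes above are unchanged:

        def get_or_create_version(session, version_name):
            # Look for an existing row first. With the default autoflush behaviour this also
            # sees TestVersion objects added earlier in the same session.
            version = session.query(TestVersion).filter_by(version_name=version_name).first()
            if version is None:
                version = TestVersion(version_name)
                session.add(version)
            return version

        version1 = get_or_create_version(session, suite1.version_name)
        version1.suites.append(suite1)
        session.add(suite1)

    Because the lookup goes through the session, the same pattern covers the "what if the database already exists" case: the query simply finds the previously committed TestVersion row instead of creating a duplicate.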


  • Django and json request

    - by Hulk
    In a template I have the following code:

        <script>
        var url = "/mypjt/my_timer";
        $.post(url, paramarr, function callbackHandler(dict) {
            alert('got response back');
            if (dict.flag == 2) {
                alert('1');
                $.jGrowl("Data could not be saved");
            }
            else if (dict.ret_status == 1) {
                alert('2');
                $.jGrowl("Data saved successfully");
                window.location = "/mypjt/display/" + dict.rid;
            }
        }, "json");
        </script>

    In views I have the following code:

        def my_timer(request):
            dict={}
            try:
                a = timer.objects.get(pk=1)
                dict({'flag':1})
                return HttpResponse(simplejson.dumps(dict), mimetype='application/javascript')
            except:
                dict({'flag':1})
                return HttpResponse(simplejson.dumps(dict), mimetype='application/javascript')

    My question is: since we are making a JSON request, in the try block, after setting the flag, can't we return a page directly, as

        return render_to_response('mypjt/display.html', context_instance=RequestContext(request, {'dict': dict}))

    instead of sending the JSON response? On success the HTML page redirects again anyway. Also, only if there is an exception would we then return the JSON response. My only concern is that the interaction between client and server should be minimal. Thanks..
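    For what it's worth, one way to keep the round trips down without changing the client's "json" dataType is to put everything the page needs, including the redirect target, into the JSON itself. This is only a sketch of that idea, reusing the names from the question (timer, rid, ret_status are taken from the snippets above, not verified against the real project):

        from django.http import HttpResponse
        from django.utils import simplejson

        def my_timer(request):
            payload = {}
            try:
                a = timer.objects.get(pk=1)
                payload['ret_status'] = 1
                payload['rid'] = a.pk          # client builds /mypjt/display/<rid> from this
            except timer.DoesNotExist:
                payload['flag'] = 2            # client shows "Data could not be saved"
            return HttpResponse(simplejson.dumps(payload), mimetype='application/json')

    Returning render_to_response(...) from the same view would hand back HTML, which the $.post(..., "json") callback would refuse to parse, so keeping the response as JSON and letting the client follow the URL is usually the smaller change.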


  • Is django orm & templates thread safe?

    - by Piotr Czapla
    I'm using the Django ORM and templates to create a background service that is run as a management command. Do you know if Django is thread safe? I'd like to use threads to speed up processing. The processing is blocked by I/O, not CPU, so I don't care about the performance hit caused by the GIL.


  • How to set the size of a wx.aui.AuiManager Pane that is centered?

    - by aF
    Hello, I have three panes with the InfoPane center option. I want to know how to set their size. Using this code:

        import wx
        import wx.aui

        class MyFrame(wx.Frame):
            def __init__(self, parent, id=-1, title='wx.aui Test',
                         pos=wx.DefaultPosition, size=(800, 600),
                         style=wx.DEFAULT_FRAME_STYLE):
                wx.Frame.__init__(self, parent, id, title, pos, size, style)
                self._mgr = wx.aui.AuiManager(self)

                # create several text controls
                text1 = wx.TextCtrl(self, -1, 'Pane 1 - sample text', wx.DefaultPosition, wx.Size(200,150), wx.NO_BORDER | wx.TE_MULTILINE)
                text2 = wx.TextCtrl(self, -1, 'Pane 2 - sample text', wx.DefaultPosition, wx.Size(200,150), wx.NO_BORDER | wx.TE_MULTILINE)
                text3 = wx.TextCtrl(self, -1, 'Main content window', wx.DefaultPosition, wx.Size(200,150), wx.NO_BORDER | wx.TE_MULTILINE)

                # add the panes to the manager
                self._mgr.AddPane(text1, wx.CENTER)
                self._mgr.AddPane(text2, wx.CENTER)
                self._mgr.AddPane(text3, wx.CENTER)

                # tell the manager to 'commit' all the changes just made
                self._mgr.Update()

                self.Bind(wx.EVT_CLOSE, self.OnClose)

            def OnClose(self, event):
                # deinitialize the frame manager
                self._mgr.UnInit()
                # delete the frame
                self.Destroy()

        app = wx.App()
        frame = MyFrame(None)
        frame.Show()
        app.MainLoop()

    I want to know what is called when we change the size of the panes. If you tell me that, I can do the rest by myself :)
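    In case it helps frame the question: pane geometry in wx.aui is normally described through a wx.aui.AuiPaneInfo object passed to AddPane, and the manager reads its BestSize/MinSize hints (together with the sash positions the user drags) when laying panes out. A small, untested variation of the code above along those lines; the concrete sizes are placeholders, and whether BestSize alone is enough for centered panes may depend on the wxPython version:

        self._mgr.AddPane(text1, wx.aui.AuiPaneInfo().Center().BestSize(wx.Size(400, 300)))
        self._mgr.AddPane(text2, wx.aui.AuiPaneInfo().Center().BestSize(wx.Size(200, 300)))
        self._mgr.AddPane(text3, wx.aui.AuiPaneInfo().Center().BestSize(wx.Size(200, 150)))
        self._mgr.Update()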


  • TypeError: 'NoneType' object does not support item assignment

    - by R S John
    I am trying to do some mathematical calculation according to the values at particular indices of a NumPy array with the following code:

        X = np.arange(9).reshape(3,3)
        temp = X.copy().fill(5.446361E-01)
        ind = np.where(X < 4.0)
        temp[ind] = 0.5*X[ind]**2 - 1.0
        ind = np.where(X >= 4.0 and X < 9.0)
        temp[ind] = (5.699327E-1*(X[ind]-1)**4)/(X[ind]**4)
        print temp

    But I am getting the following error:

        Traceback (most recent call last):
          File "test.py", line 7, in <module>
            temp[ind] = 0.5*X[ind]**2 - 1.0
        TypeError: 'NoneType' object does not support item assignment

    Would you please help me in solving this? Thanks
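    The error comes from the second line of that snippet rather than from the indexing: ndarray.fill() modifies the array in place and returns None, so temp ends up bound to None. A sketch of the usual fix, which also replaces the Python "and" (which raises a "truth value of an array is ambiguous" error on arrays) with an elementwise "&":

        import numpy as np

        X = np.arange(9).reshape(3, 3).astype(float)   # float, so assignments aren't truncated
        temp = X.copy()
        temp.fill(5.446361e-01)                        # fill() works in place and returns None

        ind = np.where(X < 4.0)
        temp[ind] = 0.5 * X[ind]**2 - 1.0

        ind = np.where((X >= 4.0) & (X < 9.0))         # elementwise & instead of Python's "and"
        temp[ind] = (5.699327e-1 * (X[ind] - 1)**4) / (X[ind]**4)

        print temp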


  • Huge amount of time sending data with suds and proxy

    - by Roman
    Hi everyone, I have the following code to send data through a proxy using suds:

        import suds

        t = suds.transport.http.HttpTransport()
        proxy = urllib2.ProxyHandler({'http': 'http://192.168.3.217:3128'})
        opener = urllib2.build_opener(proxy)
        t.urlopener = opener
        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL', transport=t)

        req = ws.factory.create('ActionRequest.request')
        req.SerialNumber = 'asdf'
        req.HostName = 'hola'
        res = ws.service.ActionRequest(req)

    I don't know why, but it can spend 2 or 3 minutes sending data, or even more, and it sometimes raises a "Gateway timeout" exception. If I don't use the proxy, it takes about 2 seconds or less. Here is the SOAP reply:

        (ActionResponse){
           Id = None
           Action = "Action.None"
           Objects = ""
         }

    The proxy works fine with other requests through urllib2, or with normal web browsers like Firefox. Does anyone have any idea what's happening here with suds? Thanks a lot in advance!!!


  • Loading url with cyrillic symbols

    - by Ockonal
    Hi guys, I have to load some URLs with cyrillic symbols. My script should work with this:

        http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/

    If I open it in a browser, the escapes are shown as normal symbols, but the urllib code fails with a 404 error. How do I decode this URL correctly? When I use that URL directly in code, like address = 'that address', it works perfectly. But I obtained this URL by parsing a page. I have a list of URLs which contain cyrillic. Maybe they have incorrect encoding? Here is more code:

        requestData = urllib2.Request( %SOME_ADDRESS%, None, {"User-Agent": user_agent})
        requestHandler = pageHandler.open(requestData)
        pageData = requestHandler.read().decode('utf-8')

        soupHandler = BeautifulSoup(pageData)
        topicLinks = []
        for postBlock in soupHandler.findAll('a', href=re.compile('%SOME_REGEXP%')):
            topicLinks.append(postBlock['href'])

        postAddress = choice(topicLinks)
        postRequestData = urllib2.Request(postAddress, None, {"User-Agent": user_agent})
        postHandler = pageHandler.open(postRequestData)
        postData = postHandler.read()

        File "/usr/lib/python2.6/urllib2.py", line 518, in http_error_default
          raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        urllib2.HTTPError: HTTP Error 404: Not Found
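    A guess at what is going on, sketched rather than verified against the actual site: because pageData was decoded to unicode, the scraped href comes back as a unicode string containing raw cyrillic characters, and urllib2 needs it percent-encoded (as in the working address above) before the request is made. urllib.quote can do that re-encoding; the helper name and base URL below are placeholders:

        import urllib
        import urlparse

        def encode_cyrillic_url(href, base='http://wincode.org/'):
            # Resolve a possibly relative link, then percent-encode any non-ASCII
            # characters; ':/%?=&' are left untouched so existing escapes survive.
            absolute = urlparse.urljoin(base, href)
            return urllib.quote(absolute.encode('utf-8'), safe=':/%?=&')

        postAddress = encode_cyrillic_url(choice(topicLinks))

    The relevant piece is quote(href.encode('utf-8'), safe=...) applied before handing the address to urllib2.Request.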


  • How can I set the key 'blob-key' for the Blobstore?

    - by pyleaf
    I use the jQuery plugin "uploadify" to upload multiple files to my app (GAE), and then save them with the blobstore, but it fails. I debugged the code into get_uploads; it seems field.type_options is empty and so has no 'blob-key'. Q: where does the key 'blob-key' come from? Thank you!
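    For context, and as an untested sketch rather than an uploadify-specific fix: the blob-key type option only appears when the form is POSTed to a one-time URL generated by blobstore.create_upload_url(); App Engine then stores the file and rewrites the request so the upload handler sees a blob-key instead of the raw bytes. Posting to an ordinary URL (which is what an upload plugin does unless told otherwise) leaves type_options empty. Roughly:

        from google.appengine.ext import blobstore, webapp
        from google.appengine.ext.webapp import blobstore_handlers

        class UploadUrlHandler(webapp.RequestHandler):
            def get(self):
                # Hand this URL to the client-side uploader before each upload.
                self.response.out.write(blobstore.create_upload_url('/upload'))

        class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
            def post(self):
                # 'Filedata' is uploadify's default field name -- an assumption here.
                blob_info = self.get_uploads('Filedata')[0]
                self.response.out.write(str(blob_info.key()))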


  • SQLAlchemy unsupported type error - and table design issues?

    - by Az
    Hi there, back again with some more SQLAlchemy shenanigans. Let me step through this. My table is now set up as so:

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()
        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('preferences', Integer),
            Column('allocated_rank', Integer),
            Column('allocated_project', Integer)
        )
        metadata.create_all(engine)
        mapper(Student, students_table)

    Fairly simple, and for the most part I've been enjoying the ability to query almost any bit of information I want, provided I avoid the error cases below. The class it is mapped from is:

        class Student(object):
            def __init__(self, sid, name):
                self.sid = sid
                self.name = name
                self.preferences = collections.defaultdict(set)
                self.allocated_project = None
                self.allocated_rank = 0

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "%s %s" % (self.sid, self.name)

    Explanation: preferences is basically a set of all the projects the student would prefer to be assigned. When the allocation algorithm kicks in, a student's allocated_project emerges from this preference set. Now if I try to do this:

        for student in students.itervalues():
            session.add(student)
        session.commit()

    it throws two errors, one for the allocated_project column (seen below) and a similar error for the preferences column:

        sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 4 - probably unsupported type.
        u'INSERT INTO studs (sid, name, allocated_rank, allocated_project) VALUES (?, ?, ?, ?, ?, ?, ?)'
        [1101, 'Muffett,M.', 1, 888 Human-spider relationships (Supervisor id: 123)]

    If I go back into my code I find that, when I'm copying the preferences from the given text files, it actually refers to the Project class, which is mapped to a dictionary using the unique project ids (pid) as keys. Thus, as I iterate through each student via their rank and to the preferences set, it adds not a project id, but the reference to the project from the projects dictionary:

        students[sid].preferences[int(rank)].add(projects[int(pid)])

    Now this is very useful to me, since I can find out all I want to about a student's preferred projects without having to run another check to pull up information about the project id. The form you see in the error is the object's print representation:

        return "%s %s (Supervisor id: %s)" % (self.proj_id, self.proj_name, self.proj_sup)

    My questions are: I'm trying to store an object in a database field, aren't I? Would the correct way then be copying the project information (project id, name, etc.) into its own table, referenced by the unique project id? That way I can just have the project id field in the student table be an integer id and, when I need more information, join the tables? So on and so forth for other tables? If the above makes sense, then how does one maintain the relationship with a column of information in one table which is a key index on another table? Does this boil down into a database design problem? Are there any other elegant ways of accomplishing this? Apologies if this is a very long-winded question. It's rather crucial for me to solve this, so I've tried to explain as much as I can, whilst attempting to show that I'm trying (key word here sadly) to understand what could be going wrong.
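    A sketch of the "own table plus join" idea from the question, in the same classical-mapping style; the table and column names here are assumptions layered on top of the snippets above, not a drop-in fix:

        from sqlalchemy import Table, Column, Integer, String, ForeignKey
        from sqlalchemy.orm import mapper, relationship

        projects_table = Table('projects', metadata,
            Column('pid', Integer, primary_key=True),
            Column('name', String),
            Column('supervisor', Integer)
        )

        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('allocated_rank', Integer),
            # store only the foreign key, not the Project object itself
            Column('allocated_project_id', Integer, ForeignKey('projects.pid'))
        )

        mapper(Project, projects_table)
        mapper(Student, students_table, properties={
            # SQLAlchemy keeps allocated_project_id in sync; in Python,
            # student.allocated_project is still a real Project object.
            'allocated_project': relationship(Project)
        })

    With that in place the mapped column holds plain integers (which SQLite can bind), while the ORM still hands back Project instances, so the convenient "no extra lookup" access pattern survives. The preferences structure (a dict of sets keyed by rank) would need its own association table if it also has to be persisted.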


  • Django: Summing values

    - by Anry
    I have two models - Project and Cost:

        class Project(models.Model):
            title = models.CharField(max_length=150)
            url = models.URLField()
            manager = models.ForeignKey(User)

        class Cost(models.Model):
            project = models.ForeignKey(Project)
            cost = models.FloatField()
            date = models.DateField()

    I must return the sum of costs for each project. view.py:

        from mypm.costs.models import Project, Cost
        from django.shortcuts import render_to_response
        from django.db.models import Avg, Sum

        def index(request):
            #...
            return render_to_response('index.html',...

    How?
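    One way to get a per-project total in a single query is to annotate the Project queryset with a Sum over the reverse foreign key. A short sketch based on the models above (the template name and context key are just placeholders):

        from django.db.models import Sum
        from django.shortcuts import render_to_response
        from mypm.costs.models import Project

        def index(request):
            # Each Project object gains a .total_cost attribute (None if it has no costs).
            projects = Project.objects.annotate(total_cost=Sum('cost__cost'))
            return render_to_response('index.html', {'projects': projects})

    In the template, {{ project.total_cost }} then prints the summed value for each project.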


  • Trouble with encoding and urllib

    - by Ockonal
    Hello, I'm loading a web page using urllib. There are russian symbols in it, and the page encoding is 'utf-8'.

    1.

        pageData = unicode(requestHandler.read()).decode('utf-8')

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 262: ordinal not in range(128)

    2.

        pageData = requestHandler.read()
        soupHandler = BeautifulSoup(pageData)

        print soupHandler.findAll(...)

        UnicodeEncodeError: 'ascii' codec can't encode characters in position 340-345: ordinal not in range(128)
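    Reading the two tracebacks side by side, a plausible (though unverified against the actual page) explanation: the first error happens because unicode(...) is applied before .decode('utf-8'), so Python attempts an implicit ASCII decode of the raw bytes; the second happens on output, when print tries to encode unicode results to ASCII. A sketch of both fixes:

        # decode the bytes first, without the wrapping unicode() call
        pageData = requestHandler.read().decode('utf-8')
        soupHandler = BeautifulSoup(pageData)

        # encode explicitly when printing, since print otherwise falls back to ASCII
        for link in soupHandler.findAll('a'):
            text = link.string or u''
            print text.encode('utf-8')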


  • pysvn client.log() returning empty dictionary

    - by nashr rafeeg
    I have the following script that I am using to get the log messages from svn:

        import pysvn

        class svncheck():
            def __init__(self, svn_root="http://10.11.25.3/svn/Moodle/modules", svn_user=None, svn_password=None):
                self.user = svn_user
                self.password = svn_password
                self.root = svn_root

            def diffrence(self):
                client = pysvn.Client()
                client.commit_info_style = 1
                client.callback_notify = self.notify
                client.callback_get_login = self.credentials

                log = client.log(
                    self.root,
                    revision_start=pysvn.Revision(pysvn.opt_revision_kind.number, 0),
                    revision_end=pysvn.Revision(pysvn.opt_revision_kind.number, 5829),
                    discover_changed_paths=True,
                    strict_node_history=True,
                    limit=0,
                    include_merged_revisions=False,
                )
                print log

            def notify(event_dict):
                print event_dict
                return

            def credentials(realm, username, may_save):
                return True, self.user, self.password, True

        s = svncheck()
        s.diffrence()

    When I run this script it returns a list of empty dictionary objects:

        [<PysvnLog ''>, <PysvnLog ''>, <PysvnLog ''>, ...

    Any idea what I am doing wrong here? I am using pysvn version 1.7.2 built against svn version 1.6.5. Cheers, Nash


  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using scrapy to scrape items from a site, and I'm not able to implement this scraping pattern. The site I'm trying to scrape is a forum, and I scrape it once a day. Each page has a table containing posts. New posts are added to the top of the table, and as more and more posts are posted to the site, the older posts move further into the pages due to pagination. This is a very simple scenario, and we will assume that the order of the posts never changes. I would like to scrape this site and collect all the "new" records until the last scraped post from yesterday is encountered. I have configured my spider to paginate endlessly, and when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)
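    A minimal sketch of the stop-at-last-seen idea with a plain Scrapy spider; the selectors, field names, and the way the last scraped id is loaded are all invented for illustration, and django-dynamic-scraper would wire this up differently:

        import scrapy
        from scrapy.exceptions import CloseSpider

        class ForumSpider(scrapy.Spider):
            name = 'forum'
            start_urls = ['http://example.com/forum?page=1']

            def __init__(self, last_seen_id=None, *args, **kwargs):
                super(ForumSpider, self).__init__(*args, **kwargs)
                self.last_seen_id = last_seen_id   # e.g. loaded from the Django database

            def parse(self, response):
                for row in response.xpath('//table[@id="posts"]//tr'):
                    post_id = row.xpath('./@data-post-id').extract_first()
                    if post_id == self.last_seen_id:
                        raise CloseSpider('reached the last post scraped yesterday')
                    yield {'post_id': post_id}
                next_page = response.xpath('//a[@rel="next"]/@href').extract_first()
                if next_page:
                    yield scrapy.Request(response.urljoin(next_page), callback=self.parse)

    Because posts never change order, stopping the spider as soon as yesterday's last post appears guarantees everything newer has already been yielded.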


  • Polynomial fitting with log log plot

    - by viral parekh
    I have a simple problem: fit a straight line on a log-log scale. My code is:

        data = loadtxt(filename)
        xdata = data[:,0]
        ydata = data[:,1]
        polycoeffs = scipy.polyfit(xdata, ydata, 1)
        yfit = scipy.polyval(polycoeffs, xdata)
        pylab.plot(xdata, ydata, 'k.')
        pylab.plot(xdata, yfit, 'r-')

    Now I need to plot the fit line on a log scale, so I just change the x and y axes:

        ax.set_yscale('log')
        ax.set_xscale('log')

    Then it does not plot the correct fit line. So how can I change the fit function (in log scale) so that it can plot the fit line on a log-log scale? Thanks -Viral
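    If a straight line on log-log axes is what is wanted, the usual trick is to fit the polynomial to log10 of the data and transform back, since a power law y = a*x^k is linear in log space. A sketch (assuming all data values are positive):

        import numpy as np
        import scipy
        import pylab

        logx = np.log10(xdata)
        logy = np.log10(ydata)
        coeffs = scipy.polyfit(logx, logy, 1)          # slope = k, intercept = log10(a)
        yfit = 10.0 ** scipy.polyval(coeffs, logx)     # back to linear units

        ax = pylab.gca()
        ax.plot(xdata, ydata, 'k.')
        ax.plot(xdata, yfit, 'r-')
        ax.set_xscale('log')
        ax.set_yscale('log')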


  • the error "invalid literal for int() with base 10:" keeps coming up

    - by ratce003
    I'm trying to write a very simple program. I want to print out the sum of all the multiples of 3 and 5 below 100, but an error keeps occurring, saying "invalid literal for int() with base 10:". My program is as follows:

        sum = ""
        sum_int = int(sum)
        for i in range(1, 101):
            if i % 5 == 0:
                sum += i
            elif i % 3 == 0:
                sum += i
            else:
                sum += ""
        print sum

    Any help would be much appreciated.
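    The reported error is raised on the second line: int("") cannot turn an empty string into a number, which is exactly what "invalid literal for int() with base 10:" means, and the later sum += i would fail anyway because it mixes a string with an integer. A sketch of the same program with the accumulator kept as an integer from the start (note that range(1, 101) covers numbers up to 100, not below 100, so the bound may also want adjusting):

        total = 0                     # renamed so the built-in sum() isn't shadowed
        for i in range(1, 100):       # multiples of 3 or 5 below 100
            if i % 5 == 0 or i % 3 == 0:
                total += i
        print total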


  • Content-Length header not returned from Pylons response

    - by Evgeny
    I'm still struggling to stream a file to the HTTP response in Pylons. In addition to the original problem, I'm finding that I cannot return the Content-Length header, so that for large files the client cannot estimate how long the download will take. I've tried

        response.content_length = 12345

    and I've tried

        response.headers['Content-Length'] = 12345

    In both cases the HTTP response (viewed in Fiddler) simply does not contain the Content-Length header. How do I get Pylons to return this header? (Oh, and if you have any ideas on making it stream the file, please reply to the original question - I'm all out of ideas there.)


  • Import boto from local library

    - by ensnare
    I'm trying to use boto as a downloaded library, rather than installing it globally on my machine. I'm able to import boto, but when I run boto.connect_dynamodb() I get an error:

        ImportError: No module named dynamodb.layer2

    Here's my file structure:

        project/
            project/
                __init__.py
                libraries/
                    __init__.py
                    flask/
                    boto/
                views/
                    ....
                modules/
                    __init__.py
                    db.py
                    ....
                templates/
                    ....
                static/
                    ....
            runserver.py

    And the contents of the relevant file, project/project/modules/db.py:

        from project.libraries import boto

        conn = boto.connect_dynamodb(
            aws_access_key_id='<YOUR_AWS_KEY_ID>',
            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')

    What am I doing wrong? Thanks in advance.
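    A hedged guess at the cause, with a sketch of one workaround: connect_dynamodb() imports boto.dynamodb.layer2 lazily with an absolute import, so whatever satisfies "import boto" at that moment must be a DynamoDB-aware copy of boto. Importing the vendored copy as project.libraries.boto does not make it the top-level "boto", so the lazy import can fall back to an older system-wide boto (or fail outright). Putting the libraries directory at the front of sys.path keeps the vendored copy importable under its real name; paths below assume the layout shown above:

        # project/project/modules/db.py
        import os
        import sys

        LIB_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'libraries')
        sys.path.insert(0, LIB_DIR)

        import boto  # now resolves to the vendored copy, under its real package name

        conn = boto.connect_dynamodb(
            aws_access_key_id='<YOUR_AWS_KEY_ID>',
            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')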


  • Django: Determining if a user has voted or not

    - by TheLizardKing
    I have a long list of links that I spit out using the code below - total votes, submitted by, the usual stuff - but I am not 100% sure how to determine whether the currently logged in user has voted on a link or not. I know how to do this from within my view, but do I need to alter my view code below, or can I make use of the way templates work to determine it? I have read http://stackoverflow.com/questions/1528583/django-vote-up-down-method but I don't quite understand what's going on (and don't need any of the javascriptery).

    Models (snippet):

        class Link(models.Model):
            category = models.ForeignKey(Category, blank=False, default=1)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)
            modified = models.DateTimeField(auto_now=True)
            url = models.URLField(max_length=1024, unique=True, verify_exists=True)
            name = models.CharField(max_length=512)

            def __unicode__(self):
                return u'%s (%s)' % (self.name, self.url)

        class Vote(models.Model):
            link = models.ForeignKey(Link)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)

            def __unicode__(self):
                return u'%s vote for %s' % (self.user, self.link)

    Views (snippet):

        links = Link.objects.select_related().annotate(votes=Count('vote')).order_by('-created')
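    One way to keep the template dumb is to do the membership check once in the view and hang a flag off each link object. A sketch along those lines, reusing the names from the snippets above:

        voted_ids = set()
        if request.user.is_authenticated():
            voted_ids = set(Vote.objects.filter(user=request.user)
                                        .values_list('link_id', flat=True))

        links = Link.objects.select_related().annotate(votes=Count('vote')).order_by('-created')
        for link in links:
            link.user_has_voted = link.id in voted_ids   # extra attribute, not a model field

    The template can then simply test {% if link.user_has_voted %} ... {% endif %}. is_authenticated() is written as a method call here because that matches older Django versions; newer ones expose it as a property.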


  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem:

        class City(Model):
            name = StringProperty()

        class Author(Model):
            name = StringProperty()
            city = ReferenceProperty(City)

        class Post(Model):
            author = ReferenceProperty(Author)
            content = StringProperty()

    The code isn't important... it's this django template:

        {% for post in posts %}
        <div>{{post.content}}</div>
        <div>by {{post.author.name}} from {{post.author.city.name}}</div>
        {% endfor %}

    Now let's say I get the first 100 posts using Post.all().fetch(limit=100), and pass this list to the template - what happens? It makes 200 more datastore gets - 100 to get each author, 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessor on the post.author and author.city objects transparently does a get and pulls the data back (see this question). Some ways around this are:

    1. Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all - the trouble here is that we need to re-construct a template data object... something which needs extra code and maintenance for each model and handler.
    2. Write an accessor, say cached_author, that checks memcache for the author first and returns that - the problem here is that post.cached_author is going to be called 100 times, which could probably mean 100 memcache calls.
    3. Hold a static key-to-object map (and refresh it maybe once in five minutes) if the data doesn't have to be very up to date. The cached_author accessor can then just refer to this map.

    All these ideas need extra code and maintenance, and they're not very transparent. What if we could do

        @prefetch
        def render_template(path, data):
            template.render(path, data)

    Turns out we can... hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render by capturing which keys are requested, we can (at least to one level of depth) capture which keys are being requested, return mock objects, and do a batch get on them. This could be repeated for all depth levels, till no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would change a total of 200 gets into 3, transparently and without any extra code. Not to mention it would greatly cut down the need for memcache and help in situations where memcache can't be used. Trouble is I don't know how to do it (yet). Before I start trying, has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
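    For reference, the non-transparent version of option 1 is short enough to show; this is the commonly cited reference-prefetching recipe for the old db API, written from memory here, so treat it as a sketch rather than tested code:

        from google.appengine.ext import db

        def prefetch_refprops(entities, *props):
            """Batch-fetch the given ReferenceProperty values for all entities."""
            fields = [(entity, prop) for entity in entities for prop in props]
            ref_keys = [prop.get_value_for_datastore(entity) for entity, prop in fields]
            ref_entities = dict((e.key(), e)
                                for e in db.get(list(set(ref_keys))) if e is not None)
            for (entity, prop), ref_key in zip(fields, ref_keys):
                prop.__set__(entity, ref_entities[ref_key])
            return entities

        posts = Post.all().fetch(limit=100)
        prefetch_refprops(posts, Post.author)
        prefetch_refprops([p.author for p in posts], Author.city)

    That turns the 200 hidden gets into 2 batch gets (plus the original query), at the cost of exactly the per-handler boilerplate the question is trying to avoid.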


  • SQLAlchemy - SQLite for testing and PostgreSQL for development - How to port?

    - by StackUnderflow
    I want to use an SQLite in-memory database for all my testing and PostgreSQL for my development/production server. But the SQL syntax is not the same in both databases. For example, SQLite has AUTOINCREMENT and PostgreSQL has SERIAL. Is it easy to port the SQL script from SQLite to PostgreSQL... what are your solutions? If you want me to use standard SQL, how should I go about generating primary keys in both databases?
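    Since the title mentions SQLAlchemy, it is worth noting that this particular difference disappears if the schema is declared through SQLAlchemy rather than as raw SQL: an Integer primary key column is rendered as an autoincrementing INTEGER PRIMARY KEY on SQLite and as SERIAL on PostgreSQL by the respective dialects. A minimal sketch, with table, column, and connection details invented for illustration:

        from sqlalchemy import create_engine, Column, Integer, String
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class Item(Base):
            __tablename__ = 'items'
            # autoincrement is the default for an Integer primary key;
            # each dialect emits its own DDL for it.
            id = Column(Integer, primary_key=True)
            name = Column(String(100))

        test_engine = create_engine('sqlite:///:memory:')
        prod_engine = create_engine('postgresql://user:password@localhost/mydb')
        Base.metadata.create_all(test_engine)   # the same metadata works for either engine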

