Search Results

Search found 14399 results on 576 pages for 'python noob'.


  • Display graph without saving using pydot

    - by user506710
    Hello all. I am trying to display a simple graph using pydot. Is there any way to display the graph without writing it to a file? At the moment I use the write function to draw it to disk and then the Image module to show the file. Can the graph be drawn directly on screen without being saved first? As an update to the same question: I notice that while the image is saved very quickly, the show command of the Image module takes a noticeable time before the image appears. Sometimes I also get an error saying the image couldn't be opened because it was deleted or saved in an unavailable location, which isn't correct since I am saving it to my Desktop. Does anyone know what is happening, and is there a faster way to get the image loaded? Thanks a lot.
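
    A minimal sketch of the in-memory approach (assuming pydot and PIL/Pillow are installed): render the graph to a PNG byte string with create_png() and hand it straight to the Image module, so no file on the Desktop is involved. Note that PIL's show() still writes a temporary file internally, which may explain part of the delay you see.

        import io
        import pydot
        from PIL import Image

        graph = pydot.Dot(graph_type='graph')
        graph.add_edge(pydot.Edge('a', 'b'))

        png_bytes = graph.create_png(prog='dot')    # render in memory, no write() call
        Image.open(io.BytesIO(png_bytes)).show()    # display directly from the bytes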

    Read the article

  • Iterating over a database column in Django

    - by curious
    I would like to iterate a calculation over a column of values in a MySQL database. I wondered if Django had any built-in functionality for doing this. Previously, I have just used the following to store each column as a list of tuples with the name table_column:

        import MySQLdb
        import sys

        try:
            conn = MySQLdb.connect(host="localhost", user="user", passwd="passwd", db="db")
        except MySQLdb.Error, e:
            print "Error %d: %s" % (e.args[0], e.args[1])
            sys.exit(1)

        cursor = conn.cursor()
        for table in ['foo', 'bar']:
            for column in ['foobar1', 'foobar2']:
                cursor.execute('select %s from %s' % (column, table))
                exec "%s_%s = cursor.fetchall()" % (table, column)
        cursor.close()
        conn.commit()
        conn.close()

    Is there any functionality built into Django to more conveniently iterate through the values of a column in a database table? I'm dealing with millions of rows so speed of execution is important.
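
    If the table is already mapped to a Django model, one idiomatic option is values_list() combined with iterator(), which streams a single column instead of loading millions of rows at once. A sketch, assuming a hypothetical model FooBar with a column foobar1:

        from myapp.models import FooBar   # hypothetical app and model names

        # Stream just the one column and run the calculation over it
        values = FooBar.objects.values_list('foobar1', flat=True).iterator()
        total = sum(v for v in values if v is not None)

    For raw speed on millions of rows, a raw cursor (connection.cursor()) or database-side aggregation may still win; values_list() mainly removes the per-object ORM overhead.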

    Read the article

  • matching strings

    - by kiran
    Write two functions, called countSubStringMatch and countSubStringMatchRecursive, that take two arguments: a key string and a target string. These functions count the number of instances of the key in the target string, iteratively and recursively respectively. You should complete definitions for:

        def countSubStringMatch(target, key):
        def countSubStringMatchRecursive(target, key):

    For the remaining problems, we are going to explore other substring matching ideas. These problems can be solved with either an iterative function or a recursive one. You are welcome to use either approach, though you may find iterative approaches more intuitive in these cases of matching linear structures.
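
    One possible sketch of the two functions, counting (possibly overlapping) occurrences of key in target:

        def countSubStringMatch(target, key):
            # Iterative version: slide a window of len(key) characters over target
            count = 0
            for i in range(len(target) - len(key) + 1):
                if target[i:i + len(key)] == key:
                    count += 1
            return count

        def countSubStringMatchRecursive(target, key):
            # Recursive version: find the first match, then recurse on the rest
            index = target.find(key)
            if index == -1:
                return 0
            return 1 + countSubStringMatchRecursive(target[index + 1:], key)

        print(countSubStringMatch("atgacatgcacaagtatgcat", "atgc"))           # 2
        print(countSubStringMatchRecursive("atgacatgcacaagtatgcat", "atgc"))  # 2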

    Read the article

  • Save JSON output from a URL to a file

    - by Aidan
    Hey guys, how would I save JSON output by a URL to a file? For example, from the Twitter search API (http://search.twitter.com/search.json?q=hi). Language isn't important. Thanks! Edit: how would I then append further updates to the end of the file?
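
    In Python, a minimal sketch (urllib2, Python 2 era) that fetches the JSON and appends it to a file, one response per line; opening with mode 'a' also covers the follow-up question about appending further updates:

        import urllib2

        url = 'http://search.twitter.com/search.json?q=hi'
        data = urllib2.urlopen(url).read()

        with open('results.json', 'a') as f:   # 'a' appends to EOF, 'w' would overwrite
            f.write(data + '\n')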

    Read the article

  • What motivates people to learn a new programming language?

    - by szabgab
    There are plenty of questions asking "which programming language should I learn?", but I have not yet found an answer to the question of what really motivates people to learn a specific new language. There are people who think they should learn a new language every year for educational purposes; how do they decide which languages to learn? Then I guess there are people who learn a new language because people around them said it is a fun language and they can build nice things with it. Of course people will learn a new language if their current job requires it, or if the language seems to have the potential to earn money (for example, there are plenty of jobs in Java, and Objective-C can be used to write apps for the iPhone and make money). So why are you learning a new language, or why have you learned the languages you know?

    Read the article

  • how do I solve this McNuggets problem?

    - by Sachin Tendulkar
    Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n"
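
    A sketch of one iterative approach, assuming the classic pack sizes of 6, 9 and 20 (the problem statement above does not repeat them): once six consecutive quantities are buyable, every larger quantity is too (just add 6-packs), so the last non-buyable value seen is the answer.

        def can_buy(n, packs=(6, 9, 20)):
            # Brute force: is n expressible as a*6 + b*9 + c*20?
            a, b, c = packs
            for i in range(n // a + 1):
                for j in range(n // b + 1):
                    rest = n - i * a - j * b
                    if rest >= 0 and rest % c == 0:
                        return True
            return False

        largest = 0
        consecutive = 0
        n = 0
        while consecutive < 6:        # 6 in a row => everything above is buyable
            n += 1
            if can_buy(n):
                consecutive += 1
            else:
                consecutive = 0
                largest = n

        print("Largest number of McNuggets that cannot be bought "
              "in exact quantity: %d" % largest)    # prints 43 for packs 6, 9, 20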

    Read the article

  • Are there viable alternatives for Web 2.0 apps besides lots of JavaScript?

    - by djembe
    If you, say, find C-style syntax to be on the axis of evil, are you just hopelessly condemned to suck it up and deal with it if you want to provide your users with cool Web 2.0 applications, for example the kind of thing generally done with jQuery and Ajax? Are there no other choices out there? We're currently building intranet apps using Pylons and a bunch of JavaScript, along with a bit of Evoque. So obviously for us the world would be a better place if something equivalent existed written in, say, a "PythonScript". But I have yet to see anything approaching that, aside from the Android system's ASE, and that's rather unrelated. Still, if only browsers could support other scripting languages...

    Read the article

  • Best way to order by columns in relationships?

    - by Timmy
    I'm using SQLAlchemy, and I have a few polymorphic tables. I want to sort by a column in one of the relationships. I have tables a, b, c, d, with relationships a-to-b, b-to-c, and c-to-d. a-to-b is one-to-many; b-to-c and c-to-d are one-to-one, but polymorphic. Given an item in table a, I will have a list of b items, each with its c and d (one-to-one). How do I use SQLAlchemy to sort them by a column in table d?
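
    A sketch of one way to do it with explicit joins along the relationships, assuming mapped classes A, B, C, D, relationships A.bs, B.c, C.d, and a hypothetical column D.rank to sort on (with polymorphic subclasses you may additionally need with_polymorphic() or joins against the concrete subclass):

        items = (
            session.query(A)
            .join(A.bs)        # a -> b (one-to-many)
            .join(B.c)         # b -> c (one-to-one)
            .join(C.d)         # c -> d (one-to-one)
            .order_by(D.rank)  # sort by the column in table d
            .all()
        )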

    Read the article

  • indexing for faster search of lists in a file?

    - by kaushik
    I have a file containing around 100,000 (1 lakh) lists, and another file with around 50 lists. I want to compare the 2nd item of each list in the second file with the 2nd element of each list in the first file, repeat this for all 50 lists in the second file, and collect all the matching elements. I have written code for this, but it takes a lot of time because it needs to check the whole 100,000-element list about 50 times. I want to improve the speed. I can't post my code because it is part of a bigger program and it would be difficult to infer anything from it. Please tell me what can be done to improve the speed. Thank you.
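
    Without seeing the code, the usual fix is to index the big file once in a dict keyed on the 2nd element, so each of the 50 lookups becomes O(1) instead of a full scan. A sketch, where big_lists and small_lists stand in for however the two files are parsed:

        index = {}
        for lst in big_lists:                       # ~100,000 lists, indexed once
            index.setdefault(lst[1], []).append(lst)

        matches = []
        for lst in small_lists:                     # ~50 lists
            matches.extend(index.get(lst[1], []))   # O(1) lookup per list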

    Read the article

  • Django 1.2: Dates in admin forms don't work with Locales (L10N=True)

    - by equalium
    I have an application in Django 1.2. The language is selectable (I18N and L10N = True). When I select English on the site, the admin works fine. But when I switch to any other language, this is what happens with date inputs (Spanish example): the input correctly accepts the Spanish format %d/%m/%Y (even selecting from the calendar, the date is inserted as expected). But when I save the form and load it again, the date is shown in the English format %Y-%m-%d. The real problem is that when I load the form to change some other text field and try to save it, I get an error telling me to enter a valid date, so I have to retype all the dates or switch the site language just to use the admin. I haven't specified anything for DATE_INPUT_FORMATS in settings, nor have I overridden forms or models. Surely I am missing something, but I can't find it. Can anybody give me a hint?
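
    One thing to check (a sketch, not a confirmed fix for this exact bug): with USE_L10N = True the admin takes its date formats from the active locale's format module rather than from DATE_INPUT_FORMATS in settings, and a custom format module can be pointed to via FORMAT_MODULE_PATH. Assuming the project package is called myproject:

        # settings.py
        USE_I18N = True
        USE_L10N = True
        FORMAT_MODULE_PATH = 'myproject.formats'   # myproject/formats/__init__.py must exist

        # myproject/formats/es/formats.py  (plus __init__.py in each directory)
        DATE_FORMAT = 'd/m/Y'
        DATE_INPUT_FORMATS = ('%d/%m/%Y', '%d/%m/%y')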

    Read the article

  • How to optimize this script

    - by marks34
    I have written the following script. It opens a file, reads each line from it, splitting by the newline character and deleting the first character of the line. If the line is non-empty it is added to an array. Then each element of the array is split by whitespace, sorted alphabetically, and joined again. Every line is printed, because the script is run from the console and writes everything to a file via standard output. I'd like to optimize this code to be more Pythonic. Any ideas?

        import sys

        def main():
            filename = sys.argv[1]
            file = open(filename)
            arr = []
            for line in file:
                line = line[1:].replace("\n", "")
                if line:
                    arr.append(line)
            for line in arr:
                lines = line.split(" ")
                lines.sort(key=str.lower)
                line = ''.join(lines)
                print line

        if __name__ == '__main__':
            main()
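
    One possible more Pythonic rewrite (a sketch that keeps the original behaviour, including joining the sorted words with an empty string; use " ".join(words) if the spaces should be preserved):

        import sys

        def main():
            filename = sys.argv[1]
            with open(filename) as f:
                # drop the first character and the newline, keep non-empty lines
                lines = [line[1:].rstrip("\n") for line in f if line[1:].rstrip("\n")]
            for line in lines:
                words = sorted(line.split(" "), key=str.lower)
                print("".join(words))

        if __name__ == '__main__':
            main()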

    Read the article

  • how to get all 'username' values from my model 'MyUser' on Google App Engine

    - by zjm1126
    My model is:

        class MyUser(db.Model):
            user = db.UserProperty()
            password = db.StringProperty(default=UNUSABLE_PASSWORD)
            email = db.StringProperty()
            nickname = db.StringProperty(indexed=False)

    and my method that tries to get all the usernames is:

        s = []
        a = MyUser.all().fetch(10000)
        for i in a:
            s.append(i.username)

    and the error is:

        AttributeError: 'MyUser' object has no attribute 'username'

    So how can I get all the usernames? What is the simplest way? Thanks.
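
    The model has no username property: the UserProperty is called user, and it stores a users.User object whose nickname() method gives the account name. A sketch of the simplest loop (falling back to the model's own nickname field when no Google account is attached):

        names = []
        for u in MyUser.all():                    # iterates the query in batches
            if u.user is not None:
                names.append(u.user.nickname())   # Google account nickname
            else:
                names.append(u.nickname)          # plain StringProperty fallback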

    Read the article

  • Trouble getting QMainWindow to scroll

    - by random
    A minimal example:

        class MainWindow(QtGui.QMainWindow):
            def __init__(self, parent=None):
                QtGui.QMainWindow.__init__(self, parent)

                winWidth = 683
                winHeight = 784
                screen = QtGui.QDesktopWidget().availableGeometry()
                screenCenterX = (screen.width() - winWidth) / 2
                screenCenterY = (screen.height() - winHeight) / 2
                self.setGeometry(screenCenterX, screenCenterY, winWidth, winHeight)

                layout = QtGui.QVBoxLayout()
                layout.addWidget(FormA())

                mainWidget = QtGui.QWidget()
                mainWidget.setLayout(layout)
                self.setCentralWidget(mainWidget)

    FormA is a QFrame with a QVBoxLayout that can expand to an arbitrary number of entries. In the code posted above, if the entries in the forms can't fit in the window then the window itself grows. I'd prefer for the window to become scrollable. I've also tried the following: replacing

        mainWidget = QtGui.QWidget()
        mainWidget.setLayout(layout)
        self.setCentralWidget(mainWidget)

    with

        mainWidget = QtGui.QScrollArea()
        mainWidget.setLayout(layout)
        self.setCentralWidget(mainWidget)

    results in the forms and entries shrinking if they can't fit in the window. Replacing it with

        mainWidget = QtGui.QWidget()
        mainWidget.setLayout(layout)
        scrollWidget = QtGui.QScrollArea()
        scrollWidget.setWidget(mainWidget)
        self.setCentralWidget(scrollWidget)

    results in the main widget (composed of the forms) being scrunched into the top-left corner of the window, leaving large blank areas to its right and below, and it still isn't scrollable. I can't set a limit on the size of the window because I want it to remain resizable. How can I make this window scrollable?
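
    A sketch of the variant that usually behaves as wanted: keep the QScrollArea as the central widget, call setWidget() (not setLayout()) with the content widget, and enable setWidgetResizable(True) so the content keeps its natural size and scrollbars appear when it does not fit. This would replace the last lines of the __init__ above:

        mainWidget = QtGui.QWidget()
        mainWidget.setLayout(layout)

        scroll = QtGui.QScrollArea()
        scroll.setWidgetResizable(True)   # let the widget follow its size hint, scroll otherwise
        scroll.setWidget(mainWidget)
        self.setCentralWidget(scroll)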

    Read the article

  • filter queryset based on list, including None

    - by jujule
    Hi all. I don't know if it's a Django bug or a feature, but I get strange ORM behaviour with MySQL.

        class Status(models.Model):
            name = models.CharField(max_length=50)

        class Article(models.Model):
            status = models.ForeignKey(Status, blank=True, null=True)

        filters = Q(status__in=[0, 1, 2]) | Q(status=None)
        items = Article.objects.filter(filters)

    This returns Article items, but some have a status other than the requested [0, 1, 2, None]. Looking at the SQL query:

        SELECT [..] FROM `app_article`
        LEFT OUTER JOIN `app_status` ON (`app_article`.`status_id` = `app_status`.`id`)
        WHERE (`app_article`.`status_id` IN (1, 2) OR `app_status`.`id` IS NULL)
        ORDER BY [...]

    the "OR app_status.id IS NULL" part seems to be the cause. If I change it to "OR app_article.status_id IS NULL" it works correctly. How do I deal with this? Thanks.
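
    One thing worth trying (a sketch; whether it changes the generated SQL depends on the Django version) is to express the NULL test with status__isnull and print the query, so you can see which column the IS NULL lands on:

        filters = Q(status__in=[0, 1, 2]) | Q(status__isnull=True)
        items = Article.objects.filter(filters)
        print(items.query)   # inspect whether NULL is tested on app_article.status_id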

    Read the article

  • Strange (at least for me) behavior in a Django template

    - by lud0h
    In a Django template (v1.1), {{ item.vendors.all.0 }} renders "Test", but the following snippet doesn't hide the paragraph:

        {% ifnotequal item.vendors.all.0 "Test" %}
            <p class="view_vendor">Vendor(s): {{ item.vendors.all.0 }}</p><br />
        {% endifnotequal %}

    Any tips on what's wrong? Thanks.
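
    One likely cause (a sketch, not a confirmed diagnosis): {{ item.vendors.all.0 }} renders the model instance through its unicode representation, but ifnotequal compares the instance object itself against the string "Test", and those are never equal. Comparing against the actual field, here a hypothetical name field, behaves as expected:

        {% ifnotequal item.vendors.all.0.name "Test" %}
            <p class="view_vendor">Vendor(s): {{ item.vendors.all.0 }}</p><br />
        {% endifnotequal %}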

    Read the article

  • Why is my numpy C extension slow?

    - by Bitwise
    I am working on large numpy arrays, and some native numpy operations are too slow for my needs (for example simple operations such as "bitwise" A&B). I started looking into writing C extensions to try and improve performance. As a test case, I tried the example given here, implementing a simple trace calculation. I was able to get it to work, but was surprised by the performance: for a (1000,1000) numpy array, numpy.trace() was about 1000 times faster than the C extension! This happens whether I run it once or many times. Is this expected? Is the C extension overhead that bad? Any ideas how to speed things up?
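
    Before digging into the extension itself, it may be worth timing both implementations under identical conditions, so that import cost and one-off call overhead are excluded. A sketch with timeit, where mytrace stands in for whatever the compiled extension module is called:

        import timeit

        setup = ("import numpy as np; import mytrace; "   # mytrace = hypothetical extension name
                 "A = np.random.rand(1000, 1000)")

        print(timeit.timeit('np.trace(A)', setup=setup, number=100))
        print(timeit.timeit('mytrace.trace(A)', setup=setup, number=100))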

    Read the article

  • Sleeping a thread is blocking stdin

    - by Sid
    Hey, I'm running one function that evaluates commands passed in via stdin, and another function that runs a bunch of jobs. I need the latter function to sleep at regular intervals, but that seems to block stdin. Any advice on how to resolve this would be appreciated. The source code for the functions is:

        def runJobs(comps, jobQueue, numRunning, limit, lock):
            while len(jobQueue) >= 0:
                print(len(jobQueue))
                if len(jobQueue) > 0:
                    comp, tasks = find_computer(comps, 0)
                    # do something
                time.sleep(5)

        def manageStdin():
            print "Global Stdin Begins Now"
            for line in fileinput.input():
                try:
                    print(eval(line))
                except Exception, e:
                    print e

    Thanks
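
    The usual pattern (a sketch, reusing the names from the question) is to move the sleeping loop into a background thread, so its time.sleep() calls never touch the main thread, which stays free to read stdin:

        import threading

        worker = threading.Thread(target=runJobs,
                                  args=(comps, jobQueue, numRunning, limit, lock))
        worker.daemon = True    # don't keep the process alive once stdin processing ends
        worker.start()

        manageStdin()           # runs in the main thread; the sleeps never block stdin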

    Read the article

  • How does git fetch the commits associated with a file?

    - by liadan
    I'm writing a simple parser of .git/* files. I have covered almost everything: objects, refs, pack files, etc. But I have a problem. Let's say I have a big 300 MB repository (in a pack file) and I want to find all the commits that changed /some/deep/inside/file. What I'm doing now is:

        1. Fetch the last commit.
        2. Find the file in it: fetch the parent tree, find the tree inside it, and repeat recursively until I get to the file. Additionally, I check the hash of each subfolder on the way to the file; if one of them is the same as in the commit before, I assume the file was not changed (because its parent directory didn't change).
        3. Store the hash of the file and fetch the parent commit.
        4. Find the file again and check whether its hash changed.
        5. If it did, the original commit (i.e. the one before the parent) changed the file.

    I repeat this over and over until I reach the very first commit. This solution works, but it sucks. In the worst case, the first search can take up to 3 minutes (for a 300 MB pack). Is there any way to speed it up? I tried to avoid loading such large objects into memory, but right now I don't see any other way, and even then the initial load will take forever. Greets, and thanks for any help!
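
    One way to claw back some of that time (a sketch of the memoisation idea, with a hypothetical read_tree(sha) helper standing in for the existing parser, returning {name: (mode, sha)} for a tree object): cache the path-to-blob lookup per tree hash, so identical subtrees across commits are only ever walked once:

        lookup_cache = {}

        def blob_for_path(tree_sha, path_parts, read_tree):
            # Resolve a path like ('some', 'deep', 'inside', 'file') inside a tree,
            # caching results so an unchanged subtree is never walked twice.
            key = (tree_sha, path_parts)
            if key in lookup_cache:
                return lookup_cache[key]
            entries = read_tree(tree_sha)        # hypothetical: parses one tree object
            name = path_parts[0]
            if name not in entries:
                result = None
            elif len(path_parts) == 1:
                result = entries[name][1]        # the blob sha of the file itself
            else:
                result = blob_for_path(entries[name][1], path_parts[1:], read_tree)
            lookup_cache[key] = result
            return result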

    Read the article
