Search Results

Search found 14399 results on 576 pages for 'python noob'.

Page 340/576

  • Implementing __concat__

    - by Casebash
    I tried to implement __concat__, but it didn't work:

        >>> class lHolder():
        ...     def __init__(self, l):
        ...         self.l = l
        ...     def __concat__(self, l2):
        ...         return self.l + l2
        ...     def __iter__(self):
        ...         return self.l.__iter__()
        ...
        >>> lHolder([1]) + [2]
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unsupported operand type(s) for +: 'lHolder' and 'list'

    How can I fix this?
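
    There is no __concat__ special method in Python's data model; the + operator is driven by __add__ (and __radd__ when the instance is on the right-hand side). A minimal sketch of the fix:

        class lHolder(object):
            def __init__(self, l):
                self.l = l

            def __add__(self, l2):
                # handles lHolder([1]) + [2]
                return self.l + l2

            def __radd__(self, l2):
                # handles [2] + lHolder([1])
                return l2 + self.l

            def __iter__(self):
                return iter(self.l)

    With this, lHolder([1]) + [2] returns [1, 2].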

  • BioPython: extracting sequence IDs from a Blast output file

    - by Jon
    Hi, I have a BLAST output file in XML format: 22 query sequences with 50 hits reported for each. I want to extract all 50x22 hits. This is the code I currently have, but it only extracts the 50 hits from the first query:

        from Bio.Blast import NCBIXML

        blast_records = NCBIXML.parse(result_handle)
        blast_record = blast_records.next()
        save_file = open("/Users/jonbra/Desktop/my_fasta_seq.fasta", 'w')
        for alignment in blast_record.alignments:
            for hsp in alignment.hsps:
                save_file.write('>%s\n' % (alignment.title,))
        save_file.close()

    Does anybody have suggestions on how to extract all the hits? I guess I have to use something other than alignments. Hope this was clear. Thanks! Jon
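
    NCBIXML.parse() returns an iterator that yields one record per query, so looping over all the records instead of calling .next() once should cover every query. A hedged sketch (it also writes each title once per alignment rather than once per HSP, which avoids duplicate lines when an alignment has several HSPs):

        from Bio.Blast import NCBIXML

        save_file = open("/Users/jonbra/Desktop/my_fasta_seq.fasta", 'w')
        for blast_record in NCBIXML.parse(result_handle):  # one record per query
            for alignment in blast_record.alignments:      # the 50 hits of that query
                save_file.write('>%s\n' % (alignment.title,))
        save_file.close()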

  • How to use Django's filesizeformat

    - by Scott LaPlant
    I have a small app I'm working on where I'm trying to use Django's built-in filesizeformat. Currently, the template looks like this:

        {{ value|filesizeformat }}

    I understand I need to define this in my views.py file, but I can't seem to figure out how to do that. I've tried the syntax below:

        def filesizeformat(bytes):
            """
            Formats the value like a 'human-readable' file size (i.e. 13 KB,
            4.1 MB, 102 bytes, etc).
            """
            try:
                bytes = float(bytes)
            except (TypeError, ValueError, UnicodeDecodeError):
                return u"0 bytes"
            if bytes < 1024:
                return ungettext("%(size)d byte", "%(size)d bytes", bytes) % {'size': bytes}
            if bytes < 1024 * 1024:
                return ugettext("%.1f KB") % (bytes / 1024)
            if bytes < 1024 * 1024 * 1024:
                return ugettext("%.1f MB") % (bytes / (1024 * 1024))
            return ugettext("%.1f GB") % (bytes / (1024 * 1024 * 1024))
        filesizeformat.is_safe = True

    I've then replaced 'value' with 'bytes' in the template, but this does not seem to work. Any suggestions?
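
    filesizeformat is a built-in template filter, so nothing needs to be defined in views.py; the view only has to put a numeric byte count into the template context under the name the template filters. A minimal sketch (the view, template and path names here are made up for illustration):

        import os
        from django.shortcuts import render_to_response

        def file_detail(request):
            size_in_bytes = os.path.getsize('/path/to/some/file')  # any integer works
            return render_to_response('file_detail.html', {'value': size_in_bytes})

    With {{ value|filesizeformat }} in file_detail.html, this renders as e.g. 102 bytes, 13 KB or 4.1 MB.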

  • Django access data passed to form

    - by realshadow
    Hey, I have got a ChoiceField in my form where I display filtered data. To filter the data I need two arguments. The first one is not a problem, because I can take it directly from an object, but the second one is generated dynamically. Here is some code:

        class GroupAdd(forms.Form):
            def __init__(self, *args, **kwargs):
                self.pid = kwargs.pop('parent_id', None)
                super(GroupAdd, self).__init__(*args, **kwargs)

            parent_id = forms.IntegerField(widget=forms.HiddenInput)
            choices = forms.ChoiceField(
                choices = [[group.node_id, group.name] for group in Objtree.objects.filter(
                    type_id = ObjtreeTypes.objects.values_list('type_id').filter(name = 'group'),
                    parent_id = 50
                ).distinct()] + [[0, 'Add a new one']],
                widget = forms.Select(attrs = {'id': 'group_select'})
            )

    I would like to change the parent_id that is passed into Objtree.objects.filter. As you can see, I tried in the __init__ function, as well as with kwargs['initial']['parent_id'] and then calling it with self, but that doesn't work, since it's out of scope... that was pretty much my last effort. I need to access it either through the initial parameter or directly through the parent_id field, since it already holds its value (passed through initial). Any help is appreciated, as I am running out of ideas.
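
    Field choices declared at class level are evaluated once, when the class is defined, so a dynamic parent_id can't reach them there. The usual pattern is to declare the field empty and rebuild its choices inside __init__, after popping the keyword argument; a sketch:

        class GroupAdd(forms.Form):
            parent_id = forms.IntegerField(widget=forms.HiddenInput)
            choices = forms.ChoiceField(widget=forms.Select(attrs={'id': 'group_select'}))

            def __init__(self, *args, **kwargs):
                pid = kwargs.pop('parent_id', None)
                super(GroupAdd, self).__init__(*args, **kwargs)
                group_type = ObjtreeTypes.objects.values_list('type_id').filter(name='group')
                qs = Objtree.objects.filter(type_id=group_type, parent_id=pid).distinct()
                # rebuild the choice list per form instance
                self.fields['choices'].choices = \
                    [[g.node_id, g.name] for g in qs] + [[0, 'Add a new one']]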

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous one. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads, each with a thread-local connection to the database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists them in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database: the job finished persisting after about 45 minutes. For fun, I decided to write it as SQL statements; running those took 5 minutes, but writing them took me a couple of hours. So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my follow-up question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that file into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
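
    One route that usually closes most of that gap without leaving SQLAlchemy is to do the querying through the ORM but hand the final writes to the Core as plain lists of dicts, which lets the driver batch them with executemany. A sketch, assuming an engine and Table objects for the three tables (model_a_table and friends are placeholders):

        # rows built from the CSV plus whatever ORM-based lookups were needed
        a_rows = [{'id': 1, 'name': 'foo'}, {'id': 2, 'name': 'bar'}]

        conn = engine.connect()
        # one multi-row execute per table, in dependency order A, B, C
        conn.execute(model_a_table.insert(), a_rows)
        conn.execute(model_b_table.insert(), b_rows)
        conn.execute(model_c_table.insert(), c_rows)
        conn.close()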

  • Calling activateWindow on QDialog sends window to background

    - by Stan
    I am debugging an application written with C++/Qt4. On Linux it has a problem: with certain window managers (gnome-wm/metacity), the main window (based on QDialog) is created in the background (it's not raised). I managed to recreate the scenario using PyQt4 and the following code:

        from PyQt4.QtCore import *
        from PyQt4.QtGui import *
        import sys

        class PinDialog(QDialog):
            def showEvent(self, event):
                QDialog.showEvent(self, event)
                self.raise_()
                self.activateWindow()

        if __name__ == "__main__":
            app = QApplication(sys.argv)
            widget = PinDialog()
            app.setActiveWindow(widget)
            widget.exec_()
            sys.exit(0)

    If I remove self.activateWindow() the application works as expected. This seems wrong, since the documentation for activateWindow does not specify any conditions under which something like this could happen. My questions are: is there any reason to have activateWindow in showEvent in the first place? And if there is, what would be a good workaround for the focusing issues?
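
    If the activateWindow call has to stay, one hedged workaround is to defer it until the show event has been fully processed, so the window manager sees a mapped window before the activation request arrives. Whether this helps depends on the window manager, so treat it as an experiment rather than a fix:

        from PyQt4.QtCore import QTimer
        from PyQt4.QtGui import QDialog

        class PinDialog(QDialog):
            def showEvent(self, event):
                QDialog.showEvent(self, event)
                # queue the calls to run once the event loop is idle
                QTimer.singleShot(0, self.raise_)
                QTimer.singleShot(0, self.activateWindow)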

  • File Uploads with Turbogears 2

    - by William Chambers
    I've been trying to work out the 'best practices' way to manage file uploads with TurboGears 2 and have thus far not really found any examples. I've figured out a way to actually upload the file, but I'm not sure how reliable it is. Also, what would be a good way to get the uploaded file's name?

        file = request.POST['file']
        permanent_file = open(os.path.join(asset_dirname,
                                           file.filename.lstrip(os.sep)), 'w')
        shutil.copyfileobj(file.file, permanent_file)
        file.file.close()
        this_file = self.request.params["file"].filename
        permanent_file.close()

    So assuming I'm understanding correctly, would something like this avoid the core 'naming' problem?

        id = UUID  # pseudocode: some generated UUID string
        file = request.POST['file']
        permanent_file = open(os.path.join(asset_dirname, id.lstrip(os.sep)), 'w')
        shutil.copyfileobj(file.file, permanent_file)
        file.file.close()
        this_file = file.filename
        permanent_file.close()
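
    A hedged sketch of that idea: generate the on-disk name with the uuid module, keep the browser-supplied name only as metadata, and run it through os.path.basename rather than lstrip, since lstrip(os.sep) does not strip directory components that appear mid-string:

        import os
        import shutil
        import uuid

        upload = request.POST['file']
        original_name = os.path.basename(upload.filename)  # metadata only
        stored_name = uuid.uuid4().hex                     # name actually used on disk

        permanent_file = open(os.path.join(asset_dirname, stored_name), 'wb')
        try:
            shutil.copyfileobj(upload.file, permanent_file)
        finally:
            upload.file.close()
            permanent_file.close()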

  • Django: name of many to many items in the admin interface

    - by Adam
    I have a many-to-many field which I'm displaying in the Django admin panel. When I add multiple items, they all come up as "ASGGroup object" in the display selector. Instead, I want them to come up as whatever the ASGGroup.name field is set to. How do I do this? My models look like:

        class Thing(Model):
            read_groups = ManyToManyField('ASGGroup', related_name="thing_read", blank=True)

        class ASGGroup(Model):
            name = CharField(max_length=63, null=True)

    But all the m2m widget displays is the generic "ASGGroup object" label.
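
    The admin labels each object with the result of its __unicode__ method (on the Python 2 Django of this era), so defining one on the model should make the selector show the name:

        class ASGGroup(Model):
            name = CharField(max_length=63, null=True)

            def __unicode__(self):
                return self.name or u''  # guard against null names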

  • How to limit traffic using multicast over localhost

    - by Shane Holloway
    I'm using multicast UDP over localhost to implement a loose collection of cooperative programs running on a single machine. The following code works well on Mac OS X, Windows and Linux. The flaw is that the code will receive UDP packets from outside the localhost network as well. For example, sendSock.sendto(pkt, ('192.168.0.25', 1600)) is received by my test machine when sent from another box on my network.

        import platform, time, socket, select

        addr = ("239.255.2.9", 1600)

        sendSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 24)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                            socket.inet_aton("127.0.0.1"))

        recvSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
        if hasattr(socket, 'SO_REUSEPORT'):
            recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, True)
        recvSock.bind(("0.0.0.0", addr[1]))
        status = recvSock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                                     socket.inet_aton(addr[0]) + socket.inet_aton("127.0.0.1"))

        while 1:
            pkt = "Hello host: {1} time: {0}".format(time.ctime(), platform.node())
            print "SEND to: {0} data: {1}".format(addr, pkt)
            r = sendSock.sendto(pkt, addr)
            while select.select([recvSock], [], [], 0)[0]:
                data, fromAddr = recvSock.recvfrom(1024)
                print "RECV from: {0} data: {1}".format(fromAddr, data)
            time.sleep(2)

    I've attempted recvSock.bind(("127.0.0.1", addr[1])), but that prevents the socket from receiving any multicast traffic. Is there a proper way to configure recvSock to only accept multicast packets from the 127/24 network, or do I need to test the address of each received packet?
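
    Since binding to 127.0.0.1 kills multicast reception, the pragmatic fallback is the second option the question mentions: check the source of each datagram and drop anything that did not arrive over loopback. A sketch of the receive loop with that guard:

        while select.select([recvSock], [], [], 0)[0]:
            data, fromAddr = recvSock.recvfrom(1024)
            if not fromAddr[0].startswith("127."):
                continue  # ignore datagrams from outside the loopback network
            print "RECV from: {0} data: {1}".format(fromAddr, data)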

  • openerp client customization

    - by iamgopal
    The OpenERP client seems to be nice and working, and I would like to hack it and use it as a front end to my OpenERP solution. But the documentation regarding client-side design and customization on the OpenERP site is poor. Is there any good reference or documentation available for digging further into OpenERP client-side coding? Alternatively, is there a similar client solution available that can be plugged into any back-end system (i.e. a rich internet client)?

  • Django1.1 model field value preprocessing before returning

    - by Satoru.Logic
    Hi, all. I have a model class like this:

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = NoteContentField(max_length=256)

    NoteContentField is a custom subclass of CharField that overrides the to_python method in order to do some twitter-text-conversion processing:

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase

            def to_python(self, value):
                value = super(NoteContentField, self).to_python(value)
                from ..utils import linkify
                return mark_safe(linkify(value))

    However, this doesn't work. When I save a Note object like this:

        note = Note(author=request.user, content=form.cleaned_data['content'])
        note.save()

    the converted value is saved into the database, which is not what I want to see. What I'm trying to do is save the raw content into the database and only make the conversion when the content attribute is later accessed. Would you please tell me what's wrong with this?

    Thanks to Pierre and Daniel, I have figured out what's wrong. I thought the text-conversion code should be in either to_python or get_db_prep_value, and that's wrong. I should override both of them, making to_python do the conversion and get_db_prep_value return the unconverted value:

        from ..utils import linkify

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase

            def to_python(self, value):
                self._raw_value = super(NoteContentField, self).to_python(value)
                return mark_safe(linkify(self._raw_value))

            def get_db_prep_value(self, value):
                return self._raw_value

    I wonder if there is a better way to implement this?
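
    One arguably cleaner sketch: stashing state on the field instance (self._raw_value) is fragile, because a single field object is shared by every model instance, so a simpler option is to store the raw text in a plain CharField and do the conversion at access time through a model property (the property name here is an assumption):

        from django.utils.safestring import mark_safe
        from ..utils import linkify

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = models.CharField(max_length=256)  # raw text in the DB

            @property
            def content_html(self):
                # conversion happens only when the attribute is read
                return mark_safe(linkify(self.content))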

  • Django debug error

    - by Hulk
    I have the following in my models:

        class info(models.Model):
            add = models.CharField(max_length=255)
            name = models.CharField(max_length=255)

    And in the views, when I write:

        info_l = info.objects.filter(id=1)
        logging.debug(info_l.name)

    I get an error at the debug statement saying name doesn't exist:

        'QuerySet' object has no attribute 'name'

    1. How can this be resolved?
    2. Also, how do I query for only one field instead of selecting all, like select name from info?
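
    filter() always returns a QuerySet (a list-like collection), even when it matches a single row, so the usual sketch is to fetch one object with get() and one column with values_list():

        # one object: raises info.DoesNotExist if id 1 is missing
        info_l = info.objects.get(id=1)
        logging.debug(info_l.name)

        # only the name column, roughly "SELECT name FROM info"
        names = info.objects.values_list('name', flat=True)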

  • Using sub filters/queries in Google App Engine

    - by fredrik
    Hi, I'm trying to figure out how to sub-query a query that uses a filter. From what I've figured out so far, .filter() changes the original query, so a second .filter() also has to match the first. I would like to make something like this work:

        modules = data.Modules.all().filter('page = ', page.key())
        modules.filter('name = ', 'Test')
        modules.filter('name = ', 'Test2')

    I can't get the "Test2" filter to work. The only solution I have at the moment is to make all-new queries:

        data.Modules.all().filter('page = ', page.key()).filter('name = ', "Test").get()
        data.Modules.all().filter('page = ', page.key()).filter('name = ', "Test2").get()

    or to write the same thing as GQL. But that seems to me a quite stupid way to go. I've looked at using ancestors, but I don't quite understand them and honestly don't know if that's the way to go. Any ideas? ..fredrik
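
    Chained equality filters on the same property are ANDed together, which no single entity can satisfy; when the intent is "name is Test or Test2", the datastore's IN operator expresses that in one query (under the hood it runs one sub-query per value). A sketch:

        modules = (data.Modules.all()
                   .filter('page = ', page.key())
                   .filter('name IN', ['Test', 'Test2']))
        for module in modules:
            # each entity matches: page AND (name == 'Test' OR name == 'Test2')
            pass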

  • wxPython ListCtrl Column Ignores Specific Fields

    - by g.d.d.c
    I'm rewriting this post to clarify some things and provide a full class definition for the Virtual List I'm having trouble with. The class is defined like so:

        from wx import ListCtrl, LC_REPORT, LC_VIRTUAL, LC_HRULES, LC_VRULES, \
            EVT_LIST_COL_CLICK, EVT_LIST_CACHE_HINT, EVT_LIST_COL_RIGHT_CLICK, \
            ImageList, IMAGE_LIST_SMALL, Menu, MenuItem, NewId, ITEM_CHECK, Frame, \
            EVT_MENU

        class VirtualList(ListCtrl):
            def __init__(self, parent, datasource = None,
                         style = LC_REPORT | LC_VIRTUAL | LC_HRULES | LC_VRULES):
                ListCtrl.__init__(self, parent, style = style)
                self.columns = []
                self.il = ImageList(16, 16)
                self.Bind(EVT_LIST_CACHE_HINT, self.CheckCache)
                self.Bind(EVT_LIST_COL_CLICK, self.OnSort)
                if datasource is not None:
                    self.datasource = datasource
                    self.Bind(EVT_LIST_COL_RIGHT_CLICK, self.ShowAvailableColumns)
                    self.datasource.list = self
                    self.Populate()

            def SetDatasource(self, datasource):
                self.datasource = datasource

            def CheckCache(self, event):
                self.datasource.UpdateCache(event.GetCacheFrom(), event.GetCacheTo())

            def OnGetItemText(self, item, col):
                return self.datasource.GetItem(item, self.columns[col])

            def OnGetItemImage(self, item):
                return self.datasource.GetImg(item)

            def OnSort(self, event):
                self.datasource.SortByColumn(self.columns[event.Column])
                self.Refresh()

            def UpdateCount(self):
                self.SetItemCount(self.datasource.GetCount())

            def Populate(self):
                self.UpdateCount()
                self.datasource.MakeImgList(self.il)
                self.SetImageList(self.il, IMAGE_LIST_SMALL)
                self.ShowColumns()

            def ShowColumns(self):
                for col, (text, visible) in enumerate(self.datasource.GetColumnHeaders()):
                    if visible:
                        self.columns.append(text)
                        self.InsertColumn(col, text, width = -2)

            def Filter(self, filter):
                self.datasource.Filter(filter)
                self.UpdateCount()
                self.Refresh()

            def ShowAvailableColumns(self, evt):
                colMenu = Menu()
                self.id2item = {}
                for idx, (text, visible) in enumerate(self.datasource.columns):
                    id = NewId()
                    self.id2item[id] = (idx, visible, text)
                    item = MenuItem(colMenu, id, text, kind = ITEM_CHECK)
                    colMenu.AppendItem(item)
                    EVT_MENU(colMenu, id, self.ColumnToggle)
                    item.Check(visible)
                Frame(self, -1).PopupMenu(colMenu)
                colMenu.Destroy()

            def ColumnToggle(self, evt):
                toggled = self.id2item[evt.GetId()]
                if toggled[1]:
                    idx = self.columns.index(toggled[2])
                    self.datasource.columns[toggled[0]] = (self.datasource.columns[toggled[0]][0], False)
                    self.DeleteColumn(idx)
                    self.columns.pop(idx)
                else:
                    self.datasource.columns[toggled[0]] = (self.datasource.columns[toggled[0]][0], True)
                    idx = self.datasource.GetColumnHeaders().index((toggled[2], True))
                    self.columns.insert(idx, toggled[2])
                    self.InsertColumn(idx, toggled[2], width = -2)
                self.datasource.SaveColumns()

    I've added functions that allow for column toggling, which facilitate my description of the issue I'm encountering. On the 3rd instance of this class in my application, the column at index 1 will not display string values. Integer values are displayed properly. If I add print statements to my OnGetItemText method, the values show up in my console properly. This behavior is not present in the first two instances of this class, and my class does not contain any type-checking code with respect to value display. It was suggested by someone on the wxPython users' group that I create a standalone sample that demonstrates this issue if I can. I'm working on that, but have not yet had time to create a sample that does not rely on database access. Any suggestions or advice would be most appreciated. I'm tearing my hair out on this one.
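
    One hedged thing to rule out, since the print statements show the values reaching OnGetItemText: make sure the callback always hands wx a string object, because a stray None or an encoding problem in the data for that one column can surface exactly like this while integer values still render. A defensive sketch:

        def OnGetItemText(self, item, col):
            value = self.datasource.GetItem(item, self.columns[col])
            if value is None:
                return ""
            return unicode(value)  # always return a string to wx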

  • Unbuffered subprocess output (last line missing)

    - by plok
    I must be overlooking something terribly obvious. I need to execute a C program, display its output in real time, and finally parse its last line, which should be straightforward as the last line printed is always the same.

        process = subprocess.Popen(args, shell = True, stdout = subprocess.PIPE,
                                   stderr = subprocess.PIPE)

        # None indicates that the process hasn't terminated yet.
        while process.poll() is None:
            # Always save the last non-empty line that was output by the child
            # process, as it will write an empty line when closing its stdout.
            out = process.stdout.readline()
            if out:
                last_non_empty_line = out
            if verbose:
                sys.stdout.write(out)
                sys.stdout.flush()

        # Parse 'out' here...

    Once in a while, however, the last line is not printed. The default value for Popen's bufsize is 0, so it is supposed to be unbuffered. I have also tried, to no avail, adding fflush(stdout) to the C code just before exiting, but it seems that there is absolutely no need to flush a stream before exiting a program. Ideas, anyone?
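
    The likely race: poll() can report the process as finished while lines are still sitting in the pipe, so the loop exits before reading them. Reading until EOF instead of until exit avoids that; a sketch:

        process = subprocess.Popen(args, shell=True,
                                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        last_non_empty_line = None
        # readline() returns '' only at EOF, i.e. after the child closed its stdout
        for out in iter(process.stdout.readline, ''):
            if out.strip():
                last_non_empty_line = out
            if verbose:
                sys.stdout.write(out)
                sys.stdout.flush()
        process.wait()
        # Parse last_non_empty_line here...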

  • django sphinx automodule -- basics

    - by haras.pl
    Hi, I have a project with several large apps, where the settings and app files are split. The directory structure goes something like this:

        project_name
            __init__.py
            apps
                __init__.py
                app1
                app2
            3rdparty
                __init__.py
                lib1
                lib2
            settings
                __init__.py
                installed_apps.py
                path.py
                templates.py
                locale.py
                ...
            urls.py

    and every app looks like this:

        __init__.py
        admin
            __init__.py
            file1.py
            file2.py
        models
            __init__.py
            model1.py
            model2.py
        tests
            __init__.py
            test1.py
            test2.py
        views
            __init__.py
            view1.py
            view2.py
        urls.py

    How do I use Sphinx to auto-generate documentation for that? I want something like: for each module in settings, and for each app in INSTALLED_APPS (not starting with django.* or 3rdparty.*), give me auto-documentation output based on docstrings; and run tests before git commit. By the way, I tried writing the .rst files by hand with:

        .. automodule:: module_name
           :members:

    but that sucks for such a big project, and it does not work for settings. Is there an autogen method or something? I am not tied to Sphinx; is there a better solution for my problem?
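
    Two hedged pointers: newer Sphinx releases ship a sphinx-apidoc command that walks a package tree and writes one automodule .rst stub per module, which removes the by-hand part; and the settings failure is usually because autodoc has to import your modules, and Django modules cannot be imported without DJANGO_SETTINGS_MODULE being set. A conf.py sketch for the latter (paths are assumptions):

        # conf.py -- runs before autodoc starts importing project modules
        import os
        import sys

        sys.path.insert(0, os.path.abspath('..'))  # make project_name importable
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project_name.settings'

        extensions = ['sphinx.ext.autodoc']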

  • Django, making a page active for a fixed time

    - by Hellnar
    Greetings. I am hacking on Django and trying to test something like woot.com: I want to sell "an item per day", so only one item will be available for that day (say the default www.mysite.com will be redirected to that item). Assume my URLs for calling these items will be:

        www.mysite.com/item/<number>

    My model for an item:

        class Item(models.Model):
            item_name = models.CharField(max_length=30)
            price = models.FloatField()
            content = models.TextField()  # keeps all the html content
            start_time = models.DateTimeField()
            end_time = models.DateTimeField()

    And my view for rendering it:

        def results(request, item_id):
            item = get_object_or_404(Item, pk=item_id)
            now = datetime.now()
            if item.start_time > now:
                # render and return some "not started yet" error template
            elif item.end_time < now:
                # render and return some "item selling ended" error template
            else:
                # render the real template for selling this item

    What would be an efficient and clever model & template for achieving this?
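
    A hedged sketch for the "today's item" redirect: since start_time and end_time already bound each item's sale window, the front page can look the current item up with one range query and redirect to its detail URL (the view and URL pattern names here are assumptions):

        from datetime import datetime
        from django.shortcuts import get_object_or_404, redirect

        def today(request):
            now = datetime.now()
            # the one item whose sale window contains "now"
            item = get_object_or_404(Item,
                                     start_time__lte=now,
                                     end_time__gt=now)
            return redirect('item_detail', item_id=item.pk)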

  • What is the correct way to back up ZODB blobs?

    - by joeforker
    I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs, but I have not been able to find any advice on backing up this data. I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to back up my blobs? What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?

  • [numpy] storing record arrays in object arrays

    - by Peter Prettenhofer
    I'd like to convert a list of record arrays -- dtype is (uint32, float32) -- into a numpy array of dtype np.object:

        X = np.array(instances, dtype = np.object)

    where instances is a list of arrays with data type np.dtype([('f0', '<u4'), ('f1', '<f4')]). However, the above statement results in an array whose elements are also of type np.object:

        X[0]
        array([(67111L, 1.0), (104242L, 1.0)], dtype=object)

    Does anybody know why? The following statement should be equivalent to the above but gives the desired result:

        X = np.empty((len(instances),), dtype = np.object)
        X[:] = instances
        X[0]
        array([(67111L, 1.0), (104242L, 1.0)], dtype=[('f0', '<u4'), ('f1', '<f4')])

    thanks & best regards, peter

  • Formatting inline many-to-many related models presented in django admin

    - by Jonathan
    I've got two Django models (simplified):

        class Product(models.Model):
            name = models.TextField()
            price = models.IntegerField()

        class Invoice(models.Model):
            company = models.TextField()
            customer = models.TextField()
            products = models.ManyToManyField(Product)

    I would like to see the relevant products as a nice table (of product fields) on an Invoice page in the admin, and be able to link to the individual Product pages. My first thought was using the admin's inlines, but Django used a select-box widget per related Product. This isn't linked to the Product pages, and since I have thousands of products, and each select box independently downloads all the product names, it quickly becomes unreasonably slow. So I turned to using ModelAdmin.filter_horizontal as suggested here, which uses a single instance of a different widget, where you have a list of all Products and another list of related Products and you can add/remove products in the latter from the former. This solved the slowness, but it still doesn't show the relevant Product fields, and it isn't linkable. So, what should I do? Tweak views? Override ModelForms? I Googled around and couldn't find any example of such code...
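
    One hedged starting point, on the Django versions that expose the M2M through model: the admin can inline the auto-created join table, which renders one table row per related Product instead of one giant select box each. Read-only product columns and links to the Product change pages would still need a custom inline template or form on top of this:

        from django.contrib import admin

        class ProductInline(admin.TabularInline):
            model = Invoice.products.through  # the auto-created M2M join table
            raw_id_fields = ('product',)      # avoid loading all names per row
            extra = 0

        class InvoiceAdmin(admin.ModelAdmin):
            inlines = [ProductInline]
            exclude = ('products',)  # don't show the default M2M widget as well

        admin.site.register(Invoice, InvoiceAdmin)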

  • Comparing two large sets of attributes

    - by andyashton
    Suppose you have a Django view that has two functions. The first function renders some XML using an XSLT stylesheet and produces a div with 1000 subelements like this:

        <div id="myText">
            <p id="p1"><a class="note-p1" href="#" style="display:none" target="bot">?</a></strong>Lorem ipsum</p>
            <p id="p2"><a class="note-p2" href="#" style="display:none" target="bot">?</a></strong>Foo bar</p>
            <p id="p3"><a class="note-p3" href="#" style="display:none" target="bot">?</a></strong>Chocolate peanut butter</p>
            (etc for 1000 lines)
            <p id="p1000"><a class="note-p1000" href="#" style="display:none" target="bot">?</a></strong>Go Yankees!</p>
        </div>

    The second function renders another XML document using another stylesheet to produce a div like this:

        <div id="myNotes">
            <p id="n1"><cite class="note-p1"><sup>1</sup><span>Trololo</span></cite></p>
            <p id="n2"><cite class="note-p1"><sup>2</sup><span>Trololo</span></cite></p>
            <p id="n3"><cite class="note-p2"><sup>3</sup><span>lololo</span></cite></p>
            (etc for n lines)
            <p id="n"><cite class="note-p885"><sup>n</sup><span>lololo</span></cite></p>
        </div>

    I need to see which elements in #myText have classes that match elements in #myNotes, and display them. I can do this using the following jQuery:

        $('#myText').find('a').each(function() {
            var $anchor = $(this);
            $('#myNotes').find('cite').each(function() {
                if ($(this).attr('class') == $anchor.attr('class')) {
                    $anchor.show();
                }
            });
        });

    However, this is incredibly slow and inefficient for a large number of comparisons. What is the fastest/most efficient way to do this? Is there a jQuery/JS method that is reasonable for a large number of items? Or do I need to re-engineer the Django code to do the work before passing it to the template?

  • Rebuilding website from Django 0.96 to Django 1.2

    - by Neytiri
    I've got a website done in Django 0.96 (built in 2007), and now we are thinking about rebuilding it (not just migrating it) for Django 1.2. Can anyone point me to the new (and worthwhile) widgets, plugins and other tools for Django 1.2 (released in April 2010)? I've heard of "South" and of a widget for debugging (can't remember the name), but I'm a little lost here.

  • Should we have a database independent SQL like query language in Django? [closed]

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database-independent and converts to database-specific SQL queries. Once things start getting complicated, it is preferable to write raw SQL queries for better efficiency, but when you write raw SQL queries your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone.

    My question: until I use a database-specific feature, why should I be tied to one database? For instance: we have a query with multiple joins and we decide to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some fake SQL language which can translate to any database's SQL. Even Django's ORM could be built over it, so that if you step outside the ORM but stay away from database-specific features, you can still remain database-independent.

    I asked the same question of Jacob Kaplan-Moss in person. He advised me to stay with the database that I like and embrace its whole power, with which I agree. But my point was not that we should be database-independent; my point is that we should be database-independent until we use a database-specific feature. So please explain: should there be a fake SQL layer over the actual SQL, and why or why not?
