Search Results

Search found 13683 results on 548 pages for 'python sphinx'.


  • numpy array mapping and take average

    - by user566653
    Dear all, I have three arrays:

        value = np.array([1, 3, 3, 5, 5, 7, 3])
        index = np.array([1, 1, 3, 3, 6, 6, 6])
        data  = np.array([1, 2, 3, 4, 5, 6])

    I want to average the items of "value" grouped by "index" and assign the result to a new array with one entry per value of "data", such as [2, nan, 4, nan, nan, 5]: the first value is the average of the 1st and 2nd items of "value", the second value is nan because 2 never appears as a key in "index", the third value is the average of the 3rd and 4th items of "value", and so on. Thanks for your help! Regards, Roy
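
    One loop-free way to get the grouped averages is np.bincount, summing and counting per key and then dividing; a minimal sketch, assuming the keys in "index" are small non-negative integers:

        import numpy as np

        value = np.array([1, 3, 3, 5, 5, 7, 3], dtype=float)
        index = np.array([1, 1, 3, 3, 6, 6, 6])
        data  = np.array([1, 2, 3, 4, 5, 6])

        # Sum and count the "value" entries per key; keys that never occur give 0/0.
        sums   = np.bincount(index, weights=value, minlength=len(data) + 1)
        counts = np.bincount(index, minlength=len(data) + 1)
        with np.errstate(invalid='ignore'):
            averages = sums / counts        # nan wherever counts == 0

        result = averages[1:len(data) + 1]  # align with keys 1..6
        print(result)                       # [ 2. nan  4. nan nan  5.]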

  • A faster alternative to Pandas `isin` function

    - by user3576212
    I have a very large data frame df that looks like:

        ID      Value1   Value2
        1345    3.2      332
        1355    2.2      32
        2346    1.0      11
        3456    8.9      322

    And I have a list ID_list that contains a subset of IDs. I need the subset of df whose ID is contained in ID_list. Currently I am using

        df_sub = df[df.ID.isin(ID_list)]

    to do it, but it takes a lot of time. The IDs contained in ID_list don't follow any pattern, so they are not within a certain range. (And I need to apply the same operation to many similar dataframes.) I was wondering if there is any faster way to do this. Would it help a lot to make ID the index? Thanks!
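
    Two approaches that are often faster than a column-wise isin are matching on the underlying numpy array and reusing an ID index across the repeated lookups; a sketch under those assumptions:

        import numpy as np
        import pandas as pd

        df = pd.DataFrame({'ID': [1345, 1355, 2346, 3456],
                           'Value1': [3.2, 2.2, 1.0, 8.9],
                           'Value2': [332, 32, 11, 322]})
        ID_list = [1355, 3456]

        # Option 1: match against the raw numpy array instead of the Series.
        df_sub = df[np.in1d(df['ID'].values, ID_list)]

        # Option 2: build the ID index once and reuse it for every lookup.
        indexed = df.set_index('ID')
        df_sub2 = indexed.loc[indexed.index.intersection(ID_list)]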

  • How do I specify a null relation in SQLAlchemy?

    - by Jesse
    Not sure what the correct title for this question should be. I have the following schema: Matters have a one-to-many relationship to WorkItems, and WorkItems have a one-to-one (or one-to-zero) relationship to LineItems. I am trying to create the following relation between Matters and WorkItems:

        Matter.unbilled_work_items = orm.relation(WorkItem,
            primaryjoin = (Matter.id == WorkItem.matter_id) and (WorkItem.line_item_id == None),
            foreign_keys = [WorkItem.matter_id, WorkItem.line_item_id],
            viewonly = True)

    This throws:

        AttributeError: '_Null' object has no attribute 'table'

    That seems to be saying that the second clause in the primaryjoin returns an object of type _Null, but it seems to be expecting something with a "table" attribute. This seems like it should be pretty straightforward to me, am I missing something obvious?
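
    The usual culprit in this pattern is Python's `and`, which evaluates the two clause expressions eagerly instead of building a SQL conjunction, leaving the relation with only the bare `None` comparison. SQLAlchemy's and_() composes them properly; a sketch reusing the names from the question:

        from sqlalchemy import and_, orm

        Matter.unbilled_work_items = orm.relation(
            WorkItem,
            # and_() keeps both clauses as SQL expressions rather than letting
            # the Python `and` keyword discard the first one.
            primaryjoin = and_(Matter.id == WorkItem.matter_id,
                               WorkItem.line_item_id == None),
            foreign_keys = [WorkItem.matter_id, WorkItem.line_item_id],
            viewonly = True)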

  • Celery / Django Single Tasks being run multiple times

    - by felix001
    I'm facing an issue where I place a task into the queue and it is run multiple times. From the celery logs I can see that the same worker is running the task:

        [2014-06-06 15:12:20,731: INFO/MainProcess] Received task: input.tasks.add_queue
        [2014-06-06 15:12:20,750: INFO/Worker-2] starting runner..
        [2014-06-06 15:12:20,759: INFO/Worker-2] collection started
        [2014-06-06 15:13:32,828: INFO/Worker-2] collection complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] generation of steps complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] update created
        [2014-06-06 15:13:33,655: INFO/Worker-2] email sent
        [2014-06-06 15:13:33,656: INFO/Worker-2] update created
        [2014-06-06 15:13:34,420: INFO/Worker-2] email sent
        [2014-06-06 15:13:34,421: INFO/Worker-2] FINISH - Success

    However, when I view the actual logs of the application, it shows 5-6 log lines for each step. I'm using Django 1.6 with RabbitMQ. The task is placed on the queue by calling .delay() on a function that carries the task decorator; that function then calls a class which does the work. Has anyone any idea of the best way to troubleshoot this?

    Edit: as requested, here's the code.

    views.py (sending the data to the queue):

        from input.tasks import add_queue_project

        add_queue_project.delay(data)

    tasks.py:

        from celery.decorators import task

        @task()
        def add_queue_project(data):
            """ run project """
            logger = logging_setup(app="project")
            logger.info("starting project runner..")
            f = project_runner(data)
            f.main()

        class project_runner():
            """ main project runner """

            def __init__(self, data):
                self.data = data
                self.logger = logging_setup(app="project")

            def main(self):
                ...  # code

    settings.py:

        THIRD_PARTY_APPS = (
            'south',           # database migration helpers
            'crispy_forms',    # form layouts
            'rest_framework',
            'djcelery',
        )

        import djcelery
        djcelery.setup_loader()

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672        # default RabbitMQ listening port
        BROKER_USER = "test"
        BROKER_PASSWORD = "test"
        BROKER_VHOST = "test"
        CELERY_BACKEND = "amqp"   # telling Celery to report the results back to RabbitMQ
        CELERY_RESULT_DBURI = ""
        CELERY_IMPORTS = ("input.tasks", )

    The line I'm running to start celery is:

        python2.7 manage.py celeryd -l info

    Thanks,
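
    One frequent cause of the repeated application log lines (separate from the duplicate-run question) is a logging helper that attaches a new handler on every call; here logging_setup runs both in the task and in project_runner.__init__, so messages fan out once per accumulated handler. A purely hypothetical sketch of a guarded logging_setup, since the real helper isn't shown:

        import logging

        def logging_setup(app):
            """Return a logger for `app`, attaching a handler only on first use."""
            logger = logging.getLogger(app)
            if not logger.handlers:   # don't stack another handler per call
                handler = logging.StreamHandler()
                handler.setFormatter(logging.Formatter(
                    "[%(asctime)s: %(levelname)s] %(message)s"))
                logger.addHandler(handler)
                logger.setLevel(logging.INFO)
            return logger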

  • Wrong values reported by pyPDF for various box regions

    - by romor
    Using pyPdf, for most files the various box dimensions I get match what Acrobat reports. However, for some files pyPdf and Acrobat report different values, like:

        pyPdf:
            artBox:   595.3 x 841.9
            bleedBox: 595.3 x 841.9
            cropBox:  595.3 x 841.9
            trimBox:  517.3 x 754

        Acrobat:
            artBox:   439.35 x 666.13 pt
            bleedBox: 439.35 x 666.13 pt
            cropBox:  439.35 x 666.13 pt
            trimBox:  439.35 x 666.13 pt

    I thought it was a units issue, but then the ratio between widths and heights doesn't match either, not to mention the trimBox mismatch. The correct results are those reported by Acrobat, of course. Does someone know why this is, and is there a way to get the correct dimensions using pyPdf? Thanks.

    A couple of minutes later... After reading this question: "Are PDF box coordinates relative or absolute?" I realized I hadn't considered that the box's lower-left corner can be different from 0 (zero). It turned out that the box starts at 77.95 x 87.87, so if we reduce the reported trimBox values by this offset the correct result is obtained:

        artBox:   0 x 0
        bleedBox: 0 x 0
        cropBox:  0 x 0
        trimBox:  77.95 x 87.87

    The other boxes still seem to have misleading values, or I misinterpret them. Snippet:

        from pyPdf import PdfFileReader

        pdfread = PdfFileReader(file('my.pdf', 'rb'))
        page = 1
        width  = pdfread.getPage(page).trimBox[2] - pdfread.getPage(page).trimBox[0]
        height = pdfread.getPage(page).trimBox[3] - pdfread.getPage(page).trimBox[1]
        print width, height

  • Django - partially validating form

    - by aeter
    I'm new to Django and trying to process some forms. This is the model behind the form for entering information (creating a new ad) in one template:

        class Ad(models.Model):
            ...
            category = models.CharField("Category", max_length=30, choices=CATEGORIES)
            sub_category = models.CharField("Subcategory", max_length=4, choices=SUBCATEGORIES)
            location = models.CharField("Location", max_length=30, blank=True)
            title = models.CharField("Title", max_length=50)
            ...

    I validate the corresponding form with is_valid() just fine. In another template I want to use two fields from the same form ("category" and "sub_category") for filtering information, and there is_valid() does not work correctly, because it validates the entire form while I need to validate only those two fields. I have tried the following:

        ...
        if request.method == 'POST':
            # A filter for data has been submitted:
            form = AdForm(request.POST)
            try:
                form = form.clean()
                category = form.category
                sub_category = form.sub_category
                latest_ads_list = Ad.objects.filter(category=category)
            except ValidationError:
                latest_ads_list = Ad.objects.all().order_by('pub_date')
        else:
            latest_ads_list = Ad.objects.all().order_by('pub_date')
            form = AdForm()
        ...

    but it doesn't work. How can I validate only the two fields category and sub_category?
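
    A common approach is a small, separate filter form that declares only the two fields, so is_valid() checks exactly what the filter view needs; a sketch in which AdFilterForm is an assumed name and the choice constants come from the question:

        from django import forms

        class AdFilterForm(forms.Form):
            category = forms.ChoiceField(choices=CATEGORIES)
            sub_category = forms.ChoiceField(choices=SUBCATEGORIES)

        # In the filter view:
        if request.method == 'POST':
            form = AdFilterForm(request.POST)
            if form.is_valid():
                latest_ads_list = Ad.objects.filter(
                    category=form.cleaned_data['category'],
                    sub_category=form.cleaned_data['sub_category'])
            else:
                latest_ads_list = Ad.objects.all().order_by('pub_date')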

  • Fast image coordinate lookup in Numpy

    - by victor
    I've got a big numpy array full of coordinates (about 400):

        [[102, 234],
         [304, 104],
         ...]

    And a numpy 2d array my_map of size 800x800. What's the fastest way to look up the coordinates given in that array? I tried things like paletting as described in this post: http://opencvpython.blogspot.com/2012/06/fast-array-manipulation-in-numpy.html but couldn't get it to work. I was also thinking about turning each coordinate into a linear index of the map and then piping it straight into my_map like so:

        my_map[linearized_coords]

    but I couldn't get vectorize to properly translate the coordinates into a linear fashion. Any ideas?
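
    Integer (fancy) indexing does this lookup in one vectorized step; a sketch, assuming the pairs are (row, column) - swap the two columns if they are (x, y):

        import numpy as np

        my_map = np.arange(800 * 800).reshape(800, 800)
        coords = np.array([[102, 234], [304, 104]])   # ~400 rows in practice

        # Index the rows with the first column and the columns with the second.
        values = my_map[coords[:, 0], coords[:, 1]]

        # Equivalent flat-index route, if a linear index is really wanted.
        flat = np.ravel_multi_index((coords[:, 0], coords[:, 1]), my_map.shape)
        assert np.array_equal(values, my_map.ravel()[flat])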

  • Any way to set or overwrite the __line__ and __file__ metadata?

    - by charles.merriam
    I'm writing some code that needs to change function signatures. Right now I'm using Simionato's FunctionMaker class, which uses the (hacky) inspect module and does a compile. Unfortunately, this still loses the line and file metadata. Does anyone know:

        - if it is possible to overwrite these values in some odd way?
        - if hacking up a class with a complex __getattribute__() to intercept the values, while also trying to make the class look like a function, is any more possible than a moose with a flying nun hat?
        - is there an alternative to the (hacky) inspect module?
        - is PEP 362 dead dead dead?

    I know decorators and cPickle users fight with this. What other situations is the read-only metadata in people's way? I appreciate any insights. Thank you.
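
    For what it's worth, the line and file information lives on the function's code object (co_filename, co_firstlineno), and a function's __code__ can be swapped for a rebuilt one. A sketch using the replace() method that code objects grew in Python 3.8; on older interpreters the same effect needs the full types.CodeType constructor:

        def patch_location(func, filename, firstlineno):
            """Return func reporting a different source file and starting line."""
            func.__code__ = func.__code__.replace(co_filename=filename,
                                                  co_firstlineno=firstlineno)
            return func

        def example():
            pass

        patch_location(example, "generated.py", 42)
        print(example.__code__.co_filename, example.__code__.co_firstlineno)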

  • Lucene: Fastest way to return the document occurrence of a phrase?

    - by dont say the kid's name
    Hi guys, I am trying to use Lucene (actually PyLucene!) to find out how many documents contain my exact phrase. My code currently looks like this... but it runs rather slowly. Does anyone know a faster way to return document counts?

        phraseList = ["some phrase 1", "some phrase 2"]  # etc, a list of phrases...
        countsearcher = IndexSearcher(SimpleFSDirectory(File(STORE_DIR)), True)
        analyzer = StandardAnalyzer(Version.LUCENE_CURRENT)

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse("\"" + phrase + "\"")
            scoreDocs = countsearcher.search(query, 200).scoreDocs
            print "count is: " + str(len(scoreDocs))
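
    Since only the count is needed, collecting and scoring 200 hits per phrase can be skipped; a sketch of two lighter options, assuming this PyLucene build mirrors Lucene's TopDocs.totalHits field and TotalHitCountCollector class:

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse("\"" + phrase + "\"")

            # Option 1: ask for a single hit but read the total count.
            count = countsearcher.search(query, 1).totalHits

            # Option 2: a collector that only counts, never stores or scores documents.
            collector = TotalHitCountCollector()
            countsearcher.search(query, collector)
            count = collector.getTotalHits()

            print("count is: " + str(count))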

  • Transferring binary file from web server to client

    - by Yan Cheng CHEOK
    Usually, when I want to transfer a text file from the web server to the client, here is what I do:

        import cgi

        print "Content-Type: text/plain"
        print "Content-Disposition: attachment; filename=TEST.txt"
        print

        filename = "C:\\TEST.TXT"
        f = open(filename, 'r')
        for line in f:
            print line

    This works very well for an ANSI file. However, say I have a binary file a.exe (this file is in a secret path on the web server, and users shall not have direct access to that directory path). I wish to use a similar method to transfer it. How can I do so? What Content-Type should I use? Using print seems to corrupt the content received at the client side. What is the correct method?
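
    For binary data the usual recipe is application/octet-stream, opening the file in binary mode, and writing raw bytes with sys.stdout.write instead of print, which appends newlines and, on Windows, translates line endings. A minimal Python 2 CGI sketch with a hypothetical path (on Python 3 the bytes would go to sys.stdout.buffer):

        import sys

        filename = "C:\\SECRET\\a.exe"   # hypothetical path outside the web root

        sys.stdout.write("Content-Type: application/octet-stream\r\n")
        sys.stdout.write("Content-Disposition: attachment; filename=a.exe\r\n")
        sys.stdout.write("\r\n")

        # On Windows, stdout itself must be switched to binary mode so that
        # 0x0A bytes are not expanded to 0x0D 0x0A on the way out.
        if sys.platform == "win32":
            import msvcrt, os
            msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

        with open(filename, "rb") as f:
            sys.stdout.write(f.read())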

  • UnicodeDecodeError when redirecting to file

    - by zedoo
    Hi, I run this snippet twice in the Ubuntu terminal (encoding set to utf-8), once with ./test.py and then with ./test.py > out.txt:

        uni = u"\u001A\u0BC3\u1451\U0001D10C"
        print uni

    Without redirection it prints garbage. With redirection I get a UnicodeDecodeError. Can someone explain why I get the error only in the second case, or even better give a detailed explanation of what's going on behind the curtain in both cases?
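
    The difference is that on Python 2 print encodes unicode using sys.stdout.encoding, which is taken from the terminal's locale when stdout is a tty but is None when stdout is redirected to a file, so the ASCII fallback chokes on these code points. Encoding explicitly (or setting PYTHONIOENCODING) sidesteps the difference; a Python 2 sketch:

        import sys

        uni = u"\u001A\u0BC3\u1451\U0001D10C"

        # A tty gives sys.stdout.encoding (e.g. 'UTF-8'); a redirected stdout gives None.
        encoding = sys.stdout.encoding or "utf-8"
        sys.stdout.write(uni.encode(encoding) + "\n")

        # Alternative without touching the code:
        #   PYTHONIOENCODING=utf-8 ./test.py > out.txt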

  • Solving linear system over integers with numpy

    - by A. R. S.
    I'm trying to solve an overdetermined linear system of equations with numpy. Currently, I'm doing something like this (as a simple example):

        a = np.array([[1, 0], [0, 1], [-1, 1]])
        b = np.array([1, 1, 0])
        print np.linalg.lstsq(a, b)[0]
        # [ 1.  1.]

    This works, but uses floats. Is there any way to solve the system over integers only? I've tried something along the lines of

        print map(int, np.linalg.lstsq(a, b)[0])
        # [0, 1]

    in order to convert the solution to an array of ints, expecting [1, 1], but clearly I'm missing something. Could anyone point me in the right direction?
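
    The least-squares solution comes back as something like 0.9999999, and int() truncates toward zero, which is where the [0, 1] comes from; rounding to the nearest integer before converting usually gives the expected vector. A sketch (for exact integer or rational solutions a symbolic tool such as sympy would be the safer route):

        import numpy as np

        a = np.array([[1, 0], [0, 1], [-1, 1]])
        b = np.array([1, 1, 0])

        x = np.linalg.lstsq(a, b, rcond=None)[0]

        # Round to the nearest integers instead of truncating with int().
        x_int = np.rint(x).astype(int)
        print(x_int)                          # [1 1]

        # Sanity check that the rounded vector still fits the system closely.
        print(np.allclose(a.dot(x_int), b))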

  • Passing parameter to base class constructor or using instance variable?

    - by deamon
    All classes derived from a certain base class have to define an attribute called "path". In the sense of duck typing I could rely upon the definition in the subclasses:

        class Base:
            pass  # no "path" variable here

        class Sub(Base):
            def __init__(self):
                self.path = "something/"

    Another possibility would be to use the base class constructor:

        class Base:
            def __init__(self, path):
                self.path = path

        class Sub(Base):
            def __init__(self):
                super().__init__("something/")

    What would you prefer and why? Is there a better way?

  • Send Special Keys to Gtk.VteTerminal

    - by Ubersoldat
    Hi, I have this OSS project called Monocaffe connections manager, which uses the Gtk.VteTerminal widget from PyGTK. A nice feature is that it allows the users to send commands to different servers' consoles (cluster mode) using a Gtk.TextView for the input. The way I send keystrokes to each Gtk.VteTerminal is with the feed_child method. For common keys there's no problem: I simply feed what the TextView receives to all the terminals. But when doing so with special keys I get into a little trouble. For "Return" I catch the event and feed the terminal a '\n'. For backspace it's the same, catch the event and feed a '\b':

        def cluster_backspace(self, widget):
            return self.cluster_send_key('\b')

    The problem comes with other keys like Tab, the arrows and Esc, which I don't know how to feed as a str so that the terminal recognizes them. In the case of Esc it's a real pain, because the users can edit the same file on different servers using vi, but cannot escape insert mode. Anyway, I'm not looking for a complete solution, just ideas, since I've run out of them. Thanks.
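
    Since feed_child ultimately hands raw bytes to the child's pty, the special keys can be sent as their terminal escape sequences; a sketch of a lookup table under that assumption (cluster_send_key is the method from the question, and the arrow sequences shown are for normal mode - application mode uses '\x1bO' prefixes instead of '\x1b['):

        # Raw byte sequences a terminal child process expects for special keys.
        SPECIAL_KEYS = {
            'Escape':    '\x1b',     # a plain ESC is enough to leave vi insert mode
            'Tab':       '\t',
            'Return':    '\n',
            'BackSpace': '\b',
            'Up':        '\x1b[A',
            'Down':      '\x1b[B',
            'Right':     '\x1b[C',
            'Left':      '\x1b[D',
        }

        def cluster_special_key(self, keyname):
            seq = SPECIAL_KEYS.get(keyname)
            if seq is not None:
                return self.cluster_send_key(seq)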

  • How do I use multiple settings files in Django with multiple sites on one server?

    - by William Bing Hua
    I have an EC2 instance running Ubuntu 14.04 and I want to host two sites from it. For my first site I have two settings files, production_settings.py and settings.py (for local development). I import the local settings into the production settings and override any settings in the production settings file. Since my production settings file is not named settings.py (the default), I have to set an environment variable:

        DJANGO_SETTINGS_MODULE='site1.production_settings'

    However, because of this, whenever I try to start my second site it says:

        No module named site1.production_settings

    I am assuming that this is due to me setting the environment variable. Another problem is that I won't be able to use different settings files for different sites. How do I use two different settings files for two different websites?
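
    A shell-wide DJANGO_SETTINGS_MODULE is shared by every process launched from that environment, so it is usually cleaner to pin the settings module per site, either in each site's wsgi.py or with --settings on manage.py. A sketch, where site2 and its module path are assumed names:

        # site1/wsgi.py
        import os
        from django.core.wsgi import get_wsgi_application

        # Assign directly (rather than setdefault) so a stray environment
        # variable from the shell cannot leak into this site.
        os.environ['DJANGO_SETTINGS_MODULE'] = 'site1.production_settings'
        application = get_wsgi_application()

        # site2/wsgi.py does the same with 'site2.production_settings', and
        # management commands can pick a file explicitly:
        #   python manage.py migrate --settings=site2.production_settings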

  • Load image from string

    - by zaf
    Given a string containing jpeg image data, is it possible to load this directly in pygame? I've tried using StringIO but failed, and I don't completely understand the 'file-like' object concept. Currently, as a workaround, I'm saving to disk and then loading the image the standard way:

        # imagestring contains a jpeg
        f = open('test.jpg', 'wb')
        f.write(imagestring)
        f.close()
        image = pygame.image.load('test.jpg')

    Any suggestions on improving this so that we avoid creating a temp file?
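
    pygame.image.load also accepts a file-like object, with an optional second "namehint" argument that tells it which decoder to use when the stream has no real filename; a sketch wrapping the string in an in-memory buffer under that assumption:

        import io
        import pygame

        imagestring = open('test.jpg', 'rb').read()   # stand-in for the jpeg data already in memory

        # Python 2: from StringIO import StringIO; buf = StringIO(imagestring)
        buf = io.BytesIO(imagestring)                 # no temp file on disk

        # The 'dummy.jpg' hint only supplies the extension so pygame picks the jpeg loader.
        image = pygame.image.load(buf, 'dummy.jpg')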

  • How to create and restore a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and used to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta

        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit after the fact, and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.
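
    loads() only rebuilds detached objects; nothing reaches the database until they are attached to a session and the session is committed, and merge() is the usual way to attach instances whose primary keys may already exist. A sketch reusing the module paths from the question:

        from myproject.model import meta
        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads

        restored = loads(serialized_data, meta.metadata, Session)

        for obj in restored:
            # merge() copies the detached object's state into the session,
            # issuing an INSERT or UPDATE as appropriate.
            Session.merge(obj)

        Session.commit()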

  • Referencing other modules in atexit

    - by Dmitry Risenberg
    I have a function that is responsible for killing a child process when the program ends:

        class MySingleton:
            def __init__(self):
                import atexit
                atexit.register(self.stop)

            def stop(self):
                os.kill(self.sel_server_pid, signal.SIGTERM)

    However, I get an error message when this function is called:

        Traceback (most recent call last):
          File "/usr/lib/python2.5/atexit.py", line 24, in _run_exitfuncs
            func(*targs, **kargs)
          File "/home/commando/Development/Diploma/streaminatr/stream/selenium_tests.py", line 66, in stop
            os.kill(self.sel_server_pid, signal.SIGTERM)
        AttributeError: 'NoneType' object has no attribute 'kill'

    It looks like the os and signal modules get unloaded before atexit is called. Re-importing them solves the problem, but this behaviour seems weird to me: these modules are imported before I register my handler, so why are they unloaded before my own exit handler runs?
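
    Besides re-importing inside the handler, a common defence is to capture the pieces of os and signal the handler needs at definition time, so interpreter shutdown clearing module globals cannot take them away; a sketch reusing the attribute name from the question:

        import atexit
        import os
        import signal

        class MySingleton(object):
            def __init__(self):
                atexit.register(self.stop)

            def stop(self, _kill=os.kill, _sigterm=signal.SIGTERM):
                # The default arguments are bound now, while os and signal are
                # intact, so the handler still works if the globals are cleared.
                _kill(self.sel_server_pid, _sigterm)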
