Search Results

Search found 20677 results on 828 pages for 'python team'.

Page 410 of 828

  • How to write data by dynamic parameter name

    - by Maxim Welikobratov
    I need to be able to write data to the Google App Engine datastore for some known entity, but I don't want to write an assignment statement for every parameter of the entity. I mean, I don't want to do this:

        val_1 = self.request.get('prop_1')
        val_2 = self.request.get('prop_2')
        ...
        val_N = self.request.get('prop_N')
        item.prop_1 = val_1
        item.prop_2 = val_2
        ...
        item.prop_N = val_N
        item.put()

    Instead, I want to do something like this:

        args = self.request.arguments()
        for prop_name in args:
            item.set(prop_name, self.request.get(prop_name))
        item.put()

    Does anybody know how to do this trick?
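
    A minimal sketch of the dynamic-assignment idea (not the poster's code): Python's built-in setattr covers this inside a webapp handler, assuming item is an existing db.Model entity whose properties match the request parameters (a db.Expando would also accept arbitrary names).

        # Sketch: assign each request parameter to the entity dynamically.
        for prop_name in self.request.arguments():
            setattr(item, prop_name, self.request.get(prop_name))
        item.put()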

    Read the article

  • Creating a custom widget using django for use on external sites

    - by ajt
    I have a new site that I am putting together, and part of it shows statistics for the site's users. I would like to create a widget that others can use on another website by invoking JavaScript that reads data from my server and shows the statistics for a given user, but I am having a hard time finding specific tutorials that cover this in Django. I have seen the link at Alex Marandon's site [0], but it looks to me like that is passing HTML back to the widget, and I am having a hard time figuring out how to do this using something like XML. Are there any Django apps for doing this, or does anyone know of good how-tos? [0] http://alexmarandon.com/articles/web_widget_jquery/
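
    One common approach (a sketch, not from the question): serve the statistics from a Django view as JSONP so the embedded JavaScript can fetch them cross-domain. The view name, URL, callback parameter, and sample fields below are all hypothetical.

        # Hypothetical Django view returning user stats as JSONP for an embeddable widget.
        import json
        from django.http import HttpResponse

        def widget_stats(request, username):
            stats = {'username': username, 'story_count': 0}   # placeholder payload
            callback = request.GET.get('callback', 'callback')
            body = '%s(%s);' % (callback, json.dumps(stats))
            return HttpResponse(body, content_type='application/javascript')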

    Read the article

  • def constrainedMatchPair(firstMatch,secondMatch,length):

    - by smart
    Consider matches of a key string in a target string, where one of the elements of the key string is replaced by a different element. For example, if we want to match ATGC against ATGACATGCACAAGTATGCAT, we know there is an exact match starting at 5 and a second one starting at 15. However, there is another match starting at 0, in which the element A is substituted for C in the key, that is, we match ATGC against the target. Similarly, the key ATTA matches this target starting at 0, if we allow a substitution of G for the second T in the key string.

    Consider the following steps. First, break the key string into two parts (where one of the parts could be an empty string). Let's call them key1 and key2. For each part, use your function from Problem 2 to find the starting points of possible matches, that is, invoke starts1 = subStringMatchExact(target, key1) and starts2 = subStringMatchExact(target, key2). The result of these two invocations should be two tuples, each indicating the starting points of matches of the two parts (key1 and key2) of the key string in the target. For example, if we consider the key ATGC, we could consider matching A and GC against a target like ATGACATGCA (in which case we would get as locations of matches for A the tuple (0, 3, 5, 9) and as locations of matches for GC the tuple (7,)). Of course, we would want to search over all possible choices of substrings with a missing element: the empty string and TGC; A and GC; AT and C; and ATG and the empty string. Note that we can use your solution for Problem 2 to find these values.

    Once we have the locations of starting points for matches of the two substrings, we need to decide which combinations of a match from the first substring and a match of the second substring are correct. There is an easy test for this. Suppose that the index for the starting point of the match of the first substring is n (which would be an element of starts1), and that the length of the first substring is m. Then if k is an element of starts2, denoting the index of the starting point of a match of the second substring, there is a valid match with one substitution starting at n if n+m+1 = k, since this means that the second substring match starts one element beyond the end of the first substring.

    Finally, the question: write a function, called constrainedMatchPair, which takes three arguments: a tuple representing starting points for the first substring, a tuple representing starting points for the second substring, and the length of the first substring. The function should return a tuple of all members (call it n) of the first tuple for which there is an element in the second tuple (call it k) such that n+m+1 = k, where m is the length of the first substring.
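
    A direct sketch of the function as specified above (the tuple arguments and the length convention follow the problem statement):

        def constrainedMatchPair(firstMatch, secondMatch, length):
            # Keep each start n of the first part for which some start k of the
            # second part begins exactly one element past its end: n + length + 1 == k.
            return tuple(n for n in firstMatch if n + length + 1 in secondMatch)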

    Read the article

  • Validating key/certificate pairs with M2Crypto when a certificate chain is needed

    - by Charles Duffy
    M2Crypto.X509.X509 objects have a verify(pkey) method, which provides a means of testing that a given certificate does in fact sign a specified key. This is a good and useful thing -- except that sometimes the certificate I want to verify in this way is invalid without the use of an intermediate certificate, which this API does not appear to allow a way to specify. Is there an alternate means of validating a certificate / private key pair which will work even when the certificate is unable to stand alone?

    Read the article

  • Can I create class properties during __new__ or __init__?

    - by 007brendan
    I want to do something like the following. The _print_attr function is designed to be called lazily, so I don't want to evaluate it in __init__ and set the value to attr; I would like to make attr a property that computes _print_attr only when accessed:

        class Base(object):
            def __init__(self):
                for attr in self._edl_uniform_attrs:
                    setattr(self, attr, property(lambda self: self._print_attr(attr)))

            def _print_attr(self, attr):
                print attr

        class Child(Base):
            _edl_uniform_attrs = ['foo', 'bar']

        me = Child()
        me.foo
        me.bar

        #output:
        #"foo"
        #"bar"
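
    A sketch of one way this can work: properties have to live on the class rather than the instance, and the loop variable needs to be bound per attribute (here via a default argument) so the usual late-binding problem with lambdas does not make every property print the last name. This version runs the loop at class level instead of in __init__.

        class Base(object):
            def _print_attr(self, attr):
                print attr

        class Child(Base):
            _edl_uniform_attrs = ['foo', 'bar']

        # Attach one property per attribute name to the class itself.
        for name in Child._edl_uniform_attrs:
            setattr(Child, name,
                    property(lambda self, attr=name: self._print_attr(attr)))

        me = Child()
        me.foo   # prints "foo"
        me.bar   # prints "bar"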

    Read the article

  • Breadth first search all paths

    - by Amndeep7
    First of all, thank you for looking at this question. For a school assignment we're supposed to create a BFS algorithm and use it to do various things. One of these things is that we're supposed to find all of the paths between the root and the goal nodes of a graph. I have no idea how to do this, as I can't find a way to keep track of all of the alternate routes without also including copies/cycles. Here is my BFS code:

        def makePath(predecessors, last):
            return makePath(predecessors, predecessors[last]) + [last] if last else []

        def BFS1b(node, goal):
            Q = [node]
            predecessor = {node: None}
            while Q:
                current = Q.pop(0)
                if current[0] == goal:
                    return makePath(predecessor, goal)
                for subnode in graph[current[0]][2:]:
                    if subnode[0] not in predecessor:
                        predecessor[subnode[0]] = current[0]
                        Q.append(subnode[0])

    A conceptual push in the right direction would be greatly appreciated. tl;dr: How do I use BFS to find all of the paths between two nodes?
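
    One conceptual sketch (using a plain adjacency-dict graph rather than the asker's graph structure): keep whole paths in the queue instead of single nodes, and only refuse to revisit nodes already on the current path, so alternative routes survive without cycles.

        from collections import deque

        def all_paths_bfs(graph, root, goal):
            # graph: {node: [neighbour, ...]}
            paths = []
            queue = deque([[root]])
            while queue:
                path = queue.popleft()
                node = path[-1]
                if node == goal:
                    paths.append(path)
                    continue
                for neighbour in graph.get(node, []):
                    if neighbour not in path:        # no cycles within one path
                        queue.append(path + [neighbour])
            return paths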

    Read the article

  • How do I upload files to a Google App Engine app when the field name is not known

    - by Michael Neale
    I have tried a few options, none of which seem to work. If I have a simple multipart form with a named field, it works well, but when I don't know the name I can't just grab all the files in the request. I have looked at http://stackoverflow.com/questions/81451/upload-files-in-google-app-engine and it doesn't seem suitable (or to actually work, as someone mentioned the code snippet is untested).
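
    A hedged sketch of one way to find uploads without knowing the field names, assuming the webapp framework's WebOb-based request, where uploaded fields show up in request.POST as cgi.FieldStorage objects with a filename attribute:

        import cgi
        from google.appengine.ext import webapp

        class UploadHandler(webapp.RequestHandler):
            def post(self):
                for name, field in self.request.POST.items():
                    # Only file fields carry a filename; plain fields are strings.
                    if isinstance(field, cgi.FieldStorage) and field.filename:
                        data = field.value   # raw uploaded bytes
                        self.response.out.write('%s: %d bytes\n' % (field.filename, len(data)))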

    Read the article

  • Iterate with binary structure over numpy array to get cell sums

    - by Curlew
    The package scipy has a function to define a binary structure (such as a taxicab (2,1) or a chessboard (2,2)):

        import numpy
        from scipy import ndimage

        a = numpy.zeros((6,6), dtype=numpy.int)
        a[1:5, 1:5] = 1; a[3,3] = 0; a[2,2] = 2
        s = ndimage.generate_binary_structure(2,2)  # Binary structure
        # .... calculate sum of result
        result_array = numpy.zeros_like(a)

    What I want is to iterate over all cells of this array with the given structure s, then apply a function (for example, sum) to the values of all cells covered by the structure and write the result for the current cell into an empty array. For example:

        array([[0, 0, 0, 0, 0, 0],
               [0, 1, 1, 1, 1, 0],
               [0, 1, 2, 1, 1, 0],
               [0, 1, 1, 0, 1, 0],
               [0, 1, 1, 1, 1, 0],
               [0, 0, 0, 0, 0, 0]])

    This is the array a. The value in cell (1,2) is currently one. Given the structure s and an example function such as sum, the value in the resulting array (result_array) becomes 7 (or 6 if the current cell value is excluded). Someone got an idea?
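
    One way to do this kind of neighbourhood operation (a sketch, not the poster's code) is scipy.ndimage.generic_filter, which slides the binary structure over the array as a footprint and applies a function to the values it covers:

        import numpy
        from scipy import ndimage

        a = numpy.zeros((6, 6), dtype=numpy.int)
        a[1:5, 1:5] = 1; a[3, 3] = 0; a[2, 2] = 2
        s = ndimage.generate_binary_structure(2, 2)

        # Sum over the 3x3 footprint, centre cell included; cells outside the array count as 0.
        result_array = ndimage.generic_filter(a, numpy.sum, footprint=s, mode='constant', cval=0)
        print result_array[1, 2]   # 7; subtract a[1, 2] to exclude the centre cell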

    Read the article

  • A good data model for finding a user's favorite stories

    - by wings
    Original Design

    Here's how I originally had my models set up:

        class UserData(db.Model):
            user = db.UserProperty()
            favorites = db.ListProperty(db.Key)  # list of story keys
            # ...

        class Story(db.Model):
            title = db.StringProperty()
            # ...

    On every page that displayed a story I would query UserData for the current user:

        user_data = UserData.all().filter('user =', users.get_current_user()).get()
        story_is_favorited = (story in user_data.favorites)

    New Design

    After watching this talk: Google I/O 2009 - Scalable, Complex Apps on App Engine, I wondered if I could set things up more efficiently:

        class FavoriteIndex(db.Model):
            favorited_by = db.StringListProperty()

    The Story model is the same, but I got rid of the UserData model. Each instance of the new FavoriteIndex model has a Story instance as a parent, and each FavoriteIndex stores a list of user ids in its favorited_by property. If I want to find all of the stories that have been favorited by a certain user:

        index_keys = FavoriteIndex.all(keys_only=True).filter('favorited_by =', users.get_current_user().user_id())
        story_keys = [k.parent() for k in index_keys]
        stories = db.get(story_keys)

    This approach avoids the serialization/deserialization that's otherwise associated with the ListProperty.

    Efficiency vs Simplicity

    I'm not sure how efficient the new design is, especially after a user decides to favorite 300 stories, but here's why I like it:

    1. A favorited story is associated with a user, not with her user data.
    2. On a page where I display a story, it's pretty easy to ask the story if it's been favorited (without calling up a separate entity filled with user data):

        fav_index = FavoriteIndex.all().ancestor(story).get()
        fav_of_current_user = users.get_current_user().user_id() in fav_index.favorited_by

    3. It's also easy to get a list of all the users who have favorited a story (using the method in #2).

    Is there an easier way? Please help. How is this kind of thing normally done?

    Read the article

  • Diminishing programmer wants to get back to programming

    - by Marcus TV
    I last programmed actively in 2002; it has been almost 8 years now. I learned C and then moved to Visual Basic for our thesis project at university. I would like to ask for suggestions on which programming language I should learn and put to profitable use in areas such as desktop applications, web development, and database applications.

    Read the article

  • Need a workaround to filter on related model and aggregated fields in Django

    - by parxier
    I opened a ticket for this problem. In a nutshell, here is my model:

        class Plan(models.Model):
            cap = models.IntegerField()

        class Phone(models.Model):
            plan = models.ForeignKey(Plan, related_name='phones')

        class Call(models.Model):
            phone = models.ForeignKey(Phone, related_name='calls')
            cost = models.IntegerField()

    I want to run a query like this one:

        Phone.objects.annotate(total_cost=Sum('calls__cost')).filter(total_cost__gte=0.5*F('plan__cap'))

    Unfortunately Django generates bad SQL:

        SELECT "app_phone"."id", "app_phone"."plan_id", SUM("app_call"."cost") AS "total_cost"
        FROM "app_phone"
        INNER JOIN "app_plan" ON ("app_phone"."plan_id" = "app_plan"."id")
        LEFT OUTER JOIN "app_call" ON ("app_phone"."id" = "app_call"."phone_id")
        GROUP BY "app_phone"."id", "app_phone"."plan_id"
        HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"."cap"

    and errors with:

        ProgrammingError: column "app_plan.cap" must appear in the GROUP BY clause or be used in an aggregate function
        LINE 1: ...."plan_id" HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"....

    Is there any workaround apart from running raw SQL?
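
    For reference, a hedged sketch of the raw-SQL fallback the question mentions (assuming PostgreSQL and Django's Manager.raw(); the only change to the generated SQL is adding app_plan.cap to the GROUP BY clause):

        phones = Phone.objects.raw("""
            SELECT app_phone.id, app_phone.plan_id, SUM(app_call.cost) AS total_cost
            FROM app_phone
            INNER JOIN app_plan ON app_phone.plan_id = app_plan.id
            LEFT OUTER JOIN app_call ON app_phone.id = app_call.phone_id
            GROUP BY app_phone.id, app_phone.plan_id, app_plan.cap
            HAVING SUM(app_call.cost) >= 0.5 * app_plan.cap
        """)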

    Read the article

  • Check result of AX_PYTHON_MODULE in configure.ac

    - by tmatth
    When using the m4_ax_python_module.m4 macro (AX_PYTHON_MODULE) in configure.ac, one can find out at configure time whether a given module is installed. It takes two arguments: the module name, and a second argument which, if not empty, leads to an exit -- useful when the module is a must-have. In the case where you don't want a fatal exit, how do you test in configure.ac which modules were found? They output "yes" or "no" when configure is run, but that's all I've found so far. Basically, if I have these lines in configure.ac:

        AX_PYTHON_MODULE(json,[])
        AX_PYTHON_MODULE(simplejson,[])

    how do I test which of the two modules were found? See http://www.gnu.org/software/autoconf-archive/ax_python_module.html#ax_python_module for documentation about this macro.

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts); b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e. B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that I now have 1,000 a's, assume the average number of b's per a is constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1,000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
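
    A hedged sketch of one way to avoid the per-row .ix/.irow lookups: group AA by 'a' once and run a single vectorized searchsorted per group (column names as in the question; only numpy/pandas calls already used above):

        import numpy as np
        import pandas as pd

        parts = []
        for key, group in AA.groupby('a'):
            ts_b = np.asarray(B.ts[key])                    # sorted B timestamps for this 'a'
            idx = np.searchsorted(ts_b, group['ts'].values) # first position with ts_b >= ts
            chunk = group.copy()
            chunk['ts_next'] = ts_b[idx]
            parts.append(chunk)

        C = pd.concat(parts).set_index(['a', 'b'])['ts_next']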

    Read the article

  • Why do all modules run together?

    - by gunbuster363
    I just made a fresh copy of Eclipse and installed PyDev. In my first trial of PyDev with Eclipse, I created 2 modules under the src package (the default one).

    FirstModule.py:

        '''
        Created on 18.06.2009
        @author: Lars Vogel
        '''
        def add(a,b):
            return a+b

        def addFixedValue(a):
            y = 5
            return y + a

        print "123"

    run.py:

        '''
        Created on Jun 20, 2011
        @author: Raymond.Yeung
        '''
        from FirstModule import add
        print add(1,2)
        print "Helloword"

    When I pull out the pull-down menu of the run button and click "ProjectName run.py", here is the result:

        123
        3
        Helloword

    Apparently both modules ran. Why? Is this the default setting?
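
    For reference, a sketch (not part of the question) of the usual way to keep module-level demo code from running on import: importing FirstModule executes its top-level statements, so guard them with the standard __main__ check.

        # FirstModule.py -- guarded version
        def add(a, b):
            return a + b

        if __name__ == '__main__':
            # Runs only when FirstModule.py is executed directly, not on import.
            print "123"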

    Read the article

  • Differentiate gtk.Entry icons

    - by Ubersoldat
    I'm adding two icons to a gtk.Entry in PyGTK. The icons' signals are handled by the following method:

        def entry_icon_event(self, widget, position, event)

    I'm trying to differentiate between the two of them:

        <enum GTK_ENTRY_ICON_PRIMARY of type GtkEntryIconPosition>
        <enum GTK_ENTRY_ICON_SECONDARY of type GtkEntryIconPosition>

    How can I do this? I've been digging through the documentation of PyGTK but there's no object GtkEntryIconPosition nor any definition for these enums. Thanks
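
    A hedged sketch of the comparison intended here, assuming PyGTK exposes the GtkEntryIconPosition values as module-level constants named gtk.ENTRY_ICON_PRIMARY and gtk.ENTRY_ICON_SECONDARY (PyGTK's usual naming for GTK enums):

        import gtk

        def entry_icon_event(widget, position, event):
            # `position` is the icon-position enum delivered with the signal.
            if position == gtk.ENTRY_ICON_PRIMARY:
                print "primary icon activated"
            elif position == gtk.ENTRY_ICON_SECONDARY:
                print "secondary icon activated"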

    Read the article

  • Iterating dictionary indexes in django templates

    - by unclaimedbaggage
    Hi folks... I have a dictionary with embedded objects, which looks something like this:

        notes = {
            2009: [<Note: Test note>, <Note: Another test note>],
            2010: [<Note: Third test note>, <Note: Fourth test note>],
        }

    I'm trying to access each of the Note objects inside a Django template, and having a helluva time navigating to them. In short, I'm not sure how to extract by index in Django templating. The current template code is:

        <h3>Notes</h3>
        {% for year in notes %}
            {{ year }}                     # Works fine
            {% for note in notes.year %}
                {{ note }}                 # Returns blank
            {% endfor %}
        {% endfor %}

    If I replace {% for note in notes.year %} with {% for note in notes.2010 %} things work fine, but I need that '2010' to be dynamic. Any suggestions much appreciated.
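
    A sketch of the usual workaround (standard Django template syntax, not the poster's code): iterate over the dictionary's items so the year and its list of notes are both loop variables, avoiding the dynamic-key lookup entirely:

        <h3>Notes</h3>
        {% for year, year_notes in notes.items %}
            {{ year }}
            {% for note in year_notes %}
                {{ note }}
            {% endfor %}
        {% endfor %}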

    Read the article

  • How to display multiple images?

    - by misterwebz
    I'm trying to get multiple image paths from my database in order to display them, but it currently doesn't work. Here's what I'm using:

        def get_image(self, userid, id):
            image = meta.Session.query(Image).filter_by(userid=userid)
            permanent_file = open(image[id].image_path, 'rb')
            if not os.path.exists(image.image_path):
                return 'No such file'
            data = permanent_file.read()
            permanent_file.close()
            response.content_type = guess_type(image.image_path)[0] or 'text/plain'
            return data

    I'm getting an error regarding this part: image[id].image_path. What I want is for Pylons to display several jpg files on one page. Any idea how I could achieve this?

    Read the article

  • Improving the join of two wave files?

    - by kaki
    I have written code for joining two wave files. It works fine when I am joining larger segments, but as I need to join very small segments the clarity is not good. I have learned that a signal processing technique such as a windowed join can be used to improve the joining of files:

        y[n] = w[n] * s[n]

    i.e. multiply the value of the signal at sample number n by the value of a windowing function, here a Hamming window:

        w[n] = 0.54 - 0.46 * cos(2*pi*n/L),  0 <= n <= L

    I am not understanding how to get the value of the signal at sample n, or how to implement this. The code I am using for joining is:

        import wave

        m = ['C:/begpython/S0001_0002.wav', 'C:/begpython/S0001_0001.wav']
        i = 1
        a = m[i]
        infiles = [a, "C:/begpython/S0001_0002.wav", a]
        outfile = "C:/begpython/S0001_00367.wav"
        data = []
        data1 = []
        for infile in infiles:
            w = wave.open(infile, 'rb')
            data1 = [w.getnframes]
            data.append([w.getparams(), w.readframes(w.getnframes())])
            #data1 = [ord(character) for character in data1]
            #print data1
            #data1 = ''.join(chr(character) for character in data1)
            w.close()

        output = wave.open(outfile, 'wb')
        output.setparams(data[0][0])
        output.writeframes(data[0][1])
        output.writeframes(data[1][1])
        output.writeframes(data[2][1])
        output.close()
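
    A sketch of the windowing formula itself (plain Python; `samples` is a hypothetical list of integer sample values -- the frames read via the wave module would first have to be unpacked, e.g. with the struct or audioop modules):

        import math

        def hamming(L):
            # w[n] = 0.54 - 0.46*cos(2*pi*n/L), 0 <= n <= L
            return [0.54 - 0.46 * math.cos(2.0 * math.pi * n / L) for n in range(L + 1)]

        def apply_window(samples):
            # y[n] = w[n] * s[n]
            w = hamming(len(samples) - 1)
            return [wn * sn for wn, sn in zip(w, samples)]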

    Read the article

  • Writing csv header removes data from numpy array written below

    - by user338095
    I'm trying to export data to a csv file. It should contain a header (built from dataset) and the restacked arrays with my data (from datastack). One line in datastack has the same length as dataset. The code below works, but it removes parts of the first line from datastack. Any ideas why that could be?

        s = ','.join(itertools.chain(dataset)) + '\n'
        newfile = 'export.csv'
        f = open(newfile, 'w')
        f.write(s)
        numpy.savetxt(newfile, (numpy.transpose(datastack)), delimiter=', ')
        f.close()
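
    For reference, a sketch of one common fix (not the poster's code): two writers touch the same file here -- f buffers the header while numpy.savetxt reopens the filename separately -- so they overwrite each other. Writing everything through the single open file object avoids that; dataset and datastack are the question's own (undefined here) variables.

        import itertools
        import numpy

        f = open('export.csv', 'w')
        f.write(','.join(itertools.chain(dataset)) + '\n')            # header row
        numpy.savetxt(f, numpy.transpose(datastack), delimiter=', ')  # data rows below the header
        f.close()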

    Read the article

  • Appengine filter inequality and ordering fails

    - by davezor
    I think I'm overlooking something simple here; I can't imagine this is impossible to do. I want to filter by a datetime attribute and then order the result by a ranking integer attribute. When I try to do this:

        query.filter("submitted >=", thisweek).order("ranking")

    I get the following:

        BadArgumentError: First ordering property must be the same as inequality filter property, if specified for this query; received ranking, expected submitted

    Huh? What am I missing? Thanks.
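
    A sketch of the usual workaround for this datastore restriction (either order on the inequality property first, or skip the datastore sort and sort the fetched entities in Python; the fetch size of 100 is only illustrative):

        # Satisfy the datastore rule, then re-sort by ranking in memory.
        results = query.filter("submitted >=", thisweek).order("submitted").fetch(100)
        results.sort(key=lambda entity: entity.ranking)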

    Read the article

  • Rewriting Live TCP/IP (Layer 4) (i.e. Socket Layer) Streams

    - by user213060
    I have a simple problem which I'm sure someone here has done before... I want to rewrite Layer 4 TCP/IP streams (not lower-layer individual packets or frames). Ettercap's etterfilter command lets you perform simple live replacements of Layer 4 TCP/IP streams based on fixed strings or regexes. Example ettercap scripting code:

        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "gzip")) {
                replace("gzip", " ");
                msg("whited out gzip\n");
            }
        }

        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "deflate")) {
                replace("deflate", " ");
                msg("whited out deflate\n");
            }
        }

    (See http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833.)

    I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to VPN software or something?

    I would like to have a configuration similar to Ettercap's silent bridged sniffing configuration between two Ethernet interfaces. This way I can silently filter traffic coming from either direction with no NATing problems. Note that my filter is an application that acts as a pipe filter, similar to the design of unix command-line filters:

        [eth0] <----------> [my filter] <----------> [eth1]

    What I am already aware of, but is not suitable:

    - Tun/Tap - works at the lower packet layer; I need to work with the higher-layer streams.
    - Ettercap - I can't find any way to do replacements other than the restricted capabilities in the example above.
    - Hooking into some VPN software? - I just can't figure out which, or exactly how.
    - libnetfilter_queue - works with lower-layer packets, not TCP/IP streams.

    Again, the rewriting should occur at the transport layer (Layer 4), as it does in this example, instead of a lower-layer packet-based approach. Exact code will help immensely! Thanks!
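
    One generic stream-level approach, as a hedged sketch (unrelated to Ettercap, and it requires traffic to be redirected to it rather than silently bridged): a small TCP proxy that passes each direction of a forwarded connection through a user-supplied rewrite function. A real filter would also need to handle patterns that straddle chunk boundaries.

        import socket
        import threading

        def pump(src, dst, rewrite):
            # Copy one direction of the stream, passing each chunk through `rewrite`.
            while True:
                chunk = src.recv(4096)
                if not chunk:
                    break
                dst.sendall(rewrite(chunk))

        def proxy(listen_port, upstream_host, upstream_port, rewrite=lambda data: data):
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(('', listen_port))
            server.listen(5)
            while True:
                client, _ = server.accept()
                upstream = socket.create_connection((upstream_host, upstream_port))
                # Rewrite client->server traffic; pass server->client traffic through untouched.
                threading.Thread(target=pump, args=(client, upstream, rewrite)).start()
                threading.Thread(target=pump, args=(upstream, client, lambda data: data)).start()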

    Read the article

  • Django Piston - how can I create custom methods?

    - by orokusaki
    I put my questions in the code comments for clarity:

        from piston.handler import AnonymousBaseHandler

        class AnonymousAPITest(AnonymousBaseHandler):
            fields = ('update_subscription',)

            def update_subscription(self, request, months):
                # Do some stuff here to update a subscription based on the
                # number of months provided.
                # How the heck can I call this method?
                return {'msg': 'Your subscription has been updated!'}

            def read(self, request):
                return {
                    'msg': 'Why would I need a read() method on a fully custom API?'
                }

    Read the article
