Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.

Page 388/548 | < Previous Page | 384 385 386 387 388 389 390 391 392 393 394 395  | Next Page >

  • pyramid traversal resource url no attribute __name__

    - by Santana
    So I have: resources.py: def _add(obj, name, parent): obj.__name__ = name obj.__parent__ = parent return obj class Root(object): __parent__ = __name__ = None def __init__(self, request): super(Root, self).__init__() self.request = request self.collection = request.db.post def __getitem__(self, key): if u'profile' in key: return Profile(self.request) class Profile(dict): def __init__(self, request): super(Profile, self).__init__() self.__name__ = u'profile' self.__parent__ = Root self.collection = request.db.posts def __getitem__(self, name): post = Dummy(self.collection.find_one(dict(username=name))) return _add(post, name, self) and I'm using MongoDB and pyramid_mongodb views.py: @view_config(context = Profile, renderer = 'templates/mytemplate.pt') def test_view(request): return {} and in mytemplate.pt: <p tal:repeat='item request.context'> ${item} </p> I can echo what's in the database (I'm using mongodb), but when I provided a URL for each item using resource_url() <p tal:repeat='item request.context'> <a href='${request.resource_url(item)}'>${item}</a> </p> I got an error: 'dict' object has no attribute '__name__', can someone help me?
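
    A minimal sketch of what resource_url() needs, assuming a Dummy class roughly like the one implied in resources.py: every object passed to resource_url() must carry __name__ and a __parent__ that is itself a resource instance (note that Profile.__init__ above assigns the Root class, not an instance), and iterating a dict context yields raw keys rather than wrapped resources, so items should be resolved through __getitem__ first.

        # Sketch only -- not the poster's actual Dummy class.
        # resource_url() walks the lineage via __name__/__parent__,
        # so both must be set on real resource instances.
        class Dummy(dict):
            """dict subclass, so instances can accept __name__/__parent__."""

        def _add(obj, name, parent):
            obj.__name__ = name        # URL segment for this resource
            obj.__parent__ = parent    # must be an *instance*, not a class
            return obj

        # In the template, look items up through the context so they come
        # back wrapped:
        #   <a href="${request.resource_url(context[name])}">${name}</a>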

    Read the article

  • finding and returning a string with a specified prefix

    - by tipu
    I am close, but I am not sure what to do with the resulting match object. If I do p = re.search('[/@.* /]', str) I'll get any words that start with @ and end with a space. This is what I want. However, this returns a Match object that I don't know what to do with. What's the most computationally efficient way of finding and returning a string which is prefixed with an @? For example, given "Hi there @guy", after doing the proper calculations I would be returned guy.
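
    A short sketch of the usual approach: capture the text after the @ and let re.findall return the captured groups directly, so no Match object handling is needed.

        import re

        text = "Hi there @guy and @other_guy"

        # findall returns the captured group for every match; \w+ stops at
        # whitespace and punctuation, so each handle comes back on its own.
        handles = re.findall(r'@(\w+)', text)
        print(handles)  # ['guy', 'other_guy']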

    Read the article

  • How to exclude results with get_object_or_404?

    - by googletorp
    In Django you can use exclude to create SQL similar to not equal. An example could be: Model.objects.exclude(status='deleted') Now this works great, and exclude is very flexible. Since I'm a bit lazy, I would like to get that functionality when using get_object_or_404, but I haven't found a way to do this, since you cannot use exclude on get_object_or_404. What I want is to do something like this: model = get_object_or_404(pk=id, status__exclude='deleted') But unfortunately this doesn't work, as there isn't an exclude query filter or similar. The best I've come up with so far is doing something like this: object = get_object_or_404(pk=id) if object.status == 'deleted': return HttpResponseNotfound('text') Doing something like that really defeats the point of using get_object_or_404, since it is no longer a handy one-liner. Alternatively I could do: object = get_object_or_404(pk=id, status__in=['list', 'of', 'items']) But that wouldn't be very maintainable, as I would need to keep the list up to date. Am I missing some trick or feature in Django to use get_object_or_404 to get the desired result?
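
    One hedged possibility, since get_object_or_404 forwards positional arguments straight through to Model.objects.get(): express the exclusion as a negated Q object and keep the one-liner.

        from django.db.models import Q
        from django.shortcuts import get_object_or_404

        # Sketch: ~Q(...) negates the condition, and get_object_or_404 passes
        # positional Q objects on to .get(), so this raises Http404 when the
        # only matching row has status='deleted'. 'Model' stands in for the
        # real model class.
        obj = get_object_or_404(Model, ~Q(status='deleted'), pk=id)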

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b univocally determines ts. A = pd.DataFrame( [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5), ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)], columns=['a', 'b', 'ts']).set_index(['a', 'b']) AA = A.reset_index() Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A. B = pd.DataFrame( dict(a=list('aaaaabbcccccc'), ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a') The semantics here are that B contains observations of occurrences of an event of the type indicated by the index. I would like to find in B the timestamp of the first occurrence of each event type at or after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be: C: ('a', 'x') 4 ('a', 'y') 7 ('a', 'z') 5 ('b', 'x') 7 ('b', 'z') 7 ('c', 'y') 8 I have some working code, but it is terribly slow. C = AA.apply(lambda row: ( row[0], row[1], B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b']) Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1000 a's, assume the average number of b's per a is roughly constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1,000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
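
    A hedged sketch of one way to cut the per-row overhead: group A by the first index level once, slice B once per group, and run a single vectorised searchsorted per group instead of re-indexing B for every row. This is written against a current pandas API (.loc rather than .ix/.irow) and assumes, as stated, that B.ts is sorted within each group and always contains a value >= each ts in A.

        import numpy as np
        import pandas as pd

        def first_at_or_after(A, B):
            # One searchsorted call per value of 'a' instead of one per row.
            pieces = []
            for a, grp in A.reset_index().groupby('a', sort=False):
                ts_b = B.loc[[a], 'ts'].values              # sorted observations
                pos = np.searchsorted(ts_b, grp['ts'].values)
                idx = pd.MultiIndex.from_arrays([grp['a'], grp['b']],
                                                names=['a', 'b'])
                pieces.append(pd.Series(ts_b[pos], index=idx, name='ts'))
            return pd.concat(pieces).to_frame()

        C = first_at_or_after(A, B)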

    Read the article

  • Where is the help.py for Android's monkeyrunner

    - by Keyboardsurfer
    Hi, I just can't find the help.py file needed to create the API reference for monkeyrunner. The command described in the Android reference, monkeyrunner <format> help.py <outfile>, does not work when I call monkeyrunner html help.py /path/to/place/the/doc.html. It's quite obvious that the help.py file is not found, and monkeyrunner also tells me "Can't open specified script file". But a locate on my system doesn't turn up a help.py file that has anything to do with monkeyrunner or Android. So my question is: where did they hide the help.py file for creating the API reference?
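
    For what it's worth, help.py is generally understood to be a thin wrapper around the documented MonkeyRunner.help(format) call, so a stand-in you could save yourself looks roughly like this (the argument handling is an assumption, not the SDK's actual script):

        # help.py -- hypothetical stand-in, invoked with a format ('text' or
        # 'html') and an output path as script arguments.
        import sys
        from com.android.monkeyrunner import MonkeyRunner

        # MonkeyRunner.help() renders the API reference in the given format;
        # the script only has to write it to the requested file.
        text = MonkeyRunner.help(sys.argv[1])
        out = open(sys.argv[2], 'w')
        out.write(text)
        out.close()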

    Read the article

  • how to detect escape characters in a string

    - by mix
    Given a string named line whose raw version has this value: \rRAWSTRING how can I detect if it has the escape character \r? What I've tried is: if repr(line).startswith('\r'): blah... but it doesn't catch it. I also tried find, such as: if repr(line).find('\r') != -1: blah doesn't work either. What am I missing? thx!
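
    A small sketch of why both checks miss: repr() converts the real carriage return into the two literal characters backslash and r, so the raw '\r' can never be found in it; test the string itself instead.

        line = '\rRAWSTRING'

        # repr(line) would be "'\\rRAWSTRING'" -- the escape has become two
        # characters. Checking the original string works directly:
        if line.startswith('\r'):
            print('line starts with a carriage return')

        if '\r' in line:
            print('line contains a carriage return somewhere')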

    Read the article

  • Test assertions for tuples with floats

    - by Space_C0wb0y
    I have a function that returns a tuple that, among other things, contains a float value. Usually I use assertAlmostEquals to compare those, but it does not work with tuples. Also, the tuple contains other data types as well. Currently I am asserting every element of the tuple individually, but that becomes unwieldy for a list of such tuples. Is there any good way to write assertions for such cases?
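
    One common sketch is a small helper mixin (hypothetical name below) that walks the tuple and picks the right assertion per element type, so mixed tuples can be checked in a single call.

        import unittest

        class TupleAssertMixin(object):
            # Floats are compared approximately, everything else exactly.
            def assertTupleAlmostEqual(self, first, second, places=7):
                self.assertEqual(len(first), len(second))
                for i, (a, b) in enumerate(zip(first, second)):
                    if isinstance(a, float) or isinstance(b, float):
                        self.assertAlmostEqual(a, b, places=places,
                                               msg='element %d differs' % i)
                    else:
                        self.assertEqual(a, b, msg='element %d differs' % i)

        class MyTest(TupleAssertMixin, unittest.TestCase):
            def test_result(self):
                self.assertTupleAlmostEqual(('a', 1, 0.3000000001), ('a', 1, 0.3))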

    Read the article

  • Django dictionary in templates: Grab key from another objects attribute

    - by Jordan Messina
    I have a dictionary called number_devices I'm passing to a template, the dictionary keys are the ids of a list of objects I'm also passing to the template (called implementations). I'm iterating over the list of objects and then trying to use the object.id to get a value out of the dict like so: {% for implementation in implementations %} {{ number_devices.implementation.id }} {% endfor %} Unfortunately number_devices.implementation is evaluated first, then the result.id is evaluated obviously returning and displaying nothing. I can't use parentheses like: {{ number_devices.(implementation.id) }} because I get a parse error. How do I get around this annoyance in Django templates? Thanks for any help!
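
    The usual workaround is a tiny custom template filter (hypothetical name get_item below), since the dot syntax only resolves literal names, never the value of another variable.

        # templatetags/dict_extras.py -- hypothetical module and filter names
        from django import template

        register = template.Library()

        @register.filter
        def get_item(dictionary, key):
            """Look up a dict entry using a template variable as the key."""
            return dictionary.get(key)

    With that registered, the template becomes {% load dict_extras %} followed by {{ number_devices|get_item:implementation.id }}, which evaluates implementation.id first and then uses the result as the dictionary key.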

    Read the article

  • App-Engine Parse a UrlFetch UTF-8 encoded stream

    - by Davidrd91
    I am trying to parse an XML from a URL using the xml.sax parser. I know there are other libraries to use but coming from Java this is the one I am most familiar with and seems the least complicated to me. The code I'm using to parse is as follows: parser = xml.sax.make_parser() handler = MangaHandler() parser.setContentHandler(handler) url = urlfetch.Fetch('http://www.mangapanda.com/alphabetical', allow_truncated = False, follow_redirects = False, deadline = False) xml.sax.parseString(url.content, handler) This returns a SaxException (invalid token) once the parser reaches the first & sign: SAXParseException: <unknown>:582:34: not well-formed (invalid token) Because urlfetch returns a string and not a stream I cannot use the parse() (which only works with streams) and am left to use parseString() instead. To see if parsing as a stream would fix this I tried: parser.parse(io.StringIO(url.content).encode('utf-8')) but this returns: TypeError: initial_value must be unicode or None, not str I have also tried to use the urllib2 libraries which do return a stream instead of urlfetch but the file is too large and is automatically truncated, leaving me with missing data. Any Sort of work-around for this would be greatly appreciated as I've spent days getting around one obstacle just to be stopped by another.
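
    On the TypeError alone, a hedged sketch: in Python 2, io.StringIO accepts only unicode while url.content is a byte string, so io.BytesIO gives the parser a proper stream. The separate "invalid token" at the first bare & usually means the fetched page is HTML rather than well-formed XML, which no strict XML parser will accept, so an HTML-tolerant parser may be the real fix.

        import io
        import xml.sax

        parser = xml.sax.make_parser()
        parser.setContentHandler(handler)   # the MangaHandler from above

        # url.content is a byte string: wrap it in BytesIO, not StringIO
        # (StringIO wants unicode, hence the TypeError).
        parser.parse(io.BytesIO(url.content))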

    Read the article

  • how to let the parser print help message rather than error and exit

    - by fluter
    Hi, I am using argparse to handle command-line args. If no args are specified, I want to print the help message, but right now the parser outputs an error and then exits. My code is: def main(): print "in abing/start/main" parser = argparse.ArgumentParser(prog="abing")#, usage="%(prog)s <command> [args] [--help]") parser.add_argument("-v", "--verbose", action="store_true", default=False, help="show verbose output") subparsers = parser.add_subparsers(title="commands") bkr_subparser = subparsers.add_parser("beaker", help="beaker inspection") bkr_subparser.set_defaults(command=beaker_command) bkr_subparser.add_argument("-m", "--max", action="store", default=3, type=int, help="max resubmit count") bkr_subparser.add_argument("-g", "--grain", action="store", default="J", choices=["J", "RS", "R", "T", "job", "recipeset", "recipe", "task"], type=str, help="resubmit selection granularity") bkr_subparser.add_argument("job_ids", nargs=1, action="store", help="list of job id to be monitored") et_subparser = subparsers.add_parser("errata", help="errata inspection") et_subparser.set_defaults(command=errata_command) et_subparser.add_argument("-w", "--workflows", action="store_true", help="generate workflows for the erratum") et_subparser.add_argument("-r", "--run", action="store_true", help="generate workflows, and run for the erratum") et_subparser.add_argument("-s", "--start-monitor", action="store_true", help="start monitor the errata system") et_subparser.add_argument("-d", "--daemon", action="store_true", help="run monitor into daemon mode") et_subparser.add_argument("erratum", action="store", nargs=1, metavar="ERRATUM", help="erratum id") if len(sys.argv) == 1: parser.print_help() return args = parser.parse_args() args.command(args) return How can I do that? Thanks.
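
    Besides checking len(sys.argv), a common hedged sketch is to subclass ArgumentParser and override error(), so that any parse failure (including no arguments at all, which leaves the required subcommand missing) prints the full help instead of just the short usage line.

        import argparse
        import sys

        class HelpOnErrorParser(argparse.ArgumentParser):
            # Hypothetical subclass: argparse calls error() whenever parsing
            # fails, so printing the help here also covers the no-args case.
            def error(self, message):
                sys.stderr.write('error: %s\n' % message)
                self.print_help()
                sys.exit(2)

        parser = HelpOnErrorParser(prog="abing")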

    Read the article

  • Simple App Engine Sessions Implementation

    - by raz0r
    Here is a very basic class for handling sessions on App Engine: """Lightweight implementation of cookie-based sessions for Google App Engine. Classes: Session """ import os import random import Cookie from google.appengine.api import memcache _COOKIE_NAME = 'app-sid' _COOKIE_PATH = '/' _SESSION_EXPIRE_TIME = 180 * 60 class Session(object): """Cookie-based session implementation using Memcached.""" def __init__(self): self.sid = None self.key = None self.session = None cookie_str = os.environ.get('HTTP_COOKIE', '') self.cookie = Cookie.SimpleCookie() self.cookie.load(cookie_str) if self.cookie.get(_COOKIE_NAME): self.sid = self.cookie[_COOKIE_NAME].value self.key = 'session-' + self.sid self.session = memcache.get(self.key) if self.session: self._update_memcache() else: self.sid = str(random.random())[5:] + str(random.random())[5:] self.key = 'session-' + self.sid self.session = dict() memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME) self.cookie[_COOKIE_NAME] = self.sid self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH print self.cookie def __len__(self): return len(self.session) def __getitem__(self, key): if key in self.session: return self.session[key] raise KeyError(str(key)) def __setitem__(self, key, value): self.session[key] = value self._update_memcache() def __delitem__(self, key): if key in self.session: del self.session[key] self._update_memcache() return None raise KeyError(str(key)) def __contains__(self, item): try: i = self.__getitem__(item) except KeyError: return False return True def _update_memcache(self): memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME) I would like some advices on how to improve the code for better security. Note: In the production version it will also save a copy of the session in the datastore. Note': I know there are much more complete implementations available online though I would like to learn more about this subject so please don't answer the question with "use that" or "use the other" library.
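
    One concrete hardening step, as a hedged sketch: random.random() is not a cryptographic source, so session ids built from it can be predicted; deriving the id from os.urandom() (and marking the cookie HttpOnly) removes the easiest attacks.

        import hashlib
        import os

        def _generate_sid():
            # os.urandom() is a cryptographic source, unlike random.random(),
            # so the resulting session id is not guessable by an attacker.
            return hashlib.sha256(os.urandom(32)).hexdigest()

        # ...and when setting the cookie:
        #   self.cookie[_COOKIE_NAME]['httponly'] = True   # keep JS away from it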

    Read the article

  • virtualenvwrapper .hook problem

    - by Wraith
    I've used virtualenvwrapper, but I'm having problems running it on a new computer. My .bashrc file is updated per the instructions: export WORKON_HOME=$DEV_HOME/projects source /usr/local/bin/virtualenvwrapper.sh But when source is run, I get the following: bash: /25009.hook: Permission denied bash: /25009.hook: No such file or directory This previous post leads me to believe the filename is being recycled and locked because virtualenvwrapper.sh uses $$. Is there any way to fix this?

    Read the article

  • Django extending user model and displaying form

    - by MichalKlich
    Hello, I am writing a website and I'd like to implement profile management. The basic thing would be to let users edit some of their own details, like first and last name. Now, I had to extend the User model to add my own stuff and an email address. I am having trouble with displaying the form. An example will describe better what I would like to achieve. This is my extended user model: class UserExtended(models.Model): user = models.ForeignKey(User, unique=True) kod_pocztowy = models.CharField(max_length=6,blank=True) email = models.EmailField() This is what my form looks like: class UserCreationFormExtended(UserCreationForm): def __init__(self, *args, **kwargs): super(UserCreationFormExtended, self).__init__(*args, **kwargs) self.fields['email'].required = True self.fields['first_name'].required = False self.fields['last_name'].required = False class Meta: model = User fields = ('username', 'first_name', 'last_name', 'email') It works fine when registering, as I need to allow users to enter a username and email, but when it comes to editing the profile it displays too many fields. I would not like them to be able to edit the username and email. How can I disable fields in the form? Thanks for the help.
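
    One hedged option is a second, narrower ModelForm used only by the profile-editing view, listing just the editable fields (hypothetical class name below); fields that are not listed are neither rendered nor accepted from POST data.

        from django import forms
        from django.contrib.auth.models import User

        class UserEditForm(forms.ModelForm):
            # Hypothetical edit form: username and email are simply left out.
            class Meta:
                model = User
                fields = ('first_name', 'last_name')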

    Read the article

  • Parse large XML file w/ script or use BioPython API ?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniprotKB in SQL. The UniprotKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options: 1) Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser? 2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/Uniprot txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running: from Bio import SeqIO from BioSQL import BioSeqDatabase from Bio import SwissProt server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root", passwd = "", host="localhost", db = "bioseqdb") db = server["uniprot"] iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss") db.load(iterator) server.commit() Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number: 1205 Lock wait timeout exceeded; try restarting transaction. I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
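
    On the lock-timeout side, one hedged sketch is to feed db.load() in batches and commit after each one, so no single transaction holds InnoDB row locks for the whole 2.1 GB file (the batch size of 1000 records is an arbitrary assumption):

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost",
                                              db="bioseqdb")
        db = server["uniprot"]

        batch = []
        records = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        for i, record in enumerate(records, 1):
            batch.append(record)
            if i % 1000 == 0:      # commit in chunks, not one huge transaction
                db.load(batch)
                server.commit()
                batch = []
        if batch:
            db.load(batch)
            server.commit()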

    Read the article

  • How do I use a string as a keyword argument?

    - by Issac Kelly
    Specifically, I'm trying to use a string to arbitrairly filter the ORM. I've tried exec and eval solutions, but I'm running into walls. The code below doesn't work, but it's the best way I know how to explain where I'm trying to go from gblocks.models import Image f = 'image__endswith="jpg"' # Would be scripted in another area, but passed as text <user input> d = Image.objects.filter(f) #for the non-django pythonistas: d = Image.objects.filter(image__endswith="jpg") # would be the non-dynamic equivalent.
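
    A hedged sketch of the usual answer: split the user-supplied text into a lookup name and a value, then expand it with ** so it becomes a real keyword argument (real input would of course need validation first).

        from gblocks.models import Image

        f = 'image__endswith="jpg"'        # user-supplied text
        field, _, value = f.partition('=')
        value = value.strip('"')

        # ** turns the dict into keyword arguments, i.e. the call becomes
        # Image.objects.filter(image__endswith='jpg').
        d = Image.objects.filter(**{field: value})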

    Read the article

  • django url matching

    - by ben
    Can anyone see why this wouldn't be working? Fairly new to Django, so any help would be much appreciated. Actual URL: http://127.0.0.1:8000/2010/may/12/my-second-blog-post/ urls.py: (r'(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/(?P<slug>[-w]+)/$', 'object_detail', dict(info_dict, slug_field='slug',template_name='blog/detail.html')),
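
    A hedged guess: the backslashes look like they were lost somewhere along the way, since d{4} as written matches four literal 'd' characters rather than four digits; with them restored the pattern would be roughly:

        # Sketch of the corrected pattern -- note \d and \w instead of bare d and w.
        (r'^(?P<year>\d{4})/(?P<month>[a-z]{3})/(?P<day>\w{1,2})/(?P<slug>[-\w]+)/$',
         'object_detail',
         dict(info_dict, slug_field='slug', template_name='blog/detail.html')),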

    Read the article

  • Problem opening Solr *.jsp pages with urllib2.urlopen.

    - by nestling
    I'm trying to open a page at http://localhost:8983/solr/admin/stats.jsp but urllib2.urlopen returns a blank string. It works fine for solr/ and solr/admin, but for all the pages above /solr/admin/ I get nothing but a blank string. 76]: t = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp') 77]: s = t.read() 78]: s 78]: 79]: type(s) 79]: <type 'str'> 80]: urllib2.urlopen('http://localhost:8983/solr/admin/registry.jsp').read() 80]: In [84]: urllib2.urlopen('http://localhost:8983/solr/admin/schema.jsp').read() Out[84]: I know this isn't a problem with urllib2, but beyond that I am at a loss. I wish solr (or jetty) had an easy to get to log file, so that perhaps it could tell me its side of the story.

    Read the article

  • Increasing figure size in Matplotlib

    - by Anirudh
    I am trying to plot a graph from a distance matrix. The code works fine and gives me an image of 800 * 600 pixels. The image being too small, all the nodes are packed together. I want to increase the size of the image, so I added the following line to my code - figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k') After this, all I get is a blank 1000 * 1000 image file. My overall code - import networkx as nx import pickle import matplotlib.pyplot as plt print "Reading from pickle." p_file = open('pickles/names') Names = pickle.load(p_file) p_file.close() p_file = open('pickles/distance') Dist = pickle.load(p_file) p_file.close() G = nx.Graph() print "Inserting Nodes." for n in Names: G.add_node(n) print "Inserting Edges." for i in range(601): for j in range(601): G.add_edge(Names[i],Names[j],weight=Dist[i][j]) print "Drawing Graph." nx.draw(G) print "Saving Figure." #plt.figure(num=None, figsize=(10, 10)) plt.savefig('new.png') print "Success!"
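
    A hedged sketch of the likely fix: create the sized figure before nx.draw(G). Calling figure() afterwards (or only just before savefig) opens a fresh, empty figure, and that empty figure is what gets saved.

        import matplotlib.pyplot as plt
        import networkx as nx

        # Size the figure first, draw into it, then save that same figure.
        plt.figure(figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k')
        nx.draw(G)                     # G built exactly as in the question
        plt.savefig('new.png')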

    Read the article

  • chatbot using twisted and wokkel

    - by dmitriy k.
    I am writing a chatbot using Twisted and wokkel, and everything seems to be working except that the bot periodically logs off. To temporarily fix that, I set the presence to available every time the connection is initialized. Does anyone know how to prevent going offline? (I assume that if I keep sending available presence every minute or so the bot won't go offline, but that just seems too wasteful.) Suggestions, anyone? Here is the presence code: class BotPresenceClientProtocol(PresenceClientProtocol): def connectionInitialized(self): PresenceClientProtocol.connectionInitialized(self) self.available(statuses={None: 'Here'}) def subscribeReceived(self, entity): self.subscribed(entity) self.available(statuses={None: 'Here'}) def unsubscribeReceived(self, entity): self.unsubscribed(entity) Thanks in advance.
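
    A hedged sketch of a periodic keep-alive using Twisted's LoopingCall: one short presence stanza every few minutes is cheap, and many servers and NAT boxes drop streams that stay silent for longer than that (the 300-second interval is an assumption).

        from twisted.internet import task
        from wokkel.xmppim import PresenceClientProtocol

        class BotPresenceClientProtocol(PresenceClientProtocol):
            def connectionInitialized(self):
                PresenceClientProtocol.connectionInitialized(self)
                # Re-announce availability every 5 minutes.
                self._keepalive = task.LoopingCall(self.available,
                                                   statuses={None: 'Here'})
                self._keepalive.start(300, now=True)

            def connectionLost(self, reason):
                if getattr(self, '_keepalive', None) and self._keepalive.running:
                    self._keepalive.stop()
                PresenceClientProtocol.connectionLost(self, reason)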

    Read the article
