Search Results

Search found 14201 results on 569 pages for 'python mock'.

Page 390/569 | < Previous Page | 386 387 388 389 390 391 392 393 394 395 396 397  | Next Page >

  • finding and returning a string with a specified prefix

    - by tipu
    I am close, but I am not sure what to do with the resulting match object. If I do p = re.search('[/@.* /]', str) I'll get any words that start with @ and end with a space, which is what I want. However, this returns a Match object that I don't know what to do with. What's the most computationally efficient way of finding and returning a string which is prefixed with a @? For example, given "Hi there @guy", after doing the proper calculations I would be returned guy.
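
    A minimal sketch of one way to get the strings back directly is re.findall, which returns the captured text instead of a match object (the pattern and variable names here are illustrative, not from the original question):

        import re

        text = "Hi there @guy and @other_guy"
        # \w+ captures the word characters that follow each '@'
        handles = re.findall(r'@(\w+)', text)
        print(handles)  # ['guy', 'other_guy']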

    Read the article

  • how to detect escape characters in a string

    - by mix
    Given a string named line whose raw version has this value:

        \rRAWSTRING

    how can I detect if it has the escape character \r? What I've tried is:

        if repr(line).startswith('\r'):
            blah...

    but it doesn't catch it. I also tried find, such as:

        if repr(line).find('\r') != -1:
            blah

    but that doesn't work either. What am I missing? thx!
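
    A short sketch of testing the string itself rather than its repr (just one plausible reading of the problem; repr() wraps the value in quotes, so the repr never starts with '\r'):

        line = '\rRAWSTRING'

        if line.startswith('\r'):
            print('line starts with a carriage return')

        if '\r' in line:
            print('line contains a carriage return somewhere')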

    Read the article

  • Where is the help.py for Android's monkeyrunner

    - by Keyboardsurfer
    Hi, I just can't find the help.py file needed to create the API reference for monkeyrunner. The command described in the Android reference,

        monkeyrunner <format> help.py <outfile>

    does not work when I call

        monkeyrunner html help.py /path/to/place/the/doc.html

    It's quite obvious that the help.py file is not found, and monkeyrunner also tells me "Can't open specified script file". But a locate on my system doesn't turn up a help.py file that has anything to do with monkeyrunner or Android. So my question is: where did they hide the help.py file for creating the API reference?

    Read the article

  • Simple App Engine Sessions Implementation

    - by raz0r
    Here is a very basic class for handling sessions on App Engine:

        """Lightweight implementation of cookie-based sessions for Google App Engine.

        Classes: Session
        """

        import os
        import random
        import Cookie

        from google.appengine.api import memcache

        _COOKIE_NAME = 'app-sid'
        _COOKIE_PATH = '/'
        _SESSION_EXPIRE_TIME = 180 * 60


        class Session(object):
            """Cookie-based session implementation using Memcached."""

            def __init__(self):
                self.sid = None
                self.key = None
                self.session = None
                cookie_str = os.environ.get('HTTP_COOKIE', '')
                self.cookie = Cookie.SimpleCookie()
                self.cookie.load(cookie_str)
                if self.cookie.get(_COOKIE_NAME):
                    # Existing session: restore it from memcache.
                    self.sid = self.cookie[_COOKIE_NAME].value
                    self.key = 'session-' + self.sid
                    self.session = memcache.get(self.key)
                    if self.session:
                        self._update_memcache()
                else:
                    # New session: generate an id, store it, and set the cookie.
                    self.sid = str(random.random())[5:] + str(random.random())[5:]
                    self.key = 'session-' + self.sid
                    self.session = dict()
                    memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME)
                    self.cookie[_COOKIE_NAME] = self.sid
                    self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH
                print self.cookie

            def __len__(self):
                return len(self.session)

            def __getitem__(self, key):
                if key in self.session:
                    return self.session[key]
                raise KeyError(str(key))

            def __setitem__(self, key, value):
                self.session[key] = value
                self._update_memcache()

            def __delitem__(self, key):
                if key in self.session:
                    del self.session[key]
                    self._update_memcache()
                    return None
                raise KeyError(str(key))

            def __contains__(self, item):
                try:
                    i = self.__getitem__(item)
                except KeyError:
                    return False
                return True

            def _update_memcache(self):
                memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME)

    I would like some advice on how to improve the code for better security. Note: in the production version it will also save a copy of the session in the datastore. Note': I know there are much more complete implementations available online, but I would like to learn more about this subject, so please don't answer the question with "use that" or "use the other" library.
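
    One common hardening idea, sketched here under the assumption that a server-side secret is available (the _SECRET value and the exact scheme are illustrative, not part of the original post): derive the session id from os.urandom rather than random.random, and attach an HMAC so a client cannot guess or forge ids.

        import os
        import hmac
        import hashlib

        _SECRET = 'replace-with-a-long-random-server-side-secret'  # hypothetical value

        def make_sid():
            """Return an unguessable session id plus a tamper-evident signature."""
            raw = os.urandom(16).encode('hex')   # Python 2, matching the snippet above
            sig = hmac.new(_SECRET, raw, hashlib.sha1).hexdigest()
            return raw + '.' + sig

        def check_sid(sid):
            """Check that a session id presented by a client was issued by this app."""
            raw, _, sig = sid.partition('.')
            expected = hmac.new(_SECRET, raw, hashlib.sha1).hexdigest()
            return sig == expected   # a constant-time compare is preferable where available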

    Read the article

  • django url matching

    - by ben
    Can anyone see why this wouldn't be working? Fairly new to Django, so any help would be much appreciated.

    Actual URL: http://127.0.0.1:8000/2010/may/12/my-second-blog-post/

    urls.py:

        (r'(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/(?P<slug>[-w]+)/$',
         'object_detail',
         dict(info_dict, slug_field='slug', template_name='blog/detail.html')),
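
    For comparison, a sketch of the same pattern with the character classes escaped (and a leading anchor added), which is one likely culprit if the backslashes were dropped somewhere along the way; this is a guess at the intent, with info_dict assumed to be defined as in the question:

        (r'^(?P<year>\d{4})/(?P<month>[a-z]{3})/(?P<day>\w{1,2})/(?P<slug>[-\w]+)/$',
         'object_detail',
         dict(info_dict, slug_field='slug', template_name='blog/detail.html')),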

    Read the article

  • Problem opening Solr *.jsp pages with urllib2.urlopen.

    - by nestling
    I'm trying to open a page at http://localhost:8983/solr/admin/stats.jsp but urllib2.urlopen returns a blank string. It works fine for solr/ and solr/admin, but for all the pages above /solr/admin/ I get nothing but a blank string.

        In [76]: t = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp')
        In [77]: s = t.read()
        In [78]: s
        Out[78]: ''
        In [79]: type(s)
        Out[79]: <type 'str'>
        In [80]: urllib2.urlopen('http://localhost:8983/solr/admin/registry.jsp').read()
        Out[80]: ''
        In [84]: urllib2.urlopen('http://localhost:8983/solr/admin/schema.jsp').read()
        Out[84]: ''

    I know this isn't a problem with urllib2, but beyond that I am at a loss. I wish Solr (or Jetty) had an easy-to-get-to log file, so that perhaps it could tell me its side of the story.

    Read the article

  • How to exclude results with get_object_or_404?

    - by googletorp
    In Django you can use exclude to create SQL similar to not equal. An example could be:

        Model.objects.exclude(status='deleted')

    Now this works great, and exclude is very flexible. Since I'm a bit lazy, I would like to get that functionality when using get_object_or_404, but I haven't found a way to do this, since you cannot use exclude on get_object_or_404. What I want is to do something like this:

        model = get_object_or_404(pk=id, status__exclude='deleted')

    But unfortunately this doesn't work, as there isn't an exclude query filter or similar. The best I've come up with so far is doing something like this:

        object = get_object_or_404(pk=id)
        if object.status == 'deleted':
            return HttpResponseNotFound('text')

    Doing something like that really defeats the point of using get_object_or_404, since it is no longer a handy one-liner. Alternatively I could do:

        object = get_object_or_404(pk=id, status__in=['list', 'of', 'items'])

    But that wouldn't be very maintainable, as I would need to keep the list up to date. I'm wondering if I'm missing some trick or feature in Django to use get_object_or_404 to get the desired result?
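
    For what it's worth, get_object_or_404 also accepts a QuerySet (or a Manager) as its first argument, so one way to keep the one-liner is to pass in an already-excluded queryset; a sketch against the Model name used above:

        from django.shortcuts import get_object_or_404

        # The 404 is raised if no row matches pk=id within the excluded queryset.
        obj = get_object_or_404(Model.objects.exclude(status='deleted'), pk=id)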

    Read the article

  • Increasing figure size in Matplotlib

    - by Anirudh
    I am trying to plot a graph from a distance matrix. The code works fine and gives me an image of 800 * 600 pixels. The image being too small, all the nodes are packed together. I want to increase the size of the image, so I added the following line to my code:

        figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k')

    After this, all I get is a blank 1000 * 1000 image file. My overall code:

        import networkx as nx
        import pickle
        import matplotlib.pyplot as plt

        print "Reading from pickle."
        p_file = open('pickles/names')
        Names = pickle.load(p_file)
        p_file.close()
        p_file = open('pickles/distance')
        Dist = pickle.load(p_file)
        p_file.close()

        G = nx.Graph()

        print "Inserting Nodes."
        for n in Names:
            G.add_node(n)

        print "Inserting Edges."
        for i in range(601):
            for j in range(601):
                G.add_edge(Names[i], Names[j], weight=Dist[i][j])

        print "Drawing Graph."
        nx.draw(G)

        print "Saving Figure."
        #plt.figure(num=None, figsize=(10, 10))
        plt.savefig('new.png')

        print "Success!"
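
    A sketch of the usual fix: the enlarged figure has to exist before nx.draw places anything on it, because calling plt.figure() after drawing opens a brand-new, empty canvas, and that empty canvas is what gets saved. Sizes below are simply the ones from the question:

        import matplotlib.pyplot as plt
        import networkx as nx

        G = nx.Graph()
        # ... build the graph as above ...

        plt.figure(figsize=(10, 10), dpi=80)   # create the larger canvas first
        nx.draw(G)                             # then draw onto it
        plt.savefig('new.png')                 # and save that same figure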

    Read the article

  • How do I use a string as a keyword argument?

    - by Issac Kelly
    Specifically, I'm trying to use a string to arbitrarily filter the ORM. I've tried exec and eval solutions, but I'm running into walls. The code below doesn't work, but it's the best way I know how to explain where I'm trying to go:

        from gblocks.models import Image

        f = 'image__endswith="jpg"'  # would be scripted in another area, but passed as text <user input>
        d = Image.objects.filter(f)

        # For the non-django pythonistas, the non-dynamic equivalent would be:
        # d = Image.objects.filter(image__endswith="jpg")
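
    A small sketch of the usual pattern: keep the lookup as a field/value pair (or parse the string into one) and expand it with ** so it becomes a keyword argument. Names follow the question; the parsing shown is only illustrative and real user input would need validation:

        from gblocks.models import Image  # as in the question

        f = 'image__endswith="jpg"'

        # Turn 'field__lookup="value"' into {'field__lookup': 'value'}.
        key, _, value = f.partition('=')
        kwargs = {key: value.strip('"')}

        d = Image.objects.filter(**kwargs)  # same as Image.objects.filter(image__endswith="jpg")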

    Read the article

  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each of which has multiple fields that I usually filter on, using the object_list generic view. Each person can have multiple Comments attached to them, each with a datetime and a text string. What I ultimately want to do is have the option to filter comments based on dates.

        class Person(models.Model):
            name = models.CharField("Name", max_length=30)
            ## has ~30 other fields, usually filtered on as well

        class Comment(models.Model):
            date = models.DateTimeField()
            person = models.ForeignKey(Person)
            comment = models.TextField("Comment Text", max_length=1023)

    What I want to do is get a queryset like

        Person.objects.filter(comment__date__gt=date(2011, 1, 1)).order_by('comment__date')

    send that queryset to object_list, and be able to only see the comments ordered by date, with only so many objects on a page. E.g., if "Person A" has comments on 12/3/11, 1/2/11 and 1/5/11, "Person B" has no comments, and Person C has a comment on 1/3, I would see:

        "Person A", 1/2 - comment
        "Person C", 1/3 - comment
        "Person A", 1/5 - comment

    I would strongly prefer not to have to switch to filtering based on Comments.objects.filter(), as that would make me repeat large sections of code in both the view and the template. Right now if I executed the command above, I would get a queryset returning (PersonA, PersonC, PersonA), but if I try rendering that in a template, each person's comment_set will contain all their comments, even the ones outside the date range. Ideally there would be some sort of functionality where I could expand a Person queryset's comment_set into a larger queryset that can be sorted and ordered based on the comment and put into an object_list generic view. This is normally fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.

    Read the article

  • How to get the related_name of a many-to-many-field?

    - by amann
    I am trying to get the related_name of a many-to-many field. The m2m field is located between the models "Group" and "Lection" and is declared in the Group model as follows:

        lections = models.ManyToManyField(Lection, blank=True)

    The field looks like this:

        <django.db.models.fields.related.ManyToManyField object at 0x012AD690>

    The print of field.__dict__ is:

        {'_choices': [],
         '_m2m_column_cache': 'group_id',
         '_m2m_name_cache': 'group',
         '_m2m_reverse_column_cache': 'lection_id',
         '_m2m_reverse_name_cache': 'lection',
         '_unique': False,
         'attname': 'lections',
         'auto_created': False,
         'blank': True,
         'column': 'lections',
         'creation_counter': 71,
         'db_column': None,
         'db_index': False,
         'db_table': None,
         'db_tablespace': '',
         'default': <class django.db.models.fields.NOT_PROVIDED at 0x00FC8780>,
         'editable': True,
         'error_messages': {'blank': <django.utils.functional.__proxy__ object at 0x00FC7B50>,
                            'invalid_choice': <django.utils.functional.__proxy__ object at 0x00FC7A50>,
                            'null': <django.utils.functional.__proxy__ object at 0x00FC7A70>},
         'help_text': <django.utils.functional.__proxy__ object at 0x012AD6F0>,
         'm2m_column_name': <function _curried at 0x012A88F0>,
         'm2m_db_table': <function _curried at 0x012A8AF0>,
         'm2m_field_name': <function _curried at 0x012A8970>,
         'm2m_reverse_field_name': <function _curried at 0x012A89B0>,
         'm2m_reverse_name': <function _curried at 0x012A8930>,
         'max_length': None,
         'name': 'lections',
         'null': False,
         'primary_key': False,
         'rel': <django.db.models.fields.related.ManyToManyRel object at 0x012AD6B0>,
         'related': <RelatedObject: mymodel:group related to lections>,
         'related_query_name': <function _curried at 0x012A8670>,
         'serialize': True,
         'unique_for_date': None,
         'unique_for_month': None,
         'unique_for_year': None,
         'validators': [],
         'verbose_name': 'lections'}

    Now the field should be accessed via a Lection instance, which is done with lection.group_set. But I need to access it dynamically, so I need to get the related_name attribute from somewhere. In the documentation there is a note that it is possible to access ManyToManyField.related_name, but this doesn't work for me somehow. Help would be much appreciated. Thanks in advance.
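
    A sketch of asking the field itself for the reverse accessor name, assuming the older Django internals visible in the dump above (where field.related is a RelatedObject); newer Django versions expose the same information as field.remote_field.get_accessor_name():

        # Group and lection are the model and instance from the question.
        field = Group._meta.get_field('lections')

        accessor = field.related.get_accessor_name()   # 'group_set' when no related_name is set
        related_manager = getattr(lection, accessor)   # same as lection.group_set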

    Read the article

  • Parse large XML file w/ script or use BioPython API ?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser?

    2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost", db="bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB):

        Error Number: 1205
        Lock wait timeout exceeded; try restarting transaction.

    I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
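
    One common workaround for a lock timeout on a single huge transaction is to load and commit in batches so each transaction stays short; a rough sketch using itertools.islice, with the batch size and path purely illustrative:

        from itertools import islice
        from Bio import SeqIO
        from BioSQL import BioSeqDatabase

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost", db="bioseqdb")
        db = server["uniprot"]

        records = SeqIO.parse(open("/path/to/uniprot_sprot.dat"), "swiss")
        while True:
            batch = list(islice(records, 1000))   # take the next 1000 records
            if not batch:
                break
            db.load(batch)       # load() accepts any iterable of SeqRecords
            server.commit()      # commit per batch so locks are released regularly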

    Read the article

  • Access to field in extended flatpage in django

    - by Stanislav Feldman
    How do I access a field of an extended flatpage in Django? I wrote this:

        class ExtendedFlatPage(FlatPage):
            teaser = CharField(max_length=150)

        class ExtendedFlatPageForm(FlatpageForm):
            teaser = CharField(max_length=150)

            class Meta:
                model = ExtendedFlatPage

        class ExtendedFlatPageAdmin(FlatPageAdmin):
            form = ExtendedFlatPageForm
            fieldsets = (
                (None, {'fields': ('url', 'title', 'teaser', 'content', 'sites',)}),
            )

        admin.site.unregister(FlatPage)
        admin.site.register(ExtendedFlatPage, ExtendedFlatPageAdmin)

    And creation in the admin is OK. But then in flatpages/default.html I tried this:

        <html>
          <body>
            <h1>{{ flatpage.title }}</h1>
            <strong>{{ flatpage.teaser }}</strong>
            <p>{{ flatpage.content }}</p>
          </body>
        </html>

    And there was no flatpage.teaser! What is wrong?

    Read the article

  • Extend argparse to write set names in the help text for optional argument choices and define those sets once at the end

    - by Kent
    Example of the problem

    If I have a list of valid option strings which is shared between several arguments, the list is written in multiple places in the help string, making it harder to read:

        import argparse

        def main():
            elements = ['a', 'b', 'c', 'd', 'e', 'f']

            parser = argparse.ArgumentParser()
            parser.add_argument(
                '-i', nargs='*', choices=elements, default=elements,
                help='Space separated list of case sensitive element names.')
            parser.add_argument(
                '-e', nargs='*', choices=elements, default=[],
                help='Space separated list of case sensitive element names to '
                     'exclude from processing')
            parser.parse_args()

    When running the above function with the command line argument --help it shows:

        usage: arguments.py [-h] [-i [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]]
                            [-e [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]]

        optional arguments:
          -h, --help            show this help message and exit
          -i [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]
                                Space separated list of case sensitive element names.
          -e [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]
                                Space separated list of case sensitive element names
                                to exclude from processing

    What would be nice

    It would be nice if one could define an option list name, write that name in multiple places in the help output, and define the set itself only once at the end. In theory it would work like this:

        def main_optionlist():
            elements = ['a', 'b', 'c', 'd', 'e', 'f']
            # Two instances of OptionList are equal if and only if they
            # have the same name (ALFA in this case)
            ol = OptionList('ALFA', elements)

            parser = argparse.ArgumentParser()
            parser.add_argument(
                '-i', nargs='*', choices=ol, default=ol,
                help='Space separated list of case sensitive element names.')
            parser.add_argument(
                '-e', nargs='*', choices=ol, default=[],
                help='Space separated list of case sensitive element names to '
                     'exclude from processing')
            parser.parse_args()

    And when running the above function with the command line argument --help it would show something similar to:

        usage: arguments.py [-h] [-i [ALFA [ALFA ...]]] [-e [ALFA [ALFA ...]]]

        optional arguments:
          -h, --help            show this help message and exit
          -i [ALFA [ALFA ...]]  Space separated list of case sensitive element names.
          -e [ALFA [ALFA ...]]  Space separated list of case sensitive element names
                                to exclude from processing

        sets in optional arguments:
          ALFA                  {a,b,c,d,e,f}

    Question

    I need to:

    1. Replace the {'l', 'i', 's', 't', 's'} shown in the optional arguments with the option list name.
    2. At the end of the help text, show a section explaining which elements each option name consists of.

    So I ask: is this possible using argparse? Which classes would I have to inherit from and which methods would I need to override? I have tried looking at the source for argparse, but as this modification feels pretty advanced, I don't know how to get going. A sketch of how far plain argparse gets without subclassing is shown below.
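
    For what it's worth, plain argparse can already get most of the way there without subclassing: metavar controls what is printed in place of the choices list, and an epilog can spell the set out once at the bottom. A sketch, not the custom OptionList machinery asked about:

        import argparse

        elements = ['a', 'b', 'c', 'd', 'e', 'f']

        parser = argparse.ArgumentParser(
            epilog='sets in optional arguments:\n  ALFA  {%s}' % ','.join(elements),
            formatter_class=argparse.RawDescriptionHelpFormatter)  # keep epilog line breaks
        parser.add_argument('-i', nargs='*', choices=elements, default=elements,
                            metavar='ALFA',
                            help='Space separated list of case sensitive element names.')
        parser.add_argument('-e', nargs='*', choices=elements, default=[],
                            metavar='ALFA',
                            help='Space separated list of case sensitive element names '
                                 'to exclude from processing')
        parser.parse_args()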

    Read the article

  • Differentiate gtk.Entry icons

    - by Ubersoldat
    I'm adding two icons to a gtk.Entry in PyGTK. The icon signals are handled by the following method:

        def entry_icon_event(self, widget, position, event)

    I'm trying to differentiate between the two of them:

        <enum GTK_ENTRY_ICON_PRIMARY of type GtkEntryIconPosition>
        <enum GTK_ENTRY_ICON_SECONDARY of type GtkEntryIconPosition>

    How can I do this? I've been digging through the PyGTK documentation, but there's no GtkEntryIconPosition object nor any definition for these enums. Thanks
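
    A sketch of comparing the position argument against the module-level constants, assuming PyGTK 2.16+ where the GtkEntryIconPosition values are exposed on the gtk module rather than as a separate class:

        import gtk

        def entry_icon_event(self, widget, position, event):
            # position carries the enum value itself
            if position == gtk.ENTRY_ICON_PRIMARY:
                print('primary icon activated')
            elif position == gtk.ENTRY_ICON_SECONDARY:
                print('secondary icon activated')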

    Read the article

  • chatbot using twisted and wokkel

    - by dmitriy k.
    I am writing a chatbot using Twisted and wokkel, and everything seems to be working except that the bot periodically logs off. To temporarily fix that, I set the presence to available on every connection initialized. Does anyone know how to prevent going offline? (I assume that if I keep sending available presence every minute or so the bot won't go offline, but that just seems too wasteful.) Suggestions, anyone? Here is the presence code:

        class BotPresenceClientProtocol(PresenceClientProtocol):

            def connectionInitialized(self):
                PresenceClientProtocol.connectionInitialized(self)
                self.available(statuses={None: 'Here'})

            def subscribeReceived(self, entity):
                self.subscribed(entity)
                self.available(statuses={None: 'Here'})

            def unsubscribeReceived(self, entity):
                self.unsubscribed(entity)

    Thanks in advance.
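
    If re-announcing presence does turn out to be the workable stopgap, a sketch of doing it with Twisted's LoopingCall instead of ad hoc timers, added to the protocol from the question (the interval is arbitrary, and whether this addresses the underlying logoff is an open question):

        from twisted.internet import task
        from wokkel.xmppim import PresenceClientProtocol

        class BotPresenceClientProtocol(PresenceClientProtocol):

            def connectionInitialized(self):
                PresenceClientProtocol.connectionInitialized(self)
                self.available(statuses={None: 'Here'})
                # Periodically re-send available presence as a crude keepalive.
                self._keepalive = task.LoopingCall(
                    self.available, statuses={None: 'Here'})
                self._keepalive.start(300, now=False)  # every 5 minutes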

    Read the article

  • Encoding with url and api

    - by user2950824
    So I have this web app set up and running, and it works fine for any username that you request, but when I try http://mrcastelo.pythonanywhere.com/lol/euw/Nazaré, it simply doesn't work - the error that I get on the server is the following:

        iddata = getJSON(urllolbase + region + urlid + username)  # SummonerID
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 5: ordinal not in range(128)

    It is annoying me greatly; I've tried some other threads but none of them came to a fix. The API that I am using (www.legendaryapi.com) does accept this, because this works. Any idea on how to fix this?
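
    A sketch of one common fix: make sure the username is a UTF-8 encoded byte string and percent-encode it before splicing it into the URL. The variable names and the getJSON helper below come from the question; the handling itself is only a suggestion:

        import urllib

        # If username arrived as a unicode object, encode it explicitly instead of
        # letting Python fall back to the ascii codec during concatenation.
        if isinstance(username, unicode):
            username = username.encode('utf-8')

        # Percent-encode the bytes so characters like 'é' are legal in the URL.
        safe_username = urllib.quote(username)

        iddata = getJSON(urllolbase + region + urlid + safe_username)  # SummonerID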

    Read the article

  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string
    # you need to find the number of times the pattern matches in the string
    # any one liner or simple pythonic solution?

        import random

        def matchIt(yourString, yourPattern):
            """find the number of times yourPattern occurs in yourString"""
            count = 0
            matchTimes = 0

            # How can you simplify the for-if structures?
            for coin in yourString:
                # return to base
                if count == len(pattern):
                    matchTimes = matchTimes + 1
                    count = 0

                # special case to return to 2, there could be more conditions of this type,
                # so this type of if-conditionals are screaming for a havoc
                if count == 2 and pattern[count] == 1:
                    count = count - 1

                # the work horse
                # it could be simpler by breaking the initial string of length 'l'
                # into blocks of pattern-length, the number of them is 'l - len(pattern)-1'
                if coin == pattern[count]:
                    count = count + 1

            average = len(yourString)/matchTimes
            return [average, matchTimes]

        # Generates the list
        myString = []
        for x in range(10000):
            myString = myString + [int(random.random()*2)]

        pattern = [1, 0, 0]
        result = matchIt(myString, pattern)
        print("The sample had "+str(result[1])+" matches and its size was "+str(len(myString))+".\n" +
              "So it took "+str(result[0])+" steps in average.\n" +
              "RESULT: "+str([a for a in "FAILURE" if result[0] != 8]))

        # Sample Output
        #
        # The sample had 1656 matches and its size was 10000.
        # So it took 6 steps in average.
        # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
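
    A compact alternative for the underlying task (counting how often a short pattern occurs in the sequence, overlapping matches included) could look like this sketch; it sidesteps the hand-rolled state machine entirely:

        def count_matches(seq, pattern):
            """Count occurrences of pattern in seq, counting overlapping matches."""
            n = len(pattern)
            return sum(1 for i in range(len(seq) - n + 1) if seq[i:i + n] == pattern)

        # Example: the pattern [1, 0, 0] occurs twice here.
        print(count_matches([1, 0, 0, 1, 0, 0, 1], [1, 0, 0]))  # -> 2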

    Read the article

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout:

        runner.py
        lib/
        tests/
            testsuite1/
                testsuite1.py
            testsuite2/
                testsuite2.py
            testsuite3/
                testsuite3.py
            testsuite4/
                testsuite4.py

    The format of the testsuite*.py modules is as follows:

        import pytest

        class testsomething:

            def setup_class(self):
                ''' do some setup '''
                # Do some setup stuff here

            def teardown_class(self):
                ''' do some teardown '''
                # Do some teardown stuff here

            def test1(self):
                # Do some test1 related stuff

            def test2(self):
                # Do some test2 related stuff

            ...

            def test40(self):
                # Do some test40 related stuff

        if __name__ == '__main__':
            pytest.main(args=[os.path.abspath(__file__)])

    The problem I have is that I would like to execute the 'testsuites' in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel, but individual tests within each testsuite need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load-balances them among 4 workers. This leads to the 'setup_class' method being executed every time for each test within a 'testsuitex.py' module, which defeats my purpose; I want setup_class to be executed only once per class and the tests executed serially after that. Essentially, what I want the execution to look like is:

        worker1: executes all tests in testsuite1.py serially
        worker2: executes all tests in testsuite2.py serially
        worker3: executes all tests in testsuite3.py serially
        worker4: executes all tests in testsuite4.py serially

    while worker1, worker2, worker3 and worker4 are all executed in parallel. Is there a way to achieve this in the 'pytest-xdist' framework? The only option that I can think of is to kick off a separate process for each test suite within runner.py:

        def test_execute_func(testsuite_path):
            subprocess.process('py.test %s' % testsuite_path)

        if __name__ == '__main__':
            # Gather all the testsuite names
            for each testsuite:
                multiprocessing.Process(test_execute_func, (testsuite_path,))
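
    More recent versions of pytest-xdist grew a distribution mode that matches this request: --dist loadfile keeps all tests from one file on the same worker, so setup_class runs once per class and the tests in a file run serially while the files themselves run in parallel. A sketch of the invocation, assuming a pytest-xdist new enough to support it:

        # run with up to 4 workers, grouping tests by file
        py.test -n 4 --dist loadfile tests/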

    Read the article

  • Django complex queries

    - by Josh K
    I need to craft a filter for an object that checks date ranges. Right now I'm performing a very inefficient loop which checks all the objects; I would like to simplify this to a database call. The logic is: you have start and end date objects, and I need to check whether the start OR the end falls within the range of an appointment.

        if (start >= appointment.start && start < appointment.end) ||
           (end > appointment.start && end <= appointment.end)

    I could do this in SQL, but I'm not as familiar with the Django model structure for more complex queries.
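
    A sketch of the same condition expressed with Django Q objects, assuming an Appointment model with start and end DateTimeFields (the model name is inferred from the prose, not given in the original):

        from django.db.models import Q

        overlapping = Appointment.objects.filter(
            Q(start__lte=start, end__gt=start) |   # the given start falls inside an appointment
            Q(start__lt=end, end__gte=end)         # the given end falls inside an appointment
        )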

    Read the article

  • How can I load an MP3 or similar music file for display and analysis in wxWidgets?

    - by Jon Cage
    I'm developing a GUI in wxPython which allows a user to generate sequences of colours for some toys I'm building. Part of the program needs to load an MP3 (and potentially other formats further down the line) and display it to the user. That should be sufficient to get started but later I'd like to add features like identifying beats and some crude frequency analysis. Is there any simple way of loading / understanding an MP3's contents to display a plot of its amplitudes to the screen using wxWidgets? I later intend to port to C++/wxWidgets for speed and to avoid having to distribute wxPython.

    Read the article
