Search Results

Search found 14399 results on 576 pages for 'python noob'.

Page 374/576

  • What could cause Django to start failing its own tests after an OS and Django reinstall?

    - by Macha
    I had to reinstall my OS, so I also reinstalled Django 1.1. Since reinstalling, when I run the tests in my app, I get several failures from django.contrib.auth. Logs: http://dpaste.com/178153/ I asked on #django, and no one is sure what is causing the errors. Some of my own code fails its tests because it isn't fully written yet, but that shouldn't cause Django to fail its core tests... I have included django.contrib.admin, which was mentioned as a possible cause.

    Read the article

  • Reverse mapping from a table to a model in SQLAlchemy

    - by Jace
    To provide an activity log in my SQLAlchemy-based app, I have a model like this:

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            activity_by_id = Column(Integer, ForeignKey('users.id'), nullable=False)
            activity_by = relation(User, primaryjoin=activity_by_id == User.id)
            activity_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            activity_type = Column(SmallInteger, nullable=False)
            target_table = Column(Unicode(20), nullable=False)
            target_id = Column(Integer, nullable=False)
            target_title = Column(Unicode(255), nullable=False)

    The log contains entries for multiple tables, so I can't use ForeignKey relations. Log entries are made like this:

        doc = Document(name=u'mydoc', title=u'My Test Document', created_by=user, edited_by=user)
        session.add(doc)
        session.flush()  # See note below
        log = ActivityLog(activity_by=user, activity_type=ACTIVITY_ADD,
                          target_table=Document.__table__.name, target_id=doc.id,
                          target_title=doc.title)
        session.add(log)

    This leaves me with three problems:

    1. I have to flush the session before my doc object gets an id. If I had used a ForeignKey column and a relation mapper, I could have simply called ActivityLog(target=doc) and let SQLAlchemy do the work. Is there any way to work around needing to flush by hand?

    2. The target_table parameter is too verbose. I suppose I could solve this with a target property setter in ActivityLog that automatically retrieves the table name and id from a given instance.

    3. Biggest of all, I'm not sure how to retrieve a model instance from the database. Given an ActivityLog instance log, calling self.session.query(log.target_table).get(log.target_id) does not work, as query() expects a model as parameter.

    One workaround appears to be to use polymorphism and derive all my models from a base model which ActivityLog recognises. Something like this:

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            title = Column(Unicode(255), nullable=False)
            edited_at = Column(DateTime, onupdate=datetime.utcnow, nullable=False)
            entity_type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': entity_type}

        class Document(Entity):
            __tablename__ = 'documents'
            __mapper_args__ = {'polymorphic_identity': 'document'}
            body = Column(UnicodeText, nullable=False)

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            ...
            target_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            target = relation(Entity)

    If I do this, ActivityLog(...).target will give me a Document instance when it refers to a Document, but I'm not sure it's worth the overhead of having two tables for everything. Should I go ahead and do it this way?
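
    One possible approach to problem 3 (a sketch, not from the original post): keep a registry that maps target_table values back to mapped classes and look the class up before querying. The registry and helper names below are hypothetical.

        # Hypothetical registry: list every model that ActivityLog may reference.
        MODEL_REGISTRY = {
            Document.__table__.name: Document,
            # User.__table__.name: User, ...
        }

        def load_target(session, log):
            """Return the model instance an ActivityLog row points at, or None."""
            model = MODEL_REGISTRY.get(log.target_table)
            if model is None:
                return None
            return session.query(model).get(log.target_id)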

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?

        class Dataset:
            individuals = []  # Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys

            def field_set(self):
                # Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals)

            def classified(self, predicted_value):
                # Returns True if all the individuals have the same value for predicted_value

            def fields_exhausted(self, predicted_value):
                # Returns True if all the individuals are identical except for predicted_value

            def lowest_entropy_value(self, predicted_value):
                # Returns the field that will reduce entropy the most

            def __init__(self, individuals=[]):

    and

        class Node:
            ds = Dataset()    # The data that is associated with this Node
            links = []        # List of Nodes, the offspring Nodes of this node
            level = 0         # Tree depth of this Node
            split_value = ''  # Field used to split out this Node from the parent node
            node_value = ''   # Value used to split out this Node from the parent Node

            def split_dataset(self, split_value):
                fields = []    # List of options for split_value amongst the individuals
                datasets = {}  # Dictionary of Datasets, each one with a value from fields[] as its key
                for field in self.ds.field_set()[split_value]:  # Populates the keys of fields[]
                    fields.append(field)
                    datasets[field] = Dataset()
                for i in self.ds.individuals:  # Adds individuals to the datasets.Dataset that matches their result for split_value
                    datasets[i[split_value]].individuals.append(i)  # <--- Causes an infinite loop on the second hit
                for field in fields:  # Creates subnodes from each of the datasets.Dataset options
                    self.add_subnode(datasets[field], split_value, field)

            def add_subnode(self, dataset, split_value='', node_value=''):

            def __init__(self, level, dataset=Dataset()):

    My initialisation code is currently:

        if __name__ == '__main__':
            filename = (sys.argv[1])     # Takes in a CSV file
            predicted_value = "# class"  # Identifies the field from the CSV file that should be predicted
            base_dataset = parse_csv(filename)              # Turns the CSV file into a list of lists
            parsed_dataset = individual_list(base_dataset)  # Turns the list of lists into a list of dictionaries
            root = Node(0, Dataset(parsed_dataset))         # Creates a root node, passing it the full dataset
            root.split_dataset(root.ds.lowest_entropy_value(predicted_value))  # Performs the first split, creating multiple subnodes
            n = root.links[0]
            n.split_dataset(n.ds.lowest_entropy_value(predicted_value))        # Attempts to split the first subnode
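
    One likely culprit (a reading of the code above, not from the original post): individuals and links are class attributes, and the individuals=[] default argument is a single shared list, so every Dataset can end up appending to the same list object. A minimal sketch of the usual fix:

        class Dataset:
            def __init__(self, individuals=None):
                # A fresh list per instance; a class-level list or a mutable
                # default argument would be shared by every Dataset.
                self.individuals = individuals if individuals is not None else []

        class Node:
            def __init__(self, level, dataset=None):
                self.ds = dataset if dataset is not None else Dataset()
                self.links = []        # per-instance, not shared across Nodes
                self.level = level
                self.split_value = ''
                self.node_value = ''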

    Read the article

  • GAE error: "Error: Server Error", how to debug it?

    - by zjm1126
    When I upload my project to Google App Engine, it shows this:

        Error: Server Error
        The server encountered an error and could not complete your request.
        If the problem persists, please report your problem and mention this error message and the query that caused it.

    Why? How can I debug this error? Thanks.

    Read the article

  • need help in site classification

    - by goh
    Hi guys, I have to crawl the contents of several blogs. The problem is that I need to classify whether each blog's author is from a specific school and is talking about the school's affairs. What is the best approach to the crawling, and how should I go about the classification?

    Read the article

  • Accented characters in matplotlib

    - by OldJim
    Does anyone know a way to get matplotlib to render accented characters (é, ã, â, etc.)? For instance, I'm trying to use accented characters in set_yticklabels(), and matplotlib renders squares instead; when I use unicode() it renders the wrong characters. Is there a way to make this work? Thanks in advance, Jim.
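
    One possible approach (a sketch, not from the original post), assuming a Python 2 script: declare the source encoding and pass unicode strings to set_yticklabels(); if squares still appear, point matplotlib at a font that actually contains the glyphs (the font name below is an assumption about what is installed).

        # -*- coding: utf-8 -*-
        import matplotlib
        matplotlib.rcParams['font.family'] = 'DejaVu Sans'  # assumed to be installed
        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()
        ax.plot([1, 2, 3])
        ax.set_yticks([1, 2, 3])
        ax.set_yticklabels([u'é', u'ã', u'â'])  # unicode literals, not byte strings
        plt.savefig('accents.png')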

    Read the article

  • Union on ValuesQuerySet in django

    - by Wuxab
    I've been searching for a way to take the union of querysets in Django. From what I read, you can use query1 | query2 to take the union... This doesn't seem to work when using values(), though. I'd skip using values() until after taking the union, but I need to use annotate() to take the sum of a field and filter on it, and since there's no way to do "group by" I have to use values(). The other suggestions I read were to use Q objects, but I can't think of a way that would work. Do I pretty much need to just use straight SQL, or is there a Django way of doing this? What I want is:

        q1 = mymodel.objects.filter(date__lt='2010-06-11').values('field1', 'field2').annotate(volsum=Sum('volume')).exclude(volsum=0)
        q2 = mymodel.objects.values('field1', 'field2').annotate(volsum=Sum('volume')).exclude(volsum=0)
        query = q1 | q2

    But this doesn't work, and as far as I know I need the values() part, because there's no other way for Sum to know how to act since it's a 15-column table.
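
    One possible workaround (a sketch, not from the original post): evaluate both ValuesQuerySets and take the union in Python, keyed on the grouping fields; note that this pulls both result sets into memory and keeps the first row seen for each key.

        def union_values(querysets, group_by=('field1', 'field2')):
            """Union of several ValuesQuerySets: first row wins per grouping key."""
            merged = {}
            for qs in querysets:
                for row in qs:
                    key = tuple(row[f] for f in group_by)
                    merged.setdefault(key, dict(row))
            return list(merged.values())

        # rows = union_values([q1, q2])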

    Read the article

  • Django and mod_python intermittent error?

    - by Peter
    I have a Django site at http://sm.rutgers.edu/relive/af_api/index/. It is supposed to display "Home of the relive APIs". If you refresh this page many times, you can see different renderings: 1) the expected page; 2) the Django "It worked!" page; 3) an "ImportError at /index/" page. If you scroll down to the ROOT_URLCONF part, you will see it says 'relive.urls'. But apparently it should be 'af_api.urls', which is what is in my settings.py file. Since these results happen randomly, is it possible that either Django or mod_python is behaving erratically?

    Read the article

  • Exception Handling in google app engine

    - by Rahul99
    I am raising an exception using:

        if UserId == '' and Password == '':
            raise Exception.MyException, "wrong userId or password"

    but I want to print the error message on the same page.

        class MyException(Exception):
            def __init__(self, msg):
                Exception.__init__(self, msg)
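
    One possible way to handle this (a sketch, not from the original post), assuming a webapp RequestHandler: raise the exception class directly (MyException, not Exception.MyException), catch it in the handler, and write the message back into the response. The handler name and form fields below are assumptions.

        from google.appengine.ext import webapp

        class MyException(Exception):
            pass

        class LoginHandler(webapp.RequestHandler):  # hypothetical handler
            def post(self):
                user_id = self.request.get('UserId')
                password = self.request.get('Password')
                try:
                    if user_id == '' and password == '':
                        raise MyException("wrong userId or password")
                    # ... normal login flow ...
                except MyException, e:
                    # Render the same page again, including the error message.
                    self.response.out.write('<p class="error">%s</p>' % e)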

    Read the article

  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys, but it would be nice if I could use a couple more types, hopefully up to anything that is hashable (i.e. the same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme. Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match:

        >>> import pickle
        >>> import cPickle
        >>> def dumps(x):
        ...     print repr(pickle.dumps(x))
        ...     print repr(cPickle.dumps(x))
        ...
        >>> dumps(1)
        'I1\n.'
        'I1\n.'
        >>> dumps('hello')
        "S'hello'\np0\n."
        "S'hello'\np1\n."
        >>> dumps((1, 2, 'hello'))
        "(I1\nI2\nS'hello'\np0\ntp1\n."
        "(I1\nI2\nS'hello'\np1\ntp2\n."

    Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows):

        def is_reprable_key(key):
            return type(key) in (int, str, unicode) or (type(key) == tuple and all(
                is_reprable_key(x) for x in key))

    The question for this method is whether repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. It also would not work for integers where 2**32 - 1 < x < 2**64 when jumping between 32- and 64-bit platforms. Are there any other conditions (i.e. do strings serialize differently under different conditions)? (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.) Any insights?
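
    One possible alternative (a sketch, not from the original post): define an explicit canonical encoding rather than relying on repr or pickle output being stable; this only covers the conservative type set from the question.

        def canonical_dumps(key):
            """Deterministic, type-tagged encoding for int/long/str/unicode/tuple keys.

            Scalars carry a one-letter type tag; strings are hex-encoded so the
            output contains no separators; tuples are parenthesised, which keeps
            nested tuples unambiguous.
            """
            if isinstance(key, bool):
                raise TypeError('bool keys not supported here')
            if isinstance(key, (int, long)):
                return 'i%d' % key
            if isinstance(key, str):
                return 's' + key.encode('hex')
            if isinstance(key, unicode):
                return 'u' + key.encode('utf-8').encode('hex')
            if isinstance(key, tuple):
                return '(' + ','.join(canonical_dumps(x) for x in key) + ')'
            raise TypeError('unsupported key type: %r' % (type(key),))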

    Read the article

  • Sqlalchemy layout with WSGI application

    - by TheDude
    I'm working on writing a small WSGI application using Bottle and SQLAlchemy, and I am confused about how the "layout" of my application should be in terms of SQLAlchemy. My confusion is with creating engines and sessions. My understanding is that I should only create one engine with the create_engine method. Should I be creating an engine instance in the global namespace in some sort of singleton pattern and creating sessions based off of it? How have you done this in your projects? Any insight would be appreciated. The examples in the documentation don't seem to make this entirely clear (unless I'm missing something obvious). Any thoughts?
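
    One common layout (a sketch, not the only reading of the docs): create the engine once at module level and hand out thread-local sessions through scoped_session. The module name and database URL below are placeholders.

        # db.py (hypothetical module)
        from sqlalchemy import create_engine
        from sqlalchemy.orm import scoped_session, sessionmaker

        engine = create_engine('sqlite:///app.db')            # one engine per process
        Session = scoped_session(sessionmaker(bind=engine))   # thread-local sessions

        # In a Bottle route handler:
        #     session = Session()
        #     ... use session ...
        #     Session.remove()   # at the end of the request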

    Read the article

  • Parsing html for domain links

    - by Hallik
    I have a script that parses an HTML page for all the links within it. I am getting all of them fine, but I have a list of domains I want to compare them against. So a sample list contains:

        list = ['www.domain.com', 'sub.domain.com']

    But I may have a list of links that look like:

        http://domain.com
        http://sub.domain.com/some/other/page

    I can strip off the http:// just fine, but both of the two example links I just posted should match. The first I would like to match against www.domain.com, and the second I would like to match against the subdomain in the list. Right now I am using urllib2 for parsing the HTML. What are my options for completing this task?
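
    One possible matching approach (a sketch, not from the original post): normalize each link to its hostname with urlparse, strip a leading "www." from the listed domains, then test for an exact match or a subdomain match.

        from urlparse import urlparse  # Python 2; use urllib.parse on Python 3

        domains = ['www.domain.com', 'sub.domain.com']

        def matches(link, domains):
            host = urlparse(link).netloc.lower()
            for d in domains:
                bare = d.lower()
                if bare.startswith('www.'):
                    bare = bare[4:]
                if host == d.lower() or host == bare or host.endswith('.' + bare):
                    return True
            return False

        # matches('http://domain.com', domains)                      -> True
        # matches('http://sub.domain.com/some/other/page', domains)  -> True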

    Read the article

  • Counting the number of items in Python's 'for'

    - by Markum
    Kind of hard to explain, but when I run something like this:

        fruits = ['apple', 'orange', 'banana', 'strawberry', 'kiwi']
        for fruit in fruits:
            print fruit.capitalize()

    It gives me this, as expected:

        Apple
        Orange
        Banana
        Strawberry
        Kiwi

    How would I edit that code so that it would "count" the number of times it performs the for, and print this?

        1 Apple
        2 Orange
        3 Banana
        4 Strawberry
        5 Kiwi
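
    One possible answer (a sketch, not from the original post): the builtin enumerate yields a running count alongside each item; the start=1 argument (Python 2.6+) makes the count begin at 1.

        fruits = ['apple', 'orange', 'banana', 'strawberry', 'kiwi']
        for i, fruit in enumerate(fruits, start=1):
            print i, fruit.capitalize()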

    Read the article

  • Making all variables accessible to namespace

    - by Gökhan Sever
    Hello, say I have a simple function:

        def myfunc():
            a = 4.2
            b = 5.5
            # ... many similar variables ...

    I use this function one time only, and I am wondering what the easiest way is to make all the variables inside the function accessible to my main namespace. Do I have to declare global for each item, or are there any other suggested methods? Thanks.
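
    One common workaround (a sketch, not from the original post): have the function return locals() and merge the result into the module namespace; whether polluting the global namespace like this is a good idea is a separate question.

        def myfunc():
            a = 4.2
            b = 5.5
            # ... many similar variables ...
            return locals()          # dict of every local name -> value

        globals().update(myfunc())
        print a, b                   # 4.2 5.5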

    Read the article

  • any faster alternative??

    - by kaushik
        cost = 0
        for i in range(12):
            cost = cost + math.pow(float(float(q[i]) - float(w[i])), 2)
        cost = math.sqrt(cost)

    Any faster alternative to this? I need to improve my entire code, so I am trying to improve the performance of each statement. Thank you.
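
    One tighter version (a sketch, not from the original post), assuming q and w are the sequences from the snippet above: a generator expression with the ** operator avoids the repeated math.pow calls and redundant casts; for large numeric arrays NumPy would be the bigger win.

        import math

        cost = math.sqrt(sum((float(a) - float(b)) ** 2 for a, b in zip(q[:12], w[:12])))

        # With NumPy, assuming q and w can be converted to float arrays:
        #     import numpy as np
        #     cost = np.linalg.norm(np.asarray(q[:12], dtype=float) - np.asarray(w[:12], dtype=float))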

    Read the article

  • any faster alternative??

    - by kaushik
    I have to read a file from a particular line number, and I know the line number, say "n". I have been thinking of two choices:

        1) for i in range(n):
               fname.readline()
           k = fname.readline()
           print k

        2) i = 0
           for line in fname:
               dictionary[i] = line
               i = i + 1

    But I want to know a faster alternative, as I might have to perform this on different files 20000 times. Are there any better alternatives? Thank you.
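
    One possible alternative (a sketch, not from the original post): itertools.islice skips the first n lines without storing them and without building a dictionary; the line number here is 0-based and the file path is a placeholder.

        from itertools import islice

        def read_line(path, n):
            """Return line n (0-based) of the file, or None if the file is shorter."""
            with open(path) as f:
                return next(islice(f, n, n + 1), None)

        # k = read_line('data.txt', 42)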

    Read the article

  • Ternary operator

    - by Antoine Leclair
    In PHP, I often use the ternary operator to add an attribute to an html element if it applies to the element in question. For example:

        <select name="blah">
            <option value="1"<?= $blah == 1 ? ' selected="selected"' : '' ?>> One </option>
            <option value="2"<?= $blah == 2 ? ' selected="selected"' : '' ?>> Two </option>
        </select>

    I'm starting a project with Pylons using Mako for the templating. How can I achieve something similar? Right now, I see two possibilities that are not ideal.

    Solution 1:

        <select name="blah">
        % if blah == 1:
            <option value="1" selected="selected">One</option>
        % else:
            <option value="1">One</option>
        % endif
        % if blah == 2:
            <option value="2" selected="selected">Two</option>
        % else:
            <option value="2">Two</option>
        % endif
        </select>

    Solution 2:

        <select name="blah">
            <option value="1"
            % if blah == 1:
                selected="selected"
            % endif
            >One</option>
            <option value="2"
            % if blah == 2:
                selected="selected"
            % endif
            >Two</option>
        </select>

    In this particular case, the value is equal to the variable tested (value="1" = blah == 1), but I use the same pattern in other situations, like <?= isset($variable) ? ' value="$variable" : '' ?>. I am looking for a clean way to achieve this using Mako.
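
    One possible Mako answer (a sketch, not from the original post): ${} expressions accept arbitrary Python expressions, so Python's conditional expression (x if cond else y) can play the role of PHP's ternary operator.

        from mako.template import Template

        tmpl = Template('''
        <select name="blah">
            <option value="1" ${'selected="selected"' if blah == 1 else ''}>One</option>
            <option value="2" ${'selected="selected"' if blah == 2 else ''}>Two</option>
        </select>
        ''')

        print tmpl.render(blah=2)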

    Read the article

  • How to write data by dynamic parameter name

    - by Maxim Welikobratov
    I need to be able to write data to the Google App Engine datastore for some known entity, but I don't want to write assignment code for each parameter of the entity. I mean, I don't want to do this:

        val_1 = self.request.get('prop_1')
        val_2 = self.request.get('prop_2')
        ...
        val_N = self.request.get('prop_N')
        item.prop_1 = val_1
        item.prop_2 = val_2
        ...
        item.prop_N = val_N
        item.put()

    Instead, I want to do something like this:

        args = self.request.arguments()
        for prop_name in args:
            item.set(prop_name, self.request.get(prop_name))
        item.put()

    Does anybody know how to do this trick?
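
    One possible trick (a sketch, not from the original post): the builtin setattr assigns an attribute by name, which also works for datastore model properties; with a db.Expando model arbitrary names are accepted, otherwise every request parameter must correspond to a real property.

        args = self.request.arguments()   # names of all request parameters
        for prop_name in args:
            setattr(item, prop_name, self.request.get(prop_name))
        item.put()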

    Read the article

  • Clean Method for a ModelForm in a ModelFormSet made by modelformset_factory

    - by Salyangoz
    I was wondering if my approach is right or not. Assume the Restaurant model has only a name.

    forms.py:

        class BaseRestaurantOpinionForm(forms.ModelForm):
            opinion = forms.ChoiceField(
                choices=(('yes', 'yes'), ('no', 'no'), ('meh', 'meh')),
                required=False,
            )

            class Meta:
                model = Restaurant
                fields = ['opinion']

    views.py:

        class RestaurantVoteListView(ListView):
            queryset = Restaurant.objects.all()
            template_name = "restaurants/list.html"

            def dispatch(self, request, *args, **kwargs):
                if request.POST:
                    queryset = self.request.POST.dict()
                    # clean here
                    return HttpResponse(json.dumps(queryset), content_type="application/json")

            def get_context_data(self, **kwargs):
                context = super(EligibleRestaurantsListView, self).get_context_data(**kwargs)
                RestaurantFormSet = modelformset_factory(Restaurant, form=BaseRestaurantOpinionForm)
                extra_context = {
                    'eligible_restaurants': self.get_eligible_restaurants(),
                    'forms': RestaurantFormSet(),
                }
                context.update(extra_context)
                return context

    Basically I'll be getting 3 voting buttons for each restaurant, and then I want to read the votes. I was wondering where/which clean function I need to call to get something like:

        {'3': 'yes', '2': 'no'}   # { 'restaurant_id' : 'vote' }

    This is my second/third question, so tell me if I'm being unclear. Thanks.
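
    One possible way to read the votes (a sketch, not from the original post): bind the same formset to request.POST, validate it, and collect each form's cleaned 'opinion' keyed by its instance's primary key. The post() handler below is an assumption about where this would live.

        def post(self, request, *args, **kwargs):
            RestaurantFormSet = modelformset_factory(Restaurant, form=BaseRestaurantOpinionForm)
            formset = RestaurantFormSet(request.POST)
            votes = {}
            if formset.is_valid():
                for form in formset:
                    opinion = form.cleaned_data.get('opinion')
                    if opinion:
                        votes[str(form.instance.pk)] = opinion
            return HttpResponse(json.dumps(votes), content_type="application/json")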

    Read the article

  • getting global name not defined error

    - by nashr rafeeg
    I have the following class:

        class notify():
            def __init__(self, server="localhost", port=23053):
                self.host = server
                self.port = port
                register = gntp.GNTPRegister()
                register.add_header('Application-Name', "SVN Monitor")
                register.add_notification("svnupdate", True)
                growl(register)

            def svn_update(self, author="Unknown", files=0):
                notice = gntp.GNTPNotice()
                notice.add_header('Application-Name', "SVN Monitor")
                notice.add_header('Notification-Name', "svnupdate")
                notice.add_header('Notification-Title', "SVN Commit")
                # notice.add_header('Notification-Icon', "")
                notice.add_header('Notification-Text', Msg)
                growl(notice)

            def growl(data):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.connect((self.host, self.port))
                s.send(data)
                response = gntp.parse_gntp(s.recv(1024))
                print response
                s.close()

    But whenever I try to use this class via the following code, I get 'NameError: global name 'growl' is not defined':

        from growlnotify import *
        n = notify()
        n.svn_update()

    Does anyone have an idea what is going on here? Cheers, nash
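
    One likely fix (a reading of the traceback, not from the original post): growl is a method, so it needs self as its first parameter and has to be called as self.growl(...); a bare growl(...) inside a method is looked up as a global name, which is exactly the NameError reported. A minimal sketch of the corrected shape:

        class notify(object):
            def __init__(self, server="localhost", port=23053):
                self.host = server
                self.port = port
                register = gntp.GNTPRegister()
                register.add_header('Application-Name', "SVN Monitor")
                register.add_notification("svnupdate", True)
                self.growl(register)           # call through self, not as a global

            def growl(self, data):             # methods take self as the first argument
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.connect((self.host, self.port))
                s.send(data)
                print gntp.parse_gntp(s.recv(1024))
                s.close()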

    Read the article

  • PostgreSQL pgdb driver raises "can't rollback" exception

    - by David Parunakian
    Hello, for some reason I'm getting an OperationalError with a "can't rollback" message when I attempt to roll back my transaction in the following context:

        try:
            cursors[instance].execute("lock revision, app, timeout IN SHARE MODE")
            cursors[instance].execute("insert into app (type, active, active_revision, contents, z) values ('session', true, %s, %s, 0) returning id", (cRevision, sessionId))
            sAppId = cursors[instance].fetchone()[0]
            cursors[instance].execute("insert into revision (app_id, type) values (%s, 'active')", (sAppId,))
            cursors[instance].execute("insert into timeout (app_id, last_seen) values (%s, now())", (sAppId,))
            connections[instance].commit()
        except pgdb.DatabaseError, e:
            connections[instance].rollback()
            return "{status: 'error', errno:4, errmsg: \"%s\"}" % (str(e).replace('\"', '\\"').replace('\n', '\\n').replace('\r', '\\r'))

    The driver in use is PGDB. What is fundamentally wrong here?

    Read the article
