Search Results

Search found 14399 results on 576 pages for 'python noob'.

Page 338/576

  • How long should it take for someone to be able to type code from memory?

    - by LordSnoutimus
    I understand that this question could be answered with a simple sentence and that it may be viewed as subjective. However, I am a young student interested in pursuing a career in programming, and I wondered how long it took some of you to reach your current level of experience. I ask because I am currently building an application in Java on the Android platform, and it bothers me that I constantly have to look up how to write certain sections of code, such as writing to a database or how an if statement should be structured. My question really is: how long did it take for you to become experienced enough to know exactly how your next line of code was going to look, before you even wrote it?

    Read the article

  • How to Convert multiple sets of Data going from left to right to top to bottom the Pythonic way?

    - by ThinkCode
    Following is a sample of the sets of contacts for each company, going from left to right:

        ID  Company  ContactFirst1  ContactLast1  Title1  Email1             ContactFirst2  ContactLast2  Title2  Email2
        1   ABC      John           Doe           CEO     [email protected]  Steve          Bern          CIO     [email protected]

    How do I get them to go top to bottom, as shown here?

        ID  Company  ContactFirst  ContactLast  Title  Email
        1   ABC      John          Doe          CEO    [email protected]
        1   ABC      Steve         Bern         CIO    [email protected]

    I am hoping there is a Pythonic way of solving this task. Any pointers or samples are really appreciated! P.S.: In the actual file there are 10 sets of contacts going from left to right, and there are a few thousand such records. It is a CSV file, which I loaded into MySQL to manipulate the data.
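
    For reference, a minimal sketch of the wide-to-long reshape using the csv module. It assumes each contact occupies four columns (first, last, title, email) after the ID and Company columns; the file names and that column layout are assumptions:

        import csv

        BLOCK = 4  # assumed columns per contact: First, Last, Title, Email

        with open('companies_wide.csv') as inf, open('companies_long.csv', 'w') as outf:
            reader = csv.reader(inf)
            writer = csv.writer(outf)
            writer.writerow(['ID', 'Company', 'ContactFirst', 'ContactLast', 'Title', 'Email'])
            for row in reader:
                rec_id, company, contacts = row[0], row[1], row[2:]
                # Emit one output row per non-empty contact block.
                for i in range(0, len(contacts), BLOCK):
                    block = contacts[i:i + BLOCK]
                    if any(field.strip() for field in block):
                        writer.writerow([rec_id, company] + block)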

    Read the article

  • Hide deprecated methods from tab completion

    - by Morgoth
    I would like to control which methods appear when a user tab-completes on a custom object in IPython; in particular, I want to hide functions that I have deprecated. I still want these methods to be callable, but I don't want users to see them and start using them while inspecting the object. Is this possible?
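
    For reference, a sketch of one possible approach: tab completion is typically driven by dir(), and dir() consults a custom __dir__ method (Python 2.6+), so deprecated names can be filtered out of the listing while remaining callable. Whether a given IPython version honours __dir__ is an assumption to verify:

        class MyObject(object):
            _deprecated = set(['old_method'])

            def old_method(self):      # still callable, just hidden from dir()
                return self.new_method()

            def new_method(self):
                return 42

            def __dir__(self):
                # Report every attribute except the deprecated ones.
                names = set(dir(self.__class__)) | set(self.__dict__)
                return sorted(names - self._deprecated)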

    Read the article

  • How to add legend to imshow() in matplotlib

    - by rankthefirst
    I am using matplotlib. In plot() or bar() we can easily add a legend if we attach labels to the artists, but what about contourf() or imshow()? I know there is colorbar(), which presents the colour range, but that is not what I need: I want a legend with names (labels). The best I can think of is to add a label to each element in the matrix and then try legend() to see if it works, but how do I attach a label to a value? In my case the raw data looks like:

        1,2,3,3,4
        2,3,4,4,5
        1,1,1,2,2

    where, for example, 1 represents 'grass', 2 represents 'sand', 3 represents 'hill', and so on. imshow() works perfectly with my data, but without the legend. My question is: is there a function that can add the legend automatically, so that I just have to call something like someFunction('grass', 'sand', ...)? If there isn't, how do I attach labels to each value in the matrix, e.g. label all the 1s 'grass', all the 2s 'sand', and so on? Thank you!
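
    For reference, a common workaround (a sketch, not necessarily the only way): build proxy artists whose colours match the colormap entries and pass them to legend() explicitly. The category names for 1-3 come from the question; the names for 4 and 5 and the colour choices are made up:

        import numpy as np
        import matplotlib.pyplot as plt
        import matplotlib.patches as mpatches
        from matplotlib.colors import ListedColormap

        data = np.array([[1, 2, 3, 3, 4],
                         [2, 3, 4, 4, 5],
                         [1, 1, 1, 2, 2]])

        names = ['grass', 'sand', 'hill', 'water', 'rock']  # 'water'/'rock' assumed
        cmap = ListedColormap(['green', 'yellow', 'brown', 'blue', 'grey'])

        plt.imshow(data, cmap=cmap, vmin=1, vmax=5)
        # One proxy rectangle per category, coloured like the matching cell value.
        proxies = [mpatches.Patch(color=cmap(i)) for i in range(len(names))]
        plt.legend(proxies, names)
        plt.show()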

    Read the article

  • My method is not being recognized within my own program. Newbie mistake, probably.

    - by Sergio Tapia
    Here's my code:

        sentenceToTranslate = raw_input("Please write in the sentence you want to translate: ")
        words = sentenceToTranslate.split(" ")
        for word in words:
            if isVowel(word[0]):
                print "TEST"

        def isVowel(letter):
            if letter.lower() == "a" or letter.lower() == "e" or letter.lower() == "i" or letter.lower() == "o" or letter.lower() == "u":
                return True
            else:
                return False

    The error I get is:

        NameError: name 'isVowel' is not defined

    What am I doing wrong?
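
    For reference, the likely cause: at module level, a name must be bound before the line that uses it runs, and here the loop executes before the def statement has been reached. A sketch of the reordered version:

        def isVowel(letter):
            # Defined before it is called, so the name exists when the loop runs.
            return letter.lower() in "aeiou"

        sentenceToTranslate = raw_input("Please write in the sentence you want to translate: ")
        words = sentenceToTranslate.split(" ")
        for word in words:
            if isVowel(word[0]):
                print "TEST"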

    Read the article

  • Unbuffered subprocess output (last line missing)

    - by plok
    I must be overlooking something terribly obvious. I need to execute a C program, display its output in real time, and finally parse its last line, which should be straightforward since the last line printed is always the same.

        process = subprocess.Popen(args, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        # None indicates that the process hasn't terminated yet.
        while process.poll() is None:
            # Always save the last non-empty line that was output by the child
            # process, as it will write an empty line when closing its stdout.
            out = process.stdout.readline()
            if out:
                last_non_empty_line = out
            if verbose:
                sys.stdout.write(out)
                sys.stdout.flush()

        # Parse 'out' here...

    Once in a while, however, the last line is not printed. The default value for Popen's bufsize is 0, so it is supposed to be unbuffered. I have also tried, to no avail, adding fflush(stdout) to the C code just before exiting, but it seems there is no need to flush a stream before exiting a program anyway. Ideas, anyone?
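
    For reference, one plausible explanation (an assumption, not a confirmed diagnosis): poll() can report termination while output is still sitting in the pipe, so the loop exits before the final readline() happens. A sketch that drains the pipe after the loop to cover that case:

        import subprocess
        import sys

        process = subprocess.Popen(args, shell=True,
                                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        last_non_empty_line = None
        while process.poll() is None:
            out = process.stdout.readline()
            if out:
                last_non_empty_line = out
                sys.stdout.write(out)
                sys.stdout.flush()

        # The child has exited, but the pipe may still hold buffered output:
        for out in process.stdout.readlines():
            if out.strip():
                last_non_empty_line = out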

    Read the article

  • How to create a custom admin configuration panel in Django?

    - by Matteo
    I would like to create a configuration panel for the homepage of the web app I'm designing with Django. This configuration panel should let me set some basic options, such as highlighting some news or setting a showcase banner. Basically I don't need an app with many rows, just a panel page with some configuration options. The administration area automatically generated by Django doesn't seem to handle this, as far as I can see, so I'm asking for some directions. Any hint is highly appreciated. Thank you in advance. Matteo
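
    For reference, one common pattern (a sketch, under the assumption that a single database row is acceptable): store the options in a model that only ever has one instance and let the stock admin edit it. The model and field names here are made up:

        from django.db import models

        class HomepageConfig(models.Model):
            highlight_news = models.BooleanField(default=False)
            showcase_banner = models.CharField(max_length=200, blank=True)

            def save(self, *args, **kwargs):
                self.pk = 1  # force a single row, so the admin acts as a settings panel
                super(HomepageConfig, self).save(*args, **kwargs)

        # admin.py
        from django.contrib import admin
        admin.site.register(HomepageConfig)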

    Read the article

  • List as a key to a dictionary

    - by williamx
    Let's say I have a list:

        a = ['apple', 'orange']

    and a dictionary:

        d = {'apple': [2, 4], 'carrot': [44, 33], 'orange': [345, 667]}

    How can I use the list a as a key to look up values in the dictionary d? I want the result to be written to a comma-separated text file like this:

        apple, orange
        2, 44
        4, 33
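
    For reference, a minimal sketch: index the dictionary with each list element and zip the value lists into rows. (Note that with the data above this writes orange's values, 345 and 667, in the second column, which differs slightly from the sample output; the output file name is made up.)

        import csv

        a = ['apple', 'orange']
        d = {'apple': [2, 4], 'carrot': [44, 33], 'orange': [345, 667]}

        with open('out.txt', 'w') as f:
            writer = csv.writer(f)
            writer.writerow(a)                       # header row: apple, orange
            for row in zip(*(d[key] for key in a)):  # pair the values positionally
                writer.writerow(row)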

    Read the article

  • Framework for Implementing REST web service in Django

    - by Laizer
    I'm looking to implement a RESTful interface for a Django application. It is primarily a data-service application, and the interface will be (at this point) read-only. The question is which Django toolsets/frameworks make the most sense for this task. I see django-rest and django-piston, and there's also the option of rolling my own. The question was asked here a good two years back, and I'd like to know what the current state of play is. In that question, circa 2008, the strong majority vote was not to use any framework at all, but simply to create Django views that reply with e.g. JSON. (The question was also addressed, circa 2008, here.) In the current landscape, what makes the most sense?

    Read the article

  • Scipy Negative Distance? What?

    - by disappearedng
    I have an input file in which all values are floating-point numbers with 4 decimal places, i.e.

        13359 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0007 ...

    (the first field is the ID). My class uses the loadVectorsFromFile method, which multiplies each value by 10000 and then int()s the numbers. On top of that, I also loop through each vector to ensure that there are no negative values inside. However, when I perform _hclustering, I continually see the error "Linkage Z contains negative values". I seriously think this is a bug, because I checked my values, and they are nowhere near small or big enough to approach the limits of floating-point numbers, and the formula that I used to derive the values in the file uses absolute values (my input is DEFINITELY right). Can someone enlighten me as to why I am seeing this weird error? What is going on that causes this negative-distance error?

    =====

        def loadVectorsFromFile(self, limit, loc, assertAllPositive=True, inflate=True):
            """Inflate to prevent "negative" distance; we use 4 decimal points, so *10000"""
            vectors = {}
            self.winfo("Each vector is set to have %d limit in length" % limit)
            with open(loc) as inf:
                for line in filter(None, inf.read().split('\n')):
                    l = line.split('\t')
                    if limit:
                        scores = map(float, l[1:limit+1])
                    else:
                        scores = map(float, l[1:])
                    if inflate:
                        vectors[l[0]] = map(lambda x: int(x*10000), scores)  # int might save space
                    else:
                        vectors[l[0]] = scores
            if assertAllPositive:
                # Assert that it has no negative value
                for dirID, l in vectors.iteritems():
                    if reduce(operator.or_, map(lambda x: x < 0, l)):
                        self.werror("Vector %s has negative values!" % dirID)
            return vectors

        def main(self, inputDir, outputDir, limit=0, inFname="data.vectors.all",
                 mappingFname='all.id.features.group.intermediate'):
            """Loads vectors from a file and starts clustering

            INPUT
                vectors is { featureID: tfidfVector (list), }
            """
            IDFeatureDic = loadIdFeatureGroupDicFromIntermediate(pjoin(self.configDir, mappingFname))
            if not os.path.exists(outputDir):
                os.makedirs(outputDir)
            vectors = self.loadVectorsFromFile(limit, pjoin(inputDir, inFname))
            for threshold in map(lambda x: float(x)/30, range(20, 30)):
                clusters = self._hclustering(threshold, vectors)
                if clusters:
                    outputLoc = pjoin(outputDir, "threshold.%s.result" % str(threshold))
                    with open(outputLoc, 'w') as outf:
                        for clusterNo, cluster in clusters.iteritems():
                            outf.write('%s\n' % str(clusterNo))
                            for featureID in cluster:
                                feature, group = IDFeatureDic[featureID]
                                outline = "%s\t%s\n" % (feature, group)
                                outf.write(outline.encode('utf-8'))
                            outf.write("\n")
                else:
                    continue

        def _hclustering(self, threshold, vectors):
            """Function which you should call to vary the threshold

            vectors: { featureID: [tfidf score, tfidf score, ...] }
            """
            clusters = defaultdict(list)
            if len(vectors) > 1:
                try:
                    results = hierarchy.fclusterdata(vectors.values(), threshold, metric='cosine')
                except ValueError, e:
                    self.werror("_hclustering: %s" % str(e))
                    return False
                for i, featureID in enumerate(vectors.keys()):

    Read the article

  • With sqlalchemy how to dynamically bind to database engine on a per-request basis

    - by Peter Hansen
    I have a Pylons-based web application which connects via SQLAlchemy (v0.5) to a Postgres database. For security, rather than following the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security. Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date. We're using SQLAlchemy's declarative package, though I can't see that this has any bearing on the matter. Most examples of SQLAlchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password which is used throughout the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module level in the database model files. My question is: how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request? We have this working, we think, but I'm not only uncertain of its safety, I also think it looks incredibly heavyweight for the situation. Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this:

        # in lib/base.py on the BaseController class
        def __call__(self, environ, start_response):
            # note: web session contains {'username': XXX, 'password': YYY}
            url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session
            url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session
            finance = create_engine(url1)
            staff = create_engine(url2)
            db_configure(staff, finance)  # see below
            ... etc

        # in another file
        Session = scoped_session(sessionmaker())

        def db_configure(staff, finance):
            s = Session()
            from db.finance import Employee, Customer, Invoice
            for c in [Employee, Customer, Invoice]:
                s.bind_mapper(c, finance)
            from db.staff import Project, Hour
            for c in [Project, Hour]:
                s.bind_mapper(c, staff)
            s.close()  # prevents leaking connections between sessions?

    So the create_engine() calls occur on every request... I can see that being needed, and the connection pool probably caches them and does things sensibly. But calling Session.bind_mapper() once for each table, on every request? It seems like there has to be a better way. Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.
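
    For reference, a possible simplification (a sketch; worth verifying against the SQLAlchemy 0.5 docs): Session accepts a binds dictionary mapping classes to engines, which would replace the per-mapper bind_mapper() loop with a single configure call per request:

        # Sketch: bind each group of models to its engine in one call per request.
        from db.finance import Employee, Customer, Invoice
        from db.staff import Project, Hour

        def db_configure(staff, finance):
            binds = dict.fromkeys([Employee, Customer, Invoice], finance)
            binds.update(dict.fromkeys([Project, Hour], staff))
            Session.configure(binds=binds)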

    Read the article

  • missing elements from pcap?

    - by Matthew
    When I check the attributes available on the pcap module, I expect to see something like:

        ['DLT_AIRONET_HEADER', 'DLT_APPLE_IP_OVER_IEEE1394', 'DLT_ARCNET', 'DLT_ARCNET_LINUX', 'DLT_ATM_CLIP', 'DLT_ATM_RFC1483', 'DLT_AURORA', 'DLT_AX25', 'DLT_CHAOS', 'DLT_CISCO_IOS', 'DLT_C_HDLC', 'DLT_DOCSIS', 'DLT_ECONET', 'DLT_EN10MB', 'DLT_EN3MB', 'DLT_ENC', 'DLT_FDDI', 'DLT_FRELAY', 'DLT_IEEE802', 'DLT_IEEE802_11', 'DLT_IEEE802_11_RADIO', 'DLT_IEEE802_11_RADIO_AVS', 'DLT_IPFILTER', 'DLT_IP_OVER_FC', 'DLT_JUNIPER_ATM1', 'DLT_JUNIPER_ATM2', 'DLT_JUNIPER_ES', 'DLT_JUNIPER_GGSN', 'DLT_JUNIPER_MFR', 'DLT_JUNIPER_MLFR', 'DLT_JUNIPER_MLPPP', 'DLT_JUNIPER_MONITOR', 'DLT_JUNIPER_SERVICES', 'DLT_LINUX_IRDA', 'DLT_LINUX_SLL', 'DLT_LOOP', 'DLT_LTALK', 'DLT_NULL', 'DLT_PFLOG', 'DLT_PPP', 'DLT_PPP_BSDOS', 'DLT_PPP_ETHER', 'DLT_PPP_SERIAL', 'DLT_PRISM_HEADER', 'DLT_PRONET', 'DLT_RAW', 'DLT_RIO', 'DLT_SLIP', 'DLT_SLIP_BSDOS', 'DLT_SUNATM', 'DLT_SYMANTEC_FIREWALL', 'DLT_TZSP', '__builtins__', '__doc__', '__file__', '__name__', '_newclass', '_object', '_pcap', '_swig_getattr', '_swig_setattr', 'aton', 'dltname', 'dltvalue', 'findalldevs', 'lookupdev', 'lookupnet', 'ntoa', 'pcapObject', 'pcapObjectPtr']

    Note the pcapObject entry. However, all I actually get when running dir(pcap) is:

        ['DLT_ARCNET', 'DLT_AX25', 'DLT_CHAOS', 'DLT_EN10MB', 'DLT_EN3MB', 'DLT_FDDI', 'DLT_IEEE802', 'DLT_LINUX_SLL', 'DLT_LOOP', 'DLT_NULL', 'DLT_PFLOG', 'DLT_PFSYNC', 'DLT_PPP', 'DLT_PRONET', 'DLT_RAW', 'DLT_SLIP', '__author__', '__builtins__', '__copyright__', '__doc__', '__file__', '__license__', '__name__', '__url__', '__version__', 'bpf', 'dltoff', 'ex_name', 'lookupdev', 'pcap', 'sys']

    Note the lack of pcapObject. Why is this? What could cause it?

    Read the article

  • Running an allocation simulation repeatedly breaks after the first run.

    - by Az
    Background

    I have a bunch of students, their desired projects, and the supervisors for the respective projects. I'm running a battery of simulations to see which projects the students end up with, which will allow me to gather some useful statistics required for feedback. So this is essentially a Monte Carlo simulation where I'm randomising the list of students and then iterating through it, allocating projects until I hit the end of the list. Then the process is repeated. Note that, within a single session, after each successful allocation of a project the following take place:

        + the project is set to allocated and cannot be given to another student
        + the supervisor has a fixed quota of students he can supervise; this is decremented by 1
        + once the quota hits 0, all the projects from that supervisor become blocked, which has the same effect as a project being allocated

    Code

        def resetData():
            for student in students.itervalues():
                student.allocated_project = None
            for supervisor in supervisors.itervalues():
                supervisor.quota = 0
            for project in projects.itervalues():
                project.allocated = False
                project.blocked = False

    The role of resetData() is to "reset" certain bits of the data. For example, when a project is successfully allocated, project.allocated for that project is flipped to True. That's what I want for a single run, but for the next run it needs to be deallocated. Above, I'm iterating through the three dictionaries -- one each for students, projects and supervisors -- where the information is stored. The next bit is the "Monte Carlo" simulation for the allocation algorithm:

        sesh_id = 1
        for trial in range(50):
            for id in randomiseStudents(1):
                stud_id = id
                student = students[id]
                if not student.preferences:
                    continue  # Ignoring the students who've not entered any preferences
                for rank in ranks:
                    temp_proj = random.choice(list(student.preferences[rank]))
                    if not (temp_proj.allocated or temp_proj.blocked):
                        alloc_proj = student.allocated_proj_ref = temp_proj.proj_id
                        alloc_proj_rank = student.allocated_rank = rank
                        successActions(temp_proj)
                        temp_alloc = Allocated(sesh_id, stud_id, alloc_proj, alloc_proj_rank)
                        print temp_alloc  # Explained
                        break
            sesh_id += 1
            resetData()  # Refer to def resetData() above

    All randomiseStudents(1) does is randomise the order of the students. Allocated is a class defined as:

        class Allocated(object):
            def __init__(self, sesh_id, stud_id, alloc_proj, alloc_proj_rank):
                self.sesh_id = sesh_id
                self.stud_id = stud_id
                self.alloc_proj = alloc_proj
                self.alloc_proj_rank = alloc_proj_rank

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "%s - Student: %s (Project: %s - Rank: %s)" % (self.sesh_id, self.stud_id, self.alloc_proj, self.alloc_proj_rank)

    Output and problem

    Now if I run this I get output such as this (truncated):

        1 - Student: 7720 (Project: 1100241 - Rank: 1)
        1 - Student: 7832 (Project: 1100339 - Rank: 1)
        1 - Student: 7743 (Project: 1100359 - Rank: 1)
        1 - Student: 7820 (Project: 1100261 - Rank: 2)
        1 - Student: 7829 (Project: 1100270 - Rank: 1)
        . . .
        1 - Student: 7822 (Project: 1100280 - Rank: 1)
        1 - Student: 7792 (Project: 1100141 - Rank: 7)
        2 - Student: 7739 (Project: 1100267 - Rank: 1)
        3 - Student: 7806 (Project: 1100272 - Rank: 1)
        . . .
        45 - Student: 7806 (Project: 1100272 - Rank: 1)
        46 - Student: 7714 (Project: 1100317 - Rank: 1)
        47 - Student: 7930 (Project: 1100343 - Rank: 1)
        48 - Student: 7757 (Project: 1100358 - Rank: 1)
        49 - Student: 7759 (Project: 1100269 - Rank: 1)
        50 - Student: 7778 (Project: 1100301 - Rank: 1)

    Basically, it works perfectly for the first run, but on subsequent runs leading up to the nth run (in this case 50) only a single student-project allocation pair is returned. Thus, the main issue I'm having trouble with is figuring out what is causing this anomalous behaviour, especially since the first run works smoothly. Thanks in advance, Az

    Read the article

  • Is it possible to calculate distance on GeoDjango in a SELECT statement?

    - by alex
    I am using MySQL. I have a table with one column, a Point field. I want to SELECT all rows that have a point within a distance of 50 meters of a given point. Simple enough, right? Below is how it's done in raw SQL, but of course I want to use GeoDjango to do this.

        cursor.execute("SELECT * FROM project_location WHERE\
            (GLength(LineStringFromWKB(LineString(asbinary(utm), asbinary(PointFromWKB(point(%s, %s)))))) < 50)\
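
    For reference, a sketch of the idiomatic GeoDjango spelling, assuming a model named Location with the point stored in a field called utm (both names and the coordinates are placeholders); how much of the distance-lookup API the MySQL backend supports is worth checking in the GeoDjango docs:

        from django.contrib.gis.geos import Point
        from django.contrib.gis.measure import D

        pnt = Point(-70.0, 50.0)  # placeholder coordinates
        nearby = Location.objects.filter(utm__distance_lte=(pnt, D(m=50)))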

    Read the article

  • tweepy documentation

    - by andy
    Hi everybody, I just began working on a little Twitter app using tweepy. Is there any kind of useful (and complete) documentation for tweepy? I googled like hell but didn't find anything. Greetings, Andy

    Read the article

  • Creating a dataframe in pandas by multiplying two series together

    - by Aoife
    Say I have two series in pandas, series A and series B. How do I create a dataframe in which all of those values are multiplied together, i.e. with series A down the left-hand side and series B along the top? Basically the same concept as this multiplication grid, where series A would be the yellow column on the left and series B the yellow row along the top, and all the values in between are filled in by multiplication: http://www.google.co.uk/imgres?imgurl=http://www.vaughns-1-pagers.com/computer/multiplication-tables/times-table-12x12.gif&imgrefurl=http://www.vaughns-1-pagers.com/computer/multiplication-tables.htm&h=533&w=720&sz=58&tbnid=9B8R_kpUloA4NM:&tbnh=90&tbnw=122&zoom=1&usg=__meqZT9kIAMJ5b8BenRzF0l-CUqY=&docid=j9BT8tUCNtg--M&sa=X&ei=bkBpUpOWOI2p0AWYnIHwBQ&ved=0CE0Q9QEwBg Thanks!
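
    For reference, a minimal sketch using numpy's outer product (the series contents here are made up):

        import numpy as np
        import pandas as pd

        A = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
        B = pd.Series([10, 20], index=['x', 'y'])

        # Rows labelled by A's index, columns by B's index; each cell is A[i] * B[j].
        table = pd.DataFrame(np.outer(A, B), index=A.index, columns=B.index)
        print(table)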

    Read the article

  • Which Django 1.2.x multilingual application to use?

    - by mawimawi
    There are a couple of different applications for internationalized content in Django. So far I have only used http://code.google.com/p/django-multilingual/ in my production environments, but I wonder if there are "better" solutions for my wishes. What my staff users need is the following: An object is created by a staff user in any language (e.g. "de"); this object should then be displayed in the German version of the website. When a staff user translates the object into a different language (e.g. "fr"), the page must become visible in the French version as well. If an object is not translated into the visitor's currently selected language (e.g. "en"), then calling the object's URL shall raise a 404 error (or, even better, a notice that the object is only available in "de" and "fr", with the visitor able to select one of those languages). My staff users work in the admin interface, so the multilingual application must support it as well. I don't really care whether the multilingual app uses a single table with many fields (like title_en, title_de, title_fr) or a foreign key to a related table (as implemented in django-multilingual). I only want it to have a good admin interface and no "default" language, because some content might be available only in "de" and other content only in "fr" and "en". And the most important issue, of course, is compatibility with Django 1.2.x. What are your experiences and preferred apps, and why?

    Read the article

  • django sphinx automodule -- basics

    - by haras.pl
    I have a project with several large apps, and the settings and app files are split up. The directory structure goes something like this:

        project_name/
            __init__.py
            apps/
                __init__.py
                app1/
                app2/
            3rdparty/
                __init__.py
                lib1/
                lib2/
            settings/
                __init__.py
                installed_apps.py
                path.py
                templates.py
                locale.py
                ...
            urls.py

    and every app looks like this:

        __init__.py
        admin/
            __init__.py
            file1.py
            file2.py
        models/
            __init__.py
            model1.py
            model2.py
        tests/
            __init__.py
            test1.py
            test2.py
        views/
            __init__.py
            view1.py
            view2.py
        urls.py

    How do I use Sphinx to autogenerate documentation for this? I want something like: for each module in settings, and for each app in INSTALLED_APPS (not starting with django.* or 3rdparty.*), give me autogenerated documentation output based on the docstrings, and run the tests before each git commit. By the way, I tried writing the .rst files by hand with

        .. automodule:: module_name
            :members:

    but that sucks for such a big project, and it does not work for settings. Is there an autogeneration method or something? I am not tied to Sphinx; is there a better solution for my problem?
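
    For reference, a possible starting point (assuming a reasonably recent Sphinx release that ships the tool): sphinx-apidoc walks a package tree and emits one .rst stub per module, so the files don't have to be written by hand, and paths to exclude can be listed after the source directory:

        sphinx-apidoc -o docs/source project_name project_name/3rdparty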

    Read the article

  • File Uploads with Turbogears 2

    - by William Chambers
    I've been trying to work out the 'best practices' way to manage file uploads with TurboGears 2 and have thus far not really found any examples. I've figured out a way to actually upload the file, but I'm not sure how reliable it is. Also, what would be a good way to get the uploaded file's name?

        file = request.POST['file']
        permanent_file = open(os.path.join(asset_dirname,
                                           file.filename.lstrip(os.sep)), 'w')
        shutil.copyfileobj(file.file, permanent_file)
        file.file.close()
        this_file = self.request.params["file"].filename
        permanent_file.close()

    So, assuming I'm understanding correctly, would something like this avoid the core 'naming' problem?

        id = UUID
        file = request.POST['file']
        permanent_file = open(os.path.join(asset_dirname, id.lstrip(os.sep)), 'w')
        shutil.copyfileobj(file.file, permanent_file)
        file.file.close()
        this_file = file.filename
        permanent_file.close()
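
    For reference, a sketch of the UUID placeholder spelled out with the standard library (asset_dirname and the request object are carried over from the question as assumptions):

        import os
        import shutil
        import uuid

        file = request.POST['file']
        unique_name = str(uuid.uuid4())  # collision-resistant server-side name
        permanent_file = open(os.path.join(asset_dirname, unique_name), 'wb')
        shutil.copyfileobj(file.file, permanent_file)
        file.file.close()
        permanent_file.close()
        original_name = file.filename    # keep the client-side name for display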

    Read the article

  • How to deserialize an object with pyYaml using safe_load?

    - by systempuntoout
    Having a snippet like this:

        import yaml

        class User(object):
            def __init__(self, name, surname):
                self.name = name
                self.surname = surname

        user = User('spam', 'eggs')
        serialized_user = yaml.dump(user)

        # Network

        deserialized_user = yaml.load(serialized_user)
        print "name: %s, sname: %s" % (deserialized_user.name, deserialized_user.surname)

    The YAML docs say that it is not safe to call yaml.load with any data received from an untrusted source; so what do I need to modify in my snippet/class to use the safe_load method? Is it possible?
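
    For reference, a sketch of one supported route: PyYAML's YAMLObject lets a class register a tag with the safe loader, after which safe_load can reconstruct instances. The tag name here is made up:

        import yaml

        class User(yaml.YAMLObject):
            yaml_tag = u'!User'
            yaml_loader = yaml.SafeLoader  # register the constructor with the safe loader

            def __init__(self, name, surname):
                self.name = name
                self.surname = surname

        serialized_user = yaml.dump(User('spam', 'eggs'))
        deserialized_user = yaml.safe_load(serialized_user)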

    Read the article

  • Losing 'post' requests sent to Pylons paster server

    - by Philip McDermott
    I'm sending POST requests to a Pylons server (served by paster serve), and if I send them with any frequency many don't arrive at the server. One at a time is OK, but if I fire off a few (or more) within seconds, only a small number get dealt with. If I send them with no POST data, or use GET, it works fine, but putting just one character of data in the POST fields causes massive losses. For example, out of 200 sent, 2 will come back; sending 100 more slowly, 10 will come back. I'm making the requests from inside a Qt application. This will work OK (no data):

        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    And this will result in only a fraction of the requests being dealt with:

        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    I've played around with turning on use_threadpool and other options (threadpool_workers, threadpool_max_requests = 300), some combinations of which can alter the results slightly (best case, 10 responses out of 200). If I send similar requests to other (non-paster) servers, the replies come back OK, so I'm almost certain it's a paster serve config issue. Any help or advice greatly appreciated. Thanks, Philip

    Read the article

  • BioPython: extracting sequence IDs from a Blast output file

    - by Jon
    I have a BLAST output file in XML format. It contains 22 query sequences, with 50 hits reported for each sequence, and I want to extract all 22x50 hits. This is the code I currently have, but it only extracts the 50 hits from the first query:

        from Bio.Blast import NCBIXML

        blast_records = NCBIXML.parse(result_handle)
        blast_record = blast_records.next()

        save_file = open("/Users/jonbra/Desktop/my_fasta_seq.fasta", 'w')
        for alignment in blast_record.alignments:
            for hsp in alignment.hsps:
                save_file.write('>%s\n' % (alignment.title,))
        save_file.close()

    Does anybody have any suggestions on how to extract all the hits? I guess I have to use something other than alignments. Hope this was clear. Thanks! Jon
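
    For reference, a sketch of the likely fix: NCBIXML.parse() returns an iterator yielding one record per query, so looping over it (instead of calling next() once) visits all 22 queries:

        from Bio.Blast import NCBIXML

        blast_records = NCBIXML.parse(result_handle)
        save_file = open("/Users/jonbra/Desktop/my_fasta_seq.fasta", 'w')
        for blast_record in blast_records:  # one record per query sequence
            for alignment in blast_record.alignments:
                for hsp in alignment.hsps:
                    save_file.write('>%s\n' % (alignment.title,))
        save_file.close()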

    Read the article

  • How to make a model instance read-only after saving it once?

    - by Ryszard Szopa
    One of the functionalities in a Django project I am writing is sending a newsletter. I have a model, Newsletter, and a function, send_newsletter, which I have registered to listen to Newsletter's post_save signal. When the newsletter object is saved via the admin interface, send_newsletter checks whether created is True, and if so it actually sends the mail. However, it doesn't make much sense to edit a newsletter that has already been sent, for obvious reasons. Is there a way of making the Newsletter object read-only once it has been saved? Edit: I know I can override the save method of the object to raise an error, or to do nothing, if the object already exists. However, I don't see the point of doing that. As for the former, I don't know where to catch the error or how to communicate to the user the fact that the object wasn't saved. As for the latter, giving the user false feedback (the admin interface saying the save succeeded) doesn't seem like a Good Thing. What I really want is to allow the user to use the admin interface to write the newsletter and send it, and then browse the newsletters that have already been sent. I would like the admin interface to show the data for sent newsletters in non-editable input boxes, without the "Save" button. Alternatively, I would like the "Save" button to be inactive.
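
    For reference, a sketch of one way the admin side could enforce this (ModelAdmin.has_change_permission is a real hook; exactly how the admin UI renders a denied change in this Django version is worth verifying, and the import path is hypothetical):

        from django.contrib import admin
        from myapp.models import Newsletter  # hypothetical import path

        class NewsletterAdmin(admin.ModelAdmin):
            def has_change_permission(self, request, obj=None):
                # Existing (i.e. already sent) newsletters become read-only;
                # the changelist and the add form keep working as usual.
                if obj is not None:
                    return False
                return super(NewsletterAdmin, self).has_change_permission(request, obj)

        admin.site.register(Newsletter, NewsletterAdmin)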

    Read the article
