Search Results

Search found 34110 results on 1365 pages for 'gdata python client'.

  • threading.Event wait function not signaled when subclassing Process class

    - by user1313404
    The following code never gets past the wait call in run. I'm certain I'm doing something ridiculously stupid, but since I'm not smart enough to figure out what, I'm asking. Any help is appreciated. Here is the code:

        import threading
        import multiprocessing
        from multiprocessing import Process

        class SomeClass(Process):
            def __init__(self):
                Process.__init__(self)
                self.event = threading.Event()
                self.event.clear()

            def continueExec(self):
                print multiprocessing.current_process().name
                print self
                print "Set:" + str(self.event.is_set())
                self.event.set()
                print "Set:" + str(self.event.is_set())

            def run(self):
                print "I'm running with it"
                print multiprocessing.current_process().name
                self.event.wait()
                print "I'm further than I was"
                print multiprocessing.current_process().name
                self.event.clear()

        def main():
            s_list = []
            for t in range(3):
                s = SomeClass()
                print "s:" + str(s)
                s_list.append(s)
                s.start()
            raw_input("Press enter to send signal")
            for t in range(3):
                print "s_list[" + str(t) + "]:" + str(s_list[t])
                s_list[t].continueExec()
            raw_input("Press enter to send signal")
            for t in range(3):
                s_list[t].join()
            print "All Done"

        if __name__ == "__main__":
            main()
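
    The root cause here is that a threading.Event lives only in the memory of the process that created it: once start() spawns the child, the parent's event.set() flips the parent's copy of the flag, never the one the child is blocked on. A minimal sketch of the usual fix, swapping in multiprocessing.Event, which is backed by a shared semaphore:

        import multiprocessing
        from multiprocessing import Process

        class SomeClass(Process):
            def __init__(self):
                Process.__init__(self)
                # multiprocessing.Event is shared across processes,
                # so set() in the parent wakes wait() in the child.
                self.event = multiprocessing.Event()

            def continueExec(self):
                self.event.set()

            def run(self):
                print "waiting in", multiprocessing.current_process().name
                self.event.wait()
                print "released:", multiprocessing.current_process().name

        if __name__ == "__main__":
            procs = [SomeClass() for _ in range(3)]
            for p in procs:
                p.start()
            raw_input("Press enter to send signal")
            for p in procs:
                p.continueExec()
            for p in procs:
                p.join()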

  • Best practice: how to team-split a Django project while still allowing code reuse

    - by Infinity
    I know this sounds kind of vague, but please let me explain. I'm starting work on a brand new project that will have two main components: "ACME PRODUCT" (think Gmail, Meebo, etc.) and "THE SITE" (help, information, marketing stuff, promotional landing pages; lots of marketing-induced cruft). So basically the URL /acme/* will load stuff in the uber-cool Ajaxy application, and every other URI will load stuff in the other site.

    Problem: the "THE SITE" component is out of my hands and will be handled by a consultant team that will work closely with marketing, while I and my team work solely on the ACME PRODUCT.

    Question: how do I set up the Django project in such a way that we can have:

    - Separate releases. (They can push new marketing pages and functionality without having to worry about the state of our code. Maybe even separate Subversion projects.)
    - Minimal impact (on our product) of whatever flying-unicorns-hocus-pocus the other team codes into the site.
    - Still some code reuse.

    My main concern is that the ACME product needs to be rock solid, and therefore needs to be somewhat isolated from whatever mistakes/code bloopers the consultants make on their marketing side of the site. How have you handled this? Any ideas? Thanks!
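
    One common arrangement, sketched under the assumption that each component lives in its own reusable Django app (the app names acme and site_pages below are made up for illustration):

        # project/urls.py: route the two components to independent apps so
        # each team can release its app without touching the other's code.
        from django.conf.urls.defaults import patterns, include

        urlpatterns = patterns('',
            (r'^acme/', include('acme.urls')),    # the product, our team
            (r'^', include('site_pages.urls')),   # everything else, consultants
        )

    With the URLconfs split this way, each app can even live in its own repository and be deployed on its own schedule, as long as both are on the Python path of the deployed project.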

  • How to manage feeds with subclassed object in Django 1.2?

    - by Matteo
    Hi, I'm trying to generate an RSS feed from a model like this one, selecting all the Entry objects:

        from django.db import models
        from django.contrib.sites.models import Site
        from django.contrib.auth.models import User
        from imagekit.models import ImageModel
        import datetime

        class Entry(ImageModel):
            date_pub = models.DateTimeField(default=datetime.datetime.now)
            author = models.ForeignKey(User)
            via = models.URLField(blank=True)
            comments_allowed = models.BooleanField(default=True)
            icon = models.ImageField(upload_to='icon/', blank=True)

            class IKOptions:
                spec_module = 'journal.icon_specs'
                cache_dir = 'icon/resized'
                image_field = 'icon'

        class Post(Entry):
            title = models.CharField(max_length=200)
            description = models.TextField()
            slug = models.SlugField(unique=True)

            def __unicode__(self):
                return self.title

        class Photo(Entry):
            alt = models.CharField(max_length=200)
            description = models.TextField(blank=True)
            original = models.ImageField(upload_to='photo/')

            class IKOptions:
                spec_module = 'journal.photo_specs'
                cache_dir = 'photo/resized'
                image_field = 'original'

            def __unicode__(self):
                return self.alt

        class Quote(Entry):
            blockquote = models.TextField()
            cite = models.TextField(blank=True)

            def __unicode__(self):
                return self.blockquote

    When I use render_to_response in my views I simply call:

        def get_journal_entries(request):
            entries = Entry.objects.all().order_by('-date_pub')
            return render_to_response('journal/entries.html', {'entries': entries})

    And then I use a conditional template to render the right snippets of HTML:

        {% extends "base.html" %}
        {% block main %}
        <hr>
        {% for entry in entries %}
        {% if entry.post %}[...]{% endif %}[...]

    But I cannot do the same with the feed framework in Django 1.2... Any suggestion, please?
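
    For the syndication side, Django 1.2's class-based Feed can reuse the same subclass probing the template does: with multi-table inheritance, each Entry row exposes its concrete child as a lowercase one-to-one accessor. A minimal sketch, assuming the models above live in journal.models (the URL scheme in item_link is made up):

        from django.contrib.syndication.views import Feed
        from django.core.exceptions import ObjectDoesNotExist
        from journal.models import Entry

        class JournalFeed(Feed):
            title = "Journal"
            link = "/journal/"
            description = "Latest journal entries."

            def items(self):
                return Entry.objects.order_by('-date_pub')[:20]

            def _child(self, entry):
                # Probe the child accessors created by model inheritance;
                # the one that exists tells us the concrete subclass.
                for name in ('post', 'photo', 'quote'):
                    try:
                        return getattr(entry, name)
                    except ObjectDoesNotExist:
                        pass
                return entry

            def item_title(self, item):
                return unicode(self._child(item))

            def item_description(self, item):
                return getattr(self._child(item), 'description', u'')

            def item_link(self, item):
                return u'/journal/%d/' % item.pk   # hypothetical URL scheme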

  • Transaction within transaction

    - by user281521
    Hello, I want to know whether opening a transaction inside another one is safe and encouraged. I have a method:

        def foo():
            session.begin()
            try:
                stuffs
            except Exception, e:
                session.rollback()
                raise e
            session.commit()

    and a method that calls the first one, inside a transaction:

        def bar():
            stuffs
            try:
                foo()  # <<<< there it is :)
                stuffs
            except Exception, e:
                session.rollback()
                raise e
            session.commit()

    If I get an exception in the foo method, will all the operations be rolled back? And will everything else work just fine? Thanks!!
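
    Assuming this is a SQLAlchemy session, the usual way to nest is begin_nested(), which issues a SAVEPOINT so the inner rollback undoes only the inner work while the outer transaction stays usable. A sketch (do_inner_work and do_outer_work are hypothetical placeholders):

        def foo(session):
            session.begin_nested()         # SAVEPOINT
            try:
                do_inner_work(session)     # hypothetical helper
            except Exception:
                session.rollback()         # rolls back to the SAVEPOINT only
                raise
            session.commit()               # releases the SAVEPOINT

        def bar(session):
            try:
                do_outer_work(session)     # hypothetical helper
                foo(session)
            except Exception:
                session.rollback()         # rolls back the real transaction
                raise
            session.commit()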

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small, simple REST web queries in a production environment. For each 5-10 related queries executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid-based architectures such as Condor and Sun Grid Engine, which I have seen mentioned. I'm not sure whether these platforms have any recovery from errors, though (i.e., checking whether a job succeeded). What I would really like is a FIFO-type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?
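
    At the small end, the FIFO description can be served by the standard library alone; a sketch where derive, save_to_postgres, and pending_urls are hypothetical placeholders for the per-job reduction, the database write, and the job source:

        import urllib2
        from multiprocessing import Process, JoinableQueue

        def worker(jobs, results):
            # Drain REST jobs; push each small derived row onward.
            while True:
                url = jobs.get()
                try:
                    body = urllib2.urlopen(url).read()
                    results.put(derive(body))          # hypothetical
                finally:
                    jobs.task_done()

        def writer(results):
            # A single writer serializes all PostgreSQL updates.
            while True:
                row = results.get()
                save_to_postgres(row)                  # hypothetical
                results.task_done()

        if __name__ == '__main__':
            jobs, results = JoinableQueue(), JoinableQueue()
            for _ in range(8):
                p = Process(target=worker, args=(jobs, results))
                p.daemon = True
                p.start()
            w = Process(target=writer, args=(results,))
            w.daemon = True
            w.start()
            for url in pending_urls():                 # hypothetical
                jobs.put(url)
            jobs.join()                                # all fetches done
            results.join()                             # all rows written

    This gives the FIFO semantics but not the error recovery; a persistent broker (Condor, SGE, or a job table in PostgreSQL itself) is what adds retry-on-failure.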

  • Prints line number in both txt file and list?

    - by jad
    I have this code, which prints the line number in infile but also the line number in words. What do I do to print only the line number of the txt file next to the words?

        d = {}
        counter = 0
        wrongwords = []
        for line in infile:
            infile = line.split()
            wrongwords.extend(infile)
            counter += 1
            for word in infile:
                if word not in d:
                    d[word] = [counter]
                if word in d:
                    d[word].append(counter)

        for stuff in wrongwords:
            print(stuff, d[stuff])

    The output is:

        hello [1, 2, 7, 9]   # this is printing the line number of the txt file
        hello [1]            # this is printing the line number of the list words
        hello [1]

    What I want is:

        hello [1, 2, 7, 9]
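
    A sketch of one way to get the wanted output, assuming infile is an open file object: let enumerate number the lines, use else so the first occurrence is not recorded twice, and print from the dictionary instead of the duplicate-laden word list:

        d = {}
        for lineno, line in enumerate(infile, start=1):
            for word in line.split():
                if word not in d:
                    d[word] = [lineno]
                else:
                    d[word].append(lineno)

        # Iterating over the dict prints each word once,
        # with its file line numbers only.
        for word, linenos in d.items():
            print(word, linenos)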

  • write() in sys/uio.h returns -1

    - by fredrik
    I'm using Ubuntu Server 9.10 on an AMD Phenom 2 CPU, with g++ (Ubuntu 4.4.1-4ubuntu9) 4.4.1, trying to run the application pftp-shit v1.11. The following code in tcp.cc executes successfully:

        int outfile_fd = open(name, O_CREAT | O_TRUNC | O_RDWR | O_BINARY);

    It returns a file descriptor (in my case 6); name is a char array containing a valid path to my file, which I successfully created. These also run successfully:

        fchmod(outfile_fd, S_IRUSR | S_IWUSR);
        access(name, W_OK);

    The issue occurs when running this function (from sys/uio.h):

        write(outfile_fd, this->control_buffer, read_length);

    which returns -1. -1 is returned if nothing was written; otherwise a non-negative integer equal to the number of bytes written is returned. Anyone having a clue how I can get the write function to work?
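
    When write(2) returns -1 it also sets errno, so the quickest diagnostic is to print it: EBADF would mean the descriptor is not open for writing, EFAULT a bad buffer pointer, ENOSPC a full disk, and so on. A sketch in C:

        /* Sketch: wrap write(2) and report errno when it fails. */
        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        ssize_t checked_write(int fd, const void *buf, size_t len)
        {
            ssize_t n = write(fd, buf, len);
            if (n == -1)
                fprintf(stderr, "write failed: %s\n", strerror(errno));
            return n;
        }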

  • Extract files from a zip archive and store them in the blobstore

    - by Eng_Engineer
    I want to upload a zip file from a file input in a form, then extract the contents of the uploaded zip and store those files in the blobstore so they can be downloaded later. The problem is that I can't deal with the zip file directly (to read it). I tried this:

        form = cgi.FieldStorage()
        file_upload = form['file']
        zip1 = file_upload.filename
        zipstream = StringIO.StringIO(zip1.read())

    But the problem is still that I can't read the zip. I also tried to read the zip file directly, like this:

        z1 = zipfile.ZipFile(zip1, "r")

    But there was an error in this way too. Please can anyone help me? Thanks in advance.
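
    The likely slip is that filename is just the name string, while the uploaded bytes live in file_upload.value (or the file-like file_upload.file). A sketch, assuming a cgi.FieldStorage upload and leaving the blobstore write as a placeholder:

        import cgi
        import zipfile
        import StringIO

        form = cgi.FieldStorage()
        data = form['file'].value                    # raw bytes, not the name
        archive = zipfile.ZipFile(StringIO.StringIO(data), 'r')

        for info in archive.infolist():
            contents = archive.read(info.filename)
            # store `contents` in the blobstore here (not shown)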

  • Many-to-many relationship on same table with association object

    - by Nicholas Knight
    Related (for the no-association-object use case): http://stackoverflow.com/questions/1889251/sqlalchemy-many-to-many-relationship-on-a-single-table

    Building a many-to-many relationship is easy. Building a many-to-many relationship on the same table is almost as easy, as documented in the above question. Building a many-to-many relationship with an association object is also easy. What I can't seem to find is the right way to combine association objects and many-to-many relationships with the left and right sides being the same table. So, starting from the simple, naïve, and clearly wrong version that I've spent forever trying to massage into the right version:

        t_groups = Table('groups', metadata,
            Column('id', Integer, primary_key=True),
        )

        t_group_groups = Table('group_groups', metadata,
            Column('parent_group_id', Integer, ForeignKey('groups.id'),
                   primary_key=True, nullable=False),
            Column('child_group_id', Integer, ForeignKey('groups.id'),
                   primary_key=True, nullable=False),
            Column('expires', DateTime),
        )

        mapper(Group_To_Group, t_group_groups, properties={
            'parent_group': relationship(Group),
            'child_group': relationship(Group),
        })

    What's the right way to map this relationship?
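
    The ambiguity is that both columns reference groups.id, so each relationship() has to be told which join it follows; a sketch of the usual primaryjoin disambiguation (the mapping for Group itself is assumed):

        mapper(Group_To_Group, t_group_groups, properties={
            'parent_group': relationship(Group,
                primaryjoin=t_group_groups.c.parent_group_id == t_groups.c.id),
            'child_group': relationship(Group,
                primaryjoin=t_group_groups.c.child_group_id == t_groups.c.id),
        })

        mapper(Group, t_groups, properties={
            # association objects in which this group is the parent
            'child_edges': relationship(Group_To_Group,
                primaryjoin=t_groups.c.id == t_group_groups.c.parent_group_id),
        })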

  • Online job-searching is tedious. Help me automate it.

    - by ehsanul
    Many job sites have broken searches that don't let you narrow down jobs by experience level. Even when they do, it's usually wrong. This requires you to wade through hundreds of postings that you can't apply for before finding a relevant one: quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings and save the URLs of just those jobs that don't require years of experience.

    I don't require help writing the scraper to get the HTML bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult, as job posts are usually very explicit about it ("must have 5 years experience in..."), but there may be some issues with overly simple solutions.

    In my case, I'm looking for entry-level positions. Often they don't say "entry-level", but inclusion of the words probably means the job should be saved. Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable for excluding jobs. But then I realized some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to take a look at. Hmmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". I can handle that too, but it makes me wonder what other patterns I'm not thinking of, and how many jobs I might be excluding.

    That's what brings me here: to find a better way to do this than regexes, if there is one. I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a Bayesian filter, maybe?
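
    For the regex route, one conservative variant is to keep a posting unless it plainly demands three or more years, letting phrases like "entry-level", "0-2 years", or "less than 2 years" survive; a sketch:

        import re

        REQUIRED = re.compile(r'(\d+)\s*\+?\s*years?', re.IGNORECASE)
        KEEP_ANYWAY = re.compile(
            r'entry[- ]level|0\s*-\s*\d\s*years?|(?:less|fewer) than',
            re.IGNORECASE)

        def looks_entry_level(text):
            # Softening phrases always save the posting.
            if KEEP_ANYWAY.search(text):
                return True
            # Otherwise exclude only explicit demands of 3+ years.
            years = [int(m.group(1)) for m in REQUIRED.finditer(text)]
            return not any(y >= 3 for y in years)

    This is deliberately biased toward false positives (postings kept that should have been dropped) rather than false negatives, since the stated goal is not to miss eligible jobs.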

  • Django: automatically import MEDIA_URL in context

    - by pistacchio
    Hi, as explained here, one can set MEDIA_URL in settings.py (for example, I'm pointing to Amazon S3) and reference the files in templates via {{ MEDIA_URL }}. Since MEDIA_URL is not automatically in the context, one has to add it to the context manually, so, for example, the following works:

        # views.py
        from django.shortcuts import render_to_response
        from django.template import RequestContext

        def test(request):
            return render_to_response('test.html', {},
                                      context_instance=RequestContext(request))

    This means that in each views.py file I have to add from django.template import RequestContext, and in each response I have to explicitly specify context_instance=RequestContext(request). Is there a way to automatically (DRY) add MEDIA_URL to the default context? Thanks in advance.
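
    Django ships a context processor for exactly this; a sketch of the settings wiring (the exact default processor list varies across 1.x releases, so append rather than replace):

        # settings.py: the media context processor injects MEDIA_URL into
        # every template rendered with a RequestContext.
        TEMPLATE_CONTEXT_PROCESSORS = (
            'django.core.context_processors.media',   # adds {{ MEDIA_URL }}
            # ...plus the defaults for your Django version...
        )

    Each view still has to use a RequestContext, but no view has to pass MEDIA_URL by hand.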

  • HTML tag for identifying text

    - by ravi
    I am not very familiar with HTML programming. If we look at the source code of a page, we can see which HTML tags mark which text, and so on. Is it the case that there is a group or class of HTML tags that is used for the main text? I mean, a tag like <input type="radio" name="option"> says that there will be a radio button; similarly, is there a group of HTML tags whose purpose is the text part, so that we can look at the tag alone (and not at the content) and say that between the start tag and the end tag there is text?

  • Matplotlib not showing up in Mac OSX

    - by Werner
    Hi, I am running Mac OS X 10.5.8. I installed matplotlib using MacPorts. I took some examples from the matplotlib gallery, like this one, without modification: http://matplotlib.sourceforge.net/examples/api/unicode_minus.html. I run it and get no error, but the picture does not show up. On Ubuntu Linux it works. Do you know what could be wrong here? Thanks
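
    With a MacPorts build this is often a backend issue: if matplotlib defaults to a non-interactive backend such as Agg, figures are rendered but no window ever opens. A sketch of the usual check and fix:

        import matplotlib
        print matplotlib.get_backend()     # 'agg' means non-interactive

        matplotlib.use('MacOSX')           # or 'TkAgg'; must run before pyplot
        import matplotlib.pyplot as plt

        plt.plot([1, 2, 3])
        plt.show()                         # required outside interactive mode

    The same choice can be made permanent by setting "backend: MacOSX" in the matplotlibrc file.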

  • What IRC client(s) would allow me to filter out log messages?

    - by Aras
    I have been using Empathy and, more recently, Pidgin as my IRC client. But there is one feature missing from both of these clients that keeps bothering me. I want to be able to see only what people are actually talking about in each channel. I often like to leave IRC channels open, go back to them, and read the messages from the last few hours or days. I saw that IRC clients have been discussed in this post and a few others, but I have not found what I am looking for yet.

    Here is what I want to see in my IRC client: what actual people say.

    Here is what I do not want to see:

    - messages about people entering the room
    - people leaving the room
    - people changing their status messages or what they are known as
    - people sneezing
    - people's cats sneezing

    Anyway, you get the picture. What IRC clients are available for Ubuntu that would allow me to configure what I see, so that I can see what people are talking about and nothing else?

  • removing elements incrementally from a list

    - by Javier
    Dear all, I have a list of float numbers and I would like to incrementally delete the elements in a given range of indexes, something like:

        for j in range(beginIndex, endIndex + 1):
            print("remove [%d] => val: %g" % (j, myList[j]))
            del myList[j]

    However, since I'm iterating over the same list, the indexes in the range are not valid any more for the new list. Does anybody have suggestions on how to delete the elements properly? Best wishes
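
    A sketch of the two usual fixes: delete the whole span with a single slice, or walk the range backwards so earlier deletions cannot shift the indexes still to be visited:

        myList = [0.5, 1.25, 2.0, 3.75, 4.5, 5.0]   # example data
        beginIndex, endIndex = 1, 3

        # 1. One slice deletion; no index ever goes stale.
        del myList[beginIndex:endIndex + 1]

        # 2. Or, if each removal must be logged, delete highest index first.
        myList = [0.5, 1.25, 2.0, 3.75, 4.5, 5.0]
        for j in range(endIndex, beginIndex - 1, -1):
            print("remove [%d] => val: %g" % (j, myList[j]))
            del myList[j]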

  • OpenGL embedded in GTK displays colours badly

    - by Sardathrion
    Note that this is a rewrite, now that I have more clues as to where the problem could be... I am creating a GTK GUI which contains two embedded OpenGL displays. Both use the same shader code (compiled once for each). On my normal hardware, this works fine. On a virtual machine running on the same hardware, I get horrible colours (see images). I suspect that the shader code is at fault; certainly dropping in a simpler shader makes the problem go away. However, I do need both diffuse and spot lights in my shader, which makes it non-trivial. Has anyone seen this before?
