Search Results

Search found 15380 results on 616 pages for 'man with python'.

  • Matplotlib: plotting discrete values

    - by Arkapravo
    I am trying to plot the following:

        from numpy import *
        from pylab import *
        import random

        for x in range(1, 500):
            y = random.randint(1, 25000)
            print(x, y)
            plot(x, y)
        show()

    However, I keep getting a blank graph. Just to make sure the program logic is correct, I added the print(x, y) call, which confirms that the (x, y) pairs are being generated; there is still no plot, only a blank graph. Any help?
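
    The calls draw nothing because each plot(x, y) with scalar arguments produces a zero-length line with no marker. A minimal sketch of one fix, collecting the points first (the pyplot import is spelled out; the random data mirrors the question):

        import random
        import matplotlib.pyplot as plt

        xs = list(range(1, 500))
        ys = [random.randint(1, 25000) for _ in xs]

        # passing full sequences (plus a marker style) makes the points visible
        plt.plot(xs, ys, 'o')
        plt.show()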

  • Error -3 while decompressing data: incorrect header check

    - by Rahul99
    I have a .zip file which contains CSV data. I am reading the .zip file using:

        <input type="file" name="select_file"/>

    I want to decompress that .zip file and read the CSV data:

        file_data = self.request.get('select_file')
        file_str = zlib.decompress(file_data)
        #file_data_list = file_str.split('\n')
        #file_Reader = csv.reader(file_data_list, quoting=csv.QUOTE_NONE)

    I am expecting CSV data in file_str but instead I get this error:

        Error -3 while decompressing data: incorrect header check

    What do I have to use instead?
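
    A .zip file is an archive container, not a raw zlib stream, which is why zlib.decompress() rejects its header. A minimal Python 3 sketch using the zipfile module instead (the file name and the assumption of a single CSV member are hypothetical):

        import csv
        import io
        import zipfile

        with zipfile.ZipFile('upload.zip') as archive:
            member_name = archive.namelist()[0]       # assume one CSV inside
            with archive.open(member_name) as member:
                reader = csv.reader(io.TextIOWrapper(member, encoding='utf-8'),
                                    quoting=csv.QUOTE_NONE)
                for row in reader:
                    print(row)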

  • Why is the destructor called when the CPython garbage collector is disabled?

    - by Frederik
    I'm trying to understand the internals of the CPython garbage collector, specifically when the destructor is called. So far, the behavior is intuitive, but the following case trips me up: disable the GC, create an object, then remove a reference to it. The object is destroyed and the __del__ method is called. I thought this would only happen if the garbage collector was enabled. Can someone explain why this happens? Is there a way to defer calling the destructor?

        import gc
        import unittest

        _destroyed = False

        class MyClass(object):
            def __del__(self):
                global _destroyed
                _destroyed = True

        class GarbageCollectionTest(unittest.TestCase):
            def testExplicitGarbageCollection(self):
                gc.disable()
                ref = MyClass()
                ref = None
                # The next test fails.
                # The object is automatically destroyed even with the
                # collector turned off.
                self.assertFalse(_destroyed)
                gc.collect()
                self.assertTrue(_destroyed)

        if __name__ == '__main__':
            unittest.main()

    Disclaimer: this code is not meant for production -- I've already noted that this is very implementation-specific and does not work on Jython.
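
    CPython destroys objects through reference counting the moment their count reaches zero; gc.disable() only switches off the cyclic collector that backs reference counting up. A short sketch of the deferred behavior the test was fishing for, using a reference cycle so the counts never hit zero (Python 3 semantics; on Python 2, cycles whose objects define __del__ land in gc.garbage instead of being collected):

        import gc

        class Node(object):
            def __del__(self):
                print('finalized')

        gc.disable()
        a, b = Node(), Node()
        a.partner, b.partner = b, a   # reference cycle keeps counts above zero
        del a, b                      # nothing printed: no refcount reached zero
        gc.collect()                  # cycle detector runs: both __del__ fire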

  • How to import a.py instead of the a folder

    - by zjm1126
    My layout is:

        zjm_code
        |----- a.py
        |----- a
        |      |----- __init__.py
        |----- b.py

    In a.py is:

        c = 'ccc'

    In b.py is:

        import a
        print dir(a)

    When I execute b.py it shows this (it imports the a folder):

        ['__builtins__', '__doc__', '__file__', '__name__', '__path__']

    And when I delete the a folder, it shows this (it imports a.py):

        ['__builtins__', '__doc__', '__file__', '__name__', 'c']

    So my question is: how can I import a.py without deleting the a folder? Thanks.
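
    Packages take precedence over same-named modules on the import path, so "import a" will always pick up the a/ directory. One sketch of a workaround is loading the file by path (Python 3 spelling; the module alias a_file is hypothetical):

        import importlib.util

        spec = importlib.util.spec_from_file_location('a_file', 'a.py')
        a_file = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(a_file)
        print(a_file.c)   # 'ccc'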

  • SQLAlchemy - how to map against a read-only (or calculated) property

    - by Jeff Peck
    I'm trying to figure out how to map against a simple read-only property and have that property fire when I save to the database. A contrived example should make this more clear. First, a simple table:

        meta = MetaData()
        foo_table = Table('foo', meta,
            Column('id', String(3), primary_key=True),
            Column('description', String(64), nullable=False),
            Column('calculated_value', Integer, nullable=False),
        )

    What I want to do is set up a class with a read-only property that will insert into the calculated_value column for me when I call session.commit():

        import datetime

        class Foo(object):
            def __init__(self, id, description):
                self.id = id
                self.description = description

            @property
            def calculated_value(self):
                self._calculated_value = datetime.datetime.now().second + 10
                return self._calculated_value

    According to the SQLAlchemy docs, I think I am supposed to map this like so:

        mapper(Foo, foo_table, properties={
            'calculated_value': synonym('_calculated_value', map_column=True)
        })

    The problem with this is that _calculated_value is None until you access the calculated_value property. It appears that SQLAlchemy is not calling the property on insertion into the database, so I'm getting a None value instead. What is the correct way to map this so that the result of the calculated_value property is inserted into the foo table's calculated_value column?
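
    One sketch that sidesteps the mapping question entirely: let a column-level default compute the value at INSERT time, which is when the property was supposed to fire (same schema as the question, with the default added):

        import datetime
        from sqlalchemy import Table, Column, Integer, String, MetaData

        meta = MetaData()
        foo_table = Table('foo', meta,
            Column('id', String(3), primary_key=True),
            Column('description', String(64), nullable=False),
            # the callable runs on every INSERT that omits the column
            Column('calculated_value', Integer, nullable=False,
                   default=lambda: datetime.datetime.now().second + 10),
        )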

  • How can I display multiple django modelformset forms together?

    - by JT
    I have a problem with needing to provide multiple model-backed forms on the same page. I understand how to do this with single forms, i.e. just create both forms, call them something different, then use the appropriate names in the template. Now how exactly do you expand that solution to work with modelformsets? The wrinkle, of course, is that each pair of forms must be rendered together in the appropriate fieldset. For example, I want my template to produce something like this:

        <fieldset>
          <label for="id_base-0-desc">Home Base Description:</label>
          <input id="id_base-0-desc" type="text" name="base-0-desc" maxlength="100" />
          <label for="id_likes-0-icecream">Want ice cream?</label>
          <input type="checkbox" name="likes-0-icecream" id="id_likes-0-icecream" />
        </fieldset>
        <fieldset>
          <label for="id_base-1-desc">Home Base Description:</label>
          <input id="id_base-1-desc" type="text" name="base-1-desc" maxlength="100" />
          <label for="id_likes-1-icecream">Want ice cream?</label>
          <input type="checkbox" name="likes-1-icecream" id="id_likes-1-icecream" />
        </fieldset>

    I am using a loop like this to process the results:

        for base_form, likes_form in map(None, base_forms, likes_forms):

    which works as I'd expect (I'm using map because the number of forms can differ). The problem is that I can't figure out a way to do the same thing with the templating engine. The system does work if I lay out all the base models together and then all the likes models afterwards, but that doesn't meet the layout requirements.
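
    A sketch of the usual workaround: build the pairs in the view, since map(None, ...) is just Python 2's zip-longest, then iterate the pairs in the template, where tuple unpacking is supported ({% for base_form, likes_form in form_pairs %} ... {% endfor %}). Here base_forms and likes_forms are the question's formsets:

        from itertools import izip_longest   # zip_longest on Python 3

        # view side: one list of (base_form, likes_form) pairs, padded with
        # None when the formsets have different lengths
        form_pairs = list(izip_longest(base_forms, likes_forms))
        context = {'form_pairs': form_pairs}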

  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) -- essentially starting servers and clients for some testing in the correct order. Further, because the test networking may get mucked up, and because I need some of the output from the tests, I would like to just redirect output to a file. In similar scenarios the common advice is to use 'screen -m -d' or 'nohup'. However, with Paramiko's exec_command, nohup doesn't actually exit. Using:

        bash -c -l nohup test_cmd &

    doesn't work either; exec_command still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, doesn't work at all, as best I can figure out). So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture output in such a way as to end Paramiko's exec_command blocking?

    Update: the dtach command works nicely for this!
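
    exec_command returns once the channel reaches EOF, so the classic fix is to detach the remote process from all three standard streams; nohup alone leaves stdout/stderr attached to the channel. A minimal sketch (the host name and log path are hypothetical):

        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect('hostA')
        # redirecting stdin/stdout/stderr lets the channel close immediately
        client.exec_command('nohup test_cmd > /tmp/test_cmd.log 2>&1 < /dev/null &')
        client.close()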

  • Loading a URL with Cyrillic symbols

    - by Ockonal
    Hi guys, I have to load a URL containing Cyrillic symbols. My script should work with this:

        http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/

    A browser displays it with normal (decoded) symbols, but my urllib code fails with a 404 error. How do I decode this URL correctly? When I use that URL directly in code, like address = '<that address>', it works perfectly. But I obtained the URL by parsing a page, and I have a list of URLs which contain Cyrillic. Maybe they have incorrect encoding? Here is more code:

        requestData = urllib2.Request(%SOME_ADDRESS%, None, {"User-Agent": user_agent})
        requestHandler = pageHandler.open(requestData)
        pageData = requestHandler.read().decode('utf-8')

        soupHandler = BeautifulSoup(pageData)
        topicLinks = []
        for postBlock in soupHandler.findAll('a', href=re.compile('%SOME_REGEXP%')):
            topicLinks.append(postBlock['href'])

        postAddress = choice(topicLinks)
        postRequestData = urllib2.Request(postAddress, None, {"User-Agent": user_agent})
        postHandler = pageHandler.open(postRequestData)
        postData = postHandler.read()

    The traceback:

        File "/usr/lib/python2.6/urllib2.py", line 518, in http_error_default
          raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        urllib2.HTTPError: HTTP Error 404: Not Found
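
    The hrefs scraped from the decoded page are unicode, while urllib2 expects an ASCII, percent-encoded URL. A minimal Python 2 sketch of re-encoding a scraped link before opening it (the example address is the decoded form of the question's URL):

        # -*- coding: utf-8 -*-
        import urllib
        import urllib2

        post_address = u'http://wincode.org/программирование/'
        # encode to UTF-8 bytes, then percent-encode everything but ':' and '/'
        safe_address = urllib.quote(post_address.encode('utf-8'), safe=':/')
        page_data = urllib2.urlopen(safe_address).read()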

  • Huge amount of time sending data with suds and proxy

    - by Roman
    Hi everyone, I have the following code to send data through a proxy using suds:

        import suds

        t = suds.transport.http.HttpTransport()
        proxy = urllib2.ProxyHandler({'http': 'http://192.168.3.217:3128'})
        opener = urllib2.build_opener(proxy)
        t.urlopener = opener

        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL', transport=t)
        req = ws.factory.create('ActionRequest.request')
        req.SerialNumber = 'asdf'
        req.HostName = 'hola'
        res = ws.service.ActionRequest(req)

    I don't know why, but sending the data can take 2 or 3 minutes or more, and it sometimes raises a "Gateway timeout" exception. If I don't use the proxy, it takes about 2 seconds or less. Here is the SOAP reply:

        (ActionResponse){
           Id = None
           Action = "Action.None"
           Objects = ""
        }

    The proxy works fine with other requests through urllib2, and with normal web browsers like Firefox. Does anyone have any idea what's happening here with suds? Thanks a lot in advance!
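
    One thing worth ruling out is the hand-built opener: suds exposes a proxy option of its own, so the transport does not need to be patched at all. A sketch, assuming suds's set_options interface accepts a proxy map here:

        import suds

        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL')
        # suds routes its HTTP traffic through the given proxy map itself
        ws.set_options(proxy={'http': '192.168.3.217:3128'})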

  • How to create instances of related models in Django

    - by sevennineteen
    I'm working on a CMS-y app for which I've implemented a set of models which allow for the creation of custom Template instances, made up of a number of Fields and tied to a specific Customer. The end goal is that one or more templates with a set of custom fields can be defined through the admin interface and associated with a customer, so that the customer can then create content objects in the format prescribed by the template. I seem to have gotten this hooked up such that I can create any number of Template objects, but I'm struggling with how to create instances -- actual content objects -- of those templates. For example, I can define a template "Basic Page" for customer "Acme" which has the fields "Title" and "Body", but I haven't figured out how to create Basic Page instances where these fields can be filled in. Here are my (somewhat elided) models:

        class Customer(models.Model):
            ...

        class Field(models.Model):
            ...

        class Template(models.Model):
            label = models.CharField(max_length=255)
            clients = models.ManyToManyField(Customer, blank=True)
            fields = models.ManyToManyField(Field, blank=True)

        class ContentObject(models.Model):
            label = models.CharField(max_length=255)
            template = models.ForeignKey(Template)
            author = models.ForeignKey(User)
            customer = models.ForeignKey(Customer)
            mod_date = models.DateTimeField('Modified Date', editable=False)

            def __unicode__(self):
                return '%s (%s)' % (self.label, self.template)

            def save(self):
                self.mod_date = datetime.datetime.now()
                super(ContentObject, self).save()

    Thanks in advance for any advice!
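
    The missing piece in this kind of schema is usually a per-field value table (an entity-attribute-value layout): each content object stores one row per Field of its Template. A sketch with a hypothetical model name:

        class FieldValue(models.Model):
            content_object = models.ForeignKey(ContentObject)
            field = models.ForeignKey(Field)
            value = models.TextField(blank=True)

    Creating a "Basic Page" instance then means creating one ContentObject plus one FieldValue for "Title" and one for "Body".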

  • Can SQLAlchemy DateTime Objects Only Be Naive?

    - by Sean M
    I am working with SQLAlchemy, and I'm not yet sure which database I'll use under it, so I want to remain as DB-agnostic as possible. How can I store a timezone-aware datetime object in the DB without tying myself to a specific database? Right now, I'm making sure that times are UTC before I store them in the DB and converting them to localized times at display time, but that feels inelegant and brittle. Is there a DB-agnostic way to get a timezone-aware datetime out of SQLAlchemy instead of getting naive datetime objects out of the DB?
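
    A sketch of the usual DB-agnostic pattern: a TypeDecorator that normalizes to naive UTC on the way into the database and re-attaches UTC on the way out (Python 3 spelling for the tzinfo):

        from datetime import timezone
        from sqlalchemy import types

        class UTCDateTime(types.TypeDecorator):
            impl = types.DateTime

            def process_bind_param(self, value, dialect):
                # store everything as naive UTC
                if value is not None and value.tzinfo is not None:
                    value = value.astimezone(timezone.utc).replace(tzinfo=None)
                return value

            def process_result_value(self, value, dialect):
                # everything in the DB is UTC by construction; make it explicit
                if value is not None:
                    value = value.replace(tzinfo=timezone.utc)
                return value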

  • Django: Summing values

    - by Anry
    I have two models -- Project and Cost:

        class Project(models.Model):
            title = models.CharField(max_length=150)
            url = models.URLField()
            manager = models.ForeignKey(User)

        class Cost(models.Model):
            project = models.ForeignKey(Project)
            cost = models.FloatField()
            date = models.DateField()

    I must return the sum of costs for each project. views.py:

        from mypm.costs.models import Project, Cost
        from django.shortcuts import render_to_response
        from django.db.models import Avg, Sum

        def index(request):
            #...
            return render_to_response('index.html', ...

    How?
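
    A sketch of the annotate() approach, which attaches each project's total in a single query by following the reverse foreign key (named 'cost' by default here, hence the doubled lookup):

        from django.db.models import Sum
        from django.shortcuts import render_to_response
        from mypm.costs.models import Project

        def index(request):
            projects = Project.objects.annotate(total_cost=Sum('cost__cost'))
            return render_to_response('index.html', {'projects': projects})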

  • Combinatorial optimisation of a distance metric

    - by Jose
    I have a set of trajectories, made up of points along the trajectory, with coordinates associated with each point. I store these in a 3D array (trajectory, point, param). I want to find the set of r trajectories that have the maximum accumulated distance between the possible pairwise combinations of these trajectories. My first attempt, which I think is working, looks like this:

        max_dist = 0
        for h in itertools.combinations(xrange(num_traj), r):
            accum = 0.
            for (m, l) in itertools.combinations(h, 2):
                for (i, j) in itertools.izip(range(k), range(k)):
                    A = [(my_mat[m, i, z] - my_mat[l, j, z])**2
                         for z in xrange(k)]
                    A = numpy.array(numpy.sqrt(A)).sum()
                    accum += A
            if max_dist < accum:
                max_dist = accum
                selected_trajectories = h

    This takes forever, as num_traj can be around 500-1000, and r can be around 5-20. k is arbitrary, but can typically be up to 50. Trying to be super-clever, I have put everything into two nested list comprehensions, making heavy use of itertools:

        chunk = [[numpy.sqrt((my_mat[m, i, :] - my_mat[l, j, :])**2).sum()
                  for ((m, l), i, j) in
                  itertools.product(itertools.combinations(h, 2), range(k), range(k))]
                 for h in itertools.combinations(range(num_traj), r)]

    Apart from being quite unreadable (!!!), it is also taking a long time. Can anyone suggest any ways to improve on this?
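
    A sketch of the main speed-up: precompute the trajectory-to-trajectory distance matrix once, so evaluating a candidate subset becomes a table lookup. Demo sizes and random data stand in for the question's my_mat; the metric matches the question's elementwise sqrt((a-b)**2), i.e. absolute differences summed over points and params:

        import itertools
        import numpy as np

        num_traj, k, r = 60, 10, 4
        my_mat = np.random.rand(num_traj, k, k)

        # pairwise distances, computed once
        pair_dist = np.zeros((num_traj, num_traj))
        for m in range(num_traj):
            pair_dist[m] = np.abs(my_mat - my_mat[m]).sum(axis=(1, 2))

        best, best_set = 0.0, None
        for h in itertools.combinations(range(num_traj), r):
            accum = pair_dist[np.ix_(h, h)].sum() / 2.0  # symmetric matrix
            if accum > best:
                best, best_set = accum, h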

  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using scrapy to scrape items from a site, but I haven't been able to implement this scraping pattern. The site I'm trying to scrape is a forum, and I scrape it once a day. Each page has a table containing posts. New posts are added to the top of the table, and as more posts are made, older posts move further back through the pages due to pagination. This is a very simple scenario, and we will assume that the order of the posts never changes. I would like to scrape all the "new" records until the last scraped post from yesterday is encountered. I have configured my spider to paginate endlessly; when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)
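
    A sketch of the stop-at-the-frontier pattern in a modern Scrapy spider: remember yesterday's newest post id, walk the pages newest-first, and stop following the next-page link once that id turns up (the URL, selectors, and id attribute are all hypothetical):

        import scrapy

        class ForumSpider(scrapy.Spider):
            name = 'forum'
            start_urls = ['http://example.com/forum?page=1']
            last_seen_id = '12345'   # loaded from storage in a real run

            def parse(self, response):
                for row in response.css('table tr.post'):
                    post_id = row.attrib.get('data-post-id')
                    if post_id == self.last_seen_id:
                        return            # reached yesterday's frontier
                    yield {'id': post_id}
                next_page = response.css('a.next::attr(href)').get()
                if next_page:
                    yield response.follow(next_page, self.parse)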

  • Simple XML-over-HTTP web service

    - by Mark
    I have a simple HTML service, developed in Django. You enter your name; it posts this and returns a value (male/female). I need to offer this as a web service. I have no idea where to start. I want to accept an XML request and provide an XML response -- that's it. Can anyone give me any pointers? Googling it is difficult when you don't know what you're searching for.
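
    A minimal sketch of an XML-over-HTTP view in Django (newer Django spelling with request.body and content_type; the <name> element and the trailing-'a' rule are hypothetical placeholders for the existing lookup logic):

        from xml.etree import ElementTree as ET
        from django.http import HttpResponse

        def gender_service(request):
            doc = ET.fromstring(request.body)             # parse the XML request
            name = doc.findtext('name', default='')
            gender = 'female' if name.endswith('a') else 'male'  # placeholder rule
            xml = '<response><gender>%s</gender></response>' % gender
            return HttpResponse(xml, content_type='application/xml')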

  • How can I detect whether an image is a PNG or APNG format?

    - by perlit
    APNG is backwards compatible with PNG. I opened an APNG and a PNG file in a hex editor and the first few bytes look identical. So if a user uploads either of these formats, how do I detect what the format really is? I've seen this done on some sites that block APNG. I'm guessing the ImageMagick library makes this easy, but what if I were to do the detection without an image processing library (for learning purposes)? Can I look for specific bytes that tell me whether the file is APNG? Solutions in any language are welcome.
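
    Yes: an APNG announces itself with an acTL chunk that appears before the first IDAT chunk, so walking the chunk list after the 8-byte PNG signature is enough. A sketch (each chunk is a 4-byte big-endian length, a 4-byte type, the data, and a 4-byte CRC):

        import struct

        def is_apng(path):
            with open(path, 'rb') as f:
                if f.read(8) != b'\x89PNG\r\n\x1a\n':
                    return False                   # not a PNG at all
                while True:
                    header = f.read(8)
                    if len(header) < 8:
                        return False
                    length, ctype = struct.unpack('>I4s', header)
                    if ctype == b'acTL':
                        return True                # animation control chunk
                    if ctype in (b'IDAT', b'IEND'):
                        return False               # plain PNG
                    f.seek(length + 4, 1)          # skip data + CRC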

  • Execute function without sending 'self' to it

    - by Sergey
    Is it possible to define a function without referencing self, this way:

        def myfunc(var_a, var_b)

    but so that it could also get the sender's data, as if I had defined it like this:

        def myfunc(self, var_a, var_b)

    That self is always the same, so it looks a little redundant to always run the function as myfunc(self, 'data_a', 'data_b'). I would then like to get its data in the function, like sender.fields.

    UPDATE: Here is some code to understand better what I mean. The class below is used to show a page, based on the Jinja2 template engine, for users to sign up:

        class SignupHandler(webapp.RequestHandler):
            def get(self, *args, **kwargs):
                utils.render_template(self, 'signup.html')

    And this code below is the render_template wrapper I created around the Jinja2 functions to use them more conveniently in my project:

        def render_template(response, template_name, vars=dict(), is_string=False):
            template_dirs = [os.path.join(root(), 'templates')]
            logging.info(template_dirs[0])
            env = Environment(loader=FileSystemLoader(template_dirs))
            try:
                template = env.get_template(template_name)
            except TemplateNotFound:
                raise TemplateNotFound(template_name)
            content = template.render(vars)
            if is_string:
                return content
            else:
                response.response.out.write(content)

    As I use render_template very often in my project, usually the same way, just with different template files, I wondered if there was a way to get rid of having to call it with self as the first argument while still having access to that object.
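
    Two common sketches for dropping the explicit self argument: bind it once with functools.partial, or move the helper onto a base handler class so that Python passes self implicitly (handler names follow the question; BaseHandler is hypothetical):

        import functools

        # inside any handler method: a locally bound renderer
        render = functools.partial(utils.render_template, self)
        render('signup.html')

        # or: a base class all handlers inherit from
        class BaseHandler(webapp.RequestHandler):
            def render_template(self, template_name, **kwargs):
                return utils.render_template(self, template_name, **kwargs)

        class SignupHandler(BaseHandler):
            def get(self, *args, **kwargs):
                self.render_template('signup.html')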

  • BeautifulSoup, but for CSS?

    - by MTsoul
    BeautifulSoup parses HTML and offers various ways to manipulate and search within HTML. Is there something similar for CSS? Specifically, I'd like to know if a given piece of HTML text is rendered as bold. Either it has an ancestor that is a <strong> or <b> tag (which can be checked with BeautifulSoup), or it has an ancestor (or is itself an element) with the CSS attribute font-weight: bold. Is this possible without resorting to writing my own library?
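
    cssutils is one existing parser that plays the BeautifulSoup role for stylesheets; resolving the full cascade against a given DOM node is still manual work, but finding the bold rules is straightforward. A sketch:

        import cssutils

        sheet = cssutils.parseString('p.intro { font-weight: bold; }')
        for rule in sheet:
            if rule.type == rule.STYLE_RULE:
                if rule.style.getPropertyValue('font-weight') == 'bold':
                    print(rule.selectorText)   # selectors that render bold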

  • Which webserver to use with bottle?

    - by luc
    Bottle can use several web servers: the built-in HTTP development server, plus support for paste, fapws3, flup, cherrypy, or any other WSGI-capable server. I am using bottle for a desktop app, and I guess the development server is enough in this case. I would like to know if any of you have experience with one of the alternative servers. Which server for which purpose? Thanks in advance.
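
    For scale, switching servers in bottle is a one-line change once the chosen package is installed, so it is cheap to start on the built-in server and move later. A sketch:

        import bottle

        @bottle.route('/')
        def index():
            return 'hello'

        bottle.run(host='localhost', port=8080)   # built-in dev server
        # bottle.run(server='cherrypy', host='localhost', port=8080)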

  • App only spawns one thread

    - by tipu
    I have what I thought was a thread-friendly app, but after adding some debugging output I've concluded that of the 15 threads I am attempting to run, only one does. I have:

        if __name__ == "__main__":
            fhf = FileHandlerFactory()
            tweet_manager = TweetManager("C:/Documents and Settings/Administrator/My Documents/My Dropbox/workspace/trie/Tweet Search Engine/data/partitioned_raw_tweets/raw_tweets.txt.001")
            start = time.time()
            for i in range(15):
                Indexer(tweet_manager, fhf).start()

    Then, in my thread entry point, I do:

        def run(self):
            print(threading.current_thread())
            self.index()

    That results in this:

        <Indexer(Thread-3, started 1168)>

    So of the 15 threads that I thought were running, I'm only running one. Any idea as to why?
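
    A stripped-down sketch of the expected behaviour, for comparison: started with start(), all 15 names print; if run() is invoked directly, or the real work happens on the calling thread (e.g. inside __init__), only one thread ever reports:

        import threading

        class Indexer(threading.Thread):
            def run(self):
                print(threading.current_thread().name)

        threads = [Indexer() for _ in range(15)]
        for t in threads:
            t.start()          # t.run() here would serialize on the main thread
        for t in threads:
            t.join()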
