Search Results

Search found 14486 results on 580 pages for 'python idle'.


  • working on lists in python

    - by owca
    I'm trying to make a small modification to the django-lfs project that will allow me to deactivate products with no stock. Unfortunately I'm just beginning to learn Python, so I have big trouble with its syntax. Here's what I'm trying to do: I use the method 'has_variants', which returns true if the product has any variants. Then I build a list of the variants for this product. Next, for every product in this list (I've called it 'set') I check its stock and set the bool variable 'inactive' to True if the product has no stock and to False if there is any. Finally, if 'inactive' is set, I set self.active to False. The code fails in the line with set[] = s. How do I correct it?

        def deactivate(self):
            """If there are no stocks, deactivate the product. Used in last step of checkout."""
            if self.has_variants():
                for s in self.variants.filter(active=True):
                    set[] = s
                for var in set:
                    if var.get_stock_amount() == 0:
                        inactive = True
                    else:
                        inactive = False
            else:
                if self.get_stock_amount() == 0:
                    inactive = True
            if inactive:
                self.active = False
            return 0

    Error log:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/base.py", line 213, in execute
            translation.activate('en-us')
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/__init__.py", line 73, in activate
            return real_activate(language)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader
            return g['real_%s' % caller](*args, **kwargs)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/trans_real.py", line 205, in activate
            _active[currentThread()] = translation(language)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/trans_real.py", line 194, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/trans_real.py", line 180, in _fetch
            app = import_module(appname)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/purplecow/rails/purpledev/lfs/caching/__init__.py", line 1, in <module>
            from listeners import *
          File "/home/purplecow/rails/purpledev/lfs/caching/listeners.py", line 10, in <module>
            from lfs.cart.models import Cart
          File "/home/purplecow/rails/purpledev/lfs/cart/models.py", line 8, in <module>
            from lfs.catalog.models import Product
          File "/home/purplecow/rails/purpledev/lfs/catalog/__init__.py", line 1, in <module>
            from listeners import *
          File "/home/purplecow/rails/purpledev/lfs/catalog/listeners.py", line 5, in <module>
            from lfs.catalog.models import PropertyGroup
          File "/home/purplecow/rails/purpledev/lfs/catalog/models.py", line 589
            set[] = s
                  ^
        SyntaxError: invalid syntax
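
    A minimal sketch of how the loop could be written with an ordinary Python list instead of the invalid set[] = s line, assuming the intent is to deactivate the product only when every active variant is out of stock (the question's code keeps only the result for the last variant); the method and attribute names are taken from the question, the rest is my own guess at the logic:

        def deactivate(self):
            """Sketch only: deactivate the product when nothing is in stock."""
            if self.has_variants():
                # Build an ordinary list; 'set' is a built-in type name, so avoid reusing it.
                variants = []
                for s in self.variants.filter(active=True):
                    variants.append(s)
                # Inactive only if there is at least one variant and all of them have no stock.
                inactive = bool(variants) and all(v.get_stock_amount() == 0 for v in variants)
            else:
                inactive = self.get_stock_amount() == 0
            if inactive:
                self.active = False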


  • Creating a spam list with a web crawler in python

    - by user313623
    Hey guys, I'm not trying to do anything malicious here, I just need to do some homework. I'm a fairly new programmer, I'm using Python 3.0, and I'm having difficulty using recursion for problem-solving. I've been stuck on this question for quite a while. Here's the assignment:

    Write a recursive method spam(url, n) that takes a url of a web page as input and a non-negative integer n, collects all the email addresses contained in the web page and adds them to a global dictionary variable spam_dict, and then recursively calls itself on every http hyperlink contained in the web page. You will use a dictionary so only one copy of every email address is saved; your dictionary will store (key, value) pairs (email, email). The recursive call should use the parameter n-1 instead of n. If n = 0, you should collect the email addresses but no recursive calls should be made. The parameter n is used to limit the recursion to at most depth n. You will need to use the solutions of the two above problems; your method spam() will call the methods links2() and emails() and possibly other functions as well. Notes: 1. Running spam() directly will produce no output on the screen; to find your spam_dict, you will need to read the value of spam_dict, and you will also need to reset it to the empty dictionary before every run of spam. 2. Recall how global variables are used. Usage:

        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 0)
        spam_dict.keys()
        dict_keys([])
        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 1)
        spam_dict.keys()
        dict_keys(['[email protected]', '[email protected]'])

    So far, I've written a function that traverses web pages and puts all the links in a nice little list, and what I wanted to do was call that function. And why would I use recursion on a dictionary? And how? I don't understand how n ties into all of this.

        def links2(url):
            content = str(urlopen(url).read())
            myparser = MyHTMLParser()
            myparser.feed(content)
            lst = myparser.get()
            mergelst = []
            for link in lst:
                mergelst.append(urljoin(lst[0], link))
            print(mergelst)

    Any input (except why spam is bad) would be greatly appreciated. Also, I realize that the above function could probably look better; if you have a way to do it, I'm all ears. However, all I need is for the program to produce the proper output.
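
    A minimal sketch of the recursive structure the assignment describes, assuming emails(url) returns the addresses found on a page and links2(url) is changed to return mergelst rather than print it (both are the helper functions the assignment refers to, not shown here); spam_dict is the global dictionary from the question:

        spam_dict = {}

        def spam(url, n):
            # Collect the addresses on this page into the global dictionary.
            for address in emails(url):
                spam_dict[address] = address
            # n limits the depth: at n == 0 we only collect and do not follow links.
            if n == 0:
                return
            for link in links2(url):
                spam(link, n - 1)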


  • How to get html elements with Python lxml

    - by Damiano
    Hello! I have this HTML code:

        <table>
          <tr>
            <td class="test"><b><a href="">aaa</a></b></td>
            <td class="test">bbb</td>
            <td class="test">ccc</td>
            <td class="test"><small>ddd</small></td>
          </tr>
          <tr>
            <td class="test"><b><a href="">eee</a></b></td>
            <td class="test">fff</td>
            <td class="test">ggg</td>
            <td class="test"><small>hhh</small></td>
          </tr>
        </table>

    I use this Python code to extract all <td class="test"> elements with the lxml module:

        import urllib2
        import lxml.html

        code = urllib2.urlopen("http://www.example.com/page.html").read()
        html = lxml.html.fromstring(code)
        result = html.xpath('//td[@class="test"][position() = 1 or position() = 4]')

    It works well! The result is:

        <td class="test"><b><a href="">aaa</a></b></td>
        <td class="test"><small>ddd</small></td>
        <td class="test"><b><a href="">eee</a></b></td>
        <td class="test"><small>hhh</small></td>

    (so the first and the fourth column of each <tr>). Now I have to extract: aaa (the text of the link), ddd (the text inside the <small> tag), eee (the text of the link), hhh (the text inside the <small> tag). How could I extract these values? (The problem is that I have to strip the <b> tag and get the text of the anchor in the first column, and strip the <small> tag in the fourth column.) Thank you!
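
    A minimal sketch of one way to pull out just the text, reusing the same fetch and xpath as above; text_content() flattens an element's descendants (the <b>, <a> and <small> wrappers) into plain text:

        import urllib2
        import lxml.html

        code = urllib2.urlopen("http://www.example.com/page.html").read()
        html = lxml.html.fromstring(code)
        cells = html.xpath('//td[@class="test"][position() = 1 or position() = 4]')
        # Strip surrounding whitespace so only the visible text remains.
        values = [cell.text_content().strip() for cell in cells]
        print(values)  # expected for the sample table: ['aaa', 'ddd', 'eee', 'hhh']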


  • Python multiprocessing global variable updates not returned to parent

    - by user1459256
    I am trying to return values from subprocesses but these values are unfortunately unpicklable. So I used global variables in threads module with success but have not been able to retrieve updates done in subprocesses when using multiprocessing module. I hope I'm missing something. The results printed at the end are always the same as initial values given the vars dataDV03 and dataDV04. The subprocesses are updating these global variables but these global variables remain unchanged in the parent. import multiprocessing # NOT ABLE to get python to return values in passed variables. ants = ['DV03', 'DV04'] dataDV03 = ['', ''] dataDV04 = {'driver': '', 'status': ''} def getDV03CclDrivers(lib): # call global variable global dataDV03 dataDV03[1] = 1 dataDV03[0] = 0 # eval( 'CCL.' + lib + '.' + lib + '( "DV03" )' ) these are unpicklable instantiations def getDV04CclDrivers(lib, dataDV04): # pass global variable dataDV04['driver'] = 0 # eval( 'CCL.' + lib + '.' + lib + '( "DV04" )' ) if __name__ == "__main__": jobs = [] if 'DV03' in ants: j = multiprocessing.Process(target=getDV03CclDrivers, args=('LORR',)) jobs.append(j) if 'DV04' in ants: j = multiprocessing.Process(target=getDV04CclDrivers, args=('LORR', dataDV04)) jobs.append(j) for j in jobs: j.start() for j in jobs: j.join() print 'Results:\n' print 'DV03', dataDV03 print 'DV04', dataDV04 I cannot post to my question so will try to edit the original. Here is the object that is not picklable: In [1]: from CCL import LORR In [2]: lorr=LORR.LORR('DV20', None) In [3]: lorr Out[3]: <CCL.LORR.LORR instance at 0x94b188c> This is the error returned when I use a multiprocessing.Pool to return the instance back to the parent: Thread getCcl (('DV20', 'LORR'),) Process PoolWorker-1: Traceback (most recent call last): File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap self.run() File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 88, in run self._target(*self._args, **self._kwargs) File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/pool.py", line 71, in worker put((job, i, result)) File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/queues.py", line 366, in put return send(obj) UnpickleableError: Cannot pickle <type 'thread.lock'> objects In [5]: dir(lorr) Out[5]: ['GET_AMBIENT_TEMPERATURE', 'GET_CAN_ERROR', 'GET_CAN_ERROR_COUNT', 'GET_CHANNEL_NUMBER', 'GET_COUNT_PER_C_OP', 'GET_COUNT_REMAINING_OP', 'GET_DCM_LOCKED', 'GET_EFC_125_MHZ', 'GET_EFC_COMB_LINE_PLL', 'GET_ERROR_CODE_LAST_CAN_ERROR', 'GET_INTERNAL_SLAVE_ERROR_CODE', 'GET_MAGNITUDE_CELSIUS_OP', 'GET_MAJOR_REV_LEVEL', 'GET_MINOR_REV_LEVEL', 'GET_MODULE_CODES_CDAY', 'GET_MODULE_CODES_CMONTH', 'GET_MODULE_CODES_DIG1', 'GET_MODULE_CODES_DIG2', 'GET_MODULE_CODES_DIG4', 'GET_MODULE_CODES_DIG6', 'GET_MODULE_CODES_SERIAL', 'GET_MODULE_CODES_VERSION_MAJOR', 'GET_MODULE_CODES_VERSION_MINOR', 'GET_MODULE_CODES_YEAR', 'GET_NODE_ADDRESS', 'GET_OPTICAL_POWER_OFF', 'GET_OUTPUT_125MHZ_LOCKED', 'GET_OUTPUT_2GHZ_LOCKED', 'GET_PATCH_LEVEL', 'GET_POWER_SUPPLY_12V_NOT_OK', 'GET_POWER_SUPPLY_15V_NOT_OK', 'GET_PROTOCOL_MAJOR_REV_LEVEL', 'GET_PROTOCOL_MINOR_REV_LEVEL', 'GET_PROTOCOL_PATCH_LEVEL', 'GET_PROTOCOL_REV_LEVEL', 'GET_PWR_125_MHZ', 'GET_PWR_25_MHZ', 'GET_PWR_2_GHZ', 'GET_READ_MODULE_CODES', 'GET_RX_OPT_PWR', 'GET_SERIAL_NUMBER', 'GET_SIGN_OP', 'GET_STATUS', 'GET_SW_REV_LEVEL', 'GET_TE_LENGTH', 'GET_TE_LONG_FLAG_SET', 'GET_TE_OFFSET_COUNTER', 'GET_TE_SHORT_FLAG_SET', 'GET_TRANS_NUM', 'GET_VDC_12', 'GET_VDC_15', 'GET_VDC_7', 
'GET_VDC_MINUS_7', 'SET_CLEAR_FLAGS', 'SET_FPGA_LOGIC_RESET', 'SET_RESET_AMBSI', 'SET_RESET_DEVICE', 'SET_RESYNC_TE', 'STATUS', '_HardwareDevice__componentName', '_HardwareDevice__hw', '_HardwareDevice__stickyFlag', '_LORRBase__logger', '__del__', '__doc__', '__init__', '__module__', '_devices', 'clearDeviceCommunicationErrorAlarm', 'getControlList', 'getDeviceCommunicationErrorCounter', 'getErrorMessage', 'getHwState', 'getInternalSlaveCanErrorMsg', 'getLastCanErrorMsg', 'getMonitorList', 'hwConfigure', 'hwDiagnostic', 'hwInitialize', 'hwOperational', 'hwSimulation', 'hwStart', 'hwStop', 'inErrorState', 'isMonitoring', 'isSimulated'] In [6]:
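
    On the underlying problem: a child process cannot update an ordinary global in the parent, because each multiprocessing.Process works on its own copy of the module's globals. A minimal sketch of one common workaround, assuming the values that need to come back are themselves picklable (plain numbers or strings rather than the unpicklable CCL.LORR instance); the function names here are illustrative, not the questioner's:

        import multiprocessing

        def get_dv03(results):
            # Store only simple, picklable values, not the driver object itself.
            results['DV03'] = [0, 1]

        def get_dv04(results):
            results['DV04'] = {'driver': 0, 'status': ''}

        if __name__ == '__main__':
            manager = multiprocessing.Manager()
            results = manager.dict()   # shared, proxy-backed dictionary
            jobs = [multiprocessing.Process(target=get_dv03, args=(results,)),
                    multiprocessing.Process(target=get_dv04, args=(results,))]
            for j in jobs:
                j.start()
            for j in jobs:
                j.join()
            print('Results: %s' % dict(results))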


  • how to detect internet idle time?

    - by omair iqbal
    How do I detect internet idle time (the time since I last used the internet) using Delphi? Something like:

        begin
          { idle time detection code goes here }
          ShowMessage('you have not browsed any websites or downloaded/uploaded for over xyz seconds');
        end;

    Thanks in advance.


  • can't download youtube video

    - by dsaccount1
    I'm having trouble retrieving the youtube video automatically, heres the code. The problem is the last part. download = urllib.request.urlopen(download_url).read() # Youtube video download script # 10n1z3d[at]w[dot]cn import urllib.request import sys print("\n--------------------------") print (" Youtube Video Downloader") print ("--------------------------\n") try: video_url = sys.argv[1] except: video_url = input('[+] Enter video URL: ') print("[+] Connecting...") try: if(video_url.endswith('&feature=related')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=related')[0] elif(video_url.endswith('&feature=dir')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=dir')[0] elif(video_url.endswith('&feature=fvst')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=fvst')[0] elif(video_url.endswith('&feature=channel_page')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=channel_page')[0] else: video_id = video_url.split('www.youtube.com/watch?v=')[1] except: print("[-] Invalid URL.") exit(1) print("[+] Parsing token...") try: url = str(urllib.request.urlopen('http://www.youtube.com/get_video_info?&video_id=' + video_id).read()) token_value = url.split('video_id='+video_id+'&token=')[1].split('&thumbnail_url')[0] download_url = "http://www.youtube.com/get_video?video_id=" + video_id + "&t=" + token_value + "&fmt=18" except: url = str(urllib.request.urlopen('www.youtube.com/watch?v=' + video_id)) exit(1) v_url=str(urllib.request.urlopen('http://'+video_url).read()) video_title = v_url.split('"rv.2.title": "')[1].split('", "rv.4.rating"')[0] if '&quot;' in video_title: video_title = video_title.replace('&quot;','"') elif '&amp;' in video_title: video_title = video_title.replace('&amp;','&') print("[+] Downloading " + '"' + video_title + '"...') try: print(download_url) file = open(video_title + '.mp4', 'wb') download = urllib.request.urlopen(download_url).read() print(download) for line in download: file.write(line) file.close() except: print("[-] Error downloading. Quitting.") exit(1) print("\n[+] Done. The video is saved to the current working directory(cwd).\n")
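
    One detail worth noting about the last part: in Python 3, urlopen(...).read() returns a bytes object, and iterating over bytes yields integers, so file.write(line) inside that loop will fail. A minimal sketch of writing the response straight to disk instead, reusing download_url and video_title from the script above; streaming in chunks is my own addition to avoid holding the whole video in memory:

        import urllib.request

        with urllib.request.urlopen(download_url) as response, open(video_title + '.mp4', 'wb') as out:
            while True:
                chunk = response.read(64 * 1024)  # read the video in 64 KiB pieces
                if not chunk:
                    break
                out.write(chunk)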


  • Facebook connect on Google App Engine with Django Patch

    - by yoav.aviram
    We are building a website on Google App Engine, using django patch. We would like to use Facebook Connect for two purposes: authenticate users, and access users' social data. Searching for a solution in the usual places (Google, FB, SO) brings up a lot of noise, many partial solutions and no clear answer. So the question is this: does anyone have a clear working solution? Maybe even a recipe? Thanks.


  • twython search api rate limit: Header information will not be updated

    - by user2715478
    I want to handle the Search-API rate limit of 180 requests / 15 minutes. The first solution I came up with was to check the remaining requests in the header and wait 900 seconds. See the following snippet: results = search_interface.cursor(search_interface.search, q=k, lang=lang, result_type=result_mode) while True: try: tweet = next(results) if limit_reached(search_interface): sleep(900) self.writer(tweet) def limit_reached(search_interface): remaining_rate = int(search_interface.get_lastfunction_header('X-Rate-Limit-Remaining')) return remaining_rate <= 2 But it seems, that the header information are not reseted to 180 after it reached the two remaining requests. The second solution I came up with was to handle the twython exception for rate limitation and wait the remaining amount of time: results = search_interface.cursor(search_interface.search, q=k, lang=lang, result_type=result_mode) while True: try: tweet = next(results) self.writer(tweet) except TwythonError as inst: logger.error(inst.msg) wait_for_reset(search_interface) continue except StopIteration: break def wait_for_reset(search_interface): reset_timestamp = int(search_interface.get_lastfunction_header('X-Rate-Limit-Reset')) now_timestamp = datetime.now().timestamp() seconds_offset = 10 t = reset_timestamp - now_timestamp + seconds_offset logger.info('Waiting {0} seconds for Twitter rate limit reset.'.format(t)) sleep(t) But with this solution I receive this message INFO: Resetting dropped connection: api.twitter.com" and the loop will not continue with the last element of the generator. Have somebody faced the same problems? Regards.
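
    A minimal sketch of the waiting logic I would try, based only on the headers already used in the question (X-Rate-Limit-Remaining and X-Rate-Limit-Reset); the idea is to check the headers after each tweet is written, so the cursor generator itself is never interrupted. search_interface, k, lang, result_mode and self.writer are the names from the snippets above, and whether this avoids the dropped-connection message is an assumption on my part:

        from time import sleep, time

        results = search_interface.cursor(search_interface.search, q=k, lang=lang, result_type=result_mode)
        for tweet in results:
            self.writer(tweet)
            remaining = int(search_interface.get_lastfunction_header('X-Rate-Limit-Remaining'))
            if remaining <= 2:
                reset_at = int(search_interface.get_lastfunction_header('X-Rate-Limit-Reset'))
                # Sleep until the 15-minute window resets, plus a small safety margin.
                sleep(max(0, reset_at - time()) + 10)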


  • Why eclipse + pydev print() output look strange with two strings?

    - by srisar
    Hey all, I just did the following:

        a = input("give a word: ")
        b = input("give another word: ")
        c = a + " " + b
        print("result is", c)

    and I get the output as follows: give a word: name give another word: word result is name word. My question is: why is the output on the PyDev/Eclipse console in two lines? I expected the output as follows: give a word: name give another word: word result is name word. How and why does this happen to me? On cmd it looks fine!


  • I am trying to use user-defined functions to print out an A out of stars, but i need help defining m

    - by lm
    def horizline(col):
        for col in range(col):
            print("*", end='')
        print()

    def vertline(rows, col):
        for rows in range(rows - 2):
            print("*", end='')
            for col in range(col - 2):
                print(' ', end='')
            print("*")

    def functionA(width):
        horizline(width)
        vertline(width)
        horizline(width)
        vertline(width)
        print()

    def main():
        width = int(input("Please enter a width for the letter: "))
        length = int(input("Please enter a lenght for the letter: "))
        letter = input("Enter one of the capital letters: A ")
        if width >= 5 and width <= 20:
            functionA(width)
        else:
            print("You have entered an incorrect value")

    main()
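
    A minimal sketch of how the pieces could fit together so that vertline() receives both arguments it expects (in the code above it is called with only the width); the shape drawn is a hollow, box-like 'A' built from a bar plus two side columns, which I'm assuming is acceptable as a starting point for the assignment:

        def horizline(width):
            print("*" * width)

        def vertline(rows, width):
            # Draw 'rows' lines with a star on each edge and spaces in between.
            for _ in range(rows):
                print("*" + " " * (width - 2) + "*")

        def functionA(width, height):
            horizline(width)           # top bar of the 'A'
            vertline(height, width)    # upper legs
            horizline(width)           # middle bar
            vertline(height, width)    # lower legs

        def main():
            width = int(input("Please enter a width for the letter: "))
            height = int(input("Please enter a height for the letter: "))
            if 5 <= width <= 20:
                functionA(width, height)
            else:
                print("You have entered an incorrect value")

        main()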


  • I am trying to use user-defined functions to print out an T out of stars, but i need help shifting t

    - by lm
    I know main() and other parts of the program are missing, but please help.

        def horizLine(col):
            for cols in range(col):
                print("*", end='')
            print()

        def line(col):  # C,E,F,G,I,L,P,T
            for col in range(col // 2):
                print("*", end='')
            print()

        def functionT(width):
            horizLine(width)
            line(width)

        # enter width for the box
        width = int(input("Enter a width between 5 and 20: "))
        letter = input("Enter one of the capital letters: T ")
        if width >= 5 and width <= 20:
            if letter == "T":
                functionT(width)
            else:
                print()
                print("Invalid letter!")
        else:
            print("You have entered a wrong range for the width!")
        main()
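
    A minimal sketch of shifting the stem of the T so it sits under the middle of the top bar, which I'm assuming is what the truncated title is asking about; main() simply wraps the prompts from the question, and the stem height is an arbitrary choice:

        def horizLine(width):
            print("*" * width)

        def stem(height, width):
            # Indent each stem row by half the bar width so it is centred.
            for _ in range(height):
                print(" " * (width // 2) + "*")

        def functionT(width, height):
            horizLine(width)
            stem(height, width)

        def main():
            width = int(input("Enter a width between 5 and 20: "))
            letter = input("Enter one of the capital letters: T ")
            if 5 <= width <= 20 and letter == "T":
                functionT(width, width // 2)
            else:
                print("Invalid input!")

        main()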


  • how to change string values in dictionary to int values

    - by tom smith
    I have a dictionary such as:

        {'Sun': {'Satellites': 'Mercury,Venus,Earth,Mars,Jupiter,Saturn,Uranus,Neptune,Ceres,Pluto,Haumea,Makemake,Eris', 'Orbital Radius': '0', 'Object': 'Sun', 'RootObject': 'Sun', 'Radius': '20890260'}, 'Earth': {'Period': '365.256363004', 'Satellites': 'Moon', 'Orbital Radius': '77098290', 'Radius': '63710.41000.0', 'Object': 'Earth'}, 'Moon': {'Period': '27.321582', 'Orbital Radius': '18128500', 'Radius': '1737000.10', 'Object': 'Moon'}}

    I am wondering how to change just the number values to ints instead of strings.

        def read_next_object(file):
            obj = {}
            for line in file:
                if not line.strip():
                    continue
                line = line.strip()
                key, val = line.split(": ")
                if key in obj and key == "Object":
                    yield obj
                    obj = {}
                obj[key] = val
            yield obj

        planets = {}
        with open("smallsolar.txt", 'r') as f:
            for obj in read_next_object(f):
                planets[obj["Object"]] = obj

        print(planets)
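
    A minimal sketch of one way to convert the numeric strings after the dictionary has been built, assuming values that parse as numbers should become int where possible and float otherwise, while everything else (names, satellite lists) stays a string:

        def to_number(value):
            """Return an int or float if the string parses as one, otherwise the original string."""
            for cast in (int, float):
                try:
                    return cast(value)
                except ValueError:
                    pass
            return value

        for body in planets.values():
            for key, value in body.items():
                body[key] = to_number(value)

    Note that a malformed value such as '63710.41000.0' (two decimal points) will not parse and simply stays a string.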


  • Django logs: any tutorial to log to a file

    - by Algorist
    Hi, I am working on a Django project that I didn't start; the developer working on it left. During the knowledge transfer I was told that all events are logged to the database. I don't find the database interface useful for searching the logs, and sometimes events don't even seem to be logged (I might be wrong). I want to know if there is an easy tutorial that explains how to enable logging in Django with minimal configuration changes. Thank you, Bala
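
    A minimal sketch of sending log output to a file with Python's standard logging module, which Django's own loggers build on; the file path and logger name are placeholders, and depending on the Django version a LOGGING dictionary in settings.py may be the more idiomatic place to configure this:

        import logging

        logging.basicConfig(
            filename='/var/log/myproject/django.log',   # placeholder path
            level=logging.INFO,
            format='%(asctime)s %(levelname)s %(name)s %(message)s',
        )

        logger = logging.getLogger('myproject')          # placeholder logger name
        logger.info('Application event worth keeping')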


  • Zoneminder camera permanently idle

    - by C. Ross
    I have an Ubuntu server with ZoneMinder installed. I only have one camera (a cheap Logitech USB model) and it seems to be permanently idle. I'm also getting this error repeatedly in the logs:

        06/06/2010 08:26:56.563783 zmwatch[28234].INF [Restarting capture daemon for main_room, shared memory not valid]
        06/06/2010 08:26:56.812964 zmwatch[28234].INF ['zmc -d /dev/video0' starting at 10/06/06 08:26:56, pid = 29214]
        06/06/2010 08:27:06.814486 zmwatch[28234].INF [Restarting capture daemon for main_room, shared memory not valid]
        06/06/2010 08:27:07.054854 zmwatch[28234].INF ['zmc -d /dev/video0' starting at 10/06/06 08:27:07, pid = 29219]

    I've already tried following the instructions in the FAQ for increasing shared memory, but that doesn't seem to work. What do I need to do to get this working?


  • Converting the value from string to integer in a nested dictionary

    - by tom smith
    I want to change the numbers in my dictionary to int values for use later in my program. So far I have import time import math x = 400 y = 300 def read_next_object(file): obj = {} for line in file: if not line.strip(): continue line = line.strip() key, val = line.split(": ") if key in obj and key == "Object": yield obj obj = {} obj[key] = val yield obj planets = {} with open( "smallsolar.txt", 'r') as f: for obj in read_next_object(f): planets[obj["Object"]] = obj print(planets) scale=250/int(max([planets[x]["Orbital Radius"] for x in planets if "Orbital Radius" in planets[x]])) print(scale) and the output is {'Sun': {'Object': 'Sun', 'Satellites': 'Mercury,Venus,Earth,Mars,Jupiter,Saturn,Uranus,Neptune,Ceres,Pluto,Haumea,Makemake,Eris', 'Orbital Radius': '0', 'RootObject': 'Sun', 'Radius': '20890260'}, 'Moon': {'Object': 'Moon', 'Orbital Radius': '18128500', 'Period': '27.321582', 'Radius': '1737000.10'}, 'Earth': {'Object': 'Earth', 'Satellites': 'Moon', 'Orbital Radius': '77098290', 'Period': '365.256363004', 'Radius': '6371000.0'}} 3.2426140709476178e-06 I want to be able to convert the numbers in the dict to ints for further use. Any help in greatly appreciated.
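
    This is essentially the same conversion problem as the earlier "how to change string values in dictionary to int values" entry. One extra point worth illustrating: converting before taking the maximum also avoids comparing the radii as strings (which compares them lexicographically rather than numerically). A minimal sketch, assuming every 'Orbital Radius' value is an integer written as a string:

        orbital_radii = [int(planets[name]["Orbital Radius"])
                         for name in planets
                         if "Orbital Radius" in planets[name]]
        # max() now compares real numbers, not strings.
        scale = 250 / max(orbital_radii)
        print(scale)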


  • Can we compare programming languages ergonomically?

    - by Nick Rosencrantz
    For instance, would Python be a more ergonomic programming language since it doesn't force you to type curly braces, which require the AltGr key? Python also usually requires less code to achieve the same result. Or am I biased towards Python, and is PHP actually an ergonomic and comfortable language despite forcing the programmer to use the AltGr key? Isn't forcing the programmer to use the AltGr key not very ergonomic?


  • Djangobb problem

    - by Djero
    I've installed the Djangobb app on my server (Debian, mod_python) by cloning the original source. The only thing I've changed is the database options in settings.py. All needed components are installed and the syncdb command ran fine. But when I try to open my forum, it returns this error: ImproperlyConfigured: Error importing middleware django_authopenid.middleware: "No module named djangobb_forum.subscription". I've checked - djangobb_forum/subscription.py exists, so I don't know what can be wrong. Maybe someone has had a problem like this and knows how to fix it? Sorry for my English.


  • Using fft2 with reshaping for an RGB filter

    - by Mahmoud Aladdin
    I want to apply a filter on an image, for example, blurring filter [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]. Also, I'd like to use the approach that convolution in Spatial domain is equivalent to multiplication in Frequency domain. So, my algorithm will be like. Load Image. Create Filter. convert both Filter & Image to Frequency domains. multiply both. reconvert the output to Spatial Domain and that should be the required output. The following is the basic code I use, the image is loaded and displayed as cv.cvmat object. Image is a class of my creation, it has a member image which is an object of scipy.matrix and toFrequencyDomain(size = None) uses spf.fftshift(spf.fft2(self.image, size)) where spf is scipy.fftpack and dotMultiply(img) uses scipy.multiply(self.image, image) f = Image.fromMatrix([[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]) lena = Image.fromFile("Test/images/lena.jpg") print lena.image.shape lenaf = lena.toFrequencyDomain(lena.image.shape) ff = f.toFrequencyDomain(lena.image.shape) lenafm = lenaf.dotMultiplyImage(ff) lenaff = lenafm.toTimeDomain() lena.display() lenaff.display() So, the previous code works pretty well, if I told OpenCV to load the image via GRAY_SCALE. However, if I let the image to be loaded in color ... lena.image.shape will be (512, 512, 3) .. so, it gives me an error when using scipy.fttpack.ftt2 saying "When given, Shape and Axes should be of same length". What I tried next was converted my filter to 3-D .. as [[[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]] And, not knowing what the axes argument do, I added it with random numbers as (-2, -1, -1), (-1, -1, -2), .. etc. until it gave me the correct filter output shape for the dotMultiply to work. But, of course it wasn't the correct value. Things were totally worse. My final trial, was using fft2 function on each of the components 2-D matrices, and then re-making the 3-D one, using the following code. # Spiltting the 3-D matrix to three 2-D matrices. for i, row in enumerate(self.image): r.append(list()) g.append(list()) b.append(list()) for pixel in row: r[i].append(pixel[0]) g[i].append(pixel[1]) b[i].append(pixel[2]) rfft = spf.fftshift(spf.fft2(r, size)) gfft = spf.fftshift(spf.fft2(g, size)) bfft = spf.fftshift(spf.fft2(b, size)) newImage.image = sp.asarray([[[rfft[i][j], gfft[i][j], bfft[i][j]] for j in xrange(len(rfft[i]))] for i in xrange(len(rfft))] ) return newImage Any help on what I made wrong, or how can I achieve that for both GreyScale and Coloured pictures.
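
    A minimal sketch of the per-channel approach without manually splitting pixels into nested lists, using numpy slicing; scipy.fftpack.fft2 takes the target shape the same way the question's toFrequencyDomain() passes it, and the kernel is the same 3x3 blur. This is an assumption-laden outline of the frequency-domain multiplication idea, working directly on arrays rather than the questioner's Image class, and it leaves out the fftshift step since plain element-wise multiplication of the unshifted transforms is enough for the convolution:

        import numpy as np
        import scipy.fftpack as spf

        def blur_rgb(image, kernel):
            """image: (H, W, 3) array; kernel: small 2-D array. Returns the filtered image."""
            h, w = image.shape[:2]
            out = np.empty_like(image, dtype=float)
            for c in range(3):
                # Transform one colour channel and the kernel to the same (H, W) size.
                channel_f = spf.fft2(image[:, :, c], shape=(h, w))
                kernel_f = spf.fft2(kernel, shape=(h, w))
                # Multiplication in the frequency domain == circular convolution in space.
                out[:, :, c] = np.real(spf.ifft2(channel_f * kernel_f))
            return out

        kernel = np.full((3, 3), 1 / 9.0)  # the blurring filter from the question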


  • Decode base64 data as array in Python

    - by skerit
    I'm using this handy Javascript function to decode a base64 string and get an array in return. This is the string: base64_decode_array('6gAAAOsAAADsAAAACAEAAAkBAAAKAQAAJgEAACcBAAAoAQAA') This is what's returned: 234,0,0,0,235,0,0,0,236,0,0,0,8,1,0,0,9,1,0,0,10,1,0,0,38,1,0,0,39,1,0,0,40,1,0,0 The problem is I don't really understand the javascript function: var base64chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'.split(""); var base64inv = {}; for (var i = 0; i < base64chars.length; i++) { base64inv[base64chars[i]] = i; } function base64_decode_array (s) { // remove/ignore any characters not in the base64 characters list // or the pad character -- particularly newlines s = s.replace(new RegExp('[^'+base64chars.join("")+'=]', 'g'), ""); // replace any incoming padding with a zero pad (the 'A' character is zero) var p = (s.charAt(s.length-1) == '=' ? (s.charAt(s.length-2) == '=' ? 'AA' : 'A') : ""); var r = []; s = s.substr(0, s.length - p.length) + p; // increment over the length of this encrypted string, four characters at a time for (var c = 0; c < s.length; c += 4) { // each of these four characters represents a 6-bit index in the base64 characters list // which, when concatenated, will give the 24-bit number for the original 3 characters var n = (base64inv[s.charAt(c)] << 18) + (base64inv[s.charAt(c+1)] << 12) + (base64inv[s.charAt(c+2)] << 6) + base64inv[s.charAt(c+3)]; // split the 24-bit number into the original three 8-bit (ASCII) characters r.push((n >>> 16) & 255); r.push((n >>> 8) & 255); r.push(n & 255); } // remove any zero pad that was added to make this a multiple of 24 bits return r; } What's the function of those "<<<" and "" characters. Or is there a function like this for Python?
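
    The '<<' and '>>>' in the JavaScript are just bit-shift operators used to repack four 6-bit base64 digits into three 8-bit values; in Python the standard library does that part for you. A minimal sketch, assuming the goal is the same list of byte values the JavaScript function returns:

        import base64

        def base64_decode_array(s):
            # b64decode handles the character table, padding and bit packing internally.
            raw = base64.b64decode(s)
            # In Python 3 iterating over bytes already yields integers; under Python 2 use ord(b) for each byte.
            return list(raw)

        print(base64_decode_array('6gAAAOsAAADsAAAACAEAAAkBAAAKAQAAJgEAACcBAAAoAQAA'))
        # [234, 0, 0, 0, 235, 0, 0, 0, 236, 0, 0, 0, 8, 1, 0, 0, ...]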


  • How do I communicate with an IronPython component in a C#/XNA game?

    - by Jonathan Hobbs
    My XNA game is component-oriented, and has various components for position, physics representation, rendering, etc., all of which extend a base Component class. The player and enemies also have controllers which are currently defined in C#. I'd like to turn them into Python scripts, but I'm not sure how to interact with those scripts. The examples in Embedding IronPython in a C# Application suggest I'd have to create a wrapper class (e.g. a Script component) which compiles a Python script and calls the Update methods of the component in the script. Is this the most effective way of working with a Python object? I feel that I'm missing something in my research - there must be a way to load up a script, instantiate a Python object and then work directly with it from within C#. Or is the wrapper required?
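
    On the Python side, a minimal sketch of what a scripted controller might look like if the C# wrapper loads the file, instantiates a named class and calls its Update method each frame; every name here (ScriptedController, owner, GetComponent, Update) is an assumption about the wrapper's conventions rather than anything XNA or IronPython prescribes:

        class ScriptedController:
            """Controller behaviour defined in IronPython and driven by the C# Script component."""

            def __init__(self, owner):
                # 'owner' is assumed to be the entity object the C# wrapper passes in.
                self.owner = owner
                self.speed = 2.0

            def Update(self, dt):
                # Illustrative only: nudge the owner's (assumed) position component each frame.
                position = self.owner.GetComponent('Position')
                position.X += self.speed * dt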


  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream would get up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams / classes must not be limited to X. The use case is an ISP billing the traffic by calculating the bandwidth average over 5 minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer during interface busy times) but get the data through during idle/low traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HSFC I cannot see a straightforward way to accomplish this.


  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some Code headers = {} headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0' headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' headers['Accept-Language'] = 'en-gb,en;q=0.5' #headers['Accept-Encoding'] = 'gzip, deflate' request = urllib.request.Request(sURL, headers = headers) try: response = urllib.request.urlopen(request) except error.HTTPError as e: print('The server couldn\'t fulfill the request.') print('Error code: {0}'.format(e.code)) except error.URLError as e: print('We failed to reach a server.') print('Reason: {0}'.format(e.reason)) else: f = open('output/{0}.html'.format(sFileName),'w') f.write(response.read().decode('utf-8')) A url http://groupon.cl/descuentos/santiago-centro The situation Here's what I did: enable javascript in browser open url above and keep an eye on the console disable javascript repeat step 2 use urllib2 to grab the webpage and save it to a file enable javascript open the file with browser and observe console repeat 7 with javascript off results In step 2 I saw that a whole lot of the page content was loaded dynamically using ajax. So the HTML that arrived was a sort of skeleton and ajax was used to fill in the gaps. This is fine and not at all surprising Since the page should be seo friendly it should work fine without js. in step 4 nothing happens in the console and the skeleton page loads pre-populated rendering the ajax unnecessary. This is also completely not confusing in step 7 the ajax calls are made but fail. this is also ok since the urls they are using are not local, the calls are thus broken. The page looks like the skeleton. This is also great and expected. in step 8: no ajax calls are made and the skeleton is just a skeleton. I would have thought that this should behave very much like in step 4 question What I want to do is use urllib2 to grab the html from step 4 but I cant figure out how. What am I missing and how could I pull this off? To paraphrase If I was writing a spider I would want to be able to grab plain ol' HTML (as in that which resulted in step 4). I dont want to execute ajax stuff or any javascript at all. I don't want to populate anything dynamically. I just want HTML. The seo friendly site wants me to get what I want because that's what seo is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually I would turn off js, navigate to the page and copy the html. I want to automate this. stuff I've tried I used wireshark to look at packet headers and the GETs sent off from my pc in steps 2 and 4 have the same headers. Reading about SEO stuff makes me think that this is pretty normal otherwise techniques such as hijax wouldn't be used. Here are the headers my browser sends: Host: groupon.cl User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-gb,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Here are the headers my script sends: Accept-Encoding: identity Host: groupon.cl Accept-Language: en-gb,en;q=0.5 Connection: close Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 The differences are: my script has Connection = close instead of keep-alive. 
I can't see how this would cause a problem my script has Accept-encoding = identity. This might be the cause of the problem. I can't really see why the host would use this field to determine the user-agent though. If I change encoding to match the browser request headers then I have trouble decoding it. I'm working on this now... watch this space, I'll update the question as new info comes up
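
    Regarding the last "stuff I've tried" point: if the request advertises gzip support, the body has to be decompressed before it can be decoded. A minimal sketch of matching the browser's Accept-Encoding header and handling a gzipped response, reusing the headers dictionary and sURL from the snippet above; whether this changes what the server returns for this particular site is an assumption:

        import gzip
        import urllib.request

        headers['Accept-Encoding'] = 'gzip, deflate'
        request = urllib.request.Request(sURL, headers=headers)
        response = urllib.request.urlopen(request)

        body = response.read()
        # Only gunzip when the server actually compressed the response.
        if response.headers.get('Content-Encoding') == 'gzip':
            body = gzip.decompress(body)

        html = body.decode('utf-8')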


  • ValueError: too many values to unpack in a tuple

    - by falosi
    Please shed some light on why I am getting a "too many values to unpack" ValueError in my for loop. Have tried deb

        naislist = [('CONTROL FILE', '0', '0', '0'), ('REDO LOG', '0', '0', '0'), ('ARCHIVED LOG', '.69', '.59', '3'),
                    ('BACKUP PIECE', '46.54', '0', '192'), ('IMAGE COPY', '0', '0', '0'),
                    ('FLASHBACK LOG', '10.15', '6.31', '82'), ('FOREIGN ARCHIVED LOG', '0', '0', '0')]
        print "size of naislist is ", len(naislist)
        heading = ('MAIN MENU', 'LEVELS', 'LEVEL2', 'LEVEL3')
        rearrange = dict(zip((0, 1, 2, 3), (len(str(x)) for x in heading)))
        for tu, x in naislist:
            rearrange.update((i, max(rearrange[i], len(str(el)))) for i, el in enumerate(tu))
            rearrange[4] = max(rearrange[4], len(str(x)))
        forkit = '|'.join('%%-%ss' % rearrange[i] for i in xrange(0, 4))
        print '\n'.join((forkit % heading,
                         '-|-'.join(rearrange[i] * '-' for i in xrange(4)),
                         '\n'.join(forkit % (a, b, c, d) for (a, b, c), d in naislist)))
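
    The ValueError comes from `for tu, x in naislist:` trying to unpack each 4-element tuple into just two names. A minimal sketch of two ways to unpack the rows, using the same naislist shape as above; which grouping the rest of the formatting code should use is for the author to decide:

        naislist = [('CONTROL FILE', '0', '0', '0'), ('ARCHIVED LOG', '.69', '.59', '3')]

        # Option 1: name every column explicitly.
        for name, used, reclaimable, count in naislist:
            print(name, used, reclaimable, count)

        # Option 2: keep the first three values together and the last one separate,
        # matching the (a, b, c), d pattern used later in the question.
        # (Starred unpacking in a for loop is Python 3 syntax; in Python 2 use tu, x = row[:3], row[3].)
        for *tu, x in naislist:
            print(tu, x)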

