Search Results

Search found 15187 results on 608 pages for 'boost python'.

Page 28 of 608

  • Python PyBluez loses Bluetooth connection after a while

    - by Travis G.
    I am using Python to write a simple serial Bluetooth script that sends information about my computer stats periodically. The receiving device is a Sparkfun BlueSmirf Silver. The problem is that, after the script runs for a few minutes, it stops sending packets to the receiver and fails with the error:

        (11, 'Resource temporarily unavailable')

    Noticing that this inevitably happens, I added some code to automatically try to reopen the connection. However, then I get:

        Could not connect: (16, 'Device or resource busy')

    Am I doing something wrong with the connection? Do I need to occasionally reopen the socket? I'm not sure how to recover from this type of error. I understand that sometimes the port will be busy and a write operation is deferred to avoid blocking other processes, but I wouldn't expect the connection to fail so regularly. Any thoughts? Here is the script:

        import psutil
        import serial
        import string
        import time
        import bluetooth

        sampleTime = 1
        numSamples = 5
        lastTemp = 0
        TEMP_CHAR = 't'
        USAGE_CHAR = 'u'
        SENSOR_NAME = 'TC0D'

        #gauges = serial.Serial()
        #gauges.port = '/dev/rfcomm0'
        #gauges.baudrate = 9600
        #gauges.parity = 'N'
        #gauges.writeTimeout = 0
        #gauges.open()

        filename = '/sys/bus/platform/devices/applesmc.768/temp2_input'

        def parseSensorsOutputLinux(output):
            return int(round(float(output) / 1000))

        def connect():
            while(True):
                try:
                    gaugeSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                    gaugeSocket.connect(('00:06:66:42:22:96', 1))
                    break
                except bluetooth.btcommon.BluetoothError as error:
                    print "Could not connect: ", error, "; Retrying in 5s..."
                    time.sleep(5)
            return gaugeSocket

        gaugeSocket = connect()
        while(1):
            usage = psutil.cpu_percent(interval=sampleTime)
            sensorFile = open(filename)
            temp = parseSensorsOutputLinux(sensorFile.read())
            try:
                #gauges.write(USAGE_CHAR)
                gaugeSocket.send(USAGE_CHAR)
                #gauges.write(chr(int(usage))) # write the first byte
                gaugeSocket.send(chr(int(usage)))
                #print("Wrote usage: " + str(int(usage)))
                #gauges.write(TEMP_CHAR)
                gaugeSocket.send(TEMP_CHAR)
                #gauges.write(chr(temp))
                gaugeSocket.send(chr(temp))
                #print("Wrote temp: " + str(temp))
            except bluetooth.btcommon.BluetoothError as error:
                print "Caught BluetoothError: ", error
                time.sleep(5)
                gaugeSocket = connect()
        gaugeSocket.close()

    EDIT: I should add that this code connects fine after I power-cycle the receiver and start the script. However, it fails after the first exception until I restart the receiver.

    P.S. This is related to my recent question, Why is /dev/rfcomm0 giving PySerial problems?, but that was more about PySerial specifically with rfcomm0. Here I am asking about general rfcomm etiquette.
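    One thing worth checking here (an assumption on my part, not something confirmed in the question): the failed socket is never closed before connect() is retried, so the kernel may still hold the old RFCOMM channel open, which would explain the (16, 'Device or resource busy') error. A minimal sketch of a reconnect helper that releases the dead socket first:

        import time
        import bluetooth

        def reconnect(old_socket=None):
            # Hypothetical helper: close the dead socket so the RFCOMM
            # channel is released before a new connection is attempted.
            if old_socket is not None:
                try:
                    old_socket.close()
                except bluetooth.btcommon.BluetoothError:
                    pass  # already gone; nothing to release
            while True:
                try:
                    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                    sock.connect(('00:06:66:42:22:96', 1))
                    return sock
                except bluetooth.btcommon.BluetoothError as error:
                    print "Could not connect:", error, "; retrying in 5s..."
                    time.sleep(5)

    The except block in the main loop would then call reconnect(gaugeSocket) instead of connect().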

  • XMPP: Openfire, PHP and Python web service

    - by mlakhara
    I am planning to integrate real-time notifications into a web application that I am currently working on. I have decided to go with XMPP for this and selected the Openfire server, which I thought would suit my needs. The front end uses the Strophe library to fetch the notifications over BOSH from my Openfire server. However, the notifications and other messages are to be posted by my application, and hence I think this code needs to reside in the backend.

    Initially I thought of going with PHP XMPP libraries like XMPPHP and JAXL, but then realized this would cause much overhead, as each script would have to repeat the same steps (connection, authentication, etc.), and I think this would make the PHP end a little slow and unresponsive.

    Now I am thinking of creating a middleware application acting as a web service that PHP will call, and this application will handle the XMPP side. The benefit is that this app (a server, if you will) only has to connect once, and then it sits there listening on a port. I am also planning to build it in an asynchronous way, such that it first takes all the requests from my PHP app and then, when there are no more requests, goes about doing the notification publishing. I am planning to create this service in Python using SleekXMPP.

    This is just what I planned. I am new to XMPP and this whole web service stuff, and would like your comments regarding issues like memory and CPU usage, advantages, disadvantages, scalability, security, etc. Thanks in advance.

    P.S. If something like this already exists (although I didn't find anything after a lot of Googling), please direct me there.

    EDIT: The middle-level service should be doing the following (but not limited to):

    1. Publishing notifications for different levels of groups and community pages.
    2. Notifications for a single user on some event.
    3. User registration (though this can be done using the user service plugin).

    EDIT: It should also be able to create pub-sub nodes and subscribe and unsubscribe users from these pub-sub nodes. I also want to store the notifications and messages in a database (Openfire doesn't). Would that be a good choice?
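    For reference, a minimal sketch of the kind of SleekXMPP service described above (the JID, password, and notify() helper are placeholders, and all dispatch logic between PHP and this process is left out; depending on the SleekXMPP version, process() takes block= or threaded=):

        import sleekxmpp

        class NotifierService(sleekxmpp.ClientXMPP):
            """Hypothetical middleware bot: connects to the XMPP server
            once, then relays notifications handed to it by the PHP app."""

            def __init__(self, jid, password):
                sleekxmpp.ClientXMPP.__init__(self, jid, password)
                self.add_event_handler('session_start', self.start)

            def start(self, event):
                self.send_presence()
                self.get_roster()

            def notify(self, user_jid, text):
                # One persistent connection serves every request.
                self.send_message(mto=user_jid, mbody=text)

        if __name__ == '__main__':
            service = NotifierService('notifier@example.com', 'secret')
            if service.connect():
                service.process(block=True)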

  • Project Euler 18: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 18. As always, any feedback is welcome.

        # Euler 18
        # http://projecteuler.net/index.php?section=problems&id=18
        # By starting at the top of the triangle below and moving
        # to adjacent numbers on the row below, the maximum total
        # from top to bottom is 23.
        #
        #    3
        #   7 4
        #  2 4 6
        # 8 5 9 3
        #
        # That is, 3 + 7 + 4 + 9 = 23.
        # Find the maximum total from top to bottom of the triangle below:
        #
        # 75
        # 95 64
        # 17 47 82
        # 18 35 87 10
        # 20 04 82 47 65
        # 19 01 23 75 03 34
        # 88 02 77 73 07 63 67
        # 99 65 04 28 06 16 70 92
        # 41 41 26 56 83 40 80 70 33
        # 41 48 72 33 47 32 37 16 94 29
        # 53 71 44 65 25 43 91 52 97 51 14
        # 70 11 33 28 77 73 17 78 39 68 17 57
        # 91 71 52 38 17 14 91 43 58 50 27 29 48
        # 63 66 04 68 89 53 67 30 73 16 69 87 40 31
        # 04 62 98 27 23 09 70 98 73 93 38 53 60 04 23
        #
        # NOTE: As there are only 16384 routes, it is possible to solve
        # this problem by trying every route. However, Problem 67, is the
        # same challenge with a triangle containing one-hundred rows; it
        # cannot be solved by brute force, and requires a clever method! ;o)

        import time
        start = time.time()

        triangle = [
            [75],
            [95, 64],
            [17, 47, 82],
            [18, 35, 87, 10],
            [20, 04, 82, 47, 65],
            [19, 01, 23, 75, 03, 34],
            [88, 02, 77, 73, 07, 63, 67],
            [99, 65, 04, 28, 06, 16, 70, 92],
            [41, 41, 26, 56, 83, 40, 80, 70, 33],
            [41, 48, 72, 33, 47, 32, 37, 16, 94, 29],
            [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14],
            [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57],
            [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48],
            [63, 66, 04, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31],
            [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 04, 23]]

        # Loop through each row of the triangle starting at the base.
        for a in range(len(triangle) - 1, -1, -1):
            for b in range(0, a):
                # Get the maximum value for adjacent cells in current row.
                # Update the cell which would be one step prior in the path
                # with the new total. For example, compare the first two
                # elements in row 15. Add the max of 04 and 62 to the first
                # position of row 14. This provides the max total from row 14
                # to 15 starting at the first position. Continue to work up
                # the triangle until the maximum total emerges at the
                # triangle's apex.
                triangle[a-1][b] += max(triangle[a][b], triangle[a][b+1])

        print triangle[0][0]
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

  • How to refactor a Python “god class”?

    - by Zearin
    Problem

    I’m working on a Python project whose main class is a bit of a “God Object”. There are so friggin’ many attributes and methods! I want to refactor the class.

    So Far…

    For the first step, I want to do something relatively simple; but when I tried the most straightforward approach, it broke some tests and existing examples. Basically, the class has a loooong list of attributes—but I can clearly look over them and think, “These 5 attributes are related… These 8 are also related… and then there’s the rest.”

    getattr

    I basically just wanted to group the related attributes into a dict-like helper class. I had a feeling __getattr__ would be ideal for the job. So I moved the attributes to a separate class, and, sure enough, __getattr__ worked its magic perfectly well… at first. But then I tried running one of the examples. The example subclass tries to set one of these attributes directly (at the class level). But since the attribute was no longer “physically located” in the parent class, I got an error saying that the attribute did not exist.

    @property

    I then read up about the @property decorator. But then I also read that it creates problems for subclasses that want to do self.x = blah when x is a property of the parent class.

    Desired

    - Have all client code continue to work using self.whatever, even if the parent’s whatever property is not “physically located” in the class (or instance) itself.
    - Group related attributes into dict-like containers.
    - Reduce the extreme noisiness of the code in the main class.

    For example, I don’t simply want to change this:

        larry = 2
        curly = 'abcd'
        moe = self.doh()

    into this:

        larry = something_else('larry')
        curly = something_else('curly')
        moe = yet_another_thing.moe()

    …because that’s still noisy. Although that successfully turns a simple attribute into something that can manage the data, the original had 3 variables and the tweaked version still has 3 variables. However, I would be fine with something like this:

        stooges = Stooges()

    And if a lookup for self.larry fails, something would check stooges and see if larry is there. (But it must also work if a subclass tries to do larry = 'blah' at the class level.)

    Summary

    - Want to replace related groups of attributes in a parent class with a single attribute that stores all the data elsewhere.
    - Want to work with existing client code that uses (e.g.) larry = 'blah' at the class level.
    - Want to continue to allow subclasses to extend, override, and modify these refactored attributes without knowing anything has changed.

    Is this possible? Or am I barking up the wrong tree?
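    A minimal sketch of the __getattr__ grouping described above (all names are illustrative). Note that it only covers instance-level lookups; the class-level larry = 'blah' case would still need something extra, such as a metaclass that defines __getattr__ for the class itself:

        class Stooges(object):
            """Hypothetical container for one group of related attributes."""
            def __init__(self):
                self.larry = 2
                self.curly = 'abcd'

        class Big(object):
            def __init__(self):
                self.stooges = Stooges()

            def __getattr__(self, name):
                # Only called when normal lookup fails, so real attributes
                # and subclass overrides still take precedence.
                try:
                    return getattr(self.__dict__['stooges'], name)
                except (KeyError, AttributeError):
                    raise AttributeError(name)

        b = Big()
        print b.larry   # 2, resolved through the Stooges container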

  • Project Euler 11: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 11. As always, any feedback is welcome.

        # Euler 11
        # http://projecteuler.net/index.php?section=problems&id=11
        # What is the greatest product of four adjacent numbers
        # in any direction (up, down, left, right, or diagonally)
        # in the 20 x 20 grid?

        import time
        start = time.time()

        grid = [
            [8,02,22,97,38,15,00,40,00,75,04,05,07,78,52,12,50,77,91,8],
            [49,49,99,40,17,81,18,57,60,87,17,40,98,43,69,48,04,56,62,00],
            [81,49,31,73,55,79,14,29,93,71,40,67,53,88,30,03,49,13,36,65],
            [52,70,95,23,04,60,11,42,69,24,68,56,01,32,56,71,37,02,36,91],
            [22,31,16,71,51,67,63,89,41,92,36,54,22,40,40,28,66,33,13,80],
            [24,47,32,60,99,03,45,02,44,75,33,53,78,36,84,20,35,17,12,50],
            [32,98,81,28,64,23,67,10,26,38,40,67,59,54,70,66,18,38,64,70],
            [67,26,20,68,02,62,12,20,95,63,94,39,63,8,40,91,66,49,94,21],
            [24,55,58,05,66,73,99,26,97,17,78,78,96,83,14,88,34,89,63,72],
            [21,36,23,9,75,00,76,44,20,45,35,14,00,61,33,97,34,31,33,95],
            [78,17,53,28,22,75,31,67,15,94,03,80,04,62,16,14,9,53,56,92],
            [16,39,05,42,96,35,31,47,55,58,88,24,00,17,54,24,36,29,85,57],
            [86,56,00,48,35,71,89,07,05,44,44,37,44,60,21,58,51,54,17,58],
            [19,80,81,68,05,94,47,69,28,73,92,13,86,52,17,77,04,89,55,40],
            [04,52,8,83,97,35,99,16,07,97,57,32,16,26,26,79,33,27,98,66],
            [88,36,68,87,57,62,20,72,03,46,33,67,46,55,12,32,63,93,53,69],
            [04,42,16,73,38,25,39,11,24,94,72,18,8,46,29,32,40,62,76,36],
            [20,69,36,41,72,30,23,88,34,62,99,69,82,67,59,85,74,04,36,16],
            [20,73,35,29,78,31,90,01,74,31,49,71,48,86,81,16,23,57,05,54],
            [01,70,54,71,83,51,54,69,16,92,33,48,61,43,52,01,89,19,67,48]]

        # left and right
        max, product = 0, 0
        for x in range(0,17):
            for y in xrange(0,20):
                product = grid[y][x] * grid[y][x+1] * \
                          grid[y][x+2] * grid[y][x+3]
                if product > max:
                    max = product

        # up and down
        for x in range(0,20):
            for y in xrange(0,17):
                product = grid[y][x] * grid[y+1][x] * \
                          grid[y+2][x] * grid[y+3][x]
                if product > max:
                    max = product

        # diagonal right
        for x in range(0,17):
            for y in xrange(0,17):
                product = grid[y][x] * grid[y+1][x+1] * \
                          grid[y+2][x+2] * grid[y+3][x+3]
                if product > max:
                    max = product

        # diagonal left
        for x in range(0,17):
            for y in xrange(0,17):
                product = grid[y][x+3] * grid[y+1][x+2] * \
                          grid[y+2][x+1] * grid[y+3][x]
                if product > max:
                    max = product

        print max
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

  • Python easy_install confused on Mac OS X

    - by slf
    Environment info:

        $ echo $PATH
        /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts

        $ which easy_install
        /usr/bin/easy_install

    Specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point):

        $ sudo easy_install simplejson
        Searching for simplejson
        Reading http://pypi.python.org/simple/simplejson/
        Reading http://undefined.org/python/#simplejson
        Best match: simplejson 2.1.0
        Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365
        Processing simplejson-2.1.0.tar.gz
        Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa
        The required version of setuptools (>=0.6c11) is not available, and
        can't be installed while this script is running. Please install
        a more recent version first, using 'easy_install -U setuptools'.
        (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python))
        error: Setup script exited with 2

    OK, so I'll update setuptools...

        $ sudo easy_install -U setuptools
        Searching for setuptools
        Reading http://pypi.python.org/simple/setuptools/
        Best match: setuptools 0.6c11
        Processing setuptools-0.6c11-py2.6.egg
        setuptools 0.6c11 is already the active version in easy-install.pth
        Installing easy_install script to /usr/local/bin
        Installing easy_install-2.6 script to /usr/local/bin
        Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg
        Processing dependencies for setuptools
        Finished processing dependencies for setuptools

    I'm not going to speculate, but this could have been caused by any number of environment changes: the Leopard to Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
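    A quick way to see which setuptools a given interpreter actually resolves (a diagnostic sketch only; note that /usr/bin/easy_install belongs to the system Python, while the update above installed its scripts under /usr/local/bin):

        import sys
        import pkg_resources

        # Shows which Python is running and which setuptools egg it picks up,
        # e.g. the stale 0.6c9 under /System versus 0.6c11 under /Library.
        print sys.executable
        print pkg_resources.get_distribution('setuptools')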

  • Generic RPM package for Python 2.x

    - by RaphDG
    I have a Python application; it can run on Python >= 2.6 and it's architecture independent. I need the RPM package of this application to be installed on Fedora 14 (Python 2.7) and CentOS 6.2 (Python 2.6). I currently use mock to build one RPM package for each "flavour" and it works well. I apparently can't install the CentOS-compiled RPM on Fedora. It gives me this error message:

        error: Failed dependencies:
            python(abi) = 2.6 is needed by myapp-0.9.el6.noarch

    Here is the relevant part of my .spec file:

        %{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
        %{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}

        Name:           myapp
        Version:        #VERSION#
        Release:        #RELEASE#%{dist}
        Summary:        myapp
        Group:          Development/Languages
        License:        Apache v2
        Source0:        %{name}-%{version}-#RELEASE#.tar.gz
        BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
        BuildArch:      noarch
        BuildRequires:  python-devel
        BuildRequires:  python-setuptools

        %description
        myapp

        %prep
        %setup -c

        %build
        %{__python} setup.py build

        %install
        %{__rm} -rf %{buildroot}
        %{__python} setup.py install -O1 --skip-build --root %{buildroot}

    Do I really have to use mock and build 2 RPMs, or is there another way to create a single generic 2.x RPM package?

  • Mac 10.5: Python libsvm 64-bit vs 32-bit

    - by shadowsoul
    I have a Mac running 10.5. When I type "python" in a terminal, it says:

        Enthought Python Distribution -- www.enthought.com
        Version: 7.3-2 (64-bit)

        Python 2.7.3 |EPD 7.3-2 (64-bit)| (default, Apr 12 2012, 11:14:05)
        [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
        Type "credits", "demo" or "enthought" for more information.

    Then I go to my libsvm/python folder and type "make", which results in:

        make -C .. lib
        if [ "Darwin" = "Darwin" ]; then \
            SHARED_LIB_FLAG="-dynamiclib -W1,-install_name,libsvm.so.2"; \
        else \
            SHARED_LIB_FLAG="-shared -W1,-soname,libsvm.so.2"; \
        fi; \
        g++ ${SHARED_LIB_FLAG} svm.o -o libsvm.so.2

    When I try to do "from svmutil import *" I get the error:

        OSError: dlopen(.../libsvm-3.12/python/../libsvm.so.2, 6): no suitable image found.  Did find:
        .../libsvm-3.12/python/../libsvm.so.2: mach-o, but wrong architecture

    When I do "lipo -info libsvm.so.2", I get:

        Non-fat file: libsvm.so.2 is architecture: i386

    So it looks like I'm running 64-bit Python but libsvm ends up as a 32-bit program. Any way I can get it to compile as a 64-bit program?
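    To confirm the interpreter side before recompiling, a quick check from within Python (a diagnostic sketch only; the usual fix is then adding -arch x86_64 to the compiler flags in libsvm's Makefile, though that depends on the local toolchain):

        import platform
        import struct

        print platform.architecture()   # e.g. ('64bit', '') for the EPD build
        print struct.calcsize('P') * 8  # pointer width in bits: 64 here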

  • Broken Python installation on CentOS 5.8

    - by Beckett
    I already searched for a solution to my problem via Google and Stack Overflow's search facility, but haven't found anything related specifically to it. Here's the problem:

    I needed Python 2.7.3 on a CentOS 5.8 machine which has only Python 2.4.3 preinstalled. Neither is there a suitable version in its repositories, nor can I upgrade the installed version. That's why I decided to build Python from source code. But I made a mistake: instead of make altinstall I did make install, thus changing the default version of the current installation. It was before I found this article: How to install Python 2.7.3 on CentOS 6.2. I guess versions 5.8 and 6.2 aren't different to the extent that the article is inapplicable.

    After installation of the new Python version I installed pip, but once I tried to invoke it, I got a "No module named pkg_resources" error. In order to solve this issue I installed setuptools from the repository. But that only led to another error: "Distribution Not Found". My final step was to follow the guide I posted the link to, but I was unable to perform the last step: the easy_install-2.7 virtualenv command threw a "-bash: /usr/local/bin/easy_install-2.7: .: bad interpreter: Permission denied" error. Now when I try to invoke pip or pip-2.7, both commands raise the same error with different names of binaries after "-bash:".

    Is there any way to fix this problem, so I could install the new Python version (2.7.3) alongside the preinstalled one (2.4.3) according to the guide? Any help will be appreciated.

    P.S.: yum is working fine, although it needs Python to function, so I hope the damage I unknowingly caused isn't very severe. Also I'm not a native English speaker, so I apologize for possible occasional grammatical and/or spelling errors.

  • Python Coding standards vs. productivity

    - by Shroatmeister
    I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule.

    One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in Python/Django and use a version of PEP 8, with various modifications: line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line-wrapping rules that apply only to certain kinds of classes, lots of templates that we must use even if they aren't the best way to solve a problem, etc.

    One core dev spent a week rewriting a major part of the system to meet the then-new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently, as he feels unsure what else to do.

    Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important, but I also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings.

    Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky. Now I am trying to modify various scripts (autopep8, pep8ify and PythonTidy) to try to match the conventions. We also run pep8 against the source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline.

    I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how to deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation, and how did they deal with it successfully? (I'm not looking for a discussion, just actual solutions people have found.)

  • Project Euler 17: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 17. As always, any feedback is welcome.

        # Euler 17
        # http://projecteuler.net/index.php?section=problems&id=17
        # If the numbers 1 to 5 are written out in words:
        # one, two, three, four, five, then there are
        # 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
        # If all the numbers from 1 to 1000 (one thousand)
        # inclusive were written out in words, how many letters
        # would be used?
        #
        # NOTE: Do not count spaces or hyphens. For example, 342
        # (three hundred and forty-two) contains 23 letters and
        # 115 (one hundred and fifteen) contains 20 letters. The
        # use of "and" when writing out numbers is in compliance
        # with British usage.

        import time
        start = time.time()

        def to_word(n):
            h = { 1 : "one", 2 : "two", 3 : "three", 4 : "four",
                  5 : "five", 6 : "six", 7 : "seven", 8 : "eight",
                  9 : "nine", 10 : "ten", 11 : "eleven", 12 : "twelve",
                  13 : "thirteen", 14 : "fourteen", 15 : "fifteen",
                  16 : "sixteen", 17 : "seventeen", 18 : "eighteen",
                  19 : "nineteen", 20 : "twenty", 30 : "thirty",
                  40 : "forty", 50 : "fifty", 60 : "sixty",
                  70 : "seventy", 80 : "eighty", 90 : "ninety",
                  100 : "hundred", 1000 : "thousand" }

            word = ""

            # Reverse the numbers so position (ones, tens,
            # hundreds,...) can be easily determined
            a = [int(x) for x in str(n)[::-1]]

            # Thousands position
            if (len(a) == 4 and a[3] != 0):
                # This can only be one thousand based
                # on the problem/method constraints
                word = h[a[3]] + " thousand "

            # Hundreds position
            if (len(a) >= 3 and a[2] != 0):
                word += h[a[2]] + " hundred"
                # Add "and" string if the tens or ones
                # position is occupied with a non-zero value.
                # Note: routine is broken up this way for [my] clarity.
                if (len(a) >= 2 and a[1] != 0):    # catch 10 - 99
                    word += " and"
                elif len(a) >= 1 and a[0] != 0:    # catch 1 - 9
                    word += " and"

            # Tens and ones position
            tens_position_value = 99
            if (len(a) >= 2 and a[1] != 0):
                # Calculate the tens position value per the
                # first and second element in array
                # e.g. (8 * 10) + 1 = 81
                tens_position_value = int(a[1]) * 10 + a[0]
                if tens_position_value <= 20:
                    # If the tens position value is 20 or less
                    # there's an entry in the hash. Use it and there's
                    # no need to consider the ones position
                    word += " " + h[tens_position_value]
                else:
                    # Determine the tens position word by
                    # dividing by 10 first. E.g. 8 * 10 = h[80]
                    # We will pick up the ones position word later in
                    # the next part of the routine
                    word += " " + h[(a[1] * 10)]
            if (len(a) >= 1 and a[0] != 0 and tens_position_value > 20):
                # Deal with ones position where tens position is
                # greater than 20 or we have a single digit number
                word += " " + h[a[0]]

            # Trim the empty spaces off both ends of the string
            return word.replace(" ", "")

        def to_word_length(n):
            return len(to_word(n))

        print sum([to_word_length(i) for i in xrange(1, 1001)])
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

  • Python: undefined reference to `_imp__Py_InitModule4'

    - by Mark
    I'm trying to do a debug build of the Rabbyt library using MinGW's gcc, to run with my MSVC-built python26_d. I got a lot of undefined references, which caused me to create libpython26_d.a; however, one of the undefined references remains. Googling gives me http://www.techlists.org/archives/programming/pythonlist/2003-03/msg01035.shtml, but -rdynamic doesn't help.

        e:\MinGW/bin\gcc.exe -mno-cygwin -mdll -O -Wall -g -IE:\code\python\python\py26\include -IE:\code\python\python\py26\PC -c rabbyt/rabbyt._rabbyt.c -o build\temp.win32-2.6-pydebug\Debug\rabbyt\rabbyt._rabbyt.o -O3 -fno-strict-aliasing
        rabbyt/rabbyt._rabbyt.c:1351: warning: '__Pyx_SetItemInt' defined but not used
        writing build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.def
        e:\MinGW/bin\gcc.exe -mno-cygwin -shared -g build\temp.win32-2.6-pydebug\Debug\rabbyt\rabbyt._rabbyt.o build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.def -LE:\code\python\python\py26\libs -LE:\code\python\python\py26\PCbuild -lopengl32 -lglu32 -lpython26_d -lmsvcr90 -o build\lib.win32-2.6-pydebug\rabbyt\_rabbyt_d.pyd
        build\temp.win32-2.6-pydebug\Debug\rabbyt\rabbyt._rabbyt.o: In function `init_rabbyt':
        E:/code/python/rabbyt/rabbyt/rabbyt._rabbyt.c:1121: undefined reference to `_imp__Py_InitModule4'

  • Install PyMedia and Python Audio Tools

    - by aaron
    I noticed a pattern of errors while trying to install PyMedia and Python Audio Tools. For both modules I run the following:

        $ python setup.py install

    Then I get a series of compilation errors, and then this:

        lipo: can't figure out the architecture type of: /var/folders/Kx/Kxxj4868HGi6VMhZLPyZN++++TI/-Tmp-//cch1y9AO.out
        error: command '/usr/bin/gcc-4.2' failed with exit status 1

    I'm running Mac OS X 10.5, and this happens whether I'm using gcc-4.0 or gcc-4.2, Mac-Python 2.5 or 2.6, or MacPorts-Python 2.6. What's going on?

  • Unable to build Python modules in Mandriva 2010

    - by SteveJ
    I am trying to build a Python module (pyfits) but I get the following error:

        # python setup.py install
        /home/steve/src/pyfits-2.2.2/stsci_distutils_hack.py:239: DeprecationWarning: os.popen3 is deprecated. Use the subprocess module.
          (sin, sout, serr) = os.popen3(cmd)
        running install
        error: invalid Python installation: unable to open /usr/lib64/python2.6/config/Makefile (No such file or directory)

    I get the same error when I try to build other modules, so my guess is I am missing a Python development library. I am running Mandriva 2010.0; any suggestions?

  • Run several daemons using Python

    - by ylc
    I noticed that several daemons invoke Python separately. For example, I have both the wicd and ibus daemons running on my machine. Instead of launching a single instance of Python, the daemons run as two Python instances at the same time in htop:

        /usr/bin/python2 -O /usr/share/wicd/daemon/monitor.py
        python2 /usr/share/ibus/ui/gtk/main.py

    Is it a waste of resources to do that? If yes, how can I improve this? If no, why avoid running all daemons on a single Python instance?

  • How to create a simple server/client application using boost.asio?

    - by the_drow
    I was going over the examples of Boost.Asio and I am wondering why there isn't an example of a simple server/client pair that prints a string on the server and then returns a response to the client. I tried to modify the echo server, but I can't really figure out what I'm doing at all. Can anyone point me to a template of a client and a template of a server? I would like to eventually create a server/client application that receives binary data and just returns an acknowledgment back to the client that the data was received.

  • Using Boost statechart, how can I transition to a state unconditionally?

    - by nickb
    I have a state A that I would like to transition to its next state B unconditionally, once the constructor of A has completed. Is this possible? I tried posting an event from the constructor, which does not work, even though it compiles. Thanks.

    Edit: Here is what I've tried so far:

        struct A : sc::simple_state< A, Active >
        {
        public:
            typedef sc::custom_reaction< EventDoneA > reactions;

            A()
            {
                std::cout << "Inside of A()" << std::endl;
                post_event( EventDoneA() );
            }

            sc::result react( const EventDoneA & )
            {
                return transit< B >();
            }
        };

    This yields the following runtime assertion failure:

        Assertion failed: get_pointer( pContext_ ) != 0, file /include/boost/statechart/simple_state.hpp, line 459

  • Boost in Visual Studio 2010, IntelliSense error

    - by Peretz
    Hello, I would like to see if you could orient me. It happens that I compiled and referenced the Boost libraries in order to use them with Visual Studio 2010. When building my test project I get these two IntelliSense errors:

        1 IntelliSense: #error directive: "Macro BOOST_LIB_NAME not set (internal error)" c:\boost_1_43_0\boost\config\auto_link.hpp
        2 IntelliSense: #error directive: "some required macros where not defined (internal logic error)." c:\boost_1_43_0\boost\config\auto_link.hpp

    Checking the auto_link.hpp header file, the first error is on this line:

        #ifndef BOOST_LIB_NAME
        #  error "Macro BOOST_LIB_NAME not set (internal error)"
        #endif

    Tracing the definition of BOOST_LIB_NAME, it seems that it is defined in config.hpp by Boost.Regex, whose code I am including below:

        #if !defined(BOOST_REGEX_NO_LIB) && !defined(BOOST_REGEX_SOURCE) && !defined(BOOST_ALL_NO_LIB) && defined(__cplusplus)
        #  define BOOST_LIB_NAME boost_regex
        #  if defined(BOOST_REGEX_DYN_LINK) || defined(BOOST_ALL_DYN_LINK)
        #    define BOOST_DYN_LINK
        ... more code

    Strangely, when I point to BOOST_LIB_NAME it defines BOOST_LIB_NAME and the IntelliSense errors disappear. My program builds and executes fine using the Boost.Regex library, with or without the IntelliSense errors; however, I do not understand first why these IntelliSense errors appear at all, and second why pointing to the macro in config.hpp defines BOOST_LIB_NAME. Any guidance will be greatly appreciated. Thanks, Jaime

  • python manage.py runserver fails

    - by Randy Simon
    I am trying to learn Django by following along with this tutorial. I am using Django version 1.1.1. I run django-admin.py startproject mysite and it creates the files it should. Then I try to start the server by running python manage.py runserver, but here is where I get the following error:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 213, in execute
            translation.activate('en-us')
          File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 73, in activate
            return real_activate(language)
          File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader
            return g['real_%s' % caller](*args, **kwargs)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 205, in activate
            _active[currentThread()] = translation(language)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 172, in _fetch
            for localepath in settings.LOCALE_PATHS:
          File "/Library/Python/2.6/site-packages/django/utils/functional.py", line 273, in __getattr__
            return getattr(self._wrapped, name)
        AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS'

    Now, I can add a LOCALE_PATHS attribute and set it to an empty tuple in my settings.py file, but then it just complains about another setting, and so on. What am I missing here?
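    This particular traceback (management code expecting settings attributes that the active global settings lack) often points at two Django versions mixing on the path; that is an assumption here, not something the question confirms. A quick diagnostic to see which Django manage.py actually imports:

        import django

        # If the version or location differs from the Django you think
        # you installed, a stale copy in site-packages is shadowing it.
        print django.get_version()
        print django.__path__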

  • Cocoa app with Python extension that uses SciPy -> ImportError: No module named scipy

    - by Snej
    Hi: I have installed SciPy (via MacPorts) for Python on my Mac and it runs fine when running Python scripts. But now I'm using SciPy (via PyObjC) for calculations embedded in a Cocoa app frontend. The following error occurs:

        ImportError: No module named scipy

    I am using the "Python.framework" in Xcode. Does anybody know why the scipy module is not found? I even added it manually to the module search path via

        sys.path.append("/opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/")

    EDIT: I found the problem myself. The path should be without "/scipy" at the end. But now I get an architecture problem:

        ImportError: dlopen(/opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found.  Did find:
        /opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: mach-o, but wrong architecture

    EDIT 2: I checked the architectures. Yes, sure, it is an architecture problem. But when I run

        file /opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so

    I get the result "Mach-O 64-bit bundle x86_64". And the Mac OS 10.6 Python is:

        Mach-O universal binary with 3 architectures
        /usr/bin/python (for architecture x86_64):  Mach-O 64-bit executable x86_64
        /usr/bin/python (for architecture i386):    Mach-O executable i386
        /usr/bin/python (for architecture ppc7400): Mach-O executable ppc

    I build the Xcode project as x86_64.

  • boost::lambda bind expressions: can't get bind to string's empty() to work

    - by navigator
    Hi, I am trying to get the below code snippet to compile, but it fails with:

        error C2665: 'boost::lambda::function_adaptor::apply' : none of the 8 overloads could convert all the argument types

    Specifying the return type when calling bind does not help. Any idea what I am doing wrong? Thanks. (The <iostream> and <algorithm> includes were missing from the original snippet and have been added so it is self-contained.)

        #include <boost/lambda/lambda.hpp>
        #include <boost/lambda/bind.hpp>
        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <map>

        int main()
        {
            namespace bl = boost::lambda;
            typedef std::map<int, std::string> types;
            types keys_and_values;
            keys_and_values[ 0 ] = "zero";
            keys_and_values[ 1 ] = "one";
            keys_and_values[ 2 ] = "Two";

            std::for_each( keys_and_values.begin(), keys_and_values.end(),
                std::cout << bl::constant("Value empty?: ")
                          << std::boolalpha
                          << bl::bind(&std::string::empty,
                                      bl::bind(&types::value_type::second, _1))
                          << "\n");
            return 0;
        }

  • getting boost::gregorian dates from a string

    - by Chris H
    I asked a related question yesterday (http://stackoverflow.com/questions/2612343/basic-boost-date-time-input-format-question). It worked great for posix_time ptime objects. I'm having trouble adapting it to get Gregorian date objects.

        try
        {
            stringstream ss;
            ss << dateNode->GetText();
            using boost::local_time::local_time_input_facet;
            //using boost::gregorian;
            ss.imbue(locale(locale::classic(), new local_time_input_facet("%a, %d %b %Y ")));
            ss.exceptions(ios::failbit);
            ss >> dayTime;
        }
        catch (...)
        {
            cout << "Failed to get a date..." << endl;
            //cout << e.what() << endl;
            throw;
        }

    The dateNode->GetText() function returns a pointer to a string of the form "Sat, 10 Apr 2010 19:30:00". The problem is that I keep getting an exception. So, concretely, the question is: how do I go from a const char * of the given format to a boost::gregorian::date object? Thanks again.

  • Mac OS X and static boost libs -> std::string fail

    - by Ionic
    Hi all, I'm experiencing some very weird problems with static Boost libraries under Mac OS X 10.6.6. The error message is:

        main(78485) malloc: *** error for object 0x1000e0b20: pointer being freed was not allocated
        *** set a breakpoint in malloc_error_break to debug
        [1]    78485 abort (core dumped)

    and a tiny bit of example code which will trigger this problem:

        #define BOOST_FILESYSTEM_VERSION 3
        #include <boost/filesystem.hpp>
        #include <iostream>

        int main (int argc, char **argv)
        {
            std::cout << boost::filesystem::current_path ().string () << '\n';
        }

    This problem always occurs when linking the static Boost libraries into the binary. Linking dynamically works fine, though. I've seen various reports of quite a similar OS X bug with GCC 4.2 and the _GLIBCXX_DEBUG macro set, but this one seems even more generic, as I'm neither using Xcode nor setting the macro (even undefining it does not help; I tried it just to make sure it's really not related to this problem). Does anybody have any pointers to why this is happening, or even maybe a solution (rather than using the dynamic library workaround)? Best regards, Mihai

  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
    I have been messing around with Boost.Asio for some days now, but I got stuck with this weird behavior. Please let me explain.

    Computer A is sending continuous UDP packets every 500 ms to computer B. Computer B wants to read A's packets at its own pace, but only wants A's last packet, obviously the most updated one. It has come to my attention that when I do a

        mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);

    I can get OLD packets that were not processed (almost every time). A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read at a lower frequency than the sender, this could happen. So, the first question is: how is it possible to receive the last packet and discard the ones I missed?

    Later I tried using the async example from the Boost documentation, but found it did not do what I wanted: http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html

    From what I could tell, async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service was "run". If I wanted to keep listening on the port, I should call async_receive_from again in the handler, right? BUT what I found is that I start an infinite loop: it doesn't wait for the next packet, it just enters "handle_receive" again and again. I'm not writing a server application, and a lot of other things are going on (it's a game), so my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive?

    Thanks for your attention.

  • Increasing the time resolution of boost::progress_timer

    - by feelfree
    boost::progress_timer is a very useful class for measuring the running time of a function. However, the default implementation of progress_timer is not accurate enough, and a possible way of increasing the time resolution is to reconstruct a new class as the following code shows:

        #include <boost/progress.hpp>
        #include <boost/static_assert.hpp>

        template<int N=2>
        class new_progress_timer : public boost::timer
        {
        public:
            new_progress_timer(std::ostream &os = std::cout)
                : m_os(os)
            {
                BOOST_STATIC_ASSERT(N >= 0 && N <= 10);
            }

            ~new_progress_timer(void)
            {
                try
                {
                    std::istream::fmtflags old_flags =
                        m_os.setf(std::istream::fixed, std::istream::floatfield);
                    std::streamsize old_prec = m_os.precision(N);
                    m_os << elapsed() << "s\n" << std::endl;
                    m_os.flags(old_flags);
                    m_os.precison(old_prec);
                }
                catch (...)
                {
                }
            }

        private:
            std::ostream &m_os;
        };

    However, when I compile the code with VC10, the following error appears:

        'precison' : is not a member of 'std::basic_ostream<_Elem,_Traits>'

    Any ideas? Thanks.
