Search Results

Search found 15187 results on 608 pages for 'boost python'.


  • R vs Python for data analysis

    - by The_Cthulhu_Kid
    I have been programming for about a year and I am really interested in data analysis and machine learning. I am taking part in a couple of online courses and am reading a couple of books. Everything I am doing uses either R or Python and I am looking for suggestions on whether or not I should concentrate on one language (and if so which) or carry on with both; do they complement each other? -- I should mention that I use C# in school but am familiar with Python through self-study.

  • Learning Python is good?

    - by user15220
    Recently I watched some MIT videos on computer programming topics and found them really worth watching, especially the coverage of algorithms and the fundamentals. The programs were written and explained in Python. I had never looked into this language before, since I learned programming and do my work in C/C++, but the cleanliness and readability of the syntax attracted me. Of course, having been a C++ programmer for a long time, C++ is still the most readable language for me. I have also heard that Python's library contains solid implementations of algorithms and data structures. Can you share your experience with this language?

  • Learning to program in C (coming from Python)

    - by Honza Pokorny
    If this is the wrong place to ask this question, please let me know. I'm a Python programmer by occupation. I would love to learn C. Indeed, I have tried many times, but I always get discouraged. In Python, you write a few lines and the program does wonders. In C, I can't seem to be able to do anything useful. It seems to be very complicated to even connect to the Internet. Do you have any suggestions on what I can do to learn C? Are there any good websites? Any cool projects? Thanks

  • Problem using pydoc in Python

    - by rohanag
    I'm using pydoc in Python 2.7.3 to generate documentation for a file called PreProcessingAPI.py, which contains a class called PreProcessingAPI. At the beginning of PreProcessingAPI.py I have the following imports:

        from __future__ import division
        from re import *
        from nltk.stem import porter

    The problem is that in the documentation generated by pydoc, nltk.stem.porter is shown as a Module. There is also a DATA heading with all sorts of variables I do not know about. Is there a way to avoid these variables and avoid showing nltk.stem.porter under the modules? I'm running the following command to generate the documentation:

        python pydoc.py -w PreProcessingAPI.py

    I've put the file pydoc.py in the directory containing my file. Here is the file generated: https://www.dropbox.com/s/4rb6ut99o25mwly/PreProcessingAPI.html
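
    As far as I can tell, pydoc hides module-level names that are not listed in __all__, so declaring __all__ (and dropping the wildcard import, which is what drags the re names into DATA) should clean up most of this; the Modules section, however, lists whatever is imported at module level, so the nltk import may need to move inside the method that uses it. A minimal sketch, with the stem method invented purely to make it self-contained:

        # PreProcessingAPI.py -- sketch, assuming pydoc honours __all__
        from __future__ import division
        import re  # explicit import instead of "from re import *" keeps DATA clean

        __all__ = ['PreProcessingAPI']  # only this name should be documented

        class PreProcessingAPI(object):
            """Pre-processing helpers."""

            def stem(self, word):
                # importing here keeps nltk.stem.porter out of the Modules section
                from nltk.stem import porter
                return porter.PorterStemmer().stem(word)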

  • Learning Python from Beginner to Advanced level

    - by Christofer Bogaso
    I have some problems at hand that I would like to solve myself (rather than hiring a professional, mainly for cost reasons!): build a really good website (I'm planning to set up my own start-up), and build some good software (preferably with .exe installers) implementing various mathematical and statistical techniques. To accomplish those tasks, is it worth learning Python to an advanced level? I have advanced programming experience with R, Matlab and VBA (and some C), but nothing in Python. I would be very grateful if the experts here could offer some guidance. Thanks for your time.

  • Python or HTML5/JS for game development in 2014 [on hold]

    - by AlexKvazos
    So I've decided to give game development a go. I have experience with PHP/HTML/CSS/SQL/JS (jQuery), so learning a new language shouldn't be too hard. I was reading that Python and JavaScript are both nice for simple, non-intensive 2D games. I found that Python has a library/engine called PyGame, but I realized it was last updated four years ago. Do people still use it? For JavaScript, I found libraries like 'pixi.js', 'melon.js' and 'cocos2d'. My goal is to make 2D games that need roughly the same performance as Terraria, Realm of the Mad God, Castle Crashers and games of that type. Taking into consideration that I want an actively maintained library, which of these two languages would be best to choose, and which library should I grab for it? Thanks in advance, and sorry if the question is broad. Let me know and I can edit to add more.

  • [python] Voice communication help!

    - by Eric
    Hello! I'm currently trying to write a voice-chat program in Python, and all tips/tricks for doing this are welcome. So far I found pyAudio, a wrapper around PortAudio. I played around with that and got an input stream from my microphone played back to my speakers -- raw audio only, of course. But I can't send raw data over the network (due to its size), so I'm looking for a way to encode it. I searched around the net and stumbled over a Speex wrapper for Python. It seemed too good to be true and, believe me, it was. You see, in pyAudio you can set the size of the chunks you want to take from your input audio buffer, and in the sample code on that link it's set to 320. When a chunk is encoded it comes out at roughly 40 bytes of data, which is fairly acceptable I guess. And now for the problem. I start a sample program which just takes the input stream, encodes the chunks, decodes them and plays them back (not sending anything over the network while testing). If I just let my computer idle and run this program it works great, but as soon as I do something, e.g. start Firefox, the audio input buffer gets all clogged up! It just grows, and then it all crashes and gives me an overflow error on the buffer. OK, so why am I only taking 320 bytes from the stream? I could take 1024 bytes or so instead, which would ease the pressure on the buffer. BUT: if I give Speex 1024 bytes of data to encode/decode, it either crashes and says that's too big for its buffer, or it encodes/decodes it but the sound is very noisy and "choppy", as if it only encoded a tiny bit of that 1024-byte chunk and the rest is static noise -- the result sounds like a helicopter, lol. I did some research, and it seems that Speex can only convert 320 bytes of data at a time (640 for wide-band); that appears to be the standard frame size. How can I fix this problem? How should I structure my program to work with Speex? I could use a middle buffer that takes all available data read from the input buffer and then chops it up into 320-byte chunks to encode/decode, but that adds a bit of latency and seems like a poor solution to the problem. Because, as far as I know, there is no other encoder for Python that compresses audio into packets small enough to send over the network, or is there? I've been googling for three days now. There is also the pyMedia library; I don't know whether converting to mp3/ogg is a good fit for this kind of software. Thanks in advance for reading this -- I hope someone can help me! (:
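
    A small sketch of the middle-buffer idea mentioned above, since accumulating input and slicing off exactly one codec frame at a time is the usual way to feed a fixed-frame codec. The encode_frame function is a placeholder, and FRAME_BYTES/CHUNK are assumptions taken from the post -- substitute whatever your Speex wrapper actually exposes:

        import pyaudio

        FRAME_BYTES = 320          # frame size the codec expects (assumption from the post)
        CHUNK = 1024               # read bigger chunks to relieve the input buffer

        pa = pyaudio.PyAudio()
        stream = pa.open(format=pyaudio.paInt16, channels=1, rate=8000,
                         input=True, frames_per_buffer=CHUNK)

        pending = bytearray()      # middle buffer between pyAudio and the codec

        def encode_frame(frame):
            # placeholder for the real codec call, e.g. your_speex_encoder.encode(frame)
            return frame

        while True:
            pending.extend(stream.read(CHUNK))
            # hand the codec exactly FRAME_BYTES at a time
            while len(pending) >= FRAME_BYTES:
                frame = bytes(pending[:FRAME_BYTES])
                del pending[:FRAME_BYTES]
                packet = encode_frame(frame)
                # send packet over the network here

    The point of the inner while loop is that however much audio piles up while the machine is busy, it is drained in codec-sized frames instead of being pushed at the encoder in one oversized chunk.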

  • Get signal names from numbers in Python

    - by Brian M. Hunt
    Is there a way to map a signal number (e.g. signal.SIGINT) to its respective name (i.e. "SIGINT")? I'd like to be able to print the name of a signal in the log when I receive it, however I cannot find a map from signal numbers to names in Python, i.e.:

        import signal

        def signal_handler(signum, frame):
            logging.debug("Received signal (%s)" % sig_names[signum])

        signal.signal(signal.SIGINT, signal_handler)

    for some dictionary sig_names, so that when the process receives SIGINT it prints: Received signal (SIGINT). Thank you.
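
    A sketch of one way to build that sig_names dictionary from the signal module itself (names starting with SIG_, such as SIG_IGN and SIG_DFL, are handler constants rather than signals, so they are skipped):

        import signal

        # map signal numbers to names, e.g. {2: 'SIGINT', 15: 'SIGTERM', ...}
        sig_names = dict((getattr(signal, name), name)
                         for name in dir(signal)
                         if name.startswith('SIG') and not name.startswith('SIG_'))

        print(sig_names[signal.SIGINT])   # -> 'SIGINT'

    Note that a few numbers have more than one name on some platforms (e.g. SIGCHLD/SIGCLD), in which case the dict keeps whichever one it sees last.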

  • Python: Implementing slicing in __getitem__

    - by nicotine
    I am trying to implement slice functionality for a class I am making that creates a vector representation. I have this code so far, which I believe will properly implement the slice, but whenever I make a call like v[4], where v is a vector, Python returns an error about not having enough parameters. So I am trying to figure out how to define __getitem__ so that it handles both plain indexes and slicing.

        def __getitem__(self, start, stop, step):
            indx = start
            if stop == None:
                end = start + 1
            else:
                end = stop
            if step == None:
                stride = 1
            else:
                stride = step
            return self.__data[indx:end:stride]
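
    A minimal sketch of the usual pattern: __getitem__ receives a single argument, which is either an integer or a slice object, so the method branches on the type rather than taking start/stop/step parameters (the Vector class here is just a stand-in for the poster's class):

        class Vector(object):
            def __init__(self, data):
                self._data = list(data)

            def __getitem__(self, key):
                # __getitem__ gets ONE argument: an int for v[4], a slice object for v[1:5:2]
                if isinstance(key, slice):
                    return Vector(self._data[key])   # slice -> new Vector
                return self._data[key]               # int   -> single element

            def __repr__(self):
                return 'Vector(%r)' % self._data

        v = Vector([0, 1, 2, 3, 4, 5])
        print(v[4])        # 4
        print(v[1:5:2])    # Vector([1, 3])

    Delegating to the underlying list means the None defaults of start/stop/step are handled automatically; key.indices(len(self._data)) is also available if the slice needs to be normalised by hand.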

  • SyntaxError using gdata-python-client to access Google Book Search Data API

    - by isbadawi
        >>> import gdata.books.service
        >>> service = gdata.books.service.BookService()
        >>> results = service.search_by_keyword(isbn='0434003484')
        Traceback (most recent call last):
          File "<pyshell#4>", line 1, in <module>
            results = service.search_by_keyword(isbn='0434003484')
          ... snip ...
          File "C:\Python26\lib\site-packages\atom\__init__.py", line 127, in CreateClassFromXMLString
            tree = ElementTree.fromstring(xml_string)
          File "<string>", line 85, in XML
        SyntaxError: syntax error: line 1, column 0

    This is a minimal example -- in particular, the book service unit tests included in the package also fail with the exact same error. I've looked at the wiki and open issue tickets on Google Code to no avail (and this seems to me more apt to be a silly error on my end rather than a problem with the library). I'm not sure how to interpret the error message. If it matters, I'm using Python 2.6.5.

  • Python: Slicing a list into n nearly-equal-length partitions

    - by Drew
    I'm looking for a fast, clean, pythonic way to divide a list into exactly n nearly-equal partitions.

        partition([1,2,3,4,5], 5) -> [[1], [2], [3], [4], [5]]
        partition([1,2,3,4,5], 2) -> [[1, 2], [3, 4, 5]]   (or [[1, 2, 3], [4, 5]])
        partition([1,2,3,4,5], 3) -> [[1, 2], [3, 4], [5]] (there are other ways to slice this one too)

    There are several answers in here http://stackoverflow.com/questions/1335392/iteration-over-list-slices that run very close to what I want, except they are focused on the size of the list, and I care about the number of the lists (some of them also pad with None). These are trivially converted, obviously, but I'm looking for a best practice. Similarly, people have pointed out great solutions here http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python for a very similar problem, but I'm more interested in the number of partitions than the specific size, as long as it's within 1. Again, this is trivially convertible, but I'm looking for a best practice.
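
    A sketch of one straightforward way to get exactly n contiguous partitions whose lengths differ by at most 1, distributing the remainder over the leading partitions (the name partition is the poster's; whether this counts as "best practice" is left open):

        def partition(lst, n):
            """Split lst into exactly n contiguous pieces whose sizes differ by at most 1."""
            q, r = divmod(len(lst), n)
            out, start = [], 0
            for i in range(n):
                end = start + q + (1 if i < r else 0)   # the first r pieces get one extra item
                out.append(lst[start:end])
                start = end
            return out

        print(partition([1, 2, 3, 4, 5], 3))   # [[1, 2], [3, 4], [5]]
        print(partition([1, 2, 3, 4, 5], 2))   # [[1, 2, 3], [4, 5]]
        print(partition([1, 2, 3, 4, 5], 5))   # [[1], [2], [3], [4], [5]]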

  • python: variable not getting defined after several conditionals

    - by Protean
    For some reason this program is saying that 'switch' is not defined. What is going on? (Indentation reconstructed as best as possible from the original post.)

        #PYTHON 3.1.1
        class mysrt:
            def __init__(self):
                self.DATA = open('ORDER.txt', 'r')
                self.collect = 0
                cache1 = str(self.DATA.readlines())
                cache2 = []
                for i in range(len(cache1)):
                    if cache1[i] == '*':
                        if self.collect == 0:
                            self.collect = 1
                        elif self.collect == 1:
                            self.collect = 0
                    elif self.collect == 1:
                        cache2.append(cache1[i])
                self.ORDER = cache2
                self.ARRAY = []
                self.GLOBALi = 0
                self.GLOBALmax = range(len(self.ORDER))
                self.GLOBALc = []
                self.GLOBALl = []

            def sorter(self, array):
                CACHE_LIST_1 = []
                CACHE_LIST_2 = []
                i = 0
                for ORDERi in range(len(self.ORDER)):
                    for ARRAYi in range(len(array)):
                        CACHE = array[ARRAYi]
                        if CACHE[self.GLOBALi] == self.ORDER[ORDERi]:
                            CACHE_LIST_1.append(CACHE)
                        else:
                            CACHE_LIST_2.append(CACHE)
                for i in range(len(CACHE_LIST_1)):
                    if CACHE_LIST_1[0] == CACHE_LIST_1[i] or range(len(CACHE_LIST_1)) == 1:
                        switch = 1
                        print ('1')
                    else:
                        switch = 0
                        print ('0')
                    break
                if switch == 1:
                    self.GLOBALl += CACHE_LIST_1 + self.GLOBALc
                    self.GLOBALi = 0
                    self.GLOBALc = []
                else:
                    self.GLOBALi += 1
                    self.GLOBALc += CACHE_LIST_2
                    mysrt.sorter(CACHE)
                return (self.GLOBALl)
                #GLOBALi =0
                # if range(len(self.GLOBALc)) =! range(len(self.ARRAY))

        array = ['ape', 'cow','dog','bat']
        ORDER_FILE = []
        mysort = mysrt()
        print (mysort.sorter(array))

  • Visual Python - Visualize graphs relating to a movement

    - by Francisco P.
    Hello, everyone! I'm working with Visual Python on a project where I need to simulate a physical movement. In a window separate from the one the actual 3D sim is running in, I'd like to present two graphs, both related to the movement: how the velocity and angular velocity progress over time, and how the movement and rotation progress over time. All these variables are refreshed once per cycle (inside a while True loop). How can I accomplish this? Thank you for your time!
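
    A sketch of how this is commonly done with classic VPython's graphing module, as I recall its API: gdisplay opens its own graph window, and gcurve.plot appends a point each cycle. Treat the exact names and the placeholder sin/cos values as assumptions to check against your VPython version:

        from visual import *          # classic VPython
        from visual.graph import *    # graphing objects live in their own window
        from math import sin, cos

        # a separate window with two curves: velocity and angular velocity vs. time
        graph1 = gdisplay(title='velocity / angular velocity', xtitle='t', ytitle='value')
        vel_curve = gcurve(color=color.cyan)
        angvel_curve = gcurve(color=color.red)

        t, dt = 0.0, 0.01
        while True:
            rate(100)                     # limit the loop to ~100 iterations per second
            # ... advance the simulation here; these values are placeholders ...
            velocity = sin(t)
            angular_velocity = cos(t)
            vel_curve.plot(pos=(t, velocity))
            angvel_curve.plot(pos=(t, angular_velocity))
            t += dt

    A second gdisplay with its own gcurve objects would give the second window for position and rotation.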

  • Able to ping but cannot browse after several hours of running my Python program

    - by Shane
    It's a GUI program I wrote in Python that checks website/server status, running on my XP SP3 machine; multiple threads are used to check different sites/servers. After several hours of running, the program starts to get "urlopen error timed out" all the time. This always happens right after a POST request to a server (not a specific one -- it might be A or B or C), and it's not the first POST request that causes the problem: normally, after several hours of running, some POST request at an unpredictable moment fails, and all you get from then on is "urlopen error timed out". At that point I'm still able to ping, but cannot browse any site; once the program is closed, everything is fine again. It's definitely the program causing this problem, but I don't know how to debug or check what the problem is, and I don't know whether it's on the OS side or my program wasting too many resources/connections (are you still able to ping when too many connections are in use?). Would anybody please help me out?

  • Cython Speed Boost vs. Usability

    - by zubin71
    I just came across Cython while I was looking for ways to optimize Python code. I read various posts on Stack Overflow and the Python wiki, and read the article "General Rules for Optimization". Cython is what grabs my interest the most; instead of writing C code yourself, you can choose to use other data types in your Python code itself. Here is a silly test I tried:

        #!/usr/bin/python
        # test.pyx
        def test(value):
            for i in xrange(value):
                i**2
                if(i==1000000):
                    print i

        test(10000001)

        $ time python test.pyx

        real    0m16.774s
        user    0m16.745s
        sys     0m0.024s

        $ time cython test.pyx

        real    0m0.513s
        user    0m0.196s
        sys     0m0.052s

    Now, honestly, I'm dumbfounded. The code I have used here is pure Python code, and all I have changed is the interpreter. In this case, if Cython is this good, why do people still use the traditional Python interpreter? Are there any reliability issues with Cython?
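
    For context on the timing above: the cython command only translates the .pyx file into C, it does not compile or run the program, which is why the second measurement is so short. A minimal sketch of actually compiling and running Cython code via pyximport, which ships with Cython as far as I know (treat the exact invocation as an assumption for your setup):

        # run_test.py -- compiles test.pyx on the fly, then runs it
        import pyximport
        pyximport.install()     # hooks the import machinery so .pyx files get built

        import test             # this import triggers the Cython + C compile step
                                # and then executes the module, including test(10000001)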

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox','-t','mp3','-', 'test.mp3','trim','0','15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3','rb').read())
        if errors:
            raise RuntimeError(errors)

    This causes problems on large files however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox','test.mp3', tmp,'trim','0','15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension for the SoX C API?
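
    One alternative sketch, assuming the goal is simply to avoid reading the whole file into memory: keep the pipe approach, but feed sox's stdin in chunks from a background thread while reading its stdout incrementally (CHUNK_SIZE and out.mp3 are just illustrative):

        from subprocess import Popen, PIPE
        from threading import Thread

        CHUNK_SIZE = 64 * 1024

        def feed(proc, path):
            # stream the input file into sox's stdin without loading it all at once
            with open(path, 'rb') as src:
                while True:
                    chunk = src.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    proc.stdin.write(chunk)
            proc.stdin.close()

        # read mp3 from stdin, write trimmed mp3 to stdout
        pipe = Popen(['sox', '-t', 'mp3', '-', '-t', 'mp3', '-', 'trim', '0', '15'],
                     stdin=PIPE, stdout=PIPE)
        Thread(target=feed, args=(pipe, 'test.mp3')).start()

        with open('out.mp3', 'wb') as dst:
            while True:
                data = pipe.stdout.read(CHUNK_SIZE)
                if not data:
                    break
                dst.write(data)
        pipe.wait()

    Writing stdin and reading stdout from separate threads is what keeps the pipe from filling up and deadlocking, which is the risk communicate() normally protects against.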

  • How to set timeout with python-mechanize?

    - by Michal Cihar
    I'm using python-mechanize to scrape some web sites which sometimes simply don't respond to requests, and these requests stay open too long, so I need to limit the timeout for them. When using the urlopen method, the timeout can be set with the timeout parameter, but I have not found an easy way to do it with the high-level API, such as the submit or click methods. Ideally the timeout would be set just once for the whole browser class and all calls would honor it. It would probably be possible to customize this by passing a custom request_class to every click and submit call, but that would just pollute the code, so I'm looking for a nicer way to set a timeout on mechanize's browser class (and no, I don't want to change the default socket timeout using socket.setdefaulttimeout).

  • Boost thread synchronization in release build

    - by Joseph16
    Hi, when I run the following code in debug and release mode in VS2005, I see different output in the console each time, and it doesn't look like multithreading is achieved in release mode.

        #include <boost/thread.hpp>
        #include <iostream>

        void wait(int seconds)
        {
            boost::this_thread::sleep(boost::posix_time::seconds(seconds));
        }

        boost::mutex mutex;

        void thread()
        {
            for (int i = 0; i < 5; ++i)
            {
                //wait(1);
                mutex.lock();
                std::cout << "Thread " << boost::this_thread::get_id() << ": " << i << std::endl;
                mutex.unlock();
            }
        }

        int main()
        {
            boost::thread t1(thread);
            boost::thread t2(thread);
            t1.join();
            t2.join();
        }

    Debug mode:

        Thread 00153E60: 0
        Thread 00153E90: 0
        Thread 00153E60: 1
        Thread 00153E90: 1
        Thread 00153E90: 2
        Thread 00153E60: 2
        Thread 00153E90: 3
        Thread 00153E60: 3
        Thread 00153E60: 4
        Thread 00153E90: 4
        Press any key to continue . . .

    Release mode:

        Thread 00153D28: 0
        Thread 00153D28: 1
        Thread 00153D28: 2
        Thread 00153D28: 3
        Thread 00153D28: 4
        Thread 00153D58: 0
        Thread 00153D58: 1
        Thread 00153D58: 2
        Thread 00153D58: 3
        Thread 00153D58: 4
        Press any key to continue . . .

  • Removing items from a nested list in Python

    - by johntfoster
    I'm trying to remove items from a nested list in Python. I have a nested list as follows:

        families = [[0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6]]

    I want to remove the entries in each sublist that correspond to the index position of the sublist in the master list. So, for example, I need to remove 0 from the first sublist, 1 from the second sublist, etc. I am trying to use a list comprehension to do this. This is what I have tried:

        familiesNew = [[families[i][j] for j in families[i] if i != j] for i in range(len(families))]

    This works for range(len(families)) up to 3, but beyond that I get IndexError: list index out of range, and I'm not sure why. Can somebody give me an idea of how to do this? Preferably a one-liner (list comprehension). Thanks.
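
    A sketch of the usual fix, on the reading that the value equal to the sublist's index should be dropped (which is what the attempted comprehension compares against): the attempt above uses the sublist's values as indices back into the sublist, which is what triggers the IndexError once a value exceeds the sublist's length. Comparing values directly avoids indexing altogether:

        families = [[0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6]]

        # drop the value equal to each sublist's position in the outer list
        familiesNew = [[x for x in fam if x != i] for i, fam in enumerate(families)]

        print(familiesNew)
        # [[1, 2], [0, 2, 3], [0, 1, 3, 4], [1, 2, 4, 5], [2, 3, 5, 6]]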

  • Python: Access members of a set

    - by emu
    Say I have a set myset of custom objects that may be equal although their references are different (a == b and a is not b). Now if I add(a) to the set, Python correctly assumes that a in myset and b in myset even though there is only len(myset) == 1 object in the set. That is clear. But is it now possible to extract the value of a somehow out from the set, using b only? Suppose that the objects are mutable and I want to change them both, having forgotten the direct reference to a. Put differently, I am looking for the myset[b] operation, which would return exactly the member a of the set. It seems to me that the type set cannot do this (faster than iterating through all its members). If so, is there at least an effective work-around?
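
    A sketch of a common work-around: store the canonical objects in a dict that maps each object to itself. Dict lookup is also by equality/hash but, unlike a set, it hands back the stored member. The Item class here is just a toy stand-in for the custom objects described above:

        class Item(object):
            """Toy stand-in for the mutable custom objects."""
            def __init__(self, key, payload):
                self.key = key
                self.payload = payload
            def __eq__(self, other):
                return self.key == other.key
            def __hash__(self):
                return hash(self.key)

        a = Item('k', 'original')
        b = Item('k', 'other')          # equal to a, but a different object

        registry = {a: a}               # a dict used as "a set you can index into"

        stored = registry[b]            # equality-based lookup returns the stored a
        print(stored is a)              # True
        stored.payload = 'changed'      # mutating via b's lookup changes a
        print(a.payload)                # 'changed'

    This costs one extra reference per member but keeps the lookup O(1), unlike iterating over the set.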

  • Constructing a tree using Python

    - by stealthspy
    I am trying to implement unranked boolean retrieval. For this, I need to construct a tree and perform a DFS to retrieve documents. I have the leaf nodes, but I am having difficulty constructing the tree. E.g. for the query:

        query = OR ( AND (maria sharapova) tennis )

    the result should be:

        OR
        |-- AND
        |   |-- maria
        |   `-- sharapova
        `-- tennis

    I traverse the tree using DFS and calculate the boolean combination of document ids to identify the required documents in the corpus. Can someone help me with the design of this using Python? I have parsed the query and retrieved the leaf nodes for now.
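
    A minimal sketch of one way to represent and evaluate such a tree, assuming each term's postings are available as a set of document ids (the Node class and the postings dict are illustrative, not part of the original post):

        class Node(object):
            def __init__(self, value, children=None):
                self.value = value              # 'AND', 'OR', or a query term
                self.children = children or []  # empty for leaf (term) nodes

        def evaluate(node, postings):
            """DFS: leaves return their posting sets, internal nodes combine them."""
            if not node.children:
                return postings.get(node.value, set())
            child_sets = [evaluate(child, postings) for child in node.children]
            if node.value == 'AND':
                return set.intersection(*child_sets)
            if node.value == 'OR':
                return set.union(*child_sets)
            raise ValueError('unknown operator: %r' % node.value)

        # OR( AND(maria, sharapova), tennis )
        tree = Node('OR', [Node('AND', [Node('maria'), Node('sharapova')]),
                           Node('tennis')])

        postings = {'maria': {1, 2}, 'sharapova': {2, 3}, 'tennis': {4}}
        print(evaluate(tree, postings))   # {2, 4}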

  • How to set up Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running Lighttpd on Linux, I would like to be able to execute Python scripts just the way I execute PHP scripts. The goal is to be able to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as is done in regular CGI, if I'm not mistaken), which is why I'm using FastCGI. Following Lighttpd's documentation, the following is the FastCGI part of my config file. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file:

        http://www.example.com/script.py
        [output=>] "python-fcgi: test"   (regardless of the content of script.py)

    I'm not interested in using any framework, but simply in executing individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path?

    /etc/lighttpd/conf.d/fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        index-file.names += ( "index.php" )

        fastcgi.server = (
            ".php" => (
                "localhost" => (
                    "bin-path" => "/usr/bin/php-cgi",
                    "socket" => "/var/run/lighttpd/php-fastcgi.sock",
                    "max-procs" => 4, # default value
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "1", # default value
                    ),
                    "broken-scriptfilename" => "enable"
                )
            ),
            ".py" => (
                "python-fcgi" => (
                    "socket" => "/var/run/lighttpd/fastcgi.python.socket",
                    "bin-path" => "/usr/local/bin/python-fcgi",
                    "check-local" => "disable",
                    "max-procs" => 1,
                )
            )
        )

    /usr/local/bin/python-fcgi:

        #!/usr/bin/python2
        def myapp(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['python-fcgi: test\n']

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()
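
    A hedged sketch of the missing piece: since every .py request is routed to the same FastCGI process, that process has to dispatch on the requested file itself. The requested file's on-disk path is normally available in the request environ (SCRIPT_FILENAME is an assumption to verify against what lighttpd actually passes), so the WSGI app can execute that script per request; the "script sets a variable named output" convention below is purely hypothetical:

        #!/usr/bin/python2
        # /usr/local/bin/python-fcgi -- sketch of a per-request dispatcher
        import os

        def myapp(environ, start_response):
            # lighttpd should pass the path of the requested .py file;
            # SCRIPT_FILENAME is the usual variable, but check your environ
            script = environ.get('SCRIPT_FILENAME', '')
            if not os.path.isfile(script):
                start_response('404 Not Found', [('Content-Type', 'text/plain')])
                return ['not found\n']
            start_response('200 OK', [('Content-Type', 'text/plain')])
            namespace = {'environ': environ}
            execfile(script, namespace)           # run the requested script
            return [namespace.get('output', '')]  # hypothetical convention: script sets `output`

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()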

  • Why does boost property tree's write_json save everything as strings? Is it possible to change that?

    - by pprzemek
    I'm trying to serialize using boost property tree's write_json, and it saves everything as strings. It's not that the data are wrong, but I need to cast them explicitly every time, and I want to use them somewhere else (like in Python, or in another, non-Boost C++ JSON library). Here is some sample code and what I get, depending on locale:

        boost::property_tree::ptree root, arr, elem1, elem2;

        elem1.put<int>("key0", 0);
        elem1.put<bool>("key1", true);
        elem2.put<float>("key2", 2.2f);
        elem2.put<double>("key3", 3.3);

        arr.push_back( std::make_pair("", elem1) );
        arr.push_back( std::make_pair("", elem2) );
        root.put_child("path1.path2", arr);

        std::stringstream ss;
        write_json(ss, root);
        std::string my_string_to_send_somewhere_else = ss.str();

    and my_string_to_send_somewhere_else is something like this:

        {
            "path1" : {
                "path2" : [
                    { "key0" : "0", "key1" : "true" },
                    { "key2" : "2.2", "key3" : "3.3" }
                ]
            }
        }

    Is there any way to save them as values, like "key1" : true or "key2" : 2.2?

  • Python - multithreading / multiprocessing, very strange problem.

    - by orokusaki
        import uuid
        import time
        import multiprocessing

        def sleep_then_write(content):
            time.sleep(5)
            print(content)

        if __name__ == '__main__':
            for i in range(15):
                p = multiprocessing.Process(target=sleep_then_write, args=('Hello World',))
                p.start()

        print('Ah, what a hard day of threading...')

    This script outputs the following:

        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        AAh, what a hard day of threading..
        h, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Ah, what a hard day of threading...
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World
        Hello World

    Firstly, why the heck did it print the bottom statement sixteen times (one for each process) instead of just the one time? Second, notice the AAh, and h, about half way down; that was the real output. This makes me wary of using threads ever, now. (Windows XP, Python 2.6.4, Core 2 Duo)

  • python-social-auth AuthCanceled exception

    - by vero4ka
    I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to log in, is redirected to the Facebook app page, and clicks the "Cancel" button there, the following exception appears:

        ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/
        Traceback (most recent call last):
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response
            response = wrapped_callback(request, *callback_args, **callback_kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
            return view_func(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper
            return func(request, backend, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete
            redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete
            *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete
            return self.backend.auth_complete(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete
            self.process_error(self.data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error
            super(FacebookOAuth2, self).process_error(data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error
            raise AuthCanceled(self, data.get('error_description', ''))
        AuthCanceled: Authentication process canceled

    Is there any way to catch it in Django?
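
    A sketch of one common way to handle this: a small Django middleware whose process_exception catches the cancelled login and redirects instead of letting the 500 propagate. The '/login/' target and the module path of the middleware are placeholders; python-social-auth also ships a SocialAuthExceptionMiddleware that can be subclassed for the same purpose:

        # middleware.py -- sketch, assuming python-social-auth's exception module path
        from django.shortcuts import redirect
        from social.exceptions import AuthCanceled

        class AuthCanceledMiddleware(object):
            def process_exception(self, request, exception):
                if isinstance(exception, AuthCanceled):
                    # the user hit "Cancel" on the provider's page; send them back to login
                    return redirect('/login/')
                return None   # let other exceptions propagate normally

    Then add the middleware's dotted path (as appropriate for your project layout) to MIDDLEWARE_CLASSES in settings.py.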
