Search Results

Search found 338 results on 14 pages for 'numpy'.

Page 12/14 | < Previous Page | 8 9 10 11 12 13 14  | Next Page >

  • How do you know where macports installs python packages to?

    - by xmaslist
    I am running MacPorts to install scipy and such on OS X Leopard with Python 2.7. The install runs successfully, but when I run python and try to import the packages I've installed, they're not found. What I'm running is:

        sudo python_select python27
        sudo port install py27-wxpython py27-numpy py27-matplotlib
        sudo port install py27-scipy py27-ipython

    Opening up python in interactive mode (it is the correct version of python), I type 'import scipy' and get a "module not found" error. What gives? How can I find out where it is installing the packages instead?
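
    A quick diagnostic sketch (not part of the original question): MacPorts installs into its own prefix (/opt/local by default), so the interactive python has to be the MacPorts one for the py27-* ports to be importable. Checking from inside the interpreter shows both which binary is running and where it searches:

        # Sketch: run in the same interactive session that fails to import scipy.
        import sys
        print sys.executable   # e.g. /opt/local/bin/python2.7 for a MacPorts python
        print sys.path         # MacPorts packages live under /opt/local/... site-packages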

    Read the article

  • Mac 10.6 Universal Binary scipy: cephes/specfun "_aswfa_" symbol not found

    - by Markus
    Hi folks, I can't get scipy to function in 32 bit mode when compiled as an i386/x86_64 universal binary, and executed on my 64 bit 10.6.2 MacPro1,1.

    My python setup: with the help of this answer, I built a 32/64 bit intel universal binary of python 2.6.4 with the intention of using the arch command to select between the architectures. (I managed to make universal binaries of a few libraries I wanted using lipo.) That all works. I then installed scipy according to the instructions in hyperjeff's article, only with a more up-to-date numpy (1.4.0) and skipping the bit about moving numpy aside briefly during the installation of scipy. Now, everything except scipy seems to be working as far as I can tell, and I can indeed select between 32 and 64 bit mode using arch -i386 python and arch -x86_64 python.

    The error: scipy complains in 32 bit mode:

        $ arch -x86_64 python -c "import scipy.interpolate; print 'success'"
        success
        $ arch -i386 python -c "import scipy.interpolate; print 'success'"
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/__init__.py", line 7, in <module>
            from interpolate import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/interpolate.py", line 13, in <module>
            import scipy.special as spec
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in <module>
            from basic import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/basic.py", line 8, in <module>
            from _cephes import *
        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_
          Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so
          Expected in: flat namespace
          in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so

    Attempt at tracking down the problem: it looks like scipy.interpolate imports something called _cephes, which looks for a symbol called _aswfa_ but can't find it in 32 bit mode. Browsing through scipy's source, I find an ASWFA subroutine in specfun.f. The only scipy product file with a similar name is specfun.so, but both that and _cephes.so appear to be universal binaries:

        $ cd /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/
        $ file _cephes.so specfun.so
        _cephes.so: Mach-O universal binary with 2 architectures
        _cephes.so (for architecture i386):   Mach-O bundle i386
        _cephes.so (for architecture x86_64): Mach-O 64-bit bundle x86_64
        specfun.so: Mach-O universal binary with 2 architectures
        specfun.so (for architecture i386):   Mach-O bundle i386
        specfun.so (for architecture x86_64): Mach-O 64-bit bundle x86_64

    Ho hum. I'm stuck. Things I may try but haven't figured out how yet include compiling specfun.so myself manually, somehow. I would imagine that scipy isn't broken for all 32 bit machines, so I guess something is wrong with the way I've installed it, but I can't figure out what. I don't really expect a full answer given my fairly unique (?) setup, but if anyone has any clues that might point me in the right direction, they'd be greatly appreciated.

    (edit) More details to address questions: I'm using gfortran (GNU Fortran from GCC 4.2.1, Apple Inc. build 5646).
    Python 2.6.4 was installed more or less like so:

        cd /tmp
        curl -O http://www.python.org/ftp/python/2.6.4/Python-2.6.4.tar.bz2
        tar xf Python-2.6.4.tar.bz2
        cd Python-2.6.4
        # Now replace buggy pythonw.c file with one that supports the "arch" command:
        curl http://bugs.python.org/file14949/pythonw.c | sed s/2.7/2.6/ > Mac/Tools/pythonw.c
        ./configure --enable-framework=/Library/Frameworks --enable-universalsdk=/ --with-universal-archs=intel
        make -j4
        sudo make frameworkinstall

    Scipy 0.7.1 was installed pretty much as described here, but it boils down to a simple sudo python setup.py install. It would indeed appear that the symbol is undefined in the i386 architecture if you look at the _cephes library with nm, as suggested by David Cournapeau:

        $ nm -arch x86_64 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
        00000000000d4950 T _aswfa_
        000000000011e4b0 d _oblate_aswfa_data
        000000000011e510 d _oblate_aswfa_nocv_data
        (snip)
        $ nm -arch i386 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
                 U _aswfa_
        0002e96c d _oblate_aswfa_data
        0002e99c d _oblate_aswfa_nocv_data
        (snip)

    However, I can't yet explain its absence.

    Read the article

  • Python in Finance by Yuxing Yan, Packt Publishing Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/04/python-in-finance-by-yuxing-yan-packt-publishing-book-review.aspx I picked up Python in Finance from Packt Publishing to review, expecting to bore myself with complex algorithms and senseless formulas while seeing little actual Python in action; indeed, at 400-plus pages it may seem so. But it turned out to be quite the opposite. I learned a lot about practical applications of various Python modules such as SciPy, NumPy and several more; I think they empower a developer greatly. No wonder Python is on track to become a de facto language of choice for scientists! But I am not going to compromise the truth: the book does discuss numerous financial terms, many of them, and this is where the enormous power of this book comes from: it is like standing on the shoulders of a giant. Python is that giant - flexible and powerful, yet very approachable. The TOC is very detailed thanks to Packt, so anyone can see which financial algorithms are covered. I am only going to name a few I had the most fun with (though all of them are covered in enough detail): Fama*, Fat Tail, ARCH, Monte-Carlo and of course the volatility smile! I am under the impression this book is best suited for students in Finance, especially those who are about to join the workforce, but I suspect the material is also very well suited for mature financiers, an investor who has some programming skills and wants to benefit from them, or even a programmer or mathematician who already knows Python or any other language but wants to have fun in Quantitative Finance and earn a few bucks! Pure fun, real results, tons of practical insight, from reading data from a file to downloading trade data from Yahoo! Lastly, I need to compliment Yuxing - he is a talented teacher; this book could not be what it is otherwise. It is a 5 out of 5 product. Disclaimer: I received a free copy of this book for review purposes from the publisher.

    Read the article

  • Using pip to install modules in python failing

    - by James N
    I'm having trouble installing Python modules using pip. Below is the output from the command window. Note that I installed pip immediately before trying to install the GDAL module. I am on a Windows 7 64-bit machine running Python 2.7.

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Users\jnunn\Desktop>python get-pip.py
        Downloading/unpacking pip
          Downloading pip-1.2.1.tar.gz (102Kb): 102Kb downloaded
          Running setup.py egg_info for package pip
            warning: no files found matching '*.html' under directory 'docs'
            warning: no previously-included files matching '*.txt' found under directory 'docs\_build'
            no previously-included directories found matching 'docs\_build\_sources'
        Installing collected packages: pip
          Running setup.py install for pip
            warning: no files found matching '*.html' under directory 'docs'
            warning: no previously-included files matching '*.txt' found under directory 'docs\_build'
            no previously-included directories found matching 'docs\_build\_sources'
            Installing pip-script.py script to C:\Python26\ArcGIS10.1\Scripts
            Installing pip.exe script to C:\Python26\ArcGIS10.1\Scripts
            Installing pip.exe.manifest script to C:\Python26\ArcGIS10.1\Scripts
            Installing pip-2.7-script.py script to C:\Python26\ArcGIS10.1\Scripts
            Installing pip-2.7.exe script to C:\Python26\ArcGIS10.1\Scripts
            Installing pip-2.7.exe.manifest script to C:\Python26\ArcGIS10.1\Scripts
        Successfully installed pip
        Cleaning up...

        C:\Users\jnunn\Desktop>pip install gdal
        Downloading/unpacking gdal
          Downloading GDAL-1.9.1.tar.gz (420kB): 420kB downloaded
          Running setup.py egg_info for package gdal
        Installing collected packages: gdal
          Running setup.py install for gdal
            building 'osgeo._gdal' extension
            c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -I../../port -I../../gcore -I../../alg -I../../ogr/ -IC:\Python26\ArcGIS10.1\include -IC:\Python26\ArcGIS10.1\PC -IC:\Python26\ArcGIS10.1\lib\site-packages\numpy\core\include /Tpextensions/gdal_wrap.cpp /Fobuild\temp.win32-2.7\Release\extensions/gdal_wrap.obj
            gdal_wrap.cpp
            c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
            extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory
            error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
            Complete output from command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed:
            running install
            running build
            running build_py
            creating build
            creating build\lib.win32-2.7
            copying gdal.py -> build\lib.win32-2.7
            copying ogr.py -> build\lib.win32-2.7
            copying osr.py -> build\lib.win32-2.7
            copying gdalconst.py -> build\lib.win32-2.7
            copying gdalnumeric.py -> build\lib.win32-2.7
            creating build\lib.win32-2.7\osgeo
            copying osgeo\gdal.py -> build\lib.win32-2.7\osgeo
            copying osgeo\gdalconst.py -> build\lib.win32-2.7\osgeo
            copying osgeo\gdalnumeric.py -> build\lib.win32-2.7\osgeo
            copying osgeo\gdal_array.py -> build\lib.win32-2.7\osgeo
            copying osgeo\ogr.py -> build\lib.win32-2.7\osgeo
            copying osgeo\osr.py -> build\lib.win32-2.7\osgeo
            copying osgeo\__init__.py -> build\lib.win32-2.7\osgeo
            running build_ext
            building 'osgeo._gdal' extension
            creating build\temp.win32-2.7
            creating build\temp.win32-2.7\Release
            creating build\temp.win32-2.7\Release\extensions
            c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -I../../port -I../../gcore -I../../alg -I../../ogr/ -IC:\Python26\ArcGIS10.1\include -IC:\Python26\ArcGIS10.1\PC -IC:\Python26\ArcGIS10.1\lib\site-packages\numpy\core\include /Tpextensions/gdal_wrap.cpp /Fobuild\temp.win32-2.7\Release\extensions/gdal_wrap.obj
            gdal_wrap.cpp
            c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
            extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory
            error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
            ----------------------------------------
        Command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed failed with error code 1 in c:\users\jnunn\appdata\local\temp\pip-build\gdal
        Storing complete log in C:\Users\jnunn\pip\pip.log

        C:\Users\jnunn\Desktop>

    I have tried to use easy_install before too, and it came back with an error similar to this:

        c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
        extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory
        error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2

    Plus the following additional pip.log:

        Exception information:
        Traceback (most recent call last):
          File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\basecommand.py", line 107, in main
            status = self.run(options, args)
          File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\commands\install.py", line 261, in run
            requirement_set.install(install_options, global_options)
          File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\req.py", line 1166, in install
            requirement.install(install_options, global_options)
          File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\req.py", line 589, in install
            cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False)
          File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\util.py", line 612, in call_subprocess
            % (command_desc, proc.returncode, cwd))
        InstallationError: Command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed failed with error code 1 in c:\users\jnunn\appdata\local\temp\pip-build\gdal

    Read the article

  • Vectorization of index operations for a scipy.sparse matrix

    - by celil
    The following code runs too slowly even though everything seems to be vectorized.

        from numpy import *
        from scipy.sparse import *

        n = 100000
        i = xrange(n)
        j = xrange(n)
        data = ones(n)
        A = csr_matrix((data, (i, j)))
        x = A[i, j]

    The problem seems to be that the indexing operation is implemented as a Python function, and invoking A[i,j] results in the following profiling output:

        500033 function calls in 8.718 CPU seconds

        Ordered by: internal time

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
        100000    7.933    0.000    8.156    0.000   csr.py:265(_get_single_element)
             1    0.271    0.271    8.705    8.705   csr.py:177(__getitem__)
        (...)

    Namely, the Python function _get_single_element gets called 100000 times, which is really inefficient. Why isn't this implemented in pure C? Does anybody know of a way of getting around this limitation and speeding up the above code? Should I be using a different sparse matrix type?
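
    For this particular example a hedged workaround sketch: since i == j here, the lookups are exactly the diagonal, and csr_matrix.diagonal() extracts all diagonal entries in a single vectorized call, sidestepping the per-element _get_single_element overhead:

        # Sketch, assuming the goal is the diagonal as in the snippet above:
        x = A.diagonal()   # one C-level call instead of 100000 Python-level lookups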

    Read the article

  • AttributeError while adding colorbar in matplotlib

    - by bgbg
    The following code fails to run on Python 2.5.4:

        from matplotlib import pylab as pl
        import numpy as np

        data = np.random.rand(6,6)
        fig = pl.figure(1)
        fig.clf()
        ax = fig.add_subplot(1,1,1)
        ax.imshow(data, interpolation='nearest', vmin=0.5, vmax=0.99)
        pl.colorbar()
        pl.show()

    The error message is:

        C:\temp>python z.py
        Traceback (most recent call last):
          File "z.py", line 10, in <module>
            pl.colorbar()
          File "C:\Python25\lib\site-packages\matplotlib\pyplot.py", line 1369, in colorbar
            ret = gcf().colorbar(mappable, cax = cax, ax=ax, **kw)
          File "C:\Python25\lib\site-packages\matplotlib\figure.py", line 1046, in colorbar
            cb = cbar.Colorbar(cax, mappable, **kw)
          File "C:\Python25\lib\site-packages\matplotlib\colorbar.py", line 622, in __init__
            mappable.autoscale_None() # Ensure mappable.norm.vmin, vmax
        AttributeError: 'NoneType' object has no attribute 'autoscale_None'

    How can I add a colorbar to this code? Following is the interpreter information:

        Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        >>>
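
    A likely fix, sketched under the assumption that the problem is pyplot losing track of the image: pyplot.colorbar() looks for a "current image", which a direct Axes.imshow() call does not register, so the mappable it finds is None. Passing the image explicitly avoids the AttributeError:

        # Sketch: keep the AxesImage returned by imshow and hand it to colorbar.
        im = ax.imshow(data, interpolation='nearest', vmin=0.5, vmax=0.99)
        fig.colorbar(im)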

    Read the article

  • STFT and ISTFT in Python

    - by endolith
    Is there any form of short-time Fourier transform, with a corresponding inverse transform, built into SciPy or NumPy or whatever? There's the pyplot specgram function in matplotlib, which calls ax.specgram(), which calls mlab.specgram(), which calls _spectral_helper():

        #The checks for if y is x are so that we can use the same function to
        #implement the core of psd(), csd(), and spectrogram() without doing
        #extra calculations. We return the unaveraged Pxy, freqs, and t.

    I'm not sure if this can be used to do an STFT and ISTFT, though. Is there anything else, or should I translate something like this?
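
    For reference, a minimal STFT sketch in plain NumPy, illustrating the "translate something like this" option (the Hann window and hop size are assumed choices, not from the question):

        import numpy as np

        def stft(x, fft_size=512, hop=256):
            # Slide a Hann-windowed frame across the signal and FFT each frame.
            w = np.hanning(fft_size)
            return np.array([np.fft.rfft(w * x[i:i + fft_size])
                             for i in range(0, len(x) - fft_size + 1, hop)])

    An inverse would then overlap-add the inverse FFTs of the individual frames.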

    Read the article

  • Python bindings for C++ code using OpenCV giving segmentation fault

    - by lightalchemist
    I'm trying to write a Python wrapper for some C++ code that makes use of OpenCV, but I'm having difficulties returning the result, which is an OpenCV C++ Mat object, to the Python interpreter. I've looked at OpenCV's source and found the file cv2.cpp, which has conversion functions to convert between PyObject* and OpenCV's Mat. I made use of those conversion functions but got a segmentation fault when I tried to use them. I basically need some suggestions/sample code/online references on how to interface Python and C++ code that makes use of OpenCV, specifically with the ability to return OpenCV's C++ Mat to the Python interpreter, or perhaps suggestions on how/where to start investigating the cause of the segmentation fault. Currently I'm using Boost Python to wrap the code. Thanks in advance for any replies. The relevant code:

        // This is the function that is giving the segmentation fault.
        PyObject* ABC::doSomething(PyObject* image)
        {
            Mat m;
            pyopencv_to(image, m); // This line gives segmentation fault.

            // Some code to create cppObj from CPP library that uses OpenCV
            cv::Mat processedImage = cppObj->align(m);

            return pyopencv_from(processedImage);
        }

    The conversion functions taken from OpenCV's source follow. The conversion code gives a segmentation fault at the commented line with "if (!PyArray_Check(o)) ...":

        static int pyopencv_to(const PyObject* o, Mat& m, const char* name = "<unknown>", bool allowND=true)
        {
            if(!o || o == Py_None)
            {
                if( !m.data )
                    m.allocator = &g_numpyAllocator;
                return true;
            }

            if( !PyArray_Check(o) ) // Segmentation fault inside PyArray_Check(o)
            {
                failmsg("%s is not a numpy array", name);
                return false;
            }

            int typenum = PyArray_TYPE(o);
            int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S :
                       typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S :
                       typenum == NPY_INT || typenum == NPY_LONG ? CV_32S :
                       typenum == NPY_FLOAT ? CV_32F :
                       typenum == NPY_DOUBLE ? CV_64F : -1;

            if( type < 0 )
            {
                failmsg("%s data type = %d is not supported", name, typenum);
                return false;
            }

            int ndims = PyArray_NDIM(o);
            if(ndims >= CV_MAX_DIM)
            {
                failmsg("%s dimensionality (=%d) is too high", name, ndims);
                return false;
            }

            int size[CV_MAX_DIM+1];
            size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type);
            const npy_intp* _sizes = PyArray_DIMS(o);
            const npy_intp* _strides = PyArray_STRIDES(o);
            bool transposed = false;

            for(int i = 0; i < ndims; i++)
            {
                size[i] = (int)_sizes[i];
                step[i] = (size_t)_strides[i];
            }

            if( ndims == 0 || step[ndims-1] > elemsize ) {
                size[ndims] = 1;
                step[ndims] = elemsize;
                ndims++;
            }

            if( ndims >= 2 && step[0] < step[1] )
            {
                std::swap(size[0], size[1]);
                std::swap(step[0], step[1]);
                transposed = true;
            }

            if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] )
            {
                ndims--;
                type |= CV_MAKETYPE(0, size[2]);
            }

            if( ndims > 2 && !allowND )
            {
                failmsg("%s has more than 2 dimensions", name);
                return false;
            }

            m = Mat(ndims, size, type, PyArray_DATA(o), step);

            if( m.data )
            {
                m.refcount = refcountFromPyObject(o);
                m.addref(); // protect the original numpy array from deallocation
                            // (since Mat destructor will decrement the reference counter)
            };
            m.allocator = &g_numpyAllocator;

            if( transposed )
            {
                Mat tmp;
                tmp.allocator = &g_numpyAllocator;
                transpose(m, tmp);
                m = tmp;
            }
            return true;
        }

        static PyObject* pyopencv_from(const Mat& m)
        {
            if( !m.data )
                Py_RETURN_NONE;
            Mat temp, *p = (Mat*)&m;
            if(!p->refcount || p->allocator != &g_numpyAllocator)
            {
                temp.allocator = &g_numpyAllocator;
                m.copyTo(temp);
                p = &temp;
            }
            p->addref();
            return pyObjectFromRefcount(p->refcount);
        }

    My Python test program:

        import pysomemodule # My python wrapped library.
        import cv2

        def main():
            myobj = pysomemodule.ABC("faces.train") # Create python object. This works.
            image = cv2.imread('61.jpg')
            processedImage = myobj.doSomething(image)
            cv2.imshow("test", processedImage)
            cv2.waitKey()

        if __name__ == "__main__":
            main()
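
    One frequent cause of exactly this crash, offered as an assumption rather than a confirmed diagnosis: NumPy's C API is exposed through a per-module function-pointer table that import_array() fills in at module initialization. If the wrapper module never calls it, the first PyArray_* call (here PyArray_Check) dereferences a null pointer. A sketch of the Boost.Python module init, using the module name from the test program above:

        // Sketch: initialize NumPy's C API once, before any PyArray_* call
        // made from this translation unit.
        #include <boost/python.hpp>
        #include <numpy/arrayobject.h>

        BOOST_PYTHON_MODULE(pysomemodule)
        {
            import_array();  // fills in the PyArray_* function-pointer table
            // ... boost::python::class_<ABC> registrations go here ...
        }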

    Read the article

  • Matplotlib subplots_adjust hspace so titles and xlabels don't overlap ?

    - by Denis
    With say 3 rows of subplots in matplotlib, the xlabels of one row can overlap the title of the next; one has to fiddle with pl.subplots_adjust(hspace), which is annoying. Is there a recipe for hspace that prevents overlaps and works for any nrow?

        """ matplotlib xlabels overlap titles ? """
        import sys
        import numpy as np
        import pylab as pl

        nrow = 3
        hspace = .4  # of plot height, titles and xlabels both fall within this ??
        exec "\n".join( sys.argv[1:] )  # nrow= ...

        y = np.arange(10)
        pl.subplots_adjust( hspace=hspace )
        for jrow in range( 1, nrow+1 ):
            pl.subplot( nrow, 1, jrow )
            pl.plot( y**jrow )
            pl.title( 5 * ("title %d " % jrow) )
            pl.xlabel( 5 * ("xlabel %d " % jrow) )
        pl.show()

    My versions: matplotlib 0.99.1.1, python 2.6.4, Mac OS X 10.4.11, backend: Qt4Agg (TkAgg gives "Exception in Tkinter callback"). (For many extra points, can anyone outline how matplotlib's packer / spacer works, along the lines of chapter 17, "the packer", in the Tcl/Tk book?)
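
    One hedged recipe, assuming an upgrade is an option: matplotlib releases after the 0.99 mentioned here (1.1 and later) ship tight_layout(), which computes exactly this padding automatically instead of requiring a hand-tuned hspace:

        import matplotlib.pyplot as plt

        fig, axes = plt.subplots(nrows=3, ncols=1)
        for jrow, ax in enumerate(axes, start=1):
            ax.set_title("title %d" % jrow)
            ax.set_xlabel("xlabel %d" % jrow)
        fig.tight_layout()   # spaces rows so xlabels and titles no longer collide
        plt.show()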

    Read the article

  • Match multiline regex in file object

    - by williamx
    How can I extract the groups from this regex from a file object (data.txt)?

        import numpy as np
        import re
        import os

        ifile = open("data.txt",'r')

        # Regex pattern
        pattern = re.compile(r"""
            ^Time:(\d{2}:\d{2}:\d{2})   # Time: 12:34:56 at beginning of line
            \r{2}                       # Two carriage returns
            \D+                         # 1 or more non-digits
            storeU=(\d+\.\d+) \s uIx=(\d+) \s
            storeI=(-?\d+.\d+) \s iIx=(\d+) \s
            avgCI=(-?\d+.\d+)
            """, re.VERBOSE | re.MULTILINE)

        time = []
        for line in ifile:
            match = re.search(pattern, line)
            if match:
                time.append(match.group(1))

    The problem with the last part of the code is that I iterate line by line, which obviously doesn't work with a multiline regex. I have tried to use pattern.finditer(ifile) like this:

        for match in pattern.finditer(ifile):
            print match

    ... just to see if it works, but the finditer method requires a string or buffer. I have also tried this method, but can't get it to work:

        matches = [m.groups() for m in pattern.finditer(ifile)]

    Any idea?
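
    A sketch of the usual fix: finditer() wants a string, not a file object, so read the whole file once instead of iterating line by line (assuming the file fits in memory):

        with open("data.txt") as f:
            text = f.read()   # one string containing the whole file

        time = [m.group(1) for m in pattern.finditer(text)]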

    Read the article

  • Converting from samplerate/cutoff frequency to pi-radians/sample in a discrete time sampled IIR filter system.

    - by Fake Name
    I am working on some digital filters using Python and Numpy/Scipy. I'm using scipy.signal.iirdesign to generate my filter coefficients, but it requires the filter passband edges in a format I am not familiar with:

        wp, ws : float
            Passband and stopband edge frequencies, normalized from 0 to 1
            (1 corresponds to pi radians / sample). For example:
                Lowpass:  wp = 0.2, ws = 0.3
                Highpass: wp = 0.3, ws = 0.2

    (from here) I'm not familiar with digital filters (I'm coming from a hardware design background). In an analog context, I would determine the desired slope and the 3 dB down point, and calculate component values from that. In this context, how do I take a known sample rate, a desired corner frequency, and a desired rolloff, and calculate the wp, ws values from that? (This might be more appropriate for math.stackexchange. I'm not sure.)
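
    The conversion itself is standard, so a sketch: 1.0 corresponds to the Nyquist frequency, half the sample rate, so an edge frequency in Hz is divided by fs/2 (the numbers below are hypothetical):

        fs = 48000.0          # sample rate in Hz (hypothetical)
        f_corner = 2400.0     # desired corner frequency in Hz (hypothetical)

        wp = f_corner / (fs / 2.0)   # 0.1: fraction of Nyquist = pi rad/sample
        ws = 4800.0 / (fs / 2.0)     # stopband edge, placed to set the rolloff

    The steepness then comes from how close ws sits to wp, together with the gpass/gstop attenuation arguments to iirdesign.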

    Read the article

  • Stochastic calculus library in python

    - by LeMiz
    Hello, I am looking for a Python library that would allow me to compute stochastic calculus stuff, like the (conditional) expectation of a random process whose diffusion I would define. I had a look at simpy (simpy.sourceforge.net), but it does not seem to cover my needs. This is for quick prototyping and experimentation. In Java, I used with some success the (now inactive) http://martingale.berlios.de/Martingale.html library. The problem is not difficult in itself, but there are a lot of non-trivial, boilerplate things to do (efficient memory use, variance reduction techniques, and so on). Ideally, I would be able to write something like this (just illustrative):

        def my_diffusion(t, dt, past_values, world, **kwargs):
            W1, W2 = world.correlated_brownians_pair(correlation=kwargs['rho'])
            X = past_values[-1]
            sigma_1 = kwargs['sigma1']
            sigma_2 = kwargs['sigma2']
            dX = kwargs['mu'] * X * dt + sigma_1 * W1 * X * math.sqrt(dt) \
                 + sigma_2 * W2 * X * X * math.sqrt(dt)
            return X + dX

        X = RandomProcess(diffusion=my_diffusion, x0=1.0)
        print X.expectancy(T=252, dt=1./252., N_simul=50000,
                           world=World(random_generator='sobol'),
                           sigma1=0.3, sigma2=0.01, rho=-0.1)

    Does someone know of something else, short of reimplementing it in numpy, for example?
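
    Short of a dedicated library, a plain-numpy Monte-Carlo sketch of the kind of expectation described, using a one-factor geometric Brownian motion (deliberately simpler than the two-factor diffusion above; all parameter values illustrative):

        import numpy as np

        def mc_expectation(x0=1.0, mu=0.05, sigma=0.3, T=1.0, steps=252, n_paths=50000):
            dt = T / steps
            z = np.random.standard_normal((n_paths, steps))
            # Exact log-Euler scheme for dX = mu*X*dt + sigma*X*dW
            log_x = np.log(x0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                                           + sigma * np.sqrt(dt) * z, axis=1)
            return np.exp(log_x[:, -1]).mean()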

    Read the article

  • PyOpenGL: glVertexPointer() offset problem

    - by SurvivalMachine
    My vertices are interleaved in a numpy array (dtype = float32) like this:

        ... tu, tv, nx, ny, nz, vx, vy, vz, ...

    When rendering, I'm calling gl*Pointer() like this (I have enabled the arrays before):

        stride = (2 + 3 + 3) * 4
        glTexCoordPointer( 2, GL_FLOAT, stride, self.vertArray )
        glNormalPointer( GL_FLOAT, stride, self.vertArray + 2 )
        glVertexPointer( 3, GL_FLOAT, stride, self.vertArray + 5 )
        glDrawElements( GL_TRIANGLES, len( self.indices ), GL_UNSIGNED_SHORT, self.indices )

    The result is that nothing renders. However, if I organize my array so that the vertex position is the first element ( ... vx, vy, vz, tu, tv, nx, ny, nz, ... ), I get correct positions for the vertices while rendering, but texture coords and normals aren't rendered correctly. This leads me to believe that I'm not setting the pointer offset right. How should I set it? I'm using almost exactly the same code in my other app in C++ and it works.
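
    A sketch of the likely culprit (an assumption, not a confirmed answer): self.vertArray + 2 is numpy arithmetic, so it adds 2.0 to every element and passes a temporary copy, whereas the C++ version's pointer arithmetic advances the buffer's start address. A numpy slice is a view into the same memory, so its data pointer begins at the intended byte offset:

        stride = (2 + 3 + 3) * 4
        glTexCoordPointer( 2, GL_FLOAT, stride, self.vertArray )
        glNormalPointer( GL_FLOAT, stride, self.vertArray[2:] )    # view, offset 2 floats
        glVertexPointer( 3, GL_FLOAT, stride, self.vertArray[5:] ) # view, offset 5 floats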

    Read the article

  • Significant figures in the decimal module

    - by Jason Baker
    So I've decided to try to solve my physics homework by writing some Python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example, this handles significant figures properly:

        >>> from decimal import Decimal
        >>> Decimal('1.0') + Decimal('2.0')
        Decimal("3.0")

    But this doesn't:

        >>> Decimal('1.00') / Decimal('3.00')
        Decimal("0.3333333333333333333333333333")

    So two questions: Am I right that this isn't the expected number of significant digits, or do I need to brush up on my significant-digit math? Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module, out of curiosity.
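
    On the second question, a sketch: Decimal division is governed by the context precision (28 digits by default), not by the operands' significant figures, so the precision does have to be set somewhere, but it can be scoped locally rather than changed globally:

        from decimal import Decimal, localcontext

        with localcontext() as ctx:
            ctx.prec = 3                              # significant digits for this block only
            print Decimal('1.00') / Decimal('3.00')   # 0.333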

    Read the article

  • Python Least-Squares Natural Splines

    - by Eldila
    I am trying to find a numerical package which will fit a natural spline which minimizes weighted least squares. There is a package in scipy which does what I want for unnatural splines:

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy import interpolate
        import random

        x = np.arange(0,5,1.0/2)
        xs = np.arange(0,5,1.0/500)
        y = np.sin(x+1)
        for i in range(len(y)):
            y[i] += .2*random.random() - .1

        knots = np.array([1,2,3,4])
        tck = interpolate.splrep(x,y,s=1,k=3,t=knots,task=-1)
        ynew = interpolate.splev(xs,tck,der=0)

        plt.figure()
        plt.plot(xs,ynew,x,y,'x')

    Read the article

  • How can I plot NaN values as a special color with imshow in matplotlib?

    - by Adam Fraser
    Example:

        import numpy as np
        import matplotlib.pyplot as plt

        f = plt.figure()
        ax = f.add_subplot(111)
        a = np.arange(25).reshape((5,5)).astype(float)
        a[3,:] = np.nan
        ax.imshow(a, interpolation='nearest')
        f.canvas.draw()

    The resultant image is unexpectedly all blue (the lowest color in the jet colormap). However, if I do the plotting like this:

        ax.imshow(a, interpolation='nearest', vmin=0, vmax=24)

    --then I get something better, but the NaN values are drawn the same color as vmin... Is there a graceful way to set NaNs to be drawn with a special color (e.g. gray or transparent)?
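
    One graceful route, sketched as an assumption about the intended look: colormaps draw masked cells with their "bad" color, so mask the NaNs and choose that color explicitly:

        import numpy.ma as ma

        cmap = plt.cm.jet
        cmap.set_bad('gray')   # or an (r, g, b, alpha) tuple such as (0, 0, 0, 0)
        ax.imshow(ma.masked_invalid(a), interpolation='nearest', cmap=cmap)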

    Read the article

  • Confusion Matrix with number of classified/misclassified instances on it (Python/Matplotlib)

    - by Pinkie
    I am plotting a confusion matrix with matplotlib with the following code:

        from numpy import *
        import matplotlib.pyplot as plt
        from pylab import *

        conf_arr = [[33,2,0,0,0,0,0,0,0,1,3],
                    [3,31,0,0,0,0,0,0,0,0,0],
                    [0,4,41,0,0,0,0,0,0,0,1],
                    [0,1,0,30,0,6,0,0,0,0,1],
                    [0,0,0,0,38,10,0,0,0,0,0],
                    [0,0,0,3,1,39,0,0,0,0,4],
                    [0,2,2,0,4,1,31,0,0,0,2],
                    [0,1,0,0,0,0,0,36,0,2,0],
                    [0,0,0,0,0,0,1,5,37,5,1],
                    [3,0,0,0,0,0,0,0,0,39,0],
                    [0,0,0,0,0,0,0,0,0,0,38]]

        norm_conf = []
        for i in conf_arr:
            a = 0
            tmp_arr = []
            a = sum(i,0)
            for j in i:
                tmp_arr.append(float(j)/float(a))
            norm_conf.append(tmp_arr)

        plt.clf()
        fig = plt.figure()
        ax = fig.add_subplot(111)
        res = ax.imshow(array(norm_conf), cmap=cm.jet, interpolation='nearest')
        cb = fig.colorbar(res)
        savefig("confmat.png", format="png")

    But I want the confusion matrix to show the numbers on it like this graphic (the right one): http://i48.tinypic.com/2e30kup.jpg How can I plot the conf_arr on the graphic?
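
    A sketch of one way to get the counts onto the cells (the styling choices are assumptions): imshow places cell (i, j) at x=j, y=i, so ax.text can annotate each one after the imshow call above:

        for i, row in enumerate(conf_arr):
            for j, count in enumerate(row):
                ax.text(j, i, str(count), ha='center', va='center', color='white')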

    Read the article

  • Plot smooth line with PyPlot

    - by Paul
    I've got the following simple script that plots a graph:

        import matplotlib.pyplot as plt
        import numpy as np

        T = np.array([6, 7, 8, 9, 10, 11, 12])
        power = np.array([1.53E+03, 5.92E+02, 2.04E+02, 7.24E+01,
                          2.72E+01, 1.10E+01, 4.70E+00])

        plt.plot(T, power)
        plt.show()

    As it is now, the line goes straight from point to point, which looks OK but could be better in my opinion. What I want is to smooth the line between the points. In Gnuplot I would have plotted with smooth cplines. Is there an easy way to do this in PyPlot? I've found some tutorials, but they all seem rather complex.
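
    A hedged sketch of the usual SciPy route: fit a cubic spline through the points, evaluate it on a dense grid, and plot that curve (300 evaluation points is an arbitrary choice):

        from scipy.interpolate import splrep, splev

        T_dense = np.linspace(T.min(), T.max(), 300)
        power_smooth = splev(T_dense, splrep(T, power))

        plt.plot(T_dense, power_smooth)
        plt.plot(T, power, 'o')   # original points for comparison
        plt.show()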

    Read the article

  • Simple Python Challenge: Fastest Bitwise XOR on Data Buffers

    - by user213060
    Challenge: Perform a bitwise XOR on two equal-sized buffers. The buffers will be required to be the Python str type, since this is traditionally the type for data buffers in Python. Return the resultant value as a str. Do this as fast as possible. The inputs are two 1 megabyte (2**20 byte) strings. The challenge is to substantially beat my inefficient algorithm using Python or existing third-party Python modules (relaxed rules: or create your own module). Marginal increases are useless.

        from os import urandom
        from numpy import frombuffer, bitwise_xor, byte

        def slow_xor(aa, bb):
            a = frombuffer(aa, dtype=byte)
            b = frombuffer(bb, dtype=byte)
            c = bitwise_xor(a, b)
            r = c.tostring()
            return r

        aa = urandom(2**20)
        bb = urandom(2**20)

        def test_it():
            for x in xrange(1000):
                slow_xor(aa, bb)
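
    One common line of attack, sketched with no benchmark claims: XOR the same buffers as 64-bit words so numpy performs an eighth as many element operations (safe here because 2**20 is divisible by 8):

        from numpy import frombuffer, bitwise_xor, uint64

        def word_xor(aa, bb):
            # Reinterpret the byte buffers as arrays of 64-bit words.
            a = frombuffer(aa, dtype=uint64)
            b = frombuffer(bb, dtype=uint64)
            return bitwise_xor(a, b).tostring()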

    Read the article

  • Binomial test in Python

    - by Morlock
    I need to do a binomial test in Python that allows calculation for n on the order of 10000. I have implemented a quick binomial_test function using scipy.misc.comb; however, it is pretty much limited around n = 1000, I guess because it reaches the biggest representable number while computing factorials or the combinatorial itself. Here is my function:

        from scipy.misc import comb

        def binomial_test(n, k):
            """Calculate binomial probability"""
            p = comb(n, k) * 0.5**k * 0.5**(n-k)
            return p

    How could I use a native Python (or numpy, scipy...) function in order to calculate that binomial probability? If possible, I need scipy 0.7.2 compatible code. Many thanks!
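
    A sketch of the standard workaround: compute the combinatorial in log space with scipy.special.gammaln, so nothing overflows even for n around 10000:

        from scipy.special import gammaln
        import numpy as np

        def binomial_test(n, k):
            # log C(n, k) = gammaln(n+1) - gammaln(k+1) - gammaln(n-k+1),
            # and C(n, k) * 0.5**k * 0.5**(n-k) = C(n, k) * 0.5**n.
            log_p = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1) + n * np.log(0.5)
            return np.exp(log_p)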

    Read the article

  • How can I draw a log-normalized imshow plot with a colorbar representing the raw data in matplotlib

    - by Adam Fraser
    I'm using matplotlib to plot log-normalized images, but I would like the original raw image data to be represented in the colorbar rather than the [0-1] interval. I get the feeling there's a more matplotlib'y way of doing this by using some sort of normalization object and not transforming the data beforehand... in any case, there could be negative values in the raw image.

        import matplotlib.pyplot as plt
        import numpy as np

        def log_transform(im):
            '''returns log(image) scaled to the interval [0,1]'''
            try:
                (min, max) = (im[im > 0].min(), im.max())
                if (max > min) and (max > 0):
                    return (np.log(im.clip(min, max)) - np.log(min)) / (np.log(max) - np.log(min))
            except:
                pass
            return im

        a = np.ones((100,100))
        for i in range(100):
            a[i] = i

        f = plt.figure()
        ax = f.add_subplot(111)
        res = ax.imshow(log_transform(a))
        # the colorbar drawn shows [0-1], but I want to see [0-99]
        cb = f.colorbar(res)

    I've tried using cb.set_array, but that didn't appear to do anything, and cb.set_clim, but that rescales the colors completely. Thanks in advance for any help :)
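
    That normalization object exists, so a sketch (with the caveat that LogNorm only handles positive data, which as noted above may not hold here): matplotlib.colors.LogNorm makes imshow log-scale the colors while the colorbar keeps raw data units:

        from matplotlib.colors import LogNorm

        res = ax.imshow(a, norm=LogNorm(vmin=a[a > 0].min(), vmax=a.max()))
        cb = f.colorbar(res)   # tick labels now show raw values, e.g. up to 99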

    Read the article

  • Handling missing/incomplete data in R

    - by doug
    As you would expect from a DSL aimed at data analysts, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs, but if you want to deal with this before the function call, then:

    To replace each NA with 0:

        ifelse(is.na(vx), 0, vx)

    To remove each NA:

        vx = vx[!is.na(vx)]

    To remove each entire row that contains an NA from a data frame:

        dfx = dfx[complete.cases(dfx),]

    All of these functions remove NA or rows with an NA in them. Sometimes this isn't quite what you want, though--making an NA-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to complete.cases, yet that column has no NA values in it). To be as clear as possible about what I'm looking for: python/numpy has a class, 'masked array', with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?

    Read the article

  • Matplotlib plotting non uniform data in 3D surface

    - by Raj Tendulkar
    I have a simple script to plot points in 3D with Matplotlib, below:

        from mpl_toolkits.mplot3d import axes3d
        import matplotlib.pyplot as plt
        import numpy as np
        from numpy import genfromtxt
        import csv

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')

        my_data = genfromtxt('points1.csv', delimiter=',')
        points1X = my_data[:,0]
        points1Y = my_data[:,1]
        points1Z = my_data[:,2]

        ## I remove the header of the CSV file.
        points1X = np.delete(points1X, 0)
        points1Y = np.delete(points1Y, 0)
        points1Z = np.delete(points1Z, 0)

        # Convert the arrays to 1D arrays
        points1X = np.reshape(points1X, points1X.size)
        points1Y = np.reshape(points1Y, points1Y.size)
        points1Z = np.reshape(points1Z, points1Z.size)

        my_data = genfromtxt('points2.csv', delimiter=',')
        points2X = my_data[:,0]
        points2Y = my_data[:,1]
        points2Z = my_data[:,2]

        ## I remove the header of the CSV file.
        points2X = np.delete(points2X, 0)
        points2Y = np.delete(points2Y, 0)
        points2Z = np.delete(points2Z, 0)

        # Convert the arrays to 1D arrays
        points2X = np.reshape(points2X, points2X.size)
        points2Y = np.reshape(points2Y, points2Y.size)
        points2Z = np.reshape(points2Z, points2Z.size)

        ax.plot(points1X, points1Y, points1Z, 'd', markersize=8, markerfacecolor='red', label='points1')
        ax.plot(points2X, points2Y, points2Z, 'd', markersize=8, markerfacecolor='blue', label='points2')
        plt.show()

    My problem is that I have tried to make a decent surface plot out of these data points. I tried the ax.plot_surface() function to make it look nice; for this I eliminated some points and recalculated the matrix-like input needed by that function. However, the graph I generated was far more difficult to interpret and understand. So there might be two possibilities: either I am not using the function correctly, or the data I am trying to plot is not well suited to a surface plot. What I was expecting was a 3D graph with an effect similar to a 3D pie chart: we see that one piece (the one that is extracted out) is part of another piece. I was not expecting it to be exactly like that, but some kind of effect like that. What I would like to ask is: do you think it would be possible to make such a 3D graph? Is there any better way I could express my data in three dimensions?
Here are the 2 files - points1.csv Dim1,Dim2,Dim3 3,8,1 3,8,2 3,8,3 3,8,4 3,8,5 3,9,1 3,9,2 3,9,3 3,9,4 3,9,5 3,10,1 3,10,2 3,10,3 3,10,4 3,10,5 3,11,1 3,11,2 3,11,3 3,11,4 3,11,5 3,12,1 3,12,2 3,13,1 3,13,2 3,14,1 3,14,2 3,15,1 3,15,2 3,16,1 3,16,2 3,17,1 3,17,2 3,18,1 3,18,2 4,8,1 4,8,2 4,8,3 4,8,4 4,8,5 4,9,1 4,9,2 4,9,3 4,9,4 4,9,5 4,10,1 4,10,2 4,10,3 4,10,4 4,10,5 4,11,1 4,11,2 4,11,3 4,11,4 4,11,5 4,12,1 4,13,1 4,14,1 4,15,1 4,16,1 4,17,1 4,18,1 5,8,1 5,8,2 5,8,3 5,8,4 5,8,5 5,9,1 5,9,2 5,9,3 5,9,4 5,9,5 5,10,1 5,10,2 5,10,3 5,10,4 5,10,5 5,11,1 5,11,2 5,11,3 5,11,4 5,11,5 5,12,1 5,13,1 5,14,1 5,15,1 5,16,1 5,17,1 5,18,1 6,8,1 6,8,2 6,8,3 6,8,4 6,8,5 6,9,1 6,9,2 6,9,3 6,9,4 6,9,5 6,10,1 6,11,1 6,12,1 6,13,1 6,14,1 6,15,1 6,16,1 6,17,1 6,18,1 7,8,1 7,8,2 7,8,3 7,8,4 7,8,5 7,9,1 7,9,2 7,9,3 7,9,4 7,9,5 and points2.csv Dim1,Dim2,Dim3 3,12,3 3,12,4 3,12,5 3,13,3 3,13,4 3,13,5 3,14,3 3,14,4 3,14,5 3,15,3 3,15,4 3,15,5 3,16,3 3,16,4 3,16,5 3,17,3 3,17,4 3,17,5 3,18,3 3,18,4 3,18,5 4,12,2 4,12,3 4,12,4 4,12,5 4,13,2 4,13,3 4,13,4 4,13,5 4,14,2 4,14,3 4,14,4 4,14,5 4,15,2 4,15,3 4,15,4 4,15,5 4,16,2 4,16,3 4,16,4 4,16,5 4,17,2 4,17,3 4,17,4 4,17,5 4,18,2 4,18,3 4,18,4 4,18,5 5,12,2 5,12,3 5,12,4 5,12,5 5,13,2 5,13,3 5,13,4 5,13,5 5,14,2 5,14,3 5,14,4 5,14,5 5,15,2 5,15,3 5,15,4 5,15,5 5,16,2 5,16,3 5,16,4 5,16,5 5,17,2 5,17,3 5,17,4 5,17,5 5,18,2 5,18,3 5,18,4 5,18,5 6,10,2 6,10,3 6,10,4 6,10,5 6,11,2 6,11,3 6,11,4 6,11,5 6,12,2 6,12,3 6,12,4 6,12,5 6,13,2 6,13,3 6,13,4 6,13,5 6,14,2 6,14,3 6,14,4 6,14,5 6,15,2 6,15,3 6,15,4 6,15,5 6,16,2 6,16,3 6,16,4 6,16,5 6,17,2 6,17,3 6,17,4 6,17,5 6,18,2 6,18,3 6,18,4 6,18,5 7,10,1 7,10,2 7,10,3 7,10,4 7,10,5 7,11,1 7,11,2 7,11,3 7,11,4 7,11,5 7,12,1 7,12,2 7,12,3 7,12,4 7,12,5 7,13,1 7,13,2 7,13,3 7,13,4 7,13,5 7,14,1 7,14,2 7,14,3 7,14,4 7,14,5 7,15,1 7,15,2 7,15,3 7,15,4 7,15,5 7,16,1 7,16,2 7,16,3 7,16,4 7,16,5 7,17,1 7,17,2 7,17,3 7,17,4 7,17,5 7,18,1 7,18,2 7,18,3 7,18,4 7,18,5

    Read the article

  • Python, Matplotlib, subplot: How to set the axis range?

    - by someone
    How can I set the y axis range of the second subplot to e.g. [0,1000]? The FFT plot of my data (a column in a text file) results in an (inf.?) spike so that the actual data is not visible. pylab.ylim([0,1000]) has no effect, unfortunately. This is the whole script:

        # based on http://www.swharden.com/blog/2009-01-21-signal-filtering-with-python/
        import numpy, scipy, pylab, random

        xs = []
        rawsignal = []
        with open("test.dat", 'r') as f:
            for line in f:
                if line[0] != '#' and len(line) > 0:
                    xs.append( int( line.split()[0] ) )
                    rawsignal.append( int( line.split()[1] ) )

        h, w = 3, 1
        pylab.figure(figsize=(12,9))
        pylab.subplots_adjust(hspace=.7)

        pylab.subplot(h,w,1)
        pylab.title("Signal")
        pylab.plot(xs,rawsignal)

        pylab.subplot(h,w,2)
        pylab.title("FFT")
        fft = scipy.fft(rawsignal)
        #~ pylab.axis([None,None,0,1000])
        pylab.ylim([0,1000])
        pylab.plot(abs(fft))

        pylab.savefig("SIG.png",dpi=200)
        pylab.show()

    Other improvements are also appreciated!
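
    A sketch of the likely explanation (an assumption): pylab.plot() autoscales the axes when it draws, silently overriding limits set beforehand, so setting them after the plot call should stick:

        pylab.subplot(h, w, 2)
        pylab.title("FFT")
        fft = scipy.fft(rawsignal)
        pylab.plot(abs(fft))
        pylab.ylim([0, 1000])   # set limits after plotting so autoscale can't undo them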

    Read the article

  • python dictionary with constant value-type

    - by s.kap
    Hi there, I bumped into a case where I need a big (= huge) Python dictionary, which turned out to be quite memory-consuming. However, since all of the values are of a single type (long) - as are the keys - I figured I could use a Python (or numpy, doesn't really matter) array for the values, and wrap the needed interface (in: x; out: d[x]) with an object which actually uses these arrays for the key and value storage. I can use an index-conversion object (input -> index, of 1..n, where n is the distinct-values counter), and return array[index]. I can elaborate on some techniques for how to implement such an indexing method with reasonable memory requirements; it works and is even pretty good. However, I wonder if such a data-structure object already exists (in Python, or wrapped for Python from C/C++), in any package (I checked collections, and did some Google searches). Any comment will be welcome, thanks.
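
    For reference, a minimal sketch of the wrapper idea described above (all names hypothetical): keep sorted parallel numpy arrays and binary-search them with searchsorted, trading dict lookup speed for two fixed-width integers of storage per entry:

        import numpy as np

        class LongLongMap(object):
            """Hypothetical read-only long->long mapping over numpy arrays."""
            def __init__(self, keys, values):
                order = np.argsort(keys)
                self._keys = np.asarray(keys, dtype=np.int64)[order]
                self._values = np.asarray(values, dtype=np.int64)[order]

            def __getitem__(self, x):
                i = np.searchsorted(self._keys, x)
                if i == len(self._keys) or self._keys[i] != x:
                    raise KeyError(x)
                return self._values[i]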

    Read the article
