Search Results

Search found 9461 results on 379 pages for 'digital signal processing'.

Page 40/379 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Convolving two signals

    - by John Elway
    Calculate the convolution of the following signals (your answer will be in the form of an equation):

        h[n] = delta[n-1] + delta[n+1]
        x[n] = delta[n-a] + delta[n+b]

    I'm lost as to what to do with h and x. Do I simply multiply them, h[n]*x[n]? I programmed convolution with several types of blurs and edge detectors, but I don't see how to translate that knowledge to this problem. Please help!
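
    Since convolving with a shifted impulse just shifts the signal, the closed form follows directly: y[n] = x[n-1] + x[n+1] = delta[n-1-a] + delta[n-1+b] + delta[n+1-a] + delta[n+1+b]. A quick numerical check in Python/NumPy (a and b are given concrete values only for the check; they stay symbolic in the answer):

        import numpy as np

        a, b = 3, 2                       # arbitrary concrete values, just for verification
        n = np.arange(-10, 11)            # index range wide enough to hold every impulse

        def delta(k):                     # unit impulse at n = k, i.e. delta[n-k]
            return (n == k).astype(float)

        h = delta(1) + delta(-1)
        x = delta(a) + delta(-b)
        y = np.convolve(h, x)             # full convolution; output index 0 corresponds to n = -20

        print(sorted(np.flatnonzero(y) - (len(n) - 1)))   # positions of the output impulses
        print(sorted([a + 1, a - 1, 1 - b, -1 - b]))      # positions predicted by the equation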

    Read the article

  • What's the fastest way to approximate the period of data using Octave?

    - by John
    I have a set of data that is periodic (but not sinusoidal). I have a set of time values in one vector and a set of amplitudes in a second vector. I'd like to quickly approximate the period of the function. Any suggestions? Specifically, here's my current code. I'd like to approximate the period of the vector x(:,2) against the vector t. Ultimately, I'd like to do this for lots of initial conditions and calculate the period of each and plot the result.

        function xdot = f (x, t)
          xdot(1) = x(2);
          xdot(2) = -sin(x(1));
        endfunction

        x0 = [1; 1.75];   # eventually, I'd like to try lots of values for x0(2)
        t = linspace(0, 50, 200);
        x = lsode("f", x0, t)
        plot(x(:,1), x(:,2));

    Thank you! John
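
    One common approach, assuming uniformly spaced samples, is to take the FFT of the amplitude vector and read the period off the dominant frequency bin; with only 200 samples over 50 time units the frequency resolution is coarse, so an autocorrelation or zero-crossing refinement may be needed for precise values. A minimal sketch in Python/NumPy (the same steps map directly onto Octave's fft):

        import numpy as np

        def estimate_period(t, x):
            """Estimate the dominant period of x(t), assuming uniformly spaced t."""
            dt = t[1] - t[0]
            x = x - np.mean(x)                      # remove the DC component
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=dt)
            k = np.argmax(spectrum[1:]) + 1         # skip the zero-frequency bin
            return 1.0 / freqs[k]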

    Read the article

  • BlackBerry Technical Specification

    - by Sam
    I'm having trouble locating BlackBerry technical specifications, and their website is a mess. They also don't have a number I can use to easily contact them. This isn't exactly a coding question, but what does the BlackBerry audio API look like, and where can I get technical specifications on audio? Specifically, I'm trying to find out more about audio in, through the mic-in on the 3.5 mm jack. Unfortunately, before I can proceed, I need to know things like the sampling rate, data width, etc. A pointer to the right resource, or an answer off the top of your head, would be appreciated.

    Read the article

  • Why is the object destructor not called when the script terminates?

    - by planetp
    I have a test script like this:

        package Test;

        sub new     { bless {} }
        sub DESTROY { print "in DESTROY\n" }

        package main;

        my $t = new Test;
        sleep 10;

    The destructor is called after sleep returns (and before the program terminates). But it's not called if the script is terminated with Ctrl-C. Is it possible to have the destructor called in this case also?

    Read the article

  • How to process audio in real time?

    - by user1756648
    I am giving some audio input through a microphone. I recorded it in Audacity, and it looks something like the waveform shown below. I want to process this audio in real time. I mainly want to do two things:

    1) see a real-time audio amplitude vs. time graph
    2) perform some actions based on the signal (e.g. if a specific type of spike is seen in the audio, do something; otherwise do something else)

    Is there any Python module or C library that would allow me to do this?
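
    For Python, the PyAudio package (or the similar sounddevice package) can deliver raw microphone blocks that NumPy can process as they arrive. A minimal sketch, assuming PyAudio is installed; the block size, sample rate, and the 2000 amplitude threshold are placeholder choices:

        import numpy as np
        import pyaudio

        RATE = 44100          # samples per second
        CHUNK = 1024          # samples per block (~23 ms of audio)

        pa = pyaudio.PyAudio()
        stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                         input=True, frames_per_buffer=CHUNK)
        try:
            while True:
                block = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
                amplitude = np.abs(block).mean()        # crude loudness per block
                if amplitude > 2000:                    # placeholder "spike" threshold
                    print("spike detected:", amplitude)
        finally:
            stream.stop_stream()
            stream.close()
            pa.terminate()

    For the amplitude-vs-time graph, the same per-block values can be appended to a buffer and redrawn with matplotlib's animation support.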

    Read the article

  • How to add a slot to my main window in Qt builder?

    - by George Edison
    I am using Qt Builder to create a simple window. I used the menu editor to add a menu. Now, I figured out how to connect one of the menu items to the close() method of the main window. My problem is how to add a slot to the main window. Here is what I have:

        private slots:
            void OnAbout();

    However, I can't get this method to show up in the 'Signals and Slots Editor'. How can I get it to show up?

    Read the article

  • "Voice trigger" detection

    - by sehugg
    I have a voice application that would be much improved if it could use a "trigger word" to start recording audio. I don't need a full speech-to-text engine, just the ability to reliably and efficiently detect the trigger word. I am wondering if there are any specialized speech engines that support this specific use case, or any libraries/methods for developing such a single-purpose detection engine. Ideally I'd like it to work in noisy environments, but it can be trained on a single user's voice. Pointers to research papers/topics would also be appreciated so I know what to ask for.
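
    The term to search for is keyword spotting (or wake-word detection); CMU PocketSphinx is often pointed to for its keyphrase-spotting mode. As a rough illustration of the matched-template idea behind the simplest detectors, here is a Python/NumPy sketch that correlates a recorded template's short-time energy envelope against an incoming signal; a real detector would work on MFCC features with DTW or a trained model, and this crude version will false-trigger in noise:

        import numpy as np

        def envelope(signal, frame=512):
            # short-time energy, one value per non-overlapping frame
            n = len(signal) // frame
            return np.array([np.sum(signal[i*frame:(i+1)*frame] ** 2.0) for i in range(n)])

        def detect(trigger, stream, frame=512, threshold=0.7):
            """Return frame indices where the stream's envelope resembles the trigger word's."""
            t = envelope(trigger, frame)
            s = envelope(stream, frame)
            t = (t - t.mean()) / (t.std() + 1e-12)          # z-score the template once
            hits = []
            for i in range(len(s) - len(t)):
                w = s[i:i + len(t)]
                w = (w - w.mean()) / (w.std() + 1e-12)
                score = np.dot(w, t) / len(t)               # normalized cross-correlation
                if score > threshold:
                    hits.append(i)
            return hits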

    Read the article

  • Compare two audio files for beat/tempo and rate the match on iPhone

    - by Senthil Kumar
    Hello, I want to develop an iPhone application that can count the number of phrases received when the user sings into the mic. The application should also be able to decipher whether the user's phrases are in or out of cadence with a preset beat. While the user sings into the mic, only instrumental music plays, so I have to merge the user's recorded voice with the instrumental music into one audio file. I already have the original song file. I have to compare both and give a rating to the user. [Note: the instrumental music is the original song file without the vocals.] Can you please help me? Thanks, Vadivelu
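
    The analysis side of this is beat tracking plus tempo comparison. The sketch below uses Python's librosa package purely to show the shape of that analysis (the file names are hypothetical); librosa won't run on an iPhone, but the same beat-tracking idea can be implemented on iOS with a DSP library or FFT-based onset detection:

        import librosa

        def tempo_and_beats(path):
            y, sr = librosa.load(path, sr=None)
            tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
            return tempo, librosa.frames_to_time(beat_frames, sr=sr)

        ref_tempo, ref_beats = tempo_and_beats("original_song.wav")     # hypothetical file names
        usr_tempo, usr_beats = tempo_and_beats("user_recording.wav")

        drift = abs(usr_tempo - ref_tempo)    # BPM difference: one crude ingredient for a rating
        print(ref_tempo, usr_tempo, drift)

    Comparing the two beat-time arrays (for example, counting how many user beats fall within some tolerance of a reference beat) gives an in/out-of-cadence measure.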

    Read the article

  • Linear interpolation: how to implement this algorithm in C? (Python version given)

    - by psihodelia
    There exists one very good linear interpolation method. It performs linear interpolation requiring at most one multiply per output sample. I found its description in the third edition of Understanding DSP by Lyons. This method involves a special hold buffer. Given a number of samples to be inserted between any two input samples, it produces output points using linear interpolation. Here, I have rewritten this algorithm in Python:

        temp1, temp2 = 0, 0
        iL = 1.0 / L
        for i in x:
            hold = [i - temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    where x contains the input samples, L is the number of points to be inserted, and y will contain the output samples. My question is how to implement this algorithm in ANSI C in the most efficient way, e.g. is it possible to avoid the second loop? NOTE: the Python code is presented just to show how the algorithm works. UPDATE: here is an example of how it works in Python:

        from math import sin, pi   # needed for the test signal

        x = []
        y = []
        hold = []
        num_points = 20
        points_inbetween = 2

        temp1, temp2 = 0, 0

        for i in range(num_points):
            x.append( sin(i*2.0*pi * 0.1) )

        L = points_inbetween
        iL = 1.0 / L
        for i in x:
            hold = [i - temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    Let's say x = [.... 10, 20, 30 ....]. Then, if L = 1, it will produce [... 10, 15, 20, 25, 30 ...]
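
    On the "can the second loop be avoided" point: each input sample still has to produce L output samples, so L accumulations per input are unavoidable, but the two nested loops can be fused into a single loop over output samples, which translates line for line into ANSI C. A sketch of that restructuring (kept in Python so it stays comparable with the code above):

        def interpolate(x, L):
            """Hold-buffer linear interpolator: one add and one multiply per output sample."""
            y = []
            iL = 1.0 / L
            prev = 0.0      # previous input sample (the hold register)
            acc = 0.0       # running sum (the integrator)
            step = 0.0
            for n in range(len(x) * L):
                if n % L == 0:              # every L-th output, pull in the next input sample
                    cur = x[n // L]
                    step = cur - prev
                    prev = cur
                acc += step
                y.append(acc * iL)
            return y

    In C, y becomes a preallocated double array indexed by n, and the if branch is the natural place to advance the input pointer.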

    Read the article

  • What does Ubuntu do when I signal undocking to a laptop?

    - by Seppo Erviälä
    It seems that Ubuntu runs some script or command when I signal that I want to undock my laptop by pressing the undock button on the dock. The most visible thing that happens is that the resolution on the external display changes. After preparing for undock, the laptop is still connected to power, the VGA output, and the audio jacks through the dock, but not to any USB devices or the optical drive. I'm running 11.04 on a ThinkPad X61s with an X6 UltraBase. What happens when I signal undocking? This is what dmesg says after pressing the undock button:

        [81459.990682] ata1.00: disabled
        [81459.990727] ata1.00: detaching (SCSI 0:0:0:0)
        [81459.991722] ACPI: \_SB_.GDCK - undocking
        [81460.009462] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
        [81460.020252] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xfe226c00-0xfe226fff] (PCI address [0xfe226c00-0xfe226fff])
        [81460.020265] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
        [81460.020281] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
        [81460.020309] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900102)
        [81460.020338] ehci_hcd 0000:00:1a.7: PME# disabled
        [81460.020346] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
        [81460.020352] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
        [81460.020363] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 22 (level, low) -> IRQ 22
        [81460.020372] ehci_hcd 0000:00:1a.7: setting latency timer to 64
        [81460.020432] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
        [81460.040071] ehci_hcd 0000:00:1d.7: BAR 0: set to [mem 0xfe227000-0xfe2273ff] (PCI address [0xfe227000-0xfe2273ff])
        [81460.040085] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
        [81460.040104] ehci_hcd 0000:00:1d.7: restoring config space at offset 0xf (was 0x400, writing 0x40b)
        [81460.040133] ehci_hcd 0000:00:1d.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900102)
        [81460.040170] ehci_hcd 0000:00:1d.7: PME# disabled
        [81460.040178] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
        [81460.040184] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
        [81460.040195] ehci_hcd 0000:00:1d.7: PCI INT D -> GSI 19 (level, low) -> IRQ 19
        [81460.040204] ehci_hcd 0000:00:1d.7: setting latency timer to 64
        [81460.040503] ehci_hcd 0000:00:1d.7: PCI INT D disabled
        [81460.040552] ehci_hcd 0000:00:1d.7: PME# enabled
        [81460.061657] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D3
        [81460.200414] usb 1-4: USB disconnect, address 14
        [81462.220088] ehci_hcd 0000:00:1a.7: PCI INT C disabled
        [81462.220169] ehci_hcd 0000:00:1a.7: PME# enabled
        [81462.240115] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D3

    Read the article

  • How to increase the signal/range of your antenna-less Wi-Fi repeater/booster?

    - by kenorb
    I have a BT Home Hub in the upper flat (2-3 walls away), and I'm using a WPS Wireless-N Wi-Fi range repeater/extender in my flat, where I use my laptop. These are antenna-less devices. Are there any life-hack tricks to increase the signal/range of my repeater without buying a new, more powerful one? I've already tried moving the repeater closer to the ceiling and putting aluminium foil underneath it, but that didn't help. Are there any methods, specific plates, or materials that can boost the signal?

    Specification:

        Model: WN518W2
        Frequency range: 2.4-2.4835 GHz
        Wireless transmit power: 14~17 dBm (typical)
        Wireless signal rates with automatic fallback: 11n: up to 300 Mbps (dynamic); 11g: up to 54 Mbps (dynamic); 11b: up to 11 Mbps (dynamic)
        Modulation technology: DBPSK, DQPSK, CCK, OFDM, 16-QAM, 64-QAM
        Receiver sensitivity: 300M: -68dBm@10% PER / 150M: -68dBm@10% PER / 108M: -68dBm@10% PER / 54M: -68dBm@10% PER / 11M: -85dBm@8% PER / 6M: -88dBm@10% PER / 1M: -90dBm@8% PER
        Product dimensions: 11 x 6 x 7 cm

    Read the article

  • How to solve "Error processing /usr/lib/python2.7/dist-packages/pygst.pth"?

    - by ChitKo
    Error processing line 1 of /usr/lib/python2.7/dist-packages/pygst.pth:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/site.py", line 161, in addpackage
            if not dircase in known_paths and os.path.exists(dir):
          File "/usr/lib/python2.7/genericpath.py", line 18, in exists
            os.stat(path)
        TypeError: must be encoded string without NULL bytes, not str

    Remainder of file ignored

    Error processing line 1 of /usr/lib/python2.7/dist-packages/pygtk.pth:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/site.py", line 161, in addpackage
            if not dircase in known_paths and os.path.exists(dir):
          File "/usr/lib/python2.7/genericpath.py", line 18, in exists
            os.stat(path)
        TypeError: must be encoded string without NULL bytes, not str

    Remainder of file ignored

        Traceback (most recent call last):
          File "/usr/share/apport/apport-gtk", line 16, in <module>
            from gi.repository import GObject
          File "/usr/lib/python2.7/dist-packages/gi/importer.py", line 76, in load_module
            dynamic_module._load()
          File "/usr/lib/python2.7/dist-packages/gi/module.py", line 222, in _load
            version)
          File "/usr/lib/python2.7/dist-packages/gi/module.py", line 90, in __init__
            repository.require(namespace, version)
        gi.RepositoryError: Failed to load typelib file '/usr/lib/girepository-1.0/GLib-2.0.typelib' for namespace 'GLib': Invalid magic header

    Read the article

  • Can I use Ubuntu as a wireless media server which performs all decoding/processing server-side?

    - by AthloX
    I want to set up an Ubuntu 12.04 desktop as a home media server. I have a Windows 7 netbook, an Ubuntu 12.04 LTS laptop, and even a Samsung Galaxy Note tablet (Android), plus two desktops in another room that dual-boot Windows 7 and Ubuntu, and a Sharp AQUOS plasma TV connected over Wi-Fi. I want to install Ubuntu as a media server to stream audio/video files over Wi-Fi. Beyond that, I want the media server to use its own processing power to decode and stream, so that the remote end just plays the file without using its own resources. Is it possible to use Ubuntu as a media server that streams files without making the remote end do any of the processing? Only the Wi-Fi bandwidth and the media server's hardware should be used; the remote gadget should contribute only a speaker and a screen, not processing power. Any suggestions on whether this is possible?

    Read the article

  • Image filters for iPhone SDK development

    - by plsp
    Hi all, I am planning to develop an iPhone app which makes use of image filters like blurring, sharpening, etc. I noticed that there are a few approaches for this:

    1) Use OpenGL ES. I even found example code on the Apple iPhone dev site. How easy is OpenGL for somebody who has never used it? Can the image filters be implemented using the OpenGL framework?
    2) There is a Quartz demo posted on the Apple iPhone dev site as well. Has anybody used this framework for doing image processing? How does this approach compare to the OpenGL framework?
    3) Don't use OpenGL or the Quartz framework; basically access the raw pixels from the image and do the manipulation myself (see the sketch below).
    4) Make use of a custom-built image processing library like this one. Do you know of any other libraries like this one?

    Can anybody provide insights/suggestions on which option is the best? Your opinions are highly appreciated. Thanks!
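
    To make option 3 concrete: the raw-pixel route is just arithmetic over the pixel buffer, and on iOS the same loops would run over the CGImage or CVPixelBuffer bytes. A minimal sketch of a 3x3 box blur in Python/NumPy, shown only to illustrate the per-pixel math involved:

        import numpy as np

        def box_blur(pixels):
            """3x3 box blur over an H x W x C uint8 image array."""
            img = pixels.astype(np.float32)
            padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
            out = np.zeros_like(img)
            for dy in range(3):                     # accumulate the 3x3 neighbourhood
                for dx in range(3):
                    out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            return (out / 9.0).clip(0, 255).astype(np.uint8)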

    Read the article

  • GPGPU programming with OpenGL ES 2.0

    - by Albus Dumbledore
    I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1. I am able to write the GLSL fragment shader for processing the pixels, as I've been trying out different things in an effect designer app. I am not sure, however, how I should do the other part of the task. That is, I'd like to be working on the image in image coordinates and then outputting the result to a texture. I am aware of the gl_FragCoord variable. As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the fragment shader is applied once to each pixel in the image and the result is rendered to a texture. But how can I achieve that, considering there's depth that may make things somewhat awkward... I'd be very grateful if anyone could help me with this rather simple task, as I am really frustrated with myself. UPDATE: It seems I'll have to use an FBO, binding one like this: glBindFramebuffer(...)

    Read the article

  • SSAS Cube reprocessing fails - then succeeds if I try again

    - by EdgarVerona
    So I'm basically brand new to the concept of BI, and I've inherited an existing ETL process that is a two-step process:

    1) Load the data into a database that is only used by the cube processing
    2) Start the SSAS cube processing against said database

    It seems pretty well isolated, but occasionally (once a week, sometimes twice) it will fail with the following exception: "Errors in the OLAP storage engine: The attribute key cannot be found". Now the interesting thing is that:

    1) The dimension having the issue is not usually the same one (i.e. there's no single dimension that consistently has this failure)
    2) The source table, when I inspect it, does actually contain the attribute key that it says could not be found

    And, most interestingly...

    3) If I then immediately reprocess the dimensions and cubes manually through SSMS, they reprocess successfully and without incident.

    In both the aforementioned job and when I reprocess through SSMS, I am using "ProcessFull", so everything should be reprocessed completely. Has anyone run into such an issue? I'm scratching my head about it, because if it were a genuine data integrity issue, reprocessing the cube again wouldn't fix it. What on earth could be happening? I've been tasked with finding out why this happens, but I can neither reproduce it consistently nor point to a data integrity problem as the root cause. Thanks for any input you can provide!

    Read the article

  • How to pass dynamic parameters to .pde file

    - by Kalpana
    The class Shape contains two methods, drawCircle() and drawTriangle(). Each function takes a different set of arguments. At present, I invoke this by calling the .pde file directly. How do I pass these arguments from an HTML file directly if I have to control the arguments being passed to the draw functions?

    1) Example.html has (current version):

        <script src="processing-1.0.0.min.js"></script>
        <canvas data-processing-sources="example.pde"></canvas>

    2) Example.pde has:

        class Shape {
          void drawCircle(int x, int y, int radius) {
            ellipse(x, y, radius, radius);
          }
          void drawTriangle(int x1, int y1, int x2, int y2, int x3, int y3) {
            rect(x1, y1, x2, y2, x3, y3);
          }
        }

        Shape shape = new Shape();
        shape.drawCircle(10, 40, 70);

    I am looking to do something like this in my HTML files, so that I can move all the functions into a separate file and call them with different arguments to draw different shapes (much like you would in Java):

    A.html:

        <script>
          Shape shape = new Shape();
          shape.drawCircle(10, 10, 3);
        </script>

    B.html:

        <script>
          Shape shape = new Shape();
          shape.drawTriangle(30, 75, 58, 20, 86, 75);
        </script>

    I am also using Example2.pde, which has:

        void setup() {
          size(200, 200);
          background(125);
          fill(255);
        }

        void rectangle(int x1, int y1, int x2, int y2) {
          rect(x1, y1, x2, y2);
        }

    and my Example2.html has:

        var processingInstance;
        processingInstance.rectangle(30, 20, 55, 55);

    but this is not working. How can I pass these parameters dynamically from HTML?

    Read the article

  • Projective transformation

    - by mcwehner
    Given two image buffers (assume it's an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called "projective transformation". I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me". An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
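
    For the four-point case the standard recipe is the direct linear transform: fix the homography's bottom-right entry to 1, write two linear equations per point correspondence, and solve the resulting 8x8 system; the square output buffer is then filled by sampling the source quadrilateral through the inverse transform. A minimal sketch in Python/NumPy, used here only because it keeps the linear algebra visible; the same system can be solved with any matrix library:

        import numpy as np

        def homography(src, dst):
            """3x3 projective transform mapping 4 src (x, y) points onto 4 dst (u, v) points."""
            A, b = [], []
            for (x, y), (u, v) in zip(src, dst):
                A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
                A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
            h = np.linalg.solve(np.array(A, float), np.array(b, float))
            return np.append(h, 1.0).reshape(3, 3)

        def apply(H, x, y):
            u, v, w = H @ np.array([x, y, 1.0])
            return u / w, v / w          # the divide by w is what makes it projective

        # to fill the square window, loop over its pixels and sample the quadrilateral
        # in the source buffer through the inverse mapping, np.linalg.inv(H)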

    Read the article

  • BufferedImage & ColorModel in Java

    - by spol
    I am using an image processing library in Java to manipulate images. The first step is to read an image and create a java.awt.image.BufferedImage object. I do it this way:

        BufferedImage sourceImage = ImageIO.read( new File( filePath ) );

    The above code creates a BufferedImage object with a DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0. This is what happens when I run the above code on my MacBook. But when I run this same code on a Linux machine (hosted server), it creates a BufferedImage object with ColorModel: #pixelBits = 24 numComponents = 3 color space = java.awt.color.ICC_ColorSpace@c39a20 transparency = 1 has alpha = false isAlphaPre = false. And I use the same JPEG image in both cases. I don't know why the color model of the same image is different when run on Mac and Linux. The color model on the Mac has 4 components and the color model on Linux has 3 components. This causes a problem: the image processing library that I use assumes there are always 4 components in the color model of the image passed in, and it throws an array-out-of-bounds exception when run on the Linux box. On the MacBook, it runs fine. I am not sure if I am doing something wrong or there is a problem with the library. Please let me know your thoughts. Also, ask me any questions if I am not making sense!

    Read the article

  • How to Digitally Sign PDF files

    - by Sergio
    I have a digital certificate that identifies a user. I need to use it to digitally sign PDF files. Does anyone have an example that does not use a third-party component? I need to get this done, but it would be nice to fully understand how things are done. C# examples, please :)

    Read the article

  • Does Python/SciPy have a firls() replacement (i.e. weighted least-squares FIR filter design)?

    - by delicasso
    I am porting code from MATLAB to Python and am having trouble finding a replacement for the firls() routine. It is used for least-squares linear-phase Finite Impulse Response (FIR) filter design. I looked at scipy.signal and nothing there looked like it would do the trick. Of course, I was able to replace my remez and freqz algorithms, so that's good. On one blog I found an algorithm that implemented this filter without weighting, but I need one with weights. Thanks, David
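
    Since this question was written, SciPy has gained scipy.signal.firls (from SciPy 0.18 onward), which takes a MATLAB-style band/desired/weight specification. A minimal sketch, assuming a reasonably recent SciPy; the band edges and weights are arbitrary example values:

        import numpy as np
        from scipy.signal import firls, freqz

        # 73-tap linear-phase low-pass: passband 0-0.3, stopband 0.4-1.0
        # (frequencies normalized so that 1.0 is the Nyquist rate), with the
        # stopband error weighted 10x more heavily than the passband error
        taps = firls(73, [0.0, 0.3, 0.4, 1.0], [1.0, 1.0, 0.0, 0.0], weight=[1.0, 10.0])

        w, h = freqz(taps)
        print(taps[:5])
        print(np.max(20 * np.log10(np.abs(h[w > 0.4 * np.pi]))))   # worst stopband ripple in dB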

    Read the article

  • Is a signal sent with kill to a parent thread guaranteed to be processed before the next statement?

    - by Jonathan M Davis
    Okay, so I'm running in a child thread on Linux (using pthreads, if that matters), and I run the following command:

        kill(getpid(), someSignal);

    It will send the given signal to the parent of the current thread. My question: is it guaranteed that the parent will then immediately get the CPU and process the signal (killing the app if it's a SIGKILL, or doing whatever else if it's some other signal) before the statement following kill() is run? Or is it possible - even probable - that whatever command follows kill() will run before the signal is processed by the parent thread?

    Read the article
