Search Results

Search found 4461 results on 179 pages for 'pic audio'.

Page 152/179

  • How to use Speech 2 Text in Microsoft Surface

    - by Roflcoptr
    I'd like to use some speech-to-text in my Microsoft Surface application. I saw that it is possible, but I don't really know where to start. Is there any framework/library available, or a code snippet, or a tutorial? I don't even know exactly what I should google for ;) EDIT: I read that it is necessary to use a grammar to recognize words. So if I want to process free text, is there a predefined grammar for the English language? Or is it a better choice not to use speech-to-text at all and just record audio files instead?

    Read the article

  • Testing MPMoviePlayerViewController in iPad simulator

    - by hgpc
    I have a view that shows a MPMoviePlayerViewController modally. When testing it in the iPad simulator it works well on the first try. If I dismiss the video and then show the view again, the player only plays the audio, but not the video. Is this a simulator quirk or am I doing something wrong? Here's my code:

      - (void)viewWillAppear:(BOOL)animated {
          [super viewWillAppear:animated];
          MPMoviePlayerViewController* v = [[MPMoviePlayerViewController alloc] initWithContentURL:url];
          [[NSNotificationCenter defaultCenter] addObserver:self
                                                   selector:@selector(playbackDidFinish:)
                                                       name:MPMoviePlayerPlaybackDidFinishNotification
                                                     object:v.moviePlayer];
          [self presentMoviePlayerViewControllerAnimated:v];
          [v release];
      }

      - (void)playbackDidFinish:(NSNotification*)aNotification {
          MPMoviePlayerController *player = [aNotification object];
          [[NSNotificationCenter defaultCenter] removeObserver:self
                                                          name:MPMoviePlayerPlaybackDidFinishNotification
                                                        object:player];
          [player stop];
          [self dismissMoviePlayerViewControllerAnimated];
      }

    Read the article

  • Videoconference using Flash and SIP

    - by Júlio Santos
    The front-end will be Flash, to run in a browser and have access to the camera. I must use SIP to control the sessions. How could I do this? Would a Red5 server and a MjSip server do the trick? As in, I'd use MjSip to set up the session and warn users about calls, and Red5 to stream the video and audio? Any suggestions? Note: only 1-on-1 conferencing is required.

    Read the article

  • DSP - How are frequency amplitudes modified using DFT?

    - by Trap
    I'm trying to implement a DFT-based equalizer (not FFT) for the sole purpose of learning. To check if it works I took an audio signal, analyzed it and then resynthesized it again with no modifications made to the frequency spectrum. So far so good. Now I tried to silence some frequency bands, just by setting their amplitudes to zero before resynthesis, but that is definitely not the way to go: what I get is a rather distorted signal. I'm using the so-called 'standard way of calculating the DFT', which is by correlation. I first tried to modify the real part amplitudes only, then tried modifying both the real and imaginary part amplitudes. I also tried to convert the DFT output to polar notation, modify the magnitude and convert back to rectangular notation, but none of this is working. Can someone show me what I'm doing wrong? I tried to find info on this subject on the internet but couldn't find any. Thanks in advance.
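
    Not an answer from the article, just a minimal NumPy sketch of one common pitfall: for a real input signal, DFT bin k and bin N-k are complex conjugates, so a band has to be zeroed in both halves of the spectrum at once (real and imaginary parts together), otherwise the resynthesized signal comes out distorted.

      import numpy as np

      def silence_band(signal, sample_rate, lo_hz, hi_hz):
          n = len(signal)
          spectrum = np.fft.fft(signal)                  # complex spectrum (real + imaginary parts)
          freqs = np.fft.fftfreq(n, d=1.0 / sample_rate) # bin centers, positive and negative
          # Zeroing every bin whose |frequency| lies in the band hits bin k and its
          # mirror bin N-k together, so the spectrum stays conjugate-symmetric.
          band = (np.abs(freqs) >= lo_hz) & (np.abs(freqs) <= hi_hz)
          spectrum[band] = 0.0
          return np.fft.ifft(spectrum).real              # imaginary residue is only rounding error

      # e.g. cleaned = silence_band(samples, 44100, 300.0, 3000.0)

    Even with the symmetry handled, an abrupt rectangular cut still rings at block edges; windowed overlap-add between blocks is the usual remedy.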

    Read the article

  • C#: How to Present Such a Question?

    - by ikurtz
    Greetings! I have a C# game program that I'm developing. It uses sound samples and Winsock. When I test run the game most of the audio works fine, but from time to time, if multiple samples are played sequentially, the application form shakes a little bit and then goes back to its old position. How do I go about debugging this or presenting it to you folks in a manageable manner? I'm sure no one is going to want the whole app code, for fear of virus attacks. Please guide me, thank you. EDIT: I have not been able to pin down any code section that produces this result. It just does, and I cannot explain it. EDIT: No, the X/Y position is not changing. The window just shakes around a few pixels and then goes back to the position it was in before the shake.

    Read the article

  • How can HTML5 "replace" Flash?

    - by Kassini
    A topic of debate that's seen a resurgence since the unveiling of the iPad is the issue of Flash versus HTML5. There are those who suggest that HTML5 will one day supplant/replace Adobe Flash. I do not develop software that runs in a browser, so my (limited) understanding is: HTML is a pure-text markup language that is delivered over HTTP to a client browser. The client browser interprets the markup and renders (with varying degrees of success) the page according to a standard specification. Adobe Flash is a proprietary framework for working with audio, video and raster/vector graphics. It requires special authoring tools (a compiler perhaps?) and a custom player that's available as a plug-in for most common browsers. Could someone please explain (to this C/C++ developer) how it is possible, from a technical/coding point of view, that a text-based markup language (HTML5) could be considered a replacement for a multimedia framework (Flash)? Please no opinionated arguments - just technical facts.

    Read the article

  • Python File Meta Tag reading

    - by Jeff
    Anyone know of a Python module that can pull tag data from multiple media formats? I'm trying to build an app that allows for manipulation of ASF (Windows Media Player files, i.e. WMA, WMV, etc.), ID3, including both ID3v1 and ID3v2 (MPEG files, i.e. MP3), MPEG Audio Bit Stream (i.e. ABS, MP1, MP2, MP3), MPEG Program Stream (MPEG movies, and DVD and HD DVD video discs, i.e. MPG, MPEG, VOB, EVO), and ISO Base Media File Format (e.g. QuickTime, MPEG-4 and iTunes AAC files, i.e. QT, MOV, MP4, M4A, M4B, M4P, M4V, etc.). I don't need ALL of that, just most standard consumer formats like MOV and MPEG. I can't seem to find a good module or library to support that. Any recommendations?
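
    One third-party library that seems to cover most of this list is mutagen (not mentioned in the article, so treat it as a hedged suggestion); it reads ID3v1/v2, ASF/WMA, MP4/M4A and Ogg-style containers through a single File entry point. A quick sketch with placeholder file names:

      from mutagen import File

      for path in ("movie.mp4", "song.mp3", "clip.wma"):
          media = File(path)            # picks a format-specific class, or None if unrecognized
          if media is None:
              print(path, "-> unrecognized container")
              continue
          print(path, "->", round(media.info.length, 1), "seconds")
          print(media.pprint())         # dump whatever tags the format exposes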

    Read the article

  • Reading iTunesMovies file in iPhone?

    - by raziiq
    Hi there. On the iPhone, the iPod app saves the media files (audio, video) with strange names and in weird folders (F00, F01, etc.). There is a file named iTunesMovies on the iPhone which contains all the information about the metadata of those video files and how they are to be displayed in the iPod app. I copied it to my Mac, and when I tried to open the file in TextEdit it showed some alien characters, which made me believe that it may be encrypted (that's just a wild guess). I want to read/change the contents of that iTunesMovies file. Can I do that? Is there any framework which deals with the iTunesMovies file? Thanks in advance.

    Read the article

  • How can I find the program making a harmonica sound?

    - by Josh
    A friend has a Windows XP SP3 machine that plays a harmonica sound for about 5 seconds throughout the day at what seem to be random intervals (every couple of hours). My question is: how can I find the program making this sound? Is there a Windows API hook for monitoring audio access? I've gone through and checked all the standard Windows sounds in the Control Panel; right now the theme is set to no sounds and I personally checked to make sure none of the events have a sound specified. I also checked the Task Scheduler to make sure there wasn't something scheduled to go off every couple of hours. Any ideas on how to go about finding the bugger?

    Read the article

  • Set a OGG in raw folder as Ringtone/Notification?

    - by YaW
    Hi, I have some OGG audio files in my raw folder and I'm trying to set one of them as a ringtone (or notification, alarm... whatever). I've been looking at the source code of RingDroid and I can see how this is done using ContentValues and MediaStore, but in all the examples I've seen the audio files are on the SD card. Is it possible to set the ringtone directly from the raw folder? If not, how can I make a copy of the raw file to a folder on the SD card? Thanks in advance.

    Read the article

  • How to send Sound Stream of a file from disk over network using FMOD?

    - by chris
    Hey everyone, I'm currently working on a project in college. My application should do some things with audio files from my computer. I'm using FMOD as the sound library. The problem I have is that I don't know how to access the data of a sound file (which was opened and started using the FMOD methods) to stream it over the network for playback on another PC on the net. Does anyone have a similar problem? Any help is appreciated. Thanks in advance. Chris

    Read the article

  • Can't get post_max_size PHP variable set on Lunar Pages

    - by Behrooz Karjooravary
    I need to increase the maximum post size and upload size for PHP in order to use the audio module of Drupal. I read that this has to be set in php.ini; however, I don't think I have access to that file on Lunar Pages. I also read that it can be set in .htaccess, but that doesn't change anything. I tried:

      php_value post_max_size "40M"
      php_value upload_max_filesize "40M"

    I also tried:

      php_value post_max_size 40M
      php_value upload_max_filesize 40M

    On localhost it says to restart the web server, but this is not possible on a shared host. Could that be the problem?

    Read the article

  • Crashes when using AVAudioPlayer on iPhone

    - by mindthief
    Hi all, I am trying to use AVAudioPlayer to play some sounds in quick succession. When I invoke the sound-playing function less frequently so that the sounds play fully before the function is invoked again, the application runs fine. But if I invoke the function quickly in rapid succession (so that sounds are played while the previous sounds are still being played), the app eventually crashes after ~20 calls to the function, with the message "EXC_BAD_ACCESS". Here is code from the function:

      NSString *nsWavPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:wavFileName];
      AVAudioPlayer* theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:nsWavPath] error:NULL];
      theAudio.delegate = self;
      [theAudio play];

    As mentioned in another thread, I implemented the following delegate function:

      - (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
          if (!flag)
              NSLog(@"audio did NOT finish successfully\n");
          [player release];
      }

    But the app still crashes after around ~20 rapid calls to the function. Any idea what I'm doing wrong?

    Read the article

  • Best Design pattern for social media file transfer

    - by Onema
    Our system would like our clients to link their accounts with different social media sites like YouTube, Vimeo, Facebook, MySpace and so on. One of the benefits we would like to give the user is to transfer, update and delete files they have uploaded to our sites and transfer them to the social media sites mentioned above. These files could be videos, images or audio. We started thinking about using a strategy pattern, as all of these sites share a common process (authentication, connection, use the API to transfer/edit/delete the file), but we soon realized that it may not work, as we may want to use some of the extended functionality that is specific to each service (e.g. associate a YouTube video with a channel, or upload images to a specific album on Facebook, and much, much more...). My question is: what would be the best structural design pattern to use for this scenario?
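
    A rough Python sketch (all class and method names are hypothetical, and this is one possibility rather than the recommended pattern) of keeping a shared strategy-style interface while still exposing service-specific extras on the concrete classes:

      from abc import ABC, abstractmethod

      class MediaService(ABC):
          """Common workflow every service must support."""
          @abstractmethod
          def authenticate(self, credentials: dict) -> None: ...
          @abstractmethod
          def upload(self, local_path: str) -> str: ...
          @abstractmethod
          def delete(self, remote_id: str) -> None: ...

      class YouTubeService(MediaService):
          def authenticate(self, credentials: dict) -> None: print("youtube auth")
          def upload(self, local_path: str) -> str: return "yt-video-id"
          def delete(self, remote_id: str) -> None: print("deleted", remote_id)
          # Extension beyond the shared interface:
          def assign_to_channel(self, remote_id: str, channel: str) -> None:
              print("assigned", remote_id, "to", channel)

      def publish(service: MediaService, path: str, credentials: dict) -> str:
          # Callers that only need the common workflow stay service-agnostic...
          service.authenticate(credentials)
          remote_id = service.upload(path)
          # ...and opt into extras only when they know the concrete type.
          if isinstance(service, YouTubeService):
              service.assign_to_channel(remote_id, "my-channel")
          return remote_id

    The isinstance branch is the contentious part; pushing the extras into service-specific options objects or small capability interfaces is a common alternative.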

    Read the article

  • Make two servers talk to each other

    - by Maksim
    I have an application written in GWT and hosted on Google App Engine/Java. In this application the user will have an option to upload video/audio/text files to the server. Those files could be big, up to 1 GB or so, and because GAE/J does not support large files I have to use another server to store them. This would be easy to implement if there were no cross-domain security feature in browsers. So, what I'm thinking is to make the GAE server talk to my server (Glassfish, or any other Java server if needed) to tell it the URL of the file and, if possible, send the status of the uploaded file (what percentage was uploaded) so I can show the status on the client's screen. Here is what I'm thinking of doing: when the user loads the GWT page that is stored on GAE/J, he/she will upload the file to my server, then my server will send a response back to GAE and GAE will send a response to the client. If this scenario is possible, what would be the best way to implement the GAE-to-Glassfish conversation?
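
    The servers in the question are Java, but the callback idea itself is language-agnostic; here is a minimal Python sketch of it (the endpoint URL and field names are made up) in which the file server reports progress to the GAE app over plain HTTP, and the GAE app relays that status to the browser.

      import json
      import urllib.request
      from typing import Optional

      GAE_STATUS_ENDPOINT = "https://example.appspot.com/upload-status"  # hypothetical URL

      def report_progress(upload_id: str, percent: int, file_url: Optional[str] = None) -> None:
          payload = json.dumps({
              "uploadId": upload_id,
              "percent": percent,
              "fileUrl": file_url,   # filled in once the upload has completed
          }).encode("utf-8")
          request = urllib.request.Request(
              GAE_STATUS_ENDPOINT,
              data=payload,          # supplying data makes this a POST
              headers={"Content-Type": "application/json"},
          )
          urllib.request.urlopen(request, timeout=10).close()

      # The file server would call report_progress("abc123", 40) periodically during the
      # upload, and report_progress("abc123", 100, "http://files.example.com/abc123") at the end.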

    Read the article

  • showSettings callback in Flex?

    - by Jim Robert
    I am pretty new to Flex, so forgive me if this is an obvious question. Is there a way to open Security.showSettings (flash.system.Security) with a callback, or at least to detect whether it is currently open or not? My Flex application is used for streaming audio and is normally controlled by JavaScript, so I keep it hidden during normal use (by absolutely positioning it off the page). When I need microphone access I need to make the Flash settings dialog visible, which works fine; I move it into view and open the dialog. When the user closes it, I need to move it back off the screen so they don't see an empty Flex app sitting there after they change their settings. Thanks :)

    Read the article

  • Do you know a good and efficient FFT?

    - by yan bellavance
    Hi, I am trying to find a very fast and efficient Fourier transform (FFT) implementation. Does anyone know of any good ones? I need to run it on the iPhone, so it must not be intensive. Instead, maybe you know of one that is wavelet-like; I need frequency resolution but only in a narrow band (the vocal audio range, up to 10 kHz max... even 10 kHz might be too high). I'm also thinking of truncating this FFT to keep the frequency resolution while eliminating the unwanted frequency band. This is for an iPhone... I have taken a look at the FFT in aurioTouch, but it seems this is an int FFT while my app uses floats. Would it give a big performance increase to try and adapt the program to an int FFT or not (which I really don't feel like doing... plus aurioTouch uses a radix-2 FFT, which is not that great)?
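
    Not from the article, but since only a narrow band is needed, the Goertzel algorithm is worth knowing: it evaluates the DFT at individual frequencies, so you pay only for the bins you care about instead of a full transform. A small pure-Python sketch:

      import math

      def goertzel_power(samples, sample_rate, freq_hz):
          """Signal power at (approximately) freq_hz for one block of samples."""
          n = len(samples)
          k = round(n * freq_hz / sample_rate)        # nearest DFT bin
          w = 2.0 * math.pi * k / n
          coeff = 2.0 * math.cos(w)
          s_prev, s_prev2 = 0.0, 0.0
          for x in samples:
              s = x + coeff * s_prev - s_prev2
              s_prev2, s_prev = s_prev, s
          return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

      # Usage idea: sweep goertzel_power over, say, 50 Hz steps up to 10 kHz to get a
      # band-limited spectrum without ever computing the bins above the band.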

    Read the article

  • Looking for a component (.NET or COM/ActiveX) that can play AVI files in a WinForms app

    - by MusiGenesis
    I'm looking for something like the Windows Media Player control that can be hosted on a form. The WMP doesn't work for me because I need a control that can play a continuously-appended playlist of AVI files in sequence, so that the transition from one file to the next happens seamlessly (i.e. without any glitches or pauses in the video and audio). With WMP, there's always a delay between files of half a second or so. Does anyone know of a control (it can be either commercial or open-source) that can do this? I assume anything like this wraps DirectX, and that's OK too.

    Read the article

  • OpenAL doesn't work when using AVAudioRecorder and AVAudioPlayer

    - by Pawe
    Hi, I have been troubled by an audio problem for several days. I don't think OpenAL gets along with the AVAudio functions. I have my own OpenAL class (it wraps the MyOpenAL class). My app starts to record using AVAudioRecorder. I stop recording, and then I click the "OpenAL Play" button, which plays a sound using OpenAL, but I can't hear it. However, I can hear my recording when I click the "AVAudioPlayer Play" button, which uses AVAudioPlayer. I tested the oalTouch, avTouch and SpeakHere sample code and they gave the same result. Does OpenAL have a problem being used together with the AVAudio functions? I was googling for a long time but couldn't find a solution or anyone reporting the same problem. Thanks for reading.

    Read the article

  • problem with asf writer

    - by hatham
    I'm trying to encode raw data (both video frames and audio samples) into an .asf file, using the ASF writer filter in DirectShow. My filter graph structure: raw_send_filter -> ASF writer filter. raw_send_filter implements CBaseFilter and CBaseOutputPin. It plays the role of a source filter which gets raw data and then delivers it to the ASF writer filter. The process follows these steps:

    1. Get a delivery buffer (returned into "sample") using CBaseOutputPin::GetDeliveryBuffer
    2. sample->GetPointer(&buffer);
    3. Set the time stamp (with frame rate = 30 fps)
    4. Deliver the sample

    The problem is that after encoding some raw data, I cannot deliver any more. I can encode an .avi file this way, using the AVI mux filter. Can you tell me why I cannot deliver samples after encoding some? Thanks.

    Read the article

  • remuxing m4v files from h264 AVI files

    - by crankharder
    I have a bunch of, I think, x264-encoded AVIs that I'd like to convert to m4v so that I can play them with QuickTime. Here's how I created them. First I dump the VOB from the DVD with this:

      $ mplayer -dumpstream -dumpfile new.vob dvd://1

    Then I compress it:

      $ mencoder -oac copy -o new.avi -ovc x264 -x264encopts crf=18 new.vob

    I tried doing this to convert them to m4v, but it's blowing up. I tried dumping the h264/AAC streams:

      $ mplayer new.avi -dumpvideo -dumpfile new.h264
      $ mplayer new.avi -dumpaudio -dumpfile new.acc

    And remuxing(?) with MP4Box, but I'm getting an error:

      $ MP4Box -add new.h264#video -add new.aac#audio new.m4v
      Cannot find H264 start code
      Error importing new.h264#video: BitStream Not Compliant

    So I'm not sure what to do now...

    Read the article

  • Best solution for a comment table for multiple content types

    - by KRTac
    I'm currently designing a comments table for a site I'm building. Users will be able to upload images, link videos and add audio files to their profile. Each of these types of content must be commentable. Now I'm wondering what the best approach to this is. My current options are:

    1. Have one big comments table and a link table for every content type (comments_videos, ...) with comment_id and _id.
    2. Have the comments separated by the type of content they're for, so each type of content would have its own comments table holding the comments for that type.
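
    To make the shape of option 1 concrete, here is a small sketch using Python's built-in sqlite3 (table and column names are illustrative only, not taken from the article):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE comments (
              comment_id INTEGER PRIMARY KEY,
              user_id    INTEGER NOT NULL,
              body       TEXT    NOT NULL,
              created_at TEXT    DEFAULT CURRENT_TIMESTAMP
          );
          -- One thin link table per content type:
          CREATE TABLE comments_videos (
              comment_id INTEGER NOT NULL REFERENCES comments(comment_id),
              video_id   INTEGER NOT NULL
          );
          CREATE TABLE comments_images (
              comment_id INTEGER NOT NULL REFERENCES comments(comment_id),
              image_id   INTEGER NOT NULL
          );
      """)

      # All comments for video 42:
      rows = conn.execute("""
          SELECT c.body FROM comments c
          JOIN comments_videos cv ON cv.comment_id = c.comment_id
          WHERE cv.video_id = 42
      """).fetchall()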

    Read the article

  • What is Boost missing?

    - by Robert Gould
    After spending most of my waking time on Stack Overflow, for better or for worse, I've come to notice how 99% of the C++ questions are answered with "use boost::wealreadysolvedyourproblem", but there must be a few areas Boost doesn't cover that it would be better off covering. So what features is Boost missing? I'll start by saying:

    boost::sql (although SOCI should try to become a legal part of Boost)
    boost::json (although TinyJSON should try to become a legal part of Boost)
    boost::audio (no idea about a good Boost-like C++ library)

    PS: The purpose is to compile a reasonable list, and hopefully find Boost-like solutions out there that aren't yet a part of Boost, so no silly stuff like boost::turkey please.

    Read the article

  • How do I take advantage of Android's "Clear Cache" button

    - by Jay Askren
    In Android's settings, in the "Manage Applications" activity when clicking on an app, the data is broken down into Application, Data, and cache. There is also a button to clear the cache. My app caches audio files and I would like the user to be able to clear the cache using this button. How do I store them so they get lumped in with the cache and the user can clear them? I've tried storing files using both of the following techniques:

      newFile = File.createTempFile("mcb", ".mp3", context.getCacheDir());

      newFile = new File(context.getCacheDir(), "mcb.mp3");
      newFile.createNewFile();

    In both cases, these files are listed as Data and not Cache.

    Read the article

  • ruby-gstreamer doesn't send EOS message

    - by Cheba
    I've managed to make it play sound, but it never gets an EOS message, and thus the script never exits.

      require 'gst'

      main_loop = GLib::MainLoop.new

      pipeline = Gst::Pipeline.new "audio-player"
      source = Gst::ElementFactory.make "filesrc", "file-source"
      source.location = "/usr/share/sounds/gnome/default/alerts/bark.ogg"
      decoder = Gst::ElementFactory.make "decodebin", "decoder"
      conv = Gst::ElementFactory.make "audioconvert", "converter"
      sink = Gst::ElementFactory.make "alsasink", "output"

      pipeline.add source, decoder, conv, sink
      source >> decoder
      conv >> sink

      decoder.signal_connect "pad-added" do |element, pad, data|
        pad >> conv['sink']
      end

      pipeline.bus.add_watch do |bus, message|
        puts "Message: #{message.inspect}"
        case message.type
        when Gst::Message::Type::ERROR
          puts message.structure["debug"]
          main_loop.quit
        when Gst::Message::Type::EOS
          puts 'End of stream'
          main_loop.quit
        end
      end

      pipeline.play

      begin
        puts 'Running main loop'
        main_loop.run
      ensure
        puts 'Shutting down main loop'
        pipeline.stop
      end

    Read the article
