Search Results

Search found 5991 results on 240 pages for 'html5 audio'.


  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (I don't need transitions, effects, etc.). Basically, pitivi would do what I want - except that I use 640x480 30 fps AVIs from a camera, and as soon as I put in more than a couple of minutes of that kind of material, pitivi starts freezing on preview and thus becomes unusable.

    So I started looking for a command-line tool for Linux; so far I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are candidates, but I cannot find examples of the use I have in mind.

    Basically, I'd imagine there are encoder and player tools (like ffmpeg vs ffplay, or mencoder vs mplayer) such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - pseudocode would look like:

        videnctool -compose \
          --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 \
          --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 \
          --file=mypicture.png --duration=00:00:02:00 \
          --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 \
          --output=editedvid.avi

    ... or it could take a "playlist" text file, like:

        vid1.avi       00:00:30:12  00:01:45:00
        vid2.avi       00:05:00:00  00:07:12:25
        mypicture.png  -            00:00:02:00
        vid3.avi       00:02:00:00  00:02:45:10

    ... so it could be called with:

        videnctool -compose --playlist=playlist.txt --output=editedvid.avi

    The idea here is that all of the videos are in the same format, allowing the tool to avoid transcoding and just do a "raw copy" instead (as with mencoder's copy codec: "-oac copy -ovc copy") - or, failing that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.

    The thing is, as far as I can see, mencoder and ffmpeg operate on individual files; e.g. they can cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but again only within a single file). Which implies I have to cut pieces from the individual files first (each of which would demand its own temporary file on disk), and then join them into a final video file.

    I would then imagine that there is a corresponding player tool, which can read the same command-line option format / playlist file as the encoding tool - except it would not generate an output file, but instead play the video; e.g. in pseudocode:

        vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13

    ... and, given enough memory, it would generate a low-res video preview in RAM and play it back in a window, while offering some limited interaction (like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and to include any file that may end up in that region of the playlist.

    Thus, the end result of all this would be: command-line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering the final output... which I myself think would be nice. So, while I think all of the above may be a bit of a stretch - does anything exist that would approximate the workflow described above?
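    (As an aside, ffmpeg's concat demuxer gets fairly close to the playlist idea above. A minimal sketch, assuming a reasonably recent ffmpeg build where the concat demuxer and its inpoint/outpoint directives are available - and noting that with stream copy the cuts snap to keyframes rather than being frame-exact:)

        # playlist.txt - times are in seconds, not HH:MM:SS:FF
        file 'vid1.avi'
        inpoint 30.5
        outpoint 105.0
        file 'vid2.avi'
        inpoint 300.0
        outpoint 432.8

        # join with a raw stream copy, no transcoding
        ffmpeg -f concat -safe 0 -i playlist.txt -c copy editedvid.avi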

    Read the article

  • How to diagnose and solve an erratic "HDCP Support Required"?

    - by Jom Orgstrom
    I am playing a digital TV broadcast in Windows Media Center on Windows 7. I built this system so it works with HDCP, and in fact I have been able to watch TV and Blu-ray before with this same computer. However, I suddenly started getting an "HDCP Support Required" error from WMC. The entire message is as follows:

        HDCP Support Required
        High-bandwidth Digital Content Protection (HDCP) may not be supported by the
        current video card. Use an HDCP-compliant display, video card, and video driver.
        Or, connect using an analog connection such as component or VGA.

    Relevant specs are:

        CPU: Ivy Bridge Core i7-3770
        Motherboard: Asus P8H77-I
        Memory: 16GB DDR3-1600
        Graphics: Radeon HD 7850 (driver by AMD, version 8.982.0.0, built on 2012/07/27)
        Display: Acer P243w connected by HDMI
        Sound: Roland Quad-Capture (it complains even when I use the bundled VIA HD Audio)
        TV Tuner: I-O Data GV-MC7/HZ3
        OS: Windows 7 Professional SP1, Windows Update enabled; all patched and up to date

    As you can see, there is nothing weird or old about my setup. I am also not doing anything strange - no overclocking, no odd system changes, and so on. One thing that does happen from time to time is that the display goes black for a few seconds (sometimes when watching media content, sometimes when just using Photoshop or Visual Studio). This happened with my previous setup as well, so I'd be inclined to think it is a display or cable issue (apart from the BD drive, these are the only things I kept from my previous setup). But being a digital transfer, as far as I know, these things either work or they don't - never erratically or with decreased quality.

    The thing is that sometimes I can watch TV and sometimes I can't. This happens with recorded programs as well, so it's not a per-program thing. Sometimes rebooting helps, sometimes it doesn't. Sometimes unplugging and replugging the HDMI connector helps, sometimes it doesn't. Sometimes doing so doesn't even turn the screen back on, so I have to reboot.

    Unfortunately, WMC's error message is quite unhelpful. I'd like to know exactly where the problem is, so I can solve it. I don't want to buy a brand-new display just to then find out it was a misconfigured registry setting. I've tried looking at the system event viewer, but these errors don't show up there at all. Other people who have this problem seem to have setups that are not HDCP compliant, so I turn to you guys here. Does anybody know how to diagnose this problem?

    Edit: So I got the CyberLink Blu-ray Disc Advisor. I ran it and it told me everything was okay, except for the Video Connection Type, which showed as "Digital (without HDCP)". I then proceeded to unplug the power cable from the monitor, plugged it in again, ran the tool again, and now it's "Digital (with HDCP)". Needless to say, I can watch my TV and recorded programs in WMC again.

    I'm guessing that at some point something goes slightly wrong with the HDCP setup, and Windows decides to reset the entire content-protection path (which leads to the screen blanking out). Usually the reset succeeds, but sometimes it doesn't, so Windows defaults to turning HDCP off. There's no way to turn it back on, except by doing a hard reset of the display. I really want to know what the exact error was, so I can fix it. Is it the cable? Is it the display? Is it the video card? The driver? Also, is there any other way to try to turn HDCP on again without having to hard-reset the display? Oh, questions, questions...

    Read the article

  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using ImageMagick and FFmpeg. I use ImageMagick to expand a single photo into 30 fps video (ImageMagick also handles things like putting some text captions on the frames along the way). When I let ffmpeg digest it into a video, it clips along nicely on the color parts of the video, but when it gets to a black-and-white section it reports:

        frame= 2030 fps=102 q=32766.0 Lsize=    5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703

    ... and drops every frame of video until it hits something with color. As you can imagine, this results in entire photos being removed from the slideshow. Here is my latest dump:

        ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4
        ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers
          built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
          configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb
            --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis
            --enable-libx264 --enable-nonfree --enable-version3
          libavutil      52. 12.100 / 52. 12.100
          libavcodec     54. 79.101 / 54. 79.101
          libavformat    54. 49.100 / 54. 49.100
          libavdevice    54.  3.102 / 54.  3.102
          libavfilter     3. 26.101 /  3. 26.101
          libswscale      2.  1.103 /  2.  1.103
          libswresample   0. 17.102 /  0. 17.102
          libpostproc    52.  2.100 / 52.  2.100
        Input #0, image2, from 'teststream/%06d.jpg':
          Duration: 00:12:02.80, start: 0.000000, bitrate: N/A
            Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
        [libx264 @ 0x3450140] using SAR=1/1
        [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
        [libx264 @ 0x3450140] profile High, level 3.0
        [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
        Output #0, mp4, to 'newffmpeg.mp4':
          Metadata:
            encoder         : Lavf54.49.100
            Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (mjpeg -> libx264)
        Press [q] to stop, [?] for help
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444p
        frame= 2030 fps=102 q=32766.0 Lsize=    5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703
        video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425%
        [libx264 @ 0x3450140] frame I:9     Avg QP:20.10  size: 33933
        [libx264 @ 0x3450140] frame P:636   Avg QP:24.12  size:  6737
        [libx264 @ 0x3450140] frame B:1385  Avg QP:27.04  size:   514
        [libx264 @ 0x3450140] consecutive B-frames:  2.5% 15.2% 13.2% 69.2%
        [libx264 @ 0x3450140] mb I  I16..4:  8.3% 80.3% 11.5%
        [libx264 @ 0x3450140] mb P  I16..4:  1.5%  2.5%  0.2%  P16..4: 41.7% 18.0% 10.3%  0.0%  0.0%  skip:25.9%
        [libx264 @ 0x3450140] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8: 26.6%  0.6%  0.1%  direct: 0.2%  skip:72.3%  L0:35.0% L1:60.3% BI: 4.7%
        [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1%
        [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1%
        [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19%  6% 46%
        [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17%  5%  9% 10%  7%  8%  6%
        [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11%  5%  9% 10%  6%  6%  4%
        [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12%
        [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7%
        [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1%  4.3%  0.2%
        [libx264 @ 0x3450140] ref B L0: 88.7%  8.3%  3.0%
        [libx264 @ 0x3450140] ref B L1: 95.0%  5.0%
        [libx264 @ 0x3450140] kb/s:626.88
        Received signal 2: terminating.

    One last note: if I remove the -r 30 from the input and output, it works flawlessly. I have no idea why the -r 30 is causing it to freak out.
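    (For reference, the usual advice for image sequences is to set the input rate with the image2 demuxer's -framerate option rather than forcing -r on both sides, which triggers ffmpeg's duplicate/drop bookkeeping. A sketch - behavior may differ on a 2012 git build like the one above:)

        # hypothetical invocation: -framerate sets the *input* sequence rate;
        # -pix_fmt yuv420p is just the common compatibility choice for mp4
        ffmpeg -y -framerate 30 -i "teststream/%06d.jpg" \
               -c:v libx264 -pix_fmt yuv420p newffmpeg.mp4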

    Read the article

  • Trying to prevent Windows from hibernating/sleeping automatically

    - by user328821
    My Dell XPS 8700 (Win 7) suddenly began putting itself to sleep at 6pm daily, even if I'm typing. I don't know what caused this to occur, except possibly a Windows update that took place in the middle of the night. I initially went into the power settings and saw two plans set up, one from Dell and the other Windows' Power saver plan. I set both to never sleep and never hibernate, yet it still occurred. I have current drivers and a fairly new UPS whose software is set to shut down only after power loss. Dell is of little help - can anyone point me in the right direction?

    I did run powercfg -energy and came up with this:

        Power Efficiency Diagnostics Report

        Scan Time            2014-05-08T19:21:48Z
        Scan Duration        60 seconds
        System Manufacturer  Dell Inc.
        System Product Name  XPS 8700
        BIOS Date            08/23/2013
        BIOS Version         A04
        OS Build             7601
        Platform Role        PlatformRoleDesktop
        Plugged In           true
        Process Count        115
        Thread Count         1631
        Report GUID          {097caf99-039b-44c3-b154-d797bfbfdfcc}

        Analysis Results

        Errors

        Power Policy: Sleep timeout is disabled (Plugged In)
          The computer is not configured to automatically sleep after a period of inactivity.

        System Availability Requests: System Required Request
          The device or driver has made a request to prevent the system from automatically
          entering sleep.
          Requesting Driver Instance: HDAUDIO\FUNC_01&VEN_10EC&DEV_0899&SUBSYS_102805B7&REV_1000\4&220b1bbc&0&0001
          Requesting Driver Device: Realtek High Definition Audio

        CPU Utilization: Processor utilization is high
          The average processor utilization during the trace was high. The system will
          consume less power when the average processor utilization is very low. Review
          processor utilization for individual processes to determine which applications
          and services contribute the most to total processor utilization.
          Average Utilization (%): 9.48

        Warnings

        Platform Timer Resolution: Platform Timer Resolution
          The default platform timer resolution is 15.6ms (15625000ns) and should be used
          whenever the system is idle. If the timer resolution is increased, processor
          power management technologies may not be effective. The timer resolution may be
          increased due to multimedia playback or graphical animations.
          Current Timer Resolution (100ns units): 10000
          Maximum Timer Period (100ns units): 156001

        Platform Timer Resolution: Outstanding Kernel Timer Request
          A kernel component or device driver has requested a timer resolution smaller
          than the platform maximum timer resolution.
          Requested Period: 10000
          Request Count: 2

        Platform Timer Resolution: Outstanding Timer Request
          A program or service has requested a timer resolution smaller than the platform
          maximum timer resolution.
          Requested Period: 10000
          Requesting Process ID: 8672
          Requesting Process Path: \Device\HarddiskVolume3\Program Files (x86)\Mozilla Firefox\firefox.exe

        Platform Timer Resolution: Outstanding Timer Request
          A program or service has requested a timer resolution smaller than the platform
          maximum timer resolution.
          Requested Period: 100000
          Requesting Process ID: 1212
          Requesting Process Path: \Device\HarddiskVolume3\Windows\System32\svchost.exe

        Power Policy: 802.11 Radio Power Policy is Maximum Performance (Plugged In)
          The current power policy for 802.11-compatible wireless network adapters is not
          configured to use low-power modes.

        CPU Utilization: Individual process with significant processor utilization.
          This process is responsible for a significant portion of the total processor
          utilization recorded during the trace.
          Process Name: audiodg.exe    PID: 1304    Average Utilization (%): 4.73
          Module (Average Module Utilization %):
            \Device\HarddiskVolume3\Windows\System32\msvcrt.dll            1.88
            \Device\HarddiskVolume3\Windows\System32\MaxxAudioAPO5064.dll  1.77
            \Device\HarddiskVolume3\Windows\System32\AudioEng.dll          0.80

        CPU Utilization: Individual process with significant processor utilization.
          Process Name: thunderbird.exe    PID: 6036    Average Utilization (%): 0.35
          Module (Average Module Utilization %):
            \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\xul.dll    0.16
            \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\mozjs.dll  0.05
            \SystemRoot\System32\win32k.sys                                            0.03

        CPU Utilization: Individual process with significant processor utilization.
          Process Name: dwm.exe    PID: 1340    Average Utilization (%): 0.25
          Module (Average Module Utilization %):
            \Device\HarddiskVolume3\Windows\System32\dwmcore.dll    0.08
            \Device\HarddiskVolume3\Windows\System32\nvwgf2umx.dll  0.05
            \SystemRoot\system32\ntoskrnl.exe                       0.03

        CPU Utilization: Individual process with significant processor utilization.
          This process is responsible for a significant portion of the total processor
          utilization recorded during the trace.
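    (Since the report pins a "System Required" request on the Realtek audio driver, it may help to watch those requests live. These are standard Windows 7 powercfg switches, run from an elevated command prompt; the override in particular should be researched before applying, as it silences the request system-wide:)

        powercfg /requests
        REM lists the drivers and processes currently preventing or forcing power transitions

        powercfg /requestsoverride DRIVER "Realtek High Definition Audio" SYSTEM
        REM optional override for the offending driver - apply with caution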

    Read the article

  • Problem with video playback on iPad with MPMoviePlayerViewController

    - by Symo
    Hello everybody... I have been fighting some code for about a week, and am hoping that someone else has experienced this problem and can point me in the right direction. I am using MPMoviePlayerViewController to play a video on the iPad. The primary problem is that it works FLAWLESSLY on the iPad simulator, but will not play at all on the iPad itself.

    I have tried re-encoding the video to make sure that isn't the issue. The video I'm using is currently a 480x360 video encoded with H.264 Baseline 3.0 and AAC-LC audio. The video plays fine on the iPhone, and also plays through Safari on the iPad. The video actually loads, and you can scrub through it with the scrubber bar and see that it is there. The frames actually display, but it just will not play. If you tap play, it immediately stops. Even when I have mp.moviePlayer.shouldAutoplay = YES set, you can see the player attempt to play, but only for a split second (maybe one frame?).

    I have tried just adding the view with the following code:

        // in .h
        MPMoviePlayerViewController *vidViewController;
        @property (readwrite, retain) MPMoviePlayerViewController *vidViewController;

        // in .m
        MPMoviePlayerViewController *mp = [[MPMoviePlayerViewController alloc]
            initWithContentURL:[NSURL URLWithString:videoURL]];
        [mp shouldAutorotateToInterfaceOrientation:YES];
        mp.moviePlayer.scalingMode = MPMovieScalingModeAspectFit;
        mp.moviePlayer.shouldAutoplay = YES;
        mp.moviePlayer.controlStyle = MPMovieControlStyleFullscreen;
        [videoURL release];
        self.vidViewController = mp;
        [mp release];
        [self.view addSubview:vidViewController.view];
        float w = self.view.frame.size.width;
        float h = w * 0.75;
        self.vidViewController.view.frame = CGRectMake(0, 0, w, h);

    I have also just tried to do a:

        [self presentMoviePlayerViewControllerAnimated:self.vidViewController];

    ... which I actually cannot get to orient properly - it always shows up in portrait and almost completely off the screen at the bottom, and the app is only intended to run in either of the landscape orientations...

    If anybody needs more info, just let me know. I'm about at my wits' end on this. ANY help will be GREATLY appreciated.

    Read the article

  • CSS import or multiple CSS files

    - by David H
    I originally wanted to include a .css in my HTML doc that loads multiple other .css files, in order to divide up some chunks of code for development purposes. I have created a test page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <title>The Recipe Site</title>
        <link rel='stylesheet' href='/css/main.css'>
        <link rel='stylesheet' href='/css/site_header.css'>
        <!-- Let google host jQuery for us, maybe replace with their api -->
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script>
        <script type="text/javascript" src="/js/main.js"></script>
        </head>
        <body>
        <div id="site_container">
            <div id="site_header"><?php include_once($r->base_dir . "inc/site_header.inc.php"); ?><!-- Include File, Update on ajax request. --></div>
            <div id="site_content">
                Some main content.
            </div>
            <div id="site_footer"><?php include_once($r->base_dir . "inc/site_footer.inc.php"); ?><!-- Include File, Update on ajax request. --></div>
        </div>
        </body>
        </html>

    File: /css/main.css

        /* Reset Default Padding & Margin */
        * { margin: 0; padding: 0; border: 0; }

        /* Set Our Float Classes */
        .clear { clear: both; }
        .right { float: right; }
        .left  { float: left; }

        /* Setup the main body/site container */
        body {
            background: url(/images/wallpaper.png) repeat;
            color: #000000;
            text-align: center;
            font: 62.5%/1.5 "Lucida Grande", "Lucida Sans", Tahoma, Verdana, sans-serif;
        }
        site_container {
            background-color: #FFFFFF;
            height: 100%;
            margin-left: auto;
            margin-right: auto;
            text-align: left;
            width: 100%;
        }

        /* Some style sheet includes */
        /* @import "/css/site_header.css"; */

        /* Default Font Sizes */
        h1 { font-size: 2.2em; }
        h2 { font-size: 2.0em; }
        h3 { font-size: 1.8em; }
        h4 { font-size: 1.6em; }
        h5 { font-size: 1.4em; }
        p  { font-size: 1.2em; }

        /* Default Form Layout */
        input.text { padding: 3px; border: 1px solid #999999; }

        /* Default Table Reset */
        table { border-spacing: 0; border-collapse: collapse; }
        td { text-align: left; font-weight: normal; }

        /* Cause not all browsers know what HTML5 is... */
        header { display: block; }
        footer { display: block; }

    and now the file /css/site_header.css:

        site_header {
            background-color: #c0c0c0;
            height: 100px;
            position: absolute;
            top: 100px;
            width: 100%;
        }

    Problem: When I use the above code, the site_header div does not get any formatting/background. When I remove the link line for site_header.css from the HTML doc and instead use an @import url("/css/site_header.css"); in my main.css file, the same result - nothing gets rendered for the same div. Now when I take the CSS markup from site_header.css and add it to main.css, the div gets rendered fine... So I am wondering if having multiple CSS files is somehow not working... or maybe having that CSS markup at the end of my previous CSS is somehow conflicting, though I cannot find a reason why it would.

    Read the article

  • Javascript - find swfobject on included page and call javascript function

    - by Rob
    I'm using the following script on my website to play an mp3 in Flash. To instantiate the Flash object I use the swfobject framework in a javascript function. When the function is called, the player is created and added to the page. The rest of the website is in PHP, and the page calling this script is included with the PHP include function. All the other scripts used are in the PHP 'master' page.

        var playerMp3 = new SWFObject("scripts/player.swf","myplayer1","0","0","0");
        playerMp3.addVariable("file","track.mp3");
        playerMp3.addVariable("icons","false");
        playerMp3.write("player1");

        var player1 = document.getElementById("myplayer1");
        var status1 = $("#status1");

        $("#play1").click(function(){
            player1.sendEvent("play","true");
            $("#status1").fadeIn(400);
            player4.sendEvent("stop","false");
            $("#status4").fadeOut(400);
            player3.sendEvent("stop","false");
            $("#status3").fadeOut(400);
            player2.sendEvent("stop","false");
            $("#status2").fadeOut(400);
        });
        $("#stop1").click(function(){
            player1.sendEvent("stop","false");
            $("#status1").fadeOut(400);
        });
        $(".closeOver").click(function(){
            player1.sendEvent("stop","false");
            $("#status1").fadeOut(400);
        });
        $(".accordionButton2").click(function(){
            player1.sendEvent("stop","false");
            $("#status1").fadeOut(400);
        });
        $(".accordionButton3").click(function(){
            player1.sendEvent("stop","false");
            $("#status1").fadeOut(400);
        });
        $(".turnOffMusic").click(function(){
            player1.sendEvent("stop","false");
            $("#status1").fadeOut(400);
        });
        });

    I have a play button with the id '#play1' and a stop button with the id '#stop1' on my page. A div on the same page has the id '#status1', and a little image of a speaker is in the div. When you push the play button, the speaker div fades in; when you push the stop button, it fades out - very simple, and it works as I want it to. But the problem is that when a song finishes, the speaker doesn't fade out. Is there a simple solution for this?

    I already tried using the swfobject framework to get the Flash player from the page and call 'IsPlaying' on it, but I'm getting the error that 'swfobject' can't be found. All I need is a little push in the right direction, or an example showing me how I can correctly get the currently playing audio player (in Flash), check whether it's playing and, if it has finished, call a javascript function to let the speaker image fade out again. Hope someone here can help me.

    Read the article

  • Voice Recognition Connection problem

    - by user244190
    I'm trying to work through and test a voice recognition example based on the VoiceRecognition.java example at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html, but when I click on the button to create the activity, I get a dialog that says "Connection problem". My manifest file is using the Internet permission, and I understand it passes the audio to Google's servers. Do I need to do anything else to use this? Code below.

    UPDATE: OK, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search. However, when I now run from the emulator, I get an "Audio Problem" message with "Speak Again" or "Cancel" buttons. It appears to make it back to onActivityResult(), but the resultCode is 0. Here is the LogCat output:

        03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) }
        03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result.
        03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}}
        03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000
        03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1
        03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms
        03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1)
        03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded):
        03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed
        03-07 20:21:29.896: INFO/dalvikvm(806):     at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596)
        03-07 20:21:29.896: INFO/dalvikvm(806):     at dalvik.system.NativeStart.run(Native Method)
        03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1)
        03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection

    I'm still not sure why I'm getting the connection problem on the Droid. I can use Voice Search OK. I also tried clearing the cache and data as described in some posts, but it's still not working.

        /**
         * Fire an intent to start the speech recognition activity.
         */
        private void startVoiceRecognitionActivity() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
            startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
        }

        /**
         * Handle the results from the recognition activity.
         */
        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
                // Fill the list view with the strings the recognizer thought it could have heard
                ArrayList<String> matches = data.getStringArrayListExtra(
                        RecognizerIntent.EXTRA_RESULTS);
                mList.setAdapter(new ArrayAdapter<String>(this,
                        android.R.layout.simple_list_item_1, matches));
            }
            super.onActivityResult(requestCode, resultCode, data);
        }

    Read the article

  • Android stream to Wowza

    - by Curtis Kiu
    I feel very confused about streaming from Android to Wowza. I am doing a cross-platform video conference using RTMP, but Android doesn't eat RTMP, therefore I need to find another way to do the upstreaming.

    I found an open-source app called spydroid-ipcamera. It uses RTP, sending UDP packets to the computer, and you open the stream in VLC using the following SDP:

        v=0
        s=Unnamed
        m=video 5006 RTP/AVP 96
        a=rtpmap:96 H264/90000
        a=fmtp:96 packetization-mode=1;profile-level-id=420016;sprop-parameter-sets=Z0IAFukBQHsg,aM4BDyA=;

    But I couldn't get that to work. Then I followed the Wowza tutorial, streamed to Wowza, and played it back in VLC - that works! I wrote it up in http://code.google.com/p/spydroid-ipcamera/issues/detail?id=2

    However, when I want to add audio to the packets, it fails to work. I changed the code in http://code.google.com/p/spydroid-ipcamera/source/browse/trunk/src/net/mkp/spydroid/CameraStreamer.java to:

        mr.setAudioSource(MediaRecorder.AudioSource.MIC);
        mr.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mr.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        mr.setVideoFrameRate(20);
        mr.setVideoSize(640, 480);
        mr.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        mr.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        mr.setPreviewDisplay(holder.getSurface());

    Then I thought the problem should be in the SDP, but I don't understand SDP, so I don't know how to deal with it. I am streaming H.264/AAC in MP4.

    Second, how can I make the video-conference upstreaming part work using this app? The pipeline is:

        Android ----(UDP port 5006)----> PC (SDP file), then Wowza reads the SDP file ------> VLC

    I think in this way the system cannot handle more than one client: the SDP can only hold one port. Any idea, or will it actually not work? Also, Wowza needs the stream to be set up before we stream to it, so does that mean I should not follow this approach? Sorry, my English is poor; I hope you guys understand.

    Read the article

  • Cocoa: AVAudioRecorder Fails to Record

    - by kumaryr
        AVAudioSession *audioSession = [AVAudioSession sharedInstance];
        NSError *err = nil;
        [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&err];
        if (err) {
            NSLog(@"audioSession: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            return;
        }
        [audioSession setActive:YES error:&err];
        err = nil;
        if (err) {
            NSLog(@"audioSession: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            return;
        }

        NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
        [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
        [recordSetting setValue:[NSNumber numberWithFloat:40000.0] forKey:AVSampleRateKey];
        [recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
        [recordSetting setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

        // Create a new dated file
        NSDate *now = [NSDate dateWithTimeIntervalSinceNow:0];
        NSString *caldate = [now description];
        NSString *recorderFilePath = [[NSString stringWithFormat:@"%@/%@.caf", DOCUMENTS_FOLDER, caldate] retain];
        NSLog(recorderFilePath);
        url = [NSURL fileURLWithPath:recorderFilePath];

        err = nil;
        recorder = [[AVAudioRecorder alloc] initWithURL:url settings:recordSetting error:&err];
        if (!recorder) {
            NSLog(@"recorder: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Warning"
                                                            message:[err localizedDescription]
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];
            return;
        }

        // prepare to record
        [recorder setDelegate:self];
        [recorder prepareToRecord];
        recorder.meteringEnabled = YES;

        BOOL audioHWAvailable = audioSession.inputIsAvailable;
        if (!audioHWAvailable) {
            UIAlertView *cantRecordAlert = [[UIAlertView alloc] initWithTitle:@"Warning"
                                                                      message:@"Audio input hardware not available"
                                                                     delegate:nil
                                                            cancelButtonTitle:@"OK"
                                                            otherButtonTitles:nil];
            [cantRecordAlert show];
            [cantRecordAlert release];
            return;
        }

        // [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(updateTimerDisplay) userInfo:nil repeats:YES];
        // [recorder recordForDuration:(NSTimeInterval)10];
        // [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(updateTimerDisplay) userInfo:nil repeats:YES];
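    (Two tentative observations on the snippet, based on the documented AVAudioRecorder behavior: the AVLinearPCM* keys only apply when AVFormatIDKey is kAudioFormatLinearPCM, so with kAudioFormatAppleIMA4 they are ignored; and as posted, both the record call and recordForDuration: are commented out, so the recorder is prepared but never actually started.)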

    Read the article

  • Segmentation fault when creating a Phonon MediaObject

    - by Luke Hansford
    I have a music-playing program made using PySide which uses Phonon to play back audio. I updated to Mac OS X Mavericks a few days ago, which meant I needed to reinstall PySide. I'm not sure which of these actions caused this, but now whenever I try to create a Phonon MediaObject I get a "Segmentation fault: 11" from Python. It's not just in my program - it happens when trying to create a MediaObject in Python without any other actions. I'm getting the following crash report from my Mac whenever it crashes:

        Process:         Python [13711]
        Path:            /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
        Identifier:      org.python.python
        Version:         2.7.5 (2.7.5)
        Code Type:       X86-64 (Native)
        Parent Process:  bash [13707]
        Responsible:     Terminal [13704]
        User ID:         501

        Date/Time:       2013-11-01 19:47:53.164 +1000
        OS Version:      Mac OS X 10.9 (13A603)
        Report Version:  11
        Anonymous UUID:  C2686854-18CA-9D37-26E9-60050E3C4DA6
        Sleep/Wake UUID: BB983BF6-CCE2-44D1-82A0-1C73382DFFE4

        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008

        VM Regions Near 0x8:
        --> __TEXT 00000001082e8000-00000001082e9000 [ 4K] r-x/rwx SM=COW /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python

        Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
        0  QtCore              0x000000010a1b34cb QObject::moveToThread(QThread*) + 17
        1  QtDBus              0x000000010d55f98b QDBusDefaultConnection::QDBusDefaultConnection(QDBusConnection::BusType, char const*) + 171
        2  QtDBus              0x000000010d55ebdf QDBusConnection::sessionBus() + 71
        3  phonon              0x000000010d50228d Phonon::FactoryPrivate::FactoryPrivate() + 189
        4  phonon              0x000000010d5024d5 Phonon::$_249::operator->() + 99
        5  phonon              0x000000010d502991 Phonon::Factory::registerFrontendObject(Phonon::MediaNodePrivate*) + 17
        6  phonon              0x000000010d50b27e Phonon::MediaNodePrivate::MediaNodePrivate(Phonon::MediaNodePrivate::CastId) + 80
        7  phonon              0x000000010d50f570 Phonon::MediaObjectPrivate::MediaObjectPrivate() + 24
        8  phonon              0x000000010d50be9d Phonon::MediaObject::MediaObject(QObject*) + 45
        9  phonon.so           0x000000010d42f24a Sbk_Phonon_MediaObject_Init + 458
        10 org.python.python   0x0000000108338707 type_call + 189
        11 org.python.python   0x00000001082f74fd PyObject_Call + 101
        12 org.python.python   0x00000001083714f0 PyEval_EvalFrameEx + 15525
        13 org.python.python   0x0000000108373aaf fast_function + 182
        14 org.python.python   0x0000000108370919 PyEval_EvalFrameEx + 12494
        15 org.python.python   0x000000010836d721 PyEval_EvalCodeEx + 1638
        16 org.python.python   0x000000010836d0b5 PyEval_EvalCode + 54
        17 org.python.python   0x000000010838beb8 run_mod + 53
        18 org.python.python   0x000000010838bf5f PyRun_FileExFlags + 137
        19 org.python.python   0x000000010838baad PyRun_SimpleFileExFlags + 718
        20 org.python.python   0x000000010839c58b Py_Main + 3039
        21 libdyld.dylib       0x00007fff8e4fb5fd start + 1

        Thread 1:: Dispatch queue: com.apple.libdispatch-manager
        0 libsystem_kernel.dylib  0x00007fff8c938662 kevent64 + 10
        1 libdispatch.dylib       0x00007fff923e743d _dispatch_mgr_invoke + 239
        2 libdispatch.dylib       0x00007fff923e7152 _dispatch_mgr_thread + 52

        Thread 2:
        0 libsystem_kernel.dylib  0x00007fff8c937e6a __workq_kernreturn + 10
        1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330
        2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13

        Thread 3:
        0 libsystem_kernel.dylib  0x00007fff8c937e6a __workq_kernreturn + 10
        1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330
        2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13

        Thread 4:
        0 libsystem_kernel.dylib  0x00007fff8c937e6a __workq_kernreturn + 10
        1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330
        2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13

        Thread 0 crashed with X86 Thread State (64-bit):
          rax: 0x00007feba0d19700  rbx: 0x000000010d5b7098  rcx: 0x00000000002f4180  rdx: 0x000000000012c040
          rdi: 0x0000000000000000  rsi: 0x00007feba0d19700  rbp: 0x00007fff57917210  rsp: 0x00007fff579171d0
          r8:  0x00007feba0fd5d10  r9:  0x00007feba0ff5310  r10: 0x0000000019c04cbe  r11: 0x0000000070769b38
          r12: 0x00007fff57917220  r13: 0x00007feba0c07190  r14: 0x0000000000000000  r15: 0x00007feba0fe1430
          rip: 0x000000010a1b34cb  rfl: 0x0000000000010202  cr2: 0x0000000000000008
        Logical CPU: 0
        Error Code:  0x00000004
        Trap Number: 14

    Anyone have any ideas about what is happening?

    Read the article

  • mciSendString cannot save to directory path

    - by robUK
    Hello, VS C# 2008 SP1. I have created a small application that records and plays audio. However, my application needs to save the wave file to the application data directory on the user's computer. mciSendString takes a C-style string as a parameter, and the path has to be in 8.3 format. My problem is that I can't get it to save - and what is strange is that sometimes it does and sometimes it doesn't; most of the time it fails. If I save directly to the C drive, however, it works every time. I have tried the three different methods coded below. The error number that I get when it fails is 286: "The file was not saved. Make sure your system has sufficient disk space or has an intact network connection." Many thanks for any suggestions.

        [DllImport("winmm.dll", CharSet = CharSet.Auto)]
        private static extern uint mciSendString([MarshalAs(UnmanagedType.LPTStr)] string command,
            StringBuilder returnValue, int returnLength, IntPtr winHandle);

        [DllImport("winmm.dll", CharSet = CharSet.Auto)]
        private static extern int mciGetErrorString(uint errorCode, StringBuilder errorText, int errorTextSize);

        [DllImport("Kernel32.dll", CharSet = CharSet.Auto)]
        private static extern int GetShortPathName([MarshalAs(UnmanagedType.LPTStr)] string longPath,
            [MarshalAs(UnmanagedType.LPTStr)] StringBuilder shortPath, int length);

        // Stop recording
        private void StopRecording()
        {
            // Save recorded voice
            string shortPath = this.shortPathName();
            string formatShortPath = string.Format("save recsound \"{0}\"", shortPath);
            uint result = 0;
            StringBuilder errorTest = new StringBuilder(256);

            // C:\DOCUME~1\Steve\APPLIC~1\Test.wav - fails
            result = mciSendString(string.Format("{0}", formatShortPath), null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            // command line convention - fails
            result = mciSendString("save recsound \"C:\\DOCUME~1\\Steve\\APPLIC~1\\Test.wav\"", null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            // 8.3 short format - fails
            result = mciSendString(@"save recsound C:\DOCUME~1\Steve\APPLIC~1\Test.wav", null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            // Save to C drive works every time.
            result = mciSendString(@"save recsound C:\Test.wav", null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            mciSendString("close recsound ", null, 0, IntPtr.Zero);
        }

        // Get the short path name so that mciSendString can save the recorded wave file
        private string shortPathName()
        {
            string shortPath = string.Empty;
            long length = 0;
            StringBuilder buffer = new StringBuilder(256);

            // Get the length of the path
            length = GetShortPathName(this.saveRecordingPath, buffer, 256);
            shortPath = buffer.ToString();

            return shortPath;
        }

    Read the article

  • Need help manipulating WAV (RIFF) Files at a byte level

    - by Eric
    I'm writing an application in C# that will record audio files (*.wav) and automatically tag and name them. WAV files are RIFF files (like AVI), which can contain metadata chunks in addition to the waveform data chunks. So now I'm trying to figure out how to read and write the RIFF metadata to and from recorded WAV files.

    I'm using NAudio for recording the files, and asked on their forums, as well as on SO, for a way to read and write RIFF tags. While I received a number of good answers, none of the solutions allowed for reading and writing RIFF chunks as easily as I would like. But more importantly, I have very little experience dealing with files at a byte level, and think this could be a good opportunity to learn.

    So now I want to try writing my own class(es) that can read in a RIFF file and allow metadata to be read from, and written to, the file. I've used streams in C#, but always with the entire stream at once. So now I'm a little lost that I have to consider a file byte by byte. Specifically, how would I go about removing or inserting bytes to and from the middle of a file? I've tried reading a file through a FileStream into a byte array (byte[]), as shown in the code below:

        System.IO.FileStream waveFileStream = System.IO.File.OpenRead(@"C:\sound.wav");
        byte[] waveBytes = new byte[waveFileStream.Length];
        waveFileStream.Read(waveBytes, 0, waveBytes.Length);

    And I could see through the Visual Studio debugger that the first four bytes are the RIFF header of the file. But arrays are a pain to deal with when performing actions that change their size, like inserting or removing values. So I was thinking I could convert the byte[] into a List like this:

        List<byte> list = waveBytes.ToList<byte>();

    That would make any manipulation of the file byte by byte a whole lot easier, but I'm worried I might be missing something, like a class in the System.IO namespace, that would make all this even easier. Am I on the right track, or is there a better way to do this? I should also mention that I'm not hugely concerned with performance, and would prefer not to deal with pointers or unsafe code blocks like this guy. If it helps at all, here is a good article on the RIFF/WAV file format.
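    (For what it's worth, walking the chunk layout by hand is straightforward - a minimal C# sketch under the RIFF rules above: each chunk is a 4-byte ASCII ID, a 4-byte little-endian size, then the payload padded to an even byte count. The class and variable names are illustrative, not from any library:)

        // Minimal RIFF chunk walker - a sketch, not production code.
        using System;
        using System.IO;
        using System.Text;

        class RiffWalker
        {
            static void Main()
            {
                using (var reader = new BinaryReader(File.OpenRead(@"C:\sound.wav")))
                {
                    // Top-level header: "RIFF", total size - 8, then the form type "WAVE"
                    string riff = Encoding.ASCII.GetString(reader.ReadBytes(4));
                    uint riffSize = reader.ReadUInt32();
                    string wave = Encoding.ASCII.GetString(reader.ReadBytes(4));
                    Console.WriteLine("{0} ({1} bytes) {2}", riff, riffSize, wave);

                    // Walk the chunks: 4-byte ID, 4-byte little-endian size, payload
                    while (reader.BaseStream.Position < reader.BaseStream.Length - 8)
                    {
                        string id = Encoding.ASCII.GetString(reader.ReadBytes(4));
                        uint size = reader.ReadUInt32();
                        Console.WriteLine("{0}: {1} bytes", id, size);

                        // skip the payload, padded to an even length
                        reader.BaseStream.Seek(size + (size & 1), SeekOrigin.Current);
                    }
                }
            }
        }

    Inserting metadata (e.g. a LIST/INFO chunk) then amounts to rewriting the file with the new chunk spliced in and the top-level RIFF size field updated to match.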

    Read the article

  • Odd "Object reference not set to an instance of an object" involving xWinForms

    - by Kyle
    Hey, I've been trying to get the xWinForms 3.0 library (a library with forms support in XNA) working with my C# XNA game project, but I keep getting the same problem. I add the reference to my project, put in the using statement, declare a FormCollection variable, and then I try to initialize it. Whenever I run the project I get stopped on this line:

        formCollection = new FormCollection(this.Window, Services, ref graphics);

    It gives me the error:

        System.NullReferenceException was unhandled
          Message="Object reference not set to an instance of an object."
          Source="Microsoft.Xna.Framework"
          StackTrace:
            at Microsoft.Xna.Framework.Graphics.VertexShader..ctor(GraphicsDevice graphicsDevice, Byte[] shaderCode)
            at Microsoft.Xna.Framework.Graphics.SpriteBatch.ConstructPlatformData()
            at Microsoft.Xna.Framework.Graphics.SpriteBatch..ctor(GraphicsDevice graphicsDevice)
            at xWinFormsLib.FormCollection..ctor(GameWindow window, IServiceProvider services, GraphicsDeviceManager& graphics)
            at GameSolution.Game2.LoadContent() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 45
            at Microsoft.Xna.Framework.Game.Initialize()
            at GameSolution.Game2.Initialize() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 37
            at Microsoft.Xna.Framework.Game.Run()
            at GameSolution.Program.Main(String[] args) in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Program.cs:line 14
          InnerException:

    In a project I downloaded that used xWinForms, I put the following code in and it compiled and ran with no error, but when I put it in my project I get the error. Am I making some stupid mistake about including DLLs or something? I've been at this for hours and I can't seem to find anything that would cause this.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Audio;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.GamerServices;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Input;
        using Microsoft.Xna.Framework.Media;
        using Microsoft.Xna.Framework.Net;
        using Microsoft.Xna.Framework.Storage;
        using xWinFormsLib;

        namespace GameSolution
        {
            public class Game2 : Microsoft.Xna.Framework.Game
            {
                GraphicsDeviceManager graphics;
                SpriteBatch spriteBatch;
                FormCollection formCollection;

                public Game2()
                {
                    graphics = new GraphicsDeviceManager(this);
                    Content.RootDirectory = "Content";
                }

                protected override void Initialize()
                {
                    // TODO: Add your initialization logic here
                    base.Initialize();
                }

                protected override void LoadContent()
                {
                    // Create a new SpriteBatch, which can be used to draw textures.
                    spriteBatch = new SpriteBatch(GraphicsDevice);
                    formCollection = new FormCollection(this.Window, Services, ref graphics);
                }

                protected override void Update(GameTime gameTime)
                {
                    base.Update(gameTime);
                }

                protected override void Draw(GameTime gameTime)
                {
                    base.Draw(gameTime);
                }
            }
        }

    Any help would be greatly appreciated ._.

    Read the article

  • Streaming webcam video in Flash using MP4 encoding

    - by Herms
    One of the features of the Flash app I'm working on is the ability to stream a webcam to others. We're just using the built-in webcam support in Flash and sending it through FMS. We've had some people ask for higher-quality video, but we're already using the highest quality setting we can in Flash (setting quality to 100%).

    My understanding is that the newer Flash players added support for MPEG-4 encoding of the videos. I created a simple test Flex app to try to compare the video quality of the MP4 vs FLV encodings. However, I can't seem to get MP4 to work at all. According to the Flex documentation, the only thing I need to do to use MP4 instead of FLV is prepend "mp4:" to the name of the stream when calling publish:

        Specify the stream name as a string with the prefix mp4: with or without the
        filename extension. The prefix indicates to the server that the file contains
        H.264-encoded video and AAC-encoded audio within the MPEG-4 Part 14 container
        format.

    When I try this, nothing happens. I don't get any events raised on the client side, no exceptions are thrown, and my logging on the server side doesn't show any streams starting. Here's the relevant code:

        // These are all defined and created within the class.
        private var nc:NetConnection;
        private var sharing:Boolean;
        private var pubStream:NetStream;
        private var format:String;
        private var streamName:String;
        private var camera:Camera;

        // called when the user clicks the start button
        private function startSharing():void {
            if (!nc.connected) {
                return;
            }
            if (sharing) {
                return;
            }
            if (pubStream == null) {
                pubStream = new NetStream(nc);
                pubStream.attachCamera(camera);
            }
            startPublish();
            sharing = true;
        }

        private function startPublish():void {
            var name:String;
            if (this.format == "mp4") {
                name = "mp4:" + streamName;
            } else {
                name = streamName;
            }
            //pubStream.publish(name, "live");
            pubStream.publish(name, "record");
        }

    Read the article

  • Using PHP's IMAP library triggers Kaspersky's Antivirus

    - by TMG
    Hello, I just started working with PHP's IMAP library today, and while imap_fetchbody or imap_body is called, it triggers my Kaspersky antivirus. The viruses are Trojan.Win32.Agent.dmyq and Trojan.Win32.FraudPack.aoda. I am running this off a local development machine with XAMPP and Kaspersky AV.

    Now, I am sure there are viruses there, since there is spam in the box (who doesn't need some viagra or vicodin these days?). And I know that since the raw body includes attachments and different MIME types, bad stuff can be in the body.

    So my questions are: Are there any risks in using these libraries? I am assuming that the IMAP functions are retrieving the body, caching it to disk/memory, and the AV scanning it sees the data - is that correct? Are there any known security concerns with this library (I couldn't find any)? Does it clean up cached message parts perfectly, or might viral files be sitting somewhere? Is there a better way to get plain text out of the body than this?

    Right now I am using the following code (credit to Kevin Steffer):

        function get_mime_type(&$structure) {
            $primary_mime_type = array("TEXT", "MULTIPART", "MESSAGE", "APPLICATION", "AUDIO", "IMAGE", "VIDEO", "OTHER");
            if ($structure->subtype) {
                return $primary_mime_type[(int)$structure->type] . '/' . $structure->subtype;
            }
            return "TEXT/PLAIN";
        }

        function get_part($stream, $msg_number, $mime_type, $structure = false, $part_number = false) {
            if (!$structure) {
                $structure = imap_fetchstructure($stream, $msg_number);
            }
            if ($structure) {
                if ($mime_type == get_mime_type($structure)) {
                    if (!$part_number) {
                        $part_number = "1";
                    }
                    $text = imap_fetchbody($stream, $msg_number, $part_number);
                    if ($structure->encoding == 3) {
                        return imap_base64($text);
                    } else if ($structure->encoding == 4) {
                        return imap_qprint($text);
                    } else {
                        return $text;
                    }
                }
                if ($structure->type == 1) /* multipart */ {
                    while (list($index, $sub_structure) = each($structure->parts)) {
                        if ($part_number) {
                            $prefix = $part_number . '.';
                        }
                        $data = get_part($stream, $msg_number, $mime_type, $sub_structure, $prefix . ($index + 1));
                        if ($data) {
                            return $data;
                        }
                    } // END OF WHILE
                } // END OF MULTIPART
            } // END OF STRUCTURE
            return false;
        } // END OF FUNCTION

        $connection = imap_open($server, $login, $password);
        $count = imap_num_msg($connection);
        for ($i = 1; $i <= $count; $i++) {
            $header = imap_headerinfo($connection, $i);
            $from = $header->fromaddress;
            $to = $header->toaddress;
            $subject = $header->subject;
            $date = $header->date;
            $body = get_part($connection, $i, "TEXT/PLAIN");
        }

    Read the article

  • Not getting the recorded voice - could you suggest what the bug is in the code below?

    - by kumaryr
        AVAudioSession *audioSession = [AVAudioSession sharedInstance];
        NSError *err = nil;
        [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&err];
        if (err) {
            NSLog(@"audioSession: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            return;
        }
        [audioSession setActive:YES error:&err];
        err = nil;
        if (err) {
            NSLog(@"audioSession: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            return;
        }

        NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
        [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
        [recordSetting setValue:[NSNumber numberWithFloat:40000.0] forKey:AVSampleRateKey];
        [recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
        [recordSetting setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

        // Create a new dated file
        NSDate *now = [NSDate dateWithTimeIntervalSinceNow:0];
        NSString *caldate = [now description];
        NSString *recorderFilePath = [[NSString stringWithFormat:@"%@/%@.caf", DOCUMENTS_FOLDER, caldate] retain];
        NSLog(recorderFilePath);
        url = [NSURL fileURLWithPath:recorderFilePath];

        err = nil;
        recorder = [[AVAudioRecorder alloc] initWithURL:url settings:recordSetting error:&err];
        if (!recorder) {
            NSLog(@"recorder: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Warning"
                                                            message:[err localizedDescription]
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];
            return;
        }

        // prepare to record
        [recorder setDelegate:self];
        [recorder prepareToRecord];
        recorder.meteringEnabled = YES;

        BOOL audioHWAvailable = audioSession.inputIsAvailable;
        if (!audioHWAvailable) {
            UIAlertView *cantRecordAlert = [[UIAlertView alloc] initWithTitle:@"Warning"
                                                                      message:@"Audio input hardware not available"
                                                                     delegate:nil
                                                            cancelButtonTitle:@"OK"
                                                            otherButtonTitles:nil];
            [cantRecordAlert show];
            [cantRecordAlert release];
            return;
        }

        //[NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(updateTimerDisplay) userInfo:nil repeats:YES];
        [recorder recordForDuration:(NSTimeInterval)10];
        // [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(updateTimerDisplay) userInfo:nil repeats:YES];

    Read the article

  • Large File Download - Connection With Server Reset

    - by daveywc
    I have an ASP.NET website that allows the user to download largish files - 30MB to about 60MB. Sometimes the download works fine, but often it fails at some varying point before the download finishes, with the message saying that the connection with the server was reset.

    Originally I was simply using Server.TransmitFile, but after reading up a bit I am now using the code posted below. I am also setting Server.ScriptTimeout to 3600 in the Page_Init event.

        private void DownloadFile(string fname, bool forceDownload)
        {
            string path = MapPath(fname);
            string name = Path.GetFileName(path);
            string ext = Path.GetExtension(path);
            string type = "";

            // set known types based on file extension
            if (ext != null)
            {
                switch (ext.ToLower())
                {
                    case ".mp3":
                        type = "audio/mpeg";
                        break;
                    case ".htm":
                    case ".html":
                        type = "text/HTML";
                        break;
                    case ".txt":
                        type = "text/plain";
                        break;
                    case ".doc":
                    case ".rtf":
                        type = "Application/msword";
                        break;
                }
            }

            if (forceDownload)
            {
                Response.AppendHeader("content-disposition", "attachment; filename=" + name.Replace(" ", "_"));
            }

            if (type != "")
            {
                Response.ContentType = type;
            }
            else
            {
                Response.ContentType = "application/x-msdownload";
            }

            System.IO.Stream iStream = null;

            // Buffer to read 10K bytes in chunk:
            byte[] buffer = new Byte[10000];

            // Length of the file:
            int length;

            // Total bytes to read:
            long dataToRead;

            try
            {
                // Open the file.
                iStream = new System.IO.FileStream(path, System.IO.FileMode.Open,
                    System.IO.FileAccess.Read, System.IO.FileShare.Read);

                // Total bytes to read:
                dataToRead = iStream.Length;

                //Response.ContentType = "application/octet-stream";
                //Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);

                // Read the bytes.
                while (dataToRead > 0)
                {
                    // Verify that the client is connected.
                    if (Response.IsClientConnected)
                    {
                        // Read the data in buffer.
                        length = iStream.Read(buffer, 0, 10000);

                        // Write the data to the current output stream.
                        Response.OutputStream.Write(buffer, 0, length);

                        // Flush the data to the HTML output.
                        Response.Flush();

                        buffer = new Byte[10000];
                        dataToRead = dataToRead - length;
                    }
                    else
                    {
                        // prevent infinite loop if user disconnects
                        dataToRead = -1;
                    }
                }
            }
            catch (Exception ex)
            {
                // Trap the error, if any.
                Response.Write("Error : " + ex.Message);
            }
            finally
            {
                if (iStream != null)
                {
                    // Close the file.
                    iStream.Close();
                }
                Response.Close();
            }
        }
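    (One thing the method above never sends is a Content-Length header, so the client cannot tell a complete download from a reset one. A minimal addition before the read loop - offered as a sketch to try, not a confirmed fix:)

        // after opening iStream, advertise the exact size up front so the
        // browser can detect truncated transfers (assumption: untested remedy)
        Response.AddHeader("Content-Length", iStream.Length.ToString());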

    Read the article

  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from a WAV file. I would like to pass these values into a library and get back the frequency of the music played in the WAV file. For now, there will be one frequency in the WAV file, and I would like to find a library that is compatible with Android. I understand that I need to use FFT to get the frequency domain. Are there any good libraries for that? I found that KissFFT is quite popular, but I am not very sure how compatible it is with Android. Is there an easier, good library that can perform the task I want?

    EDIT: I tried to use JTransforms to get the FFT of the WAV file, but always failed at getting the correct frequency of the file. Currently, the WAV file contains a sine curve of 440Hz, musical note A4. However, I got the result as 441. Then I tried to get the frequency of G4, and I got the result as 882Hz, which is incorrect. The frequency of G4 is supposed to be 783Hz. Could it be due to not having enough samples? If yes, how many samples should I take?

        // DFT
        DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames);
        double max_fftval = -1;
        int max_i = -1;
        double[] fftData = new double[numOfFrames * 2];
        for (int i = 0; i < numOfFrames; i++) {
            // copying audio data to the fft data buffer, imaginary part is 0
            fftData[2 * i] = buffer[i];
            fftData[2 * i + 1] = 0;
        }
        fft.complexForward(fftData);
        for (int i = 0; i < fftData.length; i += 2) {
            // complex numbers -> vectors, so we compute the length of the vector,
            // which is sqrt(realpart^2 + imaginarypart^2)
            double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1]));
            //fd.append(Double.toString(vlen));
            //fd.append(",");
            if (max_fftval < vlen) {
                // if this length is bigger than our stored biggest length
                max_fftval = vlen;
                max_i = i;
            }
        }
        //double dominantFreq = ((double)max_i / fftData.length) * sampleRate;
        double dominantFreq = (max_i / 2.0) * sampleRate / numOfFrames;
        fd.append(Double.toString(dominantFreq));

    Can someone help me out?

    EDIT2: I managed to fix the problem mentioned above by increasing the number of samples to 100000; however, sometimes I am getting the overtones as the frequency. Any idea how to fix it? Should I use Harmonic Product Spectrum or autocorrelation algorithms?
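    (One detail that bears on the sample-count question: an N-point FFT of data sampled at rate fs has a bin spacing of Δf = fs / N, so two tones closer together than that land in the same bin. At fs = 44100 Hz, N = 4096 gives Δf ≈ 10.8 Hz, while N = 100000 gives Δf ≈ 0.44 Hz - consistent with the improvement seen after adding samples. Picking the single strongest bin also tends to land on a harmonic for real signals, which is exactly the failure mode Harmonic Product Spectrum and autocorrelation are designed to handle.)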

    Read the article

  • How to adjust microphone gain from C# (needs to work on XP & W7)...

    - by Ed
    First, note that I know there are a few questions like this already posted; however they don't seem to address the problem adequately. I have a C# application, with all the pInvoke hooks to talk to the waveXXX API, and I'm able to do capture and play back of audio with that. I'm also able to adjust speaker (WaveOut) volume with that API. The problem is that for whatever reason, that API does not allow me to adjust microphone (WaveIn) volume. So, I managed to find some mixer code that I've also pulled in and access through pInvoke and that allows me to adjust microphone volume, but only on my W7 PC. The mixer code I started with comes from here: http://social.msdn.microsoft.com/Forums/en-US/isvvba/thread/05dc2d35-1d45-4837-8e16-562ee919da85 and it works, but is written to adjust speaker volume. I added the SetMicVolume method shown here...

        public static void SetMicVolume(int mxid, int percentage)
        {
            bool rc;
            int mixer, vVolume;
            MIXERCONTROL volCtrl = new MIXERCONTROL();
            int currentVol;

            mixerOpen(out mixer, mxid, 0, 0, MIXER_OBJECTF_WAVEIN);
            int type = MIXERCONTROL_CONTROLTYPE_VOLUME;
            rc = GetVolumeControl(mixer, MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE, type,
                out volCtrl, out currentVol);
            if (rc == false)
            {
                mixerClose(mixer);
                mixerOpen(out mixer, 0, 0, 0, 0);
                rc = GetVolumeControl(mixer, MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE, type,
                    out volCtrl, out currentVol);
                if (rc == false)
                    throw new Exception("SetMicVolume/GetVolumeControl() failed");
            }
            vVolume = ((int)((float)(volCtrl.lMaximum - volCtrl.lMinimum) / 100.0F) * percentage);
            rc = SetVolumeControl(mixer, volCtrl, vVolume);
            if (rc == false)
                throw new Exception("SetMicVolume/SetVolumeControl() failed");
            mixerClose(mixer);
        }

    Note the "second attempt" to call GetVolumeControl(). This is done because on XP, in the first call to GetVolumeControl (refer to the site above for that code), the call to mixerGetLineControlsA() fails, with XP systems returning MIXERR_INVALCONTROL. Then, with this second attempt using mixerOpen(out mixer, 0, 0, 0, 0), the code doesn't return a failure but the mic gain is unaffected. Note, as I said above, this works on W7 (the second attempt is never executed because it doesn't fail using mixerOpen(out mixer, mxid, 0, 0, MIXER_OBJECTF_WAVEIN)). I admit to not having a good grasp on the mixer API, so that's what I'm looking into now; however if anyone has a clue why this would work on W7, but not XP, I'd sure like to hear it. Meanwhile, if I figure it out before I get a response, I'll post my own answer...
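
    A hedged editorial guess, not a verified fix: the XP mixer topology often exposes the recording volume control on the wave-in destination line rather than on the microphone source line, so probing both component types before giving up is a common workaround. A sketch reusing the poster's own helpers (GetVolumeControl, SetVolumeControl, and the interop constants are assumed to be declared as in the linked thread; MIXERLINE_COMPONENTTYPE_DST_WAVEIN is a standard winmm constant that would need to be added):

        // Hedged sketch: try the microphone source line first (typical on W7),
        // then the wave-in destination line (typical on XP capture topologies).
        static bool TrySetMicVolume(int mxid, int percentage)
        {
            int mixer;
            mixerOpen(out mixer, mxid, 0, 0, MIXER_OBJECTF_WAVEIN);
            int type = MIXERCONTROL_CONTROLTYPE_VOLUME;
            int[] candidates =
            {
                MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE,
                MIXERLINE_COMPONENTTYPE_DST_WAVEIN
            };
            foreach (int component in candidates)
            {
                MIXERCONTROL volCtrl;
                int currentVol;
                if (GetVolumeControl(mixer, component, type, out volCtrl, out currentVol))
                {
                    int vVolume = (int)((volCtrl.lMaximum - volCtrl.lMinimum) / 100.0F * percentage);
                    bool ok = SetVolumeControl(mixer, volCtrl, vVolume);
                    mixerClose(mixer);
                    return ok;
                }
            }
            mixerClose(mixer);
            return false;
        }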

    Read the article

  • To Interface or Not?: Creating a polymorphic model relationship in Ruby on Rails dynamically..

    - by Globalkeith
    Please bear with me for a moment as I try to explain exactly what I would like to achieve. In my Ruby on Rails application I have a model called Page. It represents a web page. I would like to enable the user to arbitrarily attach components to the page. Some examples of "components" would be Picture, PictureCollection, Video, VideoCollection, Background, Audio, Form, Comments. Currently I have a direct relationship between Page and Picture, like this:

        class Page < ActiveRecord::Base
          has_many :pictures, :as => :imageable, :dependent => :destroy
        end

        class Picture < ActiveRecord::Base
          belongs_to :imageable, :polymorphic => true
        end

    This relationship enables the user to associate an arbitrary number of Pictures with the page. Now, if I want to provide multiple collections, I would need an additional model:

        class PictureCollection < ActiveRecord::Base
          belongs_to :collectionable, :polymorphic => true
          has_many :pictures, :as => :imageable, :dependent => :destroy
        end

    And alter Page to reference the new model:

        class Page < ActiveRecord::Base
          has_many :picture_collections, :as => :collectionable, :dependent => :destroy
        end

    Now it would be possible for the user to add any number of image collections to the page. However, this is still very static in terms of the :picture_collections reference in the Page model. If I add another "component", for example :video_collections, I would need to declare another reference in Page for that component type. So my question is this: do I need to add a new reference for each component type, or is there some other way? In ActionScript/Java I would declare an interface Component and make all components implement that interface; then I could just have a single attribute :components which contains all of the dynamically associated model objects. This is Rails, and I'm sure there is a great way to achieve this, but it's a tricky one to Google. Perhaps you good people have some wise suggestions. Thanks in advance for taking the time to read and answer this.
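
    An editorial sketch of one common Rails idiom for exactly this interface-like shape (hedged; the Component and element names are illustrative, not from the post): invert the association so the page owns a single list of components, and each component points polymorphically at its concrete part:

        # Hedged sketch: a join model plays the role of the Component interface.
        # 'element' is a hypothetical name backed by element_id + element_type columns.
        class Component < ActiveRecord::Base
          belongs_to :page
          belongs_to :element, :polymorphic => true
        end

        class Page < ActiveRecord::Base
          has_many :components, :dependent => :destroy
        end

        class PictureCollection < ActiveRecord::Base
          has_one :component, :as => :element, :dependent => :destroy
          has_many :pictures, :as => :imageable, :dependent => :destroy
        end

        # page.components then yields every attached part regardless of type:
        # page.components.map(&:element)  # => [a PictureCollection, a Video, ...]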

    Read the article

  • JavaScript: self-made methods not working correctly

    - by hdr
    Hi everyone. I have tried to figure this out for some days now. I tried to use my own object to sort of replace the global object, to reduce problems with other scripts (userscripts, Chrome extensions... that kind of thing). However, I can't get things to work for some reason. I tried some debugging with JSLint, the developer tools included in Google Chrome, Firebug and the integrated script debugger in IE8, but there is no error that explains why it doesn't work at all in any browser I tried. I tried IE 8, Google Chrome 10.0.612.3 dev, Firefox 3.6.13, Safari 5.0.3 and Opera 11. So... here is the code:

    HTML:

        <!DOCTYPE HTML>
        <html manifest="c.manifest">
        <head>
            <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
            <meta charset="utf-8">
            <!--[if IE]>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/chrome-frame/1/CFInstall.min.js"></script>
            <script src="https://ie7-js.googlecode.com/svn/version/2.1(beta4)/IE9.js">IE7_PNG_SUFFIX=".png";</script>
            <![endif]-->
            <!--[if lt IE 9]>
            <script src="js/lib/excanvas.js"></script>
            <script src="https://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
            <![endif]-->
            <script src="js/data.js"></script>
        </head>
        <body>
            <div id="controls">
                <button onclick="MYOBJECTis.next()">Next</button>
            </div>
            <div id="textfield"></div>
            <canvas id="game"></canvas>
        </body>
        </html>

    JavaScript:

        var that = window, it = document, k = Math.floor;

        var MYOBJECTis = {
            aresizer: function () {
                // This method looks like it doesn't work.
                // It should automatically resize all the div elements and the body.
                // JSLint: no error (except for "'window' is not defined.", which is
                // normal since JSLint does not recognize references to the global
                // object like "window" or "self" even if you assume a browser)
                // Firebug: no error
                // Chrome dev tools: no error
                // IE8: that.documentElement.clientWidth is null or not an object
                "use strict";
                var a = that.innerWidth || that.documentElement.clientWidth,
                    d = that.innerHeight || that.documentElement.clientHeight;
                (function () {
                    for (var b = 0, c = it.getElementsByTagName("div"); b < c.length; b++) {
                        c.style.width = k(c.offsetWidth) / 100 * k(a);
                        c.style.height = k(c.offsetHight) / 100 * k(d);
                    }
                }());
                (function () {
                    var b = it.getElementsByTagName("body");
                    b.width = a;
                    b.height = d;
                }());
            },
            next: function () {
                // This method looks like it doesn't work.
                // It should change the text inside a div element.
                // JSLint: no error (except for "'window' is not defined.")
                // Firebug: no error
                // Chrome dev tools: no error
                // IE8: no error (except for being painfully slow)
                "use strict";
                var b = it.getElementById("textfield"), a = [], c;
                switch (c !== typeof Number) {
                case true:
                    a[1] = ["HI"];
                    c = 0;
                    break;
                case false:
                    return Error;
                default:
                    b.innerHtml = a[c];
                    c += 1;
                }
            }
        };

        // auto events
        (function () {
            "use strict";
            that.onresize = MYOBJECTis.aresizer();
        }());

    If anyone can help me out with this, I would very much appreciate it. EDIT: To answer the question of what's not working: no method I showed here works at all, and I don't know the cause of the problem. I also tried to clean up some of the code that most likely has nothing to do with it. Additional information is in the comments inside the code.
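
    A few hedged observations, recorded as an editorial sketch rather than a confirmed answer: window has no documentElement property (it lives on document), which matches the IE8 error quoted in the comments; getElementsByTagName returns a collection that must be indexed; CSS lengths assigned via style need units; DOM property names are case-sensitive (innerHTML, offsetHeight); and that.onresize = MYOBJECTis.aresizer() assigns the function's return value (undefined) instead of the function itself. A corrected aresizer along those lines, keeping the poster's names:

        // Hedged sketch of the likely fixes, using the poster's aliases
        // (that = window, it = document, k = Math.floor).
        var that = window, it = document, k = Math.floor;

        function aresizer() {
            "use strict";
            var a = that.innerWidth || it.documentElement.clientWidth,  // document, not window
                d = that.innerHeight || it.documentElement.clientHeight,
                divs = it.getElementsByTagName("div"),
                body = it.getElementsByTagName("body")[0],              // index the collection
                i;
            for (i = 0; i < divs.length; i++) {                         // index here as well
                divs[i].style.width = k(divs[i].offsetWidth / 100 * a) + "px";   // units required
                divs[i].style.height = k(divs[i].offsetHeight / 100 * d) + "px"; // offsetHeight
            }
            body.style.width = a + "px";
            body.style.height = d + "px";
        }

        that.onresize = aresizer; // assign the function; aresizer() would assign its return value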

    Read the article

  • How to use sound and images in a Java applet?

    - by Click Upvote
    Question 1: How should I structure my project so the sound and image files can be loaded most easily? Right now I have the folder C:\java\pacman, with the sub-directory C:\java\pacman\src containing all the code, and C:\java\pacman\assets containing the images and .wav files. Is this the best structure, or should I put the assets somewhere else?

    Question 2: What's the best way to refer to the images/sounds without using the full path to them, e.g. C:\java\pacman\assets\something.png? If I use the getCodeBase() function, it seems to refer to C:\java\pacman\bin instead of C:\java\pacman\. I want to use a function/class which would work automatically both when I compile the applet into a jar and right now when I test the applet through Eclipse.

    Question 3: How should I load the images/sounds? This is what I'm using now:

    1) For general images:

        import java.awt.Image;

        public Image getImg(String file) {
            // imgDir in this case is a hardcoded string containing
            // "C:\\java\\pacman\\assets\\"
            file = imgDir + file;
            return new ImageIcon(file).getImage();
        }

    The images returned from this function are used in the drawImage method of the Graphics class in the paint method of the applet.

    2) For a buffered image, which is used to get sub-images and load sprites from a sprite sheet:

        public BufferedImage getSheet() throws IOException {
            return ImageIO.read(new File(img.getPath("pacman-sprites.png")));
        }

    Later:

        public void loadSprites() {
            BufferedImage sheet;
            try {
                sheet = getSheet();
                redGhost.setNormalImg(sheet.getSubimage(0, 60, 20, 20));
                redGhost.setUpImg(sheet.getSubimage(0, 60, 20, 20));
                redGhost.setDownImg(sheet.getSubimage(30, 60, 20, 20));
                redGhost.setLeftImg(sheet.getSubimage(30, 60, 20, 20));
                redGhost.setRightImg(sheet.getSubimage(60, 60, 20, 20));
            } catch (IOException e) {
                System.out.println("Couldn't open file!");
                System.out.println(e.getLocalizedMessage());
            }
        }

    3) For sound files:

        import sun.audio.*;
        import java.io.*;

        public synchronized void play() {
            try {
                InputStream in = new FileInputStream(filename);
                AudioStream as = new AudioStream(in);
                AudioPlayer.player.start(as);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
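
    A hedged sketch of the usual applet-friendly answer to all three questions (assuming the assets get packaged into the jar, e.g. under an /assets folder on the classpath): load everything through getResource(), which resolves the same way from Eclipse and from a deployed jar, and prefer java.applet.AudioClip over the unsupported internal sun.audio package:

        // Hedged sketch: classpath-relative loading. The /assets prefix and class
        // name are illustrative; they assume the build copies assets into the jar.
        import java.applet.Applet;
        import java.applet.AudioClip;
        import java.awt.Image;
        import java.awt.image.BufferedImage;
        import java.io.IOException;
        import javax.imageio.ImageIO;
        import javax.swing.ImageIcon;

        public class Resources {
            public static Image getImg(String name) {
                // Works from the IDE and from inside a jar alike.
                return new ImageIcon(Resources.class.getResource("/assets/" + name)).getImage();
            }

            public static BufferedImage getSheet(String name) throws IOException {
                // ImageIO.read(URL) streams straight out of the jar; no File needed.
                return ImageIO.read(Resources.class.getResource("/assets/" + name));
            }

            public static AudioClip getClip(String name) {
                // Supported replacement for sun.audio.AudioPlayer.
                return Applet.newAudioClip(Resources.class.getResource("/assets/" + name));
            }
        }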

    Read the article

  • php download file slows

    - by hobbywebsite
    OK, first off, thanks for your time; I wish I could give more than one point for this question. Problem: I have some music files on my site (.mp3) and I am using a PHP file to increment a download counter in the database and to point to the file to download. For some reason this method starts at 350 kb/s, then slowly drops to 5 kb/s, at which point the file says it will take 11 hrs to complete. BUT if I go directly to the .mp3 file, my browser brings up a player, and then I can right-click and "save as", which works fine; the download completes in 3 mins. (Yes, both during the same time, for those who are thinking it's my connection or ISP; and it's not my server either.) The only thing I've been playing around with recently is the php.ini and .htaccess files. So, without further ado, the PHP file, php.ini, and the .htaccess:

    download.php

        <?php
        include("config.php");
        include("opendb.php");

        $filename = 'song_name';
        $filedl = $filename . '.mp3';

        $query = "UPDATE songs SET song_download=song_download+1 WHERE song_linkname='$filename'";
        mysql_query($query);

        header('Content-Disposition: attachment; filename=' . basename($filedl));
        header('Content-type: audio/mp3');
        header('Content-Length: ' . filesize($filedl));
        readfile('/music/' . $filename . '/' . $filedl);

        include("closedb.php");
        ?>

    php.ini

        register_globals = off
        allow_url_fopen = off
        expose_php = Off
        max_input_time = 60
        variables_order = "EGPCS"
        extension_dir = ./
        upload_tmp_dir = /tmp
        precision = 12
        SMTP = relay-hosting.secureserver.net
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=,fieldset="
        ; Defines the default timezone used by the date functions
        date.timezone = "America/Los_Angeles"

    .htaccess

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !^(www.MindCollar.com)?$ [NC]
        RewriteRule (.*) http://www.MindCollar.com/$1 [R=301,L]

        <IfModule mod_rewrite.c>
        RewriteEngine On
        ErrorDocument 404 /errors/404.php
        ErrorDocument 403 /errors/403.php
        ErrorDocument 500 /errors/500.php
        </IfModule>

        Options -Indexes
        Options +FollowSymlinks

        <Files .htaccess>
        deny from all
        </Files>
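
    Two hedged guesses at the slowdown, not verified against this server: filesize($filedl) is computed on the bare file name while readfile() streams from /music/..., so unless a copy also sits next to the script, the Content-Length header comes out empty or wrong, and a wrong length can stall clients mid-transfer; and readfile() of a large file through PHP's output buffering can crawl. A minimal chunked sketch, using the same path for the size and the read:

        <?php
        // Hedged sketch: one $path for both filesize() and the read, streamed in
        // chunks with an explicit flush so output buffering cannot stall the client.
        $path = '/music/' . $filename . '/' . $filedl;

        header('Content-Disposition: attachment; filename=' . basename($path));
        header('Content-Type: audio/mpeg');
        header('Content-Length: ' . filesize($path));

        set_time_limit(0);           // large downloads can outlive the default limit
        $fp = fopen($path, 'rb');
        while (!feof($fp)) {
            echo fread($fp, 8192);   // 8 KB chunks
            flush();                 // hand each chunk to the web server immediately
        }
        fclose($fp);
        ?>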

    Read the article

  • Can I take the voice data (f.e. in mp3 format) from speech recognition? [closed]

    - by Ersin Gulbahar
    Possible Duplicate: Android: Voice Recording and saving audio

    What I mean is: I use the voice recognition classes on Android, and the recognition succeeds. But I want the actual voice data, not just the recognized words. For example, I say 'teacher' and Android reports that I said 'teacher'. OK, that's good, but I also want the recording of my voice that contains 'teacher'. Where is it? Can I take it and save it to another location? I use this class for speech-to-text:

        package net.viralpatel.android.speechtotextdemo;

        import java.util.ArrayList;

        import android.app.Activity;
        import android.content.ActivityNotFoundException;
        import android.content.Intent;
        import android.os.Bundle;
        import android.speech.RecognizerIntent;
        import android.view.Menu;
        import android.view.View;
        import android.widget.ImageButton;
        import android.widget.TextView;
        import android.widget.Toast;

        public class MainActivity extends Activity {

            protected static final int RESULT_SPEECH = 1;

            private ImageButton btnSpeak;
            private TextView txtText;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                txtText = (TextView) findViewById(R.id.txtText);
                btnSpeak = (ImageButton) findViewById(R.id.btnSpeak);

                btnSpeak.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, "en-US");
                        try {
                            startActivityForResult(intent, RESULT_SPEECH);
                            txtText.setText("");
                        } catch (ActivityNotFoundException a) {
                            Toast t = Toast.makeText(getApplicationContext(),
                                    "Ops! Your device doesn't support Speech to Text",
                                    Toast.LENGTH_SHORT);
                            t.show();
                        }
                    }
                });
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }

            @Override
            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                super.onActivityResult(requestCode, resultCode, data);
                switch (requestCode) {
                case RESULT_SPEECH: {
                    if (resultCode == RESULT_OK && null != data) {
                        ArrayList<String> text =
                                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                        txtText.setText(text.get(0));
                    }
                    break;
                }
                }
            }
        }

    Thanks.
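
    A hedged note: the stock RecognizerIntent flow returns only the recognized text, not the captured audio, so a common workaround is to record the audio yourself (in parallel, or instead) with MediaRecorder and feed recognition separately. A minimal recording sketch; the class name and output path are illustrative, and RECORD_AUDIO permission in the manifest is assumed:

        // Hedged sketch: capture the spoken audio yourself with MediaRecorder,
        // since RecognizerIntent does not hand back the recording.
        import android.media.MediaRecorder;
        import java.io.IOException;

        public class VoiceCapture {
            private MediaRecorder recorder;

            public void start(String outputPath) throws IOException {
                // e.g. outputPath = getFilesDir() + "/speech.3gp" (illustrative)
                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setOutputFile(outputPath);
                recorder.prepare();
                recorder.start();
            }

            public void stop() {
                recorder.stop();    // the file at outputPath now holds the spoken audio
                recorder.release();
                recorder = null;
            }
        }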

    Read the article

< Previous Page | 229 230 231 232 233 234 235 236 237 238 239 240  | Next Page >