Search Results

Search found 1357 results on 55 pages for 'mp3'.

Page 49/55 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • Play audio file data - Spring MVC

    - by Vijay Veeraraghavan
    In my web application, users upload various audio clips that are stored in a BLOB column in the database. The audio files are low-bitrate WAV files. The clips are secured: a user can see only the clips he has uploaded. Instead of having the user download a clip and play it in his own player, I need it to be streamed and played in the web page itself. In the JSP I use the <audio> tag, with the source mapped to the controller's mapping URL:

        <td>
            <audio controls><source src="recfile/${au.id}" type="audio/mpeg" /></audio>
        </td>

    Here recfile is the request mapping and au.id is the audio id. In the controller I process the request like this:

        @RequestMapping(value = "/recfile/{id}", method = RequestMethod.GET,
                produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE })
        public HttpEntity<byte[]> downloadRecipientFile(@PathVariable("id") int id,
                ModelMap model, HttpServletResponse response) throws IOException, ServletException {
            LOGGER.debug("[GroupListController downloadRecipientFile]");
            VoiceAudioLibrary dGroup = audioClipService.findAudioClip(id);
            if (dGroup == null || dGroup.getAudioData() == null || dGroup.getAudioData().length <= 0) {
                throw new ServletException("No clip found/clip has no data, id=" + id);
            }
            HttpHeaders header = new HttpHeaders();
            // I tried this too:
            // header.setContentType(new MediaType("audio", "mp3"));
            header.setContentType(new MediaType("audio", "vnd.wave"));
            header.setContentLength(dGroup.getAudioData().length);
            return new HttpEntity<byte[]>(dGroup.getAudioData(), header);
        }

    When the JSP loads, the controller gets the request and serves back the audio data fetched from the database, and the JSP shows the player with its controls. But when I press play, nothing happens. Why is that? Am I missing anything in the configuration? Am I doing it right?
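
    A first thing worth checking is the MIME-type mismatch: the clips are WAV, but the <source> tag advertises audio/mpeg while the controller answers with application/octet-stream (the produces attribute) and audio/vnd.wave. A hedged sketch of a consistent setup, keeping the names from the code above, is to advertise the same type everywhere, for example audio/wav (or audio/x-wav, depending on the browser):

        <td>
            <audio controls><source src="recfile/${au.id}" type="audio/wav" /></audio>
        </td>

        // In the controller: drop the octet-stream "produces" value and send the real type.
        HttpHeaders header = new HttpHeaders();
        header.setContentType(MediaType.parseMediaType("audio/wav"));
        header.setContentLength(dGroup.getAudioData().length);
        return new HttpEntity<byte[]>(dGroup.getAudioData(), header);

    Browser support for WAV inside <audio> is uneven, so it is also worth testing the same page in a browser known to play WAV natively, or transcoding the clips to MP3 once at upload time.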

    Read the article

  • How to use HTTP Live Streaming protocol in iPhone SDk 3.0

    - by Pugal Devan
    Hi guys, I have developed an iPhone application and submitted it to the App Store, but it was rejected on the following grounds: "Thank you for submitting your yyyyyyyy application. We have reviewed your application and have determined that it cannot be posted to the App Store at this time because it is not using the HTTP Live Streaming protocol to broadcast streaming video. HTTP Live Streaming is required when streaming video feeds over the cellular network, in order to have an optimal user experience and utilize cellular best practices. This protocol automatically determines bandwidth available to users and adjusts the bandwidth appropriately, even as bandwidth streams change. This allows you the flexibility to have as many streams as you like, as long as 64 kbps is set as the baseline feed." In my app I have to stream prerecorded m4v and mp3 files from my server. I used MPMoviePlayerController to stream and play those video/audio files. How do I implement the HTTP Live Streaming protocol in my app? Can I also get some sample code? Thanks in advance!
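
    For what it's worth, MPMoviePlayerController itself can play HTTP Live Streaming content: the work is on the server side, where each prerecorded file is segmented into an .m3u8 index plus media segments (Apple's HTTP Live Streaming tools or ffmpeg are the usual options), with a 64 kbps audio-only stream as the baseline variant Apple asks for. A minimal client-side sketch, where the playlist URL is only a placeholder:

        NSURL *url = [NSURL URLWithString:@"http://example.com/streams/myvideo/index.m3u8"];
        MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:url];
        [player play];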

    Read the article

  • Bash scripting problem

    - by komidore64
    I'm writing a bash script to sync my iTunes music directory to a directory on a removable hard drive. The script works fine when there is absolutely nothing in the folder on the external hard drive. Once all files have been copied to the external drive, the script begins to act strange: even though I just synced everything over, it proceeds to recopy certain files. After the initial sync, it chooses the same files to resync on each consecutive run, without any changes having been made to the source directory.

        #!/bin/bash
        # shell script to sync music with gigabeat and/or firewire drive

        musicdir="/Users/komidore64/Music/iTunes/iTunes Media/Music"
        gigadir="/Volumes/GIGABEAT/music"
        # fwdir="/Volumes/"

        remove() {
            find "$1" \
                ! \( -name "*.wav" \
                -o -name "*.ogg" \
                -o -name "*.flac" \
                -o -name "*.aac" \
                -o -name "*.mp3" \
                -o -name "*.m4a" \
                -o -name "*.wma" \
                -o -name "*.m4p" \
                -o -name "*.ape" \
                -o -type d \) \
                -exec rm -i {} \;
        }

        if [ $# == 0 ]; then
            echo "no device argument present"
            echo "specify '-g' for gigabeat"
            echo "or '-f' for firewire drive"
        else
            remove "$musicdir"
            while [ $1 ]; do
                case $1 in
                    -g | --gigabeat )
                        rsync --archive --verbose --delete "$musicdir/" "$gigadir" ;;
                    -f | --firewire )
                        rsync --archive --verbose --delete "$musicdir/" "$fwdir"
                esac
                shift
            done
            echo "music synced"
        fi
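
    If the GIGABEAT drive is FAT-formatted, the usual reason for the re-copying is that FAT stores modification times with 2-second resolution and cannot store ownership or permissions, so rsync --archive keeps seeing some files as changed. A hedged sketch of the common workarounds (pick one):

        # Allow one second of timestamp slack, the typical fix for FAT targets.
        rsync --recursive --times --modify-window=1 --verbose --delete "$musicdir/" "$gigadir"

        # Or compare by size only, if timestamps on the target are unreliable altogether.
        rsync --recursive --size-only --verbose --delete "$musicdir/" "$gigadir"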

    Read the article

  • How to set the PlayList Index for Mediaplayer(ExpressionMediaPlayer:Mediaplayer)

    - by Subhen
    Hi, I have a MediaPlayer control on my XAML page, like this:

        <CustomMediaElement:CustomMediaPlayer x:Name="custMediaElement" VerticalAlignment="Center" Width="600" Height="300" Visibility="Collapsed" />

    I am able to set the playlist using a setPlayList() method like this:

        private void setPlayList()
        {
            IEnumerable eLevelData = null;
            eLevelData = pMainPage.GetDataFromDictonary(pMainPage.strChildFolderID);
            foreach (RMSMedia folderItems in eLevelData)
            {
                string strmediaURL = folderItems.strMediaFileName;
                if (hasExtension(strmediaURL) == "wmv" || hasExtension(strmediaURL) == "mp4" || hasExtension(strmediaURL) == "mp3" || hasExtension(strmediaURL) == "mpg")
                {
                    PlaylistItem playListItem = new PlaylistItem();
                    string thumbSource = folderItems.strAlbumcoverImage;
                    playListItem.MediaSource = new Uri(strmediaURL, UriKind.RelativeOrAbsolute);
                    playListItem.Title = folderItems.strAlbumName;
                    if (!string.IsNullOrEmpty(thumbSource))
                        playListItem.ThumbSource = new Uri(thumbSource, UriKind.RelativeOrAbsolute);
                    playList.Items.Add(playListItem);
                }
            }
            custMediaElement.Playlist = playList;
        }

    Now I want to change the playlist index of the MediaPlayer when the user clicks on a ListBox item; the ListBox contains the titles of all the songs. When the user clicks on the third song title in the list, the MediaPlayer should play the third song; if the user clicks on the 7th title, it should play the 7th song. My goal is to pick up the selected index from the ListBox and assign it to the playlist index of the MediaPlayer. When I add a watch to playList I can see

        playList, Items, [0]  PlaylistIndex 1
        playList, Items, [1]  PlaylistIndex 2

    but when I try to set it from code, the same PlaylistIndex property seems to be unavailable. Please help. Thanks, Subhen
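
    One approach, assuming the titles ListBox is populated in the same order as playList.Items, is to drive the player from the ListBox's SelectionChanged event and call GoToPlaylistItem (the method the player exposes for jumping to an item) rather than trying to set PlaylistIndex directly. The control and handler names below are assumptions:

        private void songListBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            int index = songListBox.SelectedIndex;
            // Guard against the "no selection" state and an index past the end of the playlist.
            if (index >= 0 && custMediaElement.Playlist != null
                && index < custMediaElement.Playlist.Items.Count)
            {
                custMediaElement.GoToPlaylistItem(index);
            }
        }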

    Read the article

  • Reading a WAV file into VST.Net to process with a plugin

    - by Paul
    Hello, I'm trying to use the VST.Net and NAudio frameworks to build an application that processes audio using a VST plugin. Ideally, the application should load a wav or mp3 file, process it with the VST, and then write a new file. I have done some poking around with the VST.Net library and was able to compile and run the samples (specifically the VST Host one). What I have not figured out is how to load an audio file into the program and have it write a new file back out. I'd like to be able to configure the properties for the VST plugin via C#, and be able to process the audio with 2 or more consecutive VSTs. Using NAudio, I was able to create this simple script to copy an audio file. Now I just need to get the output from the WaveFileReader into the VST.Net framework somehow.

        private void processAudio()
        {
            reader = new WaveFileReader("c:/bass.wav");
            writer = new WaveFileWriter("c:/bass-copy.wav", reader.WaveFormat);
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                writer.WriteData(buffer, 0, read);
            }
            textBox1.Text = "done";
            reader.Close();
            reader.Dispose();
            writer.Close();
            writer.Dispose();
        }

    Please help!! Thanks.

    References: http://vstnet.codeplex.com (VST.Net), http://naudio.codeplex.com (NAudio)
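
    Without asserting the exact VST.Net host interface (the buffer types differ between versions), the NAudio side generally comes down to converting the 16-bit PCM bytes that WaveFileReader returns into normalized float buffers for the plugin, and converting the processed floats back to bytes before handing them to WaveFileWriter. A hedged sketch of just that conversion step:

        // Convert one block of 16-bit PCM bytes into floats in the range -1..1.
        private static float[] ToFloatSamples(byte[] buffer, int bytesRead)
        {
            var samples = new float[bytesRead / 2];
            for (int i = 0; i < samples.Length; i++)
            {
                short sample = BitConverter.ToInt16(buffer, i * 2);
                samples[i] = sample / 32768f;
            }
            return samples;
        }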

    Read the article

  • SoundManager2 has irregular latency

    - by Stefan Monov
    I'm playing some notes at regular intervals. Each one is delayed by a random number of milliseconds, creating a jarring, irregular effect. How do I fix it? Note: I'm OK with some latency, just as long as it's consistent. Answers of the type "implement your own small SoundManager2 replacement, optimized for timing-sensitive playback" are OK, if you know how to do that :) but I'm trying to avoid rewriting my whole app in Flash for now. For an example of an app with zero audible latency, see the Flash-based ToneMatrix. Testcase (see it here live or get it in a zip):

        <html>
        <head>
            <title></title>
            <script type="text/javascript" src="http://www.schillmania.com/projects/soundmanager2/script/soundmanager2.js">
            </script>
            <script type="text/javascript">
                soundManager.url = '.'
                soundManager.flashVersion = 9
                soundManager.useHighPerformance = true
                soundManager.useFastPolling = true
                soundManager.autoLoad = true

                function recur(func, delay) {
                    window.setTimeout(function() {
                        recur(func, delay);
                        func();
                    }, delay)
                }

                soundManager.onload = function() {
                    var sound = soundManager.createSound("test", "test.mp3")
                    recur(function() { sound.play() }, 300)
                }
            </script>
        </head>
        <body>
        </body>
        </html>
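
    Part of the irregularity may be coming from the timer rather than from SoundManager2: chained setTimeout calls drift by a variable amount on each tick. A hedged sketch of a self-correcting scheduler (it will not remove the Flash decode latency, but it keeps the tick grid even, which is the "consistent latency" part):

        function metronome(func, interval) {
            var next = new Date().getTime() + interval;
            function tick() {
                func();
                next += interval;
                // Schedule relative to the ideal grid rather than relative to "now".
                window.setTimeout(tick, Math.max(0, next - new Date().getTime()));
            }
            window.setTimeout(tick, interval);
        }

        // usage: metronome(function() { sound.play() }, 300)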

    Read the article

  • How to parse the "<media:group>" using feedparser?

    - by Wayle.C
    The RSS file is shown below; I want to get the content of the media:group section. I checked the feedparser documentation, but it doesn't seem to mention this. How do I do it? Any help is appreciated.

    (Channel details: XYZ InfoX: Special hello, http://www1.XYZInfoX.com/learninghello/home, hello, en, Wed, 17 Mar 2010 08:50:06 GMT, 2010-03-17T08:50:06Z, Voice of America, http://www1.XYZInfoX.com/learninghello, http://media.XYZInfoX.com/designimages/XYZRSSIcon.gif. The item of interest, with the media:group part highlighted, is:)

        <item>
            <title>Who Were the Deadliest Gunmen of the Wild West?</title>
            <link>http://www1.XYZInfoX.com/learninghello/home/Deadliest-Gunmen-of-the-Wild-West-87826807.html</link>
            <description>
                The story of two of them: "Killin'" Jim Miller was an outlaw, "Texas" John Slaughter was a lawman | EXPLORATIONS
            </description>
            <pubDate>Wed, 17 Mar 2010 00:38:48 GMT</pubDate>
            <guid isPermaLink="false">87826807</guid>
            <dc:creator></dc:creator>
            <dc:date>2010-03-17T00:38:48Z</dc:date>
            <media:group>
                <media:content url="http://media.XYZInfoX.com/images/archives_peace_comm_480_16mar_se.jpg" medium="image" isDefault="true" height="300" width="480" />
                <media:content url="http://media.XYZInfoX.com/images/archives_peace_comm_230_16mar_se_edited-1.jpg" medium="image" isDefault="false" height="230" width="230" />
                <media:content url="http://media.XYZInfoX.com/images/tex_trans_lawmans_230_16mar10_se.jpg" medium="image" isDefault="false" height="230" width="230" />
                <media:content url="http://www.XYZInfoX.com/MediaAssets2/learninghello/dalet/se-exp-outlaws-part2-17mar2010.Mp3" type="audio/mpeg" medium="audio" isDefault="false" />
            </media:group>
        </item>
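
    In the feedparser versions I have used, Media RSS elements are parsed even though the documentation barely mentions them: the media:content children (including the ones nested inside media:group) show up as a list of dicts on the entry. A hedged sketch:

        import feedparser

        d = feedparser.parse("http://www1.XYZInfoX.com/learninghello/home")  # feed URL, adjust as needed
        for entry in d.entries:
            for media in entry.get("media_content", []):
                # Each dict carries the attributes of one <media:content> element.
                print(media.get("url"), media.get("medium"), media.get("type"))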

    Read the article

  • What is the best way to stream an audio file to website users/listeners?

    - by Naveen Chamikara Gamage
    I'm developing a music site that will stream audio files stored on a server to users; the audio files will be played through a Flash player placed in a web page. As I've heard, I need to use a streaming media server for streaming audio files (around 2 MB to 3 MB in size). Do I need to use one? I found some streaming media server software such as http://www.icecast.org, but according to its documentation it is used for streaming radio stations and live streams, whereas I just need to stream audio files quickly and at low bandwidth, with good quality. I've heard that the audio files need to be encoded first, then sent to listeners, and decoded again on their end. Is that true? How can I do that? If I need a special server, where should I host my files? Any good hosting providers? If I host the audio files on a normal web server, it will use HTTP or TCP to deliver them to users/listeners, but I've read that HTTP and TCP are not good choices for multimedia purposes such as streaming audio and video, and that they are meant for delivering HTML and the like; I've also read that I should use RTSP or UDP for streaming audio files instead. What should I use? I know that MP3 has much better quality than the other formats, but it also makes the audio files large. Which format should I use for the audio files? Most of the good-quality audio files are more than 7 MB, so I'm planning to convert them myself using some software, so that I get smaller files with a reasonable level of quality. If I'm converting my audio files, what is a good bitrate to use? Are there any well-known tools for converting audio files while keeping the quality at a good level? Note: I know I won't need complex infrastructure at the beginning of the site, but I want to know the best approaches, like the ones soundcloud.com uses.

    Read the article

  • PInvokeStackImbalance was detected when manually running Xna Content pipeline

    - by Miau
    So I'm running this code:

        static readonly string[] PipelineAssemblies =
        {
            "Microsoft.Xna.Framework.Content.Pipeline.FBXImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.XImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.TextureImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.EffectImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.XImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.AudioImporters" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.VideoImporters" + XnaVersion,
        };

        // ... more code in between ...

        // Register any custom importers or processors.
        foreach (string pipelineAssembly in PipelineAssemblies)
        {
            _buildProject.AddItem("Reference", pipelineAssembly);
        }

        // ... more code in between ...

        var execute = Task.Factory.StartNew(() => submission.ExecuteAsync(null, null), cancellationTokenSource.Token);
        var endBuild = execute.ContinueWith(ant => BuildManager.DefaultBuildManager.EndBuild());
        endBuild.Wait();

    Basically I'm trying to build the content project from code. This works well if you remove XImporter, AudioImporters and VideoImporters, but with those included I get the following error:

        PInvokeStackImbalance was detected
        Message: A call to PInvoke function 'Microsoft.Xna.Framework.Content.Pipeline!Microsoft.Xna.Framework.Content.Pipeline.UnsafeNativeMethods+AudioHelper::GetFormatSize' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature.

    Things I've tried:

        - turning unsafe code on
        - adding a lot of logging (the DLL is found, in case you are wondering)
        - different wav and Mp3 files (just in case)

    Will update...

    Read the article

  • Can a single developer still make money with shareware?

    - by Wouter van Nifterick
    I'm wondering if the shareware concept is dead nowadays. Like most developers, I've built up quite a collection of self-made tools and code libraries that help me to be productive. Some examples to give you an idea of the type of thing I'm talking about:

        - A self-learning program that renames and orders all my mp3 files and adds information to the id3 tags;
        - A Delphi component that wraps the Google Maps API;
        - A text-to-singing-voice converter for musical purposes;
        - A program to control a music synthesizer;
        - A GPS-log <-> KML <-> ESRI-shapefile converter.

    I've got one of these already freely downloadable on my website, and on average it gets downloaded about 150 times per month. Let's say I'd start charging 15 euros for it; would there actually be people who buy it? How many? What would it depend on? If I could get some money for some of these, I'd finish them up a bit and put them online, but without that I probably won't bother. Maintaining a SourceForge project is not very rewarding by itself. Is there anyone who is making money with shareware? How much? Any tips?

    Read the article

  • Testing background audio in the simulator

    - by Cactuar
    I'm experimenting with the new background audio service in iPhone OS 4.0, but I can't get it to work in the simulator. According to the page "iPhone Application Programming Guide: Executing Code in the Background", it seems that all I have to do is add a UIBackgroundModes key to my Info.plist file, with an array containing "audio", and the audio my application plays should automatically continue when I switch to another app. I have done this, but the audio still pauses when I switch to another app; when I switch back, it continues where it left off. This is the code I'm using to play the sound:

        NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/audio.mp3", [[NSBundle mainBundle] resourcePath]]];
        NSError *error;
        audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
        audioPlayer.numberOfLoops = -1;
        if (audioPlayer == nil)
            NSLog(@"%@", [error userInfo]);
        else
            [audioPlayer play];

    Has anyone gotten this to work? Could it be that it would work on an actual device and it's just a problem with the simulator? I'm a bit hesitant to install 4.0 on my phone since I've heard it's still very buggy. Wish I had another device to use only for development.
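
    One thing that is easy to miss, besides the UIBackgroundModes entry, is the audio session category: background playback only continues when the session uses the playback category. A hedged sketch (requires linking AVFoundation):

        #import <AVFoundation/AVFoundation.h>

        // Before starting playback: ask for the "playback" category and activate the session.
        NSError *sessionError = nil;
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
        [[AVAudioSession sharedInstance] setActive:YES error:&sessionError];

    Also, backgrounding behaviour in the early 4.0 simulator did not always match the device, so a device test is worth doing before concluding the code is wrong.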

    Read the article

  • Can the Flash CS4 [embed] tag be made to export assets to frame 2 rather than frame 1?

    - by Tim Knauf
    We're working on a Flash CS4 project where the main .fla file has ballooned in size and 'Publish' is taking forever. I suspect a large amount of the size (and at least some of the compile time) is due to the quantity of audio symbols in the library. I would love to remove this unnecessary bloat from the .fla file. I've experimented with removing an audio symbol from the library and using the [Embed] metadata tag instead, like so:

        [Embed(source="audio/music/EndOfLevelDitty.mp3")]
        public var EndOfLevelDitty:Class

    The resulting published file works perfectly, but there is a problem. Our game uses a preloader on the first frame of the timeline, so all other classes need to be exported in frame 2 (as set in Publish Settings > ActionScript 3.0 Settings). So a size report normally begins like this:

        Frame #  Frame Bytes  Total Bytes  Scene
        -------  -----------  -----------  ----------------
              1       284515       284515  Scene 1
              2      5485305      5769820  (AS 3.0 Classes Export Frame)

    However, if I use an [Embed] tag on a small sound, my size report is now:

        Frame #  Frame Bytes  Total Bytes  Scene
        -------  -----------  -----------  ----------------
              1       363320       363320  Scene 1
              2      5407240      5770560  (AS 3.0 Classes Export Frame)

    As you can see, the embedded sound has been exported into frame 1 rather than frame 2. If I were to embed all sounds in this manner, the size of frame 1 would grow to be huge, and users would be looking at a white screen for ages before the preloader frame even loaded. So my question is this: can I use an [Embed] tag but have the embedded asset export in frame 2 instead of frame 1? Project constraints: our team composition means we can't change to pure Flex at this stage, and the compiled .swf needs to be 'all in one', so we can't split the preloader into a separate file and we can't access external resources. Edit: I'd also settle for having the audio in an embedded library SWC, but there seems to be no way to make that embed in frame 2 either; it always ends up in frame 1.

    Read the article

  • SoundManager / Jquery : Get SoundID sID

    - by j-man86
    So I am trying to access a jQuery SoundManager variable from one script (wpaudio.js, from the wp-audio plugin) inside another (init.js, my own JavaScript). I am creating an alternate pause/play button higher up on the page and need to resume the current soundID, which is contained as part of a class name in the DOM. Here is the code that creates that class name in wpaudio.js:

        function wpaButtonCheck() {
            if (!this.playState || this.paused)
                jQuery('#' + this.sID + '_play').attr('src', wpa_url + '/wpa_play.png');
            else
                jQuery('#' + this.sID + '_play').attr('src', wpa_url + '/wpa_pause.png');
        }

    Here is the output:

        <img src="http://24.232.185.173/wordpress/wp-content/plugins/wpaudio-mp3-player/wpa_play.png" class="wpa_play" id="wpa0_play">

    where wpa0 would be the sID of the sound I need. My current script in init.js is:

        $('.mixesSidebar #currentSong .playBtn').toggle(function() {
            soundManager.pauseAll();
            $(this).addClass('paused');
        }, function() {
            soundManager.resumeAll();
            $(this).removeClass('paused');
        });

    I need to change resumeAll to resume(this.sID), but I need to somehow store the sID on click and call it in the above function. Alternately, I think a regular expression could get the class name of the current play button and either parse the string up to the "_play" or use a trim function to get rid of "_play", but I'm not sure how to do this. Thanks for your help!
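
    Since the sID is the id attribute of the wp-audio button minus the "_play" suffix, one hedged approach is to read that id at click time and pass it to soundManager.pause()/resume(), both of which accept a sound id. The selector for "the button belonging to the current song" below is an assumption; adjust it to however your markup marks the current track:

        $('.mixesSidebar #currentSong .playBtn').toggle(function () {
            var sid = $('.wpa_play').attr('id').replace(/_play$/, '');  // e.g. "wpa0_play" -> "wpa0"
            soundManager.pause(sid);
            $(this).addClass('paused');
        }, function () {
            var sid = $('.wpa_play').attr('id').replace(/_play$/, '');
            soundManager.resume(sid);
            $(this).removeClass('paused');
        });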

    Read the article

  • Flash AS3 load file xml

    - by Elias
    Hello, I'm just trying to load an XML file which can be anywhere on the hard disk. This is what I have done to browse for it, but later, when I try to load the file, it will only look in the same path as the SWF file. Here is the code:

        package {
            import flash.display.Sprite;
            import flash.events.*;
            import flash.net.*;

            public class cargadorXML extends Sprite {
                public var cuadro:Sprite = new Sprite();
                public var file:FileReference;
                public var req:URLRequest;
                public var xml:XML;
                public var xmlLoader:URLLoader = new URLLoader();

                public function cargadorXML() {
                    cuadro.graphics.beginFill(0xFF0000);
                    cuadro.graphics.drawRoundRect(0,0,100,100,10);
                    cuadro.graphics.endFill();
                    cuadro.addEventListener(MouseEvent.CLICK,browser);
                    addChild(cuadro);
                }

                public function browser(e:Event) {
                    file = new FileReference();
                    file.addEventListener(Event.SELECT,bien);
                    file.browse();
                }

                public function bien(e:Event) {
                    xmlLoader.addEventListener(Event.COMPLETE, loadXML);
                    req=new URLRequest(file.name);
                    xmlLoader.load(req);
                }

                public function loadXML(e:Event) {
                    xml=new XML(e.target.data);
                    //xml.name=file.name;
                    trace(xml);
                }
            }
        }

    When I open an XML file that isn't in the same directory as the SWF, it gives me a file-not-found error. Is there anything I can do? For example, for MP3 there is a special class for loading the file, see http://www.flexiblefactory.co.uk/flexible/?p=46. Thanks.
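
    If the SWF can target Flash Player 10 or later, FileReference.load() reads the selected file directly, so there is no need to build a URLRequest relative to the SWF at all. A hedged sketch of the two handlers:

        public function bien(e:Event):void {
            file.addEventListener(Event.COMPLETE, fileLoaded);
            file.load();  // Flash Player 10+: loads the file the user picked, wherever it is on disk
        }

        public function fileLoaded(e:Event):void {
            xml = new XML(file.data.readUTFBytes(file.data.length));
            trace(xml);
        }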

    Read the article

  • How to make designer generated .Net application settings portable

    - by Ville Koskinen
    Hello, I've been looking at modifying the source of the Doppler podcast aggregator, with the goal of being able to run the program directly from my MP3 player. Doppler stores application settings using a Visual Studio designer-generated Settings class, which by default serializes user settings to the user's home directory. I'd like to change this so that all settings are stored in the same directory as the exe. It seems this would be possible by creating a custom provider class which inherits from SettingsProvider. Has anyone created such a provider and would like to share code?

    Update: I was able to get a custom settings provider nearly working by using this MSDN sample, i.e. with simple inheritance. I was initially confused, as the Windows Forms designer stopped working until I did this trick suggested at CodeProject:

        internal sealed partial class Settings
        {
            private MySettingsProvider settingsprovider = new MySettingsProvider();

            public Settings()
            {
                foreach (SettingsProperty property in this.Properties)
                {
                    property.Provider = settingsprovider;
                }
                ...

    The program still starts with window size 0;0, though. Anyone with any insight into this? Why the need to assign the provider at runtime, instead of using attributes as suggested by MSDN? Why the changes in how the default settings are passed to the application with the default settings provider vs. the custom one?

    Read the article

  • How to write C++ audio processing applications?

    - by cesko82
    Hi everyone, I'm an Electronics and Telecommunications student, close to graduation. I'm going to work on a project that involves my knowledge of DSP, music and audio in general. I already know all the basic mathematical tools and everything I need to handle it, such as the FFT, circular convolution, etc. I want to learn C++ programming basically for one reason: it's very important in the professional world! And I think it's one of the languages most used to write applications that work with audio, especially when it comes to real-time processing. OK, after this small introduction, I would first like to know which libraries are most used for audio processing in C++. I looked around the web for a while but couldn't find a lot of working material. (I work under Linux with the Eclipse CDT environment.) Then I would like to know if there are good sources for learning how to write some working code, for example how to write a simple low-pass filter. For now I will not write real-time applications; I would like to start from processing a WAV file, or even better an MP3 file, so basically from vectors of samples. Let's say that for now I would basically like to extract the waveform from an audio file and save it to a thumbnail or a PNG image. OK, I think that's all I need for now. Any ideas, advice, libraries, books, or interesting sources about this? Thanks a lot in advance for any kind of answer. Giovanni.
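
    For decoding, libsndfile is a common choice for WAV (and mpg123 or libmad for MP3); once the audio is a vector of samples, a first filter is only a few lines. A minimal sketch of a one-pole low-pass filter over normalized samples, independent of any library:

        #include <cstddef>
        #include <vector>

        // Simple one-pole low-pass filter applied in place to a mono buffer.
        void lowPass(std::vector<float>& samples, float cutoffHz, float sampleRate)
        {
            const float pi = 3.14159265f;
            const float dt = 1.0f / sampleRate;
            const float rc = 1.0f / (2.0f * pi * cutoffHz);
            const float a  = dt / (rc + dt);   // smoothing coefficient in (0, 1)
            float previous = 0.0f;
            for (std::size_t i = 0; i < samples.size(); ++i)
            {
                previous   = previous + a * (samples[i] - previous);
                samples[i] = previous;
            }
        }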

    Read the article

  • iPhone - AVAudioPlayer crashes

    - by user2779450
    I have an app that uses an AVAudioPlayer for two things: one is to play an explosion sound when a UIImageView collision is detected, and the other is to play a laser sound when a button is pressed. I declared the audio player in the .h file, and I create it each time the button is clicked, like this:

        NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"/lazer" ofType:@"mp3"]];
        NSError *error;
        audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
        if (error) {
            NSLog(@"Error in audioPlayer: %@", [error localizedDescription]);
        } else {
            [audioPlayer prepareToPlay];
        }
        [audioPlayer play];

    This works fine, but after many uses in the game, the audio stops playing when I hit the button, and when a collision is detected, the game crashes. Here is my crash log:

        2013-09-18 18:09:19.618 BattleShip[506:907] 18:09:19.617 shm_open failed: "AppleAudioQueue.41.2619" (23) flags=0x2 errno=24
        (lldb)

    Suggestions? Could it have something to do with repeatedly creating an audio player? Alternatives, maybe?
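
    errno 24 in that shm_open message is "too many open files", which fits the symptom of allocating a fresh AVAudioPlayer (and a fresh file handle) on every tap. A hedged sketch of reusing one player per sound instead; lazerPlayer is an assumed property on the view controller:

        // Once, e.g. in viewDidLoad:
        NSURL *lazerURL = [[NSBundle mainBundle] URLForResource:@"lazer" withExtension:@"mp3"];
        self.lazerPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:lazerURL error:NULL];
        [self.lazerPlayer prepareToPlay];

        // On every button press:
        self.lazerPlayer.currentTime = 0;
        [self.lazerPlayer play];

    For short, overlapping effects like the explosion, System Sound Services (AudioServicesPlaySystemSound) is another option that avoids keeping many players alive.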

    Read the article

  • Looping HTML5 audio on the iPhone

    - by Peeps
    I'm trying to make an HTML5 web app that simply plays a sound over and over and over again on my iPhone. I don't know any Obj-C, so I can't do it natively. What I have works fine, but the sound only plays once:

        <!DOCTYPE html>
        <html>
        <head>
            <title>noisemaker!</title>
            <meta http-equiv="content-type" content="text/html; charset=utf-8" />
            <meta name="viewport" content="maximum-scale=1, minimum-scale=1, width=device-width, user-scalable=no" />
            <meta name="apple-mobile-web-app-capable" content="yes" />
        </head>
        <body>
            <audio src="noise.mp3" autoplay controls loop></audio>
        </body>
        </html>

    Is there a way to either bypass the QuickTime audio screen and loop it in the web page, or get the QuickTime audio screen to loop the sound?
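
    Older versions of Mobile Safari ignore the loop attribute, so the usual workaround is to restart the clip from the ended event (a short gap between repeats is hard to avoid this way). Note also that iPhone Safari does not honour autoplay; playback has to start from a user tap. A hedged sketch:

        <audio id="noise" src="noise.mp3" controls></audio>
        <script type="text/javascript">
            var noise = document.getElementById('noise');
            // Restart manually because the loop attribute may be ignored on this device.
            noise.addEventListener('ended', function () {
                noise.currentTime = 0;
                noise.play();
            }, false);
        </script>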

    Read the article

  • Merging Passed Parameters

    - by Josh Crowder
    I have two data arrays sent in from a form: one called transload and the other called video, which is the actual form for the model. I need to get [:video_encode][:url] and save it to [:video][:flash_url]. Below are the passed arguments for transload. When I try to access params[:transload][:results][:video_encode] I get nil.

        print params[:transload]

        {
          "assembly_id": "d59b4293b3d79d2ccd1948c02421c6a6",
          "status": "success",
          "uploads": {
            "video": {
              "name": "bbc_one.mp4", "mime": "video/mp4", "ext": "mp4", "size": 601104,
              "meta": { "width": 720, "height": 404, "video_fps": 25, "video_bitrate": null, "video_format": "avc1", "video_codec": "ffh264", "audio_bitrate": "128k", "audio_codec": "faad", "duration": 3.07, "device_vendor": null, "device_name": null, "device_software": null, "latitude": null, "longitude": null },
              "url": "http://tmp.transloadit.com/"
            }
          },
          "results": {
            "video_encode": {
              "name": "bbc_one.flv", "mime": "video/x-flv", "steps": ["encode", "export"], "ext": "flv", "size": 388317,
              "meta": { "width": 480, "height": 320, "video_fps": 25, "video_bitrate": "512k", "video_format": "FLV1", "video_codec": "ffflv", "audio_bitrate": "64k", "audio_codec": "mp3", "duration": 3.11, "device_vendor": null, "device_name": null, "device_software": null, "latitude": null, "longitude": null },
              "url": "http://s3.transloadit.com/b7deac9c96af6c745e914e25d0350baa/7a/2b09e822265ac2328789b40dcc02ae/bbc_one.flv"
            },
            "video_encode_iphone": {
              "name": "bbc_one.qt", "mime": "video/quicktime", "steps": ["encode_iphone", "export"], "ext": "qt", "size": 218236,
              "meta": { "width": 480, "height": 320, "video_fps": 25, "video_bitrate": null, "video_format": "avc1", "video_codec": "ffh264", "audio_bitrate": "128k", "audio_codec": "faad", "duration": 3.04, "device_vendor": null, "device_name": null, "device_software": null, "latitude": null, "longitude": null },
              "url": "http://s3.transloadit.com/31/58bcc80d5345e52a42c9773125e8f0/bbc_one.qt"
            }
          }
        }

    Here is what I am trying to use:

        video_links = {
          :flash_url => params[:transload][:results][:video_encode][:url],
          :mp4_url   => params[:transload][:results][:video_encode_iphone][:url]
        }
        params[:video].merge(video_links)
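
    One possible cause of the nil, depending on how the form posts the data, is that the whole transload value arrives as a single JSON-encoded string rather than as nested parameters, in which case it needs decoding before the nested keys exist. A hedged sketch:

        data = params[:transload]
        data = ActiveSupport::JSON.decode(data) if data.is_a?(String)
        data = data.with_indifferent_access if data.respond_to?(:with_indifferent_access)

        video_links = {
          :flash_url => data[:results][:video_encode][:url],
          :mp4_url   => data[:results][:video_encode_iphone][:url]
        }
        params[:video].merge(video_links)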

    Read the article

  • How to upload files?

    - by Brian Roisentul
    I just wanted to know how to configure FCKeditor to upload files and images to the server where the website is hosted. The relevant part of its config file (I think) looks like this:

        FCKConfig.LinkUpload = true ;
        FCKConfig.LinkUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension ;
        FCKConfig.LinkUploadAllowedExtensions = ".(7z|aiff|asf|avi|bmp|csv|doc|fla|flv|gif|gz|gzip|jpeg|jpg|mid|mov|mp3|mp4|mpc|mpeg|mpg|ods|odt|pdf|png|ppt|pxd|qt|ram|rar|rm|rmi|rmvb|rtf|sdc|sitd|swf|sxc|sxw|tar|tgz|tif|tiff|txt|vsd|wav|wma|wmv|xls|xml|zip)$" ; // empty for all
        FCKConfig.LinkUploadDeniedExtensions = "" ; // empty for no one

        FCKConfig.ImageUpload = true ;
        FCKConfig.ImageUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension + '?Type=Image' ;
        FCKConfig.ImageUploadAllowedExtensions = ".(jpg|gif|jpeg|png|bmp)$" ; // empty for all
        FCKConfig.ImageUploadDeniedExtensions = "" ; // empty for no one

    Could it be a folder permission problem? Is this part of config.js alright?

    Read the article

  • Convert a Relative URL to an Absolute URL in Actionscript / Flex

    - by Bear
    I am working with Flex, and I need to take a relative URL source property and convert it to an absolute URL before loading it. The specific case I am working with involves tweaking SoundEffect's load method. I need to determine if a file will be loaded from the local file system or over the network from looking at the source property, and the easiest way I've found to do this is to generate the absolute URL. I'm having trouble generating the absolute URL for sound effect in particular. Here were my initial thoughts, which haven't worked. Look for the DisplayObject that the Sound Effect targets, and use its loaderInfo property. The target is null when the SoundEffect loads, so this doesn't work. Look at FlexGlobals.topLevelApplication, at the url or loaderInfo properties. Neither of these are set, however. Look at the FlexGlobals.topLevelApplication.systemManager.loaderInfo. This was also not set. The SoundEffect.as code basically boils down to var url:String = "mySound.mp3"; /*>> I'd like to convert the URL to absolute form here and tweak it as necessary <<*/ var req:URLRequest = new URLRequest(url); var loader:Loader = new Loader(); loader.load(req); Does anyone know how to do this? Any help clarifying the rules of how relative urls are resolved for URLRequests in ActionScript would also be much appreciated. edit I would also be perfectly satisfied with some way to tell whether the url will be loaded from the local file system or over the network. Looking at an absolute URL it would just be easy to look at the prefix, like file:// or http://.

    Read the article

  • Null reference exception While Navigating to PlayListItem.

    - by Subhen
    Hi, I am trying to play the selected media from the playlist if the selected index is not zero, as below:

        if (playList.Items.Count == 0)
        {
            setPlayList();
            if (selectedIndex != 0)
            {
                if (custMediaElement.Playlist != null)
                    custMediaElement.GoToPlaylistItem(selectedIndex);
            }
        }

    The setPlayList method is as below:

        private void setPlayList()
        {
            IEnumerable eLevelData = pMainPage.GetDataFromDictonary(pMainPage.strChildFolderID);
            foreach (RMSMedia folderItems in eLevelData)
            {
                string strmediaURL = folderItems.strMediaFileName;
                string strmediaExtension = GetExtension(strmediaURL);
                if (strmediaExtension == "wmv" || strmediaExtension == "mp4" || strmediaExtension == "mp3" || strmediaExtension == "mpg")
                {
                    PlaylistItem playListItem = new PlaylistItem();
                    string thumbSource = folderItems.strAlbumcoverImage;
                    playListItem.MediaSource = new Uri(strmediaURL, UriKind.RelativeOrAbsolute);
                    playListItem.Title = folderItems.strAlbumName;
                    if (!string.IsNullOrEmpty(thumbSource))
                        playListItem.ThumbSource = new Uri(thumbSource, UriKind.RelativeOrAbsolute);
                    playList.Items.Add(playListItem);
                }
            }
            custMediaElement.Playlist = playList;
        }

    But I am getting a NullReferenceException when trying to go to the playlist item using the selected index, as explained above. It works fine if I don't call custMediaElement.GoToPlaylistItem(selectedIndex), but in that case the media player always plays the first item, no matter which song I select from the ListBox. Here are a few details from the stack trace:

        ExpressionMediaPlayer.MediaPlayer.DoOpenPlaylistItem(PlaylistItem playlistItem)
        at ExpressionMediaPlayer.MediaPlayer.GoToPlaylistItem(Int32 playlistItemIndex)

    Thanks, Subhen

    Read the article

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" utility on a few hundred PNG files. I simply did this:

        find $BASEDIR -iname "*png" -exec pngout {} \;

    And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad. In this day and age of dual-, quad-, octo- and hexa-core (?) desktops, how do I simply parallelize this task with bash? (It's not the first time I've had such a need; quite a lot of these utilities are single-threaded... I already ran into this with MP3 encoders.) Would simply running all the pngout invocations in the background do? How would my find command look then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, this would mean switching between three hundred processes, which doesn't seem great anyway!? Or should I copy my three hundred or so files into "nb dirs", where "nb dirs" would be the number of cores, and then run "nb finds" concurrently? (which would be close enough) But how would I do this?
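
    One way that keeps the find is to hand the file list to xargs and let it run a fixed number of pngout processes in parallel, so there are never three hundred processes at once. A sketch (adjust -P to the number of cores; GNU parallel is an alternative with similar behaviour):

        # Run at most 4 pngout processes at a time, one file per invocation.
        find "$BASEDIR" -iname '*.png' -print0 | xargs -0 -n 1 -P 4 pngout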

    Read the article

  • NSTimer to smooth out playback position

    - by Michael
    I have an audio player and I want to show the current time of the playback. I'm using a custom player class. The app downloads the MP3 to a file, then plays from the file once 5% has been downloaded. I have a progress view that updates as the file plays, and I update a label on each call to the progress view. However, this is jerky... sometimes even going backward a digit or two. I was considering using an NSTimer to smooth things out. It would be fired every second to a method, and I would pass the percentage played to the method, then update the label. First, does this seem reasonable? Second, how do I pass the percentage (a float) over to the target of the timer? Right now I am putting the percent played into a dictionary, but this seems less than optimal. This is what is called to update the progress bar:

        - (void)updateAudioProgress:(Percentage)percent {
            audio = percent;
            if (!seekChanging)
                slider.value = percent;
            NSMutableDictionary *myDictionary = [[NSMutableDictionary alloc] init];
            [myDictionary setValue:[NSNumber numberWithFloat:percent] forKey:@"myPercent"];
            [NSTimer scheduledTimerWithTimeInterval:5
                                             target:self
                                           selector:@selector(myTimerMethod:)
                                           userInfo:myDictionary
                                            repeats:YES];
            [myDictionary release];
        }

    This is called first after 5 seconds but then updates each time the method is called. As always, comments and pointers are appreciated.
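
    Two notes on the snippet above: it creates a new repeating timer on every progress callback (so the timers pile up), and whatever is passed as userInfo: comes back on the timer object in the callback via [timer userInfo]. A hedged sketch of the usual arrangement: schedule one repeating timer when playback starts, keep the latest percentage in a property that updateAudioProgress: sets, and let the timer method read it (currentPercent and timeLabel are assumed names):

        // Once, when playback starts:
        [NSTimer scheduledTimerWithTimeInterval:1.0
                                         target:self
                                       selector:@selector(updateTimeLabel:)
                                       userInfo:nil
                                        repeats:YES];

        - (void)updateTimeLabel:(NSTimer *)timer {
            // self.currentPercent is the assumed property set in updateAudioProgress:
            timeLabel.text = [NSString stringWithFormat:@"%.0f%%", self.currentPercent * 100];
        }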

    Read the article

  • Cocos2d shake/accelerometer issue.

    - by Ryan Poolos
    So, a little backstory: I wanted to implement a particle effect and a sound effect, each lasting about 3 seconds, for when the user shakes their iDevice. The first problem was that the built-in UIEvent for shakes refused to work, so I took the advice of a few Cocos veterans and just treat "violent" accelerometer input as a shake. That worked great until now. The problem is that if you keep shaking, it just stacks the particles and sounds over and over. Now this wouldn't be that big of a deal, except it happens even if you are careful to try not to do so. So what I am hoping to do is disable the accelerometer when the particle/sound effects start and then re-enable it as soon as they finish. I don't know if I should do this by schedule, NSTimer, or some other function. I am open to ALL suggestions. Here is my current "shake" code:

        - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
        {
            const float violence = 1;
            static BOOL beenhere;
            BOOL shake = FALSE;
            if (beenhere)
                return;
            beenhere = TRUE;
            if (acceleration.x > violence * 1.5 || acceleration.x < (-1.5 * violence))
                shake = TRUE;
            if (acceleration.y > violence * 2 || acceleration.y < (-2 * violence))
                shake = TRUE;
            if (acceleration.z > violence * 3 || acceleration.z < (-3 * violence))
                shake = TRUE;
            if (shake)
            {
                id particleSystem = [CCParticleSystemQuad particleWithFile:@"particle.plist"];
                [self addChild:particleSystem];

                // Super simple audio playback for sound effects!
                [[SimpleAudioEngine sharedEngine] playEffect:@"Sound.mp3"];

                shake = FALSE;
            }
            beenhere = FALSE;
        }
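
    Assuming this method lives in a CCLayer subclass, one simple option is to switch the accelerometer off when the effect starts and schedule it back on after the effect's duration, without any NSTimer bookkeeping:

        if (shake)
        {
            // Stop further accelerometer callbacks while the effect plays...
            self.isAccelerometerEnabled = NO;

            id particleSystem = [CCParticleSystemQuad particleWithFile:@"particle.plist"];
            [self addChild:particleSystem];
            [[SimpleAudioEngine sharedEngine] playEffect:@"Sound.mp3"];

            // ...and turn them back on once the roughly 3-second effect has finished.
            [self performSelector:@selector(reenableShake) withObject:nil afterDelay:3.0];
        }

        // Elsewhere in the same layer:
        - (void)reenableShake
        {
            self.isAccelerometerEnabled = YES;
        }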

    Read the article
