Search Results

Search found 9074 results on 363 pages for 'audio encoding'.


  • Web Audio API and mobile browsers

    - by Michael
    I've run into a problem while implementing sound and music in an HTML game that I'm building. I'm using the Web Audio API, loading all the sound files with XMLHttpRequests and decoding them with AudioContext.prototype.decodeAudioData() so they can be played through AudioBufferSourceNodes. It looks something like this:

        var request = new XMLHttpRequest();
        request.open("GET", "soundfile.ogg", true);
        request.responseType = "arraybuffer";
        request.onload = function() {
            // decodeAudioData needs a success callback to receive the AudioBuffer
            context.decodeAudioData(request.response, function(buffer) {
                // keep the decoded buffer around for playback
            });
        };
        request.send();

    Everything plays fine, but on mobile decodeAudioData takes an absurdly long time for the background music. I then tried using AudioContext.prototype.createMediaElementSource() to load the music from an HTML Audio object, since Audio elements support streaming and don't have to load the whole file into memory at once. It looked something like this:

        var audio = new Audio('soundfile.ogg');
        var source = context.createMediaElementSource(audio);
        var mainVolume = context.createGain();
        source.connect(mainVolume);
        mainVolume.connect(context.destination);

    This loads much faster, but the audio volume isn't affected by the gain node. It works fine on desktop, so I'm assuming this is a bug or limitation of mobile Chrome (I'm testing on Android). Is there actually no good, well-performing way to handle sound on mobile browsers, or am I just doing something stupid?
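
    A possible workaround (a sketch of mine, not part of the original question; the playMusic helper is hypothetical): apply the volume both through the graph and on the HTMLAudioElement itself, so a browser that ignores the gain node for media-element sources may still honor the element's own volume property.

        // Sketch: set volume on both the gain node and the element itself.
        // Assumes `context` is an existing AudioContext.
        function playMusic(url, volume) {
            var audio = new Audio(url);
            audio.volume = volume;                 // honored by the element
            var source = context.createMediaElementSource(audio);
            var gain = context.createGain();
            gain.gain.value = volume;              // honored where the graph works
            source.connect(gain);
            gain.connect(context.destination);
            audio.play();
            return audio;
        }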

    Read the article

  • Is There any GUI Application for Flash Media Live Encoding for Ubuntu or Linux

    - by Dumindu Mahawela
    I need to broadcast a TV channel to a website, so I need a GUI application for Flash Media Live Encoding. The famous Adobe FME does not have a Linux version, and I tried to install Open Broadcast Encoder on Ubuntu 13.04 amd64 but wasn't successful. So the things that I need to know are: Is there any GUI application for Flash Media Live Encoding for Ubuntu or Linux? Is it possible to successfully install Open Broadcast Encoder on Ubuntu?

    Read the article

  • Change language encoding for file uploading

    - by jc.yin
    Previously we were running a WordPress site on a Mac OS Server machine. We had several hundred images with Chinese characters in the image names. Now we're migrating to an Ubuntu system, and everything is fine except the images. Every time I try to upload an image with a Chinese name via FTP, I get the following message: "MyImage contains illegal characters. Please choose an appropriate text encoding." I have no idea how to solve this issue. Do I need to somehow change the system language encoding in Ubuntu to allow these uploads? Thanks

    Read the article

  • Global UTF-encoding, the right way

    - by mowgli
    I'm curious as to what the right way is to have UTF-8 encoding on all web files. All my files (incl. CSS and JS) are created and saved in UTF-8 encoding. In PHP, I set the charset at the top of the main page (this page includes all the others) with:

        header('Content-type: text/html; charset=utf-8');

    In the same page I have this HTML meta tag:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Then I stumbled upon an external CSS file that has this on its first line:

        @charset "UTF-8";

    And now I wonder: should I set the charset inside all my CSS/JS files too, like that? And/or should I serve each file with charset=utf-8, as in the meta tag?
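
    For what it's worth (an illustrative sketch, not from the question): the charset in the HTTP Content-Type header takes precedence over in-document declarations like the meta tag or @charset, so declaring it once at the server level covers HTML, CSS and JS alike. A minimal Node.js sketch of the idea:

        // Serve every text response with an explicit charset, making the
        // in-file declarations (meta tag, @charset) effectively redundant.
        var http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
            res.end('<p>Déjà vu</p>');
        }).listen(8080);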

    Read the article

  • Audio programming resources

    - by rashleighp
    I've been very interested in the last few months in getting into audio programming (I'm from a musical background). I've been a .NET developer for two years and have also done some Objective-C for an iPhone app recently. I realise I would probably need to work on my C++ chops, and I have been playing around with FMOD Ex and doing a lot of research into the industry. I was just wondering if anyone could suggest some good resources for audio programming (be they websites, podcasts, books, videos, online courses, etc.), covering anything from Fourier analysis, low-level coding, and audio engine creation to audio APIs. I just want to learn as much as possible! Thanks in advance.

    Read the article

  • Setting Up Audio on a Server Install

    - by tdcrenshaw
    I'm running a clean install of Ubuntu 10.10 Server edition and have alsa-base, alsa-tools, alsa-utils, alsaplayer, and alsa-firmware-loader installed. At one point I installed pulseaudio, but I have since removed it. I've tried the following:

        $ lspci | grep audio
        00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01)
        01:06.0 Multimedia audio controller: Creative Labs [SB Live! Value] EMU10k1X

        $ aplay -l
        aplay: device_list:235: no soundcards found...

        $ alsamixer
        cannot open mixer: No such file or directory

    When I search for modules with find /lib/modules/`uname -r` | grep snd, I do get a list of modules. I'm not very experienced with ALSA setup, so I'm not sure where to go from here.

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype: I have a constant audio stream on one computer, recompress it in a format that's suitable for a latent internet connection, receive it on the other end, and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or the like, with no slow connections over phone lines and such. The involved computers will also be rather modern (dual-core Intel CPUs at 2 GHz or more).

    I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are:

    I'd like to get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a one-second delay is acceptable). I imagine that the transport software could first determine the average (and maximum) latency, then start the stream and tell the receiver to wait for that maximum latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take action (e.g. abort the stream) and eventually start a new transmission.

    What are my options if I want to use ready-made software for the compression and transmission? I have no intention of writing my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not hundreds.

    I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my maximum latency value, then simply fire the audio data in its raw form (uncompressed 16-bit stereo), along with a timing code, over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first plan to implement this on OS X, but might want to do it on Windows, too, if it proves successful.
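
    The latency-probing step described above is easy to sketch (my illustration, with a hypothetical echo endpoint; the question itself contains no code):

        // Node.js sketch: estimate round-trip time with a few UDP pings
        // before starting the stream, then buffer by the worst case seen.
        var dgram = require('dgram');
        var socket = dgram.createSocket('udp4');
        var samples = [];
        socket.on('message', function (msg) {
            samples.push(Date.now() - Number(msg.toString())); // RTT in ms
            if (samples.length === 10) {
                var worst = Math.max.apply(null, samples);
                console.log('worst RTT ' + worst + ' ms; buffer at least ' + (worst / 2) + ' ms');
                socket.close();
            }
        });
        for (var i = 0; i < 10; i++) {
            // Each packet carries its own send timestamp as the payload.
            var payload = Buffer.from(String(Date.now()));
            socket.send(payload, 0, payload.length, 41234, 'echo.example.com');
        }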

    Read the article

  • Record audio via MediaRecorder

    - by Isuru Madusanka
    I am trying to record audio with MediaRecorder, and I get an error. I have tried changing everything and nothing works. For the last two hours I've been trying to find the error; using the Log class I found that it occurs when the recorder.start() method is called. What could be the problem?

        public class AudioRecorderActivity extends Activity {
            MediaRecorder recorder;
            File audioFile = null;
            private static final String TAG = "AudioRecorderActivity";
            private View startButton;
            private View stopButton;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // Note: these lookups run before setContentView(), so they return null
                startButton = findViewById(R.id.start);
                stopButton = findViewById(R.id.stop);
                setContentView(R.layout.main);
            }

            public void startRecording(View view) throws IOException {
                startButton.setEnabled(false);
                stopButton.setEnabled(true);

                File sampleDir = Environment.getExternalStorageDirectory();
                try {
                    audioFile = File.createTempFile("sound", ".3gp", sampleDir);
                } catch (IOException e) {
                    Toast.makeText(getApplicationContext(), "SD Card Access Error",
                            Toast.LENGTH_LONG).show();
                    Log.e(TAG, "Sdcard access error");
                    return;
                }

                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                // AMR-NB is an 8 kHz codec; a 44100 Hz sampling rate and a
                // 16 bps bit rate may be rejected when start() is called
                recorder.setAudioEncodingBitRate(16);
                recorder.setAudioSamplingRate(44100);
                recorder.setOutputFile(audioFile.getAbsolutePath());
                recorder.prepare();
                recorder.start();
            }

            public void stopRecording(View view) {
                startButton.setEnabled(true);
                stopButton.setEnabled(false);
                recorder.stop();
                recorder.release();
                addRecordingToMediaLibrary();
            }

            protected void addRecordingToMediaLibrary() {
                ContentValues values = new ContentValues(4);
                long current = System.currentTimeMillis();
                values.put(MediaStore.Audio.Media.TITLE, "audio" + audioFile.getName());
                values.put(MediaStore.Audio.Media.DATE_ADDED, (int) (current / 1000));
                values.put(MediaStore.Audio.Media.MIME_TYPE, "audio/3gpp");
                values.put(MediaStore.Audio.Media.DATA, audioFile.getAbsolutePath());
                ContentResolver contentResolver = getContentResolver();
                Uri base = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI;
                Uri newUri = contentResolver.insert(base, values);
                sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, newUri));
                Toast.makeText(this, "Added File " + newUri, Toast.LENGTH_LONG).show();
            }
        }

    And here is the XML layout:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/RelativeLayout1"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical" >

            <Button
                android:id="@+id/start"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignParentTop="true"
                android:layout_centerHorizontal="true"
                android:layout_marginTop="146dp"
                android:onClick="startRecording"
                android:text="Start Recording" />

            <Button
                android:id="@+id/stop"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignLeft="@+id/start"
                android:layout_below="@+id/start"
                android:layout_marginTop="41dp"
                android:enabled="false"
                android:onClick="stopRecording"
                android:text="Stop Recording" />
        </RelativeLayout>

    And I added the permissions to the AndroidManifest file:
        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="in.isuru.audiorecorder"
            android:versionCode="1"
            android:versionName="1.0" >

            <uses-sdk android:minSdkVersion="8" />

            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name" >
                <activity
                    android:name=".AudioRecorderActivity"
                    android:label="@string/app_name" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>

            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-permission android:name="android.permission.RECORD_AUDIO" />
        </manifest>

    I need to record high quality audio. Thanks!

    Read the article

  • Character Encoding: â??

    - by akaphenom
    I am trying to piece together the mysterious string of characters "â??" that I am seeing quite a bit of in our database. I am fairly sure this is a result of conversion between character encodings, but I am not completely positive. The users are able to enter text (or cut and paste) into an Ext JS rich text editor. The data is posted to a servlet which persists it to the database, and when I view it in the database I see those strange characters. Is there any way to decode these back to their original meaning, if I were able to discover the correct encoding, or has there been a loss of bits or bytes through the conversion process? Users are cutting and pasting from multiple versions of MS Word and PDF. Does the encoding depend on where the user copied from? Thank you. The website is UTF-8. We are using MS SQL Server 2005:

        SELECT serverproperty('Collation')
        -- Server default collation: Latin1_General_CI_AS
        SELECT databasepropertyex('xxxx', 'Collation')
        -- Database default: SQL_Latin1_General_CP1_CI_AS

    and the column:

        Column_name  Type     Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
        text         varchar  no        -1                   yes       no                  yes                   SQL_Latin1_General_CP1_CI_AS

    The non-Unicode equivalents of the nchar, nvarchar, and ntext data types in SQL Server 2000 are listed below. When Unicode data is inserted into one of these non-Unicode data type columns through a command string (otherwise known as a "language event"), SQL Server converts the data to the data type using the code page associated with the collation of the column. When a character cannot be represented on a code page, it is replaced by a question mark (?), indicating the data has been lost. Appearance of unexpected characters or question marks in your data indicates your data has been converted from Unicode to non-Unicode at some layer, and this conversion resulted in lost characters. So this may be the root cause of the problem, and not an easy one to solve on our end.
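
    As an aside (my illustration, based on an assumption about the original data): "â??" is exactly what the UTF-8 bytes of a curly apostrophe (U+2019, bytes E2 80 99) look like after landing in a single-byte column that replaces unrepresentable bytes with question marks. A quick Node.js demonstration:

        // U+2019 encodes to the bytes E2 80 99 in UTF-8.
        var bytes = Buffer.from('\u2019', 'utf8');
        // Read those bytes as Latin-1: E2 is "â"; 80 and 99 are C1 controls.
        var mangled = bytes.toString('latin1');
        // A code page with no glyphs there shows them as '?', hence "â??".
        console.log(mangled.replace(/[\u0080-\u009F]/g, '?')); // prints: â??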

    Read the article

  • [Asp.Net MVC] Encoding a character

    - by Trimack
    Hi, I am experiencing some weird encoding behaviour in my ASP.NET MVC project. In my Site.Master there is:

        <div class="logo">
            <a href="<%=Url.Action("Index", "Win7")%>"><%= Html.Encode("Windows 7 Tutoriál") %></a></div>

    which translates to the resulting page as:

        <div class="logo">
            <a href="/">Windows 7 TutoriA?l</a></div>

    However, in the Index.aspx there is:

        <h1>Windows 7 Tutoriál</h1>

    which translates correctly on the same resulting page. I do have

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    as the first line in my <head>. Locally, both files are saved in UTF-8 encoding. Any ideas why this is happening and how to fix it? Thanks in advance.

    Read the article

  • How to fix these compiler errors?

    - by Sandra Schlichting
    I have this source code from 2001 that I would like to compile. It gives this:

        $ make
        g++ -O99 -Wall -DLINUX -pedantic -c -o audio.o audio.cpp
        In file included from audio.cpp:7:
        audio.h:14: error: use of enum ‘mad_flow’ without previous declaration
        audio.h:15: error: use of enum ‘mad_flow’ without previous declaration
        audio.h:17: error: use of enum ‘mad_flow’ without previous declaration
        audio.cpp: In function ‘mad_flow audio::input(void*, mad_stream*)’:
        audio.cpp:19: error: new declaration ‘mad_flow audio::input(void*, mad_stream*)’
        audio.h:14: error: ambiguates old declaration ‘int audio::input(void*, mad_stream*)’
        audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private
        audio.cpp:23: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:23: error: within this context
        audio.h:10: error: ‘char* audio::stream::Buffer’ is private
        audio.cpp:26: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:26: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private
        audio.cpp:27: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:27: error: within this context
        audio.cpp: In function ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’:
        audio.cpp:49: error: new declaration ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’
        audio.h:15: error: ambiguates old declaration ‘int audio::output(void*, const mad_header*, mad_pcm*)’
        audio.cpp: In function ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’:
        audio.cpp:83: error: new declaration ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’
        audio.h:17: error: ambiguates old declaration ‘int audio::error(void*, mad_stream*, mad_frame*)’
        audio.cpp: In constructor ‘audio::stream::stream(const char*)’:
        audio.cpp:119: error: ‘input’ was not declared in this scope
        audio.cpp:122: error: ‘output’ was not declared in this scope
        audio.cpp:123: error: ‘error’ was not declared in this scope
        make: *** [audio.o] Error 1

    audio.h contains:

        #include <stdlib.h>
        #include "mad.h"

        namespace audio {
            class stream {
            private:
                char* Buffer;
                size_t BufferSize, BufferPos;
                struct mad_decoder Decoder;

                friend enum mad_flow input(void* Data, struct mad_stream* MadStream);
                friend enum mad_flow output(void* Data, const struct mad_header* Header,
                                            struct mad_pcm* PCM);
                friend enum mad_flow error(void* Data, struct mad_stream* MadStream,
                                           struct mad_frame* Frame);

            public:
                stream(const char* FileName);
                ~stream();
                void play();
            };
        }

    I have tried to just insert enum mad_flow {}; but that just gave a new problem. Can anyone see how to fix this?

    Read the article

  • Decoding not reversing unicode encoding in Django/Python

    - by PhilGo20
    OK, I have a hardcoded string I declare like this:

        name = u"Par Catégorie"

    I have a # -*- coding: utf-8 -*- magic header, so I am guessing it's converted to UTF-8. Down the road it's output to XML through:

        xml_output.toprettyxml(indent='....', encoding='utf-8')

    And I get:

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)

    Most of my data is in French and is output correctly in CDATA nodes, but that one hardcoded string keeps failing. I don't see why an ASCII codec is involved at all. What's wrong?

    Read the article

  • Is there an audio recording application/tool that has Tivo-like functionality?

    - by Bob
    I do a lot of live speech recording that requires me to quickly jump back, transcribe a particular piece of the audio, then go back to recording again, while still maintaining the full audio file. So far I've done this by splitting the audio and running one line to a recorder (for the whole audio) and one to my computer. Then I use something like Audacity to record, and stop/go back whenever I hear something worth transcribing. This requires me to stop the recording and then start it up again, so I end up missing chunks of the speech I'm listening to. Is there a tool that would let me rewind, listen again, and continue listening at a buffered distance from the live audio, the way TiVo does with television shows?

    Read the article

  • HTML Audio performance

    - by user1888309
    I'm working on an HTML drum machine, and I've met some performance issues: the rhythm starts to break if the BPM is higher than 110, but I'm expecting to make it work at BPMs over 180. I guess it could be related to the format or codec of the audio files, but it may also be that my code is not very optimised (as I can see from JS CPU profiling, it's not). So I'm hoping you guys can give me some code review or some hints on optimisation. All the similar projects I've found on the internet didn't work well either, so maybe it's just a restriction of the Audio API. By the way, it's very raw and sound works only in Chrome under Mac OS, so any advice on audio encoding for the web would also be great. The project is on GitHub Pages, along with a screenshot of a groove which breaks.

    UPDATE: I've found that I was encoding the audio files incorrectly; after fixing that, the rhythm stopped breaking, and it also started working in Mozilla. But there are still issues on Windows.
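
    Beyond file encoding, timing breakage at high BPM is usually a scheduling problem: JS timers jitter far too much to fire each hit directly, while the Web Audio clock is sample-accurate. A common pattern (my sketch, not the project's code; `context` and `buffers` are assumed to exist) is to let a coarse timer queue notes slightly ahead on the audio clock:

        // Look-ahead scheduler: the timer only fills a short queue; actual
        // start times come from the sample-accurate AudioContext clock.
        var tempo = 180;                           // beats per minute
        var nextNoteTime = context.currentTime;
        function scheduler() {
            while (nextNoteTime < context.currentTime + 0.1) {  // 100 ms window
                var source = context.createBufferSource();
                source.buffer = buffers.kick;      // hypothetical decoded buffer
                source.connect(context.destination);
                source.start(nextNoteTime);        // scheduled, not immediate
                nextNoteTime += 60.0 / tempo;      // advance one beat
            }
        }
        setInterval(scheduler, 25);                // jitter here no longer matters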

    Read the article

  • C#/Oracle: Specify Encoding/Character Set of Query?

    - by Reini
    I'm trying to fetch some data from an Oracle 10 database. Some cells contain German umlauts (äöü). In my administration tool (TOAD) I can see them fine: "Mantel für Damen" (jacket for women). This is my C# code (simplified):

        var oracleCommand = new OracleCommand(sqlGetArticles, databaseConnection);
        var articleResult = oracleCommand.ExecuteReader();
        string temp = articleResult.Read()["SomeField"].ToString();
        Console.WriteLine(temp);

    The output is: "Mantel f?r Damen". I've tried the debugger (hovering over the variable), the debug window, the console window, and writing to a file. I think I have to specify the encoding or character set somewhere. But where?

    Read the article

  • Can SVG render partially if gzipped and chunk-transferred?

    - by Scott Stafford
    Hi, I have some large, dynamically generated SVGs that are being served over a relatively slow internet connection, and I'm trying to optimize them to be viewable as fast as possible. If I set the server to Content-Encoding: gzip and Transfer-Encoding: chunked, will any SVG viewers take advantage of that and render the image partially as it is transferred? If not, are there other ways to get it to render as it streams? I could break it up into several SVG pieces, but that would be a lot of work; I was hoping for server settings... The most common users use IE7 with the Adobe SVG Viewer plugin. I doubt it matters, but I'm serving with C#/ASP.NET and IIS6.
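
    For reference (an illustrative sketch, not the asker's IIS setup): on the server side, gzip plus chunked transfer just means compressing the stream and omitting Content-Length; whether anything renders early is entirely up to the viewer. In Node.js the server half looks like this:

        // Stream an SVG through gzip; with no Content-Length set, the
        // response goes out with Transfer-Encoding: chunked automatically.
        var http = require('http'), zlib = require('zlib'), fs = require('fs');
        http.createServer(function (req, res) {
            res.writeHead(200, {
                'Content-Type': 'image/svg+xml',
                'Content-Encoding': 'gzip'
            });
            fs.createReadStream('large.svg').pipe(zlib.createGzip()).pipe(res);
        }).listen(8080);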

    Read the article

  • Multiple audio sources on a single gameObject in unity

    - by angryInsomniac
    So, I have an audio system set up wherein I load all my audio clips centrally and play them on demand by passing the requesting AudioSource into the sound manager. However, there is a complication: if I want to overlay multiple looping sounds, I need multiple audio sources on an object, which is fine, so I created two in my script, instantiated them, played my clips on them, and then the world went crazy. For some reason, when I create two audio sources on an object, only the latest one is ever used: even if I explicitly keep the objects separated, playing a clip on one or the other plays it on the last one that was created. Furthermore, either this last one is not created in the right place or it somehow messes with the rolloff rules, because I can hear it all across my level. Having just one source works fine, but putting a second one on the object causes things to go batshit insane. Does anyone know the reason for this, or a solution? Some pseudocode:

        guardSoundsSource = (AudioSource)gameObject.AddComponent("AudioSource");
        guardSoundsSource.name = "Guard_Sounds_source";
        // set up this source

        guardThrusterSource = (AudioSource)gameObject.AddComponent("AudioSource");
        guardThrusterSource.name = "Guard_Thruster_Source";
        // set up this source

        // play using the custom sound manager
        soundMan.soundMgr.playOnSource(guardSoundsSource, "Guard_Idle_loop",
                                       true, GameManager.Manager.PlayerType);

    This method prints out the name of the source the sound was to be played on, and it always shows "Guard_Thruster_Source", even for "Guard_Idle_loop", even though I clearly told it to use "Guard_Sounds_source".

    Read the article

  • Play an audio file using RemoteIO and Audio Unit

    - by NeilMonday
    I am looking at Apple's 'aurioTouch' example for the iPhone, and I would like to play an MP3 or WAV instead of using the built-in mic. I am very new to the audio portion of iPhone programming, but I think I need to modify the SetupRemoteIO(...) function and replace the AudioComponent named 'comp' with a custom AudioComponent that plays a file. Basically, I want the app to function exactly the same as the original, but with an audio file as the input instead of the mic.

    Read the article

  • Audio decoding delay when changing the audio language

    - by mahendiran.b
    My GStreamer pipeline looks like this (approach 1):

        souphttpsrc -> tsdemux -+-> input-selector -> queue -> audioparser -> audiosink
                                +-> queue -> videoparser -> videosink

    With approach 1, there is a delay in audio decoding when I toggle between audio languages. Approach 2:

        souphttpsrc -> tsdemux -> multiqueue -+-> input-selector -> queue -> audioparser -> audiosink
                                              +-> queue -> videoparser -> videosink

    With approach 2, no delay is observed. Can anyone please explain the reason behind this? What is special about multiqueue here?

    Read the article

  • Embed audio broadcasting on web page

    - by giargo
    Hi, I'd like to embed a simple audio player on my web page, and I want it to get the audio from a stream broadcast from my server. I read that I can use Icecast on my web server, feeding it an audio stream from a client using IceS (or at least this is what I got from other questions and articles), but once I have my stream, Icecast is supposed to broadcast it at a URL that can be opened in players like Winamp or similar. I've found out this is quite a rare topic; usually people just want to broadcast "radio", where files are taken from a static playlist. In my case I have to take the stream from an Icecast URL and embed it with a player on a web page. Thanks.
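
    The embedding half can be as small as an HTML5 audio element pointed at the Icecast mountpoint (a sketch with a hypothetical URL; the mount name comes from your Icecast config):

        // Point a built-in browser player at an Icecast mountpoint.
        var player = new Audio('http://example.com:8000/stream.ogg');
        player.controls = true;              // show the native player UI
        document.body.appendChild(player);
        player.play();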

    Read the article

  • Typical text encoding+BOM, and EOL behavior on mobile devices

    - by Dan W
    Typical things to worry about when dealing with text are the BOM/signature, the encoding, and the end-of-line (EOL) character(s). I know that Windows often favours \r\n (CR+LF) and Mac/Linux favour \n (LF), but what about mobile devices such as the iPhone and Android? Do typical apps on those platforms favour one or the other? Also, which text encodings are mobiles most likely to use: UTF-8, ISO-8859-1, Windows-1252 (or another default codepage), or maybe even UTF-16? And if they use UTF-8/16, are they likely to need (or require not having) a BOM/signature? What is the typical behavior here?
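
    Whatever the platform defaults turn out to be, sniffing the BOM yourself is cheap and removes the guesswork for the UTF cases; a minimal Node.js sketch (my illustration, not from the question):

        // Detect a UTF BOM at the start of a byte buffer.
        function detectBom(buf) {
            if (buf.length >= 3 && buf[0] === 0xEF && buf[1] === 0xBB && buf[2] === 0xBF) return 'UTF-8';
            if (buf.length >= 2 && buf[0] === 0xFF && buf[1] === 0xFE) return 'UTF-16LE';
            if (buf.length >= 2 && buf[0] === 0xFE && buf[1] === 0xFF) return 'UTF-16BE';
            return null; // no BOM: fall back to heuristics or metadata
        }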

    Read the article

  • Audio libraries for PC indie games [closed]

    - by bluescrn
    Possible Duplicate: Cross-Platform Audio API Suggestions

    What options are out there these days for audio playback/mixing in C++? Primarily for Windows, but portability (particularly to Mac and iOS) would be desirable. This is for a small, potentially commercial indie game, so I'm looking for something free or low-cost. My requirements are fairly basic: I don't need 3D sound or many channels; simple stereo is fine. I just need to be able to mix sound effects and a music stream, maybe decoding one or more compressed audio formats (.ogg/.mp3 etc.), with all the basic controls over looping, pitch, volume, and so on. Is OpenAL more or less the standard choice, or are there other good options out there?

    Read the article
