Search Results

Search found 299 results on 12 pages for 'mic'.


  • No sound lenovo t60 alsa ad1981 iec958

    - by Nate
    Any help on getting the sound to come through my lenovo t60 build in speakers, headphones, or mic would greatly be appreciated. The three buttons to increase, decrease sound seem to work. Bios has sound card enabled and the buttons beep when pressed. When going to Utube or playing music, no sound is heard. Thanks Nate Feb 23 - Didn't see anything specific in the sys logs with Rhythmbox when connecting my ipod. Rhythmbox is playing, but still no sound. Here is the syslog details for today. Output is set to analog output. Feb 23 17:42:32 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'. Feb 23 17:42:33 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'. Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.daily' terminated Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.weekly' started Feb 23 17:42:49 itgis01398 anacron[12067]: Updated timestamp for job `cron.weekly' to 2011-02-23 Feb 23 17:42:53 itgis01398 anacron[968]: Job `cron.weekly' terminated Feb 23 17:42:53 itgis01398 anacron[968]: Normal exit (2 jobs run) Feb 23 18:01:19 itgis01398 kernel: [ 2731.324067] usb 1-5: new high speed USB device using ehci_hcd and address 3 Feb 23 18:01:19 itgis01398 kernel: [ 2731.482879] Initializing USB Mass Storage driver... Feb 23 18:01:19 itgis01398 kernel: [ 2731.483061] usb-storage 1-5:1.0: Quirks match for vid 05ac pid 1205: 10 Feb 23 18:01:19 itgis01398 kernel: [ 2731.483116] scsi6 : usb-storage 1-5:1.0 Feb 23 18:01:19 itgis01398 kernel: [ 2731.483306] usbcore: registered new interface driver usb-storage Feb 23 18:01:19 itgis01398 kernel: [ 2731.483310] USB Mass Storage support registered. Feb 23 18:01:20 itgis01398 kernel: [ 2732.481116] scsi 6:0:0:0: Direct-Access Apple iPod 1.62 PQ: 0 ANSI: 0 Feb 23 18:01:20 itgis01398 kernel: [ 2732.482466] sd 6:0:0:0: Attached scsi generic sg2 type 0 Feb 23 18:01:20 itgis01398 kernel: [ 2732.485095] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 Feb 23 18:01:20 itgis01398 kernel: [ 2732.485110] sd 6:0:0:0: [sdb] 7999487 512-byte logical blocks: (4.09 GB/3.81 GiB) Feb 23 18:01:20 itgis01398 kernel: [ 2732.487933] sd 6:0:0:0: [sdb] Write Protect is off Feb 23 18:01:20 itgis01398 kernel: [ 2732.487941] sd 6:0:0:0: [sdb] Mode Sense: 64 00 00 08 Feb 23 18:01:20 itgis01398 kernel: [ 2732.487947] sd 6:0:0:0: [sdb] Assuming drive cache: write through Feb 23 18:01:20 itgis01398 kernel: [ 2732.489927] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 Feb 23 18:01:20 itgis01398 kernel: [ 2732.491150] sd 6:0:0:0: [sdb] Assuming drive cache: write through Feb 23 18:01:20 itgis01398 kernel: [ 2732.491163] sdb: sdb1 sdb2 Feb 23 18:01:20 itgis01398 kernel: [ 2732.510428] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 Feb 23 18:01:20 itgis01398 kernel: [ 2732.511288] sd 6:0:0:0: [sdb] Assuming drive cache: write through Feb 23 18:01:20 itgis01398 kernel: [ 2732.511297] sd 6:0:0:0: [sdb] Attached SCSI removable disk Feb 23 18:01:21 itgis01398 kernel: [ 2733.746675] FAT: invalid media value (0x2f) Feb 23 18:01:21 itgis01398 kernel: [ 2733.746682] VFS: Can't find a valid FAT filesystem on dev sdb1. 
Feb 23 18:01:22 itgis01398 upstart-udev-bridge[330]: Env must be KEY=VALUE pairs Feb 23 18:02:07 itgis01398 kernel: [ 2780.115826] sd 6:0:0:0: [sdb] Unhandled sense code Feb 23 18:02:07 itgis01398 kernel: [ 2780.115835] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Feb 23 18:02:07 itgis01398 kernel: [ 2780.115844] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current] Feb 23 18:02:07 itgis01398 kernel: [ 2780.115855] Info fld=0x0 Feb 23 18:02:07 itgis01398 kernel: [ 2780.115859] sd 6:0:0:0: [sdb] Add. Sense: Unrecovered read error Feb 23 18:02:07 itgis01398 kernel: [ 2780.115870] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fd e9 00 00 f0 00 Feb 23 18:02:07 itgis01398 kernel: [ 2780.115892] end_request: I/O error, dev sdb, sector 589289 Feb 23 18:02:49 itgis01398 kernel: [ 2821.351464] sd 6:0:0:0: [sdb] Unhandled sense code Feb 23 18:02:49 itgis01398 kernel: [ 2821.351473] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Feb 23 18:02:49 itgis01398 kernel: [ 2821.351482] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current] Feb 23 18:02:49 itgis01398 kernel: [ 2821.351493] Info fld=0x0 Feb 23 18:02:49 itgis01398 kernel: [ 2821.351497] sd 6:0:0:0: [sdb] Add. Sense: No additional sense information Feb 23 18:02:49 itgis01398 kernel: [ 2821.351507] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fe d9 00 00 10 00 Feb 23 18:02:49 itgis01398 kernel: [ 2821.351530] end_request: I/O error, dev sdb, sector 589529 Feb 23 18:17:01 itgis01398 CRON[12709]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) volume is all of the way up.

    Read the article

  • how to continuously send data without blocking?

    - by Donal Rafferty
    I am trying to send rtp audio data from my Android application. I currently can send 1 RTP packet with the code below and I also have another class that extends Thread that listens to and receives RTP packets. My question is how do I continuously send my updated buffer through the packet payload without blocking the receiving thread? public void run() { isRecording = true; android.os.Process.setThreadPriority (android.os.Process.THREAD_PRIORITY_URGENT_AUDIO); int buffersize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT); Log.d("BUFFERSIZE","Buffer size = " + buffersize); arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize); short[] readBuffer = new short[80]; byte[] buffer = new byte[160]; arec.startRecording(); while(arec.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING){ int frames = arec.read(readBuffer, 0, 80); @SuppressWarnings("unused") int lenghtInBytes = codec.encode(readBuffer, 0, buffer, frames); RtpPacket rtpPacket = new RtpPacket(); rtpPacket.setV(2); rtpPacket.setX(0); rtpPacket.setM(0); rtpPacket.setPT(0); rtpPacket.setSSRC(123342345); rtpPacket.setPayload(buffer, 160); try { rtpSession2.sendRtpPacket(rtpPacket); } catch (UnknownHostException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (RtpException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } So when I send on one device and receive on another I get decent audio, but when I send and receive on both I get broken sound like its taking turns to send and receive audio. I have a feeling it could be to do with the while loop? it could be looping around in there and not letting anything else run?
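
    One way to keep the capture loop and the network I/O from stalling each other is to hand each encoded buffer to a dedicated sender thread through a queue, so AudioRecord.read() never waits on a send. A minimal sketch along those lines (RtpPacket and RtpSession stand in for the question's own classes, and the RTP header fields are set as in the code above):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Decouples capture/encode from network sends; the capture loop calls submit()
    // and never blocks, while this runnable drains the queue on its own thread.
    public class RtpSenderThread implements Runnable {
        private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<byte[]>(50);
        private final RtpSession session; // placeholder for the question's rtpSession2
        private volatile boolean running = true;

        public RtpSenderThread(RtpSession session) {
            this.session = session;
        }

        // Called from the recording loop; drops the frame if the network falls
        // behind instead of stalling AudioRecord.read().
        public void submit(byte[] encodedFrame, int length) {
            byte[] copy = new byte[length];
            System.arraycopy(encodedFrame, 0, copy, 0, length);
            queue.offer(copy);
        }

        public void stop() {
            running = false;
        }

        public void run() {
            while (running) {
                try {
                    byte[] payload = queue.take();
                    RtpPacket packet = new RtpPacket();
                    packet.setV(2);
                    packet.setPT(0);
                    packet.setSSRC(123342345);
                    packet.setPayload(payload, payload.length);
                    session.sendRtpPacket(packet);
                } catch (InterruptedException e) {
                    return;
                } catch (Exception e) {
                    e.printStackTrace(); // keep sending after transient errors
                }
            }
        }
    }

    Started once with new Thread(sender).start(), the receive thread and the capture loop then run independently of how long each individual send takes.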

    Read the article

  • Redirect Desktop Internal Pages to Correct Mobile Internal Pages with Htaccess

    - by Luis Alejandro Ramírez Gallardo
    I have built a Mobile site in a sub-domain. I have successfully implemented the redirect 302 from: www.domain.com to m.domain.com in htaccess. What I'm looking to achieve now it to redirect users from: www.domain.com/internal-page/ > 302 > m.domain.com/internal-page.html Notice that URL name for desktop and mobile is not the same. The code I'm using looks like this: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress # Mobile Redirect # Verify Desktop Version Parameter RewriteCond %{QUERY_STRING} (^|&)ViewFullSite=true(&|$) # Set cookie and expiration RewriteRule ^ - [CO=mredir:0:www.domain.com:60] # Prevent looping RewriteCond %{HTTP_HOST} !^m.domain.com$ # Define Mobile agents RewriteCond %{HTTP_ACCEPT} "text\/vnd\.wap\.wml|application\/vnd\.wap\.xhtml\+xml" [NC,OR] RewriteCond %{HTTP_USER_AGENT} "sony|symbian|nokia|samsung|mobile|windows ce|epoc|opera" [NC,OR] RewriteCond %{HTTP_USER_AGENT} "mini|nitro|j2me|midp-|cldc-|netfront|mot|up\.browser|up\.link|audiovox"[NC,OR] RewriteCond %{HTTP_USER_AGENT} "blackberry|ericsson,|panasonic|philips|sanyo|sharp|sie-"[NC,OR] RewriteCond %{HTTP_USER_AGENT} "portalmmm|blazer|avantgo|danger|palm|series60|palmsource|pocketpc"[NC,OR] RewriteCond %{HTTP_USER_AGENT} "smartphone|rover|ipaq|au-mic,|alcatel|ericy|vodafone\/|wap1\.|wap2\.|iPhone|android"[NC] # Verify if not already in Mobile site RewriteCond %{HTTP_HOST} !^m\. # We need to read and write at the same time to set cookie RewriteCond %{QUERY_STRING} !(^|&)ViewFullSite=true(&|$) # Verify that we previously haven't set the cookie RewriteCond %{HTTP_COOKIE} !^.*mredir=0.*$ [NC] # Now redirect the users to the Mobile Homepage RewriteRule ^$ http://m.domain.com [R] RewriteRule $/internal-page/ http://m.domain.com/internal-page.html [R,L]

    Read the article

  • onSubmit returning false is not working

    - by StuckOnSubmit
    I'm completely confused ... I'd swear this was working yesterday ... I woke up this morning and all the forms in my project stopped working. All the forms have an "onsubmit" that calls a function returning false, since it's an ajax call, so the form is never sent. After a lot of tests, I simplified the question to this piece of code: <html> <head> <script type="text/javascript"> function sub() { alert ("MIC!"); return false; } </script> </head> <body> <form method = "post" id = "form1" onsubmit = "return sub()"> input: <input type="text" name="input1" > <a href="#" onClick="document.getElementById('form1').submit();">button</a> </form> </body> </html> I would swear that this worked perfectly, but today it is not working :D Why is the form submitted when I press the button? I know it's a total newbie question, but I'm stuck. Thank you!

    Read the article

  • After interruption, delayed audio route change notifications when recording

    - by Frank Shearar
    My iPhone application requires that I know when a user has/has not plugged in her headphones. That's easy. AudioSessionAddPropertyListener with a callback listening to kAudioSessionProperty_AudioRouteChange. I write logs with NSLog as things happen. User plugs the headphones in? Get a notification, and a line in the gdb console. User unplugs the headphones? Ditto. At the same time I'm sensing the noise level of the environment by starting a recording audio queue. This, too, works great: I can get the mic noise level and listen for audio route changes just fine. What I find is that after an interruption, and I've reactivated the audio session and restored the audio category to kAudioSessionCategory_RecordAudio, the audio route notifications go a bit haywire. When I plug in the headphones, I see no notification. When I unplug the headphones I see BOTH the "plugged in" notification AND the "unplugged" notification, in rapid succession. It's like the "plugged in" notification's delayed and, when the "unplugged" notification arrives, the queue of pending notifications is flushed. What am I doing wrong? How do I correctly restore the audio session to get timeous notifications? EDIT: iPhone OS 3.1.2, running on an iPhone 3G. I'm running a program compiled with the 3.0 SDK (from within XCode 3.1.2).

    Read the article

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having troubles using AudioRecord. An example using some of the code derived from the splmeter project: private static final int FREQUENCY = 8000; private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO; private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT; private int BUFFSIZE = 50; private AudioRecord recordInstance = null; ... android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO); recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, 8000); recordInstance.startRecording(); short[] tempBuffer = new short[BUFFSIZE]; int retval = 0; while (this.isRunning) { for (int i = 0; i < BUFFSIZE - 1; i++) { tempBuffer[i] = 0; } retval = recordInstance.read(tempBuffer, 0, BUFFSIZE); ... // process the data } This works on the HTC Dream and the HTC Magic perfectly without any log warnings/errors, but causes problems on the emulators and Nexus One device. On the Nexus one, it simply never returns useful data. I cannot provide any other useful information as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2), I get weird errors from the AudioFlinger and Buffer overflows with the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread than the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
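
    A detail that often differs between handsets: the last constructor argument is the buffer size in bytes, and a hard-coded 8000 can fall below a device's minimum (AudioRecord then stays uninitialised and read() returns no useful data). A hedged sketch of sizing the buffer from the device's own minimum and checking the state before reading:

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public final class MicCapture {
        private static final int FREQUENCY = 8000;

        // Builds an AudioRecord sized from the device's reported minimum buffer
        // instead of a fixed constant, and verifies that initialisation succeeded.
        public static AudioRecord open() {
            int minBytes = AudioRecord.getMinBufferSize(FREQUENCY,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT);
            if (minBytes <= 0) {
                throw new IllegalStateException("Capture parameters not supported");
            }
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    FREQUENCY,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    minBytes * 2); // headroom above the minimum
            if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
                recorder.release();
                throw new IllegalStateException("AudioRecord failed to initialise");
            }
            return recorder;
        }
    }

    Reading in larger chunks than 50 samples at a time (the question's BUFFSIZE means a read every few milliseconds) may also help with the buffer-overflow warnings seen on the emulator.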

    Read the article

  • Virtual audio driver (microphone)

    - by Dalamber
    Hello guys, I want to develop a virtual microphone driver. Please do not say anything about DirectShow - that's not "the way". I need a solution that will work with any software, including Skype and MSN, and DirectShow doesn't fit these requirements. I found the AVStream Filter-Centric Simulated Capture Driver (avssamp.sys) in the Windows 7 WDK. What I need is the audio part of it. By default it reads avssamp.wav and plays it, but this driver is registered as a WDM streaming capture device, and I want it in the Audio Capture Devices category. There are some posts on the web, but they are all the same: http://www.tech-archive.net/Archive/Development/microsoft.public.development.device.drivers/2005-05/msg00124.html http://www.winvistatips.com/problem-installing-avssamp-audio-capture-sources-category-t184898.html I think registering this filter driver as an audio capture device will make Skype recognize it as a microphone, and therefore I will be able to push any PCM file as if it came from the mic. If someone has already faced this problem, please help. Thanks in advance.

    Read the article

  • CSS highlight menu item based on page body tags

    - by Sai
    I have a menu and I would like to highlight the sub-menu item based on the page the user is on. Can I use a div tag with an id on the page, and in CSS highlight the item if that id is present? In the body: <div id="doc3"></div> then in the CSS: #doc3 #menu li#subnav-5-1 a I tried this but it doesn't seem to work. How can I change the style of another element based on an id in the page body? Menu... <!-- Menu 5 --> <li id="nav-5"><a href="ssslate.do">Micro</a> <ul id="subnav-5"> <li class="subnav-5-1"><a href="asdf.do">Site & Visit</a></li> <li><a href="ss.do">MIC</a></li> <li><a href="ss.do">sss</a></li> </ul> </li> CSS body.nav-5-1 li.subnav-5-1 {background-color:red;} HTML body <body id=nav-5-body class="nav-5-1"> Thanks

    Read the article

  • AudioRecord - empty buffer

    - by Arxas
    I'm trying to record some audio using the AudioRecord class. Here is my code: int audioSource = AudioSource.MIC; int sampleRateInHz = 44100; int channelConfig = AudioFormat.CHANNEL_IN_MONO; int audioFormat = AudioFormat.ENCODING_PCM_16BIT; int bufferSizeInShorts = 44100; int bufferSizeInBytes = 2*bufferSizeInShorts; short Data[] = new short[bufferSizeInShorts]; Thread recordingThread; AudioRecord audioRecorder = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes); @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.activity_main, menu); return true; } public void startRecording(View arg0) { audioRecorder.startRecording(); recordingThread = new Thread(new Runnable() { public void run() { while (Data[bufferSizeInShorts-1] == 0) audioRecorder.read(Data, 0, bufferSizeInShorts); } }); audioRecorder.stop(); } Unfortunately my short array is empty after the recording is over. May I kindly ask you to help me figure out what's wrong?
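
    One likely culprit in the snippet above: recordingThread is created but never started, and audioRecorder.stop() runs immediately afterwards, so no read() ever executes before recording stops. A minimal sketch of the same method with the worker actually started and stop() deferred until the reads are done (field names as in the question):

    public void startRecording(View arg0) {
        audioRecorder.startRecording();
        recordingThread = new Thread(new Runnable() {
            public void run() {
                int offset = 0;
                // Fill the Data array; read() may return fewer shorts than requested.
                while (offset < bufferSizeInShorts) {
                    int read = audioRecorder.read(Data, offset, bufferSizeInShorts - offset);
                    if (read <= 0) {
                        break; // negative values are error codes such as ERROR_INVALID_OPERATION
                    }
                    offset += read;
                }
                audioRecorder.stop(); // stop only after reading has finished
            }
        });
        recordingThread.start();
    }

    Stopping the recorder from the worker (or after joining it) also keeps the UI thread free while the buffer fills.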

    Read the article

  • Get Object from memory using memory address

    - by Hamza Karmouda
    I want to know how to get an Object back from memory, in my case a MediaRecorder. Here's my class: MyMic class: public class MyMic { MediaRecorder recorder2; File file; private Context c; public MyMic(Context context){ this.c=context; } private void stopRecord() throws IOException { recorder2.stop(); recorder2.reset(); recorder2.release(); } private void startRecord() { recorder2= new MediaRecorder(); recorder2.setAudioSource(MediaRecorder.AudioSource.MIC); recorder2.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); recorder2.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); recorder2.setOutputFile(file.getPath()); try { recorder2.prepare(); recorder2.start(); } catch (IllegalStateException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } } my Receiver class: public class MyReceiver extends BroadcastReceiver { private Context c; private MyMic myMic; @Override public void onReceive(Context context, Intent intent) { this.c=context; myMic = new MyMic(c); if(my condition = true){ myMic.startRecord(); }else myMic.stopRecord(); } } So when I call startRecord() it creates a new MediaRecorder, but when I instantiate my class a second time I can't retrieve my object. Can I retrieve my MediaRecorder by its address?
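
    A raw address won't get the object back, but the MediaRecorder can live somewhere that outlives each receiver instance: a BroadcastReceiver is recreated for every broadcast, so its fields are gone by the next call, while process-wide state (a static holder as sketched below, or better, a started Service) survives as long as the process does. The holder class name here is hypothetical:

    import android.media.MediaRecorder;

    // Keeps a single MediaRecorder reachable across BroadcastReceiver invocations.
    public final class RecorderHolder {
        private static MediaRecorder recorder;

        private RecorderHolder() {}

        public static synchronized MediaRecorder obtain() {
            if (recorder == null) {
                recorder = new MediaRecorder();
            }
            return recorder;
        }

        public static synchronized void release() {
            if (recorder != null) {
                recorder.release();
                recorder = null;
            }
        }
    }

    MyMic.startRecord() would then use RecorderHolder.obtain() instead of new MediaRecorder(), and stopRecord() would finish with RecorderHolder.release(), so a second MyMic instance reaches the same recorder.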

    Read the article

  • implementation musical instrument using audio unit

    - by Develop.Kim
    (I posted the same question on the Apple developer forum, too.) Hi, and apologies in advance for my English. I want to develop an iPhone application that plays a musical instrument, like 'ocarina', but I don't need the blow-into-the-mic feature. So first I tried to find out how to implement a 'virtual musical instrument' in iPhone development, and after reading this article (link) I decided to implement it using 'Audio Unit'. I have two questions. 1. I understand that an instrument's sound can be divided into three phases: 'attack', 'sustain' and 'release', with 'decay' possibly included as well (link). How do I implement each phase on top of the 'AUInstrumentBase' audio unit base? 2. I downloaded the sample 'SinSynth' (link). I want to play notes through this instrument unit so I can analyse the source and study it. Is there a way to do this using AULab? The expected way is to use MIDI input, but I don't have a MIDI device. In addition, I wonder whether this is the right approach at all; any advice is welcome. Thanks for reading.

    Read the article

  • sftp and public keys

    - by Lizard
    I am trying to sftp into an a server hosted by someone else. To make sure this worked I did the standard sftp [email protected] i was promted with the password and that worked fine. I am setting up a cron script to send a file once a week so have given them our public key which they claim to have added to their authorized_keys file. I now try sftp [email protected] again and I am still prompted for a password, but now the password doesn't work... Connecting to [email protected]... [email protected]'s password: Permission denied, please try again. [email protected]'s password: Permission denied, please try again. [email protected]'s password: Permission denied (publickey,password). Couldn't read packet: Connection reset by peer I did notice however that if I simply pressed enter (no password) it logged me in fine... So here are my questions: Is there a way to check what privatekey/pulbickey pair my sftp connection is using? Is it possible to specify what key pair to use? If all is setup correctly (using correct key pair and added to authorized files) why am I being asked to enter a blank password? Thanks for your help in advance! UPDATE I have just run sftp -vvv [email protected] .... debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /root/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: SHA1 fp 45:1b:e7:b6:33:41:1c:bb:0f:e3:c1:0f:1b:b0:d5:e4:28:a3:3f:0e debug3: sign_and_send_pubkey debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey,password debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password It seems to suggest that it tries to use the public key... What am I missing?

    Read the article

  • CPU overheating after cleaning it

    - by Roberts
    I wanted to clean my computer CPU heatsink and fan itself, because the temperature is not what I wanted. About (50C ~ 70C). I have Intel Core 2 Duo E4300 @1.8 GHz (LGA775). The heatsink wasn't so scary filled with dust but I wanted to clean it anyway. I didn't know how to get heatsink with fan from the socket. So after 25 minutes I've figured it out. But I didn't know how to get it back on so I spent a lot time getting out the motherboard from the case. The fan and heatsink... The case and all components are clear of dust. (I'm tired now). Then I put all back just the way it was, well did few things on cable management. But the problem was that I didn't know how to connect front audio connectors. I had Windows XP hibernated. So I started the PC and everything was normal, except CMOS memory was clear. I configured the BIOS just the way it was and while I was doing that I saw about 58C CPU temperature and fan at 1789 RPM. Restarted the computer with new settings applied. But Windows halted with Blue Screen (I forgot what error it was but something with KERNEL). Restarted the PC and deleted hibernation session and everything was back normal. But couldn't record any sound from front panel microphone. The problem was that I messed ground wire with mic. Again after fixing it I turned computer on. No problems. The fan currently is noisy and temperature was 78C. The temperature before was 55C - 60C at idle. Now it's about 60C. If I do something then temperature raises to 79C. While speaking in skype the temperature was 82C. Could this problem occur because of the thermal grease (it's old and never replaced)? Edit The problem wasn't in thermal paste (because I didn't touch it). The problem was that I installed heatsink wrong. Now instead of regular 60C CPU temperature the CPU is at 48C (cool).

    Read the article

  • Slow boot on Ubuntu 12.04

    - by Hailwood
    My Ubuntu is booting really slow (Windows is booting faster...). I am using Ubuntu a Dell Inspiron 1545 Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB Ram, 500GB HDD running Ubuntu 12.04 with gnome-shell 3.4.1. After running dmesg the culprit seems to be this section, in particular the last three lines: [26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready [26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.355355] Console: switching to colour frame buffer device 170x48 [27.362346] fb0: radeondrmfb frame buffer device [27.362347] drm: registered panic notifier [27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0 [27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1 [30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1 [51.708241] CE: hpet increased min_delta_ns to 20113 nsec [59.448029] eth2: no IPv6 routers present But I have no idea how to start debugging this. sudo lshw -C video $ sudo lshw -C video *-display description: VGA compatible controller product: RV710 [Mobility Radeon HD 4300 Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=fglrx_pci latency=0 resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff After loading the propriety driver my new dmesg log is below (starting from the first major time gap): [2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready [25.119737] udevd[520]: starting version 175 [25.167086] lp: driver loaded but no devices found [25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [25.215345] Disabling lock debugging due to kernel taint [25.231924] wmi: Mapper loaded [25.318414] lib80211: common routines for IEEE802.11 drivers [25.318418] lib80211_crypt: registered algorithm 'NULL' [25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes. [25.332095] [fglrx] vendor: 1002 device: 9552 count: 1 [25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100 [25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [25.334235] pci 0000:01:00.0: setting latency timer to 64 [25.337109] [fglrx] Kernel PAT support is enabled [25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [25.342803] Adding 4189180k swap on /dev/sda7. 
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present

    Read the article

  • Liskov Substitution Principle and the Oft Forgot Third Wheel

    - by Stacy Vicknair
    Liskov Substitution Principle (LSP) is a principle of object oriented programming that many might be familiar with from the SOLID principles mnemonic from Uncle Bob Martin. The principle highlights the relationship between a type and its subtypes, and, according to Wikipedia, is defined by Barbara Liskov and Jeanette Wing as the following principle:   Let be a property provable about objects of type . Then should be provable for objects of type where is a subtype of .   Rectangles gonna rectangulate The iconic example of this principle is illustrated with the relationship between a rectangle and a square. Let’s say we have a class named Rectangle that had a property to set width and a property to set its height. 1: Public Class Rectangle 2: Overridable Property Width As Integer 3: Overridable Property Height As Integer 4: End Class   We all at some point here that inheritance mocks an “IS A” relationship, and by gosh we all know square IS A rectangle. So let’s make a square class that inherits from rectangle. However, squares do maintain the same length on every side, so let’s override and add that behavior. 1: Public Class Square 2: Inherits Rectangle 3:  4: Private _sideLength As Integer 5:  6: Public Overrides Property Width As Integer 7: Get 8: Return _sideLength 9: End Get 10: Set(value As Integer) 11: _sideLength = value 12: End Set 13: End Property 14:  15: Public Overrides Property Height As Integer 16: Get 17: Return _sideLength 18: End Get 19: Set(value As Integer) 20: _sideLength = value 21: End Set 22: End Property 23: End Class   Now, say we had the following test: 1: Public Sub SetHeight_DoesNotAffectWidth(rectangle As Rectangle) 2: 'arrange 3: Dim expectedWidth = 4 4: rectangle.Width = 4 5:  6: 'act 7: rectangle.Height = 7 8:  9: 'assert 10: Assert.AreEqual(expectedWidth, rectangle.Width) 11: End Sub   If we pass in a rectangle, this test passes just fine. What if we pass in a square?   This is where we see the violation of Liskov’s Principle! A square might "IS A” to a rectangle, but we have differing expectations on how a rectangle should function than how a square should! Great expectations Here’s where we pat ourselves on the back and take a victory lap around the office and tell everyone about how we understand LSP like a boss. And all is good… until we start trying to apply it to our work. If I can’t even change functionality on a simple setter without breaking the expectations on a parent class, what can I do with subtyping? Did Liskov just tell me to never touch subtyping again? The short answer: NO, SHE DIDN’T. When I first learned LSP, and from those I’ve talked with as well, I overlooked a very important but not appropriately stressed quality of the principle: our expectations. Our inclination is to want a logical catch-all, where we can easily apply this principle and wipe our hands, drop the mic and exit stage left. That’s not the case because in every different programming scenario, our expectations of the parent class or type will be different. We have to set reasonable expectations on the behaviors that we expect out of the parent, then make sure that those expectations are met by the child. Any expectations not explicitly expected of the parent aren’t expected of the child either, and don’t register as a violation of LSP that prevents implementation. 
You can see the flexibility mentioned in the Wikipedia article itself: A typical example that violates LSP is a Square class that derives from a Rectangle class, assuming getter and setter methods exist for both width and height. The Square class always assumes that the width is equal with the height. If a Square object is used in a context where a Rectangle is expected, unexpected behavior may occur because the dimensions of a Square cannot (or rather should not) be modified independently. This problem cannot be easily fixed: if we can modify the setter methods in the Square class so that they preserve the Square invariant (i.e., keep the dimensions equal), then these methods will weaken (violate) the postconditions for the Rectangle setters, which state that dimensions can be modified independently. Violations of LSP, like this one, may or may not be a problem in practice, depending on the postconditions or invariants that are actually expected by the code that uses classes violating LSP. Mutability is a key issue here. If Square and Rectangle had only getter methods (i.e., they were immutable objects), then no violation of LSP could occur. What this means is that the above situation with a rectangle and a square can be acceptable if we do not have the expectation for width to leave height unaffected, or vice-versa, in our application. Conclusion – the oft forgot third wheel Liskov Substitution Principle is meant to act as a guidance and warn us against unexpected behaviors. Objects can be stateful and as a result we can end up with unexpected situations if we don’t code carefully. Specifically when subclassing, make sure that the subclass meets the expectations held to its parent. Don’t let LSP think you cannot deviate from the behaviors of the parent, but understand that LSP is meant to highlight the importance of not only the parent and the child class, but also of the expectations WE set for the parent class and the necessity of meeting those expectations in order to help prevent sticky situations.   Code examples, in both VB and C# Technorati Tags: LSV,Liskov Substitution Principle,Uncle Bob,Robert Martin,Barbara Liskov,Liskov
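
    The immutability remark in that quote is worth pinning down with a sketch: when the dimensions are fixed at construction there is no setter expectation for a Square to break, so substitution is safe. An illustrative example (shown in Java rather than the article's VB, purely as a sketch):

    // Immutable shapes: with no setters, there is no "changing width leaves height
    // alone" expectation that a Square could violate.
    class Rectangle {
        private final int width;
        private final int height;

        Rectangle(int width, int height) {
            this.width = width;
            this.height = height;
        }

        int width()  { return width; }
        int height() { return height; }
        int area()   { return width * height; }
    }

    class Square extends Rectangle {
        Square(int side) {
            super(side, side); // the invariant is established once, at construction
        }
    }

    Any property provable about an immutable Rectangle (its area is width times height, its dimensions never change) holds for the Square as well, which is exactly what the substitution principle asks for.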

    Read the article

  • Thought Oracle Usability Advisory Board Was Stuffy? Wrong. Justification for Attending OUAB: ROI

    - by ultan o'broin
    Looking for reasons tell your boss why your organization needs to join the Oracle Usability Advisory Board or why you need approval to attend one of its meetings (see the requirements)? Try phrases such as "Continued Return on Investment (ROI)", "Increased Productivity" or "Happy Workers". With OUAB your participation is about realizing and sustaining ROI across the entire applications life-cycle from input to designs to implementation choices and integration, usage and performance and on measuring and improving the onboarding and support experience. If you think this is a boring meeting of middle-aged people sitting around moaning about customizing desktop forms and why the BlackBerry is here to stay, think again! How about this for a rich agenda, all designed to engage the audience in a thought-provoking and feedback-illiciting day of swirling interactions, contextual usage, global delivery, mobility, consumerizationm, gamification and tailoring your implementation to reflect real users doing real work in real environments.  Foldable, rollable ereader devices provide a newspaper-like UK for electronic news. Or a way to wrap silicon chips, perhaps. Explored at the OUAB Europe Meeting (photograph from Terrace Restaurant in TVP. Nom.) At the 7 December 2012 OUAB Europe meeting in Oracle Thames Valley Park, UK, Oracle partners and customers stepped up to the mic and PPT decks with a range of facts and examples to astound any UX conference C-level sceptic. Over the course of the day we covered much ground, but it was all related in a contextual, flexibile, simplication, engagement way aout delivering results for business: that means solving problems. This means being about the user and their tasks and how to make design and technology transforms work into a productive activity that users and bean counters will be excited by. The sessions really gelled for me: 1. Mobile design patterns and the powerful propositions for customers and partners offered by using the design guidance with Oracle ADF Mobile. Customers' and partners' developers existing ADF developers are now productive, efficient ADF Mobile developers applying proven UX guidance using ADF Mobile components and other Oracle Fusion Middleware in the development toolkit. You can find the Mobile UX Design Patterns and Guidance on Building Mobile Apps on OTN. 2. Oracle Voice and Apps. How this medium offers so much potentual in the enterprise and offers a window in Fusion Apps cloud webservices, Oracle RightNow NLP and Nuance technology. Exciting stuff, demoed live on a mobile phone. Stay tuned for more features and modalities and how you can tailor your own apps experience.  3. Oracle RightNow Natural Language Processing (NLP) Virtual Assistant technology (Ella): how contextual intervention and learning from users sessions delivers a great personalized UX for users interacting with Ella, a fifth generation VA to solve problems and seek knowledge. 4. BYOD Keynote: A balanced keynote address contrasting Fujitsu's explaining of the conceprt, challenges, and trends and setting the expectation that BYOD must be embraced in a flexible way,  with the resolute, crafted high security enterprise requirements that nuancing the BYOD concept and proposals with the realities of their world of water tight information and device sharing policies. Fascinating stuff, as well providing anecdotes to make us thing about out own DYOD Deployments. One size does not fit all. 5. 
Icon Cultural Surveys Results and Insights Arising: Ever wondered about the cultural appropriateness of icons used in software UIs and how these icons assessed for global use? Or considered that social media "Like" icons might be  unacceptable hand gestures in culture or enterprise? Or do the old world icons like Save floppy disk icons still find acceptable? Well the survey results told you. Challenges must be tested, over time, and context of use is critical now, including external factors such as the internet and social media adoption. Indeed the fears about global rejection of the face and hand icons was not borne out, and some of the more anachronistic icons (checkbooks, microphones, real-to-real tape decks, 3.5" floppies for "save") have become accepted metaphors for current actions. More importantly the findings brought into focus the reason for OUAB - engage with and illicit feedback though working groups before we build anything. 6. EReaders and Oracle iBook: What is the uptake and trends of ereaders? And how about a demo of an iBook with enterprise apps content?  Well received by the audience, the session included a live running poll of ereader usage. 7. Gamification Design Jam: Fun, hands on event for teams of Oracle staff, partners and customers, actually building gamified flows, a practice that can be applied right away by customers and partners.  8. UX Direct: A new offering of usability best practices, coming to an external website for you in 2013. FInd a real user, observe their tasks, design and approve, build and measure. Simple stuff to improve apps implications no end. 9. FUSE (an internal term only, basically Fusion Simplified Experience): demo of the new Face of Fusion Applications: inherently mobile, simple to use, social, personalizable and FAST, three great demos from the HCM, CRM and ICT world on how these UX designs can be used in different ways. So, a powerful breadth and depth of UX solutions and opporunities for customers and partners to engage with and explore how they can make their users happy and benefit their business reaping continued ROI from those apps investments. Find out more about the OUAB and how to get involved here ... 

    Read the article

  • Multiple Audio Issues

    - by Lerp
    I am having issues with my audio on Ubuntu 12.04, I will try and give as much detail as possible so sorry if there's too much detail. The Problem Audio plays from both speakers and headphone regardless of what connector I choose and regardless of the profile I use. The microphone is constantly being played through headphones & speakers. The headphone audio is extremely quiet but plays from both ears when I select "Headphones" for the connector in Sound Settings. The headphone audio only plays from one ear and is quiet (but not as quiet as above) when I select "Analogue Output" for the connector in Sound Settings. I can only select "Headphones" as the connector in Sound Settings if I set the profile to either "Analogue Stereo Output/Duplex", all others only allow me to choose "Analogue Output" for the connector. Despite the headphone sound issues, the speaker sound is fine apart from the fact that I am not able to select which output is used, they just both play. My headphone and microphone are plugged into the front and my speakers are plugged into the back. What I have tried I have put everything in alsamixer to 100 apart from "Front Mic Boost" which I have set to 0. Command Output aplay -l **** List of PLAYBACK Hardware Devices **** card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog] Subdevices: 0/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 2: AD198x Headphone [AD198x Headphone] Subdevices: 1/1 Subdevice #0: subdevice #0 arecord -l **** List of CAPTURE Hardware Devices **** card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog] Subdevices: 2/3 Subdevice #0: subdevice #0 Subdevice #1: subdevice #1 Subdevice #2: subdevice #2 cat /proc/asound/cards 0 [Intel ]: HDA-Intel - HDA Intel HDA Intel at 0xf7ff8000 irq 70 cat /proc/asound/modules 0 snd_hda_intel cat /proc/asound/card*/codec* | grep "Codec" Codec: Analog Devices AD1989B cat /etc/modprobe.d/alsa-base.conf # autoloader aliases install sound-slot-0 /sbin/modprobe snd-card-0 install sound-slot-1 /sbin/modprobe snd-card-1 install sound-slot-2 /sbin/modprobe snd-card-2 install sound-slot-3 /sbin/modprobe snd-card-3 install sound-slot-4 /sbin/modprobe snd-card-4 install sound-slot-5 /sbin/modprobe snd-card-5 install sound-slot-6 /sbin/modprobe snd-card-6 install sound-slot-7 /sbin/modprobe snd-card-7 # Cause optional modules to be loaded above generic modules install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; } # # Workaround at bug #499695 (reverted in Ubuntu see LP #319505) install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; } install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; } install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; } # install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; } # Cause optional modules to be loaded above sound card driver modules install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist 
snd-emu10k1-synth ; } install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; } # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway) install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; } # Prevent abnormal drivers from grabbing index 0 options bt87x index=-2 options cx88_alsa index=-2 options saa7134-alsa index=-2 options snd-atiixp-modem index=-2 options snd-intel8x0m index=-2 options snd-via82xx-modem index=-2 options snd-usb-audio index=-2 options snd-usb-caiaq index=-2 options snd-usb-ua101 index=-2 options snd-usb-us122l index=-2 options snd-usb-usx2y index=-2 # Ubuntu #62691, enable MPU for snd-cmipci options snd-cmipci mpu_port=0x330 fm_port=0x388 # Keep snd-pcsp from being loaded as first soundcard options snd-pcsp index=-2 # Keep snd-usb-audio from beeing loaded as first soundcard options snd-usb-audio index=-2 Hopefully I have provided enough information, I will happily provide anymore information needed. Thank you. Update Reinstalling alsa-base and pulseaudio fixed the headphone issues I was having.

    Read the article

  • Android stream to Wowza

    - by Curtis Kiu
    I feel very confused about streaming from Android to Wowza. I am building a cross-platform video conference over RTMP, but Android doesn't speak RTMP, so I need to find another way to do it. For upstreaming I found an open-source app called spydroid-ipcamera. It uses RTP, sending UDP packets to the computer, which opens them in VLC using the following SDP: v=0 s=Unnamed m=video 5006 RTP/AVP 96 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1;profile-level-id=420016;sprop-parameter-sets=Z0IAFukBQHsg,aM4BDyA=; But that didn't work. Then I followed the Wowza tutorial, streamed to it, and played it back in VLC. That works! I wrote it up in http://code.google.com/p/spydroid-ipcamera/issues/detail?id=2 However, when I want to add audio to the packets, it fails. I changed the code in http://code.google.com/p/spydroid-ipcamera/source/browse/trunk/src/net/mkp/spydroid/CameraStreamer.java to: mr.setAudioSource(MediaRecorder.AudioSource.MIC); mr.setVideoSource(MediaRecorder.VideoSource.CAMERA); mr.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); mr.setVideoFrameRate(20); mr.setVideoSize(640, 480); mr.setAudioEncoder(MediaRecorder.AudioEncoder.AAC); mr.setVideoEncoder(MediaRecorder.VideoEncoder.H264); mr.setPreviewDisplay(holder.getSurface()); Then I thought the problem must be in the SDP, but I don't know how to deal with SDP. I am streaming H.264/AAC in MP4. Second, I don't understand SDP. So how can I build the upstreaming part of the video conference using this app? Android ----(UDP Port:5006)----> PC (SDP file) and then Wowza reads the SDP file ------> VLC I think this way the system cannot handle more than one client, since the SDP can only hold one port. Any ideas, or will it actually not work? Also, Wowza needs the stream to be set up before we stream to it, so does that mean I should not follow this approach? Sorry, my English is poor; I hope you understand.

    Read the article

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app i need to capture mic samples to encode into mp3 with ffmpeg First configure the audio: /** * We need to specifie our format on which we want to work. * We use Linear PCM cause its uncompressed and we work on raw data. * for more informations check. * * We want 16 bits, 2 bytes (short bytes) per packet/frames at 8khz */ AudioStreamBasicDescription audioFormat; audioFormat.mSampleRate = SAMPLE_RATE; audioFormat.mFormatID = kAudioFormatLinearPCM; audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger; audioFormat.mFramesPerPacket = 1; audioFormat.mChannelsPerFrame = 1; audioFormat.mBitsPerChannel = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8; audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame*sizeof(SInt16); audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame*sizeof(SInt16); The recording callback is: static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { NSLog(@"Log record: %lu", inBusNumber); NSLog(@"Log record: %lu", inNumberFrames); NSLog(@"Log record: %lu", (UInt32)inTimeStamp); // the data gets rendered here AudioBuffer buffer; // a variable where we check the status OSStatus status; /** This is the reference to the object who owns the callback. */ AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon; /** on this point we define the number of channels, which is mono for the iphone. the number of frames is usally 512 or 1024. */ buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // sample size buffer.mNumberChannels = 1; // one channel buffer.mData = malloc( inNumberFrames * sizeof(SInt16) ); // buffer size // we put our buffer into a bufferlist array for rendering AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList.mBuffers[0] = buffer; // render input and check for error status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList); [audioProcessor hasError:status:__FILE__:__LINE__]; // process the bufferlist in the audio processor [audioProcessor processBuffer:&bufferList]; // clean up the buffer free(bufferList.mBuffers[0].mData); //NSLog(@"RECORD"); return noErr; } With data: inBusNumber = 1 inNumberFrames = 1024 inTimeStamp = 80444304 // All the time same inTimeStamp, this is strange However, the framesize that i need to encode mp3 is 1152. How can i configure it? If i do buffering, that implies a delay, but i would like to avoid this because is a real time app. If i use this configuration, each buffer i get trash trailing samples, 1152 - 1024 = 128 bad samples. All samples are SInt16.
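
    The usual way to bridge a 1024-samples-per-callback capture size and a 1152-samples-per-frame MP3 encoder is a small FIFO: append whatever each render callback delivers and hand the encoder exact 1152-sample frames as they become available; the extra delay is bounded by one encoder frame of audio. A sketch of that accumulator (written in Java as neutral pseudocode; the same logic ports directly to the C callback):

    import java.util.ArrayDeque;

    // Collects variable-sized capture buffers and emits fixed 1152-sample frames.
    public final class FrameAccumulator {
        private static final int FRAME_SIZE = 1152; // samples per MP3 frame
        private final ArrayDeque<Short> fifo = new ArrayDeque<Short>();

        // Called from the capture path with whatever the hardware delivered (e.g. 1024).
        public synchronized void push(short[] samples, int count) {
            for (int i = 0; i < count; i++) {
                fifo.addLast(samples[i]);
            }
        }

        // Returns one full encoder frame, or null if not enough samples have arrived yet.
        public synchronized short[] nextFrame() {
            if (fifo.size() < FRAME_SIZE) {
                return null;
            }
            short[] frame = new short[FRAME_SIZE];
            for (int i = 0; i < FRAME_SIZE; i++) {
                frame[i] = fifo.removeFirst();
            }
            return frame;
        }
    }

    Feeding the encoder this way avoids the 128 trailing garbage samples: nothing is discarded, the remainder simply carries over into the next frame.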

    Read the article

  • How could I send live video stream to remote server from my phone !!!

    - by poc
    Hello, I have a problem streaming video to a server in real time from my phone. That is, I want my phone to act as an IP camera so the server can watch its live video. I have googled many solutions, but none of them solves my problem. I use MediaRecorder to record, and it saves the video file to the SD card correctly. Then I referred to this page and used the following approach: skt = new Socket(InetAddress.getByName(hostname),port); pfd =ParcelFileDescriptor.fromSocket(skt); mediaRecorder.setOutputFile(pfd.getFileDescriptor()); Now it seems I can send the video stream while recording. However, I wrote a receiver-side program to receive the video stream from Android, and it doesn't work. Is there any error? I can receive the file, but I cannot open the video file. I guess the problem may be caused by the file format? Here is an outline of my code. On the Android side: Socket skt = new Socket(hostIP,port); ParcelFileDescriptor pfd =ParcelFileDescriptor.fromSocket(skt); .... .... mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC); mediaRecorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT); mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); mediaRecorder.setOutputFile(pfd.getFileDescriptor()); ..... mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT); mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP); ..... mediaRecorder.start(); On the receiver side (my ACER notebook): // anyway, I don't think the file extension will have any effect File video = new File (strDate+".3gpp"); FileOutputStream fos; try { fos = new FileOutputStream(video); byte[] data = new byte[1024]; int count =-1; while( (count = fin.read(data,0,1024) ) !=-1) { fos.write(data,0,count); fos.flush(); } fos.close(); fin.close(); I have been confused about this for a long time... thanks in advance

    Read the article

  • Drupal incorrectly escapes tags in javascript

    - by sergdev
    I installed drupal-6.16. I applied the patch from the post http://drupal.org/node/222926#comment-930745. It works correctly in simple cases. But following code of counter is handled incorrectly and counter is now displayed on the page after drupal. Drupal modifies the string "alt='1Gb.ua counter' /><\/a>")</a></script> to "alt='1Gb.ua counter' />&lt;\/a>")</a></script> The full code of counter follows: <br><br> Text <br><br> <!-- counter.1Gb.ua --> <script language="javascript" type="text/javascript"> cgb_js="1.0"; cgb_r=""+Math.random()+"&r="+ escape(document.referrer)+"&pg="+ escape(window.location.href); document.cookie="rqbct=1; path=/"; cgb_r+="&c="+ (document.cookie?"Y":"N"); </script><script language="javascript1.1" type="text/javascript"> cgb_js="1.1";cgb_r+="&j="+ (navigator.javaEnabled()?"Y":"N")</script> <script language="javascript1.2" type="text/javascript"> cgb_js="1.2"; cgb_r+="&wh="+screen.width+ 'x'+screen.height+"&px="+ (((navigator.appName.substring(0,3)=="Mic"))? screen.colorDepth:screen.pixelDepth)</script> <script language="javascript1.3" type="text/javascript"> cgb_js="1.3"</script> <script language="javascript" type="text/javascript">cgb_r+="&js="+cgb_js; document.write("<a href='http://www.1Gb.ua?cnt=1416'>"+ "<img src='http://counter.1Gb.ua/cnt.aspx?"+ "u=1416&"+cgb_r+ "&' border=0 width=88 height=31 "+ "alt='1Gb.ua counter'><\/a>")</script> <noscript><a href='http://www.1Gb.ua?cnt=1416'> <img src="http://counter.1Gb.ua/cnt.aspx?u=1416" border=0 width="88" height="31" alt="1Gb.ua counter"></a> </noscript> <!-- /counter.1Gb.ua --> Does anybody have this code working? How can it be fixed? Thanks a lot in advance!

    Read the article

  • How to adjust microphone gain from C# (needs to work on XP & W7)...

    - by Ed
    First, note that I know there are a few questions like this already posted; however they don't seem to address the problem adequately. I have a C# application, with all the pInvoke hooks to talk to the waveXXX API, and I'm able to do capture and play back of audio with that. I'm also able to adjust speaker (WaveOut) volume with that API. The problem is that for whatever reason, that API does not allow me to adjust microphone (WaveIn) volume. So, I managed to find some mixer code that I've also pulled in and access through pInvoke and that allows me to adjust microphone volume, but only on my W7 PC. The mixer code I started with comes from here: http://social.msdn.microsoft.com/Forums/en-US/isvvba/thread/05dc2d35-1d45-4837-8e16-562ee919da85 and it works, but is written to adjust speaker volume. I added the SetMicVolume method shown here... public static void SetMicVolume(int mxid, int percentage) { bool rc; int mixer, vVolume; MIXERCONTROL volCtrl = new MIXERCONTROL(); int currentVol; mixerOpen(out mixer, mxid, 0, 0, MIXER_OBJECTF_WAVEIN); int type = MIXERCONTROL_CONTROLTYPE_VOLUME; rc = GetVolumeControl(mixer, MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE, type, out volCtrl, out currentVol); if (rc == false) { mixerClose(mixer); mixerOpen(out mixer, 0, 0, 0, 0); rc = GetVolumeControl(mixer, MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE, type, out volCtrl, out currentVol); if (rc == false) throw new Exception("SetMicVolume/GetVolumeControl() failed"); } vVolume = ((int)((float)(volCtrl.lMaximum - volCtrl.lMinimum) / 100.0F) * percentage); rc = SetVolumeControl(mixer, volCtrl, vVolume); if (rc == false) throw new Exception("SetMicVolume/SetVolumeControl() failed"); mixerClose(mixer); } Note the "second attempt" to call GetVolumeControl(). This is done because on XP, in the first call to GetVolumeControl (refer to site above for that code), the call to mixerGetLineControlsA() fails with XP systems returning MIXERR_INVALCONTROL. Then, with this second attempt using mixerOpen(out mixer, 0, 0, 0, 0), the code doesn't return a failure but the mic gain is unaffected. Note, as I said above, this works on W7 (the second attempt is never executed because it doesn't fail using mixerOpen(out mixer, mxid, 0, 0, MIXER_OBJECTF_WAVEIN)). I admit to not having a good grasp on the mixer API, so that's what I'm looking into now; however if anyone has a clue why this would work on W7, but not XP, I'd sure like to hear it. Meanwhile, if I figure it out before I get a response, I'll post my own answer...

    Read the article

  • Drupal incorrectly escapes tags in javascript

    - by sergdev
    I installed drupal-6.16 and applied the patch from the post http://drupal.org/node/222926#comment-930745. It works correctly in simple cases, but the following counter code is handled incorrectly:

        Text

        <!-- counter.1Gb.ua -->
        <script language="javascript" type="text/javascript">
        cgb_js="1.0";
        cgb_r=""+Math.random()+"&r="+ escape(document.referrer)+"&pg="+ escape(window.location.href);
        document.cookie="rqbct=1; path=/";
        cgb_r+="&c="+ (document.cookie?"Y":"N");
        </script>
        <script language="javascript1.1" type="text/javascript">
        cgb_js="1.1";cgb_r+="&j="+ (navigator.javaEnabled()?"Y":"N")
        </script>
        <script language="javascript1.2" type="text/javascript">
        cgb_js="1.2";
        cgb_r+="&wh="+screen.width+ 'x'+screen.height+"&px="+ (((navigator.appName.substring(0,3)=="Mic"))? screen.colorDepth:screen.pixelDepth)
        </script>
        <script language="javascript1.3" type="text/javascript">
        cgb_js="1.3"
        </script>
        <script language="javascript" type="text/javascript">
        cgb_r+="&js="+cgb_js;
        document.write("<a href='http://www.1Gb.ua?cnt=1416'>"+ "<img src='http://counter.1Gb.ua/cnt.aspx?"+ "u=1416&"+cgb_r+ "&' border=0 width=88 height=31 "+ "alt='1Gb.ua counter'><\/a>")
        </script>
        <noscript><a href='http://www.1Gb.ua?cnt=1416'>
        <img src="http://counter.1Gb.ua/cnt.aspx?u=1416" border=0 width="88" height="31" alt="1Gb.ua counter"></a>
        </noscript>
        <!-- /counter.1Gb.ua -->

    It modifies the string

        "alt='1Gb.ua counter' /><\/a>")</a></script>

    to

        "alt='1Gb.ua counter' />&lt;\/a>")</a></script>

    Does anybody have this code working? If so, how can it be fixed? Thanks a lot in advance!

    Read the article

  • Android - dialer icon gets placed in recently used apps after finish()

    - by Donal Rafferty
    In my application I detect the outgoing call when a call is dialled from the dialer or contacts. This works fine; I then pop up a dialog saying I have detected the call, and the user presses a button to close the dialog, which calls finish() on that activity. It all works fine, except that when I then hold the home key to bring up the recently used apps, the dialer icon is there. And when it is clicked, the dialog is brought back into focus in the foreground, when the dialog activity should be dead and gone and not be able to be brought back to the foreground. Here is a picture of what I mean. So two questions arise: why would the dialer icon be placed there, and why would my activity be recalled to the foreground? Here is the code for that Activity, which has a dialog theme:

        public class CallDialogActivity extends Activity {
            boolean isRecording;
            AudioManager audio_service;

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.dialog);
                audio_service = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
                getWindow().addFlags(WindowManager.LayoutParams.FLAG_BLUR_BEHIND);

                Bundle b = this.getIntent().getExtras();
                String number = b.getString("com.networks.NUMBER");
                String name = b.getString("com.networks.NAME");
                TextView tv = (TextView) findViewById(R.id.voip);
                tv.setText(name);

                // Start the call-audio service as soon as the dialog is shown.
                Intent service = new Intent(CallAudio.CICERO_CALL_SERVICE);
                startService(service);

                final Button stop_Call_Button = (Button) findViewById(R.id.widget35);
                this.setVolumeControlStream(AudioManager.STREAM_VOICE_CALL);
                stop_Call_Button.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        // Stop the audio service, jump to the home screen, then close this dialog.
                        Intent service = new Intent(CallAudio._CALL_SERVICE); // this is for Android 1.5 (sets speaker going for a few seconds before shutting down)
                        stopService(service);
                        Intent setIntent = new Intent(Intent.ACTION_MAIN);
                        setIntent.addCategory(Intent.CATEGORY_HOME);
                        setIntent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                        startActivity(setIntent);
                        finish();
                        isRecording = false;
                    }
                });

                final Button speaker_Button = (Button) findViewById(R.id.widget36);
                speaker_Button.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        if (true) {
                            audio_service.setSpeakerphoneOn(false);
                        } else {
                            audio_service.setSpeakerphoneOn(true);
                        }
                    }
                });
            }

            @Override
            protected void onResume() {
                super.onResume();
            }

            @Override
            protected void onPause() {
                super.onPause();
            }

            public void onCofigurationChanged(Configuration newConfig) {
                super.onConfigurationChanged(newConfig);
            }
        }

    It calls a service that uses AudioRecord to record from the mic and AudioTrack to play it out of the earpiece; nothing in the service has anything to do with the dialler. Has anyone any idea why this might be happening?
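
    Not an answer from the original thread, but one reading of the symptom is that the dialog activity ends up in the Phone app's task, so the recents entry carries that task's (dialer) icon and tapping it re-surfaces the task. A commonly suggested way to keep such a transient dialog out of the recents list is to launch it with the exclude-from-recents and no-history flags, or declare android:excludeFromRecents / android:noHistory on the <activity> element in AndroidManifest.xml. Below is a minimal sketch: DialogLauncher and its show() method are hypothetical helpers (not in the asker's code), assumed to live in the same package as CallDialogActivity and to be called from the asker's outgoing-call detection code; the intent flags themselves are standard Android APIs.

        import android.content.Context;
        import android.content.Intent;

        public final class DialogLauncher {
            // Launch CallDialogActivity so it never appears in "recently used apps"
            // and is dropped from history as soon as the user navigates away.
            public static void show(Context context, String number, String name) {
                Intent intent = new Intent(context, CallDialogActivity.class);
                intent.putExtra("com.networks.NUMBER", number);
                intent.putExtra("com.networks.NAME", name);
                intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);             // required when starting from a non-activity context
                intent.addFlags(Intent.FLAG_ACTIVITY_EXCLUDE_FROM_RECENTS); // keep the entry out of the recents list
                intent.addFlags(Intent.FLAG_ACTIVITY_NO_HISTORY);           // do not keep the finished dialog around
                context.startActivity(intent);
            }
        }

    Whether this also explains why the dialer icon specifically is shown remains an open question in the post.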

    Read the article

  • HTML Language question

    - by Mike
    Note my code below. I am trying to figure out why my data is not changing to Spanish. I understand it to be one line of code, and that it is all within the HTML attribute lang=”es”. Any help would be greatly appreciated.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xlmns="http://www.w3.org/1999/xhtml" lang=”es” xml:lang="en">
        <head>
            <title>JavaJam Coffee House</title>
            <link href="javajam.css" rel="stylesheet" type="text/css" />
        </head>
        <body bgcolor="brown">
            <h1>JavaJam Coffee House</h1>
            <ul>
                <li>Specialty Coffee and Tea</li>
                <li>Bagels, Muffins, and Organic Snacks</li>
                <li>Music and Poetry Readings</li>
                <li>Usability Studies</li>
                <li>Open Mic Night</li>
            </ul>
            <br></br>
            <p>12312 Main Street<br>
            Mountain Home, CA 93923<br>
            1-888-555-5555</br>
            </p>
            <p>
            <em>
            <small>Copyright &copy; 2008 JavaJam Coffee House</em></p>
            E-Mail <a href="mailto;[email protected]">
            Michael J. Crawley</a>
        </body>
        </html>

    Read the article
