Search Results

Search found 8342 results on 334 pages for 'audio player'.

Page 43 of 334

  • Display a ranking grid for a game: optimizing LEFT OUTER JOINs and finding a player's rank

    - by Jerome C.
    Hello, I want to build a ranking grid. I have a table of values indexed by a key - table SimpleValue: key varchar, value int, playerId int. Each player has several SimpleValue rows - table Player: id int, nickname varchar. Now imagine these records:

    SimpleValue:
    key  value  playerId
    for    1       1
    int    2       1
    agi    2       1
    lvl    5       1
    for    6       2
    int    3       2
    agi    1       2
    lvl    4       2

    Player:
    id  nickname
    1   Bob
    2   John

    I want to display a ranking of these players on various SimpleValue keys, something like:

    nickname  for  lvl
    Bob        1    5
    John       6    4

    At the moment I generate an SQL query based on which SimpleValue keys should be displayed and on which SimpleValue key the players should be ordered. E.g. to display 'lvl' and 'for' for each player, ordered by 'lvl', the generated query is: SELECT p.nickname as nickname, v1.value as lvl, v2.value as for FROM Player p LEFT OUTER JOIN SimpleValue v1 ON p.id=v1.playerId and v1.key = 'lvl' LEFT OUTER JOIN SimpleValue v2 ON p.id=v2.playerId and v2.key = 'for' ORDER BY v1.value

    This query runs perfectly, BUT if I want to display 10 different values it generates 10 LEFT OUTER JOINs. Is there a way to simplify this query?

    A second question: is there a way to display only a portion of this ranking? Imagine I have 1000 players and I want to display the TOP 10; I use the LIMIT keyword. Now I want to display the rank of the player Bob, who is 326/1000, and I want to show the 5 ranked players above and below him (so positions 321 to 331). How can I achieve that? Thanks.
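
    For reference, a hedged sketch of the two ideas this usually comes down to, assuming MySQL-style syntax and the table and column names above (note that for and key need quoting because they collide with keywords); the derived-table rank query is an assumption to adapt to your database version:

        -- one pass over SimpleValue instead of one LEFT OUTER JOIN per key
        SELECT p.nickname,
               MAX(CASE WHEN v.`key` = 'for' THEN v.value END) AS `for`,
               MAX(CASE WHEN v.`key` = 'lvl' THEN v.value END) AS lvl
        FROM Player p
        LEFT OUTER JOIN SimpleValue v ON v.playerId = p.id
        GROUP BY p.nickname
        ORDER BY lvl;

        -- rank window around Bob: rank every player once, then keep rows 321..331
        SELECT *
        FROM (SELECT p.id, p.nickname, v.value AS lvl,
                     (SELECT COUNT(*) + 1 FROM SimpleValue v2
                      WHERE v2.`key` = 'lvl' AND v2.value > v.value) AS rank_pos
              FROM Player p
              JOIN SimpleValue v ON v.playerId = p.id AND v.`key` = 'lvl') ranked
        WHERE rank_pos BETWEEN 321 AND 331
        ORDER BY rank_pos;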

    Read the article

  • Split UInt32 (audio frame) into two SInt16s (left and right)?

    - by morgancodes
    Total noob to anything lower-level than Java, diving into iPhone audio, and reeling from all of the casting/pointers/raw memory access. I'm working with some example code which reads a WAV file from disk and returns stereo samples as single UInt32 values. If I understand correctly, this is just a convenient way to return the 32 bits of memory required to hold two 16-bit samples. Eventually this data gets written to a buffer, and an audio unit picks it up down the road. Even though the data is written in UInt32-sized chunks, it is eventually interpreted as pairs of 16-bit samples. What I need help with is splitting these UInt32 frames into left and right samples. I'm assuming I'll want to convert each UInt32 into an SInt16, since an audio sample is a signed value. It seems to me that for efficiency's sake I ought to be able to simply point to the same blocks of memory and avoid any copying. So, in pseudo-code, it would be something like this: UInt32 myStereoFrame = getFramefromFilePlayer; SInt16* leftChannel = getFirst16Bits(myStereoFrame); SInt16* rightChannel = getSecond16Bits(myStereoFrame); Can anyone help me turn my pseudo into real code?
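
    A hedged sketch in plain C of the pointer-cast idea the pseudo-code describes, assuming the frame packs the left sample in the low 16 bits and host (little-endian) byte order - both depend on how the file reader fills the UInt32, so verify against real data; a union is noted in the comments as the aliasing-safe variant:

        #include <stdio.h>
        #include <stdint.h>

        typedef uint32_t UInt32;   /* stand-ins for the CoreAudio typedefs */
        typedef int16_t  SInt16;

        int main(void) {
            UInt32 myStereoFrame = 0xFFFF0001u;     /* example frame: low half = 1, high half = -1 */

            /* Point at the same 32 bits and read them as two signed 16-bit samples.
               No copying: left/right alias the memory of myStereoFrame.
               (A union { UInt32 frame; SInt16 ch[2]; } avoids strict-aliasing issues.) */
            SInt16 *samples = (SInt16 *)&myStereoFrame;
            SInt16 left  = samples[0];              /* low half  - assumed left  */
            SInt16 right = samples[1];              /* high half - assumed right */

            printf("left=%d right=%d\n", left, right);
            return 0;
        }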

    Read the article

  • Java collision detection and player movement: tips

    - by Loris
    I have read a short guide on game development (Java, without external libraries). I'm struggling with collision detection and player (and bullet) movement. Here is the code; most of it is taken from the guide (should I link it?), and I'm just trying to expand and complete it. This is the class that takes care of movement updates and the firing mechanism (and collision detection):

    public class ArenaController { private Arena arena; /** selected cell for movement */ private float targetX, targetY; /** true if droid is moving */ private boolean moving = false; /** true if droid is shooting to enemy */ private boolean shooting = false; private DroidController droidController; public ArenaController(Arena arena) { this.arena = arena; this.droidController = new DroidController(arena); } public void update(float delta) { Droid droid = arena.getDroid(); //droid movements if (moving) { droidController.moveDroid(delta, targetX, targetY); //check if arrived if (droid.getX() == targetX && droid.getY() == targetY) moving = false; } //firing mechanism if(shooting) { //stop shot if there aren't bullets if(arena.getBullets().isEmpty()) { shooting = false; } for(int i = 0; i < arena.getBullets().size(); i++) { //current bullet Bullet bullet = arena.getBullets().get(i); System.out.println(bullet.getBounds()); //angle calculation double angle = Math.atan2(bullet.getEnemyY() - bullet.getY(), bullet.getEnemyX() - bullet.getX()); //increments x and y bullet.setX((float) (bullet.getX() + (Math.cos(angle) * bullet.getSpeed() * delta))); bullet.setY((float) (bullet.getY() + (Math.sin(angle) * bullet.getSpeed() * delta))); //collision with obstacles for(int j = 0; j < arena.getObstacles().size(); j++) { Obstacle obs = arena.getObstacles().get(j); if(bullet.getBounds().intersects(obs.getBounds())) { System.out.println("Collision detect!"); arena.removeBullet(bullet); } } //collisions with enemies for(int j = 0; j < arena.getEnemies().size(); j++) { Enemy ene = arena.getEnemies().get(j); if(bullet.getBounds().intersects(ene.getBounds())) { System.out.println("Collision detect!"); arena.removeBullet(bullet); } } } } } public boolean onClick(int x, int y) { //click on empty cell if(arena.getGrid()[(int)(y / Arena.TILE)][(int)(x / Arena.TILE)] == null) { //coordinates targetX = x / Arena.TILE; targetY = y / Arena.TILE; //enables movement moving = true; return true; } //click on enemy: fire if(arena.getGrid()[(int)(y / Arena.TILE)][(int)(x / Arena.TILE)] instanceof Enemy) { //coordinates float enemyX = x / Arena.TILE; float enemyY = y / Arena.TILE; //new bullet Bullet bullet = new Bullet(); //start coordinates bullet.setX(arena.getDroid().getX()); bullet.setY(arena.getDroid().getY()); //end coordinates (enemy) bullet.setEnemyX(enemyX); bullet.setEnemyY(enemyY); //adds bullet to arena arena.addBullet(bullet); //enables shooting shooting = true; return true; } return false; }

    As you can see, I'm trying to use Rectangle objects for collision detection. Droid example:

    import java.awt.geom.Rectangle2D; public class Droid { private float x; private float y; private float speed = 20f; private float rotation = 0f; private float damage = 2f; public static final int DIAMETER = 32; private Rectangle2D rectangle; public Droid() { rectangle = new Rectangle2D.Float(x, y, DIAMETER, DIAMETER); } public float getX() { return x; } public void setX(float x) { this.x = x; //rectangle update rectangle.setRect(x, y, DIAMETER, DIAMETER); } public float getY() { return y; } public void setY(float y) { this.y = y; //rectangle update rectangle.setRect(x, y, DIAMETER, DIAMETER); } public float getSpeed() { return speed; } public void setSpeed(float speed) { this.speed = speed; } public float getRotation() { return rotation; } public void setRotation(float rotation) { this.rotation = rotation; } public float getDamage() { return damage; } public void setDamage(float damage) { this.damage = damage; } public Rectangle2D getRectangle() { return rectangle; } }

    For now, if I start the application and try to shoot at an enemy, a collision is detected immediately and the bullet is removed! Can you help me with this? The bullet should only disappear when it hits an enemy or an obstacle on its way. PS: I know that the movement of the bullets should be managed in another class; this code is temporary.

    Update: I realized what happens, but not why. With those for loops (which check collisions) the bullets move instantaneously instead of gradually. On top of that, if I add collision detection to the Droid, the intersects method returns true ALWAYS while the droid is moving!

    public void moveDroid(float delta, float x, float y) { Droid droid = arena.getDroid(); int bearing = 1; if (droid.getX() > x) { bearing = -1; } if (droid.getX() != x) { droid.setX(droid.getX() + bearing * droid.getSpeed() * delta); //obstacles collision detection for(Obstacle obs : arena.getObstacles()) { if(obs.getRectangle().intersects(droid.getRectangle())) { System.out.println("Collision detected"); //ALWAYS HERE } } //check if it has arrived if ((droid.getX() < x && bearing == -1) || (droid.getX() > x && bearing == 1)) droid.setX(x); } bearing = 1; if (droid.getY() > y) { bearing = -1; } if (droid.getY() != y) { droid.setY(droid.getY() + bearing * droid.getSpeed() * delta); if ((droid.getY() < y && bearing == -1) || (droid.getY() > y && bearing == 1)) droid.setY(y); } }
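
    A minimal, self-contained sketch of the same Rectangle2D-based check, written to highlight the coordinate-space question: intersects() only gives meaningful results if the bullet, droid and obstacle rectangles are all kept in the same units (all pixels or all tiles), whereas onClick above divides the click by Arena.TILE while positions elsewhere may be in pixels. The class and values below are hypothetical, not taken from the original project:

        import java.awt.geom.Rectangle2D;

        public class CollisionSketch {
            public static void main(String[] args) {
                int tile = 32;

                // a bullet whose position is kept in pixels
                Rectangle2D bullet = new Rectangle2D.Float(100f, 100f, 8f, 8f);

                // the same enemy expressed in tile units (3,3) and in pixel units (96,96)
                Rectangle2D enemyInTiles  = new Rectangle2D.Float(3f, 3f, 1f, 1f);
                Rectangle2D enemyInPixels = new Rectangle2D.Float(3 * tile, 3 * tile, tile, tile);

                System.out.println(bullet.intersects(enemyInTiles));   // false: mixed units, meaningless test
                System.out.println(bullet.intersects(enemyInPixels));  // true: both rectangles in pixels
            }
        }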

    Read the article

  • Record 8 separate Line IN Channels from M-Audio Delta 1010 Card

    - by Peter Hoffmann
    I want to record the 8 separate line-in channels of my M-Audio Delta 1010 card. The card is recognized nicely and I can record a single channel via arecord -d 10 -f cd -t wav -D channel1 out2.wav. I've set up the different channels in ~/.asoundrc. Now if I want to record a second channel in parallel (arecord -d 10 -f cd -t wav -D channel2 out2.wav) I get the error: arecord: main:564: audio open error: Device or resource busy. As I understand it, the Delta 1010 is a single-access card, so only one application can access it at a time. Is this correct? The next step was to configure a dual-channel input in .asoundrc:

    # envy24 channel 1+2 only
    pcm.test {
        type plug
        ttable.0.0 1
        ttable.0.1 1
        slave.pcm ice1712
    }

    This works when I do arecord -d 10 -f cd -t wav -D test -c 2 out.wav. (BTW, can anyone point me to a tool to split a multi-channel WAV into one file per channel?) But when I want to record the channels separately (-I option) with arecord -d 10 -f cd -t wav -D test -c 2 -I channel1.wav channel2.wav, I get no recordings. Did I miss something in the configuration, or what are my options to record all 8 channels via arecord? I have no experience with jackd. Is it an option to install jackd and record the line-ins via jackd?
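
    A hedged sketch of the two pieces usually suggested for this setup, assuming the Delta 1010 is ALSA card 0 (hw:0); the PCM name and ipc_key are arbitrary, and the ice1712 capture device may insist on 10 or 12 channels rather than 8, so adjust the channel count to what the card reports. A dsnoop PCM lets several arecord processes share the capture device, and sox can split a multi-channel WAV into one file per channel:

        # ~/.asoundrc -- shareable multi-channel capture device (names are examples)
        pcm.delta_multi {
            type dsnoop
            ipc_key 2048
            slave {
                pcm "hw:0"
                channels 8    # the envy24/ice1712 may require 10 or 12 here
            }
        }

        # record all channels into one file
        arecord -D delta_multi -c 8 -f S32_LE -r 44100 -d 10 all8.wav

        # split the multi-channel WAV into per-channel files with sox
        for ch in 1 2 3 4 5 6 7 8; do sox all8.wav ch$ch.wav remix $ch; done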

    Read the article

  • Bluetooth headset can connect, but is not visible in PulseAudio

    - by Kim Marivoet
    I have a plantronics bluetooth headset, and until yesterday I could use it without any problem. However, today it suddenly stopped working (maybe related to the last software update I did). I can still connect/disconnect my headset, but it doesn't show up in pulse audio anymore. I read through various posts that describes kind of the same problem, but none of the suggested solutions worked. I get following error in the syslog: Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/HFPAG Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSource Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSink Oct 13 16:50:09 desktop kernel: [ 17.340943] input: 48:C1:AC:08:FE:8F as /devices/virtual/input/input14 Oct 13 16:50:09 desktop bluetoothd[1040]: /org/bluez/1040/hci0/dev_48_C1_AC_08_FE_8F/fd0: fd(36) ready Oct 13 16:50:09 desktop rtkit-daemon[1894]: Successfully made thread 2213 of process 1892 (n/a) owned by '1000' RT at priority 5. Oct 13 16:50:09 desktop rtkit-daemon[1894]: Supervising 5 threads of 1 processes of 1 users. Oct 13 16:50:10 desktop bluetoothd[1040]: Badly formated or unrecognized command: AT+XEVENT=USER-AGENT,COM.PLANTRONICS,PLT_VOYAGERPRO,0109,27.90,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF Oct 13 16:50:10 desktop bluetoothd[1040]: Audio connection got disconnected Any help would be much appreciated. I'm using Ubuntu 12.04. Thanks, Kim

    Read the article

  • Setting up Beats audio on HP Pavilion m6

    - by Joel Auterson
    I have an HP Pavilion m6-1054sa laptop, with a Beats subwoofer on the bottom. The normal laptop speakers work fine under Ubuntu but the Beats speaker(s?) does not. Anyone know how to get this working? Here's my lspci output, if it helps... 00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Ivy Bridge PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) 00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4) 00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) 00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 04) 00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Thames XT/GL [Radeon HD 7600M Series] (rev ff) 07:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 5289 (rev 01) 07:00.2 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 0a) 08:00.0 Network controller: Intel Corporation Centrino Wireless-N 2230 (rev c4)

    Read the article

  • Problem Installing Xubuntu 12.04 Audio-Video codecs

    - by Seib
    I used Crouton to install Xubuntu 12.04.4 LTS with the XFCE environment on my HP Chromebook 11. I've got it all fully installed; the only thing I'm doing now is the basic setup, like adding codecs and other things such as LibreOffice, VLC, Firefox, Ubuntu Software Center, etc. I got this information from two sources: http://www.efytimes.com/e1/fullnews.asp?edid=137269 and http://www.binarytides.com/better-xubuntu-14-04/ . I'm currently on the same step at both URLs, which is #6 on Link 1 and #8 on Link 2. Per the articles, which both say the same thing, I typed in sudo apt-get install xubuntu-restricted-extras libavcodec-extra and it didn't do anything. It just kept saying the same thing: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libavcodec-extra I've spent the last hour or so scouring the internet for a solution, or even a hint at what is going on. I don't want to do a clean reinstall, for two reasons: 1) it takes about 1.5 hours just to get the chroot fully installed, and 2) everything I've done so far (up to #6 at Link 1 and up to #8 at Link 2) works except the audio. I've already installed Flash, so YouTube works fine; it's just that I can't hear any audio. Please help? Thanks in advance. I appreciate all the great help I've been getting from AskUbuntu lately. You all are great.
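
    A hedged sketch of the usual checks for "Unable to locate package": refresh the package lists, make sure the universe/multiverse components are enabled, and look for the codec package under the name your release actually ships - the guides linked above target 14.04, and on 12.04 the extra-codecs package was versioned (e.g. libavcodec-extra-53), so the unversioned name may simply not exist there. The exact package name is an assumption to confirm with apt-cache:

        # refresh package lists first; a stale or incomplete index also causes "Unable to locate package"
        sudo apt-get update

        # if universe/multiverse are missing from /etc/apt/sources.list, add them, e.g.:
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu precise universe multiverse"
        sudo apt-get update

        # see which libavcodec-extra variants this release actually provides
        apt-cache search libavcodec-extra

        # then install the restricted extras plus whichever codec package the search reported
        sudo apt-get install xubuntu-restricted-extras libavcodec-extra-53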

    Read the article

  • M-Audio Delta starts up at wrong sample rate

    - by steevc
    When the PC starts, my M-Audio Delta 66 is using a 48000 Hz sample rate even though it is set to 44100 in Envy24. This causes audio to play slower than it should. This is on Kubuntu 14.04 on my new PC with an AMD A8 6500 and 8GB. When I first installed it, it seemed okay, but at some point it went wrong and it has been doing this consistently since then. Kernel is 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:31:42 UTC 2014 i686 athlon i686 GNU/Linux

    steve@slarti:~$ cat /proc/asound/card2/pcm0p/sub0/hw_params
    access: MMAP_INTERLEAVED
    format: S32_LE
    subformat: STD
    channels: 10
    rate: 48000 (48000/1)
    period_size: 441
    buffer_size: 3528

    I can get it to switch to 44100 if I disable/enable the Delta in the PulseAudio volume control, but I have to do this every day and the sound still seems distorted. I can't see any issues in any of the config or log files I can find. If I boot the PC with a Mint live USB it starts at 44100 and sounds fine. I originally reported this in "Youtube is playing at the wrong speed - maybe soundcard related", but I'll close that and have this more relevant question instead.
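
    Since the card behaves once it is re-opened at 44100, a hedged guess is that PulseAudio is initialising the Delta at its own default rate (48000) at login. A sketch of the usual adjustment, assuming PulseAudio is managing the card; the alternate-sample-rate line needs PulseAudio 4.0 or newer:

        # /etc/pulse/daemon.conf (or ~/.config/pulse/daemon.conf)
        default-sample-rate = 44100
        alternate-sample-rate = 48000

        # restart PulseAudio so the card is re-opened at the new rate
        pulseaudio -k && pulseaudio --start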

    Read the article

  • ffmpeg: add two audio streams to a video

    - by Tossin Hausen
    I tried this: ffmpeg -i /sdcard/video/transcode/video.avi -map 0:0,0 -i /sdcard/video/transcode/first.mp3 -map 1:0,1 -i /sdcard/video/transcode/second.mp3 -map 2:0,2 -acodec copy -vcodec py /sdcard/video/transcode/Output.avi to add two audio streams to one video file. But ffmpeg says the number of mappings should match the number of output streams. What is wrong here? I'm trying to work with an Android build of FFmepg "ffmpeg for android beta". "Does not work" means that this uncommunicative Android build of FFmpeg just stops without giving any error message. The -codec copy option does not work with this build. Now I tried the same set of files with the FFmpeg called command line tool that comes with Ubuntu 10. Something (can't say where it is from). The -codec copy option does not work with this FFmpeg too. Here the complete output: m30x:~/movie/Film$ ffmpeg -i input.avi -i first.mp3 -i second.mp3 -map 0 -map 1 -map 2 -acodec copy -vcodec copy output.avi FFmpeg version SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.1, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --extra-version=4:0.5.9-0ubuntu0.10.04.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 1 / 52.20. 1 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libavfilter 0. 4. 0 / 0. 4. 0 libswscale 0. 7. 1 / 0. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Jun 12 2012 16:27:34, gcc: 4.4.3 [NULL @ 0x93cfd10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1) Input #0, avi, from 'input.avi': Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc Input #1, mp3, from 'first.mp3': Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s Input #2, mp3, from 'second.mp3': Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s Stream #2.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s Number of stream maps must match number of output streams Merging only one audio stream with the video stream works with Ubuntu and Android version of FFmpeg. Here the complete output: ffmpeg -i input.avi -i first.mp3 -map 0 -map 1 -acodec copy -vcodec copy output.avi FFmpeg version SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.1, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --extra-version=4:0.5.9-0ubuntu0.10.04.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 1 / 52.20. 1 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libavfilter 0. 4. 0 / 0. 4. 0 libswscale 0. 7. 1 / 0. 7. 1 libpostproc 51. 2. 0 / 51. 2. 
0 built on Jun 12 2012 16:27:34, gcc: 4.4.3 [NULL @ 0x9bfad10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1) Input #0, avi, from 'input.avi': Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc Input #1, mp3, from 'first.mp3': Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s Output #0, avi, to 'output.avi': Stream #0.0: Video: mpeg4, yuv420p, 576x432, q=2-31, 90k tbn, 25 tbc Stream #0.1: Audio: libmp3lame, 22050 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #1.0 -> #0.1 Press [q] to stop encoding frame= 6157 fps=6156 q=-1.0 size= 31667kB time=246.28 bitrate=1053.3kbits/s Do you have an idea why it does not work with two audio streams? By the way, ffmpeg -i input_with_first_audio_stream.avi -i second.mp3 -acodec copy -vcodec copy output_two_audio_streams.avi -newaudio works with both versions of ffmpeg that I use, but the first audio stream is played too fast (x10 or more), while the second audio stream is played correct. Many thanks in advance and sorry for my unconventional question and outdated versions of ffmpeg. But I am a lamer and it is not so easy for me to compile from the source (especially for the Android version). I will try to compile an up to date version of ffmpeg with Ubuntu, but I don't have much free time.
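
    For what it's worth, a hedged sketch of how this mapping is written on ffmpeg builds newer than the 0.5/SVN ones shown above, where each -map names one input stream and stream copy is selected per stream type; older builds (and the AVI muxer itself) may still refuse two audio tracks, so an MKV output is used here as an assumption:

        # newer ffmpeg syntax: one -map per stream wanted in the output
        ffmpeg -i input.avi -i first.mp3 -i second.mp3 \
               -map 0:v -map 1:a -map 2:a \
               -c:v copy -c:a copy \
               output.mkv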

    Read the article

  • How to play 24 fps video smoothly on a 60Hz display? (or which player supports frame interpolation?)

    - by netvope
    I use mpc-hc to play videos on Win7 x64. With the default settings (#1), video playback is great most of the time. But for panning shots, playback is not smooth. I stepped through the video frame by frame and found that the panning movement is smooth (e.g. each frame shifts horizontally by 10 pixels), so the problem is how the 23.976 fps video is interpolated to 60Hz. The judder looks like what would be caused by a "2:3 pulldown", where the frames are played unevenly like: frame 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, etc (#2) Using "optimal renderer settings" (#3) instead of the default disables the Aero theme and causes tearing. Setting my LCD display to 50Hz may have improved the judder slightly (but I can't really tell). My display does not support 24Hz or 48Hz, and forcing them in the Nvidia control panel gives blurry screen. I've tried other video players (VLC and KMPlayer), the ReClock Directshow Filter, video files from different sources (#4), turning on/off DXVA, and a computer with a different GPU, but the judder in the playback is similar. None of them solved the problem. So, how can I play 23.976 or 24 fps video smoothly on a 60Hz display? I think a video player could make the video smoother by doing linear interpolation, such as: 1. 100% frame 1 2. 60% frame 1 + 40% frame 2 3. 20% frame 1 + 80% frame 2 4. 80% frame 2 + 20% frame 3 5. 40% frame 2 + 60% frame 3 6. 100% frame 3 7. 60% frame 3 + 40% frame 4 .. etc Can any existing video player do this? Footnotes: (#1) Video renderer: EVR Custom Pres. (#2) This example converts a 24 fps video into 30 fps (#3) View Renderer settings Reset Reset to optimal renderer settings (#4) The files I have are all H.264 mkv files, but I don't think the file format/encoding matters.
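
    Not a player setting, but a hedged sketch of the frame-blending idea described above done as a one-off conversion with a reasonably current ffmpeg; the 'framerate' filter blends neighbouring frames much like the percentage mix listed, while 'minterpolate' attempts motion-compensated interpolation and is far slower (the quality trade-offs are the assumption here):

        # blend adjacent frames up to 60 fps (cheap, slightly soft on fast motion)
        ffmpeg -i input.mkv -vf "framerate=fps=60" -c:a copy output_blend.mkv

        # or motion-compensated interpolation (much slower, fewer judder artifacts)
        ffmpeg -i input.mkv -vf "minterpolate=fps=60:mi_mode=mci" -c:a copy output_mci.mkv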

    Read the article

  • 5.1 sound in Unity3d 3.5

    - by N0xus
    I'm trying to implement 5.1 surround sound in my game. I've set Unity's AudioManager to a default of 5.1 surround and loaded in a 6 channel audio clip that should play a sound in each of the different audio spots. However, when I go to run my game, all I get is flat sound coming out of my front two speakers. Even then, these don't play the sound they should (front speaker should play "front speaker" right should play "right speaker" and so). Both speakers just end up playing the entire sound file. I've tried looking to see if there is a parameter that I have missed, but information on how to set up 5.1 sound in Unity is lacking (or my google skills aren't that good) and I can't get it to work as intended. Could someone please either tell me what I'm missing, or point me in the right direction? My audio source is situated at point (0, 0, 0) with my camera also being in the same point. I've moved about the scene but the same thing happens as I've already described.

    Read the article

  • ALSA devices under Wine

    - by Roberto Aloi
    Hi all, I'm running openSUSE 11.2 and Wine 1.1.28. Even though audio works perfectly for me elsewhere (Skype, Banshee, etc.), when I try to configure audio for Wine (to use Spotify) I cannot hear anything from the audio test. In the winecfg audio tab, ALSA is checked, but no devices are available. I tried to run alsaconf (it needs root permissions) but it returns: No supported PnP or PCI card found. No legacy drivers available, either. Any idea?

    Read the article

  • Extending the SketchFlow player

    - by Aravind
    I would like to add some controls to the SketchFlow player. For example, I would like to add a combo box with a list of values for a specific variable, and selecting a value will make a specific screen/state show up in the SketchFlow player canvas. Is this possible? I have seen that using the PlayerContext allows access to some controls/events in the Player, but I am not sure how extensible the Player itself is.

    Read the article

  • Wire VMWare Player NIC to a VLAN in Ubuntu 8.04.3

    - by Sophie Charlesworth
    I've got VMware Player 2.5.x installed on an Ubuntu 8.04.3 host, running a CentOS 5.3 guest with Cobbler. The VM has two NICs (I actually took this image from an ESXi image and converted it to a Player 2.x image via VMware Standalone Converter). I've also set up a VLAN (vlan5) on the host with 10.0.0.x, and I'd like Cobbler to use that VLAN to serve any incoming requests. How do I wire up my VM to use the VLAN I've set up? Just one of the NICs. What I'm trying to do is to offer a laptop with a VM that our sysadmins can take, plug into a box (which does not connect to the internet) and use to install RHEL images via Cobbler. So essentially, it's a crossover cable from the network port on the laptop to the Dell server box: PXE-boot the Dell box and install RHEL. I have Cobbler working fine under VMware ESXi but not on VMware Player - because of the VLAN issue, I think. Any ideas?
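
    A hedged sketch of the host-side half, assuming the 802.1Q "vlan" package on the 8.04 host and eth0 as the physical NIC; the VMware half - tying one of the guest's NICs to a vmnet bridged onto the new eth0.5 interface - is an assumption that depends on how your Player install manages networking (the .vmx keys below are hypothetical examples):

        # on the Ubuntu 8.04 host: create VLAN 5 on eth0 and give it the 10.0.0.x address
        sudo apt-get install vlan
        sudo modprobe 8021q
        sudo vconfig add eth0 5
        sudo ifconfig eth0.5 10.0.0.1 netmask 255.255.255.0 up

        # hypothetical .vmx entries pointing the guest's second NIC at a custom vmnet
        # that the VMware network setup has bridged to eth0.5:
        # ethernet1.connectionType = "custom"
        # ethernet1.vnet = "/dev/vmnet2"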

    Read the article

  • Python coding with VLC player (quite a basic query I expect)

    - by Todd
    I'm fairly new to the whole coding realm so my knowledge is fairly limited, and I can't seem to find any basic tutorials on how to use scripts with VLC player. More specifically, the reason I'm asking here is because I stumbled across a post on this site about playing random clips from random videos on VLC player automatically. This is the forum post: Playback random section from multiple videos changing every 5 minutes My situation is similar to this lovely gentleman's was, though he clearly knows a lot more about coding than I do. In short, I'd like to copy this coding into a file of some sort and apply it to VLC player myself. Only I'm not sure what file type I'd have to save it as (I have Python by the way, and I tried saving it as a .py file but I didn't know if it was correct or where to go from there). Additionally, I'm not sure how to get VLC to "read" the script, so to speak - is there a specific location the file needs to be, and do I run the script from another program or through VLC? I'll reiterate that I'm relatively new to this, so if anybody would be so kind as to post a quick list of steps on how to save/place the file and use it with VLC player I really would appreciate it! P.S. I'm not computer illiterate, I'm fine with most programs and I'd understand if you just said things like "C:\Program Files (x86)\VideoLAN\VLC\plugins" or "in VLC, select Tools Plugins and extensions", I just wouldn't catch on to anything about adding a line of coding that does something without being told exactly what to write! Many thanks in advance! :) Todd
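
    For orientation, a hedged sketch of the simplest route: save the code as a plain .py file anywhere (no special VLC plugin folder is needed) and run it with python from a terminal; the script then drives VLC through the python-vlc bindings rather than being loaded by VLC itself. The module has to be installed first (e.g. pip install python-vlc, on top of a normal VLC install), and the file path below is a placeholder:

        # random_clip.py -- minimal python-vlc sketch: play one random 30-second clip
        import random
        import time

        import vlc  # pip install python-vlc

        MEDIA = "C:/Videos/example.mp4"    # placeholder path

        player = vlc.MediaPlayer(MEDIA)
        player.play()
        time.sleep(0.5)                    # give VLC a moment to open the file

        length_ms = player.get_length()    # total length in milliseconds
        start_ms = random.randint(0, max(0, length_ms - 30000))
        player.set_time(start_ms)          # jump to a random spot
        time.sleep(30)                     # let the clip play
        player.stop()

    Saved as, say, random_clip.py in any folder, it would be run with "python random_clip.py" from a command prompt.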

    Read the article

  • Audio stream management in Linux

    - by User1
    I have a very complicated audio setup for a project. Here's what we have:

    3 applications playing sound
    2 applications recording sound
    2 sound cards

    I don't really have the code to any of these applications. All I want to do is monitor and control the audio streams. Here are a few examples of operations I'd like to do while the applications are running:

    Mute one of the incoming audio streams.
    Have one of the incoming audio streams do a "solo" (be the only stream that can "talk").
    Get a graph (about 30 seconds' worth) of the audio that each stream produced.
    Send one of the audio streams to soundcard #1, but all three audio streams to soundcard #2.

    I would likely switch audio streams every 2 minutes or so with one of the operations listed above. A GUI would be preferred. I started looking at the sound systems in Linux and it gets extremely complex, and I feel like there have been many new advances in the past few years. I see JACK, PulseAudio, artsd, and several other packages. They all show some promise, but where should I start? Is there something someone already built that can help?
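
    A hedged sketch of how the per-stream operations look under PulseAudio, which exposes every playing application as a "sink input" and every recording application as a "source output"; the numeric IDs and sink names below are placeholders you would read from the list commands first, and pavucontrol is the usual GUI over the same operations:

        # list the streams and devices PulseAudio currently knows about
        pactl list short sink-inputs      # playing streams: <id> <sink> ...
        pactl list short sinks            # output devices, one or more per sound card

        # mute one playing stream (ID 7 is a placeholder)
        pactl set-sink-input-mute 7 1

        # "solo": mute every other stream
        pactl set-sink-input-mute 3 1
        pactl set-sink-input-mute 5 1

        # move a stream to the other sound card's sink
        pactl move-sink-input 7 alsa_output.pci-0000_00_1b.0.analog-stereo

        # sending streams to both cards at once is usually done with module-combine-sink
        pactl load-module module-combine-sink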

    Read the article

  • Is there an easy way to copy an audio CD in Mac OS X?

    - by Bob D
    (not a commercial CD). I did some recordings of a band years ago and ran into one of the band members who asked me if I could make copies. I assumed that this would be easy. I know that I can rip the CD into iTunes and then burn a new CD, but I have two optical drives available, is there a way to simply copy the CD from one drive to the other in one step?

    Read the article

  • Ubuntu: how to get audio to work in both Spotify (under Wine) and Flash (in Firefox)?

    - by Jonik
    I'm running Spotify on Linux using Wine. Sound worked great (even though the sound test in winecfg failed!), until I installed alsa-oss package yesterday to get Flash sound working in Firefox. Now Spotify says: "There is a problem with your sound card. Spotify can't play music." So the question is, how to get the sound in Spotify working again, so that it also keeps working in Flash & Firefox? Tweak some ALSA settings? Spotify settings? Add/remove some packages? By the way, curiously, now that sound doesn't work in Spotify, winecfg's "Test Sound" does work! This is Ubuntu 8.04 (Hardy). Sound card / driver is probably an integrated AC'97. Please mention if any additional information about the system is needed! Update: I have Flash 10 installed (outside the packaging system, using $MOZ_PLUGIN_PATH env variable), but also had Flash 9 from flashplugin-nonfree package - and the earlier version was being used by Firefox! Based on what Mike Arthur said about Flash and alsa-oss, I removed the older Flash (flashplugin-nonfree package) and alsa-oss - and Flash sound still works, which is nice. But for some reason Spotify still doesn't play sound, even though things should now be like they were originally... Update 2: Got it working, all smoothly, finally.

    Read the article

  • Audio recording: a tool for human-aided drum quantizing

    - by basilio.mp
    I have this situation: the drummer records a track (8 tracks in a multitrack session). Now, how do I check how far the recorded beats are from their theoretical positions? There is always some error in human-recorded tracks, but is there any software that can show me the ideal (theoretical, quantized) beat next to the recorded one and alert me if the error is too big? P.S.: I'm searching for a standalone tool, or for a plugin that works with Adobe Audition 3 or Nuendo 3.

    Read the article

  • Is the Zune HD's audio card better or worse than the iPod touch's?

    - by MatthewThepc
    Firstly, if this is the wrong site to ask this question I apologize, but I didn't see one for "music players" on the stack exchange website :) After reading a few people online say that music playing from a Zune HD sounds better to them than that on an iPod touch, I was wondering whether there's any truth to that? From what I can tell, the Zune HD uses a Wolfson Microelectronics WM8352, while the first-generation iPod Touch (which the Zune HD was competing with) used a Wolfson Microelectronics WM8758BG, and newer models use the Cirrus Logic CS4398 and CS42L61. Which ones are better (to make the question less subjective, let's say in terms of quality, range, & accuracy of output)? Admittedly, I have almost no idea how everything compares and works together, but it would seem to me that, just by looking at the version numbers, the iPod has been better since it's launch. Is there anything else that effects sound quality? Thanks!

    Read the article
