Search Results

Search found 16166 results on 647 pages for 'conexant high def audio'.


  • How to route KVM virtual machine audio to Ubuntu 11.10 host using virt-manager?

    - by iGadget
    I've been using KVM in combination with Virt-Manager and Remmina with fair success up until now. The issue I need to solve now is getting audio from a virtualized Windows XP guest and making it audible on the Ubuntu 11.10 host. Remmina / RDP works for 'simple' audio (system sounds and such), but when the source gets trickier (e.g. Flash audio), Remmina / RDP messes up. So I figured I'd just connect to the machine directly using Virt-Manager. Unfortunately, it seems that even though I have successfully configured the AC97 audio device on WinXP, it's unable to get its output to the Ubuntu host. This is probably because Virt-Manager uses VNC (and AFAIK, VNC doesn't transport audio). Does anyone know of a solution for this? I've heard of Spice, but the installation required so much voodoo last time I checked, I figured I'd let that solution boil to maturity a little longer ;) But perhaps there are other options I haven't thought of yet (which don't require switching to VirtualBox / VMware)...
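
    For anyone who wants to script the device side of this, here is a minimal sketch (assuming the libvirt Python bindings and a guest named 'winxp'; both are assumptions) that adds an AC97 sound device to the domain definition. The sound still needs a transport such as SPICE to reach the host, since VNC does not carry audio.

    # Minimal sketch (untested): add an AC97 sound device to a libvirt guest.
    # Assumptions: libvirt Python bindings installed, guest is named 'winxp'.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('winxp')      # hypothetical guest name
    xml = dom.XMLDesc(0)

    if '<sound' not in xml:
        # Insert a sound device just before the closing </devices> tag.
        xml = xml.replace('</devices>', "  <sound model='ac97'/>\n  </devices>")
        conn.defineXML(xml)               # takes effect on the next guest start
        print("AC97 sound device added; restart the guest to apply.")
    else:
        print("Guest already has a sound device.")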

    Read the article

  • Which version management design methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please let me know if it is too vague or hard to understand. My question is more related to high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a Master Node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to build a recovery framework design so that any software entity, whether it is Linux, the ramdisk or an application, can be rolled back to a previous good version if something bad happens. I am therefore thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see the following diagram: So I am in a dilemma whether to use the package management methodology used by Debian distributions (like Ubuntu) or a GIT repository methodology, in order to roll back to previous good versions on either one SN or on all the dependent SNs. The method should also make it easy to upgrade SNs along with the MN. Some of the features I am trying to achieve: 1) Upgrade of even a single software entity is achievable without hindering others. 2) Dependency checks must be done before applying a rollback or upgrade on each SN. 3) A user prompt should be given in case a dependency check fails. If the user still goes ahead with the rollback, all the SNs should get a notification to roll back their own releases (if required). 4) The binaries should be distributed to the SNs beforehand so that the recovery process is faster, rather than fetching everything from the MN each time. 5) Release patches from developers for bug fixes and feature enhancements can be applied on a running system. 6) Each version can be easily tracked and distinguished. Thanks
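
    Independent of whether dpkg or git ends up doing the storage, the bookkeeping on the MN can be illustrated with a small sketch. Everything below (node names, versions, the dependency rule, the plain string comparison) is invented for illustration; it only shows a state version matrix plus a dependency check before a rollback is allowed, which covers points 2) and 3) above.

    # Hypothetical state version matrix kept on the Master Node.
    # All node names, versions and dependency rules are invented.
    STATES = {
        1: {"SN1": {"kernel": "3.2", "ramdisk": "1.0", "app": "2.1"},
            "SN2": {"kernel": "3.2", "ramdisk": "1.0", "app": "4.0"}},
        2: {"SN1": {"kernel": "3.4", "ramdisk": "1.1", "app": "2.3"},
            "SN2": {"kernel": "3.4", "ramdisk": "1.1", "app": "4.2"}},
    }

    # Invented rule: SN2's app 4.2 requires SN1's app to be at least 2.3.
    DEPENDENCIES = {("SN2", "app", "4.2"): [("SN1", "app", "2.3")]}

    def rollback_ok(target_state):
        """Check every dependency rule against the versions in the target state."""
        nodes = STATES[target_state]
        for (node, comp, ver), requirements in DEPENDENCIES.items():
            if nodes.get(node, {}).get(comp) != ver:
                continue  # rule does not apply to this state
            for rnode, rcomp, rver in requirements:
                have = nodes.get(rnode, {}).get(rcomp, "0")
                if have < rver:  # plain string compare, a simplification
                    return False, "%s %s %s needs %s %s >= %s" % (node, comp, ver, rnode, rcomp, rver)
        return True, "all dependencies satisfied"

    ok, reason = rollback_ok(1)
    if not ok:
        # Point 3: prompt the user before forcing the rollback on all SNs.
        print("WARNING:", reason)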

    Read the article

  • What trick will give the most reliable/compatible sound alarm in a browser window for most browsers?

    - by Dirk Paessler
    I want to be able to play an alarm sound using JavaScript in a browser window, preferably without requiring any browser plugins (QuickTime/Flash). I have been experimenting with the <embed> tag and the new Audio object in JavaScript, but results are mixed: as you can see, there is no variant that works in all browsers. Am I missing a trick that is more cross-browser compatible? This is my code:

    // mp3 with Audio object
    var snd = new Audio("/sounds/beep.mp3"); snd.play();
    // wav with Audio object
    var snd = new Audio("/sounds/beep.wav"); snd.play();
    // mp3 with EMBED tag
    $("#alarmsound").empty().append('<embed src="/sounds/beep.mp3" autostart="true" loop="false" volume="100" hidden="true" width="1" height="1" />');
    // wav with EMBED tag
    $("#alarmsound").empty().append('<embed src="/sounds/beep.wav" autostart="true" loop="false" volume="100" hidden="true" width="1" height="1" />');

    Read the article

  • Why is a FLAC encoded from a decoded MP3 bigger than the MP3?

    - by Ryan Thompson
    To be more precise than in the title, suppose I have an MP3 file that is 320 kbps. If I decompress it, then logically, all the data except for roughly 320 kilobits out of each second of audio should be redundant data, able to be compressed away. So, when I encode the decompressed file to FLAC, or any other lossless codec, why is it so much larger? On a related note, is it theoretically possible to losslessly recover the source MP3 audio from a decompressed WAV? (I know the MP3 itself is lossy. I'm asking if it's possible to re-encode without any further loss.) EDIT: Let me clarify the related question, and the rationale behind it. Suppose I have a WAV that was decompressed from an MP3 file (and assume I don't have the MP3 itself for some reason). If I don't want to lose any more quality, I can re-encode it with FLAC or any other lossless encoder and get a larger file just to maintain the same quality. Or, I can re-encode it to MP3 again and get the same size as the original but lose more data. Obviously, neither of these cases is ideal. I can either have the original size or the original quality, but not both (I mean the quality of the original MP3, not the original lossless source). My question is: can we get both? Is it theoretically possible to recover the lossy compressed data from the lossy decompressed data, without losing even more? If it is possible, I could imagine a lossless compression algorithm that compresses the audio with FLAC. Then it also scans the audio for any signs of previous lossy compression, and if detected, recompresses it losslessly to the original lossy file. Then it keeps whichever file is smaller.
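
    As a back-of-the-envelope illustration of the size question (the figures below are just the standard CD-audio parameters and a typical FLAC ratio, not measurements of any particular file): the decoded PCM no longer carries the MP3's psychoacoustic model, so FLAC can only compress the raw waveform, which normally lands far above 320 kbps.

    # Rough sizes for 1 minute of stereo 44.1 kHz / 16-bit audio.
    # The 0.6 FLAC ratio is a typical figure, not a measurement.
    duration_s = 60
    pcm_bits  = 44100 * 16 * 2 * duration_s   # raw decoded WAV
    mp3_bits  = 320000 * duration_s           # the original 320 kbps MP3
    flac_bits = int(pcm_bits * 0.6)           # FLAC often ends up around 60% of PCM

    for name, bits in [("WAV (decoded)", pcm_bits),
                       ("FLAC (re-encoded)", flac_bits),
                       ("MP3 (original)", mp3_bits)]:
        print("%-18s %6.1f MiB" % (name, bits / 8 / 1024 / 1024))
    # Roughly 10.1, 6.1 and 2.3 MiB: the FLAC must describe the full waveform
    # losslessly and cannot rediscover the MP3's compact lossy representation.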

    Read the article

  • Terrible noises from subwoofer of ACER Aspire 6930 with Realtek sound chip

    - by OneWorld
    After approximately 5-15 minutes of listening to music, my subwoofer begins to make terrible noises. It just "coughs". This began after I had had the computer for about six months. I have found that I can temporarily fix the problem by "restarting" the audio stream of the application that is playing music, for example by reloading the last.fm page (which reloads the Flash file). Another way to reset the audio playback is to switch the speaker configuration shown below in the screenshot. According to many posts on the internet, like http://www.tomshardware.co.uk/forum/52918-20-acer-aspire-6935g-speaker-problem: ACER support isn't any help, exchanging hardware doesn't fix the problem, and even the later models have this problem. Turning off the volume of the subwoofer is not an option for me. I still have warranty (I bought a one-year extension). I have already tried about 15 versions of the Realtek driver with no success. I am not sure, but MAYBE the problem did not occur on the original Windows Vista that was shipped with this computer. However, I removed the original Windows for good reasons (English). What do you suggest? Has anyone fixed this problem? Maybe by writing a script which resets the audio streams every 5 minutes? Should I go to the effort of dealing with ACER support until they give me another model? (I would be without a computer for a longer time and would spend money on telephone hotlines at 1.30 EUR/min...) Here is additional info, in case it helps: Windows 7 64-bit (the original was Windows Vista Home Premium 32-bit). All specs. Audio driver version:
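
    On the "script which resets the audio streams every 5 minutes" idea, here is a rough sketch. It is Windows-only, needs an elevated prompt, and restarting the service briefly interrupts playback, so treat it as a stopgap rather than a fix; whether it actually prevents the coughing is an open question.

    # Stopgap sketch: bounce the Windows Audio service every 5 minutes.
    # Requires admin rights; playback is briefly interrupted each time.
    import subprocess
    import time

    INTERVAL = 5 * 60  # seconds

    while True:
        time.sleep(INTERVAL)
        subprocess.call(["net", "stop", "Audiosrv"])
        subprocess.call(["net", "start", "Audiosrv"])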

    Read the article

  • Sound doesn't work anymore after replacing RAM

    - by thejh
    Hello, today I replaced one old RAM module with two newer, bigger ones, but now the sound doesn't seem to work anymore. I already ran alsaconf and it didn't help. Output of lspci for the audio device:

    00:07.0 Audio device: nVidia Corporation MCP67 High Definition Audio (rev a1)
        Subsystem: Giga-byte Technology Device a002
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0 (500ns min, 1250ns max)
        Interrupt: pin A routed to IRQ 21
        Region 0: Memory at f5100000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [44] Power Management version 2
            Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
            Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
            Address: 0000000000000000  Data: 0000
            Masking: 00000000  Pending: 00000000
        Capabilities: [6c] HyperTransport: MSI Mapping Enable+ Fixed+
        Kernel driver in use: HDA Intel
        Kernel modules: snd-hda-intel

    The audio device is onboard and has six configurable outputs, two or so of which are also capable of being an input (if I remember correctly), but I don't know how to control that under Linux. Does somebody know how/whether replacing the RAM could be related to my problem, and/or how to fix it?
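
    One quick thing to rule out after the reboot is a mixer control that came back muted or at zero. A small sketch (assuming the pyalsaaudio package is installed, which is an assumption) that flags suspicious controls:

    # Sketch: list ALSA mixer controls and flag anything muted or at zero.
    # Assumes the pyalsaaudio package ('python-alsaaudio' on Debian/Ubuntu).
    import alsaaudio

    for name in alsaaudio.mixers():
        mixer = alsaaudio.Mixer(control=name)
        try:
            volumes = mixer.getvolume()
        except alsaaudio.ALSAAudioError:
            continue  # control has no playback volume (e.g. capture-only)
        try:
            muted = any(mixer.getmute())
        except alsaaudio.ALSAAudioError:
            muted = False  # no mute switch on this control
        if muted or all(v == 0 for v in volumes):
            print("check control '%s': volume=%s muted=%s" % (name, volumes, muted))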

    Read the article

  • How to boost playback volume in real time on media recorded with a very low volume.

    - by L Marksman
    I have never heard a satisfactory answer to this often misunderstood question, so let me explain. Let's say I have a sound card and earphones/speakers that can play back audio loudly enough in most cases. This is great, but the problem is that you always find people who do not know how to record audio, from YouTube videos to music. So you end up with audio playback that only uses 10% or less of the capacity of your sound hardware; in Vista/Windows 7 you will see this frequently in the mixer, with the volume pushed up to max but the green sound level only going up a millimeter or two. I am looking for (preferably free) software, or a method, to boost the sound level of any audio from any source in real time, to use more of my hardware's capacity, similar to what VLC media player can do. Oh, and please do not tell me it is impossible. I am not trying to boost the volume past what my hardware is capable of; I am just trying to use my hardware's full capacity. Also, please do not tell me to buy new hardware. I know I can use hardware amplification; I don't want to (like many others) spend money on a simple little problem like this. Thanks!
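
    To illustrate what a software booster actually does (this is a sketch that boosts a file offline, not a system-wide real-time filter; it assumes NumPy and the standard-library wave module, and the file names are made up): the samples are simply multiplied by a gain factor and clipped, which is also why pushing far past 100% starts to distort.

    # Sketch: apply +12 dB of digital gain to a 16-bit WAV file.
    # Assumes NumPy; 'quiet.wav' and 'louder.wav' are made-up file names.
    import wave
    import numpy as np

    GAIN_DB = 12.0
    gain = 10 ** (GAIN_DB / 20.0)

    with wave.open("quiet.wav", "rb") as src:
        params = src.getparams()
        samples = np.frombuffer(src.readframes(src.getnframes()), dtype=np.int16)

    boosted = np.clip(samples.astype(np.float32) * gain, -32768, 32767).astype(np.int16)

    with wave.open("louder.wav", "wb") as dst:
        dst.setparams(params)
        dst.writeframes(boosted.tobytes())

    # Once the quiet recording is amplified past full scale the peaks clip,
    # which is the distortion a >100% software boost eventually runs into.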

    Read the article

  • Why am I getting a "BinaryTree instance has no attribute '__getitem__'" error?

    - by Kevin Yusko
    Here's the code:

    class BinaryTree:
        def __init__(self, rootObj):
            self.key = rootObj
            self.left = None
            self.right = None
            root = [self.key, self.left, self.right]
        def getRootVal(root):
            return root[0]
        def setRootVal(newVal):
            root[0] = newVal
        def getLeftChild(root):
            return root[1]
        def getRightChild(root):
            return root[2]
        def insertLeft(self, newNode):
            if self.left == None:
                self.left = BinaryTree(newNode)
            else:
                t = BinaryTree(newNode)
                t.left = self.left
                self.left = t
        def insertRight(self, newNode):
            if self.right == None:
                self.right = BinaryTree(newNode)
            else:
                t = BinaryTree(newNode)
                t.right = self.right
                self.right = t

    def buildParseTree(fpexp):
        fplist = fpexp.split()
        pStack = Stack()
        eTree = BinaryTree('')
        pStack.push(eTree)
        currentTree = eTree
        for i in fplist:
            if i == '(':
                currentTree.insertLeft('')
                pStack.push(currentTree)
                currentTree = currentTree.getLeftChild()
            elif i not in '+-*/)':
                currentTree.setRootVal(eval(i))
                parent = pStack.pop()
                currentTree = parent
            elif i in '+-*/':
                currentTree.setRootVal(i)
                currentTree.insertRight('')
                pStack.push(currentTree)
                currentTree = currentTree.getRightChild()
            elif i == ')':
                currentTree = pStack.pop()
            else:
                print "error: I don't recognize " + i
        return eTree

    def postorder(tree):
        if tree != None:
            postorder(tree.getLeftChild())
            postorder(tree.getRightChild())
            print tree.getRootVal()

    def preorder(self):
        print self.key
        if self.left:
            self.left.preorder()
        if self.right:
            self.right.preorder()

    def inorder(tree):
        if tree != None:
            inorder(tree.getLeftChild())
            print tree.getRootVal()
            inorder(tree.getRightChild())

    class Stack:
        def __init__(self):
            self.items = []
        def isEmpty(self):
            return self.items == []
        def push(self, item):
            self.items.append(item)
        def pop(self):
            return self.items.pop()
        def peek(self):
            return self.items[len(self.items)-1]
        def size(self):
            return len(self.items)

    def main():
        parseData = raw_input("Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) ")
        tree = buildParseTree(parseData)
        print( "The post order is: ", + postorder(tree))
        print( "The post order is: ", + postorder(tree))
        print( "The post order is: ", + preorder(tree))
        print( "The post order is: ", + inorder(tree))

    main()

    And here is the error:

    Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) ( 1 + 2 )
    Traceback (most recent call last):
      File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 108, in <module>
        main()
      File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 102, in main
        tree = buildParseTree(parseData)
      File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 46, in buildParseTree
        currentTree = currentTree.getLeftChild()
      File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 15, in getLeftChild
        return root[1]
    AttributeError: BinaryTree instance has no attribute '__getitem__'
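
    For readers hitting the same traceback: getLeftChild is defined inside the class with root where self should be, so when buildParseTree calls currentTree.getLeftChild(), Python passes the BinaryTree instance as root, and root[1] then tries to index an object that has no __getitem__. A minimal sketch of accessors that match how buildParseTree uses them (only the accessors are shown; the insert methods are unchanged):

    # Sketch: accessors that read the instance attributes instead of trying
    # to index the instance like a list.
    class BinaryTree:
        def __init__(self, rootObj):
            self.key = rootObj
            self.left = None
            self.right = None

        def getRootVal(self):
            return self.key

        def setRootVal(self, newVal):
            self.key = newVal

        def getLeftChild(self):
            return self.left

        def getRightChild(self):
            return self.right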

    Read the article

  • Using Hidden Markov Model for designing AI mp3 player

    - by Casper Slynge
    Hey guys. I'm working on an assignment where I want to design an AI for an mp3 player. The AI must be trained and designed with the use of an HMM method. The mp3 player shall have the functionality of adapting to its user by analyzing incoming biological sensor data, and from this data the mp3 player will choose a genre for the next song. Given in the assignment are 14 samples of data: one sample consists of Heart Rate, Respiration, Skin Conductivity, Activity and finally the output genre. Below are the 14 samples, just to give you an impression of what I'm talking about.

    Sample  HR      RSP     SC      Activity  Genre
    S1      Medium  Low     High    Low       Rock
    S2      High    Low     Medium  High      Rock
    S3      High    High    Medium  Low       Classic
    S4      High    Medium  Low     Medium    Classic
    S5      Medium  Medium  Low     Low       Classic
    S6      Medium  Low     High    High      Rock
    S7      Medium  High    Medium  Low       Classic
    S8      High    Medium  High    Low       Rock
    S9      High    High    Low     Low       Classic
    S10     Medium  Medium  Medium  Low       Classic
    S11     Medium  Medium  High    High      Rock
    S12     Low     Medium  Medium  High      Classic
    S13     Medium  High    Low     Low       Classic
    S14     High    Low     Medium  High      Rock

    My experience with HMMs is quite limited, so my question to you is whether I have the right angle on the assignment. I have three different states for each sensor: Low, Medium, High. Two observations/output symbols: Rock, Classic. In my own opinion, I see my start probabilities as the weighted factors for either a Low, Medium or High state in the Heart Rate. So the ideal solution for the AI is that it will learn these 14 sets of samples. And when a user's sensor input is received, the AI will compare the combination of states for all four sensors with the already memorized samples. If there exists a matching combination, the AI will choose the genre, and if not, it will choose a genre according to the weighted transition probabilities, while simultaneously updating the transition probabilities with the new data. Is this the right approach to take, or am I missing something? Is there another way to determine the output probability (I read about maximum likelihood estimation by EM, but don't understand the concept)? Best regards, Casper
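
    Not a full HMM, but as a sketch of the counting involved (the data is the sample table above; the +1 smoothing constant is an arbitrary choice): estimate per-genre emission probabilities from the 14 samples and score a new sensor reading by naive likelihood. A real HMM would add a state sequence and transition probabilities on top of this.

    # Sketch: per-genre emission probabilities from the 14 samples above,
    # then a naive likelihood score for a new reading. Not a full HMM.
    from collections import defaultdict

    SAMPLES = [
        ("Medium", "Low",    "High",   "Low",    "Rock"),
        ("High",   "Low",    "Medium", "High",   "Rock"),
        ("High",   "High",   "Medium", "Low",    "Classic"),
        ("High",   "Medium", "Low",    "Medium", "Classic"),
        ("Medium", "Medium", "Low",    "Low",    "Classic"),
        ("Medium", "Low",    "High",   "High",   "Rock"),
        ("Medium", "High",   "Medium", "Low",    "Classic"),
        ("High",   "Medium", "High",   "Low",    "Rock"),
        ("High",   "High",   "Low",    "Low",    "Classic"),
        ("Medium", "Medium", "Medium", "Low",    "Classic"),
        ("Medium", "Medium", "High",   "High",   "Rock"),
        ("Low",    "Medium", "Medium", "High",   "Classic"),
        ("Medium", "High",   "Low",    "Low",    "Classic"),
        ("High",   "Low",    "Medium", "High",   "Rock"),
    ]
    SENSORS = ["HR", "RSP", "SC", "Activity"]
    LEVELS = ["Low", "Medium", "High"]

    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    genre_totals = defaultdict(int)
    for *reading, genre in SAMPLES:
        genre_totals[genre] += 1
        for sensor, level in zip(SENSORS, reading):
            counts[genre][sensor][level] += 1

    def score(reading):
        """Return {genre: likelihood} for an (HR, RSP, SC, Activity) reading."""
        result = {}
        for genre, total in genre_totals.items():
            p = total / len(SAMPLES)
            for sensor, level in zip(SENSORS, reading):
                # +1 smoothing so an unseen level does not zero the product
                p *= (counts[genre][sensor][level] + 1) / (total + len(LEVELS))
            result[genre] = p
        return result

    print(score(("Medium", "Low", "High", "Low")))  # same reading as S1, favours Rock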

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting that. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for. MyNes plays from a ROM, not NSF. So, I have to rip out the APU and get it to play NSF files. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard to find resources online. So here are a few questions: From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it? Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code? Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated. As an aside, I don't really understand the loading / playing sections of the spec I linked at all. It seems to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

    Read the article

  • Oracle User Productivity Kit Translation

    - by ultan o'broin
    Oracle's customers just love the User Productivity Kit (UPK). I hear only great things about it from our international customers at the Oracle Usability Advisory Board meetings too. The UPK is the perfect solution for enterprise applications training needs (I previously reviewed a fine book about UPK, by the way). One question I am often asked is how source content created using the UPK can be translated into another language. I spoke with Peter Maravelias, Principal Product Strategy Manager for UPK, about this recently. UPK is already optimized for easy source-target translation. There is even a solution for re-recording demos. Here's what you can do to get your source content into another language: Use UPK's ability to automatically translate events and actions. UPK comes with XML templates that allow you to accomplish this in 21 languages with a simple publishing action switch. These templates even deal with the tricky business of gender-based translations. Spanish localization template sample. Japanese localization template sample. Use the Import and Export localization features to export additional custom content in a format like XLIFF, easily handled by translation tools. You could also export and import in Word format. Re-record the sound (audio) files that go with the recordings, one per screen. UPK's granular approach to the sound files means that timing isn't an issue; retiming demos isn't required. A tip here with sound files and XLIFF-exported custom content is to facilitate translation context by avoiding explicit references to actions going on in the screen recordings. A text-based storyboard with screenshots accompanying the sound files should also be provided to the translators. Provide a glossary of terms too. Use the re-record option in UPK to record any demo from a translated application. This will allow all the translated UI labels to be captured automatically. You may be required to resize some action events here due to text expansion issues. Of course, you will need translated data in the translated application too, so plan for this in advance. However, source-target language skills aren't required for the re-recording. The UPK Player itself, of course, is also available from Oracle, along with content and documentation, in 21 languages. The Developer and Setup are also translated into a smaller number of languages. Check the Oracle UPK website for the latest details. UPK is a super solution for global enterprise applications training deployments, allowing source content to be translated into multiple languages easily. See this post on the UPK blog for more insight too!
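
    As a side note on the XLIFF export step, the exported custom content is ordinary XML with one source/target pair per translation unit, which is why generic translation tools can consume it. A small sketch using only the Python standard library (the file name is made up, and the tag matching is deliberately namespace-agnostic so the XLIFF version does not matter):

    # Sketch: list the translatable units in an XLIFF export.
    # 'upk_custom_content.xlf' is a made-up file name.
    import xml.etree.ElementTree as ET

    def local(tag):
        return tag.rsplit('}', 1)[-1]  # strip any namespace prefix

    root = ET.parse("upk_custom_content.xlf").getroot()
    for unit in root.iter():
        if local(unit.tag) != "trans-unit":
            continue
        source = target = None
        for child in unit:
            if local(child.tag) == "source":
                source = "".join(child.itertext())
            elif local(child.tag) == "target":
                target = "".join(child.itertext())
        print("%s: %r -> %r" % (unit.get("id"), source, target))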

    Read the article

  • Beat detection and FFT

    - by Quincy
    So I am working on a platformer game which includes music with beat detection. I am currently using a simple rule: if the energy stored in the history buffer is smaller than the current energy, there is a beat. The problem with this is that, of course, with songs like rock songs where you have a pretty steady amplitude, this isn't going to work. So I looked further and found algorithms that split the sound into multiple bands using an FFT. I then found this: http://en.literateprograms.org/Cooley-Tukey_FFT_algorithm_(C) The only problem I'm having is that I am quite new to audio and I have no idea how to use that to split the signal up into multiple bands. So my question is: how do you use an FFT to split a signal into multiple bands? Also, for those interested, this is my algorithm in C#:

    // C = threshold, N = size of history buffer / 1024
    public void PlaceBeatMarkers(float C, int N)
    {
        List<float> instantEnergyList = new List<float>();
        short[] samples = soundData.Samples;
        float timePerSample = 1 / (float)soundData.SampleRate;
        int sampleIndex = 0;
        int nextSamples = 1024;

        // Calculate instant energy for every 1024 samples.
        while (sampleIndex + nextSamples < samples.Length)
        {
            float instantEnergy = 0;
            for (int i = 0; i < nextSamples; i++)
            {
                instantEnergy += Math.Abs((float)samples[sampleIndex + i]);
            }
            instantEnergy /= nextSamples;
            instantEnergyList.Add(instantEnergy);
            if (sampleIndex + nextSamples >= samples.Length)
                nextSamples = samples.Length - sampleIndex - 1;
            sampleIndex += nextSamples;
        }

        int index = N;
        int numInBuffer = index;
        float historyBuffer = 0;

        // Fill the history buffer with n * instant energy
        for (int i = 0; i < index; i++)
        {
            historyBuffer += instantEnergyList[i];
        }

        // If instantEnergy / samples in buffer < instantEnergy for the next sample then add beatmarker.
        while (index + 1 < instantEnergyList.Count)
        {
            if (instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C)
                beatMarkers.Add((index + 1) * 1024 * timePerSample);
            historyBuffer -= instantEnergyList[index - numInBuffer];
            historyBuffer += instantEnergyList[index + 1];
            index++;
        }
    }
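
    On the actual question of splitting the signal into bands with an FFT, here is a NumPy sketch (not a drop-in replacement for the C# above; the band count and the 43-block history are arbitrary choices): take the FFT of each 1024-sample block, group the bins into bands, sum the squared magnitudes per band, and then run the "current energy > C times the band's average" test separately per band against a per-band history.

    # NumPy sketch: per-band energies for 1024-sample blocks, with the
    # beat test from the C# code applied separately to each band.
    import numpy as np

    BLOCK = 1024
    NUM_BANDS = 32            # 32 bands of 16 bins each; an arbitrary choice
    HISTORY = 43              # about 1 second of blocks at 44.1 kHz

    def band_energies(block):
        """block: 1024 float samples -> array of NUM_BANDS band energies."""
        spectrum = np.fft.rfft(block)                # 513 complex bins
        power = np.abs(spectrum[:BLOCK // 2]) ** 2   # drop the Nyquist bin -> 512 values
        return power.reshape(NUM_BANDS, -1).sum(axis=1)

    history = [[] for _ in range(NUM_BANDS)]

    def detect_beats(block, C=1.4):
        beats = []
        for band, energy in enumerate(band_energies(block)):
            h = history[band]
            if len(h) == HISTORY and energy > C * (sum(h) / len(h)):
                beats.append(band)                   # beat detected in this band
            h.append(energy)
            if len(h) > HISTORY:
                h.pop(0)
        return beats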

    Read the article

  • Nginx php-fpm high cpu usage

    - by Piotr Kaluza
    I have a problem with a high-traffic WordPress site: super high CPU load under nginx and php-fpm. I am caching with APC and memcached, and have spent 2-3 days tweaking configs and looking for answers. It seems to me that php-fpm takes up all the CPU available no matter how many max_children I set: if I set 5, then the load is 20% each; if I set 20, then the load adds up to 90%. I tried both static and dynamic. The server is 2x 3.0 GHz, 6 GB RAM, SSDs in RAID 10, on Ubuntu 12.04 x64. uptime: 17:27:51 up 2:19, 1 user, load average: 29.79, 28.08, 26.29. What can the issue be?

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end. They're experiencing timeouts once SQLS RAM usage is high. The server is currently running x64 SQLS2008 on a VM with nearly 9 GB of RAM. SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query:

    SELECT TextData, ApplicationName, Reads
    FROM [TraceWednesday]
    WHERE textdata is not null and EventClass = 12
    GROUP BY TextData, ApplicationName, Reads
    ORDER BY Reads DESC

    As I expected, some values are very high. Top Reads, in pages: 2504188, 1965910, 1445636, 1252433, 1239108, 1210153, 1088580, 1072725. Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which then is roughly ~20'000 MB, 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously, cutting down the I/O would make SQL Server use less RAM. Or maybe it might just slow down the process of chewing up the whole RAM. If a lot fewer pages are read, maybe it'll all run much better even when usage is high? (less time swapping, etc.) Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I know (not much yet..) would be much appreciated. Leo.
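
    On the arithmetic: a SQL Server page is 8 KB, so the conversion is simply pages times 8. A quick check with the top value from the trace above:

    # Quick check of the page-to-size conversion (SQL Server pages are 8 KB).
    reads_in_pages = 2504188
    kb = reads_in_pages * 8
    print(kb, "KB")                              # 20033504 KB
    print(round(kb / 1024.0, 1), "MB")           # about 19564 MB
    print(round(kb / 1024.0 / 1024.0, 1), "GB")  # about 19.1 GB, close to the ~20 GB estimate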

    Read the article

  • Android: Voice Recording and saving audio

    - by user1320912
    I am working on an application that will record the user's voice, save the file to the SD card, and then allow the user to listen to the audio again. I am able to let the user record his voice using the RecognizerIntent, but I can't figure out how to save the audio file and allow the user to hear the audio. I would appreciate it if someone could help me out. I have displayed my code below:

    // Setting up the onClickListener for the Audio button
    attachVoice = (Button) findViewById(R.id.AttachVoice_questionandanswer);
    attachVoice.setOnClickListener(new OnClickListener() {
        public void onClick(View v) {
            Intent voiceIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            voiceIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            voiceIntent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Please Speak");
            startActivityForResult(voiceIntent, VOICE_REQUEST);
        }
    });

    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == VOICE_REQUEST && resultCode == RESULT_OK) {
        }
    }

    Read the article

  • Ubuntu 12.04 - No sound - HELP!

    - by Bruno Tacca
    I'm panicking... my sound stopped working after I tried to set up my notebook speakers plus two headphone jacks... My idea was to send the sound to 3 channels: the built-in speakers and the sound card's 2 headphone jacks. After a couple of attempts I got it working with 2 channels, speakers and 1 headphone jack, but the other one wasn't working. After more and more tries, the sound stopped working altogether. I just want my sound back... crying like a baby on the floor. And, if possible but not necessary, a simple guide to activating the 3 channels. xD I will post the diagnosis according to https://help.ubuntu.com/community/SoundTroubleshootingProcedure
    STEP 1 Did it, still no sound.
    STEP 2 Did it, still no sound.
    STEP 3 and STEP 4 (I removed the log because there is a limit on how many characters can be posted.) The log can be found here: https://answers.launchpad.net/ubuntu/+source/alsa-driver/+question/238653
    STEP 5 Rebooted, still no sound.
    STEP 6 Did it. In the Output Devices tab, nothing is muted. I play music with the Rhythmbox Music Player and I don't hear anything, but in pavucontrol I can see a sound bar moving under Built-in Audio Analog Stereo... but no sound.
    STEP 7 In alsamixer: AlsaMixer v1.0.25, Card: HDA Intel PCH, Chip: Creative CA0132, View: F3: [Playback] F4: Capture F5: All, Item: Headphone [dB gain: 25.00, 25.00]. Then I have 5 columns: Headphone, Speaker, PCM, S/PDIF, S/PDIF Default PCM. It gets a little weird when I try to mute the Headphone and the Speaker; here's what happens: starting with both unmuted, muting the headphone causes the speaker to be muted automatically; starting with both unmuted, muting the speaker causes the headphone to be muted automatically; starting with both muted, it is possible to unmute both separately.
    STEP 8 I cannot hear sound on either (headphone and/or speaker).
    STEP 9 Dual boot... Restarted; Windows had sound at max volume. Restarted again; still no sound in Ubuntu. I heard something when Ubuntu started, a little noise, then silence again. The sound icon always starts muted; after unmuting, I have no sound.
    STEP 10 I don't have this command in my Ubuntu.
    STEP 11 Tried at STEP 8, no sound. There is no problem with jumpers or hardware, because sound works on Windows.
    STEP 12 No way I'm opening my Alienware and losing the warranty x.X
    STEP 13 I think it's loaded, judging by the logs.
    STEP 14 Alienware M17xR4; the hardware is listed in the logs above, at STEP 4. There are two headphone jacks, one with just a headphone printed above it and the other with a headset (with mic) printed; there is a mic jack too, and an S/PDIF (optical) output too.
    STEP 15 I don't want to enable S/PDIF.
    STEP 16 I have never used the HDMI output yet...
    Thanks in advance. I hope I listed all the information you need.

    Read the article

  • Testing background audio in the simulator

    - by Cactuar
    I'm experimenting with the new background audio service in iPhone OS 4.0 but I can't get it to work in the simulator. According to this page: iPhone Application Programming Guide: Executing Code in the Background, it seems that all I have to do is add a UIBackgroundModes key with an array containing audio to my Info.plist file, and the audio my application plays should automatically continue when I switch to another app. I have done this, but the audio still pauses as I switch to another app; when I switch back, it continues where it left off. This is the code I'm using to play the sound:

    NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/audio.mp3", [[NSBundle mainBundle] resourcePath]]];
    NSError *error;
    audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    audioPlayer.numberOfLoops = -1;
    if (audioPlayer == nil)
        NSLog(@"%@", [error userInfo]);
    else
        [audioPlayer play];

    Has anyone gotten this to work? Could it be that it would work on an actual device and it's just a problem with the simulator? I'm a bit hesitant to install 4.0 on my phone since I've heard it's still very buggy. I wish I had another device to use only for development.

    Read the article

  • Custom flash mp3 player stopping in the middle of playing audio on windows nt ie6 system

    - by Charlotte Moller
    We have used a custom MP3 Flash player on our website for many years without any issues, but recently a client of ours reported that the audio plays for several seconds and then stops. When they refresh the page or click play in the player again, the audio plays fine. We are puzzled as to what could be causing this issue after the player has run successfully for our clients for so many years. The client system is Windows NT running IE6. Does anyone have any idea what could cause the audio to behave this way? Could audio drivers or the version of Flash cause problems? We do not have Flash programmers on our team, so we are not even sure where to start looking within the Flash code of the player. Any ideas?

    Read the article

  • Capture Flash Audio in 4.7 Edge?

    - by emcmanus
    Is there a way to capture plugin (Flash) audio before it gets to the sound card? I'd like to record plugin audio, hopefully without actually playing the sound. Capturing audio at the device level is an absolute last resort, as the application would pick up all system audio rather than just the WebKit plugin. I'm aware of the recent switch back from QtMultimedia; is this possible with Phonon? I spent the night looking for some way to access the Phonon graph via QWebFrame (or any of the QtWebKit widgets) and didn't turn up much. I also started digging through QtWebKit, particularly NPAPI, without success. For reference, I'm using the edge version of 4.7 (6aa50af000f85cc4497749fcf0860c8ed244a60e). This seems to be a fairly challenging problem. Any hints would be greatly appreciated.

    Read the article
