Search Results

Search found 427 results on 18 pages for 'wave'.

Page 10 of 18

  • How to play the sound of an object sliding on another object for a variable duration

    - by Antoine
    I would like to add sound effects to a basic 2D game. For example, a stone sphere is rolling on a wood surface. Let's say I have a 2-second audio recording of this. How could I use the sample to add sound for an arbitrary duration? So far I have two solutions in mind: a/ record the sound for an amount of time that is greater than the maximum expected duration, and play only a part of it; b/ extract a small portion of the sample and play it in a loop for the duration of the move; however, I'm not sure whether that makes sense with an audio wave.
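
    One library-free way to try option (b), sketched in C# under stated assumptions (mono float samples already decoded in memory; the sample rate, loop bounds and fade length below are illustrative, not from the question): repeat a loop region until the move's duration is covered, with a short fade at each join so the seams don't click.

        // Sketch: fill `durationMs` worth of audio by looping samples[loopStart..loopEnd),
        // fading a few samples in and out at each pass so the join doesn't click.
        static float[] LoopSample(float[] samples, int loopStart, int loopEnd, int durationMs,
                                  int sampleRate = 44100, int fadeLen = 128)
        {
            int loopLen = loopEnd - loopStart;
            int total = sampleRate * durationMs / 1000;
            var output = new float[total];

            for (int i = 0; i < total; i++)
            {
                int pos = i % loopLen;                    // position inside the current pass of the loop
                float gain = 1f;
                if (pos < fadeLen)
                    gain = pos / (float)fadeLen;                          // short fade-in at each join
                else if (pos >= loopLen - fadeLen)
                    gain = (loopLen - pos) / (float)fadeLen;              // and fade-out before the next one
                output[i] = samples[loopStart + pos] * gain;
            }
            return output;
        }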

    Read the article

  • Open XML at TechEd 2010

    Open XML was a big part of my first session at TechEd 2010, "Office 2010: Developing the Next Wave of Productivity Solutions". The thing that gets the biggest reaction is the Open XML SDK 2.0 "Productivity Tool" -- especially the ability to reflect over an Office document to produce C# code that will produce the target document. Here's the scenario: I have a Word document (Excel spreadsheet, PowerPoint deck) that a user produced manually. I want to be able to produce that same document...
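
    For a flavour of what that generated code looks like, here is a hand-written, minimal Open XML SDK 2.0 snippet (not the tool's actual output, which is far more exhaustive) that builds a one-paragraph Word document:

        using DocumentFormat.OpenXml;
        using DocumentFormat.OpenXml.Packaging;
        using DocumentFormat.OpenXml.Wordprocessing;

        // Create hello.docx containing a single paragraph of text.
        using (var doc = WordprocessingDocument.Create("hello.docx", WordprocessingDocumentType.Document))
        {
            var mainPart = doc.AddMainDocumentPart();
            mainPart.Document = new Document(
                new Body(
                    new Paragraph(
                        new Run(
                            new Text("Hello from the Open XML SDK")))));
            mainPart.Document.Save();
        }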

    Read the article

  • Office 2010 & SharePoint 2010: Platform for Innovation

    There's a great new article by Michael Desmond in Visual Studio Magazine called "Office Alignment: Why Office 2010 and SharePoint 2010 are poised to unleash a new wave of developer innovation". Read it and you'll get Michael's always engaging insight into the products' investments in this release, and you'll read about some key customers who have leveraged the platform to drive their business. I've been reading a lot about innovation, and it can be a topic that begins to elude us when we...

    Read the article

  • Optimal Compression for Speech

    - by ashes999
    I'm designing a game that depends heavily on audio; I will have some 300+ speech files (most of them just a word or two long). This can very quickly escalate the size of my final game. What's the optimal way to encode/compress speech files to keep the size minimal without getting audio artifacts? Please address both per-file compression/encoding and zipping/compressing the set of all speech files together, because I'm not sure which factor (or combination of both) will give me the best results. Edit: I need this to run in Silverlight and Android, so I'm presumably stuck with only MP3 as my option (other than uncompressed wave files).
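
    For a rough sense of scale (illustrative numbers, assuming around 1.5 seconds per clip): uncompressed 16-bit mono PCM at 44.1 kHz is about 88 KB per second, so 300 clips come to roughly 40 MB, while the same clips as 64 kbps MP3 (8 KB per second) total roughly 3.5 MB. Speech also tolerates lower sample rates and bitrates (e.g. 22.05 kHz, 32 kbps) better than music does, which shrinks the compressed figure further.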

    Read the article

  • How to tile multiple procedurally generated textures?

    - by Burhuc
    I'm trying to develop a procedural tile generator for a game, mostly for the ground tiles, instead of using "hand-drawn" tiles. To achieve this I'm using Perlin noise and a sine wave with multiple parameters, which already gives me pretty nice results. I don't want to generate one tile and repeat it forever for one ground type; I want to avoid obvious repetition, so I'm generating n different tiles. The problem I'm having now is that I want to tile the generated textures (smooth transitions). At the moment I have four 256x256 textures. I thought a simple method would be to just add the positions of the different tiles to the noise generation algorithm, so that, when creating the four 256x256 textures, it would behave as if it were creating a single 512x512 texture, but that somehow didn't work as intended. So how can I tile those textures?
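
    A minimal sketch of the "add the tile position" idea in C#: the key is to sample the noise at each pixel's global (world) coordinate rather than its coordinate inside the tile, and to use exactly the same frequency scaling for every tile. Noise2D below is a stand-in for whatever Perlin/sine combination the generator already uses.

        // Generate one tile of a larger seamless field by sampling noise in world space.
        // tileX/tileY are the tile's grid coordinates; size is the tile edge length (e.g. 256).
        static float[,] GenerateTile(int tileX, int tileY, int size, Func<float, float, float> noise2D)
        {
            var pixels = new float[size, size];
            const float frequency = 0.01f;                   // must be identical for every tile

            for (int y = 0; y < size; y++)
            {
                for (int x = 0; x < size; x++)
                {
                    float worldX = tileX * size + x;         // global position, not position inside the tile
                    float worldY = tileY * size + y;
                    pixels[x, y] = noise2D(worldX * frequency, worldY * frequency);
                }
            }
            return pixels;
        }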

    Read the article

  • Removing surrounding noises from voice recording

    - by Peak Reconstruction Wavelength
    I have a wave file whose frequency spectrum looks like this: http://i.stack.imgur.com/2rRaS.png It contains audio which I want to keep while removing the rest. The problem is that the surrounding noise changes, while those distinct voice patterns remain. I marked the voice patterns for clarity: http://i.stack.imgur.com/eLkBl.png What could an algorithm, or a workflow in Adobe Audition, look like that removes everything but the voice patterns? I think the main characteristic is the line-shaped form over time. Loudness alone is not enough, as the noise is loud as well.
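
    One family of algorithms that fits this description is spectral gating, which is roughly what the noise-reduction effects in editors like Audition are built around: measure a noise profile from a voice-free stretch, then zero every FFT bin in every frame that doesn't rise well above that profile. A heavily simplified single-frame C# sketch, assuming the MathNet.Numerics package and ignoring the windowing, overlap-add and smoothing a real implementation needs:

        using System;
        using System.Numerics;
        using MathNet.Numerics.IntegralTransforms;

        // noiseProfile: average magnitude per FFT bin, measured beforehand from a region with no voice.
        // threshold: how far above the noise floor a bin must be to survive (illustrative value).
        static float[] GateFrame(float[] frame, double[] noiseProfile, double threshold = 2.0)
        {
            var bins = new Complex[frame.Length];
            for (int i = 0; i < frame.Length; i++) bins[i] = frame[i];
            Fourier.Forward(bins, FourierOptions.Matlab);

            for (int i = 0; i < bins.Length; i++)
                if (bins[i].Magnitude < threshold * noiseProfile[i])
                    bins[i] = Complex.Zero;                  // drop bins that sit near the noise floor

            Fourier.Inverse(bins, FourierOptions.Matlab);
            var cleaned = new float[frame.Length];
            for (int i = 0; i < frame.Length; i++) cleaned[i] = (float)bins[i].Real;
            return cleaned;
        }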

    Read the article

  • Four new Google Web Elements to help build wikis or online stores, and the return of Google Wave

    Four new Google Web Elements, customizable by web developers, and the return of Google Wave. Google is offering four new tools to help with building sites, which round out its Google Web Elements offering. While these turnkey tools primarily target beginners, they are also aimed at developers. "Web Elements are a great starting point, because they are built on APIs that give you more control over the content or the presentation," the project team explains. The first of these APIs, and certainly the most interesting, lets you embed and adapt a wiki into an existing site. Logically named Side...

    Read the article

  • Huge 2d pixelized world

    - by aspcartman
    I would like to make the game field in an indie strategy 2D game look something like this popular picture: http://0.static.wix.com/media/6a83ae_cd307e45ffd9c6b145237263ac1a86be.jpg_1024 So every "pixel" (block) changes its color slowly, sometimes a bright color wave passes through, etc., but the spaces between these blocks should stay dark (not counting shading, lighting and other effects going on). Units are going to be "pixelized" the same way and should position themselves according to those blocks. I have some experience in game development, but this task does not seem trivial to me. What approaches (shaders, tons of sprites, or rendering in code; I don't know) would you recommend I follow? (I'm thinking of making this game using the Unity engine.) Thanks everyone! :)
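
    A rough sketch of one approach, assuming Unity as the question mentions: keep a small Texture2D where each texel is one block, animate its colors on the CPU with noise plus a travelling sine "wave", and let point filtering keep the blocks hard-edged when the texture is stretched over the play field. (Per-pixel SetPixel every frame gets slow for large fields; a custom material or compute shader would be the next step, and the dark gaps between blocks could come from a grid overlay or a shader mask.)

        using UnityEngine;

        // One block per texel; the quad/sprite this is applied to scales it up, and Point filtering
        // keeps the blocks crisp. All sizes and colours here are illustrative.
        public class BlockField : MonoBehaviour
        {
            public int width = 128;
            public int height = 72;
            private Texture2D tex;

            void Start()
            {
                tex = new Texture2D(width, height);
                tex.filterMode = FilterMode.Point;                   // hard-edged "pixels"
                GetComponent<Renderer>().material.mainTexture = tex;
            }

            void Update()
            {
                float t = Time.time;
                for (int y = 0; y < height; y++)
                {
                    for (int x = 0; x < width; x++)
                    {
                        float glow = Mathf.PerlinNoise(x * 0.05f, y * 0.05f + t * 0.1f);   // slow colour drift
                        float wave = Mathf.Max(0f, Mathf.Sin(x * 0.2f - t * 2f)) * 0.5f;   // occasional bright band
                        tex.SetPixel(x, y, Color.Lerp(Color.black, Color.cyan, glow * 0.4f + wave));
                    }
                }
                tex.Apply();
            }
        }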

    Read the article

  • Samsung will equip a third of its new smartphones with Bada; what will be the advantages of this Linux-based mobile OS?

    Samsung will equip a third of its new smartphones with Bada; what will be the advantages of this Linux-based mobile OS? Samsung is the world's number two in mobile phones, yet the Korean company lags well behind in the smartphone market (only 8% market share in France in 2009). The firm's in-house operating system, Bada, is seen by its creators as the secret weapon that will let it break into the smartphone sector. Currently only one model (the Wave) ships with it, but the manufacturer promises a much wider presence for Bada, announcing that a third of the devices it launches this year will carry it. Samsung's OS is open and based on ...

    Read the article

  • Implementation of FIR filter in C#

    - by user261924
    Hi, at the moment I'm trying to implement an FIR lowpass filter on a wave file. The FIR coefficients were obtained from MATLAB using order 40. Now I need to implement the FIR algorithm in C#, and I'm finding it difficult. Any help? Thanks
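
    A minimal direct-form sketch of what that usually boils down to: y[n] is the sum over k of h[k] * x[n-k], where h holds the taps exported from MATLAB (an order-40 filter has 41 of them) and x holds the samples decoded from the wave file. This snippet assumes the samples are already available as floats in memory:

        // Direct-form FIR convolution: output[n] = sum over k of coefficients[k] * input[n - k].
        static float[] ApplyFir(float[] input, float[] coefficients)
        {
            var output = new float[input.Length];
            for (int n = 0; n < input.Length; n++)
            {
                float acc = 0f;
                for (int k = 0; k < coefficients.Length && k <= n; k++)
                    acc += coefficients[k] * input[n - k];
                output[n] = acc;
            }
            return output;
        }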

    Read the article

  • mciSendString cannot save to directory path

    - by robUK
    Hello, VS C# 2008 SP1. I have created a small application that records and plays audio. However, my application needs to save the wave file to the application data directory on the user's computer. mciSendString takes a C-style string as a parameter, and it has to be in 8.3 format. My problem is that I can't get it to save. What is strange is that sometimes it does and sometimes it doesn't; most of the time it fails. However, if I save directly to the C drive it works first time, every time. I have used the 3 different methods coded below. The error number I get when it fails is 286: "The file was not saved. Make sure your system has sufficient disk space or has an intact network connection". Many thanks for any suggestions.

        [DllImport("winmm.dll", CharSet = CharSet.Auto)]
        private static extern uint mciSendString([MarshalAs(UnmanagedType.LPTStr)] string command, StringBuilder returnValue, int returnLength, IntPtr winHandle);

        [DllImport("winmm.dll", CharSet = CharSet.Auto)]
        private static extern int mciGetErrorString(uint errorCode, StringBuilder errorText, int errorTextSize);

        [DllImport("Kernel32.dll", CharSet = CharSet.Auto)]
        private static extern int GetShortPathName([MarshalAs(UnmanagedType.LPTStr)] string longPath, [MarshalAs(UnmanagedType.LPTStr)] StringBuilder shortPath, int length);

        // Stop recording
        private void StopRecording()
        {
            // Save recorded voice
            string shortPath = this.shortPathName();
            string formatShortPath = string.Format("save recsound \"{0}\"", shortPath);
            uint result = 0;
            StringBuilder errorTest = new StringBuilder(256);

            // C:\DOCUME~1\Steve\APPLIC~1\Test.wav - fails
            result = mciSendString(string.Format("{0}", formatShortPath), null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            // Command-line convention - fails
            result = mciSendString("save recsound \"C:\\DOCUME~1\\Steve\\APPLIC~1\\Test.wav\"", null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            // 8.3 short format - fails
            result = mciSendString(@"save recsound C:\DOCUME~1\Steve\APPLIC~1\Test.wav", null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            // Save to the C drive - works every time
            result = mciSendString(@"save recsound C:\Test.wav", null, 0, IntPtr.Zero);
            mciGetErrorString(result, errorTest, errorTest.Length);

            mciSendString("close recsound ", null, 0, IntPtr.Zero);
        }

        // Get the short path name so that mciSendString can save the recorded wave file
        private string shortPathName()
        {
            string shortPath = string.Empty;
            long length = 0;
            StringBuilder buffer = new StringBuilder(256);

            // Get the length of the path
            length = GetShortPathName(this.saveRecordingPath, buffer, 256);
            shortPath = buffer.ToString();

            return shortPath;
        }
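
    One detail worth checking, offered as a guess rather than a confirmed fix: GetShortPathName can only produce an 8.3 name for a path that already exists, so if the application-data folder (or the target file) isn't there yet, the short path comes back empty or wrong and the MCI save fails. A small defensive sketch (requires using System.IO; saveRecordingPath is the field from the code above):

        // Make sure the directory and a placeholder file exist before asking for the 8.3 name.
        string dir = Path.GetDirectoryName(this.saveRecordingPath);
        if (!Directory.Exists(dir))
            Directory.CreateDirectory(dir);
        if (!File.Exists(this.saveRecordingPath))
            File.Create(this.saveRecordingPath).Dispose();   // empty placeholder so a short name can be resolved

        string shortPath = this.shortPathName();             // now GetShortPathName has something to resolve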

    Read the article

  • Rails 2 support after Rails 3 has been released

    - by J. Pablo Fernández
    Is there some plan or estimate for how long Rails 2 will be supported after Rails 3 has been released? I wanted to ride the wave and move to Rails 3 right away, especially for projects that may take 4 or 6 months to finish (so that they would probably be released with Rails 3.0.0 final), but I've found many things still not working, including many basic plugins and gems, so I believe I'm stuck with Rails 2 for now.

    Read the article

  • Need help manipulating WAV (RIFF) Files at a byte level

    - by Eric
    I'm writing an application in C# that will record audio files (*.wav) and automatically tag and name them. Wave files are RIFF files (like AVI) which can contain metadata chunks in addition to the waveform data chunks. So now I'm trying to figure out how to read and write the RIFF metadata to and from recorded wave files. I'm using NAudio for recording the files, and asked on their forums as well as on SO for a way to read and write RIFF tags. While I received a number of good answers, none of the solutions allowed for reading and writing RIFF chunks as easily as I would like. But more importantly, I have very little experience dealing with files at a byte level, and think this could be a good opportunity to learn. So now I want to try writing my own class(es) that can read in a RIFF file and allow metadata to be read from, and written to, the file. I've used streams in C#, but always with the entire stream at once. So now I'm a little lost having to consider a file byte by byte. Specifically, how would I go about removing or inserting bytes in the middle of a file? I've tried reading a file through a FileStream into a byte array (byte[]) as shown in the code below.

        System.IO.FileStream waveFileStream = System.IO.File.OpenRead(@"C:\sound.wav");
        byte[] waveBytes = new byte[waveFileStream.Length];
        waveFileStream.Read(waveBytes, 0, waveBytes.Length);

    I could see through the Visual Studio debugger that the first four bytes are the RIFF header of the file. But arrays are a pain to deal with when performing actions that change their size, like inserting or removing values. So I was thinking I could then turn the byte[] into a List like this:

        List<byte> list = waveBytes.ToList<byte>();

    This would make any byte-by-byte manipulation of the file a whole lot easier, but I'm worried I might be missing something, like a class in the System.IO namespace, that would make all this even easier. Am I on the right track, or is there a better way to do this? I should also mention that I'm not hugely concerned with performance, and would prefer not to deal with pointers or unsafe code blocks like this guy. If it helps at all, here is a good article on the RIFF/WAV file format.
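
    As a starting point for the byte-level reading side, here is a minimal sketch (not tied to NAudio) that walks the chunk list of a RIFF file with BinaryReader; each chunk is a 4-byte ASCII id, a 4-byte little-endian size, then that many data bytes padded to an even length. Inserting or removing a metadata chunk then amounts to rewriting the file with the chunk list changed and the overall RIFF size field in the first header updated.

        using System;
        using System.IO;
        using System.Text;

        // List every chunk in a wave file: id and size.
        static void DumpChunks(string path)
        {
            using (var reader = new BinaryReader(File.OpenRead(path)))
            {
                reader.ReadBytes(12);                               // "RIFF", overall size, "WAVE"
                while (reader.BaseStream.Position < reader.BaseStream.Length)
                {
                    string id = Encoding.ASCII.GetString(reader.ReadBytes(4));
                    int size = reader.ReadInt32();                  // chunk sizes are little-endian
                    Console.WriteLine("{0}: {1} bytes", id, size);
                    reader.BaseStream.Seek(size + (size % 2), SeekOrigin.Current);  // skip data plus pad byte
                }
            }
        }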

    Read the article

  • Server push: Comet vs. APE?

    - by noname
    I've read a little about Comet and also APE. Which one is better? I want users to see other users' updated content, like in Google Wave. Also, with Comet there are two variants: iframe vs. traditional AJAX. What is the difference, and which is better? I don't quite understand it. Thanks.

    Read the article

  • How to produce precisely-timed tone and silence in C#

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 second of sine wave in a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer and play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load; it takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, again seeking forward to leave the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end, then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting Play of a 40 ms tone it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 minutes or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing; I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers or the like piling up until it chokes. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution. Thanks in advance...
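
    Whatever output API ends up being used, the duration itself can be pinned down exactly by rendering each element to a sample-accurate buffer rather than relying on Sleep()/Stop() timing. A small sketch of that part (16-bit mono PCM assumed; queueing these buffers gaplessly into the output device is the part the question is really about and is not solved here):

        // Render an exact-length sine burst: duration is fixed by the sample count, not by timers.
        static short[] RenderTone(double frequencyHz, int durationMs, int sampleRate = 44100)
        {
            int sampleCount = sampleRate * durationMs / 1000;        // 40 ms at 44.1 kHz -> exactly 1764 samples
            var samples = new short[sampleCount];
            for (int i = 0; i < sampleCount; i++)
                samples[i] = (short)(0.8 * short.MaxValue * Math.Sin(2 * Math.PI * frequencyHz * i / sampleRate));
            return samples;
        }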

    Read the article

  • Software development working from home

    - by johnhilbron
    Hi, do you all think that working from home is the wave of the future for software development? In this day and age it seems like a logical next step for software developers to work from their homes and connect to each other using IM, video chat and phone, etc. What forces are at work pushing software development in this direction? What forces are keeping more people from working remotely? John

    Read the article

  • Normalize amplitude and phase with C#

    - by Lehto
    Hey, I'm in a situation where I need to do some math-related stuff in C#, and for that I need some external libraries. The tool I'm looking for should do the following: process sound (WAV/MP3), normalize the amplitude, and normalize the phase. Any idea which way to go? And is there a big difference if I do it on MP3 instead of WAV? Michael.
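
    The amplitude half is simple enough to do without a library once the audio is decoded to float samples; a small peak-normalization sketch follows ("phase normalization" is a much less standard operation and would need its own definition first). On the MP3 question: decoding, scaling and re-encoding MP3 adds another lossy generation, so normalizing the uncompressed audio and encoding once at the end generally sounds better.

        // Scale all samples so the loudest one reaches targetPeak (simple peak normalization).
        static void NormalizePeak(float[] samples, float targetPeak = 0.98f)
        {
            float peak = 0f;
            foreach (float s in samples)
                peak = Math.Max(peak, Math.Abs(s));
            if (peak <= 0f)
                return;                                  // silence: nothing to scale

            float gain = targetPeak / peak;
            for (int i = 0; i < samples.Length; i++)
                samples[i] *= gain;
        }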

    Read the article

  • iPad/iPhone-like scrolling

    - by Uruhara747
    Have any of you seen a JavaScript library that allows fluid div scrolling? I kind of want to do something like the scroll bars in Google Wave... but maybe less annoying. I happen to love them, but it doesn't seem like they're getting that good of a review.

    Read the article

  • Yes, another thread question...

    - by Michael
    I can't understand why I am losing control of my GUI even though I am using a thread to play a .wav file. Can someone pinpoint what is incorrect?

        #!/usr/bin/env python
        import wx, pyaudio, wave, easygui, thread, time, os, sys, traceback, threading
        import wx.lib.delayedresult as inbg

        isPaused = False
        isStopped = False

        class Frame(wx.Frame):
            def __init__(self):
                print 'Frame'
                wx.Frame.__init__(self, parent=None, id=-1, title="Jasmine", size=(720, 300))
                # initialize panel
                panel = wx.Panel(self, -1)
                # initialize grid bag
                sizer = wx.GridBagSizer(hgap=20, vgap=20)
                # initialize buttons
                exitButton = wx.Button(panel, wx.ID_ANY, "Exit")
                pauseButton = wx.Button(panel, wx.ID_ANY, 'Pause')
                prevButton = wx.Button(panel, wx.ID_ANY, 'Prev')
                nextButton = wx.Button(panel, wx.ID_ANY, 'Next')
                stopButton = wx.Button(panel, wx.ID_ANY, 'Stop')
                # add widgets to sizer
                sizer.Add(pauseButton, pos=(1,10))
                sizer.Add(prevButton, pos=(1,11))
                sizer.Add(nextButton, pos=(1,12))
                sizer.Add(stopButton, pos=(1,13))
                sizer.Add(exitButton, pos=(5,13))
                # initialize song time gauge
                #timeGauge = wx.Gauge(panel, 20)
                #sizer.Add(timeGauge, pos=(3,10), span=(0, 0))
                # initialize menuFile widget
                menuFile = wx.Menu()
                menuFile.Append(0, "L&oad")
                menuFile.Append(1, "E&xit")
                menuBar = wx.MenuBar()
                menuBar.Append(menuFile, "&File")
                menuAbout = wx.Menu()
                menuAbout.Append(2, "A&bout...")
                menuAbout.AppendSeparator()
                menuBar.Append(menuAbout, "Help")
                self.SetMenuBar(menuBar)
                self.CreateStatusBar()
                self.SetStatusText("Welcome to Jasime!")
                # place sizer on panel
                panel.SetSizer(sizer)
                # initialize icon
                self.cd_image = wx.Image('cd_icon.png', wx.BITMAP_TYPE_PNG)
                self.temp = self.cd_image.ConvertToBitmap()
                self.size = self.temp.GetWidth(), self.temp.GetHeight()
                wx.StaticBitmap(parent=panel, bitmap=self.temp)
                # set bindings
                self.Bind(wx.EVT_BUTTON, self.OnQuit, id=exitButton.GetId())
                self.Bind(wx.EVT_BUTTON, self.pause, id=pauseButton.GetId())
                self.Bind(wx.EVT_BUTTON, self.stop, id=stopButton.GetId())
                self.Bind(wx.EVT_MENU, self.loadFile, id=0)
                self.Bind(wx.EVT_MENU, self.OnQuit, id=1)
                self.Bind(wx.EVT_MENU, self.OnAbout, id=2)

            # Load file using FileDialog, and create a thread for user control while running the file
            def loadFile(self, event):
                foo = wx.FileDialog(self, message="Open a .wav file...", defaultDir=os.getcwd(),
                                    defaultFile="", style=wx.FD_MULTIPLE)
                foo.ShowModal()
                self.queue = foo.GetPaths()
                self.threadID = 1
                while len(self.queue) != 0:
                    self.song = myThread(self.threadID, self.queue[0])
                    self.song.start()
                    while self.song.isAlive():
                        time.sleep(2)
                    self.queue.pop(0)
                    self.threadID += 1

            def OnQuit(self, event):
                self.Close()

            def OnAbout(self, event):
                wx.MessageBox("This is a great cup of tea.", "About Jasmine",
                              wx.OK | wx.ICON_INFORMATION, self)

            def pause(self, event):
                global isPaused
                isPaused = not isPaused

            def stop(self, event):
                global isStopped
                isStopped = not isStopped

        class myThread(threading.Thread):
            def __init__(self, threadID, wf):
                self.threadID = threadID
                self.wf = wf
                threading.Thread.__init__(self)

            def run(self):
                global isPaused
                global isStopped
                self.waveFile = wave.open(self.wf, 'rb')
                # initialize stream
                self.p = pyaudio.PyAudio()
                self.stream = self.p.open(format = self.p.get_format_from_width(self.waveFile.getsampwidth()),
                                          channels = self.waveFile.getnchannels(),
                                          rate = self.waveFile.getframerate(),
                                          output = True)
                self.data = self.waveFile.readframes(1024)
                isPaused = False
                isStopped = False
                # main play loop, with pause event checking
                while self.data != '':
                    # while isPaused != True:
                    #     if isStopped == False:
                    self.stream.write(self.data)
                    self.data = self.waveFile.readframes(1024)
                    # elif isStopped == True:
                    #     self.stream.close()
                    #     self.p.terminate()
                self.stream.close()
                self.p.terminate()

        class App(wx.App):
            def OnInit(self):
                self.frame = Frame()
                self.frame.Show()
                self.SetTopWindow(self.frame)
                return True

        def main():
            app = App()
            app.MainLoop()

        if __name__ == '__main__':
            main()

    Read the article
