Search Results

Search found 12935 results on 518 pages for 'game recording'.

  • Creating a music catalog in C# and extracting the first 30 seconds as soon as the first words are sung

    - by Rad
    I have already read the question "Separation of singing voice from music". I don't need that kind of complex audio processing; I only need some detection mechanism that tells me a voice/vocal is present while the music is playing (or not playing). I need to extract the first 30 seconds from the point where a vocalist starts singing along with the full band (see question 1 below).

    I want to create a music catalog using ASP.NET MVC 2 with Silverlight clients, written in C#/.NET 4.0, as the storefront. On the back end I would also like to build a desktop WPF application that creates the catalog from already existing music files, most of which carry metadata: ID3v1, ID3v2.3, ID3v2.4, iTunes MP4, WMA, Vorbis Comments, APE tags, etc. I would possibly also create a web service that lets catalog contributors upload a zipped album and that triggers extraction of the metadata and of the music segments described below. I would be happy if I achieve no. 1 below.

    Let's say I have thousands of songs in MP3 (or other formats) grouped in subfolders by some classification (genre, artist, album, composer, or other groupings). I want to create database tables that organize the songs so they can be searched on different criteria (year, length, the classifications above, song title, description, etc.), much like what the iTunes Store offers its customers. I want to extract metadata from the various formats (I will try to get songs in MP3 format, but there may be other popular formats) and allow the catalog manager to add missing data from either the desktop or the web application. He or other contributors can upload zipped music via an HTML, Silverlight, or WPF upload. Can anybody suggest open-source libraries, articles, or code snippets that can do this automatically using .NET and possibly a SQL Server database? (A hedged metadata-reading sketch follows this entry.)

    My main questions are an audio-processing challenge. I want to extract two kinds of segments: 1. How do I extract a music segment that starts 1-2 seconds before a vocal begins singing and runs up to 30 seconds from that point in time? 2. Much more challenging: how do I find repeating segments? One usually recognizes a song by its refrain, and songs are usually known by these refrains. Also, how would I go about creating a list of songs that go great together, like what Genius from iTunes does? Are there any characteristics of music that can be used to match songs? The goal is for people to quickly scan and recognize songs, i.e. associate a melody and words with a title/album, so they can make intelligent decisions like buying a song or creating similar-mood lists. Thanks, Rad

    Read the article
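
    For the metadata-reading side, here is a minimal sketch of what a catalog import step could look like, assuming the open-source TagLib# library (its TagLib.File.Create API) and a hypothetical SongRecord class invented for this example; detecting where the vocal starts is a separate signal-processing problem and is not covered here.

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Hypothetical record the importer would hand off to the database layer.
        public class SongRecord
        {
            public string Path;
            public string Title;
            public string Artist;
            public string Album;
            public uint Year;
            public TimeSpan Duration;
        }

        public static class CatalogImporter
        {
            // Walks a folder tree and reads whatever tags TagLib# can find
            // (ID3v1/v2, iTunes MP4, WMA, Vorbis Comments, APE, ...).
            public static IEnumerable<SongRecord> Scan(string rootFolder)
            {
                foreach (var path in Directory.EnumerateFiles(rootFolder, "*.*", SearchOption.AllDirectories))
                {
                    TagLib.File media;
                    try { media = TagLib.File.Create(path); }
                    catch (TagLib.UnsupportedFormatException) { continue; }   // skip cover art, playlists, ...

                    yield return new SongRecord
                    {
                        Path = path,
                        Title = media.Tag.Title ?? System.IO.Path.GetFileNameWithoutExtension(path),
                        Artist = media.Tag.FirstPerformer ?? "Unknown",
                        Album = media.Tag.Album ?? "Unknown",
                        Year = media.Tag.Year,
                        Duration = media.Properties.Duration
                    };
                }
            }
        }

    Each SongRecord could then be written to the SQL Server tables by whatever data layer the catalog uses.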

  • How do sprites work?

    - by Alan
    How do sprites work? I've seen sprites from old-school games like Super Mario Brothers, and wondered how they're animated to make a game. They're always presented as one big image map, so how are they used? For Mario (as an example), are there precalculated image co-ordinates that outline Mario, which are swapped between the various Mario sprites to produce animation? Or are the sprites pre-"cut" during game initialization using precalculated image co-ordinates and stored in memory somewhere? Obviously I know nothing about game development. (A small sketch of the co-ordinate approach follows this entry.)

    Read the article
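
    To illustrate the first idea (keep the sheet whole and copy one frame rectangle at a time), here is a minimal C# sketch using System.Drawing; the single-row, fixed-size frame layout is an assumption made for the example.

        using System.Drawing;

        public static class SpriteSheet
        {
            // Draws frame 'frameIndex' of a sprite sheet at (x, y) on the target surface.
            // Assumes all frames are frameWidth x frameHeight and packed left to right.
            public static void DrawFrame(Graphics target, Image sheet,
                                         int frameIndex, int frameWidth, int frameHeight,
                                         int x, int y)
            {
                // Source rectangle: which part of the big image map to copy.
                var src = new Rectangle(frameIndex * frameWidth, 0, frameWidth, frameHeight);
                // Destination rectangle: where on screen to draw it.
                var dest = new Rectangle(x, y, frameWidth, frameHeight);
                target.DrawImage(sheet, dest, src, GraphicsUnit.Pixel);
            }
        }

    Animation then just means advancing frameIndex on a timer; nothing is physically cut out of the sheet.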

  • Not getting the voice that was recorded; could you suggest what the bug is in the code below?

    - by kumaryr
        AVAudioSession *audioSession = [AVAudioSession sharedInstance];
        NSError *err = nil;
        [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&err];
        if (err) {
            NSLog(@"audioSession: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            return;
        }

        [audioSession setActive:YES error:&err];
        err = nil;
        if (err) {
            NSLog(@"audioSession: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            return;
        }

        NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
        [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
        [recordSetting setValue:[NSNumber numberWithFloat:40000.0] forKey:AVSampleRateKey];
        [recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
        [recordSetting setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

        // Create a new dated file
        NSDate *now = [NSDate dateWithTimeIntervalSinceNow:0];
        NSString *caldate = [now description];
        NSString *recorderFilePath = [[NSString stringWithFormat:@"%@/%@.caf", DOCUMENTS_FOLDER, caldate] retain];
        NSLog(recorderFilePath);

        url = [NSURL fileURLWithPath:recorderFilePath];
        err = nil;
        recorder = [[AVAudioRecorder alloc] initWithURL:url settings:recordSetting error:&err];
        if (!recorder) {
            NSLog(@"recorder: %@ %d %@", [err domain], [err code], [[err userInfo] description]);
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Warning"
                                                            message:[err localizedDescription]
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];
            return;
        }

        // prepare to record
        [recorder setDelegate:self];
        [recorder prepareToRecord];
        recorder.meteringEnabled = YES;

        BOOL audioHWAvailable = audioSession.inputIsAvailable;
        if (!audioHWAvailable) {
            UIAlertView *cantRecordAlert = [[UIAlertView alloc] initWithTitle:@"Warning"
                                                                      message:@"Audio input hardware not available"
                                                                     delegate:nil
                                                            cancelButtonTitle:@"OK"
                                                            otherButtonTitles:nil];
            [cantRecordAlert show];
            [cantRecordAlert release];
            return;
        }

        //[NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(updateTimerDisplay) userInfo:nil repeats:YES];
        [recorder recordForDuration:(NSTimeInterval)10];
        // [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(updateTimerDisplay) userInfo:nil repeats:YES];

    Read the article

  • Anyone know of a .NET library/utility that will convert a Word document to MP3 format?

    - by EJB
    Does anyone know of any well-supported/proven method for converting a Microsoft Word document to MP3 or WAV format, so that visually impaired folks could "listen" to documents that I have stored in my web-based document management system? I already have an interface built so that someone can use the telephone to get the list of available documents, with the dates and titles "read" to them over the phone, but now I would like the ability to let someone actually listen to the contents of the Word files stored in the system. Ideally a .NET library or utility that lets me convert DOC to MP3 after each upload would be best, but one that "reads" the file on demand would be OK too. (A minimal text-to-speech sketch follows this entry.)

    Read the article
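
    As one possible starting point, here is a minimal sketch using the built-in System.Speech.Synthesis API to render text to a WAV file. It assumes the document text has already been extracted from the Word file (for example with the Open XML SDK), and converting the WAV to MP3 would need a separate encoder such as LAME.

        using System.Speech.Synthesis;   // built-in .NET text-to-speech (System.Speech assembly)

        public static class DocumentNarrator
        {
            // Renders already-extracted document text to a WAV file with the default system voice.
            public static void TextToWav(string documentText, string wavPath)
            {
                using (var synth = new SpeechSynthesizer())
                {
                    synth.SetOutputToWaveFile(wavPath);  // write the audio to disk instead of the speakers
                    synth.Speak(documentText);
                }
            }
        }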

  • How RPG characters are made

    - by user365314
    When an RPG has the ability to change armor and clothes, how is that done? I mean mostly the 3D side. If I make a normal character with flat clothes, it would be easy: just change the textures. But my question is about armor, which has totally different models. Are only the armor models created separately, or is the whole character model re-created with the armor? How is it imported into the game engine: only the armor, or the character model together with the new armor? If a person changes armor in the game, does the game swap the whole model or only the armor part? And if only the armor part, how are the movement animations done: are the armor models animated on the characters in the 3D programs, or something else? (A small data-model sketch follows this entry.)

    Read the article
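
    One common arrangement (sketched here only as an assumption, not tied to any particular engine) is to keep the base body and each armor piece as separate meshes that are all skinned to the same skeleton, so equipping armor swaps only the mesh in a slot while the shared skeleton keeps driving the animations. A minimal C# data-model sketch with hypothetical Mesh and Skeleton stand-ins:

        using System.Collections.Generic;

        public enum EquipmentSlot { Head, Chest, Legs, Hands, Feet }

        // Hypothetical stand-ins; in a real engine these would be its skinned-mesh and rig types.
        public class Mesh     { public string Name; public Mesh(string name)     { Name = name; } }
        public class Skeleton { public string Name; public Skeleton(string name) { Name = name; } }

        public class Character
        {
            // One skeleton drives the base body and every equipped piece, so the
            // same animations keep working no matter which armor is attached.
            public Skeleton Rig  { get; } = new Skeleton("humanoid");
            public Mesh     Body { get; } = new Mesh("base_body");

            private readonly Dictionary<EquipmentSlot, Mesh> equipped =
                new Dictionary<EquipmentSlot, Mesh>();

            // Swapping armor replaces only the mesh in that slot; the rest of the
            // character (and its current animation state) is left alone.
            public void Equip(EquipmentSlot slot, Mesh armorMesh) { equipped[slot] = armorMesh; }
            public void Unequip(EquipmentSlot slot)               { equipped.Remove(slot); }
        }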

  • How well would MVVM work for games?

    - by Benny Jobigan
    Particularly for 2D games, and particularly Silverlight/WPF games. If you think about it, you can divide a game object into its view (the graphic on the screen) and a view-model/model (the state, AI, and other data for the object). In Silverlight it seems common to make each object a user control, putting the model and view into a single object. I suppose the advantage of this is simplicity, but perhaps it's less clean or has some disadvantages in terms of the underlying "game engine". What are your thoughts on this matter? What are some advantages and disadvantages of using the MVVM pattern for game development? How about performance? All thoughts are welcome. (A small view-model sketch follows this entry.)

    Read the article
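
    To make the split concrete, here is a minimal sketch of the view-model half of such a game object; the EnemyViewModel class is invented for this example, and the view would be a XAML element whose Canvas.Left/Canvas.Top are bound to X and Y. It only illustrates the pattern and says nothing about whether the binding overhead is acceptable for a given game.

        using System.ComponentModel;

        // View-model for a 2D game object: holds state and update logic and
        // notifies the bound view (a XAML sprite) whenever the state changes.
        public class EnemyViewModel : INotifyPropertyChanged
        {
            private double x, y;

            public double X { get { return x; } private set { x = value; OnPropertyChanged("X"); } }
            public double Y { get { return y; } private set { y = value; OnPropertyChanged("Y"); } }

            // Called once per frame by the game loop; trivial "drift right" behaviour.
            public void Update(double dt) { X += 50.0 * dt; }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }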

  • How to set a background image in Java?

    - by Dew
    I am developing a simple platform game in Java, using BlueJ as the IDE. Right now the player/enemy sprites, platforms, and other items in the game are drawn using polygons and simple shapes; eventually I hope to replace them with actual images. For now, I would like to know the simplest solution for setting an image (either from a URL or from a local source) as the 'background' of my game window/canvas. I would appreciate it if it isn't something long or complex, as my programming skills aren't very good and I want to keep my program as simple as possible. Kindly provide example code with comments to explain what it does, and, if it belongs in its own class, how to call its relevant methods from other classes. Thank you very much.

    Read the article

  • Level editor for 3D games with an open format or API?

    - by furtelwart
    I would like to experiment with machine-generated levels for a 3D game. I'm very open about which game this will be; I just like the idea of running through a generated map. For this approach it would be great if I could use an API or an open format for level designs. Is there an open-source level format or system that can be used in several game engines (first-person shooters or whatever)? I don't know if I explained my point clearly, so please add a comment with your question and I will try to clarify.

    Read the article

  • Save and play recorded sound

    - by blacksheep
    I'd like to save and play back the sounds recorded with this class:

        @interface Recorder : NSObject {
            NSMutableArray *times;
            NSMutableArray *samples;
        }
        @end

        @implementation Recorder

        - (id)init {
            [super init];
            times = [[NSMutableArray alloc] init];
            samples = [[NSMutableArray alloc] init];
            return self;
        }

        - (void)recordSound:(id)someSound {
            CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
            NSNumber *wrappedTime = [NSNumber numberWithDouble:now];
            [times addObject:wrappedTime];
            [samples addObject:someSound];
        }

        @end

    Thanks, blacksheep

    Read the article

  • Record demo and save as AVI for upload to YouTube?

    - by OverTheRainbow
    Hello, I need to record a demo of a program in Windows and save it as an AVI file so that I can upload it to YouTube. I tried Wink for this, but unless I overlooked something, it only saves files as Flash (FLV), which YouTube refused. Is there an open-source alternative? I don't need anything hardcore, just a tool that will let me record a demo and insert a couple of slides where the demo pauses so the user can read something and click a button to resume watching. Thank you.

    Read the article

  • Monitoring an audio line.

    - by Stefan Liebenberg
    I need to monitor my audio line-in in Linux and, whenever audio is playing, record the sound and save it to a file, similar to how motion monitors a video feed. Is it possible to do this with bash? Something along the lines of:

        #!/bin/bash
        # audio device
        device=/dev/audio-line-in

        # below this threshold, audio will not be recorded
        noise_threshold=10

        # folder where recordings are stored
        storage_folder=~/recordings

        # run indefinitely, until Ctrl-C is pressed
        while true; do
            # noise_level() represents a function to determine
            # the noise level from the device
            if noise_level( $device ) > $noise_threshold; then
                # stream from the device to a file; it can be encoded to mp3 later
                cat $device > $storage_folder/`date`.raw
            fi
        done

    Read the article

  • Refactor C++ code to use a scripting language?

    - by Justin Ardini
    Background: I have been working on a platformer game written in C++ for a few months. The game is currently written entirely in C++, though I am intrigued by the possibility of using Lua for enemy AI and possibly some other logic. However, the project was designed without Lua in mind, and I have already written working C++ code for much of the AI. I am hoping Lua can improve the extensibility of the game, but I don't know whether it would make sense to convert the existing C++ code into Lua. The question: when, if ever, is it appropriate to take fully functional C++ code and refactor it into a scripting language like Lua? The question is intentionally a bit vague, so feel free to give answers that are not tied to the background above.

    Read the article

  • How to check if aspect-ratio auto-adjustment is enabled on a monitor

    - by kFk
    The game application is written in C++ and uses DirectX 8. I get the monitor's maximum resolution to calculate its aspect ratio, then use this value to adjust the game's rendering (scale and set clipping so that widescreen monitors show a normal 4:3 image with black borders). How can I check whether the monitor is currently applying its own aspect-ratio auto-adjustment? Because my scaling plus the monitor's scaling makes the resulting image over-scaled. Thanks. EDIT: I saw different monitor resolutions handled correctly, with or without aspect-ratio auto-adjustment, in the casual game "Royal Envoy", but I don't know how they do it. (A sketch of the 4:3 letterbox arithmetic follows this entry.)

    Read the article
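
    For the application-side scaling mentioned above (fitting a 4:3 image into an arbitrary output resolution with black borders), the arithmetic is only a comparison of aspect ratios; here is a small C# sketch of that calculation, with a Viewport struct invented for the example. Detecting the monitor's own auto-adjustment is a separate, driver- and monitor-specific question that this does not answer.

        public struct Viewport
        {
            public int X, Y, Width, Height;
            public Viewport(int x, int y, int w, int h) { X = x; Y = y; Width = w; Height = h; }
        }

        public static class Letterbox
        {
            // Largest 4:3 rectangle that fits centered inside the output resolution;
            // the remaining area is the black border.
            public static Viewport FitFourByThree(int screenWidth, int screenHeight)
            {
                const double target = 4.0 / 3.0;
                double screen = (double)screenWidth / screenHeight;

                if (screen > target)
                {
                    // Screen is wider than 4:3 -> black bars on the left and right.
                    int w = (int)(screenHeight * target);
                    return new Viewport((screenWidth - w) / 2, 0, w, screenHeight);
                }
                else
                {
                    // Screen is taller than (or equal to) 4:3 -> black bars on the top and bottom.
                    int h = (int)(screenWidth / target);
                    return new Viewport(0, (screenHeight - h) / 2, screenWidth, h);
                }
            }
        }

    For example, FitFourByThree(1920, 1080) gives a 1440x1080 viewport offset 240 pixels from the left edge.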

  • Using arrays to get the best memory alignment and cache use: is it necessary?

    - by Alberto Toglia
    I'm all about performance these days because I'm developing my first game engine. I'm no C++ expert, but after some research I discovered the importance of the cache and of memory alignment. Basically, what I found is that it is recommended to keep memory well aligned, especially for data you need to access together, for example in a loop. Now, in my project I'm writing my game-object manager, and I was thinking of having an array of GameObjects, meaning the actual memory of my objects would sit one after the other:

        static const size_t MaxNumberGameObjects = 20;
        GameObject mGameObjects[MaxNumberGameObjects];

    But since I will be keeping a list of components per object (component-based design: Mesh, RigidBody, Transformation, etc.), will I be gaining anything from the array at all? Anyway, I have seen some people just use a simple std::map for storing game objects. So what do you think? Am I better off using a pure component model?

    Read the article

  • GStreamer record iradio-mode artifacts

    - by Kanzeon
    I'm trying to record internet radio while listening to it. I use the following pipeline, but I have noticed that when I set iradio-mode to true, some noise appears in the recorded file (not in the playback). Without iradio-mode everything is fine, but in my app I need this mode to get the title messages.

        gst-launch souphttpsrc location="<radio channel>" iradio-mode=true ! tee name=t ! queue ! decodebin2 ! audioconvert ! audioresample ! osxaudiosink t. ! queue ! filesink location=rectest.mp3

    Read the article

  • How to save a recorded audio file to another location? I'm trying, but I get an exception...

    - by rakesh-bhatt99
        NSString *recordFile = [NSTemporaryDirectory() stringByAppendingPathComponent:(NSString *)inRecordFile];

        NSArray *docPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *docDir = [[docPaths objectAtIndex:0] stringByAppendingPathComponent:(NSString *)inRecordFile];

        url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)docPaths, NULL);

        // create the audio file
        XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat,
                                             kAudioFileFlags_EraseFile, &mRecordFile),
                      "AudioFileCreateWithURL failed");
        CFRelease(url);

    Read the article

  • Is it acceptable to design my GLSurfaceView as a main control class?

    - by Omega
    I'm trying to structure a game I'm making on Android so that I have a sound, flexible design. Right now I'm looking at where to tie my game's rules engine and graphics engine together, and at what should sit between them. At a glance I've been eyeing my implementation of GLSurfaceView, where various screen events are captured. My rationale would be to create an instance of my game engine and graphics engine there, and to receive events and state changes that trigger updates of either where applicable. Further to this, in the future the GLSurfaceView implementation could also store stubs for players during a network game, as well as implementations of computer opponents, and dispatch to them appropriately. Does this seem like a sensible design? Are there any improvements I could make? Thanks for any input!

    Read the article

  • How to implement a Pentago AI algorithm

    - by itsho
    Hi, I'm trying to develop a Pentago game in C#. Right now I have a two-player mode that works just fine. The problem is that I want a one-player mode (against the computer), but unfortunately all the implementations of minimax/negamax I have found calculate a single action per turn, whereas in Pentago each player has to do two things on a turn: place a marble and rotate one of the inner boards. I haven't figured out how to implement both the placement and the rotation, and I would love someone to guide me through this. If you're not familiar with the game, here's a link to it. If anyone wants, I can upload my code somewhere if that's relevant. Thank you very much in advance. (A composite-move negamax sketch follows this entry.)

    Read the article
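
    One way to handle the two-part turn is to treat "placement plus rotation" as a single composite move, so a standard negamax only ever sees whole turns. The sketch below assumes a hypothetical IPentagoBoard interface and Move type standing in for the asker's own board class; alpha-beta pruning and move ordering are left out for clarity.

        using System;
        using System.Collections.Generic;

        public enum Rotation { Clockwise, CounterClockwise }

        // One full Pentago turn: place a marble, then rotate one 3x3 quadrant.
        public readonly struct Move
        {
            public readonly int Row, Col, Quadrant;
            public readonly Rotation Direction;
            public Move(int row, int col, int quadrant, Rotation direction)
            { Row = row; Col = col; Quadrant = quadrant; Direction = direction; }
        }

        // Hypothetical board abstraction the search runs against.
        public interface IPentagoBoard
        {
            IEnumerable<(int Row, int Col)> EmptyCells { get; }
            bool IsTerminal { get; }
            int Evaluate(int player);            // heuristic score from the given player's point of view
            void Apply(Move move, int player);   // place the marble, then rotate the quadrant
            void Undo(Move move, int player);    // reverse the rotation, then remove the marble
        }

        public static class PentagoAi
        {
            // Plain negamax over composite moves; player is +1 or -1.
            public static int Negamax(IPentagoBoard board, int depth, int player)
            {
                if (depth == 0 || board.IsTerminal)
                    return board.Evaluate(player);

                // Snapshot the empty cells so Apply/Undo inside the loop cannot disturb the enumeration.
                var cells = new List<(int Row, int Col)>(board.EmptyCells);

                int best = int.MinValue;
                foreach (var (row, col) in cells)
                    for (int quadrant = 0; quadrant < 4; quadrant++)
                        foreach (Rotation direction in Enum.GetValues(typeof(Rotation)))
                        {
                            var move = new Move(row, col, quadrant, direction);
                            board.Apply(move, player);
                            best = Math.Max(best, -Negamax(board, depth - 1, -player));
                            board.Undo(move, player);
                        }
                return best;
            }
        }

    The branching factor is large (empty cells x 4 quadrants x 2 directions), so alpha-beta pruning and a shallow search depth are practically required on top of this.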
