Search Results

Search found 5434 results on 218 pages for 'digital audio'.

Page 15/218 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Google I/O 2012 - Monetizing Digital Goods with Google Wallet

    Google I/O 2012 - Monetizing Digital Goods with Google Wallet Joel Leitch, Dan Zink, Pali Bhat Whether you're a game developer selling virtual goods or currencies, or a media developer selling news content, videos, music or any other premium digital media, having a simple way to process payments from your customers is important. In this session, we will walk through an explanation of Google Wallet for digital goods, the new features, and the improved pricing model for developers. In addition, Kabam will share their experience with Google Wallet and best practices for integration. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Views: 307 13 ratings Time: 44:31 More in Science & Technology

    Read the article

  • Analog and digital audio output at the same time

    - by wim
    My speakers use a digital input, but my headphones use an analog input. I have them both plugged in, and when I want to use the headphones I just turn off the speakers and switch on the headphones. I know that simultaneous output on digital and analog is supported by the hardware, because it worked fine in Windows XP. But on Ubuntu, I seem to only get one at a time, depending on which setting is selected in the combo box located at System -> Preferences -> Sound -> Hardware. How can I get simultaneous analog and digital output without having to switch the profile every time? I'm on Ubuntu 11.04 and it's an HDA Intel chip.
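
    One possible approach (a sketch only, not a tested fix): on PulseAudio you can load module-combine-sink (called module-combine on the releases that shipped around 11.04) with both the analog and the digital sink as slaves, then make the combined sink the default, instead of flipping the hardware profile. Sketched here with Python's subprocess around the real pactl CLI; the sink names are placeholders, so list yours first with "pactl list short sinks".

        import subprocess

        ANALOG = "alsa_output.pci-0000_00_1b.0.analog-stereo"    # placeholder sink name
        DIGITAL = "alsa_output.pci-0000_00_1b.0.iec958-stereo"   # placeholder sink name

        # Create one virtual sink that copies its input to both real sinks.
        subprocess.run(["pactl", "load-module", "module-combine-sink",
                        "sink_name=both", f"slaves={ANALOG},{DIGITAL}"], check=True)
        # Route new streams to the combined sink by default.
        subprocess.run(["pactl", "set-default-sink", "both"], check=True)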

    Read the article

  • 7-Eleven Improves the Digital Guest Experience With 10-Minute Application Provisioning

    - by MichaelM-Oracle
    By Vishal Mehra - Director, Cloud Computing, Oracle Consulting

    Making the Cloud Journey Matter

    There’s much more to cloud computing than cutting costs and closing data centers. In fact, cloud computing is fast becoming the engine for innovation and productivity in the digital age. Oracle Consulting Services contributes to our customers’ cloud journey by accelerating application provisioning and rapidly deploying enterprise solutions. By blending flexibility with standardization, our Middleware as a Service (MWaaS) offering is ensuring the success of many cloud initiatives.

    10-Minute Application Provisioning Times at 7-Eleven

    As a case in point, 7-Eleven recently highlighted the scope, scale, and results of a cloud-powered environment. The world’s largest convenience store chain is rolling out a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Every day, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, web sites, and mobile apps. Promoting customer loyalty, targeting promotions, downloading digital coupons, and accepting digital payments are all part of the roadmap for a comprehensive and rewarding customer experience. And what about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE Program Manager, Information Technology, reported at Oracle Open World, "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases."

    OCS understands the complex nature of innovative solutions and has processes and expertise to help clients like 7-Eleven rapidly develop technology that enhances the customer experience with little more than the click of a button. OCS understood that the 7-Eleven roadmap required careful planning, agile development, and a cloud-capable environment to move fast and perform at enterprise scale.

    Business Agility

    Today’s business-savvy technology leaders face competing priorities as they confront the digital disruptions of the mobile revolution and next-generation enterprise applications. To support an innovation agenda, IT is required to balance competing priorities between development and operations groups. Standardization and consolidation of computing resources are the keys to success. With our operational and technical expertise promoting business agility, Oracle Consulting's deep Middleware as a Service experience can make a significant difference to our clients by empowering enterprise IT organizations with the computing environment they seek to keep up with the pace of change that digitally driven business units expect. Depending on the needs of the organization, this environment runs within a private, public, or hybrid cloud infrastructure. Through on-demand access to a shared pool of configurable computing resources, IT delivers the standard tools and methods for developing, integrating, deploying, and scaling next-generation applications. Gold profiles of predefined configurations eliminate the version mismatches among databases, application servers, and SOA suite components, delivered both by Oracle and other enterprise ISVs. These computing resources are well defined in business terms, enabling users to select what they need from a service catalog.

    Striking the Balance between Development and Operations

    As a result, development groups have the flexibility to choose among a menu of available services with descriptions of standard business functions, service level guarantees, and costs. Faced with the consumerization of enterprise IT, they can deliver the innovative customer experiences that seamlessly integrate with underlying enterprise applications and services. This cloud-powered development and testing environment accelerates release cycles to ensure agile development and rapid deployments. At the same time, the operations group is relying on certified stacks and frameworks, tuned to predefined environments and patterns. Operators can maintain a high level of security, and continue best practices for applications/systems monitoring and management. Moreover, faced with the challenges of delivering on service level agreements (SLAs) with the business units, operators can ensure performance, scalability, and reliability of the infrastructure. The elasticity of a cloud-computing environment – the ability to rapidly add virtual machines and storage in response to computing demands – makes a difference for hardware utilization and efficiency.

    Contending with Continuous Change

    What does it take to succeed on the promise of the cloud? As the engine for innovation and productivity in the digital age, IT must face not only the technical transformations but also the organizational challenges of the cloud. Standardizing key technologies, resources, and services through cloud computing is only one part of the cloud journey. Managing relationships among multiple departments and projects over time – developing the management, governance, and monitoring capabilities within IT – is an often unmentioned but all too important second part. In fact, IT must have the organizational agility to contend with continuous change. This is where a skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. With a lifecycle services approach to delivering innovative business solutions, Oracle Consulting Services has expertise and a portfolio of services to help enterprise customers succeed on their cloud journeys as well as other converging mega trends.

    Read the article

  • Programming for Digital frames

    - by spartan2417
    A project has recently come to my attention, but I have no idea where to start or even if it's possible. The idea revolves around programming a clock that is displayed inside a digital photo frame. The user would then be able to put different pictures corresponding to different times onto a USB pen, for example, which would load as soon as you plug the USB in. The project itself would be really neat - if it were just on a computer. I have no idea if what I'm talking about is even possible on a digital photo frame and, if it is, what language to use. Any input at all would be great. My current idea is maybe to have a small device at the back, with an SSD, that runs the program and drives the screen, completely bypassing standard digital photo frames, though again I don't know how to begin with this. And yes, I've tried Google (although it helps to know what to google).
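
    If the "small device at the back" route is taken (a Raspberry-Pi-class board behind the panel, say), the core logic is tiny. Here is a rough Python sketch of just the picture-selection part; the mount path and the filename convention (images named by the hour they belong to) are made up for illustration, and actually drawing the image full screen would be handled by whatever toolkit the device uses.

        import datetime
        import pathlib
        import time

        USB_MOUNT = pathlib.Path("/media/usb")      # placeholder: wherever the stick mounts

        def picture_for_now():
            """Return the image whose filename matches the current hour, e.g. 14_afternoon.jpg."""
            hour = datetime.datetime.now().hour
            candidates = sorted(USB_MOUNT.glob(f"{hour:02d}_*.jpg"))
            return candidates[0] if candidates else None

        while True:
            print("showing", picture_for_now())     # real code would blit this to the screen
            time.sleep(60)                          # re-check once a minute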

    Read the article

  • How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September, with Windows 7 64-bit Professional on it. The audio device works properly while listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. When I use any other VoIP program like TeamSpeak 3, MSN or Skype, I hear a distorted voice and it's impossible to understand anything... Anyway, everything worked fine until I installed Ventrilo, but removing it didn't solve my problem. So how do I restore my audio?

    Read the article

  • How do I use different audio devices for different apps in Windows 8?

    - by Eclipse
    Besides switching the default audio device, how can I send the audio from one app (say Xbox Music) to one audio device, and another (say the Video app) to another audio device? Edit: Looking further, I found this: http://channel9.msdn.com/Events/BUILD/BUILD2011/APP-408T At 16:16, he demonstrates exactly what I want to do, but when I go to the Devices charm, I get the message: "You don't have any devices that can receive content from Music".

    Read the article

  • Best way to learn iPhone Audio Queue Services, step by step tutorial

    - by optician
    Hi everyone, I'm trying to learn how to handle audio at a fairly low level with Audio Queue Services. I have been programming in memory-managed languages for quite a while, and have just completed the C programming tutorial by VTC (2007). This has left me comfortable with pointers and memory allocation, but the Apple documentation still leaves me wanting a simpler implementation and explanation. Maybe I need to learn Objective-C and Cocoa better. I have heard that this book is good: Cocoa Programming for Mac OS X (3rd Edition). Could someone suggest a learning path that will help me get a better understanding of working with audio on the iPhone? I want to be able to play MP3 files back and also alter their pitch as they are playing. I am prepared for the possibility that I may have to temporarily convert the MP3 files into PCM to do things like that to them. Thanks everyone.

    Read the article

  • Which audio library to use?

    - by Jeb
    I want to build a .NET application for processing audio, and distribute it using ClickOnce deployment. I need access to a raw audio pipeline. Which audio library should I be using? I've heard the managed libraries for DirectSound are a dead end. I need as little as possible to be installed on the client's machine. Anything outside of the ClickOnce process isn't going to work. NAudio might be a possibility, but isn't there potentially a separate driver install? There's also SlimDX. It's a shame: the managed DirectX libraries seem to work nicely, and from what I've read, DirectX can be included in the ClickOnce install.

    Read the article

  • Unexpected behavior with AudioQueueServices callback while recording audio

    - by rcw3
    I'm recording a continuous stream of data using Audio Queue Services. It is my understanding that the callback will only be called when the buffer fills with data. In practice, the first callback has a full buffer, the 2nd is 3/4 full, the 3rd is full, the 4th is 3/4 full, and so on. These buffers are 8000 packets (recording 8 kHz audio), so I should be getting back 1 s of audio in the callback each time. I've checked that my audio queue buffer size is correct (and the behavior partly bears that out). What am I doing wrong? Should I be passing a different RunLoop to AudioQueueNewInput? I tried, but this didn't seem to make a difference... By the way, if I run under the debugger, each callback is full with 8000 samples, making me think this is a threading/timing issue.
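
    For reference, a sketch of the arithmetic behind the "8000 packets = 1 s" expectation (just the size calculation, not the Core Audio call): for uncompressed linear PCM one packet is one frame, so a one-second buffer is sample rate × channels × bytes per sample.

        def buffer_bytes(seconds, sample_rate, channels=1, bits_per_sample=16):
            """Byte size of a linear-PCM audio queue buffer covering `seconds` of audio."""
            return int(seconds * sample_rate * channels * bits_per_sample // 8)

        print(buffer_bytes(1.0, 8000))   # 16000 bytes, i.e. 8000 16-bit mono packets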

    Read the article

  • Link to audio in XHTML/EPUB

    - by wxs
    I'm looking into synchronizing an ebook in epub format (so the content is in XHTML) to an audio file. I'm thinking of putting something along the lines of: <a class="audiolink" href="sound.ogg?t=1093"></a> into the body of the document, and then have a custom epub reader that recognizes those tags and synchronizes the audio accordingly. That does seem like a bit of a hack to me though, especially the use of a special class name. Does anyone have any pointers to how this may be done in a more standards-compliant manner (or somewhere where it has been done before)? Ebooks with audio annotation seem like an idea that may already be out there.
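
    On the standards question: EPUB 3 addresses exactly this with Media Overlays, which use SMIL documents to tie audio clips (with clipBegin/clipEnd offsets) to elements in the XHTML, so that may be worth a look before inventing a class name. If the custom-reader route is kept, pulling the cues out of the markup is straightforward; below is a rough Python sketch of that part (the class name and t= parameter are the ones proposed in the question, not anything standard).

        import xml.etree.ElementTree as ET
        from urllib.parse import urlparse, parse_qs

        XHTML = ('<body xmlns="http://www.w3.org/1999/xhtml">'
                 '<a class="audiolink" href="sound.ogg?t=1093"></a></body>')

        def audio_cues(xhtml):
            """Yield (audio file, offset in seconds) for each audiolink anchor."""
            for el in ET.fromstring(xhtml).iter():
                if el.tag.rsplit("}", 1)[-1] == "a" and el.get("class") == "audiolink":
                    href = urlparse(el.get("href"))
                    yield href.path, int(parse_qs(href.query).get("t", ["0"])[0])

        print(list(audio_cues(XHTML)))   # [('sound.ogg', 1093)]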

    Read the article

  • Trying to build automatic audio-conferencing capability into a WebApp

    - by Keller
    Hey all, I'm working with a team of relatively novice programmers, and we are trying to create a site that will have audio-conferencing capabilities such that whenever someone visits the page, they will immediately have audio-conferencing capabilities with everyone else on the page (5 people max). Can anyone point us in a general direction? Should we be looking into building a custom app, leveraging audio conferencing software, or trying to mimic a webex program? Would Adobe Stratus be useful in getting this kind of functionality? Does anyone have any ideas about how we would design something like this on a macro level? Sorry for the noobish question, but any guidance would be deeply appreciated. Thanks, Keller

    Read the article

  • Seeking through a streamed MP3 file with HTML5 <audio> tag

    - by Kyle Slattery
    Hopefully someone can help me out with this. I'm playing around with a node.js server that streams audio to a client, and I want to create an HTML5 player. Right now, I'm streaming the code from node using chunked encoding, and if you go directly to the URL, it works great. What I'd like to do is embed this using the HTML5 <audio> tag, like so: <audio src="http://server/stream?file=123"> where /stream is the endpoint for the node server to stream the MP3. The HTML5 player loads fine in Safari and Chrome, but it doesn't allow me to seek, and Safari even says it's a "Live Broadcast". In the headers of /stream, I include the file size and file type, and the response gets ended properly. Any thoughts on how I could get around this? I certainly could just send the whole file at once, but then the player would wait until the whole thing is downloaded--I'd rather stream it.
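
    The usual culprit is byte-range support: Safari and Chrome generally only enable seeking (and stop treating the source as a live broadcast) when the server answers Range requests with 206 Partial Content plus Accept-Ranges and Content-Range headers. Below is a minimal sketch of that behaviour using Python's http.server rather than the poster's Node endpoint, purely to show the headers involved; the file name is a placeholder, and suffix ranges (bytes=-N) are ignored for brevity.

        import os
        import re
        from http.server import BaseHTTPRequestHandler, HTTPServer

        MEDIA_FILE = "audio.mp3"    # placeholder for the file behind /stream?file=123

        class RangeHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                size = os.path.getsize(MEDIA_FILE)
                start, end = 0, size - 1
                match = re.match(r"bytes=(\d+)-(\d*)", self.headers.get("Range", ""))
                if match:
                    start = int(match.group(1))
                    if match.group(2):
                        end = int(match.group(2))
                    self.send_response(206)                       # Partial Content
                    self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
                else:
                    self.send_response(200)
                self.send_header("Accept-Ranges", "bytes")        # tells the player it can seek
                self.send_header("Content-Type", "audio/mpeg")
                self.send_header("Content-Length", str(end - start + 1))
                self.end_headers()
                with open(MEDIA_FILE, "rb") as f:                 # send only the requested slice
                    f.seek(start)
                    self.wfile.write(f.read(end - start + 1))

        if __name__ == "__main__":
            HTTPServer(("", 8000), RangeHandler).serve_forever()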

    Read the article

  • Suggestion for creating custom sound recognition software to toggle audio

    - by Parrot owner
    I need to develop a program that toggles a particular audio track on or off when it recognizes a parrot scream or screech. The software would need to recognize a particular range of sounds and allow some variations in the range (as a parrot likely won't replicate its screeches EXACTLY each time). Example: Bird screeches, no audio. Bird stops screeching for five seconds, audio track praising the bird plays. Regular chattering needs to be ignored completely, as it is not to be discouraged. I've heard of Java libraries that have speech recognition with dictionaries built in, but the software would need to be taught the particular sounds that my particular parrot makes - not words or any random bird sound. In addition, as I mentioned above, it would need to allow for slight variation in the sound, as the screech will likely never be 100% identical to the recorded version. What would be the best way to go about this/what language should I look into?
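
    One way to avoid full speech recognition entirely (a sketch only; the band and threshold below are guesses to be tuned against recordings of the actual bird): screeches tend to concentrate their energy in a high-frequency band, while regular chatter spreads it out, so a simple per-frame spectral check plus a five-second quiet timer may be enough. Python with NumPy, assuming 16-bit mono WAV input:

        import wave
        import numpy as np

        SCREECH_BAND = (2000.0, 8000.0)   # Hz; a guess, tune for the bird
        SCREECH_RATIO = 0.25              # fraction of frame energy in that band that counts

        def is_screech(samples, rate):
            """True if most of this frame's spectral energy sits in the screech band."""
            spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
            freqs = np.fft.rfftfreq(len(samples), 1.0 / rate)
            band = (freqs >= SCREECH_BAND[0]) & (freqs <= SCREECH_BAND[1])
            return spectrum[band].sum() / (spectrum.sum() + 1e-9) > SCREECH_RATIO

        def scan(path, frame_ms=50, quiet_target_ms=5000):
            with wave.open(path, "rb") as wav:
                rate = wav.getframerate()
                frame_len = int(rate * frame_ms / 1000)
                quiet_ms = 0
                while True:
                    raw = wav.readframes(frame_len)
                    if len(raw) < frame_len * 2:          # 16-bit mono frames
                        break
                    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
                    if is_screech(samples, rate):
                        quiet_ms = 0
                        print("screech detected: keep praise track off")
                    else:
                        quiet_ms += frame_ms
                        if quiet_ms >= quiet_target_ms:
                            print("five quiet seconds: play praise track")
                            quiet_ms = 0

        scan("parrot_session.wav")   # placeholder recording; live capture would feed frames the same way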

    Read the article

  • Server-side Audio Editor

    - by Kristen
    I am looking for an audio editor that we can use server side (ASP + IIS) We want users to be able to upload an audio file, and then offer a 10 second teaser clip to other users for download. Ideally I would like our application to be able to specify Input and Output Filename, Start and End time (or Duration), and be able to fade-in and fade-out, and equalise the volume. Maybe some audio editors have a batch edit facility, and it would just be a question of installing on the server? All the keywords I have tried putting into Google have led me on a wild goose chase, hopefully someone can help me with suggestions. Thanks.
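
    The batch-edit idea is probably the easiest path: the heavy lifting (cut, fade, normalise) can be done by ffmpeg or a thin wrapper around it, invoked by the ASP page as a command-line job after each upload. A sketch of that step using pydub, a Python wrapper that needs ffmpeg installed on the server; the file names are placeholders:

        from pydub import AudioSegment
        from pydub.effects import normalize

        def make_teaser(src, dst, start_ms=0, duration_ms=10_000, fade_ms=500):
            audio = AudioSegment.from_file(src)               # input format inferred by ffmpeg
            teaser = audio[start_ms:start_ms + duration_ms]   # the 10-second slice
            teaser = normalize(teaser)                        # even out the volume
            teaser = teaser.fade_in(fade_ms).fade_out(fade_ms)
            teaser.export(dst, format="mp3")

        make_teaser("upload_1234.wav", "teaser_1234.mp3")     # placeholder file names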

    Read the article

  • iPhone Xcode - Best way to control audio from several view controllers

    - by Are Refsdal
    Hi, I am pretty new to iPhone programming. I have a navBar with three views, and I need to control audio from all of them. I only want one audio stream to play at a time. I was thinking it would be smart to let my AppDelegate hold an instance of my audio-playing class and let the three views use that instance to control the audio. My problem is that I don't know how my views can reach the audio-playing class in my AppDelegate. Is this the best approach and, if so, how? Is there a better way?

    Read the article

  • How to have game audio loop at a certain point

    - by Essential
    I have a storm in my game, so I've made an ambient audio file which slowly grows into a storm as rain fades in, and which then becomes a loopable storm audio file. Here is how I've done it:

        // Play intro clip and merge into main loop
        var introTime = stormIntro.length;
        AudioSource.PlayClipAtPoint( stormIntro, Vector3.zero, 0.7 );
        Invoke( "StormMusic", introTime );

    The way I'm currently trying to do it is to get the length of the intro clip, play it, and then invoke the loop to begin after the intro's length has elapsed. This kinda works, but not really, because there's occasionally a gap between the two. So how can I do it so the transition is seamless?

    Read the article

  • Outputting audio stream into microphone

    - by Brap
    Hey everyone. Is there a way of outputting audio from my program and redirecting that stream to the system's microphone input 'layer'? I understand this might require some low-level calls being P/Invoked, but are there any articles that might help me? For example, if I were to feed the output audio stream of my application into Windows' Sound Recorder program, it would think that the audio is coming from a microphone and thus record it. I don't want to record a stream, just output it to the device's microphone input. Thanks for any ideas.

    Read the article

  • What tool can record multiple parallel stream to files of defined size?

    - by Hauke
    I would like to record multiple audio web streams like this one in parallel to MP3 or WMA files for a duration of several days. I would like to be able to limit the file size or the duration stored in each file. The tool can be for any operating system. I do not need anything fancy like song recognition, metadata or silence detection. I haven't been able to find such a piece of software so far. Example: tapping channel "News" results in: News-090902-0000-0100.mp3, News-090902-0100-0200.mp3, etc. Who knows of a tool that can do this? It can be commercial software. Link in full text: 88.84.145.116:8000/listen.pls
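
    If no ready-made tool turns up, the job is small enough to script. A rough Python sketch (one thread per channel, rotating to a new file whenever the size limit is hit), assuming the .pls entry has been resolved to the plain HTTP stream URL behind it; the URL and size limit below are placeholders:

        import datetime
        import threading
        import requests

        CHANNELS = {"News": "http://88.84.145.116:8000/stream"}   # placeholder stream URL
        MAX_BYTES = 50 * 1024 * 1024                               # rough per-file size cap

        def record(name, url):
            while True:                                            # one file per iteration
                stamp = datetime.datetime.now().strftime("%y%m%d-%H%M")
                written = 0
                with requests.get(url, stream=True, timeout=30) as resp, \
                     open(f"{name}-{stamp}.mp3", "wb") as out:
                    for chunk in resp.iter_content(chunk_size=8192):
                        out.write(chunk)
                        written += len(chunk)
                        if written >= MAX_BYTES:
                            break                                  # close and start the next file

        threads = [threading.Thread(target=record, args=item, daemon=True)
                   for item in CHANNELS.items()]
        for t in threads:
            t.start()
        for t in threads:
            t.join()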

    Read the article

  • HDMI AC-3 audio broke after upgrading from 11.10 to 12.04.3

    - by Jim LastName
    I just updated my Mythbuntu 11.10 to 12.04.3. Now, when I try to play 5.1 content (a ripped DVD), my TV (and receiver) plays a "chattering" sound. I check my receiver and the Dolby Digital light isn't on; it's in PCM mode. So either the audio is being sent as AC-3 but the TV and receiver think it's PCM, or the AC-3 audio got converted to multichannel PCM and they can't handle it. My setup: HDMI cable from the HTPC to the TV; the TV has an S/PDIF output to my receiver. I know the TV sends AC-3 audio out correctly because I see the Dolby Digital light come on when I view a digital TV channel and the PCM light come on when I view an old analog channel. If I connect S/PDIF from my HTPC straight to my receiver, the Dolby Digital light comes on and it decodes the audio just fine. It's just not sending it right over HDMI. Now for some hints to the issue: I noticed in the MythTV audio setup that when I select alsa:hdmi... the description only lists 2-channel PCM capability. speaker-test -Dhdmi:PCH -c6 errors about a bad channel count (only -c2 works). Finally, I tried VLC and it makes the same chattering sound. These all make me think this isn't a MythTV issue; it's something lower than that. I think the best way to troubleshoot this is to start at the drivers and check each layer, one at a time, all the way up to ALSA. I just don't know what the layers are and how to do it. So I need to find an audio troubleshooting guide to assist me. Or, if one doesn't exist, I'd appreciate some steps. Thanks much, Jim

    Read the article

  • Audio Recording with Appcelerator on Android

    - by user951793
    I would like to record audio and then send the file to a webserver. I am using Titanium 1.8.2 on Win7. The application I am working on is both for Android and iPhone, and I do realise that Titanium.Media.AudioRecorder and Titanium.Media.AudioPlayer are for this purpose. Let's concentrate on Android for a while. On that platform you can achieve audio recording by creating an intent and then handling the file in your application. See more here. This implementation has a couple of drawbacks: you cannot stay in your application (as a native audio recorder will start up), and you only get back a URI from the recorder and not the actual file. Another implementation is done by Codeboxed. This module is for recording audio without using intents. The only problem is that I could not get this working (along with other people), and the Codeboxed team has not responded to anyone since last year. So my question is: do you know how to record audio on Android without using an intent? Thanks in advance. Edit: My problem with Codeboxed's module: I downloaded the module from here. I copied the zip file into my project directory. I edited my manifest file with:

        <modules>
            <module platform="android" version="0.1">com.codeboxed.audiorecorder</module>
        </modules>

    When I try to compile I receive the following error:

        [DEBUG] appending module: com.mwaysolutions.barcode.TitaniumBarcodeModule
        [DEBUG] module_id = com.codeboxed.audiorecorder
        [ERROR] The 'apiversion' for 'com.codeboxed.audiorecorder' in the module manifest is not a valid value. Please use a version of the module that has an 'apiversion' value of 2 or greater set in it's manifest file
        [DEBUG] touching tiapp.xml to force rebuild next time: E:\TitaniumProjects\MyProject\tiapp.xml

    I can manage to get the module recognised by editing the module's manifest file to this:

        version: 0.1
        description: My module
        author: Your Name
        license: Specify your license
        copyright: Copyright (c) 2011 by Your Company
        apiversion: 2
        name: audiorecorder
        moduleid: com.codeboxed.audiorecorder
        guid: 747dce68-7d2d-426a-a527-7c67f4e9dfad
        platform: android
        minsdk: 1.7.0

    But then again I receive an error on compiling:

        [DEBUG] "C:\Program Files\Java\jdk1.6.0_21\bin\javac.exe" -encoding utf8 -classpath "C:\Program Files
(x86)\Android\android-sdk\platforms\android-8\android.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-media.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-platform.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\titanium.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\thirdparty.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\jaxen-1.1.1.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-locale.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-app.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-gesture.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-analytics.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\kroll-common.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-network.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\ti-commons-codec-1.3.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-ui.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-database.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\kroll-v8.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-xml.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\android-support-v4.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-filesystem.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\modules\titanium-android.jar;E:\TitaniumProjects\MyProject\modules\android\com.mwaysolutions.barcode\0.3\barcode.jar;E:\TitaniumProjects\MyProject\modules\android\com.mwaysolutions.barcode\0.3\lib\zxing.jar;E:\TitaniumProjects\MyProject\modules\android\com.codeboxed.audiorecorder\0.1\audiorecorder.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\kroll-apt.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\lib\titanium-verify.jar;C:\Users\Gabor\AppData\Roaming\Titanium\mobilesdk\win32\1.8.2\android\lib\titanium-debug.jar" -d E:\TitaniumProjects\MyProject\build\android\bin\classes -proc:none -sourcepath E:\TitaniumProjects\MyProject\build\android\src -sourcepath E:\TitaniumProjects\MyProject\build\android\gen @c:\users\gabor\appdata\local\temp\tmpbqmjuy [ERROR] Error(s) compiling generated Java code [ERROR] E:\TitaniumProjects\MyProject\build\android\gen\com\petosoft\myproject\MyProjectApplication.java:44: cannot find symbol symbol : class AudiorecorderBootstrap location: package com.codeboxed.audiorecorder runtime.addExternalModule("com.codeboxed.audiorecorder", com.codeboxed.audiorecorder.AudiorecorderBootstrap.class); ^ 1 error

    Read the article

  • Is it possible to access raw iphone audio output?

    - by Peter Hall
    Is it possible to access raw PCM data from the iPhone audio output? I know I can embed an MP3 and use an AudioUnit. But if the user is playing music in the background from their iTunes library, is it possible to access that audio data? This is for an app that shows visual effects which react to the music. From what I can tell, it isn't possible, but that's just from a lack of finding any information at all, rather than actual confirmation that it can't be done. If it isn't possible to access the audio stream from the iPod, is it possible to access raw audio output from the Media Player inside an app, or is it pretty much not permitted to access raw audio data from the iTunes library at all? EDIT: I found this question: iOS - Access output audio from background program, which says I can't access the audio from a background app. But is it possible to get the audio data from the iTunes library if I play it inside the app?

    Read the article

  • Recording Audio from WMP Stream

    - by Jonathan Sampson
    I'm sitting here listening to a radio show that is being broadcast live over an internet stream, but I would like to keep bits and pieces for later enjoyment. Is there a way I can easily record streams in real time? I should note also (not sure if it's necessary or not) that this stream requires me to log in first before listening.

    Read the article

  • Driver to split audio to 2 different devices?

    - by ThantiK
    I recently bought one of these USB headsets against my own better judgement, and it's really costing me my sanity at this point. Previously, when using a standard jack, I just used a splitter so I could send audio to both my TV and my headset; if I wanted to use only one at a time, I could just turn the TV off or turn the headset volume down. Now, along comes this USB headset, and I find that I can't choose to pipe the sound of one application to two different devices on Windows. How can I solve this? Does any software out there exist for this purpose?

    Read the article
