Search Results

Search found 723 results on 29 pages for 'speech bubble'.

Page 10/29 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Manipulating a NSTextField via AppleScript

    - by Garry
    A little side project I'm working on is a digital life assistant, much like project JARVIS. What I'm trying to do is speak to my Mac, have my words translated to text and then have the text interpreted by my program. Currently, my app is very simple, consisting of a single window containing a single wrapped NSTextView. Using MacSpeech Dictate, when I say the custom command "Jeeves", MacSpeech ensures that my app is frontmost, highlights any text in the text field and clears it, then presses the Return key to trigger the textDidEndEditing method of NSTextField. This is done via AppleScript. MacSpeech then switches to dictation mode and the next sentence I say will appear in the NSTextField. What I can't figure out is how to signify that I have finished saying a command to my program. I could simply say another keyword like "execute" or something similar that would send an AppleScript return keystroke to my app (thereby triggering the textDidEndEditing event), but this is cumbersome. Is there a notification that happens when text is pasted into an NSTextField? Would a timer work that would fire after maybe three seconds once my program becomes frontmost (three seconds should be sufficient for me to say a command)? Thanks,
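
    A minimal sketch of the timer idea in Swift (the question itself is Objective-C-era Cocoa; commandField is a hypothetical outlet to the dictation field): restart a short countdown every time the field's text changes, and treat the contents as a complete command once dictation has been quiet for a few seconds.

      import Cocoa

      final class CommandListener: NSObject {
          let commandField: NSTextField              // hypothetical outlet to the dictation field
          private var idleTimer: Timer?

          init(field: NSTextField) {
              commandField = field
              super.init()
              // Restart the countdown whenever dictated text lands in the field.
              NotificationCenter.default.addObserver(self,
                  selector: #selector(textChanged(_:)),
                  name: NSControl.textDidChangeNotification,
                  object: field)
          }

          @objc private func textChanged(_ note: Notification) {
              idleTimer?.invalidate()
              // Assume three seconds of silence means the command is complete.
              idleTimer = Timer.scheduledTimer(withTimeInterval: 3.0, repeats: false) { [weak self] _ in
                  guard let self = self, !self.commandField.stringValue.isEmpty else { return }
                  self.handle(command: self.commandField.stringValue)
              }
          }

          private func handle(command: String) {
              print("Command received: \(command)")  // interpret the dictated sentence here
              commandField.stringValue = ""          // clear the field for the next command
          }
      }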

    Read the article

  • Call RecognizerIntent from service

    - by Tobia Loschiavo
    Hi, I am working on an Android service. I need to call RecognizerIntent from the service in order to use the recognized text in the service. There is no startActivityForResult() method in the Service class, so I am having trouble understanding how to achieve this. Is it possible? Many thanks

    Read the article

  • Queries with developing a Voice to Text Based Software

    - by harigm
    I am looking for software which converts voice to text. I can find software which easily converts English speech to English text. But my intention is that, whatever the language of the speech the system receives, it should give the output as English text. Is it possible to get this kind of software? If yes, is there any open source available to help me use it? If not, is it feasible to develop this kind of software, and can anyone guide me on how and where to begin? I am looking for Windows-based software.

    Read the article

  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's 'notify me when a contact comes online'), but it is pronouncing some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try:

      say "Hi, Joel Spolsky"

    The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that says "Pronounce Spolsky like this", rather than try to teach it new linguistic rules. I bet there is a way, since it can pronounce "iphone" as Apple wants.

    Update - After some research, here's what I've learned:

    - Text-to-speech is split between turning the text into phonemes, and then the phonemes are turned into audio using a voice. Changing the voice doesn't affect the phonemes.
    - The Speech Synthesis Manager has some functions for turning text to phonemes, and a method for registering a speech dictionary that will add new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML.
    - Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code signed by Apple - changing them may prevent some programs from working. The files are: PrefixDictionary, CartNames, CartLite, SymbolDictionary, Homophones.
    - There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed.

    My guesses are:

    - say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option to take the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say.
    - Skype probably uses the Speech Synthesis Manager directly, leaving no hooks to change the way my friends' names are pronounced, other than spelling them phonetically, which is silly.

    The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion:

      #!/bin/sh
      echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say

    Finally, some fun command line stuff:

      # Apple is weird
      sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump

      # Get too much information about what files are being opened
      sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

      # Just fun
      say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky"
      echo "scale=1000; 4*a(1)" | bc -l | say

    Read the article

  • Creating a "chat bubble" on the iPhone, like Tweetie.

    - by Coocoo4Cocoa
    Just curious, did I overlook something in the API for displaying a chat bubble type image like the one found in the iPhone's SMS application? There are a few applications out there whose bubbles look identical to the iPhone's, and I'm wondering whether they're using a native widget or their own image. This is also seen in the Tweetie application, where the content of the tweets is displayed.
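
    There is no public "chat bubble" widget; apps like Tweetie typically stretch a small bubble image behind a label. A minimal sketch in modern Swift, assuming a hypothetical "bubble" image asset with roughly 20-point corners:

      import UIKit

      // Build a bubble view sized to its text by stretching a cap-inset image behind a label.
      func makeBubble(text: String, maxWidth: CGFloat) -> UIView {
          let label = UILabel()
          label.numberOfLines = 0
          label.text = text
          let textSize = label.sizeThatFits(CGSize(width: maxWidth - 30,
                                                   height: .greatestFiniteMagnitude))

          // Cap insets keep the corners and tail crisp while the middle of the image stretches.
          let bubbleImage = UIImage(named: "bubble")?
              .resizableImage(withCapInsets: UIEdgeInsets(top: 17, left: 21, bottom: 17, right: 21))

          let background = UIImageView(image: bubbleImage)
          background.frame = CGRect(x: 0, y: 0,
                                    width: textSize.width + 30, height: textSize.height + 20)
          label.frame = background.bounds.insetBy(dx: 15, dy: 10)
          background.addSubview(label)
          return background
      }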

    Read the article

  • Reading part of system file contents with PHP

    - by zx
    Hi, I have a config file in etc/ called 1.conf. Here are the contents:

      [2-main]
      exten => s,1,Macro(speech,"hi {$VAR1} how is your day going?")
      exten => s,4,Macro(dial,2,555555555)
      exten => s,2,Macro(speech,"lkqejqe;j")
      exten => s,3,Macro(speech,"hi there")
      exten => s,5,Macro(speech,"this is a test ")
      exten => s,6,Macro(speech,"testing 2")
      exten => s,7,Macro(speech,"this is a test")
      exten => 7,1,Goto(2-tester2,s,1)
      exten => 1,1,Goto(2-aa,s,1)

      [2-tester]

      [2-aa]
      exten => 1,1,Goto(2-main,s,1)

    How can I read the content between the quotes of each speech macro? For example, from exten => s,6,Macro(speech,"testing 2") I just want to get "testing 2". Thank you in advance!
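
    The question is about PHP (where preg_match_all with the same pattern would do the job), but the extraction itself is just a regular expression with one capture group around the quoted argument of each speech macro. A sketch of that pattern, written here in Swift:

      import Foundation

      // Pull the quoted text out of every Macro(speech,"...") line in the config file.
      let conf = try String(contentsOfFile: "/etc/1.conf", encoding: .utf8)
      let regex = try NSRegularExpression(pattern: #"Macro\(speech,"([^"]*)"\)"#)
      let whole = NSRange(conf.startIndex..., in: conf)

      let phrases = regex.matches(in: conf, options: [], range: whole).compactMap { match -> String? in
          guard let range = Range(match.range(at: 1), in: conf) else { return nil }
          return String(conf[range])                 // e.g. "testing 2"
      }
      print(phrases)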

    Read the article

  • Conceal packet loss in PCM stream

    - by ZeroDefect
    I am looking to use 'Packet Loss Concealment' to conceal lost PCM frames in an audio stream. Unfortunately, I cannot find a library that is accessible without all the licensing restrictions and code bloat (...up for some suggestions though). I have located some GPL code written by Steve Underwood for the Asterisk project which implements PLC. There are several limitations; although, as Steve suggests in his code, his algorithm can be applied to different streams with a bit of work. Currently, the code works with 8kHz 16-bit signed mono streams. Variations of the code can be found through a simple search of Google Code Search. My hope is that I can adapt the code to work with other streams. Initially, the goal is to adjust the algorithm for 8+ kHz, 16-bit signed, multichannel audio (all in a C++ environment). Eventually, I'm looking to make the code available under the GPL license in hopes that it could be of benefit to others... Attached is the code below with my efforts. The code includes a main function that will "drop" a number of frames with a given probability. Unfortunately, the code does not quite work as expected. I'm receiving EXC_BAD_ACCESS when running in gdb, but I don't get a trace from gdb when using 'bt' command. Clearly, I'm trampimg on memory some where but not sure exactly where. When I comment out the *amdf_pitch* function, the code runs without crashing... int main (int argc, char *argv[]) { std::ifstream fin("C:\\cc32kHz.pcm"); if(!fin.is_open()) { std::cout << "Failed to open input file" << std::endl; return 1; } std::ofstream fout_repaired("C:\\cc32kHz_repaired.pcm"); if(!fout_repaired.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } std::ofstream fout_lossy("C:\\cc32kHz_lossy.pcm"); if(!fout_lossy.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } audio::PcmConcealer Concealer; Concealer.Init(1, 16, 32000); //Generate random numbers; srand( time(NULL) ); int value = 0; int probability = 5; while(!fin.eof()) { char arr[2]; fin.read(arr, 2); //Generate's random number; value = rand() % 100 + 1; if(value <= probability) { char blank[2] = {0x00, 0x00}; fout_lossy.write(blank, 2); //Fill in data; Concealer.Fill((int16_t *)blank, 1); fout_repaired.write(blank, 2); } else { //Write data to file; fout_repaired.write(arr, 2); fout_lossy.write(arr, 2); Concealer.Receive((int16_t *)arr, 1); } } fin.close(); fout_repaired.close(); fout_lossy.close(); return 0; } PcmConcealer.hpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #ifndef __PCMCONCEALER_HPP__ #define __PCMCONCEALER_HPP__ /** 1. What does it do? The packet loss concealment module provides a suitable synthetic fill-in signal, to minimise the audible effect of lost packets in VoIP applications. It is not tied to any particular codec, and could be used with almost any codec which does not specify its own procedure for packet loss concealment. Where a codec specific concealment procedure exists, the algorithm is usually built around knowledge of the characteristics of the particular codec. It will, therefore, generally give better results for that particular codec than this generic concealer will. 2. How does it work? While good packets are being received, the plc_rx() routine keeps a record of the trailing section of the known speech signal. 
If a packet is missed, plc_fillin() is called to produce a synthetic replacement for the real speech signal. The average mean difference function (AMDF) is applied to the last known good signal, to determine its effective pitch. Based on this, the last pitch period of signal is saved. Essentially, this cycle of speech will be repeated over and over until the real speech resumes. However, several refinements are needed to obtain smooth pleasant sounding results. - The two ends of the stored cycle of speech will not always fit together smoothly. This can cause roughness, or even clicks, at the joins between cycles. To soften this, the 1/4 pitch period of real speech preceeding the cycle to be repeated is blended with the last 1/4 pitch period of the cycle to be repeated, using an overlap-add (OLA) technique (i.e. in total, the last 5/4 pitch periods of real speech are used). - The start of the synthetic speech will not always fit together smoothly with the tail of real speech passed on before the erasure was identified. Ideally, we would like to modify the last 1/4 pitch period of the real speech, to blend it into the synthetic speech. However, it is too late for that. We could have delayed the real speech a little, but that would require more buffer manipulation, and hurt the efficiency of the no-lost-packets case (which we hope is the dominant case). Instead we use a degenerate form of OLA to modify the start of the synthetic data. The last 1/4 pitch period of real speech is time reversed, and OLA is used to blend it with the first 1/4 pitch period of synthetic speech. The result seems quite acceptable. - As we progress into the erasure, the chances of the synthetic signal being anything like correct steadily fall. Therefore, the volume of the synthesized signal is made to decay linearly, such that after 50ms of missing audio it is reduced to silence. - When real speech resumes, an extra 1/4 pitch period of sythetic speech is blended with the start of the real speech. If the erasure is small, this smoothes the transition. If the erasure is long, and the synthetic signal has faded to zero, the blending softens the start up of the real signal, avoiding a kind of "click" or "pop" effect that might occur with a sudden onset. 3. How do I use it? Before audio is processed, call plc_init() to create an instance of the packet loss concealer. For each received audio packet that is acceptable (i.e. not including those being dropped for being too late) call plc_rx() to record the content of the packet. Note this may modify the packet a little after a period of packet loss, to blend real synthetic data smoothly. When a real packet is not available in time, call plc_fillin() to create a sythetic substitute. That's it! */ /*! Minimum allowed pitch (66 Hz) */ #define PLC_PITCH_MIN(SAMPLE_RATE) ((double)(SAMPLE_RATE) / 66.6) /*! Maximum allowed pitch (200 Hz) */ #define PLC_PITCH_MAX(SAMPLE_RATE) ((SAMPLE_RATE) / 200) /*! Maximum pitch OLA window */ //#define PLC_PITCH_OVERLAP_MAX(SAMPLE_RATE) ((PLC_PITCH_MIN(SAMPLE_RATE)) >> 2) /*! The length over which the AMDF function looks for similarity (20 ms) */ #define CORRELATION_SPAN(SAMPLE_RATE) ((20 * (SAMPLE_RATE)) / 1000) /*! History buffer length. The buffer must also be at leat 1.25 times PLC_PITCH_MIN, but that is much smaller than the buffer needs to be for the pitch assessment. */ //#define PLC_HISTORY_LEN(SAMPLE_RATE) ((CORRELATION_SPAN(SAMPLE_RATE)) + (PLC_PITCH_MIN(SAMPLE_RATE))) namespace audio { typedef struct { /*! 
Consecutive erased samples */ int missing_samples; /*! Current offset into pitch period */ int pitch_offset; /*! Pitch estimate */ int pitch; /*! Buffer for a cycle of speech */ float *pitchbuf;//[PLC_PITCH_MIN]; /*! History buffer */ short *history;//[PLC_HISTORY_LEN]; /*! Current pointer into the history buffer */ int buf_ptr; } plc_state_t; class PcmConcealer { public: PcmConcealer(); ~PcmConcealer(); void Init(int channels, int bit_depth, int sample_rate); //Process a block of received audio samples. int Receive(short amp[], int frames); //Fill-in a block of missing audio samples. int Fill(short amp[], int frames); void Destroy(); private: int amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames); void save_history(plc_state_t *s, short *buf, int channel_index, int frames); void normalise_history(plc_state_t *s); /** Holds the states of each of the channels **/ std::vector< plc_state_t * > ChannelStates; int plc_pitch_min; int plc_pitch_max; int plc_pitch_overlap_max; int correlation_span; int plc_history_len; int channel_count; int sample_rate; bool Initialized; }; } #endif PcmConcealer.cpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #include "audio/PcmConcealer.hpp" /* We do a straight line fade to zero volume in 50ms when we are filling in for missing data. */ #define ATTENUATION_INCREMENT 0.0025 /* Attenuation per sample */ #if !defined(INT16_MAX) #define INT16_MAX (32767) #define INT16_MIN (-32767-1) #endif #ifdef WIN32 inline double rint(double x) { return floor(x + 0.5); } #endif inline short fsaturate(double damp) { if (damp > 32767.0) return INT16_MAX; if (damp < -32768.0) return INT16_MIN; return (short)rint(damp); } namespace audio { PcmConcealer::PcmConcealer() : Initialized(false) { } PcmConcealer::~PcmConcealer() { Destroy(); } void PcmConcealer::Init(int channels, int bit_depth, int sample_rate) { if(Initialized) return; if(channels <= 0 || bit_depth != 16) return; Initialized = true; channel_count = channels; this->sample_rate = sample_rate; ////////////// double min = PLC_PITCH_MIN(sample_rate); int imin = (int)min; double max = PLC_PITCH_MAX(sample_rate); int imax = (int)max; plc_pitch_min = imin; plc_pitch_max = imax; plc_pitch_overlap_max = (plc_pitch_min >> 2); correlation_span = CORRELATION_SPAN(sample_rate); plc_history_len = correlation_span + plc_pitch_min; ////////////// for(int i = 0; i < channel_count; i ++) { plc_state_t *t = new plc_state_t; memset(t, 0, sizeof(plc_state_t)); t->pitchbuf = new float[plc_pitch_min]; t->history = new short[plc_history_len]; ChannelStates.push_back(t); } } void PcmConcealer::Destroy() { if(!Initialized) return; while(ChannelStates.size()) { plc_state_t *s = ChannelStates.at(0); if(s) { if(s->history) delete s->history; if(s->pitchbuf) delete s->pitchbuf; memset(s, 0, sizeof(plc_state_t)); delete s; } ChannelStates.erase(ChannelStates.begin()); } ChannelStates.clear(); Initialized = false; } //Process a block of received audio samples. 
int PcmConcealer::Receive(short amp[], int frames) { if(!Initialized) return 0; int j = 0; for(int k = 0; k < ChannelStates.size(); k++) { int i; int overlap_len; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples) { /* Although we have a real signal, we need to smooth it to fit well with the synthetic signal we used for the previous block */ /* The start of the real data is overlapped with the next 1/4 cycle of the synthetic data. */ pitch_overlap = s->pitch >> 2; if (pitch_overlap > frames) pitch_overlap = frames; gain = 1.0 - s->missing_samples * ATTENUATION_INCREMENT; if (gain < 0.0) gain = 0.0; new_step = 1.0/pitch_overlap; old_step = new_step*gain; new_weight = new_step; old_weight = (1.0 - new_step)*gain; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->pitchbuf[s->pitch_offset] + new_weight * amp[index]); if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->missing_samples = 0; } save_history(s, amp, j, frames); j++; } return frames; } //Fill-in a block of missing audio samples. int PcmConcealer::Fill(short amp[], int frames) { if(!Initialized) return 0; int j =0; for(int k = 0; k < ChannelStates.size(); k++) { short *tmp = new short[plc_pitch_overlap_max]; int i; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; short *orig_amp; int orig_len; orig_amp = amp; orig_len = frames; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples == 0) { // As the gap in real speech starts we need to assess the last known pitch, //and prepare the synthetic data we will use for fill-in normalise_history(s); s->pitch = amdf_pitch(plc_pitch_min, plc_pitch_max, s->history + plc_history_len - correlation_span - plc_pitch_min, j, correlation_span); // We overlap a 1/4 wavelength pitch_overlap = s->pitch >> 2; // Cook up a single cycle of pitch, using a single of the real signal with 1/4 //cycle OLA'ed to make the ends join up nicely // The first 3/4 of the cycle is a simple copy for (i = 0; i < s->pitch - pitch_overlap; i++) s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]; // The last 1/4 of the cycle is overlapped with the end of the previous cycle new_step = 1.0/pitch_overlap; new_weight = new_step; for ( ; i < s->pitch; i++) { s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]*(1.0 - new_weight) + s->history[plc_history_len - 2*s->pitch + i]*new_weight; new_weight += new_step; } // We should now be ready to fill in the gap with repeated, decaying cycles // of what is in pitchbuf // We need to OLA the first 1/4 wavelength of the synthetic data, to smooth // it into the previous real data. To avoid the need to introduce a delay // in the stream, reverse the last 1/4 wavelength, and OLA with that. 
gain = 1.0; new_step = 1.0/pitch_overlap; old_step = new_step; new_weight = new_step; old_weight = 1.0 - new_step; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->history[plc_history_len - 1 - i] + new_weight * s->pitchbuf[i]); new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->pitch_offset = i; } else { gain = 1.0 - s->missing_samples*ATTENUATION_INCREMENT; i = 0; } for ( ; gain > 0.0 && i < frames; i++) { int index = (i * channel_count) + j; amp[index] = s->pitchbuf[s->pitch_offset]*gain; gain -= ATTENUATION_INCREMENT; if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; } for ( ; i < frames; i++) { int index = (i * channel_count) + j; amp[i] = 0; } s->missing_samples += orig_len; save_history(s, amp, j, frames); delete [] tmp; j++; } return frames; } void PcmConcealer::save_history(plc_state_t *s, short *buf, int channel_index, int frames) { if (frames >= plc_history_len) { /* Just keep the last part of the new data, starting at the beginning of the buffer */ //memcpy(s->history, buf + len - plc_history_len, sizeof(short)*plc_history_len); int frames_to_copy = plc_history_len; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + frames - plc_history_len)) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = 0; return; } if (s->buf_ptr + frames > plc_history_len) { /* Wraps around - must break into two sections */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*(plc_history_len - s->buf_ptr)); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = plc_history_len - s->buf_ptr; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } frames -= (plc_history_len - s->buf_ptr); //memcpy(s->history, buf + (plc_history_len - s->buf_ptr), sizeof(short)*len); frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + (plc_history_len - s->buf_ptr))) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = frames; return; } /* Can use just one section */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*len); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } s->buf_ptr += frames; } void PcmConcealer::normalise_history(plc_state_t *s) { short *tmp = new short[plc_history_len]; if (s->buf_ptr == 0) return; memcpy(tmp, s->history, sizeof(short)*s->buf_ptr); memcpy(s->history, s->history + s->buf_ptr, sizeof(short)*(plc_history_len - s->buf_ptr)); memcpy(s->history + plc_history_len - s->buf_ptr, tmp, sizeof(short)*s->buf_ptr); s->buf_ptr = 0; delete [] tmp; } int PcmConcealer::amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames) { int i; int j; int acc; int min_acc; int pitch; pitch = min_pitch; min_acc = INT_MAX; for (i = max_pitch; i <= min_pitch; i++) { acc = 0; for (j = 0; j < frames; j++) { int index1 = (channel_count * (i+j)) + channel_index; int index2 = (channel_count * j) + channel_index; //std::cout << "Index 1: " << index1 << ", Index 2: " << index2 << std::endl; acc += abs(amp[index1] - amp[index2]); } if (acc < min_acc) { min_acc = acc; pitch = i; } } std::cout << "Pitch: " << pitch << std::endl; return pitch; } } P.S. - I must confess that digital audio is not my forte...

    Read the article

  • How does text-to-speech shortcut read any highlighted text?

    - by TP
    Hi, I am trying to develop a Cocoa application that needs to read highlighted text from any application. So far I cannot find a decent solution, because the accessibility API doesn't always work (e.g. Firefox). Does anyone know how this is implemented in the text-to-speech feature included in Leopard? I have found this SO question but it isn't solved.
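
    A minimal Swift sketch of the accessibility route mentioned above: ask the system-wide accessibility element for the focused UI element, then read its selected text and hand it to the speech synthesizer. (The process must be trusted for accessibility, and, as noted, some applications such as Firefox may not expose the attribute.)

      import Cocoa
      import ApplicationServices

      func selectedTextViaAccessibility() -> String? {
          let systemWide = AXUIElementCreateSystemWide()

          var focused: CFTypeRef?
          guard AXUIElementCopyAttributeValue(systemWide,
                                              kAXFocusedUIElementAttribute as CFString,
                                              &focused) == .success,
                let element = focused else { return nil }

          var selected: CFTypeRef?
          guard AXUIElementCopyAttributeValue(element as! AXUIElement,
                                              kAXSelectedTextAttribute as CFString,
                                              &selected) == .success else { return nil }
          return selected as? String
      }

      // Speak whatever is currently highlighted, e.g. from a global hotkey handler.
      if let text = selectedTextViaAccessibility() {
          _ = NSSpeechSynthesizer().startSpeaking(text)
      }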

    Read the article

  • Fix elements within objects at their position? (Prevent movement when resizing)

    - by Skadier
    I would like to know if it is possible to fix elements at their absolute position within custom elements in InDesign CS5. I created a kind of speech bubble, and I would like to place a stripline within this bubble to separate two content areas. Just a little scheme to show the desired layout as pseudo-markup :D

      <speech-bubble>
          <textbox>HEADER SECTION</textbox>
          <stripline>
          <textbox>Some other text</textbox>
      </speech-bubble>

    I created something like this, but with two separate elements which aren't connected, so I have to select both of them in order to move the whole bubble. Then I tried to connect them using Object->Paths->Create linked path, but then the stripline moves and the HEADER SECTION moves too. All in all, I would like to have a speech bubble which can be resized in order to hold more text, but resizing shouldn't make the HEADER SECTION larger or move the stripline. Hope you understand what I mean :D Thanks in advance!

    Read the article

  • Custom MKPinAnnotation callout view similar to Default MKPinAnnotation

    - by Sijo
    I want to create a custom callout bubble on an MKMapView, but I want the callout bubble to look and behave like the default one. How do I create a view that looks like the annotation in this image? I want a custom view which looks like the "Parked Location" annotation in the following image, with custom width, height, etc. I am not able to add the required details in the default bubble; that's why I am creating a custom bubble. Please help me. Thanks.
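
    One common approach, sketched in Swift: turn off the default callout and attach your own bubble view to the annotation view when it is selected. CalloutView here is a hypothetical UIView subclass that would draw the rounded background, pointer, and labels.

      import UIKit
      import MapKit

      // Hypothetical bubble view; a real one would draw the background, pointer and labels.
      final class CalloutView: UIView {}

      final class MapDelegate: NSObject, MKMapViewDelegate {

          func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
              let pin = MKPinAnnotationView(annotation: annotation, reuseIdentifier: "pin")
              pin.canShowCallout = false                      // we draw our own bubble instead
              return pin
          }

          func mapView(_ mapView: MKMapView, didSelect view: MKAnnotationView) {
              let callout = CalloutView(frame: CGRect(x: 0, y: 0, width: 240, height: 70))
              // Center the bubble just above the pin.
              callout.center = CGPoint(x: view.bounds.midX, y: -callout.bounds.height / 2)
              view.addSubview(callout)
          }

          func mapView(_ mapView: MKMapView, didDeselect view: MKAnnotationView) {
              view.subviews.compactMap { $0 as? CalloutView }.forEach { $0.removeFromSuperview() }
          }
      }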

    Read the article

  • Updating multiple Sprites - AS3 performance best practices

    - by dani
    Within the container "BubbleContainer" I have multiple "Bubble sprites". Each bubble's graphics object (a circle) is updated on a timer event. Let's say I have 50 Bubble sprites and each circle's radius should be updated with a mathematical formula. How do I organize this logic? How do I update all Bubble sprites within the BubbleContainer? (should I call a bubble.update() function or make a temporary reference to the graphics object?) Where do I put the Math logic? (as static functions?)

    Read the article

  • jquery bubblepopup inside jquery carousel

    - by sam
    Hi, I am trying to use the jQuery Bubble Popup plugin inside the jQuery carousel. The problem: say I have 10 items in the carousel and display 5 at a time, hiding the remaining 5 (which can be shown by clicking the left or right navigation). When I move the mouse over each carousel item that is visible on the screen, I can see the bubble popup. But if I move the mouse over the empty area of the screen, I still get the bubble popup, which shouldn't be visible because that particular carousel item is hidden. If I keep moving over all the hidden items, I can see their bubble popups too. Is there any way to hide the bubble popup when the carousel item is not visible on the screen?

    Read the article

  • Application Demos in UPK

    - by [email protected]
    Over the years, User Productivity Kit has expanded to include solutions to many project challenges. As of UPK 3.6.1, solutions are provided for pre and post application go-live learning, application testing, system documentation, presentation output, and more.

    New in UPK 3.6.1 are additional features that can be used effectively for application demo purposes. This can come in handy when you need to do a demo but don't want to show or can't show the live application. Maybe you're doing a presentation for a group of project stakeholders and want to focus on the business workflow implemented by the application rather than the mechanics of using it. Or possibly, you need to show the application but you're disconnected from any network preventing you from running the live application. In any of these cases, a presentation aid that represents the live application is what's needed.

    Previous versions of the UPK topic player would allow you to do this but would always show those UPK user interface elements that help a user learn the application. When you're presenting the narrative live, the UPK bubbles can be a distraction. UPK 3.6.1 provides some new features that allow you to control whether the bubbles display.

    There are two ways to hide bubbles in a topic. The first is a topic property that allows you to control bubbles across the entire topic. There are 3 settings for the Show Bubbles topic property. The default setting is Use frame settings, which allows you to control whether bubbles display on a frame by frame basis. When you choose Always, the bubbles will always display regardless of the frame setting. The final choice is Never. Choosing Never will hide every bubble in your topic with one setting change. As with Always, choosing Never will ignore the frame setting.

    The second way to control the bubbles is at the frame level. First ensure that the topic's Show Bubbles property is set to Use frame settings. Navigate to the frame on which you want to turn off the bubble and click the Display bubble for this frame button to turn off the bubble. When you play the topic, the bubble will no longer be displayed.

    Depending on your needs, you might also use another longstanding UPK feature that allows you to control whether the action area displays on a frame. Just click the Action area on/off button to toggle its display.

    I've found the frame properties to be useful beyond creating presentation aids. When creating "See It!" only topics for more advanced users, I may hide the bubbles on some of the more straightforward frames. For example, if I have a form where one needs to fill out an address, I may display the first bubble in the sequence and explain what the subsequent steps are doing. I then hide bubbles on the remaining frames, which are the more mechanical steps of entering the address.

    We'd like to hear your thoughts on this new UPK feature. Use the comments below to tell us how you've used it.

    John Zaums
    Senior Director, Product Development
    Oracle User Productivity Kit

    Read the article

  • Should I throw my own ArgumentOutOfRangeException or let one bubble up from below?

    - by Neil N
    I have a class that wraps a List<RenderedImageInfo>. I have a GetValue-by-index method:

      public RenderedImageInfo GetValue(int index)
      {
          list[index].LastRetrieved = DateTime.Now;
          return list[index];
      }

    If the user requests an index that is out of range, this will throw an ArgumentOutOfRangeException. Should I just let this happen, or check for it and throw my own? i.e.

      public RenderedImageInfo GetValue(int index)
      {
          if (index >= list.Count)
          {
              throw new ArgumentOutOfRangeException("index");
          }
          list[index].LastRetrieved = DateTime.Now;
          return list[index];
      }

    In the first scenario, the user would get an exception from the internal list, which breaks my OOP goal of the user not needing to know about the underlying objects. But in the second scenario, I feel as though I am adding redundant code.

    Edit: And now that I think of it, what about a third scenario, where I catch the internal exception, modify it, and rethrow it?
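
    If the wrapper does re-check, the gain is that the failure can be phrased in terms of the wrapper's own contract rather than the inner list's. A rough illustration of that idea, sketched in Swift rather than C#:

      // Re-validating at the wrapper's boundary lets the failure message talk about
      // *this* type's contract instead of leaking the underlying collection.
      struct ImageInfoCache {
          private var items: [String] = []

          func value(at index: Int) -> String {
              precondition(items.indices.contains(index),
                           "index \(index) is out of range for ImageInfoCache (count \(items.count))")
              return items[index]
          }
      }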

    Read the article

  • Oracle UPK Content Development Tool Settings

    - by [email protected]
    Oracle UPK Content Development tool settings: Before developing UPK content, your UPK Developer needs to be configured with certain standard settings to ensure the content will have a uniform look. To set the options:

    1. Open the UPK Developer.
    2. Click the Tools menu.
    3. Click Options.

    After you configure the UPK Options, you can share these preferences with other content developers by exporting them to an .ops file. This is particularly useful in workgroup environments where multiple authors are working on the same content that requires consistent output regardless of who authored the content. (To learn more about Exporting/Importing Content Defaults refer to the Content Development.pdf guide that is delivered with the UPK Developer.)

    Here is a list of a few UPK Developer tool settings that Oracle UPK Content Developers use to develop UPK pre-built content:

    - Screen resolution is set to 1024 x 768.
    - See It mode frame delay is set to 5 seconds.
    - Know It Required % is set to 70% and all three levels of remediation are selected.
    - We opt to automatically record keyboard shortcuts.
    - We use the default settings for the Bubble icon and Pointer position.
    - Bubble color is yellow (Red = 255, Green = 255, Blue = 128).
    - Bubble text is Verdana, Regular, 9 pt.
    - Intro and end frame settings match the bubble settings.

    Note: The Content Defaults String Input Settings will change based on which application (interface) you are recording against. For example, here is a list of settings for different Oracle applications:

    - Agile - Microsoft Sans Serif, Regular, 8
    - EBS - Microsoft Sans Serif, Regular, 10
    - Hyperion - Microsoft Sans Serif, Regular, 8
    - JDE E1 - Arial, Regular, 10
    - PeopleSoft - Arial, Regular, 9
    - Siebel - Arial, Regular, 8

    Remember, it is recommended that you set the content defaults before you add documents and record content. When the content defaults are changed, existing documents are not affected and continue to use the defaults that were in effect when those documents were created.

    - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

  • Attempting to calculate width of Map Overlays on the fly

    - by Bloudermilk
    Hey all- I am working on an Android app that utilizes the Google Maps API MapView, MapController, MapActivity, and ItemizedOverlay. I am basically trying to recreate certain functionalities of the Maps app (damn Google for not providing speech bubbles—for lack of a better name—for items!), particularly those speech bubbles. I have an invisible XML structure for the speech bubble in the XML layout file containing my MapView. The first time I show a speech bubble I grab that XML and remove it from it's current parent, applying some ItemizedOverlay.LayoutParams to it, and add it to the MapView as an Overlay. I position it above the item that was selected, fill it with the proper text, then set it to visible. This all works great. The goal here, though, is to also automatically animate the map to reveal any parts of a speech bubble that may be off-screen when it opens. So I'm trying popup.getWidth() (popup is the instance of my LinearLayout that is the speech bubble) after I do all the manipulation to the bubble, even after I display it to the user. Problem is, popup.getWidth() is returning me the width of the previously displayed popup, not the currently displayed one. I can't figure out why this would be happening if I'm fetching the width after I set it to visible with its new dimensions (which, by the way, are relative when I'm setting them with LayoutParams: fill_content for both width and height).. I have even tried forcing both the MapView and the "popup" to invalidate() before trying to fetch the width. Any ideas why this may be happening? How can I force the View to settle into its new dimensions before trying to fetch them? Thanks! Nick

    Read the article

  • Interactive Web Collaboration Platform

    - by user1477586
    I am currently unsatisfied with most, if not all, web-based collaboration tools I've ever used. Examples of what I'm considering a "web-based collaboration tool" are TWiki, WebEx, etc. What I have in mind is to develop my own. The target "market" is software/hardware developers who will likely be working on separate projects linked by a common goal. Example: you work at a company that is designing a brand new printer. One person is working on the ink cartridge, another on the paper feed, one on the circuitry of the printer, and one more is working on the drivers.

    What I would like to make (unless it already exists) is a website that, upon loading, presents the user with a web that has each project within its own bubble, graphically. These bubbles would all be linked to the central bubble of "[Company Name]". Upon selecting an individual bubble, the browser would focus and zoom in on that project bubble and expand more information (in bullets or with more affiliated bubbles) about subprojects, progress, setbacks, etc. Then, at a terminal "node" (i.e. a bubble with no sub-bubbles) a selection would load a page with information relevant to that node.

    That's a lot of monologue. In short, I want to know how I might approach this. My background is with algorithmic use of Python and C++. I've never done interactive web design like this, though I have gone through the W3C tutorials on HTML4/5. I'm willing to learn a new language; I'm just wondering which language/set of languages seem(s) most appropriate for this project. Thanks, world, for imparting your knowledge to me. An example of what the start page might look like is:

    Read the article

  • Access Flash Symbol Via Code from an .as Class

    - by David
    Hi, I've got a movie clip symbol in my .fla file that I need to reference in an .as file which is in a subfolder of the project. I select the symbol in the library and edit its properties. Its name is bubble; I check Export for ActionScript and Export in frame 1, and give it a class name of Bubble. Now I need to go to my .as class, called SomeClass.as. There, I need to reference the symbol, because I want to move it from within related code in that .as file. But if I try bubble.x I get an error. If I try var myBubble:Bubble = new Bubble(); I get 'Access of undefined property myBubble'. I was told that I might try 'importing the document class', but how do you import a class which is in the root directory of your app from within a class that's in a subfolder? (Don't know if this would provide the solution anyway)... Thanks.

    Read the article

  • What program to use to create subtitles for a video?

    - by user4124
    I have a video that I want to create subtitles for. Is there a program that can:

    - perform rudimentary speech-to-text in order to set the correct start/stop of each individual subtitle
    - create rudimentary text subtitles (using some sort of speech-to-text)

    I know about gnome-subtitles. However, it requires extensive effort to create those subtitles manually: you need to select the start and stop of each sentence yourself. YouTube has the above features (it creates rudimentary text subtitles at the correct timings, using speech-to-text), but I would rather not upload the videos to YouTube just to get my subtitles. Is it possible to do the subtitles efficiently in Ubuntu?

    Update: I plan to use the .srt subtitles only, and do not need to hard-code them onto the videos. My biggest requirement is to have the program automatically find the start/stop of each sentence, so that I can write the text in it.

    Read the article

  • using a PHP print_r array result in javascript/jquery

    - by Phil Jackson
    Hello all. I have a simple jQuery/AJAX request to the server which returns the structure and data of an array. I was wondering if there is a quick way I can use this array structure and data with jQuery. A simple request:

      var token = $("#token").val();
      $.ajax({
          type: 'POST',
          url: './',
          data: 'token=' + token + '&re=8',
          cache: false,
          timeout: 5000,
          success: function(html){
              // do something here with the html var
          }
      });

    The result (actual result from PHP's print_r()):

      Array
      (
          [0] => Array
              (
                  [username] => Emmalene
                  [contents] => <ul><li class="name">ACTwebDesigns</li><li class="speech">helllllllo</li></ul>
                                <ul><li class="name">ACTwebDesigns</li><li class="speech">sds</li></ul>
                                <ul><li class="name">ACTwebDesigns</li><li class="speech">Sponge</li><li class="speech">dick</li></ul>
                                <ul><li class="name">ACTwebDesigns</li><li class="speech">arghh</li></ul>
              )
      )

    I was thinking along the lines of:

      var demo = Array(html); // and then do something with the demo var

    Not sure if that would work; it just sprang to mind. Any help is much appreciated.

    Read the article
