Search Results

Search found 10496 results on 420 pages for 'real yang'.

Page 37/420

  • Real-time updates in PHP for a PBBG (PHP, JavaScript, and HTML)

    - by Darknesschaos
    I am trying to create a PBBG (persistent browser-based game) like OGame, Space4k, and others. My problem is with the constantly updating resource collection and with build times: a completion time is set when a building, ship, research item, etc. finishes, and the user's profile must be updated even if the user is offline. What and/or where should I learn to make this? Should it be a constantly running script in the background? Note that I wish to only use PHP, HTML, CSS, JavaScript, and MySQL, but will learn something new if needed. Cron jobs or the Windows equivalent seem to be the way, but that doesn't seem right or best to me.
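
    A common alternative to a constantly running background script, sketched below for illustration only (the table and column names are hypothetical), is to settle resources lazily: store a last_update timestamp per player and credit the elapsed production whenever that player's row is next touched, so offline players catch up the moment anything reads or writes them.

        <?php
        // Hypothetical schema: players(id, metal, metal_rate, last_update)
        function settle_resources(mysqli $db, $playerId)
        {
            $stmt = $db->prepare(
                'UPDATE players
                    SET metal = metal + metal_rate * (UNIX_TIMESTAMP() - last_update),
                        last_update = UNIX_TIMESTAMP()
                  WHERE id = ?'
            );
            $stmt->bind_param('i', $playerId);
            $stmt->execute();   // elapsed production is credited on demand
        }

    Build completions work the same way: record the finish timestamp when construction starts and apply the result the first time the profile is loaded after that moment, leaving cron (or the Windows equivalent) only for events that must resolve while nobody involved is online.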

    Read the article

  • Why doesn't functools.partial return a real function (and how to create one that does)?

    - by epsilon
    So I was playing around with currying functions in Python and one of the things that I noticed was that functools.partial returns a partial object rather than an actual function. One of the things that annoyed me about this was that if I did something along the lines of: five = partial(len, 'hello') five('something') then we get TypeError: len() takes exactly 1 argument (2 given) but what I want to happen is TypeError: five() takes no arguments (1 given) Is there a clean way to make it work like this? I wrote a workaround, but it's too hacky for my taste (doesn't work yet for functions with varargs): def mypartial(f, *args): argcount = f.func_code.co_argcount - len(args) params = ''.join('a' + str(i) + ',' for i in xrange(argcount)) code = ''' def func(f, args): def %s(%s): return f(*(args+(%s))) return %s ''' % (f.func_name, params, params, f.func_name) exec code in locals() return func(f, args)
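
    A somewhat cleaner variant is possible without exec, sketched below with one caveat: CPython takes the name in that TypeError from the function's code object, so copying metadata with functools.update_wrapper fixes __name__ and introspection but not the exact wording of the error message.

        from functools import update_wrapper

        def mypartial(f, *args):
            def wrapper(*more_args, **kwargs):
                return f(*(args + more_args), **kwargs)
            update_wrapper(wrapper, f)   # copies __name__, __doc__, __module__, ...
            return wrapper

        five = mypartial(len, 'hello')
        print five()           # 5
        print five.__name__    # 'len'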

    Read the article

  • Traits in PHP – any real-world examples/best practices?

    - by Max
    Traits have been one of the biggest additions in PHP 5.4. I know the syntax and understand the idea behind traits, i.e. horizontal code reuse for common concerns like logging, security, caching, etc. However, I still don't know how I would make use of traits in my projects. Are there any open source projects that already use traits? Any good articles/reading material on how to structure architectures using traits?
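
    For readers new to the feature, a minimal sketch of the logging case mentioned above (class and method names are illustrative only):

        <?php
        trait Logger {
            public function log($message) {
                error_log(get_class($this) . ': ' . $message);
            }
        }

        class OrderService {
            use Logger;   // horizontal reuse: no place in the inheritance chain is consumed
        }

        $service = new OrderService();
        $service->log('order created');   // logs "OrderService: order created"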

    Read the article

  • This should be really easy: how to find a button through its value (jQuery selector)

    - by Raja
    I have this HTML: <div id='grid'> <input type="button" value="edit"/> <input type='button' value='update'/> </div> How do I attach a click event only to the Update button? I know I could add a class to it or specify an id, but neither is possible since it is in a GridView. I tried this: $("input:button[@value='update']").click(function(){alert('test');}); but it displays an alert for both buttons. I know I must be doing something silly. Any help is much appreciated. Thanks Kobi for helping me out. Given below is the right answer: $("input[value='update']").click(function(){alert('test');});
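
    A slightly tighter form of that accepted answer, shown here only as a sketch (the #grid id comes from the markup above), scopes the attribute selector to the container so stray inputs elsewhere on the page are not bound as well; note that the @ prefix in attribute selectors was dropped in jQuery 1.3.

        $(function () {
            $("#grid input[value='update']").click(function () {
                alert('update clicked');
            });
        });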

    Read the article

  • Is there a tool that will show diagrams of my SQL database in real-time?

    - by Rising Star
    I've created some diagrams of SQL tables using the "Reverse Engineer" feature of Microsoft Office Visio. I like being able to visualize my relational databases in this manner. However, what I get is just a static document that I can print, e-mail to colleagues, and click widgets on. Earlier this year, I saw in a demo that the new version of Visual Studio 2010 has a new feature called the "Architecture Explorer", which allows developers to view relationships among .NET classes on the fly. It has many features for filtering the data that the developer is interested in. It would be really awesome if I could visually browse my tables and stored procedures and see what is related to what by primary key, by foreign key, and by references in stored procedures. I realize that I'm talking about two entirely different technologies and it's not a perfect analogy, but is there some similar tool that would allow me to visualize the tables in my SQL database?
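
    While this is really a tooling question, the relationship data itself can be pulled on demand; a sketch against SQL Server's catalog views (assuming SQL Server, which Visio's reverse-engineer feature commonly targets) that lists each foreign key and the tables it ties together:

        SELECT fk.name AS foreign_key,
               tp.name AS referencing_table,
               tr.name AS referenced_table
        FROM sys.foreign_keys AS fk
        JOIN sys.tables AS tp ON fk.parent_object_id = tp.object_id
        JOIN sys.tables AS tr ON fk.referenced_object_id = tr.object_id
        ORDER BY tp.name;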

    Read the article

  • WCF or Sockets for Communication in a real-time environment?

    - by PieterG
    I have a scenario where I need to send a series of data between 2 clients. The data includes serialized XML containing commands that the other client will need to react to. I will also need to send images across the wire, as I need to provide a chat facility in the form of video/audio chat. I would like a single communication medium for both, as the number of messages/commands might be quite a few. WCF or Sockets?
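
    If WCF wins out, the command side maps naturally onto a one-way contract over NetTcpBinding; the sketch below is illustrative only (the interface and operation names are invented), and a streaming transport is usually a better fit for the actual video/audio frames than either of these operations.

        using System.ServiceModel;

        [ServiceContract]
        public interface ICommandChannel
        {
            // fire-and-forget delivery of the serialized XML commands
            [OperationContract(IsOneWay = true)]
            void SendCommand(string commandXml);

            // small binary payloads such as snapshot images
            [OperationContract(IsOneWay = true)]
            void SendImage(byte[] imageBytes);
        }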

    Read the article

  • How to find the real problem line in my code with Application Verifier?

    - by Newbie
    I am now trying to use the Application Verifier debugging tool, but I am stuck. First of all, it breaks the program at a line that is a simple variable assignment (s = 1; for example). Secondly, when I run this program under the debugger, my program seems to have changed its behaviour: I am drawing an image, and now one of the colors has changed o_O; all those parts of the image that I don't draw on have changed to #CDCDCD when they should be #000000, and I already set the default color to zero, yet it still becomes #CDCDCD. How do I make any sense of this? Here is the output AV gave me: VERIFIER STOP 00000002: pid 0x8C0: Access violation exception. 14873000 : Invalid address causing the exception 004E422C : Code address executing the invalid access 0012EB08 : Exception record 0012EB24 : Context record AVRF: Noncontinuable verifier stop 00000002 encountered. Terminating process ... The program '[2240] test.exe: Native' has exited with code -1073741823 (0xc0000001).

    Read the article

  • Annotation retention policy: what real benefit is there in declaring `SOURCE` or `CLASS`?

    - by watery
    I know there are three retention policies for Java annotations: CLASS: Annotations are to be recorded in the class file by the compiler but need not be retained by the VM at run time. RUNTIME: Annotations are to be recorded in the class file by the compiler and retained by the VM at run time, so they may be read reflectively. SOURCE: Annotations are to be discarded by the compiler. And although I understand their usage scenarios, I don't get why specifying the retention policy matters enough that retention policies exist at all. I mean, why aren't all annotations just kept at runtime? Do they generate so much bytecode / occupy so much memory that stripping those not declared as RUNTIME makes that much of a difference?
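
    To make the practical difference concrete, a small sketch (the annotation name is made up): only a RUNTIME-retained annotation is visible through reflection, which is what frameworks depend on; switch the policy to CLASS or SOURCE and the check below prints false.

        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.reflect.Method;

        public class RetentionDemo {
            @Retention(RetentionPolicy.RUNTIME)   // try CLASS or SOURCE and rerun
            @interface Audited {}

            @Audited
            static void transfer() {}

            public static void main(String[] args) throws Exception {
                Method m = RetentionDemo.class.getDeclaredMethod("transfer");
                System.out.println(m.isAnnotationPresent(Audited.class));   // true only with RUNTIME
            }
        }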

    Read the article

  • How to enable real-time CSS editing in Chrome?

    - by narayanpatra
    I have seen a lot of videos in which developers change CSS on the fly in Chrome. I tried the same thing but Chrome did not allow me to change the code. I can't write on the style sheet. Is there any specific setting to do this? Kindly help. EDIT: To edit the CSS, I right-click on an element and select Inspect Element. It opens the console. I select the id of the element, go to style.css under Resources, and try to change the CSS. It does not allow me to write there.

    Read the article

  • FastObjects.NET (an OODB from Versant) performance in real scenarios?

    - by Lalit
    FastObjects.NET saves the whole class object (if marked with the Persistent attribute) at once in the file system (using serialization or a similar technology). They are promising that it is even faster than the normal SQL DB approach. My team also thought it is better and faster to save the whole object at once instead of each field one by one. Definition from their website: FastObjects .NET 10.0 fully conforms to the Microsoft .NET 2.0 framework. Tightly integrated with Visual Studio 2005, it offers a developer-friendly, object-oriented alternative to a relational database for .NET persistence. I would like to hear your experiences of using FastObjects in production scenarios. They also promise indexing/transactions/clustering/replication.

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering by announcing the general availability of the 12c release of its key data integration products, Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.

    The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes:

    - Future-Ready Solutions: supporting current and emerging initiatives
    - Extreme Performance: even higher performance than ever before
    - Fast Time-to-Value: higher IT productivity and simplified solutions

    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:

    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator, Oracle's strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with the latest release of Oracle BI Applications. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.

    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:

    - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE 15.7, MySQL NDB Cluster 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments supported by third-party clouds.

    Tight integration between Oracle Data Integrator 12c, Oracle GoldenGate 12c, and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:

    - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low-overhead, real-time change data capture completely within Oracle Data Integrator Studio without additional training.
    - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
    - It delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
    - Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.

    Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release.

    Resource Kits:

    - Meet Oracle Data Integration 12c
    - Discover what's new with Oracle GoldenGate 12c

    The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com.

    Read the article

  • Can't set screen brightness in Gentoo system

    - by Real Yang
    My system: Linux gentoo 3.10.7-gentoo-r1 VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2) output of xbacklight: No outputs have backlight property output of xrandr: xrandr: Failed to get size of gamma for output default Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768 default connected 1280x720+0+0 0mm x 0mm 1280x720 0.0* 1024x768 61.0 800x600 61.0 640x480 60.0 1280x768 0.0 output of ls /proc/acpi: button/ event When I'm on kernel 3.8.13, I can change my brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice about "compatibility issues for Nvidia users" from emerge, but I still don't know the details. Is there any way to let me set the brightness? Then I found an ebuild, app-laptop/nvidiabl-0.81, and tried to emerge nvidiabl; I got this message: Your kernel does not support FB_BACKLIGHT. To enable it you can enable any frame buffer with backlight control or nouveau. Note that you cannot use FB_NVIDIA with nvidia's proprietary driver Please check to make sure these options are set correctly. Failure to do so may cause unexpected problems. Once you have satisfied these options, please try merging this package again. ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase): Incorrect kernel configuration options Call stack: ebuild.sh, line 93: Called pkg_pretend nvidiabl-0.81.ebuild, line 31: Called linux-mod_pkg_setup linux-mod.eclass, line 559: Called linux-info_pkg_setup linux-info.eclass, line 911: Called check_extra_config linux-info.eclass, line 805: Called die The specific snippet of code: die "Incorrect kernel configuration options" [SOLVED] I entered menuconfig again, checked Device Drivers -> Graphics support -> Support for frame buffer devices, and found this: <*> nVidia Framebuffer Support [*] Support for backlight control (NEW) What can I say. Recompiling...

    Read the article

  • USB software protection dongle for Java with an SDK which is cross-platform “for real”. Does it exist?

    - by Unai Vivi
    What I'd like to ask is if anybody knows about a hardware USB dongle for software protection which offers very complete out-of-the-box API support for cross-platform Java deployments. Its SDK should provide a jar (only one, not a different library per OS & bitness) ready to be added to one's project as a library. The jar should contain all the native stuff for the various OSes and bitnesses. From the application's point of view, one should continue to write (API calls) once and run everywhere, without having to care where the end-user will run the software. The provided jar should itself deal with loading the appropriate native library. Does such a thing exist? With what I've tried so far, you have different APIs and compiled libraries for win32, linux32, win64, linux64, etc. (or you even have to compile stuff yourself on the target machine), but hey, we're doing Java here, we don't know (and don't care) where the program will run! And we can't expect the end-user to be a software engineer, tweak (and break!) their Linux server, link libraries, mess with gcc, litter the filesystem, etc... In general, Java support (in a transparent cross-platform fashion) is quite bad with the dongle SDKs I've evaluated so far (e.g. KeyLok and SecuTech's UniKey). I even purchased (no free evaluation kit available) SecureMetric SDKs & dongles (they should've been "soooo" straightforward to integrate -- according to the marketing material :\ ) and they were the worst ever: SecureDongle X has no 64-bit support and SecureDongle SD is not cross-platform at all. So, has anyone out there been through this and found the ultimate Java security USB dongle for cross-platform deployments? Note: the software is low-volume, high-value; the application is off-line (intranet with no internet access), so no online-activation alternatives and the like. -- EDIT Tried out HASP dongles (used to be called "Aladdin"), and added them to the no-no list: here, too, there is no out-of-the-box (out-of-the-jar) support: e.g. the end Linux user has to manually put the .so library (the specific file for the appropriate bitness) in the right place on his filesystem, and export an env. variable accordingly. -- EDIT 2 I really don't understand all the negativity and all the downvoting: is this a taboo topic? Is it so hard to understand that a freelance developer has to put food on the table every day to feed their family and pay the bills at the end of the month? Please don't talk about "adding value" as a supplier, because that'd be off-topic. Furthermore I'm not in direct contact with end-customers, but there's an intermediate reselling entity: it's this entity I want to prevent from selling copies of the software without sharing the revenue. -- EDIT 3 I'd like to emphasize the fact that the question is looking for a technical answer, not one about opinions concerning business models, philosophical lucubrations on the concept of value, resellers' reliability, etc. I cannot change resellers, because this isn't a "general purpose" kind of software, but a very vertical one and (for some reasons it's not worth explaining here) I must go through them. I just need to prevent the "we sold 2 copies, here's your share [bwahaha we sold 10]" scenario.

    Read the article

  • operator overloading of stream extraction operator in C++ help

    - by Crystal
    I'm having some trouble overloading my stream extraction operator in C++ for a hw assignment. I'm not really sure why I am getting these compile errors since I thought I was doing it right... Here is my code: Complex.h #ifndef COMPLEX_H #define COMPLEX_H class Complex { //friend ostream &operator<<(ostream &output, const Complex &complexObj) const; public: Complex(double = 0.0, double = 0.0); // constructor Complex operator+(const Complex &) const; // addition Complex operator-(const Complex &) const; // subtraction void print() const; // output private: double real; // real part double imaginary; // imaginary part }; #endif Complex.cpp #include <iostream> #include "Complex.h" using namespace std; // Constructor Complex::Complex(double realPart, double imaginaryPart) : real(realPart), imaginary(imaginaryPart) { } // addition operator Complex Complex::operator+(const Complex &operand2) const { return Complex(real + operand2.real, imaginary + operand2.imaginary); } // subtraction operator Complex Complex::operator-(const Complex &operand2) const { return Complex(real - operand2.real, imaginary - operand2.imaginary); } // Overload << operator ostream &Complex::operator<<(ostream &output, const Complex &complexObj) const { cout << '(' << complexObj.real << ", " << complexObj.imaginary << ')'; return output; // returning output allows chaining } // display a Complex object in the form: (a, b) void Complex::print() const { cout << '(' << real << ", " << imaginary << ')'; } main.cpp #include <iostream> #include "Complex.h" using namespace std; int main() { Complex x; Complex y(4.3, 8.2); Complex z(3.3, 1.1); cout << "x: "; x.print(); cout << "\ny: "; y.print(); cout << "\nz: "; z.print(); x = y + z; cout << "\n\nx = y + z: " << endl; x.print(); cout << " = "; y.print(); cout << " + "; z.print(); x = y - z; cout << "\n\nx = y - z: " << endl; x.print(); cout << " = "; y.print(); cout << " - "; z.print(); cout << endl; } Compile erros: complex.cpp(23) : error C2039: '<<' : is not a member of 'Complex' complex.h(5) : see declaration of 'Complex' complex.cpp(24) : error C2270: '<<' : modifiers not allowed on nonmember functions complex.cpp(25) : error C2248: 'Complex::real' : cannot access private member declared in class 'Complex' complex.h(13) : see declaration of 'Complex::real' complex.h(5) : see declaration of 'Complex' complex.cpp(25) : error C2248: 'Complex::imaginary' : cannot access private member declared in class 'Complex' complex.h(14) : see declaration of 'Complex::imaginary' complex.h(5) : see declaration of 'Complex' Thanks!
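
    For readers hitting the same errors: a stream insertion operator takes the stream and the object as its two operands, so it cannot also be a const member of Complex. A minimal sketch of the usual fix, declaring it a friend free function and writing to the stream that was passed in rather than to cout:

        #include <iostream>

        class Complex {
            friend std::ostream &operator<<(std::ostream &output, const Complex &c);
        public:
            Complex(double r = 0.0, double i = 0.0) : real(r), imaginary(i) {}
        private:
            double real;
            double imaginary;
        };

        std::ostream &operator<<(std::ostream &output, const Complex &c)
        {
            output << '(' << c.real << ", " << c.imaginary << ')';
            return output;   // returning the stream allows chaining
        }

        int main()
        {
            Complex y(4.3, 8.2);
            std::cout << "y: " << y << std::endl;   // prints y: (4.3, 8.2)
        }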

    Read the article

  • How to add an image to a ListView (Android)

    - by Wawan Den Frastøtende
    i would like to add images on my list view, and i have this code package com.wilis.appmysql; import android.app.ListActivity; import android.content.Intent; import android.os.Bundle; //import android.util.Log; import android.view.View; import android.widget.ArrayAdapter; import android.widget.ListView; import android.widget.Toast; public class menulayanan extends ListActivity { /** Called when the activity is first created. */ public void onCreate(Bundle icicle) { super.onCreate(icicle); // Create an array of Strings, that will be put to our ListActivity String[] menulayanan = new String[] { "Berita Terbaru", "Info Item", "Customer Service", "Help","Exit"}; //Menset nilai array ke dalam list adapater sehingga data pada array akan dimunculkan dalam list this.setListAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1,menulayanan)); } @Override /**method ini akan mengoveride method onListItemClick yang ada pada class List Activity * method ini akan dipanggil apabilai ada salah satu item dari list menu yang dipilih */ protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); // Get the item that was clicked // Menangkap nilai text yang dklik Object o = this.getListAdapter().getItem(position); String pilihan = o.toString(); // Menampilkan hasil pilihan menu dalam bentuk Toast tampilkanPilihan(pilihan); } /** * Tampilkan Activity sesuai dengan menu yang dipilih * */ protected void tampilkanPilihan(String pilihan) { try { //Intent digunakan untuk sebagai pengenal suatu activity Intent i = null; if (pilihan.equals("Berita Terbaru")) { i = new Intent(this, PraBayar.class); } else if (pilihan.equals("Info Item")) { i = new Intent(this, PascaBayar.class); } else if (pilihan.equals("Customer Service")) { i = new Intent(this, CustomerService.class); } else if (pilihan.equals("Help")) { i = new Intent(this, Help.class); } else if (pilihan.equals("Exit")) { finish(); } else { Toast.makeText(this,"Anda Memilih: " + pilihan + " , Actionnya belum dibuat", Toast.LENGTH_LONG).show(); } startActivity(i); } catch (Exception e) { e.printStackTrace(); } } } and i want to add different image per list, so i mean is i want to add a.png to "Berita Terbaru", b.png to "Info Item", c.png "Customer Service" , so how to do it? i was very confused about this, thanks before...
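
    One way to do what the question asks, sketched below rather than taken from the thread: subclass ArrayAdapter and override getView so each row gets its own drawable. The layout and resource names (R.layout.list_item, R.id.icon, R.id.label, R.drawable.a, ...) are placeholders that would have to exist in the project.

        import android.content.Context;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.ArrayAdapter;
        import android.widget.ImageView;

        public class MenuAdapter extends ArrayAdapter<String> {
            private final int[] icons;   // one drawable id per row, same order as the labels

            public MenuAdapter(Context context, String[] labels, int[] icons) {
                super(context, R.layout.list_item, R.id.label, labels);
                this.icons = icons;
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                View row = super.getView(position, convertView, parent);   // binds the label text
                ImageView icon = (ImageView) row.findViewById(R.id.icon);
                icon.setImageResource(icons[position]);
                return row;
            }
        }

    In menulayanan the adapter would then replace the plain ArrayAdapter, e.g. setListAdapter(new MenuAdapter(this, menulayanan, new int[]{ R.drawable.a, R.drawable.b, R.drawable.c, R.drawable.d, R.drawable.e }));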

    Read the article

  • Conceal packet loss in PCM stream

    - by ZeroDefect
    I am looking to use 'Packet Loss Concealment' to conceal lost PCM frames in an audio stream. Unfortunately, I cannot find a library that is accessible without all the licensing restrictions and code bloat (...up for some suggestions though). I have located some GPL code written by Steve Underwood for the Asterisk project which implements PLC. There are several limitations; although, as Steve suggests in his code, his algorithm can be applied to different streams with a bit of work. Currently, the code works with 8kHz 16-bit signed mono streams. Variations of the code can be found through a simple search of Google Code Search. My hope is that I can adapt the code to work with other streams. Initially, the goal is to adjust the algorithm for 8+ kHz, 16-bit signed, multichannel audio (all in a C++ environment). Eventually, I'm looking to make the code available under the GPL license in hopes that it could be of benefit to others... Attached is the code below with my efforts. The code includes a main function that will "drop" a number of frames with a given probability. Unfortunately, the code does not quite work as expected. I'm receiving EXC_BAD_ACCESS when running in gdb, but I don't get a trace from gdb when using 'bt' command. Clearly, I'm trampimg on memory some where but not sure exactly where. When I comment out the *amdf_pitch* function, the code runs without crashing... int main (int argc, char *argv[]) { std::ifstream fin("C:\\cc32kHz.pcm"); if(!fin.is_open()) { std::cout << "Failed to open input file" << std::endl; return 1; } std::ofstream fout_repaired("C:\\cc32kHz_repaired.pcm"); if(!fout_repaired.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } std::ofstream fout_lossy("C:\\cc32kHz_lossy.pcm"); if(!fout_lossy.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } audio::PcmConcealer Concealer; Concealer.Init(1, 16, 32000); //Generate random numbers; srand( time(NULL) ); int value = 0; int probability = 5; while(!fin.eof()) { char arr[2]; fin.read(arr, 2); //Generate's random number; value = rand() % 100 + 1; if(value <= probability) { char blank[2] = {0x00, 0x00}; fout_lossy.write(blank, 2); //Fill in data; Concealer.Fill((int16_t *)blank, 1); fout_repaired.write(blank, 2); } else { //Write data to file; fout_repaired.write(arr, 2); fout_lossy.write(arr, 2); Concealer.Receive((int16_t *)arr, 1); } } fin.close(); fout_repaired.close(); fout_lossy.close(); return 0; } PcmConcealer.hpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #ifndef __PCMCONCEALER_HPP__ #define __PCMCONCEALER_HPP__ /** 1. What does it do? The packet loss concealment module provides a suitable synthetic fill-in signal, to minimise the audible effect of lost packets in VoIP applications. It is not tied to any particular codec, and could be used with almost any codec which does not specify its own procedure for packet loss concealment. Where a codec specific concealment procedure exists, the algorithm is usually built around knowledge of the characteristics of the particular codec. It will, therefore, generally give better results for that particular codec than this generic concealer will. 2. How does it work? While good packets are being received, the plc_rx() routine keeps a record of the trailing section of the known speech signal. 
If a packet is missed, plc_fillin() is called to produce a synthetic replacement for the real speech signal. The average mean difference function (AMDF) is applied to the last known good signal, to determine its effective pitch. Based on this, the last pitch period of signal is saved. Essentially, this cycle of speech will be repeated over and over until the real speech resumes. However, several refinements are needed to obtain smooth pleasant sounding results. - The two ends of the stored cycle of speech will not always fit together smoothly. This can cause roughness, or even clicks, at the joins between cycles. To soften this, the 1/4 pitch period of real speech preceeding the cycle to be repeated is blended with the last 1/4 pitch period of the cycle to be repeated, using an overlap-add (OLA) technique (i.e. in total, the last 5/4 pitch periods of real speech are used). - The start of the synthetic speech will not always fit together smoothly with the tail of real speech passed on before the erasure was identified. Ideally, we would like to modify the last 1/4 pitch period of the real speech, to blend it into the synthetic speech. However, it is too late for that. We could have delayed the real speech a little, but that would require more buffer manipulation, and hurt the efficiency of the no-lost-packets case (which we hope is the dominant case). Instead we use a degenerate form of OLA to modify the start of the synthetic data. The last 1/4 pitch period of real speech is time reversed, and OLA is used to blend it with the first 1/4 pitch period of synthetic speech. The result seems quite acceptable. - As we progress into the erasure, the chances of the synthetic signal being anything like correct steadily fall. Therefore, the volume of the synthesized signal is made to decay linearly, such that after 50ms of missing audio it is reduced to silence. - When real speech resumes, an extra 1/4 pitch period of sythetic speech is blended with the start of the real speech. If the erasure is small, this smoothes the transition. If the erasure is long, and the synthetic signal has faded to zero, the blending softens the start up of the real signal, avoiding a kind of "click" or "pop" effect that might occur with a sudden onset. 3. How do I use it? Before audio is processed, call plc_init() to create an instance of the packet loss concealer. For each received audio packet that is acceptable (i.e. not including those being dropped for being too late) call plc_rx() to record the content of the packet. Note this may modify the packet a little after a period of packet loss, to blend real synthetic data smoothly. When a real packet is not available in time, call plc_fillin() to create a sythetic substitute. That's it! */ /*! Minimum allowed pitch (66 Hz) */ #define PLC_PITCH_MIN(SAMPLE_RATE) ((double)(SAMPLE_RATE) / 66.6) /*! Maximum allowed pitch (200 Hz) */ #define PLC_PITCH_MAX(SAMPLE_RATE) ((SAMPLE_RATE) / 200) /*! Maximum pitch OLA window */ //#define PLC_PITCH_OVERLAP_MAX(SAMPLE_RATE) ((PLC_PITCH_MIN(SAMPLE_RATE)) >> 2) /*! The length over which the AMDF function looks for similarity (20 ms) */ #define CORRELATION_SPAN(SAMPLE_RATE) ((20 * (SAMPLE_RATE)) / 1000) /*! History buffer length. The buffer must also be at leat 1.25 times PLC_PITCH_MIN, but that is much smaller than the buffer needs to be for the pitch assessment. */ //#define PLC_HISTORY_LEN(SAMPLE_RATE) ((CORRELATION_SPAN(SAMPLE_RATE)) + (PLC_PITCH_MIN(SAMPLE_RATE))) namespace audio { typedef struct { /*! 
Consecutive erased samples */ int missing_samples; /*! Current offset into pitch period */ int pitch_offset; /*! Pitch estimate */ int pitch; /*! Buffer for a cycle of speech */ float *pitchbuf;//[PLC_PITCH_MIN]; /*! History buffer */ short *history;//[PLC_HISTORY_LEN]; /*! Current pointer into the history buffer */ int buf_ptr; } plc_state_t; class PcmConcealer { public: PcmConcealer(); ~PcmConcealer(); void Init(int channels, int bit_depth, int sample_rate); //Process a block of received audio samples. int Receive(short amp[], int frames); //Fill-in a block of missing audio samples. int Fill(short amp[], int frames); void Destroy(); private: int amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames); void save_history(plc_state_t *s, short *buf, int channel_index, int frames); void normalise_history(plc_state_t *s); /** Holds the states of each of the channels **/ std::vector< plc_state_t * > ChannelStates; int plc_pitch_min; int plc_pitch_max; int plc_pitch_overlap_max; int correlation_span; int plc_history_len; int channel_count; int sample_rate; bool Initialized; }; } #endif PcmConcealer.cpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #include "audio/PcmConcealer.hpp" /* We do a straight line fade to zero volume in 50ms when we are filling in for missing data. */ #define ATTENUATION_INCREMENT 0.0025 /* Attenuation per sample */ #if !defined(INT16_MAX) #define INT16_MAX (32767) #define INT16_MIN (-32767-1) #endif #ifdef WIN32 inline double rint(double x) { return floor(x + 0.5); } #endif inline short fsaturate(double damp) { if (damp > 32767.0) return INT16_MAX; if (damp < -32768.0) return INT16_MIN; return (short)rint(damp); } namespace audio { PcmConcealer::PcmConcealer() : Initialized(false) { } PcmConcealer::~PcmConcealer() { Destroy(); } void PcmConcealer::Init(int channels, int bit_depth, int sample_rate) { if(Initialized) return; if(channels <= 0 || bit_depth != 16) return; Initialized = true; channel_count = channels; this->sample_rate = sample_rate; ////////////// double min = PLC_PITCH_MIN(sample_rate); int imin = (int)min; double max = PLC_PITCH_MAX(sample_rate); int imax = (int)max; plc_pitch_min = imin; plc_pitch_max = imax; plc_pitch_overlap_max = (plc_pitch_min >> 2); correlation_span = CORRELATION_SPAN(sample_rate); plc_history_len = correlation_span + plc_pitch_min; ////////////// for(int i = 0; i < channel_count; i ++) { plc_state_t *t = new plc_state_t; memset(t, 0, sizeof(plc_state_t)); t->pitchbuf = new float[plc_pitch_min]; t->history = new short[plc_history_len]; ChannelStates.push_back(t); } } void PcmConcealer::Destroy() { if(!Initialized) return; while(ChannelStates.size()) { plc_state_t *s = ChannelStates.at(0); if(s) { if(s->history) delete s->history; if(s->pitchbuf) delete s->pitchbuf; memset(s, 0, sizeof(plc_state_t)); delete s; } ChannelStates.erase(ChannelStates.begin()); } ChannelStates.clear(); Initialized = false; } //Process a block of received audio samples. 
int PcmConcealer::Receive(short amp[], int frames) { if(!Initialized) return 0; int j = 0; for(int k = 0; k < ChannelStates.size(); k++) { int i; int overlap_len; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples) { /* Although we have a real signal, we need to smooth it to fit well with the synthetic signal we used for the previous block */ /* The start of the real data is overlapped with the next 1/4 cycle of the synthetic data. */ pitch_overlap = s->pitch >> 2; if (pitch_overlap > frames) pitch_overlap = frames; gain = 1.0 - s->missing_samples * ATTENUATION_INCREMENT; if (gain < 0.0) gain = 0.0; new_step = 1.0/pitch_overlap; old_step = new_step*gain; new_weight = new_step; old_weight = (1.0 - new_step)*gain; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->pitchbuf[s->pitch_offset] + new_weight * amp[index]); if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->missing_samples = 0; } save_history(s, amp, j, frames); j++; } return frames; } //Fill-in a block of missing audio samples. int PcmConcealer::Fill(short amp[], int frames) { if(!Initialized) return 0; int j =0; for(int k = 0; k < ChannelStates.size(); k++) { short *tmp = new short[plc_pitch_overlap_max]; int i; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; short *orig_amp; int orig_len; orig_amp = amp; orig_len = frames; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples == 0) { // As the gap in real speech starts we need to assess the last known pitch, //and prepare the synthetic data we will use for fill-in normalise_history(s); s->pitch = amdf_pitch(plc_pitch_min, plc_pitch_max, s->history + plc_history_len - correlation_span - plc_pitch_min, j, correlation_span); // We overlap a 1/4 wavelength pitch_overlap = s->pitch >> 2; // Cook up a single cycle of pitch, using a single of the real signal with 1/4 //cycle OLA'ed to make the ends join up nicely // The first 3/4 of the cycle is a simple copy for (i = 0; i < s->pitch - pitch_overlap; i++) s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]; // The last 1/4 of the cycle is overlapped with the end of the previous cycle new_step = 1.0/pitch_overlap; new_weight = new_step; for ( ; i < s->pitch; i++) { s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]*(1.0 - new_weight) + s->history[plc_history_len - 2*s->pitch + i]*new_weight; new_weight += new_step; } // We should now be ready to fill in the gap with repeated, decaying cycles // of what is in pitchbuf // We need to OLA the first 1/4 wavelength of the synthetic data, to smooth // it into the previous real data. To avoid the need to introduce a delay // in the stream, reverse the last 1/4 wavelength, and OLA with that. 
gain = 1.0; new_step = 1.0/pitch_overlap; old_step = new_step; new_weight = new_step; old_weight = 1.0 - new_step; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->history[plc_history_len - 1 - i] + new_weight * s->pitchbuf[i]); new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->pitch_offset = i; } else { gain = 1.0 - s->missing_samples*ATTENUATION_INCREMENT; i = 0; } for ( ; gain > 0.0 && i < frames; i++) { int index = (i * channel_count) + j; amp[index] = s->pitchbuf[s->pitch_offset]*gain; gain -= ATTENUATION_INCREMENT; if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; } for ( ; i < frames; i++) { int index = (i * channel_count) + j; amp[i] = 0; } s->missing_samples += orig_len; save_history(s, amp, j, frames); delete [] tmp; j++; } return frames; } void PcmConcealer::save_history(plc_state_t *s, short *buf, int channel_index, int frames) { if (frames >= plc_history_len) { /* Just keep the last part of the new data, starting at the beginning of the buffer */ //memcpy(s->history, buf + len - plc_history_len, sizeof(short)*plc_history_len); int frames_to_copy = plc_history_len; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + frames - plc_history_len)) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = 0; return; } if (s->buf_ptr + frames > plc_history_len) { /* Wraps around - must break into two sections */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*(plc_history_len - s->buf_ptr)); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = plc_history_len - s->buf_ptr; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } frames -= (plc_history_len - s->buf_ptr); //memcpy(s->history, buf + (plc_history_len - s->buf_ptr), sizeof(short)*len); frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + (plc_history_len - s->buf_ptr))) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = frames; return; } /* Can use just one section */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*len); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } s->buf_ptr += frames; } void PcmConcealer::normalise_history(plc_state_t *s) { short *tmp = new short[plc_history_len]; if (s->buf_ptr == 0) return; memcpy(tmp, s->history, sizeof(short)*s->buf_ptr); memcpy(s->history, s->history + s->buf_ptr, sizeof(short)*(plc_history_len - s->buf_ptr)); memcpy(s->history + plc_history_len - s->buf_ptr, tmp, sizeof(short)*s->buf_ptr); s->buf_ptr = 0; delete [] tmp; } int PcmConcealer::amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames) { int i; int j; int acc; int min_acc; int pitch; pitch = min_pitch; min_acc = INT_MAX; for (i = max_pitch; i <= min_pitch; i++) { acc = 0; for (j = 0; j < frames; j++) { int index1 = (channel_count * (i+j)) + channel_index; int index2 = (channel_count * j) + channel_index; //std::cout << "Index 1: " << index1 << ", Index 2: " << index2 << std::endl; acc += abs(amp[index1] - amp[index2]); } if (acc < min_acc) { min_acc = acc; pitch = i; } } std::cout << "Pitch: " << pitch << std::endl; return pitch; } } P.S. - I must confess that digital audio is not my forte...

    Read the article

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB. Intermittently I get connections taking longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10. It's using an EBS Raid array for the data storage. We already use skip name resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing $tmp = mysqli_init(); $start_time = microtime(true); $tmp-options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); $tmp-real_connect($DB_SERVERS[$server]['server'], $DB_SERVERS[$server]['username'], $DB_SERVERS[$server]['password'], $DB_SERVERS[$server]['schema'], $DB_SERVERS[$server]['port']); if(mysqli_connect_errno()){ $timer = microtime(true) - $start_time; mail($errors_to,'DB connection error',$timer); } There's more than 300Mb available on the DB server for new connections and the server is nowhere near the max allowed (60 of 1,200). Loading on both servers is < 2 on 4 core m1.xlarge instances. Some highlights from the mysql config max_connections = 1200 thread_stack = 512K thread_cache_size = 1024 thread_concurrency = 16 innodb-file-per-table innodb_additional_mem_pool_size = 16M innodb_buffer_pool_size = 13G Any help on tracing the source of the slowdown is appreciated. [EDIT] I have been updating the sysctl values for the network but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers. net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_fin_timeout = 20 net.ipv4.tcp_keepalive_time = 180 net.ipv4.tcp_max_syn_backlog = 1280 net.ipv4.tcp_synack_retries = 1 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 87380 16777216 [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this the time of day. The connection error was raised once (at 13:06:36) during the 3 minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I think this isn't going to produce anything meaningful in terms of reporting. Script: date /root/database_server.txt (time mysql -h database_Server -D schema_name -u appuser -p apppassword -e '') /dev/null 2 /root/database_server.txt Results: === Application Server 1 === Mon Feb 22 13:05:01 EST 2010 real 0m0.008s user 0m0.001s sys 0m0.000s Mon Feb 22 13:06:01 EST 2010 real 0m0.007s user 0m0.002s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Application Server 2 === Mon Feb 22 13:05:01 EST 2010 real 0m0.009s user 0m0.000s sys 0m0.002s Mon Feb 22 13:06:01 EST 2010 real 0m0.009s user 0m0.001s sys 0m0.003s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Database Server === Mon Feb 22 13:05:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s Mon Feb 22 13:06:01 EST 2010 real 0m0.006s user 0m0.010s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256 on both the application and database server from the default 128. 
We did see some elevation in processor utilization as a result but still received connection timeouts.

    Read the article

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: Imagemagick uses a MP library. It's faster to use available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following: do your jobs serially (with Imagemagick in parallel mode) set MAGICK_THREAD_LIMIT=1 for your invocation of the imagemagick binary in question. By making Imagemagick use only one thread, it slows down by 20-30% in my test cases, but meant I could run one job per core without issues, for a significant net increase in performance. Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that, and found it to be about the same. Thus, we have this demonstration. Quad core (AMD Athalon X4, 2.6GHz) Working entirely on a tempfs (16g ram total; no swap) No other major loads Results: /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level real 0m3.784s user 0m2.240s sys 0m0.230s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level real 0m9.097s user 0m28.020s sys 0m0.910s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level real 0m9.844s user 0m33.200s sys 0m1.270s Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time to complete the same task? After that initial hit, more processes do not seem to have as significant of an effect. I thought it might have to do with disk seeking, so I did that test entirely in ram. Could it have something to do with how Convert works, and having more than one copy at once means it cannot use processor cache as efficiently or something? EDIT: When done with 1000x 769KB files, performance is as expected. Interesting. /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level real 3m37.679s user 5m6.980s sys 0m6.340s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level real 3m37.152s user 5m6.140s sys 0m6.530s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level real 2m7.578s user 5m35.410s sys 0m6.050s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level real 1m36.959s user 5m48.900s sys 0m6.350s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level real 1m36.392s user 5m54.840s sys 0m5.650s
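
    To apply the workaround from the EDIT above without touching the loop itself, a small sketch (same paths and file patterns as the question):

        # Pin ImageMagick to a single internal thread so each xargs worker
        # owns one core instead of every convert fighting over all four.
        export MAGICK_THREAD_LIMIT=1
        cd /media/ramdisk/img
        time for f in *.bmp; do echo "$f" "${f%bmp}png"; done | xargs -n 2 -P 4 convert -auto-level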

    Read the article

  • Problem developing an XML Schema based on an existing XML

    - by farhad
    Hello! I have a problem with the validation of this piece of XML: <?xml version="1.0" encoding="UTF-8"?> <i-ching xmlns="http://www.oracolo.it/i-ching"> <predizione> <esagramma nome="Pace"> <trigramma> <yang/><yang/><yang/> </trigramma> <trigramma> <yin/><yin/><yin/> </trigramma> </esagramma> <significato>Questa combinazione preannuncia <enfasi>boh</enfasi>, e forse anche <enfasi>mah, chissa</enfasi>.</significato> </predizione> <predizione> <esagramma nome="Ritorno"> <trigramma> <yang/><yin/> <yin/> </trigramma> <trigramma> <yin/><yin/><yin/> </trigramma> </esagramma> <significato>Si prevede con certezza <enfasi>qualcosa</enfasi>, <enfasi>ma anche <enfasi>no</enfasi></enfasi>.</significato> </predizione> </i-ching> This XML Schema was developed with Russian Dolls technique: <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.oracolo.it/i-ching" targetNamespace="http://www.oracolo.it/i-ching" > <xsd:element name="i-ching"> <xsd:complexType> <xsd:sequence> <xsd:element name="predizione" minOccurs="0" maxOccurs="64"> <xsd:complexType> <xsd:sequence> <xsd:element name="esagramma"> <xsd:complexType> <!-- vi sono 2 trigrammi --> <xsd:sequence> <xsd:element name="trigramma" minOccurs="2" maxOccurs="2"> <xsd:complexType> <xsd:sequence minOccurs="3" maxOccurs="3"> <xsd:choice> <xsd:element name="yang"/> <xsd:element name="yin"/> </xsd:choice> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> <xsd:attribute name="nome" type="xsd:string"/> </xsd:complexType> </xsd:element> <!-- significato: context model misto --> <xsd:element name="significato"> <xsd:complexType mixed="true"> <xsd:sequence> <xsd:element name="enfasi" type="xsd:string" minOccurs="0" maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> For exercise I have to develop an XML Schema to validate the previous XML. The problem is that oxygen says me this: cvc-complex-type.2.4.a: Invalid content was found starting with element 'predizione'. One of '{predizione}' is expected. Start location: 3:6 End location: 3:16 URL: http://www.w3.org/TR/xmlschema-1/#cvc-complex-type why? is it something wrong with my xml schema? thank you very much
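
    A hedged suggestion rather than a confirmed answer from the thread: this particular cvc-complex-type.2.4.a error is what typically appears when a schema declares a targetNamespace but leaves elementFormDefault at its default of "unqualified", so locally declared elements such as predizione are expected with no namespace while the instance document puts them in http://www.oracolo.it/i-ching. Declaring qualified local elements on xsd:schema usually resolves it:

        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.oracolo.it/i-ching"
                    targetNamespace="http://www.oracolo.it/i-ching"
                    elementFormDefault="qualified">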

    Read the article
