Search Results

Search found 7955 results on 319 pages for 'signal processing'.

Page 82 of 319

  • Why does changing signals cause NameErrors in otherwise sane code? - PyGtk issue

    - by boywithaxe
    I'm working on a very simple app for Ubuntu. I've asked a question on Stack Overflow, and it seems that the issue I am having is caused by signals, not by the scope of variables as I originally thought. The problem is that when TextBox emits a signal through activate, the whole code works without a glitch. But when I change the signal to insert-at-click, it returns NameErrors in every non-TextBox-linked function. Now, it is highly possible I am doing something completely wrong here, but is it at least plausible that signals could affect global variable assignments?

    Read the article

  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency. There are also idioms for creating smart pointers with better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as structured programming with for, while, and do generates code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data, instead of depending on programmers to match all their new and delete statements.

    In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing under different names, or do they have new ways of structuring protection of resources from concurrent use? For example, a specific system for multithreaded programming on multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment (this example is an excerpt from http://en.wikipedia.org/wiki/OpenMP):

        th_id = omp_get_thread_num();
        #pragma omp critical
        {
            cout << "Hello World from thread " << th_id << '\n';
        }

    Alternatively, similar protection of threads from each other, using semaphores with functions wait() and signal(), might look like this:

        wait(sem);
        th_id = get_thread_num();
        cout << "Hello World from thread " << th_id << '\n';
        signal(sem);

    In this example things are pretty simple, and a simple review is enough to show that the wait() and signal() calls are matched, so even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated and use multiple semaphores (both binary and counting) spread across multiple functions, with complex conditions that can be called by many threads. The consequences of creating a deadlock, or of failing to make things thread-safe, can be hard to manage. Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?
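
    The same critical section can also be written without OpenMP; the sketch below (not from the original question) uses a plain C++11 mutex, with the scope of a lock_guard playing the role of the braces after #pragma omp critical. What it illustrates is that structured constructs don't remove mutual exclusion; they tie the release to a scope so it cannot be forgotten the way an unmatched signal(sem) can. Deadlock from acquiring several locks in inconsistent orders is still possible, so the harder problems move rather than disappear.

        // Minimal sketch: the hello-world critical section with std::mutex instead of
        // #pragma omp critical. The thread count is made up for illustration.
        #include <iostream>
        #include <mutex>
        #include <thread>
        #include <vector>

        int main() {
            std::mutex io_mutex;                  // plays the role of the binary semaphore sem
            std::vector<std::thread> workers;
            for (int id = 0; id < 4; ++id) {
                workers.emplace_back([id, &io_mutex] {
                    std::lock_guard<std::mutex> guard(io_mutex);   // "wait(sem)" on entry
                    std::cout << "Hello World from thread " << id << '\n';
                });                                                // "signal(sem)" happens automatically at scope exit
            }
            for (auto &w : workers) w.join();
            return 0;
        }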

    Read the article

  • Repeated disconnects on WPA PEAP network

    - by exasperated
    My school has a WPA PEAP network with GTC inner authentication. I am able to connect to the network, but once I load a website or two, the network become unresponsive (i.e. in Chromium, it gets stuck at "Sending request"), and I'm eventually disconnected. Any help will be greatly appreciated. Here's some log output. I can provide more if needed: Ubuntu 13.04 3.8.0-32-generic x86_64 lsusb: 03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24) lsmod: iwldvm                241872  0  mac80211              606457  1 iwldvm iwlwifi               173516  1 iwldvm cfg80211              511019  3 iwlwifi,mac80211,iwldvm dmesg: [    3.501227] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X [    3.503541] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 [    3.527153] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG disabled [    3.527162] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS enabled [    3.527170] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled [    3.527178] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TESTMODE enabled [    3.527186] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_P2P disabled [    3.527192] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6235 AGN, REV=0xB0 [    3.527240] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [    3.551049] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs' [  375.153065] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [  375.159727] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [  375.553201] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [  375.559871] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 1892.110738] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 1892.117357] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5227.235372] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 5227.242122] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5817.817954] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 5817.824560] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 5824.571917] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 5824.571929] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 5824.571935] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 6956.290061] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 6956.296671] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 6963.080560] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 6963.080566] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 6963.080570] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 7613.469241] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 7613.475870] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 7620.201265] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 7620.201278] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 7620.201285] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 8232.762453] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 8232.769065] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 [ 8239.581772] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use [ 8239.581784] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 8239.581792] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [13763.634808] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [13763.641427] iwlwifi 0000:03:00.0: Radio 
type=0x2-0x1-0x0 [16955.598953] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [16955.605574] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0 lshw:    *-network        description: Wireless interface        product: Centrino Advanced-N 6235        vendor: Intel Corporation        physical id: 0        bus info: pci@0000:03:00.0        logical name: wlan0        version: 24        serial: b4:b6:76:a0:4b:3c        width: 64 bits        clock: 33MHz        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless        configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-32-generic firmware=18.168.6.1 ip=10.250.169.96 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn        resources: irq:46 memory:f7c00000-f7c01fff iwlist scan: Cell 02 - Address: 24:DE:C6:B0:C7:D9                     Channel:36                     Frequency:5.18 GHz (Channel 36)                     Quality=29/70  Signal level=-81 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=0000004ff3fe419b                     Extra: Last beacon: 27820ms ago                     IE: Unknown: 0009436174436861743278                     IE: Unknown: 01088C129824B048606C                     IE: Unknown: 030124                     IE: IEEE 802.11i/WPA2 Version 1                         Group Cipher : CCMP                         Pairwise Ciphers (1) : CCMP                         Authentication Suites (1) : 802.1x                     IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: 3D1624001B000000FF000000000000000000000000000000                     IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00                     IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: DD1A00904C3424001B000000FF000000000000000000000000000000           Cell 04 - Address: 24:DE:C6:B0:C3:E9                     Channel:149                     Frequency:5.745 GHz                     Quality=28/70  Signal level=-82 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=000000181f60e19c                     Extra: Last beacon: 28680ms ago                     IE: Unknown: 0009436174436861743278                     IE: Unknown: 01088C129824B048606C                     IE: Unknown: 030195                     IE: Unknown: 050400010000                     IE: IEEE 802.11i/WPA2 Version 1                         Group Cipher : CCMP                         Pairwise Ciphers (1) : CCMP                         Authentication Suites (1) : 802.1x                     IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: 3D1695001B000000FF000000000000000000000000000000                     IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00                     IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000                     IE: Unknown: DD1A00904C3495001B000000FF000000000000000000000000000000                     IE: Unknown: DD07000B8601040817                     IE: Unknown: 
DD0E000B860103006170313930333032           Cell 09 - Address: 24:DE:C6:B0:C0:29                     Channel:149                     Frequency:5.745 GHz                     Quality=39/70  Signal level=-71 dBm                       Encryption key:on                     ESSID:"CatChat2x"                     Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s                               36 Mb/s; 48 Mb/s; 54 Mb/s                     Mode:Master                     Extra:tsf=00000112fb688ede                     Extra: Last beacon: 27716ms ago ifconfig (while connected): wlan0     Link encap:Ethernet  HWaddr b4:b6:76:a0:4b:3c             inet addr:10.250.16.220  Bcast:10.250.31.255  Mask:255.255.240.0           inet6 addr: fe80::b6b6:76ff:fea0:4b3c/64 Scope:Link           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1           RX packets:230023 errors:0 dropped:0 overruns:0 frame:0           TX packets:130970 errors:0 dropped:0 overruns:0 carrier:0           collisions:0 txqueuelen:1000            RX bytes:255999759 (255.9 MB)  TX bytes:16652605 (16.6 MB) iwconfig (while connected): wlan0     IEEE 802.11abgn  ESSID:"CatChat2x"             Mode:Managed  Frequency:5.745 GHz  Access Point: 24:DE:C6:B0:C0:29              Bit Rate=6 Mb/s   Tx-Power=15 dBm              Retry  long limit:7   RTS thr:off   Fragment thr:off           Power Management:off           Link Quality=36/70  Signal level=-74 dBm             Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0           Tx excessive retries:0  Invalid misc:3   Missed beacon:0

    Read the article

  • Wireless speed drops over time and doesn't recover unless I reconnect

    - by Vili Lehto
    I am using Ubuntu 12.10 64-bit and I have an Asus PCE-N15 wireless card (in a PCI-E slot). The problem is that when I first connect to my WiFi network the speed is just fine, and my link speed actually never drops; it shows a solid 150 Mb/s+ link speed and good signal. But the download and upload speeds drop dramatically after a couple of minutes of use, and I have to reconnect to fix this. The network is IEEE 802.11bgn and uses WPA2/AES encryption. I don't have similar problems on the Windows side, and the network works fine on every other device. iwconfig shows this:

        wlan0  IEEE 802.11bgn  ESSID:"WLAN-AP"
               Mode:Managed  Frequency:2.412 GHz  Access Point: 00:1E:AB:05:EF:31
               Bit Rate=300 Mb/s  Tx-Power=20 dBm
               Retry long limit:7  RTS thr=2347 B  Fragment thr:off
               Encryption key:off
               Power Management:off
               Link Quality=61/70  Signal level=-49 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:0  Invalid misc:2  Missed beacon:0

    So the question is: is there a way to fix this? Thanks in advance.

    Read the article

  • HP 510 laptop doesn't suspend when I close the lid

    - by Yuri
    I have an HP 510 laptop with Ubuntu 12.04. I have all the correct settings for suspending when the lid is closed, as far as I can tell, so I think it's a problem with detecting the event. The way it "should" be detected, from what I can see, is a little hardware switch that is pushed when the lid closes. This switch directly turns off the backlight and sends the suspend signal. All I can think is that the signal isn't being properly interpreted. Can anyone suggest a fix?

    Update: I tested whether the button was actually working, based on the suggestion by josinalvo, and found that in the directory /proc/acpi/button/lid/ there is no LID directory. There is, however, a C1CF folder, and in it is a state file. Using this file instead of LID, I discovered that, no, when I close the lid the state does not change.

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might when integrating BIP, they have had to make a choice about how to integrate.

    1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route, you are going to need to manage templates, data extraction and processing yourself. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: the content manager looks after the report artifacts itself and provides you the UI to manage and run the reports.

    2. Web service level - further up in the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports and users (locally managed) on the server, and the catalog itself, via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server.

    3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized, but again, it's not easy, nor well documented. I have messed with running the server pages inside an IFRAME: not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around.

    So a reasonable amount of choices, with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.

    Read the article

  • Microsoft Unveils New Logo

    Indeed, with those four familiar colored squares - set in a bigger square rather than standing on a point in a diamond - Microsoft's new corporate logo seems almost inevitable. As you'd expect, the company's name makes up part of the logo, but instead of the thick italic letters it has used for the past two and a half decades, it's in a more standard, lighter font. Jeff Hansen, Microsoft's general manager of brand strategy, notes that the point of the new logo is to signal the heritage but also signal the future - a newness and a freshness. It's very fitting when you consider just how many...

    Read the article

  • Vala: How to use Glade's ActionGroup correctly

    - by Tom
    Could someone give me an example of how to use Glade's ActionGroup with a Gtk.Builder in a Vala application? If I have an ActionGroup with an Action save, and on this action I've set the activate signal to "on_save_clicked", should it be fine to write

        [CCode (instance_pos = -1)]
        public void on_save_clicked(){ print("I would like to save, please\n"); }

    in global scope and then use builder.connect_signals(null)? When I do this, I just get "Could not find signal handler 'on_save_clicked'" when executing the program. No compile errors. I'm compiling with valac --pkg gtk+-3.0 test.vala and using Glade 3.10.0.

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color") compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse with "32 bit color", which usually refers to standard 8 bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc). The following can be assumed to be in place: 1: A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created. 2: A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort). 3: Drivers for the graphics card that support 10-bit output. If applications that support 10-bit output and color profiles would be used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an adobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: If the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc? Does Ubuntu differ from other operating systems (Linux and others) on this point?
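
    The level-counting argument above is easy to check numerically. The sketch below is illustrative only; it uses the post's example gain of 246/255 for the correction done in 8 bits and 987 out of 1023 for the correction done in 10 bits, and counts how many distinct output codes survive each conversion of a 256-level sRGB ramp.

        // Counts distinct output levels after gamut correction of a 256-level sRGB ramp.
        #include <cmath>
        #include <iostream>
        #include <set>

        int main() {
            std::set<long> out8, out10;
            for (int i = 0; i <= 255; ++i) {
                // Correction applied in 8 bits: full red 255 becomes 246.
                out8.insert(std::lround(i * 246.0 / 255.0));
                // 8-bit value expanded to 10 bits, then corrected so 1023 becomes 987.
                long v10 = std::lround(i * 1023.0 / 255.0);
                out10.insert(std::lround(v10 * 987.0 / 1023.0));
            }
            std::cout << "8-bit path:  " << out8.size()  << " distinct levels\n";   // prints 247
            std::cout << "10-bit path: " << out10.size() << " distinct levels\n";   // prints 256
            return 0;
        }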

    Read the article

  • Mobile music playback on Android phone keeps buffering

    - by ianio
    I have copied some music to an Ubuntu-synced folder on my Karmic machine. I have the Ubuntu One Android app installed on an Android 2.2 phone. When listening to music I get frequent breaks whilst the music is buffering, even with full signal. Every other music streaming service, such as Last.fm and the BBC player, is fine. What is going wrong? MP3s should not have trouble buffering over a good 3G signal. I have set the cache size to 200 MB, but I can't find a buffer size setting. Is it my MP3s? They are usually on the order of 320 kb/s. This is a deal-breaker for me if there is no solution, which is a massive shame as I like the principle of the software. Cheers.

    Read the article

  • Newly upgraded Ubuntu not displaying on monitor

    - by user211652
    I just recently upgraded from Ubuntu 12.04 to Ubuntu 12.10. I installed the update and tried to boot into the newly installed version. As it booted, the loading screen would pop up and then I would lose signal; my monitor reported that it had no signal. How do I fix this? I really want to use the new version, but if it's not going to work I'll just forget about it. I can, however, boot into the old install, and it works fine.

    Read the article

  • Qwt plot not working: simple plot curve does not appear

    - by user1629213
    I followed the simple plot example from the Qwt examples to plot a curve. The axes and the plot area appear in the Qt main window user interface, but the curve does not. I assigned values to fit the curve, but the curve does not appear. Any suggestions on how to solve the problem? Here is my code:

        MainWindow::MainWindow( int argc, char** argv, QWidget *parent )
            : QMainWindow( parent )
            , qnode( argc, argv )
        {
            ui.setupUi( this ); // Calling this incidentally connects all ui's triggers to on_...() callbacks in this class.
            QObject::connect( ui.actionAbout_Qt, SIGNAL( triggered( bool )), qApp, SLOT( aboutQt( ))); // qApp is a global variable for the application
            ReadSettings( );
            setWindowIcon( QIcon( ":/images/icon.png" ));
            ui.tab_manager->setCurrentIndex( 0 ); // ensure the first tab is showing - qt-designer should have this already hardwired, but often loses it (settings?).
            QObject::connect( &qnode, SIGNAL( rosShutdown( )), this, SLOT( close( )));

            /*********************
            ** Logging
            **********************/
            ui.view_logging->setModel( qnode.loggingModel( ));
            QObject::connect( &qnode, SIGNAL( loggingUpdated( )), this, SLOT( updateLoggingView( )));
            QObject::connect( &qnode, SIGNAL( graphReceived( )), this, SLOT( onGraphReceived( )));
            QObject::connect( &qnode, SIGNAL( parameterReceived( )), this, SLOT( onParameterReceived( )));

            /*********************
            ** Auto Start
            **********************/
            if ( ui.checkbox_remember_settings->isChecked( )) {
                on_button_connect_clicked( true );
            }

            ui.parameters->setAttribute( Qt::WA_NoMousePropagation );
            ui.parameters->setAttribute( Qt::WA_OpaquePaintEvent );
            ui.plotgraph->setAttribute( Qt::WA_NoMousePropagation );
            ui.plotgraph->setAttribute( Qt::WA_OpaquePaintEvent );

            p_plot = new QwtPlot(ui.plotgraph);
            p_plot->setTitle( "Plot LinVel" );
            p_plot->setCanvasBackground( Qt::white );

            // Axis
            p_plot->setAxisTitle( QwtPlot::xBottom, "Time(sec)" );
            p_plot->setAxisTitle( QwtPlot::yLeft, "Linear Velocity (m/sec)" );
            p_plot->setAxisScale( QwtPlot::yLeft, 0.0, 10.0 );
            p_plot->setAxisScale( QwtPlot::xBottom, 0.0, 50.0 );
            p_plot->insertLegend( new QwtLegend() );
            //samplingThread.start();

            QwtPlotGrid *grid = new QwtPlotGrid();
            grid->attach( p_plot );

            curve = new QwtPlotCurve();
            curve->setTitle( "Linear velocity" );

            // Set curve styles
            curve->setPen( Qt::blue, 4 ),
            curve->setRenderHint( QwtPlotItem::RenderAntialiased, true );

            QwtSymbol *symbol = new QwtSymbol( QwtSymbol::Ellipse, QBrush( Qt::yellow), QPen( Qt::red, 2 ), QSize( 8, 8 ) );
            curve->setSymbol( symbol );

            // Assign values to the curve
            //curve->setSamples(ui.plotgraph.get_linv_g());//yaw_g,trav_g,wall_g;
            curve->attach( p_plot );

            p_plot->resize( 600, 400 );
            p_plot->show();
        }

        void MainWindow::onGraphReceived( )
        {
            {
                QMutexLocker locker( &qnode.m_mutex );
            }
        }

        void MainWindow::onParameterReceived( )
        {
            {
                QMutexLocker locker( &qnode.m_mutex );
                std::vector<double> p_ = qnode.get_parameters();
                std::cout << p_[0]<<" "<<p_[1]<<" "<<p_[2]<<" "<<p_[3]<<" "<<p_[4] << std::endl;
            }
        }

    Any help?
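
    One thing that stands out in the code above: the only line that would actually hand sample data to the curve (curve->setSamples(...)) is commented out, and nothing calls replot() when new data arrives later (onGraphReceived() only locks a mutex). With no samples, the curve has nothing to draw. Below is a minimal sketch of the missing step; it assumes Qwt 6's QwtPlotCurve::setSamples(QVector<QPointF>) overload and uses made-up data in place of qnode's velocity log.

        // Sketch only: give an attached curve some data and repaint the plot.
        #include <cmath>
        #include <QPointF>
        #include <QVector>
        #include <qwt_plot.h>
        #include <qwt_plot_curve.h>

        static void plotDemoData(QwtPlotCurve *curve, QwtPlot *plot)
        {
            QVector<QPointF> points;                                        // stand-in for the real velocity samples
            for (int t = 0; t <= 50; ++t)
                points.append(QPointF(t, 5.0 + 4.0 * std::sin(t / 5.0)));   // stays inside the 0..10 y-scale

            curve->setSamples(points);   // without a setSamples() call the curve has nothing to draw
            plot->replot();              // repaint; call this again whenever new data arrives (e.g. in onGraphReceived())
        }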

    Read the article

  • Android alarm not able to launch alarm activity

    - by user965830
    I am new to Android programming and am trying to make this simple alarm app. I have my code written and it compiles with no errors. The app runs in the emulator, that is, the main activity asks for the date and time, but when I click on the confirm button it displays the message "Unfortunately, Timer1 has stopped working." The code for my main activity is as follows:

        public void onClick(View v) {
            EditText date = (EditText) findViewById(R.id.editDate);
            EditText month = (EditText) findViewById(R.id.editMonth);
            EditText hour = (EditText) findViewById(R.id.editHour);
            EditText min = (EditText) findViewById(R.id.editMin);
            int dt = Integer.parseInt(date.getText().toString());
            int mon = Integer.parseInt(month.getText().toString());
            int hr = Integer.parseInt(hour.getText().toString());
            int mnt = Integer.parseInt(min.getText().toString());
            Intent myIntent = new Intent(MainActivity.this, AlarmActivity.class);
            pendingIntent = PendingIntent.getService(MainActivity.this, 0, myIntent, 0);
            AlarmManager alarmManager = (AlarmManager) getSystemService(ALARM_SERVICE);
            Calendar calendar = Calendar.getInstance();
            calendar.set(Calendar.DATE, dt);
            calendar.set(Calendar.MONTH, mon);
            calendar.set(Calendar.HOUR, hr);
            calendar.set(Calendar.MINUTE, mnt);
            alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), pendingIntent);
        }

    I do not understand what all the errors in logcat mean, so I am posting them:

        06-25 16:03:32.175: I/Process(566): Sending signal. PID: 566 SIG: 9
        06-25 16:03:53.775: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:03:54.046: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'
        06-25 16:03:54.255: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:03:54.305: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'
        06-25 16:03:54.735: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:03:54.785: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'
        06-25 16:03:54.925: D/gralloc_goldfish(612): Emulator without GPU emulation detected.
        06-25 16:05:09.605: D/AndroidRuntime(612): Shutting down VM
        06-25 16:05:09.605: W/dalvikvm(612): threadid=1: thread exiting with uncaught exception (group=0x409c01f8)
        06-25 16:05:09.685: E/AndroidRuntime(612): FATAL EXCEPTION: main
        06-25 16:05:09.685: E/AndroidRuntime(612): java.lang.NumberFormatException: Invalid int: "android.widget.EditText@41030b40"
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.invalidInt(Integer.java:138)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parse(Integer.java:375)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parseInt(Integer.java:366)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parseInt(Integer.java:332)
        06-25 16:05:09.685: E/AndroidRuntime(612): at com.kapymay.tversion1.MainActivity$1.onClick(MainActivity.java:34)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.view.View.performClick(View.java:3511)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.view.View$PerformClick.run(View.java:14105)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Handler.handleCallback(Handler.java:605)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Handler.dispatchMessage(Handler.java:92)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Looper.loop(Looper.java:137)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.app.ActivityThread.main(ActivityThread.java:4424)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.reflect.Method.invokeNative(Native Method)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.reflect.Method.invoke(Method.java:511)
        06-25 16:05:09.685: E/AndroidRuntime(612): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
        06-25 16:05:09.685: E/AndroidRuntime(612): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
        06-25 16:05:09.685: E/AndroidRuntime(612): at dalvik.system.NativeStart.main(Native Method)
        06-25 16:05:10.445: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:05:10.575: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'

    Read the article

  • multiple timers in one process (without linking to rt)

    - by Richard
    Hi, is there any way to register multiple timers in a single process? I have tried the following code, but without success (use "gcc -lrt" to compile it). The program outputs nothing, although it should at least print "test". Is it possibly due to the dependency on linking to rt?

        #define TT_SIGUSR1 (SIGRTMAX)
        #define TT_SIGUSR2 (SIGRTMAX - 1)
        #define TIME_INTERVAL_1 1
        #define TIME_INTERVAL_2 2

        #include <signal.h>
        #include <time.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <linux/unistd.h>
        #include <sys/syscall.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <sched.h>
        #include <signal.h>
        #include <setjmp.h>
        #include <errno.h>
        #include <assert.h>

        timer_t create_timer(int signo)
        {
            timer_t timerid;
            struct sigevent se;
            se.sigev_signo = signo;
            if (timer_create(CLOCK_REALTIME, &se, &timerid) == -1) {
                fprintf(stderr, "Failed to create timer\n");
                exit(-1);
            }
            return timerid;
        }

        void set_timer(timer_t timerid, int seconds)
        {
            struct itimerspec timervals;
            timervals.it_value.tv_sec = seconds;
            timervals.it_value.tv_nsec = 0;
            timervals.it_interval.tv_sec = seconds;
            timervals.it_interval.tv_nsec = 0;

            if (timer_settime(timerid, 0, &timervals, NULL) == -1) {
                fprintf(stderr, "Failed to start timer\n");
                exit(-1);
            }
            return;
        }

        void install_sighandler2(int signo, void(*handler)(int))
        {
            struct sigaction sigact;
            sigemptyset(&sigact.sa_mask);
            sigact.sa_flags = SA_SIGINFO;
            //register the Signal Handler
            sigact.sa_sigaction = handler;
            // Set up sigaction to catch signal first timer
            if (sigaction(signo, &sigact, NULL) == -1) {
                printf("sigaction failed");
                return -1;
            }
        }

        void install_sighandler(int signo, void(*handler)(int))
        {
            sigset_t set;
            struct sigaction act;

            /* Setup the handler */
            act.sa_handler = handler;
            act.sa_flags = SA_RESTART;
            sigaction(signo, &act, 0);

            /* Unblock the signal */
            sigemptyset(&set);
            sigaddset(&set, signo);
            sigprocmask(SIG_UNBLOCK, &set, NULL);
            return;
        }

        void signal_handler(int signo)
        {
            printf("receiving sig %d", signo);
        }

        int main()
        {
            printf("test");
            timer_t timer1 = create_timer(TT_SIGUSR1);
            timer_t timer2 = create_timer(TT_SIGUSR2);
            set_timer(timer1, TIME_INTERVAL_1);
            set_timer(timer2, TIME_INTERVAL_2);
            install_sighandler2(TT_SIGUSR1, signal_handler);
            install_sighandler(TT_SIGUSR2, signal_handler);
            while (1)
                ;
            return 0;
        }
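
    A few things in the listing above are likely to keep it from ever printing anything, independent of the rt linking question: the struct sigevent is used without initializing sigev_notify (it must be SIGEV_SIGNAL for signal delivery), the handlers are installed only after the timers are already armed (and the default action for an unhandled real-time signal terminates the process), and "test" is printed without a newline, so it can sit in stdio's buffer while the program spins in while (1). Below is a sketch of the corrected pieces only, keeping the original's C style; TT_SIGUSR1/2, set_timer() and install_sighandler() are assumed to stay as in the original listing.

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>
        #include <unistd.h>

        timer_t create_timer(int signo)
        {
            timer_t timerid;
            struct sigevent se;
            memset(&se, 0, sizeof(se));       /* se was previously used uninitialized */
            se.sigev_notify = SIGEV_SIGNAL;   /* ask for signal delivery explicitly   */
            se.sigev_signo = signo;
            if (timer_create(CLOCK_REALTIME, &se, &timerid) == -1) {
                perror("timer_create");
                exit(EXIT_FAILURE);
            }
            return timerid;
        }

        void signal_handler(int signo)
        {
            printf("receiving sig %d\n", signo);   /* newline so the message is flushed */
        }

        int main(void)
        {
            printf("test\n");                      /* newline so "test" actually appears */

            /* install the handlers BEFORE arming the timers */
            install_sighandler(TT_SIGUSR1, signal_handler);
            install_sighandler(TT_SIGUSR2, signal_handler);

            timer_t timer1 = create_timer(TT_SIGUSR1);
            timer_t timer2 = create_timer(TT_SIGUSR2);
            set_timer(timer1, TIME_INTERVAL_1);
            set_timer(timer2, TIME_INTERVAL_2);

            while (1)
                pause();                           /* sleep until a signal arrives instead of busy-waiting */
            return 0;
        }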

    Read the article

  • Java ProcessBuilder - exec a file which is not in the path on OS X

    - by Jakob
    Okay, I'm trying to make ChucK available in exported Processing sketches, i.e. if I export an app from Processing, the ChucK VM binary will be executed from inside the app. So as a user of said app you don't need to worry about ChucK being in your path at all. Right now I'm generating and executing a bash script file, but this way I don't get any console output from ChucK back into Processing:

        #!/bin/bash
        cd "[to where the Chuck executable is located]"
        ./chuck --kill
        killall chuck # just to make sure
        ./chuck chuckScript1.ck cuckScriptn.ck

    then

        Process p = Runtime.getRuntime().exec("chmod 777 "+scriptPath);
        p = Runtime.getRuntime().exec(scriptPath);

    This works, but I want to run ChucK directly from Processing instead, and can't get it to execute:

        String chuckPath = "[folder in which the chuck executable is located]";
        ProcessBuilder builder = new ProcessBuilder (chuckPath+"/chuck", "test.ck");
        final Process process = builder.start();
        InputStream is = process.getInputStream();
        InputStreamReader isr = new InputStreamReader(is);
        BufferedReader br = new BufferedReader(isr);
        String line;
        while((line = br.readLine()) != null)
            println(line);
        println("done chuckin'! exitValue: " + process.exitValue());

    Sorry if this is newbie style :D

    Read the article

  • How can I improve real-time behavior in a multi-threaded app using pthreads and condition variables?

    - by WilliamKF
    I have a multi-threaded application that is using pthreads. I have a mutex lock and condition variables. There are two threads: one thread is producing data for the second thread, a worker, which is trying to process the produced data in a real-time fashion, such that one chunk is processed as close to the elapsing of a fixed time period as possible. This works pretty well; however, occasionally when the producer thread releases the condition upon which the worker is waiting, a delay of up to almost a whole second is seen before the worker thread gets control and executes again.

    I know this because right before the producer releases the condition upon which the worker is waiting, it does a chunk of processing for the worker if it is time to process another chunk; then, immediately upon receiving the condition in the worker thread, the worker also does a chunk of processing if it is time. In this latter case, I am seeing that I am late processing the chunk many times. I'd like to eliminate this lost efficiency and do what I can to keep the chunks ticking away as close as possible to the desired frequency.

    Is there anything I can do to reduce the delay between the producer releasing the condition and the worker detecting that the condition has been released, so that the worker resumes processing sooner? For example, would it help for the producer to call something to force itself to be context-switched out?

    The bottom line is that the worker has to wait each time it asks the producer to create work for it, so that the producer can muck with the worker's data structures before telling the worker it is ready to run in parallel again. This period of exclusive access by the producer is meant to be short, but during this period I am also checking for real-time work to be done by the producer on behalf of the worker, while the producer has exclusive access. Somehow my handoff back to running in parallel again occasionally results in significant delay that I would like to avoid. Please suggest how this might best be accomplished.
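
    The handoff described here is the standard mutex-plus-condition-variable pattern, and most of the avoidable latency in it comes from two places: the producer holding the mutex while it does its own long processing (so the woken worker immediately blocks again), and the worker doing its chunk of work inside the critical section. Below is a minimal sketch of the tight version of the exchange (names are made up, not the original code). If wakeups are still slow once the locked region is this small, the usual next lever is thread priority, for example a real-time scheduling class such as SCHED_FIFO set via pthread_setschedparam, rather than the locking pattern itself.

        /* Minimal producer/worker handoff sketch using pthreads. */
        #include <pthread.h>
        #include <stddef.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t ready_cond = PTHREAD_COND_INITIALIZER;
        static int work_ready = 0;

        static void *worker(void *arg)
        {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&lock);
                while (!work_ready)                      /* loop guards against spurious wakeups */
                    pthread_cond_wait(&ready_cond, &lock);
                work_ready = 0;
                pthread_mutex_unlock(&lock);             /* drop the lock before the long work   */

                /* ... process one chunk here, outside the critical section ... */
            }
            return NULL;
        }

        static void produce_one_chunk(void)
        {
            /* ... prepare the worker's data structures (exclusive access) ... */

            pthread_mutex_lock(&lock);                   /* hold the mutex only to flip the flag */
            work_ready = 1;
            pthread_cond_signal(&ready_cond);
            pthread_mutex_unlock(&lock);                 /* unlock promptly so the woken worker
                                                            can actually acquire the mutex       */
        }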

    Read the article

  • I have two choices of Master's classes this fall. Which is the most useful?

    - by ahplummer
    (For background purposes and context): I am a Software Engineer, and manage other Software Engineers currently. I kind of wear two hats right now: one of a programmer, and one as a 'team lead'. In this regard, I've started going back to school to get my Master's degree with an emphasis in Computer Science. I already have a Bachelor's in Computer Science, and have been working in the field for about 13 years. Our primary development environment is a Windows environment, writing in .NET, Delphi, and SQL Server. Choice #1: CST 798 DATA VISUALIZATION Course Description: Basically, this is a course on the "Processing" language: http://processing.org/ Choice #2: CST 711 INFORMATICS Course Description: (From catalog): Informatics is the science of the use and processing of data, information, and knowledge. This course covers a variety of applied issues from information technology, information management at a variety of levels, ranging from simple data entry, to the creation, design and implementation of new information systems, to the development of models. Topics include basic information representation, processing, searching, and organization, evaluation and analysis of information, Internet-based information access tools, ethics and economics of information sharing.

    Read the article

  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single activemq broker with 2 queues, Q1 and Q2. I have two ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events: 1) A message that triggers a long-running job is published to queue Q1. Call this M1. 2) M1 is dispatched to consumer C1, kicking off a long operation. 3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3. 4) M2 is dispatched to C2 which quickly runs the short job. 5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched. Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time until C1 finishes processing M1. So, M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish. Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But, I'd rather not do that since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day tying up memory.

    Read the article

  • Generate and download a text file in JavaScript

    - by Mark B
    All my research so far suggests this can't be done, but I'm hoping someone here has some cunning ideas. I have a form on a website which allows users to bulk upload lots of URLs to add to a list on the server. There's quite a lot of server-side processing to do on each URL, so to avoid timeouts and to display progress, I've implemented the upload using jQuery to submit the URLs one at a time using ajax. This is all working nicely. However, part of the processing on each URL is deduplicating it against the complete list. The ajax call returns a status indicating either a successful upload or a rejection due to duplication. As the upload progresses, I tell the user how many URLs have been rejected as duplicates (along with overall progress and ETA). The problem now is how to give the user a complete list of the failed duplicate URLs. I've kept them in an array in my jQuery, and would like the user to be able to click on a link on the form to download a text file containing those URLs. Is this possible just using client-side processing? The server-side processing basically handles a single keyword at a time. I'd rather not have to store the duplicates in a database table with some kind of session key which gets sent with every ajax call, and is then used at the end to generate the text file server-side (and then gets cleaned up some time later). I can see how to do this, but it seems very clunky and a bit 20th century.

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has some columns(key(primary key), ptime, aname, status, content) we have something called producer which puts in rows in this table and we have consumer which does an order-by on the key column and fetches the first row which has status as 'pending'. The consumer does some processing on this row: 1. updates status to "processing" 2. does some processing using content 3. deletes the row we are facing contention issues when we try to run multiple consumers(probably due to the order-by which does a full table scan)... using Advanced queues would be our next step but before we go there we want to check what is the max throughput we can achieve with multiple consumers and producer on the table. What are the optimizations we can do to get the best numbers possible? Can we do an in-memory processing where a consumer fetches 1000 rows at a time processes and deletes? will that improve? What are other possibilities? partitioning of table? parallelization? Index organized tables?...

    Read the article

  • Data Integration Solution?

    - by Shlomo
    At my company we have a number of data feeds and processing that run on any given day. The number of feeds and processing steps is starting to out-number the ability to manage it ad-hoc as it is managed currently. Is there a good solution that helps with logging and managing/scheduling dependencies? For example: A: When file x is FTP dropped into directory D1, kick off processing step B B: Load flat file into DB1 C: When file y is FTP dropped into directory D2, kick off processing Step D D: Load flat file into DB11 E: When B and D are done, churn through the data, and load new data into DB111. F: When Step E is done, launch application process P G: etc... I want those steps to run at the appropriate times, not to mention if B fails, there's no reason to run steps E & F, but I could still run C & D. When I re-run B successfully, it should trigger just E & F to re-run, not C & D. We're a .NET/C#/Sql Server shop, and I'm already familiar with SSIS. Is that really the best there is? That manages steps well, but not external dependencies, or logging. Open source (.NET) preferred, but not required.

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I think it means, and why I’m excited about it. In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in it’s use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. I’ve divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas. Persistence The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it’s a foundation choice that deals not only with a lot of expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to. Processing Once the decision is made to store the data, the next set of decisions are based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. 
In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one. Presentation This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that are the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understanding why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist. Why I’m excited I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So  watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop to allow me to explore. And I’m happy to share what I learn along the way.

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.   Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, then all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.   We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and to get the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the servcie to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.   I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:     The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header. 
If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.   The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g:   Key Value /WcfSampleService/PersonService.svc/rest/fetch/3 "28cd4796-76b8-451b-adfd-75cb50a50fa6" "28cd4796-76b8-451b-adfd-75cb50a50fa6" /WcfSampleService/PersonService.svc/rest/fetch/3    In Node we read the cache using the incoming URL path as the key and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache – which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.   Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, average response time of 975 milliseconds with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, mean response time of 1853 milliseconds, with 90% of response served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.   Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the  service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.     * - actually, the accelerator will work nicely for any HTTP request, where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") – which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.

    Read the article

  • Can't remove burg theme packages

    - by Lassi
    Today after trying to install and remove BURG and few themes I faced an issue. Now I can't install or remove anything. Here is the output (unfortunately partly in Finnish, I couldn't change language since it also seems to depend on package listings: lassi@lassi-ubuntu:~$ sudo apt-get autoremove Luetaan pakettiluetteloita... Valmis Muodostetaan riippuvuussuhteiden puu Luetaan tilatietoja... Valmis Seuraavat paketit POISTETAAN: burg-theme-fortune burg-theme-gnome burg-theme-picchio 0 päivitetty, 0 uutta asennusta, 3 poistettavaa ja 0 päivittämätöntä. 3 ei asennettu kokonaan tai poistettiin. Toiminnon jälkeen vapautuu 7 180 k t levytilaa. Haluatko jatkaa [K/e]? k (Luetaan tietokantaa... 166462 files and directories currently installed.) Poistetaan pakettia burg-theme-fortune... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-fortune (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-gnome... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-gnome (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-picchio... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-picchio (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Käsittelyssä tapahtui liian monta virhettä: burg-theme-fortune burg-theme-gnome burg-theme-picchio E: Sub-process /usr/bin/dpkg returned an error code (1) Basically what seems to happen is this: It creates the package lists, then tries to remove packet burg-theme-fortune. This fails as update-burg command was not found. Then dpkg reports an error while processing the packet. Same goes with all 3 packages. In the end it claims that there were too many errors, and packages stay installed. I also tried installing burg as it tries to run command update-burg, but appears that it tries to delete these packages always when I try to install or remove or do anything with apt. Any ideas how I could solve this issue? Edit: Here is the output of apt-get install burg (tried installing again to get English output) lassi@lassi-ubuntu:~$ LC_ALL=C sudo apt-get install burg [sudo] password for lassi: Reading package lists... Done Building dependency tree Reading state information... Done burg is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0 B/6169 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 167497 files and directories currently installed.) Preparing to replace burg-theme-fortune 0.5.0-1 (using .../burg-theme-fortune_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-fortune ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. 
No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-gnome 0.5.0-1 (using .../burg-theme-gnome_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-gnome ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-picchio 0.5.0-1 (using .../burg-theme-picchio_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-picchio ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) lassi@lassi-ubuntu:~$

    Read the article
