Search Results

Search found 2327 results on 94 pages for 'signal strength'.


  • How to add a menu dynamically in Qt

    - by Solitaire
    Hi, I want to add a submenu to a menu item dynamically. How can I achieve this? I tried the following: I created an action and a submenu, then added the submenu to the action. I connected the "triggered" signal of the action, but it crashes when I click on the action. I have also handled the "aboutToShow" signal of the menu, and it likewise crashes when I click on the action. Here is the sample code:

        Submenu = new QMenu(this);
        connect(Submenu, SIGNAL(aboutToShow()), this, SLOT(move()));
        QAction *test = new QAction(tr("Selection"), this);
        test->setMenu(Submenu);
        menuBar()->addAction(test);

    I want to get a notification before the submenu is displayed.
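
    For reference, a minimal sketch (not the asker's code; the class and slot names are illustrative) of getting that notification safely: keep the submenu parented so it stays alive, and connect aboutToShow to a slot with an unambiguous name (QWidget already has move(int,int), so the zero-argument SLOT(move()) above will not match any slot):

        // Sketch: lazily (re)populating a submenu just before it is shown.
        #include <QMainWindow>
        #include <QMenuBar>
        #include <QMenu>

        class Window : public QMainWindow {
            Q_OBJECT
        public:
            Window() {
                QAction *test = new QAction(tr("Selection"), this);
                submenu_ = new QMenu(this);          // parented: stays alive
                test->setMenu(submenu_);
                menuBar()->addAction(test);
                // aboutToShow fires just before the submenu is displayed.
                connect(submenu_, SIGNAL(aboutToShow()), this, SLOT(fillSubmenu()));
            }
        private slots:
            void fillSubmenu() {
                submenu_->clear();                   // rebuild the menu dynamically
                submenu_->addAction(tr("Dynamic item 1"));
                submenu_->addAction(tr("Dynamic item 2"));
            }
        private:
            QMenu *submenu_;
        };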


  • Referencing other modules in atexit

    - by Dmitry Risenberg
    I have a function that is responsible for killing a child process when the program ends:

        class MySingleton:
            def __init__(self):
                import atexit
                atexit.register(self.stop)

            def stop(self):
                os.kill(self.sel_server_pid, signal.SIGTERM)

    However, I get an error message when this function is called:

        Traceback (most recent call last):
          File "/usr/lib/python2.5/atexit.py", line 24, in _run_exitfuncs
            func(*targs, **kargs)
          File "/home/commando/Development/Diploma/streaminatr/stream/selenium_tests.py", line 66, in stop
            os.kill(self.sel_server_pid, signal.SIGTERM)
        AttributeError: 'NoneType' object has no attribute 'kill'

    Looks like the os and signal modules get unloaded before atexit is called. Re-importing them solves the problem, but this behaviour seems weird to me - these modules are imported before I register my handler, so why are they unloaded before my own exit handler runs?


  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

    Part 2: SQL Server Memory Management

    As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the working set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

    1. Conventional Memory Model
    The conventional model is the default SQL Server memory model and has the following properties:
    - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions.
    - OS uses 4K pages (not to be confused with SQL Server "pages", which are 8K regions of committed memory).
    - Pageable - can be paged out to disk by the operating system.

    2. Locked Page Model
    The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
    - Dynamic - can grow or shrink its working set in the same way as the conventional model.
    - OS uses 4K pages.
    - Non-pageable - when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.
    A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory - "locked" memory works perfectly well with "dynamic" memory.
    * Note: in "Denali" (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise Edition and above), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled', 1.

    3. Large Page Model
    The large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
    - Static - memory is allocated at startup and does not change.
    - OS uses large (>2MB) pages.
    - Non-pageable.
    The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

    When does SQL Server grow?
    For "dynamic" configurations (conventional and locked memory models), the sqlservr.exe process grows - allocates and commits memory from the OS - in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:
    - The SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. In SQL Server 2008 this value represents single-page allocations; in "Denali" it represents any size page allocations and also managed CLR procedure allocations.
    - Memory signals from the OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory, a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free, the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set. (A minimal code sketch of these notification objects appears at the end of this post.)
    To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.

    When does SQL Server shrink caches?
    SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into "internal" and "external".
    - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory, SQL Server does the following:
      - Frees unused memory.
      - Notifies Memory Manager clients to release memory:
        - Caches - free unreferenced cache objects.
        - Buffer pool - based on oldest access times.
    The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.
    - Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, SQL Server will free memory for re-use internally, but not release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.
    Memory pressure handling has not changed much since SQL 2005 and was described in detail in a blog post by Slava Oks.
    Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more "desktop" friendly. It will empty its working set after a period of inactivity.

    How does SQL Server respond to changing OS memory?
    SQL Server 2005 introduced support for hot-add memory. This feature, available in Enterprise Edition and above, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-added physical memory to the guest. This means that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more "physical" memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its memory "target" (based on available OS memory and max server memory) accordingly. In "Denali", Standard Edition will also support hot-add memory in sqlservr.exe when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).

    How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?
    When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.

    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?
    If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache-shrinking description above). This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job which varies max server memory according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability, but it is something we're looking into.

    What memory management changes are there in SQL Server "Denali"?
    In SQL Server "Denali" (aka SQL11) the Memory Manager has been rewritten to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings. Another important change is that there is no more AWE or hot-add support for 32-bit editions. This means that if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

    In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
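
    The low/high memory signals described above are ordinary Win32 memory resource notification objects, so any process can observe what SQL Server's Resource Monitor sees. A minimal sketch (not from the original post; error handling omitted):

        // Sketch: polling the OS low/high memory signals via the Win32
        // memory resource notification API.
        #include <windows.h>
        #include <cstdio>

        int main() {
            HANDLE low  = CreateMemoryResourceNotification(LowMemoryResourceNotification);
            HANDLE high = CreateMemoryResourceNotification(HighMemoryResourceNotification);
            BOOL lowSet = FALSE, highSet = FALSE;
            QueryMemoryResourceNotification(low,  &lowSet);   // set at ~32MB free per 4GB
            QueryMemoryResourceNotification(high, &highSet);  // set at ~96MB free per 4GB
            std::printf("low-memory signal: %d, high-memory signal: %d\n", lowSet, highSet);
            CloseHandle(low);
            CloseHandle(high);
            return 0;
        }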


  • Using MVP, how to create a view from another view, linked with the same model object

    - by Dinaiz
    Background
    We use the Model-View-Presenter design pattern along with the abstract factory pattern and the "signal/slot" pattern in our application, to fulfill 2 main requirements:
    - Enhance testability (very lightweight GUI, every action can be simulated in unit tests)
    - Make the "view" totally independent from the rest, so we can change the actual view implementation without changing anything else
    In order to do so, our code is divided into 4 layers:
    - Core: which holds the model
    - Presenter: which manages interactions between the view interfaces (see below) and the core
    - View interfaces: they define the signals and slots for a view, but not the implementation
    - Views: the actual implementation of the views
    When the presenter creates or deals with views, it uses an abstract factory and only knows about the view interfaces. It does the signal/slot binding between view interfaces; it doesn't care about the actual implementation. In the "views" layer, we have a concrete factory which deals with implementations. The signal/slot mechanism is implemented using a custom framework built upon boost::function. Really, what we have is something like this: http://martinfowler.com/eaaDev/PassiveScreen.html. Everything works fine.

    The problem
    However, there's a problem I don't know how to solve. Let's take a very simple drag-and-drop example. I have two container views (ContainerView1, ContainerView2). ContainerView1 has an ItemView1. I drag the ItemView1 from ContainerView1 to ContainerView2. ContainerView2 must create an ItemView2, of a different type, but which "points" to the same model object as ItemView1. So ContainerView2 gets a callback called for the drop action with ItemView1 as a parameter, and it calls ContainerPresenter2, passing it ItemView1. In this case we are only dealing with views. In MVP-PV, views aren't supposed to know anything about the presenter nor the model, right? How can I create the ItemView2 from the ItemView1, not knowing which model object ItemView1 is representing? I thought about adding an "itemId" to every view, this id being the id of the core object the view represents. So in pseudocode, ContainerPresenter2 would do something like:

        itemView2 = abstractWidgetFactory.createItemView2();
        this.add(itemView2, itemView1.getCoreObjectId())

    I don't get too much into details; that just works. The problem I have here is that those itemIds are just like pointers, and pointers can be dangling. Imagine that by mistake I delete itemView1, and this deletes coreObject1. The itemView2 will have a coreObjectId which represents an invalid coreObject. Isn't there a more elegant and "bulletproof" solution? Even though I never did Objective-C or Mac OS X programming, I couldn't help but notice that our framework is very similar to the Cocoa framework. How do they deal with this kind of problem? I couldn't find more in-depth information about that on Google. If someone could shed some light on this... I hope this question isn't too confusing.
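
    One way to make such references bulletproof, sketched in C++ (illustrative names, not the poster's framework): let each view hold a non-owning std::weak_ptr to the core object instead of a raw id, so a view created from another view shares the reference, and a deleted core object is detected instead of dereferenced:

        // Sketch: views hold weak references to the model instead of raw ids.
        #include <iostream>
        #include <memory>

        struct CoreObject { int value = 42; };

        struct ItemView {                     // stand-in for ItemView1/ItemView2
            std::weak_ptr<CoreObject> model;  // non-owning: cannot dangle silently
        };

        int main() {
            auto core = std::make_shared<CoreObject>();   // owned by the model layer
            ItemView view1{core};                         // ItemView1 in ContainerView1
            ItemView view2{view1.model};                  // ItemView2 created on drop

            if (auto m = view2.model.lock())              // nullptr if core was deleted
                std::cout << "model value: " << m->value << '\n';

            core.reset();                                 // simulate deleting coreObject1
            if (!view2.model.lock())
                std::cout << "model gone: the view can degrade gracefully\n";
        }

    Whether views may hold even an opaque weak handle to the model under strict MVP-PV is the design trade-off to decide; an alternative that keeps views model-free is to keep the id scheme but have the presenter validate every id against a registry of weak references before use.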


  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes
    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of programmed I/O
    - Describe the principles of interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution characteristics of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture

    External Devices
    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories:
    - Human readable - e.g. video display
    - Machine readable - e.g. magnetic disk
    - Communications - e.g. wifi card

    I/O Modules
    An I/O module has two major functions:
    - Interface to the processor and memory via the system bus or central switch
    - Interface to one or more peripheral devices by tailored data links

    Module Functions
    The major functions or requirements for an I/O module fall into the following categories:
    - Control and timing
    - Processor communication
    - Device communication
    - Data buffering
    - Error detection
    The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following:
    - Command decoding
    - Data
    - Status reporting
    - Address recognition
    The I/O module must also be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure
    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations:
    - Programmed I/O
    - Interrupt-driven I/O
    - Direct memory access (DMA)

    Programmed I/O
    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register; it takes no further action to alert the processor. (A toy code sketch of this polling pattern appears at the end of these notes.)

    I/O Commands
    To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:
    - Control - used to activate a peripheral and tell it what to do
    - Test - used to test various status conditions associated with an I/O module and its peripherals
    - Read - causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write - causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral
    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions
    With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system. Each device is given a unique identifier or address; when the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible:
    - Memory-mapped I/O
    - Isolated I/O
    (for a detailed explanation, read page 245 of the book). The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O
    Interrupt-driven I/O works as follows:
    - The processor issues an I/O command to a module and then goes on to do some other useful work
    - The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor
    - The processor executes the data transfer and then resumes its former processing

    Interrupt Processing
    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
    - The device issues an interrupt signal to the processor
    - The processor finishes execution of the current instruction before responding to the interrupt
    - The processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
    - The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
    - The interrupt handler processes the interrupt; this includes examination of status information relating to the I/O operation or other event that caused the interrupt
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    - Finally, the PSW and program counter values from the stack are restored
    Design Issues
    Two design issues arise in implementing interrupt I/O:
    - Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
    - If multiple interrupts have occurred, how does the processor decide which one to process?
    Addressing device recognition, 4 general categories of techniques are in common use:
    - Multiple interrupt lines
    - Software poll
    - Daisy chain
    - Bus arbitration
    For a detailed explanation of these approaches, read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:
    - The I/O transfer rate is limited by the speed with which the processor can test and service a device
    - The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

    Direct Memory Access
    When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function
    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common - referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the DMA module the following information:
    - Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    - The address of the I/O device involved, communicated on the data lines
    - The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    - The number of words to be read or written, communicated via the data lines and stored in the data count register
    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer. (A toy register-map sketch of this hand-off follows below.)

    I/O Channels and Processors
    Characteristics of I/O Channels
    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:
    - A selector channel controls multiple high-speed devices.
    - A multiplexor channel can handle I/O with multiple devices at the same time; for low-speed devices, a byte multiplexor accepts or transmits characters as fast as possible to multiple devices.
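
    The four-item DMA hand-off described above can be made concrete with a toy register map (invented for this sketch; real DMA controllers differ):

        // Toy model of programming a DMA controller: four register writes,
        // then the device works alone and interrupts on completion.
        #include <cstdint>

        struct DmaRegs {                  // invented register block
            volatile uint32_t control;    // bit 0: start; bit 1: 1 = read from device
            volatile uint32_t device;     // address of the I/O device involved
            volatile uint32_t mem_addr;   // starting location in memory
            volatile uint32_t count;      // number of words to transfer
        };

        void start_dma_read(DmaRegs *dma, uint32_t dev, uint32_t dst, uint32_t words) {
            dma->device   = dev;          // which peripheral
            dma->mem_addr = dst;          // where in memory
            dma->count    = words;        // how many words
            dma->control  = 0x3;          // start + read; the CPU is now free until
        }                                 // the completion interrupt arrives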
    The External Interface: FireWire and InfiniBand
    Types of Interfaces
    One major characteristic of the interface is whether it is serial or parallel:
    - Parallel interface - there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
    - Serial interface - there is only one line used to transmit data, and bits must be transmitted one at a time
    With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:
    - The I/O module sends a control signal requesting permission to send data
    - The peripheral acknowledges the request
    - The I/O module transfers data
    - The peripheral acknowledges receipt of data
    For a detailed explanation of FireWire and InfiniBand technology, read pages 264-270 of the textbook.
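
    To make the programmed-I/O pattern from these notes concrete, here is a toy sketch (the register addresses and the "data ready" bit are invented, not from the textbook) of a processor busy-waiting on a memory-mapped status register:

        // Toy illustration of programmed I/O with memory-mapped registers:
        // the status and data registers are read with ordinary loads.
        #include <cstdint>

        volatile uint32_t *const STATUS = reinterpret_cast<uint32_t *>(0x40000000);
        volatile uint32_t *const DATA   = reinterpret_cast<uint32_t *>(0x40000004);

        uint32_t read_word_polled() {
            while ((*STATUS & 0x1u) == 0) {
                // Busy-wait until the I/O module sets the "data ready" bit --
                // this loop is the time the processor spends needlessly busy.
            }
            return *DATA;   // memory-mapped I/O: no special I/O instructions needed
        }

    With interrupt-driven I/O the loop disappears (the module signals the processor instead), and with DMA even the per-word loads are delegated to the DMA module.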


  • Lubuntu: neither shut-down nor restart works

    - by Rantanplan
    I have a freshly installed Lubuntu 14.04.1 (installed with the forcepae option on a laptop with a Pentium M processor). The only problem that I have found so far is that I cannot shut down or restart the laptop. It always continues showing "Lubuntu" and some dots. Pressing Esc, it says:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync [OK]
        * Asking all remaining processes to terminate… [OK]
        * Killing all remaining processes… [fail]
        ModemManager [597]: <info> Caught signal, shutting down…
        ModemManager [597]: <info> ModemManager is shut down
        nm-dispatcher.action: Could not get the system bus. Make sure the message bus daemon is running! Message: Did not receive a reply. Possible causes: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        * Deactivating swap… [OK]
        * Will now halt

    The cursor remains blinking, but the only way to switch the laptop off is to hold the power key pressed for some seconds. I tried sudo shutdown -h now, sudo halt and sudo poweroff, resulting in the same problem. I also tried adding acpi=force to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" in /etc/default/grub and running sudo update-grub; then, using the taskbar's shut-down button led to a direct stop of the laptop, equal to holding the power key pressed for some seconds. Next I followed the answer http://askubuntu.com/a/202481/288322. Now I directly receive some messages during shut-down, starting:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync [OK]
        * Asking all remaining processes to terminate… [OK]
        [ 240.944277] INFO: task kworker/0:2:24: block for more than 120 seconds.
        [ 240.944461] Tainted: G S 3.13.0-34-generic #60-Ubuntu
        [ 240.944623] "echo 0 > /proc/sys/kernel/hung_tasks_timeout_secs" disables this message.

    followed by some more similar lines and then:

        * Killing all remaining processes… [fail]
        ModemManager [576]: <info> Caught signal, shutting down…
        nm-dispatcher.action: Caught signal 15, shutting down...
        ModemManager [576]: <info> ModemManager is shut down
        * Deactivating swap… [OK]
        * Will now halt
        [ 600.944276] INFO: task kworker/0:2:24: block for more than 120 seconds.
        [ 600.944458] Tainted: G S 3.13.0-34-generic #60-Ubuntu
        [ 600.944619] "echo 0 > /proc/sys/kernel/hung_tasks_timeout_secs" disables this message.

    Then nothing more came during the next 5 minutes. If you know where I can find relevant error information, I will be happy to look for it.


  • Bluetooth 4.0 in the starting blocks: more powerful, it enables the development of sports, medical and home-automation applications

    Bluetooth 4.0 in the starting blocks. It would make sports, medical and home-automation applications possible. Bluetooth, in its version 4, is said to be ready to leap. That is the message just delivered on the technology's official site. In any case, Bluetooth 4 marks a real evolution. The signal's range will now be able to exceed 60 metres (and be adjustable). Its power consumption will be lower. And Bluetooth will now include the 802.11 radio standard for high-speed file transfer. These developments, notably the adjustable signal range, will make it possible to broaden the range of products that can be integrated…


  • Why does changing signals cause NameErrors in sane code? - PyGtk issue

    - by boywithaxe
    I'm working on a very simple app for Ubuntu. I've asked a question on Stack Overflow, and it seems that the issue I am having is caused by signals, not by the scope of variables, as I originally thought. The problem I am having is that when TextBox emits a signal through activate, the whole code works without a glitch. But when I change the signal to insert-at-click, it returns NameErrors in every non-TextBox-linked function. Now, it is highly possible I am doing something completely wrong here, but is it at least possible that signals could affect global variable assignments?


  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency. There are also idioms for creating smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as the for, while and do constructs of structured programming generate code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements. In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use? For example, a specific example of a system for multithreaded programming with multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:

        th_id = omp_get_thread_num();
        #pragma omp critical
        {
            cout << "Hello World from thread " << th_id << '\n';
        }

    This example is an excerpt from http://en.wikipedia.org/wiki/OpenMP. Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:

        wait(sem);
        th_id = get_thread_num();
        cout << "Hello World from thread " << th_id << '\n';
        signal(sem);

    In this example, things are pretty simple, and just a simple review is enough to show that the wait() and signal() calls are matched and, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated and use multiple semaphores (both binary and counting) spread across multiple functions with complex conditions that can be called by many threads. The consequences of creating deadlock or failing to make things thread-safe can be hard to manage. Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?
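
    One modern rendering of the same critical region, sketched in C++11 (an illustration of the scoped-lock idiom, not the only replacement for semaphores): the lock and unlock are tied to a block, so they cannot be mismatched, much as structured loops tamed goto:

        // std::mutex + std::lock_guard replace the wait()/signal() pair;
        // the lock is released automatically at scope exit, even on exceptions.
        #include <iostream>
        #include <mutex>
        #include <thread>
        #include <vector>

        std::mutex cout_mutex;

        void hello(int th_id) {
            std::lock_guard<std::mutex> guard(cout_mutex);            // "wait(sem)"
            std::cout << "Hello World from thread " << th_id << '\n';
        }                                                             // "signal(sem)" implicit

        int main() {
            std::vector<std::thread> threads;
            for (int i = 0; i < 4; ++i)
                threads.emplace_back(hello, i);
            for (auto &t : threads)
                t.join();
        }

    Counting semaphores and condition variables still exist underneath (std::condition_variable, and std::counting_semaphore in C++20); what has fallen out of favor is scattering unscoped wait()/signal() calls across functions, not mutual exclusion itself.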


  • Repeated disconnects on WPA PEAP network

    - by exasperated
    My school has a WPA PEAP network with GTC inner authentication. I am able to connect to the network, but once I load a website or two, the network becomes unresponsive (i.e. in Chromium, it gets stuck at "Sending request"), and I'm eventually disconnected. Any help will be greatly appreciated. Here's some log output; I can provide more if needed.

    Ubuntu 13.04, 3.8.0-32-generic, x86_64

    lsusb:

        03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24)

    lsmod:

        iwldvm                241872  0
        mac80211              606457  1 iwldvm
        iwlwifi               173516  1 iwldvm
        cfg80211              511019  3 iwlwifi,mac80211,iwldvm

    dmesg:

        [    3.501227] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X
        [    3.503541] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1
        [    3.527153] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG disabled
        [    3.527162] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS enabled
        [    3.527170] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
        [    3.527178] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TESTMODE enabled
        [    3.527186] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_P2P disabled
        [    3.527192] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6235 AGN, REV=0xB0
        [    3.527240] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [    3.551049] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs'
        [  375.153065] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [  375.159727] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [  375.553201] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [  375.559871] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 1892.110738] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 1892.117357] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 5227.235372] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 5227.242122] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 5817.817954] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 5817.824560] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 5824.571917] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 5824.571929] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 5824.571935] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [ 6956.290061] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 6956.296671] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 6963.080560] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 6963.080566] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 6963.080570] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [ 7613.469241] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 7613.475870] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 7620.201265] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 7620.201278] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 7620.201285] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [ 8232.762453] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [ 8232.769065] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [ 8239.581772] iwlwifi 0000:03:00.0 wlan0: disabling HT/VHT due to WEP/TKIP use
        [ 8239.581784] iwlwifi 0000:03:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        [ 8239.581792] iwlwifi 0000:03:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        [13763.634808] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [13763.641427] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0
        [16955.598953] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
        [16955.605574] iwlwifi 0000:03:00.0: Radio type=0x2-0x1-0x0

    lshw:

        *-network
             description: Wireless interface
             product: Centrino Advanced-N 6235
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: wlan0
             version: 24
             serial: b4:b6:76:a0:4b:3c
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-32-generic firmware=18.168.6.1 ip=10.250.169.96 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn
             resources: irq:46 memory:f7c00000-f7c01fff

    iwlist scan:

        Cell 02 - Address: 24:DE:C6:B0:C7:D9
                  Channel:36
                  Frequency:5.18 GHz (Channel 36)
                  Quality=29/70  Signal level=-81 dBm
                  Encryption key:on
                  ESSID:"CatChat2x"
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=0000004ff3fe419b
                  Extra: Last beacon: 27820ms ago
                  IE: Unknown: 0009436174436861743278
                  IE: Unknown: 01088C129824B048606C
                  IE: Unknown: 030124
                  IE: IEEE 802.11i/WPA2 Version 1
                      Group Cipher : CCMP
                      Pairwise Ciphers (1) : CCMP
                      Authentication Suites (1) : 802.1x
                  IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: 3D1624001B000000FF000000000000000000000000000000
                  IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
                  IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: DD1A00904C3424001B000000FF000000000000000000000000000000
        Cell 04 - Address: 24:DE:C6:B0:C3:E9
                  Channel:149
                  Frequency:5.745 GHz
                  Quality=28/70  Signal level=-82 dBm
                  Encryption key:on
                  ESSID:"CatChat2x"
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=000000181f60e19c
                  Extra: Last beacon: 28680ms ago
                  IE: Unknown: 0009436174436861743278
                  IE: Unknown: 01088C129824B048606C
                  IE: Unknown: 030195
                  IE: Unknown: 050400010000
                  IE: IEEE 802.11i/WPA2 Version 1
                      Group Cipher : CCMP
                      Pairwise Ciphers (1) : CCMP
                      Authentication Suites (1) : 802.1x
                  IE: Unknown: 2D1ACC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: 3D1695001B000000FF000000000000000000000000000000
                  IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
                  IE: Unknown: DD1E00904C33CC011BFFFF000000000000000000000000000000000000000000
                  IE: Unknown: DD1A00904C3495001B000000FF000000000000000000000000000000
                  IE: Unknown: DD07000B8601040817
                  IE: Unknown: DD0E000B860103006170313930333032
        Cell 09 - Address: 24:DE:C6:B0:C0:29
                  Channel:149
                  Frequency:5.745 GHz
                  Quality=39/70  Signal level=-71 dBm
                  Encryption key:on
                  ESSID:"CatChat2x"
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=00000112fb688ede
                  Extra: Last beacon: 27716ms ago

    ifconfig (while connected):

        wlan0     Link encap:Ethernet  HWaddr b4:b6:76:a0:4b:3c
                  inet addr:10.250.16.220  Bcast:10.250.31.255  Mask:255.255.240.0
                  inet6 addr: fe80::b6b6:76ff:fea0:4b3c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:230023 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:130970 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:255999759 (255.9 MB)  TX bytes:16652605 (16.6 MB)

    iwconfig (while connected):

        wlan0     IEEE 802.11abgn  ESSID:"CatChat2x"
                  Mode:Managed  Frequency:5.745 GHz  Access Point: 24:DE:C6:B0:C0:29
                  Bit Rate=6 Mb/s   Tx-Power=15 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=36/70  Signal level=-74 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:3   Missed beacon:0


  • Wireless speed drops over time and doesn't recover unless reconnected

    - by Vili Lehto
    I am using Ubuntu 12.10 64-bit and I have an Asus PCE-N15 wireless card (in a PCI-E slot). The problem is that when I first connect to my WiFi network the speed is just fine, and my link speed never actually drops; it shows a solid 150 Mb/s+ link speed and good signal. But the download and upload speeds drop dramatically after a couple of minutes of use, and I have to reconnect to fix this. The network is IEEE 802.11bgn and uses WPA2/AES encryption. I don't have similar problems on the Windows side, and the network works fine on every other device. iwconfig shows this:

        wlan0     IEEE 802.11bgn  ESSID:"WLAN-AP"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: 00:1E:AB:05:EF:31
                  Bit Rate=300 Mb/s  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr=2347 B  Fragment thr:off
                  Encryption key:off
                  Power Management:off
                  Link Quality=61/70  Signal level=-49 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:2  Missed beacon:0

    So the question is: is there a way to fix this? Thanks in advance.


  • HP 510 laptop doesn't suspend when I close the lid

    - by Yuri
    I have an HP 510 laptop with Ubuntu 12.04. I have all the correct settings for suspending when the lid is closed, as far as I can tell, so I think it's a problem with detecting the event. The way it "should" be detected, from what I can see, is a little hardware switch that is pushed when the lid closes. This switch turns off the backlight directly and sends the suspend signal. All I can think is that the signal isn't being properly interpreted. Can anyone suggest a fix? Update: I tested whether the button was actually working, based on the suggestion by josinalvo, and I found that in the directory /proc/acpi/button/lid/ there is no LID directory. There is, however, a C1CF folder, and in it is a state file. When using this file instead of LID, I discovered that, no, when I close the lid, the state does not change.


  • Microsoft Unveils New Logo

    Indeed, with those four familiar colored squares - set in a bigger square rather than standing on a point in a diamond - Microsoft's new corporate logo seems almost inevitable. As you'd expect, the company's name makes up part of the logo, but instead of the thick italic letters it has used for the past two and a half decades, it's in a more standard, lighter font. Jeff Hansen, Microsoft's general manager of brand strategy, notes that the point of the new logo is to signal the heritage but also signal the future - a newness and a freshness. It's very fitting when you consider just how many...


  • Vala: How to use Glade's ActionGroup correctly

    - by Tom
    Could someone give me an example of how to use Glade's ActionGroup with a Gtk.Builder in a Vala application? If I have an ActionGroup with an action save, and on this action I've set the activate signal to "on_save_clicked", should it be fine to write:

        [CCode (instance_pos = -1)]
        public void on_save_clicked() {
            print("I would like to save, please\n");
        }

    in global scope and then use builder.connect_signals(null)? When I do this, I just get "Could not find signal handler 'on_save_clicked'" when executing the program. No compile errors. I'm compiling with valac --pkg gtk+-3.0 test.vala, and using Glade 3.10.0.


  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color") compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse with "32 bit color", which usually refers to standard 8 bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc). The following can be assumed to be in place: 1: A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created. 2: A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort). 3: Drivers for the graphics card that support 10-bit output. If applications that support 10-bit output and color profiles would be used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an adobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: If the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc? Does Ubuntu differ from other operating systems (Linux and others) on this point?
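
    The level-counting argument above is easy to check mechanically. A toy C++ sketch (the 987/1023 correction factor is taken from the illustrative numbers in the question, not from any real monitor profile):

        // Map all 256 8-bit levels through the same gamut correction at
        // 10-bit and at 8-bit precision, then count distinct output levels.
        #include <cstdio>
        #include <set>

        int main() {
            const double limit = 987.0 / 1023.0;   // illustrative gamut correction
            std::set<int> out10, out8;
            for (int v = 0; v < 256; ++v) {
                out10.insert(static_cast<int>(v / 255.0 * 1023.0 * limit + 0.5));
                out8.insert(static_cast<int>(v * limit + 0.5));
            }
            std::printf("distinct levels: 10-bit path %zu, 8-bit path %zu\n",
                        out10.size(), out8.size());   // prints 256 vs 247
        }

    The 10-bit path keeps all 256 source levels distinct, while the 8-bit path collapses them to 247 - exactly the banding argument made in the question.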


  • Mobile music playback on Android phone keeps buffering

    - by ianio
    I have copied some music to an Ubuntu One synced folder on my Karmic machine. I have the Ubuntu One Android app installed on an Android 2.2 phone. When listening to music I get frequent breaks while the music buffers, even with full signal. Every other music streaming service, such as Last.fm and the BBC player, is fine. What is going wrong? MP3s should not have trouble buffering over a good 3G signal. I have set the cache size to 200 MB, but I can't find a buffer size. Is it my MP3s? They are usually in the order of 320 kb/s. This is a deal-breaker for me if there is no solution, which is a massive shame as I like the principle of the software. Cheers


  • Newly updated Ubuntu not loading on monitor

    - by user211652
    I just recently upgraded from Ubuntu 12.04 to Ubuntu 12.10. I installed the update and tried to boot into the newly installed version. As it booted, the loading screen would pop up, then I would lose signal and my monitor would say there was no signal. How do I fix this? I really want to use the new version, but if it's not going to work I'll just forget about it. I can, however, boot into the old install, and it works fine.


  • Qwt plot not working, simple plot curve does not appear

    - by user1629213
    I followed the simple plot example in the Qwt examples to plot a curve. The axes and the graph appear in the Qt main window user interface, but the curve does not. I assigned values to fit the curve, but the curve does not appear. Any suggestions and help on how to solve the problem? Here is my code:

        MainWindow::MainWindow( int argc, char** argv, QWidget *parent )
            : QMainWindow( parent )
            , qnode( argc, argv )
        {
            ui.setupUi( this );  // Calling this incidentally connects all ui's triggers to on_...() callbacks in this class.
            QObject::connect( ui.actionAbout_Qt, SIGNAL( triggered( bool )), qApp, SLOT( aboutQt( )));  // qApp is a global variable for the application
            ReadSettings( );
            setWindowIcon( QIcon( ":/images/icon.png" ));
            ui.tab_manager->setCurrentIndex( 0 );  // ensure the first tab is showing - qt-designer should have this already hardwired, but often loses it (settings?).
            QObject::connect( &qnode, SIGNAL( rosShutdown( )), this, SLOT( close( )));

            /*********************
            ** Logging
            **********************/
            ui.view_logging->setModel( qnode.loggingModel( ));
            QObject::connect( &qnode, SIGNAL( loggingUpdated( )), this, SLOT( updateLoggingView( )));
            QObject::connect( &qnode, SIGNAL( graphReceived( )), this, SLOT( onGraphReceived( )));
            QObject::connect( &qnode, SIGNAL( parameterReceived( )), this, SLOT( onParameterReceived( )));

            /*********************
            ** Auto Start
            **********************/
            if ( ui.checkbox_remember_settings->isChecked( )) {
                on_button_connect_clicked( true );
            }
            ui.parameters->setAttribute( Qt::WA_NoMousePropagation );
            ui.parameters->setAttribute( Qt::WA_OpaquePaintEvent );
            ui.plotgraph->setAttribute( Qt::WA_NoMousePropagation );
            ui.plotgraph->setAttribute( Qt::WA_OpaquePaintEvent );

            p_plot = new QwtPlot(ui.plotgraph);
            p_plot->setTitle( "Plot LinVel" );
            p_plot->setCanvasBackground( Qt::white );

            // Axis
            p_plot->setAxisTitle( QwtPlot::xBottom, "Time(sec)" );
            p_plot->setAxisTitle( QwtPlot::yLeft, "Linear Velocity (m/sec)" );
            p_plot->setAxisScale( QwtPlot::yLeft, 0.0, 10.0 );
            p_plot->setAxisScale( QwtPlot::xBottom, 0.0, 50.0 );
            p_plot->insertLegend( new QwtLegend() );
            //samplingThread.start();

            QwtPlotGrid *grid = new QwtPlotGrid();
            grid->attach( p_plot );

            curve = new QwtPlotCurve();
            curve->setTitle( "Linear velocity" );
            // Set curve styles
            curve->setPen( Qt::blue, 4 );
            curve->setRenderHint( QwtPlotItem::RenderAntialiased, true );
            QwtSymbol *symbol = new QwtSymbol( QwtSymbol::Ellipse,
                QBrush( Qt::yellow ), QPen( Qt::red, 2 ), QSize( 8, 8 ) );
            curve->setSymbol( symbol );

            // Assign values to the curve
            // NOTE: this call is commented out, so the curve never receives any
            // data points; with no samples attached there is nothing to draw.
            //curve->setSamples(ui.plotgraph.get_linv_g()); // yaw_g, trav_g, wall_g;

            curve->attach( p_plot );
            p_plot->resize( 600, 400 );
            p_plot->show();
        }  // closing brace missing in the original listing

        void MainWindow::onGraphReceived( )
        {
            QMutexLocker locker( &qnode.m_mutex );
        }

        void MainWindow::onParameterReceived( )
        {
            QMutexLocker locker( &qnode.m_mutex );
            std::vector<double> p_ = qnode.get_parameters();
            std::cout << p_[0] << " " << p_[1] << " " << p_[2] << " "
                      << p_[3] << " " << p_[4] << std::endl;
        }

    Any help?


  • Android alarm not able to launch alarm activity

    - by user965830
    I am new to Android programming and am trying to make this simple alarm app. My code is written and compiles with no errors. The app runs in the emulator - the main activity asks for the date and time - but when I click on the confirm button, it displays the message "Unfortunately, Timer1 has stopped working." The code for my main activity is as follows:

        public void onClick(View v) {
            EditText date  = (EditText) findViewById(R.id.editDate);
            EditText month = (EditText) findViewById(R.id.editMonth);
            EditText hour  = (EditText) findViewById(R.id.editHour);
            EditText min   = (EditText) findViewById(R.id.editMin);
            int dt  = Integer.parseInt(date.getText().toString());
            int mon = Integer.parseInt(month.getText().toString());
            int hr  = Integer.parseInt(hour.getText().toString());
            int mnt = Integer.parseInt(min.getText().toString());
            Intent myIntent = new Intent(MainActivity.this, AlarmActivity.class);
            pendingIntent = PendingIntent.getService(MainActivity.this, 0, myIntent, 0);
            AlarmManager alarmManager = (AlarmManager) getSystemService(ALARM_SERVICE);
            Calendar calendar = Calendar.getInstance();
            calendar.set(Calendar.DATE, dt);
            calendar.set(Calendar.MONTH, mon);
            calendar.set(Calendar.HOUR, hr);
            calendar.set(Calendar.MINUTE, mnt);
            alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), pendingIntent);
        }

    I do not understand what all the errors in logcat mean, so I am posting them:

        06-25 16:03:32.175: I/Process(566): Sending signal. PID: 566 SIG: 9
        06-25 16:03:53.775: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:03:54.046: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'
        06-25 16:03:54.255: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:03:54.305: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'
        06-25 16:03:54.735: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:03:54.785: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'
        06-25 16:03:54.925: D/gralloc_goldfish(612): Emulator without GPU emulation detected.
        06-25 16:05:09.605: D/AndroidRuntime(612): Shutting down VM
        06-25 16:05:09.605: W/dalvikvm(612): threadid=1: thread exiting with uncaught exception (group=0x409c01f8)
        06-25 16:05:09.685: E/AndroidRuntime(612): FATAL EXCEPTION: main
        06-25 16:05:09.685: E/AndroidRuntime(612): java.lang.NumberFormatException: Invalid int: "android.widget.EditText@41030b40"
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.invalidInt(Integer.java:138)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parse(Integer.java:375)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parseInt(Integer.java:366)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parseInt(Integer.java:332)
        06-25 16:05:09.685: E/AndroidRuntime(612): at com.kapymay.tversion1.MainActivity$1.onClick(MainActivity.java:34)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.view.View.performClick(View.java:3511)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.view.View$PerformClick.run(View.java:14105)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Handler.handleCallback(Handler.java:605)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Handler.dispatchMessage(Handler.java:92)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Looper.loop(Looper.java:137)
        06-25 16:05:09.685: E/AndroidRuntime(612): at android.app.ActivityThread.main(ActivityThread.java:4424)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.reflect.Method.invokeNative(Native Method)
        06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.reflect.Method.invoke(Method.java:511)
        06-25 16:05:09.685: E/AndroidRuntime(612): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
        06-25 16:05:09.685: E/AndroidRuntime(612): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
        06-25 16:05:09.685: E/AndroidRuntime(612): at dalvik.system.NativeStart.main(Native Method)
        06-25 16:05:10.445: I/dalvikvm(612): threadid=3: reacting to signal 3
        06-25 16:05:10.575: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'


  • multiple timers in one process (without linking to rt)

    - by Richard
    Hi, is there any way to register multiple timers in a single process? I have tried the following code, yet without success (use "gcc -lrt" to compile it). The program outputs nothing, though it should at least print "test". Is it possibly due to the dependence on linking to rt?

        #include <signal.h>
        #include <time.h>
        #include <stdio.h>
        #include <stdlib.h>     /* for exit() */
        #include <unistd.h>

        #define TT_SIGUSR1 (SIGRTMAX)
        #define TT_SIGUSR2 (SIGRTMAX - 1)
        #define TIME_INTERVAL_1 1
        #define TIME_INTERVAL_2 2

        timer_t create_timer(int signo)
        {
            timer_t timerid;
            struct sigevent se;
            se.sigev_notify = SIGEV_SIGNAL;  /* must be set explicitly; leaving the
                                                struct uninitialized makes the
                                                delivery method indeterminate */
            se.sigev_signo = signo;
            if (timer_create(CLOCK_REALTIME, &se, &timerid) == -1) {
                fprintf(stderr, "Failed to create timer\n");
                exit(-1);
            }
            return timerid;
        }

        void set_timer(timer_t timerid, int seconds)
        {
            struct itimerspec timervals;
            timervals.it_value.tv_sec = seconds;
            timervals.it_value.tv_nsec = 0;
            timervals.it_interval.tv_sec = seconds;
            timervals.it_interval.tv_nsec = 0;
            if (timer_settime(timerid, 0, &timervals, NULL) == -1) {
                fprintf(stderr, "Failed to start timer\n");
                exit(-1);
            }
        }

        void install_sighandler2(int signo, void (*handler)(int, siginfo_t *, void *))
        {
            struct sigaction sigact;
            sigemptyset(&sigact.sa_mask);
            sigact.sa_flags = SA_SIGINFO;   /* SA_SIGINFO requires the three-argument
                                               handler type assigned to sa_sigaction */
            sigact.sa_sigaction = handler;
            if (sigaction(signo, &sigact, NULL) == -1) {
                perror("sigaction failed");
                exit(-1);                   /* a void function cannot "return -1" */
            }
        }

        void install_sighandler(int signo, void (*handler)(int))
        {
            sigset_t set;
            struct sigaction act;
            act.sa_handler = handler;
            act.sa_flags = SA_RESTART;
            sigemptyset(&act.sa_mask);      /* sa_mask must be initialized */
            sigaction(signo, &act, 0);
            /* Unblock the signal */
            sigemptyset(&set);
            sigaddset(&set, signo);
            sigprocmask(SIG_UNBLOCK, &set, NULL);
        }

        void signal_handler(int signo)
        {
            /* the '\n' flushes line-buffered stdout; note that printf is not
               async-signal-safe, which is tolerable here only as a demo */
            printf("receiving sig %d\n", signo);
        }

        void signal_handler_si(int signo, siginfo_t *info, void *context)
        {
            (void)info;
            (void)context;
            signal_handler(signo);
        }

        int main(void)
        {
            printf("test\n");   /* without '\n', "test" sits unflushed in the stdio
                                   buffer for as long as the loop below runs */

            timer_t timer1 = create_timer(TT_SIGUSR1);
            timer_t timer2 = create_timer(TT_SIGUSR2);

            /* install the handlers before arming the timers, so a signal cannot
               arrive while its disposition is still the default (termination) */
            install_sighandler2(TT_SIGUSR1, signal_handler_si);
            install_sighandler(TT_SIGUSR2, signal_handler);

            set_timer(timer1, TIME_INTERVAL_1);
            set_timer(timer2, TIME_INTERVAL_2);

            while (1)
                pause();
            return 0;
        }

    Read the article

  • Why is my iPad's wireless so flaky?

    - by Mark
    I'm the proud owner of a new iPad here in the UK. All is good, except for the Wi-Fi, which is a bit flaky. It connects fine to my Draytek router, which is set for WPA/WPA2 and 56g only, showing full signal strength. Then, after a few minutes, it drops to minimum strength... and sometimes it goes back up again. A few times it seems to lose the connection completely and needs to be turned off and on again. I've looked at the Apple support site and tried their recommendations (which are not really very relevant), but still nothing. I've tried setting the router to WPA2 only and setting a long preamble. Right now, I want to know whether it's a hardware problem with my device, which should be returned, or a problem with all iPads that will be resolved. I suppose I could take it back to the Genius Bar, but I find those guys so incredibly pretentious and, frankly, rather useless, that I'd rather exhaust the other options first!

    Read the article

  • How to associate Wi-Fi beacon info with a virtual "location"?

    - by leander
    We have a piece of embedded hardware that senses 802.11 beacons, and we're using this to build a map of currently visible bssid -> signalStrength. Given this map, we would like to make a determination: Is this likely to be a location I have been to before? If so, what is its ID? If not, I should remember this location: generate a new ID. Now what should I store (and how should I store it) to make future determinations easier?

    This is for an augmented-reality app/game. We will be using it to associate particular characters and events with "locations". The device does not have internet or cellular access, so using a geolocation service is out of consideration for the time being. (We don't really need to know where we are in reality, just to determine whether we have returned somewhere.)

    It isn't crucial that it be extremely accurate, but it would be nice if it were tolerant of signal-strength changes or the occasional missing beacon. It should be usable with relatively few access points (e.g. a rural house with one wireless router) or many (wandering around a dense metropolis). In the case of a city, it should change location every few minutes of walking (continuously overlapping signals make this a bit trickier in naive code). A reasonable number of false positives (matching a location when we aren't actually there) is acceptable; the wrong character/event showing up just adds a bit of variety. False negatives (no location match) are more troublesome: they tend to add a better-matching new location to the saved set, masking the old one. While we will have additional logic to ensure that locations the device hasn't seen in a while "orphan" any associated characters or events (if e.g. you move to a different country), we'd prefer not to mask and eventually orphan locations you do visit regularly.

    Some technical complications:

    - signalStrength is returned as 1-4; presumably it's related to dB, but we are not sure exactly how. In my experiments it tends to stick to either 1 or 4, but occasionally we see numbers in between. (Tech docs on the hardware are sparse.)
    - The device completes a scan of one-quarter of the channel space every second, so it takes about 4-5 seconds to get a complete picture of what's around, and the list isn't always complete. (We are making strides to fix this using some slight sampling-period randomization, as recommended by the library docs. We're also investigating ways to increase the number of scans without killing our performance; the hardware/libs are poorly behaved when it comes to saturating the bus.)
    - We have only kilobytes to store our history.

    We have a "working" implementation now, but it is relatively naive and flaky in the face of real-world Wi-Fi behavior. Rough pseudocode:

        // recordLocation() -- only store strength-4 APs
        m_savedLocations[g_nextId++] = filterForStrengthGE( m_currentAPs, 4 );

        // determineLocation()
        bestPoints = -inf;
        foreach ( oldLoc in m_savedLocations ) {
            points = 0.0;
            foreach ( ap in m_currentAPs ) {
                if ( oldLoc.has( ap ) ) {
                    switch ( ap.signalStrength ) {
                        case 3: points += 1.0; break;
                        case 4: points += 2.0; break;
                    }
                }
            }
            points /= oldLoc.numAPs;
            if ( points > bestPoints ) {
                bestLoc = oldLoc;
                bestPoints = points;
            }
        }
        if ( bestLoc && bestPoints > 1.0 ) {
            if ( bestPoints >= (2.0 - epsilon) ) {
                // near-perfect match: update the location with any new
                // high-strength APs that have appeared
                bestLoc.addAPs( filterForStrengthGE( m_currentAPs, 4 ) );
            }
            return bestLoc;
        } else {
            return NO_MATCH;
        }

    We currently record a location only when we get NO_MATCH and the app decides it's time for a new event. (The "near-perfect match" code above would appear to make future matches harder... It's mostly there to keep new powerful APs from being associated with other locations, but you'd think we'd need something to counter it if e.g. an AP doesn't show up the next 10 times I match a location.)

    I have a feeling we're missing some things from set theory or graph theory that would help in grouping/classifying this data, and perhaps provide a better "confidence level" on matches and better robustness against missed beacons, signal-strength changes, and the like. It would also be useful to have a good method for mutating locations over time. Any useful resources out there for this sort of thing? Simple and/or robust approaches we're missing?
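
    For what it's worth, one standard way to frame the matching step is to treat each saved location as a fingerprint (a map of BSSID to strength) and score candidates with a symmetric set-similarity measure such as weighted Jaccard: APs missing from either side pull the score down smoothly, so an occasional dropped beacon degrades a match rather than breaking it. Below is a minimal sketch in Java under the 1-4 strength scale described above; the class names, the 0.5 threshold, and the 0.25 blending weight are illustrative assumptions, not anything from the question:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        /** A saved location: BSSID -> blended signal strength (0..4). */
        final class Fingerprint {
            final int id;
            final Map<String, Double> aps = new HashMap<>();

            Fingerprint(int id, Map<String, Integer> scan) {
                this.id = id;
                for (Map.Entry<String, Integer> e : scan.entrySet())
                    aps.put(e.getKey(), e.getValue().doubleValue());
            }

            /** Weighted Jaccard: sum(min) / sum(max) over the union of
                BSSIDs. 1.0 means identical, 0.0 means nothing in common. */
            double similarity(Map<String, Integer> scan) {
                Set<String> union = new HashSet<>(aps.keySet());
                union.addAll(scan.keySet());
                if (union.isEmpty()) return 0.0;
                double minSum = 0.0, maxSum = 0.0;
                for (String bssid : union) {
                    double a = aps.getOrDefault(bssid, 0.0);
                    double b = scan.getOrDefault(bssid, 0);
                    minSum += Math.min(a, b);
                    maxSum += Math.max(a, b);
                }
                return minSum / maxSum;
            }

            /** Blend the current scan in so the fingerprint tracks slow
                drift; APs that keep failing to appear decay to zero. */
            void update(Map<String, Integer> scan, double blend) {
                for (Map.Entry<String, Double> e : aps.entrySet()) {
                    double seen = scan.getOrDefault(e.getKey(), 0);
                    e.setValue((1.0 - blend) * e.getValue() + blend * seen);
                }
                for (Map.Entry<String, Integer> e : scan.entrySet())
                    aps.putIfAbsent(e.getKey(), e.getValue().doubleValue());
                aps.values().removeIf(s -> s < 0.25);   // forget faded APs
            }
        }

        final class LocationMatcher {
            private static final double MATCH_THRESHOLD = 0.5; // tune empirically
            private static final double BLEND = 0.25;          // EMA weight

            private final List<Fingerprint> saved = new ArrayList<>();
            private int nextId = 0;

            /** Match the scan to a known location or register a new one;
                returns the location id either way. */
            int locate(Map<String, Integer> scan) {
                Fingerprint best = null;
                double bestScore = 0.0;
                for (Fingerprint f : saved) {
                    double s = f.similarity(scan);
                    if (s > bestScore) { bestScore = s; best = f; }
                }
                if (best != null && bestScore >= MATCH_THRESHOLD) {
                    best.update(scan, BLEND);   // mutate the location over time
                    return best.id;
                }
                Fingerprint fresh = new Fingerprint(nextId++, scan);
                saved.add(fresh);
                return fresh.id;
            }
        }

    The update() step also gives one concrete way to mutate locations over time, addressing the "doesn't show up the next 10 times" worry: with a blend of 0.25, a strength-4 AP that misses ten consecutive matched scans decays to about 0.23, falls below the 0.25 cutoff, and is forgotten.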

    Read the article

  • stop the CSS filter property cascading

    - by Ayoub
    I tried to make the box-shadow effect work across browsers. To get it working in IE I used the "filter" property, but the effect cascades to the child element (in my case a span). I tried to stop it with filter: none on the child, but that didn't work, and I searched the web without finding a solution. Please help me solve this problem.

    HTML code:

        <div id="shadow">
            <span>text text</span>
        </div>

    CSS code:

        #shadow {
            -moz-box-shadow: 3px 3px 4px #000;
            -webkit-box-shadow: 3px 3px 4px #000;
            box-shadow: 3px 3px 4px #000;
            /* For IE 8 */
            -ms-filter: "progid:DXImageTransform.Microsoft.Shadow(Strength=4, Direction=135, Color='#000000')";
            /* For IE 5.5 - 7 */
            filter: progid:DXImageTransform.Microsoft.Shadow(Strength=4, Direction=135, Color='#000000');
        }

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >