Search Results

Search found 9667 results on 387 pages for 'hardware monitoring'.


  • How many processes will give me the best performance?

    - by Maarten
    I am doing some expensive calculations right now. It is one program, of which I run several instances at the same time. I am running them under Linux on a machine with 4 CPUs with 6 cores each. The CPUs are Intel Xeon X5660, which support hyper-threading. (That's some insane hardware, huh?) Right now I am running 24 processes at once. Would it be better to run more, because of HT?
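
    A minimal sketch (not from the original question) for benchmarking the two obvious pool sizes, assuming Python and a CPU-bound workload; `os.cpu_count()` reports logical CPUs, so it returns 48 here if hyper-threading is enabled:

    ```python
    import os
    import time
    from multiprocessing import Pool

    def crunch(_):
        # stand-in for one expensive calculation
        return sum(i * i for i in range(200_000))

    if __name__ == "__main__":
        logical = os.cpu_count()          # logical CPUs; 48 with HT on 4 x 6 cores
        physical = logical // 2           # assumes 2 hardware threads per core

        # Benchmark both pool sizes on a fixed batch of work; HT rarely doubles throughput.
        for workers in (physical, logical):
            start = time.perf_counter()
            with Pool(processes=workers) as pool:
                pool.map(crunch, range(96))
            print(workers, "workers:", round(time.perf_counter() - start, 2), "s")
    ```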

    Read the article

  • Are we asking too much of transactional memory?

    - by Carl Seleborg
    I've been reading up a lot about transactional memory lately. There is a bit of hype around TM, so a lot of people are enthusiastic about it, and it does provide solutions for painful problems with locking, but you regularly also see complaints:
    - You can't do I/O
    - You have to write your atomic sections so they can run several times (be careful with your local variables!)
    - Software transactional memory offers poor performance
    - [Insert your pet peeve here]
    I understand these concerns: more often than not, you find articles about STMs that only run on some particular hardware that supports some really nifty atomic operation (like LL/SC), or that have to be supported by some imaginary compiler, or that require all accesses to memory to be transactional, or that introduce monad-style type constraints, etc. And above all: these are real problems. This has led me to ask myself: what speaks against local use of transactional memory as a replacement for locks? Would this already bring enough value, or must transactional memory be used all over the place if used at all?

    Read the article

  • Intercept and ignore keyboard event in Windows 7 32bit

    - by Sg2010
    Hi all, my hardware has a problem: from time to time it sends a "keydown" followed by a "keyup" of this event:
        keydown: None LButton, OemClear 255
        keyup: None LButton, OemClear 255
        keydown: None LButton, OemClear 255
        keyup: None LButton, OemClear 255
    It goes on like this forever in Windows. In general it doesn't affect most applications, because this key is not printable. I think it's a special function key, like a media key or something; it doesn't do anything. But in some applications that listen to keydown and keyup, I get undesired and unexpected behaviour. Is there a way to intercept these two events in Windows (for all applications, for Windows itself) and make the OS ignore them? This is really important to me; if you can think of any solution, I'd be forever thankful.
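
    One possible starting point (my assumption, not from the question) is the third-party Python `keyboard` package, which can suppress a key system-wide; whether it accepts this particular scan code, and whether it catches the phantom key at all, would need testing:

    ```python
    # Hedged sketch using the third-party `keyboard` package (pip install keyboard).
    # Scan code 255 is taken from the event log above; treating it as the key to
    # block is an assumption.
    import keyboard

    PHANTOM_SCANCODE = 255

    keyboard.block_key(PHANTOM_SCANCODE)   # suppress the key before applications see it
    keyboard.wait()                        # keep the script running in the background
    ```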

    Read the article

  • Has anyone run VxWorks on a desktop PC as a target?

    - by Steve Roe
    Can I use a desktop PC to run VxWorks as the operating system? In other words, can a standard PC be used as a target processor? I'm not talking about hosting Workbench and a VxSim on the same machine. Rather, I'm considering running just VxWorks (and my application) on a PC. It seems feasible as long as we can configure a board support package, and write or obtain device drivers for the I/O cards on the PCI bus. What I wonder is, has anyone actually done this? I'm interested in saving a bit of money on hardware over a single board computer and cPCI backplane by using a spare desktop sitting around unused. The application is for a test set to be used in a lab. So, I don't need the portability, etc. of a typical embedded processor.

    Read the article

  • Fast partial sorting algorithm

    - by trican
    I'm looking for a fast way to do a partial sort of 81 numbers. Ideally I'm looking to extract the lowest 16 values (it's not necessary for the 16 to be in the absolutely correct order). The target for this is dedicated hardware in an FPGA, which slightly complicates matters, as I want the area of the resulting implementation to be as small as possible. I looked at and implemented the odd-even merge sort algorithm, but I'm ideally looking for anything that might be more efficient for my needs (trading algorithm implementation size against a partial sort giving the lowest 16, not necessarily in order, as opposed to a full sort). Any suggestions would be very welcome. Many thanks.
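
    A software reference model only (not an FPGA implementation), assuming Python: it selects the lowest 16 of 81 values without fully sorting them, which is handy for checking a hardware design against known-good output:

    ```python
    # Behavioural reference for "lowest 16 of 81, order irrelevant".
    import heapq
    import random

    values = [random.randrange(0, 1 << 12) for _ in range(81)]

    lowest16 = heapq.nsmallest(16, values)   # partial selection, O(n log k)

    # Cross-check against a plain full sort.
    assert sorted(lowest16) == sorted(values)[:16]
    ```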

    Read the article

  • Increase sound volume beyond 100% on Linux where possible

    - by fakedrake
    Some audio output from files or streams is too low. It is obvious that the hardware is able to play the same sounds louder, but because of the data it just plays them at a low level, even at 100% volume. VLC can generally increase the volume of a file up to 200%. Is there a way to do the same thing VLC does system-wide, and if possible for an arbitrary percentage value? If there is no application that does this, where should I look for libraries to do it myself, or what code should I modify (e.g. code in alsamixer)? Thank you.
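
    A minimal sketch, assuming a PulseAudio-based system with the `pactl` command available (the `@DEFAULT_SINK@` shorthand works on reasonably recent releases); PulseAudio accepts volumes above 100% and amplifies in software:

    ```python
    import subprocess

    def set_system_volume(percent: int) -> None:
        # Values above 100 boost the default output sink system-wide.
        subprocess.run(
            ["pactl", "set-sink-volume", "@DEFAULT_SINK@", f"{percent}%"],
            check=True,
        )

    set_system_volume(150)
    ```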

    Read the article

  • Core i7 c1e and speedstepping - BSOD on shutdown

    - by DeaconDesperado
    I'm having an interesting problem with my recent Core i7 digital audio workstation build that I am curious to see if others have encountered. First, here are the specs on the machine:
    - ASUS P6TD Deluxe Intel X58 Socket LGA1366 motherboard
    - Intel Core i7-950 3.06 GHz 8M LGA1366 CPU
    - Corsair Dominator 6GB (3 x 2GB) 240-pin DDR3 SDRAM, DDR3 1600
    - Western Digital Caviar Black WD5001AALS 500GB
    Plus a couple of ASUS optical drives and a 750W Corsair PSU. Running Windows 7 x64. All this is connected to the nefarious Digi 002 FireWire audio interface for use with Pro Tools. I mostly followed the specs posted by many other i7 users in the Digidesign community, who pooled their collective knowledge in this thread. After completing my build, I fell victim to the "UD5 squeal" described in that forum thread. So, taking the advice posted, I disabled C1E advanced halt state and Intel SpeedStep (I would likely have done this anyway to maintain a stable clock; power consumption isn't really a relevant concern on this machine). I enabled XMP to set the RAM timings properly as well. What I am experiencing is a BSOD upon shutdown, but only immediately after Windows fully exits and ends all processes. The error is a MACHINE_CHECK_EXCEPTION 0x000000. The funny thing is that it is extremely intermittent and only occurs if the shutdown immediately follows a period of relative idleness. It does not generate a minidump; I suspect because Windows monitoring has terminated by the time this error occurs. No damage is evident, and one can simply turn the machine off manually and the system will act as though a proper shutdown had occurred. If anything it is an annoyance; I just want to be certain it is not affecting my long-term stability. I have read that the i7-950 does not like DRAM voltages past 1.65, but that they are acceptable if they are within .5 of the BCLK setting. I have tried disabling XMP and setting all timings to auto, and the problem still manifests in an identical way. I suspect that the CPU idleness preceding shutdown is the determining factor, as both C1E and SpeedStep are settings intended to modify the handling of this state. Any suggestions or prior experiences would be greatly appreciated. EDIT: The behavior very closely resembles what's described in this thread: http://www.tomshardware.com/forum/12003-63-shut-problem-windows The benign nature of it is identical. I can't seem to download the hotfix cited there, however.

    Read the article

  • How to access a SIM card programmatically?

    - by mawg
    Just any old GSM-compatible SIM card (bonus for 3G USIM). I presume I need some hardware? Can anyone recommend something cheap for a hobbyist, and something more professional? I presume that full docs of an API will come with the hardware, so maybe this should be tagged "no-programming-related"? Sorry, if so. Any good URLs or books? (I am conversant with the 3GPP standards.) I'm not (black hat) hacking, don't worry; I'm just not pleased with the likes of SIM Card Secretary, Data Doctor Recovery, etc., so I would like to code my own, but I might turn it commercial, or offer SIM card programming services (data recovery from damaged cards, etc.) as a sideline.
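
    For the "what hardware" part, a generic PC/SC smart-card reader is one option (my assumption, not from the question); a minimal sketch with the third-party pyscard package, sending the classic SELECT DF_TELECOM APDU used in pyscard's own examples:

    ```python
    from smartcard.System import readers

    SELECT_DF_TELECOM = [0xA0, 0xA4, 0x00, 0x00, 0x02, 0x7F, 0x10]

    reader = readers()[0]                  # first attached PC/SC reader
    connection = reader.createConnection()
    connection.connect()

    data, sw1, sw2 = connection.transmit(SELECT_DF_TELECOM)
    print("Status words: %02X %02X" % (sw1, sw2))   # 9F xx indicates success on a SIM
    ```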

    Read the article

  • Ubuntu: Multiple NICs, one used only for Wake-On-LAN

    - by jcwx86
    This is similar to some other questions, but I have a specific need which is not covered in the other questions. I have an Ubuntu server (11.10) with two NICs. One is built into the motherboard and the other is a PCI Express card. I want to have my server connected to the internet via my NAT router and also have it able to wake from suspend using a Magic Packet (henceforth referred to as Wake-On-LAN, WOL). I can't do this with just one of the NICs because each has an issue: the built-in NIC will crash the system if it is placed under heavy load (typically downloading data), whilst the PCI Express NIC will crash the system if it is used for WOL. I have spent some time investigating these individual problems, to no avail. My plan is thus: use the built-in NIC solely for WOL, and use the PCI Express card for all other network communication except WOL. Since I send the WOL Magic Packet to a specific MAC address, there is no danger of hitting the wrong NIC, but there is a danger of using the built-in NIC for general network access, overloading it and crashing the system. Both NICs are wired to the same LAN with address space 192.168.0.0/24. The built-in ethernet card is set to have interface name eth1 and the PCI Express card is eth0 in Ubuntu's udev persistent rules (so they stay the same upon reboot). I have been trying to set this up with the /etc/network/interfaces file. Here is where I am currently:
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.0.3
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

        auto eth1
        iface eth1 inet static
            address 192.168.0.254
            netmask 255.255.255.0
    I think that by not specifying a gateway for eth1, I prevent it being used for outgoing requests. I don't mind if it can be reached on 192.168.0.254 on the LAN, i.e. via SSH -- its IP is irrelevant to WOL, which is based on MAC addresses -- I just don't want it to be used to access internet resources. My kernel routing table (from route -n) is:
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
    My question is this: is this sufficient for what I want to achieve? My research has thrown up the idea of using static routing to specify that eth1 should only be used for WOL on the local network, but I'm not sure this is necessary. I have been monitoring the activity of the interfaces using iptraf and it seems like eth0 takes the vast majority of the packets, though I am not sure that this will be consistent based on my configuration. Given that if I mess up the configuration my system will likely crash, it is important to me to have this set up correctly!
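
    Unrelated to the interfaces file itself, a quick sketch of the magic packet being sent from another machine on the LAN (the MAC address is a placeholder, not from the question); it only matters that the frame reaches eth1's MAC, not which IP the interface has:

    ```python
    import socket

    def send_magic_packet(mac: str, broadcast: str = "192.168.0.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", ""))
        payload = b"\xff" * 6 + mac_bytes * 16      # standard WOL magic packet layout
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    send_magic_packet("00:11:22:33:44:55")          # placeholder for eth1's MAC
    ```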

    Read the article

  • Why is my iPad's wireless so flakey?

    - by Mark
    I'm the proud owner of a new iPad here in the UK. All is good, except for the wifi, which is a bit flakey. It connects fine to my Draytek router, which is set for WPA/WPA2 and 56g only, displaying full signal strength. Then, after a few minutes, it goes down to minimum strength... and sometimes it goes back up again. A few times it seems to lose the connection completely, and needs to be turned off and on again. I've looked at the Apple support site and have tried their recommendations (which are not really very relevant), but still nothing. I've tried setting the router to WPA2 only, and setting long preamble. Right now, I guess I want to know if it's a hardware problem with my device and it should be returned, or if it's a problem with all iPads which will be resolved. I guess I could take it back to the Mac genius bar, but I find those guys so incredibly pretentious and, frankly, rather useless, that I'd rather wait until I've exercised other options!

    Read the article

  • apcupsd on Linux does not report on APC BackUPS Pro 900

    - by lserni
    From what documentation I could find, the UPS should be (is!) supported by Linux and ought to work with apcupsd. I looked for specific problems such as the infamous Microlink protocol, and found none. I have found feedback from a guy in the UK who reports using this very model on a not-too-different OS version (his OpenSuSE 12.1, mine 12.3 x86_64). The USB port is detected; lsusb reports
        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
    and lsusb -v -s002:003 confirms and expands:
        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          idVendor           0x051d American Power Conversion
          idProduct          0x0002 Uninterruptible Power Supply
          bcdDevice            0.90
          iManufacturer           1 American Power Conversion
          iProduct                2 Back-UPS RS 900G FW:879.L4 .I USB FW:L4
          bNumConfigurations      1
          Configuration Descriptor: [...]
            Interface Descriptor: [...]
              bInterfaceClass         3 Human Interface Device
              bInterfaceSubClass      0 No Subclass
              bInterfaceProtocol      0 None
              iInterface              0
              HID Device Descriptor:
                bLength                 9
                bDescriptorType        33
                bcdHID               1.00
                bCountryCode           33 US
                bNumDescriptors         1
                bDescriptorType        34 Report
                wDescriptorLength    1134
                Report Descriptors:
                  ** UNAVAILABLE **
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81  EP 1 IN
                bmAttributes            3
                  Transfer Type            Interrupt
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0008  1x 8 bytes
                bInterval             100
        Device Status:     0x0000 (Bus Powered)
    The kernel recognizes this and duly sets up
        crw------- 1 root root 180, 96 Nov 4 16:11 /dev/usb/hiddev0
    As far as I know, everything is as it should be. I have put the standard configuration in /etc/apcupsd/apcupsd.conf (which is Unix-terminated, ASCII-only, no BOM, just in case):
        UPSCABLE usb
        UPSTYPE usb
        DEVICE
    (I have also tried commenting out DEVICE; setting a device of /dev/puppa results in an access attempt to /dev/puppa, not some /var/lib/dev/puppa or /dev/puppa\r\n.) Yet, what apcaccess tells me is
        VERSION  : 3.14.10 (13 September 2011) suse
        CABLE    : USB Cable
        DRIVER   : USB UPS Driver
        UPSMODE  : Stand Alone
        STARTTIME: 2013-11-04 16:24:22 +0100
        MODEL    :
        STATUS   : NOBATT
        LINEV    : 000.0 Volts
        LOADPCT  : 0.0 Percent Load Capacity
        BCHARGE  : 000.0 Percent
        TIMELEFT : 0.0 Minutes
        MBATTCHG : 5 Percent
        MINTIMEL : 3 Minutes
        MAXTIME  : 0 Seconds
        SENSE    : Low
        LOTRANS  : 000.0 Volts
        HITRANS  : 000.0 Volts
    It doesn't recognize the model, and reports no battery (and no voltage). This confirms that it's not the Microlink problem, or it would report the battery status, if precious little else. If I disconnect the USB cable, I get an apcupsd message to the effect that communications have been lost; and I get the "communication restored" broadcast too, if I reconnect the cable. So apcupsd is monitoring. Everything tells me that it should work -- only it doesn't. Does anyone spot what I'm missing?

    Read the article

  • Best calculator software to help programmers

    - by RHaguiuda
    As an embedded systems programmer I always need to make lots of base conversions (dec to hex, hex to bin and so on), and I must admit: the Windows 7 calculator is a good calculator, but too limited in my point of view. I work a lot with communications protocols, and it's common to need some base conversion in this field. I'm looking for calculator software (not a hardware calculator) to help with base conversions, but it must also support scientific calculations. Can anyone help with this? Since this subject is intended to help programmers, I did not ask this on SuperUser.com. Thanks.
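
    Not a software recommendation, just an aside (not from the original post): an interactive Python shell already covers the dec/hex/bin conversions described above:

    ```python
    value = int("1F4", 16)          # hex string -> int: 500
    print(hex(value))               # 0x1f4
    print(bin(value))               # 0b111110100
    print(format(value, "016b"))    # zero-padded binary: 0000000111110100
    print(int("111110100", 2))      # binary string -> int: 500
    ```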

    Read the article

  • Getting websites to detect our mobile browser

    - by Chromatix
    I've been asked to find out a sensible way to make the majority of popular websites detect our browser - which is functionally complete, but is running on rather constrained hardware - as a "mobile" browser. The idea is that the heaviest popular websites seem to have mobile versions, which render much faster and fit better on the screen. I've looked at the inverse question, which tells me that there isn't an obvious standard way of doing it - http://www.brainhandles.com/techno-thoughts/detecting-mobile-browsers is a case in point. This is borne out by looking at a variety of User-Agent strings from popular mobile and desktop browsers. So far the best idea we can come up with is to add "Mobile" to the string somewhere, since this is the main visible difference between Safari for iPad/iPhone and for Windows/Mac. Does anyone have a better idea?
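
    A hedged sketch of the idea in the question: advertise a "Mobile" token in the User-Agent, since that is the most visible difference between mobile and desktop Safari and is what many sites key on. The UA string below is illustrative, not the questioner's actual browser:

    ```python
    import re

    user_agent = (
        "Mozilla/5.0 (Linux; Embedded Device) AppleWebKit/533.1 (KHTML, like Gecko) "
        "Version/4.0 Mobile Safari/533.1"
    )

    # Simplified server-side check: many sites treat any UA containing "Mobile"
    # (or a well-known platform token) as a mobile browser.
    is_mobile = bool(re.search(r"Mobile|Android|iPhone|iPad|Opera Mini", user_agent))
    print(is_mobile)   # True
    ```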

    Read the article

  • How to choose the right web application framework?

    - by thenextwebguy
    http://en.wikipedia.org/wiki/Comparison_of_web_application_frameworks Since we are ambitiously aiming to be big, scalability is important, and so are globalization features. Since we are starting out without funding, price/performance and the cost of licences/hardware are important. We definitely want AJAX to be well represented in the web interface. But apart from these, there are no further criteria I can come up with. I'm most experienced with C#/ASP.NET, PHP and Java, in that order, but don't turn down other languages (Ruby, Python, Scala, etc.). How can we determine, from the jungle of frameworks, the one that best suits our goal? What other questions should we be asking ourselves? Reference material: articles, book recommendations, websites, etc.?

    Read the article

  • Programmatically controlling power sockets in the UK

    - by cartoonfox
    It's very simple. I want to plug a lamp into the UK mains supply, and I want to be able to power it on and off from software - say from serial port commands, or by running a command-line tool or something I can get at from Ruby or Java. I see lots written about how to do this with X10 on American power systems - but has anybody actually tried doing this in the UK? If you got this working:
    1) Exactly what hardware did you use?
    2) How do you control it from software?
    Thanks!
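
    One hedged illustration, assuming a serial-controlled relay board or X10 computer interface on /dev/ttyUSB0 and the third-party pyserial package; the command bytes are purely made up and would come from the vendor's protocol documentation:

    ```python
    import serial  # pyserial

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
        port.write(b"ON 1\r\n")    # hypothetical "switch socket 1 on" command
    ```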

    Read the article

  • 0xDEADBEEF equivalent for 64-bit development?

    - by Peter Mortensen
    For C++ development for 32-bit systems (be it Linux, Mac OS or Windows, PowerPC or x86) I have initialised pointers that would otherwise be undefined (e.g. they can not immediately get a proper value) like so: int *pInt = reinterpret_cast<int *>(0xDEADBEEF); (To save typing and being DRY the right-hand side would normally be in a constant, e.g. BAD_PTR.) If pInt is dereferenced before it gets a proper value then it will crash immediately on most systems (instead of crashing much later when some memory is overwritten or going into a very long loop). Of course the behavior is dependent on the underlying hardware (getting a 4 byte integer from the odd address 0xDEADBEEF from a user process may be perfectly valid), but the crashing has been 100% reliable for all the systems I have developed for so far (Mac OS 68xxx, Mac OS PowerPC, Linux Redhat Pentium, Windows GUI Pentium, Windows console Pentium). For instance on PowerPC it is illegal (bus fault) to fetch a 4 byte integer from an odd address. What is a good value for this on 64-bit systems?

    Read the article

  • Web application/ site service (like Google App Engine) for PHP/ MySQL and Postgres

    - by Simon
    I would like to find a service similar to Google App Engine for PHP/ MySQL/ Postgres sites/ applications. We host two different types of site.
    i). PHP/ MySQL/ Zend Framework
        <VirtualHost *:80>
            DocumentRoot "/home/websites/website.com/public"
            ServerName website.com
            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development
            <Directory "/home/websites/website.com/public">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
                RewriteEngine On
                RewriteCond %{REQUEST_FILENAME} -s [OR]
                RewriteCond %{REQUEST_FILENAME} -l [OR]
                RewriteCond %{REQUEST_FILENAME} -d
                RewriteRule ^.*$ - [NC,L]
                RewriteRule ^.*$ index.php [NC,L]
            </Directory>
        </VirtualHost>
    ii). Matrix CMS - PHP/ Postgres + loads of PEAR classes
        <VirtualHost *:80>
            ServerName server.example.com
            DocumentRoot /home/websites/mysource_matrix/core/web
            Options -Indexes FollowSymLinks
            <Directory /home/websites/mysource_matrix>
                Order deny,allow
                Deny from all
            </Directory>
            <DirectoryMatch "^/home/websites/mysource_matrix/(core/(web|lib)|data/public|fudge)">
                Order allow,deny
                Allow from all
            </DirectoryMatch>
            <DirectoryMatch "^/home/websites/mysource_matrix/data/public/assets">
                php_flag engine off
            </DirectoryMatch>
            <FilesMatch "\.inc$">
                Order allow,deny
                Deny from all
            </FilesMatch>
            <LocationMatch "/(CVS|\.FFV)/">
                Order allow,deny
                Deny from all
            </LocationMatch>
            Alias /__fudge /home/websites/mysource_matrix/fudge
            Alias /__data /home/websites/mysource_matrix/data/public
            Alias /__lib /home/websites/mysource_matrix/core/lib
            Alias / /home/websites/mysource_matrix/core/web/index.php/
        </VirtualHost>
    My key requirements are:
    - I don't want to worry/ know/ care about the server/ infrastructure
    - Secure/ up to date software/ OS
    - Good monitoring
    - Automatic scalability
    - SLA
    I apologise for the length of the question. In short, all I want to do is i). create vhost, ii). create db, iii). install app/ site, iv). relax. Thanks.
    Edit: I include the Matrix vhost because that is the only complication that I cannot really do via a .htaccess file.

    Read the article

  • In writing games that deal with scancodes, what do I need to know to support international keyboards?

    - by sludge
    I am writing an input system for a game that needs to be able to handle keyboard schemes that are not just QWERTY. In designing the system, I must take into consideration:
    - Two types of input: standard shooter controls (lots of buttons being pressed and raw samples collected) and flight-sim controls (the button's label is what the user presses to toggle something)
    - Alternative software keyboard layouts (Dvorak, AZERTY, etc.) as supplied by the OS
    - Alternative hardware keyboard layouts that supply Unicode characters
    My initial inclination is to sample the USB HID Unicode scancodes. I am interested in thoughts on what I need to do to be compatible with the world's input devices, and in recommendations of input APIs on both platforms.

    Read the article

  • What's wrong with my logic here?

    - by stu
    In Java they say don't concatenate Strings; instead you should make a StringBuffer and keep adding to that, and then when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings makes lots of temporary objects. But if the goal were performance, then you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than it is to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both, use Java and StringBuffers, my question is: where is the flaw in the logic that says you either use a faster chip or you spend extra time writing more efficient software?

    Read the article

  • What 'best practices' exist for handling enum hierarchies?

    - by FerretallicA
    I'm curious about any solutions out there for addressing enum hierarchies. I'm working through some docs on Entity Framework 4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:
    INVENTORY ITEM TYPES:
    - Hardware
      - PC
        - Desktop
        - Server
        - Laptop
      - Accessory
        - Input (keyboards, scanners etc.)
        - Output (monitors, printers etc.)
        - Storage (USB sticks, tape drives etc.)
        - Communication (network cards, routers etc.)
    - Software
    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want to have a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype, etc.). Nor do I want to over-complicate my data model with each subtype being inherited, even though no additional properties or methods are included (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this, but I can't put my finger on it. Any assistance or input appreciated!
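
    A language-agnostic sketch (written in Python rather than C#/EF, so the names and structure are my own) of one common approach: keep a single flat enum and express the hierarchy as a parent lookup table instead of nested types:

    ```python
    from enum import Enum

    class ItemType(Enum):
        HARDWARE = "Hardware"
        PC = "PC"
        DESKTOP = "Desktop"
        SERVER = "Server"
        LAPTOP = "Laptop"
        ACCESSORY = "Accessory"
        INPUT = "Input"
        OUTPUT = "Output"
        STORAGE = "Storage"
        COMMUNICATION = "Communication"
        SOFTWARE = "Software"

    PARENT = {
        ItemType.PC: ItemType.HARDWARE,
        ItemType.ACCESSORY: ItemType.HARDWARE,
        ItemType.DESKTOP: ItemType.PC,
        ItemType.SERVER: ItemType.PC,
        ItemType.LAPTOP: ItemType.PC,
        ItemType.INPUT: ItemType.ACCESSORY,
        ItemType.OUTPUT: ItemType.ACCESSORY,
        ItemType.STORAGE: ItemType.ACCESSORY,
        ItemType.COMMUNICATION: ItemType.ACCESSORY,
    }

    def ancestors(t: ItemType):
        """Walk up the hierarchy, e.g. DESKTOP -> PC -> HARDWARE."""
        while t in PARENT:
            t = PARENT[t]
            yield t

    print(list(ancestors(ItemType.DESKTOP)))   # [ItemType.PC, ItemType.HARDWARE]
    ```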

    Read the article

  • Drawing individual pixels with the iPhone SDK

    - by blob8108
    Hi, I've been trying to figure out how to make a Powder Toy style game on the iPhone. My problem is how to draw pixels to the screen. From what I've read, OpenGL is better for games as it is faster and hardware-accelerated, but there is no method to draw pixels directly to the screen. Apparently drawing pixels to an off-screen frame buffer is the way to go, but how do I then pass this to OpenGL? Do I use a texture? (This is assuming I have no previous knowledge of iPhone graphics programming.) Thanks!

    Read the article

  • How does operating system software maintain time clocks?

    - by Neeraj
    Hi everyone, this may sound a bit off-topic, but I couldn't think of a better place to ask this question. Consider this situation: you install an OS on your system, set the timezone and time, do some stuff and turn it off. (Note that there is no power going into the computer.) The next time you turn it on (say after some hours or days), you see the updated time. How is this possible even when my computer is not connected to the internet and was consuming no power during the period it was down? (Is there some kind of hardware hack?) Please clarify!
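
    The usual answer is a small battery-backed real-time clock (RTC) on the motherboard that keeps counting while the machine is unplugged; a minimal sketch (assuming Linux and its standard rtc sysfs interface) for peeking at it:

    ```python
    from pathlib import Path

    rtc = Path("/sys/class/rtc/rtc0")
    print((rtc / "name").read_text().strip())   # e.g. rtc_cmos, the CMOS/RTC chip
    print((rtc / "date").read_text().strip())   # date kept by the battery-backed clock
    print((rtc / "time").read_text().strip())   # time kept by the battery-backed clock
    ```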

    Read the article

  • Automatic layout of manual network mapping

    - by Paul
    So I have a small business network mainly consisting of two routed layer-2 domains with a total of ca. 100 devices spread over ca. 2000 m² of production and office space. Typical problems to solve using the graph would be:
    - Over what (cable) path is a PC connected to the server?
    - Which devices should I expect to find connected to a given switch port?
    I want to generate a graph of the physical network topology: nodes are endpoint devices, switch ports, wall outlets, patch panel ports, etc.; edges are cable connections. Ideally, edges (or segments) that pass through the same bundle could be grouped. I would also like to augment the graph data with automatically gathered data (monitoring state, MAC address, switch port to MAC entries to build up parts of the map). At the moment I use graphviz for this inside a Confluence wiki, like so:
        layout = "neato"
        overlap = scale
        subgraph {
            rankdir = "TB"
            subgraph cluster_r1pf1 {
                r1pf1 [label="{ Rack 1 PF 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record]
            }
            subgraph cluster_switch1 {
                switch1 [label="{ Rack 1 Switch 1 | { <p1> P1 | <p1> P1 | <p3> P3} }", shape=record]
            }
            r1pf1:p1 -> switch1:p1
    (obviously there are dozens of entries omitted here) The problem is: I have a hard time getting graphviz to generate a bearable layout. Edges overlap so badly that you can't read the diagram any more. The question is: what other tools (be they interactive like Visio or OmniGraffle, or I/O-oriented like graphviz) exist that would allow easily versionable (as in: operates on a text file) documentation that is both machine and human readable and editable? Why not OmniGraffle or Visio? Well, we don't have Macs, and Visio is not available at the moment. To buy it I would need good arguments, and automation would be one of them. But last time I looked, versioning Visio files or even thinking about automatic handling was a nightmare. Related:
    - Network Mapping Tools basically asks the same with a focus on generating the complete graph automatically (but without the need to document cabling connections)
    - Recommendations for automatic computer inventory brings up links to "all-in-one" solutions
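
    As an aside (not from the original post), one way to keep the cabling data versionable as plain text is to generate the DOT source from a simple connection list rather than hand-editing it; a minimal sketch, with the second cable invented purely for illustration:

    ```python
    # Emit graphviz DOT from a plain list of cable connections (node:port pairs).
    connections = [
        ("r1pf1:p1", "switch1:p1"),
        ("r1pf1:p2", "switch1:p3"),   # hypothetical second cable
    ]

    lines = ["digraph cabling {", '    layout="neato"', "    overlap=scale"]
    lines += [f"    {a} -> {b}" for a, b in connections]
    lines.append("}")
    print("\n".join(lines))           # paste into the wiki or pipe to graphviz
    ```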

    Read the article

  • HTC Hero Mouse Ball click not working on Custom ListView?

    - by UMMA
    Dear friends, I have created a custom ListView using:
        class EfficientAdapter extends BaseAdapter {
            private LayoutInflater mInflater;
            private Context context;

            public EfficientAdapter(Context context) {
                mInflater = LayoutInflater.from(context);
                this.context = context;
            }

            public View getView(final int position, View convertView, ViewGroup parent) {
                ViewHolder holder;
                convertView = mInflater.inflate(R.layout.adaptor_content, null);
                convertView.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                    }
                });
                return convertView;
            }

            // ... and the other necessary methods
        }
    Using the touch screen, when I tap a list item, its OnClickListener is called. But when I use the mouse ball / trackball (phone hardware) to click a list item, the OnClickListener is not called. Can anyone tell me whether this is a phone bug or my fault? Any help would be appreciated.

    Read the article

  • How to debug and detect a hang issue

    - by igor
    I am testing my application (Windows 7, WinForms, Infragistics controls, C#, .NET 3.5). I have two monitors, and my application saves and restores forms' positions on the first or second monitor. So I physically switched off the second monitor and disabled it under Screen Resolution in the Windows display settings. I need to know whether it is possible for my application to restore window positions (for those windows that were saved on the second monitor) to the first one. I switched off the second monitor and pressed Detect to apply the hardware change. Windows then switched off the first monitor for a few seconds to apply the new settings. When the first monitor screen came back, my application had become unresponsive. My application was launched in debug mode, so I tried to navigate through the stack and threads (Visual Studio 2008), paused and resumed the application, and did not find anything that helped me understand why my application is not responsive. Could somebody help me figure out how to detect the source of the issue?

    Read the article
