Search Results

Search found 10209 results on 409 pages for 'multi monitor'.


  • Windows 7: from Geforce 8800 to three monitors?

    - by lance
    I've got a GeForce 8800 that I'm quite happy with. It drives my two 23" widescreen displays well. Now I've got a 19" standard display that I want to stick between the two widescreens. My second PCIe 16x slot is unused (as is the PCI slot below that), and I want to add a card to my Win7 x64 system. This 19" display won't be used for gaming, so I don't need anything fancy. Here are two cards I was considering, but I'm wondering if they're bad choices for some reason. If they're both fine choices, which is better and why? Again, I only need to power the 19" standard display with this card, and it won't play games. I just need 1280x1024 in Win7 x64. NVIDIA: Galaxy 95TFE8HUFEXX GeForce 9500 GT Video Card - 512MB DDR2, PCI Express 2.0 ATI: ASUS EAH4350 SILENT/DI/51 Radeon HD 4350 Video Card - 512MB DDR2, PCI Express 2.0

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: On a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process monitoring scripts that 'failure' means 'this script did not need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')? My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by cron on a regular basis that checks whether an important system process is running, and if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

      SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
      if [ $SERVICE != 0 ]; then
          exit 1;
      else
          # take action
      fi

    Seems simple enough even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed the program that everything is fine, while exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule: it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake? The wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
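    For contrast, here is a minimal sketch of the same check written with the conventional exit codes the poster describes (exit 0 when all is fine, exit 1 after remediation). The mail address and restart command are placeholder assumptions, not taken from the hosting company's script:

      #!/bin/sh
      # Count processes listening on the Tomcat port
      SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l)
      if [ "$SERVICE" -ne 0 ]; then
          # Service is up: report success to cron and exit
          exit 0
      fi
      # Service is down: notify an admin, attempt a restart, and report the failure condition
      echo "Nothing is listening on tcp/8080" | mail -s "Tomcat appears to be down" admin@example.com
      /etc/init.d/tomcat restart   # placeholder; the real restart command depends on the setup
      exit 1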

    Read the article

  • Why do people like widescreen when it is, de facto, less space?

    - by Kerry
    I find that many of my friends (non-programmers and designers) like widescreens. It makes very little sense to me, as you in fact have less space than with a 4:3 (do the math). The closer to a perfect square, the more space you actually have on your screen. I've got a 21" 16:9 and two 19" 4:3 monitors. The 21" is nearly the same height; I think it's a tenth of an inch shorter, if I'm correct. I forget the calculation, but it is nearly the same actual space. I can understand if you're using your computer for constant movie-watching, but I think that's more of people's "ideal" than a reality. Thoughts?
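    A worked version of the arithmetic being alluded to, using the sizes mentioned above: for a diagonal d and aspect ratio w:h, height = d*h/sqrt(w^2 + h^2), width = d*w/sqrt(w^2 + h^2), and area = width * height, so

      21" 16:9 : height = 21*9/sqrt(337) ~ 10.3 in, width ~ 18.3 in, area ~ 188 sq in
      19" 4:3  : height = 19*3/5 = 11.4 in,         width = 15.2 in, area ~ 173 sq in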

    Read the article

  • Backlight dimming doesn't work

    - by Mathias
    My Packard Bell EasyNote TS11HR notebook does not have an option for dimming the display backlight. At night, my eyes begin to hurt because of the strong light from the screen. My laptop is 2-3 months old and I am sure it has worked before. When I click on the battery icon in the notification area, it says in my language (Danish): "the setting for light does possibly reduce the life of the battery". However, I cannot dim the backlight. I have tried downloading programs for dimming the screen but they only make the screen darker, instead of dimming the backlight. I have tried updating my drivers and looking in the BIOS for a setting. I also plan to use an Ubuntu LiveCD to try controlling it. As of now though, the backlight is locked at maximum. Any ideas?
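    If the Ubuntu LiveCD route is tried, here is a hedged sketch of the kernel's backlight interface; the device name acpi_video0 is an assumption, since the actual directory under /sys/class/backlight varies with the vendor driver:

      ls /sys/class/backlight/                                        # find the backlight device name
      cat /sys/class/backlight/acpi_video0/max_brightness             # highest level the driver exposes
      echo 4 | sudo tee /sys/class/backlight/acpi_video0/brightness   # set a lower level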

    Read the article

  • Run 3 monitors on two different video cards?

    - by hullot
    Can I run 3 monitors on two different video cards? I have an ATI and an Nvidia card. The ATI has 2 HDMI connections, and they both work. Both cards are also picked up in Windows, one being the ATI and the other the Nvidia, but it says VGA Controller, although the card only takes 2 DVI. So, one DVI cable goes into that Nvidia card. Of the 3 monitors, only the 2 HDMI ones on the ATI are picked up, not the third one, which is connected to the Nvidia via DVI. How can I run three monitors then? I suppose I can't install both drivers, so I'm unsure what to do. Is this possible? I just want the Nvidia card to power the third screen, no gaming on it, nothing. Also, the ATI is picked up as the primary card, so no hurdle there. EDIT: Hm, I just installed the Nvidia drivers and it picked up the third screen no problem. Hope there aren't any major conflicts. Will post this as an answer and mark it correct when I'm able; I can't as a new user.

    Read the article

  • Why am I seeing red dots on my LCD screen?

    - by mydoghasworms
    My laptop is about 2.5 years old. Now I am starting to see red dots on certain shades of colour (mainly dark colours, blues and blacks), and it is not limited to certain pixels, because when you move a window around, the red dots move with it, staying on those shades of colour. Is this a problem with the LCD screen, or is it the GPU? Is there a way to determine this? It is clearly not a driver issue, because it happens in both Linux and Windows, and my Windows setup had not changed before the issue started.

    Read the article

  • Have programmers at your work not taken up or been averse to an offer of a second monitor?

    - by Chris Knight
    I'm putting together a business case for the developers in my company to get a second monitor. After my own experiences and research, this seems a no-brainer to me in terms of increasing productivity and morale/happiness. One question which has niggled me is whether I should be pushing to get all developers onto a second monitor or let people opt in (i.e. they get one if they want one). Thoughts on this are welcome, but my specific question relates to a snippet on this site: "But when the IT manager at Thibeault's company asked other employees if they wanted dual monitors last year, few jumped at the offer." Blinded by my own pre-judgement, I was surprised by this. Has anyone else experienced it? I fully appreciate that some people prefer a single larger monitor, but my general experience of researching the web suggests that most programmers prefer a dual (or more) setup. I guess this should be tempered with the thought that the developers who contribute to such discussions may not be representative of the average developer, who might not care one way or the other. Anyway, if you have experienced the above, have you tried to sell the concept of dual monitors to the masses? If everyone just got 2 monitors regardless of whether they wanted them or not, were there adverse reactions or negative effects? UPDATE: The developers are on a mixture of 17", 22", or 24" single monitors. The desks should be able to accommodate the dual 22" monitors I am proposing, though this will take some getting used to, I imagine.

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1: Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries: I/O; exceptions/errors; interfaces with programs written in other languages; interfaces with other machines (physical, virtual, or theoretical). Special thanks to @JimmaHoffa for his comment which started this question!
    Part 2: Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?
    Summary: I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.

    Read the article

  • Can T520+Bumblebee run an external monitor via DisplayPort?

    - by Fen
    Using 64-bit Ubuntu 11.10 and integrated (Intel) graphics, I can run the 1600x900 laptop display plus a 1600x1200 external monitor connected to the VGA adapter. But my external monitor is 1920x1200, so I have black stripes on each side. I believe the resolution is limited like this because the maximum resolution available from the Intel GPU is 2560x1600 = 4,096,000 pixels and I'm asking for a 3520x1200 = 4,224,000 display (with 1600x300 lost above the laptop screen). At 3200x1200 = 3,840,000 pixels, the Intel GPU seems happy. Under Windows, the same limit exists when using the VGA adapter, but if I turn Optimus on then I can connect the external monitor to the DisplayPort and get its full resolution and an extended desktop. I've seen that Bumblebee can run apps on the DisplayPort using the 'optirun' command. My question is: can Bumblebee run the DisplayPort in concert with the Intel card running the laptop screen, creating a large virtual desktop (as on Windows)? If so, are there any pointers on how to do this? I tried once, failed, and dropped back to integrated graphics (and black stripes) as I could find no reports of this configuration working.

    Read the article

  • How can I get my monitor's maximum resolution without the proprietary AMD graphic driver installed?

    - by Venki
    I am using Ubuntu 14.04. I have an AMD Radeon HD 5570 graphics card. Actually, the default open source REDWOOD drivers aren't letting me choose my monitor's maximum screen resolution (which is 1366x768). I just have two resolutions available, 1024x768 and 800x600. If I give the command xrandr -s 1366x768 then the output is: "Size 1366x768 not found in available modes". So just for the sake of getting the 1366x768 resolution, I am forced to install the proprietary graphics driver that AMD provides on its site. But if I install it (which itself is quite a problem-prone process), I run into a lot of 'inconvenience'. Sometimes after an OS update, the driver crashes Unity. Then I have to uninstall the driver from a tty and google around for a solution. I also encounter screen tearing problems occasionally, and in addition I can't see my login screen (see this question, which states this particular problem). The main problem is that AMD does not update its driver as quickly as Ubuntu updates its OS, which is quite irritating. So, I want the maximum resolution (and performance) that my graphics card and monitor can give me without installing the 'problematic' proprietary graphics driver from AMD. Is this possible? Suggestions please. Thanks in advance. PS: More system specs: Intel i3 2100 processor, AMD P8H61-M PLUS2 motherboard, AMD Radeon HD 5570 graphics card, DELL monitor. (BTW, thank you for reading through my elaborate description!)
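    For reference, when a mode is simply missing from the list, it can often be added by hand with cvt and xrandr. A hedged sketch follows: the output name VGA-0 is an assumption (check xrandr -q for the real name), and the modeline values should be regenerated locally with cvt, which rounds 1366 up to 1368:

      cvt 1366 768                      # prints a CVT modeline for the requested size
      xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
      xrandr --addmode VGA-0 "1368x768_60.00"
      xrandr --output VGA-0 --mode "1368x768_60.00"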

    Read the article

  • Should I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority?

    - by JonathonG
    Can I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority, and is it a good idea? Here's more info: I am about to begin work on porting a mid-sized game from Flash/AS3 to Java so that I can continue development with multi-threading capabilities. Here's a small bit of background about the game: The game contains numerous asynchronous activities, such as "world updating" (the game environment is constantly changing based on a set of natural laws and forces), procedural generation of terrain, NPCs, quests, items, etc., and on top of that, the effects of all of the player's interactions with his environment are programmatically calculated in real time, based on a set of constantly changing "stats" and once again, natural laws and forces. All of these things going on at once, in an asynchronous manner, seem to lend themselves to multi-threading very well. My question is: Can I build some kind of central engine that handles the "stacking" of all of these events as they are triggered, and dynamically sorts them out amongst the available threads, and would it be a good idea? As an example: Essentially, every time something happens (IE, a magic missile being generated by a spell, or a bunch of plants need to grow to their next stage), instead of just processing that task right then and adding the new object(s) to a list of managed objects, send a reference to that event to a core "event handler" that throws it into a stack of all other currently queued events, which then sorts them out and orders them according to urgency, splits them between a number of available threads for as-fast-as-possible multithreaded execution.

    Read the article

  • Windows Spooler Events API doesn't generate events for network printers

    - by clyfe
    The context: I use the Spooler Events API to capture events generated by the spooler when a user prints a document, i.e. FindFirstPrinterChangeNotification and FindNextPrinterChangeNotification. The problem: When I print a document on the network printers from my machine, no events are captured by the monitor (which uses the functions above). Note that events ARE generated fine for local printers; only network printers are problematic!

    Read the article

  • Visual Studio has gone crazy trying to create files

    - by zachary
    I downloaded Process Monitor and began monitoring my project directory that I am writing code in. I see endless entries every couple seconds of: Operation: Create File C:\Inetpub\wwwroot\csharp C:\Inetpub\wwwroot\code C:\Inetpub\wwwroot\web and so on for the rest of the templates Then it says the result is PATH NOT FOUND. What is happening? Has Visual Studio GONE CRAZY?!?!?!?!?!?!?!

    Read the article

  • Changing the default boot option without losing the boot menu

    - by hvd
    I've had a working multi-boot setup with the Windows boot loader, containing menu items for two Windows 7 systems and one for Grub. Grub in turn contains multiple menu items, but I think that's not relevant here. I've upgraded one system to Windows 8. When I now set a different system as the default, I lose the boot menu, and I lose the possibility of booting into the other systems. I've set Windows 7 as the default, rebooted, and get Windows 7, but I don't get to choose which system to boot into. I can run its own bcdedit to change the default back to Windows 8, and another reboot shows the boot menu again, but how can I avoid defaulting to Windows 8? Here are my current boot settings; is there anything that is misconfigured?

      C:\WINDOWS\system32>bcdedit

      Windows Boot Manager
      --------------------
      identifier              {bootmgr}
      device                  partition=F:
      description             Windows Boot Manager
      locale                  nl-NL
      inherit                 {globalsettings}
      integrityservices       Enable
      default                 {current}
      resumeobject            {2f8b77f0-a30b-11e1-a9c6-a4bd8d37f662}
      displayorder            {current}
                              {2f8b77e3-a30b-11e1-a9c6-a4bd8d37f662}
                              {2f8b77ee-a30b-11e1-a9c6-a4bd8d37f662}
      toolsdisplayorder       {memdiag}
      timeout                 30

      Windows Boot Loader
      -------------------
      identifier              {current}
      device                  partition=C:
      path                    \WINDOWS\system32\winload.exe
      description             Windows 8
      locale                  nl-NL
      inherit                 {bootloadersettings}
      integrityservices       Enable
      recoveryenabled         No
      allowedinmemorysettings 0x15000075
      osdevice                partition=C:
      systemroot              \WINDOWS
      resumeobject            {2f8b77f0-a30b-11e1-a9c6-a4bd8d37f662}
      nx                      OptIn
      bootmenupolicy          Standard

      Windows Boot Loader
      -------------------
      identifier              {2f8b77e3-a30b-11e1-a9c6-a4bd8d37f662}
      device                  partition=D:
      path                    \Windows\system32\winload.exe
      description             Windows 7
      locale                  nl-NL
      osdevice                partition=D:
      systemroot              \Windows
      resumeobject            {59616f59-a2ba-11e1-b73a-806e6f6e6963}
      nx                      OptIn
      pae                     Default
      bootmenupolicy          Standard
      hypervisorlaunchtype    Auto
      detecthal               Yes
      sos                     No
      debug                   No

      Real-mode Boot Sector
      ---------------------
      identifier              {2f8b77ee-a30b-11e1-a9c6-a4bd8d37f662}
      device                  partition=C:
      path                    \grub\winloader\grub.boot
      description             Grub 2

    Read the article

  • How can I get Pinch to Zoom back in Desktop mode?

    - by Ben Brocka
    Windows 7 had an old implementation of pinch to zoom where bringing your fingers apart/together would act similar to ctrl + +/-, the standard zoom. It's not as nice as granular zoom (like iOS/Android use) but it worked. Most notably it doesn't work in Chrome (it did before), but I haven't noticed it working in any other apps. In Windows 8 desktop mode, pinch to zoom doesn't seem to work at all. It doesn't even work in OneNote 2010, which, if I recall correctly, had granular zoom in Windows 7. I have an (older) 2-touch-point multi-touch monitor, and I can see the visual feedback that the two touch points are coming closer/farther apart, but it doesn't zoom. Note I'm using the touchscreen, not a touchpad or the Arc mouse or other peripherals. Can I enable this somehow, or is it gone from desktop mode? It works fine in Metro apps. Additionally, I get weird visual feedback when placing my second finger on the screen; a shrinking transparent square appears somewhere between the two fingers, visually similar to the right-click visual cue when long-pressing. It's not a right click though; I can't tell what, if anything, it's doing.

    Read the article

  • How to force GNOME panels et al. to display on a different monitor without mirroring?

    - by GrueKun
    So, I recently purchased a new 23" monitor for my PC. However, I can't use it with the PC currently as I am waiting on a replacement heatsink. In the mean time, I wanted to use it with my Dell laptop. I hooked it up to the VGA port, and it seems to be working properly. However, I wanted to know if there was a way I could move all of the main display elements over to the attached monitor? I wanted to shut the LCD panel off on the laptop and hook it up like a desktop. Relevant specs: Ubuntu 10.10 x64 Intel graphics chipset The attached monitor is currently set as the default monitor. Any suggestions are welcome. :)
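    For what it's worth, on Intel graphics this is usually done with xrandr. A hedged sketch follows, in which the output names LVDS1 and VGA1 are assumptions; run xrandr -q to see the actual names on the laptop:

      xrandr -q                                   # list connected outputs and their names
      xrandr --output VGA1 --auto --primary       # drive the external monitor and make it the primary display
      xrandr --output LVDS1 --off                 # turn the laptop panel off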

    Read the article

  • curl multiple requests cookie issue

    - by embedded
    I'm using the curl multi API for multiple curl requests. First I use a single request to log in to a site and save the cookie file. Then I use the curl multi API to get some data from that site. The problem I'm facing is that, for some reason, the cookie file does not get read and I'm redirected to the main login page. I must say that this works once in a while, so I can't pinpoint what went wrong. Any help would be much appreciated. Thanks

    Read the article

  • WPF Manipulation and Touch events not firing, but mouse events do?

    - by Smetad Anarkist
    I have a Samsung LD220Z multi-touch monitor, and when I'm experimenting with some WPF and touch features the touch events aren't firing. The mouse down event seems to be firing, however. Does this mean that I have to take into account that not all touch screens behave the same, and how do I get the touch inputs on my screen? I've tried the Windows 7 Touch Pack that Microsoft released, and the multi-touch features seem to work there. Any suggestions on how to proceed with this?

    Read the article

  • Bandwidth Monitoring in ASP.NET

    - by asifch
    Hi, we are developing a multi-tenant application in ASP.NET with a separate DB for each tenant, and one of the requirements is to monitor the bandwidth usage for each tenant. I have tried to search but have not found much help on the topic. We want to monitor exactly how much bandwidth is being used by each tenant, where each tenant can have its own top-level domain, a subdomain, or a combination of both. So what are the available options? The ones I can think of are: 1. IIS log monitoring, meaning a separate application which will calculate the bandwidth for each tenant. 2. Log each request and response for a tenant from within the application and then calculate the total bandwidth usage based on that. 3. Use some third-party components, if available. So what do you think will be the best approach? Also, is there any other way to do this?
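    As a rough illustration of option 1, here is a hedged sketch of the kind of log rollup involved, assuming W3C-format IIS logs with the cs-host and sc-bytes fields enabled; the column numbers and log file name below are assumptions and must be matched against the #Fields header of the actual logs:

      # Sum response bytes per host header in one IIS log file (skip the # directive lines)
      awk '!/^#/ { sent[$5] += $12 } END { for (h in sent) print h, sent[h] }' u_ex140101.log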

    Read the article

  • curl multiple request session issue

    - by embedded
    I'm using the curl multi API for multiple curl requests. First I use a single request to log in to a site and save the cookie file. Then I use the curl multi API to get some data from that site. The problem I'm facing is that, for some reason, the cookie file does not get read and I'm redirected to the main login page. I must say that this works once in a while, so I can't pinpoint what went wrong. Any help would be much appreciated. Thanks

    Read the article

  • Do Android/webOS devices support multi-touch Javascript events?

    - by Rufo Sanchez
    On iPhone, iPod touch and (presumably) iPad, Apple has multi-touch event handling available via JavaScript in Mobile Safari. I know the Nexus One recently added multi-touch support via an update, and I believe webOS is also multi-touch enabled. Do Android 2.1 and/or webOS have access to multi-touch in the browser, or is this currently exclusive to Apple devices?

    Read the article

  • Is it possible to use a laptop as an external monitor?

    - by Jephir
    I need to create a multi-monitor setup for my desktop computer but I have no additional monitors with me right now, aside from my laptop. Is it possible to use my laptop screen as an external monitor? Note that I am not trying to connect a monitor to my laptop, rather, I am trying to connect my laptop screen to a desktop video card (if this is possible).

    Read the article
