Search Results

Search found 16560 results on 663 pages for 'high tech resources'.


  • Laptop overheating: ran Defraggler and now it's not as hot

    - by Marko
    I'm trying to diagnose and fix an overheating Acer 5735 laptop, running SpeedFan and applying a general workload to try to reproduce the overheating. I noticed that Windows XP was badly fragmented according to Defraggler, at 58% fragmentation, so I defragmented while watching the SpeedFan window, which at the start was reporting high warning-style symbols for all of the sensors. After the defrag I rebooted and ran a few programs (even Defraggler again), and the sensors in SpeedFan all reported green, i.e. not high. Is there a correlation where Windows fragmentation causes the hard drive to work harder and produce more heat inside the laptop? I don't want to just assume the problems are resolved, so either SpeedFan is not accurate enough or fragmentation can lead to additional hard drive heat? All comments or suggestions welcome.

    Read the article

  • DB2 LUW tools for diagnosing issues when the stuff hits the fan

    - by Ichorus
    I am no DBA and very much a novice when it comes to DB2, so even 'obvious' answers are welcome: I love db2top, but sometimes I cannot get it to run when the load average is high on a DB2 LUW server. This morning I was looking at an issue where the load average shot up suddenly; I could not get db2top to come up, and I needed to find out what was happening. What can I do to find out who is doing what in this situation? I suspected a horribly bad query was being run by someone... is there a good way to find information on poorly performing SQL on the fly in that kind of situation? Are there any good ways to collect good, actionable stats on who/where bad SQL is coming from when the load average is that high? I know about db2pd, but I am not sure how to use it effectively, and slogging through tens of thousands of lines of raw data is probably not the most efficient way to get to the heart of a problem. Any tips or resources?
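
    As a starting point, a few targeted db2pd invocations can narrow things down without producing a full dump; this is only a sketch of the kind of spot checks meant here, and MYDB is a placeholder database name:

        # Who is connected and what each application is doing right now
        db2pd -db MYDB -applications

        # Cached dynamic SQL statements (look for outliers in execution counts/time)
        db2pd -db MYDB -dynamic

        # Current locks, since lock waits often accompany a sudden load spike
        db2pd -db MYDB -locks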

    Read the article

  • Using AHK PostMessage to send WM_WININICHANGE to Program Manager

    - by SaintWacko
    I've written a script which updates an environment variable, but I need to tell Program Manager to update the computer's programs with this new information. I was given this as the API call that another program makes to cause this: ::SendMessage(::FindWindow("Progman", NULL), WM_WININICHANGE, 0L, (LPARAM)"Environment"); I am attempting to translate this into an AutoHotkey PostMessage call, but I'm doing something wrong, as it isn't working. Here's where I've gotten so far: PostMessage, 0x1A,, (LPARAM)"Environment", "Program Manager". The AHK resources I've been looking at are the List of Windows Messages, Send Messages to a Window or Its Controls, and PostMessage / SendMessage, and the resources I used to figure out the original API call are the SendMessage function and WM_WININICHANGE message documentation. Can anyone help me figure out what I'm doing wrong?
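
    For what it's worth, a sketch of one way to express that call in AutoHotkey v1: 0x1A is WM_WININICHANGE (the same value as WM_SETTINGCHANGE), and the command form of PostMessage/SendMessage expects integers for wParam/lParam, so the string "Environment" has to go through a pointer, which is easiest via DllCall. Broadcasting to HWND_BROADCAST (0xFFFF) instead of targeting Progman is also common. This is an untested sketch, not a verified fix:

        ; Broadcast WM_SETTINGCHANGE with lParam pointing at the string "Environment"
        DllCall("SendMessageTimeout"
            , "Ptr", 0xFFFF          ; HWND_BROADCAST (or WinExist("ahk_class Progman"))
            , "UInt", 0x1A           ; WM_WININICHANGE / WM_SETTINGCHANGE
            , "Ptr", 0               ; wParam
            , "Str", "Environment"   ; lParam: address of the string "Environment"
            , "UInt", 0x2            ; SMTO_ABORTIFHUNG
            , "UInt", 5000           ; timeout in milliseconds
            , "UPtrP", result)       ; receives the message result

    AutoHotkey v1 also has a built-in EnvUpdate command that sends essentially the same notification, which may be simpler if broadcasting is acceptable.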

    Read the article

  • wireless access point

    - by Warren Bullock III
    I'm hoping to get some suggestions for wireless access point/router models that will allow us to have two separate networks. We run an internal network on a 10.x.x.x IP range, with shares and other network resources that we would like our regular users to access. However, we would also like to offer a separate wireless network for guests, ideally on 192.168.x.x, and those users should not be able to see any of the resources sitting on the 10.x.x.x network. Does anyone have recommendations for single devices that might be able to get the job done? I was looking at the Linksys E4200 and it seems to support what I'm looking to do... any others? Thanks in advance for any suggestions.

    Read the article

  • monitoring TCP/IP performance on Solaris

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck?

    Read the article

  • Which keyboard has better ergonomics?

    - by Absolute0
    When I was a kid I fell hard on my right wrist, and since then I get wrist pain whenever my wrist is angled up very high (e.g. when using a very tall mouse or doing push-ups). So I have narrowed my keyboard choices down to two: the Microsoft Natural 4000 and the Razer Arctosa. The Razer is a slim keyboard with a laptop-like feel, and its wrist rest would help keep my hands straight with respect to my forearms. I am more inclined to get the Razer but am not sure whether it will benefit my wrists in the long run. Any thoughts on this would be greatly appreciated. Thanks.

    Read the article

  • PCI scan findings and problems with weak ciphers on ports 993, 443, 995, 465

    - by user64991
    From the PCI scan results: Synopsis: The remote service encrypts traffic using a protocol with known weaknesses. Description: The remote service accepts connections encrypted using SSL 2.0, which reportedly suffers from several cryptographic flaws and has been deprecated for several years. An attacker may be able to exploit these issues to conduct man-in-the-middle attacks or decrypt communications between the affected service and clients. See also: http://www.schneier.com/paper-ssl.pdf Solution: Consult the application's documentation to disable SSL 2.0 and use SSL 3.0 or TLS 1.0 instead. Risk Factor: Medium / CVSS Base Score: 2 (AV:R/AC:L/Au:NR/C:P/A:N/I:N/B:N). I have tried changing SSLProtocol all -SSLv2 to SSLProtocol -ALL +SSLv3 +TLSv1, and SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW to SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:!MEDIUM:!LOW:!SSLv2:!EXPORT, but SSLDigger shows the same result. Is this the right way to do something like this?
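
    For reference, a minimal sketch of the mod_ssl directives the poster is editing, assuming Apache 2.2; note that these only affect the HTTPS virtual host (port 443), while ports 993, 995 and 465 are normally served by the mail daemons, which carry their own SSL protocol and cipher settings:

        # Inside the SSL-enabled virtual host (or the global mod_ssl configuration)
        SSLEngine on
        SSLProtocol all -SSLv2
        SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:!LOW:RC4+RSA:+HIGH:+MEDIUM

    After changing the directives, the configuration has to be reloaded (e.g. apachectl graceful) before a rescan will show any difference.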

    Read the article

  • CPU load, USB connection vs. NIC

    - by T.J. Crowder
    In general, and understanding that the answer may vary by manufacturer and model (and driver, and...), in consumer-grade workstations with integrated NICs, does the NIC rely on the CPU for a lot of help (as is typically the case with a USB controller, for instance), or is it fairly intelligent and capable on its own (like, say, the typical FireWire controller)? Or is the question too general to answer? (If it matters, you can assume Linux.) Background: I'm looking at connecting a device (digital television capture) that will be delivering ~20-50 Mbit/s of data to a somewhat under-powered workstation. I can get a USB 2 High-Speed device or a network-attached device, and am interested in avoiding impacting the CPU where possible. Obviously, if it's a 100 Mbit NIC, that's roughly half its theoretical inbound bandwidth, whereas it's only roughly a tenth of the 480 Mbit/s that the USB 2 "High Speed" interface offers. But if the latter requires a lot of CPU support and the former doesn't...
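
    One rough way to gauge this empirically on Linux is to watch CPU and interrupt load while the device is streaming, and to check which offload features the NIC driver exposes; a sketch, assuming the interface is eth0:

        # Checksum/segmentation offloads supported by the NIC (less CPU work when "on")
        ethtool -k eth0

        # Interrupt counts per device; run twice and compare while data is flowing
        cat /proc/interrupts

        # Overall CPU split, including softirq time spent in the network stack
        top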

    Read the article

  • Recommendations for a VMware web server environment with a load balancer

    - by Ben
    We run IIS websites on a VMware production server that pull image and video content from a separate IIS instance on another server (the media server). The media calls (images and video) are straight http:// calls, not a streaming application. During peak traffic periods we clone the production server five times and have a load balancer distribute traffic to all five production servers. The media server does not get ramped up. We noticed that processing and resources on the media server get very taxed during this period. Would it make sense to run the media server's IIS instance locally on the production server and have it cloned with the production servers, then have a rule on the load balancer handling these media calls from the website? Or would it be better to allocate more resources (memory and CPUs) to the media server VM and not clone it with the production servers? Recommendations are sincerely appreciated.

    Read the article

  • Windows Server 2008 Scheduled Tasks not running - 0x80041323 - Reduce Number of tasks running in the specified context?

    - by Mayb2Moro
    I am getting the following problem on a number of Windows 2008 servers: 0x80041323: Task Scheduler failed to start task "\Reporting" in TaskEngine "S-1-5-18:NT AUTHORITY\System:Service:" for user "NT AUTHORITY\System". User Action: Reduce the number of tasks running in the specified user context. I've done lots of research around the web but have been unable to come up with a working answer. I have found some information suggesting increasing the registry value "TasksInMemoryQueue", which I have done, but even setting this as high as 500 has not helped. I have rebooted the server after setting this value. The server does run a high volume of scheduled tasks; there could be 150 or so running at any one time, but certainly not 500. The scheduled tasks all run under the SYSTEM user. Does anyone have any ideas?

    Read the article

  • eAccelerator ignores my new settings?

    - by Mwebe Nkrumah
    Hi, I'm using eAccelerator 0.9.5.2, CentOS 5.3, and lighttpd 1.4.22. Because eAccelerator caches in RAM, it needs too much RAM, so I'm trying to cache on the hard disk instead (my website does not generate money, so I'm looking for a cheaper solution). I modified /etc/php.d/eaccelerator.ini with the settings below:

        extension="eaccelerator.so"
        eaccelerator.shm_size="12"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="0"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="20M"
        eaccelerator.shm_ttl="1800"
        eaccelerator.shm_prune_period="0"
        eaccelerator.shm_only="0"
        eaccelerator.compress="0"
        eaccelerator.compress_level="9"
        eaccelerator.keys="disk_only"
        eaccelerator.sessions="disk_only"
        eaccelerator.content="disk_only"

    The output of phpinfo() is here: http://img175.imageshack.us/img175/1104/screenshggot.png. But after switching to "disk_only" in eAccelerator and restarting lighttpd and php-cgi using killall, RAM usage for php-cgi is still high. Rebooting the server does not help either. The data is created in the cache directory, but RAM usage is still high.

    Read the article

  • Bacula vs. BackupPC [closed]

    - by ujjain
    I have been googling the differences between them: Bacula has lots of roles; BackupPC is easier to configure; Bacula works with an agent rather than rsync (great for Windows backups). It seems that Bacula is most often compared to Amanda, though, while BackupPC seems a perfectly lovely and popular backup solution too. I currently back up my servers with rsnapshot, but I am looking for a professional, scalable solution that could also back up 50 hosts without problems, preferably one that can offer bare-metal restores for my Linux servers. I am not looking to reinstall the exact same version of Plesk, the software, etc... Update: I see this ranks high in Google; I found a good article: http://www.serverfocus.org/backuppc-vs-bacula-vs-amanda. I personally think that BackupPC is good for smaller environments, but Bacula, despite the high learning curve, is better for environments that require scaling.

    Read the article

  • How to remove blank lines in a .txt file

    - by Brant
    I want to change the format of a text file as shown below, but don't know how to do it: (2) 5. The function of the condenser is to: a) vapourise the liquid refrigerant b) change high pressure refrigerant vapour to liquid c) pressurise low pressure refrigerant vapour d) vent off vapourised refrigerant e) lower the liquid refrigerant pressure (2) 6. One tonne of refrigeration is: a) 13958 kJ per day b) 100 kJ per minute c) 233 kJ per minute d) 13958 J per hour e) 335 J per second (2) 5. The function of the condenser is to: a) vapourise the liquid refrigerant b) change high pressure refrigerant vapour to liquid c) pressurise low pressure refrigerant vapour d) vent off vapourised refrigerant e) lower the liquid refrigerant pressure (2) 6. One tonne of refrigeration is: a) 13958 kJ per day b) 100 kJ per minute c) 233 kJ per minute d) 13958 J per hour e) 335 J per second
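
    Assuming the goal matches the title (strip the blank lines from the file), a one-liner does it on any system with sed or grep available; questions.txt is a placeholder file name:

        # Delete lines that are empty or contain only whitespace (in place, GNU sed)
        sed -i '/^[[:space:]]*$/d' questions.txt

        # Or write a cleaned copy instead of editing in place
        grep -v '^[[:space:]]*$' questions.txt > questions_clean.txt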

    Read the article

  • In what way does non-"full n-key rollover" hinder fast typists?

    - by Michael Kjörling
    Wikipedia claims (although the latter claim does not cite a source) that: High-end keyboards that provide full n-key rollover typically do so via a PS/2 interface as the USB mode most often used by operating systems has a maximum of only six keys plus modifiers that can be pressed at the same time.[4] This hinders fast typists, ... In what way would the system being able to recognize only six non-modifier keys at once hinder a fast typist? I consider myself a relatively fast typist and I usually press one key, plus modifiers, at once; I can't imagine any real-life situation in which the system only recognizing six non-modifier keys being pressed at once has been a limiting factor in my keyboard usage. (Multi-stroke keyboard shortcuts as used by high-end software like Visual Studio, Emacs and the like are a different matter.) Note that I am not really interested in answers centered around multiplayer computer games; I'm looking for answers that give reasons that would be relevant to typists, somehow supporting the statement made on Wikipedia.

    Read the article

  • Latency, Ping and Other Questions

    - by Paulo Cassiano
    In a high-traffic application, like an online auction system, a few ms could determine whether you 'win' or 'lose' the 'battle'. I'm from Brazil. Here, I ping local sites, like UOL, and receive replies in ~11 ms. When I ping US sites, like Rackspace, I receive replies in ~130 ms! The point is: I need a (very good, Rackspace-like [1]) infrastructure to host my killer online auction application, but there are no Rackspace-like options in Brazil... Assuming that all users are located here in Brazil, is it a 'sine qua non' condition to host my application here in Brazil? I think ~130 ms is a very high latency, but is that really the reply time all users would see? Well, where should I host my application? [1] Feel free to point me to any other very good host option other than Rackspace. I've cited them only because they're the ones I know...

    Read the article

  • How can I take browser screenshots at a higher resolution than my browser supports?

    - by Joshua Carmody
    I need to take a screenshot of a website as it would appear on a very high resolution monitor... say 4000x3000 pixels. My laptop's screen has a native resolution of 1400x768. Basically, I need to simulate having a monitor resolution much higher than my monitor and video card actually support. I want the screenshot of the site to look pretty much how it does when you hit CTRL MINUS (zoom out) in Firefox repeatedly, but without any loss of pixels due to scaling. How can I do this? Is there some way to use virtual machine software to simulate a super-high-res display? If not, is there some way to open a browser window bigger than the screen, and then capture its contents as a PNG somehow? Anything else that might work?
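
    One approach that has since become practical is a headless browser, which renders into an off-screen surface and is not limited by the physical display; a sketch using headless Chrome/Chromium (flag names assume a reasonably recent build, and the URL is a placeholder):

        # Render the page at 4000x3000 and save it as a PNG, no visible window needed
        chrome --headless --hide-scrollbars --window-size=4000,3000 \
               --screenshot=site.png https://example.com/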

    Read the article

  • Dock displays low-resolution icons

    - by squircle
    Recently, I've noticed that the dock has been starting to display low-resolution icons in place of the former high-resolution icons for common apps like Stickies, Word, iTunes and Preview. Looking at the .icns file within each program, all copies of the icon are present within the file (high and low resolutions), but the dock refuses to display them, leaving some programs looking like this: Restarting doesn't stop this behaviour, nor does a killall Dock, nor removing the icon and replacing it in the dock. In Finder, the icons display normally. Does anybody know what may be causing this issue? Thanks!

    Read the article

  • Multiple servers acting like a single one with all the hardware?

    - by marc.riera
    Hello, at the moment I have 10 servers for HPC, compute-oriented work. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). So far we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all servers. With these tools the servers remain independent, but run the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with the resources of the other nine as if they were its own. The difference would be substantial in processing time and also in the time needed to design the command to launch. Any advice on which software to use would be very useful. Thanks

    Read the article

  • VPN/OpenVPN as a cloud service

    - by 8pipe
    I am working on creating a small cloud (any number of EC2 instances that can be deployed based on load) implementing a VPN as a service for the company I'm working for. This is basically a project gathering together various VPN resources under one aegis as a cloud-based service. As a user of OpenVPN I'm somewhat familiar with connecting, but I'm looking for resources to start this project. Essentially I need to be able to: run a certificate authority and manage keys to distribute to coworkers; build an AMI that runs OpenVPN as a service; and balance the load among machine instances as needed. Any suggestions for tutorials, things to avoid, roadblocks I might not be seeing from a novice perspective, etc., or just help in visualizing this, are appreciated.
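
    For the certificate-authority piece, the OpenVPN project's easy-rsa scripts cover most of the key management described above; a sketch of the basic flow with easy-rsa 3 (the names vpn-server and alice are placeholders):

        ./easyrsa init-pki                              # create an empty PKI directory
        ./easyrsa build-ca                              # create the CA key and certificate
        ./easyrsa build-server-full vpn-server nopass   # server key + signed certificate
        ./easyrsa build-client-full alice               # per-coworker key + signed certificate

    The resulting CA certificate and the per-client certificate and key are then referenced from the ca/cert/key directives of each client's OpenVPN configuration.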

    Read the article

  • Samsung syncmaster SA300

    - by lee
    I recently bought the above monitor. I am now using the DVI port on the monitor, but the picture quality still seems average, much the same as when I was using the VGA port; there hasn't been a noticeable change. I am using the DVI output from my graphics card, but I thought the picture quality would be a lot more high-def than it is. I'm just really disappointed with the outcome, as I was expecting nearly high-def quality from my new monitor, but it just seems a bit ordinary. Could I need to change any settings?

    Read the article

  • Is it possible to define a virtual directory in IIS and make the files relative to the physical directory?

    - by Mikey John
    Is it possible to define a virtual directory in IIS and somehow make the files in that directory relative to the physical directory and not to the virtual directory? For instance, on my server I have the following folders: D:\WebSite\Css\myTheme.css and D:\WebSite\Images\image1.jpg. I created a virtual directory in IIS, resources.mysite. Inside my website I reference the style sheet like this: resources.mysite/myTheme.css. But inside myTheme.css I reference pictures via ../Images/image1.jpg. The problem is that image1.jpg is not found, because that path matches the physical folder layout and not the virtual directory on IIS. Can I solve this problem without modifying the style sheet?

    Read the article

  • Why is my connection to Playstation Network so unreliable? [closed]

    - by jammus
    Hello friends. I'm 28 and my girlfriend is 24. Our home internet connection is pretty reliable; it's almost always up and can reach fairly high download speeds. However, my experience with the PlayStation Network is pretty frustrating: I'm always getting kicked off or seeing quite high latency. Are there any tips or tricks that might help my online gaming run more smoothly? I'm using a wireless connection for the PS3; is this likely to be a factor?

    Read the article

  • Solaris TCP/IP performance tuning

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck? PS I posted this on StackOverflow originally. One person suggested snoop and dtrace. dtrace seems pretty general - are there any additional pointers on how to use it to diagnose TCP issues?
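
    A sketch of the kind of first-pass checks this usually involves, using standard Solaris 10-era tooling; counter and tunable names may vary slightly between releases:

        # Protocol-level counters: listen-queue drops and retransmits are the
        # usual signs of an overwhelmed TCP stack
        netstat -s -P tcp | egrep 'ListenDrop|RetransSegs'

        # Current sizes of the connection-request (accept) queues
        ndd -get /dev/tcp tcp_conn_req_max_q
        ndd -get /dev/tcp tcp_conn_req_max_q0

    If tcpListenDrop keeps climbing, raising tcp_conn_req_max_q (and having the application request a larger listen backlog) is one of the standard mitigations.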

    Read the article

  • CLI-Based monitoring tool for KVM

    - by Pinnacle
    I am developing a scheduler for running VMs on KVM. The scheduling over-commits resources like memory and CPU. For this, I need a CLI-based monitoring tool that keeps giving me information about the resource usage of each VM, because it might be the case that, due to over-provisioning of resources, the VMs on a particular host are running very slowly (depending on the benchmarks/programs each VM is running), and then I need to migrate a VM to another host, and so on. I looked into libvirt-based tools like collectd, Munin, Nagios-virt, etc. ( http://libvirt.org/apps.html#monitoring ). I also looked at the Ubuntu utility perf-kvm ( http://manpages.ubuntu.com/manpages/maverick/man1/perf-kvm.1.html ). I want to ask which CLI-based tool the community would recommend, so that I can build an automated scheduler that takes care of the above situation.
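
    A sketch of the libvirt-based CLI checks this usually boils down to; GUEST is a placeholder domain name, and virt-top is a separate package:

        virsh list --all               # defined/running domains on the host
        virsh dommemstat GUEST         # memory statistics reported for one guest
        virsh cpu-stats GUEST --total  # cumulative CPU time consumed by one guest
        virt-top                       # top-like live view across all guests

    Any of these can be polled and parsed from a scheduler script, which is often easier than screen-scraping an interactive monitor.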

    Read the article
