Search Results

Search found 18761 results on 751 pages for 'lot'.

Page 531/751

  • Word 2007 crashes on Server 2008 R2 terminal services

    - by John Rennie
    We are finding that Word 2007 (with SP2) crashes when used on a Windows 2008 R2 terminal server. Typically it crashes when you click File/Open or File/Save, but not every time; maybe one time in four. Just to be really confusing, on a test server in my office I can't make it crash. I have just today set up a brand new shiny 2k8 R2 terminal server with as simple a setup as possible (e.g. no anti-virus to confuse things), and we're still seeing crashes.

    My question is: has anyone else seen this, and if so, any clues on what's happening? We have a support case open with Microsoft, and the MS support engineer has conceded it's happening, but has so far been unable to find the reason. One possible factor is that all the 2k8 R2 terminal servers I've seen this on have been Hyper-V VMs (running on a 2k8 R2 host). I'm about to put in a physical 2k8 R2 terminal server at the customer where we're seeing the most crashes, in case this is relevant. More news soon.

    Sorry if this posting seems a bit vague, but this has just bitten us and is causing a lot of pain and sleepless nights :-( If anyone can help I'll be enormously grateful!

    Update: we've given up and gone back to 2008 pre-R2. Office 2003 and 2007 both work fine now. I think there are some problems with TS in R2. Googling doesn't find much, so I thought it was just me. It's reassuring to find that someone else has seen the same problem.

  • PfSense: dhcpd: send_packet: No buffer space available

    - by Tillebeck
    pfSense 2.0.1-RELEASE (i386). I get a lot of entries in the log saying:

        dhcpd: send_packet: No buffer space available

    There are approximately 40 active users, and they complain about prolonged waits when getting an IP and, periodically, about problems accessing the internet, prolonged response times, etc. The entry repeats in the log periodically:

        Jun 10 18:27:30 dhcpd: send_packet: No buffer space available
        Jun 10 18:40:53 dhcpd: send_packet: No buffer space available
        Jun 10 19:01:15 dhcpd: send_packet: No buffer space available
        Jun 10 19:10:47 dhcpd: send_packet: No buffer space available
        Jun 10 19:31:10 dhcpd: send_packet: No buffer space available
        Jun 10 19:53:51 dhcpd: send_packet: No buffer space available
        Jun 10 20:23:32 dhcpd: send_packet: No buffer space available
        Jun 10 21:01:42 dhcpd: send_packet: No buffer space available
        Jun 10 21:04:52 dhcpd: send_packet: No buffer space available
        Jun 10 21:23:29 dhcpd: send_packet: No buffer space available
        Jun 10 21:46:05 dhcpd: send_packet: No buffer space available
        Jun 10 22:02:17 dhcpd: send_packet: No buffer space available
        Jun 10 22:02:45 dhcpd: send_packet: No buffer space available
        Jun 10 22:06:21 dhcpd: send_packet: No buffer space available
        Jun 10 22:08:45 dhcpd: send_packet: No buffer space available
        Jun 10 22:09:19 dhcpd: send_packet: No buffer space available
        Jun 10 22:22:23 dhcpd: send_packet: No buffer space available

    I guess these are the same periods in which the users complain about reduced access to the internet. I have seen other threads about this error, but none related to pfSense. Any ideas of what to do?
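
    Since pfSense 2.0.x is built on FreeBSD, a first check worth trying from the shell is whether the mbuf network-buffer pool is being exhausted, which is the classic cause of ENOBUFS ("No buffer space available"). The commands below are standard FreeBSD tools, and the tunable value shown is only an illustration, not a recommendation:

        netstat -m                     # mbuf/cluster usage; look for "requests for mbufs denied"
        netstat -i                     # per-interface error and drop counters
        sysctl kern.ipc.nmbclusters    # current mbuf cluster ceiling

        # If the pool is exhausted, the ceiling can be raised in
        # /boot/loader.conf.local (example value) followed by a reboot:
        kern.ipc.nmbclusters="65536"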

  • i5 540M or i7 720QM for laptop running VMs and software development tools?

    - by Donald Hughes
    I'm a software developer who would primarily be running Windows 7 as the primary operating system. On a typical day I might, at any given moment, be running Visual Studio, Expression Web, SQL Server Developer (and Management Console), IIS, Photoshop, a dozen browser tabs in 2-3 different browsers, Skype video chat, streaming music, and a couple of VMs (WinXP and Ubuntu) for testing/experimentation. Obviously RAM is a concern, which is why I plan to use 8 GB so I can devote enough to the VMs for them to be usable. I'm also tempted to use an ExpressCard SSD for storing the VM disks, to ease disk contention. I know that this is asking a lot from a laptop, and that I should just use a desktop, but I need to be able to take my work with me between several locations.

    It seems that at a reasonable price point it comes down to the i5 540M versus the i7 720QM. I'm leaning toward the i7, since it would allow me to dedicate a whole hyperthreaded core to each VM and still have two cores left for the primary OS. I've heard that the i5 has better battery life, but I'm curious whether there would be a meaningful difference for my scenario. I don't usually work without a plug, but I do occasionally ride the train or fly, and it would be nice to have at least 3 hours of juice for unusual circumstances.

    Finally, for this usage scenario, would a dedicated video option be preferred over the i5's integrated video? It sounds like Visual Studio 2010 (and Windows 7) can take advantage of the video card.

  • Mac OS X 10.6 Setup for Apache/MySQL/Perl

    - by Russell C.
    I just got a new Mac and have been trying for a few days to set up a local development environment for my Perl applications, with no luck. I'm getting nowhere fast, so I hope someone else who has done this successfully can help.

    I started by installing MAMP, which I thought would take care of everything for me, but unfortunately it doesn't take care of some important Perl modules. I used CPAN to install all our required modules, except that DBD::mysql doesn't seem to install correctly through CPAN. After reading a lot online (lots of people reported problems with this), the recommendation was to use MacPorts to install the module, which I have tried with no luck using the following command:

        sudo port install p5-dbd-mysql

    After what seems like a successful install of DBD::mysql, Apache continues to report the following error when trying to run any of our Perl scripts:

        [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] install_driver(mysql) failed:
        Can't locate DBD/mysql.pm in @INC (@INC contains:
        /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level /Library/Perl/Updates/5.10.0
        /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0
        /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0
        /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0
        /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level
        /System/Library/Perl/Extras/5.10.0 .) at (eval 1835) line 3.
        [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Perhaps the DBD::mysql perl module hasn't been fully installed,
        [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] or perhaps the capitalisation of 'mysql' isn't right.
        [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge.

    I'm not sure where to go from here, but my Mac isn't much of a development environment if Perl isn't able to talk to the database. I'd really appreciate any help and advice you might be able to provide in getting my system set up successfully. Thanks in advance!
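
    A detail worth checking, inferred from the @INC list above (which contains only Apple system paths): MacPorts' p5-dbd-mysql installs into MacPorts' own Perl under /opt/local, not into the /usr/bin/perl that Apache is running. A quick way to confirm which interpreter can actually see the module (paths assume a default MacPorts install):

        # Apple's system perl (the one Apache appears to be using, judging by @INC):
        /usr/bin/perl -MDBD::mysql -e 'print $DBD::mysql::VERSION, "\n"'

        # MacPorts' perl, where p5-dbd-mysql actually lands:
        /opt/local/bin/perl -MDBD::mysql -e 'print $DBD::mysql::VERSION, "\n"'

    If the second command works and the first fails, the options are either to point the scripts (and Apache) at /opt/local/bin/perl, or to build DBD::mysql against the system perl via CPAN, telling its Makefile.PL where MAMP's mysql_config lives.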

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins, per policy. However, the connecting user (via SSH) does not receive the error indicating pam_tally2's action. On the 4th attempt I expect to see:

        Account locked due to 3 failed logins

    No combination of required or requisite, and no ordering in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password.

    The implementation follows NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (p. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth.

        If locking out accounts after a number of incorrect login attempts is required by your
        security policy, implement use of pam_tally2.so. To enforce password lockout, add the
        following to /etc/pam.d/system-auth. First, add to the top of the auth lines:

            auth required pam_tally2.so deny=5 onerr=fail unlock_time=900

        Second, add to the top of the account lines:

            account required pam_tally2.so

    EDIT: I can get the error message to appear by resetting pam_tally2 during one of the login attempts:

        user@localhost's password: (bad password)
        Permission denied, please try again.
        user@localhost's password: (bad password)
        Permission denied, please try again.
        (reset pam_tally2 from another shell)
        user@localhost's password: (good password)
        Account locked due to ...
        Account locked due to ...
        Last login: ...
        [user@localhost ~]$
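
    One frequently cited culprit for PAM messages being swallowed over SSH, offered here as a hypothesis to test rather than a confirmed fix: with plain password authentication, sshd runs the PAM conversation itself and can drop informational module output, whereas keyboard-interactive authentication relays PAM messages to the client. A minimal experiment in /etc/ssh/sshd_config (standard OpenSSH directives):

        # sketch: force the PAM conversation through keyboard-interactive
        PasswordAuthentication no
        ChallengeResponseAuthentication yes
        UsePAM yes

    followed by reloading sshd and retesting whether the "Account locked" message comes through.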

  • Windows Backup failed as another backup or recovery is in progress.

    - by remunda
    Hi, I have set up a backup schedule on our server (Windows Server 2008 R2): the SQL Server 2008 backup runs at 1:00 am and the Windows Server Backup at 4:00 am. The SQL backup runs well, but the system backup sometimes ends with an error: http://technet.microsoft.com/en-us/library/dd364768%28WS.10%29.aspx

    I believe this is caused by SQL Server, because it unexpectedly runs a backup of its own. This is from the SQL Server log (it appears for every database):

        Date     4.2.2010 4:00:21
        Log      SQL Server (Current - 29.1.2010 23:25:00)
        Source   spid72
        Message  I/O is frozen on database master. No user action is required. However, if I/O
                 is not resumed promptly, you could cancel the backup.

    and

        Date     4.2.2010 4:00:24
        Log      SQL Server (Current - 29.1.2010 23:25:00)
        Source   Backup
        Message  Database backed up. Database: master, creation date(time): 2010/01/29(23:25:32),
                 pages dumped: 370, first LSN: 859:56:37, last LSN: 859:80:1, number of dump
                 devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE:
                 {'{17B91D5C-9968-4D11-A7F1-1A31523D32F0}25'}). This is an informational message
                 only. No user action is required.

    My questions are: why does a SQL backup run together with the Windows backup, and how can I resolve these errors? Thank you a lot.
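
    What the log excerpts describe matches normal VSS behaviour: Windows Server Backup requests a volume snapshot, and the SQL Server VSS Writer freezes I/O and records a VIRTUAL_DEVICE backup for each database so the snapshot is consistent. A way to confirm the writer is involved, using standard Windows tooling:

        rem list the registered VSS writers; "SqlServerWriter" indicates that
        rem SQL Server participates in snapshot-based backups
        vssadmin list writers

    If the native SQL dump at 1:00 am is the authoritative backup, one option sometimes suggested is to stop and disable the "SQL Server VSS Writer" service so the 4:00 am snapshot no longer triggers these per-database backups; whether that is acceptable depends on your recovery plan.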

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and doing some experimental load testing with ab returns an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than mine, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php

    Are there some critical server configuration settings that need to be made to achieve those numbers? I've watched memory in top, and I still see a decent amount of free memory while running ab; I've watched mongostat as well and am not seeing anything that looks suspicious. The command I'm running, and the error, is:

        ab -k -n 100 -c 10 postrockandbeyond.com/
        This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
        Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/

        Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 32 requests completed

    Does anyone have any suggestions on things I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results.

    EDIT: I eventually solved this issue. I was using TTAPI, which connects to turntable.fm through websockets. On the homepage I was connecting on every request, so after a certain number of connections everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.

  • getting a pyserial not loaded error

    - by skinnyTOD
    I'm getting a "pyserial not loaded" error with the Python script fragment below (running OS X 10.7.4). I'm trying to run a Python app called Myro for controlling the Parallax Scribbler2 robot; I figured it would be a fun way to learn a bit of Python, but I'm not getting out of the gate here. I've searched out all the Myro help docs, but like a lot of in-progress open source programs they are a moving target: conflicting, out of date, or not very specific about OS X.

    I have MacPorts installed, and I installed py27-serial without error. MacPorts lists the Python versions I have installed, along with the active version:

        Available versions for python:
            none
            python24
            python25
            python25-apple
            python26
            python26-apple
            python27
            python27-apple (active)
            python32

    Perhaps stuff is getting installed in the wrong places, or my PATH is wrong (I don't much know what I am doing in Terminal and have probably screwed something up). Trying to find out about my sys.path, here's what I get:

        >>> import sys
        >>> sys.path
        ['',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload',
         '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC',
         '/Library/Python/2.7/site-packages']

    Is that a mess? Can I fix it? Anyway, thanks for reading this far. Here's the Python bit that is throwing the error; it occurs at the 'try: import serial' block.

        # Global variable robot, to set SerialPort()
        robot = None
        pythonVer = "?"
        rbString = None
        ptString = None
        statusText = None

        # Now, let's import things
        import urllib
        import tempfile
        import os, sys, time

        try:
            import serial
        except:
            print("WARNING: pyserial not loaded: can't upgrade!")
            sys.exit()

        try:
            input = raw_input   # Python 2.x
        except:
            pass                # Python 3 and better, input is defined

        try:
            from tkinter import *
            pythonver = "3"
        except:
            try:
                from Tkinter import *
                pythonver = "2"
            except:
                pythonver = "?"
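
    Worth noting, inferred from the output above and so offered as a hypothesis: the active Python is python27-apple and every sys.path entry is a /System/Library path, i.e. Apple's interpreter, while MacPorts' py27-serial installs under /opt/local for MacPorts' own python27. Two quick checks and two possible fixes, assuming default install locations:

        # does MacPorts' interpreter see pyserial?
        /opt/local/bin/python2.7 -c 'import serial; print(serial.VERSION)'

        # does Apple's interpreter (the active one) see it?
        /usr/bin/python2.7 -c 'import serial'

        # either make MacPorts' build the active python...
        sudo port select --set python python27

        # ...or install pyserial into Apple's python instead:
        sudo /usr/bin/easy_install pyserial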

  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point, all the clients that point to that server start performing very slowly.

    The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us.

    My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions:

    1. Use Linux-HA/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience, Pacemaker lowers availability without fencing).
    2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' DNS servers.

    Anycasting seems to work only at the IP routing level, and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
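
    For reference, the resolver knobs mentioned above live in /etc/resolv.conf; this is a sketch of the aggressive end of what the stock glibc resolver offers (server addresses are placeholders). It shortens the worst case but does not eliminate it:

        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        options timeout:1 attempts:2 rotate

    With these settings a query against a dead server stalls for at most about a second per attempt before the resolver moves on, instead of the default 5-second timeout.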

  • Where can I ask questions that aren't IT questions?

    - by Adam Davis
    Thank you for your confidence in our abilities. But have you read this? FAQ

    We get a lot of interesting technical questions on here, but Server Fault is meant to be first and foremost a resource for system administrators and IT professionals. In other words, Server Fault isn't meant to cater to every computer problem, just those that only exist in, or can best be solved in, the IT and sysadmin domain.

    Yes, someone here might be able to help you, but you'll find that other forums more focused on your topic can give you a much better answer than a bunch of IT professionals. It's likely that your question will be downvoted, closed, and in some cases marked "offensive." It's not that we hate you, it's just that we like to keep our corn pops separate from our cocoa puffs.

    In that vein, here are a bunch of other forums where you can get help. (This list contains the top two forums for each category, as voted for below.)

        Programming
            Stack Overflow
        Consumer-level computer hardware
            Ars Technica OpenForum
            Tom's Hardware Forums
        Computer software
            Anandtech Forum
        Web design/hosting/CMS
            SitePoint Forums
            Web Hosting Talk
        Math, science, engineering
            Physics Forums
            PlanetMath
        Other/general
            Ask Metafilter
            Google Search
            Mahalo Answers

    Check out the answers below for even more suggestions! What forums can people go to to ask the questions that are off topic here? Please list only ONE forum per answer, so votes can bring the best forums to the top.

    Return to FAQ Index

  • "No more threads can be created in the system" in Network and Sharing Center

    - by Zell Faze
    A while back I noticed on one of our laboratory computers (Windows 7, very little extra software installed) that the network connection icon in the system tray would claim it had no network connection, even though it did. The issue would go away after the computer was rebooted, but would surface again the next time I looked at the computer (a few days later).

    Upon opening the Network and Sharing Center, I am shown an actual error message, but not one that gives me much information about what the problem is. In place of the usual information about network adapters and whether you are connected to the Internet, it simply says:

        No more threads can be created in the system.

    The Event Viewer shows hundreds of events from different services, all with the same message:

        Volume Shadow Copy Service error: Unexpected error calling routine CoCreateInstance.
        hr = 0x800700a4, No more threads can be created in the system.

        The WinHTTP Web Proxy Auto-Discovery Service service failed to start due to the
        following error: A thread could not be created for the service.

        The IP Helper service terminated with the following error: No more threads can be
        created in the system.

    As far as I can tell, this message seems to mean there is some sort of resource leak in Windows, where something is creating a large number of threads and those threads are not being killed off. I've tried restarting WMI and several services related to networking, to no avail. Can anyone provide more information on what "No more threads can be created in the system" might mean, and what I might be able to do to fix the issue? Currently the only solution appears to be restarting.
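
    One way to hunt for the leaking process is to rank processes by thread count while the machine is in the broken state. A sketch using stock PowerShell 2.0 (included with Windows 7); whichever process shows an absurd thread count is the likely culprit:

        # top 10 processes by number of threads
        Get-Process |
            Sort-Object { $_.Threads.Count } -Descending |
            Select-Object -First 10 Name, Id, @{Name='Threads'; Expression={ $_.Threads.Count }}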

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, a command-line PHP app, to manage a Drupal website. I am running a command to import a lot of data, which is causing me to hit PHP's memory limit:

        PHP Fatal error:  Allowed memory size of 536870912 bytes exhausted ...

    That is 512 MB, if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses:

        $> drush status
        ...
        PHP configuration : /etc/php5/cli/php.ini

        $> grep memory /etc/php5/cli/php.ini
        ; Maximum amount of memory a script may consume (128MB)
        ; http://php.net/memory-limit
        memory_limit = 1024M

    But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory I raised from 512 to 1024 MB of RAM to allow drush to run.

        $> free -m
                     total       used       free     shared    buffers     cached
        Mem:          1010        578        431          0         14        392
        -/+ buffers/cache:         172        837
        Swap:          382          0        382

    So it says it has some 431 MB free, now that I've bumped the VM up to 1024. I guess half the memory is being used to run the GUI, but I don't understand how the GUI was running okay when the VM had 512 MB of RAM. Why is the PHP CLI still hitting a 512 MB memory limit? If it were hitting a system memory limit, shouldn't it die around 431 MB, which is what the free command says is available?
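
    Since the fatal error reports exactly 512M rather than the 1024M set in php.ini, something later in the chain is probably overriding the ini value: an extra ini file in a conf.d scan directory, an ini_set() in Drupal's own code, or possibly drush itself. A few checks using only standard PHP switches (the Drupal path is a placeholder):

        # every ini file the CLI actually loads, including scan-dir extras
        php --ini

        # the limit as the CLI ends up seeing it
        php -r 'echo ini_get("memory_limit"), PHP_EOL;'

        # sanity check that a command-line override sticks
        php -d memory_limit=1024M -r 'echo ini_get("memory_limit"), PHP_EOL;'

        # Drupal installs sometimes hard-code the limit in code,
        # which silently overrides php.ini
        grep -rn "memory_limit" /path/to/drupal/sites/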

  • RAID-1 and regular drive removal (using RAID-1 as a backup measure)

    - by Vi
    Is using an mdadm RAID-1 of 2 partitions (one on the laptop's internal HDD, one on an external HDD) a good idea? I want the system to work as RAID-1 when both drives are present, work as a regular volume (degraded RAID-1) when the external HDD is unplugged, and resync quickly when I plug the external HDD in again.

    Questions:

    1. Is it a good idea?
    2. Will a write-intent bitmap be enough for this task, or do I need something else?
    3. Should I consider doing it at the filesystem level instead? If yes, how?

    Basic requirements are:

    - Quick resync when I re-add the external drive (provided I haven't changed that partition).
    - More or less consistent data on the removed drive, if I remove it outside of a write/resync operation.
    - If I remove the drive during a resync, I expect the data to be somewhat inconsistent, but I expect quick resync completion when I re-add it again. That is, I want the remaining drive to track what has changed (there can be a lot of changes) and sync back only those parts that need it.
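
    For reference, the write-intent bitmap workflow the question describes looks roughly like this with mdadm. Device names are placeholders and this is a sketch, not a tested recipe:

        # create the mirror with an internal write-intent bitmap
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              --bitmap=internal /dev/sda2 /dev/sdc1

        # (or add a bitmap to an existing array)
        mdadm --grow /dev/md0 --bitmap=internal

        # cleanly detach the external half before unplugging it
        mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

        # after reattaching: the bitmap limits the resync to blocks
        # written while the drive was away
        mdadm /dev/md0 --re-add /dev/sdc1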

  • Where is vmlinux on my Ubuntu installation?

    - by Jason Baker
    I'm trying to get oprofile up and running, and I'm stuck at this step:

        opcontrol --vmlinux=/path/to/vmlinux

    Ubuntu has no package called vmlinux, and when I do a locate vmlinux, I get a lot of files:

        /usr/src/linux-headers-2.6.28-14/arch/h8300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-14/arch/m68k/kernel/vmlinux-std.lds
        /usr/src/linux-headers-2.6.28-14/arch/m68k/kernel/vmlinux-sun3.lds
        /usr/src/linux-headers-2.6.28-14/arch/mn10300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-14/arch/sh/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-14/arch/x86/boot/compressed/vmlinux_32.lds
        /usr/src/linux-headers-2.6.28-14/arch/x86/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-14/include/asm-generic/vmlinux.lds.h
        /usr/src/linux-headers-2.6.28-15/arch/h8300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-15/arch/m68k/kernel/vmlinux-std.lds
        /usr/src/linux-headers-2.6.28-15/arch/m68k/kernel/vmlinux-sun3.lds
        /usr/src/linux-headers-2.6.28-15/arch/mn10300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-15/arch/sh/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-15/arch/x86/boot/compressed/vmlinux_32.lds
        /usr/src/linux-headers-2.6.28-15/arch/x86/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-15/include/asm-generic/vmlinux.lds.h
        /usr/src/linux-headers-2.6.28-16/arch/h8300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-16/arch/m68k/kernel/vmlinux-std.lds
        /usr/src/linux-headers-2.6.28-16/arch/m68k/kernel/vmlinux-sun3.lds
        /usr/src/linux-headers-2.6.28-16/arch/mn10300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-16/arch/sh/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-16/arch/x86/boot/compressed/vmlinux_32.lds
        /usr/src/linux-headers-2.6.28-16/arch/x86/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-16/include/asm-generic/vmlinux.lds.h

    Which one of these is the one I'm looking for?
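
    None of the hits above is the kernel image itself; they are vmlinux.lds linker scripts from the kernel headers. Ubuntu ships only the compressed vmlinuz in /boot; an uncompressed vmlinux with symbols comes from the debug-symbol (dbgsym) packages, or oprofile can be run without kernel symbols at all. A sketch, assuming Ubuntu's ddebs archive is configured as a package source:

        # option 1: profile without kernel symbols
        sudo opcontrol --no-vmlinux

        # option 2: install the debug kernel image and point oprofile at it
        sudo apt-get install linux-image-$(uname -r)-dbgsym
        sudo opcontrol --vmlinux=/usr/lib/debug/boot/vmlinux-$(uname -r)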

  • Why does traceroute take much longer than ping?

    - by PHP
    How do I explain this?

        C:\Documents and Settings\Administrator>tracert google.com

        Tracing route to google.com [64.233.189.104]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  192.168.0.1
          2     7 ms    <1 ms    <1 ms  reserve.cableplus.com.cn [218.242.223.209]
          3   108 ms   135 ms   163 ms  211.154.70.10
          4     *        *        *     Request timed out.
          5     2 ms     *        1 ms  211.154.64.114
          6     1 ms     1 ms     1 ms  211.154.72.185
          7     1 ms     1 ms     1 ms  202.96.222.77
          8     2 ms     1 ms     2 ms  61.152.81.145
          9     1 ms     2 ms     1 ms  61.152.86.54
         10     1 ms     1 ms     1 ms  202.97.33.238
         11     2 ms     2 ms     2 ms  202.97.33.54
         12     2 ms     1 ms     2 ms  202.97.33.5
         13    33 ms    33 ms    33 ms  202.97.61.50
         14    34 ms    34 ms    34 ms  202.97.62.214
         15    34 ms   186 ms    37 ms  209.85.241.56
         16    35 ms    35 ms    44 ms  66.249.94.34
         17    34 ms    34 ms    34 ms  hkg01s01-in-f104.1e100.net [64.233.189.104]

        Trace complete.

    So the overall time should be 1+7+108+2+1+1+2+1+1+2+2+33+34+34+35+34+34+35+34 ms, which is a lot bigger than what ping reports:

        C:\Documents and Settings\Administrator>ping google.com

        Pinging google.com [64.233.189.104] with 32 bytes of data:

        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241

        Ping statistics for 64.233.189.104:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 34ms, Maximum = 34ms, Average = 34ms

  • Mouse wheel in VirtualPC (mostly) does not work on 64-bit Windows 7 RC

    - by JonStonecash
    I have recently upgraded my laptop from WinXP Pro (32-bit) to Windows 7 RC (64-bit). I have a number of Virtual PC 2007 images that I use for testing on various platforms and looking at beta software, and I have installed the 64-bit version of Virtual PC. The images all work, with the exception of the mouse wheel within the virtual machine.

    I have tried this out with WinXP Pro, Windows 7 RC, and Windows Server 2008 images. All are 32-bit, and all exhibit the same behavior: a gentle rotation of the wheel does nothing; a quick rotation sometimes scrolls and sometimes doesn't. I regard this behavior as unusable, as I tend to use the mouse wheel a lot. All of this worked just fine on WinXP.

    I have re-installed the Virtual Machine Additions on all of the machines. The Windows 7 RC virtual image was created after the upgrade to Windows 7 and the 64-bit version of Virtual PC (just to rule out the possibility that I had corrupted the images during the transition). I have googled, binged, and yahoo-ed. There are scattered mentions of this problem (dating back to VPC 2004) but no solutions.

    I am aware that I could start up one of these images and then use a Remote Desktop connection to access it; the mouse works just fine that way. In fact, I do just that for some development work, which is acceptable because I spend hours at a time in the development VM. These test environments are different: I bring up an image for just a short time, minutes rather than hours, so adding the RDC step is much more significant in these cases. Does anyone have any idea of what to do next?

  • How to make 7zip faster

    - by user34463
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes, comparing the 7-Zip and WinRAR default settings at their normal and best compression levels; in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

    1. Is there a way to speed 7-Zip up? I'd like it to be at least on par with RAR's speed.
    2. Is there a way to make recovery segments in 7-Zip like you can in RAR? I didn't see any, but I guess it could be a command-line thing.
    3. I tested WinRAR and 7-Zip using the latest stable version of each (4.x for 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to the bare minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz / 2.8 GHz) with 4 GB DDR3 RAM, and the 64-bit version of 7-Zip.
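
    On the speed question, the relevant knobs live on the 7z command line: -mx sets the compression level and -mmt enables multithreading. A sketch (switch values and paths are illustrative, not tuned recommendations):

        rem faster 7z: low compression level, multithreading on
        7z a -t7z -mx=3 -mmt=on backup.7z C:\data\

        rem or fastest: the zip container with minimal compression
        7z a -tzip -mx=1 backup.zip C:\data\

    On question 2, the 7z format has no built-in recovery record; external parity files (e.g. par2) are the usual substitute.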

  • ADSIEdit Cleanup After Exchange 2003 Crash During Transition To Exchange 2010

    - by ThaKidd
    Hello all. I would value some input from a few Exchange 2010 experts. I have almost completed the transition from Exchange 2003 Standard to Exchange 2010 Standard. Everything went smoothly until I tried to uninstall Exchange 2003, at which point the server bit the dust and died completely. I now have NO access to the old Exchange System Management MMC, as I am running Windows 2008 R2 and Windows 7 only. I can only fix this with ADSIEdit, the EMShell, and the EMConsole.

    I have used the 2010 shell to move/remove/verify that all mailboxes, public folders and the OAB are hosted on Exchange 2010. I also verified that the routing connector has been deleted. The only two things that were not done were removing the Recipient Update Service and actually uninstalling the 2003 software.

    I have spent a lot of time going through ADSIEdit and have located the old Administrative Group and the Exchange 2003 server listed under it. I also located the Recipient Update Service, which includes two entries: Enterprise and my domain name. I have read that it is an unwise idea to remove the old administrative group, so I won't bother messing with that.

    I am repeatedly getting two warnings in the Application Log, both from MSExchangeTransport: EventID 5006 (Cannot find route to Mailbox Server OLDSERVER) and EventID 5020 (The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003).

    So my questions are: to clean the old Exchange 2003 info out of AD, can I safely delete the server-name folder (Configuration - Services - Microsoft Exchange - ExchOrg - Administrative Groups - First Administrative Group - Servers - Old Server) and also delete the Recipient Update Service (Enterprise) and Recipient Update Service (DOMAIN) containers? Are there any additional items I need to address to ensure AD is clean? Thanks in advance for your help!

  • Why is my Current Printer unavailable in Office?

    - by cros
    Whenever I try to print any document from Microsoft Office 2007 on Windows Vista 64-bit, there is a good chance the print job will fail with the following error message:

        Current printer is unavailable. Select another printer.

    The only problem is that no printer works, not even the Bullzip PDF Printer. The only way I have found to resolve this so far is a reboot, but I don't want to do that all the time.

    I am using Windows Vista 64-bit and have had the problem with both SP1 and SP2. The problem occurs on both locally installed and network printers, as well as on the virtual Bullzip PDF Printer. My primary source of the problem has been Excel, but the error has also occurred in Word. Changing the default printer and restarting the Microsoft Office application solves this temporarily, but not permanently. Googling the error message returns a lot of questions but no solutions, so it seems to be a frequent problem. What could be a permanent solution for this problem?

    UPDATE: It seems that my problem stems from opening MS Office applications by opening a document from Total Commander with administrative rights. This somehow makes the applications unable to find the printers. Opening MS Office applications either from the Start menu, or by opening a document in a non-administrator Explorer, allows me to print.

  • PostgreSQL lots of writes

    - by strife911
    Hi, I am using PostgreSQL for a scientific application (unsupervised clustering). The Python program is multi-threaded, so that each thread manages its own postmaster process (one per core); hence, there is a lot of concurrency. Each thread-process loops infinitely through two SQL queries: the first is for reading, the second is for writing. The read operation touches 500 times as many rows as the write operation.

    Here is the output of dstat:

        ----total-cpu-usage---- ------memory-usage----- -dsk/total- --paging-- --io/total-
        usr sys idl wai hiq siq| used  buff  cach  free| read  writ|  in   out| read  writ
          4   0  32  64   0   0|3599M   63M   57G 1893M|1524k   16M|   0     0|  98   2046
          1   0  35  64   0   0|3599M   63M   57G 1892M|1204k   17M|   0     0|  68   2062
          2   0  32  66   0   0|3599M   63M   57G 1890M|1132k   17M|   0     0|  62   2033
          2   1  32  65   0   0|3599M   63M   57G 1904M|1236k   18M|   0     0|  80   1994
          2   0  31  67   0   0|3599M   63M   57G 1903M|1312k   16M|   0     0|  70   1900
          2   0  37  60   0   0|3599M   63M   57G 1899M|1116k   15M|   0     0|  71   1594
          2   1  37  60   0   0|3599M   63M   57G 1898M| 448k   17M|   0     0|  39   2001
          2   0  25  72   0   0|3599M   63M   57G 1896M|1192k   17M|   0     0|  78   1946
          1   0  40  58   0   0|3599M   63M   57G 1895M| 432k   15M|   0     0|  38   1937

    I am pretty sure I could write more often than that, as I have seen it write up to 110-140M in dstat. How can I optimize this process?
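
    Given the high I/O-wait in the dstat output, the usual first candidates are the write-path settings in postgresql.conf. A sketch of the knobs typically tuned for write-heavy loads; the values are illustrative starting points rather than recommendations, and synchronous_commit = off trades a small window of durability for throughput:

        shared_buffers = 2GB               # often ~25% of RAM as a starting point
        wal_buffers = 16MB                 # larger WAL buffer for bursty writes
        checkpoint_segments = 32           # spread out checkpoint I/O (pre-9.5 setting name)
        checkpoint_completion_target = 0.9 # smooth checkpoint writes over the interval
        synchronous_commit = off           # skip the fsync at every commit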

  • Backup Gmail using Mail.app and IMAP without redundancy

    - by Cawas
    I don't care for actually using Mail.app; I mostly use the Gmail web interface, and Mail.app just for offline access, for quickly reading and eventually replying. Everything is working fine; I think I've followed every guide out there, including a great one. But I could find nothing about avoiding redundancy.

    Well, I can manually avoid it either by using POP or by unchecking most of my labels from IMAP. But I use a lot of labels, I often give messages more than one label, and I want them all in Mail.app. Is there any way to make it keep just one copy of repeated messages? Maybe there's a message ID or checksum that could be used. If there isn't a way to do it, rest assured I still prefer having the extra messages and "wasting" space over not having them at all.

    Edit: I've come across many solutions for finding duplicate files, but they just delete the files. That makes things worse: Mail will simply sync them all again. I've realized it's probably better to keep two accounts set up: POP for backup, and IMAP for everything else with "All Mail" removed. That's because if "All Mail" on the server is deleted for any reason, my local "All Mail" will also get deleted, while POP will keep all files regardless of the server. This doesn't solve the redundancy issue at all, but it doesn't create any new issues either, and I can even search properly, without duplicated results, if I search just in the POP account. So it helps optimize things a little bit.

    But I still think the best way to solve this would be something like aamann's Mail Scripts, tweaked to hardlink the duplicates rather than delete them, and optimized so it doesn't need to scan everything every time. I'm trying to contact him to see what we can do. At any rate, I'm still looking for an answer!
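
    For what it's worth, the checksum idea from the question can be tested from Terminal. This sketch (the path is an assumption, and md5 is OS X's stock tool) only lists messages whose raw content hashes collide, which is the prerequisite for the hardlinking approach described above:

        # print every .emlx whose checksum has already been seen once
        find ~/Library/Mail -name '*.emlx' -exec md5 -r {} + |
            sort | awk 'seen[$1]++'

    Each printed line is a duplicate candidate that 'ln -f original duplicate' could collapse onto one inode, though whether Mail.app's envelope index tolerates hardlinked messages is an open question.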

  • kickstart: reference floppy drive via %ksappend or %include

    - by virtualeyes
    I'm having trouble getting %ksappend or %include to work when referencing a local floppy drive. Booting off the remote server's CD-ROM drive, I am able to load the CentOS 6 minimal install image and then add ks=hd:fd0/ks-jvm.cfg to the boot params to load the kickstart init file from the floppy disk. That works fine.

    The problem is that I want to load a streamlined, generic init file off the floppy and then, within the init file, %ksappend or %include specific config files relative to the type of server I'm building (JVM, MySQL, Apache, etc.). I do not have DHCP; networking needs to be specified statically, so %ksappend and %include both fail when attempting to reference http://some-LAN-IP/foo.cfg, since networking has not yet been set up.

    The kickstart setup only works when I glob the entire config into a single file, which is great, but ugly and difficult to maintain when I return later, having forgotten the original setup. At this point I'd be happy if I could get %ksappend or %include working with a floppy-drive reference in the %post section; that would consolidate a lot of common boilerplate that all the kickstarts rely on (sshd_config, rsync config, resolv.conf, and so on). Thanks for providing the magic floppy-drive reference that is eluding me!
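
    One pattern that may help, sketched here with hypothetical mount point and file names: %include targets can be generated by a %pre script, which runs before the rest of the kickstart file is parsed, so the floppy can be mounted and its fragments copied somewhere %include can see (this assumes a FAT-formatted floppy):

        %pre
        mkdir -p /mnt/floppy
        mount -t vfat /dev/fd0 /mnt/floppy
        cp /mnt/floppy/common.cfg /tmp/common.cfg
        umount /mnt/floppy
        %end

        # elsewhere in the kickstart file:
        %include /tmp/common.cfg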

  • Exchange 2003 SP2 and Windows Server 2008 R2 Domain Controllers

    - by Brian
    I'm looking at adding two Windows Server 2008 R2 domain controllers into our Windows Server 2003 domain, to support our Exchange 2003 SP2 server and replace a retiring Windows Server 2003 server. Our domain and forest functional levels are currently Windows Server 2003, which supports domain controllers running Windows Server 2008 R2, Windows Server 2008, and Windows Server 2003, according to the "Appendix of Functional Level Features" on TechNet. So there should not be an issue, other than running adprep /forestprep and adprep /domainprep... right!?

    But according to the Exchange Server Supportability Matrix, Windows Server 2008 R2 Active Directory servers are not supported as global catalog servers or domain controllers in an Exchange 2003 SP2 environment!!! This was a shock to me. How can Windows Server 2008 R2 be a DC for a Windows Server 2003 domain and forest, but not communicate with an Exchange 2003 SP2 server? Hopefully I'm not the first to see this issue (or maybe I am), but I know a lot of Exchange 2003 admins will not be happy if there is no workaround... or is Microsoft trying to push everyone automatically to Exchange 2010?

  • "Unable to initialize module" warning after update PHP on CentOS 5.4

    - by ohho
    After upgrading PHP from 5.1.x to 5.2.10, there are a lot of warnings when running php -v:

        [root@localhost ~]# php -v
        PHP Warning:  PHP Startup: fileinfo: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: mcrypt: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: memcache: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: mhash: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: mssql: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: readline: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: tidy: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP 5.2.10 (cli) (built: Nov 13 2009 11:24:03)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies

    How can I fix that? Thanks!
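
    The API numbers in the warnings tell the story: each listed extension was built for PHP 5.1 (module API 20050922) but is being loaded into PHP 5.2 (API 20060613), so the usual fix is to reinstall those extension packages from whichever repository supplied the new PHP. A hedged sketch; the package names below are the common CentOS/EPEL ones and may differ on this box:

        # see which php extension packages are installed (and still old builds)
        yum list installed 'php-*'

        # update them from the same repo that provided php 5.2.10
        yum update php-mcrypt php-mhash php-mssql php-pecl-memcache php-tidy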

  • How do I hide "Extra File" and "100%" lines from robocopy output?

    - by Danny Tuppeny
    I have a robocopy script to back up our Kiln repositories that runs nightly and looks something like this:

        robocopy "$liveRepoLocation" "$cloneRepoLocation" /MIR /MT /W:3 /R:100 /LOG:"$backupLogLocation\BackupKiln.txt" /NFL /NDL /NP

    In the output there are a ton of lines that contain "EXTRA File", like this:

        *EXTRA File    153  E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fdt
        *EXTRA File     12  E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fdx
        *EXTRA File    128  E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fnm
        *EXTRA File    363  E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.frq
        *EXTRA File     13  E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.nrm

    Additionally, there are hundreds of lines at the bottom that contain nothing but "100%":

        100%
        100%
        100%
        100%
        100%
        100%
        100%

    In addition to making the log files enormous (there are a lot of folders/files in Kiln repos), this makes it annoying to scan through the logs now and then to see whether everything worked.

    1. How do I stop "EXTRA File" lines from appearing in the log?
    2. How do I stop these silly "100%" lines from appearing in the log?

    I've tried every combination of switches I can think of (the current switches are listed in the command above), but neither seems to hide these!
