Search Results

Search found 26555 results on 1063 pages for 'active directory explorer'.

Page 762/1063

  • Why are certain folders in my XP network share really, really slow?

    - by bikefixxer
    I have a workgroup set up with Windows XP. My file "server" is running XP Pro and the clients are running XP Home. I've turned simple file sharing off on the server because certain clients need access to certain folders and not to others, and I want to keep it that way. Therefore, I've used the granular sharing/security settings to give certain clients access to certain folders. I'm using the net use command in a batch file on the clients to add the share when they log on, so it's always available via a mapped drive or a shortcut. On some clients "My Documents" points to the mapped drive, but all of the local and application settings stay local. Everything works well except for accessing one particular folder on the network. It contains a lot of random batch files and self-executable programs I use for diagnostics and whatnot, and nearly every time I open the folder the computer hangs for 15-60 seconds. This happens on every machine, including the server (though not nearly as often as on the clients). I've searched high and low and cannot figure it out, and it's driving me crazy. Here are all the things I've tried, to no avail:
    - Disabled the firewall (XP) and anti-virus (ESET NOD32)
    - Deleted any desktop.ini file I can find in the share
    - Disabled "automatically search for network folders and printers"
    - Disabled "remember each folder's view settings"
    - Set HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer NoRecentDocsNetHood = 1
    - Tried with mapped drives and with UNC shortcuts
    - Ran CHKDSK
    - Removed the Read-Only attribute from all folders (well, tried to remove it; it always came back as a half-filled checkbox)
    - Added the server's static IP to the hosts file on the clients
    I've tried monitoring the server's performance to see if anything makes sense. Occasionally the issue coincides with a spike in pages/sec (memory), but not always. Other than that, everything seems normal. The anti-virus would seem to be the most likely cause, considering the batch files and executables, but the folder still hangs when it is completely disabled. I'm at a loss, and if anyone can help me with this I'd greatly appreciate it!
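    For reference, a minimal sketch of the kind of logon batch file and registry tweak mentioned above (the server name, share name, and drive letter are placeholders, not the poster's actual values):
        @echo off
        rem map the share at logon (placeholder server/share names)
        net use Z: \\FILESERVER\Shared /persistent:no
        rem stop Explorer caching remote folders in My Network Places
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoRecentDocsNetHood /t REG_DWORD /d 1 /f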

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration, and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where our IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown) we could coordinate the two systems and help out. Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule. Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?
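    If no supported WSUS-side hook turns up, a client-side workaround that the scheduling software could run around long jobs might look like the sketch below (standard Server 2003 service commands; treat it as an illustration, not a tested procedure):
        rem before submitting a multi-day job to this node:
        sc stop wuauserv
        sc config wuauserv start= disabled
        rem after the job completes:
        sc config wuauserv start= auto
        sc start wuauserv
        wuauclt /detectnow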

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going. Addresses, memorizing of. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project. Because of how our network works we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done. We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6 even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what short-cut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6 we might get the project going.
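    As one illustration of the kind of shortcut in use elsewhere (an assumption on my part, not something from the post): pick the site prefix once, then reuse the existing VLAN numbers and v4 host numbers in the subnet and host fields, so only a short tail has to be remembered. Using the documentation prefix 2001:db8::/32 as a stand-in:
        Site prefix    2001:db8:acad::/48      (memorized once, like the old "192.168")
        Subnet         2001:db8:acad:42::/64   (42 = VLAN 42, matching 192.168.42.0/24)
        DNS server     2001:db8:acad:42::42    (host part mirrors the old .42 address)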

    Read the article

  • How to pin "Visual Studio 2010 Documentation" shortcut to Windows 7 taskbar?

    - by Chris W. Rea
    I just installed Microsoft Visual Studio 2010 at home, on my Windows 7 PC. One of the items installed with VS2010 is "Microsoft Visual Studio 2010 Documentation". I like to have the documentation installed locally and at my fingertips, and so before had always added a shortcut for the help viewer to my Quick Launch toolbar. However, I'm not able to pin the new documentation to the Windows 7 taskbar. It's frustrating. Note carefully: When I launch "Microsoft Visual Studio 2010 Documentation" from the Start menu, it seems to perform two functions: First, it launches the "Help Library Agent", which is a local HTTP server from which the help content is served... similar to the local ASP.NET web development server. Second, it launches the default web browser against the localhost URL corresponding to the port on which the "Help Library Agent" is running, for example: http://127.0.0.1:47873/help/1-1444/ms.help?method=f1&query=msdnstart&product=VS&productVersion=100&locale=en-US ... in other words, the program doesn't leave behind an active foreground process that displays in the taskbar. So, I can't choose "Pin this program to taskbar" as one might do so with a typical program. How can I get a shortcut to "Microsoft Visual Studio 2010 Documentation" in the Windows 7 taskbar? Has anybody got a workaround for this?
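    One workaround sketch (an assumption, not a confirmed fix): since the Help Library Agent serves its content from a local URL, that URL can be saved as an Internet shortcut (.url file) and dropped into a taskbar toolbar or the Start menu instead of pinning the program itself. The port and catalog path below are simply the ones quoted above, and the shortcut only works while the agent is running:
        [InternetShortcut]
        URL=http://127.0.0.1:47873/help/1-1444/ms.help?method=f1&query=msdnstart&product=VS&productVersion=100&locale=en-US
    Save that as something like "VS2010 Documentation.url"; you may still need to start the Help Library Agent first.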

    Read the article

  • Why can't I connect to my router's config page with Windows 7?

    - by user17940
    I've got a Belkin wireless router, and just bought a new Dell computer with Windows 7 pre-installed. I can connect to the Internet and my home network just fine, but when I try to visit my router's configuration page at http://192.168.2.1, I get a "Connection was reset" error. Nothing I do will make the router's configuration page come up in my web browser. More background information: I could always get to the router's config page from my Windows XP machine. I never had any trouble prior to getting this Windows 7 computer. I can ping 192.168.2.1 successfully from my Windows 7 computer. My PC is connected to the router by a physical CAT5 cable, not via wireless. Every device connected to my router, including the new computer, can get to the Internet with no problem. Here are some things that did not solve the problem: I tried turning off IPV6 in Windows. I tried turning off my firewall and antivirus software I tried using https instead of http I tried disabling and then enabling the network connection in Windows I tried reverting my network card driver back to an older version I have tried both Firefox and Internet Explorer web browsers. Has anyone experienced something like this before, and solved it? Thanks a lot for your help!
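    A couple of diagnostics worth trying from the Windows 7 machine (a sketch of standard commands, not a guaranteed fix; the Telnet client may need to be enabled in Windows Features first). Older router firmware is known to misbehave with Windows 7's TCP auto-tuning, which the last command turns off:
        rem does the router answer on port 80 at all?
        telnet 192.168.2.1 80
        rem check and, if necessary, disable TCP window auto-tuning
        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled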

    Read the article

  • Ubuntu 10.04 + PHP + Postfix

    - by mononym
    I have a server running:
    - Ubuntu 10.04
    - PHP 5.3.5 (FPM)
    - Nginx
    I have installed Postfix and set it to loopback-only (I only need to send). The problem is that it is not sending. If I issue (at the command line):
        echo "testing local delivery" | mail -s "test email to localhost" [email protected]
    I get the email no problem, but mail sent through PHP does not arrive. When I send it via PHP, mail.log shows:
        Mar 28 10:15:04 host postfix/pickup[32102]: 435EF580D7: uid=0 from=<root>
        Mar 28 10:15:04 host postfix/cleanup[32229]: 435EF580D7: message-id=<20120328091504.435EF580D7@FQDN>
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: from=<root@FQDN>, size=1127, nrcpt=1 (queue active)
        Mar 28 10:15:04 host postfix/local[32230]: 435EF580D7: to=<root@FQDN>, orig_to=<root>, relay=local, delay=3.1, delays=3/0.01/0/0.09, dsn=2.0.0, status=sent (delivered to maildir)
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: removed
    Any help appreciated. My main.cf file:
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = FQDN
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        #myorigin = $mydomain
        mydestination = FQDN, localhost.FQDN, , localhost
        relayhost = $mydomain
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        virtual_alias_maps = hash:/etc/postfix/virtual
        home_mailbox = mail/
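    A few checks that might narrow this down (a sketch; the recipient address is a placeholder): confirm which sendmail binary PHP is actually invoking, send a test message from the PHP CLI, and watch the queue and log while it goes out.
        php -i | grep -i sendmail_path     # usually /usr/sbin/sendmail -t -i on Ubuntu
        php -r 'var_dump(mail("you@example.com", "php test", "test body", "From: www-data@localhost"));'
        postqueue -p                       # anything stuck in the queue?
        tail -f /var/log/mail.log          # where does Postfix actually deliver it?
    Note that the CLI and PHP-FPM can read different php.ini files, so a working CLI test doesn't rule out an FPM-side sendmail_path problem.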

    Read the article

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients--they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. So from what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR. But I've had no success so far on that front. I've tried all the following in the local group policy editor on the Windows 7 server:
    - Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
    - Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
    - Added the names of the corresponding shares to "Network access: Shares that can be accessed anonymously"
    - Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment
    Any advice would be highly appreciated... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.
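    For what it's worth, a sketch of the Guest-account route that already works for the Windows 7 clients, expressed as commands (share name and path are placeholders; this is an illustration, not a claim that it satisfies the XP clients without the null-session changes):
        net user guest /active:yes
        net share Public=C:\Shared /grant:everyone,READ
        icacls C:\Shared /grant Everyone:(OI)(CI)RX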

    Read the article

  • MacBook Pro with OSX 10.6.3 (Snow Leopard) Wi-Fi network connection breaks after few minutes

    - by Yanick Landry
    I have a MacBook Pro with OS X 10.6.3 (Snow Leopard). After connecting to a Wi-Fi network, the connection "breaks" after a few minutes. What I mean by "breaking" is that no requests get through (they time out), whether it is loading a web page, connecting to a shared folder, connecting to my local router at 192.168.0.1, or pinging anything. When in a "broken" situation, I can see in the Network Settings panel that I still have an active IP, which I can successfully ping. I have this problem at home with a D-Link DI-624 router and at work with a D-Link WBR-2310, all with updated firmware. I thought DHCP was the issue, so I tried assigning a fixed IP address (192.168.0.166). It successfully connects, but after a few minutes the connection still breaks. The workaround I'm currently using is to disable the AirPort (via the network icon menu in the top bar), wait a few seconds, then re-enable it. It then quickly works again, but the connection still breaks after a few minutes. I tried Googling my problem but I think I can't find any good keywords! It's my first question here, so sorry if I don't respect some rules.

    Read the article

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by nfm
    Hi, I'm quite fond of munin and use it at home as well to monitor my PCs. What was super-duper easy under Linux has turned out to be pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for munin was basically already there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall. My research so far suggests that Windows does not provide an out-of-the-box way to retrieve temperature/fan-speed data; third-party applications that know how to talk to the motherboard's sensor chips are necessary. The best I came up with is that SpeedFan exposes a shared-memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared-memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's an open bug report about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to work anytime soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it for granted that my system should expose the information needed to properly monitor it, but please don't let that stop you from answering.
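    For context, a sketch of how the existing SNMP graphs are fed, and how any bridged sensor values would be looked for from the munin host (community string and hostname are placeholders):
        # the existing disk/CPU/memory graphs come in over plain SNMP, e.g. the HOST-RESOURCES-MIB storage table:
        snmpwalk -v 2c -c public winbox.example.com 1.3.6.1.2.1.25.2.3
        # if a bridge such as SFSNMP were working, its temperature/fan values would appear somewhere
        # under the enterprise subtree, which can be checked with:
        snmpwalk -v 2c -c public winbox.example.com 1.3.6.1.4.1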

    Read the article

  • How can I update a Generic Non-PnP monitor?

    - by njk
    Background: I've been running a KVM switch with my monitor at 1920 x 1080 over VGA for over a year. I did a Windows Update on 12/11/12 which installed the following:
    - Update for Windows 7 for x64-based Systems (KB2779562)
    - Security Update for Windows 7 for x64-based Systems (KB2779030)
    - Cumulative Security Update for Internet Explorer 8 for Windows 7 for x64-based Systems (KB2761465)
    - Windows Malicious Software Removal Tool x64 - December 2012 (KB890830)
    - Security Update for Windows 7 for x64-based Systems (KB2753842)
    - Security Update for Windows 7 for x64-based Systems (KB2758857)
    - Security Update for Windows 7 for x64-based Systems (KB2770660)
    After a restart, my extended monitor was dark. I attempted to reset the extended display configuration, and noticed my monitor was being detected as a Generic Non-PnP Monitor. I uninstalled, downloaded new, and re-installed display drivers. Nothing. I unplugged my monitor from the power for 15 minutes. Nothing. I followed some of the suggestions on this thread, specifically DanM's, which suggested creating a new *.inf file and replacing that in Device Manager. Device Manager said the "best driver software for your device is already installed". The only thing that works is when the monitor is directly attached to the laptop, which obviously is not what I want. My thought is to somehow remove the Generic Non-PnP Monitor from the registry. How would I accomplish this, and would it help? Any other suggestions?
    Relevant hardware:
    - ASUS VE276 monitor
    - TRENDnet 2-Port USB KVM Switch (TK-207K)
    - HP laptop w/ ATI Radeon HD 4200
    Screens
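    On the registry question: rather than editing HKLM\SYSTEM\CurrentControlSet\Enum\DISPLAY by hand, the usual approach (a sketch of the standard steps, not a tested fix for this exact case) is to expose the phantom entries in Device Manager and uninstall them from there, then rescan for hardware changes with the KVM attached:
        set devmgr_show_nonpresent_devices=1
        start devmgmt.msc
        rem in Device Manager: View > Show hidden devices, then uninstall the greyed-out
        rem "Generic Non-PnP Monitor" entries under Monitors and scan for hardware changes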

    Read the article

  • Munin server monitoring problem: graphs not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error:
        2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
        2010/05/10 13:39:01 [DEBUG] Active workers: 1/8
    These errors simply keep repeating themselves until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but I get the following results when going through their directions. After telnetting to localhost 4949, I can see a list of plugins and a node at archstl.archstl.org, but I can't fetch anything. The output is as follows:
        > fetch cpu
        .
    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so:
        user.value 100829130
        nice.value 3479880
        system.value 13969362
        idle.value 664312639
        iowait.value 12180168
        irq.value 14242
        softirq.value 199526
        steal.value 0
    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this. The wiki says: "If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file."
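    Following that tip, the usual place to set a plugin's environment on a Debian-style munin-node install is a file under /etc/munin/plugin-conf.d/ (a sketch; the file name is arbitrary, and whether PATH is really the culprit here is an assumption):
        # /etc/munin/plugin-conf.d/zz-local  (any file name works)
        [cpu]
        env.PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        # then restart the node so the new environment is picked up
        /etc/init.d/munin-node restart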

    Read the article

  • Time Machine vs Source Control?

    - by Blub
    Finally got convinced to start using some kind of version control for my code instead of zipping down a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out, check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like it's exactly what I'm looking for. Does such a program exist for Windows? Preferably free and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.

    Read the article

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night, from Debian etch to lenny. So far I've encountered a problem with my Postfix installation, mainly that I managed to break IMAP access somehow. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is:
        Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65]
        Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE
        Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login
        Feb 12 11:57:16 mail authdaemond: authmysql: trying this module
        Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1)
        Feb 12 11:57:16 mail authdaemond: password matches successfully
        Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
        Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
    ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've been mainly looking for a way to make the courier-imap logging more verbose, like what can be seen, for example, here. Edit: Sorry about the mess; I've found that I've been funneling the log through grep imap, which naturally didn't display entries for authdaemond. The verbose logging configuration entry is found in /etc/courier/imapd under DEBUG_LOGIN=1 (set to 1 to enable verbose logging, set to 2 to also dump plaintext passwords to the logfile. Careful.)
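    Per the edit above, turning the extra logging on and restarting the daemons looks roughly like this (a sketch; paths and init script names are the Debian defaults):
        # in /etc/courier/imapd (and /etc/courier/imapd-ssl for the SSL listener):
        DEBUG_LOGIN=1
        # then restart and watch the log
        /etc/init.d/courier-authdaemon restart
        /etc/init.d/courier-imap restart
        /etc/init.d/courier-imap-ssl restart
        tail -f /var/log/mail.log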

    Read the article

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames:
    Old: Domain\11111
    New: Domain\22222
    When this user now logs in using their new username, and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 IIS7.5 environment:
    Request.ServerVariables["AUTH_TYPE"]: Negotiate
    Request.ServerVariables["AUTH_USER"]: Domain\11111
    Request.ServerVariables["LOGON_USER"]: Domain\22222
    Request.ServerVariables["REMOTE_USER"]: Domain\11111
    HttpContext.Current.User.Identity.Name: Domain\11111
    System.Threading.Thread.CurrentPrincipal.Identity.Name: Domain\11111
    From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the "AUTH_USER" variable for looking up the database permissions. In a separate testing environment (completely different server: Windows 2003, IIS6), all of the above variables show "Domain\22222". So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible). So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? How should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly:
    http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7
    http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080
    Thanks.
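    As a first check on the "cached on the client" theory, and in the spirit of the first link above (a sketch of standard commands on a Vista/7 client, not a confirmed fix): list and remove any saved credentials for the web server, purge the user's Kerberos tickets, and retry from that machine.
        cmdkey /list
        rem delete any entry that points at the web server (target name below is hypothetical)
        cmdkey /delete:WEBSERVER01
        klist purge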

    Read the article

  • Windows Update and lsass.exe

    - by David
    I have a brand new installation of Windows XP (SP1 or older). I installed Norton AntiVirus, Firefox, Putty, and Cygwin. No other software is present. Windows Update finds the following 64 updates: KB905760, KB978262, Internet Explorer 8, KB71961, KB954155, KB968816, KB923561, KB950762, KB949402, KB950974, KB951376, KB951748, KB952004, KB952954, KB955069, KB956572, KB956802, KB956803, KB956844, KB958470, KB958869, KB959426, KB960803, KB960859, KB961501, KB969059, KB970238, KB970238, KB971032, KB971468, KB971657, KB972270, KB973507, KB973815, KB973904, KB974112, KB974318, KB974392, KB975025, KB975560, KB975561, KB975713, ... When these updates are applied, the system reboots to a black screen with two error messages. The first error message says:
        lsass.exe - Application Error
        The application failed to initialize properly (0xc00000142). Click on OK to terminate the application.
    The second error message says:
        services.exe - Application Error
        The application failed to initialize properly (0xc00000142). Click on OK to terminate the application.
    I then proceed to boot into Safe Mode, use System Restore, and everything works fine again until the 64 updates re-appear in Windows Update. I can see two options: disable Auto-Updates or install each of the 64 updates one at a time until finding the troublesome update. Does anyone have any better ideas?

    Read the article

  • Not getting IP from ISP on Multicast Network

    - by Johan Nielsen
    I'm having an odd issue with my ISP (COMX.dk). I have a managed access gateway box (Telsay) with three 8P8C ports for use with Internet and IPTV (on different VLANs, respectively, or so my ISP tells me). To use a port you need to register your device's MAC address through an online interface; your device then gets paired with a static IP. I am using one port actively and I have registered another device (a router). The router is configured to listen for an active dhcpd on the network. When my router gets a lease, it gets a private IP, 192.168.2.2 (not the one bound to its MAC), which is odd! I disconnected my router from the gateway and connected my laptop directly. The same thing happened - I was given a private address. I did a port scan on the gateway, found port 80 to be open, and browsed to the IP. I was then presented with the management interface of a Belkin wireless router (HMMM!!!!) <-- by the way, not my gear. At this point I called the ISP to let them know of my issue/findings, only to be told, "Well, we can't see any rogue DHCP servers" (thinking to myself: well, I can). I then decided that it could be fun to try the other port of my gateway, only to experience the same. So I reconnected my router and used the remaining port as an observer (Wireshark in promiscuous mode, etc.). I am able to see my router trying to discover a DHCP server, but I can also see my ISP's IGMP and PIMv2 packets just repeating the same pattern. Hello... Hello... Hello :) So I called them again, only to get the same response: "We don't see any rogue DHCPs... we can't see the host you are talking to (the MAC address of the Belkin router)... you are definitely connected through wireless?!?" (no I'm not; no such thing as a wireless wire, I thought to myself). My questions are: What is going on (besides what I'm reporting here)? What am I seeing that they don't? What can I tell them in order for them to resolve my/their issue?
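    A capture like the following on the observer port should make the rogue DHCP server easy to point at; -e prints the hardware addresses, so the Belkin can be identified by its MAC (the interface name is a placeholder, and the equivalent Wireshark display filter is bootp):
        tcpdump -i eth0 -n -e udp port 67 or udp port 68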

    Read the article

  • Server 2008 RAID 5 Write Speeds

    - by Solipsism
    I recently configured a RAID 5 partition in Server 2008 with 4 RAID 5 disks. These disks are connected through a SATA expansion card that uses PCIe. This morning, I checked and they had finally finished synchronizing, so I tried to do some speed tests. Copying off the disks started pretty much fine - speeds began at 125MB/s, then trailed down to about 70MB/s, which I found odd but not worrying. Writing TO the disks, however, is a completely different story. I attempted to copy some of my VM host ISOs onto the disks (~2-4 GB apiece) and this resulted in speeds of approximately 10MB/s. I tried copying both from a local disk (connected directly to the motherboard) and from another server over the gigabit network, and the results were the same. I checked the performance monitor while transferring the files and the only thing that stuck out was that my memory hard faults shot up to 6,000 per minute (spiking around 200/s), caused by explorer.exe. The system is running 2GB of DDR667 ECC RAM and a quad-core 2.3GHz Opteron. Is there anything I can do to fix this performance issue (buy more RAM? move the drives to a faster box? etc.) or am I just screwed so long as I stick to Windows?

    Read the article

  • Upgrade SQL Server 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional; I'm a senior-level developer in a very small company, having to act like a manager at the moment. Anyway, the story is that we have 2 older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware. OK, so here is the gotcha: we need to do this with little or no downtime. I'm being told that we can evict the passive node, then pull in one of the new servers. But I'm also being told that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime where we bring up a new cluster on the new hardware and then migrate the databases one by one.

    Read the article

  • Completed downloads freeze Windows

    - by Ben Hooper
    The Issue
    Shortly after a file download via Google Chrome for Windows completes, the download will get stuck on "0 seconds left" and all other programs (except Google Chrome, for some reason, though browsing will not work) completely freeze into Windows' infamous "Not Responding" state, affecting Explorer particularly badly. Eventually, the programs will recover themselves, but they will recover significantly faster if you cancel the file download, relative to how quickly you react. Performing the exact same operation immediately after cancelling the download usually works without issue. This issue occurs with any file type (.ZIP, .MSI, .MSG, .PNG, .URL, etc.) of any size from any source (Dropbox, SourceForge, Imgur, even tiny and locally-generated BLObs created by my own Chrome extension, etc.) to any location.
    Potential Causes
    As this issue is so inconsistent, I haven't been able to prove whether the issue is Chrome-specific or being caused by my system or my Chrome configuration, but it's happening on both my work and home PCs. I originally suspected that this issue was being caused by security software scanning completed downloads for threats, but I'm not as confident in that theory anymore, as the issue persisted even after changing my security software from ESET NOD32 and Malwarebytes Anti-Malware Pro to ESET Endpoint and then to Microsoft Security Essentials.
    System Information (of both PCs)
    Windows version: 7 Service Pack 1 64-bit
    Google Chrome version: 30.0.1599.101 (but this has been happening for a long time)
    Screenshots

    Read the article

  • Second monitor stopped being recognized by Windows

    - by Eric J.
    One of the developer PC's running Windows Vista Ultimate had the second monitor stop being recognized in Windows overnight. There were no hardware or driver changes at the time, though I have subsequently updated to the latest nVidia drivers (card is NVIDIA GeForce 210). The non-recognized monitor IS recognized during the boot sequence. In fact, only the "bad" one shows the POST or the Windows loading screen. At some point during Windows initialization after the loading screen disappears and before the logon screen appears, the active monitor switches. Any thoughts?
    UPDATE: When I open the Vista monitor properties window, I see my primary display and secondary display depicted. The primary one is portrayed as the regular blue box, but the secondary one is portrayed greyed-out. I have the option to "Extend desktop to this monitor", the only resolution is 800x600, and all of the advanced monitor properties are greyed out as well. If I opt to extend the desktop, the greyed-out box turns blue; when I then select Apply, the screens flash as usual and I'm given the 15-second countdown to accept the new settings, and when I do, everything returns to the previously broken state... the secondary monitor is portrayed greyed-out again. At no point is the desktop shown on the secondary monitor.

    Read the article

  • Safely remove external USB drive fails due to $extend

    - by moontear
    When connecting an external USB 3.0 hard drive to my USB 3.0 ports I can never safely remove it. Somehow Windows always keeps the journal files open: "always" as in this time I only connected the drive, copied a 10GB VM, and wanted to disconnect it afterwards (like 15 minutes after copying, so all copying was done). As you can see, there is no other program keeping a handle on the disk besides System. I tried restarting explorer.exe as well as RemoveDrive.exe from Uwe Sieber. No luck; the locks on the hard drive always remain. My only solution is just unplugging it (though I'm afraid of damaging the data) or restarting the computer (always helps, doesn't it?). Might it have something to do with my only internal drive being an SSD while the external disk is a regular drive? Might it have something to do with the USB 3.0 drivers (NEC Electronics USB Hub)? I never have this problem when using the regular USB 2.0 ports. Any ideas on how to properly unmount the disk?
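    To see exactly what is holding the volume (a sketch using standard tools; the drive letter E: is a placeholder): Sysinternals' handle can list open handles on the drive, and fsutil shows whether the NTFS change journal ($Extend\$UsnJrnl) is active on it.
        rem list processes with open handles under E:\ (requires Sysinternals handle.exe)
        handle.exe -a -u E:\
        rem is the USN change journal enabled on this volume?
        fsutil usn queryjournal E:
    If the journal turns out to be the culprit, fsutil usn deletejournal /D E: removes it, at the cost of anything on that volume that relies on it (indexing, some backup tools).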

    Read the article

  • How should a small company administer their web server?

    - by John Isaacks
    We currently have our website hosted by a small company that is actually a reseller for Rackspace. They act as our server administrators: they configured the servers, they handle the backups, and if there is a problem, we call them and they fix it. We are growing and want to move away from our shared server to either a cloud or a dedicated server. I am leaning towards cloud myself, but I am open to either. The current company doesn't seem to want to offer us anything more than a shared hosting plan. I looked into cloud solutions at vps.net; with them I would have to be the server administrator myself. I am the website programmer, but administering the server is outside my comfort zone. vps.net does have a $99/month plan for Pro-Active Managed Support, but I am not sure if this is the equivalent of a server admin who is there when you need them. We could hire someone in house, but I think that would be overkill for our needs. I am not exactly sure what we need. I do know we need as close to 100% uptime as we possibly can get, and we need the ability to add/remove/change the server configuration/software/etc. when needed (though changes shouldn't be very often once everything is set up right). Can someone point me in the right direction? What do other companies do?

    Read the article

  • Windows Server 2008 R2 running at a snail's pace

    - by Django Reinhardt
    Really weird problem here. Our main web server has started running at a snail's pace, for absolutely no reason we can discern. Even after restarting the machine, when there's little or no RAM usage and CPU usage is fluctuating between 0 and 30%, simple tasks like opening Internet Explorer or waiting for My Computer to open take forever. There are no processes hogging system resources that we can see... the machine itself is just exhibiting extremely slow behaviour. I've never seen a machine do this. A lot of security updates had built up, so we decided to let Windows install them. When we looked through the history upon restarting, though, they had failed with error code 800706BA. I don't know if this could be related or not. Any help in this matter would be greatly appreciated. As mentioned in the title, we're running a Windows Server 2008 R2 machine. It's also running SQL Server and IIS. It has 16GB of RAM and a decent quad-core processor. It's also been fine until now, and we haven't changed a thing. Thanks for any help.

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense of not being able to replicate at present (various aspects are still dependent on the local filesystem). The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server anytime soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - but should this be done manually? I'm not sure how this could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this. One more quick question while I'm here - how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
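    On the deployment question, one common pattern (an assumption on my part, not anything Capifony-specific) is to ask AWS at deploy time for the instances currently running in the auto-scaling group and feed that list to the deploy script. With the AWS CLI that might look like this (the group name is a placeholder):
        aws ec2 describe-instances \
            --filters "Name=tag:aws:autoscaling:groupName,Values=web-asg" \
                      "Name=instance-state-name,Values=running" \
            --query "Reservations[].Instances[].PublicDnsName" \
            --output text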

    Read the article

  • Shortcut to "printer and faxes" on another computer

    - by Doltknuckle
    I have a print server running Windows Server 2008 that has about 50 printers on it. In Windows XP, I was able to connect to the server using the UNC name and make a shortcut to the "Printers and Faxes" folder. (For the record, I know that it really isn't a folder, but that's outside the scope of this question.) I have recently switched to Windows 7 and I find that the jump lists are really useful. One of the things I want to do is make it easy to connect to that server's "Printers and Faxes" folder. I would like to use something like a shortcut that I can open and go immediately to that location. The problem is that Windows 7 doesn't have a way to create a shortcut like you could in Windows XP. There is a button on the toolbar that says "View remote printers" which sends you to the correct folder, but I'd like to avoid having to type out the server name. I also can't use the "View network" link in Windows Explorer: our organization has over 6,000 machines, and viewing the network lists all of them. This is all about saving time by using the minimum number of mouse clicks and key presses in normal operation. Does anyone have any suggestions?

    Read the article
