Search Results

Search found 18913 results on 757 pages for 'ideas'.

  • Can't connect to Sql Server 2008 named instance

    - by eidylon
    I just installed SQL Server 2008 Express on a new server running Windows Server 2008. I know SQL Server is working properly, because I can connect to the db fine locally, on the server. I cannot connect to it from a client machine though, neither by IP address nor by machine name (iporname\instance). I know I have the correct IP address, because I am RDCing into the server to perform all this configuration and setup, and if I ping the server name, it resolves to the correct IP address as well. On the server, I have set up an inbound firewall exception allowing all traffic on any port on any protocol to sqlservr.exe. In SSMS, under server > Properties > Connections, "Allow remote connections to this server" is enabled. In SQL Server Configuration Manager, TCP/IP is enabled in both the Protocols for <instance> and the Client Protocols sections. I looked in the Windows logs, but don't see anything about connections being denied or dropped. As far as I can see, I have everything set right, but cannot connect from a client machine. The client CAN connect to other SQL 2008 Express servers okay, so I know the client configuration is correct. Any ideas where else I can look for info on what/where/how this connection is being dropped would be greatly appreciated! The error being returned by the client is:

      TITLE: Connect to Server
      Cannot connect to [MY.IP.ADD.RSS]\[MYINSTNAME].
      ADDITIONAL INFORMATION:
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
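    A possible first check (an assumption, not from the post): connections to a named instance are resolved by the SQL Server Browser service over UDP 1434, and a per-program firewall exception for sqlservr.exe does not cover it. On the server, something like:

      sc query SQLBrowser
      sc config SQLBrowser start= auto
      net start SQLBrowser
      netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434

    Alternatively, assigning the instance a fixed TCP port in Configuration Manager and connecting as [MY.IP.ADD.RSS],port bypasses the Browser service entirely.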

    Read the article

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP, so the time on these servers is starting to drift - they're relying only on the hardware clock. When I type "net time" on one of the production servers, I get the following error:

      Could not locate a time-server. More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

      Sending resync command to local computer
      The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

      Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration - we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate - the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain - we're only seeing problems with NTP. I can manually sync each machine to the time source (the PDC emulator) by doing the following:

      net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard
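    A hedged thing to try on an affected member server (standard w32tm commands, not from the original post): point the Windows Time service explicitly back at the domain hierarchy and restart it:

      w32tm /config /syncfromflags:domhier /update
      net stop w32time && net start w32time
      w32tm /resync /rediscover

    If "w32tm /query /source" still reports "Free-running System Clock" afterwards, the next suspects would be UDP 123 being blocked on the path to the PDC emulator, or the VM hosts overriding the guests' clocks.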

    Read the article

  • 613 threads limit on debian

    - by Joel
    When running this program thread-limit.c on my dedicated debian server, the output says that my system can't create more than around 600 threads. I need to create more threads, and fix my system misconfiguration. Here is some information about my dedicated server:

      de801:/# uname -a
      Linux de801.ispfr.net 2.6.18-028stab085.5 #1 SMP Thu Apr 14 15:06:33 MSD 2011 x86_64 GNU/Linux
      de801:/# java -version
      java version "1.6.0_26"
      Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
      Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
      de801:/# ldd $(which java)
      linux-vdso.so.1 => (0x00007fffbc3fd000)
      libpthread.so.0 => /lib/libpthread.so.0 (0x00002af013225000)
      libjli.so => /usr/lib/jvm/java-6-sun-1.6.0.26/jre/bin/../lib/amd64/jli/libjli.so (0x00002af013441000)
      libdl.so.2 => /lib/libdl.so.2 (0x00002af01354b000)
      libc.so.6 => /lib/libc.so.6 (0x00002af013750000)
      /lib64/ld-linux-x86-64.so.2 (0x00002af013008000)
      de801:/# cat /proc/sys/kernel/threads-max
      1589248
      de801:/# ulimit -a
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 794624
      max locked memory (kbytes, -l) 32
      max memory size (kbytes, -m) unlimited
      open files (-n) 10240
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 128
      cpu time (seconds, -t) unlimited
      max user processes (-u) unlimited
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

    Here is the output of the C program:

      de801:/test# ./thread-limit
      Creating threads ...
      Address of c = 1061520 KB
      Address of c = 1081300 KB
      Address of c = 1080904 KB
      Address of c = 1081168 KB
      Address of c = 1080508 KB
      Address of c = 1080640 KB
      Address of c = 1081432 KB
      Address of c = 1081036 KB
      Address of c = 1080772 KB
      100 threads so far ...
      200 threads so far ...
      300 threads so far ...
      400 threads so far ...
      500 threads so far ...
      600 threads so far ...
      Failed with return code 12 creating thread 637.

    Any ideas how to fix this, please?
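    A hedged guess from the kernel string: the "-028stab" suffix indicates an OpenVZ kernel, and return code 12 (ENOMEM) at a fixed thread count is often a container beancounter limit rather than anything ulimit controls. Worth checking from inside the container:

      # a nonzero failcnt on privvmpages (or another row) means the container
      # hit a limit imposed by the host; only the hosting provider can raise it
      grep -E 'failcnt|privvmpages|numproc' /proc/user_beancounters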

    Read the article

  • Mac Port error installing gsoap

    - by Kevin
    Hi All, I have installed MacPorts V1.8.1, no worries. I ran sudo port -v selfupdate, no worries. I ran sudo port install gsoap and get the following error message:

      ---> Computing dependencies for gsoap
      ---> Fetching gsoap
      ---> Attempting to fetch gsoap_2.7.13.tar.gz from http://optusnet.dl.sourceforge.net/gsoap2
      ---> Verifying checksum(s) for gsoap
      ---> Extracting gsoap
      ---> Applying patches to gsoap
      ---> Configuring gsoap
      Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7" && ./configure --prefix=/opt/local --enable-samples " returned error 77
      Command output:
      checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... yes
      checking for gawk... no
      checking for mawk... no
      checking for nawk... no
      checking for awk... awk
      checking whether make sets $(MAKE)... no
      checking build system type... i386-apple-darwin10.2.0
      checking host system type... i386-apple-darwin10.2.0
      checking whether make sets $(MAKE)... (cached) no
      checking for C++ compiler default output file name...
      configure: error: C++ compiler cannot create executables
      See `config.log' for more details.
      Error: Status 1 encountered during processing.

    Any ideas as to why it is failing? Regards, Kevin
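    "C++ compiler cannot create executables" at the configure stage usually points at the toolchain itself rather than anything gsoap-specific; on 10.6 the compilers come with Xcode. A quick hedged smoke test:

      which g++ && g++ --version
      echo 'int main(){return 0;}' > /tmp/t.cpp && g++ /tmp/t.cpp -o /tmp/t && echo OK

    If that fails, installing Xcode (from the Snow Leopard DVD or Apple's developer site) and re-running the port command would be the next step.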

    Read the article

  • what's the difference between a Volume and a Partition in Windows 7 diskpart

    - by user170232
    I was trying to follow the Intel guide for setting up iRST (Intel Rapid Start Technology) on my new laptop. The Intel manual says you need to create a *Volume that is as big or bigger than your available memory, set it to a specific id (id=84), then go into the iRST tool and adjust some settings. Looking at the disk manager on the laptop, I see there is already a Partition labeled as "Hibernation Partition" which is a little bigger than the memory in my system. So it looks like iRST was already set up... BUT, it's a Partition, not a Volume. Here's what the manual says to do (from: http://download.intel.com/support/motherboards/desktop/sb/rapid_start_technology_user_guide.pdf):

      diskpart
      list disk
      select disk x (where x is the disk to use; there's only one disk in this laptop)
      create partition primary size=X000 (where X000 is the size to create)
      detail disk (which lists details for the disk. This is where I get hung up)
      select volume Z (where Z is the *partition you created previously)

    ** it says the 'detail disk' command will list the volume #, but it doesn't.
    ** 'detail disk' only lists two "volumes", for Recovery and OS.
    ** if I do 'list partition', I see the 8 GB *partition labeled as "Hibernation Partition".
    ** so I can't continue with the following steps:

      set id=84
      override
      exit

    The reason I went looking for the manual is because when iRST is enabled in the BIOS, the system won't resume from sleep. When it's disabled, it works fine, but the system goes into (legacy?) Hibernation mode and takes a while to come out of Hibernation. iRST is supposed to resume from deep sleep very quickly. So, what's the difference between a Volume and a Partition? Should I delete the Hibernation Partition and create a Hibernation Volume? Anyone have any ideas? (if it matters, this is on a Dell XPS 13 with BIOS A08) Thanks! J
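    A hedged observation: diskpart assigns volume numbers only to partitions with a recognized filesystem, which would explain why the id=84 partition shows under 'list partition' but not under 'detail disk'. Since 'set id' acts on the currently selected partition, selecting the partition directly may get past the failing step (N is a placeholder for whatever 'list partition' reports):

      diskpart
      select disk 0
      list partition
      select partition N
      detail partition      (if Type already shows 84, iRST is already provisioned)
      set id=84 override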

    Read the article

  • Apache/Passenger and cpulimit

    - by Dave Smylie
    I run a ruby on rails site that processes email - the email is dumped directly into the web app via a POST from postfix. At times I can get a burst of email coming in, causing a prolonged surge in CPU usage and making my VPS provider understandably unhappy with me. These emails don't need to be processed in a timely manner - they just need to be (eventually) processed. Obviously I can't just nice the process, as that only looks at the CPU usage on my VPS and can't take into account the CPU usage on the other VPSs. I have found a utility called cpulimit that will let you put hard limits on CPU usage for a particular process (eg 20%). This seems ideal for this purpose, but I can't work out how to integrate it with apache/passenger. Passenger starts up a ruby process for each server and restarts them periodically. Each time the pid will change, and cpulimit needs to be given a pid number to act on. Anyone got any ideas how I could get passenger to fire off a call to this command when it's starting up this particular virtual host?
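    One hedged workaround, if Passenger offers no spawn hook you can use: a small watcher loop that finds the app's workers by name and attaches cpulimit to any that aren't limited yet (the pgrep pattern and path are hypothetical placeholders):

      #!/bin/bash
      # cap every Passenger worker for this app at 20% CPU
      while true; do
        for pid in $(pgrep -f "Rails: /var/www/mailapp"); do
          # skip workers that already have a cpulimit attached
          pgrep -f "cpulimit.*-p $pid" >/dev/null || cpulimit -p "$pid" -l 20 &
        done
        sleep 30
      done

    Run it from an init script or cron @reboot; when a worker exits, its cpulimit instance exits with it.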

    Read the article

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a Volume? The background for this question is that I have a duplicate UUID issue: I have /Volumes/OldMacHD with a UUID of XYZ. I have /Volumes/Mirror1 with a UUID of XYZ (same UUID! I bet that's because OldMacHD USED to be part of this mirror). I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that if you issue the -s flag, this changes the UUID. However, if you type hfs.util all by itself, it doesn't show you the -s option at all, just every option besides that! Grr. I tried it anyway: sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4 (the raid volume). Nothing happens. No error message, no success message. UUID exactly the same. I tried it while the volume was unmounted. Any ideas?
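    A hedged guess about the silent failure: hfs.util may want the bare disk identifier of the HFS slice (e.g. disk4s2) rather than the /dev/disk4 whole-disk node, and possibly an unmounted volume. Something like (the slice name is a placeholder; check 'diskutil list'):

      diskutil list
      diskutil unmount /dev/disk4s2
      sudo /System/Library/Filesystems/hfs.fs/hfs.util -s disk4s2
      diskutil mount /dev/disk4s2
      diskutil info /dev/disk4s2 | grep -i UUID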

    Read the article

  • After update, suddenly lost ability to access Windows Server 2008 R2 shares from Windows XP clients

    - by Knute Knudsen
    Today I lost the ability to see my Windows Server 2008 R2 shares from any of my 3 Windows XP machines in my small office. The 5 Win7 machines haven't been affected (they are still able to browse/access the 2008 server), but none of my WinXP machines can access the 2008R2 server anymore. Yesterday (and for the previous year) everything was working fine. I do not have a domain setup. I can still access Win7 shares from WinXP clients. Browsing the server logs, I see that the following update was installed last night:

      Installation Ready: The following updates are downloaded and ready for installation. This computer is currently scheduled to install these updates on Thursday, November 15, 2012 at 3:00 AM:
      - Security Update for Windows Server 2008 R2 x64 Edition (KB2761226)
      - Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2729452)
      - Windows Malicious Software Removal Tool x64 - November 2012 (KB890830)
      - Cumulative Security Update for Internet Explorer 9 for Windows Server 2008 R2 x64 Edition (KB2761451)

    It seems likely that something was changed in last night's update, but so far I haven't seen anything on microsoft.com to prove it. I did hear that XP is reaching the end of the road soon. Any ideas?
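    One hedged way to test the suspicion: remove the candidate updates one at a time (the two security updates are likelier suspects than the removal tool) and retest from an XP client after each reboot:

      wusa /uninstall /kb:2761226
      rem retest from an XP machine, then if needed:
      wusa /uninstall /kb:2729452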

    Read the article

  • Can you make a Windows network default user profile NOT apply to a certain operating system?

    - by Jordan Weinstein
    I would like to create a network Default User account for Windows 7 only. This is on a Windows 2003 domain with servers from Windows 2000 to 2008 R2 and Windows XP on workstation side. We're about to do a full migration to Windows 7 and I'd like to start using the network default user profile functionality as we're not migrating user profiles over. Want everyone to start clean. I followed the simple steps from this page: http://support.microsoft.com/kb/973289 under the heading: "How to turn the default user profile into a network default user profile in Windows 7 and in Windows Server 2008 R2" but the problem is that profile would then apply to a new user\admin logging into a 2008 server. That's no good. Anyone have any ideas on how to limit what actually uses that network profile? I was thinking about setting deny permissions for all my admin\service accounts on that "\\dcserver\netlogon\Default User.v2" folder but then it might be timing out and cause other problems. Haven't tried yet as that seems like a bad way of making this work.
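    For the deny-permissions idea at the end, a hedged sketch (the group name is a hypothetical placeholder; accounts denied read access should fall back to the local default profile rather than failing logon, but that is worth testing before rollout):

      icacls "\\dcserver\netlogon\Default User.v2" /deny "MYDOMAIN\Server Admins":(OI)(CI)R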

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5, and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it:

    1. Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting.
    2. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint.

    We've got part 1 working fine. The main problem with part 2 is that in the response rewrite, because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the six-character UNICODE escape string "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is:

      http:\u002f\u002fservername.company.com\u002f

    And should be changed to:

      https:\u002f\u002fservername.company.com\u002f

    Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
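    A hedged Tcl sketch of the escaping trick: inside braces Tcl performs no backslash substitution, so a brace-quoted word keeps \u002f as six literal characters instead of collapsing it to "/":

      # in an iRule the payload would come from HTTP::payload; shown generically here
      set payload {var u = "http:\u002f\u002fservername.company.com\u002f";}
      set payload [string map {
          {http:\u002f\u002f} {https:\u002f\u002f}
      } $payload]
      # $payload now contains: var u = "https:\u002f\u002fservername.company.com\u002f";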

    Read the article

  • Problem with wireless networking

    - by Rodnower
    Hello, I have Atheros wifi hardware, an Intel chipset, a Gigabyte laptop and CentOS 5 installed. Now I am trying to use the wireless network and am having problems. First of all, I want to say that I have 2 OSes on my laptop, and when I boot Windows XP I can still access the wireless network. My first try was to activate the wlan0 interface in: system - administration - network, but I get:

      Determining IP information for wlan0... failed.

    My second try was also unsuccessful:

      [root 1 network-scripts]# ifup-wireless
      Error : unrecognised wireless request "off"

    This is the relevant output of iwconfig:

      Warning: Driver for device wlan0 recommend version 21 of Wireless Extension, but has been compiled with version 20, therefore some driver features may not be available...
      wlan0     IEEE 802.11  ESSID:""
                Mode:Managed  Frequency:2.462 GHz  Access Point: Not-Associated
                Tx-Power=27 dBm
                Retry min limit:7  RTS thr:off  Fragment thr=2352 B
                Encryption key:off
                Link Quality:0  Signal level:0  Noise level:0
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    The same things happen even if I do: modprobe wlan0 (this does not give an error). It is important to say that modprobe did not succeed in finding ath_pci, therefore I decided to download the latest version of the madwifi driver from http://madwifi-project.org. I extracted it, but when I run make, this is what I get:

      [root 1 madwifi-0.9.4]# make
      /bin/sh: line 0: cd: /lib/modules/2.6.18-164.el5/build: No such file or directory
      Makefile.inc:66: * /lib/modules/2.6.18-164.el5/build is missing, please set KERNELPATH. Stop.

    I tried to set KERNELPATH, but I think it was incorrect:

      [root 1 madwifi-0.9.4]# make KERNELPATH=/lib/modules/2.6.18-164.el5/kernel/
      /bin/sh: cc: command not found
      Makefile.inc:81: * Cannot detect kernel version - please check compiler and KERNELPATH. Stop.

    Does anyone have any ideas? Thank you very much in advance.
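    The two failures at the end both point at missing build tools: "cc: command not found" means there is no compiler installed, and the missing /lib/modules/.../build directory means the kernel headers are absent. A hedged fix on CentOS 5:

      yum install gcc kernel-devel make
      # kernel-devel recreates /lib/modules/$(uname -r)/build, so a plain make
      # should then work, or point KERNELPATH at it explicitly:
      make KERNELPATH=/lib/modules/$(uname -r)/build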

    Read the article

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the App event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully however immediately after the job success is written to the event log there is a run of entries that say: SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller. There's five of these logged - the server itself has more than 20 databases on it which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a re-boot on Friday to install some patches which included KB960089. Edit: After getting the errors for a few days they've now stopped without any action on my part other than letting the backups continue as they were. It may be a coincidence but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

    Read the article

  • What does this ssh error mean?

    - by kevin
    This is my last resort. I've been trying to figure out the problem here for hours. Here's the deal: I have copied my private key from machine #1 onto machine #2. Machine #1 is able to connect via ssh to a server with my public key just fine, but machine #2 gives the following output when trying to connect to the server:

      $ ssh -vvv -i /home/kevin/.ssh/kev_rsa [email protected] -p 22312
      OpenSSH_5.3p1 Debian-3ubuntu6, OpenSSL 0.9.8k 25 Mar 2009
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to 192.168.1.244 [192.168.1.244] port 22312.
      debug1: Connection established.
      debug3: Not a RSA1 key file /home/kevin/.ssh/kev_rsa.
      debug2: key_type_from_name: unknown key type '-----BEGIN'
      debug3: key_read: missing keytype
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      ...
      Permission denied (publickey).

    There is obviously more debug output that I have omitted, and I can provide it upon request. I am convinced, however, that it doesn't like my private key file. I also had a suspicion that it has to do with how I copied it from machine #1 to machine #2: I copy/pasted the text from the private key onto a flash drive. This might be the problem; however, when I duplicated this method on another working private key file and did a diff of the original against the copy/pasted one, they were identical. I've been struggling with this. If I could just get a little more information on why it doesn't like my key, I could fix it, I'm sure. Anyone have any ideas on this? Is there some meta-data somewhere that tells ssh that a file is in fact an RSA key?
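    A hedged check that sidesteps the debug noise (the "Not a RSA1 key file" and "key_read" lines are normal for RSA keys): derive the public key from the copied private key and compare it against the authorized_keys entry on the server; a corrupted copy either won't parse or won't match. Permissions are worth ruling out too, since ssh refuses group/world-readable private keys:

      chmod 600 /home/kevin/.ssh/kev_rsa
      ssh-keygen -y -f /home/kevin/.ssh/kev_rsa   # prints the matching public key,
                                                  # or an error if the file is mangled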

    Read the article

  • DHCP not responding from laptop or router, but works on directly plugged PC?

    - by Matt H
    I'm at my sister-in-law's place in Singapore. I'm not from Singapore but am here for a few months. She has some sort of cable modem made by Motorola (SB5101 Surfboard). I think it goes through StarHub or a similar provider. Anyhow, her PC is directly attached by cable (not wireless) and she can access the internet. There is no wireless router connected to it. The PC is configured with DHCP and appears to be working. However, the moment I unplug her PC and plug in my laptop, it doesn't get an address. The interesting thing here is that I also see this Teredo tunnel adaptor etc. I'm not familiar with what that is. It appears to be being assigned an IPv6 address and an IPv4 address. I thought perhaps it was my laptop, but when I plug in my DD-WRT based router, it also fails to get a DHCP-assigned address on the WAN port. I can't seem to connect to any web configuration on the Motorola modem either. Any ideas? What kind of setup is this? All I'd like to do is plug in my wireless router so I can roam around the house and also access the internet.

    Read the article

  • Unable To Successfully Use Java Anywhere (Windows 7 64bit)

    - by Nathan Smith
    This is a bit of an odd issue: (This is with JRE 7u7 32bit.) New Lenovo W530 laptop, everything is shiny. I tried to use a Java applet for the first time recently (in Chrome). I got the Java logo and it started spinning like it was going to load, but didn't get anywhere and the browser locked up. I tried IE. Same deal. Tried Firefox. Same deal. So, I did the usual things: Rebooted, nothing. Un-installed Java, rebooted, re-installed, nothing. I tried installing 64bit Java as well. Same thing. I tried JRE 6. Same thing. I uninstalled and manually deleted all the registry entries I could find related to it, then re-installed. Same thing. I un-installed and used the JavaRa utility. Same exact thing. I hadn't been able to use web applets or desktop stuff (Minecraft) until yesterday: I did a graphics driver update and I was able to play Minecraft for small amounts of time. It crashed regularly and wouldn't start reliably. No change in web applets. I'm at my wits' end here. I'm to the point where reformatting is my only option. If anyone has any ideas at all, that would be awesome.

    Read the article

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box is running Redhat Enterprise 5. Our backup box is running Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" - which looks at return codes - never identifies any problem since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

      #Software: Microsoft Internet Information Services 5.1
      #Version: 1.0
      #Date: 2009-08-08 07:04:25
      #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
      07:04:25 192.168.111.235 [15]USER backup 331 0
      07:04:25 192.168.111.235 [15]PASS - 230 0
      07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
      07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort-of a catch-all for "IIS was not happy". The real puzzler is the wincode: 10035 (WSAEWOULDBLOCK - Resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls - which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!
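    Given how intermittent this is, one hedged client-side mitigation is to make the upload script notice failures and retry: unlike the classic ftp client, curl returns a nonzero exit code when a transfer dies (the hostname and credentials are placeholders):

      for attempt in 1 2 3; do
        curl -T "backup_$(date +%Y%m%d).zip" ftp://backup-box/ --user backup:PASSWORD && break
        echo "upload failed (attempt $attempt), retrying" >&2
        sleep 60
      done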

    Read the article

  • Hostname error on my Slicehost Ubuntu server

    - by allesklar
    Like many folks who upgraded to Rails 2.2, I got an exception raised when sending an email. This version of Rails or later does require using TLS for sending emails. The message in the production log file says:

      hostname was not match with the server certificate

    I did a whole lot of research and work on this and did everything I could. I changed my slice's hostname to ohlalaweb.com. If I run the command 'hostname' at the CL I get: ohlalaweb.com. Postfix seems to work fine. I can send emails from the CL to my gmail, yahoo, and google apps gmail accounts with no problems. Here is the result of cat /etc/postfix/main.cf:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      myorigin = /etc/mailname
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = no
      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ohlalaweb.pem
      smtpd_tls_key_file=/etc/ssl/certs/ohlalaweb.pem
      smtpd_use_tls=yes
      # SA created next line to force postfix to use self-created certificate
      smtpd_tls_auth_only=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.
      myhostname = ohlalaweb.com
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      mydestination = localhost.localdomain, localhost
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all

    I have regenerated the SSL keys with the ohlalaweb.com hostname. Any ideas or suggestions?
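    A hedged next check: the TLS client compares the hostname it connects to against the certificate's CN, so the mismatch can survive a regenerated key if the CN isn't actually ohlalaweb.com, or if the Rails mailer connects to "localhost" rather than the name on the cert. Inspecting and, if needed, reissuing the self-signed cert (paths from the post; the combined key+cert pem matches the main.cf above):

      openssl x509 -in /etc/ssl/certs/ohlalaweb.pem -noout -subject
      openssl req -new -x509 -days 3650 -nodes -subj "/CN=ohlalaweb.com" \
        -keyout /tmp/ohlalaweb.key -out /tmp/ohlalaweb.crt
      cat /tmp/ohlalaweb.key /tmp/ohlalaweb.crt > /etc/ssl/certs/ohlalaweb.pem
      postfix reload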

    Read the article

  • Display stretches 4:3 ratios; Adds scrolling to other ratios

    - by Matt
    I have a dual monitor setup. Normally, they both display at 1680x1050. They have been set up this way for about a year. I'm using Windows XP Professional 2003 x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time... in fact, all I had done was close a window (maybe a browser). But the thing is that the resolution is still preserved partially, by the fact that the screen will scroll when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling/reinstalling the drivers to no avail. System restore doesn't help either. I'm unsure of the exact ATI card I'm using... Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself... it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run system restore again. Any ideas? Thanks

    EDIT: After playing around more, I discovered that the "scrolling" behavior is only present for aspect ratios that are not 4:3. For 4:3 ratios, it just stretches out to fit the wide screen. My monitor's native ratio is 16:9... what could be causing it to think it needs to scroll?

    Read the article

  • An XKB keyboard map that responds to the left and right shift key individually

    - by mbfisher
    First off, excuse my ignorance of X and XKB; I've been trying to hack together a solution in the hope of being able to achieve what I want without requiring a detailed grasp of it. I'm trying to create an XKB keyboard map on Ubuntu 12.04 that allows me to stipulate which of the two shift keys constitutes the Level2 modifier. Specifically, the 4 key should only produce a $ when the right shift is held, not the left. My reading so far:

      http://www.charvolant.org/~doug/xkb/html/node5.html
      http://people.uleth.ca/~daniel.odonnell/Blog/custom-keyboard-in-linuxx11
      http://www.x.org/releases/X11R7.5/doc/input/XKB-Enhancing.html

    Lots of searching! I've attempted to define a custom type, and then refer to it explicitly in a symbols map.

    /usr/share/X11/xkb/types/mbfisher:

      default xkb_types "mbfisher" {
          type "RIGHT_SHIFT" {
              modifiers = None+Shift_R;
              map[None] = Level1;
              map[Shift_R] = Level2;
          };
      };

    /usr/share/X11/xkb/symbols/mbfisher:

      default partial alphanumeric_keys
      xkb_symbols "basic" {
          name[Group1]= "mbfisher";
          key <AE04> { type= "RIGHT_SHIFT", symbols[Group1]= [ 4, dollar ] };
      };

    I'm then selecting the map with the Ubuntu Keyboard Layout GUI. This obviously disables the alphanumeric keyboard apart from the 4 key, but the dollar sign can still be typed with either shift key. I'm conscious of writing a massive question with lots of useless information so I'll stop here; please ask for anything I've missed out. Any ideas?
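    One hedged observation: XKB types select levels by modifier state, and Shift_R is a keysym, not a modifier - both shift keys set the same Shift bit, which would explain why the type above can't tell them apart. A workaround sketch is to move Shift_R onto a normally unused modifier bit (check 'xmodmap -pm' first; Mod5 is an assumption and is sometimes occupied by ISO_Level3_Shift):

      xmodmap -e "remove shift = Shift_R"
      xmodmap -e "add mod5 = Shift_R"

    and then key the custom type off that bit instead:

      type "RIGHT_SHIFT" {
          modifiers = Mod5;
          map[None] = Level1;
          map[Mod5] = Level2;
      };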

    Read the article

  • Environment variables in bash_profile or bashrc?

    - by Viriato
    I have found the question "Difference between .bashrc and .bash_profile" very useful, but after seeing the most-voted answer (very good, by the way) I have further questions. Towards the end of the most-voted, correct answer I see the following statement:

      Note that you may see here and there recommendations to either put environment variable definitions in ~/.bashrc or always launch login shells in terminals. Both are bad ideas.

    1. Why is it a bad idea (I am not trying to fight, I just want to understand)?
    2. If I want to set an environment variable and add it to the PATH (for example JAVA_HOME), where would be the best place to put the export entry? ~/.bash_profile or ~/.bashrc?
    3. If the answer to question 2 is ~/.bash_profile, then I have two further questions:
    3.1. What would you put under ~/.bashrc? Only aliases?
    3.2. In a non-login shell, I believe ~/.bash_profile is not being "picked up". If the export of the JAVA_HOME entry were in .bash_profile, would I be able to execute the javac & java commands? Would it find them on the PATH? Is that the reason why some posts and forums suggest setting JAVA_HOME and the like in ~/.bashrc?

    Thanks in advance.
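    For illustration, a common arrangement that addresses questions 2 and 3.2 together (hedged; the JAVA_HOME path is just an example): exports live in ~/.bash_profile, which then sources ~/.bashrc. Because environment variables are inherited by child processes, a non-login shell started from a logged-in session still sees JAVA_HOME and finds javac/java on the PATH.

      # ~/.bash_profile - read once per login shell
      export JAVA_HOME=/usr/lib/jvm/java-6-sun      # example path
      export PATH="$JAVA_HOME/bin:$PATH"
      [ -f ~/.bashrc ] && . ~/.bashrc               # pick up aliases in login shells too

      # ~/.bashrc - read by every interactive non-login shell
      alias ll='ls -l'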

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical raid" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the Grub2 bootloader. The motivation behind doing this: I have 4 x 1tb 7200rpm disks. Two are newer and faster (up to 200mb/sec) and the other two are slower (up to 140mb/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a summary speed of up to ~480mb/sec. That is roughly 4*120mb/sec, so RAID-0 works at the speed of the slowest device. I have an idea to create a separate RAID-0 md0 device from 500gb partitions of the slower hard disks. Theoretically, this md0 device will have a speed of 2*140=~280mb/sec. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200=600mb/sec. The stripe-width for this raid will be twice that of the underlying raid with the slow disks. Questions are:

    - is it possible, or am I missing something?
    - will that work as expected?
    - can I boot from such a consolidated raid device?
    - any better ideas? any pitfalls?

    I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters and so on). PS: Speed is needed for a home virtualization server and just for experience/fun. Reliability is provided via regular automatic backups to a separate device. PPS: I also considered using different stripe-widths for hard disks with different speeds in a single raid, but mdraid does not seem to support that.
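    mdraid does accept md devices as members of other arrays, so the layering itself is possible. A hedged sketch (device names are assumptions):

      # inner array: the 500gb partitions on the two slow disks
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
      # outer array: the two fast disks plus the inner array as a third member
      mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/md0

    Booting is the riskier part: keeping a small separate /boot (plain partition or RAID-1) outside the nested stripe is the usual safer layout, rather than asking Grub2 to assemble the whole thing.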

    Read the article

  • Windows Media Player 12 Library import keeps dying

    - by duckworth
    I cannot get WMP 12 to import my library. I have searched around various forums and tried all the common solutions like disabling Media Sharing, deleting my %LOCALAPPDATA%\Microsoft\Media Player directory and reimporting, etc., but nothing works. I have even removed the Media features from Windows setup and re-added them. I have a large mp3 collection shared on the network from another Windows box. I add the folder (tried as a mapped drive and as a UNC path) and it begins importing. About 30 minutes into the import (when the CurrentDatabase_372.wmdb hits just under 400MB), WMP stops importing, all of the icons in WMP turn to red x's, and my library is gone. I close and reopen WMP 12; the library is empty, the CurrentDatabase_372.wmdb is small, and it starts importing again. Rinse, lather, repeat. I am going nuts, as WMP 11 on Vista handles this same setup perfectly. I am at my wits' end on what else to try. I am running a legit Windows 7 Ultimate x64 RTM install. Here is a screenshot of what WMP 12 looks like when the import dies.

    Edit: OK, I just confirmed this is definitely a problem not specific to my computer or configuration. I did a clean installation of Windows 7 Ultimate x86 on an old test machine, opened WMP 12 and added the same network folder of mp3s, and it crashed about an hour into the import with the same appearance as the screenshot I posted above, and the library disappears. So the problem has to be one of several things:

    - The large size of the library
    - The fact that the library is on the network
    - A specific file or files causing the player to crash

    Read the article

  • I love google Chrome, but some non-static pages like Piwik render it unresponsive

    - by gogowitsch
    The web-stats software Piwik stops reacting to mouse clicks after 1-2 seconds. The same is true for Google Maps and Producteev (but GMail and most other pages work like a charm). These pages rely heavily on JS, and work without Flash. I can click for a very short time period and then the mouse cursor doesn't feel the UI anymore (it doesn't turn into an I over input fields, though it moves; if the freeze occurred while the pointer was over an input field, the cursor stays an I) and all clicks on the DOM are ignored by Chrome. No message appears, neither obvious nor in the Console (F12). There is no obstructing div or the like in the DOM (F12). Since I couldn't find any hints on the source of my problems, I suspected my plugins and extensions. Unfortunately, neither deactivating all plugins nor all extensions solved the problem. Other observations:

    - for the problematic pages, it always happens
    - no Dropbox running
    - several GB of free RAM
    - the task manager doesn't show any high CPU or memory utilization (the offending tab uses 30 MB and 0-1% CPU)
    - all problematic pages work in other browsers (Chrome, Firefox, IE)
    - the rest of the computer is very responsive
    - the computers use different security suites (Kaspersky and Avira)

    The effect exists between several (synchronized) Chrome instances on different machines, all running Windows 7. Both the OS and Chrome are updated automatically. Other tabs and the Chrome chrome (tabs, menus, toolbar buttons of the browser itself) still work. I really don't like switching between browsers. Any ideas?

    Read the article

  • SMTP Unreachable from Specific Networks

    - by Jason George
    I host my business site through a VPS account. The instance runs Ubuntu and I'm using Postfix+Dovecot as my mail server. For the most part, the mail server works fine. I have noticed, however, that I cannot send mail from specific local networks. I noticed this at a client's office several months ago. I can receive email, but any time I tried to send mail when connected to their network, the connection would time out. Since I could send my mail after leaving, I chalked it up to improper network configuration and didn't worry about it. Unfortunately, I've recently moved, switched service providers, and am forced to use the service provider's router due to the special set-up they put in place to give me DSL in the sticks - well beyond the typical range for a DSL run. Now I'm unable to send email from home, which is a problem. I have tried sending email through my phone (using cellular service rather than my DSL) just to confirm the server is currently working. I'm not even sure where to start debugging. Any ideas on how I might track down the issue would be greatly appreciated.
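    A hedged hypothesis that fits "times out on some networks, works on others": many residential and office ISPs block outbound TCP 25. A telnet test from an affected network tells you quickly, and enabling the submission port (587) in /etc/postfix/master.cf gives clients a path that usually isn't filtered (the hostname is a placeholder):

      telnet mail.example.com 25     # hangs/times out if the ISP filters port 25
      telnet mail.example.com 587    # should connect once submission is enabled

      # /etc/postfix/master.cf - uncomment or add the submission service:
      submission inet n       -       -       -       -       smtpd
        -o smtpd_tls_security_level=encrypt
        -o smtpd_sasl_auth_enable=yes

    Remember to run "postfix reload" afterwards, and point mail clients at port 587 with authentication.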

    Read the article
