Search Results

Search found 16333 results on 654 pages for 'exception safe'.

  • When tab groups are loaded, Firefox becomes unresponsive for minutes (Unresponsive script)

    - by unor
    I have several tab groups (~20) in Firefox. I can start the browser without any problems. However, as soon as I click the "Group tabs" icon in the toolbar, or right-click a tab and hover over "Move to tab group", Firefox becomes unresponsive/freezes for a rather long time (more than 2 minutes). It seems to load all tab groups (it doesn't load all the pages! I deactivated this in the settings). While this is happening, I get several "Unresponsive script" warnings, like:

        Script: chrome://global/content/bindings/tabbox.xml:0 (most of the time)
        Script: chrome://global/content/bindings/tabbox.xml:418
        Script: chrome://browser/content/tabview.js:400
        Script: chrome://browser/content/tabview.js:522
        Script: resource://modules/sessionstore/SessionStore.jsm:3578
        Script: resource:///components/PageThumbsProtocol.js:79 (rare)
        Script: resource://gre/modules/XPCOMUtils.jsm:323 (rare)

    (There are probably other warnings too; I haven't recorded them all yet.) On all of these I click "Continue". After ~2-3 minutes and 3-5 warnings, I can use Firefox again. Now I can switch tab groups without any problems. Why is this happening? How can I prevent the long loading time? Is there maybe an about:config setting I could try? I started Firefox in Safe Mode (= without any add-ons): the problem still exists.

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMware vCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage. However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely. Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks!

    EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive.

    EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."

  • How to import certificate for Apache + LDAPS?

    - by user101956
    I am trying to get LDAPS to work through Apache 2.2.17 (Windows Server 2008). If I use ldap (plain text), my configuration works great:

        LDAPTrustedGlobalCert CA_DER C:/wamp/certs/Trusted_Root_Certificate.cer
        LDAPVerifyServerCert Off
        <Location />
          AuthLDAPBindDN "CN=corpsvcatlas,OU=Service Accounts,OU=u00958,OU=00958,DC=hca,DC=corpad,DC=net"
          AuthLDAPBindPassword ..removed..
          AuthLDAPURL "ldaps://gc-hca.corpad.net:3269/dc=hca,dc=corpad,dc=net?sAMAccountName?sub"
          AuthType Basic
          AuthName "USE YOUR WINDOWS ACCOUNT"
          AuthBasicProvider ldap
          AuthUserFile /dev/null
          require valid-user
        </Location>

    I also tried the other encryption choices besides CA_DER just to be safe, with no luck. Finally, I also needed this with Apache Tomcat. For Tomcat I used the Tomcat JRE and ran a line like this:

        keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias mycert -file Trusted_Root_Certificate.cer

    After doing that, LDAPS worked great via Tomcat. This tells me that my certificate is OK.

    Update: Both ldap modules are turned on, since using ldap instead of ldaps works fine. When I run a git clone, this is the error returned:

        C:\Temp>git clone http://eqb9718@localhost/git/Liferay.git
        Cloning into Liferay...
        Password:
        error: The requested URL returned error: 500 while accessing http://eqb9718@localhost/git/Liferay.git/info/refs
        fatal: HTTP request failed

    access.log has this:

        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:12 -0600] "GET /git/Liferay.git/info/refs service=git-upload-pack HTTP/1.1" 500 535
        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:33 -0600] "GET /git/Liferay.git/info/refs HTTP/1.1" 500 535

    apache_error.log has nothing. Is there any more verbose logging I can turn on or better tests to do?
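
    On the last question, Apache's own verbosity can be raised while debugging; a minimal sketch (LogLevel is a standard Apache 2.2 directive - at debug level, mod_ldap logs its connection and certificate negotiation steps to the error log, which should say why the 500 is returned):

        # in httpd.conf, while troubleshooting only - debug is very chatty
        LogLevel debug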

  • Windows Server 2008 Services won't start after patch

    - by Antitribu
    After installing the run-of-the-mill patches today on a Windows Server 2008 machine (running as an AD controller and Exchange 2007 server), the machine came back up with "configuring updates stage 3 of 3 0% complete". The machine had been kept reasonably up to date, so this was likely caused by a very recent patch. At the least, the following patches were installed:

        KB973037
        KB969947
        KB973565

    Restarting the server into safe mode and then subsequently rebooting (with no changes made) allowed the computer to restart, and I can now log in normally. However, none of the critical services start, including but not limited to Exchange, DNS, and Terminal Services (obviously, if DNS doesn't start, other things will break). I am unable to run Internet Explorer, but Chrome will work. There are no meaningful errors in the event logs as to why services won't start. Under KDC I have:

        The Key Distribution Center (KDC) cannot find a suitable certificate to use for smart card logons, or the KDC certificate could not be verified. Smart card logon may not function correctly if this problem is not resolved. To correct this problem, either verify the existing KDC certificate using certutil.exe or enroll for a new KDC certificate.

    This is going to be an evil one to debug, and I'm kinda hoping someone has encountered it and knows the answer off hand. Thanks all.
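
    Following the event log's own suggestion, the existing KDC certificate can be inspected with certutil; a hedged sketch (the exported file name kdccert.cer is a placeholder - pick the actual KDC certificate out of the store listing first):

        REM list certificates in the machine's personal store
        certutil -store MY
        REM verify the chain and revocation data for an exported certificate
        certutil -verify -urlfetch kdccert.cer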

  • Vista logs into black screen, "Application Error 0xc0000022" for explorer.exe

    - by IMAPC
    Whenever I attempt to log into a Windows Vista computer (the only account on it), I'm presented with a black screen and a cursor. I can open Task Manager, and from there I can launch applications. It seems to be using Aero Basic (instead of the full Aero, which I had set as default before the problem started). When attempting to launch explorer.exe I get:

        explorer.exe - Application Error
        The application failed to initialize properly (0xc0000022). Click OK to terminate the application.

    Every now and then I also get an error along the lines of "the application has failed to start because its side-by-side configuration is incorrect; please see the application event log for more detail." I can boot into safe mode successfully, but I still get the black screen when I log in in regular mode. I've tried most of the suggestions here, but they did not work. I'm backing everything up right now in case the only fix is to reinstall Windows. Has anyone seen this before?

  • How to figure out how much RAM each prefork thread requires for maximum WordPress performance on an EC2 small instance

    - by two7s_clash
    Just read Making WordPress Stable on EC2-Micro. In the "Tuning Apache" section, I can't quite figure out how he comes up with his numbers for his prefork config. He explains how to get the numbers for an average process, which I get. But then:

        Or roughly 53MB per process... In this case, ten threads should be safe. This means that if we receive more than ten simultaneous requests, the other requests will be queued until a worker thread is available. In order to maximize performance, we will also configure the system to have this number of threads available all of the time.

    From 53MB per process, with 613MB of RAM, he somehow gets this config, which I don't get:

        <IfModule prefork.c>
        StartServers       10
        MinSpareServers    10
        MaxSpareServers    10
        MaxClients         10
        MaxRequestsPerChild 4000
        </IfModule>

    How exactly does he get this from 53MB per process, with a 613MB limit?

    Bonus question: from the below, on a small instance (1.7 GB memory), what would good settings be?

        bitnami@ip-10-203-39-166:~$ ps xav | grep httpd
        1411 ?  Ss  0:00   2  0 114928 15436 0.8 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1415 ?  S   0:06  10  0 125860 55900 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1426 ?  S   0:08  19  0 127000 62996 3.5 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1446 ?  S   0:05  48  0 131932 72792 4.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1513 ?  S   0:05   7  0 125672 54840 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1516 ?  S   0:02   2  0 125228 48680 2.7 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1517 ?  S   0:06   2  0 127004 55796 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1518 ?  S   0:03   1  0 127196 54208 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1531 ?  R   0:04   0  0 127500 54236 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
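
    The likely arithmetic: 613MB / 53MB ≈ 11 worker processes before the instance starts swapping, and rounding down to 10 leaves a little headroom for MySQL and the OS. Pinning StartServers, MinSpareServers, MaxSpareServers, and MaxClients to the same value keeps exactly that many workers alive at all times, which is the "available all of the time" part. A sketch of the same calculation against live processes (bash; the 1200MB budget for Apache on a 1.7GB small instance is an assumption - subtract whatever MySQL and the OS actually use on your box):

        #!/bin/bash
        # Estimate MaxClients: memory budget for Apache divided by the average
        # resident set size of an httpd child (RSS is column 8 of `ps xav`, in KB).
        avail_mb=1200   # assumption: RAM left for Apache on a 1.7GB instance
        avg_mb=$(ps xav | awk '/[h]ttpd/ {sum += $8; n++} END {printf "%d", sum/n/1024}')
        echo "average httpd process: ${avg_mb}MB"
        echo "suggested MaxClients:  $((avail_mb / avg_mb))"

    Against the ps listing above this works out to roughly 51MB per process, i.e. around 23 workers for the bonus question's instance under that budget assumption.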

  • Blank Icon in Control Panel?

    - by Wesley
    Hi all, Just recently I opened my Control Panel and saw that a blank icon came up before my list of icons. Has anyone else experienced this before? Would it be safe to delete it? Thanks in advance.

    EDIT: Oops, I should have mentioned this too: when I try creating a shortcut on the desktop, I get a dialog box saying "Windows could not create the shortcut. Check if the disk is full." Clearly it isn't, since I still have 102 GB of free hard disk space... any ideas?

    EDIT2: Here is a screenie of my complete Control Panel: [screenshot missing]

    EDIT3: So I did a manual search for .CPL in Windows Search, then looked at the list in TweakUI, and got 3 unmatched .CPL files: ncpa.CPL, sapi.CPL, and QuickTime.CPL. I clearly know the QuickTime one, but I'm not too sure about the rest. SCRATCH THAT, I FIGURED IT OUT... BUT IT'S NOT THE BLANK ICON.

  • How to find out where or if MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux). I have root access to the machine via SSH. I am trying to locate a file that contains information that will help me determine which users have accessed what db and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL page says:

        The general query log - Established client connections and statements received from clients

    See: http://dev.mysql.com/doc/refman/5.0/en/server-logs.html

    It also says:

        By default, all log files are created in the mysqld data directory.

    So, I am NOT asking where the general query logs are stored (because I expect I will get answers saying "it depends"). Please help me work out: "How can I go about finding out where the MySQL general query logs are stored on a Linux machine?"

    A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following:

        [mysqld]
        skip-bdb
        skip-innodb
        set-variable = max_connections=500
        safe-show-database

    I have also looked in /var/lib/mysql/, but I could not see any log-like file names in that directory. Any clues on this would be most welcome.
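
    A hedged sketch of asking the running server itself where it logs (standard statements via the mysql client; note that on MySQL 5.0 the general query log is only written if mysqld was started with --log, and the general_log variable only exists from 5.1 on):

        # the default log location is the data directory
        mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir'"
        # 5.1+: is the general log on, and where does it go?
        mysql -u root -p -e "SHOW VARIABLES LIKE 'general_log%'"
        mysql -u root -p -e "SHOW VARIABLES LIKE 'log_output'"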

  • mysql.proc has gone corrupt. How can I fix it?

    - by Metalcoder
    I have a server running Debian 5.0 and MySQL. Suddenly, MySQL stopped working, and after many attempts to fix it, I decided to reinstall it. I installed MySQL 5.1.63, and when started it goes into safe mode. After some poking around, I executed mysql_upgrade as root, and it complained:

        ...
        Running 'mysql_fix_privilege_tables'...
        ERROR 1548 (HY000) at line 1111: Cannot load from mysql.proc. The table is probably corrupted
        ERROR 1064 (42000) at line 1112: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'sqlstate 'HY000' set message_text='Unexpected content found in the performance_s' at line 1
        ERROR 1548 (HY000) at line 1125: Cannot load from mysql.proc. The table is probably corrupted
        FATAL ERROR: Upgrade failed

    I checked the mysql.proc table, and its comment column was slightly different from my backup.

        -- My backup says:
        `comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
        -- But it was:
        `comment` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,

    So, I restored my mysql database backup, and now they all match, but mysql_upgrade still triggers the same errors. I also tried to check and repair the mysql.proc table, but got no success.
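
    For reference, a minimal sketch of the check/repair pass with the stock tools (root credentials assumed; mysqlcheck operates on the named table inside the mysql system database):

        mysqlcheck -u root -p --check  mysql proc
        mysqlcheck -u root -p --repair mysql proc
        # then retry the upgrade
        mysql_upgrade -u root -p --force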

  • mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect

    - by photon
    While trying to install MySQL 5.5 community edition on Ubuntu 10.04 by compiling the source code, I hit the following problem:

        $ sudo ../bin/mysqld_safe --basedir=/usr/local/mysql_community_5.5/data --user=mysql --defaults-file=/etc/my.cnf
        121023 09:26:21 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:21 mysqld_safe Logging to '/var/log/mysql/error.log'.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121023 09:26:23 mysqld_safe mysqld from pid file /var/lib/mysql/ubuntu.pid ended

    It seems the problem is related to log configuration. I've noticed a bugfix related to this problem (http://bugs.mysql.com/bug.php?id=50083), but I still have no idea how to solve it. The relevant content in /etc/my.cnf:

        [mysqld]
        port = 3306
        socket = /tmp/mysql.sock
        skip-external-locking
        key_buffer_size = 384M
        max_allowed_packet = 1M
        table_open_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        character-set-server=utf8

        [mysqld-safe]
        basedir=/usr/local/mysql_community_5.5
        datadir=/usr/local/mysql_community_5.5/data

    And /etc/mysql/conf.d/mysqld_safe_syslog.cnf contains:

        [mysqld_safe]
        syslog
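
    The first message names the conflict: mysqld_safe will not log to both a file and syslog, so one side has to go. A hedged sketch of keeping the error-log file and dropping syslog (the conf.d path is the one quoted above; the reverse choice is to remove the log-error/log options instead). It may also be worth double-checking the [mysqld-safe] section name in my.cnf above, since mysqld_safe reads the [mysqld_safe] group:

        # comment out the syslog option that the packaged conf.d snippet adds
        sudo sed -i 's/^syslog/#syslog/' /etc/mysql/conf.d/mysqld_safe_syslog.cnf
        # then restart mysqld_safe and check which "Logging to ..." line it prints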

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: Is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem, and data? Will the partition be mountable and the data still there?

    Longer version: I used to have a raid6 array but decided to dismantle it. The disks from the array are now used as non-raid disks. The superblocks were cleared:

        sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks fails to be recognized when trying to mount it, or rather the single partition on it:

        sudo mount /dev/sdd1 /mnt/tmp
        mount: special device /dev/sdd1 does not exist

    fdisk claims there is a partition on it:

        sudo fdisk -l /dev/sdd

        Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb06f6341

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      243201  1953512001   83  Linux

    Of course mount is right: the device /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm data still on it:

        sudo mdadm --examine /dev/sdd

        /dev/sdd:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : b164e513:c0584be1:3cc53326:48691084
                   Name : pringle:0  (local to host pringle)
          Creation Time : Sat Jun 16 21:37:14 2012
             Raid Level : raid6
           Raid Devices : 6
         Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
             Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
          Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
            Data Offset : 2048 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2
            Update Time : Sun Aug 12 22:20:39 2012
               Checksum : 4c329db0 - correct
                 Events : 1238535
                 Layout : left-symmetric
             Chunk Size : 512K
            Device Role : Active device 3
            Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.
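
    Before re-running --zero-superblock, the recoverable state can be backed up cheaply; a cautious sketch (output file names are illustrative; nothing below writes to the data area except the final superblock wipe itself):

        # save the partition table; restorable later with `sfdisk /dev/sdd < sdd.pt`
        sudo sfdisk -d /dev/sdd > sdd.pt
        # a v1.2 superblock sits 4K from the start of the device; keep the first MB
        sudo dd if=/dev/sdd of=sdd-head.img bs=1M count=1
        # then clear the member metadata again
        sudo mdadm --zero-superblock /dev/sdd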

  • How to stop SophosAV from scanning directories under source control

    - by user26453
    From googling, it seems it's well known that Sophos AV, like other AV programs, can interfere with and inhibit source control utilities like TortoiseHG or TortoiseSVN. One solution is to exclude directories under source control from on-access scanning, as detailed on Sophos's support site. A corollary article mentions some issues related to this, namely the need to place multiple exclusion entries based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files"). One other twist: I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts, I have included the source control directory as it is nested in both locations. As a result, I have the two exclusions below for on-access scanning (and, to be on the safe side, for on-demand scanning as well, although that should only come into play when I select a parent directory of the exclusion to be scanned on-demand). Note that I have no need to add extra exclusions for those locations based on the short vs. long name distinction. The two exclusions, for both on-access and on-demand scanning, are:

        C:\Users\Username\source-control-directory
        E:\source-control-directory

    However, this does not seem to work, as TortoiseHG still lags terribly in response to any request; the AV software starts scanning as soon as the directory is accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problems: if I completely disable on-access scanning, TortoiseHG responds very fast to all operations. I obviously cannot leave it disabled, but since the exclusions don't seem to be working, what next?

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done:

        In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I set extension_dir = "C:/PHP/ext" and uncommented extension=php_openssl.dll.
        From http://windows.php.net/download/ I got the thread safe PHP 5.4 (5.4.8) version of the DLLs.
        In C:/PHP/ext I replaced php_openssl.dll with the one I downloaded.
        In System32 and SysWOW64 I added ssleay.dll and libeay.dll.
        I restarted the IIS server in the Server Manager under Web Server, and stopped and started the World Wide Web Publishing Service.

    That didn't work, so I tried the same thing with the non-thread-safe versions. I still get:

        Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5

    Here are the related parts of phpinfo():

        System: Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586
        Compiler: MSVC9 (Visual C++ 2008)
        Architecture: x86
        Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo"
        Server API: CGI/FastCGI
        Configuration File (php.ini) Path: C:\Windows
        Loaded Configuration File: C:\PHP\php-cgi-fcgi.ini
        Scan this dir for additional .ini files: (none)
        Additional .ini files parsed: (none)
        Registered PHP Streams: php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar
        Registered Stream Socket Transports: tcp, udp, ssl, sslv3, sslv2, tls
        FTP support: enabled
        Protocols: dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp
        openssl: OpenSSL support enabled
        OpenSSL Library Version: OpenSSL 0.9.8t 18 Jan 2012
        OpenSSL Header Version: OpenSSL 0.9.8x 10 May 2012

    What am I missing here?
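
    A quick way to confirm which ini file and which extensions the CGI binary actually loads (standard php.exe/php-cgi.exe switches; run the same binary IIS invokes):

        C:\PHP>php-cgi.exe -i | findstr /i openssl
        C:\PHP>php.exe --ini
        C:\PHP>php.exe -m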

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list, this is not a GNU Parallel-specific problem, and they suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (say, 250MB):

        cat urllist | ./linkcheck.sh

    Running the host command on 250MB worth of URLs is rather slow. To speed things up I want to break up the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line-by-line. Assume that my system's default setup is capable of running 500-ish jobs per instance of parallel. To get round this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run 5000-ish jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
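
    On the SIGPIPE question, a minimal hedged sketch of ignoring the signal inside the script (trap is a bash builtin; this silences the message rather than fixing whatever closed the pipe, so it is worth testing against a small input first):

        #!/bin/bash
        # linkcheck-nopipe.sh - hypothetical variant: ignore SIGPIPE, read robustly
        trap '' PIPE
        while IFS= read -r domain
        do
            host "$domain"
        done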

  • EEE PC Keyboard malfunctioning - Ctrl key "sticks" after 10 seconds

    - by DWilliams
    I was given an EEE PC belonging to a friend of a friend to fix. The keyboard did not appear to work at all. I spent a while testing various things, blowing the keyboard out, checking for damage, and so on. Nothing appeared to be physically wrong. Then I noticed that the keyboard worked just fine for about 10 seconds (on average, sometimes more, sometimes less) after being powered on. It had been restored to the factory default Xandros installation with no user set up, so I couldn't get in to mess with things, since I couldn't type to make a user. I made an Ubuntu live USB to boot it from, and managed to get the boot order changed in the ~10 seconds of working keyboard I had (I don't think I've ever had to rush around BIOS menus that quickly). After I got Ubuntu up on it, I played around a bit more and determined that apparently the Ctrl key is stuck down (not literally, but it's on all the time). If I open gedit, pressing the "o" key brings up the open dialog, "s" opens the save dialog, and so on for all the other behaviour you would expect if you were holding down the Ctrl key. The only exception I noticed is the "9" and "0" keys; they function normally. Having figured that out, I made a Xandros user with a name/password consisting of 9's and 0's. I couldn't find any options in Xandros that could potentially be helpful. I'm not familiar with EEE PCs: is it safe to assume that the keyboard is simply dead, or could there be another problem? I don't want to purchase another keyboard for him if that isn't going to fix the problem. The netbook doesn't show any obvious signs of damage, but the owner is a biker and very often has it with him on the road, so it's been subjected to a good bit of vibration.
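
    From the Ubuntu live session, the stuck modifier can be confirmed and temporarily neutralized with standard X11 tools; a hedged sketch (this only masks the symptom - if disconnecting the internal keyboard's ribbon cable makes the problem disappear, the keyboard itself is the fault):

        # watch raw key events; a stuck Ctrl shows up as "state 0x4" on every KeyPress
        xev | grep -A2 KeyPress
        # detach left Ctrl from the control modifier for this X session
        xmodmap -e 'remove control = Control_L'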

  • Windows 2003 DNS or IIS6 Problem?

    - by Mario
    Weird DNS problem... We have an intranet hosted internally on a Windows 2003 / IIS6 server, with DNS handled internally on another Windows 2003 server. The intranet, amongst other functions, hosts an ecommerce store I wrote that sells Nike apparel embroidered with our company logo. Up until recently, it would send an email to payroll and the cost would be deducted from the employee's paycheck. Let's say this store is located at http://mydomain.com (only available internally). Now, we've been told by the accountants that we can no longer auto-deduct from payroll and the employee needs to pay with a credit card or cash. So I went to thawte.com and ordered an SSL cert to be on the safe side (even though the CC gateway is secure), and they told me I need to drop the .com from the domain name. Not wanting to mess with a system that's perfectly functional, I created another DNS entry that just points to mydomain (no .com) and left the old one in there, so they would go to http://mydomain.

    On my Mac (OS X 10.6) I can hit either one just fine. On Windows XP / Windows XP Embedded or Windows 7 (the vast majority of the PCs on our network):

        http://mydomain - returns nothing
        http://mydomain.com - still works
        https://mydomain.com - works but says the cert is invalid (as it should; it was issued to mydomain, not mydomain.com)

    My question is: why does it work on my Mac and not on a Windows PC (I get DHCP and DNS just like any other PC on the network), and will removing the .com entry from the DNS server resolve this? I've done all the usual attempts - ipconfig /flushdns, ipconfig /renew and release, even going so far as to stop and restart the DNS Client service on my Windows 7 box, rebooting and shutting down, and adding a regedit entry something along the lines of SecureResponses and rebooting. Nothing works... I think it's the .com and non-.com entries conflicting in DNS, but I'm not sure - and why not on OS X? We're closed on Sunday and I'm going to remote in and see what happens if I remove the .com from DNS, but any other ideas? -Mario
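
    One check that fits the symptoms: a single-label name like mydomain is resolved differently per client. Windows appends suffixes from its DNS suffix search list (or falls back to NetBIOS name resolution), while OS X applies its own search domains, so the Mac may simply be asking the DNS server a different question. A quick sketch to run on one of the failing Windows clients (standard commands):

        nslookup mydomain
        nslookup mydomain.com
        ipconfig /all | findstr /i suffix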

  • How to update the hard disk device drivers for a ghosted hard drive image so it can run on different hardware: Ultra ATA > SATA

    - by rism
    I've ghosted a WinXP machine from one laptop with an Ultra ATA drive, and would like to set it up on another laptop as a multiboot option on a hard drive with a SATA interface. I can install the partition fine, but if I make it active and try to boot it, it blue-screens. The blue screen is so fast I can't even read it, other than to make out that it's saying "something"; I'm picking probably the hard drive, as it goes through POST fine. So basically I would like to boot into my Win7 OS, and then somehow manipulate the XP partition to use updated drivers for the new hard drive/laptop, so that I can then at least boot into the XP OS on the new machine and update all the other drivers in safe mode or whatever to get it running. I assume someone is going to tell me to just do a fresh install, but that kinda defeats the purpose of ghosting at this point. There is a significant amount of personalisation and development setup on the XP machine that I would like to just transfer as-is. As it stands I've invested minimal time in getting it to run - just a ghost and recovery and then a blue screen boot or two - so it's still well worth it, time-wise, to try this way. Thanks.
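
    Since Win7 can see the XP partition, the offline XP registry can be edited from there. A hedged sketch of the idea behind Microsoft's MergeIDE fix for Stop 0x7B after a disk move (KB314082): force the generic IDE/ATAPI drivers to start at boot. The drive letter D: and ControlSet001 are assumptions - check which letter and control set the XP install actually uses, and back up the hive first:

        reg load HKLM\OfflineXP D:\Windows\system32\config\system
        reg add HKLM\OfflineXP\ControlSet001\Services\atapi /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\OfflineXP\ControlSet001\Services\intelide /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\OfflineXP\ControlSet001\Services\pciide /v Start /t REG_DWORD /d 0 /f
        reg unload HKLM\OfflineXP

    If the new laptop's BIOS offers an IDE/compatibility mode for its SATA controller, switching to that is an even simpler first test.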

  • In Windows 7, why can't I use perfmon against a remote server?

    - by SomeGuy
    I am on Windows 7 and trying to run perfmon against Windows 2003 and Windows 2008 servers, and I am running into the same issue with all remote machines. When creating a data collector set, I specify a domain account that is in the administrators group on the remote machines (and in "Performance Log Users" and "Performance Monitor Users" to be safe). On the "Available Counters" screen, when I type in a remote computer name, perfmon locks up for a good 2-3 minutes before I can add any counters. I can then save the collector set. However, once saved, the go/stop buttons are disabled if I click the set in the left panel, and missing if I click the data collector set itself in the right panel (screenshots missing). I can run data collector sets against my local machine with no problem. I am opening perfmon with my local account in both scenarios. I also have the Remote Registry service started on each remote machine. What is going on?

  • Smart Array P400 battery failure

    - by RobIII
    The P400 Smart Array controller in my HP DL380 G5 is indicating "Battery Status: Failed, Replace Battery 1" in the System Management Homepage. The IML also indicates: "POST Error: 1794 - Drive Array - Array Accelerator Battery Charge Low". I do have several replacement batteries (actually, several controllers including batteries) lying around at work, but I have never had to actually replace one. I am wondering if the battery replacement (swap) can be done hot, or if I need to take down my server to do it. I have replaced other spare parts like fans and drives before, and all of those can be replaced hot; I just don't know about the battery of the P400 Smart Array controller. Any help would be appreciated.

    EDIT1: For those interested, straight from the horse's mouth (with thanks to Frands Hansen for pointing it out here): the manual says to power down first.

    EDIT2: In the end I did just power down, to be on the safe side and because the manual says so, and replaced the battery. Couldn't be easier. I unplugged the old one at the end of the cable (not the end connected to the controller) and reconnected a spare one. The replaced battery has now been in the server for about an hour and is currently (still) recharging. I'm assuming all will end well.

  • PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded

    - by Tiny
    I'm trying to upgrade from PHP 5.4.3 to PHP 5.4.14 in WampServer 2.2e. I downloaded php-5.4.14-Win32-VC9-x86 (thread safe), extracted it under C:\wamp\bin\php, copied wampserver.conf from C:\wamp\bin\php\php5.4.3 to C:\wamp\bin\php\php5.4.14, and renamed php.ini-development to phpForApache.ini. The port number of the wamp server had been changed in the httpd.conf file to 8087 from its default 80; this is mentioned here, though that is about upgrading from PHP 5.3.5 to PHP 5.4.0. After this, I restarted the wamp server and its services all over again, and both versions appear in the php-versions menu (opened when the server's tray icon is clicked). But when I attempt to enable a library like php_mysql or php_mysqli, a warning message box appears:

        PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded.

    I have also tried removing the semicolon before them in the php.ini file, but to no avail. I'm running Microsoft Windows XP Professional Version 2002, Service Pack 3. Where might the problem be?

    EDIT: I have changed extension_dir from C:\php to c:\wamp\bin\php\php5.4.14\ext\ in php.ini as the answer below indicates, and the library is now loaded correctly, but it says:

        1045 - Access denied for user 'root'@'localhost' (using password: YES)

    though the user name and the password are the same as they are in MySQL in the config.inc.php file under phpmyadmin. I have also tried restarting the MySQL56 service from Control Panel > Services (Local), but it keeps giving the same error. Does someone know why this happens?
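
    For reference, the fix described in the edit comes down to pointing extension_dir at the versioned ext folder; a sketch of the relevant php.ini lines (paths are this installation's own):

        ; C:\wamp\bin\php\php5.4.14\php.ini
        extension_dir = "c:/wamp/bin/php/php5.4.14/ext/"
        extension=php_mysql.dll
        extension=php_mysqli.dll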

  • Wifi network undetectable on a Lenovo G470

    - by Rex
    I have a WPA2-secured network with a visible SSID at home that works perfectly fine with a Dell laptop, an HP netbook, and sundry mobile phones. When I try connecting my sister's Lenovo G470, it refuses to detect the wifi network no matter what, but shows the neighbors' networks. The Lenovo also works correctly at her office. Both laptops run Windows 7. Already tried/checked the following:

        Manually configure wifi network settings (copied over from the Dell)
        Ensure there is no MAC address filtering on the router
        Ensure the router DHCP server is not running out of addresses to assign (I have set it to allocate up to 10)
        Reboot laptop, router, etc.

    Is this a known problem, and is there anything else one could try?

    Update - The problematic Lenovo uses Windows 7 Home Basic, while the Dell that works uses Home Premium and the HP netbook uses Starter edition - if that makes any difference.

    Further update - It is able to connect if I reboot into safe mode with networking. However, in 'normal' mode it shows the network sporadically, and then says there was an error connecting to it. All the network parameters, password, encryption, etc. are EXACTLY the same as they are on the Dell.

  • How to repair Mac OSX without reinstalling?

    - by RahulVyas
    I have an Intel PC on which I installed iDeneb Mac OS X, and it runs fine. After that, I thought I'd install Snow Leopard to run the iPad SDK, so I bought a retail Snow Leopard and booted it with the Rebel EFI boot loader. At the end of the installation, setup gave an error. So I restarted my PC, booted with Rebel, saw that the Mac partition was there, and booted it into safe mode; Mac OS X runs. After that I installed the iPad SDK. But when I try to create an application, Xcode is not responsive: it hangs as soon as I choose a new project, give it a name, and save it to disk. Is there any way I can repair my Mac OS X without reinstalling? I am also unable to boot into normal mode or without the Rebel CD. So I want to boot without the Rebel CD and also want to run the iPad SDK - please help me solve this problem.

  • Distribute IP packets accross different NIC queues with MSI (Message Signalled Interrupts)

    - by Ansis Atteka
    The NetXtreme II BCM5709 Gigabit Ethernet NIC supports MSI (Message Signaled Interrupts) and has 8 queues, each with its own interrupt handler in /proc/interrupts. What I am trying to accomplish is to tell the NIC which packets should go to which queue. Questions:

        Is it possible to manually specify which IP packets should go to which queue by encapsulated protocol type (e.g. IPsec packets go in one queue, while TCP packets go in another queue)?
        If it is possible - how can I do it under Linux?
        If it is not possible - should I look at MSI-X capable NIC cards to solve this problem?

    More details: We have one interface that is terminating IPsec and forwarding/terminating TCP connections. The IPsec packet decryption is inlined (this means that decryption is done under the same ksoftirqd/X context). We are trying to find out whether we can improve total performance if IPsec packets are scheduled on another CPU than TCP packets. One more limitation is that the IPsec code is not MP-safe, hence I cannot run it under more than one ksoftirqd/X. By default it seems that packets are distributed/hashed by source IP over the 8 NIC queues. The bottleneck is IPsec, which chokes out TCP traffic while it is decrypting/encrypting IPsec packets at ~100% CPU. OS is Ubuntu 10.10 (2.6.32-27-server) and the NIC is a Broadcom BCM5709.
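
    Two building blocks are worth sketching, both hedged since they depend on what the bnx2 driver actually exposes: each queue's IRQ can be pinned to a chosen CPU so the IPsec-heavy queue is serviced away from the TCP ones, and, if the driver supports ntuple receive filters, ESP traffic can be steered to a specific queue (the IRQ number 45 and queue index 2 below are placeholders - read the real values from /proc/interrupts):

        # see which IRQ each eth0 queue owns
        grep eth0 /proc/interrupts
        # pin that queue's IRQ to CPU1 (bitmask 0x2)
        echo 2 | sudo tee /proc/irq/45/smp_affinity
        # only if the NIC/driver supports ntuple filters:
        sudo ethtool -U eth0 flow-type esp4 action 2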

  • GTX 280 purple snow on bootup, card works without drivers

    - by Brokar
    I have owned an ASUS GTX 280 for 3 years now. The card has been great all along, but I started having problems 10 days ago. I had been playing Diablo 3 for a week on max settings with no problems; then suddenly my display kept showing weird purple colours as soon as I booted and logged into Windows. I went into safe mode and updated the drivers, but it kept crashing. I formatted the PC and did a fresh Windows install with the new WHQL drivers - same problem. I uninstalled the nvidia drivers and the PC has been running great for 4 days now; of course I cannot run games, but everything works at 1680x1050 resolution and I can browse the internet, watch movies, and use my PC for everything but gaming. As soon as I install the nvidia drivers, the PC won't boot. I only want to game a few hours a week (a very busy schedule with school this month, so it might be a blessing that I cannot game) and I would love it if I could keep the card. I am looking to upgrade later on when I have time for gaming, but I wonder if I could still use the card somehow with different/new drivers (I tried the older drivers that came with the card on a CD as well).

    tl;dr: PC works fine with no nvidia drivers (apart from gaming, of course). Once I install the WHQL drivers or older ones, I cannot even log into Windows. Fix?

  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam - it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database! So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected the "dspam" configuration files to have an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL backend and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
