Search Results

Search found 20289 results on 812 pages for 'service locator'.

Page 694/812 | < Previous Page | 690 691 692 693 694 695 696 697 698 699 700 701  | Next Page >

  • Installing XAMPP on a system that already has MySQL

    - by Charith
    I'm rather new to PHP and XAMPP. I have a computer with MySQL Server and MySQL Workbench already installed, as I was working with Java and NetBeans. Now I want to use the same machine for developing PHP and other web stuff too. I installed XAMPP successfully, but when I try to access phpMyAdmin it gives me an error saying the MySQL server rejected its connection. Actually, I tried stopping my current MySQL service and installing it again. However, XAMPP has its own MySQL server in its installation path too. I tried configuring config.inc.php to use my existing installation of MySQL, which is on a separate path, but I failed. Can anyone please explain how to configure XAMPP to use my existing MySQL server for everything and ignore the one bundled with it? I don't want two MySQL services running on my system and clashing in the future. I'd also be glad if anyone can explain what is best to use when you're developing Java, PHP, C and so on on the same machine. P.S.: I set a password for my existing MySQL server (user = root), as is usual when installing MySQL on its own.
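
    A minimal sketch of the kind of change involved, assuming a default XAMPP layout (the exact path to phpMyAdmin's config.inc.php and the credentials below are assumptions, not details from the question):

        /* point phpMyAdmin at the already-installed MySQL instead of XAMPP's bundled one */
        $cfg['Servers'][$i]['host']      = '127.0.0.1';
        $cfg['Servers'][$i]['port']      = '3306';      // the existing MySQL service
        $cfg['Servers'][$i]['auth_type'] = 'cookie';    // prompt for the existing root password

    Stopping or disabling XAMPP's own mysqld and leaving the standalone MySQL service running avoids having two servers compete for port 3306.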

    Read the article

  • Postfix qmgr process causes heavy overload on mailservers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software, and once in a while the load on one of the mail servers rises above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I can see it consuming a lot of CPU. The process seems to be stuck: after 15 minutes it is still doing the same thing and still driving the load up. Once I restart the Postfix service the load rapidly drops below 1 and Postfix continues sending e-mail without any problems. I'm wondering if anyone else has encountered this problem and whether people have suggestions on how to prevent it. The problem shows up on all our mail servers, but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are different every time. We tried to solve the problem by decreasing the number of messages qmgr is allowed to process, but this didn't solve it. We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (Postfix is installed from the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time.
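
    A few hedged ideas for catching it in the act the next time the load spikes (standard Postfix and Linux tools, nothing specific to this setup):

        postqueue -p | tail -n 1        # how much mail is sitting in the queue
        qshape deferred                 # which destination domains deferred mail is piling up on
        strace -c -p $(pgrep -x qmgr)   # attach for a minute, then Ctrl-C: which syscalls qmgr spends its time in

    If qmgr turns out to be churning through a huge deferred backlog or waiting on slow DNS lookups, that tends to show up clearly in the qshape and strace output, and points away from qmgr itself being at fault.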

    Read the article

  • Cannot start Postgres daemon after installing with Yum

    - by Sean the Bean
    I was trying to install Postgres 9.1.4 on Fedora 17 using Yum. If I do:

        sudo yum install postgres-libs
        sudo yum install postgres
        sudo yum install postgis

    all the installs appear to complete successfully (i.e., no errors), but I cannot start the Postgres daemon using:

        service postgresql initdb

    as the official Postgres download guide says to do (http://www.postgresql.org/download/linux/redhat/). The error says "Unknown operation initdb". RPM tells me that it installed psql to /usr/bin/, which I confirmed. It turns out that only a few components installed correctly (psql, pg_dump, pg_config and a few others), but most are missing (e.g., pg_ctl and postgres). I've tried several different configurations and had several of my coworkers (with more Linux experience than me) look at it, but so far nothing has worked. Two of them have also run into similar issues installing Postgres using apt-get on Ubuntu, which makes me think the RPM isn't doing its job. It seems the only solution is to build it from source, which is more robust anyway, but of course it takes longer. I'm wondering, though, whether anyone else has run into this issue and/or has successfully installed Postgres on either Fedora or Ubuntu using a package manager like yum or apt-get? Is the RPM broken?
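
    For what it's worth, a hedged sketch of what usually works on Fedora 17's systemd-based packaging (package names here are the stock Fedora ones, which differ slightly from the commands above; adjust if the PGDG 9.1.4 packages are needed specifically):

        sudo yum install postgresql-server postgresql-contrib postgis
        sudo postgresql-setup initdb              # Fedora's replacement for "service postgresql initdb"
        sudo systemctl enable postgresql.service
        sudo systemctl start postgresql.service

    The missing pg_ctl and postgres binaries are a hint that only the client package was pulled in; the server binaries live in postgresql-server.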

    Read the article

  • A few basic questions on web hosting (name servers & DNS records)

    - by claws
    I bought a domain name on name.com and I want to use free web hosting on 110mb.com. By default name.com integrates Google Apps services. The name server entries are ns1.name.com, ns2.name.com, ns3.name.com and ns4.name.com. When I registered on 110mb.com it gave me two addresses: ns1.110mb.com and ns2.110mb.com. This is where I'm lost. The concept is that "the domain name should point to the address of the server where the website is hosted", right? Then why are there these 4 entries by default? How exactly is it working? Should I remove these 4 and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones? I would like to keep using Google Apps. If I change these name server addresses, would that remove Google Apps? I especially want to use Google's email service. And I really don't understand what CNAME, MX and the other record types are. I want to learn how this stuff actually works, but when I search for web hosting tutorials I'm unable to find any fruitful results.
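
    A hedged starting point: the records below can be inspected for any domain, which makes it easier to see what each type does before changing anything (substitute the real domain for yourdomain.com):

        dig +short NS yourdomain.com          # which name servers answer for the domain
        dig +short A www.yourdomain.com       # the IP address web traffic goes to
        dig +short MX yourdomain.com          # the mail servers; Google Apps uses hosts like aspmx.l.google.com
        dig +short CNAME www.yourdomain.com   # an alias, if www points at another hostname

    Broadly speaking, if the name servers are switched to 110mb.com's, the Google Apps MX records currently configured at name.com stop being served, so they would need to be recreated wherever the DNS ends up being hosted.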

    Read the article

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used by the generic Ubuntu 2.6.31 kernel) and all the crypto APIs. The card is detected, however iwlist scan returns

        wlan0  Failed to read scan data : Network is down

    and the network-manager log says

        <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
        <info> (wlan0): new 802.11 WiFi device (driver: 'ath5k')
        <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
        <info> (wlan0): now managed
        <info> (wlan0): device state change: 1 -> 2 (reason 2)
        <info> (wlan0): bringing up device.
        <info> (wlan0): preparing device.
        <info> (wlan0): deactivating device (reason: 2).
        supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed
        <info> modem-manager is now available
        <WARN> default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files
        <info> Trying to start the supplicant...
        <info> (wlan0): supplicant manager state: down -> idle
        <info> (wlan0): device state change: 2 -> 3 (reason 0)
        <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    The exact same configuration works with the generic kernel. Is anything except wifi and the crypto API needed for wi-fi to work?
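
    One hedged way to narrow this down is to diff the wireless-related options between the working generic config and the custom one (file names below assume stock Ubuntu config files under /boot; adjust the version suffixes to what is actually installed):

        grep -E 'CONFIG_(CFG80211|MAC80211|ATH5K|WIRELESS_EXT|RFKILL)' /boot/config-2.6.31-*-generic
        grep -E 'CONFIG_(CFG80211|MAC80211|ATH5K|WIRELESS_EXT|RFKILL)' /boot/config-2.6.32*

    "wpa_supplicant couldn't grab this interface" often points at the kernel lacking the userspace interface the supplicant was built against (wireless extensions on that era of Ubuntu, or nl80211/cfg80211), rather than at the driver or the crypto support.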

    Read the article

  • Nginx + PHP-FPM Timeouts, almost zero load consumption?

    - by javipas
    I've got a server running on a Linode with Ubuntu 10.04 LTS, Nginx 0.7.65, MySQL 5.1.41 and PHP 5.3.2 with PHP-FPM. There is a WordPress blog on it, recently updated to WordPress 3.2.1. I have made no changes to the server (except updating WordPress), and while it was running fine, a couple of days ago I started having downtime. I tried to solve the problem, and checking the error_log I saw many timeouts and messages that seemed to be related to them. The server is currently logging this kind of error:

        2011/07/14 10:37:35 [warn] 2539#0: *104 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/2/00/0000000002 while reading upstream, client: 217.12.16.51, server: www.mydomain.com, request: "GET /page/2/ HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.mydomain.com/"
        2011/07/14 10:40:24 [error] 2539#0: *231 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 46.24.245.181, server: www.mydomain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.google.es/search?sourceid=chrome&ie=UTF-8&q=mydomain"

    I even saw a previous Server Fault discussion with a possible solution: edit /etc/php/etc/php-fpm.conf and set request_terminate_timeout=30s instead of ;request_terminate_timeout= 0. The server worked for some hours and then broke again. I edited the file again to leave it as it was and restarted php-fpm (service php-fpm restart), but no luck: the server worked for a few minutes and then hit the problem over and over. The strange thing is that although the services are running, htop shows there is no CPU load (see image) and I really don't know how to solve the problem. The config files are on pastebin: the php-fpm.conf file is here, the /etc/nginx/nginx.conf is here, and the /etc/nginx/sites-available/www.mydomain.com is here. Please help :(
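
    For reference, a hedged sketch of the nginx side of the timeout/buffering pair (these are standard nginx fastcgi directives, the values are illustrative rather than tuned for this site, and the usual fastcgi_param include lines are omitted):

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_read_timeout 300;    # how long nginx waits for PHP-FPM before "upstream timed out"
            fastcgi_buffers 16 16k;      # bigger buffers reduce "buffered to a temporary file" warnings
            fastcgi_buffer_size 32k;
        }

    Whatever value request_terminate_timeout ends up with in php-fpm.conf, keeping it below fastcgi_read_timeout means PHP-FPM kills a stuck request (and logs which script it was) before nginx gives up silently.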

    Read the article

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E: drive and (I believe!) including the system state:

        wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet

    I'd like to delete old backups, but I'm concerned that all the information I can find says to use wbadmin to delete old system state backups, and vssadmin to delete other backups. As far as I know, my backups ARE system state backups, but are using VSS on E: for storage, so I'm worried about trying either of these techniques for fear of losing all my backups. This is a home network, so I don't have a spare server to test this on. I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command. For reference, here's the output of vssadmin show shadows:

        Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Contained 1 shadow copies at creation time: 07/01/2011 08:12:05
        Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83
        Originating Machine: x.y.com
        Service Machine: x.y.com
        Provider: 'Microsoft Software Shadow Copy provider 1.0'
        Type: DataVolumeRollback
        Attributes: Persistent, No auto release, No writers, Differential
        [... repeated a lot...]

    vssadmin show shadowstorage:

        Shadow Copy Storage association
        For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Used Shadow Copy Storage space: 0 B
        Allocated Shadow Copy Storage space: 0 B
        Maximum Shadow Copy Storage space: 5.859 GB

        Shadow Copy Storage association
        For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Used Shadow Copy Storage space: 0 B
        Allocated Shadow Copy Storage space: 0 B
        Maximum Shadow Copy Storage space: 40.317 GB

        Shadow Copy Storage association
        For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Used Shadow Copy Storage space: 168.284 GB
        Allocated Shadow Copy Storage space: 171.15 GB
        Maximum Shadow Copy Storage space: UNBOUNDED

    wbadmin get versions:

        Backup time: 07/01/2011 03:00
        Backup target: 1394/USB Disk labeled xxxxxxxxx(E:)
        Version identifier: 01/07/2011-03:00
        Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State
        [... repeated a lot...]
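
    On the /for vs /on question, a hedged illustration (the syntax is standard vssadmin, the 150GB figure is just an example): /for names the volume whose shadow copies are being limited, and /on names the volume that physically stores them. In this setup both are E:, so capping the storage would look like:

        vssadmin resize shadowstorage /for=E: /on=E: /maxsize=150GB

    Once the cap is hit, the oldest shadow copies on E: are pruned automatically, which is effectively the "restrict the space used on E:" approach described above.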

    Read the article

  • Firefox not using Kerberos despite being configured to

    - by Nicolas Raoul
    I am deploying Linux/Firefox on a corporate Kerberos network. I followed this Kerberos-on-Firefox procedure, but Firefox still does not authenticate via the company's Kerberos. I am using Firefox 3.0.18 on Red Hat EL Server 5.5. Here is what I did: run kinit on the command line to create a Kerberos ticket; check with klist (the ticket is valid until tomorrow, the service principal is krbtgt/[email protected]); in Firefox, set network.negotiate-auth.trusted-uris and network.negotiate-auth.delegation-uris to .dc.thecompany.com; load the company's portal page via its full hostname, http://server37.thecompany.com/alfresco (note: server37 is actually the machine I am running Firefox on, but that should not be a problem I guess). PROBLEM: the company's intranet portal still serves me the login/password page. The same portal correctly uses Kerberos on Internet Explorer/Windows 7 machines with the same settings, and shows the user's personal page. The server does not see any Kerberos request coming in. Did I do something wrong? I enabled NSPR_LOG_MODULES=negotiateauth:5 as explained here, but the log file stays empty.
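
    Two hedged things worth checking (the principal name below is an assumption based on the hostname in the question). First, trusted-uris is matched against the site being visited: .dc.thecompany.com does not cover server37.thecompany.com, so Negotiate would never even be attempted; trying .thecompany.com there is a cheap experiment. Second, confirm that a service ticket for the portal can be obtained at all, from the same shell where kinit succeeded:

        kvno HTTP/[email protected]
        klist    # should now list an HTTP/server37.thecompany.com service ticket

    If kvno fails, the problem is on the KDC/SPN side rather than in Firefox.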

    Read the article

  • Plesk Postfix local delivery error in maillog, how to troubleshoot

    - by RCNeil
    I'm using the PHP mail() function with Postfix on CentOS 6 and Plesk 10.4, and my email is not getting delivered to one particular address. My personal Gmail and Yahoo email addresses receive email from my server fine and do not produce errors. After a wonderful suggestion on here, I checked my mail logs, and this is the error I see:

        Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: from= <[email protected]>, size=645, nrcpt=1 (queue active)
        Apr 10 10:26:29 ######### postfix-local[8331]: postfix-local: [email protected], [email protected], dirname=/var/qmail/mailnames
        Apr 10 10:26:29 ######### postfix-local[8331]: cannot chdir to mailname dir name: No such file or directory
        Apr 10 10:26:29 ######### postfix-local[8331]: Unknown user: [email protected]
        Apr 10 10:26:29 ######### postfix/pipe[8330]: 19EA21827: to=<[email protected]>, relay=plesk_virtual, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered via plesk_virtual service)
        Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: removed

    [email protected] is the name I've declared in php.ini for

        sendmail_from = "[email protected]"
        sendmail_path = "/usr/sbin/sendmail -t -f [email protected]"

    and the recipient is supposed to be [email protected]. Is this an error on my side or the recipient's? Can I address this on my server? Many thanks, SF.
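
    A hedged reading of that log: the message never leaves the box, because Postfix hands it to Plesk's plesk_virtual/postfix-local transport, i.e. the recipient's domain is being treated as locally hosted on this server. Two quick checks (standard Postfix/Plesk paths, nothing below is specific to the real domain names):

        postconf virtual_mailbox_domains mydestination   # is the recipient's domain listed as local?
        ls /var/qmail/mailnames                          # the per-domain mailbox directories Plesk knows about

    If the recipient's domain shows up there but the mailbox does not actually live on this server (for example, the domain exists in Plesk without its mail service), removing or disabling that local mail domain lets Postfix relay the message out via DNS/MX instead.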

    Read the article

  • Looking for a host-based network monitoring solution

    - by Ole Martin Handeland
    Hi all! Problem: my hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was. Question: does anyone know of a host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used? What I don't want: I'm just guessing that someone will suggest I use Nagios, Munin, Zabbix, Cacti or MRTG - I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-) Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers: although it lists a lot of statistics and I could list the clients, it doesn't show me the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, but not see it in the log afterwards.

    Read the article

  • Unable to force Debian to do unattended install... libc6 wants interactive confirm

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install it appears libc6 is upgraded, and the install wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended, as it runs on Amazon's Elastic MapReduce (EMR) machines. But I'm running out of options. Here are a few things I've tried. This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:

        # set my CRAN repos... yes, I know there's a new convention where to put these.
        echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list

        # set the dpkg.cfg options per the previous SuperUser question
        echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
        echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
        export DEBIAN_FRONTEND=noninteractive

        # add key to keyring so it doesn't complain
        gpg --keyserver pgp.mit.edu --recv-key 381BA480
        gpg -a --export 381BA480 > jranke_cran.asc
        sudo apt-key add jranke_cran.asc
        sudo apt-get update

        # install the latest R
        sudo apt-get install --yes --force-yes r-base

    But this script hangs at an interactive request for input. OK, so I tried stopping the services first:

        sudo /etc/init.d/mysql stop
        sudo /etc/init.d/exim4 stop
        sudo /etc/init.d/cron stop
        sudo apt-get install --yes --force-yes libc6

    This does not work and the interactive screen comes back, but this time with only cron listed as the service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down, so I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
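
    A hedged guess at the root cause, plus two sketches that may help. sudo does not pass most environment variables through, so the exported DEBIAN_FRONTEND probably never reaches apt; passing it on the sudo command line (and using -o options instead of editing dpkg.cfg) keeps the whole run noninteractive:

        sudo DEBIAN_FRONTEND=noninteractive apt-get install --yes \
            -o Dpkg::Options::="--force-confdef" \
            -o Dpkg::Options::="--force-confold" \
            r-base

    The services prompt itself is a debconf question, so it can also be pre-answered (the exact template name varies by glibc version; debconf-show libc6 lists what is available on a given image):

        echo 'libc6 libraries/restart-without-asking boolean true' | sudo debconf-set-selections

    Both of these are sketches against a stock Lenny image, not something verified on EMR.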

    Read the article

  • Problem with network after malware attack

    - by Cruelio
    I'm trying to help some friends with a Win XP machine. I got rid of the malware using Malwarebytes and HijackThis, but now they (I) have another problem. When the computer boots into Windows it seems fine. When I start Internet Explorer the browser window opens just fine, but nothing happens for a minute or two. After the two minutes of waiting, the network icon appears in the taskbar next to the clock, and then everything works. The computer is connected to the internet using an Ethernet adapter. I have looked at the Event Log and found an error from PerfNet with event ID 2004:

        <Provider Name="PerfNet" />
        <EventID Qualifiers="49152">2004</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>

    What I have tried so far: in the Device Manager I uninstalled the Ethernet adapter and installed it again; I uninstalled and reinstalled the Windows File and Printer Sharing service; I verified that both the Server and Workstation services are started. What should I do next?

    Read the article

  • Win 2003 SBS - secure enough by default?

    - by Pekka
    I have to set up a Windows 2003 Small Business Server to work as a Subversion repository and possibly as an e-mail server later. The machine is a virtual one, hosted with a hosting company, and freshly initialized. I used the Security Configuration Wizard to deactivate all server roles. After I install Subversion, I will open the necessary ports for the service; in addition, obviously, RDP will stay open so I can remote-control the machine. Automatic updates are activated, and I will set up e-mail notification every time somebody logs on to the server. I'm a programmer and not a professional systems administrator, so I would like to know whether you would regard this as a sane and secure setup for a (publicly available) box hosting sensitive code and/or e-mail. Is there anything else I should do to make the machine secure? Is there anything I can do on a long-term basis to keep the machine secure, apart from monitoring the event log (as far as I can make sense of it) and seeing that any hotfixes are installed properly?

    Read the article

  • How to work around blocked outbound hkp port for apt keys

    - by kief_morris
    I'm using Ubuntu 9.10 and need to add some apt repositories. Unfortunately, I get messages like this when running sudo apt-get update:

        W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 5A9BF3BB4E5E17B5
        W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1DABDBB4CEC06767

    So I need to install the keys for these repositories. Under 9.10 we now have the option to do this:

        sudo add-apt-repository ppa:nvidia-vdpau/ppa

    See this Ubuntu help article for details. This is great, except that I'm running this on a workstation behind a firewall which blocks outbound connections to pretty much all ports except those required by secretaries running Windows and IE. The port in question here is the hkp service, port 11371. There appear to be ways to manually download keys and install them on apt's keyring. There may even be a way to use add-apt-repository or wget or something to download a key from an alternative server, making it available on port 80. However, I haven't yet found a concise set of steps for doing so. What I'm looking for is: how to find the public key for an apt package (recommendations for resources which have these, and/or tips for searching - searching for the key hash doesn't seem all that effective so far); how to retrieve a key (can it be done automatically using gpg or add-apt-repository?); and how to add a key to apt's keyring. Thanks in advance.
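
    A hedged sketch of the usual workaround: Ubuntu's keyserver also answers hkp requests on port 80, so a key can be fetched and added to apt's keyring in one step without touching port 11371 (the key IDs below are the two from the error messages above):

        sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 5A9BF3BB4E5E17B5
        sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 1DABDBB4CEC06767
        sudo apt-get update

    If even that is blocked, the key can be downloaded as a text file on any machine with web access (Launchpad shows it on each PPA's page) and imported with sudo apt-key add keyfile.asc.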

    Read the article

  • Windows 2008 server smart card security module problem

    - by chris13work
    Hi, I've got a smart card reader and a server application that uses it as a security module. If I run the application from a command prompt, everything is fine: the server runs and clients can connect to it. I tried to install the server as a Windows service and start it. The server starts, but always returns an authentication error because it cannot call the smart card to do encryption. Then I tried to start it with the Task Scheduler, with the trigger set to "on startup". The server starts as well, but still cannot access the smart card reader. Then I tried Remote Desktop to the machine and ran the server application from a command prompt there; the same error is returned. The situation is that the smart card reader only works in an active console desktop session. The server application uses the WINSCARD API to access the smart card reader. Any suggestions for how we can access the smart card reader from a running service? OS: Windows Server 2008. Smart card driver: Windows USB smart card reader. Smart card API: WINSCARD.

    Read the article

  • IIS SMTP unable to relay for domain on local network.

    - by MartinHN
    Hi, I have a network with the following servers: EXCH01, the Exchange server for @domain.com e-mail, and TEST-REP01, a local reporting server with IIS SMTP installed and configured. On TEST-REP01, a Windows service reads reports that are ready for delivery from a SQL Server. The reports are sent one at a time using the local SMTP server. It works perfectly, unless the recipient e-mail address is [email protected] - the domain that the EXCH01 server manages. Then I get the following error:

        Mailbox unavailable. The server response was: 5.7.1 Unable to relay for [email protected]

    What can I do to troubleshoot this further? I can't seem to find anything useful in the SMTP log:

        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 EHLO - +TEST-REP01 250 0 210 15 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 MAIL - +FROM:<[email protected]> 250 0 44 31 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 RCPT - +TO:<[email protected]> 550 0 50 28 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 QUIT - TEST-REP01 240 0 50 28 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 EHLO - +TEST-REP01 250 0 210 15 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 MAIL - +FROM:<[email protected]> 250 0 44 31 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 RCPT - +TO:<[email protected]> 550 0 50 28 0 SMTP - - - -
        2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 QUIT - TEST-REP01 240 0 50 28 0 SMTP - - - -

    Relay access on the local IIS SMTP (TEST-REP01) is set to allow 127.0.0.1, TEST-REP01 and other servers such as TEST-WEB01. The DNS domain on the local network is not domain.com but companyname-domain.local.

    Read the article

  • Determine wifi capabilities of Windows box (with WSUS install rules)

    - by Hagen von Eitzen
    I need to determine whether a computer is in fact a laptop with wifi capabilities (with the emphasis on wifi rather than laptop). More precisely, I want to distribute a piece of software I wrote via WSUS and Local Update Publisher to those clients. To this end, I want to create appropriate "package installable rules", that is, simple rules used by the Windows Update service on the client to decide beforehand whether or not an update/installation package is applicable. Typically, such installable rules are logical combinations of rules of type "File exists", "Registry key exists", "WMI query" or "MSI product installed", so I'd prefer one of those methods. The method I hope someone here can help me find should work with Windows 7 and Vista, and preferably also with XP. My guess is that a WMI query is the way to go, but I have little experience with that. I have found that one can, e.g., query for EnclosureType, and that might detect a laptop. However, I would be much happier if an actually available wifi interface were detected. Does anyone have an idea how to approach this? If there is anything you need me to clarify, don't hesitate to comment.
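
    One hedged WMI-based sketch: NDIS exposes each adapter's physical medium in the root\WMI namespace, and native 802.11 adapters on Vista/7 report medium type 9 (older XP-era drivers often report 1, WirelessLan), so a WQL query along these lines would match only machines with a wifi NIC present. It is worth testing against the actual hardware mix before trusting it as an installable rule:

        SELECT InstanceName, NdisPhysicalMediumType
        FROM MSNdis_PhysicalMediumType
        WHERE NdisPhysicalMediumType = 9 OR NdisPhysicalMediumType = 1

    Note the namespace: the rule has to be pointed at root\WMI rather than the default root\CIMV2, and a disabled or undriven adapter may not show up at all.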

    Read the article

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site), and a copy of the original local printer appears on the local computer with a different driver, without notice. Specifically: a Remote Desktop session is opened on a local computer that has a Brother HL-2140 USB printer connected and the associated software installed, with the correct driver shown under the "Advanced" button. The server has the same Brother software and driver. An application running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (good), or it does not print (bad), or it prints on another local printer connected to another local computer logged into the server (bad and odd). When it doesn't print (or prints somewhere else), we ask the customer to look for the (virtual) printer in the Remote Desktop view of the server, and the printer is gone. Then we ask the customer to look at the Printers folder on the local computer. There are several possibilities: 1. The printer is there, but the driver has mysteriously changed in the drop-down to "MDX something"; we have the customer select the other (proper) Brother driver, and all is well again - after the change, the virtual printer on the server (which now matches the local printer) appears again, and printing can resume. 2. A "copy" of the printer mysteriously appears in the local Printers folder, and after we delete it the virtual printer on the server appears again and printing can resume. Note that in both case 1 and 2, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile, endless errors are reported in the log file and the server eventually crashes, sometimes twice a day. I'm puzzled what changes the local printer driver, and I'm puzzled what loads the copy 2 or copy 3 of the printer into the local Printers folder. This entire scenario occurs randomly on any of 40+ local computers in eight different locations on different ISPs, all sharing one domain.

    Read the article

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question that it was a single machine running both the IIS and SQL instances. The application that is trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The connection string being used in the web.config for the application is:

        data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    And the application pool is set to NetworkService for Identity. So, in the web app, I get the following error:

        Cannot open database "MyDatabase" requested by the login. The login failed.
        Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

        Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

        USE [MyDatabase]
        GO
        SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
        FROM sys.database_principals SDP
        INNER JOIN sys.database_role_members SDRM ON SDP.principal_id = SDRM.member_principal_id
        INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    gets me this result:

        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DDLADMIN
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAREADER
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAWRITER

    which appears to me to indicate I've got the permissions right. Does anyone have any idea why it's not working, or how I can narrow the issue down some more?
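
    A hedged line of investigation: this error pattern (database user present, login still failing at the "open database" step) is what an orphaned user looks like - a database user whose SID no longer matches any server-level login, which can happen if the database was restored or the machine account was recreated. A sketch of the checks and the usual repair, using the placeholder names from above:

        -- is there a matching server-level login at all?
        SELECT name, sid FROM sys.server_principals WHERE name = 'MyDomain\WebServerMachineName$';

        -- if not, create it; if it exists, re-map the database user to it
        CREATE LOGIN [MyDomain\WebServerMachineName$] FROM WINDOWS;
        USE [MyDatabase];
        ALTER USER [MyDomain\WebServerMachineName$] WITH LOGIN = [MyDomain\WebServerMachineName$];

    Comparing the sid in sys.server_principals with the one in sys.database_principals for the same name is a quick way to confirm or rule this out.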

    Read the article

  • Wireless router that supports Bonjour between wired- and wireless-connected machines

    - by cefstat
    At home I have an ADSL modem that I also use as a router. For the record, it is a DavoLink DV-2020 provided by Tele2 in the Netherlands. It turns out that if one computer is connected to the router with a cable and another is connected wirelessly, they cannot see each other's services advertised through Bonjour (Apple's service discovery protocol, an implementation of Zeroconf). The combinations wired/wired and wireless/wireless work fine. This means that somehow wired- and wireless-connected machines are on different physical networks, although their IPs are in the same range (192.168.1.*). The modem in question doesn't provide many options I could play with. So I was thinking of buying a second router to connect to the modem, and then connecting all my machines to this second router. The problem is that I am afraid I will have the same problem again. I am looking for suggestions on routers that offer the functionality I want (Bonjour between wired and wireless connections). I suppose one solution would be Apple's AirPort Extreme Base Station, but at 160€ it is ridiculously expensive. Any other options out there? And why is it so difficult to find, in the technical specifications, whether wired and wireless connections are on the same physical network?

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be reached via two different domain names, both of which have their own SSL certificates. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since both domains need their own SSL certificate, our IIS 7.5 configuration has to contain two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed. Now, I know that, due to the nature of SSL communications, this is by design and you can't assign more than one SSL certificate per IP address and domain name. My question is: is there any way around this limitation, keeping one web application in IIS and having it serve two SSL certificates based on host name? I know that with the basic IIS configuration this is not possible, but I was thinking that with some combination of external load balancing and/or SSL acceleration servers/services we could have those servers handle the SSL negotiation and leave IIS clean, with one single application. I am not familiar at all with these technologies, hence the reason I am asking whether it is theoretically possible. If not, does anyone else know how to achieve this?
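
    It is at least theoretically workable: terminate SSL on a proxy or load balancer in front of IIS and forward plain HTTP to the single IIS site. A hedged sketch of what that looks like with nginx as the SSL-offloading tier (host names, certificate paths and the backend name are all placeholders, and if both names share one IP on the proxy the clients need SNI support; giving the proxy two IPs avoids that):

        server {
            listen 443 ssl;
            server_name www.first-domain.com;
            ssl_certificate     /etc/nginx/ssl/first-domain.crt;
            ssl_certificate_key /etc/nginx/ssl/first-domain.key;
            location / { proxy_pass http://iis-backend; proxy_set_header Host $host; }
        }
        server {
            listen 443 ssl;
            server_name www.second-domain.com;
            ssl_certificate     /etc/nginx/ssl/second-domain.crt;
            ssl_certificate_key /etc/nginx/ssl/second-domain.key;
            location / { proxy_pass http://iis-backend; proxy_set_header Host $host; }
        }

    A single certificate with both names listed as Subject Alternative Names (a UCC/SAN certificate) is the other common way to keep one IIS site and one IP, if reissuing the certificates is an option.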

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and write a batch file that does the following: mount the latest __RECENT snapshot LUN on the host using a specific drive letter, perform a TSM-based incremental backup, and dismount the LUN. This seems to work fine, except that sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
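
    On the return-code part specifically, a hedged batch-file sketch (the mount/unmount steps stand in for whatever SnapDrive commands the existing script already uses; X: and the helper script names are placeholders):

        @echo off
        call mount_recent_snapshot.cmd X:
        if errorlevel 1 exit /b 1

        rem dsmc sets ERRORLEVEL itself: 0 = success, 4/8 = warnings, 12 = failure
        dsmc incremental X:\ -subdir=yes
        set RC=%ERRORLEVEL%

        call unmount_snapshot.cmd X:
        exit /b %RC%

    Ending with exit /b %RC% is what lets the TSM scheduler record the command as failed or successful rather than always "completed".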

    Read the article

  • Write permissions on uploaded files - PHP & Linux

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice at servers) is Debian Lenny with Apache 2, PHP 5 and MySQL 5. The file transfer works correctly, but once the file has been written to the server it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
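
    A hedged sketch of doing the after-upload fix from PHP itself, so no manual chmod is needed (the connection handle and file names are placeholders; 0644 keeps the file world-readable but only owner-writable, which is enough for Apache to serve it):

        <?php
        // upload, then loosen the mode from within the same FTP session
        if (ftp_put($conn, $remote_file, $local_file, FTP_BINARY)) {
            ftp_chmod($conn, 0644, $remote_file);
        }

    The longer-term fix is usually the FTP daemon's umask (for example local_umask=022 in vsftpd.conf, if that happens to be the server in use), so uploads land as 644 in the first place instead of 600.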

    Read the article

  • Why is there wrong name server information at crsnic.net & gtld-servers.net?

    - by danorton
    Did I screw this up? I don't even know how this might have happened, so I'd like to learn. I'm trying out HostGator's reseller service and I bought a domain name through it, but I didn't want the default name servers, so I changed them during registration. After registration, the domain name record is correct everywhere except at whois-servers.net and whois.crsnic.net, and it looks like the DNS network is using that same information.

        $ whois -h whois.enom.com. example.com
        ...
        Name Servers:
        dns1.name-services.com
        dns2.name-services.com
        dns3.name-services.com
        dns4.name-services.com
        dns5.name-services.com
        ...

        $ whois -h whois.crsnic.net. example.com
        Domain Name: EXAMPLE.COM
        Registrar: ENOM, INC.
        Whois Server: whois.enom.com
        Referral URL: http://www.enom.com
        Name Server: NS1.HOSTGATOR.COM
        Name Server: NS2.HOSTGATOR.COM
        Status: clientTransferProhibited
        Updated Date: 01-jun-2010
        Creation Date: 31-may-2010
        Expiration Date: 31-may-2011
        >>> Last update of whois database: Tue, 01 Jun 2010 19:20:47 UTC <<<
        ...

        $ dig +norecurse @b.gtld-servers.net. example.com. NS
        ...
        ;; AUTHORITY SECTION:
        example.com.    172763  IN  NS  ns2.hostgator.com.
        example.com.    172763  IN  NS  ns1.hostgator.com.
        ...

    My next step is to let HostGator have a look, but first I want to better understand how this happened. Thanks.

    Read the article
