Search Results

Search found 56300 results on 2252 pages for 'local working'.


  • SSL Certificate Stops Working after Server Reboot on IIS7, W2K8

    - by Zac
    We recently upgraded from W2K3/IIS6 to W2K8/IIS7 and have been having problems with our SSL certificate (Thawte 123 SSL certificate) ceasing to work after rebooting. Initially, the intermediate certificates would stop working and we could repair the problem by reinstalling all of them after the reboot (annoying, but not the end of the world). Unfortunately, this no longer works. The certificate chain has been double-checked by several tools and people with decent knowledge, but no one has been able to identify the cause of the problem. The bindings in IIS have been checked as well, and the cert itself is still valid. NOTE 1: I have seen THIS question, which seems very similar, but there is no satisfactory answer in that post and it's a year old, so it's not likely to get one any time soon. NOTE 2: I'm asking this on behalf of a co-worker, so I won't be able to provide instant feedback to any questions/suggestions, but I will pass it on. The URL is http://www.flirtalike.com / https://www.flirtalike.com. (Screenshots omitted.)
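
    For reference, a hedged first check on W2K8: the HTTP.SYS binding and the machine certificate store can be inspected and repaired from an elevated prompt. The certificate file name below is hypothetical; intermediates that keep disappearing after a reboot have sometimes been traced to the cert sitting in a user store rather than the Local Computer store, which certutil writes to.

        netsh http show sslcert
        certutil -addstore CA thawte_intermediate.cer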

    Read the article

  • ubuntu qcow2 image for local usage

    - by aisbaa
    I'm using KVM and I would like to run Ubuntu Server on it. My goal is to run a DB2 database instance for development. Are there ready-to-use Ubuntu qcow2 images online for this purpose, or should I install it from the live CD? I've found the UEC/Images instructions, but at launch I get:

        $ kvm -fda ${floppy} -drive if=virtio,file=./disk.img -boot a
        ...
        Nothing to boot: No such file or directory (http://ipxe.org/...)
        No more network devices
        No bootable device.

    Solution: I haven't found a pre-installed Ubuntu virtual machine image online, so the solution is to install it yourself.
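
    A minimal install-it-yourself sketch with KVM, for the record; the memory size and ISO file name are assumptions, not from the question:

        # create an empty qcow2 disk, then boot the installer from the ISO
        qemu-img create -f qcow2 disk.qcow2 20G
        kvm -m 1024 -drive if=virtio,file=disk.qcow2 \
            -cdrom ubuntu-10.04-server-amd64.iso -boot d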

    Read the article

  • Rsync plugin to many local wordpress installs via script or cli

    - by Nick Abbey
    I am maintaining a large number of WordPress installs on a production server, and we are looking to deploy InfiniteWP for managing these installs. I am looking for a way to script the distribution of the plugin folder to all of these installs. On server wp-prod, all sites are stored in /srv//site/ and the plugin needs to be copied from ~/iws-plugin to /srv//site/wp-content/plugins/. Here's some pseudocode to explain what I need to do:

        array dirs = <all folders in /srv>
        for each d in dirs
          if exists "/srv/d/site/wp-content/plugins"
            rsync -avzh --log-file=~/d.log ~/plugin_base_folder /srv/d/site/wp-content/plugins/
          else
            touch d.log
            echo 'plugin folder for "d" not found' >> ~/d.log
          end
        end

    I just don't know how to make it happen from the CLI or via bash. I can (and will) tinker with a bash or Ruby script on my test server, but I'm thinking the command-line-fu here on SF is strong enough to handle this issue much more quickly than I can hack together a solution. Thanks!
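
    One way to turn that pseudocode into a shell script; a minimal sketch, assuming the /srv/<dir>/site/wp-content/plugins layout and the ~/iws-plugin folder named in the question:

        #!/bin/bash
        # Push the plugin folder into every site that has a plugins dir,
        # logging per site; sites that lack the dir get a note instead.
        for d in /srv/*/; do
            site="${d%/}"                      # e.g. /srv/example
            name=$(basename "$site")
            plugins="$site/site/wp-content/plugins"
            log="$HOME/$name.log"
            if [ -d "$plugins" ]; then
                rsync -avzh --log-file="$log" "$HOME/iws-plugin" "$plugins/"
            else
                echo "plugin folder for $name not found" >> "$log"
            fi
        done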

    Read the article

  • RAM module randomly stopped working

    - by nhinkle
    My laptop is a Dell Inspiron e1505. It came with 2x512 MB of RAM installed, and about a year and a half ago I decided to upgrade it to 2x1 GB. I bought two 1 GB memory modules from Newegg and installed them, and all worked fine. Just last night my laptop was working fine; this morning I booted up and there was a BIOS warning saying "amount of system memory has changed". I tried reseating the modules, but that didn't fix it. Then I removed each one individually, and determined that one of the sticks appears to no longer be working. This happened very abruptly - I hadn't noticed any problems which might have been indicative of impending failure. Does anybody have any clue what may have caused this, and whether there's any hope of making it work again? The memory only had a 30-day warranty, so I can't RMA it.

    Read the article

  • DNS request times out then succeeds on my local network. Why?

    - by Dan
    I have a W2K3 server that is the domain controller and also the DNS server. I wanted to make another DNS zone on my network called "something.local" and then make 'A' records to point requests like 'admin.something.local' and 'www.something.local' to machines on my network. I keep getting DNS timeouts, but after two tries it succeeds. Why would this happen? How can I troubleshoot? From my desktop I run:

        nslookup admin.something.local

    and get:

        Server:  server.domain.com.au.local
        Address:  192.168.0.10
        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        Name:    admin.something.local
        Address:  192.168.0.191

    If I go back the other way:

        nslookup 192.168.0.191

    I get:

        Server:  server.domain.com.au.local
        Address:  192.168.0.10
        Name:    admin.something.local
        Address:  192.168.0.191

    My DNS server address is 192.168.0.10. The new DNS zone is not hooked up to Active Directory. I do not have much experience with DNS. Yesterday it was working fine. I have tried doing an 'ipconfig /flushdns' on both my desktop and the DNS server.
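
    One hedged troubleshooting step: two timeouts followed by success often mean the client is trying another configured resolver (or transport) before the one that actually hosts the zone, so it is worth listing the desktop's resolvers and querying the W2K3 box directly:

        rem List every DNS server the desktop is actually using:
        ipconfig /all | findstr /i "DNS"
        rem Query the W2K3 box directly; if this answers instantly,
        rem the timeouts are coming from some other configured resolver:
        nslookup admin.something.local 192.168.0.10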

    Read the article

  • Postfix block senders outside from local domains

    - by Tibor Peter Toth
    I would like to block every mail that comes in from a domain that is hosted on my server. Example: I have domain1.com on my mail server and I get a mail from outside with a sender address of [email protected]. Then I know it's spam, because domain1.com is on my server, so the sender cannot be coming from outside. I want Postfix to check for this and simply block this kind of email. I know there is a function for this in Postfix, I just don't know which one. Thanks.
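
    A hedged sketch of the usual recipe: reject the locally hosted domain in smtpd_sender_restrictions, but only after permitting your own networks and authenticated users, so legitimate local senders still get through.

        # main.cf
        smtpd_sender_restrictions =
            permit_mynetworks
            permit_sasl_authenticated
            check_sender_access hash:/etc/postfix/sender_access

        # /etc/postfix/sender_access
        domain1.com    REJECT sender address spoofs a locally hosted domain

        # then rebuild the map and reload:
        postmap /etc/postfix/sender_access
        postfix reload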

    Read the article

  • How to Confirm working of Nginx Caching Proxy

    - by Mark
    I have nginx on port 80 and Apache on port 8080 on the same server. I have configured nginx so that it acts as a reverse proxy (I am not sure whether it's working or not) using this tutorial: http://tumblr.intranation.com/post/766288369/using-nginx-reverse-proxy. Steps I followed to verify the proxy: I opened the same page on two different machines within an interval of 5 seconds, but in the Apache access.log every request shows a 200 response code. Does that indicate that caching is not working? And nginx's access.log shows nothing.
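
    A more direct check than watching two access logs; a minimal sketch, assuming the tutorial's proxy_cache is actually configured: nginx can report per response whether its cache answered.

        # inside the nginx server/location block that does proxy_pass:
        add_header X-Cache-Status $upstream_cache_status;

        # then, from any machine, request the page twice:
        curl -I http://yourserver/
        # "X-Cache-Status: HIT" on the repeat request means the cache served it;
        # "MISS" (or no header at all) means every request still reaches Apache.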

    Read the article

  • UDP blocked by Windows XP Firewall when sending to local machine

    - by user36367
    I work for a software development company, but the issue doesn't seem to be programming-related. Here is my setup:

    - Windows XP Professional with Service Pack 3, all updated
    - a program that sends UDP datagrams
    - a program that receives UDP datagrams
    - Windows Firewall set to allow inbound UDP datagrams on a specific port (Scope: Subnet)

    If I send a UDP datagram on any port to other, similar machines, it goes through. If I send the UDP datagram to the same computer running the sender program (whether using broadcast, the localhost IP, or the specific IP of the machine), the receiver program gets nothing. I've traced the problem down to the Windows XP Firewall, as Windows 7 does not have this problem (and I do not wish to sully my hands with Vista). If the exception I create for that UDP port in the WinXP firewall is set to a scope of Subnet, the datagram is blocked, but if I set it to All Computers or specifically enter my network settings (192.168.2.161 or 192.168.2.0/255.255.255.0) it works fine. Using different UDP ports makes no difference. I've tried different programs to reproduce this problem (ServerTalk to send and either IP Port Spy or PortPeeker to receive) to make sure it's not our code that's the issue, and those programs' datagrams were blocked as well. Also, that computer only has one network interface, so there is no additional network weirdness. I receive my IP from a DHCP server, so this is a straightforward setup. Given that it doesn't happen in Windows 7, I must assume it's a defect in the Windows XP Firewall, but I'd think someone else would have encountered this problem before. Has anyone encountered anything like this? Any ideas?
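
    For reference, the working workaround described above (a custom-scope exception instead of Subnet) can be scripted with the XP-era netsh firewall context; a hedged sketch, with the port number hypothetical:

        netsh firewall add portopening UDP 5000 "UDP test" ENABLE CUSTOM 192.168.2.0/255.255.255.0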

    Read the article

  • I want to build an debian apt site for local LAN updates

    - by user73504
    I have downloaded all of Debian's DVD images, and I have set up the Apache httpd service. I combined all the DVDs' files, but the .gpg file I need is missing and I can't create it; it looks like a signature file for the sources. So when I set my /etc/apt/sources.list as follows:

        deb http://192.168.1.102/apt/debian squeeze main contrib

    it tells me that GPG verification of the files failed. So I want to know: how do I create the gpg file, and do I need to do any other work besides putting the DVDs' files in Apache's htdocs path?
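
    The missing file is most likely dists/squeeze/Release.gpg, a detached GnuPG signature of the Release index. A hedged sketch of generating one with your own key (the key ID and document root path are assumptions):

        cd /var/www/apt/debian/dists/squeeze
        # regenerate the Release index over the combined DVD contents
        apt-ftparchive release . > /tmp/Release && mv /tmp/Release Release
        # sign it with your own key
        gpg --default-key YOUR_KEY_ID --armor --detach-sign --output Release.gpg Release

        # on each client, make apt trust that key:
        gpg --armor --export YOUR_KEY_ID | sudo apt-key add -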

    Read the article

  • Securing a local server physically

    - by Daniele
    We are an online business. We have a very powerful server with hard disk mirroring in our office that we are using for a variety of internal business-critical functions. We want to keep that machine in our office but we want to make sure it is as secure as possible (within reason). Obviously we are already backing it up everyday off-site. My question is more about not-too-expensive physical measures to protect the machine against thieves and disasters such as fire. What would you suggest?

    Read the article

  • Downgrade 'local' packages in Debian/Ubuntu

    - by Matt Joiner
    I recently unticked the "pre-released updates" option in Software Sources on my Ubuntu Lucid 10.04.1 installation. The Ubuntu wiki states the following regarding this source: "The proposed updates are updates which are waiting to be moved into the recommended updates queue after some testing. They may never reach recommended or they may be replaced with a more recent update." Roughly 20 installed packages have indeed not made it into recommended updates, and they occasionally cause conflicts when I install new software, as related packages of the newer version are unavailable now that I've disabled the source. How can I force a downgrade of all packages for which an earlier version exists in an enabled repository?
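
    A hedged sketch of the usual apt answer: either downgrade individual packages by explicit version, or pin the enabled archive above priority 1000, which permits downgrades across the board. The package name, version, and release pocket shown are illustrative:

        # per package:
        apt-cache policy somepackage                # shows the versions apt can see
        sudo apt-get install somepackage=1.2.3-0ubuntu1

        # or system-wide, via /etc/apt/preferences:
        #   Package: *
        #   Pin: release a=lucid-updates
        #   Pin-Priority: 1001
        # a priority above 1000 allows apt to downgrade, so afterwards:
        sudo apt-get update && sudo apt-get dist-upgrade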

    Read the article

  • NFSv4 - ACLs not working

    - by alex
    I've set up an NFSv4 server (Debian) and client (Ubuntu/Mythbuntu). It seems that uid/username mapping is working nicely out of the box (I get correct usernames in ls -l if the usernames match between the two boxes, even if the uids don't), but ACLs are not working. I've installed nfs4-acl-tools and I can read the ACLs correctly on the client, but they don't get applied. What needs to be done for ACLs to work? To clarify: username mapping works for regular permissions, while ACLs are applied using uid/gid (I can even set ACLs by uid and they work).
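
    For anyone comparing setups, a hedged example with the nfs4-acl-tools commands (the path, user, and domain are hypothetical). The symptom described, ACLs honoured by uid but not by name, usually points at idmapd translating names only for display while the server still enforces access by numeric uid, so the named principal must map to the same uid on the server:

        nfs4_getfacl /mnt/media/somefile
        nfs4_setfacl -a "A::alex@example.com:rwa" /mnt/media/somefile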

    Read the article

  • Configure Postfix to use external MX servers for delivery of local mail if user is unknown

    - by mr.b
    I have the following setup:

    - a Linux box with Postfix configured to be responsible for the example.com domain
    - the domain's MX servers are configured so that mail sent to example.com goes to Google's mail servers
    - several user accounts exist on the Linux machine (the same machine also hosts the example.com site)

    When someone from the outside attempts to send mail to an address ending in @example.com, it gets routed to Google mail (and handled appropriately there). When the Linux machine sends mail to the outside world, it is delivered correctly, since reverse DNS and SPF records are configured so that the machine is a valid sender for the example.com domain (along with Google's mail servers). However, here's the problem: when a PHP application (hosted on the Linux box) tries to send mail to [email protected] and someuser doesn't exist on the Linux box, it fails. It doesn't even consult Google's mail servers; the local Postfix smtpd concludes that "someuser" is unknown. So, the question is: how do I tell Postfix to relay mail sent to the @example.com domain to Google's mail servers (the servers specified in the MX records) if, and only if, the mailbox is not found locally?
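
    A hedged sketch of one standard recipe: let smtpd accept every recipient (by emptying local_recipient_maps) and have the local delivery agent hand names it cannot find to the domain's real MX via fallback_transport. Both are documented Postfix parameters; the MX host shown is Google's primary and is an assumption about this setup.

        # main.cf
        local_recipient_maps =
        # brackets suppress a second MX lookup on that hostname:
        fallback_transport = smtp:[aspmx.l.google.com]

        # then:
        postfix reload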

    Read the article

  • Mac OS X 10.6.5 and link-local addresses (169.254.x.x)

    - by WMR
    Starting with the latest update of Mac OS X (10.6.5), all Apple applications (Safari, Mail, iChat, etc.) no longer connect to the internet if the assigned IP address is from the 169.254.0.0/16 range. This is not a routing problem: I can still ping any server I want, and even connecting via command-line tools works. I know this problem could easily be fixed by changing the IP address to a more common RFC1918 address (e.g. 192.168.0.0/16), but this is what the ISP assigns via DHCP and I am not sure I can convince them (Xplornet) to change that. So I am wondering if there's a (hidden?) setting that would convince Apple applications that they are in fact still online.

    Read the article

  • Power Supply Not Working?

    - by Mr.Glass
    So I recently made some purchases for a new computer, including:

    - Motherboard: Gigabyte GA-EP45-UD3R
    - CPU: Intel Core 2 Quad 2.66 GHz
    - Power supply: Antec Basiq 500W

    UPDATE: I got the power supply working and the motherboard's lights come on and the video card fan spins, BUT the CPU fan does NOT spin and there is no video. Could it be a lack of power? The motherboard manual says that a power supply providing a 2x4 12V connector (which I do not have) is recommended by the CPU manufacturer when using an Intel Extreme Edition (I'm using a Core 2 Quad). UPDATE: Got it working, installing Windows 7 now. THANKS GUYS!!

    Read the article

  • mod_rewrite RewriteRule is not working

    - by buggy1985
    This is a follow-up to this question: Rewrite URL - how to get the hostname and the path? And a copy of this: mod_rewrite RewriteRule is not working. I have this rewrite rule:

        RewriteEngine On
        RewriteRule ^(http://[-A-Za-z0-9+&@#/%=~_|!:,.;]*)/([-A-Za-z0-9+&@#/%=~_|!:,.;]*)\?([A-Za-z0-9+&@#/%=~_|!:,.;]*)$ http://http://www.xmldomain.com/bla/$2?$3&rtype=xslt&xsl=$1/$2.xsl

    It seems to be correct, and exactly what I need, but it doesn't work on my server; I get a 404 page not found error. mod_rewrite is enabled, as the following simple rule works fine:

        RewriteEngine On
        RewriteRule ^page/([^/\.]+)/?$ index.php?page=$1 [L]

    Can you help? Thanks
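
    For what it's worth, one reason a rule shaped like the first one can never match: mod_rewrite tests the pattern against the URL path only, so the "http://host" prefix never appears in it, and the query string (everything after "?") has to be captured with a RewriteCond instead. A hedged sketch of that mechanic, with the /xsl/ path prefix hypothetical:

        RewriteEngine On
        # %{QUERY_STRING} holds everything after "?"; %1 refers to the
        # RewriteCond capture, $1 to the RewriteRule capture.
        RewriteCond %{QUERY_STRING} ^(.+)$
        RewriteRule ^xsl/(.+)$ http://www.xmldomain.com/bla/$1?%1&rtype=xslt [R,L]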

    Read the article

  • Git clone/pull across local network

    - by Tom Sarduy
    I'm trying to clone/pull a repository on another PC using Ubuntu Quantal. I have done this on Windows before, but I don't know what the problem is on Ubuntu. I tried these:

        git clone file:////pc-name/repo/repository.git
        git clone file:////192.168.100.18/repo/repository.git
        git clone file:////user:pass@pc-name/repo/repository.git
        git clone smb://c-pc/repo/repository.git
        git clone //192.168.100.18/repo/repository.git

    I always get:

        Cloning into 'intranet'...
        fatal: '//c-pc/repo/repository.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    or:

        fatal: repository '//192.168.100.18/repo/repository.git' does not exist

    More details: the other PC has a username and password; it's not a networking issue, since I can access and ping it; I just installed git with apt-get install git (dependencies installed); and I'm running git from the terminal (I'm not using git-shell). What is causing this and how do I fix it? Any help would be great! UPDATE: I have cloned the repo on Windows using git clone //192.168.100.18/repo/intranet.git without problems. So the repo is accessible and exists! Maybe the problem is due to user credentials?
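
    A hedged workaround sketch: the //host/share forms point at an SMB share, which git on Linux will not open directly (unlike Windows, which resolves UNC paths natively), so mount the share first, or skip SMB entirely if the other PC runs an SSH server. Credentials and paths below are placeholders:

        # mount the share, then clone it as a local path:
        sudo mount -t cifs //192.168.100.18/repo /mnt/repo -o username=user,password=pass
        git clone /mnt/repo/repository.git

        # or, over SSH:
        git clone user@192.168.100.18:/path/to/repo/repository.git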

    Read the article

  • Accessing shared resource on local computer from users of different physical location

    - by Joe
    Sounds like an easy task to some, but it's a difficult one for me... The main requirement is to set something up in offices at different locations, so that (1st question) users are able to log on to the domain without VPN when they are in one of the offices. Additionally, (2nd question) how can they log on to the domain server when they are on the road, like in a Starbucks? What do they have to do to connect to the domain after the VPN connection succeeds? Also, it's my understanding that we can't share resources between computers on different network segments, so (3rd question) what is the best solution to bridge/combine two network segments (two offices in different locations) so that computers in different locations can see each other? Thank you in advance for any response.

    Read the article

  • Local IP address same as Google's external

    - by GRIGORE-TURBODISEL
    I'm using Google's IPs as an example, but you get the idea. What happens if somebody configures a router's LAN address pool to range from 62.231.75.2 to 62.231.75.255, sets his computer's IP address to 62.231.75.232, and someone else on the network tries to access Google? Or better yet, is there any case in which someone on that network can, by merely attempting to access Google, accidentally bump into another computer on the network?

    Read the article

  • google video chat works faster on local LAN

    - by bashrc
    Recently the internet speed on our college LAN has dropped drastically (average file download speed is 13 Kbps :( ). However, Google Talk's video chat remains unbelievably fluent when done with someone within the college's LAN, while video chat is practically impossible with anyone outside the college network. My college has a proxy server through which all computers inside the college connect to the internet, and I suspect it is the cause. Also, how does Google Talk maintain video chat? Is there something in the mechanism that speeds up video chat between two clients with the same IP? Since all computers use the same proxy, their IP addresses would appear identical to Google's servers.

    Read the article

  • Nginx, as reverse proxy, could not proxy_pass to a domain pointing to the local JBOSS

    - by larryzhao
    My environment is Ubuntu 12.04, nginx 1.2.0, and Torquebox 2.0.3, which is actually JBoss AS 7. I have two apps deployed on Torquebox; it listens on 8080 and the apps have different hostnames, app1.mydomain.com and app2.mydomain.com. I added 127.0.0.1 app1.mydomain.com and 127.0.0.1 app2.mydomain.com to /etc/hosts, and then curl app1.mydomain.com:8080 and curl app2.mydomain.com:8080 both return correctly. Then I go to my nginx. I would like nginx to pass visits to www.app1.com on to app1.mydomain.com:8080, so I have the following configuration:

        # primary server - proxypass to torquebox
        server {
            listen 80;
            server_name www.app1.com;
            access_log off;
            error_log off;
            # proxy to Torquebox
            location / {
                proxy_pass http://app1.mydomain:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    But it doesn't work: curl www.app1.com returns nothing, and if I visit www.app1.com in Safari the HTTP return code is 404. I don't know why; need help.
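
    A hedged guess at the 404: Torquebox/JBoss selects the app by virtual host, and the config above forwards Host: www.app1.com (via $host), a name the container doesn't know; note also that the upstream in proxy_pass reads app1.mydomain, without ".com". A minimal sketch of the variant that usually works in this kind of setup:

        location / {
            # connect to the local container, but present the hostname
            # the app was actually deployed under:
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host app1.mydomain.com;
        }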

    Read the article
