Search Results

Search found 19788 results on 792 pages for 'remote host'.
  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:

    - WIFI (en1): used for general traffic; gets an IP of 192.168.19.* via DHCP.
    - LAN (en0): used for specific traffic; static IP 192.168.2.10. Does not connect to a router, only to a switch for a direct connection.

    I have four IP addresses I need to access on the LAN:

    - 192.168.2.1
    - 192.168.2.21
    - 192.168.2.20
    - 192.168.2.30

    The rest of the traffic needs to go to WIFI. I have tried setting up a routing table entry for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:

        sudo route add -host 192.168.2.30 -interface en0

    This command killed my ability to use ping: it told me that ping could not allocate memory (is that even possible?). It also killed my WIFI access. Logging out and back in fixed the issue. This setup does not need to be permanent, so I am fine with a temporary routing arrangement.
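
    A minimal sketch of the usual approach, assuming all four hosts really are on the directly attached en0 segment: add one temporary host route per address so only that traffic uses the LAN, leaving the WIFI default route untouched. Treat this as a starting point rather than a tested recipe; route syntax varies between OS X releases.

        # Route only the four LAN hosts via en0; everything else keeps
        # using the WIFI default route. Cleared automatically on reboot.
        for ip in 192.168.2.1 192.168.2.20 192.168.2.21 192.168.2.30; do
            sudo route -n add -host "$ip" -interface en0
        done

        # To undo without rebooting:
        for ip in 192.168.2.1 192.168.2.20 192.168.2.21 192.168.2.30; do
            sudo route -n delete -host "$ip"
        done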

  • Strange ssh login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an ssh login. It says that the user "mail" logged in over ssh from a remote address:

        USER=mail
        SSH_CLIENT=92.46.127.173 40814 22
        MAIL=/var/mail/mail
        HOME=/var/mail
        SSH_TTY=/dev/pts/7
        LOGNAME=mail
        TERM=xterm
        PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
        LANG=en_US.UTF-8
        SHELL=/bin/sh
        KRB5CCNAME=FILE:/tmp/krb5cc_8
        PWD=/var/mail
        SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22

    I looked in /etc/shadow and found that no password is set for mail:

        mail:*:15316:0:99999:7:::

    I found these lines for the login in auth.log:

        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388)
        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password
        Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access
        Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2
        Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0)
        Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root

    ...and lots of auth failures for this user. There are no lines with a COMMAND string for this user. Nothing was found with rkhunter or by inspecting processes with "ps aux", and no suspicious connections show up in "netstat" (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.
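
    Not part of the original thread, but a hedged first-response sketch while the box is investigated: the mail account should never need an interactive ssh shell, so deny it one and look for anything left behind. Paths and service names are Debian defaults and may differ:

        # Lock the account and remove its login shell
        usermod -L -s /usr/sbin/nologin mail

        # Explicitly deny it in sshd_config, then reload sshd
        echo 'DenyUsers mail' >> /etc/ssh/sshd_config
        /etc/init.d/ssh reload

        # Check for planted ssh keys or cron jobs
        ls -la /var/mail/.ssh 2>/dev/null
        crontab -l -u mail

    The pam_winbind lines also suggest checking whether winbind is authenticating 'mail' against a directory, even though the local password is disabled in /etc/shadow.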

  • How best to handle end user notification in the event of system failure incl. email?

    - by BrianLy
    I've been asked to research ways of handling end user notifications when systems such as email are experiencing problems. Perhaps an example will make this a little clearer.

    We have a number of sites in different countries. Recently email was impacted at one of the sites, but it could have been a complete network outage. Information was provided by phone to local IT managers at the site, but onward communication was slower than some would have liked. It seems like almost everyone at the site has a personal mobile phone which could receive text messages, and perhaps access a remote website with postings on the situation. However, managing and supporting a system to text people on these relatively infrequent occasions would be very costly to do internally.

    What are other people doing to handle situations like this? Some things I've thought of include:

    - A database of phone numbers to text. Seems costly and not very easy to maintain for an already stretched IT group. Is there an external service that would let you do this?
    - Send a voicemail message to all phones on site.
    - Maintain an external website. This would not work in all situations (network failure), and there is a limit on the amount of info that can be posted externally. A site outage could be sensitive information in some situations. How could the site be password protected? Maybe OpenID/Facebook Connect would work.
    - Use a site like Yammer.com, which is publicly accessible but only by people with a company email address. Anyone using this for IT outage notifications?

    To me it looks like there is no clear answer, and that there are solutions for some subsets of users. To be comprehensive, a number of solutions would need to be combined. Any additional thoughts or recommendations? What worked or didn't work for your organization?

  • Microsoft Server 2003 Explorer shows duplicate local shares

    - by user52167
    Hi folks, I am new here and I could really use some advice, please. I am having a problem with our file server, which runs Windows Server 2003 R2. When I try to browse the shared folders using Explorer, several of the shared folders all appear to have the same name. Whenever I attempt to rename one of the affected folders, all of the affected folders' names change with it.

    I am logged on directly to the server using Remote Desktop. When I open a folder, all is as it should be: the proper content is there and the address bar displays the correct folder name and path. The share names are correct, so everything that needs to access the shared folders/files can do so. Browsing to the folders from the command line also looks correct. The only issue seems to be the incorrect display name when browsing in Explorer.

    Can anyone offer any advice or help as to how to resolve this issue? It would be most appreciated. Thanks

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
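
    A hedged way to narrow this down: the usual culprits are a session cookie, a user-agent check, or some other header the browser sends and wget does not. Replaying the request while adding the browser's headers one at a time shows which one the application cares about (the cookie value is a placeholder, and <some-token> stands for the real token as above):

        # Try a browser-like User-Agent first
        wget -d -O- \
            --header='User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0' \
            'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'

        # If that still fails, retry with the browser's session cookie to
        # test whether the app requires an already-established session
        wget -d -O- \
            --header='Cookie: blueprint2-staging=PASTE_BROWSER_COOKIE_HERE' \
            'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'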

  • The Web Hosting Conundrum for "not quite" developers

    - by saltcod
    Hey all, apologies if this post feels like it's been covered elsewhere, but I don't think it has. I've been down a winding web hosting road. To date, I've tried Joyent, Media Temple, Bluehost, Hostgator, and finally Linode. The reason for switching is likely obvious to everyone: speed. With the exception of the lightning-fast Linode, all of the shared hosts are absolutely slow.

    What to do when you're not really a "developer"? While I've grown addicted to the speed of Linode, I really don't feel like it's where I should be. I have this nagging feeling in the back of my mind that one of these days (likely soon) I'm going to run into something that I won't be able to figure out, and I'll have days worth of downtime. Just the other day, for example, I realized that one of my domains wasn't sending emails. After four(!) hours looking into the problem, I still can't get sendmail or postfix to work. Four hours!!

    I want to be a Drupal expert, not an Ubuntu expert. That's really the heart of my problem: I spend way too much time learning Ubuntu's ins and outs, and not nearly enough time working on Drupal. So here goes: is there a web host out there anywhere that offers the speed of Linode, but will let me focus on Drupal instead of sys-admin-ing? Thanks!

    [I know, I know. There are going to be lots of people who read this saying "just learn Ubuntu like a real developer". And I get that. I do. But when I work full-time and try to develop some of these sites in my evenings and weekends, I really feel like the sysadmin stuff gets in the way.]

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task to totally redo the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with flash games. To say the least, this isn't working for them.

    Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the Domain Controller for the future AD to also run a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself, when it comes to accessing file shares on the network via VPN.

    I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear.

    Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

  • How to set up bindings for development IIS 7.5 with a lot of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host a lot of sites on a newly installed Windows Server 2008 R2 machine. These sites are test "clones" of sites on the "real" web server, and they should be accessible only on the local network (domain). Developers should add new sites for our new customers. Project managers use this server to check progress and test new sites and new features, and QA people have access to the sites to test them before we copy them to the "real" web server.

    Developers have access to the IIS console; in fact they can use RDP to the test server with their developer domain credentials and permissions, and they are also local admins on that machine (tester). On our previous server I used different port numbers for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems (see the sketch after this list):

    - Manually adding DNS records is not an option, because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically?
    - I tried to add a DNS record for subdomains on the test server with a wildcard (*.tester), and that seemed to work for some time, but the change caused some bad problems in our domain network and the admin forbade me to mess with DNS. He said that I have to add a DNS record for every subdomain manually and that I cannot use wildcards, and there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative, outsourced from a different organization, and I can't do anything about that.
    - Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester domain server?

    So please, I need someone to give me some advice. What should I do? Are different port numbers the only option left? Thanks!
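
    One hedged possibility for the automation point above, assuming the DNS admin can be talked into delegating a single zone to an account the developers' tooling runs under: Windows DNS is scriptable with dnscmd, and the matching IIS binding can be added with appcmd, so "new site" becomes one small script instead of a ticket. The server name dns01, the zone tester.example.com, the IP, and the paths are all placeholders:

        rem Add an A record for a new test site in the delegated zone
        dnscmd dns01 /RecordAdd tester.example.com newsite A 192.168.1.50

        rem Bind the site in IIS 7.5 with the matching host header
        %windir%\system32\inetsrv\appcmd add site /name:"newsite" ^
            /bindings:http/*:80:newsite.tester.example.com ^
            /physicalPath:"D:\sites\newsite"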

  • Relative path incorrect in the view layer when hosting a Rails 3 app in a subdirectory using Passenger and Apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and I have encountered some relative path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory:

        /var/www/html/
            test_app (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
        </VirtualHost>

    The links in the app itself work just fine; all the links acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png: the app goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png.

    As far as I understand, this is an Apache setting problem rather than something to be done in Rails. If possible, I would prefer to have it contained in the Apache conf file rather than having to alter the code.
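
    Since the asker prefers an Apache-side fix, a hedged sketch of two options for the vhost (directory names taken from the question; whether SetEnv actually reaches the app depends on the Passenger version):

        # Option 1: alias the problematic public path back into the app,
        # so image requests missing the /test_app prefix still resolve
        Alias /system /var/www/html/test_app/system

        # Option 2: hint the sub-URI to Rails so view helpers generate
        # /test_app/... paths (hedged: Passenger 3 may require setting
        # relative_url_root in the app's config instead)
        SetEnv RAILS_RELATIVE_URL_ROOT /test_app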

  • OpenVZ container is running but does not show in vzlist nor can I find the private/conf files for the container

    - by Kakeakeai
    I was creating a new OpenVZ container on one of our VPS nodes when the power went out for that machine. After bringing the machine back online I could no longer access the container with CTID=101. I cannot destroy it using "vzctl destroy 101", I cannot enter or control it, and "vzlist -a" does NOT display any containers at all (this was a fresh node and the first container was being created).

    I decided to create a new container at this point, assuming that the old container just was not saved for some reason. However, when I went to add the IP/host to the new container, I got a warning that the IP is already in use. After doing a ping to the IP, I realized there was a machine on that IP. I SSHed into the machine and discovered it is the OLD container, somehow orphaned. I cannot find it on the filesystem, I cannot find it using VZ commands, and it is set to start on node boot, so it is impossible to shut down (even SSHing in and typing the "shutdown now" command just reboots the container rather than shutting it down).

    Is this a flaw in OpenVZ or am I missing something? I have all the outputs and logs if needed. Thank you all so much in advance.
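
    A hedged sketch of how one might hunt the orphan down, assuming the stock OpenVZ layout (paths and the sample config name vary between installs): the kernel still knows about a running container even when its config file is gone, and restoring a minimal config usually lets vzctl manage it again:

        # Ask the kernel which containers are actually running
        cat /proc/vz/veinfo

        # Look for the private area and any leftover config
        ls /vz/private/ /vz/root/ /etc/vz/conf/

        # If 101.conf is missing, restore one from a sample so vzctl
        # can see the container, then stop it and disable boot-start
        cp /etc/vz/conf/ve-basic.conf-sample /etc/vz/conf/101.conf
        vzctl stop 101
        vzctl set 101 --onboot no --save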

  • Reverse SSH tunnel: how can I send my port number to the server?

    - by Tom
    I have two machines, Client and Server. Client (who is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly-accessible IP address, using this command:

        ssh -nNT -R0:localhost:2222 [email protected]

    In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is because I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels.

    The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost), then how do we know which port refers to which Client? I'm aware that ssh reports the port number to the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so that the output of the actual ssh command is lost in the ether. Furthermore, I can't think of any other way to get the remote port from Client.

    Thus, I'm wondering if there is a way to determine this port on Server. One idea I have is to somehow use /etc/sshrc, which is supposedly a script that runs for every connection. However, I don't know how one would get the pertinent information here (perhaps the PID of the particular sshd process handling that connection?). I'd love some pointers. Thanks!
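
    Building on the asker's own /etc/sshrc idea, an untested, hedged sketch: sshrc runs at session start with the per-connection sshd as an ancestor, so that sshd's PID can be mapped to the listening port of its reverse forward and logged per client. This assumes the forward is already established by the time sshrc runs, which may not hold on every OpenSSH version:

        #!/bin/sh
        # /etc/sshrc - record which reverse-forward port belongs to
        # which connecting client (log path is arbitrary).
        SSHD_PID=$PPID
        PORT=$(netstat -tlnp 2>/dev/null | grep "$SSHD_PID/sshd" \
            | awk '{print $4}' | sed 's/.*://' | head -n1)
        CLIENT_IP=${SSH_CONNECTION%% *}
        echo "$(date '+%F %T') user=$USER client=$CLIENT_IP port=$PORT" \
            >> /var/log/reverse-tunnel-ports.log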

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server using Forefront UAG

    - by user684589
    I have spent some considerable time reading as many blogs and articles as I can to help me work out why my VM (running on Hyper-V) for DirectAccess has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question, so I know that the actual underlying internet connection is fully functional. Up until last week, DirectAccess was fully functional and had no issues. This is a recent problem which was preceded by a number of consistent crashes on the DA machine when access was attempted. Upon reboot, all seemed well until recently.

    I am not certain whether it is relevant, but prior to this I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to the Lightweight Active Directory Service (AD LDS), which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service again. For good measure I re-bound the network adapters (virtualised through Hyper-V), and DirectAccess claimed to be all happy again.

    However, as it stands, my machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark for the external-facing NIC. I have also tried removing the adapters and disabling and re-enabling them, but the problem persists. The intranet part of the VM (CorpNet) seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.

  • 2 routers at home - how to connect with VNC?

    - by Charles Leviton
    I have two routers at home. The first router is upstairs and is connected to the cable modem. The second router is downstairs and acts as a "signal booster" for the first. Devices connected to the upstairs router get IP addresses of the form 192.168.1.n; devices connected to the downstairs router get IP addresses of the form 192.168.2.n. I blindly followed instructions from a website to do this setup - just glad it works!

    Upstairs I have a PC running Windows 7 64-bit. Its assigned IP is 192.168.1.7, and it runs the VNC viewer. Downstairs I have a second PC, running Vista Home 32-bit, that is connected to the downstairs router and has IP address 192.168.2.114. The VNC server is running on it, listening on 5900. There is no firewall.

    When I try to connect to the downstairs PC from upstairs, it fails with the message "Failed to connect to server". I cannot ping it either. If I try to connect to the downstairs PC using a VNC viewer from another computer connected to the same downstairs router, it works like a charm. So what's the workaround if the viewer is on a different "network"? I don't have any problems making a Remote Desktop connection from the downstairs PC to the upstairs PC even though they are connected to different routers.

    Router information: upstairs - ASUS RT-N13U; downstairs - DD-WRT v24 RC-5. Thanks!

    P.S. I posted this on the UltraVNC forum as well, but that doesn't seem to have a lot of activity, so taking the liberty to multipost.
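
    A hedged sketch of the usual fix for this topology: the upstairs network has no route back to 192.168.2.0/24, so add one on the upstairs PC (below, from an elevated command prompt) or, better, on the upstairs router. The gateway 192.168.1.2 is an assumption standing in for whatever WAN address the downstairs router obtained from the upstairs one; and if the downstairs router NATs rather than routes, it will also need a port forward of 5900 to 192.168.2.114.

        rem Persistent route to the downstairs subnet via the downstairs
        rem router's WAN address (192.168.1.2 is a placeholder)
        route add 192.168.2.0 mask 255.255.255.0 192.168.1.2 -p

        rem Verify, then retry VNC to 192.168.2.114:5900
        route print | findstr 192.168.2.0
        ping 192.168.2.114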

  • Customizing tmux status to represent current working directory and files

    - by user69397
    I've been playing with this for a couple of days, so I'm sure I'm missing something simple. Love tmux. Using it for development, and I have so many windows that I need a better way of distinguishing them in the status bar and in the buffer list. Seeing a list of "bash" and "vim" isn't really helpful at all. And since they're all on the same host, I don't care about the hostname right now.

    I'd like to show the current working directory and the file being worked on. For example, when I view the list of buffers I currently see:

        (0) 0: vim [100x44] (1 panes) "murph"
        (1) 1: vim [100x44] (1 panes) "murph"
        (2) 2: bash- [100x44] (1 panes) "murph"
        (3) 3: bash* [100x44] (1 panes) "murph"

    Here's what I'd like to see:

        0: vim main.py ~/devl/project1
        1: vim index.html ~/devl/samples/staticfiles
        2: bash ~/devl/sandbox
        3: bash ~/.vimrc

    I'd like to see similar info in the status bar for each individual window. While I am able to get PWD to show up in the status bar of a window, it's only the working directory from where tmux was launched, which is no help as I change directories. I'm hoping this can be done without a bunch of scripts. Thanks all.
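
    For what it's worth, a hedged sketch for newer tmux releases (pane_current_path needs tmux 1.9+, and the #{b:...} basename modifier is newer still); in ~/.tmux.conf this names windows after the pane's working directory instead of the running command:

        # Window list: index plus basename of the pane's current directory
        set -g window-status-format '#I:#{b:pane_current_path}'
        set -g window-status-current-format '#I:#{b:pane_current_path}*'

        # Full path of the active pane in the status bar
        set -g status-right '#{pane_current_path}'

    Showing the file open in vim as well generally does require scripting (vim would have to expose its filename somewhere tmux can read), which the asker hoped to avoid.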

  • pfSense VPN Routing

    - by SvrGuy
    We use pfSense firewalls at three installations, with the following LAN networks:

    1.) Datacenter #1: 10.0.0.0/16
    2.) Datacenter #2: 10.1.0.0/16
    3.) HQ: 10.2.0.0/16

    All of these locations are linked via an IPsec tunnel that works properly: hosts in any of the above networks can communicate with hosts in any other of the above networks.

    Now, for our laptops etc. we established a road-warrior network, 10.3.0.0/16, and have implemented OpenVPN to link the laptops to Datacenter #1. This works great too, so our laptops can connect and communicate with any host in Datacenter #1 (anything on 10.0.0.0/16). The problem is that the laptops can't communicate with any hosts that Datacenter #1 can reach via its IPsec tunnel to Datacenter #2 (and/or HQ, for that matter).

    Does anyone know what to do, configuration-wise, on the pfSense box in Datacenter #1 to route packets received on the OpenVPN tunnel to Datacenter #2 over the IPsec tunnel? It could be a setting in OpenVPN or some sort of static route or some such. Any ideas?
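
    A hedged sketch of the two pieces this usually requires (menu locations vary by pfSense version). First, the OpenVPN server must push routes for the remote LANs to road-warrior clients, e.g. in the server's advanced options:

        # Send the other sites' LANs through the VPN
        push "route 10.1.0.0 255.255.0.0";
        push "route 10.2.0.0 255.255.0.0";

    Second, IPsec only carries traffic that matches a negotiated phase 2, so each tunnel needs an additional phase 2 entry pairing 10.3.0.0/16 with the remote LAN, mirrored on the far side.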

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the bare URL will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring will always be last.

    I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it will generate a higher load on the server and the server is intended to host the majority of the university's external-facing web content.

    Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
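
    A hedged mod_rewrite sketch: anchoring each condition on a path-segment boundary, rather than on end-of-string or careful ordering, removes the substring collision, because /drupaltest can no longer satisfy the /drupal condition while /drupal still matches with or without a trailing path:

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    Unlike the bare $ anchor the asker tried, this keeps matching after login, since %{REQUEST_URI} never includes the query string anyway.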

  • Strange issue in header location redirect

    - by hd01
    I have three websites (example1.com, example2.com, example3.com) hosted on a server. There is a page (test.php) on example1.com with just the code below inside it:

        <?php header('Location: http://example2.com/a.php'); ?>

    When I browse test.php, it goes to http://example1.com/a.php. It doesn't treat it as another domain's URL; it tries to find the page on itself. But when I put http://google.com instead of http://example2.com/a.php, it works correctly. I am really confused. What is the problem? Should I set some configuration on the server? (I am the administrator of the hosting server.)

    P.S. The server is behind a Pound server. Here's the Firebug Net output for example1.com/test.php.

    Response headers:

        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Request headers:

        Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding gzip, deflate
        Accept-Language en-us,en;q=0.5
        Connection keep-alive
        Cookie mycookie
        Host example1.com
        User-Agent Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1
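
    Given the Pound detail at the end, a hedged guess at the culprit: Pound's RewriteLocation feature rewrites Location headers that appear to point back at the listener, and with all three domains behind the same Pound instance, the redirect gets rewritten to the requesting host - which matches the Firebug output above. If so, disabling it in pound.cfg would look roughly like this:

        ListenHTTP
            Address 0.0.0.0
            Port    80
            # Stop Pound from rewriting Location headers
            RewriteLocation 0
            # (existing Service/Backend sections unchanged)
        End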

  • stopping fastcgi_cache for php-enabled custom error page

    - by Ian
    Since enabling the fastcgi_cache on my nginx server, my PHP-enabled custom error page has suddenly stopped working, and I'm getting the internal 404 message instead.

    In nginx.conf:

        fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 loader_threshold=2000;
        map $http_cookie $no_cache {
            default 0;
            ~SESS 1;
        }
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        add_header X-My-Cache $upstream_cache_status;
        map $uri $no_cache_dirs {
            default 0;
            ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) 1;
        }

    The cache-relevant stuff in my fastcgi.conf:

        fastcgi_cache MYCACHE;
        fastcgi_keep_conn on;
        fastcgi_cache_bypass $no_cache $no_cache_dirs;
        fastcgi_no_cache $no_cache $no_cache_dirs;
        fastcgi_cache_valid 200 301 5m;
        fastcgi_cache_valid 302 5m;
        fastcgi_cache_valid 404 1m;
        fastcgi_cache_use_stale error timeout invalid_header updating http_500;
        fastcgi_ignore_headers Cache-Control Expires;
        expires epoch;
        fastcgi_cache_lock on;

    If I disable the fastcgi_cache, the PHP-enabled 404 page works as it has for years. How would I disable the cache for the custom error page?
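
    A hedged sketch of one way to carve the error page out of the cache, reusing the question's own $no_cache_dirs map; /page-unavailable is a placeholder for whatever URI error_page actually points at:

        # nginx.conf - make the error page URI always bypass the cache
        map $uri $no_cache_dirs {
            default 0;
            ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) 1;
            ~^/page-unavailable 1;
        }

    Dropping the "fastcgi_cache_valid 404 1m;" line is the blunter alternative, at the cost of never caching 404 responses at all.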

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, and I am trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php, it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php, I get an ugly Apache 404 Not Found error message.

    I looked in both httpd.conf files and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have this code:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file, which looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
        order deny,allow
        allow from none
        deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
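
    A hedged guess based on the symptoms: CentOS ships with AllowOverride None for /var/www/html, in which case the .htaccess file on the new server is silently ignored and Apache serves its own 404 instead of routing through index.php. A sketch of the same vhost with overrides enabled:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            <Directory /var/www/html>
                Options +FollowSymLinks
                AllowOverride All
            </Directory>
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>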

  • What can be the reason for phpMyAdmin login not working (no reaction at all on submit)?

    - by Ivan
    When I open "http://localhost/phpmyadmin/", enter "root" as the user name and my MySQL root password, and press Go, then: if I am using Firefox, I am offered to download an index.php file (of zero length); if I am using Opera 11, it says "Connection closed by remote server".

    Following recommendations, I removed all packages related to phpMyAdmin, PHP, MySQL and Apache, and then reinstalled them step by step (instead of just issuing apt-get install phpmyadmin and relying on the system to install the whole LAMP stack via dependencies, as I had done before). The only change I got was that Firefox stopped offering to download index.php - now when I press OK to submit my password, there is no visible reaction at all.

    What may the reason be and how do I fix it? I use up-to-date Xubuntu 11.04. Reinstalling the whole LAMP stack and phpMyAdmin did not help, and neither did removing AppArmor. I tried to use SQLBuddy instead, but there is exactly the same problem. So I think the problem is not in phpMyAdmin but in MySQL, Apache or something else. MySQL seems to work if I use the command line to access it. Apache and PHP seem to work also, as the login page of phpMyAdmin displays correctly.
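
    A hedged diagnostic sketch: a zero-length index.php plus "connection closed" usually means PHP is dying mid-request rather than misrouting, so the Apache error log and the PHP MySQL extension are the first things to check (package names as on Ubuntu 11.04):

        # Watch the error log while reproducing the failed login
        sudo tail -f /var/log/apache2/error.log

        # phpMyAdmin's login step needs these extensions
        php -m | egrep 'mysql|mcrypt'

        # Reinstall and restart Apache if either is missing
        sudo apt-get install --reinstall php5-mysql php5-mcrypt
        sudo service apache2 restart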

  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to iptables, but I am trying to set up a secure server to host a website and allow SSH. This is what I have so far:

        #!/bin/sh
        i=/sbin/iptables

        # Flush all rules
        $i -F
        $i -X

        # Setup default filter policy
        $i -P INPUT DROP
        $i -P OUTPUT DROP
        $i -P FORWARD DROP

        # Respond to ping requests
        $i -A INPUT -p icmp --icmp-type any -j ACCEPT

        # Force SYN checks
        $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Drop all fragments
        $i -A INPUT -f -j DROP

        # Drop XMAS packets
        $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # Drop NULL packets
        $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # Stateful inspection
        $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT

        # Allow established connections
        $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow unlimited traffic on loopback
        $i -A INPUT -i lo -j ACCEPT
        $i -A OUTPUT -o lo -j ACCEPT

        # Open nginx
        $i -A INPUT -p tcp --dport 443 -j ACCEPT
        $i -A INPUT -p tcp --dport 80 -j ACCEPT

        # Open SSH
        $i -A INPUT -p tcp --dport 22 -j ACCEPT

    However, I've locked down my outgoing connections, and it means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated.

    James
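
    A hedged sketch of the missing pieces, in the script's own style: with an OUTPUT policy of DROP you need explicit egress rules for DNS (UDP and TCP port 53), and an ESTABLISHED rule on OUTPUT so replies to the inbound SSH/HTTP connections you accepted can actually leave the box:

        # Allow outgoing DNS lookups (both UDP and TCP port 53)
        $i -A OUTPUT -p udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
        $i -A OUTPUT -p tcp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT

        # Let replies to already-accepted inbound connections back out
        $i -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT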

  • PHP Not Automatically Updating With ?reload=1

    - by user32007
    A friend built a ranking system on his site, and I am trying to host it on mine via WordPress and GoDaddy. It updates for him, but when I load it on my site it works for 6 hours, and then as soon as the reload is supposed to occur, it errors and I get a 500 timeout error.

    His page is at jeremynoeljohnson.com/yakezieclub. My page is currently at http://sweatingthebigstuff.com/yakezieclub, but when you add ?reload=1 it will give the error. Any idea why this might be happening? Any settings that I might need to change?

    Here is the top of the index.php file. I'm not sure which part of it is messing up. I literally uploaded the same code as him. Here's the reload part:

        $cachefile = "rankings.html";
        $daycachefile = "rankings_history.xml";
        $cachetime = (60 * 60) * 6; // every 6 hours, the cache refreshes
        $daycachetime = (60 * 60) * 24; // every 24 hours, the history will be written to - or whenever the page is requested after 24 hours has passed
        $writenewdata = false;

        if (!empty($_GET['reload'])) {
            if ($_GET['reload'] == 1) {
                $cachetime = 1;
            }
        }
        if (!empty($_GET['reloadhistory'])) {
            if ($_GET['reloadhistory'] == 1) {
                $daycachetime = 1;
                $cachetime = 1;
            }
        }

        if (file_exists($daycachefile) && (time() - $daycachetime < filemtime($daycachefile))) {
            // Do nothing
        } else {
            $writenewdata = true;
            $cachetime = 1;
        }

        // Serve from the cache if it is younger than $cachetime
        if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
            include($cachefile);
            echo "";
            exit;
        }

        ob_start(); // start the output buffer
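
    Not from the thread, but a hedged starting point: a 500 that only appears on the rebuild path often means the regeneration exceeds the host's PHP max_execution_time (shared-hosting limits tend to be low). Comparing the two hosts, and raising the limit if the host allows it, would look roughly like:

        <?php
        // Drop this in a test page on both hosts to compare limits
        echo ini_get('max_execution_time'), "\n";

        // At the top of index.php, before the slow rebuild runs
        // (hedged: some shared hosts ignore these overrides)
        set_time_limit(300);
        ini_set('max_execution_time', '300');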

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
    We're running a Windows Server 2003 R2 Standard 64-bit edition server. On this server we run a file server and provide remote login to our network through VPN. We do not currently use a domain setup; all user accounts are local accounts on the server, and each employee is given a unique account to log in with. The password is a randomly generated 16-character string, which makes it hard to remember, so we've basically had the password stored on each client machine (standard "Remember Me" functionality). This has worked well.

    However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server by submitting our credentials again. Then again, some others did not have to re-authenticate. Do you have any idea why this is? Is there a setting to prevent it? I've checked the logs, but I couldn't find anything of interest; then again, I'm not really sure what I'm looking for. Thanks in advance; I'll try to answer any additional questions you may have.

    Edit: When I say "login" or "authenticate" I mean through the standard Windows SMB protocol.

    Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables.

    I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary.

    Some examples from a "problem" cluster (screenshots in the original post):

    - Resource pool summary: looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
    - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
    - The real-time memory utilization graph of the top VM in the listing above: 4 vCPUs and 64GB RAM allocated, averaging under 9GB of use.
    - Summary view of the same VM.

    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage - tracking the peaks of "Active" versus time?

  • Windows VPN client connect on different port

    - by John Gardeniers
    Scenario: two Windows Server 2003 machines running RRAS VPNs. The firewall port-forwards 1723 to one of those machines for normal remote access. I'd like to find a way to connect to the second machine as well - not because I need to, but just because it's the sort of thing I reckon should be possible yet can't figure out how to do.

    Is it possible to have the Windows PPTP VPN client (on XP in this instance) connect on a port other than 1723? If so, I can simply port-forward another port to the second server. I've done a fair bit of Googling over the last few days and have only found others asking the same question but no answers. I have of course tried adding a port number to the host name or IP in the connection box, in various formats, but to no avail.

    While this might be possible with a third-party client, I'm really only interested in whether or not it can be done with the Windows built-in client and, if so, how. Perhaps there's a registry hack I'm not aware of?
