Search Results

Search found 14745 results on 590 pages for 'setting'.

Page 431/590 | < Previous Page | 427 428 429 430 431 432 433 434 435 436 437 438  | Next Page >

  • Installing mod_mono on Ubuntu: handler doesn't seem to get registered

    - by Trevor Johns
    I'm trying to install mod_mono on Apache 2 (prefork MPM). I'm using Ubuntu Karmic, and just want an auto-hosting setup (so that any .aspx files are executed, similar to how PHP is normally set up). I did the following to install Mono:

        $ apt-get install libapache2-mod-mono mono-apache-server2 mono-devel
        $ a2dismod mod_mono
        $ a2enmod mod_mono_auto

    I've confirmed that mod_mono is getting loaded by Apache. However, any .aspx pages I try to load are returned unprocessed and still have an application/x-asp-net MIME type. It's as if the mod_mono handler never gets registered with Apache. Here's the contents of /etc/mod_mono_auto.load:

        LoadModule mono_module /usr/lib/apache2/modules/mod_mono.so

    And here's /etc/mod_mono_auto.conf:

        MonoAutoApplication enabled
        AddType application/x-asp-net .aspx
        AddType application/x-asp-net .asmx
        AddType application/x-asp-net .ashx
        AddType application/x-asp-net .asax
        AddType application/x-asp-net .ascx
        AddType application/x-asp-net .soap
        AddType application/x-asp-net .rem
        AddType application/x-asp-net .axd
        AddType application/x-asp-net .cs
        AddType application/x-asp-net .config
        AddType application/x-asp-net .dll
        DirectoryIndex index.aspx
        DirectoryIndex Default.aspx
        DirectoryIndex default.aspx

    I've even tried setting the handler explicitly:

        AddHandler mono .aspx .ascx .asax .ashx .config .cs .asmx .asp

    Nothing seems to help. Any ideas how to get this working?
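
    If the auto-hosting module keeps misbehaving, a minimal sketch of the explicit alternative, under the assumption that Karmic's mono-apache-server2 package installs the backend at /usr/bin/mod-mono-server2 (verifiable with dpkg -L mono-apache-server2): declare the application and bind the mono handler directly instead of relying on MonoAutoApplication.

        # /etc/apache2/conf.d/mono.conf -- explicit hosting sketch; the backend
        # path below is an assumption about the Karmic package layout
        MonoServerPath /usr/bin/mod-mono-server2
        MonoApplications "/:/var/www"
        <Location "/">
            SetHandler mono
        </Location>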

    Read the article

  • VPN with client-to-client direct connectivity?

    - by Johannes Ernst
    When setting up a VPN, clients (say client1 and client2) usually authenticate to a server, and together the three constitute the VPN. When client1 wishes to send a packet to client2, the packet usually gets routed by way of the server. Are there products, or configuration blueprints for products, where it is possible to send packets directly from client1 to client2 without going through the server (if the underlying network topology permits it, e.g. no firewalls in the way)? If not, is there a way by which client1 can send a packet to client2 by way of the server, without the server being able to snoop on the content of the packet (e.g. because the packet is encrypted with the public key of client2)? I just asked in the OpenVPN forum, and the answer I got was "not with OpenVPN". So my question is: are there other products with which this is possible? Open source preferred. One use case: client1 and client2, typically in separate offices, find themselves both at headquarters. Do they still need to talk to each other via the public internet? Links appreciated. Thank you.

    Read the article

  • Subversion all or nothing access to repo tree

    - by Glader
    I'm having some problems setting up access to my Subversion repositories on a Linux server. The problem is that I can only seem to get an all-or-nothing structure going: either everyone gets read access to everything, or no one gets read or write access to anything. The setup: the SVN repositories are located in /var/www/svn/repoA,repoB,repoC... and are served by Apache, with Locations defined in /etc/httpd/conf.d/subversion.conf as:

        <Location /svn/repoA>
            DAV svn
            SVNPath /var/www/svn/repoA
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>
        <Location /svn/repoB>
            DAV svn
            SVNPath /var/www/svn/repoB
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>
        ...

    svn-access.conf is set up as:

        [/]
        * =

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    But checking out URL/svn/repoA as userA results in "Access Forbidden". Changing it to

        [/]
        * =
        userA = r

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    gives userA read access to ALL repositories (including repoB) but only read access to repoA! So in order for userA to get read-write access to repoA I need to add

        [/]
        userA = rw

    which is mental. I also tried changing Require valid-user to Require user userA for repoA in subversion.conf, but that only gave me read access to it. I need a way to deny everyone access to every repository by default, giving read/write access only when explicitly defined. Can anyone tell me what I'm doing wrong here? I have spent a couple of hours testing and googling but come up empty, so now I'm doing the post of shame.
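
    For what it's worth, a hedged observation about the access file above: when one AuthzSVNAccessFile is shared by several SVNPath repositories, a section named [/repoA] means "the path /repoA inside whichever repository is being accessed", not "repository repoA". The per-repository form puts the repository name before a colon, which gives exactly the default-deny behaviour being asked for. A sketch:

        [/]
        * =

        [repoA:/]
        userA = rw

        [repoB:/]
        userB = rw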

    Read the article

  • How to remove an entry from the system tray?

    - by altvali
    I've searched for an answer to this one and I haven't found one yet. How do I remove a single item from Windows' system tray? I'm targeting Windows XP. Edit: This is not about preventing items from starting up. I want the program to keep running; I just need another script/program to remove the first one's entry from the system tray. Second edit: One approach that I can think of is to try to hide the intended app by modifying registry keys. On several test machines I've found some registry entries that match the system tray information at HKEY_USERS\{something that looks like S-1-5-21-682003330-1563985344-725345543-1003}\Software\Microsoft\Windows\CurrentVersion\Explorer\TrayNotify, with BalloonTip, IconStream and PastIconsStream containing systray information. The important one is IconStream. On other machines, these are found at hkey_classes_root/local/setting/software/microsoft/windows/currentversion/TrayNotify. I'm quite sure there's no danger in changing those specific registry values, but I don't know how to write code for that. Can anyone help me with the code, and with confirming whether this has the desired effect of hiding the systray icon for an active program?

    Read the article

  • Configuring port forwarding for SSH - no response outside LAN [migrated]

    - by WinnieNicklaus
    I recently moved, and at the same time purchased a new router (Linksys E1200). Prior to the move, I had my old router set up to forward a port for SSH to servers on my LAN, and I was using DynDNS to manage the external IP address. Everything worked great. I moved and set up the new router (unfortunately, the old one is busted so I can't try things out with it), updated the DynDNS address, and attempted to restore my port forwarding settings. No joy. SSH connections time out, and pings go unanswered. But here's the weird part (i.e., key to the whole thing?): I can ping and SSH just fine from within this LAN. I'm not talking about the local 192.168.1.* addresses. I can actually SSH from a computer on my LAN to the DynDNS external address. It's only when the client is outside the LAN that connections are dropped. This surely suggests a particular point of failure, but I don't know enough to figure out what it is. I can't figure out why it would make a difference where the connections originate, unless there's a filter for "trusted" IP addresses, which is perhaps just restricted to my own. No settings have been touched on the servers, and I can't find any settings suggesting this on the router admin interface. I disabled the router's SPI firewall and "Filter anonymous traffic" setting to no avail. Has anyone heard of this behavior, and what can I do to get past it?
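
    A hedged diagnostic that can split this problem in two, assuming the servers run something with tcpdump available (the interface name and port below are placeholders): watch for inbound SYNs on the server while someone connects from outside the LAN. If nothing arrives, the router or the ISP equipment (e.g. a modem doing its own NAT in front of the E1200) is dropping the traffic before it ever reaches the LAN host; if the SYNs arrive but get no reply, the problem is on the server side.

        # run on the server, then attempt the connection from outside the LAN
        sudo tcpdump -ni eth0 'tcp port 22 and tcp[tcpflags] & tcp-syn != 0'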

    Read the article

  • DNS Does Not Register at Off-site Locations

    - by Russ Warren
    First of all, let me give you the specifics of our setup:

        - Windows Small Business Server 2008 domain with all applicable updates on the DC
        - The DC does DHCP for the main site
        - The DC does DNS for all sites
        - 3 sites, including our headquarters where the DC is located
        - All sites are connected through OpenVPN SSL tunnels terminated by an Untangle box at each site
        - The 2 remote sites use the Untangle box as a DHCP server for their subnet, which assigns the DC as the primary DNS server
        - A collection of Windows XP and Windows 7 workstations connected to the domain

    Here's the issue: all of the workstations at the main site register with the DNS server on the domain controller fine. As they grab an IP from the DHCP server, it updates the DNS server with the new host record. I have 2 systems (each at a different remote site) that fail to register with the DNS server. I've attempted the following troubleshooting steps:

        - Confirmed the network adapter is using the DC as a DNS server
        - Confirmed 2-way traffic is possible between DC and workstation
        - Verified the "Register with DNS server" setting was checked in the adapter properties
        - Attempted ipconfig /registerdns and received no errors

    For the time being, I have set up a DHCP reservation for these systems and manually created a host record. This seems to work fine, but I need a solution for any new systems that go out there.
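
    Until the root cause is found, a hedged way to take the sting out of the manual workaround: the host-record creation can be scripted on the DC with dnscmd. The zone name, host name and address below are placeholders.

        rem run on the DC: create the forward record for a new remote-site machine
        dnscmd /recordadd example.local newworkstation A 192.168.20.50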

    Read the article

  • How to install Subversion on a 1&1 server from Windows?

    - by Miles M.
    I would like to start using Unfuddle for my project on a 1&1 server. I've never used Subversion or source control before, so I read a lot of documentation about it, but each time I get lost at the very beginning: I've downloaded the latest version of Subversion, but on every tutorial the way to follow is different. First, I saw on a lot of tutorials that you have to enter command lines. Is that ONLY for Linux? Like here: http://chwalisz.org/2007/08/05/subversion-on-11-shared-hosting/ I also found something completely different on some websites, which I think (correct me if I'm wrong) are the Windows tutorials, deeply different from the Linux one:

        http://www.codinghorror.com/blog/2008/04/setting-up-subversion-on-windows.html
        http://geekswithblogs.net/emanish/archive/2006/06/14/81905.aspx
        http://better-scm.shlomifish.org/subversion/Svn-Win32-Inst-Guide.html

    And I don't understand: Do I still have to put the Subversion files on the server? Do I have to install Apache, and where: on my computer or on my server? I'm working with WampServer, so I already have Apache installed, right? When they say a tutorial is for Windows, do they mean Windows servers or your own OS? Because my servers are on Linux. How can I install Subversion on a 1&1 Linux server from my Windows 7 computer? Thanks. That's a lot of questions, but it's really messy in my mind and I can't find anything clear.
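
    For the shared-hosting case in the first link, a minimal command-line sketch, assuming the 1&1 account allows SSH access (host name, user name and paths are placeholders). The first two commands run on the Linux server; the checkout can then be done from any OS, including Windows 7 with a client such as TortoiseSVN, so nothing needs to be installed on the local machine beyond a client.

        # on the server, over SSH: create a repository under the home directory
        mkdir -p ~/svn
        svnadmin create ~/svn/myproject

        # from any machine with an SVN client: check it out over ssh
        svn checkout svn+ssh://username@example-1and1-host.com/home/username/svn/myproject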

    Read the article

  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux admin-fu is a bit weak. Thus, I'm here looking for guidance with a CentOS server I'm about to build. I need to set up an Apache 2 web server for a few of our clients. I want each client's web content to be under their home directory (UserDir in apache.conf, right?) for the static HTML sites, and I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps, and I'm under the impression I'll want to look at suPHP as well. So basically I want to look like a small version of a shared web hosting company. Considering how common those are, I thought I'd easily find a nice current how-to guide on setting this all up, but so far I've had very little luck. I suspect my search words are off. So the questions (feel free to answer any or all):

        - Anyone have some solid links to current/modern guides that would help me set this all up? No, the Apache documentation site is not a guide ;-)
        - Since I have a mix of static sites and PHP apps, do I want/need both suexec and suPHP installed? If so, does that introduce any challenges I should be aware of? (See the vhost sketch below.)
        - Should I be looking at other options instead of suexec and suPHP?

    I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything). Thanks in advance for your help.
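
    A hedged sketch of the kind of per-client vhost this usually ends up as, assuming mod_suexec and mod_suphp are both installed (the directive names are those modules' own; the user, group, and paths are placeholders). suexec covers CGI/SSI, suPHP covers PHP, and they coexist in the same vhost.

        <VirtualHost *:80>
            ServerName clienta.example.com
            DocumentRoot /home/clienta/public_html
            # CGI and SSI run as the client via suexec
            SuexecUserGroup clienta clienta
            # PHP runs as the client via suPHP
            suPHP_Engine on
            suPHP_UserGroup clienta clienta
            <Directory /home/clienta/public_html>
                AddHandler application/x-httpd-suphp .php
                suPHP_AddHandler application/x-httpd-suphp
            </Directory>
        </VirtualHost>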

    Read the article

  • ServerName not working in Apache2 and Ubuntu

    - by CreativeNotice
    Setting up a dev LAMP server, and I wish to allow dynamic subdomains, aka ted.servername.com, bob.servername.com. Here's my sites-enabled file:

        <VirtualHost *:80>
            # Admin Email, Server Name, Aliases
            ServerAdmin [email protected]
            ServerName happyslice.net
            ServerAlias *.happyslice.net
            # Index file and Document Root
            DirectoryIndex index.html
            DocumentRoot /home/sysadmin/public_html/happyslice.net/public
            # Custom log file locations
            LogLevel warn
            ErrorLog /home/sysadmin/public_html/happyslice.net/log/error.log
            CustomLog /home/sysadmin/public_html/happyslice.net/log/access.log combined
        </VirtualHost>

    And here's the output from sudo apache2ctl -S:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80    is a NameVirtualHost
                default server happyslice.net (/etc/apache2/sites-enabled/000-default:1)
                port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/000-default:1)
                port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/happyslice.net:5)
        Syntax OK

    The server hostname is srv.happyslice.net. As you can see from apache2ctl, when I use happyslice.net I get the default virtual host; when I use a subdomain, I get the happyslice.net host. So the latter is working how I want, but the main URL does not. I've tried all kinds of variations here, but it appears that ServerName just isn't being tied to the correct location. Thoughts? I'm stumped. FYI, I'm running Apache 2.1 and Ubuntu 10.04 LTS.
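
    A hedged reading of the apache2ctl -S output above: the "default server" is simply whichever vhost Apache loads first, and 000-default sorts before happyslice.net in sites-enabled, so requests for the bare name match it before the wildcard vhost is considered. One sketch of a fix, assuming the stock placeholder site isn't needed on a dev box:

        sudo a2dissite 000-default    # stop the stock site from claiming "default server"
        sudo /etc/init.d/apache2 reload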

    Read the article

  • Providing access to a Samba server for VPN clients

    - by Kamil Kisiel
    We have some Windows users that connect to our network via VPN from home. They need to be able to connect to our Samba server and access a mapped network drive just as they do when they are on our LAN. The complication is that VPN clients are placed on a subnet other than our office LAN, and behind a firewall. What's the easiest way for me to allow them to still connect to the network share? The solutions I've currently seen involve setting up a WINS server for name resolution purposes and then tunnelling a bunch of the NetBIOS stuff through the firewall. However, that means I'd have to set up the VPN DHCP server to hand out the WINS address, something I'm not even sure is possible on the Cisco hardware we have. I'm thinking there must be an easier way. Should I use an LMHOSTS file? Or just map by IP address? Also, I'm not terribly familiar with Windows networking, so which ports would I need to pass through my firewall in order to get the file sharing through?
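
    On the last two questions, a hedged sketch of the map-by-IP approach, which sidesteps WINS/NetBIOS name resolution entirely. For classic Windows file sharing the ports involved are TCP 445 (direct-hosted SMB) plus, for the older NetBIOS-based variant, UDP 137-138 and TCP 139; with mapping by IP, allowing TCP 445 alone is often enough. The server address and share name below are placeholders.

        rem on the client, map the drive by IP address rather than by name
        net use Z: \\192.168.10.5\shared /persistent:yes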

    Read the article

  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup:

        - 2x Smart-UPS RT 6000 XL units with network management cards
        - PowerChute running on a network server, connected to the management cards of both UPSs
        - UPSs set to do a graceful shutdown via PowerChute when the battery duration is under 20 minutes
        - A command file that runs with PowerChute

    Although our setup is redundant, we do not have an equal load on each server, due to APC switches for single-power devices. The problem is that with unequal loads the batteries drain at different rates, which means the UPSs reach the specified low-battery duration at completely different times. UPS 1 may have run down to 5 minutes and be in desperate need of initiating a PowerChute shutdown, while UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down, takes all the servers with it, and then shuts down UPS 2 as well! What we need to happen is one of two things:

        1. PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low battery duration setting, and doesn't wait for both.
        2. The UPS with the heavier load expends its entire battery but does not shut down both UPSs, letting the load switch across to the UPS that still has runtime remaining. That way, when the remaining UPS reaches its low battery duration, it can proceed with the graceful shutdown via PowerChute.

    Hope that makes sense; any help is greatly appreciated!

    Read the article

  • Default Gateway solution on NAT'd network (best options)

    - by kwiksand
    I've recently changed a network from a bunch of machines exposed directly to the net to a more security-conscious, firewall-fronted network with a DMZ for public services. Everything's mostly working perfectly now, but I've got the old problem of NAT loopback, where a machine within the LAN wants to access a public service via the public/external IP. I've solved this problem previously in a small/SOHO environment simply by using the NAT loopback features of the router in use, or a simple iptables rule to do the same, but I want to make sure I make the most resilient choice with the least concern. It seems I can:

        1. Use iptables as I've said to DNAT and MASQUERADE, so the source/destination are rewritten and the connection works correctly, i.e.:

               iptables -t nat -A PREROUTING -d ip.of.eth0.here -p tcp --dport 8080 -j DNAT --to 192.168.0.201:8080
               iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.201 -p tcp --dport 8080 -j MASQUERADE

        2. Use split DNS, with internal mappings for public IPs (sketched below).
        3. Potentially do some route nastiness by setting the default gateway to use a different externally exposed IP to then come back in via the public route (messy).

    Someone mentioned putting the default gateway within the DMZ as well (on Server Fault), but I can't find the post again. I'm sure this is a common issue for many with NAT'd networks, but I've not really seen the perfect solve-all when it comes to fixing this problem. What is your opinion?
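
    A hedged sketch of option 2, assuming the LAN resolver is (or can be) dnsmasq; one line per public name overrides the answer for internal clients only, so no NAT trickery is needed on the firewall. The host name and DMZ address are placeholders.

        # /etc/dnsmasq.conf -- internal clients resolve the public name to the DMZ host
        address=/www.example.com/192.168.0.201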

    Read the article

  • Redundant OpenVPN connections with advanced Linux routing over an unreliable network

    - by konrad
    I am currently living in a country that blocks many websites and has unreliable network connections to the outside world. I have two OpenVPN endpoints (say vpn1 and vpn2) on Linux servers that I use to circumvent the firewall. I have full access to these servers. This works quite well, except for the high packet loss on my VPN connections. This packet loss varies between 1% and 30% depending on time and seems to have low correlation; most of the time it seems random. I am thinking about setting up a home router (also on Linux) that maintains OpenVPN connections to both endpoints and sends all packets twice, to both endpoints. vpn2 would forward all packets from home on to vpn1. Return traffic would be sent both directly from vpn1 to home, and also through vpn2.

        +------------+
        |    home    |
        +------------+
          |        |
          | OpenVPN|
          | links  |
          |        |
        ~~~~~~~~~~~~~~~~~~ unreliable connection
          |        |
        +----------+   +----------+
        |   vpn1   |---|   vpn2   |
        +----------+   +----------+
             |
        +------------+
        | HTTP proxy |
        +------------+
             |
         (internet)

    For clarity: all packets between home and the HTTP proxy will be duplicated and sent over different paths, to increase the chance that one of them arrives. If both arrive, the second one can be silently discarded. Bandwidth usage is not an issue, on either the home side or the endpoint side. vpn1 and vpn2 are close to each other (3 ms ping) and have a reliable connection. Any pointers on how this could be achieved using the advanced routing policies available in Linux?
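
    Not a full answer, but a hedged sketch of one possible building block for the duplication step: the iptables TEE target (kernel 2.6.35+ / xtables-addons) clones matching packets to a second gateway, which on the home router could clone tunnel-bound traffic onto the second tunnel. The interface name and peer address are placeholders, and de-duplication on the far side would still have to be solved separately.

        # on the home router: clone everything leaving via the first tunnel
        # to the second tunnel's peer address
        iptables -t mangle -A POSTROUTING -o tun0 -j TEE --gateway 10.9.0.1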

    Read the article

  • Connecting to MySQL Server from PHP Command Line (MAMP)

    - by Austin White
    First of all, I'm using Mac OS X 10.6, MAMP 1.9, PHP 5.3.4, and MySQL 5.1.44. I'm in the process of setting up a video encoding service for a site using Chris Boulton's PHP-Resque and Redis. Once the worker process is fired and the videos have been encoded, I need to save their locations to a MySQL database. The PHP script is being run from the shell, so that is where the issue begins. I import the MySQL settings, and when it attempts to connect I get the following errors:

        Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24

        Warning: mysqli::mysqli(): [2002] php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known (trying to connect via tcp://MYSQL_SERVER:3306) in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24

        Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24

        Warning: mysqli::set_charset(): Couldn't fetch MySQLi_Extended in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 32

    I realize that the error is occurring because it's trying to connect to tcp://MYSQL_SERVER:3306 when MySQL is on port 8889. I've been reading about Mac OS X and MAMP errors regarding the mysql.sock, and I've gone through multiple forums and tried various fixes, but none have worked. I've tried

        PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5.3/bin/:/opt/local/bin:/opt/local/sbin:$PATH

    and

        sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock

    but neither has worked. I even ran a search on my machine for "3306" to find where it's being set, but because that's the normal default, I'm guessing it's not being set explicitly. Any clues on how to fix this rather challenging error?
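
    A hedged pair of checks from the shell, using only paths the post already mentions, to confirm where MAMP's server is actually reachable before touching the PHP side (note the CLI PHP can read a different php.ini than Apache's PHP, so mismatched socket defaults between the two are a plausible cause here):

        # does MAMP's socket exist where the symlink fix assumes?
        ls -l /Applications/MAMP/tmp/mysql/mysql.sock

        # can the server be reached over TCP on MAMP's port?
        /Applications/MAMP/Library/bin/mysql -h 127.0.0.1 -P 8889 -u root -p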

    Read the article

  • Join Active Directory (Win 2k8 R2) to Open Directory (Snow Leopard)

    - by Tom O'Connor
    The vast majority of questions regarding the interoperability of Active Directory and Open Directory involve getting Mac clients to see an AD and auth against it. What we'd like to do is get a Windows 7 workstation to auth completely against Open Directory. We tried setting it up as an NT4-type PDC, and that doesn't work satisfactorily. We tried using pGina and the LDAP backend, which allows authentication but has no support for authorization, and as a result, if we mount an NFS share, the user has the rights to do anything they damn well please. Not ideal for security (totally bloody unacceptable, actually). We tried using a Samba server (a newer version than on the Open Directory server) as an intermediate, so that it knows about the LDAP server on the OD server but uses Samba 4 instead of v3. That didn't work either: we could log in, but couldn't mount, and when we did, we had the same rights as with pGina. If we right-click the mounted drive in Windows and have a look at the NFS UID, it returns -2, not the correct (mapped) UID. So the final plan I've got is to use an Active Directory inside a Windows 2008 R2 virtual machine. What I want to achieve is to have the Active Directory sync its user data from Open Directory (read-only would be fine). That way, we'd have the ability to connect Windows 7 clients to a "virtual domain" which would actually just grab information from OD's LDAP. All the information I've found is about how to go the other way. Does anyone know how we can do this?

    Read the article

  • Cisco ASA5505 won't sync with NTP

    - by Martijn Heemels
    Today I noticed that the clock on my Cisco ASA 5505 firewall was running about 15 minutes late, which surprised me since I've set up the NTP client. My two NTP servers, 10.10.0.1 and 10.10.0.2, are virtualized Windows Server 2008 R2 domain controllers, and both have the correct time. As shown below, the ASA knows about the two servers, can ping them, and seems to poll them periodically, so I suppose it can reach them both. The ASA claims its time source is NTP; however, the clock is unsynchronized, and neither host is marked as synced.

        Result of the command: "ping 10.10.0.1"

        Type escape sequence to abort.
        Sending 5, 100-byte ICMP Echos to 10.10.0.1, timeout is 2 seconds:
        !!!!!
        Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

        Result of the command: "sh ntp ass"

        address         ref clock       st   when   poll  reach   delay   offset   disp
        ~10.10.0.1      .LOCL.           1     78   1024    377     0.5   643.69   17.0
        ~10.10.0.2      10.10.0.1        2    190   1024    377     0.9   655.91   58.4
        * master (synced), # master (unsynced), + selected, - candidate, ~ configured

        Result of the command: "sh ntp stat"

        Clock is unsynchronized, stratum 16, no reference clock
        nominal freq is 99.9984 Hz, actual freq is 99.9984 Hz, precision is 2**6
        reference time is 00000000.00000000 (07:28:16.000 CEST Thu Feb 7 2036)
        clock offset is 0.0000 msec, root delay is 0.00 msec
        root dispersion is 0.00 msec, peer dispersion is 0.00 msec

        Result of the command: "sh clock detail"

        10:33:23.769 CEDT Tue Jun 26 2012
        Time source is NTP
        UTC time is: 08:33:23 UTC Tue Jun 26 2012
        Summer time starts 02:00:00 CEST Sun Mar 25 2012
        Summer time ends 03:00:00 CEDT Sun Oct 28 2012

    I've tried the basic steps of manually setting the time and removing and re-adding the time servers, to no avail. My ASA's NTP config is simply:

        ntp server 10.10.0.1
        ntp server 10.10.0.2

    Do I need to enable authentication to use a Windows NTP server? Any thoughts?
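
    For what it's worth, NTP authentication is optional on the ASA, so that is unlikely to be the blocker. A commonly suggested culprit (an assumption here, not something the output above can prove) is that the Windows Time service on a DC advertises poor-quality time, e.g. a large root dispersion, which strict NTP clients such as the ASA then refuse to sync to; the ~650 ms offsets above point the same way. A hedged sketch of tightening up time on the DC holding the PDC-emulator role:

        rem on the DC, as admin: sync it to an external source and mark it reliable
        w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update
        w32tm /resync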

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate out code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you do a REVOKE of a GRANT that is installed by default (an example is sketched below). That may be version dependent, as 11g seems more locked down in its default install. These are the settings I used in a recent setup; I'd like to know whether I missed anything, or where you disagree (and why).

    Database parameters:

        - Auditing: AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES
        - DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
        - GLOBAL_NAMES to TRUE
        - OPEN_LINKS to 0 (did not expect them to be used in this environment)
        - Character set: AL32UTF8

    Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords. Developer logins:

        CREATE PROFILE profile_dev LIMIT
          FAILED_LOGIN_ATTEMPTS 8
          PASSWORD_LIFE_TIME 32
          PASSWORD_REUSE_TIME 366
          PASSWORD_REUSE_MAX 12
          PASSWORD_LOCK_TIME 6
          PASSWORD_GRACE_TIME 8
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME 1080
          IDLE_TIME 180
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Application login:

        CREATE PROFILE profile_app LIMIT
          FAILED_LOGIN_ATTEMPTS 3
          PASSWORD_LIFE_TIME 999
          PASSWORD_REUSE_TIME 999
          PASSWORD_REUSE_MAX 1
          PASSWORD_LOCK_TIME 999
          PASSWORD_GRACE_TIME 999
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME unlimited
          IDLE_TIME unlimited
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema owner account:

        CREATE CLUSTER
        CREATE TYPE
        CREATE TABLE
        CREATE VIEW
        CREATE PROCEDURE
        CREATE JOB
        CREATE MATERIALIZED VIEW
        CREATE SEQUENCE
        CREATE SYNONYM
        CREATE TRIGGER
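
    On the REVOKE question above, a hedged example of the classic hardening step often applied here: several potentially dangerous built-in packages are granted to PUBLIC by default, and revoking them is a common lockdown, though whether it is safe depends entirely on what the applications actually use.

        REVOKE EXECUTE ON UTL_FILE FROM PUBLIC;
        REVOKE EXECUTE ON UTL_TCP FROM PUBLIC;
        REVOKE EXECUTE ON UTL_SMTP FROM PUBLIC;
        REVOKE EXECUTE ON UTL_HTTP FROM PUBLIC;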

    Read the article

  • Passwordless SSH not working - keys copied and permissions set

    - by Comcar
    I know this question has been asked, but I'm certain I've done what all the other answers suggest. Machine A:

        - used ssh-keygen -t rsa to create id_rsa.pub in ~/.ssh/
        - copied Machine A's id_rsa.pub to Machine B user's home directory
        - made the file permissions of id_rsa.pub 600

    Machine B:

        - added Machine A's pub key to authorised_keys and authorised_keys2: cat ~/id_rsa.pub >> ~/.ssh/authorised_keys2
        - made the file permissions of id_rsa.pub 600

    I've also ensured both .ssh directories have the permission 700 on both machine A and B. If I try to log in to machine B from machine A, I get asked for the password, not the SSH passphrase. I've got the root users on both machines to talk to each other using passwordless SSH, but I can't get a normal user to do it. Do the user names have to be the same on both sides? Or is there some setting elsewhere I've missed? Machine A is an Ubuntu 10.04 virtual machine running inside VirtualBox on a Windows 7 PC; Machine B is a dedicated Ubuntu 9.10 server.

    UPDATE: I've run ssh with the option -vvv, which provides many many lines of output, but these are the last few commands:

        debug3: check_host_in_hostfile: filename /home/pete/.ssh/known_hosts
        debug3: check_host_in_hostfile: match line 1
        debug1: Host '192.168.1.19' is known and matches the RSA host key.
        debug1: Found key in /home/pete/.ssh/known_hosts:1
        debug2: bits set: 504/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug3: Wrote 16 bytes for a total of 1015
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug3: Wrote 48 bytes for a total of 1063
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /home/pete/.ssh/identity ((nil))
        debug2: key: /home/pete/.ssh/id_rsa (0x7ffe1baab9d0)
        debug2: key: /home/pete/.ssh/id_dsa ((nil))
        debug3: Wrote 64 bytes for a total of 1127
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/pete/.ssh/identity
        debug3: no such identity: /home/pete/.ssh/identity
        debug1: Offering public key: /home/pete/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug3: Wrote 368 bytes for a total of 1495
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/pete/.ssh/id_dsa
        debug3: no such identity: /home/pete/.ssh/id_dsa
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password
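
    One thing worth checking against the steps above: sshd only reads ~/.ssh/authorized_keys (and historically authorized_keys2) with the US spelling, so a file literally named authorised_keys is silently ignored; that would match the debug output, where the offered key is never accepted and ssh falls back to password. A sketch of the fix on Machine B:

        # move the key into the filename sshd actually reads, with the usual permissions
        cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys
        chmod 700 ~/.ssh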

    Read the article

  • Networking lost after update from Debian Wheezy to Jessie

    - by Charaf
    I am currently setting up a virtual machine for development purposes. I did a big part of this configuration under Wheezy, but I need some debs that are available only in Jessie. So I updated sources.list and did a dist-upgrade. Everything went well, but after the reboot I noticed that I had lost all networking: repositories are unreachable, and a simple ping google.fr returns nothing. What can I do to quickly restore networking so that I can continue my work? I have a poor connection and cannot afford to download the whole install DVDs.

        root@vm~# ifconfig
        lo    Link encap:Boucle locale
              inet adr:127.0.0.1 Masque:255.0.0.0
              adr inet6: ::1/128 Scope:Hôte
              UP LOOPBACK RUNNING MTU:65536 Metric:1
              RX packets:452 errors:0 dropped:0 overruns:0 frame:0
              TX packets:452 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 lg file transmission:0
              RX bytes:164238 (160.3 KiB) TX bytes:164238 (160.3 KiB)
        root@vm~#

    I am running VMware 1.0.1 build 1379776 and the latest update of Jessie (Debian 3.14.4-1). Please help. Thanks.
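
    A hedged first-aid sketch: the output above shows only the loopback device, so the first question is whether the kernel still sees the NIC and under what name (the Jessie-era udev can rename interfaces across an upgrade). The interface name below is an assumption; use whatever the first two commands actually report.

        ip link          # list every interface the kernel sees
        ifconfig -a      # same, including interfaces that are down
        dhclient eth0    # try DHCP by hand on whatever name shows up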

    Read the article

  • Datacenter IP Addressing and DNS Management

    - by user65248
    Hello everyone. Basically, we are setting up a small datacenter: about 300 amps of power and a maximum of 50 racks. I mention these so you can picture the size and requirements. I have studied networking, mostly Microsoft and Windows-based systems, but I can't work out how IP addressing and DNS management and configuration work in a datacenter, and unfortunately I have to set up everything by myself, though we will definitely have some staff to help with the work. Now my questions.

    Datacenter IP addressing:

        - Suppose we have got a block of 200 IP addresses from our ISP. How can I manage this block of IP addresses? Is there any software out there to simplify this?
        - I heard that using a DHCP server in a datacenter is not recommended; if that's wrong, what would you say about MS DHCP server, of course considering that we need to have backup servers in case of failure?
        - How can I assign a block of IPs to a specific rack? I know it differs between software and management systems, but I'm asking how it is normally done.
        - IP addresses are exposed to the whole network. What if a customer tries to use an IP address that is not assigned to their server or rack? How can I prevent this, and how can I track IP usage?

    DNS management:

        - I'm going to set up at least two servers for our DNS. I know nothing about datacenter DNS systems, but I have configured DNS servers in normal networks and also for web servers. What exactly needs to be done for DNS in a datacenter that is not done for normal networks?
        - How can I configure PTR records? Why can't I configure PTR records on my webserver-side DNS server, and why should it be done on the datacenter DNS server? I mean, what is different about DC DNS servers that allows us to do so? I know the question is very simple, but I'm confused. (See the sketch below.)
        - Is there any software out there that allows doing the whole thing, I mean automatically adding records to the DNS and also managing IP addresses?

    Thanks in advance.
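
    On the PTR question, a short hedged note: reverse DNS for an address block is delegated by the ISP or RIR to whoever holds the block, so the reverse zone has to live on the DNS servers that delegation points at, which in this scenario are the datacenter's servers, not an arbitrary web server's resolver. A BIND-style reverse zone snippet for the documentation prefix 203.0.113.0/24 (all names are placeholders):

        ; zone "113.0.203.in-addr.arpa" -- maps addresses back to names
        $ORIGIN 113.0.203.in-addr.arpa.
        10    IN    PTR    customer10.example-dc.net.
        11    IN    PTR    customer11.example-dc.net.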

    Read the article

  • Adding Trac to the apache2 configuration file

    - by Michael
    I currently have apache2 running from a mythtv/mythweb install. This made two config files available in sites-enabled. One of them ("default-mythbuntu") has the VirtualHost directive and seems like a normal file (except for a change to the directory index). There is also a mythweb.conf file that only has directives setting various variables for mythweb. I want to host a Trac site as well. According to this site: http://trac.edgewall.org/wiki/TracOnUbuntu there are some settings I need for the Trac site. It gives me directions for making a new VirtualHost, but I think I should use the current VirtualHost and just add the directives (I'll need to change the default location they point to from the site above to just point to the Trac location). Where should I put these directives? Can I make a trac.conf with just the settings for Trac and enable it, or do I need to put them in the default-mythbuntu file? I don't like the latter because it doesn't separate out the Trac config. And how does Apache know that mythweb (and the trac.conf I want to make) belong to the VirtualHost defined in default-mythbuntu? It is the only VirtualHost being defined on my system, if that matters.
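
    A hedged sketch of the separate-file approach, assuming the mod_python setup from the TracOnUbuntu page (the Trac environment path is a placeholder). A <Location> block in an enabled conf file that sits outside any <VirtualHost> applies at global scope and is inherited by every vhost, which is how mythweb.conf attaches to the mythbuntu vhost without naming it; a trac.conf would attach the same way.

        # /etc/apache2/sites-available/trac.conf -- enable with: a2ensite trac.conf
        <Location /trac>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv /var/lib/trac
            PythonOption TracUriRoot /trac
        </Location>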

    Read the article

  • Connecting to IPv6 hosts when mobile and on a Surface?

    - by Cerebrate
    Specifically, at my usual location, I have an IPv6 network which connects to the Internet via a static tunnel set up to Hurricane Electric's tunnel broker ( http://www.tunnelbroker.net/ ). This works essentially perfectly, allowing inbound and outbound connectivity. Now, however, I need to connect back to host(s) on that network over IPv6 from mobile tablet(s); meaning the conditions are such that there is no guarantee or even likelihood of native IPv6 support where it happens to be at any given time, and the IPv4 address of the tablet will change on a fairly regular basis. The native Teredo support, as configured by default, functions well enough to let me ping my target hosts, but appears to have neither the reliability nor the throughput to support anything else; I have been unable to make any actual connections (trying a number of TCP-based protocols) using it. I had considered setting up an independent tunnel for the tablet(s), and using scripts to update the client endpoint IP address when it changes, but since both (a) many of the locations will be behind NAT devices over which I have no control, and (b) the option over which I do have control is an AT&T Unite hotspot which does not offer protocol 41 forwarding or respond to ICMP on its public address, this approach does not seem viable. I am additionally constrained as the mobile tablet(s) in question are Surface RTs, and as such are incapable of running, for example, AICCU client software. What is my best option to pursue to obtain IPv6 connectivity in this scenario?

    Read the article

  • Why is .htaccess not allowed in one directory but allowed in another?

    - by JD Isaacks
    I have apache2 installed on Ubuntu 10.04. Inside my /var/www/ directory [among others] I have cakephp and dvdcatalog directories, each of which has CakePHP 1.3 installed. I can access them both via localhost/cakephp and localhost/dvdcatalog, but dvdcatalog shows up with no CSS styling. They both have these files:

        /var/www/cakephp/app/webroot/css/cake.generic.css
        /var/www/dvdcatalog/app/webroot/css/cake.generic.css

    But while http://localhost/cakephp/css/cake.generic.css finds the file, http://localhost/dvdcatalog/css/cake.generic.css does not. I think this means the cakephp folder is able to use .htaccess and dvdcatalog is not. I set up the cakephp directory last month when I was following the blog tutorial. I am setting up the dvdcatalog directory now for a different tutorial, so I am not sure if I missed a step. In my /etc/apache2/apache2.conf file I have this:

        <Directory "/var/www/*">
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    which I thought gave .htaccess to all. Does anyone have any ideas what the problem is?
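
    A hedged checklist from the shell: CakePHP 1.3 normally ships three .htaccess files (in the app root, in app/, and in app/webroot/), and since they are dotfiles they are easy to lose when copying a tree, which would produce exactly this working/not-working split between two otherwise identical installs.

        apache2ctl -M | grep rewrite    # is mod_rewrite actually loaded?
        ls /var/www/cakephp/.htaccess /var/www/dvdcatalog/.htaccess
        ls /var/www/dvdcatalog/app/.htaccess /var/www/dvdcatalog/app/webroot/.htaccess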

    Read the article

  • Can't access my accelerated hard disk from msdos after installing Linux on SSD cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one). When the installer detected a ~31 GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30 GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into Windows, and it brought up an Intel Raid Disk Utility message saying I should disable acceleration on a disk because something couldn't be found. I cancelled it, but whatever I tried (recovery tools, setups, etc.), I just couldn't access the drive, which was apparently using the SSD as cache. Since then I've been stuck. I tried setting the 'raid' flag on the disk from GParted; still nothing. I tried the diskraid utility from the Windows recovery disk, and it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as one of raw type, and I can't access anything. I can, however, mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't do a factory re-install. Please, how do I go about solving this issue? Precisely, I would like to know how to boot into the drive again. Thanks!

    Read the article

  • Problems with image/file upload in MediaWiki on Windows 2008 Server R2, using wrong temp directory

    - by Lasse V. Karlsen
    I have installed MediaWiki 1.15.2 under IIS as per the MediaWiki installation instructions for Windows 2008 Server. I have configured PHP to use a specific temp directory:

        upload_tmp_dir="C:\php\uploadtemp"

    I have specified that MediaWiki is allowed to accept uploads:

        $wgEnableUploads = true;

    But when I try to upload an image, I get this error message in my browser:

        Internal error
        Could not find file "C:\Windows\Temp\php1AEA.tmp".

    Retrying simply gives me a new filename, but in the same location. The directory does not have any php* files in it, but since they're "temporary", they might be gone in a flash before Windows Explorer is able to show them, so that might be a red herring. I've googled for this, and the most promising lead I found was on this page: Image upload problem - Is this bug fixed? But since the text says "a bugfix was posted on the bug-report page" and provides no link to the bug page this relates to (PHP or MediaWiki) nor to the actual bug report, I haven't conclusively found the bug report in question, so that didn't help me much. Lots of pages indicate this is a permission issue, so I tried granting Everyone the Modify permission on C:\Windows\Temp, to no avail. I tried changing the two system environment variables TEMP and TMP to point to C:\Temp instead, but MediaWiki still complains about not finding the file in C:\Windows\Temp. Note that I don't care a lot about where the files are actually stored temporarily, so C:\Windows\Temp is fine by me. I do, however, care about them actually being uploaded correctly. Does anyone know of a fix, have any leads I can follow, or whatnot? The server is running Windows 2008 Server R2, all patches installed, and the PHP installed is 5.3.2, using IIS FastCGI.
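
    A hedged first check, since the symptom looks like the upload_tmp_dir line simply never taking effect: confirm which php.ini the PHP binary actually loads and what the directive resolves to. The php.exe path below is an assumption, and note that the CLI can read a different php.ini than the FastCGI instance, so a phpinfo() page served through IIS is the more authoritative check.

        C:\php\php.exe -i | findstr /i "Loaded Configuration upload_tmp_dir"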

    Read the article
